betts

@bettysrohl

Followers
147
Following
191
Statuses
128

AI/LLM enthusiast | Web3 | Privacy advocate šŸ’» | Head of DevRel @PartisiaMPC | Engineer @imperialcollege

LDN | NYC
Joined September 2022
@bettysrohl
betts
3 days
RT @dabit3: Verifiable agents are the next meta in crypto x AI - agents that don't require trust. ā€¢ can't be shut down or censored ā€¢ actioā€¦
0
28
0
@bettysrohl
betts
3 days
@clydedevv @elizaOS @dabit3 @alfaketchum @togethercompute This is an awesome project! Saving it to try out. Have you hit any roadblocks with getting banned from Twitter for scraping? Happened to me before šŸ˜…
1
0
1
@bettysrohl
betts
13 days
RT @signulll: netscape built a browser, sold it like boxed retail softwareā€”you had to go to compusa & pay a solid chunk of change for it. tā€¦
0
1K
0
@bettysrohl
betts
15 days
@king__choo @GroqInc Damn thatā€™s neat šŸ‘€
0
0
1
@bettysrohl
betts
16 days
1
1
11
@bettysrohl
betts
19 days
šŸšØ Happening right now! Tune in to hear more about liquid staking on Partisia with @SceptreLS. I'll also be joining to talk about our exciting new developments in the AI space šŸ¤– P.S.: feel free to ask me any questions on the topic!
@partisiampc
Partisia Blockchain
19 days
0
2
11
@bettysrohl
betts
20 days
After spending ages plugging into big-name APIs (looking at you, @OpenAI), I decided to dust off my AI fundamentals and build a model from the ground up. Not because those models are trash (heck, theyā€™re amazing) but because I missed the thrill of total control. Hereā€™s how and why:

I wanted to add a serious privacy layer. Letā€™s be real: continuous learning from every userā€™s data is convenient, but it can feel invasive if they didnā€™t opt in. So Iā€™m forcing myself to create a design where the model only learns when users explicitly say, ā€˜Yes, Iā€™m cool with that.ā€™

Step one was reacquainting myself with raw data wrangling. You canā€™t appreciate a good data pipeline until you label everything by hand and realize how easy it is to miss, oh, 10,000 outliers. Fun times.

Next up: model training. Itā€™s like that proud moment you break away from pre-cooked meal kits and actually chop veggies for the first time in months. Thereā€™s something satisfying about seeing your GPU fan rev up because of your code, not someone elseā€™s API calls.

Donā€™t get me wrong, Iā€™m still a fan of high-level frameworks. Agents and half-built solutions can be a devā€™s best friend in a pinch. But occasionally, going end-to-end reminds you what the sausage-making process really looks like. Spoiler: itā€™s messy, but oh-so-rewarding.

Finally, the privacy piece: if you want to keep user data out of the shared training loop, you need tight cryptographic or HPC-laced solutions, think multi-party computation or zero-knowledge proofs. Tools like @partisiampc (if youā€™re into blockchain) can help, but even a well-structured local DB with user consents goes a long way.

The biggest lesson? Rebuilding a model from scratch teaches you more about performance tweaks, data ethics, and user trust than any quick AI wrap job ever will. And hey, if we can bake privacy in from day one, we all sleep better.

edit note: I was very much hungry when I wrote this, all I could think about was obscure food references... šŸ˜‚
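For the curious, a minimal sketch of what that opt-in gate can look like (UserRecord and consented_examples are names made up for illustration, not from any framework):

```python
# Sketch of an explicit-consent gate in front of a training pipeline.
# Default is exclusion: a missing or False flag keeps the record out.
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    text: str
    opted_in: bool = False  # set True only on an explicit "yes"

def consented_examples(records):
    """Yield only records whose owners explicitly opted in."""
    for rec in records:
        if rec.opted_in:
            yield rec.text

records = [
    UserRecord("u1", "some interaction text", opted_in=True),
    UserRecord("u2", "private conversation"),  # never enters training
]
training_corpus = list(consented_examples(records))
assert training_corpus == ["some interaction text"]
```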
1
5
22
@bettysrohl
betts
29 days
RT @pippalamb: šŸ‡¬šŸ‡§šŸ–„ļøThe UK government just announced the most accelerationist changes to national AI policy we've ever seen (incl. 20x moreā€¦
0
93
0
@bettysrohl
betts
29 days
I cancelled my @GitHubCopilot and moved to @Tabby_ML. Itā€™s a better open-source version of GC, fully open for customization. If that is not enough reason, here is why I think itā€™s awesome:

- Runs Locally: Tabby can be self-hosted, meaning youā€™re not sending every keystroke to a third-party server. Perfect for dev teams working with sensitive or proprietary code bases.
- Configurable Models: Want to fine-tune your own code suggestions? Since itā€™s open source, Iā€™ve adapted the model to my stack and style guidelines.
- Privacy Forward: If youā€™re big on data privacy (and I am), not having to rely on a closed API is a game-changer. You control how and where code snippets are stored.

My experience: I spun up Tabby in a local dev environment. Setup was surprisingly painless. I had it reading my projectā€™s code patterns, and the auto-completions felt spookily accurate by day two.

Check out the GitHub repo. If you give it a try, let me know how your setup goes. Always curious about new tips for local AI dev tools, and love seeing open-source projects push the space forward.
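If you want to poke at a running instance before wiring up an editor plugin, here is roughly what a raw completion request looks like (the endpoint and payload shape follow Tabby's documented HTTP API, but double-check against the version you deploy):

```python
# Query a self-hosted Tabby server for a code completion.
# Assumes Tabby is already serving on localhost:8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={
        "language": "python",
        "segments": {
            "prefix": "def fib(n):\n    ",  # code before the cursor
            "suffix": "\n",                  # code after the cursor
        },
    },
    timeout=10,
)
resp.raise_for_status()
for choice in resp.json().get("choices", []):
    print(choice.get("text", ""))
```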
1
0
9
@bettysrohl
betts
30 days
A filing suggests @Meta Llama may have been trained on copyrighted materials. This opens a major can of worms about how training data is sourced.

Legally, itā€™s complicated. Fair use? Possibly. But scraping copyrighted text or images without explicit consent blurs ethical lines. Weā€™ve seen calls for more regulated data pipelines or ā€œopt-inā€ frameworks. The AI community is slowly realizing you canā€™t just slurp the entire internet for free.

Technical solution: privacy-preserving tech, like secure multi-party computation (MPC), can let models learn insights without ever directly exposing raw data. Shoutout to real breakthroughs in that field @partisiampc

My takeaway? Future AI might evolve with explicit licensing deals or share revenue with content creators. If we want sustainable AI, respecting IP rights is essential. Letā€™s see how Meta responds.
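To make the MPC idea concrete, here is a toy additive secret-sharing example: three parties jointly compute a sum without any of them revealing a raw input. A teaching sketch only, not Partisia's actual protocol:

```python
# Additive secret sharing over a prime field: shares of each input
# are distributed so no single party ever sees another's raw value,
# yet the parties can still compute the correct total.
import secrets

P = 2**61 - 1  # field modulus (a Mersenne prime)

def share(value, n_parties=3):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = [42, 7, 19]                      # each party's private value
all_shares = [share(v) for v in inputs]   # each party shares its input

# Party i sums the i-th share of every input; only these partial
# sums are ever exchanged, never the raw inputs.
partial_sums = [sum(col) % P for col in zip(*all_shares)]
assert sum(partial_sums) % P == sum(inputs)
```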
1
6
29
@bettysrohl
betts
1 month
šŸ”„ @base notched $2.3B in daily trading volume vs. @ethereum Mainnetā€™s $2.2B. Thatā€™s no small feat, especially for a relatively new chain with far lower total TVL.

Cheaper transaction fees + targeted user incentives can turbocharge volume. This mirrors BSCā€™s meteoric rise in 2021, when it offered near-zero fees for DeFi degens.

But does this mean Ethereum is done? Hardly. Mainnet is still the global settlement layer. L2s and alt-chains feed off that security, bridging back for final settlement.

The future? A multi-chain environment. Each chain or L2 can optimize for specific use cases: gaming, DeFi, micropayments, etc. I see Ethereum as the ā€œbase layer,ā€ ironically, while specialized solutions flourish on top.
0
0
5
@bettysrohl
betts
1 month
Saturday night thoughts: would pay top dollar for an AI that automatically does code reviewsā€”and roasts me for my questionable variable names šŸ‘€ should i build this?
1
0
6
@bettysrohl
betts
1 month
Betts on Tech āœØ weekly recap on AI & crypto

My top 3 bets of the week.

1ļøāƒ£ Synthetic Data is Booming
Tech giants like @nvidia, @Google, @Meta and @OpenAI are increasingly turning to synthetic data to train AI models. This approach addresses data scarcity and privacy concerns, enabling the creation of vast, diverse datasets without the limitations of real-world data. The next generation of AI models, including GPT iterations, is poised to leverage synthetic data for enhanced performance and efficiency.

2ļøāƒ£ Cache-Augmented Generation (CAG): The New Cool Kid in Town
CAG is emerging as a promising alternative to Retrieval-Augmented Generation (RAG). By preloading relevant documents into an AI model's context and precomputing key-value caches, CAG eliminates the need for real-time retrieval, reducing latency and potential errors. However, this method raises privacy concerns, especially when handling sensitive data, necessitating robust data governance and security measures. You can read more in my last post on CAG.

3ļøāƒ£ AI Governance Talks are Heating Up
The rapid advancement of AI technologies has sparked intense discussions on ethics, regulation, and governance. Industry leaders and policymakers are striving to establish frameworks that ensure AI development aligns with societal values and legal standards. The challenge lies in balancing innovation with responsibility, ensuring AI's benefits are realized without compromising ethical principles.

Follow for more weekly recaps and daily updates āœØ
0
0
5
@bettysrohl
betts
1 month
@king__choo Would love to read more of your views on the topic. I agree: I think consensus is starting to shift and long context will be the next thing (or a hybrid).
1
0
1
@bettysrohl
betts
1 month
The validity of RAG is fading, and long-context LLMs + prompt caching are why.

RAG, to a certain degree, was premised on:
1ļøāƒ£ Lower latency
2ļøāƒ£ Lower costs
3ļøāƒ£ Input token limits of early models
4ļøāƒ£ Solving "needle-in-a-haystack" problems that long(er) context LLMs had

But RAG has flaws: rigid preprocessing (metadata, chunking, embeddings) + bottlenecks at retrieval, where systems must predict queries before they exist. šŸ›‘

This is where Cache-Augmented Generation (CAG) steps in. šŸ’” Hereā€™s how CAG works:
- Preload all relevant docs into the LLMā€™s context.
- Precompute key-value (KV) caches for instant, accurate responses.
- No retrieval needed. āš” No latency. No errors.

With long-context LLMs, CAG + prompt caching = unbeatable. @AnthropicAI, @OpenAI, and @Google are already leveraging it. But hereā€™s the kicker: caching isnā€™t just an efficiency hack, itā€™s a paradigm shift in AI knowledge integration.

šŸšØ The catch? Caching requires temporarily storing data in memory. For enterprise use cases with sensitive data, this raises huge privacy concerns. OpenAI's Zero Retention Policy likely doesnā€™t extend to cached prompts. Cached sensitive data? Thatā€™s a compliance nightmare waiting to happen.

So, whatā€™s next? I predict hybrid systems will emerge:
- Cache static, reusable knowledge upfront.
- Dynamically retrieve only whatā€™s absolutely necessary.

And if we solve the privacy puzzle, through ephemeral memory systems, encryption, or private caching, prompt caching could unlock enterprise AI adoption at massive scale.

CAG isnā€™t just a clever workaround. Itā€™s laying the foundation for the next era of scalable, privacy-conscious AI systems. If youā€™re not thinking about this now, youā€™re already behind. Thoughts?
[attached image]
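A minimal sketch of that preload-and-cache step using Hugging Face transformers (the model name, docs, and prompt are stand-ins; reusing past_key_values this way is standard transformers usage, but verify against your installed version):

```python
# CAG precompute sketch: encode the knowledge base once, keep the
# KV cache, then answer queries without re-encoding the docs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM with KV-cache support
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# 1. One-time precompute: run the docs through the model, keep the cache.
docs = "Doc 1: the launch is on March 3.\nDoc 2: the venue is London.\n"
doc_ids = tok(docs, return_tensors="pt").input_ids
with torch.no_grad():
    kv_cache = model(doc_ids, use_cache=True).past_key_values

# 2. Per-query: append the question and reuse the cached prefix.
query_ids = tok("Q: When is the launch?\nA:", return_tensors="pt").input_ids
full_ids = torch.cat([doc_ids, query_ids], dim=-1)
with torch.no_grad():
    out = model.generate(
        full_ids,
        past_key_values=kv_cache,  # generate skips the cached prefix
        max_new_tokens=20,
        pad_token_id=tok.eos_token_id,
    )
print(tok.decode(out[0, full_ids.shape[-1]:]))
```

Note that generation extends the cache in place, so a real CAG server would snapshot the precomputed cache and hand each query a fresh copy.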
0
1
10
@bettysrohl
betts
1 month
šŸ˜… @bing styling its 'Google' search results to look like Google feels like the ultimate inspect -> copy CSS move. Imitation is the sincerest form of... SEO? šŸ¤”
[attached image]
0
0
5