![Rob Imbeault Profile](https://pbs.twimg.com/profile_images/1867944945559994368/M6KxRMxM_x96.jpg)
Rob Imbeault
@raw_bimbo
Followers: 123 · Following: 2K · Statuses: 1K
🤖 /acc - saas unicorn founder - here to support (new acct to purge the bots)
Joined November 2024
Like it says in the article, really thinking through what the UI is and what the experience is going to be compared to what it used to be with the previous systems. Making sure the frontline workers are getting real value: not just looking at a chat screen or spreadsheets, but real insights and, in this case, the reasoning behind the recommendations.
RT @TimRunsHisMouth: You may not always agree with Jon Stewart.. but in this clip, he asked the former Deputy Secretary of Defense about h…
Great blueprint at an opportune time. My takeaways:
- Knowing the tech at both the macro and micro level
- Communication clarity and good vibes (don't be a dick)
- Designing for adoption (UX/UI)
- Building customer ROI models, with follow-up to help with pricing
Must read for all founders of AI-first startups - I especially like the “design for adoption, not just accuracy” rule 👇🏽👇🏽👇🏽
RT @KatKanada_TM: Morons in Ottawa booed US military members at a hockey game. Well, I boooo the booers. I love my American friends.…
@thatsKAIZEN Exactly!
@chamath I see this as sending a huge message that cancel culture is over. Redemption is about getting better, and how are people gonna get better unless we move on and encourage them?
RT @reidhoffman: The Greeks spoke of gnōthi seauton (to know thyself) and the deep reflection it demands. AI can offer a new lens for sel…
Worth the read.
A Conversation with One of My 8090 Co-Founders

I talked to @sinasojoodi, one of my co-founders at 8090, after we published our Deep Dive on AI last week. Let me know if this is interesting to you.

Chamath: There’s this great story of how OpenAI accidentally discovered grokking in LLMs. Walk me through what happened there.

Sina: LLM Grokking – this is funny. It's almost like how penicillin was discovered. OpenAI made this discovery where typically in traditional machine learning, there's a point of overfitting, where you stop training because the model is just memorizing the training set rather than learning to generalize. But OpenAI accidentally ran their experiment for days instead of hours, well beyond this overfitting point. And they reached what they call the 'grokking zone', where somehow, even though the training should have been completed many times over, the model started to actually understand concepts and show emergent behaviors beyond memorization.

This discovery completely changed how we approach training transformer models. Instead of following traditional machine learning wisdom to avoid overfitting, we now know that transformers actually benefit from training far beyond conventional limits. And the scaling laws that emerged from this work give us precise formulas for the resources needed – parameters, compute, and data – to reach these emergent capabilities.

They discovered this phenomenon basically by accident. And that's the story of deep learning, to some degree. It's more about trial and error and trying a bunch of stuff than theory, although having a mental model for how it works does help.

Chamath: Recently, I decided to use the word "brain" instead of saying "AI". What are your thoughts on that analogy?

Sina: The first thing that comes to mind is the incredible complexity of neurons in the human brain. Each biological neuron is like its own universe of computation. It's doing these incredibly complex operations, it has its own learning mechanisms, and it maintains its own state. When we talk about the brain having 100 trillion connections between biological neurons, in some sense that number doesn't even capture the full power of a human brain, because each biological synapse is its own sophisticated processor.

Artificial neural networks took inspiration from this but went in a different direction. We simplified our artificial neurons down to basic weights and biases – basically 'y = Wx + b'. While artificial neural networks might have billions of artificial neurons, each one is just doing these elementary mathematical operations: multiply inputs by weights, add a bias, run through an activation function.

The trade-offs between these approaches are stark. Biological brains excel at power efficiency but operate in milliseconds, while artificial neurons can process information billions of times faster on H100s but consume massive amounts of energy to do so. The learning mechanisms are fundamentally different too. Nature spent a billion years evolving these incredibly sophisticated biological systems, while we're trying to accelerate that evolution with artificial neurons through massive computational parallelism and gradient descent. Both approaches work, just in fundamentally different ways: biological evolution through natural selection over eons versus algorithmic optimization over weeks of training.

It reminds me of the whole birds and airplanes parallel. Both achieve flight, but through completely different engineering approaches.
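Not from the interview, but to make the grokking story concrete: below is a rough Python sketch of the kind of experiment Sina describes, a small network trained on a modular-arithmetic task far past the point where it has memorized its training split, while accuracy on held-out pairs is logged. The task, model, optimizer, and weight decay are assumptions chosen for illustration (the original OpenAI experiments used small transformers), and whether the delayed jump in validation accuracy actually shows up depends heavily on those choices.

```python
import torch
import torch.nn as nn

# Toy algorithmic task: predict (a + b) mod P from one-hot encodings of a and b.
P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
inputs = torch.cat([nn.functional.one_hot(pairs[:, 0], P),
                    nn.functional.one_hot(pairs[:, 1], P)], dim=1).float()

# Train on half the pairs; hold out the rest, as in the grokking papers.
perm = torch.randperm(len(pairs))
train_idx, val_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2:]

# Deliberately over-parameterized model plus strong weight decay (illustrative values).
model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(50_000):  # keep training long after the "overfitting point"
    opt.zero_grad()
    loss = loss_fn(model(inputs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 5_000 == 0:
        with torch.no_grad():
            train_acc = (model(inputs[train_idx]).argmax(1) == labels[train_idx]).float().mean().item()
            val_acc = (model(inputs[val_idx]).argmax(1) == labels[val_idx]).float().mean().item()
        # In a grokking run, train_acc saturates early while val_acc stays near chance,
        # then jumps much later -- the "keep training anyway" lesson from the interview.
        print(f"step {step}: train_acc={train_acc:.2f} val_acc={val_acc:.2f}")
```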
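The "precise formulas" Sina mentions are the neural scaling laws. As one hedged illustration (not necessarily the exact law he has in mind), the parametric fit from Hoffmann et al. (2022, the "Chinchilla" paper) models loss as a function of parameter count N and training tokens D; the constants below are roughly the published values and are shown only to convey the shape of the law.

```python
def scaling_law_loss(n_params: float, n_tokens: float) -> float:
    """Parametric scaling law L(N, D) = E + A / N**alpha + B / D**beta.

    Constants are approximately the Chinchilla (Hoffmann et al., 2022) fit;
    treat them as illustrative rather than authoritative.
    """
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params ** alpha + B / n_tokens ** beta

# Example: a 70B-parameter model trained on 1.4T tokens (a Chinchilla-like budget).
print(round(scaling_law_loss(70e9, 1.4e12), 3))
```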
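And to make the 'y = Wx + b' description concrete, here is a minimal sketch of the arithmetic a single artificial neuron performs. The input values, weights, bias, and choice of ReLU activation are made up for the example.

```python
import numpy as np

def artificial_neuron(x, W, b):
    """One artificial neuron: multiply inputs by weights, add a bias,
    then pass the result through an activation function (ReLU here)."""
    pre_activation = W @ x + b           # y = Wx + b
    return np.maximum(0.0, pre_activation)

# Illustrative values only: three inputs feeding a single neuron.
x = np.array([0.5, -1.2, 2.0])   # inputs
W = np.array([0.8, 0.1, -0.4])   # "learned" weights (made up here)
b = 0.2                          # "learned" bias (made up here)

print(artificial_neuron(x, W, b))  # a single scalar output
```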
In getting a history lesson from Grok, woke ideology started out as a good thing. It was working to end injustice, racism, sexism, and inequality. But somewhere along the way it was weaponized and the pendulum swung too far. Progress and innovation were sacrificed on the altar of inclusion; the idea was equality, but you can't have equality without fairness, and meritocracy represents fairness. I honestly believe some people think woke is still about righting some of these wrongs from the past, while others recognize it for what it has become. Maybe we just need to get away from using the word, or at least come to an agreement on how it is defined now.
@FloribamaManX @Concern70732755 Exactly!!! And even when people break in, you can get arrested for shooting them if you have a gun.