![Jon Evans Profile](https://pbs.twimg.com/profile_images/758038542245203968/15mA3abP_x96.jpg)
Jon Evans
@rezendi
Followers: 10K · Following: 17K · Statuses: 50K
engineer / novelist, occasional journalist / CTO / co-founder / archivist / peripatetist; see https://t.co/ozL3Xbg4RT
Latent space, these days
Joined March 2008
RT @KelseyTuoc: I think it's completely fine that DOGE's staffers are young, and the hysteria over their ages was inappropriate, but you can'…
RT @DanielleFong: follower dropped this interesting take in my DMs. truth computes, deception doesn’t. reminds me of @paulg adopting a stat…
RT @aelfred_D: BREAKING: DOGE has discovered that USAID delved too greedily and too deep in the mines of Moria, awakening in the darkness o…
RT @meaning_enjoyer: the endless struggle between people loyal to coalitions and people loyal to principles
RT @jasminewsun: where did the tech right come from? @pmarca's recent interview with @DouthatNYC is very illuminating. here's a 4-part gra…
RT @DavidSHolz: i worry an unsustainable need to raise billions of dollars has broken the discourse around agi/asi timelines. theres too man…
RT @natfriedman: I am hiring a Synchrotron Tomography Reconstruction Expert at Vesuvius Challenge. If you know anyone who can do this job,…
RT @janrosenow: Batteries are starting to play a huge role in energy systems around the world. Look at California - not long ago you wouldn’t eve…
"These thoughts are *emergent* (!!!) and this is actually seriously incredible, impressive and new"
I don't have too too much to add on top of this earlier post on V3, and I think it applies to R1 too (which is the more recent, thinking equivalent).

I will say that Deep Learning has a legendary ravenous appetite for compute, like no other algorithm that has ever been developed in AI. You may not always be utilizing it fully, but I would never bet against compute as the upper bound for achievable intelligence in the long run. Not just for an individual final training run, but also for the entire innovation / experimentation engine that silently underlies all the algorithmic innovations.

Data has historically been seen as a separate category from compute, but even data is downstream of compute to a large extent - you can spend compute to create data. Tons of it. You've heard this called synthetic data generation, but less obviously, there is a very deep connection (equivalence, even) between "synthetic data generation" and "reinforcement learning". In the trial-and-error learning process of RL, the "trial" is the model generating (synthetic) data, which it then learns from based on the "error" (/reward). Conversely, when you generate synthetic data and then rank or filter it in any way, your filter is straight up equivalent to a 0-1 advantage function - congrats, you're doing crappy RL.

Last thought. Not sure if this is obvious. There are two major types of learning, in both children and in deep learning. There is 1) imitation learning (watch and repeat, i.e. pretraining, supervised finetuning), and 2) trial-and-error learning (reinforcement learning). My favorite simple example is AlphaGo: 1) is learning by imitating expert players, 2) is reinforcement learning to win the game. Almost every single shocking result of deep learning, and the source of all *magic*, is always 2. 2 is significantly, significantly more powerful. 2 is what surprises you. 2 is when the paddle learns to hit the ball behind the blocks in Breakout. 2 is when AlphaGo beats even Lee Sedol. And 2 is the "aha moment" when DeepSeek (or o1 etc.) discovers that it works well to re-evaluate your assumptions, backtrack, try something else, etc. It's the solving strategies you see this model use in its chain of thought. It's how it goes back and forth, thinking to itself. These thoughts are *emergent* (!!!) and this is actually seriously incredible, impressive and new (as in publicly available and documented, etc.). The model could never learn this with 1 (by imitation), because the cognition of the model and the cognition of the human labeler are different. The human would never know to correctly annotate these kinds of solving strategies, or what they should even look like. They have to be discovered during reinforcement learning as empirically and statistically useful towards a final outcome.

(Last last thought/reference, this time for real: RL is powerful but RLHF is not. RLHF is not RL. I have a separate rant on that in an earlier tweet.)
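To make the filter-equals-0-1-advantage point concrete, here is a minimal toy sketch in Python (the `Policy` class, the `keep` filter, and the single-token setup are illustrative assumptions, not anything from DeepSeek's or OpenAI's actual stack): keeping or dropping the model's own samples before finetuning on them is literally a REINFORCE update with advantage 1 or 0.

```python
import math, random

VOCAB = ["a", "b", "c"]

class Policy:
    """Toy one-step categorical policy over VOCAB, parameterized by logits."""
    def __init__(self):
        self.logits = {t: 0.0 for t in VOCAB}

    def probs(self):
        z = sum(math.exp(v) for v in self.logits.values())
        return {t: math.exp(v) / z for t, v in self.logits.items()}

    def sample(self):
        r, acc = random.random(), 0.0
        for t, p in self.probs().items():
            acc += p
            if r <= acc:
                return t
        return VOCAB[-1]

    def update(self, token, advantage, lr=0.1):
        # Advantage-weighted log-likelihood step: for a softmax, the
        # gradient of log p(token) w.r.t. the logits is (one-hot - probs).
        p = self.probs()
        for t in VOCAB:
            self.logits[t] += lr * advantage * ((1.0 if t == token else 0.0) - p[t])

def keep(token):
    # Stand-in filter/verifier: pretend "c" is the one correct answer.
    return token == "c"

policy = Policy()
for _ in range(1000):
    sample = policy.sample()             # the "trial": model generates synthetic data
    adv = 1.0 if keep(sample) else 0.0   # the filter IS a 0-1 advantage (the "error"/reward)
    policy.update(sample, adv)           # learn only from the survivors

print(policy.probs())  # probability mass concentrates on "c"
```

In this framing, imitation learning (type 1) is the very same `policy.update(expert_token, advantage=1.0)` call, just applied to tokens someone else wrote; what makes type 2 different is only that the tokens are the model's own trials and the weight comes from checking the outcome.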
RT @jackclarkSF: I have sympathy for people studying DeepSeek cold. Reads like: Cyberbla's new "Pastrami" technique has increased throughp…