![Andrzej Dąbrowski Profile](https://pbs.twimg.com/profile_images/1646136679407779842/2UphpuW6_x96.jpg)
Andrzej Dąbrowski
@ardabrowski
Followers: 415 · Following: 5K · Statuses: 1K
AI + Product Engineer. Building AI-powered e-commerce. Co-founder and co-owner @commerce_ui.
Gdańsk
Joined September 2014
@SullyOmarr SQL > everything else
tailwind / css > everything else
React > everything else
in-distribution trumps out-of-distribution completely
Replies: 1 · Retweets: 0 · Likes: 8
@barticz 100%
I don't have too much to add on top of this earlier post on V3, and I think it applies to R1 too (which is the more recent, thinking equivalent).

I will say that Deep Learning has a legendary ravenous appetite for compute, like no other algorithm that has ever been developed in AI. You may not always be utilizing it fully, but I would never bet against compute as the upper bound for achievable intelligence in the long run. Not just for an individual final training run, but also for the entire innovation / experimentation engine that silently underlies all the algorithmic innovations.

Data has historically been seen as a separate category from compute, but even data is downstream of compute to a large extent - you can spend compute to create data. Tons of it. You've heard this called synthetic data generation, but less obviously, there is a very deep connection (equivalence even) between "synthetic data generation" and "reinforcement learning". In the trial-and-error learning process in RL, the "trial" is the model generating (synthetic) data, which it then learns from based on the "error" (/reward). Conversely, when you generate synthetic data and then rank or filter it in any way, your filter is straight up equivalent to a 0-1 advantage function - congrats, you're doing crappy RL.

Last thought. Not sure if this is obvious. There are two major types of learning, in both children and in deep learning. There is 1) imitation learning (watch and repeat, i.e. pretraining, supervised finetuning), and 2) trial-and-error learning (reinforcement learning). My favorite simple example is AlphaGo - 1) is learning by imitating expert players, 2) is reinforcement learning to win the game. Almost every single shocking result of deep learning, and the source of all *magic*, is always 2. 2 is significantly more powerful. 2 is what surprises you. 2 is when the paddle learns to hit the ball behind the blocks in Breakout. 2 is when AlphaGo beats even Lee Sedol. And 2 is the "aha moment" when DeepSeek (or o1 etc.) discovers that it works well to re-evaluate your assumptions, backtrack, try something else, etc. It's the solving strategies you see this model use in its chain of thought. It's how it goes back and forth thinking to itself. These thoughts are *emergent* (!!!), and this is actually seriously incredible, impressive and new (as in publicly available and documented etc.). The model could never learn this with 1 (by imitation), because the cognition of the model and the cognition of the human labeler is different. The human would never know how to correctly annotate these kinds of solving strategies, or what they should even look like. They have to be discovered during reinforcement learning as empirically and statistically useful towards a final outcome.

(Last last thought / reference, this time for real: RL is powerful, but RLHF is not. RLHF is not RL. I have a separate rant on that in an earlier tweet.)
Replies: 0 · Retweets: 0 · Likes: 0
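To make the "your filter is a 0-1 advantage function" point concrete, here is a minimal sketch (mine, not from the post). `model.generate`, `model.update`, and `is_good` are hypothetical placeholders standing in for a real policy, a gradient update, and a quality filter; both loops below do the same thing, the second just spells it out in RL terms.

```python
# Sketch of the claimed equivalence between "generate synthetic data,
# then filter it" and RL with a binary (0-1) advantage function.
# model.generate / model.update / is_good are hypothetical placeholders.

def synthetic_data_filtering(model, prompts, is_good):
    """'Generate then filter': keep only samples the filter accepts."""
    kept = []
    for prompt in prompts:
        sample = model.generate(prompt)   # the "trial": model creates data
        if is_good(sample):               # the filter: accept or reject
            kept.append((prompt, sample))
    # ...then supervised fine-tuning on the kept samples only.
    for prompt, sample in kept:
        model.update(prompt, sample, weight=1.0)

def binary_advantage_rl(model, prompts, is_good):
    """The same loop written as RL: advantage is 1 if kept, else 0."""
    for prompt in prompts:
        sample = model.generate(prompt)
        advantage = 1.0 if is_good(sample) else 0.0  # the "error"/reward
        # A zero advantage contributes nothing to the update, so this
        # is identical to fine-tuning only on the filtered set above.
        model.update(prompt, sample, weight=advantage)
```

The "crappy" part is that a binary keep/drop signal throws away any graded reward and uses no baseline, but structurally it is the same trial-and-error loop.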
@oleg008 Same. When I played with querying data using JSONPath or JMESPath, I hit constant errors. SQL just worked, and it solved much harder cases easily.
Replies: 1 · Retweets: 0 · Likes: 3
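Since the claim is that SQL handles the harder cases more easily, here is a minimal sketch of one such case, grouped aggregation, using only Python's standard sqlite3. The table and data are made up for illustration; the point is that JMESPath has no GROUP BY, so the same query pushes you back into hand-written Python.

```python
# Small illustration (my own, not from the tweet): aggregation is a
# one-liner in SQL but has no native equivalent in JMESPath.
import sqlite3

orders = [
    ("o1", "shirt", 2),
    ("o1", "mug",   1),
    ("o2", "shirt", 3),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE line_items (order_id TEXT, product TEXT, qty INTEGER)")
con.executemany("INSERT INTO line_items VALUES (?, ?, ?)", orders)

# Total quantity per product: a plain GROUP BY.
rows = con.execute(
    "SELECT product, SUM(qty) FROM line_items GROUP BY product ORDER BY product"
).fetchall()
print(rows)  # [('mug', 1), ('shirt', 5)]

# A JMESPath expression like "[].{p: product, q: qty}" can only project
# the fields; the grouping and summing still has to be a Python loop.
```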
@blanklob Yeah, in ecomm it's often not what users want to see but what brands want to show users. Inspiration >>> personalisation. People build their taste by looking at Hermès products, not the other way round. The opposite end of the spectrum is a Salesforce dashboard :D
Replies: 0 · Retweets: 0 · Likes: 1
Absolute BANGER from the one and only @commerce_ui!!! 🔥🔥🔥 The best e-commerce agency in the world ❤️
Our new website for @ladygaga is live! 🟢 The platform creates an incredible space for Gaga to showcase her projects and upcoming events, and to connect even more deeply with her millions of Little Monsters worldwide. Full case study coming soon! Digital Design: Yung_studio. Stack: @shopify Hydrogen, @sanity_io, @MuxHQ
Replies: 0 · Retweets: 0 · Likes: 2