Clément Bonnet Profile
Clément Bonnet

@ClementBonnet16

Followers: 539 · Following: 787 · Statuses: 191

AI Research @ndeainc

Paris, France
Joined April 2020
@ClementBonnet16
Clément Bonnet
27 days
Excited for the launch of @ndeainc 🔥 We’re making the bet that merging structured reasoning with deep learning will constitute a new AI breakthrough! Come join us if you want to unlock AI for reasoning tasks and scientific innovation!
@fchollet
François Chollet
27 days
I'm joining forces with @mikeknoop to start Ndea (@ndeainc), a new AI lab. Our focus: deep learning-guided program synthesis. We're betting on a different path to build AI capable of true invention, adaptation, and innovation.
[image]
7 replies · 10 retweets · 93 likes
@ClementBonnet16
Clément Bonnet
1 day
@MattVMacfarlane @jerber888 E.g. deep learning-guided program synthesis? If you use language for search, you can guide it with an LLM trained on language. But if you start using non-human language in your “programs” (e.g. R1 with language mixing), then search has to be guided by a transformer trained on that new language.
0 replies · 0 retweets · 1 like
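A minimal sketch of the idea in this thread, i.e. search over candidate programs guided by a language prior (illustrative only; propose_candidates stands in for sampling from an LLM and is an assumption, not any lab's actual pipeline):

```python
# Sketch of deep learning-guided program synthesis: a language model
# proposes candidate programs as text, and search keeps the first one
# consistent with the input/output examples. All names are hypothetical.

from typing import Callable, List, Optional, Tuple

Example = Tuple[int, int]  # (input, expected output)


def propose_candidates() -> List[str]:
    """Stand-in for sampling program text from an LLM prior."""
    return ["x + 1", "x * 2", "x * x", "x - 1"]


def compile_program(src: str) -> Callable[[int], int]:
    """Toy interpreter: turn program text into an executable function."""
    return lambda x: eval(src, {}, {"x": x})


def guided_search(examples: List[Example]) -> Optional[str]:
    """Keep the first candidate that reproduces every example."""
    for src in propose_candidates():
        fn = compile_program(src)
        if all(fn(i) == o for i, o in examples):
            return src
    return None


print(guided_search([(2, 4), (3, 6)]))  # -> "x * 2", learned from two demos
```

The tweet's point maps onto this shape: the guidance works because the candidate programs are written in a language the guiding model was trained on; once the "programs" drift into non-human token mixes, the guide must itself be trained on that new language.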
@ClementBonnet16
Clément Bonnet
1 day
@MattVMacfarlane @jerber888 You can manipulate or guide the reasoning in a non-human way, you don’t need the tokens to be human readable for an AI to guide the reasoning. The way I see these think tokens is essentially “here is your computation window, do whatever here, then give me the answer”.
1 reply · 0 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
1 day
@MattVMacfarlane @jerber888 What you’re saying is not mutually exclusive with having reasoning encapsulated in <think> tokens. Actually, these tokens are just roughly what’s between the query and the final answer, whether it’s human-readable, continuous or something else.
1 reply · 0 retweets · 0 likes
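A small sketch of the "computation window" reading of think tokens from these two tweets (the <think> delimiters follow R1's convention; the parsing helper itself is hypothetical): the window is treated as opaque scratch space, and only what follows it is read as the answer.

```python
# Sketch: treat everything inside <think> ... </think> as an opaque
# computation window; only the text after it is read as the answer.
# The window's contents need not be human-readable at all.

import re
from typing import Tuple


def split_completion(completion: str) -> Tuple[str, str]:
    """Return (think_window, answer) from a raw model completion."""
    match = re.search(r"<think>(.*?)</think>(.*)", completion, re.DOTALL)
    if match is None:
        return "", completion.strip()  # no window: everything is the answer
    return match.group(1).strip(), match.group(2).strip()


completion = "<think>zq 17 mixed-lang token soup ...</think> The answer is 42."
window, answer = split_completion(completion)
print(answer)  # "The answer is 42." -- the window was never interpreted
```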
@ClementBonnet16
Clément Bonnet
1 day
RT @dwarkesh_sp: I still haven't heard a good answer to this question, on or off the podcast. AI researchers often tell me, "Don't worry b…
0 replies · 705 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
11 days
RT @VictorTaelin: Please do not over-hype this post! HOC is doing a $4m post-seed at $100m valuation to build a dataset with the shortest…
0 replies · 16 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
13 days
RT @mikeknoop: just published my full @arcprize analysis of deepseek's r1-zero and r1. link below. key points: r1-zero is more important t…
0 replies · 136 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
18 days
RT @VictorTaelin: DEMO TIME SupGen is a generative coding AI... except it isn't an AI. There is no model, there is no pre-training. You j…
0 replies · 121 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
26 days
RT @MLStreetTalk: We just dropped the long-awaited second part of our interview with @SchmidhuberAI
0 replies · 27 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
26 days
RT @jerber888: Excited to announce @ndeainc. We're building a new architecture for AGI that merges deep learning with symbolic reasoning.…
0 replies · 22 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
27 days
RT @ndeainc: Ndea is a new intelligence science lab building frontier AI systems that blend intuitive pattern recognition and formal reason…
0 replies · 25 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
27 days
RT @fchollet: We're building a world-class research team. If you're excited about our ideas and you'd like to join us, let's chat! We're a…
0 replies · 23 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
27 days
RT @mikeknoop: AGI is the most important technology in the history of the world. That's why I'm going all in on a new adventure with @fchollet
0 replies · 96 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
30 days
RT @MattVMacfarlane: 1/ An interesting outcome of the @arcprize 2024 was the high performance of test-time fine-tuning. It is common in the…
0 replies · 7 retweets · 0 likes
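For context on that RT: test-time fine-tuning means adapting a copy of the model on a single task's demonstration pairs right before predicting on that task's test input. A PyTorch-shaped sketch under toy assumptions (the tiny linear model and plain (x, y) demos stand in for a real model and augmented ARC grids):

```python
# Illustrative test-time fine-tuning loop: adapt a per-task copy of the
# model on that task's demonstration pairs before predicting.
# The linear model and identity "augmentations" are placeholders.

import copy
import torch
import torch.nn as nn


def test_time_finetune(model: nn.Module, demos, steps: int = 20, lr: float = 1e-2):
    """Fine-tune a per-task copy of `model` on demonstration (x, y) pairs."""
    task_model = copy.deepcopy(model)          # never touch the base weights
    opt = torch.optim.SGD(task_model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        for x, y in demos:                     # real systems would augment here
            opt.zero_grad()
            loss = loss_fn(task_model(x), y)
            loss.backward()
            opt.step()
    return task_model


base = nn.Linear(2, 1)
demos = [(torch.tensor([1.0, 2.0]), torch.tensor([3.0])),
         (torch.tensor([2.0, 1.0]), torch.tensor([3.0]))]
adapted = test_time_finetune(base, demos)
print(adapted(torch.tensor([1.0, 1.0])))       # prediction after adaptation
```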
@ClementBonnet16
Clément Bonnet
1 month
[image]
0 replies · 8 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
1 month
RT @fchollet: New interview with MLST is up on YouTube! Do check it out. We cover a wide range of topics -- what we learned from ARC Prize…
0 replies · 48 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
1 month
RT @fchollet: AI-first research companies like $RXRX will be one of the primary beneficiaries of the improvement of AI reasoning capabiliti…
0 replies · 21 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
1 month
@witkowski_cam @andrezfu I feel like you're suggesting adding linguistic priors to naive self-supervised learning. I don't do linguistics, but that's essentially what LLMs took over with self-supervised learning. It seems like you're saying “can't we go back in time a little bit and add linguistics to DL?”
0 replies · 0 retweets · 0 likes
@ClementBonnet16
Clément Bonnet
1 month
@witkowski_cam I'm not saying LLM pre-training is an efficient way to learn language. But given 15T of language tokens, you easily learn all these synonym correlations. I can refer you to the literature on model calibration if you're interested.
1 reply · 0 retweets · 2 likes
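To make the calibration pointer concrete: a calibrated model's stated confidence matches how often it is actually right, and expected calibration error (ECE) is the standard way to measure the gap. A quick sketch (the confidence/correctness numbers are made up):

```python
# Expected calibration error (ECE): bin predictions by confidence and
# compare average confidence with empirical accuracy in each bin.

import numpy as np


def expected_calibration_error(confs, correct, n_bins: int = 10) -> float:
    confs, correct = np.asarray(confs), np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confs > lo) & (confs <= hi)
        if mask.any():
            gap = abs(confs[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap          # weight by bin occupancy
    return float(ece)


# Made-up data: top-token confidence vs. whether the prediction was right.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [1, 1, 1, 0]))
```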