Xiangning Chen
@XiangningChen
756 Followers · 776 Following · 65 Statuses
Researcher @OpenAI. Previously: @GoogleDeepMind @UCLA @Tsinghua_Uni
Joined May 2017
RT @__nmca__: o3 represents enormous progress in general-domain reasoning with RL — excited that we were able to announce some results toda…
🚀 Excited to present o3-mini today at @OpenAI's 12 Days event with @sama, @markchen90 and @GregKamradt! Check out the live demo, where I asked the model to write a script to automatically evaluate itself on the GPQA dataset from an interactive code-generation/execution UI that it created itself. (@_aidan_clark_ came up with this great idea!!) Everything was done in 4 minutes, with a successful repro of the number I got a couple of days ago.
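For readers curious what such a self-evaluation script might look like, here is a rough sketch, not the demo's actual code; the model id ("o3-mini" via the OpenAI SDK), the Hugging Face dataset path, and the field names are assumptions.

```python
# A hedged sketch of an automatic GPQA self-evaluation loop. Assumptions:
# the OpenAI Python SDK, a Hugging Face mirror of GPQA ("Idavidrein/gpqa"),
# and its field names; the answer-extraction regex is illustrative.
import random
import re
from datasets import load_dataset
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(question: str, choices: list[str]) -> str:
    """Pose one multiple-choice question; return the model's letter answer."""
    letters = "ABCD"
    prompt = (
        question
        + "\n"
        + "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(choices))
        + "\nRespond with a single letter."
    )
    resp = client.chat.completions.create(
        model="o3-mini",  # assumed model id
        messages=[{"role": "user", "content": prompt}],
    )
    m = re.search(r"\b([ABCD])\b", resp.choices[0].message.content)
    return m.group(1) if m else ""

rows = load_dataset("Idavidrein/gpqa", "gpqa_diamond")["train"]
correct = 0
for r in rows:
    choices = [r["Correct Answer"], r["Incorrect Answer 1"],
               r["Incorrect Answer 2"], r["Incorrect Answer 3"]]
    random.shuffle(choices)  # avoid a fixed position for the gold answer
    gold = "ABCD"[choices.index(r["Correct Answer"])]
    correct += ask(r["Question"], choices) == gold
print(f"GPQA accuracy: {correct / len(rows):.3f}")
```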
RT @shuchaobi: 10x cheaper realtime voice API. The internet went from text only to multimodal over the last 25 years: blogs + Google → inst…
RT @sunjiao123sun_: Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can’t believe this happened at the bes…
RT @annadgoldie: This is personal for me. I once had the chance to work and teach in China, and I will never forget how many people took the time to talk with me and help me learn the language. I am deeply grateful and have great respect for the Chinese people. And whenever I mentioned I was from MIT, people were always kind and respectful. So I have to say the respect and admiration are mutual! Regardless of nationality or ethnicity, we are all…
RT @OpenAI: OpenAI o1 is now out of preview in ChatGPT. What’s changed since the preview? A faster, more powerful reasoning model that’s b…
RT @OpenAI: 🌐 Introducing ChatGPT search 🌐 ChatGPT can now search the web in a much better way than before so you get fast, timely answers…
RT @_zxchen_: Excited to share our method called 𝐒𝐞𝐥𝐟-𝐏𝐥𝐚𝐲 𝐟𝐈𝐧𝐞-𝐭𝐮𝐍𝐢𝐧𝐠 (SPIN)! 🌟Without acquiring additional human-annotated data, a superv…
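As a rough illustration of the SPIN objective from the paper: at each iteration, the current model is trained to prefer the human SFT response over the previous iteration's own generation, via a DPO-style logistic loss. The function and tensor names below are mine, not the authors' code.

```python
# A minimal sketch of the SPIN loss under the assumptions above: logp_* are
# summed token log-probs of the human ("real") or self-generated ("gen")
# response under the current / previous-iteration model; lam is the
# regularization strength from the paper's objective.
import torch
import torch.nn.functional as F

def spin_loss(logp_real_cur, logp_gen_cur, logp_real_prev, logp_gen_prev, lam=0.1):
    # Margin between the log-prob ratio on human data and on self-generations;
    # minimizing -log(sigmoid(lam * margin)) pushes the margin up.
    margin = (logp_real_cur - logp_real_prev) - (logp_gen_cur - logp_gen_prev)
    return -F.logsigmoid(lam * margin).mean()
```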
RT @YangsiboHuang: I am at #NeurIPS2023 now. I am also on the academic job market, and humbled to be selected as a 2023 EECS Rising Star✨…
RT @YangsiboHuang: Microsoft's recent work ( shows how LLMs can unlearn copyrighted training data via strategic fin…
RT @GoogleAI: Today on the blog, read all about symbol tuning, a method that fine-tunes models on tasks where natural language labels are r…
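A small sketch of the symbol-tuning idea: in-context exemplars have their natural-language labels remapped to arbitrary symbols, so the model must infer the input-label mapping from the exemplars rather than from label semantics. The example texts and symbol pool here are made up.

```python
# Illustrative construction of a symbol-tuned prompt; not Google's pipeline.
import random

SYMBOLS = ["foo", "bar", "zog", "kip"]  # arbitrary stand-in labels

def symbolize(examples, labels):
    """Remap each distinct natural-language label to a random symbol."""
    mapping = dict(zip(labels, random.sample(SYMBOLS, len(labels))))
    lines = [f"Input: {text}\nLabel: {mapping[label]}" for text, label in examples]
    return "\n\n".join(lines), mapping

prompt, mapping = symbolize(
    [("great movie, loved it", "positive"),
     ("utterly boring", "negative"),
     ("a delightful surprise", "positive")],
    ["positive", "negative"],
)
print(prompt)   # exemplars carry e.g. "foo"/"bar" instead of real labels
print(mapping)  # e.g. {'positive': 'bar', 'negative': 'foo'}
```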
RT @arankomatsuzaki: Lost in the Middle: How Language Models Use Long Contexts Finds that performance of LMs is often highest when relevan…
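A hedged sketch of the kind of probe behind that finding: place the gold document at each position among k distractors and measure accuracy as a function of position. `answer_with_context` is a stand-in for whatever LM call is used; the probe design below is my reconstruction, not the paper's code.

```python
# Measure accuracy vs. gold-document position in a multi-document context.
def build_context(gold: str, distractors: list[str], position: int) -> str:
    docs = distractors[:position] + [gold] + distractors[position:]
    return "\n\n".join(f"Document {i+1}: {d}" for i, d in enumerate(docs))

def accuracy_by_position(questions, answer_with_context, k=9):
    """questions: iterable of (question, gold_doc, distractor_docs, answer)."""
    hits = [0] * (k + 1)
    total = 0
    for q, gold, distractors, answer in questions:
        total += 1
        for pos in range(k + 1):
            ctx = build_context(gold, distractors[:k], pos)
            hits[pos] += answer.lower() in answer_with_context(ctx, q).lower()
    # The paper reports a U-shape: the ends of the context beat the middle.
    return [h / total for h in hits]
```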
RT @crazydonkey200: An open-source 30B language model trained with our Lion optimizer🦁 ( Another proof that Lion is…
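For reference, the Lion update from "Symbolic Discovery of Optimization Algorithms", written as a minimal NumPy step; the betas follow the paper's defaults, while the lr and weight-decay values here are only placeholders.

```python
# One Lion step: sign of an interpolated momentum, plus decoupled weight decay.
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.1):
    update = np.sign(beta1 * m + (1 - beta1) * g)  # sign gives uniform-magnitude updates
    w = w - lr * (update + wd * w)                 # decoupled weight decay
    m = beta2 * m + (1 - beta2) * g                # momentum tracked with beta2
    return w, m
```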
RT @zhuohan123: 🌟 Thrilled to introduce vLLM with @woosuk_k! 🚀 vLLM is an open-source LLM inference and serving library that accelerates H…
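A minimal usage sketch following vLLM's offline-inference quickstart; the model name is arbitrary, and the sampling parameters are just examples.

```python
# Serve a Hugging Face causal LM through vLLM's offline LLM API.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any HF causal LM checkpoint
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```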
@typedfemale Thanks for the reminder. This information is internally approved, and our latest arXiv version already contains it.
@GozukaraFurkan For Stable Diffusion training, I've also heard mixed opinions. Adam has been used and tuned for years, so Lion might require a bit of tuning. In our paper we also compare them on training regular diffusion models (not the latent diffusion used in SD).
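To make that tuning advice concrete: the Lion paper suggests a learning rate roughly 3-10x smaller than AdamW's and a correspondingly larger weight decay, keeping the lr × wd product comparable. A sketch using the third-party lion-pytorch package; the model below is a stand-in, not an SD UNet.

```python
# Swapping AdamW for Lion with rescaled hyperparameters (lion-pytorch is a
# third-party implementation; exact values will need per-task tuning).
import torch
from lion_pytorch import Lion

model = torch.nn.Linear(512, 512)  # placeholder for a diffusion UNet
# adamw = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
optimizer = Lion(model.parameters(), lr=1e-5, weight_decay=1e-1)  # ~10x lower lr, ~10x higher wd
```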