![Shiqi Chen Profile](https://pbs.twimg.com/profile_images/1685180570920480768/X7GONRw5_x96.jpg)
Shiqi Chen (@chenshi51326099)
Followers: 131 · Following: 164 · Statuses: 32
PhD student @CityUHongKong. NLPer. Visiting PhD @hkust and @NorthwesternU.
Hong Kong · Joined March 2023
RT @AndrewZeng17: 🚀 Excited to share our latest research: B-STAR! 💡 Tackling the stagnation in self-improvement, we present a framework th…
0 replies · 25 retweets · 0 likes
RT @WeiLiu99: 🔔🎄Christmas Gift for Multimodal Reasoning: Introducing M-STaR 🎁 (1/6) How can we dive deeper to help Large Multimodal Models…
0 replies · 36 retweets · 0 likes
RT @furongh: I don't usually share personal matters here, but I feel unsafe and need your help. This morning, I was left feeling deeply sh…
0 replies · 22 retweets · 0 likes
RT @srush_nlp: I'm putting together a quick reading list of papers related to o1 & test-time scaling for grad students following this area.…
0 replies · 119 retweets · 0 likes
RT @jinghan23: In Philadelphia for #COLM2024! Excited to chat about long-context, multimodal, reasoning, and everything related to LMs! Com…
0 replies · 5 retweets · 0 likes
Big congrats to Chang and Junxian!!!
AgentBoard has been accepted as an oral in the NeurIPS D&B track! 🥳 Amazing results by all collaborators; thanks especially to @junxian_he, @junleiz0609, @209zzh, and @ChengYANG_yc for the great teamwork! We will upgrade the codebase and feature more LLMs on our leaderboard shortly. Stay tuned!
1 reply · 0 retweets · 2 likes
RT @tongyx361: "Bereft of that Accept, how shall we show the world our ideas are novel, and our experiments are solid?" 🐵 Publication of DA…
0 replies · 5 retweets · 0 likes
RT @junteng88716710: ☺️Excited to know that our paper is accepted to EMNLP main! (It studies the foundational ques…
0 replies · 2 retweets · 0 likes
RT @ZeyuanAllenZhu: (1/7) Physics of LM, Part 2.2 with 8 results on "LLM how to learn from mistakes" now on arxiv: …
0 replies · 104 retweets · 0 likes
RT @junxian_he: I’ll be in Bangkok for #ACL2024, looking forward to meeting old and new friends!
0 replies · 2 retweets · 0 likes
If you are interested in interpreting and mitigating hallucinations from an inner-states view, come and stop by the poster session at Hall C 4-9 tomorrow morning, July 23, #1000! 💡💡
Discover how sharper context activations can cut through hallucinations in language models! 🌟✨ If you're interested in hallucination detection and mitigation, join us at our poster session tomorrow morning, July 23, in Hall C 4-9, #1000! (I like this number!) Check out our full paper "In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation": Code: #ICML2024 #ICML #hallucination #LLM #interpretability
5 replies · 1 retweet · 7 likes
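The tweet's core intuition, read loosely, is that a model's in-context activations are "sharper" (more peaked) when it is grounded, and flatter when it hallucinates. A minimal hypothetical sketch of that idea, using the entropy of a softmax distribution as the sharpness proxy; the logit values below are made up for illustration and are not taken from the paper:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    # Shannon entropy in nats; zero-probability entries are skipped
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# A "sharp" (confident) token distribution vs. a flat (uncertain) one
sharp_logits = np.array([8.0, 1.0, 0.5, 0.2])  # hypothetical values
flat_logits = np.array([1.0, 1.0, 1.0, 1.0])

sharp_H = entropy(softmax(sharp_logits))
flat_H = entropy(softmax(flat_logits))
print(sharp_H < flat_H)  # → True: sharper distribution has lower entropy
```

Under this proxy, a low-entropy (sharp) distribution would act as a signal of reliability, while a high-entropy (flat) one would raise a hallucination alert.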
RT @junxian_he: Great to see DART-Math is featured in NuminaMath's report and outperforms DeepseekMath-RL by 5 points on OOD test, when our…
0 replies · 4 retweets · 0 likes
Single datasets like TruthfulQA aren't enough: scaling up with diverse datasets is the key to unlocking universal inner truthfulness. #LLM #Truthfulness
Several studies find that truthful and untruthful data can be separated linearly in an LLM's inner representations. 🧐 The simple linear hyperplane is intriguing, but have you considered whether it universally captures real truthfulness? (1/n)
0 replies · 2 retweets · 10 likes
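The linear-separability claim in this thread can be illustrated with a toy linear probe. Everything below is a synthetic sketch: the "hidden states" are made-up Gaussian clusters, not real LLM representations, and the probe is a plain logistic regression trained by gradient descent rather than any method from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16   # hypothetical hidden-state dimension
n = 200  # samples per class

# Synthetic stand-ins for inner representations:
# two Gaussian clusters shifted along a common direction
w_true = rng.normal(size=d)
truthful = rng.normal(size=(n, d)) + w_true
untruthful = rng.normal(size=(n, d)) - w_true

X = np.vstack([truthful, untruthful])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic-regression probe trained by plain gradient descent
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)   # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

acc = (((X @ w + b) > 0) == (y == 1)).mean()
print(f"probe accuracy: {acc:.2f}")  # near-perfect on this separable toy data
```

A single hyperplane separates these toy clusters easily; the thread's question is whether a probe fit on one dataset's representations transfers to others, which this synthetic setup deliberately does not test.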
RT @tongyx361: 🤯All previous SOTA math SFT datasets are biased?! Introducing 🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical…
0 replies · 15 retweets · 0 likes
RT @ManlingLi_: I will be at Seattle for #CVPR2024 this week! Excited to talk about LLMs/VLMs for embodied agents and physical world knowle…
0 replies · 6 retweets · 0 likes
RT @AndrewZeng17: 🚀 Introducing "Auto Evol-Instruct": A pioneering framework that automates instruction evolution for LLMs without human in…
0 replies · 1 retweet · 0 likes
RT @junxian_he: Downstream scores can be noisy. If you wonder about Llama 3's compression perf in this figure, we have tested the BPC: Lla…
0 replies · 9 retweets · 0 likes