Shiqi Chen Profile
Shiqi Chen

@chenshi51326099

Followers: 131
Following: 164
Statuses: 32

PhD student @CityUHongKong. NLPer. Visiting PhD @hkust and @NorthwesternU.

Hong Kong
Joined March 2023
@chenshi51326099
Shiqi Chen
10 months
Activation Decoding has been accepted by #ICML2024! 🎊🎊 Excited to meet old and new friends in #Vienna!
@chenshi51326099
Shiqi Chen
11 months
Thrilled to share our latest paper on understanding the factual behavior of LLMs from a mechanistic interpretability view! 💡💡 📄Paper: 💻Code: (1/n)
@chenshi51326099
Shiqi Chen
1 month
RT @AndrewZeng17: 🚀 Excited to share our latest research: B-STAR! 💡 Tackling the stagnation in self-improvement, we present a framework th…
@chenshi51326099
Shiqi Chen
2 months
RT @WeiLiu99: 🔔🎄Christmas Gift for Multimodal Reasoning: Introducing M-STaR 🎁 (1/6) How can we dive deeper to help Large Multimodal Models…
@chenshi51326099
Shiqi Chen
2 months
RT @hengjinlp: Interesting … so called AI ethics expert is a racist
@chenshi51326099
Shiqi Chen
3 months
RT @furongh: I don't usually share personal matters here, but I feel unsafe and need your help. This morning, I was left feeling deeply sh…
@chenshi51326099
Shiqi Chen
4 months
RT @srush_nlp: I'm putting together a quick reading list of papers related to o1 & test-time scaling for grad students following this area.…
@chenshi51326099
Shiqi Chen
4 months
RT @yqsong: Follow us @hkustNLP 😁
@chenshi51326099
Shiqi Chen
4 months
RT @jinghan23: In Philadelphia for #COLM2024! Excited to chat about long-context, multimodal, reasoning, and everything related to LMs! Com…
@chenshi51326099
Shiqi Chen
5 months
Big congrats to Chang and Junxian!!!
@ma_chang_nlp
chang ma
5 months
AgentBoard has been accepted as oral in NeurIPS D&B track! 🥳 Amazing results by all collaborators, especially @junxian_he, @junleiz0609, @209zzh, and @ChengYANG_yc, for great teamwork! We will upgrade the codebase and feature more LLMs on our leaderboard shortly. Stay tuned!
@chenshi51326099
Shiqi Chen
5 months
RT @tongyx361: "Bereft of that Accept, how shall we show the world our ideas are novel, and our experiments are solid?" 🐵 Publication of DA…
@chenshi51326099
Shiqi Chen
5 months
RT @junteng88716710: ☺️Excited to know that our paper is accepted by EMNLP main! It studies the foundational ques…
@chenshi51326099
Shiqi Chen
6 months
RT @ZeyuanAllenZhu: (1/7) Physics of LM, Part 2.2 with 8 results on "LLM how to learn from mistakes" now on arxiv: …
@chenshi51326099
Shiqi Chen
6 months
RT @junxian_he: I’ll be in Bangkok for #ACL2024, looking forward to meeting old and new friends!
@chenshi51326099
Shiqi Chen
7 months
If you are interested in interpreting and mitigating hallucinations from an inner-states view, come and stop by the poster session at Hall C 4-9 tomorrow morning, July 23, #1000! 💡💡
@miao_xiong_cs
Miao Xiong @ NeurIPS'24
7 months
Discover how sharper context activations can cut through hallucinations in language models! 🌟✨ If you're interested in hallucination detection and mitigation, join us at our poster session tomorrow morning, July 23 in Hall C 4-9 #1000! (I like this number!) Check out our full paper "In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation": Code: #ICML2024 #ICML #hallucination #LLM #interpretability
@chenshi51326099
Shiqi Chen
7 months
RT @junxian_he: Great to see DART-Math is featured in NuminaMath's report and outperforms DeepseekMath-RL by 5 points on OOD test, when our…
@chenshi51326099
Shiqi Chen
7 months
Single datasets like TruthfulQA aren’t enough—scaling up with diverse datasets is the key to unlocking universal inner truthfulness. #LLM #Truthfulness
@junteng88716710
Junteng Liu
7 months
Several studies find that correct (truthful) and wrong (untruthful) data can be separated linearly in an LLM's inner representations. 🧐 The simple linear hyperplane is intriguing, but have you considered whether it universally captures real truthfulness? (1/n)
@chenshi51326099
Shiqi Chen
7 months
RT @tongyx361: 🤯All previous SOTA math SFT datasets are biased?! Introducing 🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical…
@chenshi51326099
Shiqi Chen
8 months
RT @ManlingLi_: I will be at Seattle for #CVPR2024 this week! Excited to talk about LLMs/VLMs for embodied agents and physical world knowle…
@chenshi51326099
Shiqi Chen
8 months
RT @AndrewZeng17: 🚀 Introducing "Auto Evol-Instruct": A pioneering framework that automates instruction evolution for LLMs without human in…
@chenshi51326099
Shiqi Chen
10 months
RT @junxian_he: Downstream scores can be noisy. If you wonder about Llama 3's compression perf in this figure, we have tested the BPC: Lla…