Siddharth Karamcheti

@siddkaramcheti

Followers
3K
Following
5K
Media
52
Statuses
981

PhD student @stanfordnlp & @StanfordAILab. Robotics Intern @ToyotaResearch. I like language, robots, and people. On the academic job market!

Stanford, CA
Joined September 2018
@siddkaramcheti
Siddharth Karamcheti
3 months
Thrilled to introduce Vocal Sandbox – our new framework for situated human-robot collaboration. We'll be at @corl_conf all week; don't miss @jenngrannen's oral today (Session 3 @ 5 PM) or our poster tomorrow! Why am I so proud of this paper? 🧵👇
@jenngrannen
Jenn Grannen
3 months
Introducing 🆚Vocal Sandbox: a framework for building adaptable robot collaborators that learn new 🧠high-level behaviors and 🦾low-level skills from user feedback in real time. ✅ Appearing today at @corl_conf as an Oral Presentation (Session 3, 11/6 5pm). 🧵(1/6)
1
5
41
@siddkaramcheti
Siddharth Karamcheti
5 years
Since getting academic access, I’ve been thinking about GPT-3’s applications to grounded language understanding — e.g. for robotics and other embodied agents. In doing so, I came up with a new demo. Objects to Affordances: “what can I do with an object?” cc @gdb
18
63
491
@siddkaramcheti
Siddharth Karamcheti
2 years
How can we use language supervision to learn better visual representations for robotics? Introducing Voltron: Language-Driven Representation Learning for Robotics! Paper: Models: Evaluation: 🧵👇 (1 / 12)
Tweet media one
5
96
393
@siddkaramcheti
Siddharth Karamcheti
3 years
We're excited to open-source Mistral 🚀 - a codebase for accessible large-scale LM training, built as part of Stanford's CRFM. We're releasing 10 GPT-2 Small & Medium models with different seeds & 600+ checkpoints per run! [1/4]
4
102
373
@siddkaramcheti
Siddharth Karamcheti
3 years
Thrilled to announce I'm joining @huggingface 🤗 as a research intern while doing my PhD! Step 1: scalable training that's accessible and transparent w/ @StasBekman and Thomas Wang. Step 2+: multimodality, robotics, RL! Huge thanks to @Thom_Wolf and team for the opportunity!
8
13
290
@siddkaramcheti
Siddharth Karamcheti
1 year
What design choices matter when developing a visually-conditioned language model (VLM)? Check out our paper – Prismatic VLMs – and open-source training code, evaluation suite, and 42 pretrained VLMs at the 7B-13B scale! 📜 ⚙️ + 🤗
Tweet media one
6
53
194
@siddkaramcheti
Siddharth Karamcheti
3 years
Super honored (and very embarrassed) that @karpathy took the time to look at some of our code and fix an inefficiency (*cough*, bug, *cough*) I introduced 😅. Loving the @huggingface open source community today 🤗!
@julien_c
Julien Chaumond
3 years
I'm gonna memorize commit hash `e02037b3524686b57c5a861ea49ac751f15568af` forever ❤️❤️🔥.
1
3
165
@siddkaramcheti
Siddharth Karamcheti
4 years
Incredibly excited (and still a bit in shock) that our #ACL2021 paper with the amazing @RanjayKrishna @drfeifei and @chrmanning won an Outstanding Paper award! This paper has a fun story that doesn’t quite fit in 8 pages; blog post, paper, and all code up soon!
@sameer_
Sameer Singh
4 years
The best paper awards for ACL 2021 are out! @aclmeeting #NLProc #ACL2021
Tweet media one
5
16
158
@siddkaramcheti
Siddharth Karamcheti
4 years
How do we build adaptive language interfaces that learn through interaction with real human users? New work w/ my amazing advisors @DorsaSadigh and @percyliang, to be presented at the @intexsempar2020 workshop at #emnlp2020. Link: A thread 🧵 (1 / N)
Tweet media one
1
31
136
@siddkaramcheti
Siddharth Karamcheti
3 years
Proud to share "LILA: Language-Informed Latent Actions," our paper at #CoRL2021. How can we build assistive controllers by fusing language & shared autonomy? Jointly authored w/ @megha_byte, with my advisors @percyliang & @DorsaSadigh. 📜: 🧵: (1 / 10)
2
24
89
@siddkaramcheti
Siddharth Karamcheti
2 years
Want to build robots that adapt to language corrections in real-time? Introducing "No, to the Right – Online Language Corrections for Manipulation via Shared Autonomy" w/ @YuchenCui1, Raj, Nidhya, @percyliang & @DorsaSadigh at #HRI2023 - 🧵👇 (1/N)
Tweet media one
2
25
89
@siddkaramcheti
Siddharth Karamcheti
2 years
Incredibly honored that Voltron is one of the nominees for Best Paper at #RSS2023! Just landed in Daegu, South Korea and can’t wait to present on Tuesday (catch our talk in Session 4 at 3 PM KST). Excited to meet everyone, and stoked for a week of awesome talks and demos!
@siddkaramcheti
Siddharth Karamcheti
2 years
How can we use language supervision to learn better visual representations for robotics? Introducing Voltron: Language-Driven Representation Learning for Robotics! Paper: Models: Evaluation: 🧵👇 (1 / 12)
Tweet media one
4
17
87
@siddkaramcheti
Siddharth Karamcheti
3 years
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
Tweet media one
2
24
84
@siddkaramcheti
Siddharth Karamcheti
2 years
I've been struggling to put words to Professor Charniak's passing. I can count on one hand the people who have single-handedly reshaped the trajectory of my life – he's at the top of that list. He was a man of honor, humility, and boundless energy.
@BrownCSDept
Brown CS
2 years
@BrownCSDept is mourning the loss of University Professor Emeritus of Computer Science and Cognitive Science Eugene Charniak, one of our founding faculty members. He passed away on June 13, just a few days after his seventy-seventh birthday. (1 of 3)
Tweet media one
1
7
68
@siddkaramcheti
Siddharth Karamcheti
4 years
How do we build visually guided controllers that help humans operate complex robots? Thrilled to share our #L4DC paper "Learning Visually Guided Latent Actions for Assistive Teleoperation" with Albert Zhai, @loseydp, and @DorsaSadigh! Paper: A 🧵 [1/7]
Tweet media one
1
17
68
@siddkaramcheti
Siddharth Karamcheti
3 years
How do we conduct ethical research from the *start*? At Hugging Face, we've started working on multimodal pretraining (💬, 📸, 🎤, 🎥), involving collecting a dataset & training models. Ethics can't be an afterthought! A 🧵⬇️ (1/3)
@LucileSaulnier
Saulnier Lucile
3 years
How do we integrate ethical principles into the ML research cycle? A few months ago, we kicked off a project at Hugging Face on multimodal datasets and models. 🐙 Instead of discussing ethics at the end, we wrote down our ethical values from the start!
1
7
67
@siddkaramcheti
Siddharth Karamcheti
8 months
Thrilled to announce OpenVLA – a vision-language-action policy for robotic control! Shout out to my co-leads @moo_jin_kim & @KarlPertsch; see their threads for overviews of our work. Here though, I want to talk about observations & next steps! 🧵⬇️
@moo_jin_kim
Moo Jin Kim
8 months
✨ Introducing 𝐎𝐩𝐞𝐧𝐕𝐋𝐀 — an open-source vision-language-action model for robotics! 👐
- SOTA generalist policy
- 7B params
- outperforms Octo, RT-2-X on zero-shot evals 🦾
- trained on 970k episodes from OpenX dataset 🤖
- fully open: model/code/data all online 🤗
🧵👇
2
12
67
@siddkaramcheti
Siddharth Karamcheti
9 months
Extremely honored to be named an RSS Pioneer! Thrilled to get to know the rest of the cohort in Delft this summer — thank you @RSSPioneers for organizing such a wonderful event!
@RSSPioneers
RSS Pioneers
9 months
We are thrilled to announce our #RSSPioneers2024 cohort! 🎉 Congratulations to these 30 rising stars in robotics! We thank all of our applicants for their inspiring submissions, and our selection committee and reviewers for their participation and insight.
7
4
66
@siddkaramcheti
Siddharth Karamcheti
2 years
After a few days of playing with #ChatGPT by @OpenAI, I'm inspired by the potential for enriching systems for human-robot interaction! > "Let's role play: You're the "brain" behind an assistive robot that may not be perfect, and I'm a human trying to work with you." 🧵👇
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
10
61
@siddkaramcheti
Siddharth Karamcheti
4 years
Huge congratulations to @ethayarajh for being named a Facebook AI PhD Fellow in NLP! Kawin is a brilliant researcher, collaborator, and friend who has taught me so much; this is incredibly well-deserved and I'm so proud!
1
8
61
@siddkaramcheti
Siddharth Karamcheti
2 years
This is such a cool paper; really simple idea at its core, and incredible results! A must-read for anyone working on imitation learning for robotics!
@_akhaliq
AK
2 years
Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. abs: project page:
0
5
51
@siddkaramcheti
Siddharth Karamcheti
5 years
Finally, I’d like to thank @sh_reya and @notsleepingturk for putting together an incredibly easy-to-use GitHub repo for building these interactive demos. What an awesome resource!
2
3
50
@siddkaramcheti
Siddharth Karamcheti
3 years
Just touched down in London for #CoRL2021 — what a beautiful city! Looking forward to meeting lots of new folks; please email/DM me if you’re down to chat robotics & NLP or shared autonomy for manipulation or (better yet) both! Super excited 🗣🤖!
2
1
52
@siddkaramcheti
Siddharth Karamcheti
3 years
Our #RSS2022 workshop on Learning from Diverse, Offline Data (#LDOD) is on Monday 6/27! Amazing set of papers, incredible speakers (below), and a full panel moderated by @chelseabfinn & @DorsaSadigh! Sneak peek: 1-1 chats between students & speakers:
Tweet media one
2
8
50
@siddkaramcheti
Siddharth Karamcheti
3 years
Professor Charniak gave me my start in ML. Each week, I’d come to his office and we’d talk ideas — no judgement, no feeling I needed to prove anything. He’d always meet me with patience and joy. There’s no way I’d be where I am without those meetings. Congrats on retirement!
@BrownCSDept
Brown CS
3 years
First seen in the pages of Conduit, the annual @BrownCSDept magazine, we're excited to share an extensive look back at Professor Eugene Charniak's work and life as he enters retirement:
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
6
48
@siddkaramcheti
Siddharth Karamcheti
3 years
S4 (by the amazing @_albertgu and @krandiash) is a new sequence model that can reliably scale to *huge* contexts. To dive into how it works, @srush_nlp and I wrote a code library (~200 lines of JAX) and blog post: "The Annotated S4". Check it out!
@srush_nlp
Sasha Rush
3 years
The Annotated S4 (/w @siddkaramcheti) . A step-by-step guide for building your own 16,000-gram language model.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
0
6
45
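As context for the tweet above: S4 layers build on the classic discrete state-space recurrence. A minimal numpy sketch of that generic recurrence (x_k = A x_{k-1} + B u_k, y_k = C x_k) is below — this is an illustrative toy, not the actual Annotated S4 / JAX code, and the matrices here are random stand-ins rather than S4's structured parameterization.

```python
import numpy as np

# Generic discrete state-space model (SSM) scan over a scalar input sequence:
#   x_k = A @ x_{k-1} + B * u_k    (state update)
#   y_k = C @ x_k                  (scalar readout)
def ssm_scan(A, B, C, u):
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k   # advance the hidden state with input u_k
        ys.append(C @ x)      # read out a scalar per step
    return np.array(ys)

rng = np.random.default_rng(0)
N = 4
A = 0.9 * np.eye(N)          # stable toy dynamics (eigenvalues < 1)
B = rng.normal(size=N)
C = rng.normal(size=N)
y = ssm_scan(A, B, C, np.ones(16))
print(y.shape)  # (16,)
```

S4's contribution is making this recurrence fast and stable at very long sequence lengths via a structured A matrix and a convolutional view of the scan.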
@siddkaramcheti
Siddharth Karamcheti
3 years
Check out the Robotics section (§2.3), discussing opportunities in applying #foundationmodels across the robotics pipeline. Challenges await! Collecting the right data and ensuring safety are crucial. But tackling these problems now – *before* building models – is key!
Tweet media one
@StanfordHAI
Stanford HAI
3 years
NEW: This comprehensive report investigates foundation models (e.g. BERT, GPT-3), which are engendering a paradigm shift in AI. 100+ scholars across 10 departments at Stanford scrutinize their capabilities, applications, and societal consequences.
Tweet media one
1
9
39
@siddkaramcheti
Siddharth Karamcheti
3 years
Honored to be presenting our work "Mind Your Outliers" at the #ACL2021NLP Best Paper Session today at 4 PM PST (23:00 UTC+0). ACL-internal link w/ Q&A: Video (I'll present a "punchier" version tonight):
@stanfordnlp
Stanford NLP Group
4 years
Congrats to @siddkaramcheti, @RanjayKrishna, @drfeifei & @chrmanning for #ACL2021NLP Outstanding Paper Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering. Code #NLProc
Tweet media one
4
8
40
@siddkaramcheti
Siddharth Karamcheti
4 years
Paper and code are out! Getting to this point, this paper – which is fundamentally about a negative result – wasn't a linear path, but one that took months. Excited to share that story (the one that didn't make it into the paper) with y'all. Stay tuned for the blog post!
@stanfordnlp
Stanford NLP Group
4 years
Congrats to @siddkaramcheti, @RanjayKrishna, @drfeifei & @chrmanning for #ACL2021NLP Outstanding Paper Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering. Code #NLProc
Tweet media one
0
6
37
@siddkaramcheti
Siddharth Karamcheti
6 years
Incredibly honored to be named a 2019 Open Philanthropy AI Fellow. Thanks so much to all my advisors, mentors, friends, and family who helped me get this far!
@open_phil
Open Philanthropy
6 years
Excited to announce the 2019 class of the Open Phil AI Fellowship. Eight machine learning students will collectively receive up to $2 million in PhD fellowship support over the next five years. Meet the 2019 fellows:
1
1
37
@siddkaramcheti
Siddharth Karamcheti
4 years
This is amazing news, and truly incredible. I'm so lucky to have @DorsaSadigh as an advisor, and I'm really looking forward to the awesome work our brilliant lab will come out with over the next few years! Congratulations @DorsaSadigh!!!
@StanfordAILab
Stanford AI Lab
4 years
Congratulations to @StanfordAILab faculty Dorsa Sadigh on receiving an MIT Tech Review TR-35 award for her work on teaching robots to be better collaborators with people.
Tweet media one
1
0
35
@siddkaramcheti
Siddharth Karamcheti
5 years
I just finished the Residency this past August, and it was one of the most enriching experiences I’ve ever had — I got to work with amazing people on really hard research, and I learned a ton! I highly recommend applying — opportunities like this are few and far between!
@ylecun
Yann LeCun
5 years
Applications for the Facebook AI Residency program are open. US (NYC, Seattle, Menlo Park): UK (London): Deadline: 2020-01-31
1
1
35
@siddkaramcheti
Siddharth Karamcheti
5 years
Interestingly, @_eric_mitchell_ and I found that if you “prime” GPT-3 with “natural” (less structured) text, you get less ambiguous action associations (set it to stun vs. stun). Maybe this provides insight into how to structure your “prompts” — the more “natural,” the better!
Tweet media one
4
6
32
@siddkaramcheti
Siddharth Karamcheti
5 years
How do reading comprehension models select supporting evidence? How does this evidence compare to that chosen by human users? Very excited to share our new #emnlp2019 paper w/ @EthanJPerez, Rob Fergus, @jaseweston, @douwekiela, and @kchonyc!
@EthanJPerez
Ethan Perez
5 years
What evidence do people find convincing? Often, the same evidence that Q&A models find convincing. Check out our #emnlp2019 paper: And blog post: w/ @siddkaramcheti, Rob Fergus, @jaseweston, @douwekiela, @kchonyc
0
3
34
@siddkaramcheti
Siddharth Karamcheti
3 years
Congrats to my advisor @DorsaSadigh for being named a 2022 Sloan Fellow! I’m incredibly lucky and grateful to be one of your students!
0
2
32
@siddkaramcheti
Siddharth Karamcheti
3 years
🎉 Stoked to share our #NeurIPS2021 paper "ELLA: Exploration through Learned Language Abstraction." How can language help RL agents solve sparse-reward tasks more efficiently? Led by @suvir_m (applying to PhDs now!), with my advisor @DorsaSadigh! 🔗:
@suvir_m
Suvir Mirchandani
3 years
Training RL agents to complete language instructions can be difficult & sample-inefficient. A key challenge is exploration. Our method, ELLA, helps guide exploration in terms of simpler subtasks. Paper: Talk: #NeurIPS21
Tweet media one
3
2
31
@siddkaramcheti
Siddharth Karamcheti
4 years
Very excited for this line-up of amazing speakers, and to be presenting our work "Learning Adaptive Language Interfaces through Decomposition" w/ @DorsaSadigh and @percyliang! If you're at #emnlp2020 tomorrow, definitely stop by!
@intexsempar2020
Interactive and Executable Semantic Parsing
4 years
Looking forward to seeing you tomorrow (19/11) at the Interactive and Executable Semantic Parsing workshop, starting at 8:15am PT! On our page you'll find our schedule, list of invited speakers, and a link to the Zoom session:
Tweet media one
0
6
29
@siddkaramcheti
Siddharth Karamcheti
4 years
Stoked to kick off the 2020 @stanfordnlp seminar series with a talk from @_jessethomason_ on "From Human Language to Agent Action." As a student working in #RoboNLP, I can't wait to hear about his recent work and perspective on the field as a whole!
Tweet media one
1
4
29
@siddkaramcheti
Siddharth Karamcheti
3 years
The Bay Area Robotics Symposium 2021 is in full swing - we’ve got a full house! #BARS2021. Catch the second set of faculty talks now. 1:15 PT: keynotes by @percyliang and @robreich + a buzzing afternoon session w/ more faculty, student, and sponsor talks!
Tweet media one
1
3
27
@siddkaramcheti
Siddharth Karamcheti
5 years
“Priming” the model was pretty straightforward — I just picked four random objects, and chose the first few affordances that came to mind:
Tweet media one
2
2
27
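The "priming" step described above is what's now usually called few-shot prompting: a handful of object-to-affordance examples followed by a new object for the model to complete. A hypothetical reconstruction of that prompt construction is below — the objects and affordances here are illustrative stand-ins, not the originals from the demo.

```python
# A few hand-written object -> affordance examples used as the "prime",
# followed by the query object left open for the model to complete.
# (Hypothetical examples; not the actual demo's prompt.)
examples = [
    ("hammer", "drive nails, pull nails, break things"),
    ("mug", "hold coffee, hold pens, drink from"),
    ("rope", "tie knots, pull objects, climb"),
    ("book", "read, press flowers, prop open a door"),
]

def build_prompt(query_object):
    # Format each example, then end with an unanswered query for completion.
    lines = [f"Object: {obj}\nAffordances: {aff}\n" for obj, aff in examples]
    lines.append(f"Object: {query_object}\nAffordances:")
    return "\n".join(lines)

print(build_prompt("soup can"))
```

The completion model then continues the text after the final "Affordances:" line, which is where the predicted affordances come from.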
@siddkaramcheti
Siddharth Karamcheti
3 years
Really excited to see this at #CoRL2021 tomorrow! @coreylynch and the entire Google robotics team have really inspired my research (especially with their RoboNLP work). Super stoked to hear more about implicit behavioral cloning — y’all should make it to the poster if you can!
@coreylynch
Corey Lynch
3 years
How can robots learn to imitate precise 🎯 and multimodal 🔀 human behaviors? “Implicit Behavioral Cloning” 🦾🔷🟦💛🟨. Paper, videos, code: See IBC learning combinatorial sorting and 1mm-precision insertion from vision, tasks explicit BC struggles with.
1
1
27
@siddkaramcheti
Siddharth Karamcheti
2 months
Really grateful to @StanfordHAI for covering our work on Vocal Sandbox - a framework for building robots that can seamlessly work with and learn from you in the real world (w/ @jenngrannen @suvir_m @percyliang @DorsaSadigh). In case you missed it:
@StanfordHAI
Stanford HAI
2 months
A new robot system called Vocal Sandbox is the first of many systems that promise to help integrate robots into our daily lives. Learn about the prototype that @Stanford researchers presented at the 8th annual Conference on Robot Learning.
0
6
26
@siddkaramcheti
Siddharth Karamcheti
4 years
🎉 Incredibly thrilled to share our work "Targeted Data Acquisition for Evolving Negotiation Agents," to be presented at #ICML2021, led by my inspirational labmate @MinaeKwon, Mariano-Florentino Cuéllar, and @DorsaSadigh! Reasons why I find this work exciting - a 🧵
@MinaeKwon
Minae Kwon
4 years
Excited to share our #ICML2021 paper “Targeted Data Acquisition for Evolving Negotiation Agents” with the amazing @siddkaramcheti, Mariano-Florentino Cuéllar, and @DorsaSadigh! Paper: Talk: 🧵👇 [1/6]
Tweet media one
1
1
25
@siddkaramcheti
Siddharth Karamcheti
2 years
How can we teach humans to provide better demonstration data for robotic manipulation? Check out our #CoRL2022 paper on "Eliciting Compatible Demonstrations for Multi-Human Imitation Learning" w/ @gandhikanishk @madelineliao & @DorsaSadigh – 🧵👇 (1/N)
@gandhikanishk
Kanishk Gandhi @ NeurIPS
2 years
Are you collecting demonstrations for imitation learning from multiple demonstrators? Naively collecting demonstrations might actually hurt performance!! We present a simple way to teach people to teach robots better! Appearing at CoRL ‘22. 🧵 (1/5)
1
2
24
@siddkaramcheti
Siddharth Karamcheti
10 months
David (@dlwh) is an incredible friend and mentor. Highly recommend following his work — he not only dives deep into understanding *all* the parts of the systems he works with, but also cares about sharing these insights in a way that’s accessible. Levanter is just one example!
@srush_nlp
Sasha Rush
10 months
Recommend following David Hall (@dlwh) and the Levanter project from @StanfordCRFM . Just no nonsense details about fixing the pain-points of scaling LLM training, one at a time.
1
3
24
@siddkaramcheti
Siddharth Karamcheti
5 years
I have a grounded language joke, but you’d miss the context.
@criticalneuro
Ida Momennejad
5 years
I have a reinforcement learning joke, but not sure it's rewarding.
0
2
24
@siddkaramcheti
Siddharth Karamcheti
2 years
@kevin_zakka Never too late to start working on robotics & NLP! It’s such a wonderful time, with so many great questions to explore!
1
0
23
@siddkaramcheti
Siddharth Karamcheti
4 years
This is going to be a fantastic talk - can’t wait for Thursday!
@stanfordnlp
Stanford NLP Group
4 years
We’re very excited to kick off our 2021 Stanford NLP Seminar series with Ian Tenney (@iftenney) of Google Research presenting on “BERTology and Beyond”! Thursday 10am PT. Open to the public; non-Stanford people can register at
Tweet media one
0
5
22
@siddkaramcheti
Siddharth Karamcheti
7 months
I'm really loving alphaXiv (from a great team of Stanford students including @rajpalleti314)! Beyond just reading arXiv papers, it's an awesome platform for discussion, collaborative annotation, and note-taking. Give it a try – a clear win for open science.
@askalphaxiv
alphaXiv
7 months
How do LLMs learn new facts while pre-training? Excited to have authors @hoyeon_chang and Jinho Park answer questions on their latest paper "How Do Large Language Models Acquire Factual Knowledge During Pretraining?" Leave questions for the authors:
2
6
22
@siddkaramcheti
Siddharth Karamcheti
5 years
The affordance prediction is fairly good (e.g. interdimensional portal), but it’s not perfect (e.g. soup can). That being said, I think this has potential for text-based games (@jaseweston, @mark_riedl), NetHack (@_rockt, @egrefen), and, more importantly, robotic manipulation!
5
2
21
@siddkaramcheti
Siddharth Karamcheti
3 years
In addition to the codebase, @laurel_orr1 and I (with the rest of the Propulsion team!) wrote up a blog post describing Mistral and our journey in more detail. Check it out here; we'd love to hear your thoughts: [1/5]
@siddkaramcheti
Siddharth Karamcheti
3 years
We're excited to open-source Mistral 🚀 - a codebase for accessible large-scale LM training, built as part of Stanford's CRFM. We're releasing 10 GPT-2 Small & Medium models with different seeds & 600+ checkpoints per run! [1/4]
1
7
22
@siddkaramcheti
Siddharth Karamcheti
2 years
This is incredibly cool — I really like this line of work on learning to quickly deploy language-aware robots to completely new environments. Amazing work!
@notmahi
Mahi Shafiullah 🏠🤖
2 years
How can we train data-efficient robots that can respond to open-ended queries like “warm up my lunch” or “find a blue book”? Introducing CLIP-Field, a semantic neural field trained w/ NO human labels & only w/ web-data pretrained detectors, VLMs, and LLMs
1
5
19
@siddkaramcheti
Siddharth Karamcheti
5 years
I had a great time compiling this post. There’s some really exciting and compelling work coming out of Stanford in a lot of different areas. Very proud to call these people my peers!
@StanfordAILab
Stanford AI Lab
5 years
The International Conference on Machine Learning (ICML) 2020 is being hosted virtually this week. We’re excited to share all the work from SAIL that’s being presented; you’ll find links to papers, videos, and more in our latest blog post:
0
6
21
@siddkaramcheti
Siddharth Karamcheti
2 years
This is the best kind of paper by my labmates @suneel_belkhale and @YuchenCui1 — it starts with a simple punchline (data quality matters for imitation learning) but really drives home exactly what “good data” looks like. Definitely worth a read!
@suneel_belkhale
Suneel Belkhale
2 years
In imitation learning (IL), we often focus on better algorithms, but what about improving the data? What does it mean for a dataset to be high quality? Our work takes a first step towards formalizing and analyzing data quality. (1/5)
0
2
20
@siddkaramcheti
Siddharth Karamcheti
2 years
He gave me my start in research, shaped the way I think about my work (depth over distance, the value of simple words and ideas), and convinced me that a PhD – a career in research – was something that I could do. I will always remember his patience and support for me.
2
1
18
@siddkaramcheti
Siddharth Karamcheti
4 years
Thrilled to have @ybisk from @SCSatCMU at this week's @stanfordnlp Seminar (Thursday @ 10 AM PST - open to the public)! @ybisk's work in #RoboNLP has been truly inspirational - I can't wait to learn from him and get a taste of where the field is moving!
Tweet media one
1
4
19
@siddkaramcheti
Siddharth Karamcheti
4 years
My incredible mother is talking with fellow professionals tomorrow about managing mental health in the midst of the India COVID Crisis on @Radiozindagisfo. This is an incredibly important discussion to be having, for those here and abroad. Please tune in if you can!
Tweet media one
0
0
18
@siddkaramcheti
Siddharth Karamcheti
1 year
Really excited by this work from my incredible labmate @priyasun_! Sketches are an intuitive and expressive way of specifying not just a goal, but also *how* to perform a task — can’t wait to see sketches + language + gestures in the context of rich, collaborative robotics!
@priyasun_
Priya Sundaresan
1 year
We can tell our robots what we want them to do, but language can be underspecified. Goal images are worth 1,000 words, but can be overspecified. Hand-drawn sketches are a happy medium for communicating goals to robots! 🤖✏️ Introducing RT-Sketch: 🧵 1/11
1
1
18
@siddkaramcheti
Siddharth Karamcheti
4 years
Everyone needs a bit of @douwekiela in their life! Tune into this week's Stanford NLP Seminar this Thursday at 10 AM PST (open to the public - register here) where he'll talk about "Rethinking Benchmarking in AI" and Dynabench!
Tweet media one
1
0
18
@siddkaramcheti
Siddharth Karamcheti
5 years
Grounding, embodiment, and interaction - really excited to see this, and can’t wait to explore these areas throughout the rest of my PhD!
@universeinanegg
Ari Holtzman
5 years
"You can't learn language from the radio." 📻. Why does NLP keep trying to? We argue that physical and social grounding are key because, no matter the architecture, text-only learning doesn't have access to what language is *about* and what it *does*.
1
0
18
@siddkaramcheti
Siddharth Karamcheti
4 years
Incredible to see LXMERT added to Transformers - it’s a clean and impressive implementation that’s really going to make building vision-and-language applications more accessible and widespread. Excited to see people adopt it!
@mohitban47
Mohit Bansal
4 years
Amazing effort by @avalmendoz+@haotan5 & @huggingface @LysandreJik @qlhoest @Thom_wolf on LXMERT demo+backend! Comes with flexible dataset generation via HF/datasets for feat+box predn from bottomup-FRCNN w ultra-fast access; allows extension to other mmodal tasks by community 🤗.
0
2
18
@siddkaramcheti
Siddharth Karamcheti
1 year
Check out IDEFICS - an open vision-language model that can accept sequences of images and text, for use in tasks like visual dialogue, dense captioning, and more! Demo & Models:
@SanhEstPasMoi
Victor Sanh
1 year
Introducing IDEFICS, the first open state-of-the-art visual language model at the 80B scale! The model accepts arbitrary sequences of images and texts and produces text. A bit like a multimodal ChatGPT! Blogpost: Playground:
Tweet media one
1
5
17
@siddkaramcheti
Siddharth Karamcheti
4 years
This is an incredible initiative! In addition, if you want to chat about grad school/applying, please feel free to DM/email me - I couldn’t have gotten into grad school without the support of older students and mentors, and I’d love to do what I can to help out!
@parastooabtahi
Parastoo Abtahi
4 years
We’ve started a Student-Applicant Support program for underrepresented students. If you’re considering applying to the Computer Science PhD program at Stanford, we will do our best to give you one round of feedback on your application. Apply by October 31.
0
1
16
@siddkaramcheti
Siddharth Karamcheti
5 years
Very cool work presenting a brand-new language-centric collaborative task pairing a human user and a bot situated in a grounded environment! Thorough evaluation too, including a bona fide human eval. Really exciting stuff - we need more tasks like this.
@yoavartzi
Yoav Artzi
5 years
Upcoming in EMNLP: Executing Instructions in Situated Collaborative Interactions (. New language collaboration environment and large dataset, modeling and learning methods, and a new evaluation protocol for sequential instructions.
0
5
16
@siddkaramcheti
Siddharth Karamcheti
4 years
Thank you so much @michellearning and @andrey_kurenkov for everything you’ve done for the @StanfordAILab Blog! I learned so much from you both; really looking forward to continuing what you started with the amazing @nelsonfliu, @jmschreiber91, and @megha_byte!
@michellearning
Michelle Lee
4 years
It's official! I am now an "alumni editor" of the @StanfordAILab blog! It was an amazing journey to have started the blog and led it as the co-editor-in-chief with @andrey_kurenkov, but even more amazing to see the new editorial board take over!.
0
1
16
@siddkaramcheti
Siddharth Karamcheti
5 years
Fantasy fans rejoice! Very excited that our paper introducing the LIGHT dialogue dataset was accepted at EMNLP! Can't wait to see others build on the fantasy text adventure platform and develop new grounded agents capable of speaking and acting in the world.
@parlai_parley
ParlAI
5 years
Accepted at EMNLP! Built in ParlAI. Learning to Speak and Act in a Fantasy Text Adventure Game. @JackUrbs, Angela Fan, @siddkaramcheti, Saachi Jain, Samuel Humeau, Emily Dinan, @_rockt, @douwekiela, Arthur Szlam, @jaseweston
Tweet media one
Tweet media two
Tweet media three
0
4
15
@siddkaramcheti
Siddharth Karamcheti
2 years
When faced with a socially ambiguous cleanup task (a half-complete Lego model, a Starbucks cup), what should a robot do? Our approach – iterate an LLM "reasoner" with active perception/VQA: "move above the cup" --> "is it empty?" (yes) --> `cleanup(cup)`. See @MinaeKwon's 🧵👇
@MinaeKwon
Minae Kwon
2 years
How can 🤖s act in a socially appropriate manner without human specification? Our 🤖s reason socially by actively gathering missing info in the real world. We release the MessySurfaces dataset to assess socially appropriate behavior. 🧵👇
0
3
15
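The reasoner-plus-perception loop described above can be sketched in a few lines: the LLM proposes either an information-gathering question or a final action, and a VQA model answers questions about the scene. This is a toy reconstruction of the loop's shape only — `llm_propose` and `vqa_answer` are hypothetical stand-ins, not the paper's actual models or API.

```python
# Toy sketch of "iterate an LLM reasoner with active perception/VQA".
# llm_propose and vqa_answer are hard-coded stand-ins for real model calls.

def llm_propose(history):
    # Stand-in for an LLM call: ask a clarifying question first,
    # then commit to an action once an answer has been gathered.
    if not any(step.startswith("answer:") for step in history):
        return ("ask", "is the cup empty?")
    return ("act", "cleanup(cup)")

def vqa_answer(question, scene):
    # Stand-in for a VQA model grounded in the current camera view.
    return scene.get(question, "unknown")

def social_cleanup(scene, max_steps=5):
    history = []
    for _ in range(max_steps):
        kind, payload = llm_propose(history)
        if kind == "ask":
            answer = vqa_answer(payload, scene)
            history.append(f"answer: {payload} -> {answer}")
        else:
            return payload  # commit to the chosen action
    return "no-op"

scene = {"is the cup empty?": "yes"}
print(social_cleanup(scene))  # cleanup(cup)
```

The point of the loop is that the robot gathers the missing social context (is the cup still in use?) before committing to an action, rather than acting on the ambiguous scene directly.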
@siddkaramcheti
Siddharth Karamcheti
2 years
Really grateful to have the chance to present our work at @HRI_Conference this week! Had so much fun in Stockholm - lots of great papers and new friends.
@HRI_Conference
The HRI Conference
2 years
Takayuki Kanda has just announced the start of the "Human-robot communication – 1" session 🗣. Enjoy! ✨ #hri2023 #hri
Tweet media one
0
2
15
@siddkaramcheti
Siddharth Karamcheti
4 years
I’ve been incredibly honored to be an OpenPhil Fellow and part of the fellows community! It’s a great group of people and a wonderful program, so I highly recommend current (and incoming) PhD students apply!
@open_phil
Open Philanthropy
4 years
Applications are open for the Open Phil AI Fellowship!. This program extends full support to a community of current & incoming PhD students, in any area of AI/ML, who are interested in making the long-term, large-scale impacts of AI a focus of their work.
0
1
15
@siddkaramcheti
Siddharth Karamcheti
4 years
Dilip and I met in a class. I was a lonely transfer student, and didn’t really know anyone. In a stroke of fate, @StefanieTellex helped pair us together. Years later, we’re PhD students at the same school, trade ideas all the time, and (COVID-permitting) get KBBQ once a quarter.
@laura4lano
Dr. Laura Forlano
4 years
Academic love letters: A thread on how you met your closest collaborator, intellectual soulmate, favorite coauthor or other kindred spirit. Do tell.
2
0
15
@siddkaramcheti
Siddharth Karamcheti
3 years
It's a wonderful week! Really proud of my advisor @percyliang for being named an AI2050 Fellow! Thrilled for you, and ever so grateful to be one of your students!
0
1
15
@siddkaramcheti
Siddharth Karamcheti
3 years
@nabla_theta I hear the sqrt might be optional?
1
0
14
@siddkaramcheti
Siddharth Karamcheti
3 years
#LDOD was yesterday, and we had a blast! Thanks to everyone for coming out!. In case you missed it, congratulations to our outstanding paper - "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations.". Looking forward to next time!
@siddkaramcheti
Siddharth Karamcheti
3 years
Our #RSS2022 workshop on Learning from Diverse, Offline Data (#LDOD) is on Monday 6/27! Amazing set of papers, incredible speakers (below), and a full panel moderated by @chelseabfinn & @DorsaSadigh! Sneak Peek: 1-1 chats between students & speakers:
1
5
12
@siddkaramcheti
Siddharth Karamcheti
3 years
Very lucky to have @ebiyik_ as a labmate and friend. His work is insightful, thorough, and just plain cool! He's also on the academic job market this year 🎉.
@HRIPioneers
HRI Pioneers
3 years
#HRIPioneers2022 Erdem Bıyık is working on “Learning from Humans for Adaptive Interaction”. Erdem’s website: Twitter: And check out our full list of participants on our website:
1
0
13
@siddkaramcheti
Siddharth Karamcheti
3 years
Incredibly excited to see this new paper at @corl_conf - scalable language-conditioned policy learning for manipulation using CLIP + Transporter networks. Also comes with a great suite of benchmark tasks! Stoked to build off this in future work - congrats @mohito1905! #RoboNLP.
@_akhaliq
AK
3 years
CLIPort: What and Where Pathways for Robotic Manipulation.pdf: abs: project page:
1
0
13
@siddkaramcheti
Siddharth Karamcheti
3 years
At 10:20 PDT, @laurel_orr1 and I will be talking at the Workshop for #FoundationModels ( about Mistral, as well as our journey towards transparent and accessible training. We hope to see you there - bring your questions! [2/4].
1
4
14
@siddkaramcheti
Siddharth Karamcheti
2 years
Incredible work by @tonyzzhao on low-cost, fine-grained bimanual teleoperation. This work is clean, open, and is a game changer for data collection and enabling new, complex tasks. Check out the demos — they’re the real deal.
@tonyzzhao
Tony Z. Zhao
2 years
Introducing ALOHA 🏖: 𝐀 𝐋ow-cost 𝐎pen-source 𝐇𝐀rdware System for Bimanual Teleoperation. After 8 months iterating @stanford and 2 months working with beta users, we are finally ready to release it! Here is what ALOHA is capable of:
0
1
14
@siddkaramcheti
Siddharth Karamcheti
4 years
Excited to be at #ICML2021 this week! Catch @MinaeKwon's amazing talk at the "Reinforcement Learning 2" session tomorrow from 7 - 8 AM PDT (lots of other fantastic work too - don't miss it!). We'll also be at our poster from 8 - 11 AM PDT at Section C4! Let's chat negotiation!
@siddkaramcheti
Siddharth Karamcheti
4 years
🎉 Incredibly thrilled to share our work "Targeted Data Acquisition for Evolving Negotiation Agents" to be presented at #ICML2021, led by my inspirational labmate @MinaeKwon, Mariano-Florentino Cuéllar, and @DorsaSadigh! Reasons why I find this work exciting - a 🧵.
0
2
13
@siddkaramcheti
Siddharth Karamcheti
8 months
John is going to be an incredible advisor — apply, apply, apply! (And if you're a Columbia student, take all his classes too!)
@johnhewtt
John Hewitt
8 months
I’m joining the Columbia Computer Science faculty as an assistant professor in fall 2025, and hiring my first students this upcoming cycle!! There’s so much to understand and improve in neural systems that learn from language — come tackle this with me!
0
0
13
@siddkaramcheti
Siddharth Karamcheti
5 years
Fantastic work from my labmate @michiyasunaga around leveraging error messages to perform program repair in code generation/editing style tasks! Learning from feedback is a pretty general principle - excited to see other applications of this in related work!
@michiyasunaga
Michi Yasunaga
5 years
Excited to share our work, DrRepair: "Graph-based, Self-Supervised Program Repair from Diagnostic Feedback"! #ICML2020. When writing code, we spend a lot of time debugging. Can we use machine learning to automatically repair programs from errors? [1/5]
0
7
13
@siddkaramcheti
Siddharth Karamcheti
10 months
Amazing work from @LeoTronchon @HugoLaurencon @SanhEstPasMoi and others at HF on extending VLMs for *interleaved* images and text. Really cool to see the open-source multimodal instruct data (Cauldron), high-res image support, and a super efficient image encoding scheme!
@SanhEstPasMoi
Victor Sanh
10 months
New multimodal model in town: Idefics2! 💪 Strong 8B-parameter model: often on par with open 30B counterparts. 🔓 Open license: Apache 2.0. 🚀 Strong improvement over Idefics1: +12 points on VQAv2, +30 points on TextVQA while having 10x fewer parameters. 📚 Better data:
0
3
13
@siddkaramcheti
Siddharth Karamcheti
5 years
For example, I’d really love to see a robot (equipped with a robust object detection pipeline) use GPT-3 to figure out how to manipulate new objects!
2
1
12
@siddkaramcheti
Siddharth Karamcheti
3 years
Join us as we scale Mistral ( and tackle research around responsibly training/understanding large-scale language models! And looking forward – multimodality: models for language + video, robotics, among others. Please share & DMs open for questions!
@percyliang
Percy Liang
3 years
The Stanford Center for Research on Foundation Models (CRFM) is looking for a research engineer to join our development team! Interested in large-scale training / being immersed in an interdisciplinary research environment? Please apply!
1
3
13
@siddkaramcheti
Siddharth Karamcheti
3 years
Hugging Face is an incredible place to work, and I’ve been so lucky to learn from a diverse and kind group of researchers, engineers, and other interns. We’ve got some great stuff on the horizon; definitely apply!
@douwekiela
Douwe Kiela
3 years
🥳 We are hiring researchers and research interns! Apply here: People with characteristics that are underrepresented in tech are especially encouraged to apply. We will also be having a residency program soon @HuggingFace, stay tuned! 🤗.
0
0
13
@siddkaramcheti
Siddharth Karamcheti
3 years
The openness and transparency of the HF ecosystem is truly great, as is the drive of the team behind it. Enabling data curation, training, and evaluation (at scale) is fundamental to tackling the problems plaguing large-scale models – I'm hopeful and ready to build these tools.
1
1
12
@siddkaramcheti
Siddharth Karamcheti
2 years
Very excited to be co-organizing the *2nd* Workshop on Learning from Diverse, Offline Data at ICRA this year! Submissions due March 23rd – super excited to see all the amazing work in this area for the second year in a row!
@xiao_ted
Ted Xiao
2 years
Announcing the 2nd Workshop on Learning from Diverse, Offline Data (L-DOD) at @ICRA2023 in London on June 2! There has been tremendous progress in scaling AI systems with data - can we apply this paradigm to generalizable robotic systems?
0
1
12
@siddkaramcheti
Siddharth Karamcheti
3 years
Very appreciative of the thoughtful summary of our work! Thanks for the highlight @MosaicML - stoked to follow your progress towards more efficient ML!
@DbrxMosaicAI
Databricks Mosaic Research
3 years
New year, new summaries! Let's look at dataset quality and its impact on sample efficiency. This paper ( studies the ineffectiveness of active learning on visual question answering (VQA) datasets and points to *collective outliers* as the culprit. (1/8).
0
0
12
@siddkaramcheti
Siddharth Karamcheti
3 years
Question for @ReviewAcl: on submitting reviews, I can see the full names of my fellow reviewers. Is this intentional? (I don't think this was the case pre-OpenReview.) It could potentially bias the discussion period (junior vs. senior voices).
1
0
11
@siddkaramcheti
Siddharth Karamcheti
5 years
@wellformedness >>> import torch as tf
0
0
11
@siddkaramcheti
Siddharth Karamcheti
3 years
Big thanks to everyone who helped us build Mistral -- from @Thom_Wolf & @StasBekman who helped us navigate @huggingface Transformers, to @carey_phelps for providing support with @weights_biases. Also huge shoutout to @BlancheMinerva from #EleutherAI for providing feedback! [3/5].
1
0
11
@siddkaramcheti
Siddharth Karamcheti
2 years
For almost two years, I’ve been incredibly lucky to learn from the @AiEleuther community — from sharing tips around training LLMs, to discussing open research problems. Huge congrats to my friend @BlancheMinerva and the entire community! Can’t wait to see what’s up next!
@AiEleuther
EleutherAI
2 years
Over the past two and a half years, EleutherAI has grown from a group of hackers on Discord to a thriving open science research community. Today, we are excited to announce the next step in our evolution: the formation of a non-profit research institute.
0
0
11
@siddkaramcheti
Siddharth Karamcheti
3 years
@sp_monte_carlo Dijkstra -> A*?
1
0
10
@siddkaramcheti
Siddharth Karamcheti
3 years
We're building this codebase for the community; we really want your engagement on how to "open up" training so that it's more accessible. We're holding office hours twice a month to hear from all of you - we'd love to see you (! [4/4].
1
3
11
@siddkaramcheti
Siddharth Karamcheti
4 years
Incredible work on automatically learning perturbations/augmentations (views) for contrastive learning without domain expertise! Congratulations @AlexTamkin @mike_h_wu and Noah Goodman!
@AlexTamkin
Alex Tamkin
4 years
New work on learning views for contrastive learning!. Our Viewmaker Networks learn perturbations for image, speech, and sensor data, matching or outperforming human-designed views. w/ @mike_h_wu and Noah Goodman at @stanfordnlp and @stanfordailab. ⬇️ 1/
1
1
11
@siddkaramcheti
Siddharth Karamcheti
5 years
These GPT-3 results have me really excited about exploring its potential for spatial reasoning and grounded language understanding, including applications for robot navigation given textual representations of state. @gdb - would love an invite!
0
0
11
@siddkaramcheti
Siddharth Karamcheti
3 years
@__apf__ Hyperion by Dan Simmons is a revelation (even just the first novel)! If you're willing to delve into science fantasy, the Book of the New Sun series by Gene Wolfe (1980-1983) is phenomenal. I also really enjoyed Permutation City by Greg Egan (1995) - but it might be a bit intense.
1
1
7
@siddkaramcheti
Siddharth Karamcheti
3 years
Finally, a big thanks to my advisors @percyliang and @DorsaSadigh for their support and faith in me! I'm really excited to see what the next year looks like, both in terms of research and open-source. And robots! Don't forget the robots!
1
0
11
@siddkaramcheti
Siddharth Karamcheti
4 years
Phenomenal work by my amazing labmates. Really excited to see this paper go public!
@sangmichaelxie
Sang Michael Xie
4 years
🍔🍟"In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness". Real-world tasks (crop yield prediction from satellites) are often label-scarce. Only some countries have labels - how do we generalize globally?
0
1
11
@siddkaramcheti
Siddharth Karamcheti
4 years
Ranjay is not only a phenomenal Computer Vision / HCI researcher, but an incredible and supportive mentor. I'm so grateful to be learning from him, and I know that he'll be a strong addition to any department out there. Best of luck @RanjayKrishna!
@RanjayKrishna
Ranjay Krishna
4 years
🎓 I'm on the faculty job market this year! Please send me a message if your department (or one you know) is interested in a Computer Vision / HCI researcher who designs models inspired by human perception and social interaction! My application materials:
0
1
10
@siddkaramcheti
Siddharth Karamcheti
2 years
The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP ( and R3M (. The secret is *balance* (3/12).
1
2
11
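The "balance" idea above can be illustrated with a toy weighted combination of two language-supervision objectives — roughly, a language-conditioned term traded off against a language-generation term. This is only a sketch of the intuition, not Voltron's released code; the `alpha` weight and the loss names are hypothetical.

```python
# Illustrative sketch: balancing a language-conditioned objective against a
# language-generation objective with a single mixing weight. The names and
# the alpha parameter are hypothetical, not Voltron's actual implementation.

def combined_loss(conditioning_loss, generation_loss, alpha=0.5):
    """Blend two scalar losses; alpha controls the balance between them.
    alpha=1.0 recovers pure conditioning, alpha=0.0 pure generation."""
    return alpha * conditioning_loss + (1.0 - alpha) * generation_loss

# Example: equal weighting of the two objectives.
loss = combined_loss(conditioning_loss=0.8, generation_loss=0.4, alpha=0.5)
```

Tilting `alpha` toward either extreme emphasizes one form of language supervision over the other; the point of a balanced setting is that neither objective dominates representation learning.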