Dhruv Shah (@shahdhruv_)
robot whisperer @GoogleDeepMind | he/him
San Francisco, CA | Joined April 2012
Followers: 4K | Following: 6K | Media: 98 | Statuses: 824
@shahdhruv_
Dhruv Shah
9 months
PhD life peaked during my last official week as a @berkeley_ai student. My research on scaling cross-embodiment learning and robot foundation models won TWO Best Conference Paper Awards at #ICRA2024 🏆🏆. Kudos to @ajaysridhar0 @CatGlossop @svlevine & OXE collaborators! #PhDone?
33
6
432
@shahdhruv_
Dhruv Shah
5 months
Excited to share that I will be joining @Princeton as an Assistant Professor in ECE & Robotics next academic year! 🐯🤖 I am recruiting PhD students for the upcoming admissions cycle. If you are interested in working with me, please consider applying.
103
49
821
@shahdhruv_
Dhruv Shah
6 months
I "defended" my thesis earlier today — super grateful to @svlevine and everyone at @berkeley_ai for their support through the last 5 years! 🐻 Excited to be joining @GoogleDeepMind and continue the quest for bigger, better, smarter robot brains 🤖🧠
58
10
505
@shahdhruv_
Dhruv Shah
2 months
The Robotics team @GoogleDeepMind is hiring PhD student interns for 2025! Apply to the portal or DM/ping for more info: Come work with us at the bleeding edge of data, models, and algorithms for robot foundation models 🦾🏗️. Locations: Mountain View, …
4
38
297
@shahdhruv_
Dhruv Shah
2 years
Excited to share our attempt at a general-purpose "foundation model" for visual navigation, with capabilities that generalize in zero-shot, and that can serve as a backbone for efficient downstream adaptation. Check out @svlevine's 🧵 below:
@svlevine
Sergey Levine
2 years
We developed a new navigation model that can be trained on many robots and provides a general initialization for a wide range of downstream navigational tasks: ViNT (Visual Navigation Transformer) is a general-purpose navigational foundation model: 🧵👇
3
44
213
@shahdhruv_
Dhruv Shah
1 year
We just open-sourced the training and deployment code for ViNT, along with model checkpoints. Try it out on your own robot at We will also be doing a live robot demo @corl_conf #CoRL2023 in Atlanta! Come say hi to our robots 🤖
@shahdhruv_
Dhruv Shah
2 years
Excited to share our attempt at a general-purpose "foundation model" for visual navigation, with capabilities that generalize in zero-shot, and that can serve as a backbone for efficient downstream adaptation. Check out @svlevine's 🧵 below:
3
42
173
@shahdhruv_
Dhruv Shah
2 months
Gemini 2.0 can reason about the physical world! Try it out today at Your robots will thank you for it :)
8
20
170
@shahdhruv_
Dhruv Shah
9 months
I'm supposed to be graduating this week 🎓. But instead, I'll be at #ICRA2024 in beautiful Japan all week, presenting an award finalist talk on Tue and at a workshop on Friday! Come find me / DM to chat all things robot learning, job market, veggie food @ Japan and karaoke 🍜🍵🎤
10
1
158
@shahdhruv_
Dhruv Shah
1 year
Visual Nav Transformer 🤝 Diffusion Policy. Works really well and ready for deployment on your robot today! We will also be demoing this @corl_conf 🤖. Videos, code and checkpoints: Work led by @ajaysridhar0 in collaboration with @CatGlossop @svlevine.
@svlevine
Sergey Levine
1 year
ViNT (Visual Nav Transformer) now has a diffusion decoder, which enables some cool new capabilities! We call it NoMaD, and it can explore new environments, control different robots, and seek out goals. If you want an off-the-shelf navigation foundation model, check it out! A 🧵👇
3
21
133
@shahdhruv_
Dhruv Shah
6 years
@guykirkwood @elonmusk @xkcdComic I mean he did send a mannequin to space in a cherry red electric roadster. Go easy on the chap!
0
0
117
@shahdhruv_
Dhruv Shah
1 year
I'll be in Atlanta for #CoRL2023 with new papers, robots, and an engaging workshop! Also thrilled to share that I'm on the job market, looking for tenure-track & industry research positions focused on robot learning and embodied AI. Would love to chat about potential roles. 🧵:
1
10
118
@shahdhruv_
Dhruv Shah
2 years
I scraped OpenReview to generate the @corl_conf review distribution so you don't have to. #CoRL2022
4
5
84
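The tweet doesn't share the scraping script, but a minimal sketch of how such a review-score tally could work is below. It assumes review notes have already been fetched from the public OpenReview API and that each note carries a `content.rating` field formatted like "6: Marginally above threshold" (both assumptions; CoRL's actual invitation IDs and field names may differ).

```python
from collections import Counter

def score_histogram(review_notes):
    """Count how many reviews gave each numeric score.

    `review_notes` is a list of review dicts in OpenReview's note format
    (hypothetical field layout: note["content"]["rating"]).
    """
    scores = []
    for note in review_notes:
        rating = note["content"]["rating"]
        # Ratings are often formatted like "6: Marginally above threshold";
        # keep only the leading numeric score.
        scores.append(int(str(rating).split(":")[0]))
    return Counter(scores)

# In practice the notes would come from the API, e.g. (untested sketch):
#   import requests
#   r = requests.get("https://api.openreview.net/notes",
#                    params={"invitation": "<CoRL 2022 review invitation id>"})
#   notes = r.json()["notes"]
sample = [{"content": {"rating": "6: Marginally above threshold"}},
          {"content": {"rating": "3: Reject"}},
          {"content": {"rating": "6: Marginally above threshold"}}]
print(score_histogram(sample))  # Counter({6: 2, 3: 1})
```

Feeding the resulting counts to any bar-plot routine reproduces a distribution figure like the one in the tweet.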
@shahdhruv_
Dhruv Shah
2 years
Announcing the 6th Robot Learning Workshop @NeurIPSConf on Pretraining, Fine-Tuning, and Generalization with Large Scale Models. #NeurIPS2023. CfP: Don't like your #CoRL2023 reviews? Love them? We welcome your contributions either way 🫶
1
12
83
@shahdhruv_
Dhruv Shah
2 months
Excited to be in Vancouver for #NeurIPS2024 after a wild last few days (thanks Canada Post 🙃). Hit me up to chat all things robotics, VLA, internships (and life) @GoogleDeepMind, job market and more! 🦾🧠 TIL: Canada does not have TSA for domestic flights. Greatest country on
5
3
77
@shahdhruv_
Dhruv Shah
2 years
Super excited to be in London next week for #ICRA2023, presenting some exciting research works, and meeting the community! I'll be presenting 3 recent projects with my collaborators, and organizing a workshop on Friday. If you're around and want to meet, come say hi/DM! 🧵:
3
10
73
@shahdhruv_
Dhruv Shah
2 years
A simple interface to remotely teleop your robot over the internet: for days when you don't feel like going into the lab but need to get work done. Works on any ROS-based robot and from *anywhere*, super lightweight. #Robotics #OpenSource @rosorg
0
13
69
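The tweet doesn't link the tool's implementation, but browser-based ROS teleop of this kind typically rides on the rosbridge websocket protocol. Below is a hedged sketch of building the JSON `publish` op for a `geometry_msgs/Twist` velocity command; the topic name, the use of rosbridge, and the default port are assumptions, not details from the tweet.

```python
import json

def twist_publish_msg(topic, linear_x, angular_z):
    """Build a rosbridge-protocol 'publish' op carrying a geometry_msgs/Twist.

    Sent as JSON over a websocket to a rosbridge_server (conventionally port
    9090), this drives any ROS robot that subscribes to `topic`, e.g. /cmd_vel.
    """
    return json.dumps({
        "op": "publish",
        "topic": topic,
        "msg": {
            "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
        },
    })

# A web teleop UI would map key presses to (linear_x, angular_z) pairs:
forward = twist_publish_msg("/cmd_vel", 0.5, 0.0)  # drive straight at 0.5 m/s
turn = twist_publish_msg("/cmd_vel", 0.0, 0.3)     # rotate in place
```

Because the robot side only needs a running rosbridge server, nothing robot-specific lives in the web client, which is what makes such an interface portable across ROS robots.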
@shahdhruv_
Dhruv Shah
2 years
Announcing the Workshop on Language and Robot Learning at @corl_conf #CoRL2022, Dec 15 🤖. Exciting lineup of speakers from the robotics, ML and NLP communities to discuss the present and future of language in robot learning! Inviting papers, due Oct 28 📅
2
24
68
@shahdhruv_
Dhruv Shah
3 years
New blog post on making robots physically explore real-world spaces, so you can invite them home for the holidays! I'll be presenting this work as an Oral @corl_conf in London on Tuesday. If you're attending, come say hi! #robotics #CoRL2021
@svlevine
Sergey Levine
3 years
Check out @_prieuredesion's blog post on RECON: a robot that learns to search for goals in new envs using 10s of hours of offline data! RECON dynamically builds "mental maps" of new environments. Dhruv will give a long presentation about RECON @corl_conf next week!
1
22
66
@shahdhruv_
Dhruv Shah
2 years
I'll be presenting LM-Nav at the evening poster session today at @corl_conf: 4pm in the poster lobby outside FPAA. Come find me! Videos, code and more here:
@svlevine
Sergey Levine
3 years
Can we get robots to follow language directions without any data that has both nav trajectories and language? In LM-Nav, we use large pretrained language models, language-vision models, and (non-lang) navigation models to enable this in zero shot! Thread:
2
9
62
@shahdhruv_
Dhruv Shah
4 years
We're making strides towards truly "in-the-wild" robotic learning systems that can operate with no human intervention. RECON leverages the strengths of latent goal models and topo maps to perform rapid goal-directed exploration in unstructured real-world envs with no supervision.
@svlevine
Sergey Levine
4 years
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals. 🧵>
1
11
64
@shahdhruv_
Dhruv Shah
3 years
I had a great time at #RSS2022 this week, incredibly grateful to the organizers for putting on such a wholesome show @RoboticsSciSys! Presented ViKiNG, which was a Best Systems Paper Finalist! #LDoD workshop: talks and panel on YT
2
6
57
@shahdhruv_
Dhruv Shah
3 months
Excited to be in Munich (ish) for @corl_conf! We'll be presenting some exciting new research in the main conference and workshops, stay tuned. Come find me to talk all things robot foundation models, GDM, Princeton, job search and veggie food in 🇩🇪!
0
0
56
@shahdhruv_
Dhruv Shah
3 years
Waking up to exciting news: ViKiNG was accepted to #RSS2022 @RoboticsSciSys! Looks like it's an east coast summer for roboticists 🤖.
@svlevine
Sergey Levine
3 years
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc. A thread:
1
7
54
@shahdhruv_
Dhruv Shah
3 years
Contrastive learning + GCRL teaches robots the lost art of map reading, enabling kilometer-scale visual navigation without interventions. We deploy the robot in suburban neighborhoods, the Berkeley hills, and even take it hiking! Great summary 🧵 by @svlevine.
@svlevine
Sergey Levine
3 years
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc. A thread:
1
11
49
@shahdhruv_
Dhruv Shah
9 months
We'll be presenting NoMaD today at the Awards track at #ICRA2024, where it's nominated for 3 Best Paper awards!! If you missed the talk yesterday, come find @ajaysridhar0 and me at the 13:30 poster session. Good luck navigating the maze.
@svlevine
Sergey Levine
1 year
ViNT (Visual Nav Transformer) now has a diffusion decoder, which enables some cool new capabilities! We call it NoMaD, and it can explore new environments, control different robots, and seek out goals. If you want an off-the-shelf navigation foundation model, check it out! A 🧵👇
0
4
48
@shahdhruv_
Dhruv Shah
4 years
Excited to share what I've been working on for the past few months! We teach robots to reach arbitrary goals that you can specify as images from a phone camera! This versatility lets it do cool things -- like delivering pizza or patrolling a campus. @svlevine tweets details ->
@svlevine
Sergey Levine
4 years
RL enables robots to navigate real-world environments, with diverse visually indicated goals: w/ @_prieuredesion, B. Eysenbach, G. Kahn, @nick_rhinehart . paper: video: Thread below ->
0
6
47
@shahdhruv_
Dhruv Shah
5 months
Yet another year, yet another @ieee_ras_icra PaperCept server crash. Happy ICRA (not a) deadline to those who celebrate :)
2
3
45
@shahdhruv_
Dhruv Shah
1 year
Aficionados of robot learning: join us in Hall B2 at #NeurIPS2023 for some cutting-edge talks, posters, a spicy debate, and live robot demos! The robots are here, are you? We also have some GPUs for a "Spicy Question of the Day Prize" 🌶️, don't miss out
0
6
43
@shahdhruv_
Dhruv Shah
4 months
We train open-vocabulary, open-world language navigation policies in the real world by leveraging YouTube videos and cheap VLM annotations. A sub-100M parameter policy runs on the edge, can couple with a long-range memory, and reach arbitrary targets! 🧵:
@NoriakiHirose
noriaki_hirose
4 months
Excited to share our recent research, LeLaN, for learning language-conditioned navigation policies from in-the-wild video, from UC Berkeley and Toyota Motor North America. We present LeLaN at CoRL 2024. @CatGlossop @ajaysridhar0 @shahdhruv_ @oier_mees and @svlevine
1
6
44
@shahdhruv_
Dhruv Shah
3 years
VFS was accepted to @iclr_conf #ICLR2022!! The ICLR rebuttal process remains the most productive and effective review cycle in the game, and it is extremely satisfying as an author and reviewer to collectively improve the quality of submissions.
@svlevine
Sergey Levine
3 years
Value function spaces (VFS) uses low-level primitives to form a state representation in terms of their "affordances" - the value functions of the primitives serve as the state. This turns out to really improve generalization in hierarchical RL! Short 🧵>
1
4
42
@shahdhruv_
Dhruv Shah
3 years
RECON accepted as an Oral Talk at @corl_conf 2021!! What are the odds we actually get a live audience in London? 🤞🏼
@svlevine
Sergey Levine
4 years
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals. 🧵>
1
8
42
@shahdhruv_
Dhruv Shah
2 years
New video from @twominutepapers features our recent research on zero-shot instruction following with real robots! Joint work with @berkeley_ai @GoogleAI @blazejosinski @brian_ichter @svlevine. Check out our paper for more:
3
8
38
@shahdhruv_
Dhruv Shah
3 years
On Tuesday, I'm stoked to be presenting ViKiNG — which has been nominated for the Best Systems Paper award — at the Long Talk Session 3! Please stop by my talk or poster later that evening. Joint work with @svlevine @berkeley_ai @RoboticsSciSys.
@svlevine
Sergey Levine
3 years
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc. A thread:
2
7
31
@shahdhruv_
Dhruv Shah
4 years
"Virtual" socials have come a long way since the start of the pandemic, and this is a stellar example of what they can be! Karaoke, conference rooms, game rooms, photo booths… The @berkeley_ai social was the most badass virtual party EVER! Kudos to the team
0
2
32
@shahdhruv_
Dhruv Shah
1 year
I'll be at #NeurIPS all week with new research, robot demos, and an exciting workshop on robot learning! Come find me/reach out to chat about all things robotics, embodied reasoning, and vegetarian food in NOLA 🥗. Here's where you can find me:
1
1
32
@shahdhruv_
Dhruv Shah
1 year
On Tuesday, we'll be presenting FastRLAP at the evening poster session. We make an RC Car go brrr, pixels-to-actions, in under 20 minutes of real-world practice! Work w/ @KyleStachowicz, Arjun Bhorkar, @ikostrikov, @svlevine.
@svlevine
Sergey Levine
2 years
Can we use end-to-end RL to learn to race from images in just 10-20 min? FastRLAP builds on RLPD and offline RL pretraining to learn to race both indoors and outdoors in under an hour, matching a human FPV driver (i.e., the first author): Thread:
1
1
30
@shahdhruv_
Dhruv Shah
3 months
In-context learning with *frozen* frontier VLMs enables zero-shot value functions for videos in the wild — including robot trajectories! Generative value learning can enable automatic data filtering, dense reward labels, and more. Check out Jason's 🧵 for details and a web demo:
@JasonMa2020
Jason Ma
3 months
Excited to finally share Generative Value Learning (GVL), my @GoogleDeepMind project on extracting universal value functions from long-context VLMs via in-context learning! We discovered a simple method to generate zero-shot and few-shot values for 300+ robot tasks and 50+
0
4
29
@shahdhruv_
Dhruv Shah
4 months
A simple recipe for solving new tasks: use a SOTA video generation model to generate a video rollout, and train a video-conditioned policy to execute it on your robot! Check out @mangahomanga's 🧵 below for more:
@mangahomanga
Homanga Bharadhwaj
4 months
Gen2Act: Casting language-conditioned manipulation as *human video generation* followed by *closed-loop policy execution conditioned on the generated video* enables solving diverse real-world tasks unseen in the robot dataset! 1/n
0
5
29
@shahdhruv_
Dhruv Shah
2 years
On Monday, I'll present FastRLAP at the Pretraining for Robotics workshop #ICRA2023. We make an RC Car go brrr, pixels-to-actions, in under 20 minutes of real-world practice! Work co-led with @KyleStachowicz and Arjun Bhorkar.
@svlevine
Sergey Levine
2 years
Can we use end-to-end RL to learn to race from images in just 10-20 min? FastRLAP builds on RLPD and offline RL pretraining to learn to race both indoors and outdoors in under an hour, matching a human FPV driver (i.e., the first author): Thread:
2
2
29
@shahdhruv_
Dhruv Shah
2 years
Had a great time in Auckland @corl_conf over the past week! Big thanks to the organizers for the wonderful conference 🙂. Presented LM-Nav and ReViND. Great discussions at the LangRob workshop (videos coming soon).
0
2
28
@shahdhruv_
Dhruv Shah
3 months
@jbhuang0604 @umdcs I thought this was going to be a tutorial on making better figures for CVPR.
3
0
28
@shahdhruv_
Dhruv Shah
3 years
VFS is a simple, yet effective, way to obtain skill-centric representations that really help long-horizon reasoning and generalization in HRL. Check out this great summary thread by @svlevine. Work done during my internship @GoogleAI with @brian_ichter and @alexttoshev.
@svlevine
Sergey Levine
3 years
Value function spaces (VFS) uses low-level primitives to form a state representation in terms of their "affordances" - the value functions of the primitives serve as the state. This turns out to really improve generalization in hierarchical RL! Short 🧵>
1
7
28
@shahdhruv_
Dhruv Shah
1 year
I'm excited to be speaking at the ML4AD symposium (colocated with #NeurIPS2023) at noon! Stop by if you're interested.
@Waymo
Waymo
1 year
Weโ€™re thrilled to sponsor this yearโ€™s ML for Autonomous Driving Symposium on December 14. ML4AD 2023 will see researchers, industry experts, and practitioners come together to redefine the future of autonomous driving technologies. Join us in New Orleans!
0
2
26
@shahdhruv_
Dhruv Shah
1 year
At #CoRL2023, the lines are long but the speakers are strong 💪🏼. Come join us in Sequoia 2 and
@oier_mees
Oier Mees
1 year
Join us for the 2nd edition of the #LangRob workshop at #CoRL2023 in vibrant Atlanta! Get ready for an unforgettable day with an all-star ensemble of speakers and two spicy panels that will ignite your passion for language and robotics! 🔥🤖 P.S. Guess who wrote this tweet 😉
0
2
26
@shahdhruv_
Dhruv Shah
1 year
On Wednesday, I will be presenting ViNT as an oral talk in the morning session 🥱. We show that cross-embodiment training generalizes well in zero-shot and can be adapted to various downstream tasks.
@svlevine
Sergey Levine
2 years
We developed a new navigation model that can be trained on many robots and provides a general initialization for a wide range of downstream navigational tasks: ViNT (Visual Navigation Transformer) is a general-purpose navigational foundation model: ๐Ÿงต๐Ÿ‘‡
1
2
23
@shahdhruv_
Dhruv Shah
3 months
Simple recipe to repurpose *existing* datasets to learn fine-grained skills for functional manipulation: generate dense (structured) language annotations, train a language-conditioned robot policy, and STEER (:) with an off-the-shelf VLM to solve tasks! 🧵 below:
@smithlaura1028
Laura Smith
3 months
Excited to share our work on STEERing robot behavior! With structured language annotation of offline data, STEER exposes fundamental manipulation skills that can be modulated and combined to enable zero-shot adaptation to new situations and tasks.
0
2
24
@shahdhruv_
Dhruv Shah
2 years
Excited to share preliminary generations from our new photorealistic text-to-image diffusion model for "Berkeley in the snow".
@Connorstp
itโ€™s me, csp
2 years
1
0
22
@shahdhruv_
Dhruv Shah
2 years
On Wednesday, I'll present GNM: a pre-trained embodiment-agnostic navigation model (with public checkpoints!) that can drive any robot! #ICRA2023.
@svlevine
Sergey Levine
2 years
We're releasing our code for "driving any robot", so you can also try driving your robot using the general navigation model (GNM): Code goes with the GNM paper: Should work for locobot, hopefully convenient to hook up to any robot
1
3
23
@shahdhruv_
Dhruv Shah
2 years
Some late night encouragement from @weights_biases to keep churning experiments for your sweet sweet papers!
1
0
21
@shahdhruv_
Dhruv Shah
3 years
The @ieee_ras_icra Workshop on Robotic Perception and Mapping seems like such a throwback to busy conference days — 100s of attendees in a room! Great panel @lucacarlone1 @AjdDavison @fdellaert. #ICRA2022
0
1
22
@shahdhruv_
Dhruv Shah
3 years
Very interesting and engaging tutorial on Social Reinforcement Learning for your robots by @natashajaques at #CoRL2021 @corl_conf. It's being streamed on YouTube if you wanna join:
0
3
21
@shahdhruv_
Dhruv Shah
3 years
Excited to finally be attending a physical conference and meeting friends in person at #CoRL2021. Lovely conference venue, looking forward to the exciting talks. If you're attending and wanna chat, let's catch up! DMs open.
0
1
21
@shahdhruv_
Dhruv Shah
2 years
@corl_conf @d_yuqing @pratyusha_PS @xiao_ted @mohito1905 @StefanieTellex @brian_ichter @_jessethomason_ Join our technical session in-person in Auckland, or virtually, to hear from @jacobandreas @jeffclune @alsuhr @jackayline @ybisk @andyzengtweets, Cynthia Matuszek @UMBC , Dieter Fox @uwcse, Jean Oh @CMU_Robotics , Nakul Gopalan @GeorgiaTech
1
3
19
@shahdhruv_
Dhruv Shah
3 years
Cool new work combining CLIP and a 3D model for zero-shot 3D reasoning!! Lots of exciting progress using CLIP as a reliable language interface for vision and robotics tasks. We found CLIP to be very useful to ground landmarks for robotic navigation too:
@SongShuran
Shuran Song
3 years
Semantic abstraction — give CLIP new 3D reasoning capabilities, so your robots can find that "rapid test behind the Harry Potter book." 😉 w/ Huy Ha.
2
1
21
@shahdhruv_
Dhruv Shah
3 years
RECON was featured in Computer Vision News @RSIPvision: I also presented RECON at @corl_conf in London last month, talk video here: Videos and dataset @
@svlevine
Sergey Levine
4 years
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals. 🧵>
0
5
20
@shahdhruv_
Dhruv Shah
1 year
Also on Wednesday, we'll be presenting LFG at the evening poster session. We use CoT and a novel scoring method for LLMs to construct narratives and guide the robot through unseen environments in search of a goal.
@svlevine
Sergey Levine
1 year
Can LLMs help robots navigate? It's hard for LLMs to just *tell* the robot what to do, but they can provide great heuristics for navigation by coming up with guesses. With LFG, we ask the LLM to come up with a "story" about which choices are better, then use it w/ planning. 🧵👇
0
4
19
@shahdhruv_
Dhruv Shah
3 years
Excited to be in New York City for @RoboticsSciSys #RSS2022! On Monday, we'll be organizing this workshop on Learning from Diverse, Offline Data with cutting-edge papers, speakers, and an in-person panel! Please join us if you're in town, or virtually!
@siddkaramcheti
Siddharth Karamcheti
3 years
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
3
1
17
@shahdhruv_
Dhruv Shah
3 years
Excited to be in Philly for @ieee_ras_icra! Looking forward to catching up with friends and checking out exciting new research in person after a long hiatus. Please reach out if you're around and wanna chat 😁
0
0
19
@shahdhruv_
Dhruv Shah
5 years
@TrackCityChick Very unfortunate, exacerbated for international students by the sky-high living expenses in the Bay Area. Very glad that Berkeley provides a semester's stipend in advance (~start of classes) for grad students! This should totally be the norm.
0
1
16
@shahdhruv_
Dhruv Shah
3 years
Have a recent @NeurIPSConf/WIP draft on offline learning, dataset curation, benchmarking, learning from multimodal data, or related topics? Please consider submitting to our RSS workshop for quick feedback. Paper deadline now extended to *May 27* ⏳
0
9
17
@shahdhruv_
Dhruv Shah
3 months
Come to the Hörsaal for the 3rd edition of the LangRob workshop! We have 6 exciting talks, 2 panels, 60 brilliant posters, and 8 spotlight research talks. All talks and panels will be recorded and uploaded after the conference.
@Ed__Johns
Edward Johns
3 months
Our LangRob Workshop is kicking off! #CoRL2024. Come and join us in the Hörsaal room on the ground floor. As @shahdhruv_ said in our introduction, "LangRob is now bigger than the first ever CoRL!" See you here 😊
1
2
18
@shahdhruv_
Dhruv Shah
6 years
Dr. Signe Redfield on the definition of robotics as a field and the rise of a Kuhnian scientific revolution. Read more at . #ICRA2019 #RoboticsDebates
0
3
18
@shahdhruv_
Dhruv Shah
1 year
@jeremyphoward @simonw I would've thought they were A/B testing different variants, but trying out the (hopefully more stable) API, this is not a new thing. The original release model (gpt-4-0314) also seems to know about the slap.
3
2
15
@shahdhruv_
Dhruv Shah
3 years
It's a full house! Our @RoboticsSciSys Workshop on Learning from Diverse, Offline Data is happening now at Mudd 545, and online at
@siddkaramcheti
Siddharth Karamcheti
3 years
#LDOD is kicking off in ~30 minutes; join our free-to-view livestream here: See y'all soon!
0
3
17
@shahdhruv_
Dhruv Shah
2 years
TIL: You can anonymize a @github code repo for double-blind peer review with this neat tool by @thodurieux. It comes with a navigator and also has some basic anonymization options like removing links or specific words. Really cool!
0
1
17
@shahdhruv_
Dhruv Shah
2 years
On Tuesday, we'll present ExAug at the 3PM poster session #ICRA2023. Work led by @noriakihirose.
@svlevine
Sergey Levine
2 years
Experience augmentation (ExAug) uses 3D transformations to augment data from different robots to imagine what other robots would do in similar situations. This allows training policies that generalize across robot configs (size, camera placement): Thread:
1
1
15
@shahdhruv_
Dhruv Shah
2 years
LangRob workshop happening now at #CoRL2022 in the ENG building, room 401! Pheedloop and stream for virtual attendees:
@shahdhruv_
Dhruv Shah
2 years
Announcing the Workshop on Language and Robot Learning at @corl_conf #CoRL2022, Dec 15 🤖. Exciting lineup of speakers from the robotics, ML and NLP communities to discuss the present and future of language in robot learning! Inviting papers, due Oct 28 📅
0
3
13
@shahdhruv_
Dhruv Shah
3 years
Reminder -- papers are due AoE tonight!! Please consider sharing your new research with the broader robotics and machine learning community at @RoboticsSciSys 2022 in NYC or remotely 🤖.
@shahdhruv_
Dhruv Shah
3 years
Have a recent @NeurIPSConf/WIP draft on offline learning, dataset curation, benchmarking, learning from multimodal data, or related topics? Please consider submitting to our RSS workshop for quick feedback. Paper deadline now extended to *May 27* ⏳
1
4
15
@shahdhruv_
Dhruv Shah
3 years
Are you an early-stage researcher (grad student/postdoc) interested in a fireside chat with one of our workshop speakers? We're inviting signups for a 1:1 discussion. Please email ldod_rss2022@googlegroups.com with a bit about yourself and the speaker you'd like to chat with!
@siddkaramcheti
Siddharth Karamcheti
3 years
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
3
7
15
@shahdhruv_
Dhruv Shah
1 year
@sea_snell you did it before it was cool
0
0
14
@shahdhruv_
Dhruv Shah
6 years
It's unbelievable what the Ocean One achieved with neat research and remarkable engineering efforts! Oussama Khatib on the need for compliant robots and the story behind the Marseille shipwreck recovery. @StanfordAILab #ICRA2019. Interesting video:
0
1
11
@shahdhruv_
Dhruv Shah
3 years
@CSProfKGD The Berkeley DL course generally gets a huge undergraduate cohort: . Prev offering:
0
1
12
@shahdhruv_
Dhruv Shah
3 years
@ammaryh92 @jeremyphoward @__mharrison__ @rasbt @svpino I love JupyterLab, but the real champ is VS Code + the Jupyter notebook extension — it's like Lab, but much more customizable, and feels like a notebook inside your favorite editor with keybindings. Bonus: works with @OpenAI @Github Copilot!
1
1
12
@shahdhruv_
Dhruv Shah
2 years
Excited to share our latest research on customizing learned navigation behaviors by combining offline RL with topological graphs -- ReViND. I'll be presenting ReViND at the 11am oral session today @corl_conf. Please join! Videos, code and more:
@svlevine
Sergey Levine
2 years
Offline RL with large navigation datasets can learn to drive real-world mobile robots while accounting for objectives (staying on grass, on paths, etc.). We'll present ReViND, our offline RL + graph-based navigation method, at CoRL 2022 tomorrow. Thread:
0
0
13
@shahdhruv_
Dhruv Shah
2 years
@hardmaru To be fair, that's probably just the cost to train the final/released model, and does not include the compute used in tuning hyperparameters and failed experiments? The overall $$ of the project would likely be at least an order of magnitude higher than that of the final model.
0
0
12
@shahdhruv_
Dhruv Shah
5 years
Action-packed day at Bay Area Robotics Symposium 2019 @UCBerkeley @berkeley_ai, with exciting research from Berkeley, @Stanford, @ucsc and industry :D Props to Mark Mueller and @DorsaSadigh for organizing 🤖.
0
1
13
@shahdhruv_
Dhruv Shah
6 years
Unknowingly kicked off the #icra2019ScavengerHunt earlier today at this beautiful place! #icra2019MontRoyal #TeamWookie #icra2019
1
3
12
@shahdhruv_
Dhruv Shah
1 year
@xuanalogue @jeremyphoward This is a shame! My collaborators and I have done a lot of work that leverages the logprobs in a probabilistic planning framework and found it very useful. I guess that's why you shouldn't use closed models for research…
1
0
12
@shahdhruv_
Dhruv Shah
2 months
@chris_j_paxton Used it quite extensively and it's great. The Orin AGX is my de facto replacement for any silly laptop GPU, and I think more people should use it :) I've also requested this many times, and I know there's a lot on the forum too -- it would be great if Stretch shipped with an Orin.
1
0
13
@shahdhruv_
Dhruv Shah
2 years
@andreasklinger @ajayj_ had this really cool CVPR paper that does some version of this:
0
0
12
@shahdhruv_
Dhruv Shah
3 years
At the @Tesla AI Day event today, and there's a Cybertruck to greet us at the gate. Looking forward to what's waiting inside!
1
0
12
@shahdhruv_
Dhruv Shah
1 year
Deadline for submitting papers and demo proposals now EXTENDED to **next** Friday, Oct 6 AoE!
@shahdhruv_
Dhruv Shah
2 years
Announcing the 6th Robot Learning Workshop @NeurIPSConf on Pretraining, Fine-Tuning, and Generalization with Large Scale Models. #NeurIPS2023. CfP: Don't like your #CoRL2023 reviews? Love them? We welcome your contributions either way ๐Ÿซถ.
0
3
11
@shahdhruv_
Dhruv Shah
1 year
@SOTA_kke quadrupeds unite
0
1
11
@shahdhruv_
Dhruv Shah
2 years
@ikostrikov @OpenAI jaxgpt coming soon.
0
0
10
@shahdhruv_
Dhruv Shah
6 years
@maththrills moderating the most scintillating debate of #ICRA2019: "The pervasiveness of deep learning is an impediment to gaining scientific insights into robotics problems." It's a full house! @angelaschoellig, Nick Roy @MIT_CSAIL, Ryan Gariepy @clearpathrobots and Oliver Brock
2
3
10
@shahdhruv_
Dhruv Shah
6 years
The number of passengers with poster tubes on this flight from Frankfurt to Montreal is too high! Coincidence, or #ToICRA2019? @icra2019 @ieee_ras_icra
0
0
10
@shahdhruv_
Dhruv Shah
1 year
TIL: @GoogleAI Bard works quite well with images. Pretty impressive!
0
1
10
@shahdhruv_
Dhruv Shah
4 years
Some great demos of exploring unseen cafeterias and fire stations under different seasons and lighting on the project page: Video: Work with amazing collaborators @berkeley_ai : @ben_eysenbach @nick_rhinehart and @svlevine.
0
1
9
@shahdhruv_
Dhruv Shah
5 years
Looking forward to my first "virtual" @iclr_conf! The interface looks very clean and well-designed — great effort pioneering this, @srush_nlp and co. 👏
@srush_nlp
Sasha Rush
5 years
1/ Spent the last couple weeks in quarantine obsessively coding a website for Virtual ICLR with @hen_str. We wanted to build something that was fun to browse, async first, and feels alive.
2
0
9
@shahdhruv_
Dhruv Shah
6 years
@mikb0b Libraries on top of Matplotlib usually work well enough. Once in a while, I've used Inkscape/online software for a particular graphic I wanted. PS: I challenge everyone to use Matplotlib+XKCD in a paper
0
0
8
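For anyone taking up the challenge above, the XKCD style is built into Matplotlib via `plt.xkcd()`; a small self-contained sketch follows (the plotted data and labels are made up for illustration, and the headless `Agg` backend is chosen so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; renders to file, no window needed
import matplotlib.pyplot as plt

# plt.xkcd() is a context manager that applies the hand-drawn XKCD style
# to everything created inside the `with` block.
with plt.xkcd():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2, 3], [0, 1, 4, 9], label="reviewer optimism")
    ax.set_xlabel("days until deadline")
    ax.set_ylabel("panic")
    ax.legend()
    fig.savefig("xkcd_plot.png")
```

For an actual paper figure, saving to a vector format (`fig.savefig("xkcd_plot.pdf")`) keeps the wobbly lines crisp at print resolution.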
@shahdhruv_
Dhruv Shah
2 years
@andyzengtweets and @xf1280 from @GoogleAI on language as a glue for intelligent machines, and a live demo of their PaLM-SayCan system! (9/12)
1
1
8
@shahdhruv_
Dhruv Shah
6 years
A good friend introduced me to @MathpixApp today. Works way beyond my expectations! Biggest thing to happen to me since starting TeXing. Highly recommend to everyone. #phdlife #AcademicTwitter
1
3
8
@shahdhruv_
Dhruv Shah
5 years
We'll be presenting our spotlight talk on getting robots to learn in the real world without hand-engineered resets, rewards, and state information at ICLR 2020! Tune in at 10PM tonight or 5AM tomorrow PDT to learn more. Blogpost: @iclr_conf @berkeley_ai.
@svlevine
Sergey Levine
5 years
Check out our ICLR spotlight: Ingredients of Real-World Robotic Reinforcement Learning! How can we set up robots to learn with RL, without manual engineering for resets, rewards, or vision? Talk Paper Poster
1
0
8
@shahdhruv_
Dhruv Shah
8 months
@ehsanik Not at CVPR, but loving this! We need this at every conference 🙃
0
0
7
@shahdhruv_
Dhruv Shah
6 years
#ICRA2019 Milestone Award for best paper from 20 years ago to Steven LaValle and James Kuffner! (The world needs better telepresence tools 🤷)
1
0
8
@shahdhruv_
Dhruv Shah
6 years
Interesting (and important) ideas on the cycle of bias and the need for inclusiveness at venues like #ICRA2019 by Karime Pereida and Melissa Greeff @utiasSTARS #RoboticsDebates #robotics #diversitymatters
0
1
6
@shahdhruv_
Dhruv Shah
3 months
Join us for the final panel of the day, and submit questions for our panelists! We have @_jessethomason_ and @siddkaramcheti taking on Giulia @GoogleDeepMind, @DannyDriess @physical_int, and @AjayMandlekar @nvidia -- come take your last swings at #CoRL2024
0
1
8