Dhruv Shah
@shahdhruv_
Followers
4K
Following
6K
Media
98
Statuses
824
robot whisperer @GoogleDeepMind | he/him
San Francisco, CA
Joined April 2012
PhD life peaked during my last official week as a @berkeley_ai student. My research on scaling cross-embodiment learning and robot foundation models won TWO Best Conference Paper Awards at #ICRA2024. Kudos to @ajaysridhar0 @CatGlossop @svlevine & OXE collaborators! #PhDone?
33
6
432
Excited to share that I will be joining @Princeton as an Assistant Professor in ECE & Robotics next academic year! I am recruiting PhD students for the upcoming admissions cycle. If you are interested in working with me, please consider applying.
103
49
821
I "defended" my thesis earlier today. Super grateful to @svlevine and everyone at @berkeley_ai for their support through the last 5 years! Excited to be joining @GoogleDeepMind and continuing the quest for bigger, better, smarter robot brains.
58
10
505
The Robotics team @GoogleDeepMind is hiring PhD student interns for 2025! Apply to the portal or DM/ping for more info: Come work with us at the bleeding edge of data, models, and algorithms for robot foundation models. Locations: Mountain View,
4
38
297
Excited to share our attempt at a general-purpose "foundation model" for visual navigation, with capabilities that generalize in zero-shot, and that can serve as a backbone for efficient downstream adaptation. Check out @svlevine's thread below:
We developed a new navigation model that can be trained on many robots and provides a general initialization for a wide range of downstream navigational tasks: ViNT (Visual Navigation Transformer) is a general-purpose navigational foundation model. Thread below:
3
44
213
We just open-sourced the training and deployment code for ViNT, along with model checkpoints. Try it out on your own robot! We will also be doing a live robot demo @corl_conf #CoRL2023 in Atlanta! Come say hi to our robots.
Excited to share our attempt at a general-purpose "foundation model" for visual navigation, with capabilities that generalize in zero-shot, and that can serve as a backbone for efficient downstream adaptation. Check out @svlevine's thread below:
3
42
173
I'm supposed to be graduating this week. But instead, I'll be at #ICRA2024 in beautiful Japan all week, presenting an award finalist talk on Tue and at a workshop on Friday! Come find me / DM to chat all things robot learning, job market, veggie food @ Japan and karaoke.
10
1
158
Visual Nav Transformer + Diffusion Policy. Works really well and is ready for deployment on your robot today! We will also be demoing this @corl_conf. Videos, code, and checkpoints are linked. Work led by @ajaysridhar0 in collaboration with @CatGlossop @svlevine.
ViNT (Visual Nav Transformer) now has a diffusion decoder, which enables some cool new capabilities! We call it NoMaD, and it can explore new environments, control different robots, and seek out goals. If you want an off-the-shelf navigation foundation model, check it out! A thread:
3
21
133
@guykirkwood @elonmusk @xkcdComic I mean he did send a mannequin to space in a cherry red electric roadster. Go easy on the chap!
0
0
117
I'll be in Atlanta for #CoRL2023 with new papers, robots, and an engaging workshop! Also thrilled to share that I'm on the job market, looking for tenure-track & industry research positions focused on robot learning and embodied AI. Would love to chat about potential roles. Thread:
1
10
118
I scraped OpenReview to generate the @corl_conf review distribution so you don't have to. #CoRL2022
4
5
84
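For anyone curious how a scrape like the review-distribution plot above might work: below is a minimal sketch (not the original script) using the openreview-py client and matplotlib. The invitation ID and the name of the rating field are assumptions and vary by venue and year.

```python
# Minimal sketch of pulling CoRL review scores from OpenReview and plotting
# their distribution. Assumes the openreview-py client; the invitation ID and
# the "rating" field name are hypothetical and vary by venue/year.
import openreview
import matplotlib.pyplot as plt

client = openreview.Client(baseurl="https://api.openreview.net")

# Hypothetical invitation ID for CoRL 2022 official reviews.
reviews = client.get_notes(
    invitation="robot-learning.org/CoRL/2022/Conference/-/Official_Review",
    limit=1000,
)

scores = []
for note in reviews:
    rating = note.content.get("rating", "")        # e.g. "6: Marginally above threshold"
    if rating:
        scores.append(int(str(rating).split(":")[0]))  # keep the numeric prefix

plt.hist(scores, bins=range(1, 12), align="left", rwidth=0.8)
plt.xlabel("Review score")
plt.ylabel("Number of reviews")
plt.title("CoRL 2022 review score distribution")
plt.show()
```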
Announcing the 6th Robot Learning Workshop @NeurIPSConf on Pretraining, Fine-Tuning, and Generalization with Large Scale Models. #NeurIPS2023. CfP: Don't like your #CoRL2023 reviews? Love them? We welcome your contributions either way.
1
12
83
Excited to be in Vancouver for #NeurIPS2024 after a wild last few days (thanks Canada Post). Hit me up to chat all things robotics, VLA, internships (and life) @GoogleDeepMind, job market and more! TIL: Canada does not have TSA for domestic flights. Greatest country on
5
3
77
Super excited to be in London next week for #ICRA2023, presenting some exciting research and meeting the community! I'll be presenting 3 recent projects with my collaborators, and organizing a workshop on Friday. If you're around and want to meet, come say hi/DM! Thread:
3
10
73
A simple interface to remotely teleop your robot over the internet: For days when you don't feel like going into lab but need to get work done. Works on any ROS-based robot and from *anywhere*, super lightweight. #Robotics #OpenSource @rosorg
0
13
69
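The teleop tweet above is about a lightweight web interface, but the robot-side idea is simple: publish velocity commands to a standard ROS topic. Below is a minimal rospy sketch of that piece, not the linked tool; the /cmd_vel topic name, rate, and velocities are assumptions.

```python
# Minimal robot-side sketch: publish velocity commands to a ROS robot's
# /cmd_vel topic. Any ROS robot that listens to geometry_msgs/Twist can be
# driven this way; this is only an illustration, not the linked tool.
import rospy
from geometry_msgs.msg import Twist

def drive(linear=0.2, angular=0.0, duration=2.0):
    """Send a constant velocity command for `duration` seconds, then stop."""
    rospy.init_node("remote_teleop_sketch", anonymous=True)
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)  # 10 Hz command rate
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    end = rospy.Time.now() + rospy.Duration(duration)
    while not rospy.is_shutdown() and rospy.Time.now() < end:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())  # zero command stops the robot

if __name__ == "__main__":
    drive()
```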
Announcing the Workshop on Language and Robot Learning at @corl_conf #CoRL2022, Dec 15. Exciting lineup of speakers from the robotics, ML and NLP communities to discuss the present and future of language in robot learning! Inviting papers, due Oct 28.
2
24
68
New blog post on making robots physically explore real-world spaces, so you can invite them home for the holidays! I'll be presenting this work as an Oral @corl_conf in London on Tuesday. If you're attending, come say hi! #robotics #CoRL2021
Check out @_prieuredesion's blog post on RECON: a robot that learns to search for goals in new environments using 10s of hours of offline data! RECON dynamically builds "mental maps" of new environments. Dhruv will give a long presentation about RECON @corl_conf next week!
1
22
66
I'll be presenting LM-Nav at the evening poster session today at @corl_conf: 4pm in the poster lobby outside FPAA. Come find me! Videos, code and more here:
Can we get robots to follow language directions without any data that has both nav trajectories and language? In LM-Nav, we use large pretrained language models, language-vision models, and (non-lang) navigation models to enable this in zero shot! Thread:
2
9
62
We're making strides towards truly "in-the-wild" robotic learning systems that can operate with no human intervention. RECON leverages the strengths of latent goal models and topological maps to perform rapid goal-directed exploration in unstructured real-world environments with no supervision.
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals. Thread:
1
11
64
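The RECON tweets above describe building a topological "mental map" during a first exploration run and reusing it to reach goals quickly. Here is a hedged sketch of that general idea with networkx: nodes are stored observation embeddings, edges are added when a reachability predictor says one is reachable from another, and goal reaching becomes graph search. The `reachable` function and threshold are hypothetical stand-ins for RECON's learned latent-goal model.

```python
# Sketch of a RECON-style topological "mental map": build a graph during
# exploration ("Run 1"), then reuse it for fast goal reaching ("Run 2").
# `reachable` is a hypothetical stand-in for a learned reachability model.
import networkx as nx
import numpy as np

def reachable(emb_a, emb_b, threshold=1.0):
    # Placeholder: treat nearby embeddings as mutually reachable.
    return np.linalg.norm(emb_a - emb_b) < threshold

def build_mental_map(observation_embeddings):
    graph = nx.Graph()
    for i, emb in enumerate(observation_embeddings):
        graph.add_node(i, embedding=emb)
        for j in range(i):
            if reachable(emb, observation_embeddings[j]):
                graph.add_edge(i, j)
    return graph

def plan_to_goal(graph, start_node, goal_node):
    # "Run 2": reuse the mental map to reach a goal quickly.
    return nx.shortest_path(graph, start_node, goal_node)

# Toy usage with random embeddings standing in for real observations.
embeddings = [np.random.randn(8) for _ in range(20)]
mental_map = build_mental_map(embeddings)
if nx.has_path(mental_map, 0, 19):
    print(plan_to_goal(mental_map, 0, 19))
```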
I had a great time at #RSS2022 this week, incredibly grateful to the organizers for putting on such a wholesome show @RoboticsSciSys! Presented ViKiNG, which was a Best Systems Paper Finalist! #LDoD workshop: talks and panel on YouTube
2
6
57
Excited to be in Munich (ish) for @corl_conf! We'll be presenting some exciting new research in the main conference and workshops, stay tuned. Come find me to talk all things robot foundation models, GDM, Princeton, job search and veggie food in Germany!
0
0
56
Waking up to exciting news: ViKiNG was accepted to #RSS2022 @RoboticsSciSys! Looks like it's an East Coast summer for roboticists.
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc. A thread:
1
7
54
Contrastive learning + GCRL teaches robots the lost art of map reading, enabling kilometer-scale visual navigation without interventions. We deploy the robot in suburban neighborhoods, Berkeley hills and even take it hiking! Great summary thread by @svlevine.
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc. A thread:
1
11
49
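The ViKiNG tweets above mention contrastive learning combined with goal-conditioned RL and topological planning. Below is a small PyTorch sketch of just the contrastive flavor: an InfoNCE-style loss that scores (observation, goal) pairs so that matched pairs in a batch score higher than mismatched ones. The encoders, dimensions, and toy batch are made up for illustration and are not the paper's code.

```python
# Illustrative InfoNCE-style contrastive loss over (observation, goal) pairs,
# in the spirit of contrastive reachability learning. Encoders, dims, and the
# toy batch are made up; this is NOT ViKiNG's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
goal_encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

def contrastive_loss(obs, goals, temperature=0.1):
    """obs[i] and goals[i] form a positive pair; other goals in the batch are negatives."""
    z_obs = F.normalize(obs_encoder(obs), dim=-1)
    z_goal = F.normalize(goal_encoder(goals), dim=-1)
    logits = z_obs @ z_goal.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(obs.shape[0])         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy batch: 16 fake observation/goal feature vectors.
obs = torch.randn(16, 64)
goals = torch.randn(16, 64)
print(contrastive_loss(obs, goals).item())
```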
We'll be presenting NoMaD today at the Awards track at #ICRA2024, where it's nominated for 3 Best Paper awards!! If you missed the talk yesterday, come find @ajaysridhar0 and me at the 13:30 poster session. Good luck navigating the maze.
ViNT (Visual Nav Transformer) now has a diffusion decoder, which enables some cool new capabilities! We call it NoMaD, and it can explore new environments, control different robots, and seek out goals. If you want an off-the-shelf navigation foundation model, check it out! A thread:
0
4
48
Excited to share what I've been working on for the past few months! We teach robots to reach arbitrary goals that you can specify as images from a phone camera! This versatility lets them do cool things -- like delivering pizza or patrolling a campus. @svlevine tweets details ->
RL enables robots to navigate real-world environments, with diverse visually indicated goals: w/ @_prieuredesion, B. Eysenbach, G. Kahn, @nick_rhinehart . paper: video: Thread below ->
0
6
47
Yet another year, yet another @ieee_ras_icra PaperCept server crash. Happy ICRA (not a) deadline to those who celebrate :)
2
3
45
Aficionados of robot learning: join us in Hall B2 at #NeurIPS2023 for some cutting-edge talks, posters, a spicy debate, and live robot demos! The robots are here, are you? We also have some GPUs for a "Spicy Question of the Day Prize", don't miss out
0
6
43
We train open-vocabulary, open-world language navigation policies in the real world by leveraging YouTube videos and cheap VLM annotations. The sub-100M parameter policy runs on the edge, can couple with a long-range memory and reach arbitrary targets! Thread:
Excited to share our recent research, LeLaN, for learning a language-conditioned navigation policy from in-the-wild video, at UC Berkeley and Toyota Motor North America. We will present LeLaN at CoRL 2024. @CatGlossop @ajaysridhar0 @shahdhruv_ @oier_mees and @svlevine
1
6
44
VFS was accepted to @iclr_conf #ICLR2022!! The ICLR rebuttal process remains the most productive and effective review cycle in the game, and it is extremely satisfying as an author and reviewer to collectively improve the quality of submissions.
Value function spaces (VFS) uses low-level primitives to form a state representation in terms of their "affordances" - the value functions of the primitives serve as the state. This turns out to really improve generalization in hierarchical RL! Short thread:
1
4
42
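The VFS tweets above describe forming the high-level state from the value functions of low-level primitives. A tiny sketch of that interface: evaluate each primitive's value network on the current observation and stack the scalars into the abstract state the high-level policy consumes. The value networks below are untrained placeholders, purely for illustration.

```python
# Sketch of the Value Function Spaces idea: the abstract state is the vector of
# each low-level primitive's value estimate at the current observation.
# The value networks here are untrained placeholders, not the paper's models.
import torch
import torch.nn as nn

NUM_PRIMITIVES = 5
OBS_DIM = 32

# One value function per low-level skill/primitive (placeholders).
primitive_value_fns = nn.ModuleList(
    [nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
     for _ in range(NUM_PRIMITIVES)]
)

def vfs_state(observation: torch.Tensor) -> torch.Tensor:
    """Stack the primitives' value estimates into the VFS representation."""
    with torch.no_grad():
        values = [v(observation) for v in primitive_value_fns]  # each shape (1,)
    return torch.cat(values, dim=-1)                            # shape (NUM_PRIMITIVES,)

obs = torch.randn(OBS_DIM)
print(vfs_state(obs))  # a 5-D "affordance" vector the high-level policy sees
```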
RECON accepted as an Oral Talk at @corl_conf 2021!! What are the odds we actually get a live audience in London?
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals. Thread:
1
8
42
New video from @twominutepapers features our recent research on zero-shot instruction following with real robots! Joint work with @berkeley_ai @GoogleAI @blazejosinski @brian_ichter @svlevine. Check out our paper for more:
3
8
38
On Tuesday, I'm stoked to be presenting ViKiNG, which has been nominated for the Best Systems Paper award, at the Long Talk Session 3! Please stop by my talk or poster later that evening. Joint work with @svlevine @berkeley_ai @RoboticsSciSys.
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc. A thread:
2
7
31
"Virtual" socials have come a long way since the start of the pandemic, and this one is a stellar example of what they can be! Karaoke, conference rooms, game rooms, photo booths. The @berkeley_ai social was the most badass virtual party EVER! Kudos to the team
0
2
32
I'll be at #NeurIPS all week with new research, robot demos, and an exciting workshop on robot learning! Come find me/reach out to chat about all things robotics, embodied reasoning, and vegetarian food in NOLA. Here's where you can find me:
1
1
32
On Tuesday, we'll be presenting FastRLAP at the evening poster session. We make an RC car go brrr, pixels-to-actions, in under 20 minutes of real-world practice! Work w/ @KyleStachowicz Arjun Bhorkar @ikostrikov @svlevine.
Can we use end-to-end RL to learn to race from images in just 10-20 min? FastRLAP builds on RLPD and offline RL pretraining to learn to race both indoors and outdoors in under an hour, matching a human FPV driver (i.e., the first author): Thread:
1
1
30
In-context learning with *frozen* frontier VLMs enables zero-shot value functions for videos in the wild, including robot trajectories! Generative value learning can enable automatic data filtering, dense reward labels and more. Check out Jason's thread for details and a web demo:
Excited to finally share Generative Value Learning (GVL), my @GoogleDeepMind project on extracting universal value functions from long-context VLMs via in-context learning! We discovered a simple method to generate zero-shot and few-shot values for 300+ robot tasks and 50+
0
4
29
A simple recipe for solving new tasks: use a SOTA video generation model to generate a video rollout and train a video-conditioned policy to execute it on your robot! Check out @mangahomanga's thread below for more:
Gen2Act: Casting language-conditioned manipulation as *human video generation* followed by *closed-loop policy execution conditioned on the generated video* enables solving diverse real-world tasks unseen in the robot dataset! 1/n
0
5
29
On Monday, I'll present FastRLAP at the Pretraining for Robotics workshop. #ICRA2023. We make an RC Car go brrr, pixels-to-actions, in under 20 minutes of real-world practice! Work co-led with @KyleStachowicz and Arjun Bhorkar.
Can we use end-to-end RL to learn to race from images in just 10-20 min? FastRLAP builds on RLPD and offline RL pretraining to learn to race both indoors and outdoors in under an hour, matching a human FPV driver (i.e., the first author): Thread:
2
2
29
Had a great time in Auckland @corl_conf over the past week! Big thanks to the organizers for the wonderful conference. Presented LM-Nav and ReViND. Great discussions @ LangRob workshop (videos coming soon).
0
2
28
@jbhuang0604 @umdcs I thought this was going to be a tutorial on making better figures for CVPR.
3
0
28
VFS is a simple, yet effective, way to obtain skill-centric representations that really helps long-horizon reasoning and generalization in HRL. Check out this great summary thread by @svlevine. Work done during my internship @GoogleAI with @brian_ichter, @alexttoshev.
Value function spaces (VFS) uses low-level primitives to form a state representation in terms of their "affordances" - the value functions of the primitives serve as the state. This turns out to really improve generalization in hierarchical RL! Short thread:
1
7
28
I'm excited to be speaking at the ML4AD symposium (colocated with #NeurIPS2023) at noon! Stop by if you're interested
We're thrilled to sponsor this year's ML for Autonomous Driving Symposium on December 14. ML4AD 2023 will see researchers, industry experts, and practitioners come together to redefine the future of autonomous driving technologies. Join us in New Orleans!
0
2
26
At #CoRL2023, the lines are long but the speakers are strong. Come join us in Sequoia 2 and
Join us for the 2nd edition of the #LangRob workshop at #CoRL2023 in vibrant Atlanta! Get ready for an unforgettable day with an all-star ensemble of speakers and two spicy panels that will ignite your passion for language and robotics! P.S. Guess who wrote this tweet
0
2
26
On Wednesday, I will be presenting ViNT as an oral talk in the morning session. We show that cross-embodiment training generalizes well in zero-shot and can be adapted to various downstream tasks.
We developed a new navigation model that can be trained on many robots and provides a general initialization for a wide range of downstream navigational tasks: ViNT (Visual Navigation Transformer) is a general-purpose navigational foundation model. Thread below:
1
2
23
Simple recipe to repurpose *existing* datasets to learn fine-grained skills for functional manipulation: generate dense (structured) language annotations, train a language-conditioned robot policy, and STEER (:) with an off-the-shelf VLM to solve tasks! Thread below:
Excited to share our work on STEERing robot behavior! With structured language annotation of offline data, STEER exposes fundamental manipulation skills that can be modulated and combined to enable zero-shot adaptation to new situations and tasks.
0
2
24
On Wednesday, I'll present GNM: a pre-trained embodiment-agnostic navigation model (with public checkpoints!) that can drive any robot! #ICRA2023.
We're releasing our code for "driving any robot", so you can also try driving your robot using the general navigation model (GNM): Code goes with the GNM paper: Should work for locobot, hopefully convenient to hook up to any robot
1
3
23
Some late night encouragement from @weights_biases to keep churning experiments for your sweet sweet papers!
1
0
21
The @ieee_ras_icra Workshop on Robotic Perception and Mapping seems like such a throwback to busy conference days: 100s of attendees in a room! Great panel @lucacarlone1 @AjdDavison @fdellaert. #ICRA2022
0
1
22
Very interesting and engaging tutorial on Social Reinforcement Learning for your robots by @natashajaques at #CoRL2021 @corl_conf. It's being streamed on YouTube if you wanna join:
0
3
21
Excited to finally be attending a physical conference and meeting friends in person at #CoRL2021. Lovely conference venue, looking forward to the exciting talks. If you're attending and wanna chat, let's catch up! DMs open.
0
1
21
@corl_conf @d_yuqing @pratyusha_PS @xiao_ted @mohito1905 @StefanieTellex @brian_ichter @_jessethomason_ Join our technical session in-person in Auckland, or virtually, to hear from @jacobandreas @jeffclune @alsuhr @jackayline @ybisk @andyzengtweets, Cynthia Matuszek @UMBC , Dieter Fox @uwcse, Jean Oh @CMU_Robotics , Nakul Gopalan @GeorgiaTech
1
3
19
Cool new work combining CLIP and a 3D model for zero-shot 3D reasoning!! Lots of exciting progress using CLIP as a reliable language interface for vision and robotics tasks. We found CLIP to be very useful to ground landmarks for robotic navigation too:
Semantic abstraction -- give CLIP new 3D reasoning capabilities, so your robots can find that "rapid test behind the Harry Potter book." w/ Huy Ha.
2
1
21
RECON was featured on Computer Vision News @RSIPvision: I also presented RECON at @corl_conf in London last month, talk video here: Videos and dataset @
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals. ๐งต>
0
5
20
Also on Wednesday, we'll be presenting LFG at the evening poster session. We use CoT and a novel scoring method for LLMs to construct narratives and guide the robot through unseen environments in search of a goal.
Can LLMs help robots navigate? It's hard for LLMs to just *tell* the robot what to do, but they can provide great heuristics for navigation by coming up with guesses. With LFG, we ask the LLM to come up with a "story" about which choices are better, then use it w/ planning. Thread:
0
4
19
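The LFG tweets above describe using LLM "stories" as heuristics that are combined with a planner rather than followed blindly. Below is a hedged sketch of that scoring interface only: `llm_score` is a hypothetical placeholder for the actual language-model call, and the simple weighted combination with planner distance is illustrative, not the paper's exact formulation.

```python
# Hedged sketch of LFG-style subgoal scoring: an LLM heuristic is combined with
# a planner's distance estimate to rank frontier subgoals. `llm_score` is a
# hypothetical placeholder; the weighting below is purely illustrative.
from dataclasses import dataclass

@dataclass
class Subgoal:
    description: str   # e.g. "hallway leading to a kitchen"
    distance: float    # planner's estimated cost to reach this frontier

def llm_score(goal: str, subgoal: Subgoal) -> float:
    """Placeholder: return a heuristic P(subgoal leads to goal) from LLM prompts."""
    raise NotImplementedError("query your LLM here")

def rank_subgoals(goal: str, subgoals: list[Subgoal], alpha: float = 1.0):
    """Rank subgoals by planner cost minus the LLM's semantic preference."""
    scored = []
    for sg in subgoals:
        heuristic = llm_score(goal, sg)          # semantic "story" score in [0, 1]
        cost = sg.distance - alpha * heuristic   # lower combined cost is better
        scored.append((cost, sg))
    return [sg for _, sg in sorted(scored, key=lambda x: x[0])]
```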
Excited to be in New York City for @RoboticsSciSys #RSS2022! On Monday, we'll be organizing this workshop on Learning from Diverse, Offline Data with cutting-edge papers, speakers and an in-person panel! Please join us if you're in town, or virtually!
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
3
1
17
Excited to be in Philly for @ieee_ras_icra! Looking forward to catching up with friends and checking out exciting new research in person after a long hiatus. Please reach out if you're around and wanna chat
0
0
19
@TrackCityChick Very unfortunate, exacerbated for international students with the sky-high living expenses in the Bay Area. Very glad that Berkeley provides a semester's stipend in advance (~start of classes) for grad students! This should totally be the norm.
0
1
16
Have a recent @NeurIPSConf/WIP draft on offline learning, dataset curation, benchmarking, learning from multimodal data or related topics? Please consider submitting to our RSS workshop for quick feedback. Paper deadline now extended to *May 27*
0
9
17
Come to the Hörsaal for the 3rd edition of the LangRob workshop! We have 6 exciting talks, 2 panels, 60 brilliant posters, and 8 spotlight research talks. All talks and panels will be recorded and uploaded after the conference.
Our LangRob Workshop is kicking off! #CoRL2024. Come and join us in the Hörsaal room on the ground floor. As @shahdhruv_ said in our introduction, "LangRob is now bigger than the first ever CoRL!" See you here.
1
2
18
Dr. Signe Redfield on the definition of robotics as a field and the rise of a Kuhnian scientific revolution. Read more at . #ICRA2019 #RoboticsDebates
0
3
18
@jeremyphoward @simonw I would've thought they were A/B testing different variants, but trying out the (hopefully more stable) API shows this is not a new thing. The original release model (gpt-4-0314) also seems to know about the slap.
3
2
15
It's a full house! Our @RoboticsSciSys Workshop on Learning from Diverse, Offline Data is happening now at Mudd 545, and online at
#LDOD is kicking off in ~30 minutes; join our free-to-view livestream here: See y'all soon!
0
3
17
TIL: You can anonymize a @github code repo for double-blind peer review with this neat tool by @thodurieux. It comes with a navigator and also has some basic anonymization options like removing links or specific words. Really cool!
0
1
17
On Tuesday, we'll present ExAug at the 3PM poster session. #ICRA2023. Work led by @noriakihirose.
Experience augmentation (ExAug) uses 3D transformations to augment data from different robots to imagine what other robots would do in similar situations. This allows training policies that generalize across robot configs (size, camera placement): Thread:
1
1
15
LangRob workshop happening now at #CoRL2022 in ENG building, room 401! Pheedloop and stream for virtual attendees:
Announcing the Workshop on Language and Robot Learning at @corl_conf #CoRL2022, Dec 15. Exciting lineup of speakers from the robotics, ML and NLP communities to discuss the present and future of language in robot learning! Inviting papers, due Oct 28.
0
3
13
Reminder -- papers are due AoE tonight!! Please consider sharing your new research with the broader robotics and machine learning community at @RoboticsSciSys 2022 in NYC or remotely.
Have a recent @NeurIPSConf/WIP draft on offline learning, dataset curation, benchmarking, learning from multimodal data or related topics? Please consider submitting to our RSS workshop for quick feedback. Paper deadline now extended to *May 27*
1
4
15
Are you an early-stage researcher (grad student/postdoc) interested in a fireside chat with one of our workshop speakers? We're inviting signups for a 1:1 discussion. Please email ldod_rss2022@googlegroups.com with a bit about yourself and the speaker you'd like to chat with!
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
3
7
15
It's unbelievable what the Ocean One achieved with neat research and remarkable engineering efforts! Oussama Khatib on the need for compliant robots and the story behind the Marseille shipwreck recovery. @StanfordAILab #ICRA2019. Interesting video:
0
1
11
@CSProfKGD The Berkeley DL course generally gets a huge undergraduate cohort: . Prev offering:
0
1
12
@ammaryh92 @jeremyphoward @__mharrison__ @rasbt @svpino I love Jupyter Lab but the real champ is VSCode + the Jupyter notebook extension: it's like Lab, but much more customizable and feels like a notebook inside of your favorite editor with keybindings. Bonus: works with @OpenAI @Github Copilot!
1
1
12
Excited to share our latest research on customizing learned navigation behaviors by combining offline RL with topological graphs -- ReViND. I'll be presenting ReViND at the 11am oral session today @corl_conf. Please join! Videos, code and more:
Offline RL with large navigation datasets can learn to drive real-world mobile robots while accounting for objectives (staying on grass, on paths, etc.). We'll present ReViND, our offline RL + graph-based navigational method, at CoRL 2022 tomorrow. Thread:
0
0
13
@hardmaru To be fair, that's probably just the cost to train the final/released model, and does not include the compute used in tuning hyperparameters and failed experiments? The overall $$ of the project would likely be at least an order of magnitude higher than that of the final model.
0
0
12
Action-packed day at Bay Area Robotics Symposium 2019 @UCBerkeley @berkeley_ai with exciting research from Berkeley, @Stanford, @ucsc and industry :D Props to Mark Mueller and @DorsaSadigh for organizing.
0
1
13
Unknowingly kicked off the #icra2019ScavengerHunt earlier today at this beautiful place! #icra2019MontRoyal #TeamWookie #icra2019
1
3
12
@xuanalogue @jeremyphoward This is a shame! My collaborators and I have done a lot of work that leverages the logprobs in a probabilistic planning framework and found it very useful. I guess that's why you shouldn't use closed models for research...
1
0
12
@chris_j_paxton Used it quite extensively and it's great. The Orin AGX is my de facto replacement for any silly laptop GPU and I think more people should use it :). I've also requested this many times and I know there's a lot on the forum too -- would be great if Stretch shipped with an Orin.
1
0
13
At the @Tesla AI Day event today and there's a Cybertruck to greet us at the gate. Looking forward to what's waiting inside!
1
0
12
Deadline for submitting papers and demo proposals now EXTENDED to **next** Friday, Oct 6 AoE!
Announcing the 6th Robot Learning Workshop @NeurIPSConf on Pretraining, Fine-Tuning, and Generalization with Large Scale Models. #NeurIPS2023. CfP: Don't like your #CoRL2023 reviews? Love them? We welcome your contributions either way.
0
3
11
@maththrills moderating the most scintillating debate of #ICRA2019: "The pervasiveness of deep learning is an impediment to gaining scientific insights into robotics problems". It's a full house! @angelaschoellig, Nick Roy @MIT_CSAIL, Ryan Gariepy @clearpathrobots and Oliver Brock
2
3
10
The number of passengers with poster tubes on this flight from Frankfurt to Montreal is too high! Coincidence or #ToICRA2019? @icra2019 @ieee_ras_icra
0
0
10
@corl_conf @d_yuqing @pratyusha_PS @xiao_ted @mohito1905 @StefanieTellex @brian_ichter @_jessethomason_ @jacobandreas @jeffclune @alsuhr @jackayline @ybisk @andyzengtweets @UMBC @uwcse @CMU_Robotics @GeorgiaTech Massive shout-out to my wonderful co-organizers for their support in putting this together! @d_yuqing @xiao_ted @mohito1905 @pratyusha_PS @brian_ichter @_jessethomason_ @StefanieTellex, Kaiyu, Oier and Valts!
0
0
9
Some great demos of exploring unseen cafeterias and fire stations under different seasons and lighting on the project page: Video: Work with amazing collaborators @berkeley_ai: @ben_eysenbach @nick_rhinehart and @svlevine.
0
1
9
Looking forward to my first "virtual" @iclr_conf! The interface looks very clean and well-designed, great effort pioneering this @srush_nlp and co.
1/ Spent the last couple weeks in quarantine obsessively coding a website for Virtual ICLR with @hen_str. We wanted to build something that was fun to browse, async first, and feels alive.
2
0
9
@NeurIPSConf Bring out your robots and spread the word! @brian_ichter @chichengcc @andyzeng_ @antoniloq @ryan_hoque @tonyzzhao @oier_mees @pathak2206 @chris_j_paxton @DhruvBatraDB.
1
0
9
@mikb0b Libraries on top of Matplotlib usually work well enough. Once in a while, I've used Inkscape/online software for a particular graphic I wanted. PS: I challenge everyone to use Matplotlib+XKCD in a paper
0
0
8
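For anyone taking up the Matplotlib+XKCD challenge in the reply above: plt.xkcd() is a real built-in Matplotlib context manager that renders plots in a hand-drawn style. The data and labels below are made up for illustration.

```python
# Matplotlib's built-in xkcd sketch style, as dared in the reply above.
# plt.xkcd() is a real Matplotlib context manager; the data is made up.
import matplotlib.pyplot as plt

with plt.xkcd():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3, 4, 5], [10, 30, 25, 60, 90])
    ax.set_xlabel("years into PhD")
    ax.set_ylabel("papers promised")
    ax.set_title("Totally rigorous trend")
    plt.show()
```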
@andyzengtweets and @xf1280 from @GoogleAI on language as a glue for intelligent machines and a live demo of their PaLM-SayCan system! (9/12)
1
1
8
A good friend introduced me to @MathpixApp today. Works way beyond my expectations! Biggest thing to happen to me since I started TeXing. Highly recommend to everyone #phdlife #AcademicTwitter
1
3
8
We'll be presenting our spotlight talk on getting robots to learn in the real world without hand-engineered resets, rewards and state information at ICLR 2020! Tune in at 10PM tonight or 5AM tomorrow PDT to learn more. Blogpost: @iclr_conf @berkeley_ai.
Check out our ICLR spotlight: Ingredients of Real-World Robotic Reinforcement Learning! How can we set up robots to learn with RL, without manual engineering for resets, rewards, or vision? Talk Paper Poster
1
0
8
#ICRA2019 Milestone Award for best paper from 20 years ago to Steven LaValle and James Kuffner! (The world needs better telepresence tools)
1
0
8
Interesting (and important) ideas on the cycle of bias and the need for inclusiveness at venues like #ICRA2019 by Karime Pereida and Melissa Greeff @utiasSTARS #RoboticsDebates #robotics #diversitymatters
0
1
6
Join us for the final panel of the day and submit questions for our panelists! We have @_jessethomason_ and @siddkaramcheti taking on Giulia @GoogleDeepMind @DannyDriess @physical_int @AjayMandlekar @nvidia -- come take your last swings of #CoRL2024
0
1
8