Jeannette Bohg

@leto__jean

6,960 Followers · 506 Following · 193 Media · 1,541 Statuses

Assistant Professor @StanfordAILab @StanfordIPRL. Perception, learning and control for autonomous robotic manipulation. #BlackLivesMatter she/her 🌈

Stanford, CA
Joined May 2017
Pinned Tweet
@leto__jean
Jeannette Bohg
4 months
We dramatically sped up Diffusion Policies through consistency distillation. With the resulting single-step policy, we can run fast inference on laptop GPUs and robot on-board compute. 👇
@_Aaditya_Prasad
Aaditya Prasad 🇺🇸
4 months
Diffusion Policies are powerful and widely used. We made them much faster. Consistency Policy bridges consistency distillation techniques to the robotics domain and enables 10-100x faster policy inference with comparable performance. Accepted at #RSS2024
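To make the speedup concrete, here is a minimal, illustrative sketch of the difference between iterative diffusion-policy inference and a distilled single-step consistency policy. The teacher/student callables, the noise schedule, and the action dimension are assumptions for illustration, not the Consistency Policy codebase.

import torch

def diffusion_inference(teacher, obs, num_steps=100, action_dim=7):
    """Iterative denoising: many network calls per predicted action."""
    a = torch.randn(1, action_dim)                 # start from pure noise
    for t in reversed(range(1, num_steps + 1)):
        sigma = t / num_steps                      # simple noise schedule (illustrative)
        a = teacher(obs, a, sigma)                 # one denoising step
    return a

def consistency_inference(student, obs, action_dim=7):
    """Distilled single-step policy: one network call per predicted action."""
    a = torch.randn(1, action_dim)
    return student(obs, a, 1.0)                    # jump straight to the clean action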
@leto__jean
Jeannette Bohg
3 years
Back in Summer 2019, a group of Roboticists and Machine Learning Researchers met at the Robot Learning Summit in Montreal. We had lively discussions on the Challenges and Opportunities for Embodied Intelligence. Here are our thoughts:
@leto__jean
Jeannette Bohg
1 year
Recognizing symbols like "dish in dishwasher" or "cup on table" enables 🤖 task planning. But how do we get data to train models for recognizing symbols? Introducing "Grounding Predicates through Actions" to automatically label human video datasets 🧵
@leto__jean
Jeannette Bohg
4 years
Dear Academic colleagues, please VOTE in November! Your international colleagues and students cannot.
@AndrewYNg
Andrew Ng
4 years
New @ICEgov policy regarding F-1 visa international students is horrible & will hurt the US, students, and universities. Pushes universities to offer in-person classes even if unsafe or no pedagogical benefit, or students to leave US amidst pandemic and risk inability to return.
@leto__jean
Jeannette Bohg
5 years
We took first place at the NuScenes Tracking Challenge (AI Driving Olympics @NeurIPSConf). Spoiler alert: it's a Kalman filter! We beat the AB3DMOT baseline by a large margin. arXiv
Tweet media one
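For readers unfamiliar with the classical tool behind this result, below is a minimal constant-velocity Kalman filter over a 3D detection center; it is a generic sketch of the kind of filter the tweet refers to, not the released challenge code (which also handles data association and track life-cycle management).

import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over a 3D box center (x, y, z)."""
    def __init__(self, z0, dt=0.5, q=1.0, r=1.0):
        self.x = np.hstack([z0, np.zeros(3)])                  # state: position + velocity
        self.P = np.eye(6)
        self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3)    # x_{k+1} = x_k + v_k * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])      # we only observe position
        self.Q = q * np.eye(6)
        self.R = r * np.eye(3)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        """Fuse an associated detection z (3D center) into the track."""
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P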
@leto__jean
Jeannette Bohg
4 years
Don't we all want our robots to do many different tasks? We learned a single policy for 78 manipulation tasks. We don't use goal images. We use natural language instructions to index the task. Chat with us tomorrow 8am PDT #RSS2020 1/4
@leto__jean
Jeannette Bohg
3 years
How do you safely navigate a complex world with only an RGB camera? Turns out, the density represented by NeRFs can serve as a collision metric. 👇
@StanfordMSL
Stanford MSL
3 years
Excited to share our new work on vision-only navigation using NeRFs! We show how to use a pre-trained NeRF to both represent collision geometry and localize a robot with an onboard camera. Project page: Paper:
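A rough sketch of the core idea of treating NeRF density as a collision metric: query the density head of a pre-trained NeRF around candidate waypoints and penalize high density. The density_fn interface, sampling scheme, and radius are illustrative assumptions, not the paper's implementation.

import torch

def collision_cost(density_fn, waypoints, n_samples=32, radius=0.2):
    """Average NeRF density in a ball around each waypoint as a collision penalty.

    density_fn: callable mapping (N, 3) points -> (N,) non-negative densities,
    e.g. the density head of a pre-trained NeRF.
    """
    cost = 0.0
    for p in waypoints:                               # each waypoint is a (3,) position
        offsets = radius * torch.randn(n_samples, 3)  # sample points near the waypoint
        sigma = density_fn(p.unsqueeze(0) + offsets)  # high density ~ occupied space
        cost = cost + sigma.mean()
    return cost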
@leto__jean
Jeannette Bohg
5 years
We propose a method for in-hand manipulation that combines low-level manipulation primitives with a learned, mid-level policy that orchestrates these primitives. Our method keeps the object firmly grasped while transporting it to distant goal poses.
@leto__jean
Jeannette Bohg
1 year
Seeing TidyBot come together in my lab has been fun! And yes, our lab has been exceptionally clean 🧹 Many of you commented on the mobile platform, and I'm so pleased to see it move! Let me tell you the story behind these platforms! 🧵
@jimmyyhwu
Jimmy Wu
1 year
When organizing a home, everyone has unique preferences for where things go. How can household robots learn your preferences from just a few examples? Introducing 𝗧𝗶𝗱𝘆𝗕𝗼𝘁: Personalized Robot Assistance with Large Language Models Project page:
@leto__jean
Jeannette Bohg
2 years
People often use customized tools for a variety of manipulation tasks: 🥄🍴🥢🪛🪝🪓🛠️✂️🧵📎 We look at the problem of automatically designing customized tools for robots and leverage differentiable simulation and continual learning. Video & Paper: 🧵
Tweet media one
@leto__jean
Jeannette Bohg
1 year
How do you sequence learned skills for a manipulation task? Use STAP to plan with learned skills and maximize the expected success of each skill in the plan, where success is encoded in Q-functions. 🐙 Code for STAP is now on GitHub
Tweet media one
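As a rough illustration of planning with success-encoding Q-functions, the sketch below scores candidate skill sequences by the product of their learned Q-values and keeps the best sequence. The skills/q_functions interfaces are hypothetical, and the real STAP additionally optimizes each skill's continuous parameters rather than only enumerating discrete sequences.

import itertools

def plan_skill_sequence(skills, q_functions, state, horizon=3):
    """Pick the skill sequence whose predicted joint success (product of Q-values) is highest.

    skills: dict mapping skill name -> model that rolls the (predicted) state forward
    q_functions: dict mapping skill name -> learned Q(state) in [0, 1] estimating success
    """
    best_seq, best_score = None, -1.0
    for seq in itertools.product(skills, repeat=horizon):
        s, score = state, 1.0
        for name in seq:
            score *= q_functions[name](s)          # expected success of this skill here
            s = skills[name](s)                    # predicted next state after the skill
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score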
@leto__jean
Jeannette Bohg
2 years
I am very grateful to the Sloan Foundation for recognizing and supporting the research done in my lab @StanfordIPRL! This wouldn't have been possible without the hard work of my students and postdocs and the support of my mentors!
@SloanFoundation
Sloan Foundation
2 years
We are delighted to announce the winners of this year’s Sloan Research Fellowship! These outstanding researchers are shining examples of innovation and impact—and we are thrilled to support them. Meet the winners here: 🎉 #SloanFellow #STEM #ScienceTwitter
Tweet media one
@leto__jean
Jeannette Bohg
2 years
How do you perform a manipulation task with a novel object that is heavily occluded? We propose a method for task-driven in-hand manipulation of unknown objects with tactile sensing. 🧵 1/n
@leto__jean
Jeannette Bohg
2 years
3D multi-object tracking is a challenging problem, because it requires effective data association, track lifecycle management, false-positive elimination, and false-negative propagation. To address all 4 of these problems, we propose ShaSTA! 🧵 1/5
Tweet media one
@leto__jean
Jeannette Bohg
4 years
Thank you so much for all the congratulations to the 2020 RSS Early Career Award! I'm honored to receive it and astounded and happy to hear that I serve as an inspiration. I want to mention who inspires me
@nkalavak
Nivii Kalavakonda
4 years
Congratulations @leto__jean on your 2020 RSS Early Career Award! You are an inspiration to many.
@leto__jean
Jeannette Bohg
4 years
Our multi-robot task allocation algorithm is out! It addresses the key computational challenges of sequential decision-making under uncertainty and multi-agent coordination. RSS'20 Paper, Code, Reviews:
@leto__jean
Jeannette Bohg
3 years
Hanging objects is a common daily task. We propose OmniHang to learn to hang arbitrary objects onto a diverse set of supporting items using contact point correspondences and neural collision estimation. Project webpage: Paper:
@leto__jean
Jeannette Bohg
5 years
Proud of my amazing students who worked incredibly hard for this 🙌
Tweet media one
@michellearning
Michelle Lee
5 years
Members of @StanfordIPRL submitted a total of 11 papers for ICRA. We have 11 acceptances. Congratulations everyone! 🎊🎉🗼
@leto__jean
Jeannette Bohg
4 years
Target networks in deep RL can slow down the learning process. We propose a self-regularized and self-guided actor-critic (GRAC) method and achieve state of the art in MuJoCo continuous control tasks. No target network needed. Project Page:
@leto__jean
Jeannette Bohg
5 years
How do you generate stable grasps for n-fingered hands? Check out our new work UniGrasp! One model that generates n contact points for novel objects and novel hands. Project Page arXiv
@leto__jean
Jeannette Bohg
5 years
Manipulation of deformable objects is hard. Check out our approach! We propose a self-supervised training objective on real images to robustly perceive a high-dimensional object state under occlusions, some clutter, and appearance variation. Project + arXiv
@leto__jean
Jeannette Bohg
4 years
Tracking by detection is a powerful paradigm. Multi-object tracking performance is largely driven by detection accuracy: the better the detector, the better the tracking. So just work on better object detectors? Not so fast... 🧵 Details:
@leto__jean
Jeannette Bohg
1 year
At #CVPR2023, we present CARTO, a model that reconstructs articulated objects from a single stereo image. This includes the object's 3D shape, 6D pose, size, joint type, and joint state. All this in a category-agnostic fashion.
Tweet media one
@leto__jean
Jeannette Bohg
4 years
I was invited to the #IROS2020 Tutorial on Deep Representation and Estimation of State for Robotics by @deanh_tw @danfei_xu @KuanFang : I created a mini-course on Representations and their interplay with decision-making & control:
Tweet media one
@leto__jean
Jeannette Bohg
4 years
We are presenting our new work on learning user-preferred mappings for assistive teleop #IROS2020 !
@DorsaSadigh
Dorsa Sadigh
4 years
Making assistive teleoperation more intuitive by learning user-preferred mappings for latent actions w/ Mengxi Li, @loseydp and @leto__jean
@leto__jean
Jeannette Bohg
2 years
So many ways to generate a sequence of robot skills to solve a task that requires long-horizon reasoning: LLMs, task planners, high-level policies, … But how do you ensure that each skill is executed such that the next skill can be successful at all?
@leto__jean
Jeannette Bohg
4 years
So happy to see @Stanford joining our peer institutions in an amicus brief to support Harvard and MIT in a lawsuit against the #StudentBan. MTL's letter to the acting secretary of the Department of Homeland Security:
@MIT_CSAIL
MIT CSAIL
4 years
BREAKING: MIT and Harvard just filed suit against ICE and the DHS seeking to reverse their ban on international students being able to stay in the US while taking classes remotely. Full complaint:
Tweet media one
@leto__jean
Jeannette Bohg
4 years
Are there more Roboticists out there right now, on a deadline grind, wondering: How is this even possible? Such an incredible achievement. So many possible points of failure. So many years of work.
@NASAJPL
NASA JPL
4 years
JUST, WOW.😍 Grab the popcorn because the @NASAPersevere rover has sent us a one-of-a-kind video of her Mars landing. For the first time in history, we can see multiple angles of what it looks like to touch down on the Red Planet. #CountdownToMars
@leto__jean
Jeannette Bohg
5 years
Learning contact-rich manipulation skills is hard for robots! But what if the robot could modify its environment to help skill learning? We let a robot discover how to place fixtures that funnel uncertainty and dramatically speed up skill learning.
@leto__jean
Jeannette Bohg
4 months
We perform precision grasping with multi-fingered hands on objects seen from only a single RGB-D camera. What is new and different about this work is that we model grasping as a dynamic process where the object is allowed to move while the hand establishes a stable grasp.
@leto__jean
Jeannette Bohg
4 years
New Grasping Data Set Alert 🚨 We trained a model called UniGrasp to generate stable grasps for multi-fingered hands. Code + Data now available including not only a variety of objects but also hands:
@leto__jean
Jeannette Bohg
5 years
How do you generate stable grasps for n-fingered hands? Check out our new work UniGrasp! One model that generates n contact points for novel objects and novel hands. Project Page arXiv
@leto__jean
Jeannette Bohg
5 years
Understanding dynamic 3D environments is crucial for robotic agents. We propose MeteorNet for learning representations of dynamic 3D point cloud sequences (Oral @ICCV19). Project Page: arXiv:
@leto__jean
Jeannette Bohg
3 years
Learning of manipulation skills often considers point-to-point motion only. Many manipulation tasks, however, are periodic and repeated indefinitely: 🧵🥣🛠️🪚🧼🪠🧹🪡✂️ 🪛🪥 We learn and represent periodic policies with complex deformable objects or granular material from vision
@yjy0625
Jingyun Yang
3 years
How can robots learn to perform various periodic tasks in our everyday life, like wiping tables and stirring food? Check out our approach ViPTL that can learn to execute a periodic task from a single human demo! Paper: Website: 👇
@leto__jean
Jeannette Bohg
4 years
Ever worked on a real robot in a not so controlled environment? Then you may have experienced changing lighting conditions, sensor drift or just a broken sensor. We propose a way to deal with these unexpected measurements through crossmodal compensation 🧵
@michellearning
Michelle Lee
4 years
What happens if your camera gets occluded? Use remaining sensors to compensate for the corrupted image! Introducing Crossmodal Compensation Model, a representation model that compensates for corrupted sensors. Paper: Project:
@leto__jean
Jeannette Bohg
11 months
We want our robots to extrapolate from a few examples of a manipulation task to many variations. Embedding equivariance in both our object representation and our policy architecture allows our 🤖 to do just that.
@yjy0625
Jingyun Yang
11 months
From a few examples of solving a task, humans can: 🚀 easily generalize to unseen appearance, scale, pose 🎈 handle rigid, articulated, soft objects 0️⃣ all that with zero-shot transfer. Introducing EquivAct to help robots gain these capabilities. 🔗  🧵↓
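One way to see what embedding equivariance buys you: an equivariant policy should transform its output consistently when its input is transformed. Below is a small sanity check for SO(2) equivariance about the z-axis under an assumed interface (point cloud in, 3D end-effector displacement out); it illustrates the property, not the EquivAct architecture.

import numpy as np

def check_equivariance(policy, points, atol=1e-4):
    """Check that rotating the observed point cloud rotates the predicted action."""
    theta = np.random.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about z
    a = policy(points)                               # points: (N, 3) array
    a_rot = policy(points @ R.T)                     # rotate the observation
    return np.allclose(a_rot, R @ a, atol=atol)      # prediction should rotate with it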
@leto__jean
Jeannette Bohg
4 years
✋ My PhD advisor is the power roboticist and woman @DanicaKragic Where would I be without her? I aspire to be to my students what Dani has been to me!
@Murphy_Lab_OU
Murphy Lab
4 years
Raise your hand if you are a female scientist who had a female mentor who was pivotal to your success. This paper is way off base! 🤚
@leto__jean
Jeannette Bohg
4 years
Happy to announce that @StanfordIPRL is part of the challenge and has qualified for Phase 2! We are so excited to remotely work on a real robot
@robo_challenge
Real Robot Challenge
4 years
Phase 2 of the challenge starts this Monday, October 12! Amazing effort by all the teams in Phase 1, thank you for participating! 🙂 Congratulations to the selected teams!! Come Monday, training on the real TriFinger platforms will be as simple as running jobs on a cluster 😉
Tweet media one
@leto__jean
Jeannette Bohg
4 years
Mark your calendars for this new public Robotics Seminar Series starting Friday May 15th 1PM EST with @AjdDavison ! Upcoming amazing speakers: @AjdDavison , Leslie Kaelbling, Allison Okamura, @ancadianadragan Hope to see you there and participate in the Q&As!
@RoboticsSeminar
RoboticsTodaySeminar
4 years
We are launching a new #virtual #robotics seminar “Robotics Today--A Series of Technical Talks”. Andrew Davison @AjdDavison will give the first seminar “From SLAM to Spatial AI” on Friday May 15th 1PM EST. Watch and learn more about the seminars here:
Tweet media one
@leto__jean
Jeannette Bohg
2 months
We embedded equivariance in Diffusion policies to let them generalize to scenarios that would otherwise be out of distribution. 👇 Check Jingyun's thread for all the details!
@yjy0625
Jingyun Yang
2 months
Want a robot that learns household tasks by watching you? EquiBot is a ✨ generalizable and 🚰 data-efficient method for visuomotor policy learning, robust to changes in object shapes, lighting, and scene makeup, even from just 5 mins of human videos. 🧵↓
@leto__jean
Jeannette Bohg
4 years
Thank you, Scott Kuindersma @BostonDynamics, for this very exciting and entertaining talk! On top of all the controller details, I loved hearing about how you integrate perception. Re-watch the talk and Q&A here!
@RoboticsSeminar
RoboticsTodaySeminar
4 years
@leto__jean
Jeannette Bohg
5 years
1/3 There are all these rich human activity datasets out there! Wouldn’t it be great if a robot could learn task-oriented grasping from them? Unfortunately, these datasets often lack 6D hand-object pose annotations. Here are some ideas on how to automatically annotate them.
Tweet media one
@leto__jean
Jeannette Bohg
8 months
Proud of my former grad student Toki Migimatsu! I love the self-corrections of mistakes at the end of the video
@Figure_robot
Figure
8 months
Figure 01 has learned to make coffee ☕️ End-to-end AI system, trained in 10 hours, just by watching humans make coffee Our neural networks are taking video in, trajectories out Join us to ship a fleet of AI robots:
@leto__jean
Jeannette Bohg
7 years
Working with robots? Sometimes having trouble with coordinate transforms and rotations? Here are STL files for printing these extremely useful 3D coordinate systems with magnets, so you can attach them to your whiteboard. Kudos to Felix from @MPI_IS
Tweet media one
Tweet media two
@leto__jean
Jeannette Bohg
3 years
Looking forward to presenting our line of work on enabling robots to do long-horizon task planning by leveraging large-scale human activity datasets! #CVPR2021 Thanks for the invite!
@dimadamen
Dima Damen
3 years
Sun 20/6 10:00 EDT (full day) EPIC Workshop with 5 challenges announcement, and live talks with Q&A by winners. Keynotes by L Torresani, D Crandall, @leto__jean, @davsca1 and K Grauman. Program: 2/6
Tweet media one
@leto__jean
Jeannette Bohg
2 years
We have previously shown how multimodal representation learning enables generalization in contact-rich manipulation tasks. I'm excited about this new #CVPR2022 work that is object-centric and includes vision, touch and audio.
@RuohanGao1
Ruohan Gao
2 years
Excited to share our #CVPR2022 paper ObjectFolder 2.0, a multisensory object dataset with visual, acoustic, and tactile data for Sim2Real transfer. Paper: Project page: Dataset/Code: Details with narration👇
@leto__jean
Jeannette Bohg
5 years
Solving long-horizon tasks requires task and motion planning! Our robot optimizes hand poses relative to target objects. The plan remains valid even with moving objects. Check out for the Tower of Hanoi and robot demos! URL:
Tweet media one
@leto__jean
Jeannette Bohg
2 years
To do daily chores, robots need to understand articulated objects. Sometimes a single picture of an object is deceiving. We propose a novel method that leverages temporal data to estimate the object articulation mechanism. 🧵 1/9
@leto__jean
Jeannette Bohg
4 years
Tune in! Because @siddhss5 wants to share some important open research problems with you so that he can retire :)
@RoboticsSeminar
RoboticsTodaySeminar
4 years
We'll be back next week (7/24, 1PM EDT/10AM PDT) with @uwcse 's Siddhartha Srinivasa! #Robotics
Tweet media one
@leto__jean
Jeannette Bohg
4 years
Fusing multiple modalities for state estimation requires dynamics and measurement models. Differentiable filters enable learning these models end-to-end while retaining the algorithmic structure of recursive filters. Project Page: #IROS2020
Tweet media one
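To make the idea concrete, here is a minimal sketch of a differentiable linear-Gaussian Kalman filter whose dynamics model, measurement model, and noise covariances are learnable parameters, so gradients can flow through every predict/update step. It is a generic illustration under those simplifying assumptions, not the (typically nonlinear, learned) filters from the paper.

import torch
import torch.nn as nn

class DifferentiableKF(nn.Module):
    """Kalman filter with learnable dynamics, measurement model, and noise covariances."""
    def __init__(self, state_dim, obs_dim):
        super().__init__()
        self.F = nn.Parameter(torch.eye(state_dim))                    # dynamics model
        self.H = nn.Parameter(0.1 * torch.randn(obs_dim, state_dim))   # measurement model
        self.log_q = nn.Parameter(torch.zeros(state_dim))              # process noise (log scale)
        self.log_r = nn.Parameter(torch.zeros(obs_dim))                # measurement noise (log scale)

    def forward(self, x, P, z):
        """One predict + update step; every operation is differentiable."""
        Q, R = torch.diag(self.log_q.exp()), torch.diag(self.log_r.exp())
        x = self.F @ x                                   # predict
        P = self.F @ P @ self.F.T + Q
        y = z - self.H @ x                               # innovation
        S = self.H @ P @ self.H.T + R
        K = P @ self.H.T @ torch.linalg.inv(S)           # Kalman gain
        x = x + K @ y                                    # update
        P = (torch.eye(P.shape[0]) - K @ self.H) @ P
        return x, P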
@leto__jean
Jeannette Bohg
6 years
Happy to share our new paper on Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks. Featuring RL on a real robot.
Tweet media one
@leto__jean
Jeannette Bohg
11 months
How do you autonomously learn a library of composable, visuomotor skills? At #IROS2023 we present an approach that can create novel yet feasible tasks to gradually train the skill policies on harder and harder tasks. Monday morning Poster Session: MoAIP-19.9
@KuanFang
Kuan Fang
2 years
Active Task Randomization (ATR) learns to create novel and feasible tasks for acquiring generalizable visuomotor skills. The learned skills can be composed to solve unseen sequential manipulation tasks in the real world.
@leto__jean
Jeannette Bohg
5 years
Interested in creating a truly immersive multimodal virtual reality experience? We propose a learned model to generate realistic haptic sensory data for a multitude of virtual materials. Project: arXiv:
@leto__jean
Jeannette Bohg
2 months
I'll be presenting EquiBot today (Friday) at #RSS2024 in the Workshop on Geometric and Algebraic Structure in Robot Learning. Room: ME B - Newton. Poster Session: 3-4pm
@yjy0625
Jingyun Yang
2 months
Want a robot that learns household tasks by watching you? EquiBot is a ✨ generalizable and 🚰 data-efficient method for visuomotor policy learning, robust to changes in object shapes, lighting, and scene makeup, even from just 5 mins of human videos. 🧵↓
@leto__jean
Jeannette Bohg
1 year
How do you sequence learned skills for a manipulation task? Use STAP to plan with learned skills and maximize the expected success of each skill in the plan, where success is encoded in Q-functions. Today 3-4pm #ICRA2023 at Pod 42 WePO2S-21.1 @agiachris
Tweet media one
@leto__jean
Jeannette Bohg
4 years
Maybe unpopular opinion: Unless you control the environment, manipulation robots will always make mistakes ... just like people. We looked at how a robot can learn online from failure and overcome it: No, this is not about RL!
@leto__jean
Jeannette Bohg
4 months
This afternoon at #ICRA2024, we are presenting RoboFuME, a method that implements the pre-train + fine-tune paradigm for multi-task robot policies. 4:30-6pm at "Big Data in Robotics and Automation", Room: CC-313
@yjy0625
Jingyun Yang
10 months
Announcing RoboFuME🤖💨, a system for autonomous & efficient real-world robot learning that 1. pre-trains a VLM reward model and a multi-task RL policy from diverse off-the-shelf demo data; 2. runs RL fine-tuning online with the VLM reward model. 🔗 🧵↓
@leto__jean
Jeannette Bohg
5 years
It is this time of the year: We are teaching rotations in CS336. Check out these awesome 3D printed coordinate frames for visualizing rotations and making your life easier.
@leto__jean
Jeannette Bohg
7 years
Working with robots? Sometimes having trouble with coordinate transforms and rotations? Here are STL files for printing these extremely useful 3D coordinate systems with magnets, so you can attach them to your whiteboard. Kudos to Felix from @MPI_IS
Tweet media one
Tweet media two
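For anyone following along with the rotations material, here is a small generic example (not course code) of composing and converting rotations with SciPy; the angles and vectors are arbitrary.

import numpy as np
from scipy.spatial.transform import Rotation as R

# 90 deg about the world z-axis, followed by 45 deg about the world x-axis.
# Applying R2 after R1 to a point p corresponds to the composition (R2 * R1).
R1 = R.from_euler("z", 90, degrees=True)
R2 = R.from_euler("x", 45, degrees=True)
p_rotated = (R2 * R1).apply(np.array([1.0, 0.0, 0.0]))

# The same rotation in other parameterizations (SciPy quaternions are [x, y, z, w]):
print((R2 * R1).as_quat())
print((R2 * R1).as_matrix())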
@leto__jean
Jeannette Bohg
4 years
Really cool, out-of-the-box hand-design allowing for new ways to think about in-hand manipulation! My student @linshaonju also contributed with his wisdom on policy learning - in this case through imitation
@shenli_yuan
Shenli Yuan
4 years
Manipulating objects using a robot grasper with steerable rolling fingertips. Our Roller Grasper V2 project is being presented at #IROS2020 Details at:
@leto__jean
Jeannette Bohg
1 year
Large Language Models promise to replace task planners in robotics. But how do we verify that these plans are correct - especially for tasks that require long-horizon reasoning? 👇Check out Kevin's 🧵 on text2motion!
@linkevin0
Kevin Lin
1 year
Large language models (LLMs) can readily convert language instructions into high-level plans. However, should we trust robots to execute these plans without verifying that they actually satisfy the instructions and are feasible in the real world?
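A highly simplified sketch of the verification idea: before executing an LLM-proposed plan, score each step with a learned feasibility/success estimate in the current (predicted) state and check that the final state satisfies the instruction's goal. The interfaces (feasibility_fns, goal_predicate, the 0.5 threshold) are assumptions for illustration, not the Text2Motion API.

def verify_plan(plan, feasibility_fns, state, goal_predicate):
    """Verify an LLM-proposed skill sequence before execution.

    plan: list of (skill_name, args)
    feasibility_fns[skill]: (state, args) -> (success probability, predicted next state)
    goal_predicate: state -> bool, the symbolic goal extracted from the instruction
    """
    score = 1.0
    for skill, args in plan:
        p_success, state = feasibility_fns[skill](state, args)
        score *= p_success
        if p_success < 0.5:                        # reject plans with an infeasible step
            return False, score
    return goal_predicate(state), score            # plan must also reach the stated goal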
@leto__jean
Jeannette Bohg
4 years
Differentiable recursive filters allow end-to-end learning of process and observation models + noise profiles. This is incredibly useful for state estimation of systems that are hard to model. But HOW exactly do you train your filter? The nitty gritty:
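One common answer, sketched under simplifying assumptions (ground-truth state supervision, backprop through the full sequence): unroll the filter over each trajectory and minimize an end-to-end loss on the filtered state, as below. This complements the DifferentiableKF sketch above and is not the paper's exact training recipe.

import torch

def train_filter(kf, dataset, epochs=50, lr=1e-3):
    """dataset yields (observations [T, obs_dim], true_states [T, state_dim]) pairs."""
    opt = torch.optim.Adam(kf.parameters(), lr=lr)
    for _ in range(epochs):
        for obs_seq, gt_seq in dataset:
            x = torch.zeros(gt_seq.shape[1])            # initial state estimate
            P = torch.eye(gt_seq.shape[1])              # initial covariance
            loss = 0.0
            for z, gt in zip(obs_seq, gt_seq):
                x, P = kf(x, P, z)                      # differentiable predict + update
                loss = loss + torch.mean((x - gt) ** 2) # supervise the filtered state
            opt.zero_grad()
            loss.backward()
            opt.step()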
@leto__jean
Jeannette Bohg
2 years
Looking forward to discussing this work with all of you tomorrow at our poster at #ICRA2022
@StanfordMSL
Stanford MSL
2 years
Check out our paper “Vision-Only Robot Navigation in a Neural Radiance World” at #ICRA2022! Come learn how to use NeRFs for trajectory optimization, state estimation, and MPC. We’ll be presenting at 4:15pm on Thursday in Room 124! More info:
@leto__jean
Jeannette Bohg
4 years
A lot of us @StanfordEng spend a lot of time at Bytes and Coupa. The employees there are hourly, and with Stanford’s effective shutdown, their hours are being cut or people are being let go. This fundraiser is to help them with financial assistance:
@leto__jean
Jeannette Bohg
4 years
Thanks so much @siddhss5 for this wonderful talk on motion planning! You can re-watch it below. Some of the audience questions were on "What should I work on?" Sidd's advice: Be fearless! To unwrap this, check out his talk on this topic here:
@RoboticsSeminar
RoboticsTodaySeminar
4 years
@leto__jean
Jeannette Bohg
1 year
We are organizing the RSS'23 Workshop on Learning for Task and Motion Planning. Contributions of short papers or Blue Sky papers are due May 19th, 2023.
@leto__jean
Jeannette Bohg
2 years
Tell me some of your favorite examples of Robotics in the space of Sustainability and Climate Change!
@leto__jean
Jeannette Bohg
2 years
How do you autonomously learn a library of composable, visuomotor skills? We propose an approach that can create novel yet feasible tasks to gradually train the skill policies on harder and harder tasks.
@KuanFang
Kuan Fang
2 years
Active Task Randomization (ATR) learns to create novel and feasible tasks for acquiring generalizable visuomotor skills. The learned skills can be composed to solve unseen sequential manipulation tasks in the real world.
@leto__jean
Jeannette Bohg
4 years
Tying a knot is a topological task that has infinitely many geometric solutions. How can a robot translate a topological plan into a motion plan? We learn a library of topological motion primitives for creating various knots. #iros2020 Project:
@leto__jean
Jeannette Bohg
6 years
Thanks @StanfordEng for this profile! Also thanks to @ai4allorg for everything you do and specifically for organizing the fireside chat described in this story!
@StanfordEng
Stanford Engineering
6 years
#IAmAnEngineer : I'm a first-generation university graduate who grew up in communist East Germany. I'm passionate about mentoring young women in science and engineering. -Jeannette Bohg, Assistant Professor of Computer Science
Tweet media one
@leto__jean
Jeannette Bohg
5 years
+ The six lessons for computer vision, at least 4 of which robotics has acknowledged for decades.
@dvazquezcv
David Vazquez
5 years
5 challenges for the next 5 years of computer vision research by Jitendra Malik at @ICCV19
Tweet media one
Tweet media two
Tweet media three
@leto__jean
Jeannette Bohg
11 months
We want robots that can manipulate any articulated object 🚪🪟📪🗃️🗄️📒📦 Turns out: this is still harder than you think! AO-Grasp proposes robot grasps on articulated objects and generalizes 0⃣ shot from sim2real. Talk to us at #BARS2023
@carlotapares
Carlota Parés-Morlans
11 months
Finding grasps that enable robots to interact with articulated objects is challenging because: 🗄️articulated objects exist in infinite joint configurations 🤖grasps need to be both stable and actionable We tackle this in our work AO-Grasp 🔗 [1/7] 🧵
@leto__jean
Jeannette Bohg
4 months
The Open-X Embodiment project will be presented today, Wed at #ICRA2024 . Award Session on Robot Manipulation 🎉 Room: CC-Main Hall Time: 10:30-12:00
@QuanVng
Quan Vuong
11 months
RT-X: generalist AI models lead to 50% improvement over RT-1 and 3x improvement over RT-2, our previous best models. 🔥🥳🧵 Project website:
@leto__jean
Jeannette Bohg
2 months
Today at #RSS2024 we are presenting SpringGrasp at the evening grasping session. Come to our poster at 6pm with all your comments and questions!
@eric_srchen
Sirui Chen
4 months
Can we grasp everyday objects with a dexterous hand from only a single RGB-D image? We propose SpringGrasp, an optimization-based grasp planner that can synthesize compliant dexterous grasps for shape-uncertain objects.
@leto__jean
Jeannette Bohg
6 years
Using contact feedback leads to a more robust grasp acquisition policy - especially for objects with a complex shape and when given noisy pose estimates. Kudos to Hamza, Miroslav, Daniel and Ludo!
@leto__jean
Jeannette Bohg
4 years
Yes!
@leto__jean
Jeannette Bohg
2 years
Come to our NeRF-Shop today at #ICRA22 ! We may have some live demos 😊
@pdculbert
Preston Culbertson
2 years
We're excited to announce our @ieee_ras_icra workshop "Motion Planning with Implicit Neural Representations of Geometry!" We'll discuss the future of INRs -- like DeepSDFs, NeRFs, and more -- in robotics. Submissions due April 15.
@leto__jean
Jeannette Bohg
3 years
Yes people, we are aiming for in-person 🥳🎆🎊
@pappasg69
George Pappas
3 years
Learning for Dynamics and Control 2022 will take place @Stanford. More information can be found at @CSSIEEE @NeurIPSConf
@leto__jean
Jeannette Bohg
4 years
This is such a great opportunity!
@robo_challenge
Real Robot Challenge
4 years
We are happy to announce the Real Robot Challenge, organized by @MPI_IS, @MILAMontreal and @NYU. Each team passing the simulation stage will have more than 100 real-robot-hours. Coding in Python, simulator in PyBullet and tutorials are provided. Start 3rd Aug.
@leto__jean
Jeannette Bohg
6 years
Dedicated to giving girls the opportunity to explore robotics. Age doesn't matter 🤖 Featuring Naos, Athena & Apollo @MPI_IS , @FRANKAEMIKA & Ocean1 @Stanford
Tweet media one
@leto__jean
Jeannette Bohg
3 years
So excited to see this news! Matt is the best. Congratulations to CMU and to Matt. He was a PostDoc at KTH in Stockholm when I was doing my PhD. Working with Matt gave me such a boost of energy! Truly enjoyed it! Can't wait to see him leading RI
@CMU_Robotics
CMU Robotics Institute
3 years
New Robotics Institute Director Ready To Shape Future of Robotics
@leto__jean
Jeannette Bohg
7 years
Need a method for your robot to visually track objects and its own arm during manipulation? Look no further: Bayesian Object tracking now also runs on Ubuntu 16.04 LTS
Tweet media one
Tweet media two
@leto__jean
Jeannette Bohg
4 years
Super excited about co-organizing this virtual Robotics Retrospectives workshop @rssconf ! The good thing about making it virtual? We don't have to worry about visas! 😊 Check out the webpage for submission info and our virtual formats
@_kainoa_
Franziska Meier
4 years
Very excited about the program of our virtual Robotics Retrospectives workshop ! Want to participate? Submit your reflections on your own past work or a subfield of robotics. with @leto__jean , Arunkumar Byravan and Akshara Rai.
@leto__jean
Jeannette Bohg
4 months
We want our robots to extrapolate from a few examples of a manipulation task to many variations. Embedding equivariance in both our object representation and our policy architecture allows our 🤖 to do just that. Today (Wed) at #ICRA2024. Room: CC-418, Time: 1:30-3pm, Poster: 4:30-6pm
@yjy0625
Jingyun Yang
11 months
From a few examples of solving a task, humans can: 🚀 easily generalize to unseen appearance, scale, pose 🎈 handle rigid, articulated, soft objects 0️⃣ all that with zero-shot transfer. Introducing EquivAct to help robots gain these capabilities. 🔗  🧵↓
@leto__jean
Jeannette Bohg
5 years
My first time at #CVPR2019. I'll be giving a talk at the Vision meets Cognition workshop in 103c along with other great speakers! Come by if you'd like to hear about the challenges and opportunities of bringing together vision and robotics
@leto__jean
Jeannette Bohg
3 years
Interested in combining good old state estimation with learning-based approaches? We got you!
@leto__jean
Jeannette Bohg
4 years
Extremely happy to have @contactrika join @StanfordIPRL as a PostDoc in January through the CIFellows 2020 program! Can't wait to work together :)
@leto__jean
Jeannette Bohg
3 years
Imagine you’re in a dark room searching for something when an object drops to the floor. The sound will tell you a lot about the object's material. Our model enables robots to do the same: infer object properties from their sound (in the future w/o clumsily dropping objects).
@leto__jean
Jeannette Bohg
4 years
Tune in again on Friday! This week we have @ancadianadragan talking about how to finally get rid of this annoying thing called reward engineering :)
@RoboticsSeminar
RoboticsTodaySeminar
4 years
Watch live: 1 PM Friday, June 12: @Berkeley_EECS ’s Anca Dragan @ancadianadragan #humanrobot interaction "Optimizing Intended Reward Functions: Extracting all the right information from all the right places"
@leto__jean
Jeannette Bohg
4 years
@black_in_ai launches the 2020/21 Graduate Application Program for those of you who self-identify as Black and/or African and intend to apply for grad school this cycle. More info here: Paging @BlackInRobotics for signal boosting 🤖
@leto__jean
Jeannette Bohg
6 months
Proud that our lab has contributed to this long-term data collection effort called DROID, an in-the-wild robot manipulation dataset of unprecedented diversity. Amazing leadership by @SashaKhazatsky, @KarlPertsch and @chelseabfinn 🧵👇
@SashaKhazatsky
Alexander Khazatsky
6 months
After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset” DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices
@leto__jean
Jeannette Bohg
5 years
Love this! A lecture that gives insights into how research directions come about, how much effort and time they take to follow through and about the context of the research community in which they blossom.
@pabbeel
Pieter Abbeel
5 years
Last semester, a guest lecture fell through last minute. So improvised a lecture on the backstories behind some research papers -- i.e. not about the research results, but about how we ended up doing research in that direction. #metaresearch Online now:
@leto__jean
Jeannette Bohg
5 years
Love the idea of this retrospective NeurIPS WS! The overarching goal of retrospectives is to do better science, increase the openness and accessibility of the machine learning field, and to show that it’s okay to make mistakes. One of those at #ICRA2020 ?
@leto__jean
Jeannette Bohg
1 year
Hi #ICRA2023 folks, we are presenting text2motion that can verify LLM plans in the 'Pretraining for Robotics' Workshop this morning. Location: ICC Capital Suite 7 Spotlights 10am, Poster session 10:20am. CU there with @agiachris
@linkevin0
Kevin Lin
1 year
Large language models (LLMs) can readily convert language instructions into high-level plans. However, should we trust robots to execute these plans without verifying that they actually satisfy the instructions and are feasible in the real world?
@leto__jean
Jeannette Bohg
10 months
How can we adopt the successful pre-training + finetuning paradigm in Robotics? Presenting RoboFuME 🤖that can learn new manipulation tasks by finetuning a multi-task policy. And all of that with minimal human supervision! 👇
@yjy0625
Jingyun Yang
10 months
Announcing RoboFuME🤖💨, a system for autonomous & efficient real-world robot learning that 1. pre-trains a VLM reward model and a multi-task RL policy from diverse off-the-shelf demo data; 2. runs RL fine-tuning online with the VLM reward model. 🔗 🧵↓
@leto__jean
Jeannette Bohg
11 months
Key Insight of TidyBot: Summarization with LLMs is an effective way to achieve generalization in robotics from just a few example preferences. Jimmy presents TidyBot on Monday afternoon at #IROS2023 : MoBIP-16.5 Come by to chat!
@jimmyyhwu
Jimmy Wu
1 year
When organizing a home, everyone has unique preferences for where things go. How can household robots learn your preferences from just a few examples? Introducing 𝗧𝗶𝗱𝘆𝗕𝗼𝘁: Personalized Robot Assistance with Large Language Models Project page:
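A toy illustration of that key insight: a short prompt that asks an LLM to first summarize a handful of example preferences into a general rule and then apply that rule to an unseen object. The prompt wording and helper function are hypothetical, not the TidyBot release (and the LLM call itself is omitted).

def preference_prompt(examples, new_object):
    """Build a summarize-then-apply prompt from (object, receptacle) example pairs."""
    lines = [f"- {obj} goes to the {place}" for obj, place in examples]
    return (
        "Observed preferences:\n" + "\n".join(lines) + "\n"
        "First, summarize these preferences as one general rule.\n"
        f"Then, following that rule, state where '{new_object}' should go."
    )

# Example usage:
print(preference_prompt([("empty soda can", "recycling bin"), ("banana peel", "compost bin")],
                        "crumpled paper towel"))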
@leto__jean
Jeannette Bohg
2 years
What is happening in Iran is heartbreaking and difficult to watch. With fellow roboticists I condemn any form of violence against innocent children, women and the people in Iran.
@leto__jean
Jeannette Bohg
5 years
@linshaonju @StanfordIPRL @krishpopdesu Thanks for all the congrats 😊 Maybe some perspective on the 100% success rate: 4 of those papers were rejected before. Although it can be frustrating to be rejected, we worked hard on improving them based on the reviews. So I am extra happy for those 4! And never give up 🙌
@leto__jean
Jeannette Bohg
4 years
A blog post on our work on fusing vision and touch in robotics! And as a bonus, check out code & data here:
@StanfordAILab
Stanford AI Lab
4 years
While humans can seamlessly combine our sensory inputs, can we teach robots to do the same? @michellearning writes about how we can use self-supervision to learn a representation that combines vision and touch.
@leto__jean
Jeannette Bohg
10 months
We worked on a new, more convenient way to goal-condition robot policies: user sketches! 👨‍🎨 Sketches are less ambiguous than language and not over-specified like goal images.
@priyasun_
Priya Sundaresan
10 months
We can tell our robots what we want them to do, but language can be underspecified. Goal images are worth 1,000 words, but can be overspecified. Hand-drawn sketches are a happy medium for communicating goals to robots! 🤖✏️Introducing RT-Sketch: 🧵1/11