Danfei Xu Profile
Danfei Xu

@danfei_xu

Followers
7K
Following
4K
Media
65
Statuses
806

Faculty at Georgia Tech @ICatGT, researcher at @NVIDIAAI | Ph.D. @StanfordAILab | Making robots smarter

Atlanta, GA
Joined August 2013
@danfei_xu
Danfei Xu
3 months
I gave an Early Career Keynote at CoRL 2024 on Robot Learning from Embodied Human Data. Recording: Slides: Extended summary thread 1/N.
2
22
137
@danfei_xu
Danfei Xu
5 years
How to do Research At the MIT AI Lab (1988). Almost all of the advice is still valid more than three decades later. Highly recommended.
8
192
857
@danfei_xu
Danfei Xu
3 years
I’ll be joining GaTech @gtcomputing @ICatGT as an Assistant Professor in Fall 2022! Looking forward to continuing my work in Robot Learning as faculty and collaborating with researchers & students at GTCS @GTrobotics @mlatgt. Reach out for collaborations / joining the lab!
47
12
512
@danfei_xu
Danfei Xu
2 months
The past decade has seen a troubling decline in Western perceptions of people of Chinese origin, largely fueled by political narratives and biases. For Chinese scholars and students in the US, this has created a constant sense of alienation. Academia has long served as a refuge.
@sunjiao123sun_
Jiao Sun
2 months
Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can’t believe this happened at the best AI conference @NeurIPSConf. We have ethical reviews for authors, but missed it for invited speakers? 😡
13
37
347
@danfei_xu
Danfei Xu
4 years
I defended!
@drfeifei
Fei-Fei Li
4 years
Very proud of my student @danfei_xu (co-advised with @silviocinguetta ) for his wonderful PhD thesis defense today! Danfei’s work in computer vision and robotic learning pushes the field forward towards enabling robots to do long horizon tasks of the real world. 1/2
19
4
290
@danfei_xu
Danfei Xu
2 years
I'm recruiting! If you are excited about teaching robots to perceive, reason about, manipulate, and move around everyday environments, apply to the CS Ph.D. program at GT (Interactive Computing) and mention my name. Apps from underrepresented groups in AI & Robotics are especially welcome!
5
53
272
@danfei_xu
Danfei Xu
3 years
A bit more formally: I'm hiring Ph.D. students in Robot Learning this year! If you are excited about the future of data-driven approaches to robotics, apply through the School of Interactive Computing at @gtcomputing by Dec 15th.
@danfei_xu
Danfei Xu
3 years
I’ll be joining GaTech @gtcomputing @ICatGT as an Assistant Professor in Fall 2022! Looking forward to continuing my work in Robot Learning as faculty and collaborating with researchers & students at GTCS @GTrobotics @mlatgt. Reach out for collaborations / joining the lab!
0
39
182
@danfei_xu
Danfei Xu
2 years
In case anyone is curious about large-scale RLHF for robotics. This is probably the first paper you should read:
2
41
172
@danfei_xu
Danfei Xu
3 months
We started this moonshot project a year ago. Now we are excited to share our progress on robot learning from egocentric human data 🕶️🤲. Key idea: egocentric human data is robot data in disguise. By bridging the kinematic, visual, and distributional gaps, we can directly leverage…
@simar_kareer
Simar Kareer
3 months
Introducing EgoMimic - just wear a pair of Project Aria @meta_aria smart glasses 👓 to scale up your imitation learning datasets!. Check out what our robot can do. A thread below👇
3
18
158
@danfei_xu
Danfei Xu
4 years
Greetings Twitterverse! Excited to share that I'm going on the academic job market this year! Check out my research at
7
22
155
@danfei_xu
Danfei Xu
3 months
Language-conditioned policy is kind of boring until we have sensorimotor data that can reach even a fraction of that diversity. Before that, language is just a one-hot task encoding.
6
15
155
@danfei_xu
Danfei Xu
3 months
We figured out a way to solve long-horizon planning problems by composing a bunch of modular diffusion models in a factor graph! This allows us to reuse the diffusion models in unseen new tasks and achieve zero-shot generalization to multi-robot collaborative manipulation tasks.
@utkarshm0410
Utkarsh Mishra
3 months
How can robots compositionally generalize over multi-object multi-robot tasks for long-horizon planning?. At #CoRL2024, we introduce Generative Factor Chaining (GFC), a diffusion-based approach that composes spatial-temporal factors into long-horizon skill plans. (1/7)
1
27
129
@danfei_xu
Danfei Xu
3 months
Just got to Munich! Looking forward to catching up with people at #CoRL2024. Also extremely honored, nervous, and excited about giving an Early Career Keynote on Thursday.
1
12
116
@danfei_xu
Danfei Xu
5 years
Excited to share my internship project @DeepMindAI with Misha Denil @notmisha! Positive-Unlabeled Reward Learning. arXiv:
3
17
104
@danfei_xu
Danfei Xu
8 years
@elonmusk No it's not a more complex game. It only requires optimizing immediate reward like health and money. Go requires longer horizon planning.
5
6
95
@danfei_xu
Danfei Xu
1 year
Since we are entering the "BC is all you need" phase of Robot Learning😜 --- Robomimic allows you to play with SOTA algorithms (BC-Transformer, DiffusionPolicy, etc.) on challenging tasks. Also easy to integrate with physical robots!
2
19
99
@danfei_xu
Danfei Xu
10 months
I often get this question: Is LLM all you need for robot planning? I'd go: "obviously not, because you need to consider physical constraints, dynamics, …", which then turns into a non-stop rant. Now I'll just point them to this paper 😎.
@GT_LIDAR
LIDAR@GT
10 months
If you're interested in learning the SOTA of optimization-based task and motion planning, please give our recent survey paper a read, ranging from classical to learning methods. @ZhaoZhigen @ShuoCheng94 @yding25 @ZiyiZhou2 @ShiqiZhang7 @danfei_xu
0
9
99
@danfei_xu
Danfei Xu
10 months
Super neat system! It seems that Chinese robotics startups have everything they need to quickly iterate on capable & low-cost hardware. Will US startups be able to compete? Chaining together Dynamixels/off-the-shelf motors likely won’t cut it…
@simonkalouche
Simon Kalouche
10 months
Nice smooth hardware
6
11
96
@danfei_xu
Danfei Xu
10 months
This is clearly going to benefit the privileged. Even the info that this conference/track exists will probably only circulate in a small group with direct ties to academia/tech (parents etc.). How about we flip this into a track for creating accessible tutorials, lectures, …
@NeurIPSConf
NeurIPS Conference
10 months
This year, we invite high school students to submit research papers on the topic of machine learning for social impact! See our call for high school research project submissions below.
1
7
94
@danfei_xu
Danfei Xu
5 years
Excited to share Generalization Through Imitation (GTI)! GTI learns visuomotor control from human demos and generalizes to new long-horizon tasks by leveraging latent compositional structures. Joint w/ @AjayMandlekar @RobobertoMM @silviocinguetta @drfeifei
2
26
92
@danfei_xu
Danfei Xu
1 year
Can't believe that I just came across this insanely cool paper. 3D gaussian seems to be such an intuitive representation to model large & dynamic scenes (Lagrangian vs. Eularian). Expect it to drive a whole new wave of dense/obj-centric representation w/ self-supervision.
@JonathonLuiten
Jonathon Luiten
1 year
Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis. We model the world as a set of 3D Gaussians that move & rotate over time. This extends Gaussian Splatting to dynamic scenes, with accurate novel-view synthesis and dense 3D trajectories.
3
10
86
@danfei_xu
Danfei Xu
10 months
160 H100s for the GT Makerspace!
@gatechengineers
Georgia Tech College of Engineering
10 months
Putting the promise of AI directly in students’ hands: We’re powering up the Georgia Tech AI Makerspace - a student-focused AI supercomputer hub. Proud to work with @nvidia and @WeAre_Penguin to make this a reality on campus for our students.
4
5
83
@danfei_xu
Danfei Xu
9 months
Only in Japan do you come across a proper bimanual mobile manipulator in a random shopping mall.
4
4
82
@danfei_xu
Danfei Xu
2 months
Is teleoperation + BC our ultimate path to productizing Robot Learning? Well… Talk is cheap, show me your data & policy! We are thrilled to organize a teleoperation and imitation learning challenge at #ICRA2025, with a total prize pool of $200,000 (cash + robots)! General
0
16
75
@danfei_xu
Danfei Xu
5 years
Accepted to RSS 2020!
@danfei_xu
Danfei Xu
5 years
Excited to share Generalization Through Imitation (GTI)! GTI learns visuomotor control from human demos and generalizes to new long-horizon tasks by leveraging latent compositional structures. Joint w/ @AjayMandlekar @RobobertoMM @silviocinguetta @drfeifei
2
6
71
@danfei_xu
Danfei Xu
4 years
New preprint! Affordance is a versatile repr. to reason about interactions in a complex world. But it is also *myopic*, because it only means that an action is feasible, not that it leads to a long-term goal. How can we use affordances to plan for long-horizon tasks? 1/
4
6
69
@danfei_xu
Danfei Xu
5 years
We present Regression Planning Network (RPN), a type of recursive network architecture that learns to perform high-level task planning from video demonstrations. #NeurIPS2019 (1/3)
1
21
67
@danfei_xu
Danfei Xu
10 months
🤖 Inspiring the Next Generation of Roboticists! 🎓 Our lab had an incredible opportunity to demo our robot learning systems to local K-12 students for the National Robotics Week program @GTrobotics. A big shout-out to @saxenavaibhav11 @simar_kareer @pranay_mathur17 for hosting
1
11
65
@danfei_xu
Danfei Xu
3 years
Applying imitation learning to real-world problems takes more than new algorithms. We are organizing a workshop, "Overlooked Aspects of Imitation Learning: Systems, Data, Tasks, and Beyond,” at RSS22! Exciting speakers & more to come. Submit by May 7th!
1
8
63
@danfei_xu
Danfei Xu
5 years
It's a strange time to share this, but I'll be co-instructing the Stanford CS231n course next quarter! Now that all courses are pass/fail, we might experiment w/ something new 😃. Suggestions / tips on online lecturing are appreciated!
2
2
61
@danfei_xu
Danfei Xu
11 months
We also made a similar transition to ROS-free. The non-obvious thing is that modern NN models (BC policies, VLMs, LLMs) break the abstraction of ROS modules. Raw sensory streams instead of state estimation, actions instead of plans, etc. We need a new ROS for the next-gen modules.
@chris_j_paxton
Chris Paxton
11 months
Interesting (and sad) result here; I really had hoped more people would be able to just run with ROS2. But it seems like it's not quite there, if this is in any way worth doing for a small company/fast-moving startup that should be the target audience.
1
4
57
@danfei_xu
Danfei Xu
5 years
Presenting two papers at #NeurIPS2019! Come say hi if you are around. 1. Regression Planning Networks: We combine classic symbolic planning and recursive neural networks to plan for long-horizon tasks end-to-end from image input. Paper & Code: 1/
1
9
59
@danfei_xu
Danfei Xu
2 years
One of the most impressive CV works I've seen recently. Also huge kudos to Meta AI for sticking to open sourcing despite the trend increasingly going towards the opposite direction.
@AIatMeta
AI at Meta
2 years
Today we're releasing the Segment Anything Model (SAM) — a step toward the first foundation model for image segmentation. SAM is capable of one-click segmentation of any object from any photo or video + zero-shot transfer to other segmentation tasks ➡️
0
3
58
@danfei_xu
Danfei Xu
3 months
#CoRL2024 full conference recording is now public!
0
9
57
@danfei_xu
Danfei Xu
7 months
Can we teach a robot hundreds of tasks with only dozens of demos? This is only possible with a truly compositional system. Our new work, NOD-TAMP, learns generalizable skills from only a handful of demos and composes them to zero-shot solve long-horizon tasks, including loading…
@ShuoCheng94
Shuo Cheng
7 months
Can we teach a robot hundreds of tasks with only dozens of demos?.Introducing NOD-TAMP: A framework that chains together manipulation skills from as few as one demo per skill to compositionally generalize across long-horizon tasks with unseen objects and scenes. (1/N)
0
5
57
@danfei_xu
Danfei Xu
3 years
Our group, headed by @MarcoPavoneSU at NVIDIA Research, is hiring full-time RS and interns! Tons of cool problems in planning, control, imitation, and RL. Job postings 👇 Intern: Full-time:
0
5
54
@danfei_xu
Danfei Xu
5 years
In 2010 I was a high school senior in Shanghai. I cold-called a company making educational robots and started my first internship in robotics. Almost a decade later, I’m doing a Ph.D. at Stanford, still in robotics, still happy. Let’s see where the next decade leads me.
1
0
50
@danfei_xu
Danfei Xu
9 months
Congratulations to @ShuoCheng94 for leading LEAGUE, 1 of 5 papers out of 1200+ to receive an RA-L best paper award honorable mention at ICRA! As the sole student author on a two-person team in a field trending towards 10+ authors/paper, Shuo's vision and technical prowess shine.
3
1
50
@danfei_xu
Danfei Xu
2 years
Annnnd that's a wrap! First semester teaching at GT and it's been an absolute blast. Really happy to see the progression of the student projects and the final poster session joined by ~170 students. Couldn't have made it without my awesome TAs. Thanks @mlatgt for the sponsorship!
1
1
50
@danfei_xu
Danfei Xu
11 months
Detail: 10hz images -> 200hz EEF control. I'm guessing they keep the same image token for 20 steps while updating proprio state? Also, given how smooth the motion looks --- a high-quality OSC implementation?
@coreylynch
Corey Lynch
11 months
Finally, let's talk about the learned low-level bimanual manipulation. All behaviors are driven by neural network visuomotor transformer policies, mapping pixels directly to actions. These networks take in onboard images at 10hz, and generate 24-DOF actions (wrist poses and
2
2
48
@danfei_xu
Danfei Xu
9 months
A good day for @ICatGT @GTrobotics. Congratulations @sehoonha @naokiyokoyama0 @shahdhruv_ and @ShuoCheng94 !
3
7
48
@danfei_xu
Danfei Xu
2 years
First work coming out of my lab at GT! LEAGUE is a "virtuous cycle" system that combines the merits of Task and Motion Planning and RL. The result is continually-learning and generalizable agents that can carry their knowledge to new tasks and even environments.
@ShuoCheng94
Shuo Cheng
2 years
Introducing LEAGUE - Learning and Abstraction with Guidance! LEAGUE is a new framework that uses symbolic skill operators to guide skill learning and state abstraction, allowing it to solve long-horizon tasks and generalize to new tasks and domains. Joint work with @danfei_xu 1/6
1
10
47
@danfei_xu
Danfei Xu
1 year
Super excited about this new #CoRL2023 work on compositional planning! We introduce a new generative planner (GSC) that composes skill-level diffusion models to solve long-horizon manipulation problems, without ever training on long-horizon tasks. @ICatGT @GTrobotics @mlatgt
@utkarshm0410
Utkarsh Mishra
1 year
How to enable robots to plan and compositionally generalize over long-horizon tasks?. At #CoRL2023, we introduce Generative Skill Chaining (GSC), a diffusion-based, generalizable and scalable approach to compose skill-level transition models into a task-level plan generator.(1/7)
1
3
44
@danfei_xu
Danfei Xu
1 year
This blew my mind 100x more than GPTs did.
@OpenAI
OpenAI
1 year
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. Prompt: “Beautiful, snowy
3
1
44
@danfei_xu
Danfei Xu
1 year
Among so many thoughtful & nuanced discussions on regulating AI, the EU chooses to "mitigate the risk of extinction from AI". This is some sort of joke, right?
@EU_Commission
European Commission
1 year
Mitigating the risk of extinction from AI should be a global priority. And Europe should lead the way, building a new global AI framework built on three pillars: guardrails, governance and guiding innovation ↓
1
6
40
@danfei_xu
Danfei Xu
4 years
We are organizing the Deep Representation and Estimation of State tutorial at the virtual IROS2020! .Fantastic speaker line-up: @leto__jean, Yunfei Bai, and @ChrisChoy208. Co-organized with @KuanFang and @deanh_tw. A short thread about each session👇
1
9
43
@danfei_xu
Danfei Xu
11 months
Agreed. Humanoid is a problem, not a solution.
@watneyrobotics
Watney Robotics
11 months
Do you really need legs? We don't think so. As much we love anthropomorphic humanoids (our co-founder built one in 9th grade), we believe virtually all menial tasks can be done with two robot arms, mounted on wheels. In our view, @1x_tech's Eve robot is the optimal form factor
2
3
41
@danfei_xu
Danfei Xu
2 years
Honored to be selected as a DARPA Riser and to give a talk about our robot learning work!
@DARPA
DARPA
2 years
T minus 2 hours until we begin our next #DARPAForward event @GeorgiaTech. @DoDCTO Heidi Shyu will kick off a packed agenda featuring experts on pandemic preparedness, cybersecurity, and more. Visit our page for more on how you can join future events:
4
1
39
@danfei_xu
Danfei Xu
1 year
modular -> end to end -> modular.
@matei_zaharia
Matei Zaharia
1 year
Interesting trend in AI: the best results are increasingly obtained by compound systems, not monolithic models. AlphaCode, ChatGPT+, Gemini are examples. In this post, we discuss why this is and emerging research on designing & optimizing such systems.
4
2
38
@danfei_xu
Danfei Xu
6 years
Dr. Pedro Domingos to Lead the Independent R&D Effort at D. E. Shaw's new machine learning research group.
1
10
38
@danfei_xu
Danfei Xu
5 months
Our #CoRL2024 work on learning tactile control policies directly from human hand demonstrations. Check out Kelin's tweet for more details!
@ColinYu14116982
Kelin Yu
5 months
Introducing MimicTouch, our new paper accepted by #CoRL2024 (also the Best Paper Award at the #NIPS2024 TouchProcessing Workshop). MimicTouch learns tactile-only policies (no visual feedback) for contact-rich manipulation directly from human hand demonstrations. (1/6)
0
3
40
@danfei_xu
Danfei Xu
1 year
Robotics datasets are expanding at an unprecedented pace. How do we control the quality of the collected data? Our #CoRL2023 work presents an offline imitation learning method that learns to discern (L2D) expert data in a mixed-quality demonstration dataset. Code coming soon!
@SachitKuhar
Sachit Kuhar
1 year
Introducing our #CoRL2023 work Learning to Discern (L2D)! As robotics datasets grow, quality control becomes ever more important. L2D is our solution for handling mixed-quality demo data for offline imitation learning. (1/6)
0
9
39
@danfei_xu
Danfei Xu
2 years
An open-source playground for training generative agents from real-world driving data! Work led by @Yuxiao_Chen_ @iamborisi @drmapavone and myself from our team at @NVIDIAAI, in close collaboration with @NVIDIADRIVE.
@Yuxiao_Chen_
Yuxiao Chen
2 years
We are excited to announce the release of Traffic Behavior Simulation (TBSIM), developed by the Nvidia Autonomous Vehicle research group, which is our software infrastructure for closed-loop simulation with data-driven traffic agents. (1/7)
2
2
36
@danfei_xu
Danfei Xu
5 years
Used ACME for my summer internship project @DeepMind, can confirm it's an amazing framework.
@GoogleDeepMind
Google DeepMind
5 years
Interested in playing around with RL? We’re happy to announce the release of Acme, a light-weight framework for building and running novel RL algorithms. We also include a range of pre-built, state-of-the-art agents to get you started. Enjoy!
0
2
38
@danfei_xu
Danfei Xu
5 years
We are organizing a workshop on imitation learning at RSS2020! The workshop will bring together well-known researchers in the field. CfP includes short-length, full-length, and position papers. Tentative submission deadline: Apr 9th. RT and spread the word!
0
11
34
@danfei_xu
Danfei Xu
1 year
Need a fully automated sim-to-real pipeline to train locomotion policies for an arbitrary robot (with URDF) in ~1hr.
@davsca1
Davide Scaramuzza
1 year
@DisneyResearch introduces their new robot at #IROS2023! Trained in simulation with #reinforcementlearning! @ieeeiros
0
0
36
@danfei_xu
Danfei Xu
4 months
New work on Humanoid Sim2Real by @fukangliuu! Key takeaway: Trajectory Optimization (TO) can generate dynamics-aware reference trajectories (how much torque should I exert to pick up a heavy vs. a light box?). Policies trained to track these trajectories in simulation can be…
@fukangliuu
Fukang Liu
4 months
Opt2Skill ( combines TO and RL to enable real-world humanoid loco-manipulation tasks. Shoutout to all the collaborators! @gu_zy14 @Yilin_Cai98 @ZiyiZhou2 @HyunyoungJung5 @sehoonha @danfei_xu @GT_LIDAR
1
3
35
@danfei_xu
Danfei Xu
1 year
Looking forward to welcoming everyone to the beautiful ATL! @GTrobotics @ICatGT @gtcomputing
@corl_conf
Conference on Robot Learning
1 year
Gearing up for the conference next week, check this interactive feature as you prep for your time at the conference. Discover cool papers and insights. Did you know that we have 199 contributed papers from 873 authors originating in 25 countries! 🤯.
0
2
32
@danfei_xu
Danfei Xu
4 years
A thread by my awesome co-instructor @RanjayKrishna recapping @cs231n for the past quarter. It happens to be the *largest* class on campus this quarter! Thanks to all the teaching staff, especially our head TA @kevin_zakka, for making this course possible!
@RanjayKrishna
Ranjay Krishna
4 years
Academic quarter recap: here's a staff photo after the last lecture of @cs231n. It's crazy that we were the largest course at Stanford this quarter. This year, we added new lectures and assignments (open sourced) on attention, transformers, and self-supervised learning.
1
1
34
@danfei_xu
Danfei Xu
1 year
Object representation is a fundamental problem for robotic manipulation. Our #CoRL2023 work found that a *density field* can efficiently represent the state and dynamics of non-rigid objects such as granular material. To be presented as a spotlight & poster on Thursday!
@ShangjieXue
Shangjie Xue
1 year
How to represent granular materials for robot manipulation?. Introducing our #CoRL2023 project: Neural Field Dynamics Model for Granular Object Piles Manipulation, a field-based dynamics model for granular object piles manipulation. 🌐 👇 Thread
0
5
34
@danfei_xu
Danfei Xu
5 years
Accepted to ICRA 2020! Paper & code:
@RobobertoMM
Roberto
5 years
We present 6-PACK, an RGB-D category-level 6D pose tracker that generalizes between instances of classes based on a set of anchors and keypoints. No 3D models required! Code+Paper: w/ Chen Wang @danfei_xu Jun Lv @cewu_lu @silviocinguetta @drfeifei @yukez
2
7
33
@danfei_xu
Danfei Xu
8 months
My favorite booth at the #CVPR2024 expo
0
1
33
@danfei_xu
Danfei Xu
5 years
Join us on Sunday at 9:00-1:30pm PT for the Advances & Challenges in Imitation Learning for Robotics #RSS2020 Workshop: with an exciting list of speakers! Live streaming at
1
3
32
@danfei_xu
Danfei Xu
10 months
It was an honor to have been part of this epic journey!
@drfeifei
Fei-Fei Li
10 months
It’s that time of the year - first lecture of @cs231n !! It’s the 9th year since @karpathy and I started this journey in 2015, what an incredible decade of AI and computer vision! Am so excited to this new crop of students in CS231n! (Co-instructing with @eadeli this year 😍🤩)
0
1
31
@danfei_xu
Danfei Xu
3 years
Incredibly excited to be able to attend an academic event in person!
@DorsaSadigh
Dorsa Sadigh
3 years
Bay Area Robotics Symposium (BARS) will be happening in person this Friday on October 29!. The registration will close on October 27th, 5 p.m. Register here: Program:
0
0
30
@danfei_xu
Danfei Xu
3 years
Whoa, this is huge! No more wrangling w/ python2 compatibility & root access issues.
@TobiasRobotics
Tobias Fischer
3 years
Incredibly happy that our @RoboStack paper has been accepted to the @ieeeras Robotics & Automation Magazine 🥳. @RoboStack brings together #ROS @rosorg with @condaforge and @ProjectJupyter. Preprint: Find out some key benefits in this 🧵: 1/n
1
5
28
@danfei_xu
Danfei Xu
5 years
Check out our #ICCV2019 work on harnessing mid-level representations in training interactive agents.
@yukez
Yuke Zhu
5 years
We are releasing our #ICCV2019 work on goal-directed visual navigation. We introduced a method that harnesses different perception skills based on situational awareness. It makes a robot reach its goals more robustly and efficiently in new environments.
0
3
28
@danfei_xu
Danfei Xu
4 years
Check out our new work on imitation learning from human demos! We released a set of sim & real tasks, demo datasets, and a modular codebase with clean APIs to help you develop new algorithms!
@AjayMandlekar
Ajay Mandlekar
4 years
Robot learning from human demos is powerful yet difficult due to a lack of standardized, high-quality datasets. We present the robomimic framework: a suite of tasks, large human datasets, and policy learning algorithms. Website: 1/
0
2
28
@danfei_xu
Danfei Xu
5 years
Blog post by @deanh_tw and me summarizing our line of work on generalizable imitation of long-horizon tasks: Neural Task Programming, Neural Task Graphs, and Continuous Relaxation of Symbolic Planner. Enjoy!
@StanfordAILab
Stanford AI Lab
5 years
What if we can teach robots to do new tasks just by showing them one demonstration? In our newest blog post, @deanh_tw and @danfei_xu show us three approaches that leverage compositionality to solve long-horizon one-shot imitation learning problems.
1
4
27
@danfei_xu
Danfei Xu
3 years
@gtcomputing @ICatGT @GTrobotics @mlatgt In the meantime, I will spend my gap year at @nvidia Research. I couldn’t be more excited, and I’m immensely grateful to my advisors @silviocinguetta @drfeifei and the many collaborators and friends who helped me get here.
0
1
27
@danfei_xu
Danfei Xu
10 months
Data fuels the progress in robotics, whether it's sim, real teleoperated, or auto-generated. Our workshop at #RSS2024 will bring together researchers from academia, industry, and startups around the world to share insights🧐 and hot takes 🔥.
@AjayMandlekar
Ajay Mandlekar
10 months
Data is the key driving force behind success in robot learning. Our upcoming RSS 2024 workshop "Data Generation for Robotics” will feature exciting speakers, timely debates, and more! Submit by May 20th.
0
1
25
@danfei_xu
Danfei Xu
2 years
Robot auction sale by Intrinsic (Alphabet’s industrial automation/robotics startup). Sad to see this.
@chr1sa
Chris Anderson
2 years
If you're a hardware biz or R&D lab in Silicon Valley, you should definitely be keeping your eye on the liquidation auctions, which are on fire right now. This one is auctioning off more than 100 new and used Kuka robot arms: .
2
1
25
@danfei_xu
Danfei Xu
5 years
It's today! DeepRL workshop
@danfei_xu
Danfei Xu
5 years
@RobobertoMM @deanh_tw @yukez @silviocinguetta @drfeifei 2. Positive-Unlabeled Reward Learning, Deep Reinforcement Learning Workshop. Joint work with @notmisha. 3/
0
8
27
@danfei_xu
Danfei Xu
3 years
Our new work on training competent robot collaborators from human-human collaboration demonstrations! @corl_conf @stanfordsvl @StanfordAILab
0
4
27
@danfei_xu
Danfei Xu
1 year
Learning for high-precision manipulation is critical to bridge *intelligence* to repeatable *automation*. C3DM is a diffusion model that learns to remove noise from the input by "fixating" on the target object. To be presented at the Deployable Robot workshop at #CoRL2023 today!
@saxenavaibhav11
Vaibhav Saxena
1 year
Introducing C3DM 🤖 - a Constrained-Context Conditional Diffusion Model that solves robotic manipulation tasks with:. ✅ high precision and .✅ robustness to distractions!. 👇 Thread
0
2
27
@danfei_xu
Danfei Xu
2 years
Fantastic research led by Chen! Continuing our work on hierarchical imitation for real-world long-horizon manipulation. It turns out that we can train the high-level planner directly from *human video*. This greatly reduces the need for on-robot data and improves robustness. 1/2
@chenwang_j
Chen Wang
2 years
How to teach robots to perform long-horizon tasks efficiently and robustly🦾?. Introducing MimicPlay - an imitation learning algorithm that uses "cheap human play data". Our approach unlocks both real-time planning through raw perception and strong robustness to disturbances!🧵👇
1
1
24
@danfei_xu
Danfei Xu
3 years
To carry out long-horizon tasks, robots must plan far and wide into the future. What state space should the robot plan with, and how can they plan for objects & scenes that they have never seen before? See 👇for our new work on Generalizable Task Planning (GenTP).
@chenwang_j
Chen Wang
3 years
1/ Can we improve the generalization capability of a vision-based task planner with representation pretraining?. Check out our RAL paper on learning to plan with pre-trained object-level representation. Website:
0
0
26
@danfei_xu
Danfei Xu
8 months
Active perception with NeRF! It’s quite rare to see work that is both principled and empirically effective. Neural Visibility Field (NVF), led by @ShangjieXue, is a delightful exception. NVF unifies both visibility and appearance uncertainty in a Bayes net framework and achieved…
@ShangjieXue
Shangjie Xue
8 months
How can robots efficiently explore and map unknown environments? 🤖📷. Introducing Neural Visibility Field (NVF), a principled framework to quantify uncertainty in NeRF for next-best-view planning. #CVPR2024 1/6. 🌐 👇 Thread
0
2
27
@danfei_xu
Danfei Xu
1 year
Excited to share our milestone in building generalizable long-horizon task solvers at #CoRL2023! As part of our long-term vision for a never-ending data engine for everyday tasks, HITL-TAMP combines the best of structured reasoning (TAMP) and end-to-end imitation learning.
@AjayMandlekar
Ajay Mandlekar
1 year
How can humans help robots improve? Introducing Human-In-The-Loop Task and Motion Planning (HITL-TAMP), a perpetually-evolving TAMP system that learns visuomotor skills from human demos for contact-rich, long-horizon tasks. #CoRL2023. Website: 1/
0
2
26
@danfei_xu
Danfei Xu
6 years
So on a whim I decided that I wanted to know more about the reward hypothesis of RL and found this page. Quite an interesting read.
1
7
25
@danfei_xu
Danfei Xu
5 years
I admire people who can explain complex things clearly.
1
0
24
@danfei_xu
Danfei Xu
8 months
First CVPR in 5 years! Looking forward to catching up with friends & making new ones!
1
2
24
@danfei_xu
Danfei Xu
1 year
Very nice post! Slightly different take: Scaling up should be the **question**, not the answer. Yes, we need to scale up to more tasks, envs, and robots, but there should be many possible answers to this question. Training on lots of data may be one answer, but it should not be the only one.
@nishanthkumar23
Nishanth Kumar
1 year
There was a lot of good and interesting debate on "is scaling all we need to solve robotics?" at #CoRL23. I spent some time writing up a blog post about all the points I heard on both sides:
1
1
24
@danfei_xu
Danfei Xu
7 years
Our paper on learning generalizable neural programs for complex robot tasks will appear in #icra2018! See you soon. Arxiv: Two minutes paper: Video:
0
7
22
@danfei_xu
Danfei Xu
2 years
Ah yes the familiar feeling of "everything that can go wrong goes wrong" end-of-the-semester chaos.
0
0
21
@danfei_xu
Danfei Xu
4 years
I got my first robotics research experience through CMU RISS. Fantastic program and mentors!
@roboVisionCMU
CMU Center for Perceptual Computing and Learning
4 years
Fully funded undergraduate research internships at CMU’s Robotics Institute! Apply by Jan 15, 2021!
0
0
21
@danfei_xu
Danfei Xu
4 years
TFW you know enough to understand that something is really hard but don't know enough to make meaningful progress.
1
0
20
@danfei_xu
Danfei Xu
1 year
We! are! hiring! 👏
@mark_riedl
Mark Riedl
1 year
Yo! Georgia Tech School of Interactive Computing @ICatGT is live! Come be part of the coolest computing science department in the world.
0
1
20
@danfei_xu
Danfei Xu
2 years
This year’s EECS Rising Stars workshop will be hosted by GaTech! Submit your materials by July 10th. Please RT and spread the word!
@kexinrong
Kexin Rong
2 years
The EECS Rising Stars 2023 Workshop is now accepting applications 🎉Check out for more details. Deadline is July 10th. Help spread the word!.
0
3
20
@danfei_xu
Danfei Xu
5 years
Trying to better understand contrastive learning: Intuitively, contrastive learning relies on dense pos/neg sample coverage. SimCLR & others increase coverage using image augmentation. But how dense does the space have to be, & what about spaces that cannot be augmented easily?
2
0
19
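The augmentation-based objective the tweet refers to can be sketched as a SimCLR-style NT-Xent loss: positives (two augmented views of the same image) are pulled together, while every other sample in the batch serves as a negative. This is a minimal NumPy sketch under the assumption that consecutive rows form positive pairs; `nt_xent_loss` and that pairing convention are illustrative, not from any particular codebase.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """SimCLR-style NT-Xent contrastive loss (sketch).

    z: (2N, d) array of embeddings, where rows 2k and 2k+1 are the
    two augmented views of the same image (the positive pair).
    """
    # Normalize rows so dot products become cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    n = z.shape[0]
    # Each even-indexed row's positive is the next row, and vice versa.
    pos = np.arange(n) ^ 1
    # Log-probability of picking the positive among all other samples.
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

The density question in the tweet maps directly onto the denominator: with few or easy negatives, the softmax is nearly saturated and provides little gradient signal, which is one reason large batches (or hard-negative mining) matter in practice.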
@danfei_xu
Danfei Xu
5 years
Google files patent “Deep Reinforcement Learning for Robotic Manipulation”
1
5
19
@danfei_xu
Danfei Xu
2 years
Excited about hierarchy, abstraction, model learning, skill learning, planning with LLMs, and benchmarking long-horizon manipulation tasks? Submit a paper to our Learning for Task and Motion Planning (L4TAMP) workshop at RSS'23!
@leto__jean
Jeannette Bohg
2 years
We are organizing the RSS’23 Workshop on Learning for Task and Motion Planning . .Contributions of short papers or Blue Sky papers are due May 19th, 2023.
1
2
18
@danfei_xu
Danfei Xu
4 years
A neat extension of our Regression Planning Networks to 3D scene graphs and more fine-grained skills!
@yukez
Yuke Zhu
4 years
Delighted to present our recent work on hierarchical Scene Graphs for neuro-symbolic manipulation planning. We use 3D Scene Graphs as an object-centric abstraction to reason about long-horizon tasks. w/ @yifengzhu_ut, Jonathan Tremblay, Stan Birchfield
0
1
18
@danfei_xu
Danfei Xu
1 year
As the Deep Learning course at GT draws to a close this semester, I'd like to extend a heartfelt thanks to @WilliamBarrHeld. His exceptional lecture and programming assignment on Transformers and LLMs were truly enlightening. Don't miss out on these incredible resources!.
@WilliamBarrHeld
Will Held
1 year
For @danfei_xu's Deep Learning course this semester, I made a homework for Transformers and gave a lecture on LLMs. I'm sharing resources I made for both in hopes they are useful for others!. Lecture Slides: HW Colab:
1
0
17
@danfei_xu
Danfei Xu
5 years
Our new work on learning real-time 6DoF tracking from RGB-D data.
@RobobertoMM
Roberto
5 years
We present 6-PACK, an RGB-D category-level 6D pose tracker that generalizes between instances of classes based on a set of anchors and keypoints. No 3D models required! Code+Paper: w/ Chen Wang @danfei_xu Jun Lv @cewu_lu @silviocinguetta @drfeifei @yukez
0
1
17
@danfei_xu
Danfei Xu
6 years
OPT is practically the only pathway for international students like me to legally work in the U.S. after graduation. This is beyond short-sighted.
@kaishengtai
Kai Sheng Tai
6 years
The OPT program is crucial for retaining talented international students in the US. I relied on the OPT myself for summer internships during college and for full-time work after graduation.
1
1
15
@danfei_xu
Danfei Xu
2 years
Looking forward to welcoming the robot learning community to ATL!.
@corl_conf
Conference on Robot Learning
2 years
The cat is out of the bag! We'll be in Atlanta next year. #CoRL2022 #CoRL2023
0
1
17
@danfei_xu
Danfei Xu
1 year
Make sure to check out our workshop on NeRF + robotics happening tomorrow at #ICCV2023, Paris and virtually!
@yuewang314
Yue Wang
1 year
#ICCV2023 Join us for the “Neural Fields for Autonomous Driving and Robotics” workshop, 8:55–17:00 on 10/3 at S03! We have a great lineup of speakers @vincesitzmann @jon_barron @AjdDavison @LingjieLiu1 @jiajunwu_cs @lucacarlone1 @Jamie_Shotton.
1
0
16
@danfei_xu
Danfei Xu
11 months
This is a great effort to collect a large robot dataset on a standardized hardware setup! Also happy to see that Robomimic has been adopted as the core policy-learning infrastructure.
@SashaKhazatsky
Alexander Khazatsky
11 months
After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset”. DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices
0
2
14
@danfei_xu
Danfei Xu
5 years
Cool idea: learning feature embeddings via a multi-view contrastive loss, similar to DenseObjectNet by Florence et al., 2018.
@AdamWHarley
Adam W. Harley
5 years
Our #ECCV2020 paper is now on arXiv. We show that 3D object tracking emerges automatically when you train for multi-view correspondence. No object labels necessary!.Video: results from KITTI. Bottom right shows a bird's eye view of the learned 3D features.
0
0
15