Yuke Zhu

@yukez

Followers
17K
Following
1K
Media
60
Statuses
324

Assistant Professor @UTCompSci | Co-Leading GEAR @NVIDIAAI | CS PhD @Stanford | Building generalist robot autonomy in the wild | Opinions are my own

Austin, TX
Joined August 2008
@yukez
Yuke Zhu
10 months
Got a taste of @Tesla's FSD v12.3.4 last night. By no means flawless, but the human-like driving maneuvers (with no interventions) delivered a magical experience. Excited to witness the recipe of scaling law and data flywheel for full autonomy show signs of life in real products.
31
197
2K
@yukez
Yuke Zhu
4 years
The game of tenure-track faculty job:
ℍ𝕒𝕣𝕕 𝕞𝕠𝕕𝕖: 1st year
ℍ𝕖𝕝𝕝 𝕞𝕠𝕕𝕖: 1st year + COVID-19
𝕀𝕟𝕗𝕖𝕣𝕟𝕠 𝕞𝕠𝕕𝕖: 1st year + COVID-19 + No Power/Internet in freezing Texas
P.S. It has been great fun to play. What's next?
26
26
951
@yukez
Yuke Zhu
6 months
Proud to see our latest progress on Project GR00T featured in Jensen's #SIGGRAPH2024 keynote talk today! We integrated our RoboCasa and MimicGen works into NVIDIA Omniverse and Isaac, enabling model training across the Data Pyramid from real-robot data to large-scale simulations.
13
74
435
@yukez
Yuke Zhu
3 months
The million-dollar question in humanoid robotics is: Can humanoids tap into Internet-scale training data such as online videos due to their human-like physique? Our #CoRL2024 oral paper showed the promise of humanoids learning new skills from single video demonstrations. (1/n)
13
108
569
@yukez
Yuke Zhu
3 years
My Robot Learning class @UTCompSci is updated with the latest advances and trends, such as implicit representations, attention architectures, offline RL, human-in-the-loop, and synthetic data for AI. All materials will be public. Enjoy! #RobotLearning
13
104
541
@yukez
Yuke Zhu
5 years
New work: we built a meta-learning algorithm for an agent to discover cause-and-effect relations from its visual observations and to use such causal knowledge to perform goal-directed tasks. Paper: Joint work w/ @SurajNair_1 @drfeifei @silviocinguetta
5
107
453
@yukez
Yuke Zhu
8 months
Excited to announce RoboCasa, a large-scale simulation framework of everyday tasks! We use generative AI tools to create diverse objects, scenes, and tasks. Simulation plays a pivotal role in our Data Pyramid for training generalist robots. Open-source at
16
94
463
@yukez
Yuke Zhu
11 months
📢Update announced in today’s #GTC2024 Keynote📢 We are working on Project GR00T, a general-purpose foundation model for humanoid robots. GR00T will enable the robots to follow natural language instructions and learn new skills from human videos and demonstrations. Generalist
12
63
423
@yukez
Yuke Zhu
5 years
Heard students say WFH lowers productivity. In 1665, a Cambridge college student had to WFH during a pandemic. He got away from professors and worked on math alone. When he returned, the world knew him as Isaac Newton! Good time to think hard in pajamas.
8
114
404
@yukez
Yuke Zhu
11 months
Thrilled to co-lead this new team with my long-time collaborator @DrJimFan. We are on a mission to build transformative breakthroughs in the landscape of Robotics and Embodied Agents. Come join us and shape the future together!
@DrJimFan
Jim Fan
11 months
Career update: I am co-founding a new research group called "GEAR" at NVIDIA, with my long-time friend and collaborator Prof. @yukez. GEAR stands for Generalist Embodied Agent Research. We believe in a future where every machine that moves will be autonomous, and robots and
16
18
304
@yukez
Yuke Zhu
5 years
Life update: I will be joining @UTAustin as an Assistant Professor in @UTCompSci starting Fall 2020. I am thrilled to continue my research on robot learning and perception as a faculty member and look forward to collaborating with the exceptional faculty, researchers, and students at UT.
22
15
371
@yukez
Yuke Zhu
3 years
Honored to receive the NSF CAREER award titled "Intelligent Manipulation in the Real World via Modularity and Abstraction" to advance our lab's research on building an autonomy stack for general-purpose robot manipulation in the wild!
26
15
341
@yukez
Yuke Zhu
4 years
Number 50 is a role model for PhD advisors.
2
22
339
@yukez
Yuke Zhu
2 years
Excited to share our latest progress on legged manipulation with humanoids. We created a VR interface to remote control the Draco-3 robot 🤖, which cooks ramen for hungry graduate students at night. We can't wait for the day it will help us at home in the real world! #humanoid
7
65
317
@yukez
Yuke Zhu
5 years
Releasing my Stanford Ph.D. dissertation and talk slides "Closing the Perception-Action Loop: Towards Building General-Purpose Robot Autonomy", a summary of my work on robot perception and control @StanfordSVL. Slides: Dissertation:
3
54
305
@yukez
Yuke Zhu
3 years
Congratulations to @snasiriany and @huihan_liu on winning the #ICRA2022 Outstanding Learning Paper award for their first paper @UTCompSci “Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks”!
23
20
303
@yukez
Yuke Zhu
5 years
Today we are on the road to Austin, TX. I have a pleasant melancholy moving out of the Bay Area, a place where we have lived for seven years, leaving behind many fond memories and long-time friends. Meanwhile, I am thrilled to start a new life. Tons of exciting things to come!
22
3
267
@yukez
Yuke Zhu
14 days
We are accepting research proposals to accelerate Robotics + AI through the NVIDIA Academic Grant Program. Our Edge AI call is seeking projects on GPU simulations, learning-based control, and foundation models for humanoid robotics. Apply by March 31:
4
41
261
@yukez
Yuke Zhu
4 years
Taught my first (online) class @UTCompSci. Super pumped to teach a grad-level Robot Learning seminar this fall. Great to see UT students from all kinds of backgrounds passionate about learning what’s going on at the forefront of AI + Robotics🤘Syllabus:
6
21
252
@yukez
Yuke Zhu
1 year
Very impressed by the new @Tesla_Optimus end2end skill learning video! Our TRILL work ( spills some secret sauce: 1. VR teleoperation, 2. deep imitation learning, 3. real-time whole-body control. It's all open-source! Dive in if you're into humanoids! 👾
1
54
243
@yukez
Yuke Zhu
2 years
Just learned that our MineDojo paper won the Outstanding Paper award at #NeurIPS2022. See you in New Orleans next week!
@DrJimFan
Jim Fan
3 years
Introducing MineDojo for building open-ended generalist agents!
✅ Massive benchmark: 1000s of tasks in Minecraft
✅ Open access to internet-scale knowledge base of 730K YouTube videos, 7K Wiki pages, 340K Reddit posts
✅ First step towards a general agent 🧵
5
31
243
@yukez
Yuke Zhu
1 year
We are unlikely to create an “ImageNet for Robotics”. In retrospect, ImageNet is such a homogeneous dataset. Labeled images w/ boxes. Generalist robot models will be fueled by the Data Pyramid, blending diverse data sources from web and synthetic data to real-world experiences.
6
30
239
@yukez
Yuke Zhu
5 years
Some of my proudest memories of my PhD are working with people from different countries and being advised by a stellar all-women thesis committee. I encourage students from diverse backgrounds to apply for my future lab @UTCompSci where diversity and inclusion will be valued.
3
19
228
@yukez
Yuke Zhu
1 year
One of RL's most future-proof ideas is that adaptation is just a memory problem in disguise. Simple in theory, but scaling is hard! Our #ICLR2024 spotlight work AMAGO shows the path to training long-context Transformer models with pure RL. Open-source here:
1
35
191
@yukez
Yuke Zhu
5 years
Excited to start my gap year in @NvidiaAI! Looking forward to a lot of new research opportunities with the brilliant minds!
7
5
186
@yukez
Yuke Zhu
5 years
Dear academics, check out our 6 pack!! 💪 Ok, I meant 6-PACK, our new 6DoF Pose Anchor-based Category-level Keypoint tracker for real-time tracking of novel objects without known 3D models!
@RobobertoMM
Roberto
5 years
We present 6-PACK, an RGB-D category-level 6D pose tracker that generalizes between instances of classes based on a set of anchors and keypoints. No 3D models required! Code+Paper: w/ Chen Wang @danfei_xu Jun Lv @cewu_lu @silviocinguetta @drfeifei @yukez
1
35
185
@yukez
Yuke Zhu
1 year
Just wrapped up my #CoRL2023 early-career keynote on 𝐏𝐚𝐭𝐡𝐰𝐚𝐲 𝐭𝐨 𝐆𝐞𝐧𝐞𝐫𝐚𝐥𝐢𝐬𝐭 𝐑𝐨𝐛𝐨𝐭𝐬 on Wed. In case you missed it, here's a brief summary. Check out the slide deck for more detail: 🧵1/N
4
24
179
@yukez
Yuke Zhu
3 years
Future driverless cars will talk with each other! We introduce Coopernaut, a cooperative driving model that uses vehicle-to-vehicle (V2V) communication for robust driving in challenging traffic conditions. #CVPR2022. Paper: Project:
4
19
177
@yukez
Yuke Zhu
5 years
We are organizing a (virtual) workshop on Visual Learning and Reasoning for Robotic Manipulation at #RSS2020. We invite extended abstract submissions that address the research problems at the intersection of perception and manipulation:
0
31
173
@yukez
Yuke Zhu
5 years
Loved the Slow Science Manifesto (. We were told, "slow down to go faster." Oh boy, this is so much easier said than done. As a young academic, seeing fellow scholars churning out dozens of papers a year, it takes guts to hit the pause button and think!
1
22
167
@yukez
Yuke Zhu
5 years
As much as I'd like to tweet positivity and focus on #AcademicChatter, I know how difficult this moment is for the Asian community when my wife and I feel anxious about going out for shopping & errands, hearing recent news about hate crimes. Hatred is NOT a solution to a virus.
5
6
166
@yukez
Yuke Zhu
4 years
Implicit neural representations have pushed the envelope of 3D Vision and Graphics in recent years. How will they be useful for Robot Manipulation? Our work GIGA demonstrated that they can bridge geometry reasoning and affordance learning for 6-DoF grasping in cluttered scenes.
2
18
163
@yukez
Yuke Zhu
1 year
Can't wait to attend #CoRL2023 for the next two days and give an early career keynote titled "𝐏𝐚𝐭𝐡𝐰𝐚𝐲 𝐭𝐨 𝐆𝐞𝐧𝐞𝐫𝐚𝐥𝐢𝐬𝐭 𝐑𝐨𝐛𝐨𝐭𝐬: 𝐒𝐜𝐚𝐥𝐢𝐧𝐠 𝐋𝐚𝐰, 𝐃𝐚𝐭𝐚 𝐅𝐥𝐲𝐰𝐡𝐞𝐞𝐥, 𝐚𝐧𝐝 𝐇𝐮𝐦𝐚𝐧𝐥𝐢𝐤𝐞 𝐄𝐦𝐛𝐨𝐝𝐢𝐦𝐞𝐧𝐭" on Wed!
3
8
162
@yukez
Yuke Zhu
3 years
🔥robosuite updates🦾 After eight months of dev effort, excited to release our v1.3 version! We integrate advanced graphics renderers with our simulation framework and provide vision APIs to bridge robot perception and decision-making research. Try it out!
2
29
159
@yukez
Yuke Zhu
11 months
My #CoRL2023 keynote talk on Pathway to Generalist Robots is on YouTube now: I discussed the three key ingredients for building general-purpose robot autonomy: scaling law, data flywheel, and human-like embodiment. If you want to learn more about our.
1
33
155
@yukez
Yuke Zhu
6 years
Thanks Fei-Fei @drfeifei for being such an amazing advisor, mentor, role model, and friend! Finishing a PhD is the end of the beginning. And greater things have yet to come!
@drfeifei
Fei-Fei Li
6 years
Very proud of my PhD student @yukez for passing his PhD thesis defense with flying colors! His work is on perception, learning and robotics. Thank you thesis committee members @leto__jean @EmmaBrunskill @silviocinguetta & Dan Yamins!
4
2
152
@yukez
Yuke Zhu
1 year
First time attending @HumanoidsConf (on the @UTAustin campus!) Feeling pumped to see the lightning-fast progress in this space. I expect this community to proliferate in the next few years --- Generalist robot intelligence can't be achieved without general-purpose hardware!
1
12
152
@yukez
Yuke Zhu
5 years
Research used a randomized controlled trial to show that "tweeting improves citations." What improves the long-term impact of a paper?
8
30
149
@yukez
Yuke Zhu
1 year
I won't be at NeurIPS next week. But our team is seeking interns to work on exciting and ambitious new projects on Large Language Models for Agents (starting early next year). Please fill out the Application Form below if you're interested.
1
11
60
@yukez
Yuke Zhu
5 months
Dexterous hands have been the Achilles' heel of humanoid robots. A pair of reliable, sturdy, and low-cost hands would make robot learning 10-100x easier. The hands' morphologies and mechanics are a big part of the algorithm for reaching human-level dexterity. Give me a hand!
15
7
149
@yukez
Yuke Zhu
2 years
📢Release note📢 We are pleased to release *robosuite* v1.4 and migrate its backend to @DeepMind's MuJoCo binding for long-term support and feature extensibility, solidifying our commitment to building open-source research software. Try it out at
2
20
138
@yukez
Yuke Zhu
5 years
We are releasing our #ICCV2019 work on goal-directed visual navigation. We introduced a method that harnesses different perception skills based on situational awareness. It makes a robot reach its goals more robustly and efficiently in new environments.
2
33
140
@yukez
Yuke Zhu
5 years
100% agreed! I also felt extremely lucky to have some of the kindest and smartest advisors @Stanford and colleagues @UTCompSci. "We're all smart. Distinguish yourself by being kind." This quote is one of the first principles I will teach my students as a scholar.
@vj_chidambaram
Vijay Chidambaram
5 years
All the technically strongest people I know are *kind* people. My advisors/profs at @WisconsinCS, my colleagues at @UTCompSci, they are all competent, caring, empathetic human beings. Sure, there are some jerks, but they are the minority -- there is no need to hire them.
2
14
137
@yukez
Yuke Zhu
5 years
Check out a new blog post of our work on long-horizon planning for robot manipulation. We also released RoboVat, our learning framework that unifies #BulletPhysics simulation and Sawyer robot control interfaces. Sim2real has never been easier.
@StanfordAILab
Stanford AI Lab
5 years
How can a robot solve complex sequential problems? In our newest blog post, @KuanFang introduces CAVIN, an algorithm that hierarchically generates plans in learned latent spaces.
1
22
130
@yukez
Yuke Zhu
3 months
robosuite ( has been a true labor of love for the past seven years. Building this open-source simulation framework has required massive collaboration across institutions. Open-source software often goes underappreciated in academic culture. robosuite was.
@yifengzhu_ut
Yifeng Zhu 朱毅枫
3 months
📢New Release📢 We are excited to announce **robosuite v1.5**, supporting more robots, teleoperation interfaces, and controllers ➕ real-time ray tracing rendering! We continue our commitment to building open-source research software. Try it out at
2
18
130
@yukez
Yuke Zhu
3 months
Another key result from my lab in leveraging human-centered data sources for humanoid robots — this time, human motion captures. By training on large-scale mocap databases and remapping human motions to humanoids, Harmon enables the robots to generate motions from text commands.
@SteveTod1998
Zhenyu Jiang
3 months
Excited to share our #CoRL2024 paper on humanoid motion generation! Combining human motion priors with VLM feedback, Harmon generates natural, expressive, and text-aligned humanoid motion from freeform text descriptions. 👇(1/4)
4
16
124
@yukez
Yuke Zhu
3 years
ICML deadline tonight, RSS deadline tomorrow, and CVPR rebuttals due next Monday. For researchers working on robot learning and perception, life is goooood 😌
4
3
124
@yukez
Yuke Zhu
2 years
Roomba builds a static map of your home by moving around. Can a robot create articulated models of indoor scenes through its physical interaction? Ditto in the House builds digital twins of articulated objects in everyday environments. #ICRA2023. Website:
0
15
124
@yukez
Yuke Zhu
2 years
Our robot can now make morning coffee for you… The secret recipe:
1⃣ Object-centric representation
2⃣ Transformer-based policy architecture
3⃣ Data-efficient imitation learning algorithm
4⃣ Robust impedance controller
Enjoy ☕️! #CoRL2022 #VIOLA
@yifengzhu_ut
Yifeng Zhu 朱毅枫
2 years
How can robot manipulators perform in-home tasks such as making coffee for us? We introduce VIOLA, an imitation learning model for end-to-end visuomotor policies that leverages object-centric priors to learn from only 50 demonstrations!
2
13
125
@yukez
Yuke Zhu
5 years
Before the coronavirus outbreak, I almost decided to name my new lab VIRAL, which stands for Visual Intelligence & Robot Autonomy Lab. Now I have to change it 😅 Epidemics make us think harder.
5
1
115
@yukez
Yuke Zhu
3 months
Visiting @Princeton today to speak at the Symposium on Safe Deployment of Foundation Models in Robotics. Fall is a beautiful season to see the Princeton campus! Event website:
5
6
122
@yukez
Yuke Zhu
1 year
Check out our new survey paper on Foundation Models in Robotics!
@_akhaliq
AK
1 year
Foundation Models in Robotics: Applications, Challenges, and the Future. paper page: We survey applications of pretrained foundation models in robotics. Traditional deep learning models in robotics are trained on small datasets tailored for specific
4
25
115
@yukez
Yuke Zhu
1 year
2 years ago I was shopping for a coffee machine at Target. I found a perfect Keurig, not for me but for my robot:
- Round tray to insert a K-cup;
- Lid open/close w/ weak forces;
- Coffee out w/ one button click.
There's no magic. Human ingenuity is behind every robot's success.
@yifengzhu_ut
Yifeng Zhu 朱毅枫
1 year
If you want to learn more about how the task has motivated a line of research in manipulation, see the list:
- VIOLA:
- HYDRA:
- AWE:
- HITL-TAMP:
- MimicGen:
3
10
90
@yukez
Yuke Zhu
6 years
Our #ICRA2019 paper received the Best Conference Paper Award w/ @michellearning @leto__jean @animesh_garg @drfeifei @silviocinguetta
7
5
114
@yukez
Yuke Zhu
2 years
Heading to @CVPR today! We are organizing a 3D Vision and Robotics workshop tomorrow with a great line-up of speakers: Also, I am recruiting a postdoc on vision + robotics for my group. Come to chat with me if interested - DMs are open!
0
17
109
@yukez
Yuke Zhu
7 months
MimicGen source code is now publicly available! Our system generates automated robot trajectories from a handful of human demonstrations, enabling large-scale robot learning:
@AjayMandlekar
Ajay Mandlekar
7 months
Want to generate large-scale robot demonstrations automatically? We have released the full MimicGen code. Excited to see what the community will do with this powerful data generation tool! Code: Docs:
0
17
115
@yukez
Yuke Zhu
4 years
We are organizing a #CVPR2021 Workshop on 3D Vision and Robotics to promote the cross-pollination of ideas between these two research fields. CfP is open. We look forward to your contributions!
0
20
113
@yukez
Yuke Zhu
3 years
Rewriting a classical robot controller as a physics-informed neural network, plugging it into a data-driven autonomy stack as a learnable module, and training it with large-scale GPU-accelerated simulation ➡️ Adaptivity & robustness to the next level💡
@josiah_is_wong
Josiah Wong
3 years
How can we enable robot controllers to better adapt to changing dynamics? Idea: learn a data-driven controller implemented with physics-informed neural networks, and finetune on task-specific dynamics. Website: Paper:
0
19
105
@yukez
Yuke Zhu
9 months
Visited the University of Tokyo for the #RSS2024 area chair meeting. Paper decisions have been made and will be announced on Monday. I will attend #ICRA2024 in Yokohama next week. Looking forward to connecting with the Japanese robotics community, particularly on humanoid
1
5
103
@yukez
Yuke Zhu
4 years
Delighted to present our recent work on hierarchical Scene Graphs for neuro-symbolic manipulation planning. We use 3D Scene Graphs as an object-centric abstraction to reason about long-horizon tasks. w/ @yifengzhu_ut, Jonathan Tremblay, Stan Birchfield
0
15
98
@yukez
Yuke Zhu
3 years
Excited to share my recent talk at the Stanford Robotics Seminar on “Objects, Skills, and the Quest for Compositional Robot Autonomy” featuring projects from my first year @UTCompSci and our lab’s vision of building the next generation of autonomy stack.
1
12
99
@yukez
Yuke Zhu
6 years
We have just released our new work on 6D pose estimation from RGB-D data -- real-time inference with end-to-end deep models for real-world robot grasping and manipulation! Paper: Code: w/ @danfei_xu @drfeifei @silviocinguetta
3
23
97
@yukez
Yuke Zhu
2 years
Excited to share VIMA, our latest work on building generalist robot manipulation agents with multimodal prompts. Massive transformer model + unified task specification interface for the win!
@DrJimFan
Jim Fan
2 years
We trained a transformer called VIMA that ingests *multimodal* prompt and outputs controls for a robot arm. A single agent is able to solve visual goal, one-shot imitation from video, novel concept grounding, visual constraint, etc. Strong scaling with model capacity and data!🧵
0
14
97
@yukez
Yuke Zhu
3 years
🎉Public release🎉 Thrilled to kickstart our new embodied AI moonshot: building general-purpose open-ended agents with Internet-scale knowledge!
@DrJimFan
Jim Fan
3 years
Introducing MineDojo for building open-ended generalist agents!
✅ Massive benchmark: 1000s of tasks in Minecraft
✅ Open access to internet-scale knowledge base of 730K YouTube videos, 7K Wiki pages, 340K Reddit posts
✅ First step towards a general agent 🧵
1
10
96
@yukez
Yuke Zhu
4 years
I felt fortunate to attend all four CoRL conferences in the past and served as an AC for the first time. @corl_conf is hands down my favorite conference - focused Robot Learning community, high-quality (<200) papers, YouTube live stream, inclusion events. I couldn't ask for more!
0
3
90
@yukez
Yuke Zhu
3 years
Texas is a booming state for robotics research and industry. We are bringing together robotics researchers across the state this Friday at the Texas Regional Robotics Symposium (TEROS) 2022. Great line-up of speakers and live stream for all talks. Join us at
0
15
89
@yukez
Yuke Zhu
1 year
We'll witness more and more demos of humanoid robots doing the same tasks the robotics community has mastered with simpler systems. Yet people will still be awed. It speaks more about human psychology than technology. Humanoids make sense in domains requiring social interaction.
7
3
94
@yukez
Yuke Zhu
2 years
We've released an updated version of ACID, our #RSS2022 paper on volumetric deformable manipulation, with real-robot experiments. ACID predicts dynamics, 3D geometry, and point-wise correspondence from partial observations. It learns to maneuver a cute teddy bear into any pose.
1
9
91
@yukez
Yuke Zhu
2 years
All talk recordings of our #CVPR2023 3D Vision and Robotics Workshop are now available on the YouTube playlist: Check them out in case you missed the event!
@yukez
Yuke Zhu
2 years
Heading to @CVPR today! We are organizing a 3D Vision and Robotics workshop tomorrow with a great line-up of speakers: Also, I am recruiting a postdoc on vision + robotics for my group. Come to chat with me if interested - DMs are open!
0
13
85
@yukez
Yuke Zhu
3 years
Spot-on! Top AI researchers and institutes have the magic power of pushing a research field years back, simply by publishing initial papers and inadvertently creating a vicious cycle of worthless publications. With great power comes great responsibility.
@ChristophMolnar
Christoph Molnar 🦋 christophmolnar.bsky.social
3 years
A lot of machine learning research has detached itself from solving real problems and created its own "benchmark-islands". How does this happen? And why are researchers not escaping this pattern? A thread 🧵
0
8
82
@yukez
Yuke Zhu
5 years
Sharing the slides of my talk "Learning Keypoint Representations for Robot Manipulation" presented at the Workshop on Learning Representations for Planning and Control @IROS2019MACAU. Slides: Workshop:
1
18
83
@yukez
Yuke Zhu
4 years
We have six papers to be presented at #ICRA2021 this week, spanning the topics of imitation learning for manipulation, neuro-symbolic planning, multimodal perception, uncertainty quantification, and morphological computation. A thread /5.
1
2
82
@yukez
Yuke Zhu
2 years
I will attend ICML in Hawaii next week to present VIMA ( and meet friends. Our NVIDIA team is seeking new talent for AI Agents, LLMs, and Robotics. Reach out via DMs if interested!
@DrJimFan
Jim Fan
2 years
I'm going to ICML in Hawaii!. My team pushes the research frontier in AI agents, multimodal LLMs, game AI, and robotics. If you're interested in joining NVIDIA or collaborating with me, please reach out by email! My contact info is at If applicable,
0
6
81
@yukez
Yuke Zhu
5 years
Ajay gave a great talk on our RoboTurk project #IROS2019, nominated for Best Paper on Cognitive Robotics. Large-scale real robot dataset through crowd teleoperation! More information can be found at
0
20
80
@yukez
Yuke Zhu
3 years
Uploading physical objects to the virtual world (metaverse) by observing and interacting with them in the real world. Exciting new work on sim2real via real2sim with articulated objects #CVPR2022 #Ditto.
1
8
78
@yukez
Yuke Zhu
4 years
robosuite v1.2 released: new sensor simulation APIs, visual/dynamics/sensor randomization for sim2real, enhanced operational space controllers, and human demonstrations! Check it out from here:
0
7
75
@yukez
Yuke Zhu
4 months
Check out our new work, BUMBLE — Vision-language models (VLMs) act as the "operating system" for robots, calling perceptual and motor skills through APIs. The stronger the core VLM's capabilities, the better the robot gets at mobile manipulation.
@rutavms
Rutav
4 months
🤖 Want your robot to grab you a drink from the kitchen downstairs? 🚀 Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). 👇 (1/5) 🌐
0
6
75
@yukez
Yuke Zhu
5 years
Pleased to be invited by @SamsungUS to talk about my research on robot perception and learning. Covered our latest work on self-supervised sensorimotor learning, hierarchical planning, and cognitive learning and reasoning in the open world. Video:
1
10
74
@yukez
Yuke Zhu
3 months
We’re advancing automated runtime monitoring and fleet learning with visual world models — a pivotal step toward building a data flywheel for robot learning. Kudos to @huihan_liu for spearheading the Sirius projects in my lab. Very proud of her achievements!
@huihan_liu
Huihan Liu
3 months
With the recent progress in large-scale multi-task robot training, how can we advance the real-world deployment of multi-task robot fleets? Introducing Sirius-Fleet✨, a multi-task interactive robot fleet learning framework with 𝗩𝗶𝘀𝘂𝗮𝗹 𝗪𝗼𝗿𝗹𝗱 𝗠𝗼𝗱𝗲𝗹𝘀! 🌍 #CoRL2024
3
7
71
@yukez
Yuke Zhu
4 years
Our department @UTCompSci @UTAustin is recruiting new Robotics faculty this year. Come join us in the booming city of Austin!
0
21
69
@yukez
Yuke Zhu
3 years
Excited to introduce 𝚛𝚘𝚋𝚘𝚖𝚒𝚖𝚒𝚌, a new framework for Robot Learning from Demonstration. This open-source library is a sister project of 𝚛𝚘𝚋𝚘𝚜𝚞𝚒𝚝𝚎 in our ARISE Initiative. Try it out!
@AjayMandlekar
Ajay Mandlekar
3 years
Robot learning from human demos is powerful yet difficult due to a lack of standardized, high-quality datasets. We present the robomimic framework: a suite of tasks, large human datasets, and policy learning algorithms. Website: 1/
0
6
72
@yukez
Yuke Zhu
5 years
A nice summary of our recent works on imitation learning from visual demonstration. Compositionality and abstraction are key to scaling up IL algorithms to long-horizon manipulation tasks.
@StanfordAILab
Stanford AI Lab
5 years
What if we can teach robots to do a new task just by showing them one demonstration? In our newest blog post, @deanh_tw and @danfei_xu show us three approaches that leverage compositionality to solve long-horizon one-shot imitation learning problems.
1
13
71
@yukez
Yuke Zhu
2 years
Pleased to see our Sirius paper nominated for the Best Paper Award #RSS2023: Join our presentation in Daegu, Korea on July 11th! Exciting times ahead as our lab explores the new frontier of 𝗥𝗟𝗢𝗽𝘀 (Robot Learning + Operations) in long-term deployment.
@yukez
Yuke Zhu
2 years
Like the best chess players are human-AI teams (centaurs), trustworthy deployment of robot learning models needs such a partnership! Sirius is our first milestone toward Continuous Integration and Continuous Deployment (CI/CD) for robot autonomy during long-term deployments👇
3
3
70
@yukez
Yuke Zhu
4 years
Looking forward to sharing our latest progress on GPU-accelerated robotics simulation in the Isaac Gym tutorial @RoboticsSciSys 2021 next Monday.
@NVIDIARobotics
NVIDIA Robotics
4 years
Join us on July 12th at #RSS. This workshop will introduce the end-to-end GPU accelerated training pipeline in #NVIDIA Isaac Gym, demonstrate #robotics applications, and answer questions in breakout sessions. Register here: #AI #robots #nvidiaisaac
0
5
66
@yukez
Yuke Zhu
11 months
I will give a talk at #SXSW2024 on How to Train a Humanoid Robot tomorrow from 10 to 11:30 a.m. Come to check out our ramen-cooking DRACO 3 robot developed @texas_robotics and learn the technical stories behind it!
2
6
64
@yukez
Yuke Zhu
9 months
Our Eureka follow-up work is out!
@JasonMa2020
Jason Ma
9 months
Introducing DrEureka🎓, our latest effort pushing the frontier of robot learning using LLMs!. DrEureka uses LLMs to automatically design reward functions and tune physics parameters to enable sim-to-real robot learning. DrEureka can propose effective sim-to-real configurations
1
6
62
@yukez
Yuke Zhu
5 years
Check out our recent work on building agents that self-generate training tasks to facilitate the learning of harder tasks.
@KuanFang
Kuan Fang
5 years
We introduce APT-Gen to procedurally generate tasks of rich variations as curricula for reinforcement learning in hard-exploration problems. Webpage: Paper: w/ @yukez @silviocinguetta @drfeifei
0
12
61
@yukez
Yuke Zhu
10 months
Revisiting @DavidEpstein's book Range: Why Generalists Triumph in a Specialized World as a roboticist unveils a compelling insight: Elite athletes usually start broad and embrace diverse experiences as a generalist prior to delayed specialization. Given this prevailing pathway.
1
5
58
@yukez
Yuke Zhu
1 year
This is a fantastic initiative and exciting collaboration between industry and academia toward unleashing the future of Robot Learning as a Big Science! We must join forces in the quest for the north-star goal of generalist robot autonomy.
@GoogleDeepMind
Google DeepMind
1 year
Introducing 𝗥𝗧-𝗫: a generalist AI model to help advance how robots can learn new skills. 🤖. To train it, we partnered with 33 academic labs across the world to build a new dataset with experiences gained from 22 different robot types. Find out more:
0
6
58
@yukez
Yuke Zhu
6 years
Two papers are accepted in CVPR 2019: Neural Task Graph ( and DenseFusion (.
0
10
57
@yukez
Yuke Zhu
5 years
At #IROS2019 next week, I will be giving invited talks:
1. "Learning How-To Knowledge from the Web" at AnSWeR (Mon)
2. "Learning Keypoint Representations for Robot Manipulation" at LRPC (Fri)
Come to learn about our recent work!
0
16
58
@yukez
Yuke Zhu
5 years
Excited about coming back to Vancouver, the beautiful city where I attended college, for #NeurIPS2019 Looking forward to catching up with the latest research and hanging out with friends!
1
1
57
@yukez
Yuke Zhu
2 years
Like the best chess players are human-AI teams (centaurs), trustworthy deployment of robot learning models needs such a partnership! Sirius is our first milestone toward Continuous Integration and Continuous Deployment (CI/CD) for robot autonomy during long-term deployments👇
@huihan_liu
Huihan Liu
2 years
Deep learning for robotics is hard to perfect. How do we harness existing models for trustworthy deployment, and make them continue to learn and adapt? Presenting Sirius🌟, a human-in-the-loop framework for continuous policy learning & deployment! 🌐:
0
13
55
@yukez
Yuke Zhu
2 months
For the past two years, @__jakegrigsby__ and I have been exploring how to make Transformer-based RL scale the same way as its supervised learning counterparts. Our AMAGO line of work shows promise for building RL generalists in multi-task settings. Meet him and chat at #NeurIPS2024!
@__jakegrigsby__
Jake Grigsby
2 months
There’s an RL trick where we turn Q-learning into classification. Among other things, it’s a quick fix for multi-task RL’s most unnecessary problem: that the scale of each task’s training loss evolves unevenly over time. It’d be strange to let that happen in supervised learning,
0
7
55
@yukez
Yuke Zhu
1 year
Debates abound on the necessity of a human-like form factor for generalist robots. Humanoid robots are overkill now, but they make sense from first principles. Deep tech will always be overkill until we put hard work into it. Why not stop debating and shape the future together?
7
6
57
@yukez
Yuke Zhu
1 year
MimicPlay is selected for an oral presentation next Thursday at #CoRL2023. Come check out this exciting work led by @chenwang_j.
@DrJimFan
Jim Fan
1 year
What's the best way for humans to teach robots?. I'm excited to announce MimicPlay, an imitation learning algorithm that extracts the most signals from unlabeled human motions. MimicPlay combines the best of 2 data sources:. 1) Human "play data": a person uses their hands to
0
6
49
@yukez
Yuke Zhu
3 months
DexMimicGen is one of our core tools for building the data pyramid to train humanoid robots. Through automated data generation in simulation, it produces orders of magnitude more synthetic training data from a handful of real-robot trajectories.
@SteveTod1998
Zhenyu Jiang
3 months
How can we scale up humanoid data acquisition with minimal human effort? Introducing DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a few human demonstrations for humanoid robots with dexterous hands. (1/n)
1
8
54
@yukez
Yuke Zhu
2 years
Yet another manifestation of the power of hybrid Imitation + Reinforcement learning! Imitating high-level cognitive reasoning 🧠 from humans + reinforcing agile motor actions 🦿 in parallel simulation = quadrupedal locomotion in dynamic, human-centered environments #PRELUDE.
@kiwi_sherbet
Mingyo Seo
2 years
Introducing PRELUDE, a hierarchical learning framework that allows a quadruped to traverse across dynamically moving crowds. The robot learns gaits from trial and error in simulation and decision-making from human demonstration. Paper, Code, Videos:
2
7
54
@yukez
Yuke Zhu
4 years
In case you missed our 3D Vision and Robotics (3DVR) Workshop at #CVPR2021, we have released the invited talks, contributed papers, and panel discussions online. Check them out to learn about the latest progress in this booming research area. Website:
0
7
52