Yuzhe Qin Profile
Yuzhe Qin

@QinYuzhe

Followers: 2,785 · Following: 594 · Media: 23 · Statuses: 418

Robot Learning @ UC San Diego; prev: @NVIDIA Robotics @GoogleX

San Diego, CA
Joined December 2020
Pinned Tweet
@QinYuzhe
Yuzhe Qin
7 months
We tackle robotic manipulation with real-world imitation learning or sim2real approaches. But what if we merge the two? Introducing CyberDemo: a novel pipeline that leverages simulated human demos to master real-world dexterous manipulation tasks.
5 replies · 39 reposts · 200 likes
@QinYuzhe
Yuzhe Qin
9 months
How to rotate a tomato with a potato using a robot hand? 🤖🍅🥔 Our new model, Robot Synesthesia, blends touch and vision to manipulate multiple objects, even non-convex ones like a cross!
3 replies · 45 reposts · 266 likes
@QinYuzhe
Yuzhe Qin
1 year
Just dropped sim_web_visualizer! 🚀 Transform how you view simulation environments, right from a web browser like Chrome. Dive into more examples on our GitHub:
7 replies · 35 reposts · 236 likes
@QinYuzhe
Yuzhe Qin
2 months
Through my years of PhD research and working with undergrad and master's students, I've realized that finding the sweet spot between guidance and freedom when advising others is a real challenge. But @xiaolonw has struck that balance throughout my PhD journey. Over the past four years, his
@xiaolonw
Xiaolong Wang
2 months
Two PhD students graduated two weeks ago: Yuzhe Qin @QinYuzhe (co-advised with Hao Su), and Yueh-Hua Wu @yh_kris . They are my first batch of robotics students. When I was a student, Alyosha told me: "Good students are your friends, you can learn from them." Yuzhe and Yueh-Hua
Quoted tweet: 2 replies · 6 reposts · 283 likes
17 replies · 2 reposts · 131 likes
@QinYuzhe
Yuzhe Qin
2 years
Boost your isaacgym coding productivity with our code completion plugin! Simply run 'pip install isaacgym-stubs' to enable hassle-free coding, without even needing to install IsaacGym itself. Perfect for development on unsupported platforms like Mac.
3 replies · 15 reposts · 111 likes
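A minimal sketch of the workflow described above, assuming IsaacGym's standard gymapi entry points; the stubs only serve static analysis, so this is illustrative rather than code from the project:

# Install the type stubs (IsaacGym itself is not needed for editing):
#   pip install isaacgym-stubs
# VSCode, PyCharm, and other IDEs can then resolve IsaacGym symbols for
# completion and type checking, even on platforms IsaacGym doesn't support.
from isaacgym import gymapi

gym = gymapi.acquire_gym()       # signature supplied by the stubs
sim_params = gymapi.SimParams()  # autocomplete lists the available fields
sim_params.dt = 1.0 / 60.0       # e.g., the simulation timestep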
@QinYuzhe
Yuzhe Qin
7 months
Thanks @_akhaliq for posting our work! Teleoperated demos in simulation are more powerful than we thought, given the data augmentation capabilities of physical simulators.
@_akhaliq
AK
7 months
CyberDemo Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation We introduce CyberDemo, a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks. By incorporating extensive data augmentation in a
Quoted tweet: 1 reply · 38 reposts · 133 likes
0 replies · 12 reposts · 74 likes
@QinYuzhe
Yuzhe Qin
1 year
🎉🤖 Excited to share our new research on point cloud RL & dexterous hands! 🖐️ We've leveled up from grasping to versatile articulated object manipulation. 🛠️ Check out the paper & explore the code here:
@xiaolonw
Xiaolong Wang
1 year
Another robotics paper to present in #CVPR2023 ! DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects We continue pushing dexterous manipulation using point cloud RL, studying 3D pre-training, generalizing to unseen objects.
Quoted tweet: 2 replies · 17 reposts · 98 likes
0 replies · 13 reposts · 73 likes
@QinYuzhe
Yuzhe Qin
4 months
Many people are impressed by the bravery of the model (that's me) for allowing a robot to touch my face. However, it's not as risky as it might seem. Here are the takeaways on how common robot manipulation systems enhance safety: 1. Hardware Level Protection: Most robots
@dngxngxng3
Runyu Ding
4 months
🐰BunnyVisionPro Discover the ease of robot control with Vision Pro! Explore our full version of the skin-care tutorial, ... but by robots. Operated via Vision Pro, the robots expertly cleanse and apply facial masks, serums, sunscreen, and more.
Quoted tweet: 2 replies · 6 reposts · 26 likes
4 replies · 9 reposts · 68 likes
@QinYuzhe
Yuzhe Qin
3 months
🚀 Upgrade your #IsaacGym coding experience with a simple command: pip install isaacgym-stubs --upgrade
Compared with the previous stub, we now have:
✨ Improved code completion features
📚 Full docstrings for each API function
💻 Detailed function docs right in your editor
@QinYuzhe
Yuzhe Qin
2 years
Boost your isaacgym coding productivity with our code completion plugin! Simply run 'pip install isaacgym-stubs' to enable hassle-free coding, without even needing to install IsaacGym itself. Perfect for development on unsupported platforms like Mac.
Quoted tweet: 3 replies · 15 reposts · 111 likes
4 replies · 6 reposts · 64 likes
@QinYuzhe
Yuzhe Qin
7 months
Robot Synesthesia is accepted at #ICRA2024! Congrats to all the authors, especially @Ying_yyyyyyyy for leading the effort.
@QinYuzhe
Yuzhe Qin
9 months
How to rotate a tomato with a potato using a robot hand? 🤖🍅🥔 Our new model, Robot Synesthesia, blends touch and vision to manipulate multiple objects, even non-convex ones like a cross!
Quoted tweet: 3 replies · 45 reposts · 266 likes
0 replies · 12 reposts · 53 likes
@QinYuzhe
Yuzhe Qin
8 months
Maybe this magical floor can also be used for whole-body teleoperation of humanoid robots.
@LinusEkenstam
Linus ●ᴗ● Ekenstam
8 months
The HoloTile. Disney Research Imagineer Lanny Smoot is the inventor of this magical floor. This will make VR/AR experiences super immersive, and we’re one invention closer to the HoloDeck.
Quoted tweet: 140 replies · 881 reposts · 6K likes
1 reply · 3 reposts · 51 likes
@QinYuzhe
Yuzhe Qin
6 months
Will the future AI be artists, shaping aesthetic visuals and rhythmic poetry like SORA and GPT? I'd argue no. 🎨Art should remain a human expression. Let AI do the housework instead, so we can savor more moments with our beloved Peppa Pig cartoon🖼️
@xiaolonw
Xiaolong Wang
6 months
I have been cleaning my daughter's mess for more than two years now. Last weekend our robot came home to do the job for me. 🤖 Our new work on visual whole-body control learns a policy to coordinate the robot legs and arms for mobile manipulation. See
Quoted tweet: 23 replies · 116 reposts · 651 likes
1 reply · 6 reposts · 45 likes
@QinYuzhe
Yuzhe Qin
18 days
Choosing a teleoperation pipeline: Vision-based or exoskeletons? 😇The answer: Combine both! Vision-based methods excel for dexterous hand control, while exoskeletons offer precise arm tracking. Together, they create a more comprehensive solution.
@AaronYANG2000
Shiqi Yang
18 days
Introducing ACE - A Cross-Platform Visual-Exoskeletons System! Control all your robots with precision, all at once, with minimal cost, quick assembly, and easy wearability! We’ve open-sourced hardware, software, and step-by-step assembly guides. Get started today! 👉🏻
Quoted tweet: 4 replies · 64 reposts · 308 likes
1 reply · 8 reposts · 43 likes
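A rough sketch of the combination argued for above; every interface name here is hypothetical, not ACE's actual API:

def teleop_step(exoskeleton, hand_tracker, robot):
    # Exoskeleton encoders give precise, drift-free arm/wrist tracking.
    wrist_pose = exoskeleton.read_wrist_pose()
    # A vision-based tracker estimates the many finger joints needed for
    # dexterous hand control, where cameras excel.
    finger_joints = hand_tracker.estimate_finger_joints()
    robot.command_arm(wrist_pose)
    robot.command_hand(finger_joints)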
@QinYuzhe
Yuzhe Qin
1 year
Just dropped a paper on Dynamic Handover - our latest sim2real work on teaching robots to throw and catch. Fun fact: No need for XArm's impedance control to nail this task - just some good old trajectory prediction by the catcher🦾 Project Page:
@xiaolonw
Xiaolong Wang
1 year
Introducing our CoRL work on Dynamic Handover. We humans often pass along objects, a baseball, a bottle of water, using throw and catch. 🫴⚾️ We now enable the robot hands to throw and catch different unseen objects using RL and Sim2Real transfer!
Quoted tweet: 3 replies · 43 reposts · 258 likes
0 replies · 6 reposts · 42 likes
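To make "good old trajectory prediction" concrete, here is an illustrative ballistic catch-point predictor under a no-drag assumption; it is not the paper's implementation:

import numpy as np

def predict_catch_point(p0, v0, catch_height, g=9.81):
    """Given release position p0 (3,) and velocity v0 (3,), return the
    (x, y) where the object crosses catch_height on its way down."""
    # Solve p0[2] + v0[2]*t - 0.5*g*t^2 = catch_height for t.
    a, b, c = 0.5 * g, -v0[2], catch_height - p0[2]
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the throw never reaches the catch height
    t = (-b + np.sqrt(disc)) / (2.0 * a)  # later root: descending crossing
    return p0[:2] + v0[:2] * t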
@QinYuzhe
Yuzhe Qin
2 years
I'm in Auckland presenting our work on point cloud RL for dexterous hands! If you're attending @CoRL2022 and are interested in visual RL, point clouds, or dexterous hands, feel free to stop by our poster in OGG room 040, 4-5pm!
@xiaolonw
Xiaolong Wang
2 years
New work on hands: DexPoint #CoRL2022 ! It is all about geometry and contact👆🤖👇 We perform RL with raw point cloud inputs for manipulation: open door and grasping. This brings generalization across diverse objects and much easier sim2real transfer. 1/n
Quoted tweet: 1 reply · 27 reposts · 138 likes
1 reply · 6 reposts · 39 likes
@QinYuzhe
Yuzhe Qin
2 years
For the learning-from-human-demonstration setting, we try to answer two questions: 1. How to make the data collection interface more customizable and convenient for humans. 2. How to transfer demonstrations to different embodiments with better generalization across robots.
@xiaolonw
Xiaolong Wang
2 years
Excited to share our imitation learning work for dexterous manipulation, using human demos collected with single iPad camera teleoperation. We provide a new system using a customized hand in sim, and transfer to multiple hands and a real Allegro hand. 1/n
Quoted tweet: 2 replies · 28 reposts · 131 likes
1 reply · 2 reposts · 37 likes
@QinYuzhe
Yuzhe Qin
2 years
Finished my first full marathon in the US @theSFmarathon. Although it was hard to match the pace of the marathons I ran during undergrad, I'm glad I could still finish the 26.2 miles after several years.
1 reply · 0 reposts · 37 likes
@QinYuzhe
Yuzhe Qin
2 years
No real-world data at all for Sim2Real transfer. Point cloud RL has great potential for future Sim2Real research🦾
@xiaolonw
Xiaolong Wang
2 years
New work on hands: DexPoint #CoRL2022 ! It is all about geometry and contact👆🤖👇 We perform RL with raw point cloud inputs for manipulation: open door and grasping. This brings generalization across diverse objects and much easier sim2real transfer. 1/n
Quoted tweet: 1 reply · 27 reposts · 138 likes
1 reply · 1 repost · 34 likes
@QinYuzhe
Yuzhe Qin
8 months
New to robotic simulation? Dive into our beginner tutorial on a range of simulators perfect for starters: It covers 8 distinct simulation platforms, complete with overviews to kickstart your journey into simulated robot
@simulately12492
Simulately
8 months
(1/3) 🚀Excited to announce Simulately🤖, a go-to toolkit for robotics researchers navigating diverse simulators! 💻 Github: 🔗 Website: Let’s level up our robotics research with Simulately! #Robotics #Simulators #ResearchTool
Quoted tweet: 3 replies · 12 reposts · 31 likes
0 replies · 1 repost · 32 likes
@QinYuzhe
Yuzhe Qin
1 year
Thrilled to be at #RSS2023 in Daegu, Korea! 🤖 I’ll be an invited speaker at the Generalizable Manipulation Policy Learning workshop, presenting our latest work on dexterous manipulation at 11:20am today. Can’t wait for in-person chats here 💬
1 reply · 1 repost · 32 likes
@QinYuzhe
Yuzhe Qin
1 year
Excited to share our latest work on in-hand manipulation 🔥🔥 How to achieve generalizable in-hand manipulation without perception? The work led by @zhaohengyin and @binghao_huang provides an answer: touch sensing! Project website:
@xiaolonw
Xiaolong Wang
1 year
Imagine if you have an object in hand, you can rotate the object by feeling without even looking. This is what we enable the robot to do now: Rotating without Seeing. Our multi-finger robot hand learns to rotate diverse objects using only touch sensing.
Quoted tweet: 2 replies · 48 reposts · 220 likes
0 replies · 2 reposts · 31 likes
@QinYuzhe
Yuzhe Qin
1 year
TechXplore highlights AnyTeleop, a project I had the pleasure of contributing to during my previous internship at NVIDIA. Exciting to see the teleoperation initiative gaining recognition!
@NVIDIAAIDev
NVIDIA AI Developer
1 year
📡 🤖 Recent advances in #robotics and #AI have opened exciting new avenues for teleoperation, the remote control of robots to complete tasks in a distant location via @techxplore_com .
Quoted tweet: 0 replies · 7 reposts · 33 likes
0 replies · 3 reposts · 32 likes
@QinYuzhe
Yuzhe Qin
1 year
Exciting news! Our workshop on Learning Dexterous Manipulation is now available on YouTube. Don't miss this chance to watch it at your convenience!
@xiaolonw
Xiaolong Wang
1 year
Our Learning Dexterous Manipulation () at RSS was a hit! Thank you for the speakers @pulkitology @Vikashplus @t_hellebrekers @abhishekunique7 Carmelo, @haoshu_fang , and the organizers, especially in-person ones @QinYuzhe @LerrelPinto @notmahi @ericyi0124
Quoted tweet: 3 replies · 5 reposts · 55 likes
1 reply · 11 reposts · 32 likes
@QinYuzhe
Yuzhe Qin
4 months
🚀Simulator speed is just as crucial as dataset scale. Check out the ManiSkill 3 beta (powered by SAPIEN 3) for faster robot learning advancements. Batched simulation + batched rendering. Arm + Hand + Dog + Humanoid + Mobile Platform Code:
@Stone_Tao
Stone Tao
4 months
📢 ManiSkill 3 beta is out! Simulate everything everywhere all at once 🥯
- 18K RGBD FPS on 1 GPU, 3K on Colab!
- Diverse parallel GPU sim
- Tons of new robots/tasks
All open-sourced: Photo: MS3 Tasks w/ scenes from AI2THOR and ReplicaCAD 🧵(1/6)
Quoted tweet: 4 replies · 65 reposts · 244 likes
1 reply · 4 reposts · 31 likes
@QinYuzhe
Yuzhe Qin
6 months
Watching NVIDIA GTC is like watching a Sci-Fi movie🤖
1 reply · 0 reposts · 29 likes
@QinYuzhe
Yuzhe Qin
7 months
Learning information-seeking behavior through RL is always hard. However, our DexTouch model can learn to find objects using touch only, no visual cues—especially useful in the dark 🕶️ Project page:
@L_Kangwon
Kang-Won Lee
7 months
How can a robot manipulate objects in the dark? Our new model, DexTouch, uses touch to seek and manipulate objects without looking, and can even open doors or turn valves.
Quoted tweet: 1 reply · 5 reposts · 23 likes
0 replies · 3 reposts · 27 likes
@QinYuzhe
Yuzhe Qin
5 months
Suzume! 🪑
@shin0805__
Shintaro Inoue / 井上信多郎
5 months
We've released a paper on a robot design that recreates the three-legged chair from "Suzume" (すずめの戸締まり), with gait generation via reinforcement learning! We'll present it at RoboSoft2024 in the US next week! website - #すずめの戸締まり
Quoted tweet: 32 replies · 4K reposts · 15K likes
0 replies · 0 reposts · 25 likes
@QinYuzhe
Yuzhe Qin
6 months
Fantastic humanoid control from lab mates! Sim2Real is useful when dealing with complex motions. Congrats to the authors!
@xiaolonw
Xiaolong Wang
6 months
Let’s think about humanoid robots outside carrying the box. How about having the humanoid come out the door, interact with humans, and even dance? Introducing Expressive Whole-Body Control for Humanoid Robots: See how our robot performs rich, diverse,
Quoted tweet: 94 replies · 210 reposts · 1K likes
0 replies · 3 reposts · 24 likes
@QinYuzhe
Yuzhe Qin
2 years
ChatGPT writing an email to ask for a room upgrade of a hotel reservation. Much better than me👻
1 reply · 0 reposts · 22 likes
@QinYuzhe
Yuzhe Qin
1 year
It's 2023 and we are still working on hand-eye calibration🤖 Despite well-established theory, engineering challenges like occluded markers still trouble robotics researchers. Check out this work from @LinghaoChen97 to save yourself some time👾
@HaoSuLabUCSD
Hao Su Lab
1 year
Hand-eye calibration is critical for sim2real in robotics. We propose EasyHeC, a differentiable-rendering-based hand-eye calibration system that is highly accurate, automatic, & convenient, thus significantly reducing sim2real gap in object manipulation!
Quoted tweet: 3 replies · 10 reposts · 60 likes
0 replies · 4 reposts · 23 likes
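For background, the standard textbook formulation (not specific to EasyHeC): each pair of robot poses yields the hand-eye equation

A · X = X · B

where A is the gripper's relative motion between the two poses (from forward kinematics), B is the camera's relative motion between them (estimated from marker or scene observations), and X is the unknown camera-to-gripper transform being calibrated. An occluded marker corrupts the estimate of B, which is exactly the engineering pain point mentioned above.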
@QinYuzhe
Yuzhe Qin
1 year
Works on tactile sensors, hand-object perception algorithms, and groundbreaking robot learning methods are also welcome at our RSS 2023 workshop! 🤖✋ 🔗
@xiaolonw
Xiaolong Wang
1 year
We are organizing the RSS workshop on Learning Dexterous Manipulation. Please consider submitting your work on hands to our workshop! The deadline is June 2nd!
Quoted tweet: 0 replies · 14 reposts · 54 likes
0 replies · 4 reposts · 21 likes
@QinYuzhe
Yuzhe Qin
3 months
Exciting new work from Nicklas @ncklashansen extending model-based RL to hierarchical settings. We can expect even more powerful capabilities from model-based approaches.
@ncklashansen
Nicklas Hansen
3 months
🥳Excited to share: Hierarchical World Models as Visual Whole-Body Humanoid Controllers Joint work with @jyothir_s_v @vlad_is_ai @ylecun @xiaolonw @haosu_twitr Our method, Puppeteer, learns high-dim humanoid policies that look natural, in an entirely data-driven way! 🧵👇(1/n)
Quoted tweet: 13 replies · 66 reposts · 381 likes
0 replies · 6 reposts · 22 likes
@QinYuzhe
Yuzhe Qin
1 year
A brilliant use of a dexterous robot hand, skillfully merging its pick-and-place capabilities with in-hand re-orientation. A feat simply unachievable with a parallel gripper.
@chenwang_j
Chen Wang
1 year
How to chain multiple dexterous skills to tackle complex long-horizon manipulation tasks? Imagine retrieving a LEGO block from a pile, rotating it in-hand, and inserting it at the desired location to build a structure. Introducing our new work - Sequential Dexterity 🧵👇
Quoted tweet: 26 replies · 91 reposts · 470 likes
1 reply · 3 reposts · 21 likes
@QinYuzhe
Yuzhe Qin
3 years
How can a dexterous robot learn from human videos? Here we propose a novel pipeline to facilitate multi-finger imitation learning. This is one year of my PhD work, jointly with @yh_kris @stevenpg8 @hanwenjiang1 @RchalYang @yangfu0817 @xiaolonw Project page:
@xiaolonw
Xiaolong Wang
3 years
Introducing DexMV: Imitation Learning for Dexterous Manipulation from Human Videos. A new platform for recording human hand demonstration videos, and a new imitation learning pipeline to leverage the videos for 5-finger robot hand manipulation. (1/n)
Quoted tweet: 1 reply · 17 reposts · 70 likes
0 replies · 3 reposts · 21 likes
@QinYuzhe
Yuzhe Qin
1 year
Truly amazed by how joint-space control can be executed with such ease. It's a clever use of kinematic equivalence for joint-space allocation. When operating the Gundam, we do not need to be as tall as the Gundam; we just mirror the joints.
@philippswu
Philipp Wu
1 year
🎉Excited to share a fun little hardware project we’ve been working on. GELLO is an intuitive and low cost teleoperation device for robot arms that costs less than $300. We've seen the importance of data quality in imitation learning. Our goal is to make this more accessible 1/n
Quoted tweet: 26 replies · 110 reposts · 689 likes
1 reply · 1 repost · 20 likes
@QinYuzhe
Yuzhe Qin
16 days
Curious about robot teleoperation methods? Prof. Xiaolong summarizes our research findings on various devices we've tested over the years. From haptics and low-cost cameras to gloves, VR headsets, and exoskeletons - this article guides you through choosing the right option for
@xiaolonw
Xiaolong Wang
17 days
Quoted tweet: 3 replies · 33 reposts · 206 likes
0 replies · 1 repost · 20 likes
@QinYuzhe
Yuzhe Qin
4 months
The future of teleoperation will be dexterous! 🙌 Excited to see all the innovation.
@ToruO_O
Toru
4 months
A common question we get for HATO () is: can it be more dexterous? Yes! The first iteration of our system actually achieves this -- by capturing finger poses with mocap gloves and remapping them to robot hands. [video taken in late 2023 (with @yuzhang )]
Quoted tweet: 6 replies · 34 reposts · 142 likes
1 reply · 0 reposts · 18 likes
@QinYuzhe
Yuzhe Qin
7 months
The leading author of this project, Jun Wang @Junwang_048, is applying for a Ph.D. In a remarkably short period, Jun has become proficient in robot simulation and the complex Allegro hand hardware, positioning himself as a formidable contender in the field of robot learning!
@QinYuzhe
Yuzhe Qin
7 months
We tackle robotic manipulation with real-world imitation learning or sim2real approaches. But what if we merge the two? Introducing CyberDemo: a novel pipeline that leverages simulated human demos to master real-world dexterous manipulation tasks.
Quoted tweet: 5 replies · 39 reposts · 200 likes
0 replies · 3 reposts · 19 likes
@QinYuzhe
Yuzhe Qin
1 year
Very interesting work utilizing a simple device for wrist pose tracking, which is often hard to estimate in vision-based teleoperation.
@asurobot
Heni Ben Amor
1 year
Motion capture from just a smartwatch? Excited about our IROS 2023 paper: control your robot or move your avatar using just your smartwatch. ML for real-time estimation of arm postures. If you ever needed low-cost MoCap, this one is for you! Paper: #IROS
Quoted tweet: 4 replies · 24 reposts · 149 likes
0 replies · 3 reposts · 19 likes
@QinYuzhe
Yuzhe Qin
6 months
Dexterity comes from simplicity. Glad to see so many wonderful data collection ideas at the beginning of 2024. Congrats on the great work @chenwang_j
@chenwang_j
Chen Wang
6 months
Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands Everything open-sourced
Quoted tweet: 22 replies · 136 reposts · 621 likes
1 reply · 1 repost · 16 likes
@QinYuzhe
Yuzhe Qin
4 months
🤲Join us for the 2nd Dexterous Manipulation Workshop at @RoboticsSciSys 2024! We welcome submissions on all topics related to dexterous manipulation, including tactile sensing. Don't miss the deadline on June 10, 2024.
@HaozhiQ
Haozhi Qi
4 months
🎺 As interest in human-like robotics continues to surge, we are excited to announce the “2nd Workshop on Dexterous Manipulation” at RSS 2024. Join us to hear from an incredible lineup of speakers! And don’t miss the opportunity to submit your work and participate!
Quoted tweet: 2 replies · 7 reposts · 38 likes
0 replies · 4 reposts · 17 likes
@QinYuzhe
Yuzhe Qin
1 year
🎹Exciting dexterous piano playing and an even more exciting interactive demo from @kevin_zakka
@kevin_zakka
Kevin Zakka
1 year
First, if you enjoyed the video above, we have a live demo that runs MuJoCo in your browser using Javascript and Web Assembly! You can accompany the robot (drag the keys down), or be adversarial and tug at the fingers 🙃
Quoted tweet: 4 replies · 14 reposts · 113 likes
2 replies · 1 repost · 15 likes
@QinYuzhe
Yuzhe Qin
6 months
Glad to see another sim2real work on dexterous hands! While imitation learning is dominant for manipulation nowadays, sim2real remains powerful for complex dynamical systems. Congrats to the authors on the great work @ToruO_O @zhaohengyin @HaozhiQ
@ToruO_O
Toru
6 months
Achieving bimanual dexterity with RL + Sim2Real! TLDR - We train two robot hands to twist bottle lids using deep RL followed by sim-to-real. A single policy trained with simple simulated bottles can generalize to drastically different real-world objects.
Quoted tweet: 5 replies · 59 reposts · 217 likes
1 reply · 2 reposts · 15 likes
@QinYuzhe
Yuzhe Qin
4 months
Navigating robotics benchmarking can be frustrating when you have access to open-source datasets but lack the right hardware to utilize them effectively. Leveraging simulations for reproducible evaluations is a valuable strategy worth exploring!
@XuanlinLi2
Xuanlin Li (Simon)
4 months
Scalable, reproducible, and reliable robotic evaluation remains an open challenge, especially in the age of generalist robot foundation models. Can *simulation* effectively predict *real-world* robot policy performance & behavior? Presenting SIMPLER!👇
Quoted tweet: 4 replies · 23 reposts · 134 likes
0 replies · 1 repost · 15 likes
@QinYuzhe
Yuzhe Qin
1 year
Touch sensors are like natural companions of multi-finger hands. It is exciting to see another fantastic project on dexterous hands + tactile sensors🤖
@pulkitology
Pulkit Agrawal
1 year
Introducing TactoFind: a robot that sees by touching! It finds a target object in the dark by using the sense of touch, just like you would move your hands to find your phone if you woke up in the middle of the night. More videos: #ICRA2023 #Robotics
Quoted tweet: 1 reply · 20 reposts · 134 likes
0 replies · 1 repost · 16 likes
@QinYuzhe
Yuzhe Qin
2 months
@xiaolonw @yh_kris @xiaolonw , your mentorship has been a guiding light throughout my PhD. Your advice has continuously shaped my "value function", not only for research but also for my attitude toward life. This value function will remain a vital part of my decision-making process, even after
0 replies · 0 reposts · 15 likes
@QinYuzhe
Yuzhe Qin
1 year
The SPA is so great 🤪
@xiaolonw
Xiaolong Wang
1 year
Spring Group Retreat
Quoted tweet: 1 reply · 4 reposts · 125 likes
0 replies · 0 reposts · 14 likes
@QinYuzhe
Yuzhe Qin
9 months
Trained in simulation, our model generalizes to novel object shapes and transfers zero-shot to a real-world Allegro hand. A Kinect camera for vision and FSR sensors for touch form our setup. (2/7)
1 reply · 2 reposts · 13 likes
@QinYuzhe
Yuzhe Qin
9 months
To achieve this, we convert tactile sensor data into a point cloud, effectively merging vision and touch into one cohesive input. It's as if our robot has the ability to "see" its tactile interactions!  (3/7)
1 reply · 3 reposts · 13 likes
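A minimal sketch of my reading of this step (not the released code): tactile readings at sensor poses known from forward kinematics become extra 3D points concatenated with the camera point cloud:

import numpy as np

def fuse_vision_and_touch(visual_points, sensor_positions, sensor_readings,
                          contact_threshold=0.1):
    """visual_points: (N, 3) camera point cloud in the world frame.
    sensor_positions: (M, 3) tactile sensor locations from forward kinematics.
    sensor_readings: (M,) normalized FSR values; above threshold = contact."""
    contact_points = sensor_positions[sensor_readings > contact_threshold]
    # One cohesive input: the policy "sees" touch as just more geometry.
    return np.concatenate([visual_points, contact_points], axis=0)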
@QinYuzhe
Yuzhe Qin
4 months
How about deploying GPT-4o on the Unitree G1 humanoid robot😜
@OpenAI
OpenAI
4 months
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks.
Quoted tweet: 3K replies · 14K reposts · 61K likes
2 replies · 1 repost · 13 likes
@QinYuzhe
Yuzhe Qin
2 years
Attending #CoRL2022 now! Excited to be here in Auckland🥝 Looking forward to in-person discussions on manipulation, simulation, and perception.
0 replies · 0 reposts · 13 likes
@QinYuzhe
Yuzhe Qin
10 months
Big congrats to my advisor🎉🎉🎉
@haosu_twitr
Hao Su
10 months
🎉I am now an associate professor! Thank you to all my collaborators and my students who have helped me get here, I look forward to the exciting future that lays ahead of us from computer vision to embodied AI!
Quoted tweet: 87 replies · 16 reposts · 866 likes
0 replies · 0 reposts · 13 likes
@QinYuzhe
Yuzhe Qin
6 months
Leveraging visual prompts with keypoint annotations offers an intuitive method. Keypoint-centric techniques enjoyed popularity in the years preceding the advent of imitation and reinforcement learning. Today, VLMs breathe new life into this approach with fresh possibilities.
@fangchenliu_
Fangchen Liu
6 months
Can we leverage VLMs for robot manipulation in the open world? Checkout our new work MOKA, a simple and effective visual prompting method!
Quoted tweet: 12 replies · 43 reposts · 206 likes
0 replies · 3 reposts · 12 likes
@QinYuzhe
Yuzhe Qin
1 year
The workshop is happening now! Join us in room 323 if you are attending RSS 2023 in Daegu.
@notmahi
Mahi Shafiullah 🏠🤖
1 year
The first workshop on Learning Dexterous Manipulation at @RoboticsSciSys is starting now! Check out our speaker lineup at or tune in via zoom at if you are not in person.
Quoted tweet: 0 replies · 6 reposts · 20 likes
0 replies · 4 reposts · 12 likes
@QinYuzhe
Yuzhe Qin
7 months
Great athletic robot from @ZhongyuLi4 ! Maybe next time we can have a robot coach🏃‍♀️🏃‍♂️🏃
@ZhongyuLi4
Zhongyu Li
7 months
Interested in making your bipedal robots athletes? We summarized our RL work on creating robust & adaptive controllers for general bipedal skills. 400m dash, running over terrains/against perturbations, targeted jumping, compliant walking: not a problem for bipeds now.🧵👇
Quoted tweet: 15 replies · 90 reposts · 449 likes
0 replies · 1 repost · 10 likes
@QinYuzhe
Yuzhe Qin
6 months
Generalizable 3D representations help the robot understand its surroundings, which is crucial for interaction. Great to see 3D helping robotics again!
@xiaolonw
Xiaolong Wang
6 months
We have seen a lot of legged robots doing navigation in the wild. But how about mobile manipulation in the wild? I have been pushing the direction of learning a unified, efficient, and dynamic 3D representation of scenes (for navigation) and objects (for manipulation) for the
Quoted tweet: 5 replies · 49 reposts · 238 likes
0 replies · 2 reposts · 11 likes
@QinYuzhe
Yuzhe Qin
3 months
Vision-based teleoperation for whole-body control of humanoid robot!
@TairanHe99
Tairan He
3 months
Introduce OmniH2O, a learning-based system for whole-body humanoid teleop and autonomy:
🦾 Robust loco-mani policy
🦸 Universal teleop interface: VR, verbal, RGB
🧠 Autonomy via @chatgpt4o or imitation
🔗 Release the first whole-body humanoid dataset
Quoted tweet: 19 replies · 68 reposts · 388 likes
0 replies · 1 repost · 11 likes
@QinYuzhe
Yuzhe Qin
14 days
The dexterous manipulation frontier awaits at #CoRL2024 Munich. We are hosting another dexterous manipulation workshop at CoRL. Join us in pushing the boundaries of robotic dexterity. 🖐️💡
@HaozhiQ
Haozhi Qi
17 days
🎺 Announcing our CoRL 2024 workshop “Learning Robot Fine and Dexterous Manipulation: Perception and Control” in Munich. Join us to hear from an incredible lineup of speakers! And don’t miss the opportunity to submit your work and participate! Check out:
Quoted tweet: 4 replies · 15 reposts · 74 likes
0 replies · 0 reposts · 10 likes
@QinYuzhe
Yuzhe Qin
1 year
Huge thanks to all the speakers for the excellent presentations!
@xiaolonw
Xiaolong Wang
1 year
Our Learning Dexterous Manipulation () at RSS was a hit! Thank you for the speakers @pulkitology @Vikashplus @t_hellebrekers @abhishekunique7 Carmelo, @haoshu_fang , and the organizers, especially in-person ones @QinYuzhe @LerrelPinto @notmahi @ericyi0124
Quoted tweet: 3 replies · 5 reposts · 55 likes
0 replies · 0 reposts · 10 likes
@QinYuzhe
Yuzhe Qin
3 months
📢 RSS 2024 Dexterous Manipulation workshop: Paper submission deadline extended to June 14th (this Friday)! We welcome all workshop paper submissions, even if already submitted to a conference like CoRL. 🤖🔬 Demo-only submissions are also welcome! Learn more:
@HaozhiQ
Haozhi Qi
4 months
🎺 As interest in human-like robotics continues to surge, we are excited to announce the “2nd Workshop on Dexterous Manipulation” at RSS 2024. Join us to hear from an incredible lineup of speakers! And don’t miss the opportunity to submit your work and participate!
Quoted tweet: 2 replies · 7 reposts · 38 likes
0 replies · 1 repost · 10 likes
@QinYuzhe
Yuzhe Qin
1 year
Now you can train and visualize your IsaacGym environments right within Jupyter Notebooks. Perfect for those remote server situations without a screen.
1 reply · 0 reposts · 9 likes
@QinYuzhe
Yuzhe Qin
9 months
The architecture of our model is straightforward - PointNet processes augmented points from both vision and touch sensors. This is why we call it "Synesthesia", drawing parallels to humans who perceive color through touch. (4/7)
1 reply · 1 repost · 7 likes
@QinYuzhe
Yuzhe Qin
7 months
Leveraging only a cheap RealSense camera ($400) for teleoperation and requiring minimal human input, our CyberDemo system adeptly learns a robust imitation policy.
1 reply · 3 reposts · 7 likes
@QinYuzhe
Yuzhe Qin
10 months
Manipulation continues to be the most trending topic at CoRL
@corl_conf
Conference on Robot Learning
10 months
Gearing up for the conference next week, check this interactive feature as you prep for your time at the conference. Discover cool papers and insights. Did you know that we have 199 contributed papers from 873 authors originating in 25 countries! 🤯
Quoted tweet: 1 reply · 15 reposts · 71 likes
0 replies · 1 repost · 6 likes
@QinYuzhe
Yuzhe Qin
9 months
Our policy didn't just stop at rotating a tomato and a potato. It also took on the challenging Ferrero and toy dinosaur tasks, unseen during training. (6/7)
1 reply · 0 reposts · 6 likes
@QinYuzhe
Yuzhe Qin
9 months
Our experiments visualized the critical set selected by PointNet. It's a mix: 42.7% touch-based points on average, with the rest mainly from the fingertips, edges, and palm. (5/7)
1 reply · 0 reposts · 6 likes
@QinYuzhe
Yuzhe Qin
2 years
Super exciting work from @chenwang_j, combining high-level video demos and low-level teleoperation demos.
@chenwang_j
Chen Wang
2 years
How to teach robots to perform long-horizon tasks efficiently and robustly🦾? Introducing MimicPlay - an imitation learning algorithm that uses "cheap human play data". Our approach unlocks both real-time planning through raw perception and strong robustness to disturbances!🧵👇
Quoted tweet: 20 replies · 145 reposts · 734 likes
0 replies · 0 reposts · 6 likes
@QinYuzhe
Yuzhe Qin
4 months
Very clear explanation of the relationship between trajectory optimization and RL. “Cached optimization” can be more powerful.
@robot_trainer
Nathan Ratliff
4 months
Use both: TO is a Newton step on the Bellman equation. Policies and value functions are "memories" of past solutions; TO should be optimizing over them at inference time. Best of both worlds. Some of the strongest RL methods do this.
Quoted tweet: 5 replies · 31 reposts · 188 likes
0 replies · 0 reposts · 6 likes
@QinYuzhe
Yuzhe Qin
1 year
📣 Great news! The deadline for submitting to the Learning Dexterous Manipulation workshop has been extended to June 16th. We're excited to see your incredible work! 🤖✍️ 🔗
@QinYuzhe
Yuzhe Qin
1 year
Works on tactile sensors, hand-object perception algorithms, and groundbreaking robot learning methods are also welcomed at our RSS 2023 workshop! 🤖✋ 🔗
Quoted tweet: 0 replies · 4 reposts · 21 likes
0 replies · 2 reposts · 6 likes
@QinYuzhe
Yuzhe Qin
1 year
This repository is built upon the outstanding MeshCat project. Big thanks to the developers and custodians of MeshCat! MeshCat:
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
9 months
The concept of in-hand SLAM is really cool! Congrats to the authors!
@HaozhiQ
Haozhi Qi
9 months
Getting a rich object representation from vision/touch/proprioception streams, like how we humans perceive objects in-hand. 🎺Website: ➡️Led by @Suddhus .
Quoted tweet: 0 replies · 8 reposts · 68 likes
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
2 years
It works out of the box for major IDEs like VSCode and PyCharm with a simple `pip install`.
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
5 months
@zc_alexfan ARCTIC is really great; it also opens up opportunities for robot manipulation research! Awesome work!
1 reply · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
1 year
For IsaacGym and SAPIEN users, your simulators are already supported. Just a few lines of code modification and you're all set to render on the web 🚀
1 reply · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
1 year
We're welcoming paper submissions in diverse areas, not just on the dexterous hand. If you're working on tactile sensors, hand-object perception algorithms, or innovative robot learning methods to address challenges in this field, we'd love to hear from you!
@xiaolonw
Xiaolong Wang
1 year
We are organizing the workshop on Learning Dexterous Manipulation at #RSS2023 ! Paper submissions are welcome! The deadline is 06/02.
Quoted tweet: 0 replies · 6 reposts · 35 likes
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
2 years
Want to use a simulator for embodied AI research but find it hard to start? Our tutorial offers a practical guideline for building your simulated environment. It is happening now @ CVPR2022.
@HaoSuLabUCSD
Hao Su Lab
2 years
Ever thought of applying your vision algorithm to embodied tasks but cannot find one? Why not make one yourself? Our #CVPR2022 tutorial is starting Monday at 13:00 CT! We'll show you hand-by-hand how to build Embodied AI environments from scratch!
Quoted tweet: 1 reply · 11 reposts · 80 likes
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
2 months
@haosu_twitr @Jiayuan_Gu @xiaolonw I am grateful for your congratulations and support throughout my academic journey, from my time as a master's student in your lab in 2018 to the completion of my PhD. Your mentorship has been instrumental in shaping my research skills and helping me grow as a scholar. Thank you
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
2 years
Really enjoyable conference experience @corl_conf in New Zealand. Fantastic views and, most importantly, wonderful food😛
@corl_conf
Conference on Robot Learning
2 years
And one more! Here is our overview of last year's conference #CoRL2022 !
Quoted tweet: 0 replies · 6 reposts · 13 likes
0 replies · 0 reposts · 5 likes
@QinYuzhe
Yuzhe Qin
4 months
@LerrelPinto Thanks, Lerrel! We've gained so much from Open Teach too! Technically, VisionPro's 8 embedded cameras enable broader workspace hand tracking, allowing for larger operator motions. Meanwhile, Quest offers affordability and a robust open-source community!
0 replies · 0 reposts · 3 likes
@QinYuzhe
Yuzhe Qin
1 year
@kevin_zakka Congrats! It looks like the mechanical structure of the gripper works pretty well in this demo! I am curious whether it can still stay parallel when interacting with objects. Will the equality constraint break under large forces between the gripper and object?
1 reply · 0 reposts · 4 likes
@QinYuzhe
Yuzhe Qin
2 years
@dhanushisrad Cool system! Just wondering why the human hand is facing upward while the robot hand faces downward? Is it related to the sensor position, i.e., needing to capture the hand pose from the VR headset?
1 reply · 0 reposts · 4 likes
@QinYuzhe
Yuzhe Qin
3 months
@anjjei @dngxngxng3 @jiyuezh Thank you for testing our system and providing feedback~
0 replies · 0 reposts · 4 likes
@QinYuzhe
Yuzhe Qin
4 months
@HarryXu12 Thanks, Huazhe. The person who writes the controller code should be the first to test it 😉
1 reply · 0 reposts · 4 likes
@QinYuzhe
Yuzhe Qin
3 months
@TairanHe99 @chatgpt4o Cool work Tairan! Excited to see more awesome work from the Human2Humanoid family😃
1 reply · 0 reposts · 3 likes
@QinYuzhe
Yuzhe Qin
4 months
@YunfanJiang Really nice sim2real performance! I was curious how you simulate the soft TPU-printed gripper in the simulator.
1 reply · 0 reposts · 3 likes
@QinYuzhe
Yuzhe Qin
4 months
@kai_junge Fantastic job! I'm curious, is the game controller in your left hand used for overarching control functions of the robot, such as an emergency stop or other high-level commands?
1 reply · 0 reposts · 2 likes
@QinYuzhe
Yuzhe Qin
2 years
Cool project! I think it would be even cooler if the operator could drag the link pose in the right GUI window, as Blender does, and have the left URDF editor reflect the change.
@ihuicatls
Isabel Paredes
2 years
The first URDF live editor for JupyterLab is now released!!
Quoted tweet: 3 replies · 60 reposts · 273 likes
0 replies · 0 reposts · 3 likes
@QinYuzhe
Yuzhe Qin
2 months
@binghao_huang @yh_kris Thank you Binghao, the best dexterous hand buddy. Working closely with you to tackle the challenges of the Allegro hand has been an incredible learning experience for me. Your knowledge and dedication have been truly inspiring. Wish you remarkable success in your research, and I
1 reply · 0 reposts · 3 likes
@QinYuzhe
Yuzhe Qin
8 months
A special robotics dataset from @litian_liang , pushing the understanding of multi-modal manipulation.
@litian_liang
Litian Liang
8 months
Introducing Robo360 dataset 🚀, the first real-world omnispective multi-view and multi-material robotic manipulation dataset. Robo360 captures synchronized multi-modal robot-object interaction data (video, audio, proprioception, control) to facilitate research in dynamic
Quoted tweet: 6 replies · 33 reposts · 175 likes
0 replies · 0 reposts · 3 likes
@QinYuzhe
Yuzhe Qin
3 years
@kevin_zakka Maybe one of the most important reasons to use type hints in Python is better code completion.
1 reply · 0 reposts · 2 likes
@QinYuzhe
Yuzhe Qin
1 year
@Vikashplus @MyoSuite Great presentation! Learned a lot from the innovative perspective connecting dexterity and physiological evidence.
1 reply · 0 reposts · 2 likes
@QinYuzhe
Yuzhe Qin
6 months
@chris_j_paxton More robot laundry, and more human poetry
0 replies · 0 reposts · 2 likes
@QinYuzhe
Yuzhe Qin
7 months
Why invest in costly sim2real transfer when we can directly collect teleop data on real robots? Because physical simulators allow for extensive, physically accurate data augmentations—impossible for real-world demos but invaluable for policy generalization.
1 reply · 0 reposts · 2 likes
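To illustrate why such augmentation is "impossible for real-world demos", here is a hedged sketch; the sim interface below is hypothetical, not CyberDemo's API:

def augment_demo(sim, demo, num_variants=100):
    """Replay one teleoperated demo under randomized conditions; the physics
    engine guarantees every kept variant stays physically valid."""
    variants = []
    for _ in range(num_variants):
        sim.reset()
        sim.randomize_object_pose(noise=0.02)  # perturb the initial object pose
        sim.randomize_lighting()               # appearance-level variation
        sim.randomize_camera(jitter_deg=5.0)   # new viewpoints for free
        rollout = sim.replay(demo.actions)     # re-execute the recorded actions
        if rollout.success:                    # keep only successful replays
            variants.append(rollout)
    return variants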
@QinYuzhe
Yuzhe Qin
2 years
@zipengfu @pathak2206 @xuxin_cheng Congrats on the great work!
0 replies · 0 reposts · 2 likes
@QinYuzhe
Yuzhe Qin
2 months
@Jerry_XU_Jiarui Good old times when I still had more hair😅
1 reply · 0 reposts · 2 likes