Xuxin Cheng

@xuxin_cheng

Followers: 2,736 · Following: 950 · Media: 18 · Statuses: 510

Robot Learning; Embodied AI; PhD @UCSanDiego MS @CarnegieMellon Prev @UCBerkeley

San Diego, CA
Joined November 2016
Pinned Tweet
@xuxin_cheng
Xuxin Cheng
4 months
Introducing Open-𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧🤖: We need an intuitive, remote teleoperation interface to collect more robot data. 𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧 lets you immersively operate a robot even if you are 3,000 miles away, like in the movie 𝘈𝘷𝘢𝘵𝘢𝘳. Open-sourced!
37
227
1K
@xuxin_cheng
Xuxin Cheng
4 months
Many folks are interested in the latency and speed of 𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧. We put the operator and the robot side by side so you can get a better sense of both. Check out more in the main thread 🧵.
15
102
534
@xuxin_cheng
Xuxin Cheng
6 months
🤖Introducing 📺𝗢𝗽𝗲𝗻-𝗧𝗲𝗹𝗲𝗩𝗶𝘀𝗶𝗼𝗻: web-based teleoperation software! 🌐Open-source, cross-platform (VisionPro & Quest) with real-time stereo vision feedback. 🕹️Easy-to-use hand, wrist, and head pose streaming. Code:
14
91
378
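For a sense of what "hand, wrist, head pose streaming" involves, here is a minimal sketch of a browser-to-robot pose relay. This is not the Open-TeleVision code (which builds on Vuer); the WebSocket transport and every message field name below are assumptions for illustration only.

```python
import asyncio
import json

import websockets  # assumed transport; Open-TeleVision itself builds on Vuer


async def handle_poses(websocket):
    # The WebXR page running in the headset would send one JSON message per frame.
    async for message in websocket:
        poses = json.loads(message)
        # Hypothetical payload layout:
        head = poses["head"]            # headset pose      -> active-neck target
        wrists = poses["wrists"]        # L/R wrist poses   -> arm IK targets
        fingers = poses["hand_joints"]  # hand keypoints    -> finger retargeting
        print(head, wrists, fingers)    # stand-in for the robot control side


async def main():
    # In practice WebXR requires the page to be served over HTTPS/WSS.
    async with websockets.serve(handle_poses, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever


asyncio.run(main())
```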
@xuxin_cheng
Xuxin Cheng
5 months
A 14k USD humanoid robot is coming: 23-43 DoFs, dexterous hands, 35 kg. Insane, @UnitreeRobotics. The H1 is ~100k USD, released one year ago.
21
69
371
@xuxin_cheng
Xuxin Cheng
4 months
Introducing Open-𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧: Autonomous Skills. The high-quality data can be readily used for imitation learning. We train fully autonomous policies on a series of precise, long-horizon tasks, including insertion, sorting, in-hand passing, and folding.
1
49
258
@xuxin_cheng
Xuxin Cheng
2 months
3 papers accepted to CoRL 2024! All open-sourced. TeleVision: ACE: VBC:
8
10
192
@xuxin_cheng
Xuxin Cheng
3 months
Try 𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧 now without any setup: it will display a 3D movie recorded during teleoperation. If you open the link with a VR device, you will see it in 3D and can also see your hand keypoints.
4
14
136
@xuxin_cheng
Xuxin Cheng
3 months
We are presenting 𝐄𝐱𝐩𝐫𝐞𝐬𝐬𝐢𝐯𝐞 𝐇𝐮𝐦𝐚𝐧𝐨𝐢𝐝 at #RSS2024 on July 18th at 11:30am by @JiYandong! The code is now 𝐨𝐩𝐞𝐧-𝐬𝐨𝐮𝐫𝐜𝐞𝐝! 👉 All 4 of my recent works are now open-sourced. 👇
@xiaolonw
Xiaolong Wang
8 months
Let’s think about humanoid robots beyond carrying boxes. How about having the humanoid come out the door, interact with humans, and even dance? Introducing Expressive Whole-Body Control for Humanoid Robots: See how our robot performs rich, diverse,
93
209
1K
5
17
126
@xuxin_cheng
Xuxin Cheng
3 months
TeleVision plays a role in the latest exciting progress at #NVIDIA. Can't wait to see what's next at GEAR!
@DrJimFan
Jim Fan
3 months
We are building a state-of-the-art Apple Vision Pro -> humanoid robot stack. @xiaolonw and his students laid the foundation. Check out their works!
2
7
70
1
8
113
@xuxin_cheng
Xuxin Cheng
1 year
Didn't think this was possible, but we made it! Glad to finish my master's with this exciting project with amazing collaborators @teenhaci, @anag004, and @pathak2206!
@pathak2206
Deepak Pathak
1 year
Even after 4yrs of locomotion research, we keep getting surprised by how far we can push the limits of legged robots! We report a major update 🚀🤖 Extreme Parkour: extremely long & high jumps, ramp, handstand, etc. all with a single neural net! 🧵(1/n)
26
231
1K
7
10
95
@xuxin_cheng
Xuxin Cheng
8 months
Robots have always been asked to work. We explore the other direction: what else robots can do. Expressive Humanoid takes motions from mocap, generative models, and video-to-motion models, and then shows them in the real world with whole-body movements, 💃enabling natural human-robot interactions.
4
16
88
@xuxin_cheng
Xuxin Cheng
3 months
Check out TeleVision, which is behind this cool demo from Unitree.
@UnitreeRobotics
Unitree
3 months
[Open Source] Unitree First-View Teleoperation for Humanoid Robots. To make data collection for humanoid robots more convenient, we adapted existing solutions and open-sourced the result. Github: #Unitree #Humanoid #AGI
19
124
615
1
12
86
@xuxin_cheng
Xuxin Cheng
4 months
Humanoid whole-body grasping and trajectory tracking for any object. Can’t wait to see it working on real robots!
@zhengyiluo
Zhengyi “Zen” Luo
4 months
Introducing Omnigrasp: Grasping Diverse Objects with Simulated Humanoids. With Omnigrasp, we show that we can control a humanoid equipped with dexterous hands to grasp diverse objects (>1200) and follow diverse trajectories, with one policy! 🌐: 📜:
3
71
313
2
12
85
@xuxin_cheng
Xuxin Cheng
2 months
Check out our low-cost, 3D-printable exoskeleton system that can teleoperate many robots! Open-sourced!
@AaronYANG2000
Shiqi Yang
2 months
Introducing ACE - A Cross-Platform Visual-Exoskeletons System! Control all your robots with precision, all at once, with minimal cost, quick assembly, and easy wearability! We’ve open-sourced hardware, software, and step-by-step assembly guides. Get started today! 👉🏻
4
64
318
0
9
75
@xuxin_cheng
Xuxin Cheng
6 months
Dr. Jim Fan mentioned it is "non-trivial to set up the software to have first-person video streamed in and precise control streamed out". Try out our open-sourced teleoperation software 📺Open-TeleVision: (Under continuous improvement!)
@DrJimFan
Jim Fan
6 months
Congrats to @Tesla_Optimus team on another stellar update! The video gives us a peek at their human data collection farm, which I believe is Optimus' biggest lead. What does it take to build such a pipeline? Optimus nailed all of the following: 1. Optimus hands are among the
131
421
3K
1
10
68
@xuxin_cheng
Xuxin Cheng
22 days
Check out our "Doggybot" series of works: Helpful Doggybot. 1 DoF gripper + whole-body movements enables helpful fetching and agile traversal of home environments! Feel free to include "Doggybot" in your project to explore more what quadrupeds can do!
@Qi_Wu577
Qi Wu
22 days
Introducing Helpful DoggyBot🐕, a legged mobile manipulation system: - A quadruped with a mouth - Agile whole-body skills like climbing and tilting - Open-world object fetching using VLMs - No real-world training data required!
5
44
253
1
12
66
@xuxin_cheng
Xuxin Cheng
9 months
Extreme Parkour was accepted at @ieee_ras_icra 2024. Wouldn't have been possible without the joint efforts of @teenhaci @anag004 @pathak2206!
2
4
63
@xuxin_cheng
Xuxin Cheng
3 months
Knowing why something fails is extremely important for research. It's really rare to see papers like this on lessons learned from failures.
@HaozhiQ
Haozhi Qi
3 months
When I started my first project on in-hand manipulation, I thought it would be super cool but also quite challenging to make my robot hands spin pens. After almost 2.5 years of effort in this line of research, we have finally succeeded in making our robot hand "spin pens."
22
80
526
0
7
61
@xuxin_cheng
Xuxin Cheng
4 months
Not possible without the joint efforts of @Jialong_LI_UIM @AaronYANG2000 @EpisodeYang @xiaolonw Try now even if you don’t have a VR device! More videos, code, hardware, and dataset at:
2
2
60
@xuxin_cheng
Xuxin Cheng
3 months
The tracking performance is truly amazing! Congrats!
@haqhuy
Huy Ha
3 months
For stability, we track gripper actions in the world frame instead of the body frame used by most prior works. So, when our robot is pushed, its arm compensates for the perturbation. To support real-world deployment, we mount an iPhone on the dog’s 🍑, running a custom iOS ARKit app.
3
6
51
0
7
48
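A toy example of the world-frame tracking idea from the quoted thread: because the commanded gripper target is fixed in the world, a push that moves the base shows up as an opposite shift in the body-frame command, which the arm then cancels. This is a hedged numpy sketch, not the authors' implementation.

```python
import numpy as np


def body_frame_target(T_world_body: np.ndarray, p_world: np.ndarray) -> np.ndarray:
    """Express a world-frame gripper target in the robot's body frame."""
    R, t = T_world_body[:3, :3], T_world_body[:3, 3]
    return R.T @ (p_world - t)


p_world = np.array([1.0, 0.0, 0.5])   # desired gripper position, world frame

T = np.eye(4)                          # base at the origin
print(body_frame_target(T, p_world))   # -> [1.  0.  0.5]

T[1, 3] = 0.2                          # base gets pushed 0.2 m sideways
print(body_frame_target(T, p_world))   # -> [1.  -0.2  0.5]; the arm compensates
```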
@xuxin_cheng
Xuxin Cheng
2 months
A summary of our findings on data collection for robotics, from three projects we have built and open-sourced. We hope the open-sourced software and hardware will benefit the community and converge toward a better system.
@xiaolonw
Xiaolong Wang
2 months
3
35
213
3
4
46
@xuxin_cheng
Xuxin Cheng
4 months
Teleoperating robots will be as common as driving your car (also robots). Congratulations to the team on the amazing whole-body results towards this goal!
@zipengfu
Zipeng Fu
4 months
Introducing HumanPlus - Shadowing part. Humanoids are born for using human data. We build a real-time shadowing system using a single RGB camera and a whole-body policy for cloning human motion. Examples: boxing🥊, playing the piano🎹/ping pong, tossing, typing. Open-sourced!
17
165
770
1
7
45
@xuxin_cheng
Xuxin Cheng
11 months
Check out our live demos at #CoRL2023!
@pathak2206
Deepak Pathak
11 months
Live demos of our parkour robot in Atlanta during @CoRL2023 . 🤖🚀 Check out this uncut 2mins clip of our robot nonstop climbing, leaping across gaps, and jumping down from boxes! More videos: A fun story below how we overcame hardware issues onsite👇
2
12
155
1
0
43
@xuxin_cheng
Xuxin Cheng
5 months
Check out Extreme Parkour, presented by Kexin, 16:30-18:00 on May 15 in AX F205 @icra2024.
@teenhaci
Kexin Shi
5 months
I will give an oral presentation about Extreme Parkour with Legged Robots at #ICRA2024 . It will happen at 16:30-18:00 on May 15 in AX F205. Feel free to join in if you are interested in RL for Locomotion! Link: Co-authors: @xuxin_cheng @anag004 @pathak2206
4
8
76
1
6
42
@xuxin_cheng
Xuxin Cheng
2 years
Our latest work on extending quadruped legs beyond locomotion toward manipulation dexterity!
@pathak2206
Deepak Pathak
2 years
While we have made progress towards replicating the agility of animal mobility in robots, legs aren't just for walking; they are extended arms! Our #ICRA 2023 paper enables legs to act as manipulators for agile tasks: climbing walls, pressing buttons, etc.
3
72
357
0
5
40
@xuxin_cheng
Xuxin Cheng
4 months
Real-time stereo video streaming provides spatial/depth understanding, so the operator is confident about the objects’ locations. This enables fine manipulation with challenging objects such as transparent boxes.
2
4
37
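To illustrate the stereo-streaming idea (this is a sketch, not the actual TeleVision pipeline): grab one frame per eye and concatenate them side by side, so a VR client can split the image and render each half to the matching eye for depth perception. Camera indices and the display-only output are assumptions.

```python
import cv2

left_cam = cv2.VideoCapture(0)   # camera indices are machine-specific
right_cam = cv2.VideoCapture(1)

while True:
    ok_left, left = left_cam.read()
    ok_right, right = right_cam.read()
    if not (ok_left and ok_right):
        break
    # One frame per eye, concatenated into an [H, 2W, 3] side-by-side image;
    # a VR client splits it and shows each half to the matching eye.
    sbs = cv2.hconcat([left, right])
    cv2.imshow("stereo preview", sbs)   # a real system would encode + stream this
    if cv2.waitKey(1) == 27:            # Esc to quit
        break

left_cam.release()
right_cam.release()
cv2.destroyAllWindows()
```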
@xuxin_cheng
Xuxin Cheng
7 months
Thanks to @NVIDIAGTC for featuring our live demo at GTC! Wouldn’t be possible without @UnitreeRobotics’s technical support. Website:
@NVIDIAGTC
NVIDIA GTC
7 months
Dancing robots at #GTC24 🕺 You can feel the excitement leading up to NVIDIA CEO Jensen Huang's keynote on Monday just outside the SAP center. @UnitreeRobotics
6
27
146
0
2
38
@xuxin_cheng
Xuxin Cheng
2 years
Our latest work on Deep Whole-Body Control will appear at @corl_conf 2022. A detailed explanation thread is coming soon. Stay tuned!
@_akhaliq
AK
2 years
Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion abs: project page:
3
58
298
1
8
37
@xuxin_cheng
Xuxin Cheng
6 months
LLMs have proven able to engineer rewards & sim2real params that make things work in the real world.
@JasonMa2020
Jason Ma
6 months
Introducing DrEureka🎓, our latest effort pushing the frontier of robot learning using LLMs! DrEureka uses LLMs to automatically design reward functions and tune physics parameters to enable sim-to-real robot learning. DrEureka can propose effective sim-to-real configurations
25
118
601
0
3
37
@xuxin_cheng
Xuxin Cheng
7 months
Now you can teleoperate an RL-controlled quadruped for behavior cloning (BC). Very cool!
@ZhengmaoHe
Zhengmao He
7 months
Quadruped robots already have four manipulators (aka legs), do we really need extra arms for them? 🤔 Dive into our latest work: quadruped robots handle complex loco-manipulation tasks autonomously with just legs! 🐾
1
21
91
1
3
34
@xuxin_cheng
Xuxin Cheng
8 months
Further testing our walking robustness on all kinds of terrain.
@xiaolonw
Xiaolong Wang
8 months
Walking in the morning @UCSanDiego Operators: @xuxin_cheng @JiYandong
12
19
190
1
2
30
@xuxin_cheng
Xuxin Cheng
8 months
Human motions will provide immense amounts of data for robot control!
@TairanHe99
Tairan He
8 months
🤖 Introducing H2O (Human2HumanOid): - 🧠 An RL-based human-to-humanoid real-time whole-body teleoperation framework - 💃 Scalable retargeting and training using large human motion dataset - 🎥 With just an RGB camera, everyone can teleoperate a full-sized humanoid to perform
23
98
486
0
0
29
@xuxin_cheng
Xuxin Cheng
6 months
Very natural motions learned from animal videos!
@eastskykang
Dongho Kang
6 months
Introducing my recent collaboration work, "Spatio-Temporal Motion Retargeting for Quadruped Robots" co-authored with Taerim Yoon, Seungmin Kim, Minsung Ahn, Stelian Coros, and Sungjoon Choi.
2
6
57
1
2
28
@xuxin_cheng
Xuxin Cheng
7 months
Check out our new work on autonomous whole-body picking for a large quadruped robot! 🦾 Sim-to-real again shows generalization and scalability on a range of well-defined tasks.
@xiaolonw
Xiaolong Wang
7 months
I have been cleaning my daughter's mess for more than two years now. Last weekend our robot came to home to do the job for me. 🤖 Our new work on visual whole-body control learns a policy to coordinate the robot legs and arms for mobile manipulation. See
23
115
653
0
2
27
@xuxin_cheng
Xuxin Cheng
4 months
The system is easily accessible from any device with a web browser (Vision Pro, Quest, even Mac, iPad, iPhone…).
2
1
22
@xuxin_cheng
Xuxin Cheng
4 months
An active neck plus IK and retargeting enables intuitive perception and actuation for the operator: just look at the point of interest and the robot will follow the same head movements.
1
2
23
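A hedged sketch of the head-following behavior described above: read the headset orientation, extract yaw and pitch, and clip them to the neck's joint limits. The 2-DoF neck assumption and the limit values are illustrative, not the actual TeleVision retargeting code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def neck_command(head_quat_xyzw, yaw_limit=1.0, pitch_limit=0.6):
    """Map a headset orientation quaternion to clipped neck joint targets (rad)."""
    # Intrinsic Z-Y-X decomposition: yaw about Z, then pitch about Y.
    yaw, pitch, _roll = R.from_quat(head_quat_xyzw).as_euler("ZYX")
    return (np.clip(yaw, -yaw_limit, yaw_limit),
            np.clip(pitch, -pitch_limit, pitch_limit))


# Operator turns their head ~30 degrees to the left:
q = R.from_euler("ZYX", [np.radians(30), 0.0, 0.0]).as_quat()
print(neck_command(q))  # -> (~0.52, 0.0): the neck yaw follows the gaze
```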
@xuxin_cheng
Xuxin Cheng
7 months
Really cool work on loco-manipulation!
@changyi_lin1
Changyi Lin
7 months
LocoMan = Quadrupedal Robot + 2 * Loco-Manipulator Powered by dual lightweight 3-DoF Loco-Manipulators and the Whole-Body Controller, LocoMan achieves various challenging tasks, such as manipulation in narrow spaces and bimanual-manipulation. 👇👇👇
6
48
220
1
5
21
@xuxin_cheng
Xuxin Cheng
6 months
On second thought, it might be even harder for a humanoid to do parkour on these obstacles than for quadrupeds. Congrats and amazing results!
@ziwenzhuang_leo
Ziwen Zhuang
6 months
Introducing 🤖🏃Humanoid Parkour Learning. Using vision and proprioception, our humanoid can jump over hurdles and platforms, leap over gaps, walk up/down stairs, and much more. 🖥️Check our website at 📺Stay tuned for more videos.
14
92
458
0
1
21
@xuxin_cheng
Xuxin Cheng
4 months
@tonyzzhao Maybe it is not appropriate to claim "fully open source": only the training code is open-sourced; neither the hardware code nor the pose-estimation integration is. Please correct me if I have any misunderstanding.
1
0
19
@xuxin_cheng
Xuxin Cheng
9 months
Mobile manipulation now goes into the wild with custom-built, low-cost hardware. Imitation learning + online RL ➡️ automatic adaptation to novel in-the-wild articulated objects.
@Haoyu_Xiong_
Haoyu Xiong
9 months
Introducing Open-World Mobile Manipulation 🦾🌍 – A full-stack approach for operating articulated objects in open-ended unstructured environments: Unlocking doors with lever handles/ round knobs/ spring-loaded hinges 🔓🚪 Opening cabinets, drawers, and refrigerators 🗄️ 👇
30
104
774
0
1
18
@xuxin_cheng
Xuxin Cheng
7 months
Feature-injected generalizable NeRF now helps real robots understand the environment and execute tasks in the wild!
@xiaolonw
Xiaolong Wang
7 months
We have seen a lot of legged robots doing navigation in the wild. But how about mobile manipulation in the wild? I have been pushing the direction of learning a unified, efficient, and dynamic 3D representation of scenes (for navigation) and objects (for manipulation) for the
5
49
237
0
2
17
@xuxin_cheng
Xuxin Cheng
3 months
Congrats! Check out more about open-sourced Extreme-Parkour project:
@brianzhan1
Brian Zhan
3 months
📣 Excited to share @CRV's investment in the @SkildAI Series A round! When I was building robots ~10 years ago for the Rehabilitation Institute of Chicago, one of the biggest challenges was having robots perform extreme parkour. Two researchers @pathak2206 and @gupta_abhinav_
2
4
70
0
0
17
@xuxin_cheng
Xuxin Cheng
6 months
Open source robotics is the way!
@RemiCadene
Remi Cadene
6 months
Meet LeRobot, my first library at @huggingface robotics 🤗 The next step of AI development is its application to our physical world. Thus, we are building a community-driven effort around AI for robotics, and it's open to everyone! Take a look at the code:
35
216
837
0
0
17
@xuxin_cheng
Xuxin Cheng
6 months
xArm can do skincare for you! Check out Bunny-VisionPro: open-sourced vision-based teleoperation with motion retargeting code.
@xiaolonw
Xiaolong Wang
6 months
Tesla Optimus can arrange batteries in their factories, ours can do skincare (on @QinYuzhe )! We opensource Bunny-VisionPro, a teleoperation system for bimanual hand manipulation. The users can control the robot hands in real time using VisionPro, flexible like a bunny. 🐇
10
62
335
0
1
17
@xuxin_cheng
Xuxin Cheng
9 months
Perhaps the most impressive bipedal locomotion result I have seen, from @ZhongyuLi4!
@ZhongyuLi4
Zhongyu Li
9 months
Interested in making your bipedal robots athletes? We summarized our RL work on creating robust & adaptive controllers for general bipedal skills. 400m dash, running over terrains/against perturbations, targeted jumping, compliant walking: not a problem for bipeds now.🧵👇
15
89
451
0
0
15
@xuxin_cheng
Xuxin Cheng
6 months
Really impressive! Parallel grippers can already achieve so many complex tasks. Looking forward to the moment when the one-take video extends to hours and robots are the only employees in a robot repair workshop!
@tonyzzhao
Tony Z. Zhao
6 months
Introducing 𝐀𝐋𝐎𝐇𝐀 𝐔𝐧𝐥𝐞𝐚𝐬𝐡𝐞𝐝 🌋 - Pushing the boundaries of dexterity with low-cost robots and AI. @GoogleDeepMind Finally got to share some videos after a few months. Robots are fully autonomous filmed in one continuous shot. Enjoy!
55
342
2K
1
0
15
@xuxin_cheng
Xuxin Cheng
6 months
Teleoperating a robot to use a joystick to play a game is insane!
@ToruO_O
Toru
6 months
To address the first challenge, we develop HATO ("dove" 🕊️ in Japanese), a low-cost Hands-Arms TeleOperation system using Meta Quest 2. Our system demonstrates precise motion control capabilities and human-like dexterity -- we even controlled our robot to play Hollow Knight!
1
6
29
0
1
14
@xuxin_cheng
Xuxin Cheng
8 months
3D visual-language features help semantic understanding of the environment!
@GeYan_21
Ge Yan
8 months
Introducing DNAct: Diffusion Guided Multi-Task 3D Policy Learning. We combine neural rendering pre-training and diffusion models to learn a generalizable policy with a strong 3D semantic scene understanding.
2
20
89
0
4
14
@xuxin_cheng
Xuxin Cheng
5 months
Very cool and fast progress! Congratulations!
@TairanHe99
Tairan He
5 months
Introducing OmniH2O, a learning-based system for whole-body humanoid teleop and autonomy: 🦾Robust loco-mani policy 🦸Universal teleop interface: VR, verbal, RGB 🧠Autonomy via @chatgpt4o or imitation 🔗Release the first whole-body humanoid dataset
24
74
417
1
0
13
@xuxin_cheng
Xuxin Cheng
7 months
Nice benchmark on humanoid locomotion and manipulation. Model-based RL like TD-MPC2 showed very impressive performance! We might want hierarchical policies for long-horizon tasks.
@carlo_sferrazza
Carlo Sferrazza
7 months
Humanoids 🤖 will do anything humans can do. But are state-of-the-art algorithms up to the challenge? Introducing HumanoidBench, the first-of-its-kind simulated humanoid benchmark with 27 distinct whole-body tasks requiring intricate long-horizon planning and coordination. 🧵👇
8
92
335
0
1
13
@xuxin_cheng
Xuxin Cheng
5 months
Glad to see TeleVision is used by 1X! It is really a magical experience to transmit your mind to a robot embodiment. Maybe in the future everyone will get used to it.
@Scobleizer
Robert Scoble
5 months
A week ago I embodied myself in a humanoid robot from inside a VR headset. It was very weird to look down and see I had robot hands. Here is how.
12
26
108
0
1
13
@xuxin_cheng
Xuxin Cheng
4 months
@StamatisTWIY We built upon the Unitree H1 robot and the Fourier GR-1 robot. The hands are from Inspire Robots. The active camera part of the H1 is customized and open-sourced. We use unmodified hardware for the GR-1.
1
2
12
@xuxin_cheng
Xuxin Cheng
3 months
VR’s potential is changing how we interact with virtual and real. Very cool project!
@JiashunWang
Jiashun Wang
3 months
Thrilled to share our #SIGGRAPH2024 work on physics-based character animation for ping pong!🏓We show not only agent-agent matches but also human-agent interactions via VR, allowing humans to challenge our trained agents!🎮 🌐: 📜:
5
20
112
2
1
12
@xuxin_cheng
Xuxin Cheng
4 months
@EpisodeYang is truly a great collaborator. Our system is built upon Vuer, developed by Ge. Vuer's capabilities go beyond this; it can be useful for all kinds of robotics visualizations.
@BoyuanChen0
Boyuan Chen
4 months
@TairanHe99 @EpisodeYang is an incredible full stack engineer in building this kind of complex software stack. I know the system is good the moment I saw his name on it!
1
1
9
1
2
11
@xuxin_cheng
Xuxin Cheng
2 years
Excited to share our recent work on Deep Whole-Body Control. This work will appear in #CoRL '22 (Oral). And don’t miss the live demo sessions at #CoRL '22 in New Zealand!🤖
@pathak2206
Deepak Pathak
2 years
An arm can increase the utility of legged robots. But due to high dimensionality, most prior methods decouple learning for legs & arm. In #CoRL '22 (Oral), we present an end-to-end approach for *whole-body control* to get dynamic behaviors on the robot.
7
47
299
0
0
11
@xuxin_cheng
Xuxin Cheng
1 year
Glad to see more and more open-sourced robotics projects! 🎉
@breadli428
Chenhao Li
1 year
Exciting news! We've just released the source code for our WASABI project, a CoRL 2022 best paper finalist, aimed at mastering agile skills through adversarial imitation from rough, partial, handheld demonstrations! Check it out: . 🚀
1
8
79
0
0
10
@xuxin_cheng
Xuxin Cheng
3 months
Open-TeleVision: Visual-Whole-Body: Expressive-Humanoid: Extreme-Parkour:
1
3
10
@xuxin_cheng
Xuxin Cheng
1 year
Thanks for covering our work! Intel RealSense cameras have empowered our robot with unprecedented agile skills!
@IntelRealSense
Intel RealSense
1 year
This #ICRA2023 paper is being presented this week - featuring #IntelRealSense technology. Very impressive!!
0
5
14
1
0
10
@xuxin_cheng
Xuxin Cheng
6 months
So much fun just to watch robots play! I see a lot of potential in what robots can do besides working.
@GoogleDeepMind
Google DeepMind
6 months
Soccer players have to master a range of dynamic skills, from turning and kicking to chasing a ball. How could robots do the same? ⚽ We trained our AI agents to demonstrate a range of agile behaviors using reinforcement learning. Here’s how. 🧵
132
521
2K
0
2
10
@xuxin_cheng
Xuxin Cheng
6 months
A great post about the concept of machine unlearning, even as current models learn more and more.
@kenziyuliu
Ken Liu
6 months
The idea of "machine unlearning" is getting attention lately. Been thinking a lot about it recently and decided to write a long post: 📰 Unlearning is no longer just about privacy and right-to-be-forgotten since foundation models. I hope to give a gentle
22
162
749
0
1
8
@xuxin_cheng
Xuxin Cheng
1 year
Live talks happening now across 3 workshops @corl_conf ! Deployable at Station; Generalist at Sequoia 1; Roboletics at Muse2!
0
0
7
@xuxin_cheng
Xuxin Cheng
4 months
@tonyzzhao Thanks Tony! Academia is built upon open-sourced projects and shared knowledge. We improved upon ACT, which is a really good codebase to use. Special thanks for open-sourcing it!
1
0
7
@xuxin_cheng
Xuxin Cheng
1 year
World models with limited robot data? Check out @mendonca_rl on shared world models between humans and robots!
@mendonca_rl
Russell Mendonca
1 year
World models are promising for enabling general-purpose robots. But how do we train them since robot data is limited? Our solution: Shared world models on human videos and robots using affordances as the joint action space. Pre-train on human video, then fine-tune to robot tasks!
4
30
129
0
0
6
@xuxin_cheng
Xuxin Cheng
3 months
@gezhang001 Congrats!
1
0
6
@xuxin_cheng
Xuxin Cheng
6 months
0
3
5
@xuxin_cheng
Xuxin Cheng
2 years
Cool work by @anag004 @ashishkr9311. Pushing locomotion to the next level with visual feedback.
@pathak2206
Deepak Pathak
2 years
After 3yrs of locomotion research, we report a major update in our #CoRL2022 (Oral) paper: vision-based locomotion. Our small, safe, low-cost robot can walk almost any terrain: high stairs, stepping stones, gaps, rocks. Stair for this robot is like climbing walls for humans.
34
159
895
0
1
5
@xuxin_cheng
Xuxin Cheng
2 months
Bunny: Open-TV: ACE:
1
0
5
@xuxin_cheng
Xuxin Cheng
7 months
This is truly incredible.
@UnitreeRobotics
Unitree
7 months
Unitree H1, the World's First Full-size Motor-Driven Humanoid Robot, Flips on the Ground. Unitree H1 Deep Reinforcement Learning In-place Flipping! #Unitree #UnitreeRobotics #AI #Robotics #Humanoidrobots #Worldrecord #Flips #EmbodiedAI #ArtificialIntelligence #Technology #Innovation
69
414
2K
0
0
5
@xuxin_cheng
Xuxin Cheng
5 months
@Haoyu_Xiong_ I think VR devices can be the core of the system. Depending on the task, accessories like haptic gloves or joint-copying arms could be used.
0
0
4
@xuxin_cheng
Xuxin Cheng
7 months
@kenny__shaw Thanks Kenny. Unitree’s products are always very robust and affordable.
0
0
4
@xuxin_cheng
Xuxin Cheng
4 months
Remote work can be extended to physical work!
@nikitonsky
Никита Питонский
4 months
Hear me out: 1. Put the robots into the office, 1 robot per engineer. 2. Buy each robot a laptop. 3. Connect and control from your home. 4. Remote work solved!
0
1
16
1
0
4
@xuxin_cheng
Xuxin Cheng
4 months
We use a powerful DINOv2 visual backbone and a transformer to process multimodal information.
1
0
4
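A sketch of the described architecture under stated assumptions: a frozen DINOv2 ViT-S/14 backbone (a real torch hub model) supplies patch tokens, a projected proprioception token is appended, and a transformer encoder fuses them before an action head. The token layout, dimensions, and action space here are illustrative; the actual policy may differ.

```python
import torch
import torch.nn as nn

# DINOv2 ViT-S/14 from torch hub (real model; downloads weights on first run).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()


class PolicyHead(nn.Module):
    def __init__(self, dim=384, proprio_dim=26, action_dim=28):  # sizes assumed
        super().__init__()
        self.proprio_proj = nn.Linear(proprio_dim, dim)  # proprio -> one token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.action_head = nn.Linear(dim, action_dim)

    def forward(self, image, proprio):
        with torch.no_grad():  # keep the visual backbone frozen
            feats = backbone.forward_features(image)["x_norm_patchtokens"]
        # Concatenate visual patch tokens with the proprioception token.
        tokens = torch.cat([feats, self.proprio_proj(proprio)[:, None]], dim=1)
        fused = self.encoder(tokens)
        return self.action_head(fused[:, -1])  # read action off the proprio token


policy = PolicyHead()
img = torch.randn(1, 3, 224, 224)   # 224 is divisible by the 14-px patch size
act = policy(img, torch.randn(1, 26))
print(act.shape)  # torch.Size([1, 28])
```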
@xuxin_cheng
Xuxin Cheng
6 months
Agreed. If you look closely at our demo, the two thumb abduction motors are both broken, but we can still do this task.
@ToruO_O
Toru
6 months
Indeed, we find that hands, even when used with limited DoFs, can perform a large set of tasks & outperform grippers because of the additional stability! This also allows for smoother and more intuitive teleop (e.g. no retargeting issue), leading to much faster data collection :D
0
0
6
2
0
3
@xuxin_cheng
Xuxin Cheng
3 months
The service is hosted on a small EC2 instance on AWS. The resolution is set low for preview in the beta stage.
0
0
3
@xuxin_cheng
Xuxin Cheng
8 months
Also inspired by Atlas dancing to music. Atlas' motion is always elegant, expressive, and ahead of its time. We take a step toward making motion imitation for real robots more general and affordable.
1
0
3
@xuxin_cheng
Xuxin Cheng
5 months
@NathanTylerP Unitree is a great company 👍
0
0
3
@xuxin_cheng
Xuxin Cheng
6 months
@TairanHe99 @EpisodeYang Glad to see you made it work! For stereo streaming you can adjust the quality parameter without sacrificing resolution. A high-bandwidth router will also significantly affect delay because of TCP. If you want only a mono video stream, you can achieve low delay with high quality.
0
0
3
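A small illustration of the quality-vs-bandwidth knob mentioned above: lowering the JPEG quality shrinks each encoded frame (less data queued on the TCP connection, hence lower delay) while keeping full resolution. A stand-in random frame is used; this is not the TeleVision streaming code.

```python
import cv2
import numpy as np

# Stand-in for a 720p camera frame.
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)

for quality in (90, 50, 20):
    # Same resolution, different compression level -> different bytes per frame.
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    assert ok
    print(f"quality={quality}: {len(buf) / 1024:.0f} KiB per frame")
```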
@xuxin_cheng
Xuxin Cheng
4 months
Thanks for featuring our work!
@adcock_brett
Brett Adcock
4 months
Researchers at UCSD and MIT introduced Open-TeleVision. It's an open-source tele-op system that allows users to control robots from thousands of miles away. It's also accessible from any device with a web browser, including VR.
1
24
241
0
0
3
@xuxin_cheng
Xuxin Cheng
4 months
@HarryXu12 Thanks Huazhe!
0
0
2
@xuxin_cheng
Xuxin Cheng
8 months
@teenhaci Thanks Kexin! Visit SD when you get time!
0
0
2
@xuxin_cheng
Xuxin Cheng
6 months
@Stone_Tao Thanks Stone! OpenTeleVision is general to any video input and can be used for either sim or real. Looking forward to the next step of ManiSkill3!
0
0
2
@xuxin_cheng
Xuxin Cheng
11 months
@CaiShaoj Thanks Shaojun!
0
0
0
@xuxin_cheng
Xuxin Cheng
6 months
@Stone_Tao Yes, as long as you can extract frames from your live video, it does not matter if it is sim or real.
0
0
2
@xuxin_cheng
Xuxin Cheng
6 months
@ToruO_O Thanks Toru!
0
0
2
@xuxin_cheng
Xuxin Cheng
8 months
@zhengyiluo @JiYandong @xiaolonw Thanks Zhengyi. PHC is where our inspiration comes from.
1
0
2
@xuxin_cheng
Xuxin Cheng
1 year
@ashishkr9311 @UCBerkeley @Tesla @Teslasbot Congratulations Ashish! It was very pleasant working with you.
0
0
2
@xuxin_cheng
Xuxin Cheng
8 months
1
0
2
@xuxin_cheng
Xuxin Cheng
3 years
Great work!
@Dr_ThomasZ
Thomas Zurbuchen
3 years
Congratulations to CNSA’s #Tianwen1 team for the successful landing of China’s first Mars exploration rover, #Zhurong ! Together with the global science community, I look forward to the important contributions this mission will make to humanity’s understanding of the Red Planet.
595
3K
20K
0
0
2
@xuxin_cheng
Xuxin Cheng
2 years
@gezhang001 Congrats!
0
0
2
@xuxin_cheng
Xuxin Cheng
1 month
0
0
2
@xuxin_cheng
Xuxin Cheng
6 months
@ShivinDass From my personal perspective, there is some jittering due to network bandwidth, but it's totally OK to me. This can be further improved in future versions, by adding a vignetting effect, for example.
0
0
2
@xuxin_cheng
Xuxin Cheng
6 months
@tianyuanzhang99 Great work!
0
0
2
@xuxin_cheng
Xuxin Cheng
8 months
@samhiderman @xiaolonw A few hours. Didn’t really try to drain the battery yet.
0
0
1
@xuxin_cheng
Xuxin Cheng
4 months
@zhengyiluo Thanks Zhengyi! Omni.H2O's whole-body teleoperation is so cool.
0
0
1
@xuxin_cheng
Xuxin Cheng
6 months
@Haoyang1i Thanks Haoyang!
0
0
1