Huy Ha

@haqhuy

Followers
701
Following
1
Media
10
Statuses
22

I'm a Ph.D. student in Computer Science at Columbia University, advised by Professor Shuran Song

New York, USA
Joined October 2022
Pinned Tweet
@haqhuy
Huy Ha
3 months
I’ve been training dogs since middle school. It’s about time I train robot dogs too 😛 Introducing, UMI on Legs, an approach for scaling manipulation skills on robot dogs🐶It can toss, push heavy weights, and make your ~existing~ visuo-motor policies mobile!
12
86
436
@haqhuy
Huy Ha
1 year
How can we put robotics on the same scaling trend as large language models while not compromising on rich low-level manipulation and control?
3
43
263
@haqhuy
Huy Ha
3 months
For stability, we track gripper actions in the world frame instead of the body frame, as most prior works do. So, when our robot is pushed, its arm compensates for the perturbation. To support real-world deployment, we mount an iPhone on the dog’s 🍑, running a custom iOS ARKit app.
3
6
51
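A minimal 2D sketch of why world-frame tracking compensates for pushes: the gripper target is fixed in the world, so any displacement of the base changes the body-frame command the arm receives. The function and variable names here are illustrative, not the project's actual code.

```python
import numpy as np

def world_to_body(target_w, base_pos_w, base_yaw):
    """Express a world-frame gripper target in the robot's body frame.

    Because the target is fixed in the world, a perturbation of the
    base shows up as a change in the body-frame command, so the arm
    automatically counteracts it.
    """
    c, s = np.cos(-base_yaw), np.sin(-base_yaw)
    R = np.array([[c, -s], [s, c]])  # world -> body rotation (2D)
    return R @ (target_w - base_pos_w)

# Fixed world-frame target 1 m in front of the robot.
target = np.array([1.0, 0.0])

before = world_to_body(target, base_pos_w=np.array([0.0, 0.0]), base_yaw=0.0)
# The base is shoved 0.2 m sideways; the body-frame target shifts the
# opposite way, so tracking it cancels the push.
after = world_to_body(target, base_pos_w=np.array([0.0, 0.2]), base_yaw=0.0)
```

Had the target been expressed in the body frame instead, the command would be unchanged by the push and the arm would drift with the base.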
@haqhuy
Huy Ha
3 months
UMI on Legs starts with UMI, a handheld gripper with a camera that can be used to collect robot data without robots. From this data, we train manipulation policies that predict gripper actions from image inputs. But how should the legs move to track those gripper actions? 🤔
1
3
24
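The data pipeline above is, at its core, supervised learning from (image, gripper-action) pairs. As a toy sketch, assuming random feature vectors stand in for image embeddings, behavior cloning can be reduced to fitting a linear policy by least squares; the real policies are of course learned neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for UMI data: visual features paired with the gripper
# actions recorded by the handheld gripper (names are illustrative).
features = rng.normal(size=(256, 8))    # e.g. image embeddings
true_W = rng.normal(size=(8, 3))
gripper_actions = features @ true_W     # e.g. (x, y, z) targets

# Behavior cloning in its simplest form: fit a linear map from
# features to gripper actions by least squares.
W, *_ = np.linalg.lstsq(features, gripper_actions, rcond=None)

pred = features @ W
mse = float(np.mean((pred - gripper_actions) ** 2))
```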
@haqhuy
Huy Ha
3 months
For precision, we give the controller a trajectory of targets into the future, which lets it anticipate upcoming motion for dynamic trajectory tracking. For instance, the controller will brace before a fast toss, and mobilize all its legs during a push.
1
3
18
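One way to picture the future-target observation: at each step, the controller sees a fixed-size window of upcoming gripper targets rather than only the current one, so a fast toss is visible before it begins. This is an illustrative sketch; the window size, padding, and names are assumptions.

```python
import numpy as np

def target_window(traj, t, horizon):
    """Slice of future gripper targets fed to the controller.

    Padding with the final target keeps the window a fixed size
    near the end of the trajectory.
    """
    idx = np.minimum(np.arange(t, t + horizon), len(traj) - 1)
    return traj[idx]

# A slow approach followed by a fast toss (positions along one axis).
traj = np.array([0.0, 0.0, 0.1, 0.2, 0.8, 1.5])

obs = target_window(traj, t=2, horizon=3)   # the toss is already visible
speed_ahead = np.max(np.abs(np.diff(obs)))  # peak upcoming step size
```

With only the current target in the observation, the controller could not distinguish a slow reach from the start of a toss; the lookahead window is what makes bracing possible.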
@haqhuy
Huy Ha
3 months
A giant shoutout to my co-first author, @YihuaiGao , whose real world robot engineering wizardry and optimism are inspiring 🚀 Special thanks to @zipengfu and Jie Tan for their RL, control, and quadruped wisdom 🐕 Last but not least, thank you to my advisor, @SongShuran
1
1
15
@haqhuy
Huy Ha
3 months
In a massively parallelized simulation, the robot learns through trial and error how to track UMI trajectories with stability and precision.
1
1
14
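In massively parallel RL simulators, per-robot quantities are batched along an environment dimension so thousands of rollouts advance at once. A hedged sketch of what a vectorized tracking reward might look like (the reward shape, scale, and environment count are assumptions, not the paper's exact choices):

```python
import numpy as np

def tracking_reward(gripper_pos, target_pos, sigma=0.1):
    """Reward approaching 1 as the gripper reaches its target."""
    err = np.linalg.norm(gripper_pos - target_pos, axis=-1)
    return np.exp(-(err / sigma) ** 2)

num_envs = 4096  # batch dimension stands in for parallel environments
rng = np.random.default_rng(0)
grippers = rng.normal(scale=0.05, size=(num_envs, 3))
targets = np.zeros((num_envs, 3))

# One vectorized call evaluates the reward for every simulated robot.
rewards = tracking_reward(grippers, targets)
```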
@haqhuy
Huy Ha
1 year
Our framework, scaling up and distilling down, provides a comprehensive solution by leveraging language guidance. We scale up automatic generation of language- and success-labeled robot data, and distill this data down into an expressive language-conditioned multi-task policy.
1
1
3
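The "scale up, distill down" data path can be sketched as a filter: automatically generated episodes each carry a language instruction and a success label, and only the successes enter the distillation set for the multi-task policy. The field names below are illustrative, not the framework's schema.

```python
# Hypothetical auto-generated episodes with language and success labels.
episodes = [
    {"instruction": "push the red block left", "success": True},
    {"instruction": "stack the blue block",    "success": False},
    {"instruction": "open the drawer",         "success": True},
]

# Distill down: keep only successful episodes, paired with their
# instructions, as training data for the language-conditioned policy.
distill_set = [ep for ep in episodes if ep["success"]]
instructions = [ep["instruction"] for ep in distill_set]
```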
@haqhuy
Huy Ha
1 year
Using no human demonstrations, language annotations, or reward specifications, our framework outputs robust visuo-motor policies that can be used for real world deployment.
1
1
5
@haqhuy
Huy Ha
3 months
@ARX_Zhang Thank you so much for your amazing hardware! Wouldn't have been possible if we hadn't switched to ARX5 arms 🦾🦾🦾
0
0
1
@haqhuy
Huy Ha
3 months
@sidharthtalia We wanted to use the odometry provided by Unitree. However, they only enable odometry in joint-control mode. Further, we found that the spinning lidar can cause vibrations and shake the arm 🤦 This is why we disconnected the lidar entirely during all of our experiments.
1
0
1