Haozhi Qi

@HaozhiQ

Followers
1,836
Following
665
Media
40
Statuses
211

Ph.D. Student at UC Berkeley

Berkeley, CA
Joined February 2019
Pinned Tweet
@HaozhiQ
Haozhi Qi
6 days
When I started my first project on in-hand manipulation, I thought it would be super cool but also quite challenging to make my robot hands spin pens. After almost 2.5 years of effort in this line of research, we have finally succeeded in making our robot hand "spin pens."
21
80
505
@HaozhiQ
Haozhi Qi
2 years
💡We release Hora: a single policy capable of rotating diverse objects 🎾🥝🧊🍋🍊⌛🥑🍅🍐🍑 with a dexterous robot hand. No cameras. No touch sensors. Hora is trained entirely in simulation and directly deployed in the real world. see #CoRL @corl_conf 1/
7
68
312
@HaozhiQ
Haozhi Qi
11 months
🦾 Our robot hand can rotate objects over 6+ axes in the real-world! Introducing RotateIt (CoRL 2023), a Sim-to-Real policy that can rotate many objects over many axes, using vision and touch! Check it out: . Paper: . #CoRL2023
7
29
184
@HaozhiQ
Haozhi Qi
23 days
Introducing tactile skin sim-to-real for dexterous in-hand translation! We propose a simulation model for ReSkin, a magnetic tactile sensing skin. It can simulate ternary shear and binary normal forces. More:
1
45
183
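For readers wondering what "ternary shear and binary normal forces" means concretely, here is a minimal sketch of one plausible discretization, assuming the simulator exposes continuous per-taxel forces (all names, shapes, and thresholds are illustrative, not from the paper):

```python
import numpy as np

def discretize_taxel_forces(shear_xy, normal_z, shear_thresh=0.05, normal_thresh=0.1):
    """Map continuous per-taxel forces to the discrete signals the tweet
    describes: ternary shear (-1/0/+1 per axis) and binary normal (0/1).

    shear_xy: (N, 2) tangential force per taxel (hypothetical units)
    normal_z: (N,) normal force per taxel
    Thresholds are made-up placeholders, not values from the paper.
    """
    ternary_shear = np.zeros_like(shear_xy, dtype=np.int8)
    ternary_shear[shear_xy > shear_thresh] = 1
    ternary_shear[shear_xy < -shear_thresh] = -1
    binary_normal = (normal_z > normal_thresh).astype(np.int8)
    return ternary_shear, binary_normal
```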
@HaozhiQ
Haozhi Qi
1 month
maybe overly intelligent: "Breakthrough in tactile sensing" -- still can't feel Shame
Tweet media one
1
16
127
@HaozhiQ
Haozhi Qi
2 years
If you are at #CoRL2022 , come check out our poster at Poster Lobby 352 (4:45 pm - 6:00 pm). See how our multi-finger robot hand rotates an orange peel. @corl_conf Website: Code:
3
13
95
@HaozhiQ
Haozhi Qi
8 months
Getting rich object representations from vision/touch/proprioception streams, like how we humans perceive objects in-hand. 🎺Website: ➡️Led by @Suddhus .
@_akhaliq
AK
8 months
Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation paper page: Neural perception with vision and touch yields robust tracking and reconstruction of novel objects for in-hand manipulation.
3
37
174
0
8
68
@HaozhiQ
Haozhi Qi
8 months
Had an amazing time at the robot learning workshop #NeurIPS2023 ! Thanks to the organizers (especially🤝 @shahdhruv_ ) for such a great event! I was also thrilled to receive the outstanding demo award. Related projects are: and .
Tweet media one
@shahdhruv_
Dhruv Shah
8 months
Aficionados of robot learning: join us in Hall B2 at #NeurIPS2023 for some cutting-edge talks, posters, a spicy debate, and live robot demos! The robots are here, are you? We also have some GPUs for a "Spicy Question of the Day Prize" 🌶️, don't miss out
0
6
42
0
1
53
@HaozhiQ
Haozhi Qi
4 years
We are excited to share our new work "Region Proposal Interaction Networks" on predicting long-term future trajectories from visual input. Our method can be trained in both complex simulated environments and real-world YouTube videos. (1/5)
1
5
47
@HaozhiQ
Haozhi Qi
8 months
We are organizing a workshop on touch processing @NeurIPSConf 2023! If you want to learn about the current status and future applications of touch processing, join us on Dec 15th at Room 214! Don't miss our amazing lineup of speakers! For more info:
Tweet media one
1
7
41
@HaozhiQ
Haozhi Qi
4 years
Our work on learning visual dynamics has been accepted to #ICLR2021 . We obtained state-of-the-art results on multiple prediction tasks as well as the #PHYRE physical reasoning benchmark. Check out our latest results at
@HaozhiQ
Haozhi Qi
4 years
We are excited to share our new work "Region Proposal Interaction Networks" on predicting long-term future trajectories from visual input. Our method can be trained in both complex simulated environments and real-world YouTube videos. (1/5)
1
5
47
0
5
40
@HaozhiQ
Haozhi Qi
3 months
🎺 As interest in human-like robotics continues to surge, we are excited to announce the “2nd Workshop on Dexterous Manipulation” at RSS 2024. Join us to hear from an incredible lineup of speakers! And don’t miss the opportunity to submit your work and participate!
Tweet media one
2
8
38
@HaozhiQ
Haozhi Qi
5 months
We’ve been thinking about this for a while: how to simulate diverse objects in a singular, abstract form? We show one example with generalization to an assortment of 🫙. Time to give two hands to your favorite #humanoid ! Twisting lids (off) is our first step, and more to come!
@ToruO_O
Toru
5 months
Achieving bimanual dexterity with RL + Sim2Real! TLDR - We train two robot hands to twist bottle lids using deep RL followed by sim-to-real. A single policy trained with simple simulated bottles can generalize to drastically different real-world objects.
5
59
217
0
5
31
@HaozhiQ
Haozhi Qi
3 months
always excited to visit japan. i will be at ICRA next week and happy to connect and chat about robotics!
Tweet media one
0
1
33
@HaozhiQ
Haozhi Qi
4 months
truly creative haha
@shin0805__
Shintaro Inoue / 井上信多郎
4 months
We've published a paper on a robot design that recreates the three-legged chair from "Suzume" (『すずめの戸締まり』), with gait generation via reinforcement learning! We'll present it at RoboSoft 2024 in the US next week! website - #すずめの戸締まり
32
4K
15K
0
3
32
@HaozhiQ
Haozhi Qi
3 months
Check out our new bi-dex-hands. For me, it was quite a fun experience to move away from my comfort zone of sim-to-real and learn how to collect data. I was very bad at playing video games (teleop), but with this, even I could collect the steak training data in about 3 hours.
@ToruO_O
Toru
3 months
Imitation learning works™ – but you need good data 🥹 How to get high-quality visuotactile demos from a bimanual robot with multifingered hands, and learn smooth policies? Check our new work “Learning Visuotactile Skills with Two Multifingered Hands”! 🙌
7
75
281
1
6
28
@HaozhiQ
Haozhi Qi
8 months
Really enjoyed all the talks, spotlights, and panel discussion of our 1st touch processing workshop! A lot of fun and inspiring discussion! Thank you to everyone who contributed to making it a success! 🙌
Tweet media one
Tweet media two
@HaozhiQ
Haozhi Qi
8 months
We are organizing a workshop on touch processing @NeurIPSConf 2023! If you want to learn about the current status and future applications of touch processing, join us on Dec 15th at Room 214! Don't miss our amazing lineup of speakers! For more info:
Tweet media one
1
7
41
1
1
24
@HaozhiQ
Haozhi Qi
4 months
so cute 😀 congrats to the team!!
@xiaolonw
Xiaolong Wang
4 months
I have been cleaning my daughter's mess for more than two years now. Last weekend our robot came to home to do the job for me. 🤖 Our new work on visual whole-body control learns a policy to coordinate the robot legs and arms for mobile manipulation. See
23
116
653
0
3
21
@HaozhiQ
Haozhi Qi
2 months
📢 RSS 2024 Dexterous Manipulation workshop. Deadline extended to June 14th (this Friday)! Don't miss this opportunity to share your exciting work. Submit your contribution here:
@HaozhiQ
Haozhi Qi
3 months
🎺 As interest in human-like robotics continues to surge, we are excited to announce the “2nd Workshop on Dexterous Manipulation” at RSS 2024. Join us to hear from an incredible lineup of speakers! And don’t miss the opportunity to submit your work and participate!
Tweet media one
2
8
38
0
7
21
@HaozhiQ
Haozhi Qi
18 days
Recordings available at (also on our website). Check them out for the great invited talks and spotlights.
@taochenshh
Tao Chen
21 days
Excited for our stellar lineup of speakers at the Dexterous Manipulation workshop this Monday! 🎙️ Can't make it? We've got you covered with a live stream. 📺 Don't miss out! Live-stream:
Tweet media one
2
9
43
1
4
19
@HaozhiQ
Haozhi Qi
2 years
Heading to New Zealand for @corl_conf ! It has been 4 years since my last international travel! And ... this is the first time I'm travelling with a robot! We will present our poster on Friday, Dec 16, 4:45PM-6:00PM, with the help of a robot 🤖. Check out the summary thread:
Tweet media one
@HaozhiQ
Haozhi Qi
2 years
💡We release Hora: a single policy capable of rotating diverse objects 🎾🥝🧊🍋🍊⌛🥑🍅🍐🍑 with a dexterous robot hand. No cameras. No touch sensors. Hora is trained entirely in simulation and directly deployed in the real world. see #CoRL @corl_conf 1/
7
68
312
0
0
17
@HaozhiQ
Haozhi Qi
6 months
Amazing!
@ZhongyuLi4
Zhongyu Li
6 months
Interested in making your bipedal robots athletes? We summarized our RL work on creating robust & adaptive controllers for general bipedal skills. 400m dash, running over terrains/against perturbations, targeted jumping, compliant walking: not a problem for bipeds now.🧵👇
15
90
447
0
2
16
@HaozhiQ
Haozhi Qi
10 months
Check out @carohiguerarias ’s work on estimating object-environment contact using tactile sensing, and how it benefits downstream manipulation. And if you want to simulate touch sensing in Isaac Gym, make sure you also check out our code!
@carohiguerarias
Carolina Higuera
10 months
🤔Are extrinsic contacts useful for manipulation policies? Neural Contact Fields estimate extrinsic contacts from touch. However, its utility in real-world tasks remains unknown. We improve NCF to enable sim-to-real transfer and use it to train policies for insertion tasks.
2
6
50
0
3
15
@HaozhiQ
Haozhi Qi
6 months
videos available now!
@HaozhiQ
Haozhi Qi
8 months
We are organizing a workshop on touch processing @NeurIPSConf 2023! If you want to learn about the current status and future applications of touch processing, join us on Dec 15th at Room 214! Don't miss our amazing lineup of speakers! For more info:
Tweet media one
1
7
41
0
3
15
@HaozhiQ
Haozhi Qi
3 months
very nice work! capturing scene geometry using (vision and) touch. Also with correspondence!
@_YimingDou
Yiming Dou
3 months
NeRF captures visual scenes in 3D👀. Can we capture their touch signals🖐️, too? In our #CVPR2024 paper Tactile-Augmented Radiance Fields (TaRF), we estimate both visual and tactile signals for a given 3D position within a scene. Website: arXiv:
9
23
116
3
1
14
@HaozhiQ
Haozhi Qi
11 months
📢 Announcing 1st "Workshop on Touch Processing: a new Sensing Modality for AI" at NeurIPS 2023. If you are interested in touch sensing & machine learning, don’t miss the opportunity to submit your work and participate! 📷 Call for Papers: . #NeurIPS2023
1
5
14
@HaozhiQ
Haozhi Qi
11 months
Great work! Exciting time for sim2real transfer!
@chenwang_j
Chen Wang
11 months
How to chain multiple dexterous skills to tackle complex long-horizon manipulation tasks? Imagine retrieving a LEGO block from a pile, rotating it in-hand, and inserting it at the desired location to build a structure. Introducing our new work - Sequential Dexterity 🧵👇
26
91
471
1
2
12
@HaozhiQ
Haozhi Qi
5 months
Unitree's progress is incredible! One of the hardest robot learning tasks I've seen recently.
@UnitreeRobotics
Unitree
5 months
Unitree H1 The World's First Full-size Motor Drive Humanoid Robot Flips on Ground. Unitree H1 Deep Reinforcement Learning In-place Flipping ! #Unitree #UnitreeRobotics #AI #Robotics #Humanoidrobots #Worldrecord #Flips #EmbodiedAI #ArtificialIntelligence #Technology #Innovation
68
412
2K
0
0
11
@HaozhiQ
Haozhi Qi
10 months
Great opportunity for incoming PhD students!
@antoniloq
Antonio Loquercio
10 months
Excited to share that I'll join UPenn as an Assistant Professor next fall! I couldn't be more grateful to all the mentors and collaborators for their support along the way! I'm looking for motivated students👩‍🎓to join my lab! If you love robots 🤖 and learning, look no further!
Tweet media one
29
27
422
0
0
12
@HaozhiQ
Haozhi Qi
4 years
Excited to share our new work with @xiaolonw , Chong You, Yi Ma, and Jitendra Malik
@xiaolonw
Xiaolong Wang
4 years
How to train very deep ConvNets without residual blocks? Our ICML paper on Deep Isometric Learning successfully trains 100-layer ConvNets without any shortcut connections nor normalization layers (BN/GN) on ImageNet. Paper: Code:
Tweet media one
4
37
182
1
3
10
@HaozhiQ
Haozhi Qi
5 months
When I first saw the video, I was quite impressed by the complexity of the task and the efficiency of the data collection. It definitely reshaped my opinion on robot data collection! Check out the new paper and fully open-sourced system by @chenwang_j and team!
@chenwang_j
Chen Wang
5 months
Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands Everything open-sourced
21
134
622
1
0
10
@HaozhiQ
Haozhi Qi
11 months
Also check out the cool interactive visualization on our website! , done with Viser + @brenthyi
1
0
9
@HaozhiQ
Haozhi Qi
2 months
very insightful thread!
@ToruO_O
Toru
2 months
A common question we get for HATO () is: can it be more dexterous? Yes! The first iteration of our system actually achieves this -- by capturing finger poses with mocap gloves and remapping them to robot hands. [video taken in late 2023 (with @yuzhang )]
6
28
139
0
1
7
@HaozhiQ
Haozhi Qi
11 months
We achieve this by first training an oracle policy with ground-truth object physical and shape info, then training another policy with realistic sensory input (all in sim). The object training set covers a large variety of objects, making the policy adaptive and generalizable.
Tweet media one
Tweet media two
1
0
7
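A minimal sketch of the two-stage recipe described above: behavior cloning from the privileged "oracle" policy to the sensory "student" policy, all in simulation. Architectures and dimensions are made up for illustration; this is not the paper's implementation:

```python
import torch
import torch.nn as nn

class MLP(nn.Sequential):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

oracle = MLP(in_dim=64, out_dim=16)   # sees privileged object state (dims made up)
student = MLP(in_dim=48, out_dim=16)  # sees only realistic sensory observations
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_step(privileged_obs, sensory_obs):
    """One behavior-cloning step: regress student actions to oracle actions."""
    with torch.no_grad():
        target = oracle(privileged_obs)
    loss = nn.functional.mse_loss(student(sensory_obs), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The point of the split is that the student never sees privileged state, so the same network can be deployed in the real world unchanged.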
@HaozhiQ
Haozhi Qi
5 months
very cool 🤖!!
@xiaolonw
Xiaolong Wang
5 months
Let’s think about humanoid robots outside carrying the box. How about having the humanoid come out the door, interact with humans, and even dance? Introducing Expressive Whole-Body Control for Humanoid Robots: See how our robot performs rich, diverse,
94
209
1K
0
1
7
@HaozhiQ
Haozhi Qi
10 months
🚀 Deadline Extended to Oct 2nd for our touch processing workshop at NeurIPS 2023! We encourage relevant works at all stages of maturity, ranging from initial exploratory results to polished full papers! Also don't miss our amazing lineup of speakers!
Tweet media one
@HaozhiQ
Haozhi Qi
11 months
📢 Announcing 1st "Workshop on Touch Processing: a new Sensing Modality for AI" at NeurIPS 2023. If you are interested in touch sensing & machine learning, don’t miss the opportunity to submit your work and participate! 📷 Call for Papers: . #NeurIPS2023
1
5
14
0
1
7
@HaozhiQ
Haozhi Qi
8 months
Thanks to the amazing speakers: Katherine J. Kuchenbecker, Chiara Bartolozzi, @jiajunwu_cs , Satoshi Funabashi, Ted Adelson, @nathanlepora , Jeremy Fishel, and Veronica Santos; and my co-organizers @RCalandra , @perla_maiolino , Mike Lambeta, Yasemin Bekiroğlu, and @JitendraMalikCV .
0
0
3
@HaozhiQ
Haozhi Qi
2 years
We represent each object by its intrinsic properties (position, scale, mass, etc.) and train an adaptive RL policy with all of them! Then, to deploy it to the real world, we train an adaptation module to estimate them online from proprioceptive history. 3/
Tweet media one
1
2
6
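A minimal sketch of such an adaptation module, assuming a fixed-length proprioceptive history and a low-dimensional intrinsics vector (all dimensions are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class AdaptationModule(nn.Module):
    """Estimate latent object intrinsics (scale, mass, ...) from a window
    of proprioceptive history, so the policy trained with ground-truth
    intrinsics can run without privileged information at deployment."""
    def __init__(self, proprio_dim=32, history_len=30, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                               # (B, T, D) -> (B, T*D)
            nn.Linear(history_len * proprio_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, proprio_history):                 # (B, T, D)
        return self.net(proprio_history)                # estimated intrinsics

# Training in sim: regress to the simulator's ground-truth intrinsics.
module = AdaptationModule()
history = torch.randn(16, 30, 32)                       # fake batch
true_latent = torch.randn(16, 8)
loss = nn.functional.mse_loss(module(history), true_latent)
```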
@HaozhiQ
Haozhi Qi
2 years
The robot hand can sense the discrepancy between its command and the actual state history, and use this signal to estimate the relative object sizes and weights during deployment. We show how the estimation changes when we remove the old object & place a new one. 4/
1
2
6
@HaozhiQ
Haozhi Qi
8 months
🚀🚀🚀
@QinYuzhe
Yuzhe Qin
8 months
How to rotate a tomato with a potato using a robot hand? 🤖🍅🥔 Our new model, Robot Synesthesia, blends touch and vision to manipulate multiple objects, even non-convex ones like a cross!
3
45
267
0
0
6
@HaozhiQ
Haozhi Qi
11 months
To bridge the sim-to-real gap, we use object depth as the vision input. It's easy to simulate and easy to get in the real world (thanks, Segment-Anything).
Tweet media one
2
0
6
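One way to read this concretely: mask the depth image with an object segmentation mask (e.g. produced by Segment-Anything in the real world, or rendered directly in simulation) so only object depth remains. A tiny hypothetical sketch:

```python
import numpy as np

def object_depth(depth, mask, background=0.0):
    """Keep depth only where the object mask is true; zero it elsewhere.

    depth: (H, W) float array from a depth camera or the simulator
    mask:  (H, W) bool array, e.g. from a segmentation model
    """
    return np.where(mask, depth, background)
```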
@HaozhiQ
Haozhi Qi
2 years
From a stable grasp, our policy can rotate objects of different weights (from 5 g to 200 g), sizes (from 4.5 cm to 8 cm), different shapes (spheres, cubes, and one with holes), small coefficients of friction (an ice ball), and different centers of mass (an hourglass and a bottle). 2/
Tweet media one
1
2
6
@HaozhiQ
Haozhi Qi
10 months
Thanks a lot @stokel for writing this!
@stokel
Chris Stokel-Walker
10 months
My latest for @newscientist is on this cool robotic hand which has the dexterity to handle tiny, irregular-shaped objects in a way I've never seen before
1
0
3
0
0
5
@HaozhiQ
Haozhi Qi
2 years
to correct my previous tweet: I received an email saying one paper I was reviewing for #NeurIPS2022 was desk rejected by the PC. That is [2 days] before the review deadline. I guess many reviewers had finished/drafted their reviews, and thus part of our time is wasted.😀
1
1
4
@HaozhiQ
Haozhi Qi
2 years
Such an ability enables our policy to apply the “just-right” force during manipulation so that it won't destroy slightly soft objects (such as a shuttlecock) during operation. This figure shows the correlation between the average commanded torque and the object's mass. 5/
Tweet media one
1
2
5
@HaozhiQ
Haozhi Qi
3 months
Really Cool!!! Now I’ll ask for Vision Pro for research!
@xuxin_cheng
Xuxin Cheng
3 months
 🤖Introducing 📺𝗢𝗽𝗲𝗻-𝗧𝗲𝗹𝗲𝗩𝗶𝘀𝗶𝗼𝗻: a web-based teleoperation software!  🌐Open source, cross-platform (VisionPro & Quest) with real-time stereo vision feedback.  🕹️Easy-to-use hand, wrist, head pose streaming. Code:
14
91
378
0
0
5
@HaozhiQ
Haozhi Qi
11 months
We also examine what is learned in the feature space by decoding it with a shape prediction task. We find that 1) the oracle policy preserves the shape info, even when the only supervision is a scalar reward; and 2) the visuotactile policy can also learn to infer it.
Tweet media one
1
0
5
@HaozhiQ
Haozhi Qi
11 months
For touch, we represent vision-based touch sensing as discretized 2D contact locations, to make it easy to simulate.
1
0
5
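A minimal sketch of one plausible discretization: pool a vision-based tactile image into a coarse grid of binary contact cells. The grid size and threshold are illustrative, not the paper's:

```python
import numpy as np

def binarize_tactile_image(tactile_img, grid=(4, 4), thresh=0.5):
    """Reduce a vision-based tactile image to a coarse grid of binary
    contact locations, one plausible reading of "discretized 2D contact
    locations"."""
    h, w = tactile_img.shape
    gh, gw = grid
    cells = tactile_img[: h - h % gh, : w - w % gw]        # crop to grid
    cells = cells.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return (cells > thresh).astype(np.int8)                # (gh, gw) contact map
```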
@HaozhiQ
Haozhi Qi
2 years
A good simulation environment is necessary for a good policy. The environment should provide enough variety to enable generalization in the real world. We find that cylinders with different aspect ratios encourage a generalizable gait to emerge while using spheres does not. 6/
1
2
5
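A hypothetical sketch of the randomization described above: sample training cylinders over a range of aspect ratios rather than using spheres. The ranges are placeholders, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cylinder(radius_range=(0.015, 0.04), aspect_range=(0.5, 2.0)):
    """Sample a training cylinder with a randomized aspect ratio
    (height / diameter). Varying the aspect ratio, rather than training
    on spheres, is what the tweet credits for the emergent gait."""
    radius = rng.uniform(*radius_range)
    aspect = rng.uniform(*aspect_range)
    height = aspect * 2 * radius
    return {"radius": radius, "height": height}
```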
@HaozhiQ
Haozhi Qi
5 months
The robots are so cute! Looking forward to trying it out!
@carlo_sferrazza
Carlo Sferrazza
5 months
Humanoids 🤖 will do anything humans can do. But are state-of-the-art algorithms up to the challenge? Introducing HumanoidBench, the first-of-its-kind simulated humanoid benchmark with 27 distinct whole-body tasks requiring intricate long-horizon planning and coordination. 🧵👇
8
91
333
1
0
5
@HaozhiQ
Haozhi Qi
11 months
We show different quantitative results on the benefits of using vision and touch: 1) they improve performance, especially on challenging objects; 2) they improve out-of-distribution generalization; 3) with the help of vision and touch, the policy can almost match the oracle policy.
Tweet media one
Tweet media two
Tweet media three
1
0
5
@HaozhiQ
Haozhi Qi
3 months
very interesting task! robot dogs also need to exercise 😀
@JasonMa2020
Jason Ma
3 months
Introducing DrEureka🎓, our latest effort pushing the frontier of robot learning using LLMs! DrEureka uses LLMs to automatically design reward functions and tune physics parameters to enable sim-to-real robot learning. DrEureka can propose effective sim-to-real configurations
24
118
601
0
0
5
@HaozhiQ
Haozhi Qi
3 months
How can we build even easier and more accessible teleoperation? I think maybe it’s a good idea to revisit grasp taxonomy. “Mastering teleoperation requires rethinking grasp taxonomy” [pun intended].
Tweet media one
0
1
4
@HaozhiQ
Haozhi Qi
3 months
Cool sim-to-real results! The lightbulb example is quite impressive!
@YunfanJiang
Yunfan Jiang
3 months
Does your sim2real robot falter at critical moments 🤯? Want to help but unsure how, all you can do is reward tuning in sim 😮‍💨? Introducing 𝐓𝐑𝐀𝐍𝐒𝐈𝐂 for manipulation sim2real. Robots learned in sim can accomplish complex tasks in real, such as furniture assembly. 🤿🧵
16
46
186
1
2
4
@HaozhiQ
Haozhi Qi
5 months
wow 🐉🥎
@find_airobo
find秩父 AI・ロボット研究分科会
5 months
In memory of Akira Toriyama-sensei. Thank you for the wonderful works. May he rest in peace.
3
28
116
0
1
4
@HaozhiQ
Haozhi Qi
11 days
cool work on using sim data to achieve in-the-wild generalization!
@fancy_yzc
Zhecheng Yuan
11 days
How to make your robot handle diverse visual scenarios?🤔 Introducing our recent work: Learning to 𝑴𝒂𝒏𝒊pulate Any𝒘𝒉𝒆𝒓𝒆: 𝐌𝐚𝐧𝐢𝐰𝐡𝐞𝐫𝐞. It enables your robot to handle multiple types of visual disturbance 🌈 and step out of simulation to achieve sim2real transfer.🤖
9
52
238
1
0
4
@HaozhiQ
Haozhi Qi
2 years
Should we not desk-reject papers after the review process has started, to prevent such a thing?
1
1
3
@HaozhiQ
Haozhi Qi
11 months
@dhanushisrad still working on that. We decided to first extend the axes set, then extend the object set.
0
0
3
@HaozhiQ
Haozhi Qi
9 months
Congrats! @Jiayuan_Gu
@xiao_ted
Ted Xiao
9 months
Instead of just telling robots “what to do”, can we also guide robots by telling them “how to do” tasks? Unveiling RT-Trajectory, our new work which introduces trajectory conditioned robot policies. These coarse trajectory sketches help robots generalize to novel tasks! 🧵⬇️
3
48
253
0
0
3
@HaozhiQ
Haozhi Qi
5 months
Thank you @QinYuzhe ! Your works are inspiring and helpful for our project! Looking forward to seeing more exciting advancement in this area.
@QinYuzhe
Yuzhe Qin
5 months
Glad to see another sim2real work on dexterous hands! While imitation learning is dominant nowadays for manipulation, sim2real is still powerful for complex dynamical systems. Congrats to the authors for the great work @ToruO_O @zhaohengyin @HaozhiQ
1
2
15
0
0
3
@HaozhiQ
Haozhi Qi
5 months
@DrJimFan Congrats! Exciting time!
0
0
1
@HaozhiQ
Haozhi Qi
2 years
@iandanforth @corl_conf sure, that's definitely what we plan to work on.
0
0
3
@HaozhiQ
Haozhi Qi
3 years
@talrid23 @ak92501 Table 1 actually mentions throughput (though not equivalent to runtime).
1
0
3
@HaozhiQ
Haozhi Qi
8 months
also thanks for the help from my great co-organizers @RCalandra , @perla_maiolino , Mike Lambeta, @YsmnBekiroglu , and @JitendraMalikCV .
0
0
3
@HaozhiQ
Haozhi Qi
2 years
We analyze our policy's performance on more than 30 objects in the real world. Our policy performs well on most regular or chubby objects, but thin or small objects, or ones with large aspect ratios, are harder to manipulate. 7/
Tweet media one
Tweet media two
1
2
3
@HaozhiQ
Haozhi Qi
23 days
An interesting thing we find is that tactile policies explore more finger gaits. Greater joint-state exploration is potentially a factor in task success and an indicator of gait adaptation.
1
0
3
@HaozhiQ
Haozhi Qi
13 days
Great place if you like robot hands!
@taochenshh
Tao Chen
13 days
Seeking robotics wizards to join our quest! 🧙‍♂️🤖 Join our cutting-edge team and shape the future of dexterous robots. We're seeking brilliant minds to push the boundaries of what's possible in robot manipulation. Link: #Robotics #AI #RobotLearning
0
22
134
0
0
3
@HaozhiQ
Haozhi Qi
2 years
(just joking) I was reading about social psychology last year. When I read about the "Insufficient Justification Effect", I realized that's how schools decide salaries:
@MannaLiberato
Liberato Manna
2 years
Salaries of PhD students and postdocs in many countries are extremely low and career prospects are very grim. How can one think of convincing the best minds to stay in research, under these conditions?
172
1K
7K
2
0
1
@HaozhiQ
Haozhi Qi
23 days
With this tactile information and proprioception, we can train a policy for generalizable in-hand translation. We conduct comprehensive real-world evaluations on OOD objects and different hand orientations.
1
0
2
@HaozhiQ
Haozhi Qi
4 years
Paper: Code: Joint work with @xiaolonw , @pathak2206 , Yi Ma, Jitendra Malik. (2/5)
Tweet media one
1
0
2
@HaozhiQ
Haozhi Qi
2 years
There is still a long way to go towards general in-hand manipulation, and this field is full of challenges and opportunities! We hope our project can be a solid foundation for future research. Our code is at  and I’m happy to answer any questions. 8/
1
2
2
@HaozhiQ
Haozhi Qi
8 months
Super excited to have Katherine J. Kuchenbecker, Chiara Bartolozzi, @jiajunwu_cs , Satoshi Funabashi, Ted Adelson, @nathanlepora , Jeremy Fishel, Veronica Santos as our speakers!
1
0
2
@HaozhiQ
Haozhi Qi
2 years
Thanks @YiMaTweets , I enjoyed the conference a lot.
@YiMaTweets
Yi Ma
2 years
Great job, Haozhi! The best poster for sure.
0
0
11
0
0
2
@HaozhiQ
Haozhi Qi
23 days
Three key design decisions for our tractable tactile skin model: 1) We use object point clouds to calculate intersecting collision volumes, in contrast to the typical handful of points for rigid contact. 2) Each taxel’s sensing range extends beyond its collision geometry to mimic
1
1
2
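A rough sketch of design decision 1), under the assumption that each taxel's contact signal is derived from how much of the object point cloud falls inside its (enlarged) sensing range. Everything here is illustrative, not the paper's model:

```python
import numpy as np

def taxel_contact_proxy(obj_points, taxel_centers, sensing_radius=0.006):
    """Cheap stand-in for the design above: instead of a handful of
    rigid-contact points, use the object point cloud and count the points
    inside each taxel's sensing range as a contact-volume proxy.

    obj_points:    (N, 3) object point cloud in the hand frame
    taxel_centers: (M, 3) taxel positions
    returns:       (M,) per-taxel point counts
    """
    d = np.linalg.norm(obj_points[None, :, :] - taxel_centers[:, None, :], axis=-1)
    return (d < sensing_radius).sum(axis=1)
```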
@HaozhiQ
Haozhi Qi
18 days
Also congrats Zilin Si, Kevin Lee Zhang, @fzeyneptemel , @Oliver_Kroemer for winning the best paper award (Tilde) and @kenny__shaw @pathak2206 for winning the best presentation award (LEAP Hand v2)!
0
0
2
@HaozhiQ
Haozhi Qi
4 years
We show that such a simple method can capture inter-object and object-environment interactions over a long range and easily generalize to new environments. In addition, we show our task-agnostic prediction model can be applied to planning tasks and achieves large improvements. (5/5)
Tweet media one
0
0
2
@HaozhiQ
Haozhi Qi
2 months
cool work! very fast progress.
@TairanHe99
Tairan He
2 months
Introducing OmniH2O, a learning-based system for whole-body humanoid teleop and autonomy: 🦾Robust loco-mani policy 🦸Universal teleop interface: VR, verbal, RGB 🧠Autonomy via @chatgpt4o or imitation 🔗Releasing the first whole-body humanoid dataset
19
68
385
1
0
2
@HaozhiQ
Haozhi Qi
4 months
0
0
2
@HaozhiQ
Haozhi Qi
2 years
It states that people are more likely to engage in a behavior that contradicts their personally held beliefs when they are offered a smaller reward, in comparison to a larger reward.
0
0
1
@HaozhiQ
Haozhi Qi
11 months
@whoisvaibhav The human hand is so great. The hardware is not so good. But we’ll ping you when we get closer.
1
0
2
@HaozhiQ
Haozhi Qi
22 days
@ChongZitaZhang Thank you Chong! Hope to see you at RSS
0
0
1
@HaozhiQ
Haozhi Qi
4 years
@yukez Curious: is it possible to access object info (mask, depth, etc.) in image space?
1
0
1
@HaozhiQ
Haozhi Qi
2 years
@martin_schrimpf got the same email. Very disappointed.
0
0
1
@HaozhiQ
Haozhi Qi
1 year
@chenwang_j Nice work, congrats!
0
0
1
@HaozhiQ
Haozhi Qi
11 months
@chenwang_j Thanks Chen!
0
0
1
@HaozhiQ
Haozhi Qi
4 years
We propose to utilize this idea to build rich object representations for trajectory prediction. Instead of predicting pixels, we argue that trajectory prediction is the key to reducing complexity in long-term prediction, and is more feasible for downstream tasks like planning. (4/5)
1
0
1
@HaozhiQ
Haozhi Qi
11 months
@binghao_huang 😂 next time you can have a string attached to the object.
Tweet media one
1
0
1
@HaozhiQ
Haozhi Qi
1 year
@FahadAlkhater9 @ashishkr9311 @UCBerkeley @Tesla @Teslasbot thank you! We also have a summary post here in case you are interested
@HaozhiQ
Haozhi Qi
2 years
💡We release Hora: a single policy capable of rotating diverse objects 🎾🥝🧊🍋🍊⌛🥑🍅🍐🍑 with a dexterous robot hand. No cameras. No touch sensors. Hora is trained entirely in simulation and directly deployed in the real world. see #CoRL @corl_conf 1/
7
68
312
1
0
1
@HaozhiQ
Haozhi Qi
4 years
As someone who previously worked on object detection, I'm quite surprised the region feature pooling idea is so under-explored in the field of intuitive physics and dynamics prediction. Our work tries to bring this success from computer vision to the intuitive physics community. (3/5)
Tweet media one
1
0
1
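For readers unfamiliar with the idea: region feature pooling extracts a fixed-size feature per object box from a shared feature map, e.g. via RoIAlign. A generic sketch with torchvision (not the paper's code; all shapes and boxes are made up):

```python
import torch
from torchvision.ops import roi_align

# Backbone features and two object boxes; boxes use
# (batch_idx, x1, y1, x2, y2) format in feature-map pixel coordinates.
feat = torch.randn(1, 64, 32, 32)                 # (B, C, H, W)
boxes = torch.tensor([[0, 4.0, 4.0, 16.0, 16.0],
                      [0, 10.0, 8.0, 28.0, 20.0]])

# One fixed-size feature per object, usable as input to a dynamics model.
obj_feats = roi_align(feat, boxes, output_size=(7, 7), spatial_scale=1.0)
print(obj_feats.shape)                            # torch.Size([2, 64, 7, 7])
```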
@HaozhiQ
Haozhi Qi
21 days
@TairanHe99 Thanks for your kind words, Tairan!!!
0
0
1
@HaozhiQ
Haozhi Qi
1 month
0
0
1
@HaozhiQ
Haozhi Qi
1 year
@YonglongT Congrats!
1
0
1
@HaozhiQ
Haozhi Qi
2 months
@NimaFazeli7 Congrats Nima!
0
0
1
@HaozhiQ
Haozhi Qi
1 month
@QinYuzhe @xiaolonw Congrats Yuzhe!
0
0
1
@HaozhiQ
Haozhi Qi
4 months
0
0
1
@HaozhiQ
Haozhi Qi
9 months
@haosu_twitr Congratulations!
1
0
1