Ruoshi Liu

@ruoshi_liu

Followers: 2K · Following: 777 · Statuses: 358

Building better 👁️ and 🧠 for 🤖 | PhD Student @Columbia

New York, NY
Joined March 2021
@ruoshi_liu
Ruoshi Liu
4 months
We present🌊AquaBot🤖: a fully autonomous underwater manipulation system powered by visuomotor policies that can continue to improve through self-learning to perform tasks including object grasping, garbage sorting, and rescue retrieval. more details👇
7
38
206
@ruoshi_liu
Ruoshi Liu
1 day
RT @michigan_AI: Looking forward to hosting @ruoshi_liu tomorrow for a seminar on "Generative Computer Vision for the Physical World:" 📌 FE…
0
3
0
@ruoshi_liu
Ruoshi Liu
1 day
Please submit your papers to the CVPR 4D Vision workshop!
@elliottszwu
Elliott / Shangzhe Wu
1 day
Really excited to put together this @CVPR workshop on "4D Vision: Modeling the Dynamic World" -- one of the most fascinating areas in computer vision today! We've invited incredible researchers who are leading fantastic work in various related fields.
Tweet media one
0
0
12
@ruoshi_liu
Ruoshi Liu
15 days
@ehsanik I, too, as not a startup founder, approve this message.
0
0
4
@ruoshi_liu
Ruoshi Liu
15 days
Happy Lunar New Year to everyone in the Year of the Snake 🐍❤️
0
0
22
@ruoshi_liu
Ruoshi Liu
1 month
RT @random_walker: Something I've observed in academia but I suspect is true in industry as well: as you become more senior, you'll find th…
0
187
0
@ruoshi_liu
Ruoshi Liu
1 month
Some words of wisdom for new PhDs from the great @mia_chiquier!
@mia_chiquier
Mia Chiquier
1 month
"I'm too slow at ML research" - every researcher ever. Over years of trying different strategies, I've landed on a few that have really helped me. I've written them down here, hoping it helps others & becomes a community resource!
0
0
4
@ruoshi_liu
Ruoshi Liu
1 month
RT @allenzren: HNY! Lately I took a crack at implementing the pi0 model from @physical_int PaliGemma VLM (2.3B fine-tuned) + 0.3B "action…
0
59
0
@ruoshi_liu
Ruoshi Liu
2 months
Great work! Looking forward to the real-world demo of this Alyosha result 🤓
Tweet media one
@Boyiliee
Boyi Li
2 months
I’ve dreamt of creating a tool that could animate anyone with any motion from just ONE image… and now it’s a reality! 🎉 Super excited to introduce updated 3DHM: Synthesizing Moving People with 3D Control.
🕺💃 3DHM can generate human videos from a single real or synthetic human image. #Animation #GenAI #AI #3DHM
✨ The magic of 3D control? Turning 2D pixels into lifelike, animated humans.
🎥 Check out our demo (and Merry Christmas)!
Paper: Github: Webpage:
Proudly working with the great @JunmingChenleo, @brjathu, @YGandelsman, Alyosha Efros and @JitendraMalikCV 😃
Kindly note: This video is intended solely for research purposes and is not authorized for commercial use.
0
0
7
@ruoshi_liu
Ruoshi Liu
2 months
RT @Stone_Tao: Yesterday the hyped Genesis simulator released. But it's up to 10x slower than existing GPU sims, not 10-80x faster or 430,0…
0
61
0
@ruoshi_liu
Ruoshi Liu
2 months
RT @ZeyuanAllenZhu: (1/3) Let me give Rosalind Picard a lesson on what real values I learned at Tsinghua Physics. In our notorious experime…
0
198
0
@ruoshi_liu
Ruoshi Liu
2 months
I hear arrogance, ignorance, and nonchalance. Let’s try to stop those attitudes as a research community.
1
5
93
@ruoshi_liu
Ruoshi Liu
2 months
RT @jin_linyi: Introducing 👀Stereo4D👀 A method for mining 4D from internet stereo videos. It enables large-scale, high-quality, dynamic, *…
0
89
0
@ruoshi_liu
Ruoshi Liu
2 months
RT @chrisoffner3d: "Sora is a data-driven physics engine."
0
696
0
@ruoshi_liu
Ruoshi Liu
2 months
Congrats on the amazing work by @dangengdg! If you work with robots, you might be wondering: can we apply amazing tools like these to robot control? Check out:
Dreamitate ( where we use a fine-tuned video generation model to guide robot manipulation.
Dr. Robot ( where we obtain physically executable robot motion from (generated and real) videos using differentiable rendering.
Tweet media one
Tweet media two
@dangengdg
Daniel Geng
2 months
What happens when you train a video generation model to be conditioned on motion? Turns out you can perform "motion prompting," just like you might prompt an LLM! Doing so enables many different capabilities. Here’s a few examples – check out this thread 🧵 for more results!
0
10
64
@ruoshi_liu
Ruoshi Liu
3 months
RT @elliottszwu: I'm building a new research lab @Cambridge_Eng focusing on 4D computer vision and generative models. Interested in joinin…
0
45
0
@ruoshi_liu
Ruoshi Liu
3 months
RT @ChrisWu6080: 🚀 Introducing CAT4D! 🚀 CAT4D transforms any real or generated video into dynamic 3D scenes with a multi-view video diffusi…
0
79
0
@ruoshi_liu
Ruoshi Liu
3 months
RT @YunzhuLiYZ: 📢 I’ll be admitting PhD students to Columbia CS in the heart of NYC 🗽—the most vibrant city in the world! 🌆 If you're pass…
0
52
0
@ruoshi_liu
Ruoshi Liu
3 months
@rasbt We are with you, not reviewer 2 🫡
0
0
1