Ruoshi Liu

@ruoshi_liu

Followers: 1,202
Following: 616
Media: 29
Statuses: 272

Building better 👁️ and 🧠 for 🤖 | PhD Student @Columbia

New York, NY
Joined March 2021
Pinned Tweet
@ruoshi_liu
Ruoshi Liu
2 months
How can a visuomotor policy learn from internet videos? We introduce Dreamitate, where a robot uses a fine-tuned video diffusion model to dream the future (top) and imitate the dream to accomplish a task (bottom). website: paper:
9
54
261
@ruoshi_liu
Ruoshi Liu
11 months
Zero123 can now generate 3D assets in 2 minutes with Gaussian Splatting thanks to the amazing work done by Jiaxiang Tang et al.!! Production-ready 3D generation is getting closer and closer👏
7
93
541
@ruoshi_liu
Ruoshi Liu
1 year
Some naive thoughts about the careers of AI researchers, from a young, uninformed PhD student 💭 An AI researcher has basically the same job as a film director. The three things you care about the most are: creativity, impact, and reputation. (1/n)
5
33
263
@ruoshi_liu
Ruoshi Liu
6 months
Humans can design tools to solve various real-world tasks, and so should embodied agents. We introduce PaperBot, a framework for learning to create and utilize paper-based tools directly in the real world.
7
29
172
@ruoshi_liu
Ruoshi Liu
4 months
What Fei-Fei is trying to say is that PhD students at universities produce open-source, trustworthy AI research rather than closed-source, profit-driven business products, so please invest in us poor but talented PhD students, and we will put those investments to the best use for society.
@tsarnick
Tsarathustra
4 months
Fei-Fei Li says Stanford's Natural Language computing lab has only 64 GPUs and academia is "falling off a cliff" relative to industry
99
206
1K
2
4
137
@ruoshi_liu
Ruoshi Liu
1 year
Do you know humans are infrared light bulbs💡? Excited to share our #CVPR2023 paper! With a thermal camera, we leverage “invisible reflections” created by the infrared radiation of humans in order to reconstruct their location, orientation, and 3D pose.
6
24
137
@ruoshi_liu
Ruoshi Liu
1 year
Thanks @_akhaliq for tweeting our work and @huggingface for funding our demo! Also proud to announce that the paper has officially been accepted to #ICCV2023 🥳 P.S. weights of Zero123-XL coming soon : )
@_akhaliq
AK
1 year
Zero-1-to-3: Zero-shot One Image to 3D Object github: demo: learn to control the camera perspective in large-scale diffusion models, enabling zero-shot novel view synthesis and 3D reconstruction from a single image
4
96
343
5
11
122
@ruoshi_liu
Ruoshi Liu
3 months
Nothing makes a PhD student more satisfied than an 8-GPU server running at full power utilization🔥
@LambdaAPI
Lambda
3 months
Love the smell of fresh H200s in the morning!
Tweet media one
17
23
427
2
4
112
@ruoshi_liu
Ruoshi Liu
2 months
CVPR was an absolute blast! See y’all in Milan 🤗
Tweet media one
Tweet media two
2
1
85
@ruoshi_liu
Ruoshi Liu
1 year
Given a single 3D asset, can we generate its variations without relying on prior knowledge? Introducing Sin3DM ✨, a diffusion model that learns from a single 3D asset and generates high-quality variations with fine geometry and texture details. [1/4]
Tweet media one
1
13
82
@ruoshi_liu
Ruoshi Liu
1 year
Introducing Objaverse-XL and Zero123-XL! The era of 3D foundation models is here, welcome aboard 🚀 Joint work with @mattdeitke et al.
@mattdeitke
Matt Deitke
1 year
Introducing Objaverse-XL, an open dataset of over 10 million 3D objects! With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: 🧵👇 📝 Paper:
54
339
2K
3
5
67
@ruoshi_liu
Ruoshi Liu
9 months
Organizing the first NYC vision workshop was super fun! Shout out to other organizers @elliottszwu @Haian_Jin and especially @Jimantha for the generous support!
Tweet media one
3
8
64
@ruoshi_liu
Ruoshi Liu
7 months
Thanks @_akhaliq for tweeting our work! It has been shown in our prior work Zero123 that Stable Diffusion has learned powerful visual priors that can serve as the foundation of zero-shot generalization ability for many vision tasks. In our recent work pix2gestalt, we show
@_akhaliq
AK
7 months
pix2gestalt: Amodal Segmentation by Synthesizing Wholes paper page: synthesizes whole objects from only partially visible ones, enabling amodal segmentation, recognition, and 3D reconstruction of occluded objects
0
38
234
2
9
63
@ruoshi_liu
Ruoshi Liu
3 months
Recently released video generation models are amazing😍 How can we use them in robotics to learn generalizable visuomotor policies? Come find out in my talks at these 4 CVPR workshops next week, where I will talk about recent works in 3D, generative models, and robotics!
1
6
53
@ruoshi_liu
Ruoshi Liu
4 months
Congrats to my advisor for officially becoming the “big guy” 😈
@iclr_conf
ICLR 2025
4 months
That's a wrap for #ICLR2024 ! Thanks to everyone for participating! See you all at #ICLR2025 in Singapore! We are grateful for the leadership of @cvondrick as Senior Program Chair.
Tweet media one
Tweet media two
5
26
296
0
2
44
@ruoshi_liu
Ruoshi Liu
1 year
My guitar skills have gotten better while waiting for ICCV results 🚬 #ICCV2023
1
0
44
@ruoshi_liu
Ruoshi Liu
11 months
Will be giving a talk in person at Brown University tomorrow! Thanks a lot for the invitation! @BrownVisualComp @BrownCSDept @jtompkin @ArmanMaesumi
@BrownVisualComp
Brown Visual Computing
11 months
The Brown Visual Computing Seminar will be hosting @ruoshi_liu this Friday (10/13, 12pm EST)! Ruoshi’s talk, “Generating the 3D World” will be livestreamed at , don’t miss it!
Tweet media one
0
0
7
0
0
40
@ruoshi_liu
Ruoshi Liu
3 months
LaTeX tip for people struggling to fit a paper within 8 pages for CoRL: add "\looseness-1" before a paragraph to remove the hanging word on its last line.
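A minimal sketch of where the directive goes (the body text is filler of my own, and \looseness-1 is equivalent to \looseness=-1):

```latex
\documentclass{article}
\begin{document}
% \looseness-1 asks TeX to typeset the NEXT paragraph one line shorter
% than its natural length; it usually only succeeds when the last line
% holds a single short hanging word that can be absorbed.
\looseness-1
A paragraph long enough to wrap across several lines, whose final line
would otherwise contain just one hanging word.
\end{document}
```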
1
2
37
@ruoshi_liu
Ruoshi Liu
1 year
When you write a few high-impact papers (blockbuster movies), you can afford to write a couple of niche, fun papers the mainstream audience fails to appreciate (indie artsy movies), which is basically the “one for you, one for me” rule in the film industry.
2
1
31
@ruoshi_liu
Ruoshi Liu
6 months
I was dreaming of this demo when I worked on Zero123(). Awesome to see we finally have both the software and hardware to make this happen!
@blizaine
Blaine Brown 
6 months
Check this out!🔥 Modified version of TripoSR (open source 2D to 3D tool). This version outputs USDZ directly for Apple Vision Pro.🥽 You can easily create 3D objects in AVP with one click & view them in your space! Modified repo in the thread 🧵👇
19
38
237
0
1
29
@ruoshi_liu
Ruoshi Liu
5 months
Check out the new @newscientist article covering PaperBot🤖 Shout out to @AlexWilkins22 🙌
@newscientist
New Scientist
5 months
Watch an origami-mastering robot that can beat humans at making paper planes. Learn more 👉
0
3
21
0
0
30
@ruoshi_liu
Ruoshi Liu
4 months
I always believe that you shouldn't worry too much about the speed of your method because if you create something cool, someone will make it fast. There you have it: 1 hour to 1 second in 1 year. Congrats!!🥳
@SarahWeii
Xinyue Wei
4 months
Check our MeshLRM 🌟, which achieves state-of-the-art mesh reconstruction from sparse-view inputs within 1 second! 🚀🚀🚀 Paper: Project page:
3
10
62
0
2
26
@ruoshi_liu
Ruoshi Liu
10 months
Looks super cool! Thrilled to see the impact Zero123 has made in vision, graphics, and robotics🥳
@haosu_twitr
Hao Su
10 months
📢Thrilled to announce sudoAI ( @sudoAI_ ), founded by a group of leading AI talents and me!🚀 We are dedicated to revolutionizing digital & physical realms by crafting interactive AI-generated 3D environments! Join our 3D Gen AI model waitlist today! 👉
14
101
428
0
0
24
@ruoshi_liu
Ruoshi Liu
5 months
Failed attempt at capturing eclipse with a lab-assembled pin-hole camera… didn’t get anything 🫠
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
0
23
@ruoshi_liu
Ruoshi Liu
1 year
Exciting work that extends Zero123 to diffuse multiple novel views simultaneously to achieve better multiview consistency and subsequently better 3D reconstruction performance!
@_akhaliq
AK
1 year
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image paper page: present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image. Using pretrained large-scale 2D diffusion models, recent
5
99
448
1
0
22
@ruoshi_liu
Ruoshi Liu
12 days
I personally respect Graeber a lot, but I have to say “building a robot that could take your laundry down, wash it, and bring it back again” is the perfect example of something that sounds easy but is extremely hard. I empathize with the disappointment that
1
0
21
@ruoshi_liu
Ruoshi Liu
1 year
The quality of your work is as important as how well it’s promoted. You constantly struggle between: am I a scientist (artist) or am I an engineer (entertainer)?
1
1
19
@ruoshi_liu
Ruoshi Liu
21 days
Author-reviewer discussions on openreview are like couples in toxic relationships. They seem to be having equal and civil discussions, but the underlying power dynamics are so imbalanced.
1
0
20
@ruoshi_liu
Ruoshi Liu
1 year
One sentence to instantly raise any PhD student’s heart rate by 100%: ICCV reviews are out 🙈
0
1
18
@ruoshi_liu
Ruoshi Liu
2 months
🔊Stop by my talk at the Rhobin workshop @ Summit 427 at 2 PM, where I will talk about how we use human-object interaction data for robotics! @CVPR Here’s a view of the conference hall 😍 and the amazing @geopavlakos giving a talk before me🤩
Tweet media one
0
3
17
@ruoshi_liu
Ruoshi Liu
1 year
It’s hard to explain visa issues to people who haven’t experienced them. This is a great/heartbreaking example of what we are dealing with…
@linguistMasoud
Masoud
1 year
I’m quite used to the cruelty students can face when they apply for a US visa but this one broke me. We offered admission to a stellar, talented & hardworking student. After months of work and hundreds of dollars, an embassy officer saw him for 5 mins & said no. why? …
1K
13K
88K
0
1
17
@ruoshi_liu
Ruoshi Liu
1 year
When you only chase creativity (independent artistic expression), you’ll have a hard time getting funding for your research (film budget).
1
0
14
@ruoshi_liu
Ruoshi Liu
1 year
Here are my personal GOATs in both jobs, who constantly strike a balance between exploration and impact in their legendary careers.
Tweet media one
Tweet media two
2
0
15
@ruoshi_liu
Ruoshi Liu
3 months
What's a better way to kick-start an exciting conference than an early morning talk? 😉 Come tomorrow morning 9:25-9:50 am @ ARCH 4F for my talk! I will talk about a topic I'm super excited about: 3D Generation for Physical Intelligence
@paschalidoud_1
Despoina Paschalidou
3 months
📢Our #CVPR2024 workshop on AI for 3D Generation is happening tomorrow on June 17th in ARCH 4F! We have a fantastic lineup of speakers, and even if you are not in Seattle you can attend all talks via Zoom! Check the workshop's website for more details:
Tweet media one
2
12
53
0
2
15
@ruoshi_liu
Ruoshi Liu
1 year
I will be presenting this paper with @cvondrick tomorrow afternoon at the poster session @ 4:30PM (No. 18). Come say hi! P.S. We will bring the thermal camera for demo🔥
@ruoshi_liu
Ruoshi Liu
1 year
Do you know humans are infrared light bulbs💡? Excited to share our #CVPR2023 paper! With a thermal camera, we leverage “invisible reflections” created by the infrared radiation of humans in order to reconstruct their location, orientation, and 3D pose.
6
24
137
0
2
14
@ruoshi_liu
Ruoshi Liu
1 year
And do random things!
@prafull7
Prafull Sharma
1 year
“Make yourself be lucky!”- Alyosha Efros
Tweet media one
6
81
567
0
0
13
@ruoshi_liu
Ruoshi Liu
4 months
RNN policy vs. diffusion policy
0
0
12
@ruoshi_liu
Ruoshi Liu
1 year
When you only chase impact (box office), you gradually lose reputation among researchers. People despise you for being too utilitarian (directors like Michael Bay 🙃).
2
1
10
@ruoshi_liu
Ruoshi Liu
1 year
Glad to report both of my papers got accepted!! Can finally put my guitar down and go to bed 😴
1
0
11
@ruoshi_liu
Ruoshi Liu
11 months
As a student I cannot agree more!
@jbhuang0604
Jia-Bin Huang
11 months
I once received advice: "If your current project will not change the field, then it's not worthwhile doing it." This was the time when I first started exploring research. Looking back, this is probably the MOST TERRIBLE advice for me.
9
56
620
0
0
11
@ruoshi_liu
Ruoshi Liu
2 months
@nilsingelhag check out our Dreamitate paper
@ruoshi_liu
Ruoshi Liu
2 months
How can a visuomotor policy learn from internet videos? We introduce Dreamitate, where a robot uses a fine-tuned video diffusion model to dream the future (top) and imitate the dream to accomplish a task (bottom). website: paper:
9
54
261
0
1
11
@ruoshi_liu
Ruoshi Liu
1 year
Nice results and awesome work!
@tanmay2099
Tanmay Gupta
1 year
Imagine a 2D image serving as a window to a 3D world that you could reach into, manipulate objects, and see changes reflected in the image. In our new OBJect 3DIT work, we edit images in this 3D-aware fashion while only operating in the pixel space! 🧵
Tweet media one
1
35
131
0
1
10
@ruoshi_liu
Ruoshi Liu
1 year
Gotta love the NVIDIA job application meme😂
@docmilanfar
Peyman Milanfar
1 year
I hear #NVIDIA is hiring
Tweet media one
7
15
356
0
0
9
@ruoshi_liu
Ruoshi Liu
4 months
What a cute dataset😻
@_amirbar
Amir Bar
4 months
Animals are intelligent agents that plan and act to accomplish complex goals. Can we try learning from them? We present EgoPet, a new egocentric video dataset of animals scraped from YouTube and TikTok.
14
79
421
0
0
9
@ruoshi_liu
Ruoshi Liu
1 year
@_akhaliq Let's make 3D reconstruction of Spyro an academic tradition (see figure 1)😜 @georgiagkioxari
2
0
9
@ruoshi_liu
Ruoshi Liu
4 months
Cool thing I learned today is the majority of Ivy League universities’ presidents are women. Sad thing is I learned about this because they are resigning one by one recently…
1
0
9
@ruoshi_liu
Ruoshi Liu
3 months
I also feel obliged to include a video generated by @LumaLabsAI Prompt: "a robot folding cloth"
1
1
8
@ruoshi_liu
Ruoshi Liu
2 months
Having a video diffusion model as the backbone allows Dreamitate to perform much better than Diffusion Policy when trained with the same amount of demonstration data. We perform experiments on four real-world manipulation tasks and show an average of 41.4% improvement!
Tweet media one
1
0
6
@ruoshi_liu
Ruoshi Liu
9 months
“World model” is becoming a more and more vague term like “zero shot”. How should we define it?
1
0
7
@ruoshi_liu
Ruoshi Liu
9 months
@jon_barron Super cool! Check out our CVPR paper on using shadow for 3D reconstruction under occlusion:
0
0
7
@ruoshi_liu
Ruoshi Liu
1 year
@Eng_Hemdi Ilya Sutskever? Who do you think?
1
0
6
@ruoshi_liu
Ruoshi Liu
2 months
When re-trained with two-thirds and one-third of the rotation task dataset, Diffusion Policy’s performance declines significantly with reduced data. In contrast, Dreamitate maintains a high success rate. We expect Dreamitate to scale along with the rapid scaling progress of video
Tweet media one
1
0
6
@ruoshi_liu
Ruoshi Liu
2 months
Dreamitate is a visuomotor policy learning framework that fine-tunes a video generative model to synthesize videos of humans using tools to complete a task. The tool’s trajectory in the generated video is tracked, and the robot executes this trajectory to accomplish the task in
Tweet media one
1
0
6
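Read as pseudocode, the dream-then-imitate loop described above looks roughly like this; video_model, tracker, and robot are hypothetical placeholders for illustration, not the released Dreamitate code:

```python
# Sketch of the Dreamitate control loop described in the thread.
def dreamitate_step(video_model, tracker, robot, observation):
    # 1. "Dream": synthesize a video of a tool completing the task,
    #    conditioned on the current camera observation.
    dreamed_video = video_model.generate(condition=observation)
    # 2. Track the tool's trajectory through the generated frames.
    tool_trajectory = tracker.track(dreamed_video)
    # 3. "Imitate": execute the tracked trajectory on the real robot.
    robot.execute(tool_trajectory)
    return dreamed_video, tool_trajectory
```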
@ruoshi_liu
Ruoshi Liu
2 months
Thanks @EMostaque for tweeting our work!! Shout out to the Stable Video Diffusion team for the amazing open-source effort👏 Video models will get better and robots will get smarter with them!
@EMostaque
Emad
2 months
Video models are a big unlock for robotics that will push it over the edge of mass usefulness and adoption. Very interesting research here, check it out
7
6
78
0
0
6
@ruoshi_liu
Ruoshi Liu
6 months
We creatively leverage paper-processing tools such as printers and Cricut machines, as well as two xArm 7 robot arms, to automate the process of creating paper tools. The whole learning process takes 100 iterations, which can be completed within 3 hours for both tasks.
1
0
5
@ruoshi_liu
Ruoshi Liu
2 months
Dreamitate can also perform tasks that require multiple steps to finish. We generalize the famous "Push-T" task to the "Push-Shape" task, where a robot needs to push an arbitrary shape toward the target within several steps. When trained on demonstrations of pushing alphabet letters, it
1
0
5
@ruoshi_liu
Ruoshi Liu
6 months
For more details, please go check out our website and paper using the links below. Tool design is a fascinating area in robotics that’s currently under-explored. We hope our work can inspire future work on this important task. I'm honored to work with Junbang Liang,
0
0
5
@ruoshi_liu
Ruoshi Liu
2 months
Our formulation of policy is general. We can also learn to scoop an unseen material into a target container among a group of unseen distractor objects. This example showcases the high fidelity of the generated video, which enables accurate subsequent 3D tracking.
1
0
4
@ruoshi_liu
Ruoshi Liu
1 year
The relatively high temperature of the human body turns people into infrared light sources 🔥. Since this emitted light has a longer wavelength than visible light, many surfaces in typical scenes act as infrared mirrors 🪞 with strong specular reflections. Here are some examples:
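As a back-of-the-envelope aside (my arithmetic, not the thread's): Wien's displacement law puts the peak emission wavelength of a ~310 K human body deep in the long-wave infrared, far past the visible range:

```latex
\[
\lambda_{\mathrm{peak}} = \frac{b}{T}
  \approx \frac{2898~\mu\mathrm{m \cdot K}}{310~\mathrm{K}}
  \approx 9.3~\mu\mathrm{m}
  \qquad (\text{visible light: } \approx 0.4\text{--}0.7~\mu\mathrm{m})
\]
```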
1
1
5
@ruoshi_liu
Ruoshi Liu
1 year
This is a work led by @ChrisWu6080 and advised by @cvondrick , @fast_sploosh . Check out our paper, code, and demo for more interesting findings and results! Paper: Code: Demo:
0
0
5
@ruoshi_liu
Ruoshi Liu
6 months
A core advantage of PaperBot is the ability to customize a tool design to fit different reward definitions. In this example, we adapt our gripper design optimized for medium-sized objects to work with smaller or larger objects.
1
0
5
@ruoshi_liu
Ruoshi Liu
1 month
@jon_barron Great idea! This will also benefit the pursuit of physically correct generative models, which is a big requirement for using them in robotic tasks.
0
0
5
@ruoshi_liu
Ruoshi Liu
1 year
Wow, the teaser has completely lost its function as a summary of the main contribution. Gotta say it’s pretty cute though 🙃
@_akhaliq
AK
1 year
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning paper page: Recently, the release of INSTRUCTEVAL has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or
Tweet media one
0
64
268
1
0
5
@ruoshi_liu
Ruoshi Liu
1 year
Really cool work!
@taiyasaki
Andrea Tagliasacchi 🇨🇦☀️🏖️
1 year
📢📢📢 Introducing "𝐁𝐚𝐲𝐞𝐬' 𝐑𝐚𝐲𝐬: Uncertainty Quantification for Neural Radiance Fields" TL;DR: 𝐜𝐚𝐧 𝐈 "𝐭𝐫𝐮𝐬𝐭" 𝐦𝐲 𝐍𝐞𝐑𝐅 𝐚𝐭 𝐩𝐨𝐬𝐢𝐭𝐢𝐨𝐧 𝐱? with @lily_goli , @sellan_s , @code_red7777 , @_AlecJacobson .
2
50
332
0
0
5
@ruoshi_liu
Ruoshi Liu
5 months
And here’s an iPhone + filter one 🌙
Tweet media one
0
0
5
@ruoshi_liu
Ruoshi Liu
6 months
Paper is a cheap, recyclable, and clean material that is often used to make practical tools. We apply PaperBot as a unified framework to solve two different real-world tool design tasks: 1. learning to fold and throw a paper airplane that travels a maximum distance and 2. learning to design a paper gripper that can grasp objects of various sizes.
Tweet media one
1
0
5
@ruoshi_liu
Ruoshi Liu
1 year
These are apparently all personal opinions so any discussions, corrections, and criticisms are welcome!
1
0
5
@ruoshi_liu
Ruoshi Liu
6 months
Prior works on tool design have resorted to reinforcement learning and simulation. In contrast, PaperBot operates directly in the real world in an online setting, using an epsilon-greedy exploration strategy. Before each iteration, we train a new surrogate model using prior
1
0
5
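In outline, that online loop could look like the sketch below; design_space, fit_surrogate, and execute_and_score are hypothetical interfaces for illustration, not PaperBot's actual API:

```python
import random

def paperbot_loop(design_space, fit_surrogate, execute_and_score,
                  iterations=100, eps=0.1):
    """Epsilon-greedy tool-design search run directly in the real world."""
    history = []  # (design, reward) pairs from real-world trials so far
    for _ in range(iterations):
        # Before each iteration, retrain a surrogate reward model on all
        # prior trials (assume fit_surrogate copes with an empty history).
        surrogate = fit_surrogate(history)
        if random.random() < eps:
            design = design_space.sample()                  # explore
        else:
            design = max(design_space.candidates(),
                         key=surrogate.predict)             # exploit
        reward = execute_and_score(design)  # fabricate and test the tool
        history.append((design, reward))
    return max(history, key=lambda pair: pair[1])[0]  # best design found
```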
@ruoshi_liu
Ruoshi Liu
2 months
@3DVconf @Michael_J_Black Hi Michael, what’s the worst part of being a professor and what’s the best part?
1
0
4
@ruoshi_liu
Ruoshi Liu
3 months
@alexyu00 Isn’t NeRF-type scene what it’s trained on?😜
1
0
4
@ruoshi_liu
Ruoshi Liu
8 months
@yuliangxiu So true 😂
0
0
1
@ruoshi_liu
Ruoshi Liu
2 months
This is joint work with @LiangJunbang , @EgeOzguroglu , @SruthiSudhakar2 , @achalddave , @ptokmakov , @SongShuran , @cvondrick . Please check out our website and paper for more results!
0
0
4
@ruoshi_liu
Ruoshi Liu
1 year
Such a cool work! Love it!
@kyoto_vision
Kyoto University Computer Vision Lab (Nishino Lab)
1 year
DeePoint: Visual Pointing Recognition and Direction Estimation #ICCV2023 We point out that we can tell when you are pointing and which 3D direction you are pointing at. Is my point clear?
1
19
111
0
0
4
@ruoshi_liu
Ruoshi Liu
10 months
@YiMaTweets Hinton and Bengio seemed quite concerned though 🤔
0
0
1
@ruoshi_liu
Ruoshi Liu
2 months
This formulation of policy inherits the prior knowledge learned from internet videos, allowing it to be much more generalizable than policies trained from scratch. For example, this allows us to learn a generalizable policy that can rotate an arbitrary object put on the table.
1
0
4
@ruoshi_liu
Ruoshi Liu
1 year
An exciting potential application of Sin3DM is 3D data augmentation. In an era where we “just need more data”, effective 3D data augmentation methods like Sin3DM are all the more needed!
@nickfloats
Nick St. Pierre
1 year
Midjourney feels confident they can ship 3D stuff this year!! 👀 They just mentioned this in the Midjourney Office Hours. Says there are not really technical limitations, just need more data.
53
59
727
1
0
3
@ruoshi_liu
Ruoshi Liu
1 year
Our system works well in many real-world scenes with various objects including using cars as infrared mirrors 🚗➡️🪞. This could enable an autonomous driving system 🤖 equipped with a thermal camera to detect humans even when they are not in the line of sight.
1
0
3
@ruoshi_liu
Ruoshi Liu
5 months
@Michael_J_Black @docmilanfar A slight digression: what percentage of computer vision research today would you call science? And what percentage do you think it should be?
0
0
2
@ruoshi_liu
Ruoshi Liu
3 months
@MattNiessner Come give a talk @Columbia pls!
1
0
1
@ruoshi_liu
Ruoshi Liu
4 months
@holynski_ Looks amazing!! 😍
0
0
2
@ruoshi_liu
Ruoshi Liu
6 months
@ehsanik @cvondrick @SongShuran Thank you @ehsanik ! Working on this project has been lots of fun😊
0
0
2
@ruoshi_liu
Ruoshi Liu
1 year
@_akhaliq Remember to take care of yourself physically and mentally in this incredibly hectic time of AI research!
0
0
2
@ruoshi_liu
Ruoshi Liu
1 year
Given an input 3D asset, we first learn an implicit triplane latent representation. Then we train a latent diffusion model with reduced receptive field to learn the distribution of triplane features. At inference time, we generate variations by sampling with the diffusion model.
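Schematically, the two stages might be wired up like this (hypothetical interfaces, not the actual Sin3DM code):

```python
# Two-stage sketch of the pipeline described above; encoder, decoder,
# and diffusion are hypothetical stand-ins for the paper's components.
def train_sin3dm(asset, encoder, decoder, diffusion):
    # Stage 1: fit an implicit triplane latent to the single 3D asset,
    # with a decoder mapping triplane features back to geometry/texture.
    triplane = encoder.fit(asset)
    decoder.fit(triplane, target=asset)
    # Stage 2: train a latent diffusion model on the triplane features;
    # the reduced receptive field makes it capture local patch statistics
    # instead of memorizing the single training asset.
    diffusion.fit(triplane)
    return decoder, diffusion

def sample_variation(decoder, diffusion):
    # Sampling a new triplane latent and decoding it yields a variation.
    return decoder.decode(diffusion.sample())
```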
Tweet media one
1
0
2
@ruoshi_liu
Ruoshi Liu
2 years
@jbhuang0604 I started following you after starting my PhD but definitely regret not doing it sooner!! Thank you for always thinking about junior researchers like us 🙏
0
0
2
@ruoshi_liu
Ruoshi Liu
1 year
@lam_darius You are right that all fields that require lots of creativity probably share some characteristics. But I think AI researcher and film director are both quite unique jobs in the sense that creativity is as important as the impact of your work.
1
0
2
@ruoshi_liu
Ruoshi Liu
4 months
@thiemoall @CSProfKGD Sounds like the second author should be the first author in this case :)
1
0
2
@ruoshi_liu
Ruoshi Liu
7 months
@chengyenhsieh Thank you!! TAO is very cool work!
1
0
1
@ruoshi_liu
Ruoshi Liu
1 year
Always fascinated by creative vision problems and unusual solutions!
@akanazawa
Angjoo Kanazawa
1 year
Love this problem statement! Reconstruct the world from retinal reflections: The original paper from 2004 by Nishino and Nayar was a super inspiring example of creative research for me: Great to see a 3D version of it!!
12
209
998
0
0
2
@ruoshi_liu
Ruoshi Liu
7 months
Decency is a good thing to have as a human being. Just saying.
@elonmusk
Elon Musk
7 months
Boobs just rock, it’s a fact
Tweet media one
17K
23K
358K
0
0
2
@ruoshi_liu
Ruoshi Liu
10 months
@zhong_yuhong CVPR as well 🫠
1
0
2
@ruoshi_liu
Ruoshi Liu
1 year
Thermal computer vision is a wonderland full of fascinating counter-intuitive visual phenomena and many research opportunities to be discovered. If you want to know more details about our work, please check out our paper with @cvondrick ! arxiv:
1
0
2
@ruoshi_liu
Ruoshi Liu
1 year
Inspired by @YuilleAlan 's phenomenal work () back in 2006, we proposed an approach combining generative models of objects and humans with differentiable rendering. With powerful generative models today, analysis by synthesis is gaining lots of momentum!
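For readers new to analysis by synthesis, here is a minimal gradient-based sketch (illustrative only; render, observed, and init_params are assumed inputs, not the paper's code):

```python
import torch

def analysis_by_synthesis(render, observed, init_params, steps=500, lr=1e-2):
    """Fit scene parameters so a differentiable renderer explains an image.

    render: differentiable function mapping params -> image tensor.
    observed: observed image tensor the parameters should explain.
    """
    params = init_params.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(render(params), observed)
        opt.zero_grad()
        loss.backward()   # gradients flow back through the renderer
        opt.step()
    return params.detach()
```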
Tweet media one
1
0
2
@ruoshi_liu
Ruoshi Liu
1 year
@lt453_ Very intriguing question! I think Bill Freeman is Quentin Tarantino. They are both very commercially and creatively successful, kind as a person, and willing to do very bold things just for fun :) Woody Allen is tough… what do you think?
0
0
2
@ruoshi_liu
Ruoshi Liu
6 months
@georgiagkioxari @SongShuran Me too! Even the talented human participant failed to beat PaperBot with 100 chances (see Fig.8)😜
0
0
1
@ruoshi_liu
Ruoshi Liu
1 year
@MattNiessner Very cool work! How does it compare with MeshDiffusion whose geometric representation is more explicit?
0
0
1
@ruoshi_liu
Ruoshi Liu
1 year
@macfeelay Stanley Kubrick (left) and Geoffrey Hinton (right)
0
0
0