![Qianqian Wang Profile](https://pbs.twimg.com/profile_images/1365359419312193536/JsaYDEpP_x96.jpg)
Qianqian Wang (@QianqianWang5)
2K Followers · 127 Following · 20 Statuses
Postdoc at UC Berkeley and Visiting Researcher at Google. Former Ph.D. student at Cornell Tech. https://t.co/LyIdb5HmM9
Joined December 2018
Thanks to the amazing team: @Ify_Zhang, @holynski_, Alexei Efros, and @akanazawa! Code and model will be released soon!
2 replies · 1 retweet · 16 likes
RT @drjingjing2026: 1/3 Today, an anecdote shared by an invited speaker at #NeurIPS2024 left many Chinese scholars, myself included, feelin…
0 replies · 633 retweets · 0 likes
RT @zhengqi_li: Introducing MegaSaM! 🎥 Accurate, fast, & robust structure + camera estimation from casual monocular videos of dynamic scen…
0 replies · 90 retweets · 0 likes
RT @justkerrding: Robot See, Robot Do allows you to teach a robot articulated manipulation with just your hands and a phone! RSRD imitates…
0 replies · 34 retweets · 0 likes
@davheld Thanks for your question! SpatialTracker does not disentangle scene motion from camera motion, whereas in this work we do. This work may give higher-quality long 3D tracks due to per-video optimization, but it is much slower than SpatialTracker (feed-forward).
0 replies · 0 retweets · 0 likes
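A minimal sketch (not part of the thread) of what disentangling scene motion from camera motion means in practice, assuming hypothetical per-frame camera-to-world poses `Rs`, `ts` and a point's 3D track in camera coordinates `tracks_cam`: transforming each tracked point into world coordinates factors out the camera's contribution, so any remaining world-space displacement is scene motion. A feed-forward tracker that skips this decomposition reports the mixed camera-frame motion instead.

```python
import numpy as np

def camera_to_world(p_cam, R, t):
    """Map a 3D point from camera coordinates to world coordinates
    using a camera-to-world rotation R (3x3) and translation t (3,)."""
    return R @ p_cam + t

# Hypothetical inputs: a pixel's 3D track in camera coordinates and the
# camera-to-world pose of each frame (here, a camera translating in x).
num_frames = 10
tracks_cam = np.random.rand(num_frames, 3)          # (F, 3) track, camera frame
Rs = np.stack([np.eye(3)] * num_frames)             # (F, 3, 3) rotations
ts = np.linspace([0, 0, 0], [1, 0, 0], num_frames)  # (F, 3) translations

# A camera-frame track conflates camera and scene motion. Expressing each
# point in world coordinates removes the camera motion, so what is left
# over frame to frame is the motion of the scene point itself.
tracks_world = np.stack(
    [camera_to_world(p, R, t) for p, R, t in zip(tracks_cam, Rs, ts)]
)
scene_motion = np.diff(tracks_world, axis=0)        # (F-1, 3) world-space deltas
```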
RT @YuxiXiaohenry: #CVPR If you are interested in how to extract the 3D trajectories from monocular video, do not miss our poster tomorrow:…
0 replies · 21 retweets · 0 likes
RT @_akhaliq: SpatialTracker: Tracking Any 2D Pixels in 3D Space. Recovering dense and long-range pixel motion in videos is a challenging p…
0 replies · 132 retweets · 0 likes
RT @holynski_: .@QianqianWang5's 🎉Best Student Paper🎉 is being presented at #ICCV2023 tomorrow (Friday)! ▶️"Tracking Everything Everywher…
0 replies · 20 retweets · 0 likes
RT @_akhaliq: Doppelgangers: Learning to Disambiguate Images of Similar Structures. paper page: We consider the vi…
0 replies · 34 retweets · 0 likes
RT @zhengqi_li: Check out our CVPR 2023 Award Candidate paper, DynIBaR! DynIBaR takes monocular videos of dynamic s…
0 replies · 93 retweets · 0 likes
RT @_akhaliq: InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images. abs: pro…
0 replies · 68 retweets · 0 likes
RT @JiamingSuen: Glad to share our work “Neural 3D Reconstruction in the Wild” in SIGGRAPH 2022! We show that with a clever sampling strate…
0 replies · 55 retweets · 0 likes
RT @jon_barron: Training NeRFs per-scene is so 2020. Inspired by image based rendering, IBRNet does amortized inference for view synthesis…
0 replies · 81 retweets · 0 likes