Qianqian Wang Profile
Qianqian Wang

@QianqianWang5

Followers: 2K · Following: 127 · Statuses: 20

Postdoc at UC Berkeley and Visiting Researcher at Google. Former Ph.D. student at Cornell Tech. https://t.co/LyIdb5HmM9

Joined December 2018
@QianqianWang5
Qianqian Wang
12 days
Thanks to the amazing team @Ify_Zhang  @holynski_, Alexei Efros, and @akanazawa! Code and model will be released soon!
Replies: 2 · Retweets: 1 · Likes: 16
@QianqianWang5
Qianqian Wang
2 months
RT @drjingjing2026: 1/3 Today, an anecdote shared by an invited speaker at #NeurIPS2024 left many Chinese scholars, myself included, feelin…
Replies: 0 · Retweets: 633 · Likes: 0
@QianqianWang5
Qianqian Wang
2 months
RT @zhengqi_li: Introducing MegaSaM! 🎥 Accurate, fast, & robust structure + camera estimation from casual monocular videos of dynamic scen…
Replies: 0 · Retweets: 90 · Likes: 0
@QianqianWang5
Qianqian Wang
4 months
RT @justkerrding: Robot See, Robot Do allows you to teach a robot articulated manipulation with just your hands and a phone! RSRD imitates…
Replies: 0 · Retweets: 34 · Likes: 0
@QianqianWang5
Qianqian Wang
6 months
@davheld Thanks for your question! SpatialTracker does not disentangle scene motion from camera motion, whereas this work does. This work may yield higher-quality long 3D tracks thanks to per-video optimization, but it is much slower than SpatialTracker (which is feed-forward).
Replies: 0 · Retweets: 0 · Likes: 0
@QianqianWang5
Qianqian Wang
7 months
Replies: 0 · Retweets: 1 · Likes: 3
@QianqianWang5
Qianqian Wang
8 months
RT @YuxiXiaohenry: #CVPR If you are interested in how to extract the 3D trajectories from monocular video, do not miss our poster Tomorrow:…
Replies: 0 · Retweets: 21 · Likes: 0
@QianqianWang5
Qianqian Wang
10 months
RT @_akhaliq: SpatialTracker Tracking Any 2D Pixels in 3D Space Recovering dense and long-range pixel motion in videos is a challenging p…
Replies: 0 · Retweets: 132 · Likes: 0
@QianqianWang5
Qianqian Wang
1 year
RT @holynski_: .@QianqianWang5's 🎉Best Student Paper🎉 is being presented at #ICCV2023 tomorrow (Friday)! ▶️"Tracking Everything Everywher…
Replies: 0 · Retweets: 20 · Likes: 0
@QianqianWang5
Qianqian Wang
1 year
RT @_akhaliq: Doppelgangers: Learning to Disambiguate Images of Similar Structures paper page: We consider the vi…
Replies: 0 · Retweets: 34 · Likes: 0
@QianqianWang5
Qianqian Wang
2 years
RT @zhengqi_li: Check out our CVPR 2023 Award Candidate paper, DynIBaR! DynIBaR takes monocular videos of dynamic s…
Replies: 0 · Retweets: 93 · Likes: 0
@QianqianWang5
Qianqian Wang
2 years
@Jimantha Thank you!
Replies: 1 · Retweets: 0 · Likes: 4
@QianqianWang5
Qianqian Wang
3 years
RT @_akhaliq: InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images abs: pro…
Replies: 0 · Retweets: 68 · Likes: 0
@QianqianWang5
Qianqian Wang
3 years
RT @JiamingSuen: Glad to share our work “Neural 3D Reconstruction in the Wild” in SIGGRAPH 2022! We show that with a clever sampling strate…
Replies: 0 · Retweets: 55 · Likes: 0
@QianqianWang5
Qianqian Wang
3 years
RT @ak92501: 3D Moments from Near-Duplicate Photos abs: project page:
Replies: 0 · Retweets: 31 · Likes: 0
@QianqianWang5
Qianqian Wang
4 years
RT @jon_barron: Training NeRFs per-scene is so 2020. Inspired by image based rendering, IBRNet does amortized inference for view synthesis…
Replies: 0 · Retweets: 81 · Likes: 0