Angjoo Kanazawa

@akanazawa

Followers: 16K · Following: 4K · Statuses: 804

Assist. Professor at @Berkeley_EECS, @berkeley_ai. KAIR, @nerfstudioteam, advising @WonderDynamics and @LumaLabsAI. she/her.

Berkeley, CA
Joined June 2011
@akanazawa
Angjoo Kanazawa
12 days
Very excited about this work!!! The ability to continuously update your representation opens up many possibilities. It's simple yet versatile for all kinds of 3D tasks. A lot of results on our project website.
@QianqianWang5
Qianqian Wang
12 days
Introducing CUT3R! An online 3D reasoning framework for many 3D tasks directly from just RGB. For static or dynamic scenes. Video or image collections, all in one!
5
28
231
@akanazawa
Angjoo Kanazawa
12 days
Qianqian and Yifei are on the market for faculty and PhD positions, respectively! They are amazing!!!!!!!
0
0
7
@akanazawa
Angjoo Kanazawa
23 days
Big congrats to the @LumaLabsAI team!!!
@LumaLabsAI
Luma AI
24 days
Introducing Ray2, a new frontier in video generative models. Scaled to 10x compute, #Ray2 creates realistic videos with natural and coherent motion, unlocking new freedoms of creative expression and visual storytelling. Available now. Learn more 
1
4
53
@akanazawa
Angjoo Kanazawa
2 months
Rebecca is applying to MS programs now. If you see her in your applications, she is great!!!
0
0
6
@akanazawa
Angjoo Kanazawa
2 months
I'm not at NeurIPS, but @davidrmcall and @holynski_ are just about to start their poster session on our paper, which presents a unified framework for explaining SDS and its variants.
@holynski_
Aleksander Holynski
2 months
East Exhibit Hall A-C Poster 2505!!!
0
5
35
@akanazawa
Angjoo Kanazawa
2 months
RT @ethanjohnweber: We ran DUSt3R on our cartoon reconstruction setting and found that it struggles (even with ground truth correspondences…
0
10
0
@akanazawa
Angjoo Kanazawa
2 months
RT @zhengqi_li: Introducing MegaSaM! 🎥 Accurate, fast, & robust structure + camera estimation from casual monocular videos of dynamic scen…
0
90
0
@akanazawa
Angjoo Kanazawa
3 months
Hi! If you found the gsplat library useful, we wrote a whitepaper with benchmarking, conventions, derivations, and new features (great effort led by @_maturk & @ruilong_li 🙌).
2
29
281
@akanazawa
Angjoo Kanazawa
3 months
Super cool project led by @gengshanY that shows how we can learn behavior models of agents from many videos by reconstructing them in a consistent 4D world. TL;DR: we learn a simulator of his cat🐈!!
@gengshanY
Gengshan Yang
4 months
Sharing my recent project, agent-to-sim: From monocular videos taken over a long time horizon (e.g., 1 month), we learn an interactive behavior model of an agent (e.g., a 🐱) grounded in 3D.
1
10
116
@akanazawa
Angjoo Kanazawa
4 months
EgoAllo maps from ego (self)😎 to allo (world)🗺️. For example, HaMeR-predicted local hands are now placed in the world, so you can retarget human hands to robot hands.
0
0
12
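A toy sketch of the ego→allo idea (an illustration only, not EgoAllo's actual pipeline): a hand pose predicted in the egocentric camera frame is composed with the camera's pose in the world to obtain an allocentric, world-frame hand pose. All values below are made up.

```python
# Toy ego->allo illustration: compose a hand pose expressed in the egocentric
# camera frame with the camera's pose in the world frame. This is a generic
# SE(3) composition, not EgoAllo's actual model.
import numpy as np

def se3(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation R and 3-vector translation t into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Camera (ego) pose in the world, e.g. from head tracking (made-up values).
T_world_camera = se3(np.eye(3), np.array([1.0, 0.0, 1.5]))

# Hand pose predicted in the camera frame, e.g. by a hand regressor like HaMeR.
T_camera_hand = se3(np.eye(3), np.array([0.0, -0.2, 0.4]))

# Ego -> allo: the hand's pose in world coordinates.
T_world_hand = T_world_camera @ T_camera_hand
print(T_world_hand[:3, 3])  # hand position in the world frame
```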
@akanazawa
Angjoo Kanazawa
4 months
Teach robots how to manipulate an object via 3D/4D reconstruction🧑🏻‍🏫! Will be presented at CoRL 2024 as an oral presentation.
@justkerrding
Justin Kerr
4 months
Robot See, Robot Do allows you to teach a robot articulated manipulation with just your hands and a phone! RSRD imitates from 1) an object scan and 2) a human demonstration video, reconstructing 3D motion to plan a robot trajectory. #CoRL2024 (Oral)
1
19
187
@akanazawa
Angjoo Kanazawa
4 months
Hellooo! On my way to Milan, my first conference after baby 😁 So excited to be back at #ECCV2024 and talk about my lab's work! You can find the talk details here:
9
3
152
@akanazawa
Angjoo Kanazawa
4 months
RT @ruilong_li: 🌟gsplat==1.4.0🌟 is now released! - Supports 2DGS! (Kudos to @WeijiaZeng1) - Supports Fish-eye cam…
0
25
0
@akanazawa
Angjoo Kanazawa
5 months
RT @vonekels: To what extent does social interaction affect behavior in couples swing dancing? Our work looks at how we can predict a danc…
0
25
0
@akanazawa
Angjoo Kanazawa
5 months
4D from a single video is still a very challenging problem, but we're taking steps forward with Shape of Motion: we recover a global 4D representation that captures how things move across space and time, disentangled from the camera motion.
@QianqianWang5
Qianqian Wang
7 months
We present Shape of Motion, a system for 4D reconstruction from a casual video. It jointly reconstructs temporally persistent geometry and tracks long-range 3D motion. For more details, check our webpage and code
0
4
103
@akanazawa
Angjoo Kanazawa
6 months
RT @ruilong_li: The 3DGS-MCMC densification strategy is now officially supported in 🌟gsplat🌟 with a one-line code change!
0
16
0
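A minimal sketch of what that one-line swap might look like, assuming gsplat's strategy interface where densification is encapsulated in an interchangeable strategy object; the constructor argument name is an assumption, so check the gsplat docs for the exact API.

```python
# Hedged sketch of the "one-line code change": swap the default 3DGS adaptive
# density control for the 3DGS-MCMC relocation strategy, assuming gsplat's
# strategy interface. Argument names may differ; see the gsplat docs.
from gsplat.strategy import DefaultStrategy, MCMCStrategy

# strategy = DefaultStrategy()               # original split/clone densification
strategy = MCMCStrategy(cap_max=1_000_000)   # MCMC densification (cap_max assumed)
```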
@akanazawa
Angjoo Kanazawa
6 months
More updates to gsplat! So cool to see the latest methods getting merged into it so quickly.
@ruilong_li
Ruilong Li
6 months
🌟gsplat🌟 now supports multi-GPU distributed training, which nearly linearly reduces training time and memory footprint. Now Gaussian Splatting is ready for city-scale reconstruction! Kudos to this amazing paper:
0
2
31
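A rough sketch of how the multi-GPU path might be invoked, assuming gsplat's rasterization() entry point accepts a distributed flag (the flag name is an assumption based on this release note); each rank holds its own shard of the Gaussians, and the placeholder tensors below only make the snippet self-contained.

```python
# Hedged sketch: render this rank's shard of Gaussians with gsplat, enabling
# the multi-GPU path when a torch.distributed process group is initialized.
# The `distributed` flag name is assumed from the release note; tensor shapes
# are placeholders, not a full trainer.
import torch
import torch.distributed as dist
from gsplat import rasterization

device = "cuda"
N = 10_000  # Gaussians held by *this* rank (each rank keeps its own shard)
means = torch.randn(N, 3, device=device)
quats = torch.randn(N, 4, device=device)       # normalized internally
scales = torch.rand(N, 3, device=device) * 0.02
opacities = torch.rand(N, device=device)
colors = torch.rand(N, 3, device=device)

viewmats = torch.eye(4, device=device)[None]   # [1, 4, 4] world-to-camera
Ks = torch.tensor([[[500.0, 0.0, 320.0],
                    [0.0, 500.0, 240.0],
                    [0.0, 0.0, 1.0]]], device=device)

renders, alphas, meta = rasterization(
    means, quats, scales, opacities, colors,
    viewmats, Ks, width=640, height=480,
    distributed=dist.is_initialized(),  # True under torchrun with a process group
)
```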
@akanazawa
Angjoo Kanazawa
7 months
A simple but practical approach for Gaussians in the wild. It's very cool when you can fly into a photo interactively! Congrats to @CongrongX, an undergrad who's been making many awesome updates to @nerfstudioteam! He's applying to grad school this fall! Highly recommend 🙂
@CongrongX
Congrong Xu
7 months
Excited to share my first paper with @justkerrding and @akanazawa: Splatfacto in the Wild! Our method trains on a single RTX 2080 Ti and achieves real-time rendering! project page: paper:
0
2
63