Congyue Deng

@CongyueD

Followers: 3K · Following: 1K · Media: 34 · Statuses: 266

CS PhD student @Stanford | Previous: math undergrad @Tsinghua_Uni | ❤️ 3D vision, geometry, and art

Joined February 2020
@CongyueD
Congyue Deng
5 months
Do MIT vision PhDs live in the office? 😂 @ishaanpreetam @TianweiY @ShivamDuggal4 @_atewari
Tweet media one
16
16
820
@CongyueD
Congyue Deng
7 months
Our paper "Zero-Shot Image Feature Consensus with Deep Functional Maps" is accepted at #ECCV2024! @eccvconf Want better image correspondences with noisy and inaccurate features? Let's go to the spectral space with Laplacian eigenfunctions! ArXiv:
Tweet media one
5
33
259
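For readers wondering what "going to the spectral space" might look like in practice, here is a rough NumPy/SciPy sketch that projects noisy per-pixel features onto the low-frequency Laplacian eigenfunctions of an image grid graph. The `grid_laplacian` helper, the feature/eigenfunction dimensions, and the simple smoothing step are illustrative assumptions; the paper's actual method builds functional maps between image pairs and is not reproduced here.

```python
# Hedged sketch: spectral projection of noisy features onto grid-graph
# Laplacian eigenfunctions. NOT the paper's pipeline; dimensions are made up.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def grid_laplacian(h, w):
    """4-neighbour graph Laplacian of an h x w pixel grid."""
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols = [], []
    for di, dj in [(0, 1), (1, 0)]:          # horizontal and vertical edges
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        rows += [a, b]                        # add both directions (symmetric)
        cols += [b, a]
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    W = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return (D - W).tocsc()

h, w, c, k = 32, 32, 64, 16                  # grid size, feature channels, #eigenfunctions
feats = np.random.randn(h * w, c)            # stand-in for noisy per-pixel deep features
L = grid_laplacian(h, w)
_, phi = eigsh(L, k=k, sigma=-1e-5, which="LM")  # k lowest-frequency eigenfunctions
coeffs = phi.T @ feats                       # (k, c) spectral coefficients of the features
smoothed = phi @ coeffs                      # features re-expressed in the smooth basis
print(smoothed.shape)                        # (1024, 64)
```

The point of the sketch is only that expressing features in a low-frequency eigenbasis acts as a geometry-aware denoiser, which is the intuition the tweet appeals to.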
@CongyueD
Congyue Deng
1 year
Wanna improve your NeRF’s view-dependent effects with just a few lines of code? Check out our work!
🔁 Swap integration & directional decoding
🟠 Extremely simple modification
🟡 A better numerical estimator
🟣 Interpretation as light fields
🔗
2
35
173
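A minimal PyTorch-style sketch of what "swapping integration and directional decoding" could mean in a toy radiance field: instead of decoding a view-dependent color at every sample and then integrating, the per-sample features are integrated along the ray first and the directional decoder is applied once per ray. The module names, sizes, and rendering details below are assumptions for illustration, not the released LiNeRF code.

```python
# Hedged sketch of the "swap" idea; a toy radiance field, not the paper's model.
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, feat_dim + 1))   # feature + density
        self.dir_mlp = nn.Sequential(nn.Linear(feat_dim + 3, 64), nn.ReLU(),
                                     nn.Linear(64, 3))                # RGB

    def render(self, pts, dirs, deltas, swap=True):
        """pts: (R, S, 3) ray samples, dirs: (R, 3) ray directions, deltas: (R, S) step sizes."""
        out = self.point_mlp(pts)                           # (R, S, feat+1)
        feat, sigma = out[..., :-1], torch.relu(out[..., -1])
        alpha = 1.0 - torch.exp(-sigma * deltas)            # per-sample opacity
        trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                         1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
        weights = alpha * trans                             # volume-rendering weights
        if swap:
            # integrate features first, then decode direction once per ray
            ray_feat = (weights.unsqueeze(-1) * feat).sum(dim=1)          # (R, feat)
            rgb = torch.sigmoid(self.dir_mlp(torch.cat([ray_feat, dirs], dim=-1)))
        else:
            # vanilla order: decode per-sample colors, then integrate
            d = dirs.unsqueeze(1).expand(-1, pts.shape[1], -1)
            cols = torch.sigmoid(self.dir_mlp(torch.cat([feat, d], dim=-1)))
            rgb = (weights.unsqueeze(-1) * cols).sum(dim=1)
        return rgb

# usage sketch:
# rgb = TinyRadianceField().render(torch.randn(8, 16, 3), torch.randn(8, 3), torch.rand(8, 16))
```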
@CongyueD
Congyue Deng
2 years
From articulated objects to multi-object scenes, how do we define inter-part equivariance? Check out our latest work Banana🍌! Joint work with @Jiahui77036479, @willbokuishen, @KostasPenn, @GuibasLeonidas. Paper: Website:
Tweet media one
1
25
123
@CongyueD
Congyue Deng
1 year
Our work SparseDFF is accepted at #ICLR2024!
🫳 Dexterous manipulation with a neural feature field. "That small discrepancy in multiview features really matters"
📄Paper: 🐙Code:
Tweet media one
3
16
122
@CongyueD
Congyue Deng
1 year
Will be presenting our spotlight paper Banana🍌 at #NeurIPS2023, welcome to drop by our poster!
Time: Dec 12 (Tue) afternoon
Loc: Poster #2021
Web:
Presenting one of my favorite works on my birthday 🎂 sounds like a special experience
Tweet media one
3
13
112
@CongyueD
Congyue Deng
2 years
My first computer graphics research project in undergrad was to build an interactive system for artists to create complex shapes from a single image. Now, this whole process is automated with diffusion models! Check out our NeRDi at #CVPR2023! Poster:
Tweet media one
Tweet media two
0
12
108
@CongyueD
Congyue Deng
1 year
Our work got accepted to NeurIPS 2023 as a spotlight — with all reviews positive pre-rebuttal! See you in New Orleans this winter 🦞
@CongyueD
Congyue Deng
2 years
From articulated objects to multi-object scenes, how do we define inter-part equivariance? Check out our latest work Banana🍌! Joint work with @Jiahui77036479, @willbokuishen, @KostasPenn, @GuibasLeonidas. Paper: Website:
Tweet media one
3
9
105
@CongyueD
Congyue Deng
4 months
What roles can "Geometry" play nowadays in this large model era? What are the current difficulties and future opportunities?
Check out the ECCV 2024 workshop "Geometry in Large Model Era"!
📍 Sep. 30 PM, Brown 3
🔗
Tweet media one
3
20
104
@CongyueD
Congyue Deng
2 years
Introducing NeRF and generative models to the materials science department. Really enjoyed these cross-disciplinary discussions, with thoughts and opinions from very different perspectives @StanfordEng @StanfordAILab
Tweet media one
Tweet media two
4
4
91
@CongyueD
Congyue Deng
2 years
How does the language embedding space balance semantics and appearance for 3D synthesis?
📣 Happy to introduce our #CVPR paper NeRDi!
Joint work with @maxjiang93, Charles R. Qi, @skywalkeryxc, Yin Zhou, @GuibasLeonidas, and Dragomir Anguelov.
@_akhaliq
AK
2 years
NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors. abs:
Tweet media one
0
12
84
@CongyueD
Congyue Deng
10 months
Gave a special presentation at our group meeting titled "Introduction to Drawing: Human as an Image Generator" -- with my artwork from the drawing class! 🎨
It's fun to think about the differences between how humans and networks do these tasks
Tweet media one
Tweet media two
Tweet media three
Tweet media four
2
2
62
@CongyueD
Congyue Deng
1 year
Excited to share that our EquivAct is accepted at ICRA @ieee_ras_icra! 🤖
@yjy0625
Jingyun Yang
1 year
From a few examples of solving a task, humans can:
🚀 easily generalize to unseen appearance, scale, pose
🎈 handle rigid, articulated, soft objects
0️⃣ all that with zero-shot transfer
Introducing EquivAct to help robots gain these capabilities. 🔗 🧵↓
0
5
58
@CongyueD
Congyue Deng
1 year
How can a robot learn from a few human demonstrations and generalize to variations of the task? Equivariance is all you need!
⬇️ Check out our EquivAct
😺 Cute cat video included.
@yjy0625
Jingyun Yang
1 year
From a few examples of solving a task, humans can:
🚀 easily generalize to unseen appearance, scale, pose
🎈 handle rigid, articulated, soft objects
0️⃣ all that with zero-shot transfer
Introducing EquivAct to help robots gain these capabilities. 🔗 🧵↓
0
5
56
@CongyueD
Congyue Deng
2 years
Had a wonderful time at @GRASPlab 🤓 Thanks @KostasPenn for hosting my visit and @Jiahui77036479 for helping me with everything at Penn 🤗
Tweet media one
0
1
49
@CongyueD
Congyue Deng
4 months
Happening now! @qixing_huang talking about “Geometry of 3D Deformable Shape Generators”
Tweet media one
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
1
2
44
@CongyueD
Congyue Deng
7 months
5 years ago, I first learned about spectral geometry in @JustinMSolomon's fabulous Shape Analysis course, which eventually led me into the field of 3D vision. Still remember this figure of hitting the Stanford bunny with a hammer 🔨
Tweet media one
0
2
45
@CongyueD
Congyue Deng
1 year
Thank you, Kostas! 😆
@KostasPenn
Kostas Daniilidis
1 year
Happy Birthday Vector Neuron Lady!!!
Tweet media one
3
1
41
@CongyueD
Congyue Deng
4 years
First research project in my PhD @StanfordAILab #guibaslab with @orlitany, Yueqi Duan, Adrien Poulenard, @taiyasaki, and @GuibasLeonidas 🥳
@taiyasaki
Andrea Tagliasacchi 🇨🇦
4 years
📢📢📢 Introducing "Vector Neurons". Want a network (and latent space) that acts by construction in an equivariant way w.r.t. SO(3) transformations? All you need to do is generalize the scalar non-linearity to a vector one (e.g. Vector ReLU).
Tweet media one
3
3
39
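As a rough illustration of the "vector non-linearity" mentioned in the quoted tweet, here is a minimal PyTorch sketch of a VN-style linear + ReLU layer written from my reading of the Vector Neurons idea; the layer sizes, helper names, and the sanity check are assumptions, not the official implementation.

```python
# Hedged sketch of a Vector-Neuron-style linear + ReLU layer; not the official code.
import torch
import torch.nn as nn

class VNLinearReLU(nn.Module):
    """Maps (B, C_in, 3) vector features to (B, C_out, 3), equivariantly."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.W = nn.Linear(c_in, c_out, bias=False)  # mixes channels, never the xyz axis
        self.U = nn.Linear(c_in, c_out, bias=False)  # predicts a direction per channel

    def forward(self, v):                                # v: (B, C_in, 3)
        q = self.W(v.transpose(1, 2)).transpose(1, 2)    # (B, C_out, 3)
        k = self.U(v.transpose(1, 2)).transpose(1, 2)    # learned half-space normal
        k_hat = k / (k.norm(dim=-1, keepdim=True) + 1e-8)
        dot = (q * k_hat).sum(dim=-1, keepdim=True)      # signed distance to the half-space
        # keep q where it lies inside the half-space, otherwise project onto its boundary
        return torch.where(dot >= 0, q, q - dot * k_hat)

# sanity check: an orthogonal transform of the input transforms the output the same way
layer = VNLinearReLU(8, 16)
v = torch.randn(2, 8, 3)
R = torch.linalg.qr(torch.randn(3, 3)).Q                 # random orthogonal matrix
print(torch.allclose(layer(v @ R.T), layer(v) @ R.T, atol=1e-5))
```

The design point is that channel mixing and inner products between per-channel vectors commute with rotations, so equivariance holds by construction rather than by data augmentation.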
@CongyueD
Congyue Deng
9 months
You can turn on a blender by pressing its container. The object part is a “container” by its semantics, but a “button” by its action. Check out our latest work SAGE🌿 on bridging these two part interpretations for generalizable robot manipulation! #RSS2024
Tweet media one
@HaoranGeng2
Haoran Geng
9 months
Glad to share that SAGE has been accepted by #RSS2024! We built a general framework for language-guided articulated object manipulation with the help of VLMs and 3D models! Want a robot to help control your furniture and appliances? Check out our website:
0
5
38
@CongyueD
Congyue Deng
1 year
We’ll be hosting the EquiVision workshop at #CVPR2024 @CVPR in Seattle this summer!
@EquiVisionW
EquiVision Workshop
1 year
Our workshop "Equivariant Vision: From Theory to Practice" will be hosted at #CVPR2024 in Seattle this summer! @CVPR Both original and published works are welcome to submit to our workshop!
🔗 ⏰ Deadline: Mar 22, 2024
Tweet media one
0
5
36
@CongyueD
Congyue Deng
2 years
Super exciting work!
✅ Unsupervised object-centric scene understanding
✅ Equivariant shape priors
✅ A new 3D dataset 🪑& 🍺
@JiahuiLei1998
Jiahui
2 years
Happy to introduce our new #CVPR2023 paper EFEM with @CongyueD, Karl Schmeckpeper, @GuibasLeonidas, @KostasPenn:
- Learn an equivariant object shape prior on ShapeNet
- Direct inference for scene object segmentation!
- A new dataset "Chairs and Mugs"!
0
2
36
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
0
2
32
@CongyueD
Congyue Deng
8 months
Come stop by our EquiVision workshop tomorrow at Summit 321 for the exciting talks!
@KostasPenn
Kostas Daniilidis
8 months
Our Equivariant Vision workshop features five great speakers @erikjbekkers @HaggaiMaron @ninamiolane @_machc, and Leo Guibas, spotlight talks, posters, and a tutorial prepared for the vision audience. Come tomorrow, Tuesday, at 8:30am in Summit 321! Thank you @CongyueD for
Tweet media one
0
8
29
@CongyueD
Congyue Deng
2 years
This is so absurd — my Canadian visa for #CVPR2023 got rejected today for the reason of "not submitting my passport"! But the file submission portal they sent me before was not functioning, and it told me to try elsewhere, and I almost tried everywhere…
2
1
29
@CongyueD
Congyue Deng
3 years
really enjoyed my first in-person conference in New Orleans! so many cool things!
Tweet media one
0
1
26
@CongyueD
Congyue Deng
5 months
Tweet media one
1
1
25
@CongyueD
Congyue Deng
4 months
@angelaqdai now talking about “Generating 3D Geometry with Limited Data”
Tweet media one
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
0
1
19
@CongyueD
Congyue Deng
3 years
📽️ 5-min video introducing Vector Neurons. Have fun watching!
1
3
18
@CongyueD
Congyue Deng
1 year
Really exciting talk!
@neur_reps
Symmetry and Geometry in Neural Representations
1 year
Exceptional talk by an exceptional speaker starting now in Ballroom A/B! We are honored to welcome Max Welling to talk about Traveling NeurReps in Brains and Machines! 🧠🤖
Tweet media one
0
0
17
@CongyueD
Congyue Deng
1 year
Very exciting work!
@_akhaliq
AK
1 year
Denoising Vision Transformers. paper page: identify crucial artifacts in ViTs caused by positional embeddings and propose a two-stage approach to remove these artifacts, which significantly improves the feature quality of different pre-trained ViTs
Tweet media one
0
1
9
@CongyueD
Congyue Deng
4 years
@ericyi0124 congrats Prof. Yi! Don't forget your old friends when you've made it big 🤣🤣🤣
0
0
8
@CongyueD
Congyue Deng
8 months
Poster session happening now!
Tweet media one
Tweet media two
Tweet media three
@KostasPenn
Kostas Daniilidis
8 months
Our Equivariant Vision workshop features five great speakers @erikjbekkers @HaggaiMaron @ninamiolane @_machc, and Leo Guibas, spotlight talks, posters, and a tutorial prepared for the vision audience. Come tomorrow, Tuesday, at 8:30am in Summit 321! Thank you @CongyueD for
Tweet media one
0
0
9
@CongyueD
Congyue Deng
4 months
And finally, a remote talk by Tom Funkhouser. Many thanks to Tom for delivering this nice talk despite his personal inconvenience ❤️
Tweet media one
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
0
0
9
@CongyueD
Congyue Deng
2 years
@CVPR Thanks so much for your effort! I've been waiting for almost two months and still haven't got my visa 😢 But really wanna thank you for continuously keeping in touch with us all and offering so much help! 🧡
2
0
8
@CongyueD
Congyue Deng
1 year
@_akhaliq Check out our LiNeRF: improving NeRF view-dependent effects by just swapping two operators.
@CongyueD
Congyue Deng
1 year
Wanna improve your NeRF’s view-dependent effects with just a few lines of code? Check out our work!
🔁 Swap integration & directional decoding
🟠 Extremely simple modification
🟡 A better numerical estimator
🟣 Interpretation as light fields
🔗
0
0
8
@CongyueD
Congyue Deng
2 years
Really solid work! -- first time seeing someone code so much that he even hurt his wrist before a deadline 😲
@shenbokui
William Shen
2 years
@CVPR GINA-3D: Learning to Generate Implicit Neural Assets in the Wild with awesome folks @Waymo, @Google, @StanfordAILab: @skywalkeryxc, @charles_rqi, @MahyarNajibi, @boyang_deng, @GuibasLeonidas #guibas_lab, Yin Zhou, Dragomir Anguelov. More updates to come.
0
0
8
@CongyueD
Congyue Deng
2 years
Very exciting work!
@HaoranGeng2
Haoran Geng
2 years
Our GAPartNet has received a Highlight with all top scores in #CVPR2023. Join us in the afternoon for a discussion! We're looking forward to exchanging ideas with you! Homepage: Code & Dataset: #CVPR2023 #AIResearch #Robotics #CV
0
0
7
@CongyueD
Congyue Deng
4 months
Andrea Vedaldi talking about “Toward 3D Foundation with the Help of Generative AI”
Tweet media one
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
0
0
7
@CongyueD
Congyue Deng
7 months
Project led by a smart PKU undergrad, Xinle Cheng, and joint work with @AdamWHarley, Yixin Zhu, and @GuibasLeonidas @StanfordAILab
0
0
6
@CongyueD
Congyue Deng
2 years
@yuewang314 Congrats, Yue!! Take me with you, big shot lol
0
0
4
@CongyueD
Congyue Deng
4 years
@taiyasaki and you forgot to mention it's an oral 😆
1
0
5
@CongyueD
Congyue Deng
2 years
And the sushi 🍣 tastes so good 😋
1
0
4
@CongyueD
Congyue Deng
5 years
Quite curious about how a PhD should balance work and life. Sometimes I feel I've wasted so much time indulging in iPad games and social media, but at other times I still feel stressed and get headaches about research (literally).
@PHDcomics
PHD Comics
5 years
Summer ☀️
Tweet media one
0
0
4
@CongyueD
Congyue Deng
8 months
@GhaffariMaani @KostasPenn @ninamiolane Met your students here! Greetings from Seattle :D In fact, we were thinking about inviting you for a talk when initiating this workshop, but didn't know whether you'd be interested in coming to vision conferences.
1
0
4
@CongyueD
Congyue Deng
1 year
This work is led by Qianxu Wang (, a very talented undergrad in 3D vision and robotics from Peking University. Qianxu will be interning at Stanford with @leto__jean at @StanfordIPRL this upcoming summer. And he will be applying for PhD programs next fall!
1
1
4
@CongyueD
Congyue Deng
5 years
Got the offer from my dream school. But started having all kinds of concerns about the upcoming PhD life ever since I accepted it 😢
1
0
3
@CongyueD
Congyue Deng
1 year
@charles_rqi She’s sooooo cute!
0
0
3
@CongyueD
Congyue Deng
8 months
@wendlerch @KostasPenn @ninamiolane Due to CVPR policies, we can only make the recordings public after 3 months. We'll post them on our website then.
0
0
2
@CongyueD
Congyue Deng
3 years
@AutoVisionGroup @maxjiang93 really like this one!
0
0
3
@CongyueD
Congyue Deng
1 year
@jasondeanlee @yuxiangw_cs @nanjiang_cs Intel llama! It’s sooo cute!
1
0
2
@CongyueD
Congyue Deng
1 year
@jbhuang0604 Among these I like the GAN -> diffusion one best, cuz GAN training is really painful 😂
0
0
3
@CongyueD
Congyue Deng
3 months
Really exciting work by @ShivamDuggal4!
@ShivamDuggal4
Shivam Duggal
3 months
Current vision systems use fixed-length representations for all images. In contrast, human intelligence or LLMs (e.g. OpenAI o1) adjust compute budgets based on the input. Since different images demand different processing & memory, how can we enable vision systems to be adaptive? 🧵
0
0
3
@CongyueD
Congyue Deng
8 months
@erikjbekkers giving his talk on Neural Ideograms and Geometry-Grounded Representation Learning at @EquiVisionW #CVPR2024
Tweet media one
0
0
3
@CongyueD
Congyue Deng
1 year
@YifanJiang17 This was also the first question we asked when coming up with it 😂
0
0
3
@CongyueD
Congyue Deng
1 year
“Rethinking Directional Integration in Neural Radiance Fields” 📄 (LiNeRF) Joint work with @JiaweiYang118 @GuibasLeonidas @yuewang314 @StanfordAILab @CSatUSC
0
0
3
@CongyueD
Congyue Deng
8 months
This looks amazing!
@apparatelabs
Apparate Labs
8 months
Introducing Proteus 0.1, REAL-TIME video generation that brings life to your AI. Proteus can laugh, rap, sing, blink, smile, talk, and more. From a single image! Come meet Proteus on Twitch in real-time. ↓ Sign up for API waitlist: 1/11
0
0
3
@CongyueD
Congyue Deng
7 months
@junyi42 Thank you, Junyi! We got the inspiration of using two features from your SD-DINO paper 😉
0
0
2
@CongyueD
Congyue Deng
2 years
@yuewang314 @USCViterbi impressive as always, big shot!
0
0
2
@CongyueD
Congyue Deng
1 year
@CongyueD
Congyue Deng
2 years
From articulated objects to multi-object scenes, how do we define inter-part equivariance? Check out our latest work Banana🍌! Joint work with @Jiahui77036479, @willbokuishen, @KostasPenn, @GuibasLeonidas. Paper: Website:
Tweet media one
0
0
2
@CongyueD
Congyue Deng
3 years
@SteveTod1998 I guess the positive rate 😂
1
0
1
@CongyueD
Congyue Deng
7 months
@vincesitzmann Thank you so much, Vincent!! Love your recent fmap work as well :)
0
0
2
@CongyueD
Congyue Deng
1 year
@yuewang314 @GuibasLeonidas @JiaweiYang118 Haha I checked my file records and realized that we first started discussing this in May 2021 (at a hotpot place lol?). What a long journey! Feel really fortunate to have a friend and collaborator like you!
1
0
2
@CongyueD
Congyue Deng
4 years
#新头像 (new profile picture)
Tweet media one
0
0
2
@CongyueD
Congyue Deng
1 year
@haosu_twitr Congrats!
1
0
2
@CongyueD
Congyue Deng
1 year
@KostasPenn @CSProfKGD Reminds me of the ICRA supplementary video, which has a max length but no frame-size limit — so we put 20 demos per slide 😂
1
0
2
@CongyueD
Congyue Deng
4 years
@Awfidius @taiyasaki @orlitany @HelgeRhodin (1/2) Thanks for the comments :) For the convolutions: sometimes you may want PointNet-like things to avoid the heavy memory consumption of storing all neighbourhood points, or you may want convolutions in feature space as in DGCNN (where the graph is not embedded in R^3).
0
0
2
@CongyueD
Congyue Deng
2 years
@geopavlakos @UTCompSci Congrats George!! (Wanna visit UT this Oct but it’s before you start…)
1
0
2
@CongyueD
Congyue Deng
4 months
@mapo1 talk happening now!
Tweet media one
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
0
0
2
@CongyueD
Congyue Deng
4 months
@richardzhangsfu giving his talk on “The Good & The Bad of Large Models for 3D”
Tweet media one
@CongyueD
Congyue Deng
4 months
Full crowd of people at our workshop #ECCV2024 @eccvconf.
Tweet media one
Tweet media two
2
0
2
@CongyueD
Congyue Deng
3 years
0
0
2
@CongyueD
Congyue Deng
5 years
Finally back! In my freshman year I was scraping data with a student club and registered a few hundred tokens, then got my account banned, so I had to sign up again with a different email 😆
Tweet media one
Tweet media two
0
0
1
@CongyueD
Congyue Deng
4 months
@qixing_huang @richardzhangsfu Lol we have the organizers' chat groups on FB and WeChat; the other two people there use Twitter less actively, so I'm the one posting.
0
0
2
@CongyueD
Congyue Deng
1 year
0
0
1
@CongyueD
Congyue Deng
1 year
@docmilanfar pain au chocolat ou chocolatine.
0
0
1
@CongyueD
Congyue Deng
1 year
@nikraghuraman Thank you!
0
0
1
@CongyueD
Congyue Deng
2 years
@skywalkeryxc lol budget saved for next free food event.
0
0
1
@CongyueD
Congyue Deng
1 year
Joint work with Qianxu Wang, Haotong Zhang, Yang You, Hao Dong, Yixin Zhu, and Leonidas Guibas.
0
0
1
@CongyueD
Congyue Deng
1 year
@docmilanfar quatre-vingt-dix 😨
nonante 😌
0
0
0
@CongyueD
Congyue Deng
3 years
@taiyasaki @SFU @SFU_CompSci Congrats, Andrea!!!
0
0
1
@CongyueD
Congyue Deng
1 year
@contactrika Congrats!!
0
0
1
@CongyueD
Congyue Deng
4 years
@Awfidius @taiyasaki @orlitany @HelgeRhodin (3/2) VN is lovely for its simplicity and resemblance to classical neural nets. Also, it's non-trivial to construct an invariant decoder for neural implicits with the features in SE(3)-transformers (or tensor field networks).
0
0
1
@CongyueD
Congyue Deng
9 months
0
0
1
@CongyueD
Congyue Deng
3 years
@JustinMSolomon @keenanisalive congrats, Justin!! 🎉
0
0
1
@CongyueD
Congyue Deng
8 months
@docmilanfar @KostasPenn @ninamiolane might be less relevant, but here's a paper relating human cognition with optical flows on object rotation:
0
0
1
@CongyueD
Congyue Deng
4 years
@Awfidius @taiyasaki @orlitany @HelgeRhodin (2/2) But anyway the good thing about VN is its versatility -- you may also have a VN-transformer if you like.
0
0
1
@CongyueD
Congyue Deng
4 years
Life has been much easier ever since I found out that Linux scp works for conda virtual environments 🉑
0
0
1
@CongyueD
Congyue Deng
1 year
@haosu_twitr Met several people from ur lab! Wanna meet with you as well haha.
2
0
1
@CongyueD
Congyue Deng
4 years
0
0
1
@CongyueD
Congyue Deng
8 months
@YunzhuLiYZ @ColumbiaCompSci Congrats!! 🎉
0
0
1
@CongyueD
Congyue Deng
1 year
0
0
1
@CongyueD
Congyue Deng
4 years
@qixing_huang congrats!
0
0
1
@CongyueD
Congyue Deng
2 years
@JustinMSolomon @yuewang314 @USCViterbi Leo's first great-grandstudent? 😂
1
0
1
@CongyueD
Congyue Deng
3 years
@maxjiang93 @taiyasaki @NeurIPSConf haha I'll see if I can come -- have a systems course final next week and it's killing me 😂
0
0
1
@CongyueD
Congyue Deng
8 months
@_machc now giving a talk on Geometric Deep Learning for Weather @EquiVisionW #CVPR2024 @CVPR
Tweet media one
0
0
1
@CongyueD
Congyue Deng
3 years
@KaichunMo Congrats!
0
0
1
@CongyueD
Congyue Deng
1 year
@yuewang314 @GuibasLeonidas @JiaweiYang118 Definitely more hotpot 😋🥘
0
0
1
@CongyueD
Congyue Deng
4 years
@drsrinathsridha not a common sense review paper but here's this one shared by Leo 😂
1
0
1
@CongyueD
Congyue Deng
2 years
@yuewang314 congrats!!!
0
0
1
@CongyueD
Congyue Deng
3 years
@tolga_birdal No idea what this means, but it sounds very impressive.
0
0
1