Ziyang Chen Profile
Ziyang Chen

@CzyangChen

Followers
355
Following
707
Statuses
80

Ph.D. student at @UMich, advised by @andrewhowens. Multimodal learning, audio-visual learning. Previously research intern at @Adobe and @AIatMeta.

Joined June 2021
@CzyangChen
Ziyang Chen
2 months
🎥 Introducing MultiFoley, a video-aware audio generation method with multimodal controls! 🔊 We can ⌨️ Make a typewriter sound like a piano 🎹 🐱 Make a cat's meow sound like a lion's roar! 🦁 ⏱️ Perfectly time existing SFX 💥 to a video
11
43
213
@CzyangChen
Ziyang Chen
22 days
RT @SarahJabbour_: I’m on the PhD internship market for Spr/Summer 2025! I have experience in multimodal AI (EHR, X-ray, text), explainabil…
0
12
0
@CzyangChen
Ziyang Chen
2 months
RT @jin_linyi: Introducing 👀Stereo4D👀 A method for mining 4D from internet stereo videos. It enables large-scale, high-quality, dynamic, *…
0
89
0
@CzyangChen
Ziyang Chen
2 months
RT @dangengdg: I'll be presenting "Images that Sound" today at #NeurIPS2024! East Exhibit Hall A-C #2710. Come say hi to me and @andrewhowens
0
8
0
@CzyangChen
Ziyang Chen
2 months
RT @hugggof: new paper! 🗣️Sketch2Sound💥 Sketch2Sound can create sounds from sonic imitations (i.e., a vocal imitation or a reference sound…
0
21
0
@CzyangChen
Ziyang Chen
2 months
Check out the awesome work from @TianweiY!
@TianweiY
Tianwei Yin
2 months
Video diffusion models generate high-quality videos but are too slow for interactive applications. We @MIT_CSAIL @AdobeResearch introduce CausVid, a fast autoregressive video diffusion model that starts playing the moment you hit "Generate"! A thread 🧵
0
0
1
@CzyangChen
Ziyang Chen
2 months
@hanzhe_hu Awesome work. Congrats!
1
0
0
@CzyangChen
Ziyang Chen
2 months
@david_kup Unfortunately, we could not release the model publicly due to license issues. I believe it will be in an Adobe product in the future.
1
0
0
@CzyangChen
Ziyang Chen
2 months
RT @dangengdg: What happens when you train a video generation model to be conditioned on motion? Turns out you can perform "motion prompti…
0
148
0
@CzyangChen
Ziyang Chen
2 months
Check the links below for more info! arXiv: website: Work done during my internship @AdobeResearch. Big thanks to all my collaborators @pseetharaman, Bryan Russell, @urinieto, David Bourgin, @andrewhowens, and @justin_salamon!
1
1
14
@CzyangChen
Ziyang Chen
4 months
RT @ayshrv: We present Global Matching Random Walks, a simple self-supervised approach to the Tracking Any Point (TAP) problem, accepted to…
0
23
0
@CzyangChen
Ziyang Chen
7 months
RT @SarahJabbour_: 📢Presenting 𝐃𝐄𝐏𝐈𝐂𝐓: Diffusion-Enabled Permutation Importance for Image Classification Tasks #ECCV2024 We use permutatio…
0
12
0
@CzyangChen
Ziyang Chen
7 months
RT @CohereForAI: This Saturday, be sure to check out @CzyangChen with our Geo Regional Asia Group! Learn more:
0
1
0
@CzyangChen
Ziyang Chen
7 months
I totally agree with @marcomunita that several papers don't properly cite, mention, or compare against related work. I think a dedicated literature search is one of the most important parts of research.
0
0
4
@CzyangChen
Ziyang Chen
7 months
RT @CohereForAI: Mark your calendars! July 13th, join Ziyang Chen for a presentation on "Images that Sound: Composing Images and Sounds on a…
0
2
0
@CzyangChen
Ziyang Chen
8 months
@AlbyHojel Check out our “eyeful” tower and “earful” tower at
0
0
1