Yuming Jiang Profile
Yuming Jiang

@Jiang_Yuming

Followers: 583 · Following: 194 · Statuses: 36

Ph.D. Student @MMLabNTU

Singapore
Joined March 2019
@Jiang_Yuming
Yuming Jiang
3 days
RT @Gradio: VideoLLaMA3, latest MLLMs for image and video understanding. 🖐️ 7B models: DocVQA: 94.9, MathVision: 26.2, VideoMME: 66.2/70.…
Replies: 0 · Retweets: 60 · Likes: 0
@Jiang_Yuming
Yuming Jiang
16 days
RT @lixin4ever: 🚀🚀🚀Announcing VideoLLaMA3, our latest MLLMs for image and video understanding: - Highly capable 7B models: DocVQA: 94.9, M…
Replies: 0 · Retweets: 46 · Likes: 0
@Jiang_Yuming
Yuming Jiang
16 days
RT @AdinaYakup: VideoLLaMA 3🔥multimodal foundation models for Image and Video Understanding by DAMO Alibaba Model:
Replies: 0 · Retweets: 18 · Likes: 0
@Jiang_Yuming
Yuming Jiang
16 days
RT @_akhaliq: VideoLLaMA 3 Frontier Multimodal Foundation Models for Image and Video Understanding
Replies: 0 · Retweets: 50 · Likes: 0
@Jiang_Yuming
Yuming Jiang
16 days
RT @arankomatsuzaki: Alibaba presents: VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding Open-sources…
Replies: 0 · Retweets: 50 · Likes: 0
@Jiang_Yuming
Yuming Jiang
16 days
RT @aigclink: Alibaba DAMO Academy has released VideoLLaMA 3, a multimodal foundation model focused on image and video understanding: an intelligent video assistant that can understand video content, interpret images, and hold conversations. Built on the latest Qwen2.5 architecture, it supports multi-frame video understanding. #VideoLLaMA3 #VideoUnderstanding #LLM htt…
Replies: 0 · Retweets: 11 · Likes: 0
@Jiang_Yuming
Yuming Jiang
3 months
RT @ziqi_huang_: 📊 VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models 🎞️ 🏛️ What VBench++ Evaluates: •…
Replies: 0 · Retweets: 10 · Likes: 0
@Jiang_Yuming
Yuming Jiang
7 months
RT @_tianxing: FreeInit is accepted to #ECCV2024!🎉Big thanks to co-authors @scy994 @Jiang_Yuming @ziqi_huang_ @liuziwei7 for their efforts!…
Replies: 0 · Retweets: 9 · Likes: 0
@Jiang_Yuming
Yuming Jiang
8 months
🤩Excited to attend #CVPR2024 in Seattle! 🥳Feel free to stop by our poster session this afternoon! 📍#184, Arch 4A-E ⏰ 17:15-18:45, 19 June
@liuziwei7
Ziwei Liu
1 year
🔥🔥We propose #VideoBooth to enable **customized video generation** with image prompts, which provide more accurate and direct content control beyond the text prompts. - Project: - Code: - Video:
Replies: 0 · Retweets: 1 · Likes: 16
@Jiang_Yuming
Yuming Jiang
8 months
RT @ziqi_huang_: 𝗩𝗕𝗲𝗻𝗰𝗵 𝗟𝗲𝗮𝗱𝗲𝗿𝗯𝗼𝗮𝗿𝗱 has 14 Text-to-Video models, and 12 Image-to-Video models so far. Join our leaderboard! - Code: https:…
Replies: 0 · Retweets: 12 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @_tianxing: Thanks @_akhaliq for sharing! 🎉 We propose #FreeInit to bridge the training/inference gap of video diffusion models, improv…
Replies: 0 · Retweets: 18 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @liuziwei7: 🔥🔥We propose #VideoBooth to enable **customized video generation** with image prompts, which provide more accurate and direc…
Replies: 0 · Retweets: 38 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
Thanks AK for sharing!
@_akhaliq
AK
1 year
VideoBooth: Diffusion-based Video Generation with Image Prompts paper page: Text-driven video generation has witnessed rapid progress. However, text prompts alone cannot depict the desired subject appearance accurately enough to align with users' intents, especially for customized content creation. In this paper, we study the task of video generation with image prompts, which provide more accurate and direct content control than text prompts.

Specifically, we propose a feed-forward framework, VideoBooth, with two dedicated designs: 1) We embed image prompts in a coarse-to-fine manner. Coarse visual embeddings from the image encoder provide high-level encodings of the image prompts, while fine visual embeddings from the proposed attention injection module provide multi-scale, detailed encodings. These two complementary embeddings faithfully capture the desired appearance. 2) In the attention injection module, at the fine level, multi-scale image prompts are fed into different cross-frame attention layers as additional keys and values. This extra spatial information refines the details in the first frame, which are then propagated to the remaining frames, maintaining temporal consistency.

Extensive experiments demonstrate that VideoBooth achieves state-of-the-art performance in generating customized, high-quality videos with subjects specified in image prompts. Notably, VideoBooth is a generalizable framework: a single model handles a wide range of image prompts with a single feed-forward pass.
Replies: 0 · Retweets: 2 · Likes: 17
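The attention-injection idea described in the abstract above (image-prompt features appended as extra keys and values in a cross-frame attention layer) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation; all names, shapes, and values are hypothetical, and multi-scale injection across several layers is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_image_prompt(q, k, v, k_img, v_img):
    """Scaled dot-product attention where image-prompt keys/values are
    concatenated onto the frame keys/values, so every query token can
    also attend to the prompt features.
    q: (n_q, d); k, v: (n_k, d); k_img, v_img: (n_img, d)."""
    k_all = np.concatenate([k, k_img], axis=0)   # (n_k + n_img, d)
    v_all = np.concatenate([v, v_img], axis=0)
    scores = q @ k_all.T / np.sqrt(q.shape[-1])  # (n_q, n_k + n_img)
    return softmax(scores, axis=-1) @ v_all      # (n_q, d)

# Hypothetical toy shapes: 4 query tokens, 6 frame tokens, 2 prompt tokens.
rng = np.random.default_rng(0)
d = 8
q = rng.standard_normal((4, d))
k = rng.standard_normal((6, d))
v = rng.standard_normal((6, d))
k_img = rng.standard_normal((2, d))
v_img = rng.standard_normal((2, d))
out = attention_with_image_prompt(q, k, v, k_img, v_img)
print(out.shape)  # → (4, 8)
```

The output keeps the query shape; only the set of tokens being attended over grows, which is why the injection can be dropped into existing attention layers.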
@Jiang_Yuming
Yuming Jiang
1 year
RT @ziqi_huang_: 📊 VBench: Comprehensive Benchmark Suite for Video Generative Models 🎞️ 🏛️ Hierarchical and Disentangled Dimensions 👁️ Hum…
Replies: 0 · Retweets: 31 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @scy994: 🔥FreeU: Free Lunch in Diffusion U-Net🔥 👉method update: we proposed structure-based scaling to enhance the performance of FreeU.…
Replies: 0 · Retweets: 19 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @RisingSayak: Amidst @ICCVConference and other stuff, I forgot to announce the shipment of ✨FreeU✨ in 🧨 diffusers 🚀 Enable and disable…
Replies: 0 · Retweets: 11 · Likes: 0
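For context on the FreeU tweets above: FreeU is a training-free tweak that amplifies the diffusion U-Net's backbone features and damps the low-frequency band of its skip features. A rough numpy sketch of that idea, assuming hypothetical shapes, scale values, and a single resolution level (the real method applies per-stage factors such as b1/b2 and s1/s2):

```python
import numpy as np

def freeu_like_scaling(backbone, skip, b=1.2, s=0.9, thresh=1):
    """FreeU-style rescaling sketch. Amplifies half of the backbone
    channels by b and damps the low-frequency band of the skip
    features by s via a Fourier-domain mask. Shapes: (C, H, W)."""
    backbone = backbone.copy()
    c = backbone.shape[0] // 2
    backbone[:c] *= b  # amplify a subset of backbone channels

    # Fourier mask: frequencies near the center (low freq) scaled by s.
    f = np.fft.fftshift(np.fft.fft2(skip), axes=(-2, -1))
    _, H, W = skip.shape
    mask = np.ones((H, W))
    mask[H // 2 - thresh:H // 2 + thresh,
         W // 2 - thresh:W // 2 + thresh] = s
    f = f * mask
    skip_out = np.fft.ifft2(np.fft.ifftshift(f, axes=(-2, -1))).real
    return backbone, skip_out

# Hypothetical toy features.
rng = np.random.default_rng(1)
bb = rng.standard_normal((4, 8, 8))
sk = rng.standard_normal((4, 8, 8))
bb_out, sk_out = freeu_like_scaling(bb, sk)
print(bb_out.shape, sk_out.shape)  # → (4, 8, 8) (4, 8, 8)
```

In recent versions of 🧨 diffusers, this is exposed on pipelines as `pipe.enable_freeu(...)` / `pipe.disable_freeu()`, which is what the tweet refers to.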
@Jiang_Yuming
Yuming Jiang
1 year
RT @Yaohui29429309: T2V was the dream when I started my video generation PhD career in 2018 at Inria. I still remember our very first try w…
Replies: 0 · Retweets: 2 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @liuziwei7: We propose 🔥LaVie🔥, a high-quality text-to-video generation foundation model that operates on cascaded video latent diffusio…
Replies: 0 · Retweets: 14 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @liuziwei7: 🔥🔥 We propose #ReliTalk, a novel framework for relightable audio-driven talking portrait generation from a single monocular…
Replies: 0 · Retweets: 20 · Likes: 0
@Jiang_Yuming
Yuming Jiang
1 year
RT @_akhaliq: FreeU: Free Lunch in Diffusion U-Net paper page: we uncover the untapped potential of diffusion U-N…
Replies: 0 · Retweets: 157 · Likes: 0