Xihui Liu Profile
Xihui Liu

@XihuiLiu

Followers
469
Following
151
Media
2
Statuses
13

Assistant Professor @ HKU. Previous Postdoc @ UC Berkeley and PhD @ CUHK MMLab

Joined August 2015
@XihuiLiu
Xihui Liu
1 month
Excited to share our work, ScanReason: Empowering 3D Visual Grounding with Reasoning Capabilities, accepted to ECCV 2024! Project page: arXiv: Code:
2
12
85
@XihuiLiu
Xihui Liu
1 month
Arrived at Vienna! I am at #ICML2024 this week. We will host the ICML workshop on Multimodal Foundation Models on Embodied Agents @ICML2024_MFMEAI on June 26. Looking forward to meeting old and new friends!
6
3
79
@XihuiLiu
Xihui Liu
3 months
Arrived in Seattle! I’m at #CVPR2024 from today to June 22. Looking forward to meeting old and new friends! Feel free to DM me if you want to chat about visual generative models or multimodal AI. I’m also open to collaborations with both academia and industry.
0
2
57
@XihuiLiu
Xihui Liu
3 months
Thanks @_akhaliq for sharing! Check out our work DiM: Diffusion Mamba for Efficient High-Resolution Image Synthesis. We have made code and models available at
@_akhaliq
AK
3 months
DiM Diffusion Mamba for Efficient High-Resolution Image Synthesis Diffusion models have achieved great success in image generation, with the backbone evolving from U-Net to Vision Transformers. However, the computational cost of Transformers is quadratic to the number of
2
30
170
0
6
26
@XihuiLiu
Xihui Liu
3 months
Thanks @_akhaliq for sharing! Check out our work 4Diffusion. Project page: Code:
@_akhaliq
AK
3 months
4Diffusion Multi-view Video Diffusion Model for 4D Generation Current 4D generation methods have achieved noteworthy efficacy with the aid of advanced diffusion generative models. However, these methods lack multi-view spatial-temporal modeling and encounter challenges in
2
35
167
0
2
8
@XihuiLiu
Xihui Liu
2 months
#CVPR2024 Yunhan will present this work this afternoon, 17:15-18:45 (poster #321). Feel free to drop by if you are interested!
@_akhaliq
AK
9 months
DreamComposer: Controllable 3D Object Generation via Multi-View Conditions paper page: Utilizing pre-trained 2D large-scale generative models, recent works are capable of generating high-quality novel views from a single in-the-wild image. However, due
0
17
85
0
0
7
@XihuiLiu
Xihui Liu
3 months
In this workshop, Meng Wei will introduce the Open-Vocabulary Part Segmentation Challenge (), and Yunhan Yang will present our recent work on open-vocabulary 3D part segmentation. Both are first-year Ph.D. students in my group. See you on June 18!
@AimeryKong
Shu Kong
3 months
Look forward to meeting folks at our 4th workshop of Visual Perception and Learning in the Open World (VPLOW) @ CVPR'24! 🍻 Location: Summit 328, Seattle Convention Center Time: 8:30am - 5:30pm Local Time (PDT), Tuesday, June 18, 2024
2
4
12
0
1
4
@XihuiLiu
Xihui Liu
1 month
Excited to share our new work T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation. Project page: arXiv: Code: Hugging Face Daily Papers:
1
0
2
@XihuiLiu
Xihui Liu
1 month
We propose a new task, 3D Reasoning Grounding, and a new benchmark, ScanReason, which provides over 10K question-answer-location pairs across five reasoning types that require the synergy of reasoning and grounding. We further design our approach, ReGround3D, for this task.
0
0
1