Ph.D. in Graphics
@HKUniversity
. Visiting Ph.D.
@Penn
. Geometry & Physics-based Animation & AIGC. Graduating in 2025; open to postdoc and research scientist positions.
🔥Yes! You can achieve REAL-TIME text-to-motion generation using a simulated humanoid to perform various skills!
This feat is realized through the integration of PHC and EMDM.
💬This combination addresses two pivotal challenges in human motion synthesis: ensuring physical
You can now ask your simulated humanoid to perform actions in REAL-TIME 👇🏻
Powered by the amazing EMDM (
@frankzydou
,
@Alex_wangjingbo
, et al.) and PHC.
EMDM:
PHC:
Simulation: Isaac Gym
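How the pieces fit together, roughly: EMDM turns the text prompt into a kinematic motion clip, and PHC is a physics-based imitation policy that tracks that clip inside the simulator, which is what gives the result physical correctness. Below is a minimal sketch of that two-stage loop; all class and function names are hypothetical placeholders, not the released APIs.

```python
# Minimal sketch of the two-stage idea (hypothetical placeholder APIs, not the
# released EMDM/PHC code): a text-to-motion diffusion model produces a
# kinematic motion clip, and a physics-based imitation policy tracks that clip
# frame by frame inside a simulator such as Isaac Gym.
import numpy as np

class TextToMotionModel:  # stands in for EMDM (hypothetical interface)
    def generate(self, prompt: str, num_frames: int = 120) -> np.ndarray:
        # returns a (num_frames, num_joints, 3) kinematic pose sequence
        return np.zeros((num_frames, 24, 3))

class ImitationPolicy:  # stands in for PHC (hypothetical interface)
    def act(self, sim_state: np.ndarray, target_pose: np.ndarray) -> np.ndarray:
        # returns PD targets / torques that make the humanoid track target_pose
        return np.zeros(69)

def run_pipeline(prompt: str) -> None:
    motion = TextToMotionModel().generate(prompt)   # stage 1: text -> kinematic motion
    policy = ImitationPolicy()
    sim_state = np.zeros(200)                       # placeholder humanoid state
    for target_pose in motion:                      # stage 2: physics-based tracking
        action = policy.act(sim_state, target_pose)
        # sim_state = env.step(action)              # e.g., an Isaac Gym environment step

run_pipeline("a person walks forward and waves")
```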
Introducing🏃𝐄𝐌𝐃𝐌🏃(𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐭 𝐌𝐨𝐭𝐢𝐨𝐧 𝐃𝐢𝐟𝐟𝐮𝐬𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥)! 🔥Now, you can achieve efficient text-to-motion generation using a diffusion model within only ~𝟎.𝟎𝟓𝐬 for one sequence (MDM takes ~𝟏𝟎𝐬).
Our simple yet effective approach, EMDM, ensures
Got five papers accepted by
#ECCV2024
@eccvconf
! Huge thanks to all my collaborators! 😃 See you in Milan 🇮🇹
Summary of Selected Works
(I made a fast-forward for them 😄)
- [Shape Generation] Surf-D: Generating High-Quality Surfaces of Arbitrary Topologies Using Diffusion
🎉 🚀 𝐖𝐨𝐧𝐝𝐞𝐫𝟑𝐃 has been accepted to
#CVPR2024
!
Wonder3D is able to convert a single image into a high-fidelity 3D model, complete with textured meshes and color, in just 2 to 3 minutes.
Congratulations to Xiaoxiao (
@xxlong0
), Benny (
@superbennyguo
) and all
Wonder3D gets accepted to CVPR! Thanks for the help from all the coauthors. The full code, including training and data rendering code, is available in our GitHub repo.
(Too many 3D generative papers recently; I can't believe it has only been three months!)
(1/3) Introducing 🚀Surf-D🚀, a novel method for generating high-quality 3D shapes as Surfaces with arbitrary topologies using Diffusion models.
Surf-D demonstrates superior performance in 3D shape generation across various modalities (TBC)!
Good job, Zhengming
@Proof_Yu
!
(3/3) The clothes 🎽👖👗🩳 generated by 🚀Surf-D🚀 can be used for virtual try-ons with high quality and fidelity. You can just use sketches to generate whatever clothes you want, then put them on your own avatar and use captured/generated human motion to animate them :D
#SurfD
#AIGC
(2/3) 🚀Surf-D🚀 achieves superior performance in generating 3D shapes with various topologies while enabling different conditional generation tasks: unconditional generation, category conditional generation, 3D reconstruction from images, and text-to-shape tasks.
#SurfD
#AIGC
🔥 Check it out! 🚀 The code is now available!
Surf-D at
@eccvconf
#ECCV2024
: Our method achieves high-quality surface generation for detailed geometry and various topologies using a diffusion model. It sets the state of the art in various shape generation tasks, including
Our Surf-D is accepted at ECCV 2024
@eccvconf
. Code is released. Surf-D can generate shapes of arbitrary topology in high resolution using a UDF representation.
Project page:
Arxiv:
Codes:
#eccv
#SurfD
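A quick note on the UDF representation mentioned above: an unsigned distance field stores only the distance to the surface, with no inside/outside sign, which is why it can represent open surfaces and shapes of arbitrary topology. The toy example below (my own illustration, not Surf-D code) samples an analytic UDF on a grid and keeps the near-surface points.

```python
# Toy illustration of a UDF (unsigned distance field); not Surf-D code.
# The surface is the zero level set, so near-surface points have UDF ~ 0.
import numpy as np

def sphere_udf(points: np.ndarray, radius: float = 0.5) -> np.ndarray:
    # analytic UDF of a sphere centered at the origin
    return np.abs(np.linalg.norm(points, axis=-1) - radius)

axis = np.linspace(-1.0, 1.0, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, 3)
udf = sphere_udf(grid)
surface_samples = grid[udf < 0.02]   # crude point-based surface extraction
print(surface_samples.shape)
```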
💃𝐓𝐋𝐂𝐨𝐧𝐭𝐫𝐨𝐥💃 is a novel and powerful method for controllable human motion generation using both low-level Trajectories ✏️ and high-level Language semantics 💬 controls. It effectively supports multi-joint control :D
𝐓𝐋𝐂𝐨𝐧𝐭𝐫𝐨𝐥 significantly outperforms the SOTAs
Presenting our recent work, 𝐓𝐋𝐂𝐨𝐧𝐭𝐫𝐨𝐥, a new approach to synthesizing realistic human motion. 𝐓𝐋𝐂𝐨𝐧𝐭𝐫𝐨𝐥 effectively integrates low-level trajectories with high-level language semantics for controllable motion generation.
Thanks to
@frankzydou
and all collaborators!
Compact Representation Computation for Shapes—Summary of the Coverage Axis Series
The latest work in the Coverage Axis series🐱, Coverage Axis++, has recently been completed and accepted to SGP 2024. Here, I provide a summary of our efforts on this problem.
In the Coverage Axis
WE GOT THE BEST PAPER AWARD OF
#SIGGRAPH2023
!!!
It's a great honor to receive this award. Our paper focuses on normal consistency, and obtains globally consistent normal vectors by regularizing the Winding-Number field.
✨I'm also actively looking for a PhD position!✨
Super happy that our work, TORE: Token Reduction for Efficient Human Mesh Recovery with Transformer, has been accepted by
#ICCV2023
!
TORE maintains competitive (even higher) accuracy in Human Mesh Recovery while significantly reducing computational costs and improving throughput.
Aha! When I was attending
@SIGGRAPH
2024, I got an email saying that our co-first-authored journal paper was conditionally accepted to
@SIGGRAPHAsia
2024. Many thanks to our great collaborators!
An interesting way to tag them together.
See you in Tokyo!
#SIGGRAPH
#SIGGRAPH2024
(1/n) 🌟Introducing our recent work 🚀𝐒𝐎(𝐒𝐞𝐪𝐮𝐞𝐧𝐭𝐢𝐚𝐥𝐥𝐲 𝐎𝐟𝐟𝐬𝐞𝐭)-𝐒𝐌𝐏𝐋🚀.
Now, you can generate animation-ready 3D avatars💃🕺 with disentangled clothes🧥👗👖👔 👚from text descriptions💬.
See our animation result by animating the generated 3D avatars with
Want to generate animation-ready 3D avatars with disentangled clothes, from just text descriptions? Introducing our new work featuring a simple yet effective representation, SO(Sequentially Offset)-SMPL! (1/4)
arxiv:
project:
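A very rough, hypothetical reading of the name (not the paper's actual formulation): "sequentially offset" suggests clothing layers represented as vertex offsets stacked on top of the SMPL body, so the body and garments stay disentangled while sharing one template. A toy sketch of that layering idea:

```python
# Hypothetical toy sketch of a "sequential offset" layering idea; this is my
# paraphrase of the SO-SMPL name, NOT the paper's actual representation.
import numpy as np

def layered_meshes(body_verts: np.ndarray, layer_offsets: list) -> list:
    # body_verts: (V, 3) posed body vertices; each offset: (V, 3)
    layers = [body_verts]
    current = body_verts
    for offset in layer_offsets:        # apply offsets one layer at a time
        current = current + offset
        layers.append(current)
    return layers                       # [body, layer 1, layer 2, ...]

body = np.zeros((6890, 3))              # SMPL template has 6890 vertices
meshes = layered_meshes(body, [np.full((6890, 3), 0.01)])
print(len(meshes))
```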
The code of 𝗖𝗼𝘃𝗲𝗿𝗮𝗴𝗲 𝗔𝘅𝗶𝘀 is now publicly available.
Coverage Axis is a simple and robust method for shape skeletonization based on the Medial Axis Transform. It can handle both meshes and point clouds.
paper:
code:
One data point from my CVPR 2024 submissions: three papers received scores of [4,4,5], [4,4,2], and [4,4,2]. One paper was accepted by
#CVPR2024
. The outcomes for the [4,4,2] papers were mixed: not good, but not bad either.
We thank the reviewers and ACs for their efforts; your feedback is
So many interesting designs at the Fast Forward session
@SIGGRAPHAsia
😄
A constellation of human creativity shining brightly! ✨
And happy to share our work with the community!
🌟Excited to be at
#CVPR2024
! I'm currently on the job market and open to postdoc or research scientist positions.
🌟My research interests include Character Animation, Geometric Modeling and Processing, Simulation, Computer Graphics, and Human Behavior Analysis. Graduation in
✈️ I’ll be at CVPR 2024 next week—can’t wait to see you and chat about all the exciting ideas (generation and reconstruction for shape and motion, motion and shape analysis)
Come say hi!
#CVPR
#CVPR2024
🔍 Check out our latest research on 3D hand-face interactions! Please check the video🎥!
🔥 Introducing 🎲 ᗪIᑕE, the first end-to-end method that captures hand-face interactions and deformations from a single image.
🎯 Our method achieves state-of-the-art accuracy while
🔍 Check out our latest research on 3D hand-face interactions!
🔥 Introducing 🎲 DICE, the first end-to-end method that captures hand-face interactions and deformations from a single image. (1/n)
Wonder3D: Single Image to 3D using Cross-Domain Diffusion
paper page:
We introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown
✨ICLR 2024 Spotlight
Still hand-labeling phases or tuning GANs with your trajectories?
We propose FLD, a self-supervised representation method that extracts spatial-temporal relationships in high-dimensional trajectories. RL policies informed by FLD show extended generality.🧵
Introducing our team members at
#AnySyn3D
! Explore our recent works with our collaborative partners. If you’re interested in connecting or partnering with us, please don’t hesitate to reach out via email at anysyn3d@gmail.com.
website:
#AIGC
#AI
#3D
I would like to say that
@xbpeng4
's AMP is the most important work for me. Any interesting idea can be quickly validated using AMP with IsaacGym.
Here are some demos I made recently:
(1) a policy that learns various walking styles (the idea comes from cASE
@frankzydou
).
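Part of why AMP makes quick validation possible: the policy reward just mixes a task reward with an adversarial "style" reward from a motion discriminator, so trying a new idea mostly means swapping the reference motions or the task term. A minimal sketch of that reward mixing, following the published AMP formulation rather than the author's demo code:

```python
# Minimal sketch of AMP-style reward mixing (per the published AMP paper,
# not the author's demo code): total reward = task reward + adversarial
# "style" reward derived from a least-squares motion discriminator.
import torch

def amp_reward(task_reward: torch.Tensor,
               disc_logits: torch.Tensor,
               w_task: float = 0.5,
               w_style: float = 0.5) -> torch.Tensor:
    # style reward pushes discriminator outputs toward the "real motion" label (1)
    style_reward = torch.clamp(1.0 - 0.25 * (disc_logits - 1.0) ** 2, min=0.0)
    return w_task * task_reward + w_style * style_reward

print(amp_reward(torch.tensor([0.8]), torch.tensor([0.6])))
```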
Introducing 🔥InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction🔥
📰:
🔗:
‼️Generate more complex HOIs with higher quality and better alignment to text by synergizing external knowledge.
🧵[1/5]
🎉Thrilled to share that the paper, "NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction," has surpassed 1,000 citations! NeuS has set a standard for high-quality and robust neural surface reconstruction.
#Nerf
#3DV
#Reconstruction
#CV
#CG
@Alex_wangjingbo
Similar feeling. Overall the quality is improving, but there are still some artifacts. Check this paper; it's interesting: What is the Best Automated Metric for Text to Motion Generation? >
Some recent works from AnySyn3D :
🌟 SyncDreamer (
#ICLR2024
Spotlight) attempts to address multi-view consistency issues in single-view 3D generation.
🌟Wonder3D (
#CVPR2024
) uses cross-domain diffusion to solve detail recovery problems.
What will you focus on next? Follow
Hello World! 👋👋👋 This is AnySyn3D!
AnySyn3D is a non-commercial, non-profit research interest group comprising individuals with a strong interest in exploring research problems and cutting-edge technologies in 3D AIGC.
Hello Sydney! 🐨
🧥JFK✈️HKG✈️SYD👕for SIGA23.
If you're in Sydney for
@SIGGRAPHAsia
2023 and would like to chat about Shape Generation, Motion Synthesis (kinematics-based/physics-based) and Geometry Modeling/Processing, please feel free to reach out 😊.
🌟We will present our work
In preparation for PULSE's code release:
Releasing PHC+, a motion imitation model that has learned ALL of the training data (11,313 AMASS sequences).
Available at PHC's codebase 👇🏻
We hope everyone had a wonderful time on today's tour of the GRASP Lab at Levine!!!
A BIG Thank you to all GRASP volunteers who took some time out of their day to present to the aspiring young students!
#GRASP
#GRASPLab
#GRASPTour
#AEOP
#SMP
🚀 **Q-MAT is NOW LIVE!** 🎉
After tons of requests, we've finally open-sourced the 2015 SIGGRAPH Q-MAT code on GitHub! Dive in and check it out: !!
💻 Compatible with Windows & macOS. More support coming soon. Stay tuned!
#SIGGRAPH
#QMAT
#MedialAxis
Very excited to share our work at SIGGRAPH 2024: CWF: Consolidating Weak Features in High-quality Mesh Simplification.
Proj:
Code: (a star would be appreciated!)
Paper:
YouTube:
Less than two weeks until SIGGRAPH Asia 2023! Already registered? Join the conversation by using the event hashtag
#SIGGRAPHAsia2023
and banners we've prepared for you! Download them here 👉
This is amazing! Motion synthesis and analysis with rich human-object interactions is one of my next key focuses.
4D motion generation is COOL!
Looking forward to the open-source dataset.🤩
Alert🚨 This could be by far the most impressive video about 4D human motion generation! 🔥🔥🔥
We present TRUMANS, a pioneering and solid effort that scales up 4D human-scene interactions with both static and dynamic objects. With our model, we could eventually generate
1/ Happy to share our new paper "Surface-Filling Curve Flows via Implicit Medial Axes" at
#SIGGRAPH2024
!
We introduce a fast, robust and user-controllable algorithm to compute surface-filling curves. 🐍✈️
Wow! New foundation model for jointly estimating depth and surface normal from monocular images.
These are crucial for downstream tasks such as surface reconstruction and generation.
The huggingface gradio demo of GeoWizard is out!
Single click for both depth and normal with rich details.
The reconstruction feature will be available soon.
Visit our page for more comparisons.
#Depth
#Normal
#Gradio
#Huggingface
Check out our paper at Eurographics 2022!
Coverage Axis: Inner Point Selection for 3D Shape Skeletonization.
Inspired by the set cover problem, we present a simple yet effective formulation called Coverage Axis for 3D shape skeletonization.
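The set-cover intuition in brief: sample candidate inner points (medial balls) and surface points, dilate each ball slightly, and then select the fewest balls whose dilated versions cover every surface sample. The paper solves this selection as an integer program; the toy greedy stand-in below only illustrates the coverage formulation.

```python
# Toy greedy stand-in for the set-cover selection (the paper formulates it as
# an integer program); picks a small set of dilated inner balls that cover
# all surface samples.
import numpy as np

def greedy_coverage(surface_pts, centers, radii, dilation=0.02):
    # surface_pts: (N, 3) surface samples; centers: (M, 3), radii: (M,) inner balls
    dist = np.linalg.norm(surface_pts[None, :, :] - centers[:, None, :], axis=-1)
    covers = dist <= (radii[:, None] + dilation)        # (M, N) coverage matrix
    uncovered = np.ones(surface_pts.shape[0], dtype=bool)
    selected = []
    while uncovered.any():
        gain = (covers & uncovered).sum(axis=1)         # new points covered by each ball
        best = int(gain.argmax())
        if gain[best] == 0:                             # leftovers cannot be covered
            break
        selected.append(best)
        uncovered &= ~covers[best]
    return selected
```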
Introducing our new work, ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models.
We study an under-explored but important factor of diffusion generative models, i.e., the combinatorial complexity.
Project:
Code:
(3/n) 🚀SO(Sequentially Offset)-SMPL🚀 + 🏃EMDM🏃 for animation making! (combining our two recent projects)
⚡️Now, anyone can create character animations using just a few text inputs⚡️.
1️⃣Step 1: Generate animation-ready 3D avatars💃🕺 with disentangled clothes🧥👗👖👔 👚from text
Combined with text-to-motion methods like EMDM (), we can easily use text to generate complete sequences of realistic character animations! (4/4)
Huge shout out to all co-authors😀
@YuanLiu41955461
@frankzydou
@Proof_Yu
SO-SMPL can generate high-quality disentangled human body and clothing meshes from text prompts. Compared to other methods, it achieves much more photorealistic animations.
Don't let our early bird ticket prices fly by 🐥 Get the best savings on your tickets to
#SIGGRAPHAsia2023
and we'll see you in Sydney, Australia, this 12 - 15 December!
Early bird prices end 5 November 👉
HuMoGen
@CVPR
is happening next week🔥🔥🔥
[Tue, June 18, from 8:30 AM to 1:00 PM @ Summit 430]
Don't miss our great speakers and 21 accepted papers!!
Full schedule @
Interesting work! I’ve always believed that physics-based (motion) priors could handle various sparse input signals (e.g., VR controllers, 6-DoF headset poses, and videos captured from the headset) much better in motion tracking!
#CVPR2024
Highlight🌟
SimXR: Real-Time Simulated Avatar from Head-Mounted Sensors
Controlling simulated humanoids and estimating full-body pose from SLAM images and headset pose in real time.
🌐:
📜:
🧑🏻💻:
Need a powerful tool for orienting point cloud normal vectors, even for challenging shapes with extremely thin structures or high genus?
Try our latest
#SIGGRAPH2023
work: "Globally Consistent Normal Orientation for Point Clouds by Regularizing the Winding-Number Field."
I’m recruiting multiple PhD students in my group at
@CIS_Penn
in the following areas:
- Neural Representations and rendering for 3D/4D Reconstruction
- 3D Generative Models
- Human Motion Generation
- LLM-guided Graphics and Vision
- Neural Representations for Robotics
etc.
With OpenAI, Figure 01 can now have full conversations with people
-OpenAI models provide high-level visual and language intelligence
-Figure neural networks deliver fast, low-level, dexterous robot actions
Everything in this video is a neural network:
PHC is so cool; it is super robust in tracking (with fail-state recovery) the kinematics-based human motion generated by EMDM, yielding physically correct motion! 👍
Check out our SIGGRAPH 2023 paper: Globally Consistent Normal Orientation for Point Clouds by Regularizing the Winding-Number Field.
We propose a powerful method for point cloud normal orientation.
#siggraph
#siggraph2023
Project page:
We are excited to announce our recent work at SIGGRAPH 2023: Globally Consistent Normal Orientation for Point Clouds by Regularizing the Winding-Number Field.
#SIGGRAPH2023
Project homepage: …
Paper link:
After 2 years of hard work by the team, we are thrilled to release today! Scholar Inbox is a personal paper recommender which enables you to stay up-to-date with the most relevant progress by delivering personal suggestions directly to your inbox.🧵
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
paper page:
We present a novel diffusion model, SyncDreamer, that generates multiview-consistent images from a single-view image. Using pretrained large-scale 2D diffusion models, recent
Our pipeline enables generating a diverse range of 3D clothes of different colors, materials, and types, and the generated human body and clothing meshes can be directly utilized in a physical simulation environment to produce animations. (3/4)
✨Exciting opportunity! Sony US Labs in San Jose, CA, is looking for a 2024 Fall research intern for 3D human motion synthesis and human-object interaction. Remote/in-person options available.
Our team at Sony US Labs (located in San Jose, CA) is looking for a 2024 Fall research intern to work on 3D human motion synthesis and human-object interaction. Remote/in-person options are available. If you’re interested, feel free to DM me.
[1/2] Excited about your paper being accepted to CVPR? Looking to extend its impact to more movers and shakers in the Human Motion Generation community? 🕺💡 Submit a one-page abstract to the HuMoGen Workshop at
#CVPR2024
and reach an even broader audience! 📝✨
@CVPR
#HuMoGen
#SIGGRAPHAsia2022
We introduce RFEPS at SIGGRAPH Asia 2022, a novel method that reconstructs a feature-line-equipped polygonal surface from a noisy point cloud.
The recovered feature lines are very beneficial for CAD model recovery.
3DV Keynote: **Generalized 3D Reconstruction**
Lingjie Liu (University of Pennsylvania)
🗓️5:00 - 6:00 PM CET Wednesday Mar 20, 2024
📺Follow the keynote live on