![Yueh-Cheng Liu Profile](https://pbs.twimg.com/profile_images/1550224148957380608/ezMxpwrH_x96.jpg)
Yueh-Cheng Liu
@liuyuehcheng
Followers: 160
Following: 55
Statuses: 25
PhD Student at @TU_Muenchen 3D AI Lab
Taiwan
Joined May 2020
RT @angelaqdai: ScanNet++ v2 Benchmark Release! Test your state-of-the-art models on: Novel View Synthesis, 3D Semantic & Ins…
RT @angelaqdai: MeshArt: Generating Articulated Meshes with Structure-guided Transformers. @DaoyiGao generates articulated meshes with a h…
RT @manuel_dahnert: Super happy to present our #NeurIPS paper Coherent 3D Scene Diffusion From a Single RGB Image in Vancouver. Come to o…
RT @angelaqdai: How can we generate high-fidelity, complex 3D scenes? @QTDSMQ's LT3SD decomposes 3D scenes into latent tree representation…
CAD retrieval with Diffusion!
Excited to present DiffCAD, coming to #SIGGRAPH2024! @DaoyiGao introduces the first probabilistic approach to single-view CAD retrieval & alignment. We train only on synthetic data and generalize robustly to real images! Check out the code: w/ @david_roz_, @StefanLeuteneg1
RT @angelaqdai: Excited to present GenZI at #CVPR2024! @craigleili introduces GenZI, the first zero-shot approach to creating realistic 3D…
RT @MattNiessner: (1/2) LightIt: Illumination Modeling and Control for Diffusion Models! #CVPR2024 We facilitate lighting control for nove…
RT @angelaqdai: Check out @DaoyiGao's DiffCAD - introducing probabilistic CAD retrieval and alignment to an RGB image. We capture ambigui…
RT @MattNiessner: (1/2) Check out GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians! We create photorealistic head ava…
RT @MattNiessner: (1/2) Check out MeshGPT! MeshGPT generates triangle meshes by autoregressively sampling from a transformer model that pr…
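The core loop behind that kind of autoregressive mesh generation can be illustrated with a small sketch. This is not MeshGPT's implementation: the uniform `dummy_next_token_distribution`, the raw quantized-coordinate vocabulary, and the 9-tokens-per-triangle layout are simplifying assumptions standing in for the learned transformer and geometric vocabulary.

```python
import random

# Toy vocabulary: each token is a quantized coordinate value in [0, 127],
# plus an end-of-sequence token. Nine coordinate tokens form one triangle
# (3 vertices x 3 coordinates). These choices are illustrative only.
NUM_COORD_TOKENS = 128
EOS = NUM_COORD_TOKENS  # end-of-sequence token id


def dummy_next_token_distribution(prefix):
    """Stand-in for a trained transformer: returns probabilities over tokens.

    A real model would condition on the whole prefix; here we just return a
    uniform distribution with a small chance of stopping.
    """
    return [0.95 / NUM_COORD_TOKENS] * NUM_COORD_TOKENS + [0.05]


def sample_mesh_tokens(next_token_dist, max_tokens=90):
    """Autoregressive sampling loop: feed the growing prefix back in each step."""
    tokens = []
    while len(tokens) < max_tokens:
        probs = next_token_dist(tokens)
        token = random.choices(range(len(probs)), weights=probs, k=1)[0]
        if token == EOS:
            break
        tokens.append(token)
    return tokens


def tokens_to_triangles(tokens):
    """Decode coordinate tokens into triangles: 9 tokens -> one (v0, v1, v2)."""
    usable = len(tokens) - len(tokens) % 9
    triangles = []
    for i in range(0, usable, 9):
        coords = [t / (NUM_COORD_TOKENS - 1) for t in tokens[i:i + 9]]
        triangles.append((tuple(coords[0:3]), tuple(coords[3:6]), tuple(coords[6:9])))
    return triangles


if __name__ == "__main__":
    toks = sample_mesh_tokens(dummy_next_token_distribution)
    tris = tokens_to_triangles(toks)
    print(f"sampled {len(toks)} tokens -> {len(tris)} triangles")
```

In the real setting, the next-token distribution would come from a transformer conditioned on learned geometric embeddings of the prefix, and decoding would invert a learned tokenizer rather than dequantize raw coordinates.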
RT @angelaqdai: Can we synthesize 3D human-scene interactions without learning from any 3D data? Yes! Check out @craigleili's GenZI, a nov…
RT @angelaqdai: We have PhD+PostDoc openings in my lab at TU Munich! Please apply by Dec 20. Come work on 3D/4D reconstruction & generatio…
Come check it out and chat with us at poster 160 in the morning.
Working on 3D semantics and novel view synthesis? Come to our talk at @ICCVConference on Wednesday, 9:30 AM at Paris Sud, and our poster (160), 10:30-12:30 at Foyer Sud! Work done with @liuyuehcheng @angelaqdai @MattNiessner
RT @angelaqdai: We've released the ScanNet++ data! Check it out: 280 high-fidelity 3D scenes w/ 1mm geometry, DSLR+…
RT @MattNiessner: (1/2) How to use GANs for high-quality NeRF reconstruction? GANeRF proposes an adversarial rendering formulation whose g…
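To give a feel for how an adversarial term can be attached to a rendering loss, here is a generic toy sketch in PyTorch; it is not the GANeRF formulation. `TinyRenderer`, `PatchDiscriminator`, the patch size, and the 0.1 loss weight are all made up for illustration.

```python
import torch
import torch.nn as nn


class TinyRenderer(nn.Module):
    """Stand-in for a NeRF-style renderer that outputs RGB patches."""
    def __init__(self, patch=16):
        super().__init__()
        self.code = nn.Parameter(torch.rand(3, patch, patch))

    def forward(self, batch_size):
        # A real renderer would ray-march a radiance field for sampled patches.
        return self.code.unsqueeze(0).expand(batch_size, -1, -1, -1)


class PatchDiscriminator(nn.Module):
    """Tiny convolutional discriminator scoring real vs. rendered patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one logit per patch


renderer, disc = TinyRenderer(), PatchDiscriminator()
opt_r = torch.optim.Adam(renderer.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real_patches = torch.rand(8, 3, 16, 16)  # placeholder ground-truth patches

for step in range(10):
    # Discriminator update: tell real patches apart from rendered ones.
    fake = renderer(8).detach()
    d_loss = bce(disc(real_patches), torch.ones(8)) + bce(disc(fake), torch.zeros(8))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Renderer update: photometric reconstruction plus an adversarial
    # realism term whose gradients come from the discriminator.
    fake = renderer(8)
    recon = (fake - real_patches).pow(2).mean()
    adv = bce(disc(fake), torch.ones(8))
    opt_r.zero_grad()
    (recon + 0.1 * adv).backward()
    opt_r.step()
```

The point is only the structure of the two updates: the discriminator sees real and rendered patches, and its gradients add a realism signal on top of the usual per-pixel loss.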
First paper of my PhD. Quite a journey! Many thanks to my partner @chandan__yes and my supervisors @angelaqdai and @MattNiessner!
Looking for a challenging dataset for novel view synthesis and 3D semantics? Check out ScanNet++ at #ICCV23 (Oral)! 460+ scenes w/ 1mm laser scans, semantics, DSLR images, iPhone RGBD video @chandan__yes @liuyuehcheng @MattNiessner
RT @fdellaert: Nice to see a beautiful 3D geometry paper alongside all the LLM/ViT/DETR bonanza :-) At #CVPR23: LIMAP: A toolbox for mappin…
RT @kike_solarte: We're excited to announce the first multi-view layout estimation challenge, #MVL_CHALLENGE, for the #OmniCV Workshop at…