![Nikolaos Sarafianos Profile](https://pbs.twimg.com/profile_images/1019987358194720768/4c4IJEjg_x96.jpg)
Nikolaos Sarafianos
@sarafianosn
Followers
1K
Following
3K
Statuses
283
Research Scientist @RealityLabs working on 3D generative models
San Francisco, CA
Joined December 2016
@raywzy1 I actually presented Omages (along with Geometry Image Diffusion from @DoctorDukeGonzo ) this week at a reading group; I really like this line of work. We'll be updating the arxiv in the next few days so we'll include a discussion related to Omages. See you in Singapore :)
0
0
1
RT @zianwang97: Introducing DiffusionRenderer, a neural rendering engine powered by video diffusion models. Estimates high-quality geo…
0
129
0
@ehsanik @anikembhavi @inkynumbers Love the TimeMore grinder. Looking forward to all the amazing things y'all will build.
1
0
1
@natanielruizg I think that the griptape is not showing up exactly where it should be in the first rotation (I think for a second or so the skateboard consists of graphics on both sides). That being said, I love everything about this and where it's heading
1
0
1
@int64_le Finally, we opted for showing mostly the underlying geometry rather than the colorized meshes, to showcase that the desired edits are happening at the geometry level (and not at the color level). I see both methods as complementary approaches that unlock new capabilities for 3D mesh editing [7/7]
1
0
1
@weswinder Hi @weswinder, here are a few reasons (some we address and some are still not well-handled): 1) multi-view consistency of the edited areas 2) ensuring that the 3D edit is consistent w/ the image/text guidance 3) getting fine-level quality 3D edits 4) editing non-watertight meshes
0
0
0
At test time, given a 3D mesh, some image guidance with the desired edit, and a coarse 3D mask, it generates the edited 3D mesh in seconds. Work with Will, @dilin_wang, Yuchen, @BozicAljaz, @TuurStuyck, Zhengqin, @flycooler and Rakesh
0
0
1