![Noam Aigerman Profile](https://pbs.twimg.com/profile_images/1820089576485974016/7XRFitRg_x96.jpg)
Noam Aigerman
@AigermanNoam
Followers
502
Following
260
Statuses
27
Assistant Professor @ University of Montreal
Montreal
Joined March 2021
RT @RanaHanocka: 3DL is recruiting PhD students to start in Fall 2025! A thread👇highlighting some of our recent works, and what we are exc…
0
13
0
I'll be recruiting PhD and MSc students through Mila - consider applying if you want to work at the intersection of machine learning and 3D geometry!
Mila's annual supervision request process opens on October 15 to receive MSc and PhD applications for Fall 2025 admission! Join our community! More information here
2
40
104
@sellan_s I think the implicit function is indeed a function: the (local) function that is represented by the zero level set (e.g., in the case of a surface, the *function* from a 2D patch to 3D). That is the "implicit function" that the "implicit function theorem" refers to, methinks.
0
0
0
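As a sketch of the point in the reply above (a standard statement of the theorem, not taken from the thread): for a surface given as the zero level set of a smooth $F$, the "implicit function" is the local graph map.

```latex
% Implicit function theorem, surface case (sketch):
% let F : \mathbb{R}^3 \to \mathbb{R} be smooth with F(x_0, y_0, z_0) = 0
% and \partial F / \partial z \neq 0 at that point. Then locally there is
% a function g : U \subset \mathbb{R}^2 \to \mathbb{R} such that
F\bigl(x,\ y,\ g(x, y)\bigr) = 0 \quad \text{for all } (x, y) \in U,
% and the "implicit function" parameterizing the surface is the patch
(x, y) \ \longmapsto\ \bigl(x,\ y,\ g(x, y)\bigr).
```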
RT @sellan_s: This is as good time to announce that I am *HIRING* students and postdocs to join my new lab at Columbia in 2025 (see link be…
0
112
0
@amirvaxman_dgp For one, I can say that I've never been able to use the LBFGS solver in PyTorch - it always stagnates on problems that I know would have worked with Matlab's LBFGS solver
1
0
3
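For context on the tweet above, a minimal sketch of how `torch.optim.LBFGS` is typically invoked (the closure-based API is the part that differs from most PyTorch optimizers; the quadratic objective here is an illustrative stand-in, not a problem from the thread):

```python
import torch

# LBFGS in PyTorch requires a closure that re-evaluates the loss and
# its gradients, since the optimizer takes multiple internal steps.
x = torch.tensor([3.0, -2.0], requires_grad=True)
opt = torch.optim.LBFGS([x], max_iter=100, line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    loss = ((x - 1.0) ** 2).sum()  # simple convex quadratic, minimum at (1, 1)
    loss.backward()
    return loss

opt.step(closure)
```

On a convex quadratic like this, LBFGS converges; the stagnation complaint in the tweet concerns harder, ill-conditioned problems.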
@sellan_s Let him upload a md5 checksum, we'll see what he says about lack of action after that.
1
0
7
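The joke above is a commitment scheme: publish a hash of a claim now, reveal the text later. A minimal sketch with a hypothetical claim string (and note MD5 is cryptographically broken; SHA-256 is the serious choice):

```python
import hashlib

# Commit: publish only the checksum of the (hypothetical) claim.
claim = b"my secret prediction"
commitment = hashlib.md5(claim).hexdigest()

# Reveal: later, publish `claim`; anyone can re-hash it and check
# that it matches the previously posted commitment.
assert hashlib.md5(claim).hexdigest() == commitment
```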
RT @allthatissolids: Excited to be presenting DA-Wand: Distortion Aware Selection Using Neural Mesh Parameterization at #CVPR2023! Come fi…
0
5
0
RT @WillMGao: Excited to share TextDeformer, our SIGGRAPH2023 work on text-driven mesh manipulation! (1/7) https://…
0
6
0
@amirvaxman_dgp No guarantees about the prediction in the triangle's (irrelevant) normal direction. It's in the null space of the loss, so intuitively the network probably chooses its prediction in the normal direction s.t. it facilitates making a correct prediction in the tangent space.
1
0
1
@amirvaxman_dgp The loss is defined on the restriction, where it is well-defined. The *gradient* is also well-defined; it's simply zero in each triangle's normal direction (due to back-propagation through the restriction operator, which discards the normal direction).
1
0
0
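The two replies above can be illustrated with a small NumPy sketch (the normal, prediction, and target values are made up for illustration): restricting a per-triangle prediction to the tangent plane before computing the loss makes the loss gradient vanish along the triangle's normal.

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])       # unit normal of a triangle
P = np.eye(3) - np.outer(n, n)      # restriction: projection onto tangent plane

pred = np.array([0.3, -0.7, 5.0])   # raw prediction; its normal component is arbitrary
target = np.array([0.1, 0.2, 0.0])  # target vector in the tangent plane

# loss = ||P @ pred - target||^2, so grad w.r.t. pred = 2 * P.T @ (P @ pred - target)
grad = 2.0 * P.T @ (P @ pred - target)

# Back-propagating through the restriction discards the normal direction:
print(grad @ n)  # -> 0.0
```

The gradient is nonzero only in the tangent plane, matching the claim that the normal direction is a null space for the loss.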
@esx2ve Yes, any genus, out of the box! What's more, your dataset isn't required to have a consistent genus. (Fig. 9 shows how a network trained solely on genus-0 meshes still makes valid predictions for higher-genus meshes.)
0
0
1
@coreqode We learn to emulate SLIM's locally-injective maps. Neither we nor SLIM have guarantees of global bijectivity, but this could be achieved by training on maps generated via a globally bijective method (e.g., SLIM + ensuring a non-overlapping boundary)
1
0
1
Joint work with my excellent co-authors, @thibaultgroueix, @KunalMGupta, Vova Kim, Siddhartha Chaudhuri, and Jun Saito @dukecyto at @AdobeResearch. Code and models to be released soon! (10/10)
0
1
5