![Md Ashiqur Rahman Profile](https://pbs.twimg.com/profile_images/1349433237152559104/sG_Ls3GN_x96.jpg)
Md Ashiqur Rahman
@Ashiq_Rahman_s
277 Followers · 2K Following · 375 Statuses
PhD student @PurdueCs | ex-intern @nvidia | Machine Learning | Computer Vision
USA
Joined September 2017
RT @AnimaAnandkumar: What a special evening at @TIME 100 Impact Award dinner last night. Giving the acceptance speech
RT @AnimaAnandkumar: Thrilled to receive this honor by @TIME. Great to see neural operators and AI + science be recognized!
RT @sama: it is (relatively) easy to copy something that you know works. it is extremely hard to do something new, risky, and difficult wh…
RT @haomengz99: We are excited to share our #NeurIPS2024 work "Multi-Object 3D Grounding with Dynamic Modules and Language Informed Spatial…
@yangyc666 @AnimaAnandkumar Very briefly: CoDA-NO treats each variable as a token and uses an attention mechanism to capture the interactions among these variables. Just as the transformer architecture processes sequences of varying lengths, CoDA-NO can handle varying sets of variables.
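The idea in the reply above can be made concrete with a minimal sketch (hypothetical code, not the CoDA-NO release): each variable's discretized field is embedded as one token, and self-attention mixes information across the variable tokens, so the same weights apply whether a system has two variables or four.

```python
# Minimal sketch (hypothetical, not the authors' code): each physical variable's
# discretized field becomes one "token", and self-attention captures the
# interactions among variables, so the same weights handle any variable set.
import torch
import torch.nn as nn

class VariableTokenAttention(nn.Module):
    def __init__(self, n_points, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_points, d_model)      # field values -> token embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, n_points)       # token embedding -> field values

    def forward(self, fields):
        # fields: (batch, n_variables, n_points); n_variables may differ per system
        tokens = self.embed(fields)
        mixed, _ = self.attn(tokens, tokens, tokens)   # attention across variable tokens
        return self.proj(mixed)

# The same model accepts 2 variables (e.g. velocity components) or 4
# (e.g. adding pressure and temperature) without any change to its weights.
model = VariableTokenAttention(n_points=128)
print(model(torch.randn(1, 2, 128)).shape)  # torch.Size([1, 2, 128])
print(model(torch.randn(1, 4, 128)).shape)  # torch.Size([1, 4, 128])
```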
Excited to present our work, CoDA-NO, on multi-physics systems! Join us at the first poster session of #NeurIPS2024 on Wednesday, December 11, in the East Exhibit Hall, Poster #4102. See you there!
Excited to present our work on CoDA-NO at #NeurIPS2024!

We develop a novel neural operator architecture designed to solve coupled partial differential equations (PDEs) in multiphysics systems. Unlike existing approaches that are limited to fixed sets of variables, CoDA-NO handles varying combinations of physical variables by tokenizing functions along their codomain space. It extends transformer concepts such as positional encoding, self-attention, and normalization to function spaces, allowing a single model to learn representations of different PDE systems.

We demonstrate CoDA-NO's effectiveness on complex multiphysics problems such as fluid-structure interaction and Rayleigh-Bénard convection. Using a two-stage approach of self-supervised pretraining followed by few-shot supervised finetuning, CoDA-NO outperforms existing methods by over 36% on these challenging tasks. The architecture generalizes strongly, adapting to physical variables and geometries not seen during training, while maintaining discretization-convergence properties that make it resolution-agnostic.

Come view our poster on Wednesday the 11th from 11 am to 2 pm PST.

Links:
Paper:
Poster:
Codebase:

Authors: @Ashiq_Rahman_s, @Robertljg, Mogab Elleithy, @DanielLeibovici, @ZongyiLiCaltech, Boris Bonev, @crwhite_ml, @julberner, @RaymondYeh, @JeanKossaifi, @Azizzadenesheli
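A rough sketch of the two-stage recipe described above, assuming a generic `model` with the variable-token interface from the earlier sketch; the masking strategy and helper names are assumptions for illustration, not the released codebase: self-supervised pretraining reconstructs masked variable tokens from unlabeled snapshots, then few-shot supervised finetuning adapts the model on a small labeled set.

```python
# Hypothetical sketch of the two-stage training described in the tweet; the
# masking objective and loader names are assumptions, not the released code.
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(model, fields, mask_ratio=0.5):
    # Self-supervised objective: hide a random subset of variable tokens
    # and reconstruct them from the remaining ones.
    mask = torch.rand(fields.shape[:2], device=fields.device) < mask_ratio
    corrupted = fields.masked_fill(mask.unsqueeze(-1), 0.0)
    return F.mse_loss(model(corrupted), fields)

def pretrain(model, unlabeled_loader, epochs=10, lr=1e-3):
    # Stage 1: self-supervised pretraining on unlabeled PDE snapshots.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for fields in unlabeled_loader:          # snapshots only, no targets
            opt.zero_grad()
            masked_reconstruction_loss(model, fields).backward()
            opt.step()

def finetune(model, few_shot_loader, epochs=10, lr=1e-4):
    # Stage 2: few-shot supervised finetuning on a small labeled set.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, targets in few_shot_loader:
            opt.zero_grad()
            F.mse_loss(model(inputs), targets).backward()
            opt.step()
```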
RT @ZongyiLiCaltech: #NeurIPS I am on the 2024-25 job market seeking faculty positions and postdocs! My goal is to advance AI for scientifi…
RT @ai_for_success: This is F***ING crazy... Now your AI avatar can take Zoom calls in real-time, and you don’t even have to sit in front o…
RT @thegautamkamath: ICLR submission (not mine) flagged for Ethics Review based on the use of the term "black box." Reminiscent of the dr…
RT @wellingmax: I love this visualization! Now we need one for an irreversible Markov Chain as well. Where can I download this gif for my f…
RT @sethharpesq: I’ve been in four wars, as a soldier or reporter, and seen a lot of atrocities and abuses. But dressing up in the clothes…
RT @Azizzadenesheli: ICML2024 Tutorial on "Neural Operators & Machine Learning on Function Spaces" is now out. #NeuralOperators #AInSci…
RT @Azizzadenesheli: It was great delivering the #NeuralOperators tutorial at #ICML2024 with a great turn out. Thanks everyone for attendin…
RT @docmilanfar: If your PhD advisor dresses like this, you don’t have to worry about using neural nets in your thesis
RT @snz20: It is not humanly possible to function normally when your country is under total communication blockade and death toll for stud…