![Shalini De Mello Profile](https://pbs.twimg.com/profile_images/1218034344029114368/uHXtCQNf_x96.jpg)
Shalini De Mello
@shalinidemello
Followers: 2K
Following: 683
Statuses: 691
Director of Research @NVIDIA leading the AI-Mediated Reality and Interaction (AMRI) Research lab (https://t.co/ZcBV2ViZ19). Views are my own.
Glen Park, San Francisco
Joined September 2018
Come check out Alex's cool work "What You See is What You GAN" on learning unprecedented high-quality geometry from 3D-aware GANs tomorrow (June 21) at @CVPR Poster #314 from 10:30 am to noon. Project page:
In Seattle for @CVPR :) Come see our poster tomorrow (June 21): #314 from 10:30 to Noon. What You See is What You GAN!
RT @luminohope: I will be giving a Featured Sessions talk at SIGGRAPH ASIA in Tokyo on our efforts for building a foundation 3D digital hum…
RT @songhan_mit: Explore DuoAttention for long-context LLM: we find full-attention is highly redundant; 50%-75% of the attention heads can…
RT @Deeplearner2: Our autonomous robot dishwasher (ARD1) isn't just acting on its own, it's getting smarter. Here it cleans and stacks a bo…
RT @yukez: Check out our new work, BUMBLE — Vision-language models (VLMs) act as the "operating system" for robots, calling perceptual and…
RT @drmapavone: Our work on leveraging Large Language Models (LLMs) as cognitive agents for autonomous driving (Agent-Driver) has been acce…
RT @yukez: Proud to see our latest progress on Project GR00T featured in Jensen's #SIGGRAPH2024 keynote talk today! We integrated our RoboC…
RT @drmapavone: I am excited to share that our paper "Real-Time Anomaly Detection and Reactive Planning with Large Language Models" has re…
Wow! A once-in-a-lifetime event. Not to be missed.
We’re excited to announce that our CEO Jensen Huang will be joined by @Meta CEO Mark Zuckerberg to discuss how research enables AI breakthroughs at #SIGGRAPH2024. Attend the discussion to learn about the intersection of #generativeAI and virtual worlds.
RT @zipengfu: To retarget from humans to humanoids, we copy the corresponding Euler angles from SMPL-X to our humanoid model. We use open-…
@xunhuang1995 Many congrats Xun! Looking forward to all the great research you will do in the coming years.
RT @vincesitzmann: We have released the code for Video Diffusion via 3D UNet and Temporal Attention trained with Diffusion Forcing! The res…
RT @yongyuanxi: Are you a researcher or ML engineer or data scientist or anything that remotely sounds like that who feels like the tools w…
RT @yin_hongxu: Try VILA’s interactive demo! Latest model is the best open sourced model on image (MMMU) and video (video MME) benchmarks o…
RT @xuxin_cheng: Humanoid whole body grasp and trajectory tracking for any objects. Can’t wait to see it working on real robots!
RT @_vztu: 🚨ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback (ECCV'24) 🌟𝐏𝐫𝐨𝐣: 🚀𝐀𝐛𝐬:…
RT @Jerry_XU_Jiarui: GPT4-V can describe the location via text, but can’t accurately output the coordinate of each word. Introducing: Pixe…