![Umang Bhatt Profile](https://pbs.twimg.com/profile_images/1496458018086653961/YUI7rr7Y_x96.jpg)
Umang Bhatt
@umangsbhatt
Followers: 2K · Following: 3K · Statuses: 504
Faculty Fellow @NYUDataScience. Research @TuringInst. Previously: PhD @Cambridge_Uni, @Mozilla, @PartnershipAI, @CarnegieMellon.
🇺🇸⇔🇬🇧
Joined July 2016
Our thought partner work is now out in @NatureHumBehav!!! 🎉 We discuss how to build AI systems that meet user expectations and complement their limitations🫱🏿🫲🏾
RT @NYUDataScience: CDS Faculty Fellow Umang Bhatt (@umangsbhatt) explores when AI should step back for cultural fit. His "algorithmic re…
Please help us reach interested applicants! @trustworthy_ml @XAI_Research @QueerinAI @datascifellows @dsa_org @DeepIndaba @black_in_ai @_LXAI @Khipu_AI @AiDisability @wimlds @DiverseInAI
@RivaGiuseppe @anna_wexler @AFeinsinger @NCKobis .@katie_m_collins et al. advance a new Perspective on how we can build systems that complement our limitations and can be considered thought partners. @sucholutsky @umangsbhatt @_k_a_c_h_ @MinaLee__ @xuanalogue @mark_ho_ @vmansinghka @adrian_weller [9/13]
Had a great time in Dakar for my second @DeepIndaba! It was fun to lead a practical session on responsible AI and meet so many wonderful people in Senegal 🇸🇳 🔥 #Indaba2024 #DLI2024
What would it take to build machines that partner with humans? Can we design AI assistants to be thought partners? In a new perspective, we describe how computational cognitive science can help build AI systems that learn and think *with* people! 🧠🫱🏿🫲🏾🤖
[New preprint!] What does it take to build machines that **meet our expectations** and **complement our limitations**? In this Perspective, we chart out a vision, one that engages deeply with computational cognitive science, to design truly human-centric AI “thought partners” 1/
Bonus: we put Modiste to the test in work led by @psiyumm and @gruver_nate. We find that displaying (calibrated) confidence scores to users can further modulate user agreement with LLM outputs. 15/14
Predictions without reliable confidence are not actionable and are potentially dangerous. In new work, we deeply investigate uncertainty calibration of large language models. We find LLMs must be taught to know what they don’t know: w/ @psiyumm et al. 1/8