![Yubin Kim Profile](https://pbs.twimg.com/profile_images/1783267113177927680/gByd_9Ce_x96.jpg)
Yubin Kim
@ybkim95_ai
Followers: 163 · Following: 35 · Statuses: 20
Graduate student @MIT conducting research on Healthcare AI, aiming to build Personal Agents. Currently looking for PhD opportunities :D
Cambridge, MA
Joined April 2024
I will be at #NeurIPS2024 from December 10-16. Thrilled to present our oral paper (MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making) on Friday, December 13th (15:50-16:10 PST). 🔍 Learn more: Project page:
0 · 0 · 18
RT @shan23chen: Awesome study led by Keno! Led by @ybkim95_ai and @HyewonMandyJ We are trying to get a perspective from AI researchers a…
0 · 3 · 0
RT @Orson_Xu: [Please RT📢] SEA Lab ( is hiring 1 postdoc in Spring/Fall'25 and 1-2 PhD in Fall'25! We build next-g…
0 · 77 · 0
@chanwoopark20 @HyewonMandyJ I am open to any form of collaboration on future work in the Healthcare AI domain, especially multi-agent LLMs, healthcare AI, and wearable sensors. I am also actively looking for PhD positions this Fall.
0 · 0 · 6
@chanwoopark20 @HyewonMandyJ Our ablations show that the adaptive setting outperforms static complexity settings, reaching 81.2% accuracy on text-only queries. Most text-only queries were high complexity, while image+text and video+text queries were often low complexity, suggesting visual cues simplify decisions.
0 · 0 · 2
@chanwoopark20 @HyewonMandyJ Our findings show that MDAgents consistently reaches consensus across data modalities. Text+video queries converge quickly, while text+image and text-only queries show a more gradual alignment. Despite the varying speeds, all modality cases eventually converge.
0 · 0 · 2
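A minimal sketch of how such round-by-round convergence could be quantified, assuming each agent emits a discrete answer per discussion round; the consensus_rate helper is an illustrative assumption, not the paper's actual metric:

```python
from collections import Counter

def consensus_rate(answers_per_round):
    """answers_per_round[r] holds each agent's answer at round r;
    returns the fraction of agents agreeing with the majority per round."""
    rates = []
    for answers in answers_per_round:
        top = Counter(answers).most_common(1)[0][1]  # size of the majority bloc
        rates.append(top / len(answers))
    return rates

# Example: three agents converging over three discussion rounds.
print(consensus_rate([["A", "B", "C"], ["A", "A", "C"], ["A", "A", "A"]]))
# -> [0.333..., 0.666..., 1.0]
```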
@baggeontae18 @chanwoopark20 @HyewonMandyJ the arXiv paper will be updated! sorry for the inconvenience.
0 · 0 · 0
@chanwoopark20 @HyewonMandyJ Our ablations reveal that our approach optimizes performance with fewer agents (N=3), improves decision-making at extreme temperatures, and reduces computational cost, making it more efficient and adaptable than Solo and Group settings, especially in complex medical cases.
0 · 0 · 2
@chanwoopark20 @HyewonMandyJ Solo settings excel in simpler tasks, achieving up to 83.9% accuracy, while Group settings outperform in complex, multi-modal tasks with up to 91.9% accuracy.
0 · 0 · 2
@chanwoopark20 @HyewonMandyJ Surprisingly, our MDAgents significantly outperforms both Solo and Group setting methods, achieving the best performance on 7 out of 10 benchmarks. It comprehends both textual and visual information with high precision.
0 · 0 · 2
@chanwoopark20 @HyewonMandyJ MDAgents follows five stages: 1) Medical complexity check to categorize the query 2) Expert recruitment, selecting a PCC for low complexity and an MDT/ICT for moderate and high complexity 3) Initial assessment 4) Collaborative discussion between LLM agents 5) Final decision-making by a moderator
0 · 0 · 2
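A minimal Python sketch of the five-stage flow described above, assuming a generic chat-completion helper; the llm() stub, prompts, and recruited roles are placeholders for illustration, not the actual MDAgents implementation:

```python
def llm(prompt: str) -> str:
    """Stub for a call to any chat-completion API."""
    raise NotImplementedError

def mdagents(query: str) -> str:
    # 1) Medical complexity check: categorize the query.
    complexity = llm(
        f"Classify the complexity of this medical query as low, moderate, or high:\n{query}"
    ).strip().lower()

    # 2) Expert recruitment: a single PCC for low complexity,
    #    an MDT/ICT (here, a generic specialist team) otherwise.
    if complexity == "low":
        roles = ["primary care clinician"]
    else:
        roles = ["cardiologist", "radiologist", "internist"]  # illustrative team

    # 3) Initial assessment from each recruited agent.
    assessments = {r: llm(f"As a {r}, assess this case:\n{query}") for r in roles}

    # 4) Collaborative discussion: each agent revises after seeing peers' views.
    revised = {
        r: llm(f"As a {r}, revise your assessment given these peer views:\n{assessments}")
        for r in roles
    }

    # 5) Final decision by a moderator agent.
    return llm(f"As a moderator, synthesize a final decision from:\n{revised}")
```

The key design point is stage 1: the complexity label, not a fixed configuration, decides how much collaboration machinery the query receives.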
@chanwoopark20 @HyewonMandyJ Previous approaches to medical decision-making have ranged from single- to multi-agent frameworks such as voting and debate, but they often stick to static setups. MDAgents, in contrast, dynamically chooses the best collaboration structure based on the complexity of the medical task.
0 · 0 · 3
RT @CHILconference: A framework for LLMs to make inference about health based on contextual information and physiological data. Our fine-tu…
0 · 2 · 0
Excited to share a #ACL2024 Findings paper "EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences" co-authored with @jocelynjshen. We provide valuable data for work on empathetic AI, quantitative exploration of cognitive insights, and empathy modeling.
Excited to share our #ACL2024 Findings paper "EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences" 🧵(1/7) Dataset request:
0 · 1 · 19
RT @taotu831: What unprecedented opportunities can 1M+ context open up in medicine? Introducing 🩺Med-Gemini, a family of multimodal medica…
0 · 16 · 0