Yubin Kim

@ybkim95_ai

Followers: 163 · Following: 35 · Statuses: 20

Graduate student @MIT conducting research on Healthcare AI aiming to build Personal Agents. Currently looking for PhD opportunities :D

Cambridge, MA
Joined April 2024
@ybkim95_ai
Yubin Kim
2 months
I will be at #NeurIPS2024 from December 10-16. Thrilled to present our oral paper (MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making) on Friday, December 13th (15:50-16:10 PST). 🔍 Learn more: Project page:
@ybkim95_ai
Yubin Kim
4 months
RT @shan23chen: Awesome study led by Keno! Led by @ybkim95_ai and @HyewonMandyJ We are trying to get a perspective from AI researchers a…
@ybkim95_ai
Yubin Kim
4 months
RT @Orson_Xu: [Please RT📢] SEA Lab ( is hiring 1 postdoc in Spring/Fall'25 and 1-2 PhD in Fall'25! We build next-g…
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ I am open to any form of collaboration on future work in the healthcare AI domain, especially multi-agent LLMs, healthcare AI, and wearable sensors. I am also actively looking for PhD positions this fall.
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ Our ablation shows that the adaptive setting outperforms static complexity settings, with 81.2% accuracy on text-only queries. Most text-only queries were high complexity, while image+text and video+text queries were often low complexity, suggesting visual cues simplify decisions.
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ Our findings show that MDAgents consistently reaches consensus across different data modalities. Text+video modalities converge quickly, while text+image and text-only modalities show a more gradual alignment. Despite varying speeds, all modality cases eventually converged.
@ybkim95_ai
Yubin Kim
5 months
@baggeontae18 @chanwoopark20 @HyewonMandyJ The arXiv paper will be updated! Sorry for the inconvenience.
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ Our ablations reveal that our approach optimizes performance with fewer agents (N=3), improves decision-making at extreme temperatures, and reduces computational costs, making it more efficient and adaptable than the Solo and Group settings, especially in complex medical cases.
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ Solo settings excel in simpler tasks, achieving up to 83.9% accuracy, while Group settings outperform in complex, multi-modal tasks, with up to 91.9% accuracy.
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ Surprisingly, our MDAgents significantly outperforms both Solo and Group setting methods, showing the best performance in 7 out of 10 benchmarks. It handles both textual and visual information with high precision.
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ MDAgents follows five stages: 1) a medical complexity check to categorize the query, 2) expert recruitment, selecting a PCC for low complexity and an MDT/ICT for moderate and high complexity, 3) initial assessment, 4) collaborative discussion between LLM agents, and 5) final decision making by a moderator.
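
For the gist of how these five stages could fit together, here is a minimal Python sketch of a staged flow like the one described above. It is illustrative only: the prompts, the expert roster, and the `ask`/`mdagents_pipeline` names are assumptions, not the released MDAgents code.

```python
# Minimal sketch of an MDAgents-style staged flow (illustrative, not the paper's code).
# `ask` is any callable that sends a prompt to an LLM and returns its text reply.
from typing import Callable, Dict, List

def mdagents_pipeline(medical_query: str, ask: Callable[[str], str]) -> str:
    # 1) Medical complexity check: categorize the query as low / moderate / high.
    complexity = ask(
        "Classify the complexity of this medical query as low, moderate, or high. "
        f"Answer with one word.\nQuery: {medical_query}"
    ).strip().lower()

    # 2) Expert recruitment: a single primary care clinician (PCC) for low
    #    complexity, a multidisciplinary / integrated care team otherwise.
    if complexity == "low":
        experts: List[str] = ["primary care clinician"]
    else:
        experts = ["internist", "radiologist", "specialist consultant"]

    # 3) Initial assessment from each recruited agent.
    assessments: Dict[str, str] = {
        role: ask(f"You are a {role}. Give an initial assessment of:\n{medical_query}")
        for role in experts
    }

    # 4) Collaborative discussion round over the initial assessments.
    discussion = ask(
        "Reconcile the following assessments and note points of agreement:\n"
        + "\n".join(f"- {role}: {text}" for role, text in assessments.items())
    )

    # 5) Final decision by a moderator agent.
    return ask(
        "You are the moderator. Based on the discussion below, give the final "
        f"answer to the query.\nQuery: {medical_query}\nDiscussion: {discussion}"
    )

if __name__ == "__main__":
    # Stand-in "LLM" so the sketch runs end to end without any API.
    canned = lambda prompt: "low" if "Classify" in prompt else "stub response"
    print(mdagents_pipeline("Is aspirin indicated for this low-risk patient?", canned))
```
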
@ybkim95_ai
Yubin Kim
5 months
@chanwoopark20 @HyewonMandyJ Previous approaches to medical decision making have ranged from single- to multi-agent frameworks such as voting and debate, but they often stick to static setups. In contrast, MDAgents dynamically chooses the best collaboration structure based on the complexity of the medical task.
@ybkim95_ai
Yubin Kim
8 months
RT @CHILconference: A framework for LLMs to make inferences about health based on contextual information and physiological data. Our fine-tu…
@ybkim95_ai
Yubin Kim
9 months
Excited to share a #ACL2024 Findings paper, "EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences", co-authored with @jocelynjshen. We provide a valuable dataset for work in empathetic AI, quantitative exploration of cognitive insights, and empathy modeling.
@jocelynjshen
Jocelyn Shen
9 months
Excited to share our #ACL2024 Findings paper "EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences" 🧵(1/7) Dataset request:
@ybkim95_ai
Yubin Kim
10 months
@hwchung27 @MIT Thanks for the great talk today Hyung Won!
@ybkim95_ai
Yubin Kim
10 months
@taotu831 This is really inspiring work! Congrats, Tao.
@ybkim95_ai
Yubin Kim
10 months
RT @taotu831: What unprecedented opportunities can 1M+ context open up in medicine? Introducing 🩺Med-Gemini, a family of multimodal medica…
@ybkim95_ai
Yubin Kim
10 months
I'm excited to share my recent publication at CHIL 2024, "Health-LLM: Large Language Models for Health Prediction via Wearable Sensor Data". Our study reveals the potential of LLMs as personal health learners with wearable sensors. arXiv:
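
As a rough illustration of the idea (not the Health-LLM code), here is a small Python sketch of turning wearable-derived features into a prompt for an LLM health prediction. The feature names, the target task, and the `build_health_prompt` helper are assumptions for the example.

```python
# Illustrative only: serializing wearable summaries into a prompt for an
# LLM-based health prediction, in the spirit of Health-LLM.
from typing import Dict, Union

def build_health_prompt(features: Dict[str, Union[int, float]], target: str) -> str:
    # Turn daily wearable-derived features (steps, resting HR, sleep, ...)
    # into natural-language context the model can condition on.
    context = ", ".join(f"{name} = {value}" for name, value in features.items())
    return (
        f"Wearable sensor summary for one user over the past day: {context}. "
        f"Estimate the user's {target} and explain briefly."
    )

if __name__ == "__main__":
    prompt = build_health_prompt(
        {"steps": 8432, "resting heart rate (bpm)": 61, "sleep duration (h)": 6.8},
        target="stress level on a 1-5 scale",
    )
    print(prompt)  # feed this prompt to an LLM of your choice
```
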
@ybkim95_ai
Yubin Kim
10 months
Happy to share our latest paper, "Adaptive Collaboration Strategy for LLMs in Medical Decision Making". We introduce MDAgents, a framework that constructs an LLM team for medical decision making, showing the best performance in 5 out of 7 medical benchmarks.