![xinlu zhang Profile](https://pbs.twimg.com/profile_images/1660853804181622786/ezMimGHM_x96.jpg)
xinlu zhang
@XZ1023_
Followers
72
Following
14
Statuses
24
Ph.D. @ucsbcs, Bachelor & Master @IUBloomington. Research interests: instruction tuning, LLM evaluation, LLM prompting (looking for full-time/internship).
Joined April 2022
RT @ZhiyuChen4: Can LLMs effectively assist in cognitive behavior therapy (CBT)? New paper: We present the fir…
Many thanks to my wonderful collaborators: Zhiyu (@ZhiyuChen4), Xi (@xiye_nlp), Xianjun (@Qnolan4), Lichang (@LichangChen2), William Wang (@WilliamWangNLP), and Linda Ruth Petzold.
@Qnolan4 @LichangChen2 @ZekunLi0323 6/ Takeaway: Our study shows the power of tuning LLMs with diverse & domain-specific instructions. High-quality, diverse data with domain knowledge boosts domain-specific capacity & generalization, even with small quantities.
@Qnolan4 @LichangChen2 @ZekunLi0323 5/ In-depth comparisons of AlpaCare-13B vs. its 13B instruction-tuned LLM counterparts reveal a consistent edge in medical proficiency and adaptability.
@Qnolan4 @LichangChen2 @ZekunLi0323 4/ Evaluation on AlpacaFarm reveals AlpaCare's robust generalization abilities in both medical and general domains. Training with a diverse, domain-specific instruction dataset also enhances generalizability.
@Qnolan4 @LichangChen2 @ZekunLi0323 2/ How do we create a diverse med dataset? Begin with expert-crafted tasks spanning medical topics, types & levels. Use #GPT4 for generating diverse tasks, ensuring depth! Tasks are then input into #ChatGPT sequentially for detailed output. See our seed example:
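The pipeline in the tweet above (expert-written seed tasks, GPT-4 expanding them into diverse tasks, ChatGPT then answering each task in turn) can be sketched as a minimal Python skeleton. The model calls are stubbed out with placeholder functions, and all names (`expand_tasks`, `answer_task`, `build_dataset`) are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of the two-stage instruction-data pipeline:
# seed tasks -> task expansion (the GPT-4 step) -> detailed outputs
# (the ChatGPT step). Real model calls are replaced by stubs.

def expand_tasks(seed_tasks, n_new=2):
    """Stand-in for the GPT-4 step: derive new task prompts from seeds."""
    new_tasks = []
    for seed in seed_tasks:
        for i in range(n_new):
            new_tasks.append(f"{seed} (variant {i + 1})")
    return new_tasks

def answer_task(task):
    """Stand-in for the ChatGPT step: produce a detailed response."""
    return f"Detailed answer to: {task}"

def build_dataset(seed_tasks):
    """Combine seeds and expanded tasks into instruction/output pairs."""
    tasks = list(seed_tasks) + expand_tasks(seed_tasks)
    return [{"instruction": t, "output": answer_task(t)} for t in tasks]

if __name__ == "__main__":
    seeds = ["Explain the contraindications of aspirin."]
    for example in build_dataset(seeds):
        print(example["instruction"])
```

In a real run, `expand_tasks` and `answer_task` would each wrap an LLM API call; the point of the structure is that diversity comes from the expansion step while answer quality comes from the second pass.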
🧵 6/6 A big shoutout to our fantastic team of authors: @ShiyangLi6, @Qnolan4, Chenxin Tian, @YaoQinUCSD, and Linda Ruth Petzold! Don't miss out on more insightful results in our paper!