Changho Shin
@Changho_Shin_
@Changho_Shin_ · 10 months
Heading to #NeurIPS 2023 from Dec 10-17! My collaborators and I will present:
1. Fairness in weak supervision
2. How to use LLMs for foundation model test-time adaptation @ R0-FoMo Workshop
3. Improving data wrangling LLMs with better demonstrations @ TRL Workshop 🧵
@Changho_Shin_ · 5 months
Curious whether you can robustify 💪 foundation models 🤖 almost for free?! 💸 Join us for the poster presentation on "Zero-Shot Robustification of Zero-Shot Models" with @dyhadila, @CaiLinrong, and @fredsala!
@fredsala (Fred Sala) · 5 months
Come by #ICLR2024 Session 2 on Tuesday to see our work using representation editing to make foundation models robust! No fine-tuning, no additional data, no problem.
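The "representation editing" above can be pictured with a tiny sketch. This is a hypothetical illustration (not the paper's actual algorithm): project a spurious-concept direction out of an embedding, so downstream predictions no longer depend on it — no fine-tuning, no extra data. The vectors and the "spurious" direction here are assumptions for the example.

```python
import numpy as np

def remove_direction(embedding, direction):
    """Return the embedding with its component along `direction` removed."""
    d = direction / np.linalg.norm(direction)      # unit vector of the concept
    return embedding - (embedding @ d) * d         # subtract the projection

emb = np.array([3.0, 4.0])
spurious = np.array([1.0, 0.0])  # e.g. a "background" concept direction
edited = remove_direction(emb, spurious)
# `edited` now has zero component along the spurious axis
```

The same projection trick can also *add* weight along a helpful concept direction; either way the model's weights are untouched.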
@Changho_Shin_ · 2 months
🚀 Excited to share our latest research at #ICML2024 Workshops! 1️⃣ Weak-to-Strong Generalization Through the Data-Centric Lens: What aspects of data induce weak-to-strong generalization? We show that overlap density (the proportion of data with both easy and hard features) drives weak-to-strong generalization.
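The parenthetical definition of overlap density can be made concrete with a minimal sketch. Assuming each example comes flagged with whether it carries an easy feature (usable by a weak model) and a hard feature (usable only by a strong model) — flags that are assumptions here, not part of the tweet — the quantity is just the fraction carrying both:

```python
def overlap_density(has_easy, has_hard):
    """Fraction of examples exhibiting BOTH an easy and a hard feature."""
    assert len(has_easy) == len(has_hard)
    both = sum(1 for e, h in zip(has_easy, has_hard) if e and h)
    return both / len(has_easy)

# Toy example: 4 of the 6 points carry both feature types -> 2/3.
easy = [1, 1, 1, 0, 1, 1]
hard = [1, 1, 0, 1, 1, 1]
density = overlap_density(easy, hard)
```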
@Changho_Shin_ · 10 months
Excited to share our work on "Mitigating Source Bias for Fairer Weak Supervision" with @SonNicCr, @dyhadila, and @fredsala at poster session 1 today! 🚀 Come by and let's chat!
@Changho_Shin_ · 10 months
1. Mitigating Source Bias for Fairer Weak Supervision: Weak supervision is great for efficiently building labeled datasets—but it can amplify bias. What can we do about this?
@Changho_Shin_ · 5 months
@dyhadila (Dyah Adila🦄) · 1 year
While you’re waiting for NeurIPS decisions—check out a fun problem! Can we improve large pre-trained models’ robustness *without* getting more data and fine-tuning?
@Changho_Shin_ · 10 months
It turns out, some good things in life are free. We propose a simple method to improve both fairness *and* accuracy in weak supervision. Check it out at Great Hall & Hall B1+B2 (level 1), poster #2014, 10:45 Tuesday. Paper: Code:
@Changho_Shin_ · 10 months
Feel free to reach out! Happy to chat about weak supervision, foundation models, optimal transport, or anything else!
@Changho_Shin_ · 2 months
@jzhao326 Somehow the DMLR paper is not accessible externally 😇
@Changho_Shin_ · 10 months
🔼Check it out at Room 235 - 236, Spotlight talk (TBD), Poster (10:20, 15:20), Friday
@Changho_Shin_ · 10 months
2. Foundation Models can Robustify Themselves, For Free (w/ @dyhadila): How do we make foundation models more robust? We ask them questions about what knowledge they were supposed to use, and use their answers to improve predictions. Join us at Ballroom A/B, Oral (16:10), Friday
@Changho_Shin_ · 10 months
3. Improving Data-wrangling LLMs via better in-context examples (w/ Joon Suk Huh): Narayan et al. (VLDB 2023) showed LLMs can be used for data wrangling tasks. Our study explores if better demonstrations (quantity, relevancy, diversity) can further improve LLMs in data wrangling.
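The "quantity, relevancy, diversity" axes for demonstrations can be sketched with a toy selector. This is a hypothetical illustration, not the study's method: given embedding vectors for the query and candidate demonstrations (all names and weights here are assumptions), pick k demos greedily, MMR-style, trading relevance to the query against redundancy with demos already chosen.

```python
import numpy as np

def select_demos(query_vec, demo_vecs, k=2, diversity=0.5):
    """Greedily pick k demo indices balancing relevance and diversity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    chosen, remaining = [], list(range(len(demo_vecs)))
    while remaining and len(chosen) < k:
        def score(i):
            relevance = cos(query_vec, demo_vecs[i])
            redundancy = max((cos(demo_vecs[i], demo_vecs[j]) for j in chosen),
                             default=0.0)
            return (1 - diversity) * relevance - diversity * redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# A near-duplicate of the top pick is passed over for a more diverse demo.
query = np.array([1.0, 0.2])
demos = [np.array([1.0, 0.0]),    # relevant
         np.array([0.98, 0.02]),  # near-duplicate of the first
         np.array([0.0, 1.0])]    # different, adds diversity
picked = select_demos(query, demos)
```

Setting `diversity=0.0` reduces this to pure nearest-neighbor retrieval; the quantity axis is just `k`.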