Irena Gao Profile
Irena Gao

@irena_gao

Followers: 746 · Following: 231 · Media: 5 · Statuses: 67
@irena_gao
Irena Gao
21 days
Applying to Stanford's CS PhD program? Current graduate students are running a SoP + CV feedback program for URM applicants (broadly defined). Apply to SASP by Oct. 25! Info:
3 replies · 98 reposts · 475 likes
@irena_gao
Irena Gao
1 year
@shiorisagawa and I will be at the ICML 11am poster session tomorrow, Wednesday, presenting "Targeted Augmentations Improve Out-of-Domain Robustness." () 🏝️ Come chat with us about how to design data augmentations for robustness!
0 replies · 9 reposts · 52 likes
@irena_gao
Irena Gao
1 year
Excited to be at #ICCV2023 this week presenting our paper "Adaptive Testing of Computer Vision Models." Work w/ the incredible @gabriel_ilharco, @scottlundberg, and @marcotcr. Oral presentation: Wednesday 13:30-14:30. Poster session: Wednesday 14:30-16:30.
2 replies · 7 reposts · 47 likes
@irena_gao
Irena Gao
1 year
We're excited to announce a new release of OpenFlamingo models. Check out the blog post and keep an eye out for our technical report!
@anas_awadalla
Anas Awadalla
1 year
We are excited to announce OpenFlamingo V2 🦩! We are releasing five new multimodal models, across the 3B, 4B, and 9B scales, that outperform our previous model. w/ @irena_gao. Repo: Demo: Blog:
13 replies · 144 reposts · 659 likes
0 replies · 2 reposts · 27 likes
@irena_gao
Irena Gao
1 year
Check out our benchmark evaluating the compositionality of CLIP-like models!
@zixianma02
Zixian Ma
1 year
Have vision-language models achieved human-level compositional reasoning? Our research suggests: not quite yet. We’re excited to present CREPE – a large-scale Compositional REPresentation Evaluation benchmark for vision-language models – as a 🌟highlight🌟at #CVPR2023 . 🧵1/7
3 replies · 14 reposts · 68 likes
0 replies · 2 reposts · 12 likes
@irena_gao
Irena Gao
1 year
Vision models can have unexpected failure modes. For example, object detection models identify stop signs in sunny environments, but can fail when the stop signs are covered in snow. How can we efficiently test models for such bugs before deployment?
1 reply · 0 reposts · 3 likes
@irena_gao
Irena Gao
1 year
Rather than having users generate tests on their own, we use foundation models (e.g. CLIP and GPT-3) to rapidly propose tests. Throughout the testing process, we also adapt these models to learn from previously found failures.
1 reply · 0 reposts · 3 likes
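The adapt-from-failures idea in the tweet above can be illustrated with a toy sketch. The real AdaVision system uses CLIP and GPT-3 to propose tests; here embeddings are plain tuples and the "proposer" simply re-ranks a fixed candidate pool by similarity to previously found failures, so the function `rank_candidates` and the example data are illustrative assumptions, not the paper's API.

```python
# Toy sketch of an adaptive test-proposal loop: once a failure is found,
# prefer candidate tests whose embeddings are close to past failures.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(candidates, failures):
    """Rank candidate tests by similarity to previously found failures.

    candidates: dict mapping test description -> embedding tuple.
    failures: list of embeddings of tests the model already failed.
    With no failures yet, keep the original (exploratory) order.
    """
    if not failures:
        return list(candidates)
    score = lambda name: max(cosine(candidates[name], f) for f in failures)
    return sorted(candidates, key=score, reverse=True)

# Example: after one snowy stop-sign failure, snow-like tests rank first.
pool = {
    "stop sign in snow":  (0.9, 0.1),
    "stop sign at night": (0.1, 0.9),
    "stop sign, sunny":   (0.5, 0.5),
}
failures = [(1.0, 0.0)]  # embedding of an earlier snowy failure
print(rank_candidates(pool, failures)[0])  # -> stop sign in snow
```

In the actual system the pool would be proposed by a generative model rather than fixed, but the ranking step shows how the loop steers toward regions of input space where bugs were already found.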
@irena_gao
Irena Gao
1 year
We find in user studies that AdaVision helps users find significantly more failures than a non-adaptive baseline. This holds across classification, detection, and captioning tasks. We also show that finetuning ViT-H/14 on AdaVision tests fixes behavior on buggy topics.
1 reply · 0 reposts · 3 likes
@irena_gao
Irena Gao
1 year
Human-in-the-loop testing (e.g. red teaming) enables open-ended bug discovery, but is slow and limited by user creativity. How can we address these issues? We propose Adaptive Testing for computer vision models (AdaVision)!
1 reply · 0 reposts · 2 likes
@irena_gao
Irena Gao
1 year
Unfortunately, testing for bugs is hard. Most evaluation sets are small and underestimate OOD vulnerabilities. Automatic testing methods, like suites of synthetic perturbations, are fast but closed-ended: they can only test along a set of pre-defined axes.
1 reply · 0 reposts · 2 likes
@irena_gao
Irena Gao
1 year
@dailyaibrief ICCV registrants should be able to view a livestream in the Imagina app, under the "Paris Nord" room.
0 replies · 0 reposts · 0 likes
@irena_gao
Irena Gao
20 days
@JKhindkar We only read PhD applications, unfortunately. I'm not aware of a similar program for the MS, but I could be wrong. Good luck!
1 reply · 0 reposts · 1 like