Yonatan Bitton Profile
Yonatan Bitton

@YonatanBitton

Followers: 1K · Following: 25K · Statuses: 11K

Senior Research Scientist at @Google, working on multimodal consistency. CS PhD at @HebrewU.

Joined January 2020
@YonatanBitton
Yonatan Bitton
4 months
Visual Riddles is accepted to NeurIPS D&B 2024 🎉 Check it out in
@nitzanguetta
nitzan guetta
4 months
Excited to announce that our paper has been accepted to NeurIPS D&B 2024! Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models Read the paper here: Check out the project website:
1
1
26
@YonatanBitton
Yonatan Bitton
2 days
RT @zorikgekhman: Our work on abductive reasoning across multiple images has been accepted to #ICLR2025! Congrats, @mor_ventura95 !
0
4
0
@YonatanBitton
Yonatan Bitton
4 days
RT @hbXNov: @hila_chefer Great work! you might want to evaluate the model on VideoPhy for assessing the physical commonsense of the model o…
0
1
0
@YonatanBitton
Yonatan Bitton
5 days
RT @hila_chefer: VideoJAM is our new framework for improved motion generation from @AIatMeta We show that video generators struggle with m…
0
192
0
@YonatanBitton
Yonatan Bitton
6 days
RT @NitCal: This is an excellent vision-language benchmark! It's surprising how easily humans solve the examples, while even the strongest…
0
1
0
@YonatanBitton
Yonatan Bitton
6 days
RT @michael_toker: Happy to share that our work on reasoning in VLMs has been accepted to ICLR 2025! In this work, we introduce a challeng…
0
1
0
@YonatanBitton
Yonatan Bitton
6 days
RT @mor_ventura95: We’re excited to share that our paper, "NL-EYE: Abductive NLI for Images" has been accepted to ICLR 2025!🥳 Our benchmar…
0
11
0
@YonatanBitton
Yonatan Bitton
6 days
Excited to see NL-EYE accepted to ICLR 2025! 🎉 I especially like this benchmark because it: - Features a challenging test set—easy for humans, tough for models. - Pushes VLMs beyond single-image reasoning with a multi-image setup. - Leverages t2i for creative task design.
@mor_ventura95
Mor Ventura
7 days
We’re excited to share that our paper, "NL-EYE: Abductive NLI for Images" has been accepted to ICLR 2025!🥳 Our benchmark reveals that VLMs struggle with abductive reasoning across multiple images (worse than random!) Think you can solve it? Check out these examples! Link⬇️
0
1
20
@YonatanBitton
Yonatan Bitton
10 days
@OmriAvr @pika_labs Congrats!
0
0
1
@YonatanBitton
Yonatan Bitton
13 days
RT @abk_tau: Dense captions are highly informative! But it turns out that sometimes they can be overly detailed.. 🤔📚 A great work led by @…
0
2
0
@YonatanBitton
Yonatan Bitton
14 days
RT @moranynk: For additional information: ArXiv: Code:
0
1
0
@YonatanBitton
Yonatan Bitton
14 days
Thrilled to share our new work, led by @moranynk, on improving image captioning by fine-tuning models with knowledge-adapted captions. We also introduce DNLI, a fine-grained evaluation framework. This paper will be presented at NAACL 2025🇺🇸. Huge kudos to Moran and the team!
@moranynk
Moran Yanuka
14 days
🚨 Excited to share our paper “Bridging the Visual Gap: Fine-Tuning Multimodal Models with Knowledge Adapted Captions” was accepted to NAACL 2025! 🚨 🎥 Shoutout to Discover AI for covering our work: @abk_tau @YonatanBitton Idan Szpektor @RGiryes
0
1
15
@YonatanBitton
Yonatan Bitton
14 days
RT @moranynk: 🚨 Excited to share our paper “Bridging the Visual Gap: Fine-Tuning Multimodal Models with Knowledge Adapted Captions” was acc…
0
4
0
@YonatanBitton
Yonatan Bitton
14 days
RT @NitCal: Do you use LLM-as-a-judge or LLM annotations in your research? There’s a growing trend of replacing human annotators with LLMs…
0
38
0
@YonatanBitton
Yonatan Bitton
16 days
RT @NitCal: Have you written an NLP/LLM interpretability or analysis paper in recent years? 🤖🔍 I’m finalizing the camera-ready version of…
0
6
0
@YonatanBitton
Yonatan Bitton
17 days
Happy to see VideoPhy in #ICLR2025!
@hbXNov
Hritik Bansal
18 days
VideoPhy is now accepted to #ICLR2025 🇸🇬 please evaluate your state-of-the-art video models on our accessible physical commonsense benchmark! paper:
0
1
13
@YonatanBitton
Yonatan Bitton
17 days
RT @hbXNov: VideoPhy is now accepted to #ICLR2025 🇸🇬 please evaluate your state-of-the-art video models on our accessible physical commons…
0
6
0
@YonatanBitton
Yonatan Bitton
17 days
RT @hbXNov: our work is now accepted to #ICLR2025 🇸🇬 paper:
0
4
0
@YonatanBitton
Yonatan Bitton
24 days
@mohitban47 @WhiteHouse @POTUS Congrats, well deserved!
1
0
3
@YonatanBitton
Yonatan Bitton
1 month
Exciting to see Gemini 2.0 and Gemini-2.0-thinking taking on the Visual Riddles challenge! The leaderboard is heating up, with open-ended auto-rating accuracy currently around the mid-50s. Lots of room for improvement across all models!
@nitzanguetta
nitzan guetta
1 month
🚀🚀🚀 OpenAI O1, Gemini-2.0 and Gemini-2.0-thinking are on the #VisualRiddles leaderboard! Multiple Choice: Gemini-2.0-thinking hits 60% accuracy (84% with hints!) Open-Ended (Auto-Rating): O1 leads with 58% accuracy. Check it out: 🔗 @YonatanBitton
0
2
19
@YonatanBitton
Yonatan Bitton
1 month
RT @nitzanguetta: 🚀🚀🚀 OpenAI O1, Gemini-2.0 and Gemini-2.0-thinking are on the #VisualRiddles leaderboard! Multiple Choice: Gemini-2.0-thin…
0
1
0