Amir Taubenfeld Profile
Amir Taubenfeld (@TaubenfeldAmir)

Followers: 36 · Following: 32 · Statuses: 25

ML Engineer @GoogleAI

Joined March 2022
Amir Taubenfeld (@TaubenfeldAmir) · 2 days
New Preprint 🎉 LLM self-assessment unlocks efficient decoding ✅ Our Confidence-Informed Self-Consistency (CISC) method cuts compute without losing accuracy. We also rethink confidence evaluation & contribute to the debate on self-verification. 1/8👇
Amir Taubenfeld (@TaubenfeldAmir) · 2 days
Taken together, our results and analyses demonstrate that LLMs can effectively judge the correctness of their own outputs, contributing to the ongoing debate on self-verification. ✅ w/ @TomSheffer17807 eran_ofek @amir_feder @GoldsteinYAriel @zorikgekhman @_galyo @GoogleAI
Amir Taubenfeld (@TaubenfeldAmir) · 2 days
CISC consistently outperforms standard self-consistency across diverse sets of models and datasets, achieving equivalent performance with an average reduction of over 40% in the required sample size. 4/8
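The confidence-weighted vote behind CISC can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function name `cisc_vote`, the softmax normalization, and the `temperature` parameter are assumptions for the sketch; here each sampled answer's vote is weighted by the model's self-reported confidence.

```python
import math
from collections import defaultdict

def cisc_vote(answers, confidences, temperature=1.0):
    """Confidence-weighted majority vote (illustrative sketch of CISC).

    answers:     final answers extracted from sampled reasoning paths
    confidences: the model's self-assessed confidence score per sample
    """
    # Softmax over confidences -> per-sample vote weights (normalization
    # choice is an assumption of this sketch).
    weights = [math.exp(c / temperature) for c in confidences]
    scores = defaultdict(float)
    for answer, weight in zip(answers, weights):
        scores[answer] += weight
    # Return the answer with the highest total confidence-weighted mass.
    return max(scores, key=scores.get)
```

With equal confidences this reduces to a plain majority vote; with informative confidences, a high-confidence minority answer can outvote a low-confidence majority, which is what lets fewer samples suffice.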
Amir Taubenfeld (@TaubenfeldAmir) · 2 days
LLMs demonstrate strong reasoning capabilities by generating a sequence of reasoning steps that leads them to an answer. The self-consistency method can enhance performance by sampling diverse reasoning paths and selecting the most frequent answer. 2/8
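The standard self-consistency vote described above is a plain majority over the answers extracted from the sampled paths. A minimal sketch (the helper name `self_consistency` is illustrative):

```python
from collections import Counter

def self_consistency(answers):
    """Majority vote over answers extracted from sampled reasoning paths."""
    # most_common(1) returns [(answer, count)] for the most frequent answer.
    return Counter(answers).most_common(1)[0][0]

print(self_consistency(["7", "7", "5"]))  # -> 7
```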
Amir Taubenfeld (@TaubenfeldAmir) · 3 months
RT @zorikgekhman: I'll be at #EMNLP2024 next week to give an oral presentation on our work about how fine-tuning with new knowledge affects…
Amir Taubenfeld (@TaubenfeldAmir) · 4 months
RT @DariaLioub: 📢Paper release📢 What computation is the Transformer performing in the layers after the top-1 becomes fixed (a so called "sa…
Amir Taubenfeld (@TaubenfeldAmir) · 4 months
@RWeisman15348 Thanks! A quick note on (4.3) — the survey questions aren’t saved in the conversation history, so the agents don’t have access to the answers other agents gave. However, they do have access to the prior conversation messages.
Amir Taubenfeld (@TaubenfeldAmir) · 4 months
RT @zorikgekhman: Our work on the effects of exposing LLMs to new knowledge through fine-tuning has been accepted to #EMNLP2024! We show…
Amir Taubenfeld (@TaubenfeldAmir) · 9 months
RT @zorikgekhman: Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? New preprint!📣 - LLMs struggle to integrate new factua…
Amir Taubenfeld (@TaubenfeldAmir) · 1 year
Our findings underscore the need for future research aimed at exposing the inherent biases of LLMs and helping agents transcend these biases, paving the way for more accurate and human-like simulations for both research and practical applications. 9/9