![Arianna Bisazza Profile](https://pbs.twimg.com/profile_images/1734887601687941120/RAVKAz24_x96.jpg)
Arianna Bisazza
@AriannaBisazza
1K Followers · 808 Following · 359 Statuses
Associate Prof #NLProc @univgroningen | Multilingualism | Interpretability | Language Learning in Humans vs NeuralNets | Mum^2
Groningen, The Netherlands
Joined July 2017
RT @Jirui_Qi: 🚨 Our paper on **efficient prompt engineering** has been accepted to the NAACL 2025 Main Conference! Key Point: LLMs tend to gen…
Three years ago, as part of the #InDeep consortium, we started brainstorming how interpretability could empower users of Machine Translation. It was a long journey with many interesting detours, but we finally have our first results! Thread & blogpost ⤵️
Our piece is finally out on the @translation @imminent_news blog! 🎉 It presents preliminary findings of our recent study evaluating the usefulness of word-level quality estimation in real-world post-editing settings (paper forthcoming)! 🧵1/
RT @CLiC_it_conf: #CLiCit2024 enigmistica with "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses" by @gsar…
RT @GroNlp: 🌴We had a great time at #EMNLP2024 presenting our works, meeting old friends, getting to know new people, and winning some priz…
The fruit of a collaboration started just a few months ago, our investigation of how child-directed Variation Sets help learning in LMs has received a @babyLMchallenge award #CoNLL2024 #EMNLP2024!! 🎉❤️ W/ amazing co-authors @_akari000 @Rodamille @akiyohukat_u and Yohei Oseki
👶I am happy to announce that our paper "BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency" received the ✨Outstanding Paper Award✨ at @babyLMchallenge !! #CoNLL2024 #EMNLP2024
Hi fellow #EMNLP2024 attendees! Interested in interpretability-powered citations for RAG LLMs? Don’t miss our poster TODAY (Wed 13 Nov), 16:00-17:30, in the Question Answering 3 session! w/ @Jirui_Qi @gsarti_ and @raquel_dmg
[1/8] Struggling to verify the trustworthiness of RAG outputs? Check out our latest work, where we use *model internals* as a powerful and faithful tool for attributing answers to retrieved docs! (w/ @gsarti_ @AriannaBisazza @raquel_dmg) 📄: #NLProc
NeLLComX is accepted at #CoNLL2024 🎉 See you in Miami to discuss new ways of simulating processes of language learning & evolution under a unified NN framework! w/ Y. Lian @VerhoefTessa
Why do human languages look the way they do? Which learning constraints, communication pressures & group dynamics cause the emergence of common language properties? We introduce NeLLComX, a framework to study this with neural-net agents. w/ Y. Lian @VerhoefTessa
Our MIRAGE paper is accepted at #EMNLP2024 Main! 🎉 See you in Miami 👋 to chat about interesting applications of feature attribution to Retrieval-Augmented Generation w/ @Jirui_Qi @gsarti_ and @raquel_dmg
RT @GroNlp: 📢 PhD position open in our group! We are looking for a student eager to work on using language technology for cultural heritag…
@ahmetustun89 @gsarti_ @CohereForAI I hope it’s not too late to join the party and congratulate you on this amazing award @ahmetustun89! I’ve known you were up to great things since the first day we met!
RT @gsarti_: Model Internals-based RAG Evaluation (MIRAGE) 🌴 is accepted to #EMNLP2024 Main! ➡️ To celebrate, here's our new MIRAGE demo c…
It was awesome to host the whole InDeep consortium in Groningen last week! @GroNlp @univgroningen Thanks a lot to invited speakers @Yevgen_M & Daniel Herrmann, and to all poster presenters!
InDeep consortium meeting in Groningen, with @AriannaBisazza @wzuidema @gsarti_ @hmohebbi75 @JumeletJ @linguisticshen @antske @iris_hx and several others.
It’s been exactly 5 years for me @GroNlp. I couldn’t be more proud and grateful to be part of this group!
What a weekend! We welcomed the academic year with a retreat in the nature of Erm! From cooking meals together, to intense rounds of Codenames and peaceful yoga sessions – we’re refreshed and ready to crush it this year!
@JumeletJ Looking forward to welcoming you @GroNlp @univgroningen and getting this new part of the journey started!
Languages minimize the overall length of their syntactic dependencies. Which learning & communication factors cause this preference? Can we find out using a neural-agent communication framework? Come to @Yuqing199704’s poster today #CogSci2024 (P1-81) w/ @VerhoefTessa
Excited to be at #CogSci2024! Here to present work on the emergence of common language properties using neural-net simulations (👀 our poster on dependency-length minimization w/ @Yuqing199704 @VerhoefTessa today 13.00). Generally interested in language acquisition/learnability & neural nets!