LTL-UvA

@ltl_uva

Followers: 58 · Following: 28 · Statuses: 34

Language Technology Lab @UvA_Amsterdam

Amsterdam, The Netherlands
Joined December 2022
@ltl_uva
LTL-UvA
3 months
Amsterdam NLPers at #EMNLP2024 🔥 @illc_amsterdam @UvA_IvI
0
0
23
@ltl_uva
LTL-UvA
4 months
RT @sethjsa: Our work “Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book?” is now on arXiv!
0
22
0
@ltl_uva
LTL-UvA
4 months
4. Representational Isomorphism and Alignment of Multilingual Large Language Models. We will release Di's paper later! #EMNLP2024 #NLProc
0
0
3
@ltl_uva
LTL-UvA
4 months
3. How to identify intrinsic task modularity within multilingual translation networks? Check out Shaomu's paper:
0
0
5
@ltl_uva
LTL-UvA
4 months
2. ApiQ: Finetuning of 2-Bit Quantized Large Language Model. Check out Baohao's paper: #EMNLP2024
0
0
5
@ltl_uva
LTL-UvA
4 months
1. Can you learn the meaning of words from someone who thinks you are smarter than you are? Check out Kata's paper: #EMNLP2024 #NLProc
0
1
8
@ltl_uva
LTL-UvA
4 months
Language Technology Lab got four papers accepted for #EMNLP2024! Congrats to authors Kata Naszadi, Shaomu Tan, Baohao Liao @baohao_liao, Di Wu @diwuNLP 🥳🥳
0
1
7
@ltl_uva
LTL-UvA
5 months
RT @baohao_liao: #NeurIPS decision is out
0
2
0
@ltl_uva
LTL-UvA
5 months
RT @sethjsa: Just returned from MT Marathon 2024 in Prague - thanks to @ufal_cuni for organising a great week! Between the insightful talks…
0
1
0
@ltl_uva
LTL-UvA
6 months
RT @baohao_liao: 🚨 New paper 🚨 Our multilingual system for the WMT24 general shared task obtains: --- Constrained track: 6 🥇 3 🥈 1 🥉 --- Ope…
0
4
0
@ltl_uva
LTL-UvA
6 months
Language Technology Lab at ACL🇹🇭! Busy poster presentation by @davidstap @diwuNLP #ACL2024 #ACL2024NLP
0
0
6
@ltl_uva
LTL-UvA
7 months
RT @evgtokarchuk: Inspiring day at GRaM @GRaM_org_ workshop! My only complaint: too short! I want more! 😁 Thanks to organizers for such a…
0
5
0
@ltl_uva
LTL-UvA
7 months
RT @evgtokarchuk: Come check our poster tomorrow at @GRaM_org_ @icmlconf if you want to discuss dispersion of text embeddings on hyperspher…
0
17
0
@ltl_uva
LTL-UvA
7 months
Congrats to David and Di again! #acl2024 #NLProc
0
0
2
@ltl_uva
LTL-UvA
7 months
How Far Can 100 Samples Go? Unlocking Overall Zero-Shot Multilingual Translation via Tiny Multi-Parallel Data:
0
1
3
@ltl_uva
LTL-UvA
7 months
Paper details: 1. The Fine-Tuning Paradox: Boosting Translation Quality Without Sacrificing LLM Abilities
0
1
4
@ltl_uva
LTL-UvA
7 months
🔥Check out David's paper on the impact of fine-tuning LLMs in machine translation! #acl2024 #NLProc
@davidstap
David Stap
7 months
1/4 #ACL2024 Excited to share our new paper on the impact of fine-tuning on the qualitative advantages of LLMs in machine translation! 🤖 Our work highlights the importance of preserving LLM capabilities during fine-tuning.
0
0
2
@ltl_uva
LTL-UvA
7 months
🤔️How to explicitly model embeddings on the hypersphere and encourage dispersion? Check out Evgeniia's recent work at @icmlconf @GRaM_workshop #ICML2024
@evgtokarchuk
Evgeniia Tokarchuk
7 months
Next week I'll be in Vienna at @icmlconf! Want to learn more about how to explicitly model embeddings on the hypersphere and encourage dispersion during training? Come to the @GRaM_workshop poster session 2 on 27.07. Shoutout to my collaborators Hua Chang Bakker and @vnfrombucharest 💫
0
1
6
@ltl_uva
LTL-UvA
8 months
RT @baohao_liao: Introducing our new paper🥳: ApiQ: Finetuning of 2-Bit Quantized Large Language Model🧐 In short: ApiQ can work as a quant…
0
3
0