![LTL-UvA Profile](https://pbs.twimg.com/profile_images/1818509490452357120/PpynTGM9_x96.jpg)
LTL-UvA
@ltl_uva
Followers: 58
Following: 28
Statuses: 34
Language Technology Lab @UvA_Amsterdam
Amsterdam, The Netherlands
Joined December 2022
4. Representational Isomorphism and Alignment of Multilingual Large Language Models. We will release Di's paper later! #EMNLP2024 #NLProc
3. How to identify intrinsic task modularity within multilingual translation networks? Check out Shaomu's paper:
2. ApiQ: Finetuning of 2-Bit Quantized Large Language Model, check out Baohao's paper: #EMNLP2024
1. Can you learn the meaning of words from someone who thinks you are smarter than you are? Check out Kata's paper: #EMNLP2024 #NLProc
Language Technology Lab got four papers accepted for #EMNLP2024! Congrats to authors Kata Naszadi, Shaomu Tan, Baohao Liao @baohao_liao, Di Wu @diwuNLP 🥳🥳
RT @sethjsa: Just returned from MT Marathon 2024 in Prague - thanks to @ufal_cuni for organising a great week! Between the insightful talks…
RT @baohao_liao: 🚨 New paper 🚨 Our multilingual system for the WMT24 general shared task obtains: --- Constrained track: 6 🥇 3 🥈 1 🥉 --- Ope…
Language Technology Lab at ACL🇹🇭! Busy poster presentation by @davidstap @diwuNLP #ACL2024 #ACL2024NLP
RT @evgtokarchuk: Inspiring day at GRaM @GRaM_org_ workshop! My only complaint: too short! I want more! 😁 Thanks to organizers for such a…
RT @evgtokarchuk: Come check our poster tomorrow at @GRaM_org_ @icmlconf if you want to discuss dispersion of text embeddings on hyperspher…
1/4 #ACL2024 Excited to share our new paper on the impact of fine-tuning on the qualitative advantages of LLMs in machine translation! 🤖 Our work highlights the importance of preserving LLM capabilities during fine-tuning.
🤔️How to explicitly model embeddings on the hypersphere and encourage dispersion? Check out Evgeniia's recent work at @icmlconf @GRaM_workshop #ICML2024
Next week I'll be in Vienna at @icmlconf! Want to learn more about how to explicitly model embeddings on the hypersphere and encourage dispersion during training? Come to the @GRaM_workshop poster session 2 on 27.07. Shoutout to my collaborators Hua Chang Bakker and @vnfrombucharest 💫
RT @baohao_liao: Introducing our new paper🥳: ApiQ: Finetuning of 2-Bit Quantized Large Language Model🧐 In short: ApiQ can work as a quant…