![CambridgeLTL Profile](https://pbs.twimg.com/profile_images/964228050815782912/AiBNJHAT_x96.jpg)
CambridgeLTL (@CambridgeLTL)
Followers: 2K · Following: 103 · Statuses: 241
Language Technology Lab (LTL) at the University of Cambridge. Computational Linguistics / Machine Learning / Deep Learning. Focus: Multilingual NLP and Bio NLP.
Cambridge, England
Joined February 2018
RT @li_chengzu: Forget just thinking in words. 🚀 New Era of Multimodal Reasoning🚨 🔍 Imagine While Reasoning in Space with MVoT Multimodal…
RT @tiancheng_hu: 1/9 🧵 New paper alert (now in Nature Computational Science)! As polarisation continues to shape our world, we asked: Do s…
RT @caiqizh: 🔥Conformity in Large Language Models🔥 Our latest paper dives into how LLMs align their answers with incorrect majorities. We e…
RT @river_dong121: Thrilled to share our updated paper: "UNDIAL: Self-Distillation with Adjusted Logits for Robust Unlearning in Large Lang…
RT @FrohmannM: Introducing 🪓Segment any Text! 🪓 A new state-of-the-art sentence segmentation tool! Compared to existing tools (and strong…
RT @zhang_meiru: Attention Instruction: Amplifying Attention in the Middle via Prompting. Key findings: 1. LLMs lack relative position awar…
RT @tiancheng_hu: Thrilled to share our new paper: "Can LLM be a Personalized Judge?" We investigate the reliability of LLMs in judging use…
RT @fdschmidt: Introducing NLLB-LLM2Vec! 🚀 We fuse the NLLB encoder & Llama 3 8B trained w/ LLM2Vec to create NLLB-LLM2Vec which supports…
RT @hanzhou032: Which output is better? [A] or [B]? LLM🤖: B❌ [B] or [A]? LLM🤖: A✅ Thrilled to share our preprint on addressing preference…
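The order-flipping failure illustrated in that tweet is straightforward to test for. Below is a minimal sketch (not the paper's method): `llm_judge` is a hypothetical stand-in for any LLM-as-judge call, and the check simply queries the judge in both presentation orders and reports a verdict only when it is order-invariant.

```python
# Minimal sketch of a position-bias check for LLM-as-judge setups.
# `llm_judge` is a hypothetical placeholder: it returns "A" or "B" for
# whichever of the two presented outputs the model prefers.

def llm_judge(output_1: str, output_2: str) -> str:
    # Stand-in for a real LLM call; here we fake a position-biased judge
    # that always prefers the *second* option it sees.
    return "B"

def consistent_preference(out_a: str, out_b: str) -> str | None:
    """Query the judge in both orders; return a verdict only if stable."""
    first = llm_judge(out_a, out_b)    # "A" means out_a wins
    second = llm_judge(out_b, out_a)   # "A" means out_b wins
    # Map the second call's answer back to the original labels.
    second_mapped = "B" if second == "A" else "A"
    if first == second_mapped:
        return first                   # order-invariant verdict
    return None                        # position bias detected

if __name__ == "__main__":
    verdict = consistent_preference("answer one", "answer two")
    print("stable verdict:", verdict)  # None -> the judge flipped with order
```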
RT @li_chengzu: Excited to introduce TopViewRS: VLMs as Top-View Spatial Reasoners🤖 TopViewRS assesses VLMs’ spatial reasoning in top-view s…
RT @tiancheng_hu: "Role-playing" with LLMs is increasingly popular in chatbots and also "simulation" for social sciences. Can LLMs simulate…
RT @bminixhofer: Introducing Zero-Shot Tokenizer Transfer (ZeTT) ⚡ ZeTT frees language models from their tokenizer, allowing you to use an…
We can’t wait to see you!
I am crossing Hadrian’s Wall to give a series of invited talks in England!
- 27/5 2 pm @OxUniMaths:
- 29/5 noon @KingsCollegeLon: Bush House (S) 2.01
- 30/5 5:30 pm @ucl_nlp:
- 4-5/6 @CambridgeLTL
Here’s a kind reminder that the talk by @krisgligoric from @stanfordnlp is happening tomorrow! See you at 4 PM GMT!
🎙Talks talks talks! 🎙 As the new term is just around the corner, we’re happy to invite you to the Easter Term seminar series. Find the schedule below and up-to-date information with abstracts and links at
RT @YinhongLiu2: 🔥New paper!📜 Struggle to align LLM evaluators with human judgements?🤔 Introducing PairS🌟: By exploiting transitivity, we p…
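The idea hinted at in the PairS tweet is that if pairwise judgements are (mostly) transitive, a full ranking does not need all n·(n−1)/2 comparisons: a standard comparison sort gets by with O(n log n) judge calls. A rough sketch under that assumption follows; `prefer` is a hypothetical stand-in for an LLM pairwise judgement, and this is not the paper's actual algorithm.

```python
# Hedged sketch: ranking candidate outputs with a pairwise preference
# oracle, relying on transitivity so a comparison sort suffices.
# `prefer` is a hypothetical placeholder for an LLM pairwise judgement.
import functools

def prefer(a: str, b: str) -> bool:
    """Return True if `a` is judged better than `b` (placeholder: length)."""
    return len(a) > len(b)

def rank(candidates: list[str]) -> list[str]:
    # sorted() makes O(n log n) oracle calls instead of all n*(n-1)/2 pairs.
    cmp = lambda a, b: -1 if prefer(a, b) else 1
    return sorted(candidates, key=functools.cmp_to_key(cmp))

print(rank(["short", "a much longer answer", "mid answer"]))
# best-first: ['a much longer answer', 'mid answer', 'short']
```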
Happy to share a new NAACL paper! 🎉
Very happy to share that SQATIN ⛸ was accepted to #NAACL2024 Main! Thanks a lot to my co-authors who made this happen — @gg42554, @annalkorhonen, @licwu! 🎉 📄
Happy to share our new preprint! 📜 Congratulations to all the co-authors, @erazumovskaia @licwu and @annalkorhonen! 🥳
🚨New preprint🚨 Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet? 📃
Happy to share a new PEFT method! Congratulations to all the authors, @AlanAnsell5 @licwu @h_sterz @annalkorhonen @PontiEdoardo! 🎉
We scaled sparse fine-tuning (SFT) to LLMs (such as Llama 2) by making it both parameter- and memory-efficient! (q)SFT instruction tuning performance is often better than (q)LoRA with comparable speed and memory load. Paper: Code: (SFT PEFT) (experiments) @AlanAnsell5 @licwu @h_sterz @annalkorhonen