TsinghuaNLP

@TsinghuaNLP

Followers: 3K · Following: 274 · Statuses: 266

Natural Language Processing Lab at Tsinghua University

Beijing
Joined January 2021
TsinghuaNLP @TsinghuaNLP · 2 months
RT @xcjthu1: 1/4 🚀 Densing Law of LLMs 🚀 OpenAI's Scaling Law showed how model capabilities scale with size. But what about the trend towa…
TsinghuaNLP @TsinghuaNLP · 3 months
3/3 ⚡️ Hardware-optimized with a custom Triton kernel, our implementation achieves 3x faster inference than PyTorch. With just one 80GB A100 GPU, you can load 50 7B models!
TsinghuaNLP @TsinghuaNLP · 3 months
RT @nlp_rainy_sunny: (Repost) We are thrilled to introduce our new work 🔥#SparsingLaw🔥, a comprehensive study on the quantitative scaling p…
TsinghuaNLP @TsinghuaNLP · 3 months
Moreover, in the 1280K "needle-in-a-haystack" (NIAH) test, the framework achieved perfect scores. Thanks to its highly parallelized segmented processing mechanism, it matches the speed of direct decoding for extremely long texts, surpassing other divide-and-conquer frameworks!
TsinghuaNLP @TsinghuaNLP · 3 months
🚀 Excited to propose the LLM×MapReduce framework, expanding LLMs' context length to infinity! A training-free, divide-and-conquer approach for long-text processing and comprehensive document understanding. 📑: 🔧: #THUNLP #LLM #NLP
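The two tweets above describe a training-free, divide-and-conquer pipeline: split a long input into segments, process the segments in parallel, then aggregate the partial results. Below is a minimal, hypothetical Python sketch of that map-reduce pattern; the chunking scheme, prompts, and the `call_llm` / `answer_over_long_document` helpers are illustrative placeholders, not the actual LLM×MapReduce implementation.

```python
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion call; replace with a real client."""
    raise NotImplementedError("plug in your own LLM client here")


def split_into_chunks(text: str, chunk_size: int = 4000) -> list[str]:
    # Naive fixed-size chunking; the actual framework segments text more carefully.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def map_stage(question: str, chunks: list[str]) -> list[str]:
    # Process segments in parallel, loosely mirroring the "highly parallelized
    # segmented processing" mentioned in the thread.
    def extract(chunk: str) -> str:
        return call_llm(f"Context:\n{chunk}\n\nExtract any information relevant to: {question}")

    with ThreadPoolExecutor() as pool:
        return list(pool.map(extract, chunks))


def reduce_stage(question: str, partial_notes: list[str]) -> str:
    # Aggregate the per-segment notes into a single answer.
    merged = "\n".join(partial_notes)
    return call_llm(f"Given these notes:\n{merged}\n\nAnswer the question: {question}")


def answer_over_long_document(question: str, document: str) -> str:
    return reduce_stage(question, map_stage(question, split_into_chunks(document)))
```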
TsinghuaNLP @TsinghuaNLP · 3 months
RT @OpenBMB: 🚀 Excited to share our latest work: “RAGEval”! 🎉 It’s a versatile framework for generating scenario-specific RAG evaluation d…
TsinghuaNLP @TsinghuaNLP · 4 months
Optima explores the evolution of agent communication and scaling laws. Intriguing findings from @JeffreyChen_THU. Join the conversation! 💬 #TechTalk #AI #THUNLP
Quoting Weize Chen @JeffreyChen_THU · 4 months
🤖💬 Excited to share our new work, Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent Systems! We explore training techniques to make AI agents communicate better and more efficiently, and also observe that it leads to an improved inference scaling law! 🧵👇
TsinghuaNLP @TsinghuaNLP · 5 months
RT @xcjthu1: 1/5 🚀 Excited to share our latest paper on Configurable Foundation Models! 🧠 Inspired by the human brain's functional special…
TsinghuaNLP @TsinghuaNLP · 6 months
📑
TsinghuaNLP @TsinghuaNLP · 6 months
📑
TsinghuaNLP @TsinghuaNLP · 7 months
RT @thudcst: 🏆 We're thrilled to announce that our paper "Scaling Laws For Dense Retrieval" won the SIGIR'24 Best Paper Award! Congratulatio…
TsinghuaNLP @TsinghuaNLP · 7 months
RT @JeffreyChen_THU: Introducing Internet of Agents (IoA) - a novel framework for AI agent collaboration! 🌐🤖 Imagine a world where heteroge…
TsinghuaNLP @TsinghuaNLP · 7 months
RT @thudcst: 🚀 Exciting day at #Tsinghua! The 2024 Summer School for Large Language Models kicks off today. 🌐 CS students from around the g…
TsinghuaNLP @TsinghuaNLP · 8 months
Excited to see the #ChatDev team pushing the boundaries of LLM-powered multi-agent collaboration with their curated collection of seminal papers. Dive into the latest advancements and explore the interactive e-book here: 📚🤖 #AI #Research #Innovation
Quoting ChenQian @qianc62 · 8 months
🎉To foster development in LLM-powered multi-agent collaboration🤖, the #ChatDev team has curated a collection of representative papers📄 presented in an interactive e-book📚 format. Explore the latest advancements and download the paper list here:
TsinghuaNLP @TsinghuaNLP · 8 months
We're in Seattle for #CVPR2024 this week!