Matthew Muckley

@mattmucklm

970 Followers · 775 Following · 226 Statuses

Research Engineer, Meta Fundamental AI Research (FAIR). ML for compression, computer vision, medicine. Threads: https://t.co/IwcbQ8VDPn

New York, NY
Joined December 2018
Matthew Muckley @mattmucklm · 3 months
Here we go again...
Matthew Muckley @mattmucklm · 4 months
@desirivanova Hah I'm speechless! 🤣
Matthew Muckley @mattmucklm · 4 months
RT @brandondamos: If you prompt an LLM and stop in the middle of a token, what happens? ❌ The generated response doesn't correctly complete…
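The retweet above asks what happens when a prompt stops mid-token. A toy greedy longest-match tokenizer (hypothetical vocabulary, not any real BPE) makes the mismatch concrete: during training, "hello" is always one token, so a prompt ending in "hel" produces a token sequence the model has rarely seen.

```python
# Toy greedy longest-match tokenizer with a hypothetical vocabulary.
# Not a real BPE; only for illustrating mid-token prompt truncation.
vocab = ["hello", "hel", "he", "h", "e", "l", "o", " world"]

def tokenize(text):
    """Repeatedly consume the longest vocabulary entry that matches
    the start of the remaining text (assumes every input is coverable
    by the toy vocabulary)."""
    tokens = []
    while text:
        for tok in sorted(vocab, key=len, reverse=True):
            if text.startswith(tok):
                tokens.append(tok)
                text = text[len(tok):]
                break
    return tokens

print(tokenize("hello world"))  # → ['hello', ' world']
# Truncating the prompt mid-token yields ['hel'] — a segmentation the
# model almost never saw during training, so continuations degrade.
print(tokenize("hel"))          # → ['hel']
```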
Matthew Muckley @mattmucklm · 4 months
Tokenization is a limitation of modern LLMs. How to make it better? Find a way to convert your LLM's probabilities to byte-level probabilities! Details in the thread below! ⬇️ Work done by our great intern @buutphan
Buu Phan @buutphan · 4 months
🤔Tokenization (1) makes your LLMs produce odd text when prompts are cut off mid-token? (2) gives you problems when ensembling different LLMs? 💡We solve both by converting tokenized LLMs into equivalent byte-level LLMs! No training required! 📎Paper:
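A minimal sketch of the core idea in the thread above: collapsing a next-token distribution into a next-byte distribution by marginalizing over tokens that share a first byte. The vocabulary and probabilities here are hypothetical toy values, and the paper's exact conversion is more involved than this one-step marginalization.

```python
from collections import defaultdict

def next_byte_distribution(token_probs):
    """Sum the probability of every token whose UTF-8 encoding starts
    with the same byte, yielding a distribution over the next byte.
    Illustrative only: the paper's conversion handles full byte-level
    sequences, not just the first byte."""
    byte_probs = defaultdict(float)
    for token, p in token_probs.items():
        b = token.encode("utf-8")
        if b:  # skip empty tokens
            byte_probs[b[0]] += p
    return dict(byte_probs)

# Hypothetical next-token distribution from a tokenized LLM.
token_probs = {"hello": 0.5, "he": 0.2, "world": 0.3}
dist = next_byte_distribution(token_probs)
# "hello" and "he" share the first byte, so P(next byte = 'h') = 0.7
```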
Matthew Muckley @mattmucklm · 4 months
One example here by the responsible AI team!
Adina Williams @adinamwilliams · 4 months
Our responsible AI team is hiring three research scientist interns this cycle (two in Montreal, one in NYC). We're seeking enrolled PhD students who are excited to spend their summer figuring out how to ensure vision and/or language models work for everyone!
Matthew Muckley @mattmucklm · 4 months
@avinab_saha There are a lot more legal restrictions around generative diffusion models, and it took too long for us to resolve them before the first author finished their contract. However, an independent group has released an open implementation with some improvements:
Matthew Muckley @mattmucklm · 4 months
@rohitrango There are some projects that touch on health/medicine (e.g., the DINO team with …), but to my knowledge there are no teams for which that is the core research focus.
Matthew Muckley @mattmucklm · 5 months
README link:
Matthew Muckley @mattmucklm · 6 months
RT @karen_ullrich: Even with preference alignment, LLMs can be enticed into harmful behavior via adversarial prompts 😈. 🚨 Breaking: our t…
Matthew Muckley @mattmucklm · 7 months
Appearing at ICML this week!
Iris Huijben @IamHuijben · 1 year
Preprint and code of my internship @Meta on neural-augmented residual quantization is now online: ⚡️ 🚨We heavily improve SOTA compression and search performance by conditioning the codebook in each residual quantization step on the codewords selected so far.
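For readers unfamiliar with residual quantization, here is a minimal sketch of the plain (non-neural) version the quoted tweet builds on. The input and codebooks are random toy data, and the paper's key step, conditioning each stage's codebook on the codewords selected so far, is deliberately not modeled:

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Plain residual quantization: at each stage, pick the codeword
    nearest the current residual, add it to the reconstruction, and
    subtract it from the residual so later stages refine earlier ones.
    (The quoted work instead conditions each stage's codebook on the
    codewords chosen so far.)"""
    residual = np.asarray(x, dtype=float)
    codes, recon = [], np.zeros_like(residual)
    for cb in codebooks:                       # one codebook per stage
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))            # nearest codeword
        codes.append(idx)
        recon += cb[idx]
        residual = residual - cb[idx]
    return codes, recon

# Toy data: a 4-d vector and three stages of 16 random codewords each.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
codebooks = [rng.normal(size=(16, 4)) for _ in range(3)]
codes, recon = residual_quantize(x, codebooks)
# `codes` holds one index per stage; `recon` approximates `x`.
```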
Matthew Muckley @mattmucklm · 7 months
RT @buutphan: Why do LLMs fail simple completion tasks, but not on a harder task? Learn about tokenization bias in LLMs and how to fix it…
Matthew Muckley @mattmucklm · 8 months
RT @Piovrasca: Are sota image generative models effective world models? Consistency-diversity-realism Pareto fronts show they're not (yet)…
Matthew Muckley @mattmucklm · 9 months
@TacoCohen Thank you for articulating why machine learning papers are so confusing for us image processing people.