Matthew Finlayson ✈️ NeurIPS

@mattf1n

944 Followers · 662 Following · 137 Statuses

First-year PhD at @nlp_usc | Former predoc at @allen_ai on @ai2_aristo | Harvard 2021, CS & Linguistics

Los Angeles, CA · Joined October 2013
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 11 months ago
Wanna know gpt-3.5-turbo’s embedding size? We find a way to extract info from LLM APIs and estimate it to be 4096. With the same trick we also develop 25x faster logprob extraction, audits for LLM APIs, and more! 📄 Here’s how 1/🧵
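The core of the trick is linear algebra: full-vocabulary logprob outputs are (up to the softmax normalizer) linear images of a d-dimensional hidden state, so their numerical rank reveals d. A minimal sketch of that idea, assuming you have already collected many full-vocab logprob vectors (getting them cheaply is what the 25x-faster extraction is about; this is illustrative, not the paper's code):

```python
import numpy as np

def estimate_embed_size(logprob_vectors, tol=1e-4):
    """Estimate a language model's hidden (embedding) size from
    full-vocabulary logprob vectors. Logits are a linear map of a
    d-dimensional hidden state, so the stacked outputs have numerical
    rank ~d (give or take one for the softmax normalizer), provided
    more vectors than the suspected hidden size are collected."""
    A = np.stack(logprob_vectors)          # (n_prompts, vocab_size)
    A = A - A.mean(axis=0)                 # center across prompts
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > tol * s[0]).sum())     # count significant singular values
```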
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 2 months ago
RT @RobertTLange: Loving the #NeurIPS2024 'Beyond Decoding: Meta-Generation Algorithms for LLMs' workshop ❤️ by @wellecks @mattf1n @hailey
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 2 months ago
I didn’t realize when making these diagrams that my Taylor example would be so timely 😂
Quoting Sean Welleck (@wellecks) · 2 months ago
In Vancouver for NeurIPS but don't have Taylor Swift tickets? You can still spend the day going through our tutorial reading list: Tuesday, December 10, 1:30-4:00pm @ West Exhibition Hall C, NeurIPS.
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 2 months ago
RT @wellecks: We're incredibly honored to have an amazing group of panelists: @agarwl_ , @polynoamial , @BeidiChen, @nouhadziri, @j_foerst
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 2 months ago
RT @wellecks: Curious about inference-time scaling, the #1 trending topic in LLMs? Come to our NeurIPS tutorial: Beyond Decoding: Meta-Gen…
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 3 months ago
RT @wellecks: Excited to give a NeurIPS tutorial on LLM inference strategies, inference-time scaling laws & more with @mattf1n and @haileys
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 4 months ago
RT @jaspreetranjit_: Thank you so much @SpecNews1SoCal @jaskang21 for featuring our work on OATH-Frames: Characterizing Online Attitudes to…
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 4 months ago
RT @xiangrenNLP: Arrived in Philadelphia for the very 1st @COLM_conf! Excited to catch up w/ everyone & happy to chat about faculty/phd pos…
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 4 months ago
RT @harsh3vedi: I had a fantastic time visiting USC and talking about 🌎AppWorld last Friday!! Thank you, @swabhz,…
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 4 months ago
Just landed in Philly for @COLM_conf where I’ll be presenting my work on extracting secrets from LLM APIs at the Wednesday afternoon poster sesh. Please reach out if you wanna hang and talk about sneaky LLM API hacks, accountability, and the geometry of LLM representations!
Quoting Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 11 months ago
Wanna know gpt-3.5-turbo’s embedding size? We find a way to extract info from LLM APIs and estimate it to be 4096. With the same trick we also develop 25x faster logprob extraction, audits for LLM APIs, and more! 📄 Here’s how 1/🧵
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 4 months ago
@nthngdy @agarwl_ @COLM_conf @andreasgrv First off, really cool paper, @nthngdy! I’m excited to see you at COLM and talk about it. I agree that I don’t see a direct link, as there are many other confounding variables affecting small LM performance on math reasoning. If using greedy-like decoding I’d expect no effect from SMB.
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 5 months ago
@DanielePaliotta @avnermay @srush_nlp @tri_dao Nice work! How do you compare this with
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 7 months ago
@alisawuffles @JonathanHayase I wonder: could this be combined with identifying under-trained tokens to figure out how the training mixture differs from the BPE mixture?
Quoting Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr) · 9 months ago
Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models. abs: code: Cohere presents a method for consistently identifying glitch tokens across both open-source and closed-source LLMs.
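For a feel of what detecting under-trained tokens can look like, here is one crude indicator (not the Magikarp paper's actual method, which is more careful): tokens that rarely or never appear in training tend to keep (un)embedding rows that sit unusually close to the mean of the matrix.

```python
import numpy as np

def undertrained_token_candidates(E, k=50):
    """Rank tokens by how close their (un)embedding row is to the
    mean row; unusually close rows are candidates for under-trained
    'glitch' tokens. E has shape (vocab_size, d). A crude heuristic,
    not the method from the quoted paper."""
    dists = np.linalg.norm(E - E.mean(axis=0), axis=1)
    return np.argsort(dists)[:k]  # token ids with the smallest distances
```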
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 7 months ago
RT @xiangrenNLP: Congratulations to the GDM @GoogleDeepMind team on their best paper award at #ICML2024 & Appreciate @afedercooper's shout…
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 7 months ago
Congratulations on this well-deserved award for a brilliant paper! And thank you for the shout-out :)
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 7 months ago
@andreasgrv @StatMLPapers Very cool! Mapping this to the problem in my paper, their technique can be used to reconstruct a model’s embedding matrix knowing only a few of its entries and the model image. Funnily enough, I just learned about ADMM while working on some follow-up work.
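For intuition about the reconstruction being described: it is a low-rank matrix completion problem. Below is a toy solver using iterative singular-value soft-thresholding (soft-impute) rather than the ADMM formulation mentioned above; `lam` and `iters` are illustrative hyperparameters, not values from any paper.

```python
import numpy as np

def soft_impute(M_obs, mask, lam=1.0, iters=200):
    """Complete a low-rank matrix from a few observed entries.
    M_obs: matrix holding the observed values (other entries ignored)
    mask:  boolean array, True where an entry is observed.
    Repeatedly fills the missing entries with the current estimate,
    then shrinks singular values to keep the estimate low-rank."""
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = (U * np.maximum(s - lam, 0.0)) @ Vt  # singular-value shrinkage
    return X
```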
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 8 months ago
Jealous that Will gets to mess around with Rust as part of his research.
Quoting William Merrill (@lambdaviking) · 8 months ago
📜 New preprint w/ @nlpnoah and @yanaiela that evaluates the novelty of LM-generated text using our n-gram search tool Rusty-DAWG 🐶 Code: Paper:
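The novelty statistic itself is simple; the hard part, which Rusty-DAWG exists to solve, is answering n-gram membership queries against an entire pretraining corpus. A toy sketch with a Python set standing in for the DAWG index:

```python
def ngram_novelty(tokens, corpus_ngrams, n=4):
    """Fraction of n-grams in a generated text that do NOT occur in
    the corpus. `corpus_ngrams` is a set of n-token tuples; at real
    corpus scale you need an index like Rusty-DAWG instead."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    return sum(g not in corpus_ngrams for g in grams) / len(grams)
```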
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 8 months ago
RT @wellecks: What do nucleus sampling, tree-of-thought, and PagedAttention have in common? They're all part of our new survey: "From Deco…
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 8 months ago
Grateful to be part of this multi-institution effort to document the state of LLM generation. Interested in decoding algorithms? Start by reading our survey paper 🤓
Quoting Sean Welleck (@wellecks) · 8 months ago
What do nucleus sampling, tree-of-thought, and PagedAttention have in common? They're all part of our new survey: "From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models"
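Of the three methods name-checked above, nucleus (top-p) sampling is the easiest to show in a few lines: sample from the smallest set of tokens whose cumulative probability exceeds p. A minimal sketch, illustrative rather than the survey's reference code:

```python
import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    """Nucleus (top-p) sampling: renormalize over the smallest
    high-probability set of tokens whose total mass exceeds p."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]  # token ids, most probable first
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    keep = order[:cutoff]            # the nucleus
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))
```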
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 8 months ago
RT @srush_nlp: Everything would be simpler if we all just read this tutorial.
Matthew Finlayson ✈️ NeurIPS (@mattf1n) · 8 months ago
RT @jaspreetranjit_: 📢 Can LLMs 🤖 assist social workers 👩🏻‍💻 in characterizing discourse on social issues? We introduce OATH-Frames: a res…