Srini Iyer

@sriniiyer88

1K Followers · 628 Following · 132 Statuses

Research Scientist at Facebook AI Research

Seattle, WA · Joined February 2012
Srini Iyer (@sriniiyer88) · 2 months
New paper! Byte-level models are finally competitive with tokenizer-based models, with better inference efficiency and robustness! Dynamic patching is the answer! Read all about it here: (1/n)
1 reply · 22 retweets · 88 likes
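The "dynamic patching" in the announcement above refers to BLT's entropy-based patcher: a small byte-level language model scores every position, and a new patch starts wherever next-byte entropy crosses a threshold, so hard-to-predict spans get short patches, predictable spans get long ones, and the large latent transformer runs once per patch instead of once per byte. A minimal sketch of that segmentation idea, assuming a generic PyTorch causal byte LM (`byte_lm`, its call signature, and the threshold value are illustrative stand-ins, not the released BLT code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def entropy_patches(byte_lm, byte_ids: torch.Tensor, threshold: float = 2.0):
    """Segment a byte sequence into patches at high-entropy positions.

    byte_ids: LongTensor of shape (seq_len,), values in [0, 255].
    byte_lm:  any causal model mapping (1, seq_len) -> (1, seq_len, 256)
              next-byte logits.
    Returns a list of (start, end) spans covering the whole sequence.
    """
    logits = byte_lm(byte_ids.unsqueeze(0)).squeeze(0)   # (seq_len, 256)
    logp = F.log_softmax(logits, dim=-1)
    entropy = -(logp.exp() * logp).sum(dim=-1)           # nats per position

    starts = [0]
    for i in range(1, byte_ids.numel()):
        # The LM was uncertain about byte i given its prefix: cut here.
        if entropy[i - 1] > threshold:
            starts.append(i)
    ends = starts[1:] + [byte_ids.numel()]
    return list(zip(starts, ends))
```

Because the expensive global model takes one step per patch, fewer and larger patches are what buy the inference-efficiency gains the tweet mentions.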
Srini Iyer (@sriniiyer88) · 27 days
We're hiring PhD interns for Summer 2025 in Seattle to work with us on improving BLT even more! If this is something that excites you, reach out to me via DM or email ASAP!
AI at Meta (@AIatMeta) · 2 months
New from Meta FAIR — Byte Latent Transformer: Patches Scale Better Than Tokens introduces BLT, which, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency & robustness. Paper ➡️
4 replies · 28 retweets · 317 likes
Srini Iyer (@sriniiyer88) · 2 months
BLT-related post by Meta AI: eliminate all tokenization, once and for all!
(Quote-tweeting the same @AIatMeta announcement shown above.)
0 replies · 2 retweets · 10 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @dimitrizho: Meta's Byte Latent Transformer (BLT) paper looks like the real deal. Outperforming tokenization models even up to their tes…
0 replies · 1 retweet · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @edkesuma: Gm. Woke up to a new paper on Byte Latent Transformers (BLT). Now you can increase model size without increasing inference…
0 replies · 2 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @PowerSystemAuto: Meta AI's Byte Latent Transformer (BLT) is revolutionizing the tokenization process, enhancing scalability and efficie…
0 replies · 1 retweet · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @Smol_AI: [13 Dec 2024] Meta BLT: Tokenizer-free, Byte-level LLM. A few months ago @karpathy noted that tokeni…
0 replies · 6 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @ZainHasan6: Pretty cool work on a tokenization-less transformer from Meta! > Byte Latent Transformer (BLT), a byte-level LLM architecture,…
0 replies · 4 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @AkshatS07: Been waiting for this one, a strong step in removing tokenization from LLMs. Congrats to the team!
0 replies · 3 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @jmbollenbacher_: This could be one of the biggest AI papers of the year, if it really works as well as they report in this paper. It's…
0 replies · 3 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @_xjdr: Llamas ... Tokenizer Free?! USING ENTROPY STEERING?!?!! sometimes the universe conspires to make a paper just for you and it f…
0 replies · 38 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @scaling01: I can rest now🥲 I have gathered all the infinity stones. thanks @karpathy
0 replies · 13 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @AaronJaech: Maybe if this gets enough retweets, the GenAI team will use it in their next Llama model?
0 replies · 4 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @liliyu_lili: We scaled up Megabyte and ended up with a BLT! A pure byte-level model has a steeper scaling law than the BPE-based mod…
0 replies · 9 retweets · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @ArmenAgha: Have been waiting for this one to come out for a bit. Congrats @ArtidoroPagnoni and the team!
0 replies · 1 retweet · 0 likes
Srini Iyer (@sriniiyer88) · 2 months
RT @jaseweston: Byte Latent Transformer 🥪🥪🥪 Introduces dynamic patching of bytes & scales better than BPE
0 replies · 38 retweets · 0 likes
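The "scales better than BPE" half of that summary is largely a compute-allocation argument: the global latent transformer takes one step per patch, so raising the average patch size either shortens the sequence it sees or frees FLOPs for a larger latent model at matched inference cost. A back-of-envelope sketch, treating ~4 bytes per BPE token and ~8 bytes per patch as assumed averages rather than measured values:

```python
def latent_steps(n_bytes: int, avg_unit_bytes: float) -> int:
    """Positions the global model must process for one document."""
    return round(n_bytes / avg_unit_bytes)

doc_bytes = 4096
bpe_steps = latent_steps(doc_bytes, 4.0)  # ~1024 token positions
blt_steps = latent_steps(doc_bytes, 8.0)  # ~512 patch positions

# Halving the positions roughly halves the global model's per-layer cost,
# headroom that can instead go into a larger latent model.
print(bpe_steps, blt_steps, bpe_steps / blt_steps)  # -> 1024 512 2.0
```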
Srini Iyer (@sriniiyer88) · 2 months
RT @universeinanegg: We broke the byte bottleneck.
0 replies · 2 retweets · 0 likes