![Srini Iyer Profile](https://pbs.twimg.com/profile_images/1010397129368469505/-4syvDVU_x96.jpg)
Srini Iyer
@sriniiyer88
1K Followers · 628 Following · 132 Statuses
Research Scientist at Facebook AI Research
Seattle, WA
Joined February 2012
We're hiring PhD interns for Summer 2025 in Seattle to work with us on improving BLT even more! If this is something that excites you, reach out to me via DM or email ASAP!
New from Meta FAIR — Byte Latent Transformer: Patches Scale Better Than Tokens introduces BLT, which, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency & robustness. Paper ➡️
4 replies · 28 retweets · 317 likes
BLT-related post by Meta AI - eliminate all tokenization once and for all! (quoting the Meta FAIR announcement above)
0 replies · 2 retweets · 10 likes
RT @dimitrizho: Meta's Byte Latent Transformer (BLT) paper looks like the real deal. Outperforming tokenization models even up to their tes…
0 replies · 1 retweet · 0 likes
RT @edkesuma: Gm. Woke up to a new paper on Byte Latent Transformers (BLT). Now you can increase model size without increasing inference…
0 replies · 2 retweets · 0 likes
RT @PowerSystemAuto: Meta AI's Byte Latent Transformer (BLT) is revolutionizing the tokenization process, enhancing scalability and efficie…
0 replies · 1 retweet · 0 likes
RT @ZainHasan6: Pretty cool work on a tokenization-less transformer from Meta! > Byte Latent Transformer (BLT), byte-level LLM architecture,…
0 replies · 4 retweets · 0 likes
RT @AkshatS07: Been waiting for this one, a strong step in removing tokenization from LLMs. Congrats to the team!
0 replies · 3 retweets · 0 likes
RT @jmbollenbacher_: This could be one of the biggest AI papers of the year, if it really works as well as they report in this paper. It's…
0 replies · 3 retweets · 0 likes
RT @_xjdr: Llamas ... Tokenizer Free?! USING ENTROPY STEERING?!?!! sometimes the universe conspires to make a paper just for you and it f…
0 replies · 38 retweets · 0 likes
RT @AaronJaech: Maybe if this gets enough retweets, the genai team will use it in their next llama model?
0 replies · 4 retweets · 0 likes
RT @liliyu_lili: We scaled up Megabyte and ended up with a BLT! A pure byte-level model has a steeper scaling law than the BPE-based mod…
0 replies · 9 retweets · 0 likes
RT @ArmenAgha: Have been waiting for this one to come out for a bit. Congrats @ArtidoroPagnoni and the team!
0 replies · 1 retweet · 0 likes
RT @jaseweston: Byte Latent Transformer 🥪🥪🥪 Introduces dynamic patching of bytes & scales better than BPE
0 replies · 38 retweets · 0 likes
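The "dynamic patching" and "entropy steering" in these posts refer to how BLT segments its input: a small byte-level LM scores each next byte, and a patch boundary is opened whenever that entropy crosses a threshold, so long runs of predictable bytes collapse into a single patch. Below is a minimal Python sketch of that global-threshold idea; the function name, the toy entropy values, and the threshold are illustrative assumptions, not the paper's actual implementation (the open-sourced training code mentioned below is the real reference).

```python
def entropy_patches(byte_entropies: list[float], threshold: float = 2.0) -> list[list[int]]:
    """Group byte positions into variable-length patches.

    A new patch starts whenever the small byte-level LM's next-byte
    entropy exceeds `threshold` (a hard-to-predict byte -> boundary),
    so stretches of easy, low-entropy bytes fold into one cheap patch
    for the large latent transformer.
    """
    patches: list[list[int]] = []
    current: list[int] = []
    for pos, h in enumerate(byte_entropies):
        if current and h > threshold:  # high uncertainty: close the current patch
            patches.append(current)
            current = []
        current.append(pos)
    if current:
        patches.append(current)
    return patches


# Toy usage with made-up entropies for a 10-byte input:
# the spikes at positions 2 and 6 open new patches.
h = [0.5, 0.4, 3.1, 0.2, 0.3, 0.2, 2.8, 0.1, 0.4, 0.3]
print(entropy_patches(h))  # [[0, 1], [2, 3, 4, 5], [6, 7, 8, 9]]
```

Fewer, larger patches mean fewer forward steps for the big model, which is where the inference-efficiency gains claimed in the announcement come from.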
@ArtidoroPagnoni @ramakanth1729 @EntilZhaPR @__JohnNguyen__ @ben_mlr @margs_li @violet_zct @liliyu_lili @jaseweston @LukeZettlemoyer @gargighosh @ml_perception @universeinanegg Plus, all the code for training is open-sourced here!
0 replies · 0 retweets · 4 likes