Dr. Pedro Rodriguez @[email protected]
@EntilZhaPR
Followers: 741 · Following: 519 · Statuses: 1K
Researcher @MetaAI FAIR · CS PhD: UMD 🐢 @clipumd · UGrad: Berkeley CS 🐻 · Natural Language Processing: QA + Retrieval, LMs + Eval · He/Him 🏳️‍🌈
Seattle, WA
Joined February 2011
The #ACL2023 program is up on both UnderlineIO and MiniConf 🥳! Has there ever been something you wanted on the conference site 🙏? Now's your time to make it so 😎! We're open to contributions! RocketChat 🚀 is back too!
💬 12 · 🔁 0 · ⭐ 2
RT @garrethleee: 🚀 With Meta's recent paper replacing tokenization in LLMs with patches 🩹, I figured that it's a great time to revisit how…
💬 0 · 🔁 235 · ⭐ 0
RT @AkshatS07: Been waiting for this one, a strong step in removing tokenization from LLMs. Congrats to the team!
💬 0 · 🔁 3 · ⭐ 0
RT @jaseweston: Byte Latent Transformer 🥪🥪🥪 Introduces dynamic patching of bytes & scales better than BPE
💬 0 · 🔁 38 · ⭐ 0
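The BLT retweets above center on one idea: replacing a fixed BPE vocabulary with variable-length byte "patches" whose boundaries depend on how predictable the next byte is. As a rough illustration of that general idea — this is a toy unigram-frequency heuristic, not the actual BLT algorithm, which uses a learned entropy model — a sketch:

```python
# Toy sketch of dynamic byte patching (illustration only, not the BLT implementation).
# Idea: split a byte stream into variable-length patches, starting a new patch
# wherever the next byte is "surprising" under a simple model. Here the model is
# just byte frequency estimated from the input itself.
from collections import Counter
import math

def patch_bytes(data: bytes, threshold: float = 2.0, max_patch: int = 8):
    """Split `data` into patches; a new patch starts when the next byte's
    self-information (-log2 p) under a unigram model exceeds `threshold`,
    or when the current patch reaches `max_patch` bytes."""
    counts = Counter(data)
    total = len(data)
    patches, current = [], bytearray()
    for b in data:
        surprise = -math.log2(counts[b] / total)
        if current and (surprise > threshold or len(current) >= max_patch):
            patches.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches

# Predictable runs get grouped; the rare byte 'Q' opens a fresh patch.
patches = patch_bytes(b"aaaaaaaaaaaaaaaaQaaaaaaaa")
```

The hedged intuition matches the tweets: frequent, predictable bytes are cheap and can be chunked together, while high-surprise bytes deserve a boundary — which is why such models can scale better than a fixed tokenizer.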
RT @ben_mlr: Groundbreaking scaling trends for Byte-level Language Modeling with the new BLT architecture 🚀 More insights in the thread 🧵
💬 0 · 🔁 3 · ⭐ 0
RT @sriniiyer88: New paper! Byte-Level models are finally competitive with tokenizer-based models with better inference efficiency and robu…
💬 0 · 🔁 22 · ⭐ 0
RT @gargighosh: Sharing new research from my team- 1)Dynamic Byte Latent Transformer- First byte level model that matches current LLM perfo…
💬 0 · 🔁 6 · ⭐ 0
RT @ArtidoroPagnoni: 🚀 Introducing the Byte Latent Transformer (BLT) – An LLM architecture that scales better than Llama 3 using byte-patch…
💬 0 · 🔁 139 · ⭐ 0
RT @AIatMeta: Today is a good day for open science. As part of our continued commitment to the growth and development of an open ecosystem…
💬 0 · 🔁 515 · ⭐ 0
RT @AIatMeta: Newly published work from FAIR, Chameleon: Mixed-Modal Early-Fusion Foundation Models. This research presents a family of ea…
💬 0 · 🔁 192 · ⭐ 0
RT @violet_zct: 🚀 Excited to introduce Chameleon, our work in mixed-modality early-fusion foundation models from last year! 🦎 Capable of un…
💬 0 · 🔁 19 · ⭐ 0
RT @ArmenAgha: I’m excited to announce our latest paper, introducing a family of early-fusion token-in token-out (gpt4o….), models capable…
💬 0 · 🔁 226 · ⭐ 0
RT @sriniiyer88: New paper! How to train LLMs to effectively answer questions on new documents? Introducing *pre-instruction-tuning* - ins…
💬 0 · 🔁 33 · ⭐ 0
@IkonPass Reservations for @SummitSnow411 Alpental have run out for today? I didn't expect this, since conditions will be marginal and the webcams show Armstrong/Alpental isn't busy. What's the point of a season pass if I can't even go on non-busy days?
💬 0 · 🔁 0 · ⭐ 0
@josephimperial_ I had a much easier time with videos after swapping to DaVinci Resolve (from Premiere). Still need a sweet idea, but at least it's easier to execute on.
💬 0 · 🔁 0 · ⭐ 2
@soldni @hipsterelectron I like Copilot in general, but I've mostly disabled it for a UI/UX reason: at least in VS Code, it fights with IntelliSense pretty badly. Ideally I'd like to be able to easily see both autocomplete options and accept either one. Any suggestions on settings to do that?
💬 0 · 🔁 0 · ⭐ 0
@complingy I'd guess it's because they're single-use. If someone stole one after you'd already used it, it would be useless.
💬 1 · 🔁 0 · ⭐ 1