Dr. Pedro Rodriguez @[email protected]

@EntilZhaPR

Followers
741
Following
519
Statuses
1K

Researcher @MetaAI FAIR CS PhD: UMD 🐢, @clipumd UGrad: Berkeley CS 🐻 Natural Language Processing - QA+Retrieval LMs+Eval He/Him 🏳️‍🌈

Seattle, WA
Joined February 2011
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 years
The #ACL2023 program will be on both UnderlineIO and MiniConf 🥳! Has there ever been something you wanted on the conference site 🙏? Now's your time to make it so 😎! We're open to contributions. RocketChat 🚀 is back too!
12
0
2
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @garrethleee: 🚀 With Meta's recent paper replacing tokenization in LLMs with patches 🩹, I figured that it's a great time to revisit how…
0
235
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @AkshatS07: Been waiting for this one, a strong step in removing tokenization from LLMs. Congrats to the team!
0
3
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @jaseweston: Byte Latent Transformer 🥪🥪🥪 Introduces dynamic patching of bytes & scales better than BPE
0
38
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @universeinanegg: We broke the byte bottleneck.
0
2
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @ben_mlr: Groundbreaking scaling trends for Byte-level Language Modeling with the new BLT architecture 🚀 More insights in the thread 🧵
0
3
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @sriniiyer88: New paper! Byte-Level models are finally competitive with tokenizer-based models with better inference efficiency and robu…
0
22
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @gargighosh: Sharing new research from my team- 1)Dynamic Byte Latent Transformer- First byte level model that matches current LLM perfo…
0
6
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
2 months
RT @ArtidoroPagnoni: 🚀 Introducing the Byte Latent Transformer (BLT) – An LLM architecture that scales better than Llama 3 using byte-patch…
0
139
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
7 months
To clarify, we had a comms mixup (due to visa issues) on who is presenting the paper. This can only really be done during business hours and *should not* be done on personal time. If the deadline is truly immovable, the announcement should have gone out earlier, e.g., Monday.
0
0
1
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
8 months
RT @AIatMeta: Today is a good day for open science. As part of our continued commitment to the growth and development of an open ecosystem…
0
515
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
9 months
RT @AIatMeta: Newly published work from FAIR, Chameleon: Mixed-Modal Early-Fusion Foundation Models. This research presents a family of ea…
0
192
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
9 months
RT @violet_zct: 🚀 Excited to introduce Chameleon, our work in mixed-modality early-fusion foundation models from last year! 🦎 Capable of un…
0
19
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
9 months
RT @ArmenAgha: I’m excited to announce our latest paper, introducing a family of early-fusion token-in token-out (gpt4o….), models capable…
0
226
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
11 months
RT @sriniiyer88: New paper! How to train LLMs to effectively answer questions on new documents? Introducing *pre-instruction-tuning* - ins…
0
33
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
1 year
@IkonPass Reservations for @SummitSnow411 Alpental have run out for today? I didn't expect this, since conditions will be marginal and the webcams show Armstrong/Alpental isn't busy. What's the point of a season pass if I can't even go on non-busy days?
0
0
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
1 year
@josephimperial_ I had a much easier time with videos after swapping to DaVinci Resolve (from Premiere). Still need a sweet idea, but at least it's easier to execute on.
0
0
2
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
1 year
@soldni @hipsterelectron I like Copilot in general, but I've mostly disabled it for a UI/UX reason: at least in VS Code, it fights with IntelliSense pretty badly. Ideally I'd like to be able to see both autocomplete options and easily accept either one. Suggestions on settings to do that?
0
0
0
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
1 year
@boydgraber FWIW, you can do this in VS Code natively and in vim via plugins
0
0
2
@EntilZhaPR
Dr. Pedro Rodriguez @[email protected]
1 year
@complingy I'd guess it's since they are single-use. If someone stole them after you'd already used them, they would be useless.
1
0
1