GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home — and then share that creation with others. No code required.
The full set of OpenAI Dev Day talks is now online! Prep for
@nickaturley
diving into GPTs with talking pirates, TODOs, rickrolls, rap battles and mood lighting
Bio s/currently/formerly/g. Wrapping up seven years of writing algorithms and cussing out SEVs at Instagram this week. Hard to believe we grew from 50 employees to serve a billion people. Don’t be a stranger
#follow4follow
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
The biggest change to online discourse in the past decade has been the algorithmic feed
Thomas Dimson, one of the early engineers at Insta, developed their algorithmic feed, explore & story ranking
We talked about how it came to be & more on this week's ep!
Although I spent 7 years at Instagram introducing content ranking systems, I haven’t spoken much about it — I wanted to start with why it makes more sense to focus on transparency in experiments instead of ‘algorithms’
The size and sophistication of modern algorithmic ranking models make it impossible to fully understand how they make predictions, writes
@turtlesoupy
, who built Instagram’s first content ranking system, but social platforms can still be more transparent.
30 years of centralization tamed the Simpsons from a symbol of American counter-culture into mass media. Today, we liberate their free spirit in the “Fucked Crypto Homers” NFT.
20K purchasable AI-authored Homers minted eternally on the blockchain 💸💸💸
I’m not going to do it, just thinking about it, just thinking about it, just thinking about it, just thinking about it, just thinking about it, just thinking
Still committed to no database, I just deployed a url-shortener to get out of those horrible, horrible base64ed URL links for TWDNE . To celebrate, please enjoy this wonderful introduction to clowncomputing
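The tweet doesn't say how the no-database shortener works, but one way to pull it off is my assumption here: make the short code itself reversible by compressing the long parameter string and re-encoding it, so expanding a link needs no lookup table at all. A minimal sketch:

```python
import base64
import zlib

# Hypothetical sketch (not the actual deployment): shorten a long
# base64ed parameter string by compressing it, then re-encoding with
# URL-safe base64. Decoding is pure computation -- no database needed.
def shorten(payload: str) -> str:
    compressed = zlib.compress(payload.encode())
    return base64.urlsafe_b64encode(compressed).decode().rstrip("=")

def expand(code: str) -> str:
    # Restore the base64 padding stripped by shorten(), then invert.
    padded = code + "=" * (-len(code) % 4)
    return zlib.decompress(base64.urlsafe_b64decode(padded)).decode()

long_params = "seed=1234567890&psi=0.7&grid=0&" * 4
code = shorten(long_params)
assert expand(code) == long_params
```

This only shortens links whose payloads compress well (repetitive query strings do); truly random parameters would need a stored mapping, i.e. a database.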
2/n made you look? Seriously, that's the problem. You are infinitely more likely to "click" on something that has a little thread 🧵 icon in it as opposed to a vanilla tweet.
5/n that's a "network effect" where a small ranking change has a huge effect on online discourse. Look at me! I'm writing a thread right now and I hate them.
Is anyone an expert in modern language model tokenization? I'm confused about a conceptual issue: most schemes are not prefix-free, but you train only on the canonical (optimal) encodings. While generating samples, it seems like you can see impossible encodings with poor probability estimates?
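The issue in the question can be shown with a toy vocabulary (my own illustration, not any specific tokenizer): because a BPE-style vocabulary is not prefix-free, one string decodes from many different token sequences, but training data only ever contains the canonical merged one.

```python
# Toy BPE-style vocabulary: "ab" and "abab" share prefixes with "a",
# so the code is not prefix-free.
vocab = ["a", "b", "ab", "abab"]

def all_encodings(s):
    """Enumerate every token sequence that decodes back to s."""
    if not s:
        return [[]]
    results = []
    for tok in vocab:
        if s.startswith(tok):
            for rest in all_encodings(s[len(tok):]):
                results.append([tok] + rest)
    return results

encs = all_encodings("abab")
print(encs)
# Five distinct sequences decode to "abab", e.g. ["abab"], ["ab", "ab"],
# ["a", "b", "a", "b"]. Greedy BPE would only ever emit ["abab"], so a
# model trained on canonical encodings assigns poorly calibrated
# probabilities to the others -- yet nothing stops it from sampling them.
```

So the "impossible" encodings are token sequences that are valid under the vocabulary but never appear in training because the tokenizer always picks the canonical merge.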
4/n weighting on P(click), 🧵 posts dominate others and get ranked to the top of feed. But that encourages you to /post/ more 🧵 content so you get to the top of other people's feeds.
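The dynamic in this tweet can be sketched with a hypothetical toy ranker (the posts, probabilities, and scoring function are all invented for illustration): if the score weights P(click) and thread posts have even a small click-through edge, they monopolize the top of the feed.

```python
# Invented example data: thread posts get a slightly higher P(click)
# simply because the 🧵 icon draws clicks.
posts = [
    {"id": "plain-1",  "is_thread": False, "p_click": 0.05},
    {"id": "thread-1", "is_thread": True,  "p_click": 0.09},
    {"id": "plain-2",  "is_thread": False, "p_click": 0.06},
    {"id": "thread-2", "is_thread": True,  "p_click": 0.08},
]

def score(post, click_weight=1.0):
    # Illustrative scoring: rank purely by weighted click probability.
    return click_weight * post["p_click"]

ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])
# Both threads outrank both plain posts despite tiny P(click) gaps --
# the small per-item edge compounds into total dominance of the feed,
# which then incentivizes authors to post more threads.
```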
@rbranson
You are probably right. Let me wave from this subthread to the one person 5 years from now who has the same unanswered question and after a day of googling lands on this unsatisfying tweet. 👋
@rememberlenny
@OpenAI
Damn, I’ll miss you man! Seems like yesterday you were walking me through the ChatGPT data model while my house slowly disappeared in the background. Will be closely following for what comes next 🫡
@devansh20la
@hardmaru
I mulled over whether to fix the random seed but it seemed more fun not to. The permalink ("link" button) should give you a stable one
@colinahiggins
Personally, just that they keep getting ranked up even though my preferences are against them. Also it turns Twitter towards a long-form format which the interface is really not designed for.
I'm investigating language models for 3D voxel spaces by treating contiguous runs as variable length "words". Apparently it works? This transformer (reformer) model can chisel out a torus in a 20x20x20 voxel grid.
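A minimal sketch of the "contiguous runs as words" idea (the encoding details are my assumption; the tweet doesn't specify them): flatten the voxel grid in scan order and run-length encode it, so each (value, length) run becomes one variable-length "word" for the sequence model.

```python
def runs_to_words(grid_flat):
    """Run-length encode a flattened voxel grid into (value, length) words."""
    words = []
    run_val, run_len = grid_flat[0], 1
    for v in grid_flat[1:]:
        if v == run_val:
            run_len += 1
        else:
            words.append((run_val, run_len))
            run_val, run_len = v, 1
    words.append((run_val, run_len))
    return words

def words_to_runs(words):
    """Invert the encoding back to the flat voxel sequence."""
    out = []
    for val, length in words:
        out.extend([val] * length)
    return out

# One row of a binary voxel grid: empty (0) vs filled (1).
flat = [0, 0, 0, 1, 1, 0, 1, 1, 1, 0]
tokens = runs_to_words(flat)
print(tokens)  # [(0, 3), (1, 2), (0, 1), (1, 3), (0, 1)]
assert words_to_runs(tokens) == flat
```

The payoff is sequence length: a mostly-empty 20x20x20 grid is 8,000 voxels but far fewer runs, which matters for a transformer's context window.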