alex peysakhovich 🤖

@alex_peys

Followers: 5,546
Following: 792
Media: 150
Statuses: 1,268
@alex_peys
alex peysakhovich 🤖
11 months
really love when authors do things like explain equations in-line with colors etc... just makes papers so much easier to read.
Tweet media one
39
336
4K
@alex_peys
alex peysakhovich 🤖
1 year
once did a screening interview with a “famous ml hedge fund”. got a leetcode-style problem to find connected components in a graph. they wanted bfs or dfs or whatever. i took the svd of the laplacian and counted the number of 0s. they didn’t pass me because “you can’t use numpy”
32
68
2K
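A minimal sketch of the trick in the tweet above, for the curious: the number of zero eigenvalues (equivalently, zero singular values) of a graph's Laplacian equals its number of connected components. The graph, tolerance, and dense-matrix setup here are illustrative.

```python
# count connected components from the null space of the graph Laplacian L = D - A
import numpy as np

def connected_components_via_laplacian(adj, tol=1e-8):
    """Count connected components of an undirected graph from its dense adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj          # L = D - A
    singular_values = np.linalg.svd(laplacian, compute_uv=False)
    # the multiplicity of the zero singular value equals the number of components
    return int(np.sum(singular_values < tol))

# two disjoint triangles -> 2 components
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
print(connected_components_via_laplacian(A))  # 2
```

For huge graphs you would reach for a sparse or randomized decomposition rather than a dense SVD, which is the point made further down this thread.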
@alex_peys
alex peysakhovich 🤖
25 days
claude3.5 is really good, but also still a transformer
Tweet media one
30
42
639
@alex_peys
alex peysakhovich 🤖
20 days
just released a paper with will berman on multimodal inputs for image generation. main idea: describing things just in text is often hard. can you train a model that uses interleaved text/image prompts for image generation? the answer is yes. 🧵
Tweet media one
14
58
395
@alex_peys
alex peysakhovich 🤖
1 year
standard ml: oh no my model is memorizing the training set, better add some regularization to make that not happen
llm ml: ugh it’s hallucinating, why can’t it just memorize some of the training set
10
31
336
@alex_peys
alex peysakhovich 🤖
10 months
living the dream of the gpu upper middle class
Tweet media one
19
8
305
@alex_peys
alex peysakhovich 🤖
7 months
facebook didn't keep it secret, here is the paper that explains the early comment ranking system for exactly this issue
@paulnovosad
Paul Novosad
7 months
Engineers have figured out how to cut back on the toxicity of the internet — but firms who are good at it keep it secret as a competitive advantage. A fascinating question of private vs. public interests.
Tweet media one
9
10
152
4
37
296
@alex_peys
alex peysakhovich 🤖
1 year
@Apoorva__Lal you can also estimate null space dimension by “uniformly” randomly sampling vectors, hitting them with the matrix, and seeing what % are 0. no numpy required
2
1
202
@alex_peys
alex peysakhovich 🤖
8 months
ran gpt4 128k context on the "1 useful document + K distractors" task from our "attention sorting" paper, seems like very long contexts (more than 32k) don't work that well. 32k is still extremely impressive though! also claude2 clearly has some nice trick behind the scenes
Tweet media one
9
27
202
@alex_peys
alex peysakhovich 🤖
1 year
i worked at fb for 9 years, this is the first time i have ever seen people excited about an fb original product at launch time
7
7
192
@alex_peys
alex peysakhovich 🤖
6 months
the foundation of machine learning is that inside every big matrix lives a much smaller matrix and you should just use that
@krishnanrohit
rohit
6 months
Text embeddings are the unsung heroes of making ai work. Can't believe we found a way to make language into numbers, the matrix stuff backprop etc afterwards feels almost straightforward once it's done. All tricks are downstream of this. With numbers you can add them (merge),
8
13
116
7
16
193
@alex_peys
alex peysakhovich 🤖
9 months
@infoxiao this is literally the recipe for llms
Tweet media one
3
21
172
@alex_peys
alex peysakhovich 🤖
3 months
this is just proof that agi is achieved, we can now simulate a real software engineer perfectly
@a_karvonen
Adam Karvonen
3 months
Interesting watch. In an official Devin demo, Devin spent six hours writing buggy code and fixing its buggy code when it could have just run the two commands in the repo's README.
5
18
277
1
11
157
@alex_peys
alex peysakhovich 🤖
1 year
there is a huge gain to be made by every company on the planet by just running every file in their codebase through gpt4 with the prompt "what's wrong with this?"
@DimitrisPapail
Dimitris Papailiopoulos
1 year
GPT-4 "discovered" the same sorting algorithm as AlphaDev by removing "mov S P". No RL needed. Can I publish this on nature? here are the prompts I used (excuse my idiotic typos, but gpt4 doesn't mind anyways)
96
445
3K
3
10
149
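A hedged sketch of the loop described above, assuming the current `openai` Python client and an `OPENAI_API_KEY` in the environment; the repo path, model name, and prompt wording are placeholders.

```python
# ask gpt-4 "what's wrong with this?" for every python file in a (hypothetical) repo
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_file(path: Path) -> str:
    code = path.read_text()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a careful code reviewer."},
            {"role": "user", "content": f"what's wrong with this?\n\n{code}"},
        ],
    )
    return response.choices[0].message.content

for path in Path("my_repo").rglob("*.py"):  # hypothetical repo path
    print(f"--- {path} ---")
    print(review_file(path))
```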
@alex_peys
alex peysakhovich 🤖
1 year
one of the reasons i left behavioral science (much happier now anyway, so it all worked out!) was that it was pretty clear many people were bs-ing and when getting jobs/tenure/etc... depends on # papers published, it is very hard to compete with people willing to make stuff up
7
8
130
@alex_peys
alex peysakhovich 🤖
9 months
lots of us are too busy working on this stuff to get in these debates on twitter, but yann is completely right here (just like he was with the cake thing, and the neural net thing, and and and)
@ylecun
Yann LeCun
9 months
The heretofore silent majority of AI scientists and engineers who
- do not believe in AI extinction scenarios or
- believe we have agency in making AI powerful, reliable, and safe and
- think the best way to do so is through open source AI platforms
NEED TO SPEAK UP!
168
405
2K
4
4
126
@alex_peys
alex peysakhovich 🤖
1 year
@ben_golub sometimes those are too complex to navigate
1
0
124
@alex_peys
alex peysakhovich 🤖
9 months
tfw one of your paper techniques gets rediscovered. now i know how everyone that wrote papers in machine learning between 1980 and 2012 feels
1
4
111
@alex_peys
alex peysakhovich 🤖
20 days
the real trick is dataset construction. we take a (caption, image) dataset, use object detection to find the crop of the image for each object in the caption and tada, we now have a (multimodal caption, image) dataset 3/n
Tweet media one
3
5
103
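The paper's exact pipeline isn't spelled out in the tweet, so the following is only a rough sketch of this kind of dataset construction, using Hugging Face's zero-shot object detection pipeline as a stand-in detector; the checkpoint, the phrase list, and the interleaving format are all illustrative assumptions.

```python
# turn a (caption, image) pair into an interleaved (multimodal caption, image) example
from PIL import Image
from transformers import pipeline

# stand-in open-vocabulary detector; the paper may have used something else entirely
detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

def to_multimodal_caption(caption: str, image: Image.Image, phrases: list[str]):
    """Split `caption` into plain-text pieces and (phrase, crop) pairs."""
    segments, rest = [], caption
    for phrase in phrases:
        before, found, after = rest.partition(phrase)
        if not found:
            continue
        if before:
            segments.append(before)
        hits = detector(image, candidate_labels=[phrase])
        if hits:
            box = max(hits, key=lambda h: h["score"])["box"]
            crop = image.crop((box["xmin"], box["ymin"], box["xmax"], box["ymax"]))
            segments.append((phrase, crop))      # phrase paired with its image crop
        else:
            segments.append(phrase)              # no detection: keep plain text
        rest = after
    if rest:
        segments.append(rest)
    return segments

# usage (hypothetical file; in practice the phrases come from the caption itself,
# e.g. via noun-phrase extraction)
img = Image.open("example.jpg")
segments = to_multimodal_caption("a dog next to a red bicycle", img, ["dog", "red bicycle"])
```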
@alex_peys
alex peysakhovich 🤖
1 year
evergreen question: how does anyone ever do any data cleaning in python? pandas is the worst thing i've ever worked with compared to e.g. tidyverse
18
2
96
@alex_peys
alex peysakhovich 🤖
1 year
@tszzl i find these salaries surprisingly low given the value provided and the working conditions. if you scale cardiac surgeon to 40 hours per week that's ~400k, that's like an L6 at google/fb....
8
1
88
@alex_peys
alex peysakhovich 🤖
1 year
hype cycle for llms is moving so fast and everyone is at totally different points on the curve, it's wild
Tweet media one
5
22
85
@alex_peys
alex peysakhovich 🤖
1 year
ok, it's after 5pm, i am no longer a facebook employee, i will miss FAIR and my awesome colleagues, but it's on to new things! also, now everyone can stop asking me to help their weird uncle get unbanned, you know what they did.
14
0
87
@alex_peys
alex peysakhovich 🤖
11 months
the best conversations i've had about "ai risk" have been with integrity people from social media companies, they have the most experience with having complex systems behave in unintended ways and also deal with internal pressures, e.g. incentives of other teams to get metrics up
5
9
85
@alex_peys
alex peysakhovich 🤖
10 months
authorship norms differ a lot across fields...
cs: "oh, we talked about this at lunch for 5 min, you should be a coauthor!"
econ: "you spent weeks in a library collecting data and you want to be a coauthor? gtfo!"
...you can guess which one creates the more collegial atmosphere
@SimonBowmaker
Simon Bowmaker
10 months
Daron Acemoğlu and David Card on undergraduate vs. graduate research assistants:
Tweet media one
Tweet media two
43
305
3K
3
3
79
@alex_peys
alex peysakhovich 🤖
11 months
linear algebra is the basis for everything that works
3
6
78
@alex_peys
alex peysakhovich 🤖
1 year
the work on fine tuning llms in parameter efficient ways that is coming out is just so cool and clever. really shows the power of open source + flexible frameworks that allow you to easily write model blocks and just type loss.backward()
0
19
79
@alex_peys
alex peysakhovich 🤖
1 year
me: oh what have you been up to today?
gf: not much, just chilling *casually drops 4 nature/science papers in one day*
@AnnieFranco
Annie Franco
1 year
New in Science and Nature: The first four papers from the U.S. 2020 Facebook and Instagram Election Study!
Tweet media one
Tweet media two
Tweet media three
Tweet media four
5
40
172
0
2
78
@alex_peys
alex peysakhovich 🤖
1 year
i stared at this for probably 90 seconds thinking “what else could it be other than epsilon?” had to check replies for the answer
@VisualAlgebra
Matt Macauley
1 year
The look my wife gave me when I immediately shouted “epsilon”, without really thinking it through…
Tweet media one
223
562
25K
4
3
77
@alex_peys
alex peysakhovich 🤖
3 months
its crazy how data inefficient neural net optimization can be - i have a problem where a linear regression gets 80% accuracy but it takes 400k samples and 100+ epochs of training for a 2 layer relu net to match that (my learning rate is fine thanks for asking)
5
4
63
@alex_peys
alex peysakhovich 🤖
10 months
heuristic: if you’re talking about bias-variance tradeoffs you’re doing classic machine learning, if you’re saying “crap i need more gpus”, you’re doing post-modern machine learning
@chrisalbon
Chris Albon
10 months
Recently I heard someone call it “classic machine learning” Like damn bro that hurts
36
16
348
1
4
60
@alex_peys
alex peysakhovich 🤖
1 year
explaining something i'm working on to a jr colleague, he says "oh you're one of those last generation ai people that actually knows math, that's interesting"
3
3
59
@alex_peys
alex peysakhovich 🤖
1 year
one simple statistical hack to improve your language model evaluations: don't take mean(model 1 performance) and compare it to mean(model 2 performance). instead, consider the *paired* statistic mean(model1_acc(question i) - model2_acc(question i)). in many tasks there is huge latent variance across questions, and pairing removes it
Tweet media one
2
4
58
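A minimal sketch of the paired statistic on synthetic data: when a latent per-question difficulty is shared across models, the per-question difference has far lower variance than the difference of the two means. `scipy.stats.ttest_rel` is the matching significance test.

```python
# paired vs unpaired comparison of two models on the same questions
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
difficulty = rng.normal(0, 2, n)                              # latent per-question difficulty
model1 = (rng.normal(0.2, 1, n) > difficulty).astype(float)   # synthetic per-question accuracy
model2 = (rng.normal(0.0, 1, n) > difficulty).astype(float)

# unpaired view: two means, each with its own (large) standard error
print(model1.mean(), model2.mean())

# paired view: mean of per-question differences plus a paired t-test
diff = model1 - model2
print(diff.mean(), diff.std(ddof=1) / np.sqrt(n))             # paired standard error is much smaller
print(stats.ttest_rel(model1, model2))
```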
@alex_peys
alex peysakhovich 🤖
1 year
@GrantStenger agree that in theory bfs/dfs is better but if you compute rarely (are you decomposing 10m vertex graph every second? why?), it doesn’t really matter what you use as long as it runs. we did randomized svd on huge graphs all the time at fb and it was fine.
2
0
59
@alex_peys
alex peysakhovich 🤖
20 days
there's a bunch of work to do, the model has a lot of failure points, but i think overall its a pretty nice proof of concept that multimodality unlocks really interesting stuff in image generation n/n
1
1
58
@alex_peys
alex peysakhovich 🤖
1 year
one reason why “big fraud” is so prevalent in behavioral science is that there are few real world checks, other sciences have more downstream engineering built on them and so it is easier to filter fake effects (not perfect, plenty of bad examples in “harder” science too)
6
3
53
@alex_peys
alex peysakhovich 🤖
6 months
"a man painting a horse" by stable diffusion. it just couldn't decide how to parse that sentence so it just did both
Tweet media one
5
7
53
@alex_peys
alex peysakhovich 🤖
3 months
europe: we are the world champions of shutting down innovation with weird regulation
california: hold my beer
@psychosort
Brian Chau
3 months
The California senate bill to crush OpenAI's competitors is fast tracked for a vote. This is the most brazen attempt to hurt startups and open source yet. 🧵
Tweet media one
31
150
469
1
16
49
@alex_peys
alex peysakhovich 🤖
20 days
the general architecture is not that complicated. vlms are basically about stapling a vision encoder to an llm. well, you can also just staple a diffusion model (or your other favorite image decoder) to the end. 2/n
Tweet media one
1
4
49
@alex_peys
alex peysakhovich 🤖
10 months
@aryehazan random matrix theory and various concentration results are all extremely unintuitive (at least to me).
1
2
49
@alex_peys
alex peysakhovich 🤖
1 year
@PhDemetri just add polynomial terms til it's good
2
1
46
@alex_peys
alex peysakhovich 🤖
1 year
been playing with the @huggingface mteb leaderboard () all day, super interesting dataset with very interesting correlation pattern across tasks. if you're good at one retrieval, you're good at all of them. other stuff? much less predictable
Tweet media one
2
9
45
@alex_peys
alex peysakhovich 🤖
5 months
gemini won’t draw me because im a stereotype apparently.
Tweet media one
Tweet media two
Tweet media three
1
5
46
@alex_peys
alex peysakhovich 🤖
1 year
this is the correct take. llms are the glue code that will allow us to put so many other technologies together. if you think of the llm as originating in machine translation this shouldn't be too surprising - they're exactly great for translating between many different modalities
@peteskomoroch
Pete Skomoroch
1 year
For people not paying close attention to AI right now, all the pieces are coming together at the same time to rapidly transform how we live and work. Here Stanford robotics researchers demonstrate GPT-4 control of a robot which can begrudgingly follow your spoken instructions:
Tweet media one
5
10
60
3
8
43
@alex_peys
alex peysakhovich 🤖
11 months
neural network architectures should just copy whatever corvid brains are doing
@TheDavidSJ
David Schneider-Joseph 🔍
11 months
@norabelrose There’s also at least some tasks on which performance scales linearly with log pallial neuron count.
Tweet media one
5
4
48
1
2
42
@alex_peys
alex peysakhovich 🤖
1 year
Tweet media one
1
6
42
@alex_peys
alex peysakhovich 🤖
1 year
if i'm going to complain about companies not releasing their models/model data, i should give credit: @MosaicML has a nice release of mpt with full transparency. have been playing with the instruct model and it's pretty impressive!
2
2
43
@alex_peys
alex peysakhovich 🤖
25 days
love this problem (claude didn't fall for this version but chatgpt did)
Tweet media one
1
5
42
@alex_peys
alex peysakhovich 🤖
4 months
i loved doing my phd. when i saw the academic market afterwards i noped out but the phd itself was super fun, i had great advisors (one of whom won a nobel while i was his student and still met with me that week to discuss an experiment i was running) and learned a ton
@george_berry
george berry, single 𓊍 engineer
4 months
there's a lot of shit talking grad school on here but genuinely i loved grad school, it was amazing, and i am grateful to the @CornellSoc program for the opportunity
0
0
8
2
0
42
@alex_peys
alex peysakhovich 🤖
1 year
current programming workflow:
1) ask chatgpt to write code to do X
2) remove comments from code and see if it runs
3) pass code back into chatgpt, ask to explain "to an idiot" what the code does
if result of step 3 matches X, i assume it's right and move on to the next part
3
2
40
@alex_peys
alex peysakhovich 🤖
1 year
let's play the "when did the vision pro presentation start?" game
Tweet media one
1
1
39
@alex_peys
alex peysakhovich 🤖
20 days
we use idefics2 (which is mistral + siglip finetuned) with some modifications for the encoder and sdxl for the decoder, train it on a single 8xh100 node and you can get a model that does some neat stuff 4/n
Tweet media one
2
2
40
@alex_peys
alex peysakhovich 🤖
10 months
awesome paper. contains great bangers like: "When we wonder whether the machine is sentient, the machine’s answers draw on the abundant science fiction material found in its training set."
0
13
37
@alex_peys
alex peysakhovich 🤖
19 days
@tszzl america goes up and down but those of us who came here from other countries realize that the us at its worst is better than many places at their best
1
2
37
@alex_peys
alex peysakhovich 🤖
4 months
this is how i feel about state space models vs transformers
@paulgp
Paul Goldsmith-Pinkham
4 months
Tweet media one
9
15
143
0
3
37
@alex_peys
alex peysakhovich 🤖
1 year
also if you do this with social science analysis files you can unpublish like 50% of all papers
1
2
36
@alex_peys
alex peysakhovich 🤖
1 year
the fact that openai didn't release an april fools statement admitting that gpt3 was just 10,000 people in a call center is a bit disappointing
1
1
36
@alex_peys
alex peysakhovich 🤖
1 year
all problems in life can be solved if you realize that within every really big matrix hides a much smaller matrix that preserves most of the information
@rasbt
Sebastian Raschka
1 year
Thanks to parameter-efficient finetuning techniques, you can finetune a 7B LLM on a single GPU in 1-2 h using techniques like low-rank adaptation (LoRA). Just wrote a new article explaining how LoRA works & how to finetune a pretrained LLM like LLaMA:
29
296
2K
0
2
36
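A minimal sketch of the "smaller matrix inside the big matrix" idea with a truncated SVD; the sizes and rank here are made up, but this low-rank intuition is exactly what LoRA-style finetuning exploits.

```python
# low-rank approximation: keep most of a matrix's information in far fewer parameters
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 64)) @ rng.normal(size=(64, 1024))   # rank <= 64
W += 0.01 * rng.normal(size=W.shape)                            # a little noise on top

U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 64
W_k = (U[:, :k] * s[:k]) @ Vt[:k]                               # best rank-k approximation

rel_err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
params = (U[:, :k].size + k + Vt[:k].size) / W.size
print(f"relative error {rel_err:.4f} using {params:.1%} of the parameters")
```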
@alex_peys
alex peysakhovich 🤖
4 months
man it's crazy how there are no more photographers since digital cameras and photoshop came along
6
2
34
@alex_peys
alex peysakhovich 🤖
1 year
the last 3-6 months in ai have really changed my baseline on a lot of things from "i kind of understand some parts of the world" to "i don't fucking know what's going to happen, predicting the future is impossible"
2
0
35
@alex_peys
alex peysakhovich 🤖
1 year
given the way some of these detectors tend to work (look at whether something is highly likely under the model) it seems like any document that the model has memorized will trigger the detector as "AI generated"
@0xgaut
gaut
1 year
someone used an AI detector on the US Constitution and the results are concerning. Explain this, OpenAI!
Tweet media one
463
3K
36K
2
2
34
@alex_peys
alex peysakhovich 🤖
1 year
another for the “you should be suspicious if your machine learning is too good” pile
@SteveStuWill
Steve Stewart-Williams
1 year
Machine learning predicts hit songs from brain responses with 97% accuracy. Self-reported liking isn’t predictive. 😮
Tweet media one
40
163
1K
2
1
34
@alex_peys
alex peysakhovich 🤖
1 month
smart people from other countries move to us, invent stuff, grow economy
@erikbryn
Erik Brynjolfsson
1 month
What’s the best explanation for the US having so much more real economic growth without more inflation than the other countries in the G7?
Tweet media one
346
138
748
2
3
32
@alex_peys
alex peysakhovich 🤖
1 year
llm development went from "release paper with full details" to "release model evaluations but not training details" to "here are some videos" real quick
@AnthropicAI
Anthropic
1 year
Introducing 100K Context Windows! We’ve expanded Claude’s context window to 100,000 tokens of text, corresponding to around 75K words. Submit hundreds of pages of materials for Claude to digest and analyze. Conversations with Claude can go on for hours or days.
217
1K
5K
2
2
30
@alex_peys
alex peysakhovich 🤖
25 days
if you add "think step by step" it gets better
Tweet media one
1
0
28
@alex_peys
alex peysakhovich 🤖
1 year
"computers can't do exploratory data analysis like people" is the new "computers can't play chess like people"
3
1
29
@alex_peys
alex peysakhovich 🤖
10 months
everyone who did substantial work on a paper should be an author, if you're not sure whether someone's contribution should count as substantial, you should err on the side of including them. doing otherwise is wrong. don't @ me you won't convince me otherwise
2
0
28
@alex_peys
alex peysakhovich 🤖
1 month
@ZachWeiner where is the BCD option?
4
0
27
@alex_peys
alex peysakhovich 🤖
8 months
playing in the text embedding space of sd turbo is pretty fun. this is just taking convex combination of 2 prompts
Tweet media one
1
1
25
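A hedged sketch of the interpolation in the tweet above, assuming the `diffusers` sd-turbo pipeline exposes `tokenizer` and `text_encoder` and accepts `prompt_embeds` in its call (the stable diffusion pipelines do); the prompts and step settings are illustrative.

```python
# convex combination of two prompt embeddings with sd-turbo
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

def embed(prompt: str) -> torch.Tensor:
    tokens = pipe.tokenizer(
        prompt, padding="max_length", max_length=pipe.tokenizer.model_max_length,
        truncation=True, return_tensors="pt",
    ).input_ids.to("cuda")
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]        # (1, seq_len, hidden)

e_a, e_b = embed("a photo of a cat"), embed("a photo of a steam locomotive")
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    mixed = (1 - alpha) * e_a + alpha * e_b        # convex combination of the two prompts
    image = pipe(prompt_embeds=mixed, num_inference_steps=1, guidance_scale=0.0).images[0]
    image.save(f"mix_{alpha:.2f}.png")
```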
@alex_peys
alex peysakhovich 🤖
1 year
oh academia
reviewer: you studied X but really you should have studied Y, reject
reply: yes Y is important, but X is also a thing with a huge literature (theory) + many companies trying to solve it (practice), is there something wrong with how we studied X?
reviewer: no, reject
3
1
26
@alex_peys
alex peysakhovich 🤖
1 year
@cauchyfriend i don’t know what a class is in python and i worked on some of the most used internal and user facing things at fb for 9 years
2
0
26
@alex_peys
alex peysakhovich 🤖
6 months
this is not a randomized experiment. the much more likely story here is that twitter is better at *figuring out* which of two papers with similar abstracts/conference accepts will be important later, not that twitter *causes* it
@deliprao
Delip Rao e/σ
6 months
Crazy AF. Paper studies @_akhaliq and @arankomatsuzaki paper tweets and finds those papers get 2-3x higher citation counts than control. They are now influencers 😄 Whether you like it or not, the TikTokification of academia is here!
Tweet media one
64
286
2K
2
0
27
@alex_peys
alex peysakhovich 🤖
10 months
excited to finally drop a paper about an idea @adamlerer and i have been messing around with for a while. tldr: in a simple qa task, re-sorting documents in the llm context by the attention the model pays to them, and *then* generating, improves accuracy a bunch 1/n
Tweet media one
1
6
27
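Not the paper's exact recipe, but a rough sketch of the idea with a Hugging Face causal LM: score each document by the attention the final token pays to it, then re-order the documents before generating. The model name, prompt format, and the choice to put high-attention documents last are illustrative assumptions.

```python
# re-order context documents by how much attention the final token pays to them
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; the paper's experiments used long-context models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def attention_sort(question: str, docs: list[str]) -> list[str]:
    # tokenize each document separately so we know its token span in the full prompt
    pieces, spans, offset = [], [], 0
    for d in docs:
        ids = tok(d + "\n", return_tensors="pt").input_ids[0]
        spans.append((offset, offset + len(ids)))
        pieces.append(ids)
        offset += len(ids)
    q_ids = tok("\n" + question, return_tensors="pt").input_ids[0]
    input_ids = torch.cat(pieces + [q_ids]).unsqueeze(0)

    with torch.no_grad():
        out = model(input_ids, output_attentions=True)
    # average attention over layers and heads; take the row for the final token
    attn = torch.stack(out.attentions).mean(dim=(0, 2))[0, -1]   # (seq_len,)
    scores = [attn[s:e].sum().item() for s, e in spans]
    # one plausible re-ordering: highest-attention documents last, nearest the question
    return [d for _, d in sorted(zip(scores, docs))]

docs = ["paris is the capital of france.", "bananas are berries.", "the sky is blue."]
print(attention_sort("what is the capital of france?", docs))
```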
@alex_peys
alex peysakhovich 🤖
5 months
science naming conventions have changed a lot…
physics in 50s: “we use a feynman parametrization to solve a model of heisenberg’s uncertainty principle and evaluate it on data from the oil drop experiment”
modern ml: “we attach a yolo model to a llama backbone and then look at
0
2
26
@alex_peys
alex peysakhovich 🤖
1 year
a real swe looking at what i just pushed
Tweet media one
2
3
26
@alex_peys
alex peysakhovich 🤖
2 months
where are all these ai people getting time to go to meetups, make flashy videos, etc…? between cleaning data, writing code, and watching the training runs i don’t even have time to cherry pick outputs to post “we’re so back” threads on twitter
2
1
26
@alex_peys
alex peysakhovich 🤖
1 year
fell into classic trap of spending 1 hour+ to automate something i could have done in 20 boring minutes
2
0
25
@alex_peys
alex peysakhovich 🤖
3 months
this board is a scam, why are there random ceos of oil and airplane companies, only a few legit scientists, and nobody from meta?
@AndrewCurran_
Andrew Curran
3 months
This morning the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board. The 22 inaugural members include Sam Altman, Dario Amodei, Jensen Huang, Satya Nadella, Sundar Pichai and many others.
Tweet media one
309
243
1K
1
0
25
@alex_peys
alex peysakhovich 🤖
1 year
@johnpdickerson
weaknesses: math is hard
questions: why math so hard?
ethics review flags: why make me read math? just waterboard me already
1
0
25
@alex_peys
alex peysakhovich 🤖
1 year
this is your daily reminder that pandas indexing was created in the lower levels of hell to torture people trying to do basic things like concatenate
3
0
26
@alex_peys
alex peysakhovich 🤖
1 year
tired: all embeddings are data compression
wired: all data compression is an embedding
@goodside
Riley Goodside
1 year
this is wild — kNN using a gzip-based distance metric outperforms BERT and other neural methods for OOD sentence classification intuition: 2 texts similar if cat-ing one to the other barely increases gzip size no training, no tuning, no params — this is the entire algorithm:
Tweet media one
150
1K
7K
1
2
25
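A minimal sketch of the gzip trick in the quoted tweet: normalized compression distance plus 1-nearest-neighbor. The toy training set is made up and far too small to be reliable; it only shows the mechanics.

```python
# gzip-based "embedding": classify by normalized compression distance + 1-NN
import gzip

def c(x: str) -> int:
    return len(gzip.compress(x.encode()))

def ncd(a: str, b: str) -> float:
    ca, cb, cab = c(a), c(b), c(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)   # small if b adds little new information to a

train = [("the cat sat on the mat", "animals"),
         ("dogs love playing fetch", "animals"),
         ("the stock market fell sharply", "finance"),
         ("interest rates rose again", "finance")]

def predict(text: str) -> str:
    return min(train, key=lambda ex: ncd(text, ex[0]))[1]   # label of nearest neighbor

print(predict("a kitten chased the dog"))      # likely "animals"
print(predict("bond yields and inflation"))    # likely "finance"
```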
@alex_peys
alex peysakhovich 🤖
1 year
many people in ai want machines that have “general” intelligence - i don’t care (or believe that will happen) - i want dumb machines that free people from doing the boring, repetitive, and/or dangerous tasks that take away time from actual interesting pursuits
1
3
24
@alex_peys
alex peysakhovich 🤖
1 year
neat guide. hits one of my favorite metaphors: "statistics is like baking, you need to follow the recipe exactly, ML is like cooking, you need to constantly taste and adjust spices"
@AIatMeta
AI at Meta
1 year
Self-supervised learning underpins today’s cutting-edge work across natural language, computer vision & more — but it’s an intricate art with high barriers to entry. Today we're releasing the SSL Cookbook, a practical guide for navigating SSL + contributing to this space ⬇️
29
166
709
2
7
24
@alex_peys
alex peysakhovich 🤖
1 year
ugh these econometricians, everyone knows that the more layers your neural network has the more causal it is.
@instrumenthull
Peter Hull
1 year
This is a common misconception I see a lot in my intro econometrics class. To detect causality in regressions you actually need to look at the *adjusted* R-squared, since the regular R-squared always increases with more controls. Hope this helps!
48
23
448
2
0
23
@alex_peys
alex peysakhovich 🤖
1 year
@kchonyc i too like to indulge in some matrix multiplication
0
0
21
@alex_peys
alex peysakhovich 🤖
1 year
so i pasted the first half of an analysis script for one of my papers into gpt and asked "what is the person writing this trying to do?" 🧵 1/n
Tweet media one
1
2
23
@alex_peys
alex peysakhovich 🤖
3 months
if you want to understand ml/ai, you can't do it by reading papers. being one with the matrix requires the terrible grind of building models to do stuff, getting frustrated when they don't work, working in the data mines, etc...
3
2
23
@alex_peys
alex peysakhovich 🤖
1 year
@Apoorva__Lal im actually curious now: is there a fundamentally easy way to estimate size of null space from appropriately chosen observations (x, Ax)? @ben_golub nerd snipe here
0
0
22
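One standard answer to the nerd-snipe, for what it's worth: randomized sketching. For a Gaussian test matrix G with at least rank(A) columns, rank(AG) = rank(A) almost surely, so the null space dimension is n - rank(AG), and you only ever observe pairs (x, Ax). A sketch, with a made-up test matrix:

```python
# estimate dim(null(A)) from random (x, Ax) observations
import numpy as np

def null_space_dim(matvec, n, k, seed=0):
    """Null space dimension of an operator with n columns, from k random probes."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(n, k))                                # random probe vectors x
    AG = np.column_stack([matvec(G[:, j]) for j in range(k)])  # observed Ax's
    return n - np.linalg.matrix_rank(AG)                       # exact w.p. 1 if k >= rank(A)

rng = np.random.default_rng(1)
B, C = rng.normal(size=(50, 30)), rng.normal(size=(30, 80))
A = B @ C                                                      # rank 30, so null space dim 50
print(null_space_dim(lambda x: A @ x, n=80, k=40))             # 50
```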
@alex_peys
alex peysakhovich 🤖
1 year
now i do ai and my day is mostly pasting pytorch errors into chatgpt
0
0
22
@alex_peys
alex peysakhovich 🤖
1 year
cs researchers: why are people sending so many papers to conferences these days? we really should take actions to increase the signal / noise ratio
also cs researchers: this grad student went to the bathroom and didn’t come out with a neurips paper, i dunno…
@linylinx
Tianlin
1 year
Is this the *minimum* requirement for a new grad in machine learning now? #NVIDIA
Tweet media one
135
353
3K
0
1
22
@alex_peys
alex peysakhovich 🤖
1 year
random projections for dimensionality reduction are bullshit and shouldn't work but they do
6
1
22
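A minimal numerical illustration of the Johnson-Lindenstrauss flavor of this: a random Gaussian projection from 10,000 dimensions down to 256 keeps pairwise distances within a few percent. The sizes are arbitrary.

```python
# random projection roughly preserves pairwise distances
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10_000, 256
X = rng.normal(size=(n, d))
P = rng.normal(size=(d, k)) / np.sqrt(k)     # random projection matrix
Y = X @ P

def pairwise_dists(Z):
    sq = (Z ** 2).sum(axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * Z @ Z.T, 0))

orig, proj = pairwise_dists(X), pairwise_dists(Y)
mask = ~np.eye(n, dtype=bool)                # ignore the zero diagonal
ratios = proj[mask] / orig[mask]
print(f"distance ratios: mean {ratios.mean():.3f}, std {ratios.std():.3f}")  # ~1.0, small std
```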
@alex_peys
alex peysakhovich 🤖
11 months
i did 1 hike on the ca coastline and came up with at least 5 research things i want to try, time off is really good for the brain
Tweet media one
3
0
21
@alex_peys
alex peysakhovich 🤖
1 year
come join tech where nobody cares if you just type in all lower case without punctuation or grammar
@Andrew_Akbashev
Andrew Akbashev
1 year
In your application letter for #PhD / postdoc, NEVER ever say: "Hi prof" "Hello" "Dear Professor" "Greetings of the day" If you do, your email will be immediately deleted by 99% of professors. ▫️ Only start your applications with “Dear Prof. [second_name],” And don’t
289
338
3K
1
0
21
@alex_peys
alex peysakhovich 🤖
11 months
@giffmana @_basilM neat! that equation would be SO hard to read without that
0
0
20
@alex_peys
alex peysakhovich 🤖
1 year
a great way to see a major weakness of current llms is to take a task it can do an easy version of (sort these 2 words into alphabetical order) and watch it fail on a complicated version that could, in principle, be done by repeated application of the easy thing.
Tweet media one
4
2
20
@alex_peys
alex peysakhovich 🤖
7 months
the impossible text to image prompt (images from DALLE, MJ v6, SDXL respectively)
Tweet media one
Tweet media two
Tweet media three
2
1
21
@alex_peys
alex peysakhovich 🤖
1 year
i have a theory that zuck's metaverse obsession was always just a long, petty, con to get apple to do something dumb
0
1
21