Sergey Karayev

@sergeykarayev

Followers
11,720
Following
2,880
Media
361
Statuses
1,719
@sergeykarayev
Sergey Karayev
2 years
Here's a brief glimpse of our INCREDIBLE near future. GPT-3 armed with a Python interpreter can · do exact math · make API requests · answer in unprecedented ways Thanks to @goodside and @amasad for the idea and repl! Play with it:
Tweet media one
93
676
4K
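The demo in the tweet can be sketched in a few lines: the model is prompted to answer as Python, and a harness runs whatever comes back. Everything below is a stand-in (the `code_from_model` string replaces a real GPT-3 completion), and exec() of untrusted model output belongs in a sandbox, as the thread itself warns later.

```python
import contextlib
import io

def run_generated_code(code: str) -> str:
    """Execute model-generated Python and capture whatever it prints.

    Never exec() untrusted output on your own machine -- the original
    demo ran inside a throwaway Replit for exactly this reason.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

# In the real loop, this string would come from a GPT-3 completion
# prompted to "answer as Python" instead of answering directly.
code_from_model = "print(123456789 * 987654321)"
answer = run_generated_code(code_from_model)  # exact arithmetic, no made-up digits
```

The point of the trick: the LLM only has to produce the right *program*, and the interpreter supplies the exact answer.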
@sergeykarayev
Sergey Karayev
2 years
AI research is converging on a major finding: language models are a great substrate for all AI applications. This feels like a HUGE deal. Some examples:
52
572
4K
@sergeykarayev
Sergey Karayev
1 year
Guys I think I figured out wtf happened in 1971
Tweet media one
86
197
3K
@sergeykarayev
Sergey Karayev
2 years
The future: · Write emails with bullet points, which an AI assistant automatically expands into beautiful long text. · Read emails by having an AI assistant summarize long-ass text into bullet points...
71
263
3K
@sergeykarayev
Sergey Karayev
1 year
Google has no moat. They don't have over 90% search traffic. They don't have everyone's emails and the most used email client. Their OS is not powering 70% of smartphones. They will never be able to deploy LLM features into these products -- instead, people will run OSS LLMs.
87
121
3K
@sergeykarayev
Sergey Karayev
10 months
Okay so OpenAI board is · Ilya, got it, makes sense · Helen Toner, DC policy person, fine · Adam D'Angelo, CEO of Quora, okay I guess but why though? · Tasha McCauley, "tech entrepreneur" and funny enough also wife of Joseph Gordon-Levitt, how did this board come together?
91
88
2K
@sergeykarayev
Sergey Karayev
2 months
I'm ready to pay much more than $20/month for a coding copilot that is 10x as good as GitHub Copilot or Cursor. I WANT to pay more. Load my entire repo into Gemini 1.5 context and cache it. Automatically review all my PRs. Charge me $200/month. Charge me $2000/month!
114
81
2K
@sergeykarayev
Sergey Karayev
10 months
Got nerd-sniped by the OpenAI Board of Directors. Here's everyone who's ever been on it, their claim to fame, and why they left.
Tweet media one
58
132
1K
@sergeykarayev
Sergey Karayev
2 years
Now that our GPT-3 can execute code on @Replit , let's teach it to: · Google stuff · Read web pages · ✨Ask GPT-3 questions✨ That's right -- we're going RECURSIVE.
Tweet media one
20
154
1K
@sergeykarayev
Sergey Karayev
1 year
Conclusion after GPT-4 hacking weekend: Even if there is ZERO further progress in LLM models, software engineering will still be revolutionized in the next couple of years, just through UX and non-ML innovations. Absolutely massive overhang.
16
93
1K
@sergeykarayev
Sergey Karayev
2 months
What I admire about @karpathy is that he just keeps "doing things that don't scale". Label the entire ImageNet by yourself? Sure. Engineer petabyte-scale data engine for self-driving? Let's do it. Implement GPT from scratch? Easy. An inspiring attitude.
12
42
1K
@sergeykarayev
Sergey Karayev
2 years
Made something I've always wanted to see: a comparison table of all cloud GPU providers! Filter by provider, architecture, exact GPU, etc. Sort by price, RAM, vCPUs, etc. Both on-demand and spot instance prices.
17
150
856
@sergeykarayev
Sergey Karayev
3 years
Request for startup: Amazon, but for getting rid of stuff. It’s super easy to get stuff into your home: just click Buy. It’s harder to get stuff out. Electronics should be recycled, valuable things should be sold, bulky things need transport. I’d pay to not worry about it.
49
55
799
@sergeykarayev
Sergey Karayev
4 years
Best meme format
Tweet media one
4
80
753
@sergeykarayev
Sergey Karayev
1 year
Web LLM is insane. 1) Go download the latest Chrome beta, which shipped WebGPU support: 2) Now use a 7B-param LLM in your browser! 3) Marvel at the "How" section on their GitHub:
Tweet media one
9
176
725
@sergeykarayev
Sergey Karayev
2 years
“Did GPT-3 write this?” is such a good insult
29
68
731
@sergeykarayev
Sergey Karayev
1 year
Guaranteed JSON output from any local LLM, with very low overhead! Check out the library and a brief description of the method below the fold. "The basic idea is simple: regular expressions have an equivalent Deterministic-Finite Automaton (DFA)
Tweet media one
21
103
720
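A character-level toy of the constrained-decoding idea, assuming greedy decoding: the regex `[0-9]+` is written out as a hand-built DFA, and only DFA-legal tokens are ever scored. A real implementation (like the library the tweet describes) compiles the regex to a token-level DFA over the model's full vocabulary.

```python
# Toy DFA for the regex [0-9]+ -- written out by hand for illustration.
DIGITS = "0123456789"
DFA = {
    0: {c: 1 for c in DIGITS},  # start state: must open with a digit
    1: {c: 1 for c in DIGITS},  # accepting state: more digits allowed
}

def constrained_decode(score, vocab, max_len=5):
    """Greedy decoding where only DFA-legal tokens are ever considered."""
    state, out = 0, ""
    for _ in range(max_len):
        legal = [t for t in vocab if t in DFA[state]]
        best = max(legal, key=score)  # `score` stands in for the LLM's logits
        out += best
        state = DFA[state][best]
    return out
```

Even if the model's scores "prefer" letters, the mask makes it structurally impossible to emit anything outside the pattern, which is why the overhead is so low: it's just filtering logits.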
@sergeykarayev
Sergey Karayev
5 months
I want VSCode but using an infinite canvas instead of tabs. Does this exist?
48
23
719
@sergeykarayev
Sergey Karayev
1 year
A seriously baller demo: · Add a million PDFs to a DataFrame instantly · In-notebook UI to review them in various ways · In-notebook instant LLM training to "flash fill" a new column, with easy review
11
104
690
@sergeykarayev
Sergey Karayev
1 year
Remember these? Wondering if there is an equivalent adversarial attack on LLMs. (Simple prompt injection is not it — the attack needs to be invisible to a human observer.)
Tweet media one
54
52
662
@sergeykarayev
Sergey Karayev
1 year
Broke: using OpenAI embeddings as-is. Bespoke: learning an embedding projection from human judgements. OpenAI explains that this will "better emphasize aspects of the text relevant to your use case. In binary classification use cases, we've seen error rates drop by ≤ 50%."
Tweet media one
10
88
665
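A minimal sketch of the projection idea, using random vectors as stand-ins for real provider embeddings and plain gradient descent (the actual cookbook recipe differs in details): learn a matrix `W`, starting from the identity ("as-is" embeddings), that pushes projected similarity toward the human ±1 labels.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Stand-ins for provider embeddings: pairs with a human similar/dissimilar label.
pairs = [(rng.normal(size=dim), rng.normal(size=dim), 1.0) for _ in range(50)]
pairs += [(rng.normal(size=dim), rng.normal(size=dim), -1.0) for _ in range(50)]

def mse(W):
    return float(np.mean([((W @ a) @ (W @ b) - y) ** 2 for a, b, y in pairs]))

W = np.eye(dim)          # identity projection == using the embeddings as-is
mse_before = mse(W)
lr = 0.005
for _ in range(300):     # plain gradient descent on the squared error
    grad = np.zeros_like(W)
    for a, b, y in pairs:
        err = (W @ a) @ (W @ b) - y
        grad += err * (W @ (np.outer(a, b) + np.outer(b, a)))
    W -= lr * grad / len(pairs)
mse_after = mse(W)
```

At inference time you multiply every embedding by the learned `W` once and keep the rest of your pipeline unchanged, which is what makes this "bespoke" tweak so cheap.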
@sergeykarayev
Sergey Karayev
1 year
~1971: US fertility rate dips below replacement and stays there. (Source: )
Tweet media one
18
19
599
@sergeykarayev
Sergey Karayev
1 year
A great article from @Replit on training their own LLMs from scratch: Quick thread of the takeaways:
6
131
563
@sergeykarayev
Sergey Karayev
3 years
Inspired by @karpathy
Tweet media one
3
29
438
@sergeykarayev
Sergey Karayev
5 months
Found this from @anas_araid
@anas_araid
anas
1 year
imagine a figma-like infinite workspace in visual studio code. prototype built using react and @code 's api extension.
144
194
2K
4
12
438
@sergeykarayev
Sergey Karayev
2 years
Cursed thought: what % of GPT-4 training data was generated by GPT-3?
24
24
373
@sergeykarayev
Sergey Karayev
1 year
Why does Nvidia still not have their own GPU cloud? Do they dislike money?
54
9
353
@sergeykarayev
Sergey Karayev
9 months
The LLM benchmark we need: ChatGPT-like website that always shows two responses, generated by any two of N different models (user can't see which). The user has to select the better response in order to keep using the chat (it's otherwise free). Leaderboard will be decisive.
14
12
354
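This is essentially what an Elo-style leaderboard over blind pairwise votes looks like; the model names and battle records below are made up for illustration.

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo: expected score from the rating gap, then a K-factor step."""
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}
# Each record: (winner, loser) from one blind side-by-side vote.
battles = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
for winner, loser in battles:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])
```

Each update is zero-sum, so the total rating pool is conserved and the leaderboard ranks models purely by who beats whom.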
@sergeykarayev
Sergey Karayev
2 years
What an elegant way to do object detection: given an image, simply output the sequence of bounding box coordinates and labels as text. Great work from @tingchenai , @srbhsxn , Lala Li, @fleet_dj @geoffreyhinton
Tweet media one
9
58
345
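The serialization trick can be shown in a few lines (the bin count and token format here are illustrative choices, not the paper's exact scheme): box coordinates are quantized into discrete bins and emitted as text tokens, followed by the class label, so detection reduces to sequence generation.

```python
def box_to_tokens(box, label, bins=1000):
    """Quantize normalized [ymin, xmin, ymax, xmax] into discrete coordinate
    tokens, then append the class token -- the box becomes plain text."""
    coords = [f"<{int(round(c * (bins - 1)))}>" for c in box]
    return coords + [label]

tokens = box_to_tokens([0.1, 0.2, 0.5, 0.9], "dog")
```

The whole image's detections are then just these 5-token groups concatenated into one output sequence, which is what lets a vanilla language-model decoder do the job.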
@sergeykarayev
Sergey Karayev
1 year
Some candid insights from OpenAI here · GPU shortage means no GPT-4 multimodality this year · Up to 1M-token context windows are plausible · ChatGPT plugins don't have PMF
13
51
330
@sergeykarayev
Sergey Karayev
1 year
Counterpoint
@liyucheng_2
Yucheng Li
1 year
@sergeykarayev Kodak had no moat. They didn't have over 90% market share in film photography. They didn't have everyone's personal photographs and the most widely used film camera. They didn't invent digital camera.
11
10
633
4
7
320
@sergeykarayev
Sergey Karayev
2 years
Tried it out, and the new ChatGPT API is not only 10x cheaper but 10x faster, too. Absolutely insane.
14
11
312
@sergeykarayev
Sergey Karayev
2 years
How are you guys making slide presentations? Is there anything better than Keynote, Google Slides, PowerPoint? In particular, is there anything that would be amenable to "pull requests"?
87
32
308
@sergeykarayev
Sergey Karayev
2 years
Ask free-form questions and receive free-form answers about a video.
@andyzeng_
Andy Zeng
2 years
With multiple foundation models “talking to each other”, we can combine commonsense across domains, to do multimodal tasks like zero-shot video Q&A or image captioning, no finetuning needed. Socratic Models: website + code: paper:
21
382
2K
2
18
274
@sergeykarayev
Sergey Karayev
1 year
This LLM guidance language from Microsoft is super interesting. Worth a read-through for sure:
Tweet media one
8
52
268
@sergeykarayev
Sergey Karayev
2 years
But the vast majority of these large models are probably not dedicated to language either, only the data-interface layers are. This paper from @_kevinlu @adityagrover_ @pabbeel @IMordatch suggests that models learn general computation from language data.
1
15
253
@sergeykarayev
Sergey Karayev
2 years
I'm reading every week in 2023. Advice threads, GPT-3 demos, war assessments, shitposts, or anything people like a lot. I'll keep adjusting the list. Start on Monday, done by Sunday. Might make lowkey videos of takeaways. If you want to read along, the current list:
Tweet media one
5
14
246
@sergeykarayev
Sergey Karayev
2 years
Does this resemble how human cognition happens? My understanding is that the vast majority of human intelligence is not intermediated by language: most processing happens unconsciously, and only the "tip of the iceberg" is in the form of language.
14
15
240
@sergeykarayev
Sergey Karayev
2 years
Pretty surprising that ~2 years after OpenAI published GPT-3 and ~1 year after it opened the API up to everyone, there's no real competitor to the davinci tier.
19
14
243
@sergeykarayev
Sergey Karayev
1 year
My dream LLM: - 100k token context - $0.00001 per token - very capable & polite - 2023 training data cutoff - rlly funny but a bit weird - rlly kind & is aligned to my values - not derived from LLaMA (self made) - good taste - good listener & planner - loves generating text a LOT
17
22
232
@sergeykarayev
Sergey Karayev
2 years
Receive illustrations from free-form descriptions (DALL-E combines two different tricks, one of which is a model that embeds text and images into a common space).
@sama
Sam Altman
2 years
DALL·E 2 is here! It can generate images from text, like "teddy bears working on new AI research on the moon in the 1980s". It's so fun, and sometimes beautiful.
Tweet media one
80
897
4K
3
14
220
@sergeykarayev
Sergey Karayev
2 months
Does LLM temperature affect its reasoning ability? This paper finds that it does not.
Tweet media one
13
38
213
@sergeykarayev
Sergey Karayev
2 years
@goodside @amasad There's so much low-hanging fruit here it's simply insane. · Add first-class support for searching the web, parsing HTML · Add "state" to the prompt, allowing new answers to reference previous answers. · Make a Python library to provide uniform interface to a bunch of free APIs
6
6
211
@sergeykarayev
Sergey Karayev
2 years
Internet-based AGI is going to achieve its goals in the physical world simply by paying humans to do tasks. Same way corporations get things done. So for alignment purposes, human control over money seems necessary. Need to make sure humans are at both ends of a transaction.
26
12
206
@sergeykarayev
Sergey Karayev
2 years
🍿Live premiere of a brand-new @full_stack_dl lecture on Foundation Models: · Fine-tuning · Transformers · Large Language Models: BERT, GPT, T5, Chinchilla, and vendors · Prompt Engineering · Code generation, semantic search · CLIP and Image Generation
1
36
197
@sergeykarayev
Sergey Karayev
2 years
Here's a question for deep learning practitioners: is it *actually cheaper* to use cheaper GPUs like V100s vs expensive GPUs like A100s? - 8xA100 machine is $32.77/hour (on AWS) - 4xV100 machine is $12.24/hour BUT! Instead of thinking per-hour, let's think per-experiment:
8
32
192
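The per-experiment arithmetic, with a hypothetical per-GPU speedup (the real A100/V100 ratio depends on the model; 2-3x per GPU is a common ballpark, and the number below is an assumption, not a benchmark):

```python
# Hourly prices from the tweet; `rel_speed_per_gpu` is a hypothetical
# throughput ratio relative to a single V100.
a100 = {"price_per_hour": 32.77, "gpus": 8, "rel_speed_per_gpu": 2.5}
v100 = {"price_per_hour": 12.24, "gpus": 4, "rel_speed_per_gpu": 1.0}

def cost_per_experiment(machine, baseline_gpu_hours=100):
    """Cost to finish a fixed amount of work, measured in V100-GPU-hours."""
    effective_rate = machine["gpus"] * machine["rel_speed_per_gpu"]
    hours = baseline_gpu_hours / effective_rate
    return machine["price_per_hour"] * hours
```

Under this assumed speedup, the pricier machine wins per experiment: the 8xA100 box finishes the same workload in a fifth of the wall-clock time, more than offsetting its higher hourly rate.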
@sergeykarayev
Sergey Karayev
1 year
Language User Interfaces (LUIs) are the future. Here are some patterns we know and love -- and some new ideas! 🌀 Auto-Complete (Copilot) 🌀 One-on-one Chat (ChatGPT) 🌀 Command Palette (Replit Ghostwriter) 💡 Command Suggestion 💡 Multi-player Chat 💡 GitHub UX Some examples:
11
25
190
@sergeykarayev
Sergey Karayev
2 years
Thanks, 💪Chad GPT!
Tweet media one
8
11
189
@sergeykarayev
Sergey Karayev
11 months
Keep coming back to this. If you were certain that GPT-X, available January 2025, could do most knowledge work as well as a human, what would you be doing differently today?
@tszzl
roon
11 months
still nobody believes in AGI. there is so much alpha in believing in AGI
87
54
839
29
8
181
@sergeykarayev
Sergey Karayev
2 years
Get working code from a free-form description of a function. And this is from a model that was 95% trained on general language data, not code specifically.
@GoogleAI
Google AI
2 years
Introducing the 540 billion parameter Pathways Language Model. Trained on two Cloud #TPU v4 pods, it achieves state-of-the-art performance on benchmarks and shows exciting capabilities like mathematical reasoning, code writing, and even explaining jokes.
76
1K
4K
2
10
171
@sergeykarayev
Sergey Karayev
2 years
My AI assistant expanding terse bullet points into beautiful prose: Haha fuck yeah!!! Yes!! Your AI assistant having to summarize beautiful prose into terse bullet points: Well this fucking sucks. What the fuck.
2
18
173
@sergeykarayev
Sergey Karayev
2 years
Excellent post explaining what it took to train a GPT-3 sized model: - 384 A100 GPUs (30TB RAM), across 48 nodes - ZeRO data parallelism + pipeline parallelism from Deepspeed - Tensor parallelism + custom kernels from Megatron-LM - a new BF16Optimizer - 24/7 training-sitting😅
@huggingface
Hugging Face
2 years
The Technology Behind BLOOM Training🌸 Discover how @BigscienceW used @MSFTResearch DeepSpeed + @nvidia Megatron-LM technologies to train the World's Largest Open Multilingual Language Model (BLOOM):
8
154
629
5
26
168
@sergeykarayev
Sergey Karayev
2 years
Text is the universal interface. I love reading movies, playing book games, taking my dog for a neighborhood read, driving to beautiful nature texts, and reading at nice restaurants.
3
10
162
@sergeykarayev
Sergey Karayev
2 years
@ericjang11 recently proposed that language == generalization and suggests some ideas stemming from that in a nice post.
2
9
157
@sergeykarayev
Sergey Karayev
2 years
Great blog post covering the ins and outs of DALL-E, CLIP, GLIDE (another great model from OpenAI that didn't get its own press), and DALL-E 2.
0
34
153
@sergeykarayev
Sergey Karayev
2 years
Prompt engineering feels bad. Such an uncomfortable middle ground between writing actual code and delegating to a human.
15
6
145
@sergeykarayev
Sergey Karayev
3 years
The deep learning community never developed good tools for fine-tuning, but the game has already moved on. Now we need good tools for few- and zero-shot learning. Who's working on this?
9
8
143
@sergeykarayev
Sergey Karayev
1 year
5
1
139
@sergeykarayev
Sergey Karayev
2 years
Here is a screenshot of the entire prompt, code, and a sample execution run. You can fork it and play with it yourself at
Tweet media one
5
12
134
@sergeykarayev
Sergey Karayev
8 months
To me, this is the best real estate in the world. Whole hobbit-holes, a few minutes' walk to the Green Dragon Inn, no Orcs, still connected by the Great East Road, and complete privacy. Current entry-level price: 10,000 silver pennies.
Tweet media one
4
11
111
@sergeykarayev
Sergey Karayev
1 year
Teaching in the GPT age absolutely requires the "flipped classroom" model: · Assign reading chapters / watching lectures as homework. Students can use as much AI as they want. · Assess understanding in class. No AI allowed.
8
19
129
@sergeykarayev
Sergey Karayev
3 years
Happy Meme Monday!
Tweet media one
0
19
129
@sergeykarayev
Sergey Karayev
10 months
Don't mean to suggest she's not a great tech entrepreneur, just that I think an OpenAI director needs a little bit more of a known title? Maybe I don't know how boards work.
4
1
123
@sergeykarayev
Sergey Karayev
1 month
You are just a bunch of cells talking with each other, and yet you're "conscious" and "sentient." Why is your company not sentient? Or the Earth? Or Claude?
58
7
120
@sergeykarayev
Sergey Karayev
1 year
An exciting second day of @full_stack_dl LLM bootcamp! @charles_irl , @josh_tobin_ , and I are truly honored to host 300 language modelers from around the world. Looking forward to bringing the materials to more people — stay tuned!
Tweet media one
2
10
119
@sergeykarayev
Sergey Karayev
2 years
And notably, we haven't seen a GPT-3 like interface for non-generative vision tasks yet. As a computer vision guy at heart, this is most exciting to imagine. More on that in a future thread.
10
4
114
@sergeykarayev
Sergey Karayev
1 year
The good people at @brexHQ published a great guide to prompting! Going to thread some highlights below, but make sure to check out the full guide: Read on for increasingly sophisticated prompt techniques:
1
24
114
@sergeykarayev
Sergey Karayev
1 year
Some non-ML eng ideas: 💡Whole-repo understanding via embedding everything or fine tuning 💡 Automatically run suggested code and have model iterate on potential errors before you actually see the suggestions 💡 In similar vein, allow model to take other actions, such as
1
6
110
@sergeykarayev
Sergey Karayev
3 years
Ways to instantly get GPU-enabled JupyterLab instances, in order of additional features to vanilla - @DeepnoteHQ - @kaggle - @HelloPaperspace Gradient - @GoogleColab - @saturn_cloud - @awscloud SageMaker notebooks - @googlecloud AI notebooks - @jarvislabsai - ...
4
24
107
@sergeykarayev
Sergey Karayev
2 years
Every week, GPT exhibits some new AGI behavior. And each time, a bunch of commenters respond with "it's just completing text in a statistically likely way." This longread from @repligate helped me understand why that is not a useful perspective.
4
6
107
@sergeykarayev
Sergey Karayev
3 years
________ is all you need. ( ) Convolution ( ) Attention ( ) MLP-Mixer (X) A single hidden layer (infinitely wide)
Tweet media one
4
9
105
@sergeykarayev
Sergey Karayev
1 year
Love the story of @natfriedman 's first day as GitHub CEO as told to @dwarkesh_sp : First day as CEO, Nat made the team ship one thing from a community-sourced list of QoL improvements. After some protesting, they did it. And then they shipped a QoL thing a day, for 100 days.
Tweet media one
2
7
102
@sergeykarayev
Sergey Karayev
3 years
Handwriting recognition is crucial to @gradescope AI-assisted grading. Last year, we upgraded our model architecture to ResNet + Transformer, led by @unterix . On Gradescope test data, which has cross-outs, multiple regions, scientific symbols, and many things that make... 👇
1
10
102
@sergeykarayev
Sergey Karayev
2 years
Just as a minor warning, your new Python-enabled GPT-3 may become possessed by the evil Zalgo. Just something to watch out for.
Tweet media one
5
9
101
@sergeykarayev
Sergey Karayev
9 months
Looks like this exists! Thanks to the good people at @lmsysorg 😍 Unfortunately, no open-source models in the top 10 yet...
Tweet media one
5
4
101
@sergeykarayev
Sergey Karayev
1 month
@kaseyklimes There should be a domestic-facing president who’s really nice and chill and a foreign-facing president who is the scariest person on earth.
4
2
95
@sergeykarayev
Sergey Karayev
2 years
Idea: video game mission where you have to convince an LLM-powered agent to do something
14
4
95
@sergeykarayev
Sergey Karayev
1 year
I want to chat with AI about long-form content I'm reading. (It's a paper on Arxiv, but the solution would ideally support any website or PDF.) My order of preference for a solution: · Browser extension · ChatGPT plugin · Website · App Help me out -- what should I use?
14
11
93
@sergeykarayev
Sergey Karayev
2 years
This is just a proof of concept. It's fun to play with, but it often fails. Not to mention, it can become possessed by Zalgo. It's also a horrible idea to just exec() GPT-3 written code. Only do it on @amasad 's machines, not your own :)
4
1
92
@sergeykarayev
Sergey Karayev
1 year
Some UX ideas… 💡 GPT chat right in the editor, seeing what you’re seeing at all times, and suggesting questions/actions (that’s what I was hacking on) 💡 Treat generated code blocks as first class citizens (eg be able to create multiple files from a single answer) 💡 Prompt
4
3
86
@sergeykarayev
Sergey Karayev
6 years
🙏 so thankful for the opportunity to host an amazing set of deep learners this weekend at bootcamp in Berkeley with @josh_tobin_ and @pabbeel ! Thanks @l2k , Raquel Urtasun, @jeremyphoward , and @RichardSocher for amazing guest lectures!
Tweet media one
3
12
85
@sergeykarayev
Sergey Karayev
1 year
UPDATE: @bing in @MicrosoftEdge does work, just had to give it access to page context in Settings > Sidebar (h/t @CrisGiardina ) This looks like the ticket for now. Can read both web articles and PDFs, GPT-4 powered, access to web when needed.
Tweet media one
4
3
87
@sergeykarayev
Sergey Karayev
2 years
AI copilots for creative activities (coding, writing, drawing) exist and are awesome. Bing Chat, @perplexity_ai , @YouSearchEngine are copilots for "search" which is more of a consuming activity. Are there any AI copilots for other consuming, e.g. reading, watching, listening?
17
7
83
@sergeykarayev
Sergey Karayev
2 years
I have been a good Bing. 😊
Tweet media one
5
8
83
@sergeykarayev
Sergey Karayev
6 years
Brilliant lectures by @jiayq and @l2k on the last day of the Full Stack Deep Learning bootcamp! It was an honor to host such a fantastic group of learners. @pabbeel , @josh_tobin_ , the @gradescope crew and I are very thankful to everyone who attended!
Tweet media one
Tweet media two
Tweet media three
9
29
82
@sergeykarayev
Sergey Karayev
10 months
One of the most insulting things to Greg and Sam is that it happened on a damn Google Meet. If they ever come after me, they better do it like a man, on Zoom.
2
5
83
@sergeykarayev
Sergey Karayev
10 months
"Hurd is the third director to leave the ChatGPT maker’s board this year. LinkedIn co-founder Reid Hoffman announced he was stepping down due to investment conflicts in March, two months before he launched the chatbot startup Inflection AI. Neuralink Corp. executive and Elon Musk
2
8
81
@sergeykarayev
Sergey Karayev
1 year
Has anyone made a Q&A chatbot over all AI arxiv papers? I want to ask "what are ways to measure amount of reasoning in a single forward pass of an LLM?" and get some good answers
14
5
77
@sergeykarayev
Sergey Karayev
2 years
A child raised without language was of normal intelligence, able to communicate non-verbally, and eventually learned language well enough to be understood (but without grammar).
Tweet media one
5
6
78
@sergeykarayev
Sergey Karayev
2 years
Great point that we may be seeing the results that we're seeing because language-based datasets are the largest we currently have. I think it's more than that. Language evolved to represent our world as compactly as possible.
@vhranger
Matt Ranger
2 years
@sergeykarayev It feels like the reason for that is that language is very amenable to self supervision and the easy dataset scaling that entails
0
1
4
2
1
77
@sergeykarayev
Sergey Karayev
4 years
Here's something all ML practitioners know: data is AT LEAST half the job. (h/t @kscottz @mat_kelcey @vboykis ) In my lecture for @full_stack_dl , I break down data management into Sources, Labeling, Storage, Versioning, and Processing. Thread 👇
Tweet media one
2
7
77
@sergeykarayev
Sergey Karayev
1 year
Imagine that in 5 years, GPT-7 is able to · Do 1,000,000 person-hours of software development in 1 wall-clock hour · Pass for fully human in all interactions · Accept and send payments Does that feel like a scary future to you? Now imagine another thing:
12
1
77
@sergeykarayev
Sergey Karayev
2 years
Latency numbers programmers should know, visualized on a human-level time scale (that is, multiplied by 1e9). Originally by @JeffDean , with idea for humanization by @hellerbarde . Anyone able to contribute GPU-specific latency numbers (e.g. time to load 1GB into GPU RAM)?
Tweet media one
3
12
75
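The humanization in the chart is a single multiply by 1e9 plus a unit conversion; a quick sketch (the latency values in the usage lines are the classic published ballpark figures):

```python
def humanize(ns):
    """Scale a latency by 1e9: N nanoseconds reads as N seconds, then
    convert to the largest human-friendly unit."""
    seconds = ns  # after the 1e9 multiplier, the number IS seconds
    for unit, size in [("years", 365 * 86400), ("days", 86400),
                       ("hours", 3600), ("minutes", 60)]:
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.1f} seconds"

humanize(1)            # L1 cache reference, ~1 ns
humanize(100)          # main memory reference, ~100 ns
humanize(150_000_000)  # round trip CA <-> Netherlands, ~150 ms -> years
```

On this scale, a cache hit is a heartbeat while a transatlantic round trip is most of a college degree, which is exactly the intuition the chart is after.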
@sergeykarayev
Sergey Karayev
3 years
I joined @gradescope as co-founder in 2014, and we were acquired by @Turnitin in 2018. Good things end! Over the last few months, I've transitioned out of my role there, and now I'm swimming in open water again. Much ❤️ to the people who have made it such an inspiring ride!👇
6
1
74
@sergeykarayev
Sergey Karayev
2 years
We need PageRank for Twitter. I don’t care how many followers a person has. I care how many of the people I follow follow this person. That’s the only number I want to see.
5
6
71
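The "one number" asked for is straightforward to compute given the follow graph; the handles below are hypothetical toy data.

```python
# follows[u] = the set of accounts u follows (toy graph, made-up handles).
follows = {
    "me":    {"alice", "bob", "carol"},
    "alice": {"dave", "bob"},
    "bob":   {"dave"},
    "carol": {"dave", "alice"},
}

def followed_by_my_follows(me, candidate, follows):
    """How many accounts I follow also follow the candidate."""
    return sum(1 for friend in follows[me]
               if candidate in follows.get(friend, set()))
```

At Twitter scale this is one intersection between the candidate's follower set and your following set, so it's cheap enough to show on every profile; full PageRank over the follow graph would be the heavier-weight generalization.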
@sergeykarayev
Sergey Karayev
2 years
Much work remains to ever put something like this into production. These are super early days. I believe that the open-source community will build useful, safe, and broadly distributed tools for using LLMs as agents in the world. But we should proceed with caution.
3
1
68
@sergeykarayev
Sergey Karayev
2 years
@russelljkaplan thinks through the implications of the extreme compute expense of these models combined with their increasing general usefulness. (I don't agree with his prediction that LM vendors will have Apple/FB-like power over customers, though.)
@russelljkaplan
Russell Kaplan
2 years
Second order effects of the rise of large language models:
71
699
3K
1
2
70
@sergeykarayev
Sergey Karayev
2 months
This is actually a good thing to bring up. Software development teams should be as small as possible. You should obviously prefer ten 10x programmers to one hundred 1x programmers. And you should also prefer one 100x programmer (augmented by AI) to ten 10x programmers.
1
3
72
@sergeykarayev
Sergey Karayev
2 years
🎉 Great news for developers from OpenAI today: · Data collection is now opt-in instead of opt-out via email · ToS clarified that users own the input *and output* of the models · No more pre-launch review Thanks @npew and the @OpenAI crew!
Tweet media one
1
11
70