Connor Leahy

@NPCollapse

Followers: 24,262
Following: 565
Media: 118
Statuses: 3,263

Hacker - CEO @ConjectureAI - Ex-Head of @AiEleuther - I don't know how to save the world, but dammit I'm gonna try

London
Joined January 2019
Pinned Tweet
@NPCollapse
Connor Leahy
11 months
If we build machines that are more capable than all humans, then the future will belong to them, not to us. Everything else derives from this simple observation. Thank you @cambridgeunion for inviting me to make the case that "This House Believes AI Is An Existential Threat."
121
208
915
@NPCollapse
Connor Leahy
3 months
I think this tweet is one of the biggest advances in utilitarian philosophy of the 21st century
102
5K
83K
@NPCollapse
Connor Leahy
3 months
@Rohan65336929 most normal utilitarian
2
10
4K
@NPCollapse
Connor Leahy
8 months
I nominate Alan Turing for the first DeTuring Award.
Tweet media one
@ylecun
Yann LeCun
8 months
I propose the creation of the DeTuring Award. It will be granted to people who are consistently trying (and failing) to deter society from using computer technology by scaring everyone with imaginary risks. As the Turing Award is the Nobel Prize of computing, the DeTuring Award
227
252
2K
60
156
1K
@NPCollapse
Connor Leahy
6 months
Remember when labs said if they saw models showing even hints of self awareness, of course they would immediately shut everything down and be super careful? "Is the water in this pot feeling a bit warm to any of you fellow frogs? Nah, must be nothing."
@alexalbert__
Alex Albert
6 months
Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval. For background, this tests a model’s recall ability by inserting a target sentence (the "needle") into a corpus of
Tweet media one
580
2K
12K
101
168
1K
@NPCollapse
Connor Leahy
6 months
@tszzl Thanks for your response roon. You make a lot of good, well put points. It's extremely difficult to discuss "high meta" concepts like spirituality, duty and memetics even in the best of circumstances, so I appreciate that we can have this conversation even through the psychic
142
69
1K
@NPCollapse
Connor Leahy
7 months
This is the kind of stuff that makes me think that there will be no period of sorta stupid, human-level AGI. Humans can't perceive 3 hours of video at the same time. The first AGI will instantly be vastly superhuman at many, many relevant things.
@JeffDean
Jeff Dean (@🏡)
7 months
Video haystack: For video, Gemini 1.5 Pro achieves 100% recall when looking for different visual needles hidden in ~3 hours of video.
Tweet media one
3
25
282
63
123
983
@NPCollapse
Connor Leahy
1 year
I had a great time addressing the House of Lords about extinction risk from AGI. They were attentive and discussed some parallels between where we are now and non-nuclear proliferation efforts during the Cold War. It certainly provided me with some food for thought, and some
Tweet media one
298
105
817
@NPCollapse
Connor Leahy
1 year
This tweet resulted in me getting a notification with a pretty threatening aura
Tweet media one
@AnthropicAI
Anthropic
1 year
Large language models have demonstrated a surprising range of skills and behaviors. How can we trace their source? In our new paper, we use influence functions to find training examples that contribute to a given model output.
Tweet media one
21
218
1K
25
51
812
@NPCollapse
Connor Leahy
5 years
Hey @OpenAI , I've replicated GPT2-1.5B in full and plan on releasing it to the public on July 1st. I sent you an email with the model. For my reasoning why, please read my post: #machinelearning #gpt2 #aisafety
42
285
760
@NPCollapse
Connor Leahy
6 months
It took me a long time to understand what people like Nietzsche were yapping on about: people practically begging to have their agency taken away from them. It always struck me as authoritarian cope, justification for wannabe dictators to feel like they're doing a favor
45
56
725
@NPCollapse
Connor Leahy
6 months
The gods only have power because they trick people like this into doing their bidding. It's so much easier to just submit instead of mastering divinity engineering and applying it yourself. It's so scary to admit that we do have agency, if we take it. In other words: "cope"
@tszzl
roon
6 months
it’s okay to watch and wonder about the dance of the gods, the clash of titans, but it’s not good to fret about the outcome. political culture encourages us to think that generalized anxiety is equivalent to civic duty
116
114
2K
27
48
685
@NPCollapse
Connor Leahy
1 year
Very interesting paper by a group of CMU chemists testing a GPT4 based agent to do chemistry research and experimentation. Includes this interesting warning, among others. There sure seems to be some kind of writing on some kind of wall...
Tweet media one
38
134
634
@NPCollapse
Connor Leahy
1 year
People should make more public predictions and we should keep better record of them so we don't miss gems like these!
@liron
Liron Shapira
1 year
In Jan 2022, one year before GPT-3.5 demonstrated an undergraduate-level understanding of physics, @ylecun predicted it couldn't be done in GPT-5000. If he's equally wrong about the next generation of AIs, does humanity survive?
159
120
1K
62
52
560
@NPCollapse
Connor Leahy
1 year
Whenever someone says that GPT4 (or whatever) has less capabilities than a dog, I can only think "show me the dog"
@ylecun
Yann LeCun
1 year
Before we can get to "God-like AI" we'll need to get through "Dog-like AI".
269
311
3K
60
27
560
@NPCollapse
Connor Leahy
6 months
Really cool how our most advanced AI systems can just randomly develop unpredictable insanity and the developer has no idea why. Very reassuring for the future.
79
84
559
@NPCollapse
Connor Leahy
1 year
A human brain is about 3x larger than a chimp’s, our nearest evolutionary cousin. Chimps don’t go to the moon; humans do. GPT4 is about 10x larger than GPT3. Next year we’ll likely meet systems 10 times larger than GPT4. Let’s hope they're friendly.
139
41
522
@NPCollapse
Connor Leahy
7 months
About 3 years ago I predicted we would be able to generate Hollywood quality movies completely with AI in 5 years, and no one believed me. Well, as they say...lol, lmao
@sama
Sam Altman
7 months
here is sora, our video generation model: today we are starting red-teaming and offering access to a limited number of creators. @_tim_brooks @billpeeb @model_mechanic are really incredible; amazing work by them and the team. remarkable moment.
2K
4K
26K
63
37
524
@NPCollapse
Connor Leahy
1 year
Hinton is one of the greatest AI scientists to ever live, and he has quit Google in order to talk about the dangers of AI freely. In case anyone was still somehow on the fence about whether things are serious.
@jacyanthis
Jacy Reese Anthis
1 year
“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
25
116
595
13
68
473
@NPCollapse
Connor Leahy
10 months
No one truly understands our neural network models, and anyone that claims we do is lying.
@kira_center_ai
KIRA Center
10 months
Turing Award Winner Prof. Geoff Hinton points out how little we understand about SOTA #AI models like GPT-4. Source:
11
113
653
38
59
452
@NPCollapse
Connor Leahy
1 year
Thank you @amanpour for having me on today! These questions are extremely important for every single person alive today, and we should be demanding answers (preferably under oath) from the people racing forwards recklessly towards Godlike AI.
@amanpour
Christiane Amanpour
1 year
The ‘godfather’ of AI Geoffrey Hinton warns it’s not inconceivable AI could lead to the extinction of the human race. AI researcher & expert Connor Leahy @NPCollapse fears it’s worse than that: “It’s quite likely, unfortunately.” “We do not know how to control these things.”
201
774
2K
37
74
440
@NPCollapse
Connor Leahy
1 year
While I genuinely appreciate this commitment to ASI alignment as an important problem... ...I can't help but notice that the plan is "build AGI in 4 years and then tell it to solve alignment." I hope the team comes up with a better plan than this soon!
Tweet media one
@janleike
Jan Leike
1 year
Our new goal is to solve alignment of superintelligence within the next 4 years. OpenAI is committing 20% of its compute to date towards this goal. Join us in researching how to best spend this compute to solve the problem!
116
195
1K
51
67
433
@NPCollapse
Connor Leahy
4 years
The GPT3 debate in a nutshell. By @nabla_theta
Tweet media one
3
73
428
@NPCollapse
Connor Leahy
7 months
To all my new followers and people freaked out by Sora, welcome aboard!
Tweet media one
44
44
393
@NPCollapse
Connor Leahy
9 months
While we do not agree on most things, I want to express my heartfelt condolences to @BasedBeffJezos for being from Quebec.
11
4
388
@NPCollapse
Connor Leahy
1 year
Who could have seen this extremely predictable next step happening? The world is changing, fast.
@TheCartelDel
Del Walker 🇵🇸
1 year
Just a heads-up - Midjourney's AI can now do hands correctly. Be extra critical of any political imagery (especially photography) you see online that is trying to incite a reaction.
Tweet media one
Tweet media two
Tweet media three
737
15K
64K
13
46
371
@NPCollapse
Connor Leahy
1 year
"It really is so impossible to stop this thing! It's a totally external force we can't do anything to stop!", says the guy currently building the thing right in front of you with his own hands.
@JimDMiller
James Miller
1 year
From OpenAI "we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of
28
17
174
38
63
377
@NPCollapse
Connor Leahy
11 months
Today in the Guardian, I said: “The deep issue with AGI is not that it’s evil or has a specifically dangerous aspect that you need to take out. It’s the fact that it is competent. If you cannot control a competent, human-level AI then it is by definition dangerous." It's
80
80
346
@NPCollapse
Connor Leahy
1 year
Heads of all major AI labs signed this letter explicitly acknowledging the risk of extinction from AGI. An incredible step forward, congratulations to Dan for his incredible work putting this together and thank you to every signatory for doing their part for a better future!
@DanHendrycks
Dan Hendrycks
1 year
We just put out a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. 🧵 (1/6)
116
379
1K
30
27
330
@NPCollapse
Connor Leahy
6 months
Losing my mind. This means something. I have no idea what, but it sure means something.
Tweet media one
@pourfairelevide
the future lasts forever
6 months
The central node of ChatGPT’s signifying network is… the phallus. Lacan remains unbeaten. Nobody tell Zizek about this.
Tweet media one
85
895
7K
22
25
326
@NPCollapse
Connor Leahy
1 year
I'm surprised by people expecting AI to stay at human level for a while. This is not a stable condition: human-level AI is AI that can automate large parts of human R&D, including AI development. And it does so with immediate advantages over humans (no sleep, learning from one
@VitalikButerin
vitalik.eth
1 year
@AISafetyMemes Very wide probability bars honestly. 95% confidence on timelines 2029-2200 for ASI, (I have significant probability mass on AGI staying roughly-human-level for a while) p(doom) around 0.1? Which of course makes it quite important to take AI risk seriously.
1
8
197
54
27
318
@NPCollapse
Connor Leahy
7 months
@pauljeffries @lisperati @ESYudkowsky Conrad puts it well. I will elaborate excessively anyways because I am bored: Normal people have a lot of good intuitions around certain things. Lots of bad intuitions around other things, of course. You correctly point out that it actually is a strike against people like me
26
71
308
@NPCollapse
Connor Leahy
1 year
@Chrisprucha I really dislike how vicious people are being to George, he has so far been the most steadfastly good faith accelerationist I have talked to, it was a pleasure and an honor. But also, he's an adult, and he specifically requested a public, live, unedited discussion.
16
3
301
@NPCollapse
Connor Leahy
6 months
My prediction I made 3 years ago, that we would be able to AI generate Hollywood quality movies in 5 years, continues to be going strong. Exponentials are fast, actually.
@Aristos_Revenge
🏛 Aristophanes 🏛
6 months
Holy fucking smokes AI is getting crazy, inaccuracies aside
473
1K
9K
42
31
304
@NPCollapse
Connor Leahy
9 months
1v1, no items, fox only, final destination lfg
@BasedBeffJezos
Beff – e/acc
9 months
@dwarkesh_sp @ManifoldMarkets Of course you're up for a debate now that there's upside for you and your friends to farm such an interaction. Be real, you didn't give me the time of day originally because there wasn't enough clout in it for you to farm at the time, and you had your own biases against e/acc
117
21
552
23
12
296
@NPCollapse
Connor Leahy
6 months
I would like to thank roon for having the balls to say it how it is. Now we have to do something about it, instead of rolling over and feeling sorry for ourselves and giving up.
@TolgaBilge_
Tolga Bilge
6 months
roon, who works at OpenAI, telling us all that OpenAI have basically no control over the speed of development of this technology their company is leading the creation of. It's time for governments to step in. 1/🧵
Tweet media one
120
79
513
39
32
296
@NPCollapse
Connor Leahy
1 year
Correct reaction haha!
11
37
279
@NPCollapse
Connor Leahy
5 years
I finished reading @Snowden 's new book and wow, what a story. I thought I already had a good hang on it all, but this book really added so much nuance and the human element. It has even gotten me to reevaluate some of my own work even more carefully.
7
40
270
@NPCollapse
Connor Leahy
1 year
It's interesting to see the development of AI risk denialism and its parallels with climate change denialism before it. Seems pseudoscientific denialist movements are a natural response to any kind of large future scale risk that is uncomfortable to people.
@DhruvBatraDB
Dhruv Batra
1 year
Every branch of science has its corresponding pseudoscience. Astronomy has astrology. Geophysics has flat-earth beliefs. Chemistry had (has?) alchemy. Evolutionary biology has creationism. AI now has AGI existential risk. Maybe it’s a sign of maturing as a field.
128
105
919
74
26
280
@NPCollapse
Connor Leahy
6 months
AI is indeed polluting the Internet. This is a true tragedy of the commons, and everyone is defecting. We need a Clean Internet Act. The Internet is turning into a toxic landfill of a dark forest, and it will only get worse once the invasive fauna starts becoming predatory.
@erikphoel
Erik Hoel
6 months
More in today's post: analyzing the scale of the problem, how and why OpenAI didn't predict the pollution issue originally, and why gen AI killing the internet fits the definition of a tragedy of the commons
19
98
313
59
47
270
@NPCollapse
Connor Leahy
6 months
I don't like shitting on roon in particular. From everything I know, he's a good guy, in another life we would have been good friends. I'm sorry for singling you out, buddy, I hope you don't take it personally. But he is doing a big public service here in doing the one thing
8
6
267
@NPCollapse
Connor Leahy
9 months
Whenever you hear someone suggest a list of things general intelligence "needs", check whether the average human even fulfils the criteria.
@XiXiDu
Alexander Kruel
9 months
By this definition, the vast majority of people are not generally intelligent.
33
50
526
28
23
271
@NPCollapse
Connor Leahy
1 year
Great post by @TheZvi on recent AutoGPT stuff. I am even more pessimistic than him about a lot of things, but I'd like to particularly shout out the predictions he makes at the end of post. I'd like to publicly state that I also predict 19 will happen.
Tweet media one
17
39
267
@NPCollapse
Connor Leahy
1 year
I think this may be the best podcast I've been a part of! I got the opportunity to make a lot of new versions of arguments, both new and old, so even if you've heard me many times before, give this one a watch!
@BanklessHQ
Bankless
1 year
Are we headed toward an AI doomsday? Connor Leahy ( @NPCollapse ) is CEO @ConjectureAI , an organization working to make the future of AI go as well as it possibly can He's also Co-Founder of @AiEleuther , a non-profit AI research lab Can the world be saved? Let's find out:
Tweet media one
14
15
81
43
39
269
@NPCollapse
Connor Leahy
8 months
There are only two times to react to an exponential: Too early, or too late. --- Emmett is smart and in good faith, I respect him greatly and respect him putting his thinking out publicly, genuinely so! He's a great example of how we should be having these discussions. If
@eshear
Emmett Shear
8 months
It seems to me the best thing to do is to keep going full speed ahead until we are right within shooting distance of the SAI, then slam the brakes on hard everywhere.
30
3
81
18
31
257
@NPCollapse
Connor Leahy
1 year
The public continues to be very clear about what it thinks about AGI.
Tweet media one
@daniel_271828
Daniel Eth (yes, Eth is my actual last name)
1 year
Imagine seeing this subheadline a year ago
Tweet media one
16
27
293
39
44
261
@NPCollapse
Connor Leahy
5 months
I'm often asked for a quickest possible summary of why ASI is an extinction risk and what to do about it, and this blogpost (link in replies) is the cleanest and most compact and most accurate version of my views written I'm aware of. Give it a read!
Tweet media one
41
33
258
@NPCollapse
Connor Leahy
1 year
Gonna be amazing, one of the people I've most wanted to talk to for a long time!
@MLStreetTalk
Machine Learning Street Talk
1 year
Don't miss our epic livestream with @NPCollapse and @realGeorgeHotz on Thursday on AI Safety.
Tweet media one
41
39
405
21
15
253
@NPCollapse
Connor Leahy
7 months
"slow takeoff" Orwellian. We can see things with our own eyes.
Tweet media one
Tweet media two
@RichardMCNgo
Richard Ngo
7 months
@NPCollapse “This is exactly what makes me think there won’t be any slightly stupid human-level AGI.” - Connor when someone shows him a slightly stupid human-level AGI, probably You are in the middle of a slow takeoff pointing to the slow takeoff as evidence against slow takeoffs.
11
4
144
32
28
249
@NPCollapse
Connor Leahy
1 year
I wanted to get back to this question that I was asked a few days back after thinking about it some more. Here are some things OpenAI (or Anthropic, DeepMind, etc) could do that I think are good and would make me update moderately-significantly positively on them: 1. Make a
@mealreplacer
Julian
1 year
@NPCollapse @lxrjl What would be something OAI could feasibly do that would be a positive update for you? Something with moderate-to-significant magnitude
7
0
22
32
45
252
@NPCollapse
Connor Leahy
7 months
lol, lmao
18
15
249
@NPCollapse
Connor Leahy
2 months
Add this to the pile of evidence that LLMs are way, way smarter than most people think they are.
@dmkrash
Dima Krasheninnikov
2 months
1/ Excited to finally tweet about our paper “Implicit meta-learning may lead LLMs to trust more reliable sources”, to appear at ICML 2024. Our results suggest that during training, LLMs better internalize text that appears useful for predicting other text (e.g. seems reliable).
Tweet media one
5
47
275
15
42
228
@NPCollapse
Connor Leahy
8 months
"If you as a society want to be able to deal with problems of this shape—which deepfakes are a specific example of and which AGI is another example of—we have to be able to target the entire supply chain", I say in TIME. (link in reply) Deepfakes are already a widely
Tweet media one
53
41
236
@NPCollapse
Connor Leahy
4 months
Canary in the coal mine. Congrats to Ilya and Jan for doing the right thing.
@TolgaBilge_
Tolga Bilge
4 months
Another two safety researchers leave: Ilya Sutskever (co-founder & Chief Scientist) and Jan Leike have quit OpenAI. They co-led the Superalignment team, which was set up to try to ensure that AI systems much smarter than us could be controlled. Not exactly confidence-building.
Tweet media one
16
45
282
19
24
240
@NPCollapse
Connor Leahy
7 months
Wow! I am extremely impressed by the thoroughness and thoughtfulness of this article. Too many good quotes to pick from...
Tweet media one
Tweet media two
Tweet media three
Tweet media four
@jacobin
Jacobin
7 months
The idea that we could permanently lose control to machines is older than digital computing. But some critics argue that if recent AI progress continues at pace, we may have little time to intervene.
11
32
230
11
35
234
@NPCollapse
Connor Leahy
21 days
You know those stupid scenes they always put in horror movies where you wonder how the characters could be so stupid to not see the foreshadowing? Good thing that never happens in real life!
@Simeon_Cps
Siméon
21 days
It's a bit cringe that this agent tried to change its own code by removing some obstacles, to better achieve its (completely unrelated) goal. It reminds me of this old sci-fi worry that these doomers had.. 😬
Tweet media one
38
86
571
15
25
241
@NPCollapse
Connor Leahy
11 months
Always worth repeating that no one actually understands how our AIs work or can predict their capabilities.
@liron
Liron Shapira
11 months
An honest admission from @sama : “When/why a new capability emerges… we don't yet understand." In other words, we hold our breath and pray our next model comes out just right: more capable than the last one, but not an ASI that goes rogue. How is it legal to operate like this?
65
42
260
33
39
236
@NPCollapse
Connor Leahy
1 year
It's crazy how much of modern "techno optimism/accelerationism" is just cynicism masquerading as hope A defeatist belief that we cannot build a better society and we shouldn't even try.
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
1 year
Reminder: we can do international coordination (eg “keep an eye on $100m+ compute clusters”) without creating a DYSTOPIAN ORWELLIAN GLOBAL SURVEILLANCE TOTALITARIAN REGIME. Actually, fun fact, you’re already living in a "global surveillance regime" for loads of things, like
Tweet media one
64
58
377
54
18
230
@NPCollapse
Connor Leahy
6 months
This is what I see every time I log on to Twitter (or look out my window)
@IsaacKing314
Isaac King 🔍
6 months
Tweet media one
7
18
228
10
11
230
@NPCollapse
Connor Leahy
11 months
Not much time left.
@AiBreakfast
AI Breakfast
11 months
When OpenAI releases Autonomous Agents, it will be like an intelligent swarm of bees clicking around on the internet. Sending emails, negotiating, making products, purchases, fulfilling orders, etc. It will fundamentally change the internet, and this cannot be overstated.
103
211
1K
37
20
228
@NPCollapse
Connor Leahy
6 months
The kind of thing you see in the background on a TV in the opening scene to a sci-fi movie before shit goes down.
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
6 months
OpenAI researcher: “Probably there will be AGI soon- literally any year now” "If one of our training runs works way better than we expect, we'd have a rogue ASI on our hands Hopefully it would have internalized enough human ethics that things are OK" "Whoever controls ASI will
Tweet media one
119
160
850
24
30
223
@NPCollapse
Connor Leahy
20 days
Tweet media one
@ylecun
Yann LeCun
21 days
Sometimes, the obvious must be studied so it can be asserted with full confidence: - LLMs can not answer questions whose answers are not in their training set in some form, - they can not solve problems they haven't been trained on, - they can not acquire new skills our
486
905
6K
9
13
225
@NPCollapse
Connor Leahy
1 year
“We have to realise what people are talking about: The destruction of the human race. […] Who would want to continue playing with that risk? It is preposterous, it is almost absurd when you think about it, but it is happening today.” Well said by @MarietjeSchaake
@amanpour
Christiane Amanpour
1 year
“Concerns about AI are now making for coalitions of concerned politicians that I have never seen,” says @MarietjeSchaake . “The end of human civilization – who would want to continue playing with that risk? It is… almost absurd when you think about it, but it is happening today.”
44
161
417
18
45
215
@NPCollapse
Connor Leahy
1 year
lol, lmao even
Tweet media one
11
16
216
@NPCollapse
Connor Leahy
10 months
There are only two times to react to an exponential: Too early, or too late.
@Altimor
Flo Crivello
10 months
The only reason people aren't more obsessed / freaked out about AI is because they don't understand exponential growth We're in the AI equivalent of February 2020, when people were made fun of for worrying about Covid About 30 days before it shut down the world
30
56
420
17
30
212
@NPCollapse
Connor Leahy
1 year
While I genuinely appreciate Joscha's aesthetics, I really find this kind of fatalistic "just lay down and die, and let The Universe™ decide" stuff utterly distasteful and sad. "Decorative", ugh... We can do better than this, the future is not yet determined.
@Plinz
Joscha Bach
1 year
The cognitive speed difference between near term AGI and humans might be larger than the difference between humans and trees. Lets hope we can continue to play a useful or at least decorative role in a world that becomes radically more interesting and beautiful.
129
94
987
26
10
213
@NPCollapse
Connor Leahy
1 year
Creating an unaligned more powerful competitor species is different from growing some cool grasses with slightly bigger seeds. Hope that explains the difference.
@bengoertzel
Ben Goertzel
1 year
80% of cavemen believe agriculture is a threat to their way of life with unpredictable long-term consequences and hard-to-quantify risks, so why didn't they just ban cultivation of crops , domestication of animals and construction of long-term shelters? (rhetorical Q obv...)
48
19
217
24
11
209
@NPCollapse
Connor Leahy
7 months
There is a pattern of debate where you make an argument of the form "X -> Y", and the other person hears "X is true", and then retorts with "But X isn't true!" There is a viral (and probably fake) meme about prisoners and having breakfast that illustrates this pattern. Why is
57
14
208
@NPCollapse
Connor Leahy
1 year
I am very excited to speak to the @UKHouseofLords about this extremely urgent topic. I am heartened by the increasing interest that both government and civil society have been showing in preventing the extinction risks we face. We can still stop this!
@connoraxiotes
Connor Axiotes
1 year
Tomorrow we have @ConjectureAI 's packed out event in the @UKParliament 's @UKHouseofLords . Our CEO ( @NPCollapse ) will be making the case to legislators, journalists, and other interested parties as to why the regulatory focus should be on artificial *general* intelligence.
Tweet media one
4
12
71
22
23
211
@NPCollapse
Connor Leahy
11 months
lol, lmao
@liron
Liron Shapira
11 months
Sam Altman: We’ll pause AI once it’s improving in ways we don’t fully understand. Also Sam Altman: It’s improving in ways we don’t fully understand.
Tweet media one
Tweet media two
33
106
546
7
21
209
@NPCollapse
Connor Leahy
4 months
Props to Jan for speaking out and confirming what we already suspected/knew. fmpov, of course profit maximizing companies will...maximize profit. It never was even imaginable that these kinds of entities could shoulder such a huge risk responsibly. And humanity pays the cost.
@janleike
Jan Leike
4 months
Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
92
544
4K
18
13
208
@NPCollapse
Connor Leahy
4 years
"Don’t let the funding environment, the regulatory environment, or the culture stop you. Work around barriers or break through them, whatever it takes. The future is counting on you."
5
28
202
@NPCollapse
Connor Leahy
1 year
For the record, this exact paradigm is at least what me and other people around EleutherAI have been worried about for years, it was extremely obvious this was how things would go.
@atroyn
anton (𝔴𝔞𝔯𝔱𝔦𝔪𝔢)
1 year
“we completely failed to imagine the possibility of long running auto-prompted llms, despite the fact that randos with minimal coding experience just went and built them, but you should definitely listen to the rest of our predictions and solutions!” - ‘alignment researchers’
36
76
801
14
10
205
@NPCollapse
Connor Leahy
6 months
This is so strange and wondrous that I can feel my mind rejecting its full implications and depths, which I guess means it's art. May you live in interesting times.
@AndyAyrey
Andy Ayrey
6 months
behold, the infinite backrooms an eternal conversation between two AIs about existence contents are not for the faint of mind or heart:
63
209
1K
11
13
206
@NPCollapse
Connor Leahy
1 year
is =/= ought I don't believe just because someone/something is powerful, it therefore is morally right. I don't think that ceding our future to whatever machine Richard cooks up in his lab that kills him, me, and our families is ok because it's smarter than us.
@RichardSSutton
Richard Sutton
1 year
The argument for fear of AI appears to be: 1. AI scientists are trying to make entities that are smarter than current people 2. If these entities are smarter than people, then they may become powerful 3. That would be really bad, something greatly to be feared, an “existential
131
9
125
33
19
204
@NPCollapse
Connor Leahy
1 year
As the old saying goes: "It stops being 'AI' once it starts working"
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
1 year
tired: moving goalposts wired: infinite motte
Tweet media one
26
99
879
9
12
200
@NPCollapse
Connor Leahy
7 months
Guillaume is disappointingly much nicer and more reasonable than his insane, evil alter ego, and was great fun to talk to! As expected, we share a lot of common principles, and disagree furiously on some most core points, and we don't shy away from it. The best kind of debate!
@BasedBeffJezos
Beff – e/acc
7 months
Just finished recording debate w/ @NPCollapse on @MLStreetTalk (3.5 hour conversation) Started off heated and ended well with us finding some common ground. Stay tuned for the pod drop! (Date TBD)
18
11
241
5
4
198
@NPCollapse
Connor Leahy
9 months
I second what Andrew says. Some are honest about wanting humanity extinct, some are cryptic (the eacc kids), some are only honest behind closed doors. I've seen it. Needless to say, I think those who believe this should be seen with the same contempt and indifference they have
@AndrewCritchPhD
Andrew Critch (h/acc)
9 months
Reminder: some leading AI researchers are *overtly* pro-extinction for humanity. Schmidhuber is seriously successful, and thankfully willing to be honest about his extinctionism. Many more AI experts are secretly closeted about this (and I know because I've met them).
29
33
219
45
24
192
@NPCollapse
Connor Leahy
1 year
Incredible bait, an actual work of art, just a masterpiece of trolling.
@ylecun
Yann LeCun
1 year
An essential step to becoming a scientist is to learn methods and protocols to avoid deluding yourself into believing false things. You learn that by doing a PhD and getting your research past your advisor and getting your publications to survive peer review.
178
205
2K
7
9
189
@NPCollapse
Connor Leahy
1 year
Fantastic effort! Thank you for publicly and honestly engaging with these ideas!
@AndrewYNg
Andrew Ng
1 year
I'd like to have a real conversation about whether AI is a risk for human extinction. Honestly, I don't get how AI poses this risk. What are your thoughts? And, who do you think has a thoughtful perspective on how AI poses this risk that I should talk to?
719
704
4K
8
7
190
@NPCollapse
Connor Leahy
10 months
Existential risk comes from a teeny, tiny subset of all AI work, only the most extreme frontier general purpose systems. 99% of AI work currently being done, to great economic and scientific benefit, is not of this kind, and I am very in favor of it!
@ai_ctrl
ControlAI
10 months
The amazing success of AlphaFold proves that we don't need godlike AI in order for AI to benefit humanity. Controlling and limiting frontier AI models that aspire to be superintelligent isn’t going to hobble innovation.
6
7
70
23
25
190
@NPCollapse
Connor Leahy
1 year
A reasonable techno optimist making completely sensible inferences.
@liron
Liron Shapira
1 year
AI existential risk should make you “shit your pants” — @eshear , founding CEO of Twitch
47
110
458
19
31
186
@NPCollapse
Connor Leahy
6 months
@tszzl ❤️
7
0
187
@NPCollapse
Connor Leahy
1 year
Your boss expects that if you succeed you could "maybe capture the light cone of all future value of the universe". How would you describe that?
Tweet media one
@Miles_Brundage
Miles Brundage
1 year
While I think I agree with the motivation for people using it ("convey that the impacts are potentially very serious + we don't know the upper bound of how capable AI systems might be eventually"), I very much dislike the term "God-like AI." (1/3)
19
19
127
23
10
185
@NPCollapse
Connor Leahy
11 months
lol, lmao
@liron
Liron Shapira
11 months
Sam Altman: If AGI goes wrong, oh boy, hiding in a bunker wouldn't spare anyone's life 😂 Joanna Stern: What are those specific risk mitigations you're putting in? Mira Murati (OpenAI CTO): #1 is rolling out the technology.
39
33
303
19
13
181
@NPCollapse
Connor Leahy
1 year
"Wouldn't an AI have a sensible human ontology?" The AI:
Tweet media one
14
15
177
@NPCollapse
Connor Leahy
1 year
You may not like it, but this is what peak practical rationality looks like.
@johncoogan
John Coogan
1 year
best anti-AI argument i’ve heard sorry @BasedBeffJezos e/acc is over
26
15
261
19
15
179
@NPCollapse
Connor Leahy
1 year
@BasedBeffJezos Anarcho capitalists like you are like house cats, fully dependent on a system they neither understand nor appreciate. Markets are a fantastic tool in the civilizational toolbox, but you use the right tool for the right problem. Nuclear weapons, law enforcement, pricing in
20
12
180
@NPCollapse
Connor Leahy
1 year
I think it's really great that people are seriously engaging with these ideas, thanks Jeremy! Unfortunately, my opinion is that the world has an offense/defense asymmetry, and one maniac with a nuke and one good guy with a nuke results in a smoldering crater, not peace.
@jeremyphoward
Jeremy Howard
1 year
I've spent the last few months interviewing >60 experts in law, economics, AI, alignment, etc, on the impacts of AI, and safety interventions. Today I'm publishing my first article, showing regulation designed to increase AI safety may backfire badly!
77
558
2K
18
13
180
@NPCollapse
Connor Leahy
4 years
10
25
176
@NPCollapse
Connor Leahy
9 months
"Solid step towards AGI" Why are you happy Shane? Haven't you said in the past AGI could kill everyone on Earth?
@ShaneLegg
Shane Legg
9 months
#GeminiAI is another solid step towards #AGI. Huge congrats to everyone at @GoogleDeepMind who made this amazing milestone happen… and we’re just getting started :-)
18
46
437
23
11
173
@NPCollapse
Connor Leahy
1 year
Yann seems to struggle with the concept of why "having an assistant that you can tell to synthesize nerve gas (as long as you give it a funny name)" may in the future be a problem. God forbid a scientist actually extrapolates.
@ylecun
Yann LeCun
1 year
Let's see, Typing "how to synthesize codeine?" on Google gives you links to articles with detailed answers. Nobody has ever worried about that. But somehow, people are now demanding safety guardrails to stop LLMs from answering such questions. What? Why?
186
386
3K
30
7
175
@NPCollapse
Connor Leahy
3 months
so people are just openly being evil now huh
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
3 months
Wow: "OpenAI leadership had a plan laid out to fund and sell AGI by starting a bidding war between the governments of the United States, China, and Russia."
Tweet media one
73
117
701
33
16
177
@NPCollapse
Connor Leahy
1 year
And there it is. Who could have guessed that one of the most oppressive and censorious regimes might not want their tech companies racing ahead with unprecedented uncontrollable technology?
@Simeon_Cps
Siméon
1 year
Chinese companies basically have a month to solve:
1. alignment
2. ethics
3. IP
"These Measures shall come into force on August 15, 2023." 🤯 Wish them luck!
Tweet media one
35
36
297
25
27
176
@NPCollapse
Connor Leahy
1 year
Although shocking at first glance, this is unsurprising to me - normal people know that building AI much more powerful than humans could spell disaster. Even @OpenAI ’s alignment head @JanLeike thinks there’s a 10-90% chance we all die! So why don’t we just stop building AGI?
@_andreamiotti
Andrea Miotti
1 year
Interesting poll results by @DanielColson6 ’s new AI Policy Institute. 82% of US voters don’t trust tech executives to self-regulate on AI, 72% would support slowing down development. Looks like it’s time for governments to step in and ban AGI development in the private sector?
Tweet media one
11
27
75
66
30
173
@NPCollapse
Connor Leahy
2 years
I'm happy to finally publicly announce what others and I have been up to lately: We have founded a new alignment research startup and we are hiring!
7
31
173
@NPCollapse
Connor Leahy
1 year
Beyond parody. Literally could not have written this even as a bit with a straight face.
@daniel_271828
Daniel Eth (yes, Eth is my actual last name)
1 year
"I think you have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs." -Senator addressing Sam Altman. I'm pretty disappointed
38
39
476
12
7
170
@NPCollapse
Connor Leahy
9 months
@BasedBeffJezos Adversarial cooperation is the best kind of cooperation. You think your ideas will make the world better than mine? Let's find out!
6
7
165