Andrea Miotti Profile
Andrea Miotti

@_andreamiotti

Followers: 1,101
Following: 315
Media: 35
Statuses: 1,220

Trying to make the future go well. executive director @ai_ctrl , advisor @ConjectureAI

Joined April 2020
Pinned Tweet
@_andreamiotti
Andrea Miotti
10 months
The AI Summit consensus is clear: it's time for international measures. Here is a concrete proposal. In our recent paper, @jasonhausenloy , Claire Dennis and I propose an international institution to address extinction risk from AI: MAGIC, a Multinational AGI Consortium.
Tweet media one
@_andreamiotti
Andrea Miotti
9 months
Meanwhile, the previous CEO: "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity." (Sam Altman, 2015)
@chrmanning
Christopher Manning
9 months
I’ve kept quiet on the @OpenAI fiasco, since I also don’t know what’s going on, 🤷 but I can’t possibly support today’s interim CEO—the below in a thread on “50/50 everyone gets paperclipped & dies”—or a residue board that believes in these EA-infused fantasy lands. HT @vkhosla .
@_andreamiotti
Andrea Miotti
1 year
Therapist: "The superintelligent AI reaching back from the future isn't real, it can't hurt you." Meanwhile in Italy:
Tweet media one
@_andreamiotti
Andrea Miotti
11 months
In @TIME Magazine, I make the case that we can prevent AI disaster much like we prevented nuclear catastrophe, by: (1) Banning the development of AI systems above a certain threshold of computing power. (2) Building a multilateral institution to house risky AI research: MAGIC
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
Throwback to half a year before the Transformers paper, a year and a half before GPT-1, and three and a half years before GPT-3.
@fchollet
François Chollet
8 years
@xamat @Smerity @rabois the belief that we are anywhere close to human-level natural language comprehension or generation is pure DL hype.
@_andreamiotti
Andrea Miotti
1 year
It's important to make this clear and explicit: advanced AI might cause human extinction. This is the magnitude of the threat we face. For too long, this was a widespread view in the field but too few dared to publicly commit to it. Great to see heads of all AGI labs sign this
@DanHendrycks
Dan Hendrycks
1 year
We just put out a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. 🧵 (1/6)
@_andreamiotti
Andrea Miotti
1 year
Anthropic CEO stated current AI models can “fill in some of the steps” and that models in "2 to 3 years [...] will be able to fill in all the missing pieces" for bioweapons. Given this, we should establish a moratorium on AI proliferation right now, not race ahead.
@Daily_Express
Daily Express
1 year
Terrorists will develop deadly bioweapons within years using AI, tech boss fears @_andreamiotti
@_andreamiotti
Andrea Miotti
1 year
"34% of CEOs said AI could potentially destroy humanity in ten years and 8% said that could happen in five years." "Survey included responses from 119 CEOs from a cross-section of business, including Walmart CEO, Coca-Cola CEO, leaders of IT companies."
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
Some striking reflections by Yoshua Bengio in his new post: "We do not yet know how to make an AI agent controllable and thus guarantee the safety of humanity! And yet we are – myself included until now – racing ahead towards building such systems."
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
I want humanity to survive: let's work to make it happen. There's no "inevitable succession": we have agency over the future of our species.
@RichardSSutton
Richard Sutton
1 year
We should prepare for, but not fear, the inevitable succession from humanity to AI, or so I argue in this talk pre-recorded for presentation at WAIC in Shanghai.
@_andreamiotti
Andrea Miotti
1 year
Interesting poll results by @DanielColson6 ’s new AI Policy Institute. 82% of US voters don’t trust tech executives to self-regulate on AI, 72% would support slowing down development. Looks like it’s time for governments to step in and ban AGI development in the private sector?
Tweet media one
@_andreamiotti
Andrea Miotti
9 months
@AISafetyMemes Operation Boiling Frog proceeding well
@_andreamiotti
Andrea Miotti
10 months
🚨 POLL SHOWS BRITISH PUBLIC SUPPORTS MORATORIUM ON SMARTER-THAN-HUMAN AI, LIMITS ON AI SCALING 🚨 Our ( @ai_ctrl ) comprehensive new @YouGov poll ‘found that most people in the UK would support strict curbs on the development of such powerful technology, including a global treaty
Tweet media one
Tweet media two
Tweet media three
Tweet media four
@_andreamiotti
Andrea Miotti
1 year
"Sure the model will be very smart, but it won't be able to take action in the real world!"
@andrewwhite01
Andrew White 🐦‍⬛
1 year
We report a model that can go from natural language instructions, to robot actions, to synthesized molecule with an LLM. We synthesized catalysts, a novel dye, and insect repellent from 1-2 sentence instructions. This has been a seemingly unreachable goal for years! 1/3
Tweet media one
@_andreamiotti
Andrea Miotti
2 years
Some of my and @Gabe_cc 's thoughts on the current state of the AI field The race to build increasingly general systems is accelerating, yet we still have no clue about how to control or understand these systems
@_andreamiotti
Andrea Miotti
1 year
The UK's new Foundation Models Taskforce, led by @soundboy , has a chance to shape bold domestic AI policy & set examples to be emulated abroad. The enormous AI challenge needs ambitious policymaking! Here are a few ideas about how to make it a success:
@_andreamiotti
Andrea Miotti
1 year
Strongly disagree. Banning very large training runs (e.g., above 10^23 FLOP) across the board is very viable: at this scale compute is easy to monitor, easy to measure, and enforcement is straightforward. This would significantly reduce the risks from AI @jackclarkSF mentions.
@jackclarkSF
Jack Clark
1 year
Will write something longer, but if best ideas for AI policy involve depriving people of the 'means of production' of AI (e.g H100s), then you don't have a hugely viable policy. (I 100% am not criticizing @Simeon_Cps here; his tweet highlights how difficult the situation is).
@_andreamiotti
Andrea Miotti
1 year
I expect 10^26 FLOP for a training run to be reached next year: GPT-4 was likely around 2-3 * 10^25 FLOP. If AGI at 10^26 should be taken seriously, I can't really see this squaring with long timelines.
@MatthewJBar
Matthew Barnett
1 year
However, now that language models are starting to have a sizable economic impact, it is worth reconsidering the lifetime anchor. The singularity is probably not imminent. But AGI at 10^26-10^29 FLOP (roughly 10-10,000x GPT-4 training) should be taken seriously.
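The FLOP comparison in the exchange above can be checked with back-of-the-envelope arithmetic (a minimal sketch: the common "6 × parameters × tokens" training-compute rule and the model configuration below are illustrative assumptions, not figures from the thread):

```python
# Rough training-compute arithmetic behind the tweet.
# Assumption: FLOP ~= 6 * parameters * tokens for dense transformer training.

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training FLOP for a dense transformer."""
    return 6 * params * tokens

gpt4_estimate = 2.5e25   # midpoint of the "2-3 * 10^25 FLOP" range in the tweet
next_threshold = 1e26    # the run size the tweet expects to be reached next year

print(next_threshold / gpt4_estimate)  # 4.0 -- only a ~4x scale-up past GPT-4
print(training_flop(5e11, 3.3e13))     # ~1e26 from a hypothetical 500B-param,
                                       # 33T-token configuration
```

On this estimate, 10^26 FLOP is roughly a single 4x scale-up beyond GPT-4, which is why the tweet treats it as imminent rather than distant.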
@_andreamiotti
Andrea Miotti
11 months
Did the IPCC's reports solve climate change in 10 years? No. Did the Montreal Protocol banning CFCs solve the ozone hole in 10 years? Yes. An annual report is not the solution to extinction risk from AI. We need a strong international response to limit further AI scaling.
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
There's a common sense solution to extinction risk from AI: just stop developing AGI!
@SigalSamuel
Sigal Samuel
1 year
EXCLUSIVE: 63% of Americans say they *do not want* AI that's smarter than humans. They want regulation to actively prevent that. @voxdotcom
@_andreamiotti
Andrea Miotti
10 months
"we have a pause at home" Pause at home:
@sama
Sam Altman
10 months
we are pausing new ChatGPT Plus sign-ups for a bit :( the surge in usage post devday has exceeded our capacity and we want to make sure everyone has a great experience. you can still sign-up to be notified within the app when subs reopen.
@_andreamiotti
Andrea Miotti
1 year
@Simeon_Cps One of the reasons why I always preferred "control problem".
@_andreamiotti
Andrea Miotti
9 months
@GaryMarcus @EmmanuelMacron Foundation models ("GPAIS" at the time) were added to the text by the *French* Presidency of the Council in 2022. Macron's government is the one that pushed for the Act to explicitly cover foundation models in the first place lol
@_andreamiotti
Andrea Miotti
9 months
Clear words in @TIME from the former President of Estonia, @KerstiKaljulaid on the last-ditch attempts to undo the AI Act. "A handful of technology firms are holding the political process hostage, threatening to sink the ship unless their systems are exempt from regulation."
Tweet media one
@_andreamiotti
Andrea Miotti
11 months
It was an honour to help brief the @UKHouseofLords about the extinction risk we face from AI, and the concrete measures we can take to solve this problem. As some in the room reminded us, governments got together to stop nuclear proliferation: we can do so again with AI.
@NPCollapse
Connor Leahy
11 months
I had a great time addressing the House of Lords about extinction risk from AGI. They were attentive and discussed some parallels between where we are now and non-nuclear proliferation efforts during the Cold War. It certainly provided me with some food for thought, and some
Tweet media one
@_andreamiotti
Andrea Miotti
10 months
Great to be on @GBNEWS speaking with @Jacob_Rees_Mogg after the UK Prime Minister's speech ahead of the Summit. @RishiSunak 's speech was historic, declaring AI an extinction risk for humanity. This would have seemed unthinkable 5 years ago, but it's the reality we face now.
@_andreamiotti
Andrea Miotti
10 months
A positive step for international coordination.
@matthewclifford
Matt Clifford
10 months
Huge moment to have both Secretary Raimondo of the US and Vice Minister Wu of China speaking about AI safety and governance at the AI Safety Summit
Tweet media one
Tweet media two
@_andreamiotti
Andrea Miotti
9 months
@dwarkesh_sp @ManifoldMarkets This should be fun, if @BasedBeffJezos actually shows up Last time with @DanHendrycks he backed out lol
@DanHendrycks
Dan Hendrycks
1 year
"The founder of effective accelerationism" and AI arms race advocate @BasedBeffJezos just backed out of tomorrow's debate with me. His intellectual defense for why we should build AI hastily is unfortunately based on predictable misunderstandings. I compile these errors below 🧵
@_andreamiotti
Andrea Miotti
9 months
Weekly reminder that we're in a strange timeline.
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
9 months
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
Tweet media one
@_andreamiotti
Andrea Miotti
11 months
@Altimor A crucial one is really "we have solved global coordination problems before, many times". People forget what we've managed to do with nuclear weapons, biological weapons, human cloning etc. Illustrative meme
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
1 year
Reminder: we can do international coordination (eg “keep an eye on $100m+ compute clusters”) without creating a DYSTOPIAN ORWELLIAN GLOBAL SURVEILLANCE TOTALITARIAN REGIME. Actually, fun fact, you’re already living in a "global surveillance regime" for loads of things, like
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
"In an ideal world where everyone is sane, world governments would come together and say, 'Oh shit, we can't let superhuman AI systems run around,'" @NPCollapse said. "They'd create an intergovernmental project, fund it, and have the best minds come together to work on the hard
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
I agree: let's only give more knowledge and agency to AIs after we have thoroughly tested their benevolence. Which means, stop new training runs larger than the current SOTA, and stop work to make AIs more agentic. Do you agree with these measures @ylecun ?
@ylecun
Yann LeCun
1 year
In this segment, @tegmark argues that more intelligence is not always beneficial, saying that the world would have been worse off if Hitler had been smarter. Max is a professor at MIT. His job, like mine, is to increase the intelligence of students and other humans. He doesn't
@_andreamiotti
Andrea Miotti
10 months
A really important point: humanity has agency over its future. Smarter-than-human AI is not "inevitable": we are the ones developing it! As a species, we can and should decide what to build, what to avoid, and what to pursue cautiously.
@michael_nielsen
Michael Nielsen
10 months
It's interesting that one of the most common responses to concerns about AI is to deny human agency: "This is inevitable." It usually seems to be motivated reasoning (someone who wants it, so uses this to say that people who don't are being unrealistic - as though making sand
@_andreamiotti
Andrea Miotti
8 months
Great to be interviewed on @talkTV by @rosannalockwood , and to have @ai_ctrl 's new campaign video shown on live TV. Deepfakes are out of control, and electoral deepfakes are just the tip of the iceberg: 96% of deepfakes are pornographic, most others are used for fraud. Like
@_andreamiotti
Andrea Miotti
1 year
My ideas on what the Foundation Models Taskforce should do are in Politico today. Here's the full post:
Tweet media one
@_andreamiotti
Andrea Miotti
1 year
Hard, but we have to try! Both unfettered corporate AGI proliferation and AGI nationalism are not stable equilibria, and will predictably lead to disaster. We need a multilateral institution to make it to a stable, safe world.
@rumtin
Rumtin
1 year
"MAGIC (the Multilateral AGI Consortium) would be the world’s only advanced and secure AI facility focused on safety-first research and development of advanced AI." Hard to see any country or company surrendering 'godlike AI' to an international authority.
@_andreamiotti
Andrea Miotti
9 months
@davidad Is this the AGI fire alarm?
@_andreamiotti
Andrea Miotti
1 year
As I told @TechCrunch , "Great to see @vonderleyen , [European] Commission president, acknowledge that AI constitutes an extinction risk, as even the CEOs of the companies developing the largest AI models have admitted on the record." "With these stakes, the focus can't be
Tweet media one
@_andreamiotti
Andrea Miotti
8 months
People can react to the threat of extinction in two ways: cope or agency. We exist today because of people that chose the latter. In the past, we have coordinated to make sure our species survives, and we can do it again.
@michael_nielsen
Michael Nielsen
8 months
An interesting thing lacking from the essay: any sense of agency. Lewis was fatalistically accepting the outcome; other people were working on the non-proliferation treaty, on the test ban treaty, on the ABM treaty, and so on. I suspect we owe rather more to the latter than
@_andreamiotti
Andrea Miotti
9 months
Building useful AI tools is cool. Building a successor species is suicidal on a civilizational scale.
@DavidSacks
David Sacks
9 months
AI is a wonderful tool for the betterment of humanity; AGI is a potential successor species.
@_andreamiotti
Andrea Miotti
1 year
The Transformers paper (Attention is all you need, 2017) was the Chicago Pile of AI.
@tshevl
Toby
1 year
@tszzl They think some random 2006 model was the trinity test but really it's 2023 and we're still bashing neutrons into each other
@_andreamiotti
Andrea Miotti
1 year
Overall, it's great to see the Senate taking AI oversight seriously. Professors Bengio and Russell made clear that uncontrollable AI could end humanity in the next few years. I fully agree, and think the development of powerful autonomous AIs should be banned.
@_andreamiotti
Andrea Miotti
1 year
"It started to dawn on me that my previous estimates of when human-level AI would be reached needed to be radically changed. Instead of decades to centuries, I now see it as 5 to 20 years with 90% confidence."
@_andreamiotti
Andrea Miotti
1 year
The US public doesn't buy the arms race rhetoric on powerful AI.
@NPCollapse
Connor Leahy
1 year
The public continues to be very clear about what it thinks about AGI.
Tweet media one
@_andreamiotti
Andrea Miotti
10 months
It's not that complicated.
@So8res
Nate Soares ⏹️
10 months
My current stance on AI is: Fucking stop. Find some other route to the glorious transhuman future. There’s debate within the AI alignment community re whether the chance of AI killing literally everyone is more like 20% or 95%, but 20% means worse odds than Russian roulette.
@_andreamiotti
Andrea Miotti
1 year
Many such cases
@nabla_theta
Leo Gao
1 year
Tweet media one
@_andreamiotti
Andrea Miotti
11 months
Thanks to @jasonhausenloy and @connoraxiotes for working with me on many of these ideas. And thanks to @davidad for many productive discussions on the advantages and limits of this model, compute thresholds, and good equilibria!
@_andreamiotti
Andrea Miotti
1 year
And great work @DanHendrycks and the whole Center for AI Safety for putting this together!
@_andreamiotti
Andrea Miotti
10 months
Why do we need MAGIC at all? Currently, any actor with sufficient capital can enter a race to extinction-level tech. This is obviously not a stable situation.
@_andreamiotti
Andrea Miotti
9 months
A proposal for "mandatory self regulation" is a self-evident absurdity. It's even more absurd when that's aimed at exempting the most important and powerful AI systems, foundation models, from the AI Act.
@AxelVossMdEP
Axel Voss MdEP
9 months
We cannot accept the 🇩🇪🇫🇷🇮🇹 proposal on #foundationmodels . Also, even minimum standards for self regulation would need to cover transparency, cybersecurity and information obligations - which is exactly what we ask for in the #AIAct . We cannot close our eyes to the risks.
@_andreamiotti
Andrea Miotti
10 months
It doesn’t matter whether companies or countries are racing to extinction-level tech. If we don’t stop the race, we face unacceptable odds of extinction. We need an advanced AI development paradigm that prioritizes safety and gets us to a stable international equilibrium.
@_andreamiotti
Andrea Miotti
9 months
@GaryMarcus @EmmanuelMacron Source for the original French proposal, 16 May 2022:
@clothildegouj
Clothilde Goujard
2 years
❗️New: France wants to massively expand the reach of the EU's Artificial Intelligence Act by regulating the core systems underpinning high-risk AI applications known as “general purpose AI systems,” according to a document obtained by POLITICO.
@_andreamiotti
Andrea Miotti
1 year
Evaluations should be coupled with clear pre-commitments to terminating or restarting training runs that don’t pass evals. Evals policies that don’t come with mandatory requirements in case of evaluation failure are completely toothless.
@_andreamiotti
Andrea Miotti
1 year
@mezaoptimizer @NPCollapse If you believe Conjecture is racing to scale models to AGI as fast as possible (we are not), then yes, I suggest you think of us as poorly as any other actor doing so. And by poorly I mean "an existential danger".
@_andreamiotti
Andrea Miotti
1 year
@Aspie96 @ylecun Are you currently building your students with 10x more compute than their previous version? Do you have non-human students?
@_andreamiotti
Andrea Miotti
1 year
@liron @ArthurB Less a matter of IQ, and much more a matter of perfect memory, thousands of parallel copies, perfect executive function.
@_andreamiotti
Andrea Miotti
1 year
"If we get that far"
@demishassabis
Demis Hassabis
1 year
@ylecun this is a silly calculation, if we get that far we will be building dyson spheres and the like, so the amount of power sent by the sun to earth is not the relevant figure.
@_andreamiotti
Andrea Miotti
1 year
Great to see @ylecun and @tegmark committing to a public debate!
@ylecun
Yann LeCun
1 year
On June 22, @tegmark and I will be debating the question: "Be it resolved, AI research and development poses an existential threat." Max will argue for YES, and I will argue for NO.
@_andreamiotti
Andrea Miotti
1 year
Not so sure about the "live through" if we don't change how things are going.
@adamdangelo
Adam D'Angelo
1 year
It’s so incredible that we are going to live through the creation of AGI. It will probably be the most important event in the history of the world and it will happen in our lifetimes.
@_andreamiotti
Andrea Miotti
10 months
Stop using the "Overton Window" as an excuse to not say what you truly think: you are the Overton Window.
@NPCollapse
Connor Leahy
10 months
Lying is Cowardice, not Strategy Many in the AI field disguise their beliefs about AGI, and claim to do so “strategically”. In my work to stop the death race to AGI, an obstacle we face is not pushback from the AGI racers, but dishonesty from the AI safety community itself.
@_andreamiotti
Andrea Miotti
10 months
MAGIC ensures risky AI research happens in a secure, safety-focused, multinational facility. Given the scale of the risks, and the hard-to-contain nature of software, this would be a long-term solution that both realizes the upsides of AI progress, and minimizes extinction risks
@_andreamiotti
Andrea Miotti
1 year
@ESYudkowsky @krishnanrohit especially strange considering that Rohit himself expects these systems to be far, far more powerful than humans: he literally calls developing autonomous, general AI agents "building god"
@_andreamiotti
Andrea Miotti
10 months
🚫 Exclusive: MAGIC would be a controlled environment for risky research. All AGI development outside MAGIC would be prohibited, via a global ban on AI development above a compute threshold. This allows some careful research on superhuman AI, but prevents dangerous proliferation.
@_andreamiotti
Andrea Miotti
1 year
Turns out people don't want a reckless race towards the godlike AI some companies are pursuing.
@erikbryn
Erik Brynjolfsson
1 year
72% of Americans in this poll want to slow down AI development vs only 8% who want to speed it up. That’s a remarkable ratio.
@_andreamiotti
Andrea Miotti
10 months
👀
@dw2
David Wood - h/acc, u/pol, d/age
10 months
Spotted parked on Whitehall just now. A few minutes later, I saw another one driving round Parliament Square.
@_andreamiotti
Andrea Miotti
10 months
While we obviously disagree on the risks, I agree with Beff here. If you believe further AI scaling is extinction-level dangerous, the solution is clearly to stop and solve the problem before continuing. It's definitely not continuing to scale while calling it "responsible".
@BasedBeffJezos
Beff – e/acc
10 months
"Responsible scaling" aka regulation to make sure only a select few orgs are allowed to have large AI models so they effectively have a state-mandated oligopoly and yet they will still scale AI (the thing they claim is dangerous). Just total cognitive dissonance here. You are
@_andreamiotti
Andrea Miotti
10 months
@MelindaBChu1 I think the word "magic" predates Harry Potter by a few millennia
@_andreamiotti
Andrea Miotti
9 months
@AxelVossMdEP Thank you for speaking out on this issue and for your work throughout the whole history of the Act.
@_andreamiotti
Andrea Miotti
9 months
Tweet media one
@_andreamiotti
Andrea Miotti
2 years
@moultano @Gabe_cc It's definitely evidence we can't get systems to do what we want them to. Truthfulness is orthogonal to systems being powerful
@_andreamiotti
Andrea Miotti
10 months
I'll be in Bletchley Park tomorrow and speaking at this event: come say hi and join if you're around!
@connoraxiotes
Connor Axiotes
10 months
🚨AI SUMMIT TALKS🚨 📅31/10/23 ⏲️2pm 📌(Wilton Hall) Bletchley Park - Professor Stuart Russell - @tegmark - Jaan Tallinn - @NPCollapse - @_andreamiotti - Annika Brack - @halhod - @Ron_Roozendaal - @MarkBrakel - @AlexandraMousav - @dw2 RSVP now:
@_andreamiotti
Andrea Miotti
9 months
@elonmusk @ESYudkowsky Yep! Many people seem to have memory holed the fact that Altman has been *very clear* about what he expects the risks of AI are, even before OpenAI started.
@_andreamiotti
Andrea Miotti
9 months
"Mandatory self regulation" 🤡
@BertuzLuca
Luca Bertuzzi
10 months
#AI Act: I got my hands on the 🇫🇷🇩🇪🇮🇹 non-paper that pushes against regulating foundation models in favour of codes of conduct without an initial sanction regime. “This is a declaration of war,” a parliamentary official told me.
@_andreamiotti
Andrea Miotti
9 months
@ESYudkowsky @AndrewCritchPhD Relatable. Default auto-transcription of everything online might come after AGI at this point.
@_andreamiotti
Andrea Miotti
9 months
The public continues to be very clear on AI.
@DanielColson6
Daniel Colson
9 months
1/3: Politico covered a new poll released by AIPI today. In summary: “The public is very on board with direct restrictions on [AI] technology and a direct slowdown.” 70% agree that preventing AI from reaching superhuman capabilities should be an important goal of AI policy with
@_andreamiotti
Andrea Miotti
10 months
@sala_maris @ai_ctrl @YouGov Total sample size was 2086, and the figures are representative of all UK adults (aged 18+).
@_andreamiotti
Andrea Miotti
10 months
🔐 Secure: MAGIC would adhere to national-security grade standards: closed-off facilities, an isolated network, clearances, regular vulnerability testing and strict access policies. This is needed to prevent leaks and safeguard against malicious actors’ attacks.
@_andreamiotti
Andrea Miotti
9 months
@NPCollapse this is going to be fun
@_andreamiotti
Andrea Miotti
1 year
Unexpected praise for the Frontier AI Taskforce!
@ESYudkowsky
Eliezer Yudkowsky ⏹️
1 year
I'm not sure if this govt taskforce actually has a clue, but this roster shows beyond reasonable doubt that they have more clues than any other govt group on Earth.
@_andreamiotti
Andrea Miotti
11 months
"why are governments allowing it?" is a great question
@ZacGoldsmith
Zac Goldsmith
11 months
He says he thinks the technology *he is developing has a 10-25% chance of wiping us all out. I’ve no idea if he’s right, obviously, but it’s what he thinks. Which begs the question - what on Earth does he think gives him and his like the right to gamble with all of our futures?
@_andreamiotti
Andrea Miotti
9 months
@GaryMarcus @BertuzLuca @ylecun And the most ironic part: general purpose AI systems (foundation models) were added by the French Presidency of the Council in 2022. Macron's government is the one that pushed to explicitly cover foundation models in the first place lol
@_andreamiotti
Andrea Miotti
10 months
Optimistic odds from the FTC Chair: only 15% "probability AI will kill us all".
@liron
Liron Shapira
10 months
FTC Chair Lina Khan ( @linakhanFTC ) says her P(Doom) is 15%. This is fine…
@_andreamiotti
Andrea Miotti
1 year
Yann must have met some really impressive cats and dogs!
@ylecun
Yann LeCun
1 year
Before we can get to "God-like AI" we'll need to get through "Dog-like AI".
@_andreamiotti
Andrea Miotti
10 months
MAGIC offers a proactive approach to minimize the risk of extinction from AI, ensuring that research on powerful AI systems is conducted with proper risk management and affirmative proofs of safety.
@_andreamiotti
Andrea Miotti
9 months
All I want for Christmas is a compute cap
@SciTechgovuk
Department for Science, Innovation and Technology
9 months
All we want this #ChristmasJumperDay is our jumper inspired by the 5 technologies of the future 💡 You can buy 1 of 10 limited edition jumpers for £50 with all proceeds going to @savechildrenuk 🎄
Tweet media one
@_andreamiotti
Andrea Miotti
10 months
🛡️ Safety-Focused: MAGIC would focus on understandable and controllable AI architectures from the outset, rather than black boxes. This starts with developing AI systems in reasonable increments, ensuring they are fully interpretable and proven safe before each iteration.
@_andreamiotti
Andrea Miotti
1 year
The Taskforce needs to act with the authority of the Prime Minister in every decision, otherwise it may become another well-meaning organisation that has limited real-world effect.
@_andreamiotti
Andrea Miotti
10 months
How would this institution function? 🌐 The Multinational AGI Consortium (MAGIC) has four key properties: exclusivity, safety, security, and collective benefit.
@_andreamiotti
Andrea Miotti
1 year
I know @soundboy and the team have the skills and expertise to tackle the extreme risks posed by AI development head-on, and they need to be empowered to do so.
@_andreamiotti
Andrea Miotti
1 year
The UK AI Safety Summit will take place on 1-2 November.
@SciTechgovuk
Department for Science, Innovation and Technology
1 year
On 1-2 November 2023, governments, AI companies and experts from around the world will meet at @BletchleyPark for crucial talks on the safe and responsible development of AI. AI has the potential to revolutionise the way we live, but we must also minimise its risks.
@_andreamiotti
Andrea Miotti
1 year
"Von Neumann, too, was deeply concerned about the inability of humanity to keep up with its own inventions. [...] Moving to the subject of future computing machines he became even more agitated, foreseeing disaster if 'people' could not 'keep pace with what they create.'"
@the_IAS
Institute for Advanced Study (IAS)
1 year
For @WSJ , IAS Director David Nirenberg discusses how “Oppenheimer and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us.”
@_andreamiotti
Andrea Miotti
1 year
No matter your stance on the Bully XL issue, this campaign is a perfect case study of how a small amount of resources can be used to bring a niche issue to maximal salience, and convert that into policy change in a matter of months.
@s8mb
Sam Bowman
1 year
@RichardMCNgo The actual time and effort put into this was trivial, and I suspect he’s learned a lot that can be used on other areas like crime reduction. People interested in effective campaigning should be trying to learn from @pursuitofprog rather than criticising him.
@_andreamiotti
Andrea Miotti
10 months
🤝 Collective Benefit: MAGIC would be supported internationally by all its signatories and designed to distribute the benefits of its research breakthroughs, once proven safe, among all member countries. A global Manhattan Project to ensure humanity keeps control of its future.