![Andrew Critch Profile](https://pbs.twimg.com/profile_images/1552317805902512129/xCrYE-2X.jpg)
Andrew Critch
@AndrewCritchPhD
Followers
4K
Following
3K
Media
65
Statuses
1K
Let's make AI doctors! Views my own; CEO @ https://t.co/wvoKT50fKX; AI Researcher @ Berkeley; If I block you it's like I'm moving to another convo at a party; nbd.
California
Joined April 2014
When I count on my fingers, I use binary, so I can count to 31 on one hand, or 1023 on two. It took me about 1 hour to train the muscle memory, and it's very rhythmic, so now my right hand just auto-increments in binary till I'm done, and then I just read off the number.
FYI: you can count up to 100 on your fingers like so:
- right hand is ones
- left hand is tens
- thumbs are 5/50, fingers are 1/10
This is convenient enough that it's the way I count by default.
47
61
862
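To make the two counting schemes above concrete, here is a small illustrative sketch (my own, not from the original threads) of both methods: pure binary on the fingers, and the ones/tens scheme that tops out at 99.

```python
# Illustrative sketch of the two finger-counting schemes described above.
# Fingers are modeled as bits/positions; the mapping to physical fingers is assumed.

def binary_fingers(n, fingers=10):
    """Represent n with one bit per finger (LSB = rightmost finger, by assumption)."""
    if not 0 <= n < 2 ** fingers:
        raise ValueError(f"need 0 <= n < {2 ** fingers}")
    return [(n >> i) & 1 for i in range(fingers)]  # 1 = finger raised

def ones_tens_fingers(n):
    """Ones on the right hand, tens on the left; thumbs count 5/50."""
    if not 0 <= n <= 99:
        raise ValueError("this scheme covers 0..99")
    ones, tens = n % 10, n // 10
    right = {"thumb(5)": ones >= 5, "fingers(1s)": ones % 5}
    left = {"thumb(50)": tens >= 5, "fingers(10s)": tens % 5}
    return {"right": right, "left": left}

print(binary_fingers(31, fingers=5))   # one hand: all five fingers up
print(binary_fingers(1023))            # two hands: all ten fingers up
print(ones_tens_fingers(87))           # left thumb + 3 left fingers, right thumb + 2 right fingers
```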
AI safety's most concise summary ever, from @AndrewRousso. And they said it couldn't be explained.
20
160
648
Yann LeCun is calling the list of scientists and founders below "idiots" for saying extinction risk from AI should be a global priority. Using insults to make a point is a bad sign for the point… plus Hinton, Bengio, and Sutskever are the most cited AI researchers in history:
38
34
324
Belated congrats to @ilyasut for becoming the third most cited AI researcher of all time, before turning 40… huge! He's actually held the spot for a while, even before GPT-4, but it seems many didn't notice when it happened. Go Canada 🇨🇦 for a claim on all top three.
14
27
243
As recently as last year I attended a tech forecasting gathering where a professional geneticist tried to call bullsh*t on my claims that protein-protein interaction modelling would soon be tractable with AI. His case had something to do with having attended meetings with George.
DeepMind just published AlphaProteo for de novo design of binding proteins. As a reminder, I called this in 2004. And fools said, and still said quite recently, that DM's reported oneshot designs would be impossible even to a superintelligence without many testing iterations.
9
23
235
Reminder: "Mitigating the risk of (human) extinction from artificial intelligence should be a global priority", according to… the CEOs of the world's three leading frontier AI labs:
• Demis Hassabis, CEO, Google DeepMind
• Dario Amodei, CEO, Anthropic
• Sam Altman, CEO,
16
49
219
Reminder: some leading AI researchers are *overtly* pro-extinction for humanity. Schmidhuber is seriously successful, and thankfully willing to be honest about his extinctionism. Many more AI experts are secretly closeted about this (and I know because I've met them).
AI boom v AI doom: since the 1970s, I have told AI doomers that in the end all will be good. E.g., 2012 TEDx talk: "Don't think of us versus them: us, the humans, v these future super robots. Think of yourself, and humanity in general, as a small stepping
26
31
213
Happy Father's Day! Please let the GPT-4o video interface be a recurring reminder: Without speed limits on the rate at which AI systems can observe and think about humans, human beings are very unlikely to survive. Perhaps today as many of us reflect on our roles as parents
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks.
19
35
168
Lol, @ylecun defines reasoning to be impossible for his own brain to perform. Explains a lot. 🤦
@vkhosla It's not an assumption. Reasoning, as I define it, is simply not doable by a system that produces a finite number of tokens, each of which is produced by a neural net with a fixed number of layers.
10
5
126
Reposting for emphasis, because on this point Eliezer is full-on correct: AI output should always be labelled as AI output. If the UK summit fails to produce support for a rule like this, I will resume my levels of pessimism from before the CAIS Statement and Senate hearings. A.
"Every AI output must be clearly labeled as AI-generated" seems to me like a clear bellwether law to measure how Earth is doing at avoiding clearly bad AI outcomes. There are few or no good uses for AI outputs that require a human to be deceived into believing the AI's output.
12
7
114
Congrats to DeepMind! Since 2022 I've been predicting 2025 as the year in which AI can win a gold medal at the International Mathematical Olympiad. I stand by that prediction. By 2026 (or sooner) you will probably see more focus and progress on AI that solves physics and.
Advanced mathematical reasoning is a critical capability for modern AI. Today we announce a major milestone in a longstanding grand challenge: our hybrid AI system attained the equivalent of a silver medal at this year's International Math Olympiad!
4
19
120
From over a decade of conversations about x-risk, my impressions agree strongly and precisely with Zvi here, as to what *exactly* is going wrong in the minds of people who somehow "aren't convinced" that building superintelligent AI would present a major risk to humanity. Cheers!
13
15
111
AI researchers who know AGI is a few years away or less are mostly working under NDAs for AI labs and not tweeting it. AI researchers still in academia are filtered for believing they need to teach everyone some crucial missing insight that industry will fail without.
Joint post with @GaryMarcus - TL;DR: I am very confident AI will be dramatically better in 3 years, he is not. Either way, we'll both donate some $ to charity.
6
4
108
What are people doing with their minds when they claim future AI "can't" do stuff? The answer is rarely «reasoning» in the sense of natural language augmented with logic (case analysis) and probability. I don't know if Eliezer's guesses are correct about what most scientists.
As near as I can recall, not a single objectionist said to me around 2004, "I predict that superintelligences will be able to solve protein structure prediction and custom protein design, but they will not be able to get to nanotech from there." Why not? I'd guess: (1) Because.
7
7
99
@JeffLadish Jeffrey, you may have been living under the rose-colored impression that AI-savvy SF bay area people were not about to become successionists. I think many of them (10%?) just are. I tried explaining this to the rationalist & EA communities here:
7
4
97
What's the billionth digit of sqrt(7)? Don't know? Okay, but do you need more data about what sqrt means? Or 7? No. You just need compute. Same for computing the weights of a superintelligence from an internet full of knowledge as an input for specifying the problem. Yet so.
💯 "synthetic data" only makes sense if the data-generating model is a better model of reality than the model being trained. This only happens in very special cases (e.g. when first-principles simulators are available).
5
4
90
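As a concrete illustration of the "you just need compute" point in the sqrt(7) tweet above, here is a minimal sketch (my own, not from the thread): every digit of sqrt(7) is already determined by the definition, and producing more digits is purely a matter of computation, not of gathering more data.

```python
# Minimal sketch: digits of sqrt(7) require compute, not more data.
# floor(sqrt(7) * 10**n) == isqrt(7 * 10**(2*n)), using exact integer arithmetic.
from math import isqrt

def sqrt7_digits(n):
    """Return the first n decimal digits of sqrt(7) after the decimal point."""
    scaled = isqrt(7 * 10 ** (2 * n))   # floor(sqrt(7) * 10**n), exact
    return str(scaled)[1:]              # drop the leading "2" before the decimal point

print("sqrt(7) = 2." + sqrt7_digits(50))
# The billionth digit is the same kind of object, just (much) more compute:
# sqrt7_digits(10**9)[-1] would answer the tweet's question, given enough time and memory.
```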
Much of my dislike for rationalist/EA discourse on AI can be explained by this poll result. It was commonly said (but not consensus) that AI would be too alien to understand or care about human values, which seemed and still seems obviously false.
Pre GPT-3, when you were exposed to rationalist- or EA-adjacent opinions about whether AI would or would not help humans, which of the following best fits their stance at the time? A: AI won't help you because it's an alien mind that won't understand or won't care about your.
21
3
89
@AryehEnglander Frankly, I also want to normalize calling slow timelines "sci fi". E.g., the Star Trek universe only had AGI in the 22nd century. As far as I can tell, AI progressing that slowly is basically sci-fi/fantasy genre, unless something nonscientific like a regulation stops it.
4
4
77
People ask whether AI can "truly" "create new knowledge". But knowledge is "created" just by inference from observations. There's a fallacy going around that "fundamental science" is somehow crucially different, but sorry, AI will do that just fine. By 2029 this will be obvious.
People ask whether AIs can truly make new discoveries or create new knowledge. What's a new discovery or new knowledge you personally created in 2023 that an AI couldn't currently duplicate?
6
6
77
GPT-4 is not only able to write code more reliably than GPT-3.5; it writes code that writes code (see the example below; GPT-3.5 was not able to do this). But first: 1) @OpenAI: Thanks for your openness to the world about your capabilities and shortcomings! Specifically.
5
15
72
@AndrewYNg @AndrewNG, I suggest talking to someone not on big-tech payroll: Yoshua Bengio, Geoffrey Hinton, Stuart Russell, or David Krueger. IMHO Yoshua maximizes {proximity to your views}*{notability}*{worry}, and would yield the best conversation. Thanks for engaging with this topic :).
3
2
74
Some of my followers might hate this, but I have to say it: the case for banning open source AI is *not* clear to me. Open source AI will unlock high-impact capabilities for small groups, including bioterrorism: *Still* I do not consider that a slam-dunk.
There's a simple mathematical reason why AI *massively* increases the risk of a world-ending super-virus: AI *decreases the team size* needed to engineer a virus, by streamlining the work. Consider this post a tutorial on how that works. Only high-school level math is needed to.
17
4
69
AI hype is real, but so is human hype. Einstein was not magic. E=mc² can be found by a structured search through low-degree algebraic constraints on observations of light and matter. Consciously or not, this is how Einstein did it. Not magic, just better search.
@ShaneLegg Agreed. Sadly, many folks I've met seem to feel or believe that fundamental science (e.g., e=mc²) differs from Go and protein folding in some crucial way that can't be explored with hypothesis search. Yes this is false, but like with Go, until they see it they won't believe it.
5
8
68
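A toy sketch of the "structured search through low-degree algebraic constraints" idea from the two tweets above (entirely my own illustration, using synthetic data and assuming a unit coefficient in SI units): given measurements of mass and energy, a brute-force search over small integer exponents recovers E = m¹c².

```python
# Toy illustration (not Einstein's actual derivation): recover E = m^a * c^b
# by brute-force search over low-degree integer exponents against synthetic "observations".
import itertools

C = 299_792_458.0                                # speed of light, m/s
masses = [1e-3, 2.5e-2, 0.7, 3.0]                # hypothetical rest masses, kg
observations = [(m, m * C**2) for m in masses]   # synthetic data generated from E = m c^2

def max_relative_error(a, b):
    """Worst-case relative error of the candidate law E = m^a * c^b over the data."""
    return max(abs(E - m**a * C**b) / E for m, E in observations)

candidates = itertools.product(range(-3, 4), repeat=2)   # low-degree search space
err, (a, b) = min((max_relative_error(a, b), (a, b)) for a, b in candidates)
print(f"best low-degree law: E = m^{a} * c^{b}  (max relative error {err:.1e})")
# Prints exponents a=1, b=2, i.e. E = m c^2.
```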
Big +1 to Dario Amodei, @sama, and everyone else seated here for briefing our government on how to keep human society safe in the age of ever-accelerating AI technology.
Artificial Intelligence is one of the most powerful tools of our time, but to seize its opportunities, we must first mitigate its risks. Today, I dropped by a meeting with AI leaders to touch on the importance of innovating responsibly and protecting people's rights and safety.
5
4
63
@jrhwood Thanks Jesse, these are good points, and I agree with you that intelligence, agency, and evil are all different. Unfortunately, I think plants rather than neanderthals are a better analogy for humans if AI is developed without speed limits.
3
2
60
Zuckerberg's message here is really important. I prefer to live in a world where small businesses and solo researchers have transparency into AI model weights. It parallelizes and democratizes AI safety, security, and ethics research. I've been eagerly awaiting Llama 3.1, and I'm.
Mark Zuckerberg says in the future there will be more AI agents than people as businesses, creators and individuals create AI agents that reflect their values and interact with the world on their behalf
10
6
59
Calling a theory about a conspiracy "a conspiracy theory" is a slippery waste of words. If you think it's false, just call it "false". It's shorter! Also, using the term "conspiracy theory" as a synonym for "false theory" is oppressive to critical thinking about groups.
6
1
58
This tweet is extremely misleading. Claims like this are a big reason the public has a terrible time determining from discourse if AI is safe. Only people who devote long hard hours and logical probabilistic reasoning to the task of investigating AI labs will actually know.
OpenAI's new model tried to avoid being shut down. Safety evaluations on the model conducted by @apolloaisafety found that o1 "attempted to exfiltrate its weights" when it thought it might be shut down and replaced with a different model.
10
2
58
Dear Governor Newsom (@GavinNewsom), OpenAI's corporate leadership has been visibly falling apart for more than a year now. Whatever they create next should be accountable to external oversight, so please stand strong for your people and your state, by signing SB 1047 into law.
OpenAI just sold us all out. Governor Newsom, are you seeing this? Congress, are you seeing this? World, are you seeing this?
3
2
57
@elonmusk Probably ~AGI arrives first, but yes I hope Neuralink supports human relevance & AI oversight by broadening the meatsticks-on-keyboard channel for humans. Thanks also for being a voice for AI regulation over the years; now is a key juncture to get something real in place.
1
0
56
+1 to all these points by @ylecun. If we dismiss his points here, we risk building some kind of authoritarian AI-industrial complex in the name of safety. Extinction from AI is a real potentiality, but so is the permanent loss of democracy. Both are bad, and all sides of this.
6
7
53
Something like this will upgrade LLMs from wordsmiths to shape-rotators. It will also make their thoughts less legible and harder to debug or audit.
Brilliant paper from @Meta with the potential to significantly boost LLMs' reasoning power. Why force AI to explain in English when it can think directly in neural patterns? Imagine if your brain could skip words and share thoughts directly - that's what this paper achieves
3
4
54
Dear Californians: please support AI safety *today* by asking Gavin Newsom not to veto SB 1047, through whichever of these channels feels easiest:
1) Sign this petition and pass it on:
2) Politely call Governor Newsom's office directly at 916-445-2841.
After much thought, I'm posting an open letter here about #SB1047. If you care about the future of AI safety, I urge you to share it.
--------
Dear Governor @GavinNewsom,
You are about to make a momentous decision this week. It may well go down in history as one of the most
2
3
51
How does Pearl have a Turing Award for fundamental work on the nature of *causality*, and *still* people don't learn this stuff??? For crying out loud people, it's the 21st century, learn what causality is!
True. In order to show that correlation differs from causation you need to compute both and show inequality. In stat class you are forbidden from computing causation; what's left is hand-waving.
5
2
51
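To make the "compute both and show inequality" point above concrete, here is a small sketch (my own toy example, not from the thread): with a confounder, the observational quantity P(Y|X) differs from the interventional quantity P(Y|do(X)), which can be computed by backdoor adjustment when the confounder is observed.

```python
# Toy example: correlation vs. causation in a three-variable model Z -> X, Z -> Y, X -> Y.
# We compute both the observational contrast and the do()-intervention contrast and show they differ.
import random

random.seed(0)

def sample():
    z = random.random() < 0.5                         # confounder
    x = random.random() < (0.8 if z else 0.2)         # Z strongly influences X
    y = random.random() < (0.3 + 0.4 * z + 0.1 * x)   # Y depends on Z a lot, on X a little
    return z, x, y

data = [sample() for _ in range(200_000)]

def p_y_given(x_val):
    """Observational P(Y=1 | X=x)."""
    rows = [y for _, x, y in data if x == x_val]
    return sum(rows) / len(rows)

def p_y_do(x_val):
    """Interventional P(Y=1 | do(X=x)) via backdoor adjustment over the observed confounder Z."""
    total = 0.0
    for z_val in (False, True):
        pz = sum(1 for z, _, _ in data if z == z_val) / len(data)
        rows = [y for z, x, y in data if z == z_val and x == x_val]
        total += pz * (sum(rows) / len(rows))
    return total

obs = p_y_given(True) - p_y_given(False)   # correlational contrast (inflated by confounding)
causal = p_y_do(True) - p_y_do(False)      # interventional contrast (~0.1 by construction)
print(f"observational  P(Y|X=1)-P(Y|X=0) = {obs:.3f}")
print(f"interventional P(Y|do(X=1))-P(Y|do(X=0)) = {causal:.3f}")
```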
A puzzle for you: Imagine a village of (nuclear) families where the average # of kids per family is 7. On average, how many siblings does each kid have? 6? Not so! On average each kid has more than 6 siblings, because most of the kids come from.
If you want to raise your child to be a great leader, it helps to give them a lot of siblings. There are no US presidents who were only children, and only three presidents had one sibling. On average US presidents have had just over 5 siblings!
2
1
46
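A quick worked example of the size-biased-sampling effect behind the puzzle above (my own numbers, purely illustrative): the family-level average can be exactly 7 kids while the kid-level average number of siblings is well above 6, because large families contribute more kids to the kid-level average.

```python
# Size-biased sampling: family-average kids vs. kid-average siblings.
# Illustrative family sizes chosen so the per-family mean is exactly 7.
family_sizes = [1, 13, 3, 11, 7]                 # hypothetical village

avg_kids_per_family = sum(family_sizes) / len(family_sizes)

# Each family of size k contributes k kids, each of whom has (k - 1) siblings.
total_kids = sum(family_sizes)
avg_siblings_per_kid = sum(k * (k - 1) for k in family_sizes) / total_kids

print(f"average kids per family:  {avg_kids_per_family:.2f}")   # 7.00
print(f"average siblings per kid: {avg_siblings_per_kid:.2f}")  # ~8.97, well above 6
# In general: E[siblings per kid] = E[K^2]/E[K] - 1 >= E[K] - 1, with equality
# only when every family has the same number of kids.
```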
@AndrewYNg @AndrewYNg, you're the one who convinced me that we'd get AGI during our lifetimes, back in 2010 in a talk you gave at Berkeley. So why have you been saying publicly that AGI risk is like overpopulation on Mars, if you believed it was just decades away? Doesn't seem honest. I.
4
0
48
Hah, you're so correct about synthetic data. I also lol and fail to understand why this is not obvious. Maybe people think too much in terms of Shannon info theory, where synthetic data carries no "information"? But computation is just as important as information! #LogicalDepth.
3
3
46
"The Goddess of Everything Else", narrated by @Liv_Boeree and @robertskmiles, is now my favorite way to convey the idea below, which is now also one of my favorite quotes: "Darwinism is a kind of violence that is no longer needed for progress." - David @davidad Dalrymple.
"The Goddess of Everything Else" by @slatestarcodex is, imo, one of the most beautiful short stories ever written. And it's just been made into an animation, in which I voice-act the Goddesses!!!! So stoked.
1
3
47
Helen, I don't know what exactly you needed to know but didn't, but I'm glad the Board had the integrity to put an end to the false signal of supervision. I honestly can't tell from the outside if this was the best way, but it was a way, and better than faking oversight for show.
Today, I officially resigned from the OpenAI board. Thank you to the many friends, colleagues, and supporters who have said publicly & privately that they know our decisions have always been driven by our commitment to OpenAI's mission. 1/5.
0
2
44
Seems like Musk actually read the bill! Congrats to all who wrote and critiqued it until its present form. And to everyone who's casually opposing it based on vibes or old drafts: check again. This is the regulation you want, not crazy backlash laws if this one fails.
This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk.
1
5
44
China+USA agreeing not to give AI access to nuclear weapons is awesome. My subjective probability of AI-driven human extinction just went down by like 1%, just seeing this, & if human leaders continue agreeing not to do obviously dangerous sh*t with AI, it will keep going down.
🇺🇸🇨🇳 BIDEN AND XI AGREE: LET'S NOT LET AI LAUNCH NUKES. The White House says humans will be the ones with control over the big buttons, and China agrees that it's for the best. The leaders also emphasized the cautious development of AI in military technology, acknowledging the
0
2
45
Term limits are good; health problems are bad. If you want to vary who is in power, there are plenty of ways to achieve that without the help of dementia, heart disease, diabetes, osteoporosis, or cancer. If these are your best ideas to combat stagnation, maybe find better ones?
Why do people think that life extension won't just lead to extreme power consolidation and gerontocracy leading to a cultural ice age?
5
1
43
I'm with Jess Whittlestone on this. Talk about extinction risk should not crowd out other issues core to the fabric of society; that's part of how we're supposed to avoid crazily unfair risk-taking! E.g., more inclusive representation in who controls a single powerful AI system.
Strong agree with this - I've been pleased to see extreme risks from AI getting a bunch more attention but also disheartened that it seems like tensions with those focused on other harms from AI are getting more pronounced (or at least more prominent and heated online).
1
2
40
Professor David Krueger sharing how some (remaining) OpenAI staff treated his concerns about future extinction risks from AGI development:
Greg was one of the founding team at OpenAI who seemed cynical and embarrassed about the org's mission (basically, the focus on AGI and x-risk) in the early days. I remember at ICLR Puerto Rico, in 2016, the summer after OpenAI was founded, a bunch of researchers sitting out on.
2
0
39
Indeed. While some aspects of AI safety are well-championed by the EA zeitgeist, others are ignored or even disparaged. Ideally, more and more communities will stand up to represent their values as deal-breaking constraints on how AI is developed, so that risks are only taken if.
EA ≠ AI safety. AI safety has outgrown the EA community. The world will be safer with a broad range of people tackling many different AI risks.
0
3
37
Zuckerberg and Patel having an amazing conversation on AI risk. Great questions and great responses in my opinion. I'm with Zuckerberg that these risks are both real and manageable, and hugely appreciative of Patel as an interviewer for keeping the discursive bar high.
Zuck's position is actually quite nuanced and thoughtful. He says that if they discover destructive AI capabilities that we can't build defenses for, they won't open source it. But he also thinks we should err on the side of openness. I agree.
3
1
34