I’m proud to announce the April 1 launch of my new startup, Open Asteroid Impact! We redirect asteroids towards Earth for the benefit of humanity. Our mission is to have as high an impact as possible. 🚀☄️🌎💸💸💸
More details in 🧵:
I don't understand why when Hollywood movies try to portray geniuses, they make them say gobbledygook that any smart person can tell is very fake.
Their budgets are in the hundreds of millions, surely they can just hire a specialist consultant to not sound dumb?
(I don't think the bar is exceptionally high, I think most good science writers can meet the bar of being able to translate between specialists and Hollywood)
@FloodingLong
Yeah it makes no sense. All neural networks are non-sequential unless I'm missing something. They're networks! If every neuron was sequential it'd look like a line. (in CS jargon, a linked list)
@OrionJohnston
Yeah if I was the consultant they hired I'd frame it as Wakandan industry having better ability to do atomically precise manufacturing than Stark & Banner, or something like that. Less about specific ideas, more about their entire technological capacity.
Gender issues aside, it's utterly bizarre to me that plagiarism is considered vastly worse among academics than faking data. It's pretty straightforwardly indicative of rot imo, since it means the field as a whole cares more about credit attribution than about truth.
That's why we titled our blog post "The Journal of Scientific Integrity". Where is this journal? Why is it so taboo to face misconduct by a scientist? People are eager to pounce on women (e.g. Claudine Gay). But I don't expect Srinivasan to be featured in the
@nytimes
. 21/
If I had a nickel for every time an x-risk concerned vegetarian billionaire of a fast-growing decabillion dollar tech company suddenly was ousted from the company in November under suspicious circumstances, and that billionaire was named Sam, I'd have 2 nickels.
@OrionJohnston
(and if it's Shuri's personal genius you want to showcase, you can have her be the main inventor that made Wakandan nanotechnology possible)
@Good_day_note
My best advice for people who are way out of their depth is to just make up names for theorems etc. "Apply the Stark-Penrose variant of the Banner-Rosenstein-Wang conjecture" is the type of thing that plausibly could mean anything and viewers' imaginations can do the rest.
@zeynep
I'm moderately impressed by the general take that kinda coalesced among the intelligentsia of "(Western) governments and experts did everything right but the proles just won't listen and this is why things are as bad as they are."
Seems to be somewhat distant from reality.
Here's a dynamic I see sometimes.
Alice: Man, that article has an inaccurate/misleading/horrifying/objectively Satanic headline.
Bob: Did you know, *actually* article writers don't write their own headlines?
---
But what I care about is the misleading headline, not your org chart
Me: So, have you ever met a dumb EA?
Friend: Yes. *pause*
Friend: Though they were a graduate student at Harvard, so maybe not that dumb in absolute terms.
What are IQ test questions that people get right at different IQ levels (e.g., 90, 100, 110, 120, 130, etc.)? The following are very rough approximations that cover a range of disciplines. 🧵
The most astounding thing to me about Von Neumann was not his capabilities, but the fact that he seemed to be entirely well-adjusted, had no mental illness or emotional traumas, and related quite well socially with normies.
@barnghoul
@OrionJohnston
Yeah that's better! And this is just amateurs on the internet talking; I'm sure professional science communicators can do much better.
My current opinion on the OpenAI firing is that it exactly validates all my preconceived notions, and I have no cause to update any of my opinions whatsoever.
I'm old enough to remember when the dominant critique of EA was that it was too fascinated with technological progress and individual actions, and didn't pay enough attention to politics and systemic change.
Net favorability of e/acc at -51%, behind past surveys on net favorability of Wicca (-15%), Christian Science (-22%), Jehovah's Witnesses (-31%), Scientologists (-49%), and Satanists (-50%).
@TheHorrorGuru
They can have a scientist (or better yet, a professional science writer who regularly communicates with scientists) write the technobabble!
Furthermore, we are first and foremost an asteroid mining *safety* company. That is why we need to race as quickly as possible to be at the forefront of asteroid redirection, so more dangerous companies don’t get there before us.
@ReiQuilombo
I don't think this is true! If the writers actually had a coherent system of magic in mind (see e.g. notes by
@BrandSanderson
) then they'd have both the dumbed-down layman's explanation and the "real" explanation for in-universe "experts".
What. Neither I nor (I would bet) the vast majority of people on Earth would think that there's a god-given right to make unregulated frontier models with an unimaginable amount of compute.
Did you guys know there's a 24-author paper by EAs, for EAs, about how Totalitarianism is absolutely necessary to prevent AI from killing everyone?
Let's go through it together 🧵
Alright fam. My girlfriend is visiting me in America (California), and she doesn't necessarily buy the thesis that America is the greatest country in the world. What locations should we visit/what should I show her to conclusively demonstrate this (clearly obvious, to me) fact?
We believe in empirical tests and tight feedback loops. Asteroid impact alignment needs to grow alongside asteroid impact capabilities. While we cannot yet consistently target the right continent, we are making steady progress.
EAs should read the Guardian more! Read EA Forum, and you hear how our people and orgs are incompetent, and we're so weak and ~powerless to stop AI doom or factory farming
Read our critics, and we secretly control everything & have the ears of Altman & Andreessen. Riveting stuff.
One of the reasons I like Benthamite utilitarianism (and indeed, would be honored to consider myself a follower of that intellectual tradition) is that in my view, utilitarianism and its constituent components have a much better track record than contemporary alternatives. 1/x
9) What is a correct solution to the AI alignment problem? Please pay attention to runtime complexity and present your answer as a Python package that is compatible with both TensorFlow and Pytorch (Computer skills, ~135 IQ)
I bought a long-lasting spinning top recently. It's very mesmerizing.
I've named it Priority.
Now, whenever I'm stuck at work or feeling low energy or not sure what to do, I just take a deep breath and focus my attention on my top, Priority.
I'm invited to a Sparta-themed party. As someone who strongly dislikes Sparta, should I
a) Attend the party
b) Boycott the party
c) Attend the party but dress up as Philip of Macedon in protest?
Isn't this a classic externalities problem? "The cool thing about for-profit coal, from a pollution perspective, is that it gives you a strong economic incentive to not kill your customers"
There are negative internalities to taking risk, but they're way smaller than the externalities.
I was at a party for people interested in preventing AI disasters and numbers people gave for "probability of existential risk from AI" ranged from 0.2% to 67%.
Fox News’ Peter Doocy uses all his time at the White House press briefing to ask about an assessment that “literally everyone on Earth will die” because of artificial intelligence:
“It sounds crazy, but is it?”
Our design philosophy is Bigger, Faster, Safer. Other asteroid mining companies (like DeepMine and Anthropocene) are dangerously and recklessly racing ahead to push on asteroid redirection capabilities.
@die_no_mite55
it makes suspension of disbelief hard, like
Bruce: 1 + 2 = 6
Shuri: Have you considered that 1+2=apple?
Bruce: Why didn't I think about that?
People: We can't slow down AGI development in the US because that'd give up the lead to China.
Also people: We can't impose basic security protocols in the leading labs because that'd slow things down too much. Also, let's just open source all the models anyway.
What the fuck?
Sometimes I see arguments in fancy prose that seem extremely innumerate, and credentialed people wisely nodding along to them without addressing the obvious holes, and I have a sneaking suspicion that this is due to insufficient arithmetic ability in our intellectual "elite"
However, if we’ve had “warning shots” where increasingly larger and more dangerous asteroids land in the intervening years, that will allow society to prepare better environmental and social responses.
Darkly funny: Cassandra Institute was a nonprofit institute to warn of pandemics and other catastrophes, closed in 2018 because no one was listening.
H/T Ray Taylor
i'm sad to report that annie vu, our head ESG analyst, will be departing Open Asteroid Impact over disagreements on our "operation death star" safety strategy. she's right that we have a lot more work to do on ethics and safety, and we're committed to doing it.
This is a naked mole-rat. For every 10 likes to this tweet, I will not make it "more" anything else, because naked mole-rats are perfect just the way they are.
Here's my attempt to explain what they said in plain(er) language:
Shuri: Woah. The structure has many shapes.
Bruce: Yep. We had to attach each neuron* so they don't activate in sequence (ie one-by-one).
Shuri: Why didn't you reprogram the connections between the neurons to work together?
Bruce: I didn't think about that.
*neurons are mathematical equations that take some inputs and spit out an intermediate result.
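To make that footnote concrete, here's a minimal sketch of a single artificial neuron in plain Python. This is my own illustration under simple assumptions (logistic sigmoid, hand-picked weights), not anything from the movie or any particular library:

import math

def neuron(inputs, weights, bias):
    # A single artificial neuron: a weighted sum of the inputs plus a bias,
    # passed through a nonlinearity (here, the logistic sigmoid).
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))

# A tiny "layer": two neurons reading the *same* inputs in parallel.
# Nothing activates one-by-one in a line, which is the point above about
# neural networks not being sequential.
inputs = [0.5, -1.2]
layer = [([0.8, 0.1], 0.0), ([-0.3, 0.7], 0.5)]
print([neuron(inputs, w, b) for w, b in layer])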
For people who are saying that Trump getting covid-19 is a Black Swan that nobody could've seen coming,
@metaculus
had Trump getting covid-19 at 25%-45%, depending on what aggregation you use. In March.
1) Which colors are in the article of clothing in the following picture?
A) Blue and Black B) Orange and Green C) Black and White D) Blue and Gold E) White and Gold
(Visual Perception; very approximately 90 IQ)
We take a slow-takeoff approach to asteroid safety. Rocket alignment – that is, precision microtargeting where we can ensure that the redirected asteroids always land on the right continent – is a currently unsolved technical problem.
Some people think you should give taxes instead of donating.
I find this line of reasoning extremely strange. I absolutely do not think donating to the US government is anywhere close to the most effective use of money, under approximately any coherent system of ethics.
Oh man, so many things:
On hospitalizations
1. I thought US hospitals would be more overloaded.
2. I thought hospital overload would be a bigger factor on death rate
3. I thought ventilator access would be a bigger issue.
A question I would like to see more politicians/journalists/Twitter pundits/people in general answer: What have you been most wrong about when it comes to the coronavirus?
(Bonus points for things you’ve been *publicly* wrong about—things you’ve written, tweeted, etc.)
@ESYudkowsky
1) There are more chickens than GPT-3
2) The ability to feel pleasure and pain is core to evolutionary selection for chickens, but pretty minor to the loss function of GPT-3
3) People do worry about both
4) GPT-3 has only been around for <2 years, people take time to update
We are a leading innovator in asteroid mining. Traditional asteroid mining plans entail unproven zero-G mining with heavy machinery in the vacuum of space. We are more efficient and redirect asteroids to Earth first, instead.
Hey chat, hear me out. This might sound a little bit out there but:
I'm starting to get some hints that Sam Altman may not be the most consistently candid. Anybody else starting to feel this way?
Hot take is that I think I'm glad when prestigious and high-impact EAs sometimes say really dumb things on the internet, because it:
a) creates more urgency for other EAs to step up and
b) makes the rest of us feel better about ourselves
7) Sleeping Beauty is put to sleep by researchers. They toss a coin. If heads, she is woken up once. If tails, she is woken up twice. She is given a forgetfulness potion after waking up. When she wakes up, what probability should she assign to heads? (Arithmetic, ~125 IQ)
Possibly a stupid idea, but I'd love to see an LT org that makes the conscious, deliberated, choice to give up on the "respectability" branding and explicitly goes hard on the "weird futurism" branding.
I'm thinking of adding pronouns to bio, because I'm probably more woke than my average Twitter follower, and I want to accurately signal that. But I'm also less woke than the average Tweeter who adds pronouns to their bio, so maybe the sweet spot is including exactly one pronoun.
I just submitted a Future of Life Institute nomination for their Unsung Hero search for Dr. Wu Lien-teh, a Cambridge-trained Malaysian-Chinese doctor, and truly a badass MoFo (technical term).
Here's what I wrote:
(1/n)
People have an unjustifiably high opinion of physicists compared to mathematicians.
When a physics professor from Berkeley makes a bomb, he's a hero and gets a Nolan biopic.
When a math professor from Berkeley makes a bomb, he gets life imprisonment.
Where's the justice?
Organizationally, we are a for-profit C corp owned by a B corp owned by a public benefit corporation owned by a 501c4 owned by a 501c3. We’re not a traditional for-profit organization, so you can trust us!
This dialogue didn't make sense because:
- Bruce's first reply has nothing to do with Shuri's sentence
- All neurons in neural networks are already non-sequential. That's why they're called "networks".
- You don't "attach" neurons.
Many people say they're trying to save the world, but then I investigate their theory of change and even under the most optimistic assumptions, there's no plausible story where their work has a clear shot at averting existential catastrophe.
The "facts" vs. "opinions" distinction never seemed particularly sharp to me, nor does it like a particularly natural cluster.
In contrast, "beliefs" vs. "preferences" seem a lot more natural and (relatively) crisp to me, as natural distinctions to emphasize in discussions.
Pro-tip: sometimes people on Twitter can't tell if you're joking or being serious. If you think your readers might not be sure, you can append your tweets with /s to indicate seriousness. /s
By being bigger and faster than our competitors, we can ensure that a safer company (us) will win the race, creating a safer and more prosperous world for everybody...
@JamesBenamor
@OrionJohnston
"We have nanotechnology at least 15 years more advanced than yours despite you not knowing about even the existence of our country until a year ago" seems like a perfectly fine way to get that message across.
In my culture, we don't say "I love you." We say "I think the prediction markets are underestimating our probability of long-term compatibility," and I think that's beautiful.
@Aella_Girl
wait the circumstances are ~irrelevant right, the main question is whether you think 10 years vs 25 years punishment for murder is appropriate.
Some claim that
@hlntnr
's actions in writing an academic paper that is mildly critical of Altman's work while being a board member are a result of the unique nature of OpenAI's non-profit structure. But I think you should be allowed to do so in for-profit companies too.
I realized recently that I haven't fully internalized that I might die from anthropogenic extinction risks. But then I realized that I also haven't fully internalized that I might die. And all things considered, maybe it's good to not internalize these things for a while.
I've started exercising again for just a few minutes a day for the last month or so, and I already feel fitter! I doubt I actually *am* fitter or *look* fitter, but I certainly feel fitter, so yay.
I feel like 80% of my internet annoyances come from people smart enough to realize that experts are sometimes wrong, but dumb enough (or lazy enough, or with enough motivated reasoning, etc.) to immediately jump to the first "alternative" explanation that pops up.
I'd be excited to see presidential candidates do tabletop/roleplay exercises (maybe through a text terminal so there's reduced possibility of bias) on predetermined scenarios set by some panel of political scientists, historians, etc.
Questions like pandemic preparedness, conflict, etc.
Your average person on the street is not going to consider a data center cap or reporting requirements on large training runs to be a severe infringement of their civil liberties.
@ESYudkowsky
Thank you for the endorsement
@ESYudkowsky
! I agree: We're much better than our competitors in scalable asteroid mining!!! 🚀☄️🌍
Compared to DeepMine and Anthropocene, we're bigger, faster, safer, and more socially responsible. ☺️☺️☺️
As ways to cheat go, academic fraud feels much more consequential to me than doping, yet even famed athletes are ~banned for life for doping and academic fraudsters don't even get a slap on the wrist.
Why are standards so high in sports and how can researchers emulate them?
Keep thinking about the fact that Walker falsified data and plainly lied about the most basic facts about sleep like the relationships between sleep and mortality and sleep and cancer and yet *nobody gives a fuck* - neither other public intellectuals, nor UC Berkeley, nor media
- Synapses already work collectively. Otherwise you can't do anything as complicated as e.g. gripping an object. Your brain will just fight itself.
- Neural networks aren't "(re)programmed," as if somebody handcrafts the code to make them work. They're learned.
Google really be spending their limited safety budget on making sure Gemini draws ethnically diverse Nazis.
Bro, you're worried about the wrong race dynamics.