![Matt Reardon Profile](https://pbs.twimg.com/profile_images/1854797218731245569/JN3Hhs-m_x96.jpg)
Matt Reardon
@Mjreard
Followers
941
Following
20K
Statuses
8K
Programs @law_ai_. Frmly @80000hours. I want to do a podcast with @chanamessinger but she won't let me.
Berkeley
Joined February 2012
@alcherblack @davidad I’m loath to agree with Perry, but those polls are bad, at least because of heavy selection bias + most positive responses don’t “mean it” re xrisk. Classically, mid-2023 was peak xrisk narrative.
1
0
0
@alcherblack @davidad Very few people in AI (let alone outside of it) are sympathetic to, or even aware of, the arguments for xrisk, so the overlap with tail talent (a small population) is likely very small indeed. The vast majority of people at labs currently are not reckless; they simply don't buy the arguments
1
0
0
Has this cat considered that a stranger is just a friend you haven’t met yet?
I would wipe out five million complete strangers to save the life of a close friend. I would eradicate 5 billion people from the earth to make life mildly more convenient for a family member. If my child’s life were in danger and absolutely nothing could be done to save them, I would press a button that wiped out the entire human race out of spite. I would do these things even if there wasn’t a button, even if I had to do it manually by hand in some sort of weird time freeze hyperbolic chamber of executions. An eternity of pleading strangulations for the marginal benefit of my inner circle tribe.

Other people are worth less than my people. Certain living creatures are worth less than other living creatures. There is a dollar value which can be assigned to life. There is a dollar value which can be assigned to human life. This is not callousness; it is mathematical and measurable fact. It is calculations and formulas done by the military, corporations, and bureaucratic offices every single day.

People who reject this reality ironically make the world a crueler place. People who embrace this reality are the ones that built the framework of why you live a comfortable and peaceful life. Never ever let someone “charitable” or “altruistic” or “selfless” try to tell you anything about how the world works.

Selfishness done without zeal or neurocomplex is the most selfless act because it induces equilibrium in the ecosystem. The eagle does not consider the trout when eviscerating it with its talons. The snake does not hesitate when swallowing the eagle’s eggs from its nest. The bear eats through the stomach first despite the deer’s screams. The deer cannot fathom what multitudes of creatures die because of overgrazing done by its kind. The considerationalist is a menace because God hates you when you try to do His job for Him. Human beings were not meant to process reality outside of a 50 mile radius.
Other people do not exist, and if they seem to, it’s to act as background characters for your personal benefit.
2
0
23
@StefanFSchubert Taking it too literally tempts me to challenge the assumption that life would in fact be better for my family member (i.e. the family member now has to live in a much poorer world)
1
0
6
@dwarkesh_sp RLHFed away, I’d guess. Making really new connections between things typically entails speculating and being wrong a lot. GPT-2 feels more creative than 4o
2
0
5
@SenFettermanPA They both seem to stand against what their offices are meant to protect: good (evidence based) health and American security. Thank you for your leadership.
0
1
2
The most common intelligent AI risk skepticism: "when I take the outside view and consider fast takeoff as an outcome, the closest analogous non-AI outcomes have very low probability"

Being specific about where and how AI progress slows from where it is today is much harder
@scottphuston @s8mb Not in a detailed way, and it is a fairly mild “skepticism.” For example, I am not moved by a fairly casual reading of many years of arguments suggesting e.g. my children will not see their teenage years due to rapid takeoff. I put that risk near global nuclear annihilation.
0
0
9
RT @GarrisonLovely: I'm usually a fan of Zeynep's work, but this piece gets things exactly backwards. Her core argument is that DeepSeek ma…
0
8
0
@ben_j_todd This is how I understood it. People just treated it as utilitarianism. Going to write about the two core differences between EA and Ut: 1) EA is not totalizing (just a discretionary-but-meaningful % of resources); 2) EA doesn't bite bullets/make extremely steep help/harm trades
2
0
10
RT @AndyMasley: An iron law: It is basically never the case that people who disagree with you are secretly scared of how correct you are. E…
0
5
0
RT @robertwiblin: Much harder to stop proliferation of AGI, but if this were right it's the reason accelerationists should care a lot about…
0
8
0
@growing_daniel Sad part is a shitposter who got the signatures and money to be on the debate stage actually would sweep the primary
0
0
3