Someone should figure out how to make super-smart AI that won't take over the world, and nobody should make AI that takes over the world. 1 Cor 15:26. d/acc?
Common pattern: people in a hobby have an unassuming name for their thing. Is there a name for this? Examples:
- Meditators don't meditate, they sit.
- Computer programmers don't have computers, they have machines.
- Camera hobbyists don't own lenses, they own glass. (1/2)
Is memory safety going to become "woke"?
- computers telling you what you can and can't access
- Biden likes it
- 30% of respondents to the rust survey are trans
People are giving OpenAI a lot of shit for this but if you read carefully, it's not OpenAI that's requiring the signature, it's the document itself - OpenAI probably tried to convince the document to change its mind but was obviously unable to.
One big request I have for OpenAI people, including the board: please keep diaries! In the future you'll want to know what you knew and thought when, and if human historians are around 200 years from now, they'll want to know that too!
FYI: you can count up to 100 on your fingers like so:
- right hand is ones
- left hand is tens
- thumbs are 5/50, fingers are 1/10.
This is convenient enough that it's the way I count by default.
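The encoding above can be sketched in a few lines of Python (the dict keys are just illustrative labels, not anything standard):

```python
def hand(digit):
    # Encode one decimal digit (0-9) on one hand:
    # a raised thumb counts as 5, each other raised finger counts as 1.
    assert 0 <= digit <= 9
    return {"thumb_up": digit >= 5, "fingers_up": digit % 5}

def count_on_fingers(n):
    # Encode 0-99: left hand holds the tens digit, right hand the ones.
    assert 0 <= n <= 99
    tens, ones = divmod(n, 10)
    return {"left": hand(tens), "right": hand(ones)}
```

So 67 is: left thumb up plus one finger (60), right thumb up plus two fingers (7).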
I've been normie-pilled. The last three conspiracy theories that I put stock in were that COVID was a lab leak, Epstein didn't kill himself, and Boeing killed those whistleblowers. But looking into them myself made them all seem less likely (altho I can't rule the first 2 out)
The philosophy of dimensional analysis seems underrated relative to its importance. Like, what on earth is a 'dimension'? How do we figure that out? Why can you multiply but not add them? Is it a priori or a posteriori? The more you think about it the stranger it seems.
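One way to make the multiply-but-not-add rule concrete: model a dimension as a vector of exponents over some base dimensions (say length, mass, time). Multiplying quantities adds the exponent vectors, which always works; adding quantities only makes sense when the vectors match exactly. A minimal sketch (the choice of three base dimensions is just for illustration):

```python
class Dim:
    # A dimension as a tuple of exponents over (length, mass, time).
    # E.g. velocity is L^1 T^-1, i.e. Dim(L=1, T=-1).
    def __init__(self, L=0, M=0, T=0):
        self.exps = (L, M, T)

    def __mul__(self, other):
        # Multiplication always works: exponents add componentwise.
        return Dim(*(a + b for a, b in zip(self.exps, other.exps)))

    def __add__(self, other):
        # Addition is only defined for identical dimensions.
        if self.exps != other.exps:
            raise TypeError("can't add quantities of different dimensions")
        return Dim(*self.exps)

length = Dim(L=1)
time = Dim(T=1)
velocity = length * Dim(T=-1)  # L^1 T^-1
```

Here `length * velocity` is fine (it's L^2 T^-1), while `length + time` raises. Of course, this just encodes the rule; the philosophical question of *why* dimensions behave like an exponent vector space is exactly what remains strange.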
Thought of a dunk that would be funny, but it would also be mean and kind of contrary to my values, so I'm exercising the virtue of silence. You're welcome.
AI doomers are the type of people who are so awful that they would probably oppose all sorts of good technologies. No I won't provide examples. No biotech, fusion power etc don't count as examples, they probably haven't heard of those.
^ this is not a good argument!
One thing I'd like to emphasize: I think this is the best debate I have seen in my life. Object level informative, and worth wondering how to emulate. I genuinely wish political debates had this format.
not loving the idea that the firing of Sam Altman is the work of "doomerism" / "safetyism" such that people like me are somehow responsible for it. especially when there is ~0 solid public info to go off.
blood donation is not meant to be free. it's meant to be effective at providing blood to people who need it. humans are meant to be free. to transact. in, among other things, blood.
The Cowen-Thiel convo begins with a butchering of both Calvinism and LessWrongism (Calvinists don't believe in the use of reason? LessWrongers don't claim to know things? Give me a break...) Not promising!
@ByrneHobart
2. "[EA, LW-ism] often see empowering the tech sector[...] as crucial to human survival in the far future" - some relevant context here is that Lighthaven people are super opposed to big AI companies doing AI development!
People keep on putting out crazy numbers for losses to cybercrime like this. US GDP is approx $25 trillion. I think it's really unlikely that ~40% of US GDP is lost to cybercrime!
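A quick sanity check of the implied fraction (the ~$10 trillion loss figure is my reading of the kind of headline number being quoted, not something I've verified):

```python
claimed_losses = 10e12   # assumed headline figure: ~$10 trillion lost to cybercrime
us_gdp = 25e12           # approximate US GDP in dollars
fraction = claimed_losses / us_gdp
print(fraction)  # 0.4, i.e. ~40% of US GDP
```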
One problem with regulation is that I don't think anyone knows what companies can do to ensure that when they make super-smart AI, it doesn't cause this sort of catastrophe. So it's unclear what the government should mandate other than "stop".
@cremieuxrecueil
I feel like there's probably a project in here of mapping out which journals have a high proportion of LLM-written papers, to make it legible to the rest of the world which bits of academia are more or less fake.
Do you regularly read LessWrong? || If for the next 100 years you had the power to detonate a nuclear bomb on the moon, is there a >10% chance you would use this power?
ok this apparently isn't very capitalist of me but I feel like the social value of ozempic is probably higher than the social value of facebook+instagram existing.
I'm mildly obsessed with this Q: Why did most of European industry whiff on the tech revolution?
Look at the largest US vs. EU companies by market cap.
- America: computers, computers, computers, computers, computers, computers
- Europe: ozempic, luxury bags and jewelry, lotions
I feel like Paul Romer should be lauded for his take early in COVID that we should make tests super cheap and not bother to trace - IMO the discourse has forgotten how right he was.
I guess this is a matter of hourly income? As a PhD student I am for sure on team 10x cheaper, and intuitively shocked that others wouldn't be, but perhaps things will change in future.
@ByrneHobart
The two bits of the article that seem the most sketchy:
1. "A group with racist ties" is pretty strong for "a group that once held/hosted a couple of events that invited some racist people, as well as a ton of people with normal levels of racism"
"The reformation had to start from the outside, it couldn't start from within" - but it did start from within! Obviously it couldn't stay there, but Martin Luther was a monk and a priest!
Re: nonlinear drama, has anyone
- read Ben's post and written down on a piece of paper the things that seemed bad, along with the evidence provided for them
- then read Kat's post while looking at that piece of paper and checking off the things that were disproven
? If so, what was the result?
If you walk around Berkeley on the day of the graduation ceremony wearing graduation attire, people will say "congratulations" to you. But anyone can buy it, you don't need to have actually graduated.
Finished watching the debate! tl;dr in my sketchy Bayesian model, I get a 96% posterior probability of zoonosis, which is massively sensitive to the question "Would you expect the first superspreading event to be at the HSM even if the pandemic didn't originate there".
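The numbers below are purely illustrative (the actual model isn't in this tweet), but the odds-form Bayes update is the mechanism, and it shows how a single sensitive likelihood ratio can dominate the posterior:

```python
def posterior(prior, likelihood_ratio):
    # Odds-form Bayes update: convert P(H) to odds, multiply by
    # LR = P(E|H) / P(E|not H), convert back to a probability.
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Illustrative only: a 50% prior on zoonosis combined with a net
# likelihood ratio of 24 from the evidence gives a 96% posterior.
print(posterior(0.5, 24))
```

Halve or double that one likelihood ratio and the posterior moves a lot, which is what "massively sensitive" means here.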
~25% of people with an opinion would be willing to increase their expected lifespan by risking human extinction. Makes it plausible that some oppose AI pause for this reason.
Suppose that slowing AGI development decreased the chance that humanity goes extinct by 1 percentage point, but also decreased the chance of you living for at least 1,000 years by 2 percentage points. Would you support slowing AGI development?
Is it just me or is the 80k podcast getting better? Went back to it and there are quite a few fun episodes. To some degree it's at the cost of the EA focus but it's at least interesting.
@adrusi
"it's not normal to post about what topics are and aren't normal to post about. if you do you're either a high decoupler or an anti-intellectual. sorry but those are the only options"
Me, a raisin brain: "man, why am I enjoying these book reviews by random authors so much less than the blog running the book review contest, written by one of my favourite authors on the internet?"
EA/LW terminology on the verge of acceptance that I really dislike: saying "X is counterfactual" to mean "X is factual and not present in counterfactual worlds". I get why people want a word for this but please pick another one!
People be like "it's so crazy that you need a flowchart to explain how Mormons think the afterlife works, that's so complex" but if there were an afterlife it probably would be pretty complex.
My guess is that a lot of the information in "Going Infinite" is just wrong, perhaps because the author took SBF and Nishad's word for things. Reasons I think this:
1. SBF has a reputation among people I know for dishonesty, but the book paints him as a truth-teller. (1/2)
I think Pascal's Wager is at the opposite end of the spectrum of philosophy of colour, in that it's incredibly substantive, raises a bunch of interesting philosophical questions, and doesn't have an obvious answer.
@moultano
@diviacaroline
Maybe development of vectors + vector calc would have made early EM faster? But we got Maxwell's equations without them somehow...
Is it just me or is it kinda sus that you have to scroll a bit to report a twitter account as spam, given that that's by far the main thing I want to report accounts for?
As far as I can tell, continued development of more powerful AI risks catastrophe on the order of human extinction. In that context, a laissez-faire approach of suing for damages after the fact doesn't seem viable.
@Aella_Girl
I think there are probably a few axes of "accepted" and this is just picking up one. Like, I think it would be hard to imagine someone smoking marijuana in a TV ad for cheerios or something, but it's not very taboo. Similarly for "being below median attractiveness".
It occurs to me that Esperanto music and contemporary Christian music both suffer from the problem that people make them because they kind of feel like they should, and people listen for reasons other than the music being good.
@ShakeelHashim
Why? Like, Beff Jezos aims to influence policy by talking and making arguments and tweets. It seems to me that that speech rises and falls on its own merits. What thing of value did you gain by learning his identity?
GCRI did an anonymous survey of experts on the origins of COVID! Apparently most believe (as I do) that it was most likely a zoonotic spillover, but 1/5 thought an accident was more likely.
Weird that no major religion regulates its adherents' soda consumption. Seems like low-hanging fruit as a way to induce communal sacrifice, make the adherents distinctive, and also make them healthier. Mormons could do this this year! Or Catholics could ban it Fridays!
I guess to be concrete:
- I have no idea why Sam was fired.
- Without more info, I don't have a solid take on whether it was good that he was fired.
- I could imagine situations where Sam was cavalier about safety and lied to the board where it would seem good to fire him.
@ByrneHobart
I guess I should say that the broad point, that people like Jonny Anomaly, Steve Hsu etc did attend and mingle and attract audiences etc, is true.
In general it's disturbing how uninformed many intellectuals are when they analogize stuff to various parts of the Judeo-Christian world. I guess they accurately perceive that their audience is largely even less informed and is just impressed that you know fancy words.
The views on metaphysics and epistemology expounded in the LessWrong sequences are not unique but rather have precedent in academic philosophy - people like Quine or Dennett. Also, they're great and underrated. This should give pause to LWer skeptics of academic philosophy.
New episode of The Filan Cabinet with Aaron Silverbook, where we talk about how mouth bacteria cause cavities, and a new product that might prevent them.
I wrote an outline of a post about why I like "AI doom liability". I don't think fleshing it out would add much value, but maybe I'm just being lazy. Anyone want to read and give feedback? Explanation of "AI doom liability" in next tweets.