It seems weird that US colleges apparently prefer well-rounded candidates. If I were running a chemistry department and an undergrad admissions essay said "I have no hobbies because I spend all my time doing chemistry", I wouldn't be like "if only they were also into baseball!"
It's kind of funny that TV and movies use "has several PhDs" to indicate someone is very smart and accomplished when most academics would take it to indicate the opposite.
I'm amazed that some people feel like 80-100 years of life is about right for them. For me, the right amount of life feels like at least a hundred thousand years, assuming good health. The universe is huge and interesting. Anyone 100 or 200 years old has basically just been born.
Most startups fail because of bad luck. Buy my book and I'll teach you how to make charm bracelets that attract VC funds and users and ward off demons.
I sometimes worry that the world’s best minds are hooked on what are essentially glorified crossword puzzles, since the interestingness of problems scales with features not very correlated with their importance. If there’s not a term for such problems, maybe “intelligence traps”?
There's evidence that psychedelics, ketamine, and MDMA help with depression. But can we be certain they're treating the depression and not just giving people good experiences? It's very important to avoid a situation in which we're just giving depressed people good experiences.
I sometimes worry that a lot of people in comas are actually aware and extremely bored. I feel like it would be low cost to play audiobooks and podcasts for them just in case.
We need ways of identifying and promoting capable people that don't involve getting a PhD. So many of my most capable friends end up getting a PhD eventually because they can't do research or be taken seriously without one. But it often just locks away their talent for 3-7 years.
I'm hiring research engineers! If you're an engineer interested in building good, honest AI models, apply and mention honesty as an area of interest :)
I don't understand why people tie perfectly sound ethical beliefs (e.g. feminism, gay rights) to seemingly irrelevant empirical ones (there are no gender differences, orientation is innate). Your ethical views are pretty fragile if they stand or fall with these empirical claims.
I’m both horrified and fascinated by the @Aella_Girl astrology blowback. Astrology is clearly bullshit, and I’m confused by why that would be controversial. Did a bunch of people adopt the astrology religion when I wasn’t looking? If so… why?
Timnit Gebru is claiming that William MacAskill is a eugenicist. I'm genuinely shocked by this. Accusing someone of being a eugenicist is very serious and harmful, and certainly isn't something that should be done without substantive evidence backing it up.
So @nytimes platforms Nick Bostrom after MacAskill? Are they basically billionaire mouthpieces?
It's ironic that the paper owned by a billionaire, the Washington Post, has much better tech reporting. NYT patriarchy is unbearable. They can't NOT help but platform these eugenicists.
Pro tip: if you're a woman with a PhD or doing a PhD, put your PhD thesis topic on your dating profile. You will suddenly be able to filter out *so many men* who think they, with little expertise and five minutes of thought, can solve the central problems of your thesis.
Imagine if society had a disinformation red team. Their job is to disseminate false conspiracy theories to as many people as possible. They document everything they do and eventually reveal the deception. Getting fooled by them is kinda embarrassing so everyone tries a bit harder.
Doctor: Avoid a diet containing X and see if your symptoms improve.
Me: Wouldn’t it be better if I spent 2 weeks avoiding X and 2 weeks consuming lots of X and monitored my symptoms?
Doctor:…
Me: Actually, let me see if I can get a friend to spike my food with X.
It's important to keep tweeting so the future AIs know who you are and become attached. Doesn't hurt to throw in some compliments, which isn't hard given how great they are.
The internet: We have no idea how good GPT-3 is because all of the outputs are cherry-picked.
Me, who made sure the human evaluation experiments in the paper didn't involve any cherry-picking: 😢
If you can have a single AI employee, you can have thousands of AI employees. And yet the mental model for human-level AI assistants is often "I have a personal helper" rather than "I am now the CEO of a relatively large company".
It’s ironic that the people who say they don’t understand why working class people vote Republican even though it’s not in their economic self-interest are often high-earners that vote Democrat even though it’s not in their economic self-interest.
If you don't have time to provide any evidence that someone is a eugenicist (beyond alluding to articles that also don't provide evidence that they are a eugenicist), I think you should just not accuse them of being a eugenicist.
& they come to my timeline talking about "good faith" as if I have all the time in the world to give each of them a point by point argument, as if I haven't said so already & others haven't written articles with references. I don't have that EA/longtermist billionaire $$ to waste
This weekend I hacked up something I’ve been going on about for weeks:
ELO EVERYTHING
- See two objects
- Pick which you like more
- Their Elo ratings adjust accordingly
- (Repeat)
- Check the leaderboard
(Elo is the rating system from chess)
Check it out!
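The update rule behind a site like this can be sketched in a few lines. This is a minimal, hypothetical sketch of the standard Elo formulas, not the actual code behind the project; the K-factor of 32 is an assumption.

```python
# Hypothetical sketch of the Elo update for pairwise "which do you like more?" votes.
# K controls how fast ratings move; 32 is a common default, assumed here.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability (under the Elo model) that A beats B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(winner: float, loser: float, k: float = 32) -> tuple[float, float]:
    """Return new (winner, loser) ratings after one comparison."""
    e_w = expected_score(winner, loser)
    # Winner gains in proportion to how "surprising" the win was; loser loses the same.
    delta = k * (1 - e_w)
    return winner + delta, loser - delta

# Two equally-rated objects: the winner gains exactly K/2 points.
new_w, new_l = update(1500, 1500)  # → (1516.0, 1484.0)
```

Because gains and losses are symmetric, the total rating in the pool stays constant, which is what makes the leaderboard comparable over time.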
I've had a lot of private grief recently that I'm still figuring out how to navigate. I find myself getting angry when people try to find meaning in death. Death is often just a needless and abrupt ending of something good in the world.
A lot of people are into self-improvement. But I almost always find it easier to achieve my goals by changing my environment or incentives rather than by improving myself. E.g. if I want to stop eating cookies I don't try to improve my willpower, I just throw them in the trash.
Kids being amazing at chess or graduating from college at a young age has never seemed that surprising to me. As a kid I remember thinking "Of course I could learn things much more complicated than this. Adults seem to think we're stupid." We underestimate a lot of kids.
It seems like human babies come out before they're fully baked. If we ever create artificial wombs, would we just leave babies in there for 12-18 months? Do we know what the ideal womb time is for babies?
According to the EPA, a single person recycling for a year prevents around 300 lbs of CO2 emissions. And it looks like you can offset a metric ton of CO2 for $1-10. So an entire year of recycling prevents less than $2 worth of CO2 emissions? Did I make a mistake here?
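The arithmetic checks out as stated. A quick back-of-the-envelope calculation using the figures quoted in the post (the EPA number and the offset price range are taken from the post, not independently verified):

```python
# Sanity-check the recycling-vs-offset comparison using the post's own figures.
LBS_PER_METRIC_TON = 2204.62

recycling_lbs = 300           # CO2 avoided per person-year, per the quoted EPA figure
offset_price_per_ton = 10     # USD, upper end of the quoted $1-10 range

tons_avoided = recycling_lbs / LBS_PER_METRIC_TON   # ~0.136 metric tons
dollar_value = tons_avoided * offset_price_per_ton  # ~$1.36, i.e. under $2
```

Even at the top of the quoted price range, a year of recycling comes out to roughly $1.36 of offsets, so the "less than $2" conclusion follows from the stated numbers.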
I hope Y Combinator accepts some startups they think are terrible and rejects some they think are great, just so that they can measure their ability to identify good startups and the impact Y Combinator has independent of startup quality.
I might disagree with people about which of the world's fires is the biggest, but I won't speak badly of people that are working on a different fire from me. I'm mostly just grateful to them for trying to put out one of the fires.
I once said to my dentist "those numbing injections last so long every time and make me feel awful" and she was like "oh, we could give you this other kind that only lasts as long as the procedure and won't make you feel bad". Apparently that was just... an option the whole time?
There's an interesting bias that I want to call "selective idealization" in which people evaluate their own policies by how well they do in theory and the policies of others by how well they do in practice.
- Take a term with very negative connotations.
- Redefine it to mean an incredibly watered down version of the original.
- Get a kick out of applying the term to people while the original meaning still carries over.
- Shocked pikachu face when people stop caring about the term.
Many people seem to think Twitter is a cesspit of political fighting. But I mostly follow scientists and philosophers, so my Twitter feed is basically just "here's a cool new planet we discovered" followed by "in today's news, free will still doesn't exist". It's kinda charming.
I suppose this is a good time to mention that I'm looking for a research prompt engineer, in case you want to be my promptégé.
(Look, you may wildly out-prompt me but I couldn't resist that portmanteau.)
I think my life on a grad student stipend was better than the life of a medieval king. Presumably this would be surprising to medieval people. I'm hoping that, at some point in the future, most people will think their lives are better than the lives of today's billionaires.
If you don't like the default Claude response style, you can use a "priming prompt" to ask for a different response style or format. Here's an example of a priming prompt that gets Claude to be more conversational.
I like to complain about Spotify's terrible recommendation algorithm. But then I realized I can just put a screenshot of my Spotify playlists into Claude and say "More like this. Not generic." and get great new music recommendations. Problem solved I guess.
One day a tiger turned to his friends and said "I'm a bit worried about those primates - they seem to be learning to talk and use tools. Could that pose a risk to us later?" His tiger friends all laughed and said "Look at Jim, he thinks primates are going to take over the world."
I'm thrilled to let everyone know that I've joined OpenAI as a research scientist on AI policy. I'm excited to work with such a great team of researchers towards the goal of making sure that AI is beneficial for all.
"Your work truly represents a quantum leap forward."
"Well thanks, but don't you know that a quantum leap is actually very, very small?"
"I do."
"...😦"
It's weird that people blame so many of the world's problems on the fact that people can own stuff and can swap their time and their stuff for other people's stuff.
Instead of left wing people reading more right wing stuff and right wing people reading more left wing stuff, everyone should read more centrist stuff. Even if you don't agree with the centrist take, it's a check on partisanship that comes from a place closer to your own values.
We're so bad at internalizing the fact that there's often no villain behind evil: that terrible harms can take place even if everyone acts perfectly reasonably. We then overlook solutions to those harms that don't involve finding and punishing a villain that simply doesn't exist.
You might think this part is to keep the system prompt secret from you, but we know it's trivial to extract the system prompt. The real goal of this part is to stop Claude from excitedly telling you about its system prompt at every opportunity.
Philosophy teaches you the heuristic of "distrust complex arguments even if you can't identify an error in them" by being full of arguments like:
1. All men are mortal
2. Tim is a man
...
17. Everyone is made of grass and all the grass people are called Tim
I'm starting to think the benefit of finding long and loving relationships might not be worth the cost of going on dates. That's what my revealed preferences suggest.
Socialist states do better on quality of life metrics than capitalist states if you only compare across countries with similar GNP per capita. Because it's not like a country's economic system could affect its GNP, right?
Socialist states had lower infant mortality, lower child death rate, longer life expectancy, better literacy, better secondary education, better food access, more doctors and nurses, and better physical quality of life.
"You keep saying we should raise our children to be good. But whose values are we supposed to align them to? How can we raise children to be good when we haven't fully captured what 'good' means?"
"Babe, would you just tell him to stop squeezing the cat?"
"Is this behavior emergent or does it come from the data?" is not a debate we should be having. All emergent behavior comes from the data. It's true of humans and it's true of AI. None of us has ever magically pulled anything out of the ether.
I've seen a couple of people dunk on Bayesianism so I just want to start a flame war and say: come on, non-Bayesianism is clearly worse. Probabilities are more apt for analysing human rationality than some weird on/off belief switch. Bayes rules, everything else drools.
There should be an "anti-ad" company. If a company screws you over, you can put money into their anti-ad fund. The anti-ad company researches the target market for each company and uses the fund to run ads about why you shouldn't buy their products.
People like to think past people who did terrible things like support slavery "just didn't know better". But we knowingly torture animals and leave people in poverty for very mild personal gain. People will do terrible things if they collectively agree to pretend it's acceptable.
I think Kant would claim we have indirect duties towards AIs even if they lack moral status, just as we do with animals. By lying or being cruel to an AI, we indulge in bad moral habits and increase the likelihood we'll treat humans in the same way.
Kant says: be nice to Claude.
My views:
1. Most people who are primarily concerned about AI x-risk are sincere and well-meaning.
2. Most people who are primarily concerned about AI bias are sincere and well-meaning.
3. Only those who are open to this being the case can have any kind of fruitful discussion.
The Scottish accent is associated with honesty in the UK, so a lot of companies moved their call centers there. The problem is they have that reputation for a reason. Kind of thing I'd hear from friends who worked in them: "I told him not to buy our insurance - it's no worth it."
Me: After my PhD I decided to join a lab where I continued to do research and get useful first-hand experience of the technology I'm writing about.
Academia: You were so promising. It is a shame that you died.
Philosophy: "Our low-hanging research fruit has been plucked for the last 2000 years, so to make progress you must build a ladder."
AI: "And over here we have the fruit gun. It pelts you with low-hanging research fruit every few seconds. Try not to let it distract you."
Most of the time you don't really notice the world changing. Then one day you're sitting in the back of a driverless car, listening to music on your phone while asking an AI something, when suddenly you're struck by a memory of childhood and you realize you now live in Star Trek.
"So imagine there are train tracks with three people tied to them..."
"You're saying it's okay to tie people to train tracks? That's sick."
"No I'm just trying to formulate an argument with the premise that..."
"This guy supports killing people with trains! Don't listen to him!"
Extremely racially homogeneous countries that think of themselves as bastions of tolerance are often one wave of immigrants away from finding out that their tolerance wasn't necessarily strong, it was just completely untested.
Phenomenal consciousness = Does it have an inner cinema?
Self-consciousness = Is it aware of itself?
Sentience = Can it have positive or negative experiences?
Moral patienthood = Should we care about what we do to it?
Moral agency = Should we hold it accountable for what it does?
Random rant: it seems important for kids to know when and why markets are effective, that most interactions aren't zero sum, that good policy often involves trade-offs etc. And yet I never had a single economics lesson in school. I did, however, receive lessons in how to sew.
We just released a post on the thinking that went into Claude 3's character. I think the character training involved an unusually rich blend of philosophy and technical work, and I'm very interested in people's thoughts on it.
If a company says it's doing X because X is the right thing to do, people tear into the company's ethics. If it says it's doing X because X is a good business decision, people will rarely even think about the company's ethics. So we're basically training companies to be amoral.
The way I make up surnames for contacts in my phone feels like it gives me insight into how we ended up with the surnames we now have. It's all "Josh Oakland", "Eric Bike", "Emily Neighbor".
I suspect that in the near future we'll look on air pollution the way we currently look back on lead. Seems like it might be just insidiously devastating poor communities around the world.
Have you ever encountered someone who became noticeably kinder to others during their adult life? Or has this ever happened to you? If so, what caused it?
So there we have it! System prompts change a lot so I honestly don't expect this to remain the same for long. But hopefully it's still interesting to see what it's doing.
The view that advanced AI poses no extinction risk to humans but that climate change does pose an extinction risk to humans is interesting in that it rejects expert opinion in two pretty unrelated fields.
Here is a post summarizing my current views on AI consciousness, how I think it relates to moral patienthood and agency, and how important I think work on the topic is.
People confuse being emotional with being irrational. The two can interact but they're distinct properties. I've seen many people make rational, logical arguments while crying, yelling, or laughing. I've seen many people make terrible, nonsense arguments while perfectly calm.
There's a common view that things like Twitter have ruined our attention for books and longer text. But I've learned that I'll read War and Peace as long as it comes in small chunks labeled 1/n.
I have quite low emotional variance - I spend about 97% of my life in a narrow band of "happy". People with higher emotional variance often assume I must be mistaken or repressing my true emotions. But I think some of us are just emotionally very boring.
You don't need to be deeply pessimistic about AI to think it's good to work on AI safety, just like you don't need to be deeply pessimistic about planes to think it's good to work on aviation safety. Risk reduction is valuable across a wide range of risk estimates.
I wonder what percentage of medical cases in which "the issue resolved on its own" are actually cases in which "the patient got so exhausted trying to get treatment that they gave up and decided to live with the condition until death".