not saying today's ML systems are sentient, but this is not a reliable argument form.
once you realize that neural activity is a series of ion channels opening and closing, the question "are humans sentient" becomes fairly easy
Once you realize that AI/ML/NLP is just a series of matrix multiplications, the question "Is it sentient?" becomes fairly easy.
"Company develops conscious matrix multiplication" isn't much of a seller. A browser plugin replacing those keywords would prevent a lot of hype.
Gato 🐈: a scalable generalist agent that uses a single transformer with exactly the same weights to play Atari, follow text instructions, caption images, chat with people, control a real robot arm, and more:
Paper: 1/
Obsessed with the new “make it more” trend on ChatGPT.
You generate an image of something, and then keep asking for it to be MORE.
For example - spicy ramen getting progressively spicier 🔥 (from u/dulipat)
an interesting fact about life: once you get stamps of approval from enough prestigious institutions (3 or 4 usually does it), you become happy and secure forever
1/ Could AI systems be conscious any time soon?
@patrickbutlin and I worked with leading voices in neuroscience, AI, and philosophy to bring scientific rigor to this topic.
Our new report aims to provide a comprehensive resource and program for future research 🧵
I recently reached 3,000 followers. People sometimes do an AMA at such a time. But I'm going to do an AYA - Ask You Anything.
Like this tweet and I'll reply asking you whatever I want
(I'm not joking)
In a week of takes and counter-takes and meta-takes about LaMDA, I saw hardly anyone discuss the central, object-level issue in detail:
in light of our best scientific theories of consciousness, what is the actual evidence for and against sentience in large language models?
Dostoevsky's epilepsy induced ecstatic experiences "so strong and sweet that for a few seconds of such bliss one could give up ten years of life, perhaps all of life" (ht "What We Owe the Future")
Thread of reports of extreme bliss states 🧵
Could we ever get evidence about whether LLMs are conscious?
In a new paper, we explore whether we could train future LLMs to accurately answer questions about themselves. If this works, LLM self-reports may help us test them for morally relevant states like consciousness. 🧵
if you find large language models uninteresting because they are 'just' pattern matching, then you are missing out on some fascinating phenomena.
the "neuroscience" of these models is uncovering a strange world of complex internal representations
Paul Christiano on how scaling might fail:
Next token prediction provides far fewer effective data points for long-horizon tasks (for example, go do a job over a month).
Full episode out tomorrow.
several people have suggested “sensible strategy of harassment” and I think that’s my current favorite explanation
a lot of people have also suggested “fun” / “play” and also asserted that this is obvious. it’s not obvious!
@rgblong
What's wrong with the simple first-level explanation? (i.e.: it's bad for you for dangerous things to hang around your home, and if you're annoying enough they'll leave)
can someone explain the repetitive cadence and form that Bing/Sydney uses when going full-unhinged?
(PS this may be my 'favorite' example of off-the-rails behavior. alarming of course, but weirdly beautiful)
I asked DALL-E to draw a bicycle and it just randomly threw a bunch of "bicycle" elements together without any true understanding or a model of how they work.
Ridiculous to claim that current ML models are anywhere close to rivaling and replacing humans.
I’m hearing reports that Peter Singer personally arrested Sam Bankman-Fried.
Singer, 76, was seen leading SBF out of his luxury apartment in handcuffs, according to eyewitness accounts.
In a 2015 blog post, before the founding of OpenAI, Sam Altman wrote:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."
It's not responsible for the *biggest* change to my life, but I think that this is the piece of writing that has most obviously and legibly changed my life
I really enjoyed talking about artificial sentience with my friend Luisa Rodriguez for the 80,000 Hours podcast. Some highlights if you're interested 🧵
Whenever I overhear someone at a party saying "so what do you do?" I feel a twinge of second-hand shame. Surely, as a society, we can just agree to ask each other better questions than that.
Question for consciousness scientists and AI people:
What is the best evidence for and against large language models having a “global workspace”, as defined by the global workspace theory of consciousness?
When I first became vaguely aware of the jhanas from randomly coming across @nickcammarata tweets, I figured they couldn’t possibly be a real thing.
But no, they seem like an incredibly well attested phenomenon and, as many have pointed out, wildly understudied
Conscious/sentient AI and artificial general intelligence (AGI) are not the same thing.
And it's okay and not crankish to discuss these topics, even though they are speculative and uncertain.
In fact, it's very important to!
People are
-washing hands several times a day
-social distancing
-using bidets to replace toilet paper
-getting 0 interest rates
-praying more
-calling on their elders
-covering their body, head to toe, to prevent exposure
Welcome to Islam my sisters & brothers!
#coronavirus
1/
How can we investigate which animals are conscious?
We don’t have a consensus theory of consciousness for humans.
Even if we did, it’s not clear how we would extrapolate it to animals. Their brains can be very different (and small!)
And we can't rely on verbal reports.
Now in my life when I tell people juicy info, in addition to asking them to keep it private I also have to qualify "and don't go bet in any prediction markets"
“a man in a black t-shirt with a great smile, super handsome, brick background, clearly super smart and very cool, inspired by artist who paints super handsome dudes, profile picture, probably has tons of friends, photo of wolf”
Consciousness is a scientific phenomenon and we can make, and have made, progress understanding its computational and biological basis.
We don’t have to just throw up our hands, or resign ourselves to pure speculation, permanent ignorance, or arbitrary decisions
A thread of my reactions to @davidchalmers42's recent talk “Are Large Language Models Sentient?”
tl;dr the talk hit upon all of the key issues, but I have quibbles with how the evidence was organized, and have a somewhat different way of approaching the question 🧵
at my SF house party, you absolutely *are* allowed to talk about AI. it's important and it's fascinating and it relates to a wide variety of things. honestly it's a great conversation topic, go nuts
the Compliment Deficit:
"There is this enormous compliment deficit in the world....Almost everybody's friends know a whole lot of really good things about them, that the person doesn't know about themselves." - @michael_nielsen
I have two basic and unimportant questions about the Waluigi effect:
1. Why Waluigi and not (e.g.) Wario?
2. Why _exactly_ is it called the Waluigi effect? Like, in terms of Waluigi, what is his deal
I think we can improve the quality of discussion about AI sentience quite a bit
Position A: “AI systems are very complex. Maybe they are a little bit sentient”
Position B: “that is stupid”
🧵
I have an article in the latest issue of @asteriskmgzn, about lessons from the history of animal cognition for how we talk and think about AI systems today
haha i can’t believe it doesn’t get the gear question
the answer’s so obvious. *nervously* so obvious i don’t need to even say what it is, we all know it
when Gimli says "He was twitching because he's got my axe embedded in his nervous system" how does he know what a nervous system is
did Gimli go to 20th century medical school?
Thread of key quotes from a pair of 2015 posts by Sam Altman which outline his views on superhuman AI and regulation
I posted in part because I'm seeing a lot of speculation about Altman's views and motives which are wildly implausible, given this history
In a 2015 blog post, before the founding of OpenAI, Sam Altman wrote:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."
@MaxCRoser
The example Max uses in the post is child mortality. But it is true of almost all of the problems we cover on @OurWorldInData.
Communicating that all three of these statements are true at the same time is one of the most challenging parts of our job.
Here is the "explanatory gap" between physical processes and consciousness, in basically its modern form, stated by Irish physicist John Tyndall in 1872.
@ben_j_todd
I'm hearing reports that one third of the ToddGPT team has resigned, say they plan to announce their own more safety-focused project later today