“What are human values, and how do we align to them?”
Very excited to release our new paper on values alignment, co-authored with
@ryan_t_lowe
and funded by
@openai
.
📝:
I'm pleased to release the talk “Rebuilding Society on Meaning: Practical Techniques to Align Markets, Recommenders, Social Networks, and Organizations with Togetherness and Meaning”.
This 80m talk is my life's work!
We're starting something called the Social Design Club—a place where social designers can present what they are working on and get feedback—as part of
@peternlimberg
's The Stoa.
Have a social design you want feedback on? DM me or
@utotranslucence
!
Founders! I want to try shifting how you think about your product. If you're making a tool or toolkit, can we get together and try reframing it as an environment or playground?
If you're up for this thought experiment, grab a free slot:
This has quickly become my most popular design suggestion, and now it's a 6 minute video (). It's about splitting your project into funnels, tubes, and spaces.
Why's it popular? It’s easy to grasp, and makes a big difference in what you make.
I had a nightmare that bitcoin miners built a Dyson sphere and Earth went dark.
As we lay freezing to death, I thought: *that* explains the Fermi paradox.
All this underscores one thing: no one has a good vision of how to integrate powerful AI in society.
The "time to build" folks are for technocapitalism-as-usual. That won't work for AGI.
The "go slower" people are driven by fear, not by an alternate, positive vision.
Holding apps accountable for our time is the first step. Next we must hold them accountable for affecting the quality of our relationships, our projects, and so on.
For many reasons, utilitarian consequentialism is lousy for self-understanding. Here’s one: asking “Am I helpful or harmful?” is soul-crushing compared to “Am I an agent of beauty/truth/love/etc?”
Design theory *should* be the same discipline as political theory. Same core question ❓what ideas about human nature (preferences, goals, feelings) and relations (contracts, solidarity, family) best guide the development of institutions/policies/systems?❓
🧵 A very common problem in entrepreneurship 👇
Your values and vision lead you to make X.
X, once made, doesn't serve those values or visions at all.
Here's my story, how I think about the problem, and what we can do about it.
The next big HCI lab will be explicitly political. The time for PARC-like labs is gone. The internet / software environment has come to dominate society, so those building alt software must have a political vision, and it must be explicit and up for debate.
There's danger to decentralizing the web before we can make it accountable to social harms.
- machine-enforced contracts involving child exploitation
- AI driven profit maximizer bots
- clickbait incentives baked into P2P code
Worse than monopolies
3. #Decentralisation is ultimately a question of #democracy. As digital technology penetrates society ever more deeply & the two become ever more intertwined, the rules of the former will increasingly govern the latter.
A common cause of procrastination:
“I wish I had a buddy for task X. I’m also disconnected from my feelings about it. So instead of feeling lonely, I distract myself.”
Yes, humans are malleable.
But beware anyone who talks of reinventing people or culture as a step *before* reinventing social systems.
They’re likely justifying their own work (neo-monasteries, leadership trainings, personal growth, …) & hiding a lack of institutional ideas.
Life Don't Make Sense, But Can Have Meaning
Do not seek narrative coherence or unified purpose in life. You'll get all frustrated and boring.
But purpose and meaning can everywhere be found!
How? In moments of choice and attention, when you grapple with your values.
Founders, I'll talk with you about your product for free. These four cards show some product reframes we can try together, maybe they'll give you new ideas. Click for more info, and a scheduling link.
I have a bold prediction. It might seem crazy.
Today, tech just keeps getting more dystopian; politics seems irrevocably broken; everything's fake.
→ Four years from now, the overall outlook will be radically different. Much more positive.
To change someone's perspective (i.e., their attention policies), give them either:
(1) situations that require new modes of attention; or
(2) help feeling through feelings, to shift values.
@peternlimberg
@buster
@Conaw
@juliagalef
etc
I used to find conversation impossible with some people: those who relentlessly reframe things, who attack, blame, close up, change the subject.
I wish I'd known earlier: if I'm loving, attentive, and let their pattern run its course, afterwards they get curious and open!
We reconcile value conflicts by asking which values participants think are wiser than others within a context.
This lets us build an alignment target we call a "moral graph".
It surfaces the wisest values of a large population, without relying on an ultimate moral theory.
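The idea above can be sketched as a tiny data structure. This is a hypothetical illustration, not the paper's actual implementation: values are nodes, and each participant judgement adds an edge saying one value is wiser than another in some context; the "wisest" values are then the ones no one judged a wiser alternative to.

```python
from collections import defaultdict

class MoralGraph:
    """Illustrative sketch of a moral graph: values are nodes; an edge
    records that participants judged one value wiser than another
    in some context. (Names and structure are assumptions, not the
    paper's implementation.)"""

    def __init__(self):
        self.values = set()
        self.wiser_than = defaultdict(set)  # value -> set of wiser values
        self.edges = []                     # (less_wise, wiser, context)

    def add_judgement(self, less_wise, wiser, context):
        self.values.update([less_wise, wiser])
        self.wiser_than[less_wise].add(wiser)
        self.edges.append((less_wise, wiser, context))

    def wisest(self):
        # A value counts as "wisest" if no participant judged
        # another value wiser than it.
        return {v for v in self.values if not self.wiser_than[v]}

g = MoralGraph()
g.add_judgement("obey rules", "protect the vulnerable", context="parenting")
g.add_judgement("protect the vulnerable", "foster autonomy", context="parenting")
print(g.wisest())  # -> {'foster autonomy'}
```

Note how nothing here appeals to an ultimate moral theory: the output is determined entirely by participants' pairwise wisdom judgements.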
The explore/exploit tradeoff is likely the most powerful mental model I know.
Predicts:
- the innovator's dilemma
- ideal city layouts
- why some people can be present with one another and others can’t
- psychedelic big thoughts
- why research is stifled in big companies
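The classic formalization of this mental model is the multi-armed bandit. A minimal epsilon-greedy sketch (all names and parameters here are illustrative) shows why a mix of mostly exploiting with occasional exploring beats pure exploration:

```python
import random

def epsilon_greedy(true_payoffs, epsilon=0.1, steps=5000, seed=0):
    """Minimal explore/exploit sketch: an epsilon-greedy bandit.
    With probability epsilon we explore a random arm; otherwise we
    exploit the arm with the best estimated payoff so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_payoffs)
    estimates = [0.0] * len(true_payoffs)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_payoffs))  # explore
        else:
            arm = max(range(len(true_payoffs)),
                      key=estimates.__getitem__)    # exploit
        reward = true_payoffs[arm] + rng.gauss(0, 0.1)  # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

# Mostly exploiting (epsilon=0.1) outperforms pure exploring (epsilon=1.0).
mostly_exploit = epsilon_greedy([0.2, 0.5, 0.9], epsilon=0.1)
pure_explore = epsilon_greedy([0.2, 0.5, 0.9], epsilon=1.0)
```

The predictions above all trade off the same two quantities: the payoff of the best-known option versus the chance of discovering a better one.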
i think im anti-EA radicalized now
my friends treat their bodies like playgrounds (experiments with drugs, meditations, new exercises, etc), cycle through dangerous sports, startups, and various ideologies. as a group, nothing has hurt their wellbeing as much as EA.
San Francisco: thousands seek authenticity and community in ayahuasca and soulcycle, because it's impossible to see friends and there's nowhere to sit in the sun
Our approach, MGE, outperforms alternatives like CCAI by
@anthropic
on legitimacy in a case study, and offers robustness against ideological rhetoric.
89% agreed the winning values were fair, even when their own values didn't win!
Love this. But peak 20th C saw us building a consumer utopia—a shrine to individual affluence and comfort. This time, let’s build a utopia of interpersonal devotion and personal agency.
I've been working on a wiki-like site to collect **group practices** (and people's experiences inside them). Putting together a team of "librarians of group practices" to test and improve it. Want in?
Going toe to toe with
@fortelabs
on knowledge management this Thursday in front of a live audience.
10am PST on Zoom
Call in sick, work from home, tell your assistant to cancel your meetings and phone calls.
When I demo
@RoamResearch
, he's going down like this.
This is true more generally: keeping bridges from collapsing isn't called ethics; it's just someone's job. But we're at the stage before we know how to make it someone's job.
Are you an entrepreneur? You're probably a "same-preneur": you want to get many people doing the same thing: clicking a button, buying a product, downloading an app. You want to drive transactions and build funnels.
I hope to convince you to do something else.
An xmas present for those listening
A work of political philosophy on
- what it is to help someone
- what a good technocracy'd be like
- measuring social benefit/harm
- etc
Ch1 of my book (draft epub + gdoc for feedback)
Some questions from members of
@KERNEL0x
, about my talk.
[4m30] my 16y research project
[12m55] becoming a peer to my heroes
[18m20] transaction costs of collaboration
[24m10] meaning-aligned allocators
[26m10] status dynamics elevate the wrong people
Great, accessible summary from
@CaseyNewton
. He's right: we must battle for these terms. Definitions of 'well spent' and 'meaningful' are of great social consequence, just like definitions of 'free', 'equal', and 'just'.
A good social designer searches a very large space of imagined designs, detecting problems with them, vectoring towards better designs.
Which kinds of imagining are you good at?
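That search process looks a lot like stochastic hill-climbing: imagine a nearby variant of the current design, detect whether it has problems (a lower score), and move toward better designs. A toy sketch, with all names and the scoring function my own illustration:

```python
import random

def hill_climb(score, initial, neighbors, steps=1000, seed=0):
    """Sketch of design search as stochastic hill-climbing.
    `score` detects problems (lower = worse); `neighbors` imagines
    a nearby variant; we keep whichever design scores best."""
    rng = random.Random(seed)
    best = initial
    best_score = score(best)
    for _ in range(steps):
        candidate = neighbors(best, rng)  # imagine a nearby design
        s = score(candidate)              # detect its problems
        if s > best_score:                # vector toward the better one
            best, best_score = candidate, s
    return best

# Toy example: find the x that maximizes -(x - 3)^2 by local perturbation.
result = hill_climb(lambda x: -(x - 3) ** 2, 0.0,
                    lambda x, rng: x + rng.uniform(-0.5, 0.5))
# result converges near x = 3
```

The interesting question in the tweet is which part of this loop a given designer is good at: generating the neighbors, or scoring them.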
I was asked for 3 books for all young software engineers to read.
1. Technopoly, Neil Postman (1992)
2. The Society of the Spectacle, Guy Debord (1967)
3. The Timeless Way of Building, Christopher Alexander (1979)
We are heading to a future where powerful models fine-tuned on individual preferences & operator intent exacerbate societal issues like polarization and atomization.
In order to avoid this, we need to align AI to shared human values. But how do we do that, concretely?
We have a radical approach to online learning that's working well in the new
@humsys
training, the School for Social Design. I bet it could work well for other topics.
It combines these components: a textbook, a mission database, an alumni database, paid experts, and guides. 👇
Tomorrow I'm publishing two new essays, in response to Facebook trying to go Time Well Spent.
* Dear Zuck
* How to Design Social Systems (Without Causing Depression and War)
Watch this space. Tomorrow at 10a PST!