The saddest thing for me about modern tech’s long spiral into user manipulation and surveillance is how it has just slowly killed off the joy that people like me used to feel about new tech. Every product Meta or Amazon announces makes the future seem bleaker and grayer. 1/n
"Amazon's system taught itself that male candidates were preferable." No. This is not what happened. Amazon taught their system (with their own hiring data they fed it) that *they* prefer male candidates. This is not a small semantic difference in understanding the problem.
What will it take for us to get that feeling back? I don’t think it’s just my nostalgia, is it? There’s no longer anything being promised to us by tech companies that we actually need or asked for. Just more monitoring, more nudging, more draining of our data, our time, our joy.
It used to be the opposite. Tech was one of the things I loved most. I still remember the feeling when I rode the first BART trains in SF. When I saw my first Concorde my little head exploded. My Commodore PET. The last time tech made me truly gleeful was these glories 2/3
@mos_daf
Goddammit I am so sorry Danielle. Somebody better get their ass handed to them for this, and as you say this is not an aberration.
@SantaClaraUniv
needs to fix its campus security and PD issues yesterday
I keep seeing this in my feed, so let me echo many others: this is complete horseshit. Reckless, dangerous, willfully misleading clickbait. Anyone who has actually used AI/ML to model complex dynamic systems will tell you the same thing.
These are NOT the "ethical considerations." Not even close. They are the unreflective preferences of people who took an online survey about a contrived hypothetical, embedded with the usual forms of immoral prejudice. This is not how ethics is done.
I’ll say it again. AI IS people. Just people. All the way down. For the foreseeable future. So if you frame it as a war that AI wins, you’re saying that some people will win that war against other people. Don’t pretend the machines are fighting anyone.
The most depressing thing about GPT-4 has nothing at all to do with the tech. It’s realising how many humans already believe they are merely a machine for generating valuable word tokens
Lots of people in the comments shocked at a cat passing the mirror test. Was reminded by some animal cognition experts last week that smart animals often fail our intelligence tests because the tests set up a problem that only humans care about, not one the animal cares about.
Every few months (tonight included) I encounter someone not acquainted with the legend of
@JortsTheCat
; I get to tell the story and see their faces go from confusion to laughter (the trash can) to horror (The Buttering) to hysterics (HR) then about 7 other emotions ending in joy
First and last comment on this mess: can we please remember it the next time a CEO says we can’t regulate AI because it ‘slows innovation?’ Inventing novel ways to recklessly destroy lives is not genius; any rich fool can do it.
OceanGate CEO Stockton Rush, who died on the sub, ignored safety warnings from deep sea exploration specialist.
In email obtained by BBC, he replied: “We have heard the baseless cries of 'you are going to kill someone' way too often. I take this as a serious personal insult.”
Politicians claiming for decades, without a shred of evidence, that social safety nets suppress self-reliance and perpetuate a cycle of learned helplessness: *crickets*
UBI experiment in Stockton, California: After getting $500 per month for two years without rules on how to spend it, the 125 people in the study paid off debt, got full-time jobs and reported lower rates of anxiety and depression
This happened to me during a live intro for my festival keynote last week when the host pulled up my bio from ChatGPT on the big screen. PhD from UC Berkeley, which I’ve never attended. Was this error random? No, it’s worse: 1/n
I noticed this when I asked it to produce a biography of myself. The details (went to Pomona College, PhD in English, born in Boston) were *so* plausible; just wrong in every respect.
@Moore_DavidL
@DeanObeidallah
First, the speaker did not liken being trans to being of African descent; the comparison was to the historical abuse of God’s name to justify exclusion and injustice. Second, if you had compassion and love for transgender family & friends, you would know better what transition is.
The firehose of enthusiasm for not having to form your own thoughts anymore is chilling me to the bone.
Not one comment considers a reason why you might want to write
The problem with ChatGPT is that it writes like a robot.
But you can train it to write exactly like you.
Here's how you can easily train ChatGPT with only one prompt:
I knew this would go very badly, but I did not think that anyone could destroy anything this fast, this spectacularly. It would be a sublime spectacle if it wasn’t so sad and unnecessary
According to messages shared in Twitter Slack, Twitter’s CISO, chief privacy officer, and chief compliance officer all resigned last night.
An employee says it will be up to engineers to “self-certify compliance with FTC requirements and other laws.”
My final word to longtermists:
I care about existential risk too. There’s one (actually two or three) here at the door RIGHT NOW. You don’t get to step over those and throw all the resources at a purely speculative risk that you find more fun to think about.
According to a study conducted by a team of computer scientists at the National Institute of Standards and Technology, many facial recognition algorithms they tested had a harder time making out differences in Black, Asian and female faces.
stop it.
Not only is this profoundly unethical, it won’t lead to better work or more profits either; it’s a straight road to more employee churn, more sick days, more stress leaves, employee sabotage, more burnout, more workplace accidents, and more mistakes.
Just don’t.
That journey doesn’t track, statistically, with where I ended up. It bothers me that this is most likely why ChatGPT rewrites my history, erases my hardest working period and places me where it thinks I ‘belong.’ And if it does that to my story, it’s doing it to everyone’s. 6/6
I am absolutely thrilled to be able to share this news! In Feb I’m heading to
@EdinburghUni
and
@UoE_EFI
as the
@BaillieGifford
Chair in the Ethics of Data and AI, to start a new interdisciplinary Ph.D. program to help build more expertise in this critically important field. 1/2
Professor Shannon Vallor has been appointed as the first
@BaillieGifford
Chair in the Ethics of Data and Artificial Intelligence at the
@UoE_EFI
. She joins the University in February 2020 from Santa Clara University in California's Silicon Valley.
Oh my God. I feel a great disturbance in the Force, as if millions of voices suddenly cried out “IT’S CALLED STS IT’S EXISTED FOR 50 YEARS CAN YOU NOT GOOGLE THINGS???”
@neiltyson
Hey Neil there's a whole discipline about that, w/books, classes & everything. It's called philosophy. You said it's 'useless' if I remember
I was told as a child that I should always defer to adults because they have learned lessons from experience. I am fifty years old and it just occurred to me I have yet to see any evidence of this.
Had a class last week where I tried to talk about the need for AI ethics work within industry as well as from without, so that internal researchers can flag design flaws *before* they do damage in the world. Several students laughed — ‘oh yeah, like at Google?’ Thanks Google.
I am charmed by Pallas cats for many reasons, but mostly because I know of no other mammal where even the babies come out suspicious and disappointed in you
Novosibirsk Zoo welcomes 16 cobalt-eyed kittens of Pallas’s cat, known to be one of the world's fluffiest. The new litter - in fact, three litters - are doing fine. Picture credit Novosibirsk Zoo
And yet people my age keep telling me that young people no longer value or have a need for privacy. Privacy isn’t a luxury if you also want to be free. Or stay free.
Just got emailed to support
#BLM
by backing a hackathon to build the "next generation of neutral algorithms," with data/models that are "perfect" for policing, contact tracing etc, because AI is "not racist" and only "representation in data is the issue." OK who is pranking me?
Hats off to
@Abebab
and
@vinayprabhu
for the important work to expose this bias. But MIT should have *expected* it to be there, and looked for it like these researchers did. For any institution that does machine learning today, ‘we didn’t know’ isn’t an excuse, it’s a confession
MIT says it didn’t know the dataset it used to train
#AI
models used racist and misogynistic labels 🤦♀️ This is why
#AIEthics
is so important - ‘we didn’t know’ is not good enough. via
@theregister
Ethics in tech is NOT an alternative to law or policy. Ethics allows you to draw distinctions between good/just laws and policies and harmful/ unjust laws and policies. It’s also how you improve many everyday design decisions, habits and norms that are too subtle to regulate.
Just got news of my nomination for the Outstanding Course award for my Ethics of AI course and I am a mess after reading the student’s words. Having never taught online, I’ve been terrified of failing my students in a year when so much has been taken from them.
I’ve already announced I’m leaving this platform, but I’m making this post because The AI Mirror has been the hardest and most important writing I’ve ever done, and after five years of giving every part of myself to this book, it’s finally real.
Please share this good news - three fully funded 4-yr interdisciplinary PhD studentships in AI/Data Ethics! Read more about joining us
@CentreTMFutures
at the Edinburgh Futures Institute. Applications close 23 Nov
@UoE_EFI
@UoE_Philosophy
@BaillieGifford
Sad to see such confused thinking reach so many. Science is a social practice of describing our reality. This reality predates science. There was no science before there were scientists and scientific activities. Rip on philosophers but at least we can keep these things straight.
Science is not a social construct. Science’s truths were true before there were societies; will still be true after all philosophers are dead; were true before any philosophers were born; were true before there were any minds, even trilobite or dinosaur minds, to notice them.
I’m still just struggling to figure out the mindset from which letting go of
@TimnitGebru
seemed like a good idea even in the abstract. You’d have to be utterly incapable of grasping the moral weight of the respect people like Timnit & her team have earned, and that in itself is so telling.
Research friends: what are your favorite AI Ethics articles/chapters written since 2020? Trying to see which gems or underappreciated work I might have missed because of being overwhelmed by *gestures at everything*
OK this is very cool stuff. But: is this where tech’s ingenuity and energy will be funneled for the next decade? Building countermeasures to protect us from things other tech shouldn’t be doing to us in the first place? Where could all that energy and skill be going instead?
It's a big day.
Glaze, our tool for protecting artists against AI art mimicry, is now available for download/use at
Glaze analyzes your art, and generates a modified version (with barely visible changes). This "cloaked" image disrupts the AI mimicry process.
So we rescued a cat. Turns out the only cat near Edinburgh who needed a home and could be safely transferred to us was a sweet, beautiful purebred Somali. We’ve *never* had a pedigreed pet. I just looked at his certificate and am dying of lol at his princely lineage: 1/n
Thanks to
@ruchowdh
and the META team at Twitter who have given so much of themselves to make this place better for the rest of us. I am sad and angry for you, and even more so for the people on this site who your work kept safe. I’m also inspired, and grateful.
Apologies are not only meaningless here but an active insult. No FB, this is not “imperfect” tech. It’s defective, actively harmful tech. Platforms have had years to address these failings. Would we let PepsiCo sell us randomly poisoned cans for years?
Deeply honoured to have received the 2022 Covey Award from the International Association of Computing and Philosophy. A remarkable feeling to be on the same list as some of my most cherished mentors and inspirations (looking at you Deborah G Johnson)
@CeeDeeWai
They didn’t. It’s in the training data, not the original algorithm. The algorithm learns to penalize women’s applications because that makes its new results look most similar to Amazon’s past hiring data (in which women were rarely hired). It’s trying to match the pattern.
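That pattern-matching dynamic can be sketched in a few lines. This is a toy model with entirely hypothetical data, not Amazon's actual system: a "hiring model" that does nothing but learn label frequencies from past decisions, and therefore reproduces whatever skew those decisions contained.

```python
# Toy "hiring model": learns only the historical hire rate per group,
# then recommends candidates whose group cleared that bar in the past.
# The bias comes entirely from the (hypothetical) training data.

from collections import defaultdict

def train(history):
    """Learn P(hired | group) from past hiring decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Recommend a candidate if the historical hire rate clears the bar."""
    return model.get(group, 0.0) >= threshold

# Hypothetical past data: 9/10 male applicants hired, 1/10 female.
history = [("M", True)] * 9 + [("M", False)] + \
          [("F", True)] + [("F", False)] * 9

model = train(history)
print(predict(model, "M"))  # True  — matches the historical pattern
print(predict(model, "F"))  # False — the bias is reproduced, not invented
```

The model never "decides" women are worse candidates; it just maximizes agreement with past decisions, which is exactly the failure mode described above.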
I grew up in the Bay Area, and I’m a Professor holding a chair at a large, highly regarded research university. But my road there doesn’t ‘fit’ the mold, so ChatGPT changed my story to make it fit; to erase my difference. 2/n
So very tired of being seen as aggressive if I don't first apologize and deflect profusely before asking a hard, direct question of a male (tenured) colleague. Not an impolite or personal question. Just a philosopher's question. For the sake of women everywhere, stop it.
Iranians are risking everything for freedom from tyranny, and Cuba just made a historic vote to protect all loving families from persecution, meanwhile I watch Europeans and Americans set fire to their own freedoms as long as the right promises to come for the minorities first.
I’ve not waded into the EA/longtermism mess because I don’t want to draw more attention to it, but if your moral ‘theory’ tells you that stopping fantastical future AGI is a higher moral obligation than addressing the climate crisis today, you should immediately bin that theory.
Professor Shannon Vallor to be principal investigator on a UKRI Trustworthy Autonomous Systems Programme: Responsibility Grant, funded by EPSRC
@ShannonVallor
@UoE_EFI
@CentreTMFutures
Facebook secretly weighted reaction emojis, including "angry," as 5x the value of "likes"--over the integrity team's warnings.
We wrote about the obscure, often arbitrary, human decisions that shape Facebook's algorithm and how we all interact online:
These changes— like tying exec pay to meeting DI objectives, or staffing for retention — would never have occurred without someone like Timnit pushing as hard as she pushed. But who gets credit for these improvements? Not her, she gets fired. For the pushing.
I expected nothing more obviously.
I write an email asking for things, I get fired, and then after a 3 month investigation, they say they should probably do some of the things I presumably got fired for asking for, without holding anyone accountable for their actions. 1/
This is the most wrenching thing I’ve had to read in a long while—even though everything here is what I study, teach, and write about for a living. But the lived pain of it—it’s literally monstrous. And we feed the monster fresh bodies every day.
This is just embarrassing. If you read it and thought ‘yes, how could anyone disagree?,’ please look again: “The way to defeat bad ideas is by exposure...” We KNOW this is false. Vaccine/COVID conspiracy theories are KILLING people because it is false. 1/n
To be clear: ‘imperfect’ tech is a toaster that doesn’t brown evenly, or a laptop with a too-small trackpad. An AI recommender that spews racism and life-destroying conspiracies is another category of thing.
This deeply distressing shift damages Google far more than they realize. Unless Google scientists can be trusted to honestly confront harms, failures & dangers (as they had largely been able to do until now), it’s just PR with footnotes. And will be rightly dismissed as such.
.
@Google
this year tightened control over its scientists’ papers by launching a “sensitive topics” review, and in 3 cases told authors to refrain from casting its tech in a negative light, docs & interviews show. Story w/
@peard33
OK folks so this is the news I am so excited to share-our new Centre at
@EdinburghUni
's Futures Institute devoted to work that integrates the technical and moral expertise needed to build resilient and equitable futures. Follow us and read more about us at
We are thrilled to announce the new Centre for Technomoral Futures at
@EdinburghUni
's Edinburgh Futures Institute! Follow us and visit our website to learn more about our mission, our work and our students
@UoE_EFI
@ShannonVallor
@BaillieGifford
This is what AI today should be about: not a hammer seeking to turn every kind of human activity into a nail, but targeted applications in vital areas of human knowledge where it's the best tool for the job.
Well-earned congrats to the
@DeepMind
team
The most dangerous lie about AI that is easy to debunk, but it keeps popping up because it’s a convenient lie. To pretend that AI systems invent or discover racist or sexist ideas is easier than taking responsibility for the fact that they ingest them from our words & actions
A mass shooter murders six people. I go on MSNBC and refer to God with the "Her" pronoun while talking about the religious context of this horrific incident.
Guess which one this editor at Fox News wants to discuss?
Unsurprisingly, he did not have a good response to my answer.
@CeeDeeWai
‘Do as we say, not as we do’ doesn’t really work in ML without some fancy tricks, it’s more like ‘OK do as we do, but oh crap no not like that...’ 2/2
My young self thought the ethics of AI was a quiet, undervalued research niche that would be urgently needed in the future but that the world would simply ignore during my lifetime, leaving me and my nerd friends time to get it right and enjoy our lives in peace
Just finished a new talk, the basis of the last chapter of my new book on AI. The talk's on AI, violence & truthtelling and it's the best thing I've ever done imho. It's so important to me I've been afraid for months to try to write it. Thanks to that AI thread today for the fire
@JoeBiden
It quite clearly is not, and you should know this already. America is what you see. Your job is to help America reflect on that, and do the work we have to do to be better. I hope you’re up to it.
If I could ban one word from being used in tech, it would be ‘solve’ as applied to any social problem, i.e. anything other than a mathematical equation.
Just had a conversation with someone 10 minutes ago about why counting on law rather than ethical AI design standards to deliver algorithmic fairness is so risky. And here in my feed is why. Laws giveth fairness, but they can just as easily take it away.
An excellent read. When companies say ‘you will be more productive with AI’ they do not mean ‘you will have more time.’ They mean, very literally, ‘you will produce more (for us) with AI’
One day we will stop letting humans be squeezed dry like limes at a margarita bar
Big tech is touting the "future of work" & "reinventing productivity" with
#AI
.
But history shows that efficiency gains as a result of new tech rarely liberate those already overburdened in society.
My latest on the
#AI
productivity lie for
@CIGIonline
.
Here's a pro tip, which applies to tweets, blog posts, books and essays. I teach it to students and use it myself. If you find yourself starting a sentence with 'Obviously' or 'clearly,' stop right there and check your facts, because you are probably making shit up right now.
@NicolaSturgeon
Having moved to Scotland only 4 weeks ago, I’m already so deeply grateful for the presence of moral leadership in a time of crisis. I just wish my friends and family back in the US could have the comfort of wise counsel and leaders governing in the spirit of solidarity.
I'm sorry I had to say no to the thing. I want to do the thing. I even feel an obligation to do the thing. But it is physically impossible, because of all the other things.
[every email I wrote this week]
#ChatGPT
is already falsifying my research record. I DID NOT WRITE THIS! This article does not exist! I have never worked with or met Michael Skerker! Aaaaaaaaaaaa
#ChatGPT
article suggestion on Ethics of Military AI. Supplemented with link for credibility, the link takes us to a Springer Article titled: Logic of Subsumption, Logic of Invention, and Workplace Democracy: Marx, Marcuse, and Simondon.
#bs
#donotuseforresearch
@ShannonVallor
My kitten (Carol Flerken Vallor) just walks around with her mouth hanging open like this, for minutes at a time, gawking in wonder at things. Any thing she hasn’t seen before: tape, pizza, blueberry, new bird, new bug, a backscratcher. Existence is all just amazing to her.
It's a year overdue, but still proud to have gotten this done - the 2021 update to my original 2012 Stanford Encyclopedia of Philosophy entry on ethics and social networking:
I am really, really over the moon about this: The AI Mirror, my forthcoming trade book from
@OUPPhilosophy
, is scheduled for release in Spring 2024. It’s been 5 years in the making and I’m so excited to be working with
@Lucy_Randall_
again! Stay tuned…
The algorithm, known as Pattern, overpredicted the risk that many Black, Hispanic and Asian people would commit new crimes or violate rules after leaving prison.
The cognitive dissonance of having to just sit and draft slides or record lectures like it’s a normal damn day while someone you love is in the hospital with severe COVID is so morally disorienting. I know it’s a feeling many have had; I just don’t know what one does with it.
This week's lessons from Zoom conferences: 1. senior academic dudes you REALLY need to pull down those young naked lady photos and Japanese erotica from the office wall behind you, because this is now a workplace and not the ceiling of your high school bedroom in 1973
‘“Should Google fire Mitchell, it will mean the company has effectively chosen to behead its own AI ethics team in under two months.” I don’t know wtf is going on over there but I know
@mmitchell_ai
has given so much to this field and to Google itself that this is madness
I was a first generation college student. I was accepted to UC, but went to community college at night instead because I had to work full-time to pay for my own food and rent. I couldn’t afford UC fees or room and board, and my family knew squat about loans and scholarships. 3/n
3. If you have a pet and it wanders into the room and you immediately send it away without petting it or pulling it onto your lap or introducing it to the group, you become the villain of the entire group sorry I don't make the rules
I say this with the coolness and placidity of a still summer lake: God help the next soul on this Earth who speaks to me of women's emotional fitness for leadership.
From the start of Trump’s presidency, the media has not only tried to divine Trump’s mood constantly but made that topic central to stories where his emotional/psychological state is totally irrelevant, turning, in this case, Pence’s constitutional obligation into a melodrama.
I may not use this site for much longer (follow me on Mastodon) but I wanted to share some thoughts about my deep gratitude to Scotland, the govt of which voted yesterday to make the lives of trans people a little easier, and why that matters to me as a cis woman living here. 1/n