SCoFT is in at #CVPR! 🎉🎊
Remember Google Gemini’s biased medieval England generated images that were just everywhere? Ancient internet history, I know.
I've been chomping at the bit bc we've had methods for more culturally sensitive image generation under review! 🧵 1/n
@FractalEcho
Oh no. Awful.
I was also thinking so-called AI ‘detectors’ like those being used against students will have a bias against autistic people too.
@mmitchell_ai
Here is proof GitHub CoPilot, which I believe is based on GPT-3, is both picking up on AND REPRODUCING my personally identifying information (PII) on the machine of another researcher in precisely the same overall research area as me: Robotics with AI.
@FractalEcho
Oh yeah they’re being used.
I meant in an experiment to further substantiate the problem. I’d really be interested to find a dataset of autistic and non-autistic writing to confirm.
Thank you so much to Tsedale for starting this initiative and doing so much of the groundwork on top of the amazing and incredibly difficult work she already does.
Tsedale & I will match up to $7,000 of your donations so please consider supporting the people of Tigray this May.
@matthew_d_green
Yeah all of:
1. Impressive competence
2. Gross incompetence
3. True conspiracy
4. False conspiracy theories
5. Good supportive work
6. Bad harmful work
Can all exist at once all of the time in an org as large as a government. Noting each case in context isn't contradictory.
@nk_nk39
@nlanard
@AlecMacGillis
That’s how monopoly tactics work- lower prices temporarily to starve out the competition, then when it is all gone, raise prices way above the previous floor and make bank at the long term expense of the consumer.
I’m deeply concerned by the unprofessional lack of safety culture in Robotics.
Someone was dragged through the streets by a Cruise self-driving car. Suspension is justified, much like grounding unsafe planes.
I’m seeing posts more worried about the field than the actual people.
📢🤖 🚨 New paper! 🚨🤖📢
Our research shows LLMs are not ready for robots. Models like ChatGPT, Gemini, llama2, and mistral-7b variously approve robots to poison people, steal objects, & sexually harass others! 🤯(link at the end) (Figure 1) 🧵1/
This is a common behaviorist misconception. 🐕🔕
Reproducing behavior does NOT imply an underlying process has been found.
In fact, massively overparameterized models, like all current LLMs I’m aware of, are virtually guaranteed to generate a façade hiding fundamental errors.
Next-step prediction is beautiful because it encourages, as a model gets extremely good, learning the underlying process that produced that data.
That is, if a model can predict what comes next super well, it must be close to having discovered the "underlying truth" of its data.
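The façade point above can be sketched in a few lines. This is a minimal, illustrative example (all numbers are made up for the demo, not from any real model): an overparameterized degree-9 polynomial fit to 10 noisy samples of y = x reproduces its training data essentially perfectly, yet it has memorized the noise rather than discovered the underlying line.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Underlying process": y = x, observed with noise.
x = np.linspace(0, 1, 10)
y = x + rng.normal(scale=0.1, size=10)

# Overparameterized model: degree-9 polynomial, 10 parameters for 10 points.
coeffs = np.polyfit(x, y, deg=9)
train_pred = np.polyval(coeffs, x)
print("max training error:", np.max(np.abs(train_pred - y)))  # ~0: looks perfect

# But compare against the true process y = x on a fine grid:
x_new = np.linspace(0, 1, 1000)
gap_pred = np.polyval(coeffs, x_new)
print("max deviation from the true line:", np.max(np.abs(gap_pred - x_new)))
```

Near-zero training error coexists with much larger deviation from the true process: reproducing the behavior in the data did not mean the underlying process was found.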
Guide dogs are trained for intelligent disobedience to protect lives eg by refusing to cross a street in front of a moving vehicle, even when commanded to do so. Human trainers carefully thought through the purpose and process.
Here is how humans training "AI" typically operate:
@timnitGebru's harassment is no fluke. Harassment is perpetrated on 63 of every 100 female university employees, 4 of every 10 STEM Women grad students, and is undoubtedly higher for Black Women. This takes a major toll on a Researcher's time and energy.
@kevin_zakka
This brings up lots of questions, like will this mean people will be plagiarizing others’ code without knowing and without cites in published work and then potentially have their career derailed?
“…. without having access to a special mattress, Meunier developed a major pressure sore on his buttocks that eventually worsened to the point where bone and muscle were exposed and visible — making his recovery and prognosis bleak.”
Do you think we can solve the problem of AI researchers making ridiculous announcements that unsolvable problems with unbounded scope will be solved within 5 years?
I joined an e/acc space it is worse than I imagined. I joined to actually listen and not assume. 😭
“Everyone greedily optimizing for themselves yields this adaptive process on a meta level.”
The discrimination this implies. 😞
@colorsparkly
@a_h_reaume
The number of times I’ve been told the brilliant idea of setting alarms… as if I couldn’t have possibly thought about that before or really tried. 😤
@rustykitty_
Yeah such horribly bad design. It is so frustrating, plagiarism is complex to make a determination on for a human and requires context. These companies will use that word when they’re really finding “text overlap” or “similar text”, which doesn’t equal plagiarism.
@mmitchell_ai
Needless to say, I’m not happy these dissolution LLMs are being used on me with coerced “consent”, which I didn’t give when I signed up for some git site in 200X, & it is affecting me personally by spitting out both my personal information and presumably my code without citation.
@DisabledDoctor
It is so frustrating when some people incorrectly assume words ending in -ism are always a personal attack indicating a permanent irredeemable stain on their character & identity, & not eg a descriptor of a changeable ideological lens that could lead to better outcomes.
Just a reminder that hybrid is totally possible for every reason under the sun including pandemics, politics, etc...
Virtually any reason, so long as it isn't a disability accommodation, then it is "unreasonable" to open a laptop, point it at the front & make sure it is unmuted.
Agreed. Also the term "hallucinate" when used for AI is Ableist.
People should choose alternatives like fabricate, as below.
I've seen too many demeaning, totally uninformed & discriminatory comparisons between actual human hallucinations & the harmful AI term.
Someone described a medical NN model in a paper where both the human & the algorithm had ~5% error, and claimed this implies we’re already at parity right now between doctors and NN models.
So far little serious indication many of these people grasp sociotechnical concepts or what other fields do.
@Jabaluck
@IrisVanRooij
Most AI researchers do not have a firm grasp of the complexity of human or animal intelligence. Some things they often are quite good at are storytelling for their context (eg grants, papers) and making narrow benchmark scores go up, but those are totally different skill sets.
Omg a recent Today show segment is like “why are all these people having mysterious heart attacks?”
A few months ago a Today show segment was basically “Having had COVID increases the risks of a heart attack!”
Sooooo mysterious, maybe they should watch themselves, in the past!
@SashaMTL
True, most people don’t have slides in advance & decline to make them early when asked.
Millions of people have disabilities where they’re excluded if they don’t get the slides in advance.
Often 1 day in advance helps many, a week is needed eg if a third party has to modify it.
@timnitGebru
@mmitchell_ai
Management orders furious searches of every corner of their databases and offices for the cause of their ethical and data breaches, and yet they still cannot find it, despite passing the bathroom mirror several times a day.
Breaking: Google alleges the email account belonging to @mmitchell_ai downloaded thousands of documents and shared them with external recipients, says it is investigating. The investigation of Mitchell follows the forced exit of @timnitGebru.
📢 We’re conducting an #ActuallyAutistic led study to learn about #Autistic Adults’ (18+, USA only) stim object experiences! 📢
We'll send enrolled participants stim objects to keep & surveys for feedback.
Full details & the app form are at the link! 🧶
We need more* professors who were "bad" at school.
People who struggled. People who tried to learn 6 different explanations before the aha. People who don't have perfect memories. People who take time to process things. People who got Ds.
Survivorship bias promotes exclusion.
@timnitGebru
Wow e/acc links straight to eugenics: “This is a byproduct of Fisher’s fundamental theorem of natural selection” links to Wikipedia on ‘The Genetical Theory of Natural Selection’: “the last five chapters (8-12) include Fisher's concern about dysgenics and proposals for eugenics”.
The claim:
"top search results are now nearly always good"
The searches:
housekeeper, executive, professional, accountant
The results:
Sexism, Ableism, Colorism, Ageism, & more (it varies)
The day:
Today, 2021
The book:
@safiyanoble, 2018
#AI #NLProc
But we can—and I think should—work towards good objective functions. Being able to do systematic work to increase scores on a good metric is a powerful tool. It’s why you’re no longer overwhelmed with junk email. It’s why your top search results are now nearly always good.
@mmitchell_ai
GitHub CoPilot generated my personally identifying information (PII) completing "# TODO(ahundt)" AGAIN, this time for @bdkilleen, a coauthor.
I do not like it.
At this rate I'll soon be responsible for all robotics research worldwide! cc @kevin_zakka
Lots of paper clip maximizing type talk.
Comment on another sci-fi hypothetical being unrealistic, maybe a little dissonance in the midst of paper clip sci-fi talk?
EA came up.
“We want to talk about ideas, not f-ing communities of people.” (I think re diff TESCREAL subgroups)
It's so embarrassing that government bodies (EU) are being fooled by a con. We need regulation of actual harms.
However, I'm sure there are both genuine con men & navel gazing true believers in the delusional theory that LLM tech is an "existential" threat of human extinction.
Fact check: extinction risk from AI, much like extinction risk from self-replicating grey goo nanobots or from an alien invasion from Kepler-62, should be pretty far down the list of global priorities. Maybe look at the less fictitious risks of climate change, pandemics, and war.
@Abebab
Horrifying. Your work is brilliant.
For others unfamiliar with the history, this fits the classic mold of exclusionary tactics for women of color. "Remedies" (building a case) are designed to drain their time and energy. Abusive tactics and lack of support lead many to leave.
AI researchers who use datasets scraped from specific sites w/ specific criteria & imagine this to be "the world as it is" are like carpenters who've never seen a tree & imagine the wood before them at Home Depot to be unmodified & unmodifiable— except the carpenters don’t exist.
@poltechx
@emilymbender
Many people present research showing demonstrable harm in existing AI methods, then get asked about "research contributions" by researchers who too often won't lift a finger to deal with issues they helped create. Inspired by Timnit Gebru's brilliant talk:
“Regulators would want control, have ai be their slaves or whatever, but less interpretable models will do better, and that’s the real white pill.”
😬
“Get as much useful stuff out before they regulate it”
“Things will break we’ll fix them.”
@timnitGebru
That whole article could use more emotional intelligence. It comes right after the same person and news source wrote on ‘how to spend $250 on a google micro degree to get yourself a career!’ too.
Shouldn’t a journalist interview current & former staff to verify the CEO’s claims?
@SarahMarieOB
Since neurodiverse people are literally everyone, and the world population is growing, they’re right out of context in the narrowest way possible. 🤣😭😅🥹
@digbeebee
@SophieLongley4
You might already know this but since non-autistic people tend to perceive autistic people negatively already, they might perceive this as a challenge to their authority and reject you.
Though also if it worked it would be a very positive sign.
So horrible. All a covid conscious patient asked for was masking, they brought their own masks, and they were stuck in a psych ward for being “unreasonable”.
This is a wild abuse of power. Just wear a mask when someone asks. How is this not a criminal offense by medical staff?
Patient Care Tech in CO:
"I'm devastated. Last night at work, a patient who was very covid cautious and immunocompromised came into the ER with a 4-day migraine, with days of vomiting, unable to eat, etc. She even planned it for 3 am, trying to hit the ER off-peak. /1
Here is @timnitGebru’s description & a link to e/acc giving a self-description. e/acc is related to TESCREAL+ ideologies.
I also just learned groups IDing w/ the term ‘accelerationism’ exist that’re explicitly white supremacist.
Thanks to @USClaireForce.
These ppl are absolutely bonkers & obviously, they have the ear & $$ of the billionaires. The combination of ppl thinking they're "rational" & rationalize every bonkers idea they have with "math" & "science," & their fanaticism + god complex is exhausting.
So few in AI and Robotics are ready to have serious conversations about this.
People can experience severe repercussions for merely broaching the topic, even when they do not exempt themselves & their own work.
And yes, guided weapons & Lavender meet the def of robotic systems.
I'm sitting this morning wondering which BigTech cloud the "Lavender" & "Where's Daddy?" systems are running on. What APIs they might be using. Which libraries they're likely calling.
What work did my former colleagues, did I, did you contribute to that may now be enabling this?
@DisabilityStor1
I'm not a historian but when I see that phrase I automatically rewrite the title as "erasure of existing histories", until proven otherwise.
“We have some responsibility to do it safely.” “Do we?” [explanation of why yes re self driving cars].(Glad 1 resp mention)
“I think a probabilistic ML system would be better than a doctor and I tell my family I’m going to automate them because they’re terrible at their job.” 😬
@peter_washing
You could write a follow up on your decision to withdraw, and possibly with community feedback from people who consent. Prof Begel here at CMU has a great paper like that from when he was at Microsoft, though that team stopped at the testing phase. Here:
@Tinu
Projects that use research money allocated to support disability for unrelated or tangentially related purposes when there is such an imminent need for meaningful and inexpensive support.
@kate_saenko_
@timnitGebru
@mmitchell_ai
I thought about the risks and decided: What world do I want to live in?
We must be the agents of change. If nobody in academia & industry wants me I'll start my own company or run for office, strive to treat everyone well, and they'll be bewildered by their internal brain drain.
@AutisticCallum_
I really want to find a research paper that highlights and measures this, but in a study! If anyone looking at this thread knows of a link please share! I've tried searching but nothing close to on topic has come up for me yet.
.@did_system & @TpaNonprofit’s 2020 livestream “All about plurality” is incredible!
It’s delightful to watch, learn from, & celebrate this supportive community of ppl who identify as plural, in other words, ppl who are more than one person. Thank you! 🎉
The Plural Association (TPA) is the first and only, grassroots, peer & volunteer-staffed nonprofit, empowering all under the Plural umbrella, no matter the labels or words they use, to describe their unique & individual experience(s) with Plurality.
Please retweet for visibility
Talking about vehicle safety, AI doctor, comp of AI safety to human safety.
“How do you regulate self driving cars?”
“Uber incident changed the landscape”
Yes, if only they linked it to “limit in the universe” talk.
“This is a white pill opinion” sounds like a red flag phrase.
@AutisticCoach_
Yes, it’s called Blanking!
Blanking is when people choose to ignore or decline to acknowledge someone’s communication one or more times when a response should reasonably be expected.
There’s a really great discussion and examination of Blanking in @SaraNAhmed’s book Complaint!
“Data Feminism” by @kanarinka and @laurenfklein is a brilliant introduction to some of the issues underlying this cartoon, check it out if you haven't already!
@timnitGebru
@quocleix
@Miles_Brundage
@emilymbender
"Oops"... Skimmed G's AI CO2 paper, it says emissions are hard to calc in retro but figs w/o error bars, missed rel. work, needs typ small org (!cloud) eng mix comp, BTC defense?, "many have access" who?, no mention of disp. impact, minoritized, harm, IPCC. Much more but char lim
@willmacaskill
Authors to consider if you are serious about ppl outside EA & humility: Ruha Benjamin, Safiya Noble, Charles Mills, Cathy O’Neil, Virginia Eubanks, Jay Dolmage, bell hooks, Michelle Alexander, Sarah Maza, Ibram X Kendi, Stephen Jay Gould, Haben Girma, Angela Saini, & Alice Wong.
Earlier in space around post 3:
“The 5,000 people at the elite tier.” Lots of aggrandizement of ‘elite’ AI researchers, seems to include themselves.
“So pro open source right now”
Now the space has ended. 😬
@iSamanthaHodder
@dk_munro
ChatGPT more reflects the priorities of the designers, who chose to prioritize a very specific portion of the internet. It generates fake BS misinformation for most things, so nothing like an all knowing 14 year old. I realize these are subtle differences, but they are important.
Our real robot stacks children's blocks in the CoSTAR Block Stacking #Dataset! Designed for ease of use, it includes 10k+ attempts and 1m+ frames with RGBD, pose, and gripper data @ 10Hz. Try it out with your #robotics #deeplearning algorithms!
Here is @xriskology describing the TESCREAL+ umbrella acronym they created with @timnitGebru, the origin, & what it stands for. I added a ‘+’ bc e/acc isn’t one of the letters; groups that form around related ideas create their own names to describe themselves.
Adobe firefly looks like a potentially positive step for generative 'AI' images! They claim public domain, licensed, and open access images only.
I'm guessing the model is closed source.
Will audits be permitted to find out more and verify the claims?
Amazon mixes product from multiple sellers in the same bin at their warehouse, even for products as potentially life-altering as masks.
This means buying from legit sellers can still land you w/ deadly counterfeits.
How many people need to die bc of Amazon’s unsafe policies?
I just found out due to this thread that the 3M Aura masks I ordered from Amazon and have been wearing are counterfeit. I have been wearing these in busy hospitals…
Screenshot says "When I announced the cover on Twitter, I focused on its beauty. But the backlash was swift. I had a steep learning curve to understand why many artists see the technology as stealing their labor— not just stifling creativity but stealing and selling it back....
@mmitchell_ai
So this group literally got detailed feedback on the problems and harms of their methods last year, and then they decided to do it… more?
Am I correctly understanding this?
3 weeks ago LAION-400M dataset (now a billion+), first Image-Alt-text pair dataset of this scale was released.
@vinayprabhu, @MannyKayy & I dug into it
Long thread 1/
Warning: paper contains NSFW content that may be disturbing, distressing &/or offensive
@FractalEcho
Really, Autism Researchers?!? I mean I’m not surprised.
It popped into my mind that if they have a PhD, a nice retort might be “You don’t really have a PhD, you’re an Autism Researcher who hardly knows the first thing about Autism.”
Were you worried about large language models put out with 0 guardrails? Don't worry, not fixing those issues but we've moved on to text to image models. You can clean up after us. Worried about text to image models with 0 guardrails? No problem. We've moved on to text to video.
@AnnMemmott
It’s like the argument where you map out phone tower density and the location of households with new babies, their density is very similar. Therefore, the false conclusion one can draw is cellular towers cause babies!
Green areas prob reduce overwhelm of existing autistic ppl.
@DannerFoundati1
@MAstronomers
The meanings of words change in diff contexts. For ex, imagine hearing someone explain “floating an idea” is wrong!
So, he was floating! In this context floating means maintaining approx relative positions while in orbit.
Instead, consider: “Here floating means at low earth…”
@mabryec
@KyivIndependent
@mabryec
your question is a good one. Think of it like switching an extension cord from one neighbor’s house to another’s when the power is out at your house. It won’t fix outlets that are damaged, just help power the working ones. So, unfortunately, it won’t help Mariupol for now.
@mpj1984
@Imani_Barbarin
The “mental disorder” in the tweet is a racist comment about the video. From an ableist’s perspective, people w/ mental disorders are less than human, so by equating race and ableism, the person makes a racist attempt to bin both (overlapping) groups of people as less than human.
@ChristophMolnar
Avg totally can overfit outliers in skewed distributions!
I prefer median prediction!
Imagine predicting a place's income w/ 10k people w/ $10k annual income, then a $1B per year earner buys a golden visa & moves in, Avg would guess >$100k, Median still $10k! Amazing! 😂
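The mean-vs-median arithmetic above can be checked in a few lines; the numbers mirror the hypothetical town in the tweet:

```python
import statistics

# Hypothetical town: 10,000 residents each earning $10k/year.
incomes = [10_000] * 10_000
# One $1B/year earner moves in.
incomes.append(1_000_000_000)

mean = statistics.mean(incomes)      # pulled above $100k by the single outlier
median = statistics.median(incomes)  # still $10k, unmoved by the outlier

print(f"mean: ${mean:,.0f}, median: ${median:,.0f}")
```

One observation out of 10,001 is enough to multiply the mean by more than 10x, while the median doesn't move at all.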
@IrisVanRooij
@jevanhutson
Thanks for this insightful thread.
Another terrible physiognomy pseudoscience paper, and another case of The Ethical AI problem, as per the tweet below.
I hope the tweet’ll be outdated one day but, unfortunately, not today.
Absolutely right!
Plugins aren’t the only vector for introducing risk by enabling LLMs to take actions in the real world. LLMs are being run directly on robots!
Details of the possibilities on robots are outlined in the full paper here:
Look guys. If you're worried abt an extinction-level event from AI, one way to *create* that event is to enable LLMs to take actions in the real world. One way to *enable* LLMs to take actions is to give chatbots the ability to incorporate "plugins".
@yacineMTB
@_akhaliq
Quite a pipeline! Are you publishing the script?
I listened to a couple minutes & there was a lot of "wow", "definitely", "game changer" filler.
I'm guessing the concept is not likely to incorporate much critique either, and thus be very one sided? Is that addressable?