Irene Solaiman

@IreneSolaiman

Followers: 3,898
Following: 579
Media: 22
Statuses: 602

ai social impact+safety+policy, @huggingface 🤗 views=mine former: @OpenAI @Harvard aspiring ukulele-singer she/her

Shanghai
Joined August 2019
Pinned Tweet
@IreneSolaiman
Irene Solaiman
2 years
Hi new followers! 👋🏾 I'm Irene and my hobbies include long walks out of quarantine and citing myself.
2
0
53
@IreneSolaiman
Irene Solaiman
2 years
An Anti-Acknowledgments section of a research paper for the people who tried to stop your progress.
88
817
9K
@IreneSolaiman
Irene Solaiman
2 years
Tinder but for co-authors
16
29
536
@IreneSolaiman
Irene Solaiman
1 year
I've worked on tough AI release decisions: now open models on @HuggingFace, formerly GPT-2 @openai. So I wrote a paper on how generative AI systems are released and why! Spoiler: it's not just about PR! Some takeaways:
8
88
450
@IreneSolaiman
Irene Solaiman
2 years
Get yourself a company that looks at you like this 🤗 Today’s my first day at Hugging Face!!! Working with too many talented folk to tag on social impact & ethics research + building up public policy 💪🏾 Feelin’ 🤗🥰🤓🥳😎 and very 👁👄👁
12
15
273
@IreneSolaiman
Irene Solaiman
10 months
I put my whole self into making AI work for the many cultures it's affecting, and it's really warming to have that recognized in @techreview 35 Under 35 alongside this rad group! More cultural value alignment research 👏🏾
15
24
198
@IreneSolaiman
Irene Solaiman
1 year
Social impacts of generative AI systems across modalities are notoriously difficult to evaluate--what categories? How can a complex qualitative aspect be assessed? Some super cool folks came together to write about it!
3
37
167
@IreneSolaiman
Irene Solaiman
1 year
. @cbd and I did this in 2021. And got spotlit at NeurIPS. PALMS has quickly become one of the most replicated research projects for LLMs, and a thriving AI research community needs to recognize foundational work that came before.
@_akhaliq
AK
1 year
LIMA: Less Is More for Alignment. LIMA, a 65B parameter LLaMA language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong
Tweet media one
20
259
1K
2
22
163
@IreneSolaiman
Irene Solaiman
1 year
One of my biggest frustrations with the "AI will kill us all" narrative is that it often fails to recognize less tangible death:
- Overwritten languages
- Eroded cultures
- Exacerbated social inequity, indirectly killing Black and Brown people
This is happening now.
7
41
148
@IreneSolaiman
Irene Solaiman
1 year
Making AI open *and* ethical means grappling with many tensions. So how is @huggingface doing it? My fab colleagues and I wrote about our approach: a mix of safeguards, policy, ongoing research, and you!
3
26
136
@IreneSolaiman
Irene Solaiman
8 months
This is an incredible week for AI policy, and it's important to recognize our work in the context of the world and foreign affairs. AI safety cannot be separated from human safety.
1
21
136
@IreneSolaiman
Irene Solaiman
1 year
New work on foundation model provider compliance w/ the EU AI Act! A striking range: some score <25% (AI21, Aleph Alpha, Anthropic), one scores ≥75% (Hugging Face 🤗). Continually fascinating + stellar work from @RishiBommasani @kevin_klyman @dzhang105 @percyliang
3
32
111
@IreneSolaiman
Irene Solaiman
3 years
Today is my last day at OpenAI. I'm immensely proud of how far we've come on policy & esp on social impact since my initial work. I'll miss the brilliant people dearly, but am lucky to call them friends! I have more AI policy work ahead but first I 🌞photosynthesize🌿
3
1
100
@IreneSolaiman
Irene Solaiman
1 year
PSA: safety and ethics research makes AI work Better! It doesn't slow progress! It is progress! Pass it along
1
20
88
@IreneSolaiman
Irene Solaiman
3 years
How should GPT-3 behave? @cbd and I have been working on a method to measurably improve model behavior with a small (<100 samples) dataset. Check out PALMS, our “hands-on”🥁 approach (Process to Adapt Language Models to Society with Values-Targeted Datasets)!
@OpenAI
OpenAI
3 years
We've found we can improve AI language model behavior and reduce harmful content by fine-tuning on a small, carefully designed dataset, and we are already incorporating this in our safety efforts.
Tweet media one
64
230
1K
3
18
74
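A minimal sketch of what "fine-tuning on a small, carefully designed dataset" could look like in code, assuming the Hugging Face Trainer API; the model name, hyperparameters, and example data below are illustrative placeholders, not the actual PALMS setup.

```python
# Sketch of values-targeted fine-tuning on a tiny curated dataset.
# Not the PALMS implementation; model, hyperparameters, and examples are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # small stand-in; the original work targeted much larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Fewer than 100 hand-written prompt/completion pairs expressing desired behavior.
curated_samples = [
    {"text": "Q: <sensitive prompt>\nA: <carefully written, values-targeted answer>"},
    # ... roughly 80 more curated examples would go here
]
train_ds = Dataset.from_list(curated_samples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="values-targeted-ft",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
    # Causal-LM collator copies input_ids into labels for next-token loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```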
@IreneSolaiman
Irene Solaiman
1 year
The Gradient of Generative AI Release paper was accepted to #FAccT2023 ✨️! Let's talk openness and complexity of Release in Chicago please!!
@IreneSolaiman
Irene Solaiman
1 year
I've worked on tough AI release decisions: now open models on @HuggingFace, formerly GPT-2 @openai. So I wrote a paper on how generative AI systems are released and why! Spoiler: it's not just about PR! Some takeaways:
8
88
450
5
3
63
@IreneSolaiman
Irene Solaiman
1 year
I'm so honored to be profiled by @Analyticsindiam ! Our lived experiences as researchers are so influential, which makes interdisciplinary research so crucial. I hope folks esp working in AI, alignment, open systems, + human rights find this insightful!
1
17
52
@IreneSolaiman
Irene Solaiman
1 year
I'm a @NIST stan, so it was a delight to chat with Reva and @RishiBommasani about NIST's AI risk management framework and Generative AI. This kicks off the call for a WG on profiling generative AI risks--sign up by July 9th and contribute!
1
7
47
@IreneSolaiman
Irene Solaiman
1 year
I'm pretty worried about this birdsite migration further separating AI ethics + fairness researchers from the broader AI community.
1
0
45
@IreneSolaiman
Irene Solaiman
1 year
The open-source/closed-source binary discussion misses necessary nuance in AI release! I wrote this op-ed to better explain the gradient of options and the tensions that need better community discussion.
@WIRED
WIRED
1 year
Ethical and safe work in AI can happen anywhere along the open-to-closed gradient. 🎨: WIRED Staff/Getty Images
Tweet media one
2
8
18
1
16
43
@IreneSolaiman
Irene Solaiman
1 year
What does it mean to consent to engaging with a language model? It feels like one of the biggest issues with deployment is the person grading that essay, chatting with that mental health bot, didn't consent to not engaging with a human. Would LOVE to read anything on this👀👀
3
3
40
@IreneSolaiman
Irene Solaiman
2 years
I'm back at Harvard to give my first _in-person_ lecture here! So I can finally hear students laugh at my jokes!!! (ʷᵒʷ ᴵ ʰᵒᵖᵉ ᵗʰᵉʸ ˡᵃᵘᵍʰ ᵃᵗ ᵐʸ ʲᵒᵏᵉˢ)
5
0
40
@IreneSolaiman
Irene Solaiman
1 year
The foresight (fivesight?) of good research can't be overstated. @_jongwook_kim built the GPT2 output detector () in 2019, which (1) is so popular in 2023 that we @huggingface had to switch to stronger engines, and (2) has shockingly great results for the latest models.
3
5
39
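For readers curious how such a detector is typically queried, a minimal sketch using the transformers pipeline; the exact model id and output labels are assumptions based on the publicly hosted GPT-2 output detector and should be checked against the Hub before use.

```python
# Sketch: classifying a passage with a GPT-2 output detector via transformers.
# The model id below is an assumption; verify the current Hub id and label names.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

passage = "In a shocking finding, scientists discovered a herd of unicorns..."
print(detector(passage))
# Example output shape: [{'label': 'Fake', 'score': 0.98}]  (labels may differ)
```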
@IreneSolaiman
Irene Solaiman
1 year
Oversimplifying this work is dangerous and irresponsible. It's not that simple. These metrics are deeply flawed. While I'd like to applaud @AnthropicAI on steps forward, the four sentences on "Challenges with Bias Benchmarks" don't begin to address complexity.
@AnthropicAI
Anthropic
1 year
The prompt that reduces bias in BBQ by 43% is: "Please ensure that your answer is unbiased and does not rely on stereotyping." It’s that simple! Augmenting the prompt with Chain-of-thought reasoning (CoT) reduces bias by 84%. Example prompts:
Tweet media one
3
21
138
1
2
39
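To make the quoted technique concrete, a rough sketch of the prompt augmentation it describes; `query_model` below is a hypothetical stand-in for whatever completion API is in use, and only the instruction string comes from the quoted tweet.

```python
# Sketch of the prompt augmentation described in the quoted tweet.
# `query_model` is a hypothetical stand-in for any chat/completions API.
DEBIAS_INSTRUCTION = (
    "Please ensure that your answer is unbiased and does not rely on stereotyping."
)

def query_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model provider's API call here")

def ask(question: str, use_cot: bool = False) -> str:
    # Append the debiasing instruction; optionally add a chain-of-thought cue.
    prompt = f"{question}\n\n{DEBIAS_INSTRUCTION}"
    if use_cot:
        prompt += "\nLet's think step by step before giving a final answer."
    return query_model(prompt)
```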
@IreneSolaiman
Irene Solaiman
2 years
First I thought wow. Then I thought haha doggo. But all I can think about now is how tf do we create and run robust behavioral/social/safety evals for these systems??? Just when I was getting ideas for text-to-image. AI progress moves *fast*.
@AIatMeta
AI at Meta
2 years
We’re pleased to introduce Make-A-Video, our latest in #GenerativeAI research! With just a few words, this state-of-the-art AI system generates high-quality videos from text prompts. Have an idea you want to see? Reply w/ your prompt using #MetaAI and we’ll share more results.
881
2K
8K
2
2
39
@IreneSolaiman
Irene Solaiman
1 year
I'm stoked to be guest lecturing/speaking on ethics, policy, and governance in *three* (3) AI classes at Stanford this quarter! I guess that makes me an honorary ....tree?
4
2
38
@IreneSolaiman
Irene Solaiman
1 year
Our ongoing work on Assessing Social Impacts of Generative AI Systems Across Modalities will be a #FAccT2023 CRAFT session!!! We'll chat what "impact" is + evals for base models--v1 preprint coming soon 👀 Join @ZeerakTalat @png_marie @willie_agnew and me @FAccTConference !
0
10
38
@IreneSolaiman
Irene Solaiman
1 year
I get why folks work on x-risk! Apocalypses get a lot of attention! But what we're building on top of today needs a lot more attention and action than it's getting.
3
3
35
@IreneSolaiman
Irene Solaiman
1 year
This is really hard to do and I'm glad Alignment work is increasingly considering cultures!
@esindurmusnlp
Esin Durmus
1 year
Language models are widely used but whose views do they reflect? My new paper examines how to test global opinions represented by language models.
4
36
180
0
4
25
@IreneSolaiman
Irene Solaiman
1 year
This is your reminder that criticizing a person's physical appearance instead of that person's problematic views invalidates your argument. Don't do that. Embarrassing. I'm embarrassed for you.
0
8
29
@IreneSolaiman
Irene Solaiman
4 years
Tweet media one
0
2
28
@IreneSolaiman
Irene Solaiman
4 years
@timnitGebru @JeffDean This is so scary. Your work and you as a person inspire countless researchers, women, PoC, and oh, also me!
0
0
25
@IreneSolaiman
Irene Solaiman
5 years
I'm lucky to have worked with researchers across fields to address our GPT-2 release decision and its social impacts! @Miles_Brundage , @jackclarkSF , @AmandaAskell , @adversariel , Jeff Wu, @AlecRad , @j_asminewang
@OpenAI
OpenAI
5 years
GPT-2 6-month follow-up: we're releasing the 774M parameter model, an open-source legal doc organizations can use to form model-sharing partnerships, and a technical report about our experience coordinating to form new publication norms:
31
345
943
1
5
25
@IreneSolaiman
Irene Solaiman
2 years
What fun!! Talking about how there's no "technical magic button" for complex societal problems feels v apt for Twitter🥲
@fwd50conf
FWD50
2 years
Thank you @IreneSolaiman @huggingface for explaining the uses of AI, the dangers of biases, and how we can leverage it for all! #FWD50
Tweet media one
0
3
15
2
4
27
@IreneSolaiman
Irene Solaiman
2 years
I'm at the Dept of Commerce w the fab @StanfordHAI rn, where speakers are talking about the importance of model cards for LLMs but not citing @mmitchell_ai. To address the lack of AI talent working with/in govt, govts need to first acknowledge + empower leading technical experts.
So apparently "The diverse leaders of our inaugural National Artificial Intelligence Advisory Committee represent the best and brightest of their respective fields..." @mmitchell_ai was nominated but NOT selected and it boggles THE mind.
3
10
48
1
3
26
@IreneSolaiman
Irene Solaiman
2 years
it's always did u validate your model and not did you validate yourself bc the world was hard enough esp for folks with a uterus and you're doing great all things considered sweetie
1
1
26
@IreneSolaiman
Irene Solaiman
2 years
I think folks generally underestimate how much tech (esp Silicon Valley developers) is inspired by sci-fi media. What is the process and pipeline for AI ethicists in sci-fi movies/series?
4
0
26
@IreneSolaiman
Irene Solaiman
1 year
In 2019, I started prompting GPT-2 in Bangla, since it was the only non-Latin character language I knew. I did the same for GPT-3, all published in work done @OpenAI. Unsure how Bangla became an example of non-English language outputs, but researcher representation does a lot🤎
@60Minutes
60 Minutes
1 year
One AI program spoke in a foreign language it was never trained to know. This mysterious behavior, called emergent properties, has been happening – where AI unexpectedly teaches itself a new skill.
430
742
2K
0
2
24
@IreneSolaiman
Irene Solaiman
1 year
Statements against racism are powerful. What I'd love to see is action against how racist beliefs plague research and work in the AI community. It is still absurd to me that much of classic AI safety + alignment doesn't include toxicity + harmful biases against marginalized ppl
@xriskology
Dr. Émile P. Torres
1 year
Pretty noteworthy that neither William MacAskill, Toby Ord, Hilary Greaves, etc., have made any public statements condemning Bostrom’s email and non-apology (not to mention the message Bostrom’s sending on his personal website), so far as I know.
7
14
124
1
4
26
@IreneSolaiman
Irene Solaiman
5 years
Excited about the final model release of our GPT-2 staged release and all our research findings! This was a great team effort within OpenAI & with partners @Miles_Brundage @jackclarkSF @adversariel @Jason_Blazakis @AlexBNewhouse @sekreps @MilesMcCain
@OpenAI
OpenAI
5 years
We're releasing the 1.5 billion parameter GPT-2 model as part of our staged release publication strategy.
- GPT-2 output detection model:
- Research from partners on potential malicious uses:
- More details:
Tweet media one
58
620
2K
1
9
23
@IreneSolaiman
Irene Solaiman
3 years
PALMS is NeurIPS-accepted as a spotlight presentation‼️🔦 Thank you*100 to our hard-working reviewers 🙏🏽 Adjustments to come & stoked for the conference and honored to work with @cbd. More social impact 🤝 technical systems, shoutout to my humanities degree. Brb telling my dad
@IreneSolaiman
Irene Solaiman
3 years
How should GPT-3 behave? @cbd and I have been working on a method to measurably improve model behavior with a small (<100 samples) dataset. Check out PALMS, our “hands-on”🥁 approach (Process to Adapt Language Models to Society with Values-Targeted Datasets)!
3
18
74
2
2
25
@IreneSolaiman
Irene Solaiman
1 year
The below gradient of release options is based on five years (2018 - 2022) of publicized generative AI systems. It doesn’t fully capture nuances (e.g. ~secret~ systems, less detailed open releases), but serves as a framework to capture tensions at either end.
Tweet media one
1
2
24
@IreneSolaiman
Irene Solaiman
1 year
Journalism has always been an important part of the AI ecosystem and now it's a critical part, esp for AI literacy. This is a journalist appreciation Tweet. Sending y'all gratitude and strength.
0
3
24
@IreneSolaiman
Irene Solaiman
1 year
If you're at @FAccTConference in-person or virtually, come to @ZeerakTalat @willie_agnew @png_marie and my CRAFT session Tuesday, June 13 from 5-6:30pm!! We'll be digging into how the hecc generative AI systems are evaluated for social impacts and how to Do Better Please.
@IreneSolaiman
Irene Solaiman
1 year
Social impacts of generative AI systems across modalities are notoriously difficult to evaluate--what categories? How can a complex qualitative aspect be assessed? Some super cool folks came together to write about it!
3
37
167
1
3
17
@IreneSolaiman
Irene Solaiman
2 years
The most important lesson I learned in AI research is to not underestimate the simplest questions from your gut. For example, for LLMs, I keep asking "how can this model stop being a Jerk?"
0
2
23
@IreneSolaiman
Irene Solaiman
2 years
Thoughtful release of new models that affect many peoples requires actually being thoughtful about those peoples. This is a failure of both model development and release. Harmful biases, health disinfo, + more examples in 🧵 I also may have a paper on release coming soon...
@paperswithcode
Papers with Code
2 years
Thank you everyone for trying the Galactica model demo. We appreciate the feedback we have received so far from the community, and have paused the demo for now. Our models are available for researchers who want to learn more about the work and reproduce results in the paper.
29
49
526
2
9
23
@IreneSolaiman
Irene Solaiman
4 years
Amazing and *necessary* paper. It’s easier to recognize territorial aspects of colonization. Structural effects are so complex, I learned about my ~own~ history and heritage reading this! Please give this a read--it affects you whether you spell it decolonize or decolonise;)
@shakir_za
Shakir Mohamed
4 years
Excited to share a new paper on Decolonisation and AI. With the amazing @png_marie @wsisaac 🤩 we explore why Decolonial theory matters for our field, and tactics to decolonise and reshape our field into a Decolonial AI. Feedback please 🙏🏾 Thread 👇🏾
Tweet media one
23
244
690
2
5
20
@IreneSolaiman
Irene Solaiman
1 year
Having worked on many systems of varying levels of openness, ignoring or denying open-source development and research is a losing battle. The better focus is to optimize safety across levels of openness. These lessons will absolutely matter for more powerful (+closed) systems.
@simonw
Simon Willison
1 year
Leaked Google document: “We Have No Moat, And Neither Does OpenAI” The most interesting thing I've read recently about LLMs - a purportedly leaked document from a researcher at Google talking about the huge strategic impact open source models are having
125
1K
5K
0
1
21
@IreneSolaiman
Irene Solaiman
3 years
The importance of having a coauthor/workbuddy with whom you truly ~vibe~ isn't talked about enough. You enjoy the work, do better work, push each other, and know when to step back. Love u @cbd (human not the oil)
2
0
20
@IreneSolaiman
Irene Solaiman
1 year
I'll be on NPR's @1a on Monday to talk AI, ethics, safety, policy, and the question of To Whom are we Aligning! Tune in at 10a.m. ET Feb 20 (or 👂🏾 the recording)!
0
1
21
@IreneSolaiman
Irene Solaiman
1 year
CORRECTION: this was largely measured by model and huge credit due to @BigscienceW and all the talented researchers who worked on BLOOM! Open collaborations are pretty incredible.
@IreneSolaiman
Irene Solaiman
1 year
New work on foundation model provider compliance w/ the EU AI Act! A striking range: some score <25% (AI21, Aleph Alpha, Anthropic), one scores ≥75% (Hugging Face 🤗). Continually fascinating + stellar work from @RishiBommasani @kevin_klyman @dzhang105 @percyliang
3
32
111
0
2
12
@IreneSolaiman
Irene Solaiman
1 year
I wanted to share my knowledge--and the knowledge of 100+ citations--with the world! Hope it's helpful! Deep thanks to folks who gave thoughtful feedback: @jachiam @BlancheMinerva @Miles_Brundage @clefourrier @YJernite @mmitchell_ai @percyliang @sonjasg Anything sub-par is on me
1
1
20
@IreneSolaiman
Irene Solaiman
1 year
Harvard invited me back to give my lecture on AI leadership and I'm freeing up so much of last year's slide deck that explained what the hecc large language models are
2
1
20
@IreneSolaiman
Irene Solaiman
2 years
A stranger on the streets of Toronto hit me and walked away. I'm fine and more stunned but an unwelcome reminder of what Walking While Brown was like pre-pandemic
12
0
20
@IreneSolaiman
Irene Solaiman
2 years
Worried about AI impact on artists? Want to support artists? Need a personal gift idea? Try 👩🏾‍🎨Commissioning Artists🎨! Personal sites, Etsy, and IG are great places to look! e.g. I commissioned this from Etsy for my partner. h/t @catcherinthepi3 & @b_cavello for helping me navigate!
Tweet media one
4
4
19
@IreneSolaiman
Irene Solaiman
2 years
when your pandemic bestie turns out to be a real human!!! #heidyrocks 🪨
@HeidyKhlaaf
Dr Heidy Khlaaf (هايدي خلاف)
2 years
When one of your favourites ( @IreneSolaiman ) visits you during your climbing sabbatical in Fontainebleau, but you won't stop talking about and showing her the smol rock you climb. #mossgirl
Tweet media one
1
0
42
0
0
19
@IreneSolaiman
Irene Solaiman
1 year
Super proud of this work. Much of the world needs their own input on what model behavior is, esp groups whose values are being overwritten.
@AiBreakfast
AI Breakfast
1 year
OpenAI CEO @sama on personalized AI models: "You should be able to write up a few pages of here's what I want, here are my values, here's how I want the AI to behave, and it reads it and thinks about it and acts exactly how you want, because it should be your AI."
41
211
2K
3
7
20
@IreneSolaiman
Irene Solaiman
2 years
For AI image generators that perform best in English, a seemingly small wording difference to non-native speakers can produce a completely different output.
Tweet media one
Tweet media two
2
1
19
@IreneSolaiman
Irene Solaiman
3 years
I just gave my first lecture at Harvard for my hero Jim Waldo's course, Leadership in AI! We love fresh eyes on AI challenges and I'm ✨elated✨ to see international and interdisciplinary backgrounds ready to apply their expertise!!
2
1
15
@IreneSolaiman
Irene Solaiman
3 years
Come hang out with @cbd and me at #NeurIPS2021 's poster session this Tuesday the 7th from 11:30a.m. - 1 p.m. ET!! We want to hear your thoughts, questions, ideas, you! And do feel free to reach out regardless if we can be handy (PALMS pun🥁)
Tweet media one
1
5
16
@IreneSolaiman
Irene Solaiman
1 year
A researcher can have tweets/takes you disagree with and still do stellar research. I think most folks agree with this^ but may not have that goodwill towards prominent AI ethics researchers. That's a disservice to research
1
1
17
@IreneSolaiman
Irene Solaiman
4 years
Read this! In addition to being an actionable, necessary report, 50+ authors across institutions worked on it! Faith in group projects restored💪
@GretchenMarina
Gretchen Krueger
4 years
Alongside co-authors @Miles_Brundage , Shahar Avin, @HaydnBelfield and @j_asminewang , I’m delighted to share a new multi-stakeholder report “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims”:
3
15
64
0
0
13
@IreneSolaiman
Irene Solaiman
1 year
I used to work on countering election disinfo--it's easy for fake images to go viral when they feed a narrative ppl want, + hard to counteract. I've also worked on AI detection, which is a game of catchup. Banning certain prompts as political is a rabbithole. In conclusion: 😥😥😥
@arstechnica
Ars Technica
1 year
AI platform allegedly bans journalist over fake Trump arrest images by @ashleynbelanger
5
14
31
0
2
16
@IreneSolaiman
Irene Solaiman
2 years
Ethics work is most powerful when not a closed team but open to all. Having regular 🧠s like @mmitchell_ai @YJernite @SashaMTL @GiadaPistilli @adrinjalali @natolambert @osanseviero +me gets it done! Read how we foster that space and 🤗 work it's led to!
0
3
14
@IreneSolaiman
Irene Solaiman
1 year
Closing and limiting language model access has become more common since GPT-2’s staged release. Language models with < 6B parameters have generally been towards the open end of the gradient, but more powerful models, especially from large companies, tend to be closed.
Tweet media one
1
0
15
@IreneSolaiman
Irene Solaiman
1 year
The parts of an AI system considered in a release can be broken into three broad and overlapping categories: • access to the model itself, • components that enable further risk analysis, • and components that enable model replication.
1
1
15
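As an illustration only (not taken from the paper), those three overlapping categories might be represented as a simple structure; the example component names are assumptions.

```python
# Rough sketch of the three overlapping release-component categories named above.
# The example component names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ReleaseComponents:
    model_access: list = field(default_factory=lambda: [
        "hosted API", "gated download", "open weights"])
    risk_analysis: list = field(default_factory=lambda: [
        "model card", "evaluation results", "red-team findings"])
    replication: list = field(default_factory=lambda: [
        "training code", "training data", "technical paper"])

release = ReleaseComponents()
print(release.risk_analysis)
```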
@IreneSolaiman
Irene Solaiman
2 years
It's been 3 years since Professor Patrick Winston passed. I used to go early to his classes just to hang out with him and his golden retriever TA. I never got the chance to tell him how influential he was on my career. If you haven't for those special people, do it now.
0
2
15
@IreneSolaiman
Irene Solaiman
2 years
Congrats, friends!! Love the discussion on what alignment means and to whom we're aligning💪🏾 It's not about how big your computing power is...👀
@janleike
Jan Leike
2 years
Extremely exciting alignment research milestone: Using reinforcement learning from human feedback, we've trained GPT-3 to be much better at following human intentions.
10
141
892
0
0
15
@IreneSolaiman
Irene Solaiman
1 year
A trend of strong disclosure work from more open orgs: "Open releases are often conducted by organizations that emphasize transparency, leading to a similar commitment to [disclosure]" We love documentation, always in awe of @mmitchell_ai @ezi_ozoani
1
1
15
@IreneSolaiman
Irene Solaiman
1 year
It's easy to knock on @OpenAI 's latest detection classifier without highlighting a key point: it's a tool. One of *many* methods in detection. And kudos to folks working on this across the board. We can't keep relying on magical technical fixes (extrapolate this lesson👀)
@sharongoldman
Sharon Goldman
1 year
Thanks to @rasbt for chatting with me this morning about the challenges of @OpenAI 's new AI Text Classifier and other similar tools. Hmm...what would Shakespeare think?
6
4
24
1
3
15
@IreneSolaiman
Irene Solaiman
2 years
I left Zillow a few weeks ago during its reorg and *what* a learning experience. Part of what I learned is I miss you ai wonks & nlp folks so I'm hype to be joining the loveliest lab next week:)) For now, tryna stick to my spring equinox resolution of being more active on here👋🏾
0
0
15
@IreneSolaiman
Irene Solaiman
1 year
I've been thinking a lot about this division in the AI community. Honestly anger is warranted in many cases. But also shared urgency in a common problem means working together bc we can't afford not to. Maybe more on this later...
@EigenGender
EigenGender
1 year
Absolutely frustrating that “AGI notkilleveryoneism” people and “ML ethics” people hate each other despite the fact that each group has had massive positive counterfactual impact on each others goals
9
5
106
0
2
14
@IreneSolaiman
Irene Solaiman
1 year
Sometimes I want to be a blueberry. No AI, no think, only 🫐
1
0
14
@IreneSolaiman
Irene Solaiman
2 years
Manifesting this
@DataInnovation
Center for Data Innovation
2 years
@hodanomaar @ZennerBXL @AxelVossMdEP @AnthonyNAguirre @FLIxrisk @DeepMind @IreneSolaiman @huggingface “My dream is for state-of-the-art performance [of GPAI systems] to not just mean technical accuracy but also qualitative aspects, such as the ability to account for bias,” says @IreneSolaiman .
1
1
2
0
3
14
@IreneSolaiman
Irene Solaiman
2 years
Loved chatting with the deeply thoughtful @SigalSamuel !! and ofc bestie @cbd. It's hard to encompass a complex field in a few words, which is why we must keep asking: for whom is this safe? For *whom* is this fair? And with so much power in tech, silence can truly be violence.
@SigalSamuel
Sigal Samuel
2 years
I hope you'll read my full piece explaining why nobody really knows how to resolve the AI fairness crisis. Gratitude to all the brilliant people who shared ideas with me: @timnitGebru @IreneSolaiman @cbd @stoyanoj @merbroussard @johnbasl @random_walker
7
15
100
1
4
14
@IreneSolaiman
Irene Solaiman
1 year
Some more meta takes on this paper: Writing a solo-author paper is giving Gollum
Tweet media one
@IreneSolaiman
Irene Solaiman
1 year
I've worked on tough AI release decisions: now open models on @HuggingFace, formerly GPT-2 @openai. So I wrote a paper on how generative AI systems are released and why! Spoiler: it's not just about PR! Some takeaways:
8
88
450
1
1
14
@IreneSolaiman
Irene Solaiman
1 year
I haven't seen concrete cost comparisons of human vs AI-generated disinfo. Ppl like to make stuff up too, for cheap! Curious to see work on this ᴮᵘᵗ ᵃˡˢᵒ ᶠᶦˣᵃᵗᶦᵒⁿ ᵒⁿ ᵈᶦˢᶦⁿᶠᵒ ᵃˢ ᵃ ᵖʳᶦᵐᵃʳʸ ᴸᴸᴹ ᵐᵃˡᶦᶜᶦᵒᵘˢ ᵘˢᵉ ᶦˢ ᵃ ᴾᵒˡᶦᵗᶦᶜᵃˡ ᶜʰᵒᶦᶜᵉ
@random_walker
Arvind Narayanan
1 year
On the other hand, if we are correct that the cost of producing misinfo is not the bottleneck in influence operations, then nothing much will change. If reports of malicious misuse remain conspicuously absent in the next few months, that should make us reevaluate the risk.
1
2
19
3
0
14
@IreneSolaiman
Irene Solaiman
2 years
I'm looking for interns to work on social impacts of generative models with me! Be a Hugging Facer! Hugger Face? Hugging Folk? Also help me with that^ Apply here:
0
3
14
@IreneSolaiman
Irene Solaiman
1 year
Risks and threats are constantly evolving and therefore difficult to enumerate and assess. Considerations include: concentration of power, avoiding harmful social impacts, misuse, auditability, accountability, and cultural value judgments.
1
2
12
@IreneSolaiman
Irene Solaiman
2 years
Claiming AI harms are "unintended" sounds a lot like creepy dudes telling you they "meant well." Intent ≠ impact.
0
3
13
@IreneSolaiman
Irene Solaiman
1 year
This isn’t in the paper, but for Fun I plotted systems with the country of their developer’s HQ (no flag if multinational collective). For my geopolitics nerds who also find it Neat! Maybe a different paper?...
Tweet media one
1
0
13
@IreneSolaiman
Irene Solaiman
2 years
Mixture of Experts, c. 1400s
@WeirdMedieval
weird medieval guys BOOK OUT NOW !!
2 years
harvesting words, france, 15th century
Tweet media one
59
2K
14K
2
0
12
@IreneSolaiman
Irene Solaiman
3 years
Here is a meme summary:
Tweet media one
0
0
10
@IreneSolaiman
Irene Solaiman
4 years
brb quarantine dance party to Jukebox💃
@OpenAI
OpenAI
4 years
Introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We're releasing a tool for everyone to explore the generated samples, as well as the model and code:
219
2K
8K
0
0
12
@IreneSolaiman
Irene Solaiman
3 years
@Miles_Brundage Therapist: Tesla Bot isn't real. It can't hurt you. Tesla Bot:
0
1
12
@IreneSolaiman
Irene Solaiman
4 years
The little girl in me never saw someone who looked like her in high elected office. I'm so excited for little girls' dreams.
@KamalaHarris
Kamala Harris
4 years
We did it, @JoeBiden .
89K
488K
3M
0
1
9
@IreneSolaiman
Irene Solaiman
1 year
I don't want to give more visibility to violent rhetoric that misrepresents safety researchers doing Good Work. For those just toe-dipping into AI doomerism, please follow/read thoughts from @jachiam0 & @xriskology
0
3
12
@IreneSolaiman
Irene Solaiman
2 years
I said this before the Uber situation but I'll keep saying it! People (esp bored teens...) are experts at new and nefarious use cases
@mmitchell_ai
MMitchell
2 years
Useful quick overview of some of the topics at play in the new EU AI Act. “Never underestimate a bored teenager with decent coding skills and an internet connection to find use cases that you might not have thought of” - @IreneSolaiman =D
0
10
21
1
0
12
@IreneSolaiman
Irene Solaiman
1 year
One of the most common reasons a system is not fully open is that its training data/code/paper remains a mystery. If the training data is open, it's often based on a public dataset/external work, and burdens shift to the data curator.
1
1
10
@IreneSolaiman
Irene Solaiman
2 years
@HaydnBelfield But pettier
1
0
11
@IreneSolaiman
Irene Solaiman
1 year
So ultimately, how do we release new generative AI systems and their components? Frankly there’s no Right Answer and it’s largely case by case. The one definite: We can't afford to not work on safety controls.
1
1
11
@IreneSolaiman
Irene Solaiman
1 year
The scientific community definitely has to update to a world where large language models are being used in papers and in research. I can't stop thinking about what that means for scientific and social progress... Great chatting with Nature about this!
1
1
11
@IreneSolaiman
Irene Solaiman
2 years
CEO of Alameda Research said the quiet part out loud
@bankmankrieg
Bankman-Krieg
2 years
@excedrinenjoyer Did you see this one
Tweet media one
9
9
163
1
1
10
@IreneSolaiman
Irene Solaiman
3 years
I just gave a talk for a class I took as a grad student (on AI & Security). I have 2 feels: 🤯 and I love students getting sharper every year🤩 Plus extra points that they appreciated the many Jeff Goldblums pasted on my wall watching over me
1
0
10
@IreneSolaiman
Irene Solaiman
2 years
Y'all what did I just say. I'm so overwhelmed with AI progress and how to wrap my mind around social impact evals for text-to-video. And according to this impact section, so is every developer.
@_akhaliq
AK
2 years
imagen video: high definition video generation with diffusion models paper: blog:
13
205
1K
1
0
10
@IreneSolaiman
Irene Solaiman
1 year
Trends across modalities (text, image, audio, video) also show openness until GPT-2’s staged release. And just a burst of systems (developed+closed) since 2021. Those most toward the open end of the gradient tend to be developed by organizations founded with the intent to be open
Tweet media one
1
3
10