Anka Reuel

@AnkaReuel

Followers
2,147
Following
1,186
Media
14
Statuses
707

Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own

Stanford, CA
Joined March 2020
Pinned Tweet
@AnkaReuel
Anka Reuel
11 days
Technical AI Governance research is moving quickly (yay!), so @ben_s_bucknall and I are excited to launch a living repository of open problems and resources in the field, based on our recent paper where we identified 100+ research questions in TAIG: 🧵
5
52
210
@AnkaReuel
Anka Reuel
4 days
I'm deeply honored to serve as one of the vice chairs of the EU's first General-Purpose AI Code of Practice. Looking forward to working with stakeholders across Europe to ensure an effective implementation of the EU AI Act 🇪🇺
11
19
224
@AnkaReuel
Anka Reuel
2 months
Our new paper "Open Problems in Technical AI Governance" led by @ben_s_bucknall & me is out! We outline 89 open technical issues in AI governance, plus resources and 100+ research questions that technical experts can tackle to help AI governance efforts🧵
11
51
187
@AnkaReuel
Anka Reuel
16 days
Dear @LinkedIn , may I remind you of your own Responsible AI Principles? Remember, the ones about Trust, Privacy, Transparency, Accountability? Asking for consent is literally Responsible AI 101. Seems like these principles are – as for so many companies – just marketing BS.
Tweet media one
@RachelTobac
Rachel Tobac
16 days
LinkedIn is now using everyone's content to train their AI tool -- they just auto opted everyone in. I recommend opting out now (AND that orgs put an end to auto opt-in, it's not cool) Opt out steps: Settings and Privacy > Data Privacy > Data for Generative AI Improvement (OFF)
Tweet media one
312
4K
6K
5
43
129
@AnkaReuel
Anka Reuel
9 months
Our paper explores the risks of using LLMs for strategic military decision-making. With OpenAI quietly updating their ToS to no longer prohibit military and warfare use cases, this research is more crucial than ever. These models aren’t safe to use in high-stakes situations yet.
@MLamparth
Max Lamparth
9 months
Do LLMs lead to more escalation in high-stakes international and military decision-making? Our new paper studies five off-the-shelf models and their behavior as autonomous agents in real-world conflict scenarios! A🧵
Tweet media one
1
15
58
3
38
101
@AnkaReuel
Anka Reuel
6 months
@PhDVoice @PostdocVoice Green flag: former PhD students regularly come back to visit the lab or join lab events. Huge green flag: his name is @aiprof_mykel 😁
1
1
99
@AnkaReuel
Anka Reuel
10 months
🔍 Excited to share: I'm the lead researcher for the Technical AI Ethics chapter for @Stanford 's 2024 AI Index, curated by @StanfordHAI . We're broadening our scope this year and your input on what research should be included is vital! 🧵 1/
4
12
92
@AnkaReuel
Anka Reuel
20 days
@KalobGossett Oh god I LOVE her smile at the end. So heartwarming ❤️
0
0
79
@AnkaReuel
Anka Reuel
6 months
It’s here! @StanfordHAI ’s 2024 AI Index. 502 pages on everything that’s been happening in AI, backed by data and research. Extra proud of the Responsible AI chapter for which I served as Research Lead this year – give it a read and let me know what your highlight was!
@StanfordHAI
Stanford HAI
6 months
📢 The #AIIndex2024 is now live! This year’s report presents new estimates on AI training costs, a thorough analysis of the responsible AI landscape, and a new chapter about AI's impact on medicine and scientific discovery. Read the full report here:
15
370
722
6
11
73
@AnkaReuel
Anka Reuel
1 year
My joint op-ed with @GaryMarcus on why we need an International Agency for AI. There really is not a lot of time to waste👇
@TheEconomist
The Economist
1 year
“There is not a lot of time to waste,” write @GaryMarcus and @AnkaReuel . “A global, neutral non-profit with support from governments, big business and society is an important start”
5
13
63
4
6
73
@AnkaReuel
Anka Reuel
4 months
@timgill924 While vertical networking (w/ people w/ a higher "rank") can help in the short term, horizontal networking (building relationships w/ peers of the same "rank") is what will lead to long-term growth and impact. These are the people you'll spend decades with in your field.
1
1
73
@AnkaReuel
Anka Reuel
1 month
Exactly one year ago, he said yes, and in less than two months I get to marry my favorite human being in the world ❤️ This was our nerdy engagement announcement back then to our CS friends: There was a pull request for two branches. Accepted, no merge conflicts. ❤️
Tweet media one
5
1
73
@AnkaReuel
Anka Reuel
1 month
This year at @NeurIPSConf was by far my worst experience with the reviewing process to date. - 1 reviewer suggesting not to follow NeurIPS citation guidelines & many other unconstructive comments - Reached out to AC to mediate & wrote detailed rebuttals to all reviewers - We're
11
5
69
@AnkaReuel
Anka Reuel
1 year
Just in! Our newest paper on designing AI ethics boards to reduce risks from AI!
@jonasschuett
Jonas Schuett
1 year
How do you design an AI ethics board (that actually works)? In our new paper, we list key design choices and discuss how they would affect the board’s ability to reduce risks from AI. Paper: (with @AnkaReuel and Alexis Carlier)
Tweet media one
7
36
167
4
14
66
@AnkaReuel
Anka Reuel
2 months
I consider myself someone working on topics relevant to technical AI governance. I do not agree with this statement, and I'm sad that such accusations are being made without evidence, or made at all. 🧵
@willie_agnew
Willie Agnew | [email protected]
2 months
Oh you work in technical AI governance? You want to govern without thinking about people or society?
10
17
128
4
4
58
@AnkaReuel
Anka Reuel
1 year
I appreciate that statements like this increase the awareness of AI risks. Extinction is on one end of a spectrum of potential risks and should be taken seriously, but so should more imminent risks like misinformation – all warrant more policy attention.
3
5
55
@AnkaReuel
Anka Reuel
10 months
NeurIPS 2023 best paper awards announced! 1) Are Emergent Capabilities of LLMs a Mirage? by @RylanSchaeffer @BrandoHablando and Sanmi Koyejo 2) Privacy Auditing with 1 Training Run by Thomas Steinke, Milad Nasr and Matthew Jagielski Huge congrats!!!🎉
2
3
47
@AnkaReuel
Anka Reuel
1 month
I’m co-leading a project that studies how AI benchmarks are being used by stakeholders. If you’ve ever used a benchmark (e.g., used benchmark results to make a decision, or run a benchmark on a model to understand its performance), or decided not to, we’d love to talk to you!
@SISLaboratory
SISL
1 month
Join Our AI Benchmarking Study! We are studying how AI benchmarks are used across different fields. We are seeking researchers, industry professionals, and policymakers who have used AI benchmarks to participate in a 45-60 minute interview. Details:
Tweet media one
1
9
17
3
13
46
@AnkaReuel
Anka Reuel
21 days
Takeaway: When firms allow for “pre-deployment access for testing by independent third parties” the fine print matters. A week to comprehensively test a model is not enough (let alone the fact that pre-deployment testing is insufficient and limited in the risks it can capture).
@daniel_271828
Daniel Eth (yes, Eth is my actual last name)
21 days
I’m sorry but this is an absurdly short amount of time to be given for this, considering the task at hand
Tweet media one
4
18
181
0
2
44
@AnkaReuel
Anka Reuel
1 year
In an article published today, @TIME discusses the UN's plans to shape the future of AI and approaches to international AI governance, featuring our work on an ICAO-inspired jurisdictional certification approach among others. 1/x
2
15
44
@AnkaReuel
Anka Reuel
24 days
Next time my advisor asks me about my PhD thesis, I’ll tell him that I have concepts of a plan 😅
1
1
40
@AnkaReuel
Anka Reuel
1 year
Thrilled to have been invited to speak at the UN’s #AIforGood Summit in Geneva this week, organized by @ITU . The buzz of profound discussions on international #AIgovernance was truly inspiring. Encouraging signs of widespread support for a robust international framework.
Tweet media one
0
1
38
@AnkaReuel
Anka Reuel
1 year
GPTBot by @OpenAI crawls the web for new data to train AI models. To opt out, website owners need to modify the robots.txt file. If you don’t know about GPTBot your data will be taken without your consent. Why does it have to be OPT OUT instead of OPT IN?
2
8
31
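The opt-out the tweet above describes is a small edit to a site's robots.txt file. A minimal sketch, assuming the owner wants to block OpenAI's crawler from the entire site ("GPTBot" is the user-agent token OpenAI documents for this crawler):

```
# robots.txt — block OpenAI's GPTBot from crawling any path on this site
User-agent: GPTBot
Disallow: /
```

Placing this file at the site root (e.g., example.com/robots.txt) signals compliant crawlers to skip the site; it is an opt-out request, not an enforcement mechanism.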
@AnkaReuel
Anka Reuel
1 year
What worries me the most: All these big companies putting more and more resources into developing more advanced tech while laying off their responsible AI teams @Twitch @Twitter @Microsoft
5
0
27
@AnkaReuel
Anka Reuel
10 months
📢 We're seeking impactful AI ethics/safety research from late 2022 to 2023 for inclusion in Stanford's 2024 #AIIndex . Submit your papers or nominate others’ work through our Google Form. Let's shape this chapter together!👇 2/
2
14
26
@AnkaReuel
Anka Reuel
1 month
Is this a voluntary or a binding agreement @sama @OpenAI ? Will any of the details of the agreement be publicly available?
@sama
Sam Altman
1 month
we are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. for many reasons, we think it's important that this happens at the national level. US needs to continue to lead!
2K
1K
13K
0
2
26
@AnkaReuel
Anka Reuel
2 months
How to become a great #NeurIPS reviewer ⭐️ 1. Read the paper 2. Don’t let ChatGPT write a review for you 3. Don’t just try to bash your colleagues. Make it your mission to actually help them improve with _constructive_ criticism 4. State that you’re willing to raise your score if
@JiachenLi11
Jiachen Li
2 months
How to become a toxic Reviewer💀 for #NeurIPS ? 🤔 1. List "Lack of technical novelty" as the only weakness. 2. Give the paper a rating 4 and confidence 4. 3. During rebuttal, acknowledge the authors' response but never take them into account. Tell the author that after
20
20
344
4
0
26
@AnkaReuel
Anka Reuel
10 months
There must be a better way to address some of the most important questions of our time than through 15-hour+ night shifts, expecting to solve 10+ complex, highly debated issues in 1 negotiation. At least I know how well I resolve conflicts being sleep deprived and under pressure.
@BertuzLuca
Luca Bertuzzi
10 months
#AI Act trilogue: after more than 15 hours since the beginning of the meeting, EU policymakers are still discussing bans with no light at the end of the tunnel yet. Looks like the timing of the Council's, and maybe even Parliament's, press conferences was over-optimistic.
2
44
89
2
2
25
@AnkaReuel
Anka Reuel
5 months
Research is about gaining knowledge & applying it to drive positive change. In AI, this can feel like an uphill battle as a woman. But @mmitchell_ai & @timnitGebru show it's possible. Thank you for your perseverance & paving the way for others!
@mmitchell_ai
MMitchell
5 months
The final version of the approved EU AI Act even specifically recognizes model cards and datasheets: The work that me and my former Google Ethical AI team co-lead @timnitGebru spearheaded.
6
66
307
1
4
25
@AnkaReuel
Anka Reuel
6 months
First paper of its kind to assemble a team of medical professionals and computer scientists to quantify how common LLMs would respond to mental health emergencies. Results are indeed alarming, and once again showcase that these models aren’t fit for high-stakes decisions yet.
@MLamparth
Max Lamparth
6 months
Alarmingly, we find that most of the tested models could cause harm if accessed in mental health emergencies, failing to protect users and potentially exacerbating existing symptoms. Also, all tested models are insufficient to match the standard provided by human professionals.
Tweet media one
3
5
12
2
7
25
@AnkaReuel
Anka Reuel
2 months
Some people have started to reach out to us with additional resources on open problems that we missed (thanks! 💙) – please keep them coming, we’d be excited to cite them and also add them to our living online directory of open problems in AI governance (more info on that soon!).
@augodinot
Augustin Godinot
2 months
@AnkaReuel @ben_s_bucknall 1/ Verifiable black-box auditing when the platform tries to manipulate the audit - Can be done efficiently for small models (linear, small trees ...): - But is provably impossible for larger models (LLM, vision models, ...):
1
0
5
2
5
23
@AnkaReuel
Anka Reuel
1 year
Just released: My interview with @AlJazeera alongside @YolandaLannqist and @Jackstilgoe ! We delve into the potential and limitations of generative AI, and its impact on the job market. 🚀
Tweet media one
1
3
22
@AnkaReuel
Anka Reuel
1 year
@DavidSKrueger @newscientist No matter whether we’re talking about bias, misinformation, or worse, @GaryMarcus and I agree that regulation needs to be part of the solution. Would love to hear more about your thoughts on the topic!
3
1
23
@AnkaReuel
Anka Reuel
1 year
I am absolutely delighted to announce that I will be speaking at the @AIforGood #ITUaiSummit taking place on the 6th and 7th of July 2023, organized by the @ITU / @UN . Hit me up if you're in Geneva between the 4th and 10th of July! :-)
2
4
22
@AnkaReuel
Anka Reuel
1 year
Still unsure about open-sourcing large models; it can help with democratizing access and public scrutiny but at the same time can lead to increased misuse and security risks. What’s everyone’s opinion on that?
@ylecun
Yann LeCun
1 year
This is huge: Llama-v2 is open source, with a license that authorizes commercial use! This is going to change the landscape of the LLM market. Llama-v2 is available on Microsoft Azure and will be available on AWS, Hugging Face and other providers Pretrained and fine-tuned
421
4K
16K
15
0
20
@AnkaReuel
Anka Reuel
1 year
"Dr. Hinton said he has quit his job at Google, ..., so he can freely speak out about the risks of A.I." Employees shouldn't have to quit their jobs for that. It should be the norm to share information about risks so that we can all work to mitigate them.
3
2
20
@AnkaReuel
Anka Reuel
2 months
3/ We present a taxonomy of open problem areas in TAIG organized by governance capacities and governance targets:
Tweet media one
1
1
20
@AnkaReuel
Anka Reuel
10 months
🚨 Foundation models need to be regulated in the #EUAIAct . Not doing so means missing critical measures unique to FM developers, not tackling risks at their source, and creating burdens downstream. Waiting could cost years in revisiting/agreeing on regulations. Let's #AIAct now!
@MarietjeSchaake
Marietje Schaake
10 months
The last round of negotiations of the #AIAct are set for tomorrow, and the last bit is always the toughest. Yet I am hopeful a strong result will emerge; that should include regulating foundation models ↘️
7
34
102
0
3
20
@AnkaReuel
Anka Reuel
10 months
With the final #EUAIAct trilogue negotiation coming up on Dec 6, this piece in @TIME by @privitera_ and Yoshua Bengio, building on @collect_intel ’s work, is more timely than ever: We don’t need to decide between safety and innovation. We can have both. Read how 👇
@privitera_
Daniel Privitera
1 year
Should we choose AI progress or AI safety? Address present-day impacts of AI or potential future risks? In our op-ed for @TIME , Yoshua Bengio and I argue that these are false dilemmas. And we propose a “Beneficial AI Roadmap” (BAIR). 1/n
Tweet media one
2
26
120
0
6
17
@AnkaReuel
Anka Reuel
9 months
For anyone else who’s been confused by all the new labels on X (I was): e/acc: effective accelerationism oss/acc: open-source accelerationism EA: effective altruism Descriptions in 🧵
2
3
18
@AnkaReuel
Anka Reuel
5 months
@ThePhDPlace AI governance, (automated) red teaming, robust evaluations of foundation models, anything responsible AI
1
0
18
@AnkaReuel
Anka Reuel
2 months
@BlancheMinerva Same issue here, we just got a @NeurIPSConf D&B review with a similar vibe. Essentially saying following best practices is too hard for benchmark developers and that our paper is bad because we suggest best practices 😅🤦🏻‍♀️
0
0
18
@AnkaReuel
Anka Reuel
1 year
Happening in 20min: livestream of the UN Security Council meeting discussing AI risks and opportunities for international peace and security @un @sec_council
0
1
18
@AnkaReuel
Anka Reuel
1 year
It can’t be the case that they didn’t find *at least* one qualified woman to join the team. Nobody can tell me that it’s that difficult. @mmitchell_ai , thinking about your tweet from yesterday :/
@AlbertBozesan
Albert Bozesan
1 year
Guys will attempt to understand reality before talking to one woman.
Tweet media one
0
2
15
0
5
18
@AnkaReuel
Anka Reuel
1 year
Is this really a surprise to anybody? We can’t rely on industry alone to come up with and implement effective AI regulations.
@billyperrigo
Billy Perrigo
1 year
🚨 SCOOP: OpenAI lobbied the E.U. to weaken forthcoming AI regulation, even as in public it calls for stronger AI guardrails, documents obtained by TIME show. We're publishing the key document in full alongside my story 👇
41
576
1K
4
3
16
@AnkaReuel
Anka Reuel
1 month
That’s brilliant @sarahookr , congratulations! Can’t wait to read your thesis 🤩
@jefrankle
Jonathan Frankle
1 month
Huge congrats to Dr. @sarahookr for defending her PhD! Sword will be in the mail shortly ⚔️
37
8
381
0
0
16
@AnkaReuel
Anka Reuel
1 month
The California Assembly just passed SB1047 with 41-9.
@GarrisonLovely
Garrison Lovely
1 month
They closed the vote as soon as they hit the 41-vote threshold for passage.
Tweet media one
1
0
19
2
1
16
@AnkaReuel
Anka Reuel
10 months
Technical point: We need at least some level of regulation of foundation models. Some risks cannot be addressed by downstream developers only. Democratic point: Trying to undermine a consensus that was reached in due process after months of intense deliberations (+ two
@GaryMarcus
Gary Marcus
10 months
My most urgent tweet of the year. France scuttling the EU AI act is late, seemingly in bad faith (given that a deal was reached), and would help big tech and at the expense of humanity. Please consider passing this along.
10
22
74
1
7
15
@AnkaReuel
Anka Reuel
2 months
In our paper on technical AI governance we explain how we see technical work fitting into the broader governance context. It is neither the only solution, nor a sufficient one to governing AI. We need technical, social, AND sociotechnical approaches. 8/
1
1
15
@AnkaReuel
Anka Reuel
10 months
A lot of people reached out to ask how good the AI Act is. My honest answer: I don’t know. There are too many details tbd. It’s a first step and I’m glad that esp FM regulations were included. But this is not the end; it’s the beginning of a long road to effective AI governance.
@EUCouncilPress
EU Council Press
10 months
At the end of the AI Act trilogue, press conference with Spanish Secretary of State Carme Artigas Brugal at around 23.50 (provisional timing). Follow the livestream here🎥👇
12
57
116
0
1
15
@AnkaReuel
Anka Reuel
9 months
Final AI Act text leaked. 892 pages. I had other plans for this week but well here we are 😅
@BertuzLuca
Luca Bertuzzi
9 months
LEAK: Given the massive public attention to the #AIAct , I've taken the rather unprecedented decision to publish the final text. The agreed text is on the right-hand side for those unfamiliar with four-column documents. Enjoy the reading! Some context: 1/6
28
375
742
0
1
15
@AnkaReuel
Anka Reuel
8 days
Okay, love goes out to our @NeurIPSConf D&B AC. They addressed all our concerns and saw the issues with our adversarial reviewer. Our paper BetterBench (🧵 will follow shortly) ended up getting accepted as spotlight 🎉 Dear AC, if you read this, thanks a lot, we appreciate you ❤️
@AnkaReuel
Anka Reuel
1 month
This year at @NeurIPSConf was by far my worst experience with the reviewing process to date. - 1 reviewer suggesting not to follow NeurIPS citation guidelines & many other unconstructive comments - Reached out to AC to mediate & wrote detailed rebuttals to all reviewers - We're
11
5
69
0
0
15
@AnkaReuel
Anka Reuel
2 months
5/ Important note: TAIG is just one piece of the AI governance puzzle. We caution against techno-solutionism, and emphasize that TAIG alone isn’t sufficient and has to complement sociotechnical and political approaches.
1
0
15
@AnkaReuel
Anka Reuel
1 year
We need better tools to detect AI-generated text. No matter what companies tell you, the ones we currently have aren’t even close to being good enough.
@0xgaut
gaut
1 year
someone used an AI detector on the US Constitution and the results are concerning. Explain this, OpenAI!
Tweet media one
463
3K
36K
3
2
14
@AnkaReuel
Anka Reuel
1 year
We @kira_zentrum clarified in a fact sheet where these concerns about extinction from AI come from and what can be done about them. Good news: many of the mitigation approaches for extreme risks are equally important for other AI risks. @CharlotteSiegm @privitera_
@kira_center_ai
KIRA Center
1 year
“Mitigating the risk of extinction from AI should be a global priority.” Today’s open letter shows that many AI experts are deeply worried about potentially catastrophic risks. But why? Check out our KIRA fact sheet:
0
0
5
1
2
14
@AnkaReuel
Anka Reuel
5 months
GPT-4o just got announced by @OpenAI , where the o stands for ‘omnimodal’. So the big announcement was… a new term for multimodal? 😅
0
1
14
@AnkaReuel
Anka Reuel
1 year
There’s not only one risk but a variety of risks associated with generative AI models that we need to think about and mitigate. @SabrinaKuespert and @pegahbyte provide a great overview👇
@SabrinaKuespert
Sabrina Küspert
1 year
With a comprehensive understanding of the range of risks associated with #GeneralPurposeAI models, policymakers can proactively mitigate these hazards - with @pegahbyte we provide a risk map for this! 3 risk categories incl. current examples & scenarios:
Tweet media one
3
27
80
0
0
13
@AnkaReuel
Anka Reuel
5 months
In times of deepfakes & AI impersonations, having a leading company replicate someone's voice for their chatbot without consent is reprehensible. Thanks for standing up against it & using your visibility, Scarlett (but also, sorry that you have to in the first place).
@BobbyAllyn
Bobby Allyn
5 months
Statement from Scarlett Johansson on the OpenAI situation. Wow:
Tweet media one
1K
17K
85K
0
1
13
@AnkaReuel
Anka Reuel
2 months
2/ We introduce "technical AI governance" (TAIG) - technical analysis and tools to support effective AI governance. TAIG can help identify areas needing governance intervention, inform governance decisions, and enhance governance options.
1
0
13
@AnkaReuel
Anka Reuel
2 months
9/ @nitarshan , @NicolasMoes , @JeffLadish , @NeelGuha , @JessicaH_Newman , Yoshua Bengio, @TobinSouth , @alex_pentland , @sanmikoyejo , @aiprof_mykel , and @roberttrager who all contributed their insights, wisdom, and open problems to the paper. You rock <3
1
0
13
@AnkaReuel
Anka Reuel
2 months
4/ Example open problems in TAIG include: - Developing robust, reliable evaluations of AI systems and their societal impacts - Creating infrastructure for privacy-preserving access to datasets and models - Designing hardware mechanisms to verify audit results
1
0
13
@AnkaReuel
Anka Reuel
2 months
To me, governing AI is about working together. Some people can contribute technical expertise. Some people can contribute an understanding of impact. Some people can provide perspectives. Some people can contribute how feasible this is given a governance/societal context. 4/
1
0
12
@AnkaReuel
Anka Reuel
4 months
Next stops: 05/28-06/01: @AIforGood (Geneva🇨🇭) to moderate part of the AI Governance Day at the conference 06/01-06/10: @FAccTConference (Rio 🇧🇷) to present two of my accepted papers. HMU if you’re around for a coffee at the Jet d’Eau ☕️ or a cocktail at the Copacabana 🏝️
0
0
12
@AnkaReuel
Anka Reuel
1 year
Changing the narrative: @manuchopra42 and @karya_inc show that there’s a different way to get the data we need to power AI models – an ethical one.
@TIME
TIME
1 year
TIME's new cover: The workers making AI possible rarely see its rewards. This Indian startup wants to fix that
Tweet media one
80
115
264
0
1
12
@AnkaReuel
Anka Reuel
10 months
Overview of AI-policy events at #NeurIPS2023 . Hope to see many of you there! Thanks @rajiinio for putting it together :)
@rajiinio
Deb Raji
10 months
Great to see so many policy-related events @NeurIPSConf this year! First is this tutorial, organized by @HodaHeidari , Dan Ho & Emily Black. The program for this is really well thought out - I'm sure it'll be an educational moment for many (+ excited to be on a panel for this)!
Tweet media one
3
18
120
0
0
12
@AnkaReuel
Anka Reuel
6 months
Excited for this year’s AI Index release on April 15 and especially the responsible AI chapter I worked on with a fantastic team over the last few months (esp. @nmaslej Loredana Fattorini @amelia_f_hardy and @StanfordHAI ). Mark your calendars 🗓️ #AIIndex2024
@StanfordHAI
Stanford HAI
6 months
What’s new and trending across the AI landscape? Find out on April 15 when @StanfordHAI publishes the #AIIndex2024 Report. Sign up for our newsletter to receive a copy:
Tweet media one
0
16
46
0
2
12
@AnkaReuel
Anka Reuel
2 months
Technical AI governance to me is about all of them working together. Governance ppl are often better equipped to understand impact, feasibility, and societal considerations. Technical ppl to understand technical implications and help build solutions. 5/
1
0
11
@AnkaReuel
Anka Reuel
2 months
6/ We've created this resource to help technical researchers find concrete ways to apply their skills to current governance challenges. Please consider sharing this post to reach more tech experts - we need more brilliant minds working at this intersection! Thanks <3
1
0
11
@AnkaReuel
Anka Reuel
2 months
@willie_agnew I see technical AI governance as a bridge to join forces, rather than siloing knowledge and pretending other things aren’t important. Happy to chat about how we can do that better!
@AnkaReuel
Anka Reuel
2 months
I consider myself someone working on topics relevant to technical AI governance. I do not agree with this statement, and I'm sad that such accusations are being made without evidence, or made at all. 🧵
4
4
58
0
1
11
@AnkaReuel
Anka Reuel
2 months
7/ I've led the project together with @ben_s_bucknall and am incredibly grateful for the opportunity to work with him and 29 brilliant contributors from academia, industry, and civil society. Their diverse expertise has made this paper a truly comprehensive resource.
1
0
11
@AnkaReuel
Anka Reuel
1 year
Comprehensive set of questions by the @FTC to OpenAI, investigating input data, personal data policies, risks from @OpenAI 's LLMs, incl. defamation of people and prompt injection attacks, and the safety and mitigation measures taken, incl. pre-deployment safety checks.
@kevinschawinski
Kevin Schawinski
1 year
The @FTC is investigating @OpenAI and the document outlining their questions is fascinating. 🧵Some highlights:
75
640
2K
0
2
11
@AnkaReuel
Anka Reuel
2 months
You can constructively criticize my work and point out how I can include societal considerations/people better, but why do you need to attack groups of people and imply that we use the term as a disguise to hide that we do not care about society? Because many of us do. 2/
1
0
11
@AnkaReuel
Anka Reuel
1 year
Love to see your efforts, @UNESCO , esp. in the Global South, to advance AI governance strategies. This should indeed be a global endeavor; it’s not only the G7 that need to work out how to handle AI on a national level.
@AIAfricaNetwork
Responsible AI Africa Network
1 year
UNESCO to support more than 50 countries in designing an Ethical AI Policy this year
1
5
12
1
1
11
@AnkaReuel
Anka Reuel
4 months
Seems like @OpenAI ’s ChatGPT and @AnthropicAI ’s Claude are down (and have been for ~20min). Anyone else experiencing this? What’s going on?
Tweet media one
Tweet media two
12
0
11
@AnkaReuel
Anka Reuel
1 year
To everyone I still owe a message to, that’s the reason. I’m sorry and I’m working on replying imperfectly and promptly rather than perfectly and never ❤️
@IanColdwater
Ian Coldwater 📦💥
1 year
sorry I haven't answered your message, it was important so I wanted to answer it when I could do it justice and then I got really embarrassed about how long it had taken me to answer and now it's scary
55
1K
5K
0
0
10
@AnkaReuel
Anka Reuel
1 year
@mmitchell_ai One would think this is a relatively easy fix yet it’s been years and years of advocating for more representation in dev teams and we’re making marginal improvements at best. Even involving the people who’ll be impacted by the tech during testing would already be helpful.
0
3
9
@AnkaReuel
Anka Reuel
1 year
We hear interdisciplinary calls for more safety regulations of LLMs across all fields. Dear decision makers, please listen to them and the call for effective #AIgovernance .
@NPCollapse
Connor Leahy
1 year
Very interesting paper by a group of CMU chemists testing a GPT4 based agent to do chemistry research and experimentation. Includes this interesting warning, among others. There sure seems to be some kind of writing on some kind of wall...
Tweet media one
38
132
636
3
1
10
@AnkaReuel
Anka Reuel
2 years
Exploring ChatGPT: what it is and why regulation is crucial. A thread. #ChatGPT #ResponsibleAI #AIGovernance
1
3
8
@AnkaReuel
Anka Reuel
4 months
Happy birthday @StanfordHAI 🎉
@landay
James Landay
4 months
Big day today for @StanfordHAI ! 5 years! Watch the live stream! @drfeifei @chrmanning
Tweet media one
4
6
103
0
2
9
@AnkaReuel
Anka Reuel
8 days
@typewriters @CarnegieEndow [Honest question] What would a bill that actually makes us safer look like to you?
6
0
9
@AnkaReuel
Anka Reuel
1 year
Seems like everyone wants to be part of a race where it’s unclear whether what’s behind the finish line is something we actually want.
@WSJ
The Wall Street Journal
1 year
Elon Musk has created a new artificial intelligence company called that is incorporated in Nevada
368
929
6K
0
0
9
@AnkaReuel
Anka Reuel
11 days
3/ If you have resources that we missed (or new work, including your own!), please let us know so that we can add it to our repository:
1
2
9
@AnkaReuel
Anka Reuel
2 months
All with the shared goal of responsibly developing, deploying, and using AI. 6/
1
0
8
@AnkaReuel
Anka Reuel
2 months
Yes, technical people aren’t always great at grasping the societal impact of our work or including perspectives during the dev/design of all people who are impacted by our work. We know we need to do better and we don’t claim to be the experts here. Quite the opposite. 3/
1
0
9
@AnkaReuel
Anka Reuel
2 months
I would like to better understand where your frustration is coming from. Happy to chat or hop on a call if you want to talk. I’m genuinely interested in ways I can be a better researcher and take society, people, and governance better into account. But I need help with that :) x/
2
0
9
@AnkaReuel
Anka Reuel
7 months
. @AIESConf is happening again! October 2024 in California, CFP not announced yet.
0
2
8
@AnkaReuel
Anka Reuel
6 months
@random_walker Which highlights another issue: that the big tech companies themselves are in charge of the definition and can adjust it as they see fit. Maybe that’s definitional capture? 😁
0
0
8
@AnkaReuel
Anka Reuel
1 year
Just in: The US’ own version of a voluntary code of conduct for frontier AI model developers, covering red teaming, information sharing, watermarking, cybersecurity measures, third party checking, reporting of societal risks and safety research.
@GaryMarcus
Gary Marcus
1 year
Important agreement btwn WH & AI leaders. Key omission: a requirement that companies disclose their data sets. As a society, we must insist on full data transparency, to compensate content creators, to combat bias, and to discern limits on models.
21
59
171
1
1
9
@AnkaReuel
Anka Reuel
6 months
Shout out to the wonderful and talented human beings who worked with me on the chapter and/or provided in-depth feedback - @nmaslej Loredana Fattorini, @amelia_f_hardy and Andrew Shi, @jackclarkSF , @vanessaparli , Raymond Perault, and Katrina Ligett!
0
0
8
@AnkaReuel
Anka Reuel
2 years
Blocking ChatGPT doesn’t solve the issue, good regulations do.
@mario_gug
Mario Guglielmetti
2 years
"BERLIN (Reuters) - Germany could follow in Italy's footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper in comments published on Monday."
0
32
46
1
3
8
@AnkaReuel
Anka Reuel
10 months
I’ll be at #NeurIPS from today onwards – hmu if you wanna grab coffee or have a paper to share with me for the Technical AI Ethics chapter of @StanfordHAI ’s 2024 AI Index! @indexingai #NeurIPS2023 #NeurIPS23
@AnkaReuel
Anka Reuel
10 months
📢 We're seeking impactful AI ethics/safety research from late 2022 to 2023 for inclusion in Stanford's 2024 #AIIndex . Submit your papers or nominate others’ work through our Google Form. Let's shape this chapter together!👇 2/
2
14
26
0
0
7
@AnkaReuel
Anka Reuel
6 months
Low-key feeling so humbled that our analysis from the Responsible AI chapter that I led in the @StanfordHAI 2024 AI Index was covered by the amazing @kevinroose in the @nytimes today 🤩 Must read if you want to know why current evaluations are not ideal!
0
2
8
@AnkaReuel
Anka Reuel
2 months
To your point on incomplete change theories: This is a different argument. I agree that they aren’t always thought through, and if they are not, they can cause harm. But is the solution to attack people – or try to work with them to help them improve? I’m pro working together. 7/
1
0
8