![Aviv Ovadya 🥦 Profile](https://pbs.twimg.com/profile_images/1590837844515454976/al6CDR-9_x96.jpg)
Aviv Ovadya 🥦
@metaviv
Followers: 9K
Following: 8K
Statuses: 5K
CEO, AI & Democracy Foundation. Ensuring democratic capacity can keep pace with AI advances. Harvard BKC, GovAI 📧 Email: [email protected] ➡️ https://t.co/KMmV46xFkJ
San Francisco (Not UK!)
Joined July 2011
tl;dr: Recently founded the AI & Democracy Foundation (AIDF) & we are hiring! Two especially critical roles: • Director of AI Strategy: Bring deliberative democracy to AI & AI to deliberation • Head of Ops: Lead & own everything needed to grow the org. Details below...
If we are to chart a safe course to a world with broadly beneficial AI, we will need governance of AI, with AI, through deliberation. Can we do this at sufficient speed & scale, in a human-centered way, while avoiding destructive conflict? I think yes.
Most recent context: > "We need fire drills for AI spearphishing for everyone."

This claim is intentionally a bit provocative, but it also comes from some justified frustration. There is so much that we (governments, tech companies, etc.) could have done, and that we advocated for. I gave a remote talk at DEF CON in *2018* pointing out the inevitability of this (others did too). Very little happened.

The one thing we have that might help is AI defense. But that requires an order of magnitude more infrastructure, especially to do it in a privacy-preserving way for those who can afford fancy new phones with sufficient onboard AI. It also requires new infrastructure which we haven't built.

Mass drills across every vulnerable communication channel might help. Not cheap, but far cheaper than monitoring all of those communication channels. Do we need a significant proportion of people to be scammed, hacked, heartbroken, destitute, radicalized, made into sleeper agents, etc. before we take action?

This might not be the only way forward, and I hate that we even have to consider it. I'd love a better way. Claude lists things like:
- Opt-in security training programs
- Better security infrastructure and tools
- Improved digital literacy education
- Collaborative efforts between tech companies and governments on security standards
- Privacy-preserving threat detection systems

All of this sounds good. But there are many, many people this won't help right now. We need education at the point of action: in the communication channel itself, the one that would be (mis)used. I think we might be in harm-reduction mode at this point while we get our act together.

Lots of risks here, and ethical quandaries; I agree. If this is unnecessary, I'd love for that to be true. But my current default stance is that we will regret not doing this. It could even potentially be required by law for those running communication channels. (We should *definitely* test and ensure it provides significant net benefit before scaling to everyone.)
Who is conducting ML-style evaluations for scaffolding designed for humans or groups of humans?

To be more precise: we can improve LLM performance on evaluations with scaffolding (caveats: this applies to some LLMs, some evaluations, and some types of scaffolding).
- To what extent does human performance also improve with similar or comparable scaffolding?
- How does this compare to LLM performance with scaffolding?
- Does any form of scaffolding generalize across different contexts?

More broadly, the question of how to create effective scaffolding that improves people's "capabilities" is one of the core challenges in areas like collective intelligence, collective decision-making, facilitation, organizational design, management, and more.

As a concrete example, I have facilitated groups and managed work by people and teams with very different strengths, work styles, and neurotypes, across a wide range of tasks. From my experience, it seems clear that fit-for-purpose scaffolding, tailored to the specific person/group and task, often plays a critical role in performance. A relatively small number of people can access most of their intelligence through self-scaffolding (and personal tool use), but most people likely operate at only a fraction of their true capability without external support.

When it comes to groups, effective scaffolding becomes even more challenging and more variable. Many groups demonstrate collective intelligence far worse than that of most or all of their individual members, whereas it seems likely that, with the right scaffolding, those same groups could perform far better.

One underlying motivation for this question is to determine just how much excellent scaffolding can improve capabilities.
- By analogy, can you take a person or group operating roughly at the level of GPT-3 and help them reach GPT-4 performance with the right scaffolding?
- How does that change when you implement different notions of equality across the group?

Sometimes you can improve the capabilities of one LLM by using other LLMs or ML models to do or support the scaffolding.
- How much can we apply this approach to support humans?

If we want groups of people to deliberate effectively, navigate tough decisions, identify trade-offs, find common ground, and make informed decisions, and we think that scaffolding or facilitation can be useful, then it would be valuable to characterize just how effective this support could be.
- Where are the limits?
- How much might AI-driven facilitation or scaffolding help in these contexts?

There are many questions here. I'm especially curious to hear any answers: what work has already been done here? (Both new AI-focused work, e.g. the recent Google DeepMind paper by @mhtessler, @bakkermichiel, et al., which explores one facet of these questions in a particular domain; and the existing literature, e.g. in psychology, where work along these lines has already been done under different terminology.)
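Not from the thread, but as a rough illustration of what an "ML-style evaluation for scaffolding" could mean in practice: the sketch below runs the same task set through a subject (an LLM, an individual, or a group answering jointly) with and without a scaffold and compares scores. Every name in it (`Subject`, `scaffolded_prompt`, `score`) is a hypothetical placeholder, not an existing benchmark or API.

```python
# Hypothetical sketch: compare a subject's performance on the same tasks
# with and without a given scaffold. The scaffold and metric here are toy
# placeholders; real evaluations would use rubrics, raters, or held-out keys.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, List, Tuple

@dataclass
class Subject:
    name: str                     # e.g. "gpt-4", "individual", "group-of-5"
    answer: Callable[[str], str]  # how this subject produces an answer

def scaffolded_prompt(task: str) -> str:
    # Placeholder scaffold: structure the task before it reaches the subject.
    return f"Break the problem into steps, then answer:\n{task}"

def score(answer: str, reference: str) -> float:
    # Placeholder metric: exact match.
    return 1.0 if answer.strip().lower() == reference.strip().lower() else 0.0

def evaluate(subject: Subject, tasks: List[Tuple[str, str]], scaffold: bool) -> float:
    results = []
    for task, reference in tasks:
        prompt = scaffolded_prompt(task) if scaffold else task
        results.append(score(subject.answer(prompt), reference))
    return mean(results)

if __name__ == "__main__":
    tasks = [("What is 17 * 24?", "408")]            # toy task set
    subjects = [Subject("toy-subject", lambda p: "408")]
    for s in subjects:
        base = evaluate(s, tasks, scaffold=False)
        scaf = evaluate(s, tasks, scaffold=True)
        print(f"{s.name}: unscaffolded={base:.2f} scaffolded={scaf:.2f}")
```

The point of the sketch is only that the scaffolded and unscaffolded conditions are run through the same harness, so the same comparison could in principle be made for an LLM, a single person, or a facilitated group.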
@jeremyphoward @Miles_Brundage @deanwball What does that look like concretely here to you? (I ask that as someone working specifically on developing such resilient societal structures!)
What part of it/at what level of abstraction/with what goal? This encompasses a significant fraction of all research, and the kinds of insights that are most useful at the scale of several humans are often very different from those at the scale of nations or economies. The literature on understanding it is also often divorced from that which is focused on addressing specific subparts.
@jachiam0 @ketanr @NeelNanda5 @BlancheMinerva @ericneyman Are there specific changes that you would like to see made? Or is the entire direction unhelpful from your perspective?
RT @democraticAI: We are hiring! We are far more likely to be able to address the challenges of AI if we invest in, develop, and adopt proc…
@Meta Even worse, consider the alternative to open models—permanently locking them behind APIs that can (hopefully🤞) mitigate harms. This could significantly reduce freedom and agency, and many kinds of *good* competition! This also creates immense (dangerous?) power concentration.