![Ben Brooks Profile](https://pbs.twimg.com/profile_images/1864180804600905728/KGCG2Esj_x96.jpg)
Ben Brooks
@opensauceAI
Followers: 2K · Following: 727 · Statuses: 416
Fellow @ the Berkman Klein Center, Harvard. Regulatory advocacy ex-Stability AI (weights), GoogleX (drones), Uber (rides), Coinbase (magic beans). Views my own
United States
Joined April 2023
As California prepares for a final vote on SB 1047, I wrote a piece for @techpolicypress explaining why – despite nine waves of amendments – SB 1047 continues to threaten open-source AI. Check it out here:

While Senator Wiener maintains that the bill “does nothing to stifle the power of open-source,” SB 1047 requires the impossible from open-source developers. Indeed, no language in the bill expressly prohibits open technology. But that is precisely why proposals like SB 1047 are so troubling: legislation can restrict open innovation indirectly without attracting the same scrutiny as proposals that do so explicitly.

The EU took three years and thousands of amendments attempting to balance effective AI oversight with open innovation. On its current trajectory, California's legislature will have taken less than six months to bring open innovation to a grinding halt.
@sweatystartup Still better than the original, i.e. an armor-plated ice cream truck optimized for minimum fuel efficiency, zero star NCAP rating, and no clearance.
Also, if we unpack "privacy" and stop pretending it has canonical meaning—folks care about completely different rights in data. Collection, use, offshoring, retention, deletion, consent, 3P doctrine, etc. These aren't equal, they don't all matter to Joe / Jane Citizen, and they warrant different kinds of intervention.
I think reasonable minds can disagree about whether there's actually a problem here. But I don't think any camp wants more kumbaya platitudes, which is the only thing these forums ever produce. Folks need to read the room. Some New START-style outcome was never seriously on the table, and even less so now. France has been framing this as "not about safety" since Bletchley Park; this kind of messy drafting process is par for the course in international forums (especially since the heavyweight jurisdiction, the US, just underwent a dramatic change in government); and if we actually want to hold firms to their commitments, that's a job for a regulator—in the short term, a competition regulator—not an ersatz G20.
The second explanation doesn't really make sense, unless it needs the height for the reserve chute, but the first one does. I imagine it has a huge impact on energy and noise. Awesome to see.

> “It doesn’t have to spend energy station-keeping, descending, and then ascending again.”

> “That let us move from having the Zip 50 feet up, to having it 300 feet up, which is important because it’s a big, heavy drone that we don’t want in our customer’s space.”
Every faction is stuck in 2022 talking points. That said, it's unbelievable (and increasingly ridiculous) how much airtime is devoted to x-risk versus e.g. labor displacement / transition. Ideally, we'd be able to grapple with all these issues at the same time. But in practice, it's sucking up all the oxygen in the room.
@AdamThierer 🙏 It's troubling to see the Overton window shifting on open tech and open research
So where is this heading? According to @TIME, groups like @ai_ctrl (or their survey respondents) want the @AISafetyInst to have regulatory authority over model development by requiring pre-release authorization. IMHO, that is the last thing the UK needs right now.
@VitalikButerin Folks on both sides of the aisle would argue that the yawning disconnect between a career political class and the everyday constituent isn't so great after all.
My piece in @thehill today explains why making it a crime to download @deepseek_ai weights or release Llama 4 isn't just unconstitutional—it's a Bad Idea™.