Peter N. Salib
@petersalib
Followers: 505 · Following: 799 · Statuses: 778
Assistant Professor of Law @UHLaw · AI, Risk, Constitution, Economics
Houston, TX
Joined November 2008
Pleased to share that my newest article, "AI Outputs Are Not Protected Speech," is forthcoming in @WashULRev. The article has, I think, important implications for impending federal and state laws designed to reduce catastrophic risk from AI.
Replies: 9 · Retweets: 37 · Likes: 129
RT @erikphoel: Probably the highest alpha IP in the next decade is whoever can do a TV series about the original Butlerian Jihad in Dune
Replies: 0 · Retweets: 6 · Likes: 0
RT @_damian_bot: @petersalib @newsbeagle This mirrors my take: Even the strong skeptics are getting concerned. "Long timelines" now means…
Replies: 0 · Retweets: 1 · Likes: 0
RT @joshua_clymer: Dario's recent blog post indicates that the amount of compute used for RL training will probably be scaled up by >100x t…
Replies: 0 · Retweets: 7 · Likes: 0
@binarybits This is interesting! I guess I'd also say it will be several years before Waymo reaches >80% of the population. But I'd likely say that building cars and getting regulatory approval are the bottleneck, and the models will be ready sooner (maybe now). Do you think that's wrong?
Replies: 1 · Retweets: 0 · Likes: 0
@binarybits As to the latter, I agree, but that could cut in either direction. As to the former, what do you mean by close? Waymo seems like a big deal. Do you not expect them to be nationwide in the next 5 years, barring regulatory blocks?
Replies: 1 · Retweets: 0 · Likes: 3
RT @ProfArbel: they should have added napster and photoshop. seriously how agi isn't number one
Replies: 0 · Retweets: 2 · Likes: 0
@davidduwaer @davidad @Gabe_cc I think we don't really know how much of the reasoning is reflected in the text stream. Yes, more test-time compute gets you better reasoning. But it doesn't follow that the tokens the extra compute produces represent most or all of the reasoning.
Replies: 0 · Retweets: 0 · Likes: 1
RT @emollick: $500B committed towards AGI, still no articulated vision of what a world with AGI looks like for most people. Even the huge e…
Replies: 0 · Retweets: 429 · Likes: 0
RT @Miles_Brundage: "AGI isn't here literally this second" is the new standard for being an AGI "pessimist"
Replies: 0 · Retweets: 29 · Likes: 0
@binarybits This seems correct as a prediction about the transition from unsupervised pretraining on existing text to RL as the most important training paradigm. Do you think the prediction that this transition implies a plateau and/or much slower progress holds up, given o1, o3, etc.?
Replies: 1 · Retweets: 0 · Likes: 0
RT @emollick: I want to emphasize the point Kevin is making. This prediction (AGI within next couple years) is a common timeline for insid…
Replies: 0 · Retweets: 65 · Likes: 0