Mike Burnham (@ML_Burn) — 753 followers · 548 following · 620 statuses
Postdoc @PUPolitics, dual Ph.D. @psupolisci & @CSoDA_PSU. Text analysis & deep learning, methods, American politics and democratic accountability.
State College, PA · Joined May 2019
Yet another example of how many tech folks fundamentally don’t understand alignment. They literally think it’s whether or not the AI will do what you ask. I’m begging you all to please read a paper on this or at least take a machine learning course 😭
AI safety will not exist because people will simply use a product that actually listens to them. It's over for the authoritarian heaven that these Silicon Valley freaks want us to live in. Do you think America is the only country in the world? Do you think that California is the only state?
AI researchers and VC’s repeatedly give the impression that they haven’t thought seriously about what “alignment” means or the social implications of what they are building. It’s wild to see stuff like this from some of the most important people in this space.
This was a tweet deserving of more nuance, so let me put it here. Wikipedia has admirable principles, but a major challenge it faces is aligning its hierarchy of editors to those principles. Even though it's not there yet, I think we will reach a point where models like Deep Research are able to do a better job and be more aligned than the editors. I think that will lead to a better Wikipedia, or to a competitor that is better.

I do agree there are some major concerns with that, so saying "that's fine" wasn't the right reaction. In particular, going down this route essentially means substituting capital for human labor (in this case, Wikipedia editors). The optimistic view of AI is that it will increase productivity and advance scientific progress, and I genuinely believe that. But the pessimistic view, which I also think is true, is that in some instances capital (GPUs) can substitute for human labor; Wikipedia editors are one example.

Relatedly, this means power will be concentrated among those who control the GPUs. These are real issues, and the ways of dealing with them involve various tradeoffs.
@arpitrage @ben_golub I love this answer because it elegantly answers the question and highlights the fundamental difference between ML and stats. In both cases, "inference" means the plain dictionary definition: ML infers outcomes and stats infers parameters.
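The outcomes-vs-parameters distinction in the tweet above can be sketched on a single least-squares fit. This is a minimal illustration (not from the thread), using a toy dataset and NumPy's `polyfit`; the variable names and numbers are my own assumptions.

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise (illustrative assumption)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.1, size=x.size)

# One ordinary least-squares fit...
slope, intercept = np.polyfit(x, y, deg=1)

# ...read two ways. Statistics infers the PARAMETERS:
# is the slope really about 2, the intercept about 1?
print(f"estimated slope={slope:.2f}, intercept={intercept:.2f}")

# ML infers the OUTCOME for unseen inputs:
# what y should we expect at x = 12?
y_hat = slope * 12 + intercept
print(f"predicted y at x=12: {y_hat:.2f}")
```

Same model, same fit; the two fields simply care about different objects coming out of it.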
Computer scientists yearn for the Cold War. Hegemonic stability theory in shambles.
Dario is afraid of a "bipolar future" where both China and the US have access to strong AI, and advocates a "unipolar world" in which only the US has such access. I am personally more worried about the unipolar version.
Shout out to my advisor @mjnelson7 who is one of the best people I know. @psupolisci and @CSoDA_PSU are fantastic programs.
@nataliemj10 Is anyone actually tracking this in a rigorous fashion? Seems like a slam dunk project for journalists.
@JakeMGrumbach One of those things I’ve been stewing on for a while but haven’t gotten around to!
@JakeMGrumbach Agree. My guess is that part of the divide is based on whether or not people view retrospective voting as evidence of voter sophistication, which some of the lit. assumes. It's easy to see how prices swung median/disengaged voters; harder to argue that this reflects voter sophistication, IMO.
@kamran_soomro @jeremyphoward This is just re-discovering the literature on benign overfitting, is it not?