Mike Burnham

@ML_Burn

Followers: 753 · Following: 548 · Statuses: 620

Postdoc @PUPolitics, dual Ph.D. @psupolisci & @CSoDA_PSU. Text analysis & deep learning, methods, American politics and democratic accountability.

State College, PA
Joined May 2019
Mike Burnham (@ML_Burn) · 5 months
New Pre-print out today! We're releasing Political DEBATE -- a new set of language models for zero/few-shot classification of political text. The models are open source, small enough to run on your laptop, and as good as proprietary LLMs within domain.
[Image attached]
1 · 49 · 211
Mike Burnham (@ML_Burn) · 10 hours
So did DOGE actually uncover anything about USAID that we couldn’t have figured out through normal channels? Serious question.
0 · 0 · 3
Mike Burnham (@ML_Burn) · 3 days
Oh thank god. I was worried nobody was monitoring this.
Sam Stein (@samstein) · 3 days
Press Sec clarifies that Elon Musk will self-determine when there is a conflict of interest involving him and DOGE.
0 · 0 · 4
Mike Burnham (@ML_Burn) · 5 days
Yet another example of how many tech folks fundamentally don’t understand alignment. They literally think it’s whether or not the AI will do what you ask. I’m begging you all to please read a paper on this or at least take a machine learning course 😭
kache (@yacineMTB) · 5 days
AI safety will not exist because people will simply use a product that actually listens to them. It's over for the authoritarian heaven that these silicon valley freaks want us to live in. Do you think america is the only country in the world? Do you think that california is the only state?
0 · 0 · 3
Mike Burnham (@ML_Burn) · 5 days
AI researchers and VCs repeatedly give the impression that they haven’t thought seriously about what “alignment” means or the social implications of what they are building. It’s wild to see stuff like this from some of the most important people in this space.
Noam Brown (@polynoamial) · 5 days
This was a tweet deserving of more nuance, so let me put it here. Wikipedia has admirable principles, but a major challenge they face is aligning their hierarchy of editors to those principles. Even though it's not there yet, I think we will reach a point where models like Deep Research are able to do a better job and be more aligned than the editors. I think that will lead to a better Wikipedia, or to a competitor that is better.

I do agree there are some major concerns with that, so I think saying "that's fine" wasn't the right reaction. In particular, going down this route essentially means substituting capital for human labor (in this case, Wikipedia editors). The optimistic view of AI is that it will increase productivity and advance scientific progress, and I genuinely believe that, but the pessimistic view, which I also think is true, is that in some instances it will mean capital (GPUs) can substitute for human labor. Wikipedia editors are one example of this.

Relatedly, this means power will be concentrated among those who control the GPUs. I think these are real issues, and ways of dealing with them involve various tradeoffs.
0 · 1 · 11
Mike Burnham (@ML_Burn) · 5 days
@arthur_spirling Yes but I’m not ready to consider a woodworking career.
0 · 0 · 2
Mike Burnham (@ML_Burn) · 8 days
Me when I plot the data and realize I actually have to model two data generating processes.
Corwin 🔶🇪🇺🎄 (@HarrisDavey2024) · 9 days
Zero inflation isn't good actually
0 · 0 · 2
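The joke above refers to zero-inflated data: observed zeros come from two data generating processes (a "structural zero" process plus an ordinary count process), so a single count model misfits. A minimal numpy simulation, with illustrative parameters `pi` and `lam` chosen by me, shows the signature excess of zeros relative to a plain Poisson:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
lam = 2.0  # Poisson rate for the count process (illustrative)
pi = 0.3   # probability of a structural zero, the second DGP (illustrative)

# Mixture of two data generating processes: a Bernoulli "always zero"
# process and an ordinary Poisson count process.
structural_zero = rng.random(n) < pi
counts = rng.poisson(lam, size=n)
y = np.where(structural_zero, 0, counts)

# Observed share of zeros vs. what a single Poisson with rate lam predicts.
# The gap is the "zero inflation" a plot would reveal.
observed_zero_share = np.mean(y == 0)
poisson_zero_share = np.exp(-lam)
print(observed_zero_share, poisson_zero_share)
```

The theoretical zero share of the mixture is `pi + (1 - pi) * exp(-lam)`, noticeably larger than `exp(-lam)` alone, which is why such data calls for a two-part (e.g. zero-inflated or hurdle) model.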
Mike Burnham (@ML_Burn) · 8 days
Has anyone found a data annotation task where reasoning models (o1, o3, r1) provide a significant gain over autoregressive models (4o, sonnet 3.5, llama 3)? My experience has been that autoregressive models saturate these tasks. If anything, reasoners are worse.
1 · 0 · 9
Mike Burnham (@ML_Burn) · 8 days
@arpitrage @ben_golub I love this answer because it elegantly answers the question and highlights the fundamental difference between ML and stats. In both cases, inference means the plain dictionary definition. ML infers outcomes and stats infers parameters.
0 · 0 · 1
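The distinction in the tweet above (ML infers outcomes, stats infers parameters) can be made concrete with a toy linear regression; the data and variable names here are my own illustration, not from the original thread:

```python
import numpy as np

# Toy noise-free data generated from y = 1 + 2x, for clarity.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])

# "Stats inference": estimate the parameters (intercept, slope)
# by ordinary least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# "ML inference": use the fitted model to infer outcomes for new inputs.
x_new = np.array([4.0, 5.0])
y_pred = np.column_stack([np.ones_like(x_new), x_new]) @ beta

print(beta)    # parameters, ≈ [1.0, 2.0]
print(y_pred)  # predicted outcomes, ≈ [9.0, 11.0]
```

Same fitted object, two targets of inference: a statistician reports `beta`, an ML practitioner reports `y_pred`.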
Mike Burnham (@ML_Burn) · 9 days
Computer scientists yearn for the Cold War. Hegemonic stability theory in shambles.
(((ل()(ل() 'yoav))))👾 (@yoavgo) · 10 days
dario is afraid of a "bipolar future" where both china and the US have access to strong AI, and advocates a "unipolar world" in which only the US have such access. I am personally more worried of the unipolar version.
0 · 0 · 1
Mike Burnham (@ML_Burn) · 9 days
Shout out to my advisor @mjnelson7 who is one of the best people I know. @psupolisci and @CSoDA_PSU are fantastic programs.
1 · 1 · 12
Mike Burnham (@ML_Burn) · 12 days
@nataliemj10 Is anyone actually tracking this in a rigorous fashion? Seems like a slam dunk project for journalists.
0 · 0 · 0
Mike Burnham (@ML_Burn) · 12 days
[Image attached]
0 · 0 · 7
Mike Burnham (@ML_Burn) · 12 days
Honestly surprised by how strong the reaction to R1 has been, given that:
1. A preview of R1 has been publicly available since November.
2. This is, I think, the third or fourth time DeepSeek has released an open-source model far cheaper than competitors.
1 · 0 · 6
Mike Burnham (@ML_Burn) · 13 days
It is the distant future... the year 2027. GPT-o6 has solved engineering and science. One question remains that the computers can't answer: Who governs? The singularity looks bullish for the humanities and social sciences.
0 · 1 · 4
Mike Burnham (@ML_Burn) · 19 days
@JakeMGrumbach One of those things I’ve been stewing on for a while but haven’t gotten around to!
1 · 0 · 1
Mike Burnham (@ML_Burn) · 19 days
@JakeMGrumbach Agree. My guess is that part of the divide is based on whether or not people view retrospective voting as evidence of voter sophistication, which some of the lit. assumes. Easy to see that prices swung median/disengaged voters; harder to argue this is voter sophistication IMO.
1 · 0 · 4
Mike Burnham (@ML_Burn) · 21 days
@kamran_soomro @jeremyphoward This is just re-discovering the literature on benign overfitting is it not?
1 · 0 · 1