Kevin Miller Profile
Kevin Miller

@kevinjmiller10

Followers: 1,311
Following: 419
Media: 5
Statuses: 86

Computational cognitive neuroscientist. Research scientist at @DeepMind Neuroscience Lab. Researcher at CortexLab, UCL.

London, England
Joined August 2016
Pinned Tweet
@kevinjmiller10
Kevin Miller
1 year
Cognitive models of behavior are a key part of neuroscience. But discovering them is hard! Neural networks are powerful models. But interpreting them cognitively is hard! We explore automatically learning interpretable models using "Disentangled RNNs"
8
69
292
@kevinjmiller10
Kevin Miller
8 months
Applications are open for the @GoogleDeepMind Student Researcher Program! There will likely be projects available to work with me and with others in computational neuroscience. If interested, please feel free to get in touch! Learn more and apply here:
6
100
455
@kevinjmiller10
Kevin Miller
5 years
How do new habits form? Many computational models propose that actions become habits when they consistently lead to rewards. "Habits without Values" showcases a model in which actions become habits when they are repeated consistently (even if unrewarded).
4
32
81
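The contrast this tweet draws, habits strengthened by repetition rather than by reward, can be illustrated with a toy update rule. This is a minimal sketch of the general idea, not the paper's actual model; the function and parameter names are mine:

```python
import numpy as np

def update_habits(habit_strength, chosen_action, step_size=0.05):
    """Value-free habit update: the chosen action's habit strength
    grows toward 1 simply because it was taken; reward never enters."""
    target = np.zeros_like(habit_strength)
    target[chosen_action] = 1.0
    return habit_strength + step_size * (target - habit_strength)

# An agent that keeps repeating action 0, rewarded or not,
# gradually turns that action into a habit.
habits = np.zeros(2)
for _ in range(100):
    habits = update_habits(habits, chosen_action=0)

print(habits)  # habit strength for action 0 approaches 1; action 1 stays at 0
```

Note there is no reward signal anywhere in the update, which is the point of contrast with reward-based habit models.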
@kevinjmiller10
Kevin Miller
4 years
How do brains make multi-step plans? Neuroscience does not yet know! But it does have many interesting empirical clues and some solid theoretical ideas. Sarah Jo Venditto and I describe some of these and propose directions for future work:
1
13
63
@kevinjmiller10
Kevin Miller
7 years
The brain represents "expected value" of states and actions in the world, but what do these representations do? Our preprint argues that (at least in the OFC) they play an important role in learning, but no direct role in choice.
4
39
57
@kevinjmiller10
Kevin Miller
6 years
Excited to be organizing a workshop on Model-Based Cognition at #cosyne2018 with @kimstachenfeld, @basvanopheusden, and @roozbeh_kiani. We've got a great lineup of exciting speakers and topics; it should be a fun time!
0
7
25
@kevinjmiller10
Kevin Miller
1 year
I am excited about applying approaches like this to a broader range of datasets, and about using neural networks for scientific discovery more generally. Thanks to brilliant and inspiring co-authors @eckstein_maria, @mattbotvinick, and @zebkDotCom!
0
0
8
@kevinjmiller10
Kevin Miller
1 year
When trained on the rat datasets, these networks could provide better quality-of-fit than the best previously-known cognitive model. Fit quality was similar to that of an LSTM (a classic black-box RNN).
1
0
7
@kevinjmiller10
Kevin Miller
3 years
@JamesMHyman @guido_meijer Is this the kind of thing you're looking for? Nonsense Correlations in Neuroscience, by @kennethd_harris
1
1
7
@kevinjmiller10
Kevin Miller
5 years
See also our book chapter with Giovanni Pezzulo, surveying models of goal-directed and habitual behavior more broadly.
0
0
5
@kevinjmiller10
Kevin Miller
1 year
Two key features encourage these networks to learn interpretable models. First: each latent variable (element of the recurrent state) is updated by a separate sub-network. Second: information bottlenecks penalize retaining information that is not being used.
1
0
5
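The two ingredients described in the tweet above can be sketched in a few lines: one small sub-network per latent variable, plus per-latent noise standing in for an information bottleneck. Shapes, names, and the bottleneck form here are my assumptions for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

N_LATENTS, N_INPUTS, HIDDEN = 3, 2, 8

# One small MLP per latent: each element of the recurrent state is
# updated by its own sub-network, which reads the inputs and the full
# previous state but writes only its own latent variable.
def init_subnet():
    return {
        "w1": rng.normal(scale=0.1, size=(N_INPUTS + N_LATENTS, HIDDEN)),
        "w2": rng.normal(scale=0.1, size=HIDDEN),
    }

subnets = [init_subnet() for _ in range(N_LATENTS)]

# Per-latent noise scales stand in for the information bottleneck:
# a latent whose noise scale grows large carries no usable information,
# so a penalty favoring large scales prunes latents the model doesn't need.
sigmas = np.full(N_LATENTS, 0.1)

def step(state, inputs):
    features = np.concatenate([inputs, state])
    new_state = np.empty(N_LATENTS)
    for i, net in enumerate(subnets):
        hidden = np.tanh(features @ net["w1"])
        new_state[i] = hidden @ net["w2"]
    # Bottleneck: inject noise per latent (training would add a penalty
    # encouraging unneeded latents to become pure noise).
    return new_state + sigmas * rng.normal(size=N_LATENTS)

state = np.zeros(N_LATENTS)
for _ in range(5):
    state = step(state, inputs=rng.normal(size=N_INPUTS))
print(state.shape)  # (3,)
```

Because each latent has its own update network, an interpretable per-variable update rule can be read off after training, which is what makes the recovered model cognitively legible.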
@kevinjmiller10
Kevin Miller
1 year
When trained on the synthetic behavioral datasets, these networks successfully recovered the true cognitive structure of the handcrafted agents.
1
0
4
@kevinjmiller10
Kevin Miller
1 year
We fit these networks to large behavioral datasets from a classic reward learning task. We consider both synthetic datasets generated by handcrafted artificial agents and laboratory datasets generated by rats.
1
0
4
@kevinjmiller10
Kevin Miller
7 years
@twitemp1 @IrisVanRooij But: you shouldn't have to take my word for that, and you shouldn't have had to look up a previous paper to understand this one. I'll address both of these in the next preprint iteration. If you have more thoughts on the paper, I'd love to hear them (either here or by email)!
1
0
3
@kevinjmiller10
Kevin Miller
7 years
@twitemp1 @IrisVanRooij Hi Esther, thanks for pointing this out! I see you've found our previous paper, where we did a lot to characterize and validate the behavior. The rats in this paper are like those rats in all the ways that matter.
0
0
2
@kevinjmiller10
Kevin Miller
7 years
@KateWassum @A_Izquierdo1 @MelissaMalvaez @NTlichten @ashkatemorse Thanks! I followed the lead of Schoenbaum lab, and aimed for the border of LO and AIC ("lateral OFC")
1
0
2
@kevinjmiller10
Kevin Miller
4 years
Builds on previous work in econometrics and in the neuroscience of RL:
Yule, 1926. Why do we sometimes get nonsense correlations between time-series?
Elber-Dorozko & Loewenstein, 2018. Striatal action-value neurons reconsidered.
0
0
2
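The "nonsense correlations" phenomenon Yule described is easy to reproduce: two completely independent random walks routinely show large sample correlations, because slow drift mimics a shared trend. A minimal demonstration (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent random walks (integrated white noise).
n_steps = 1000
walk_a = np.cumsum(rng.normal(size=n_steps))
walk_b = np.cumsum(rng.normal(size=n_steps))

# Despite independence, slowly drifting time series often show
# large spurious correlations.
r = np.corrcoef(walk_a, walk_b)[0, 1]
print(f"correlation between independent walks: {r:.2f}")

# The standard remedy: correlate the increments, which are i.i.d.
r_diff = np.corrcoef(np.diff(walk_a), np.diff(walk_b))[0, 1]
print(f"correlation between increments: {r_diff:.2f}")
```

The same logic underlies the concern about slowly varying neural signals correlating with slowly varying behavioral regressors such as action values.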
@kevinjmiller10
Kevin Miller
1 year
@RanaAgastya Thanks! Code should be available soon -- stay tuned :)
0
0
1
@kevinjmiller10
Kevin Miller
7 years
@rei_akaishi @brody_lab Best data I know of here are from Matt Gardner at Schoenbaum lab: Silencing OFC in an economic choice task has no effect on behavior. I agree that checking this out in a one-step learning task would be super informative!
1
0
2
@kevinjmiller10
Kevin Miller
5 years
@rei_akaishi @twitemp1 The simulations we did here are all fully-observable, but I think the ideas can generalize straightforwardly to the partially-observable case. Simplest would probably be to model habits as direct S-R links from observations onto actions.
1
0
2
@kevinjmiller10
Kevin Miller
7 years
@QueenieLB @KateWassum Great question! I'd been imagining some kind of learning or update process during the test period (as per your other tweet). But this literature is definitely something I want to think about more deeply. Lots of differences between behaviors, so lots of possibilities!
0
0
2
@kevinjmiller10
Kevin Miller
4 years
...in an upcoming issue of COBS on computational cognitive neuroscience, edited by Geoff Schoenbaum and Angela Langdon (neither they nor Sarah Jo seem to be on Twitter). These articles are all great, and more are on their way!
1
0
1
@kevinjmiller10
Kevin Miller
7 years
@Neurosarda @PREreview_ Awesome! Looking forward to hearing your thoughts on it!
0
0
1
@kevinjmiller10
Kevin Miller
5 years
@MelissaJSharpe I do! In this paper, we model the S-R associations as value-free ("cached policy"), and argue that this has some advantages vs. modeling them as MF-RL's "cached values".
0
0
1