![Richard Lange Profile](https://pbs.twimg.com/profile_images/1405903663139201027/ObfJhiR7_x96.jpg)
Richard Lange
@wrongu
Followers
263
Following
1K
Statuses
177
Postdoc with @Kordinglab, interested in the intersection of neuroscience, AI, and philosophy.
Philadelphia, PA
Joined January 2012
@_dsevero @cjmaddison Now you get to add a corollary and thank the reviewer for the inspiration. The theorem implies that for all eps > 0, f is within eps of g...
0
0
0
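The joke corollary can be written out explicitly. A minimal sketch, assuming (as the tweet suggests) that f and g are real-valued and the bound holds pointwise:

```latex
% Hypothetical statement of the corollary; f, g, eps are the tweet's symbols.
\begin{corollary}
If for every $\varepsilon > 0$ and every $x$ we have $|f(x) - g(x)| < \varepsilon$,
then $f = g$.
\end{corollary}
\begin{proof}
Fix $x$. Then $|f(x) - g(x)|$ is a nonnegative real number smaller than every
$\varepsilon > 0$, hence it equals $0$, so $f(x) = g(x)$.
\end{proof}
```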
@ke_li_2021 Oops - I missed that when skimming. Sounds like it fits with some philosophical ideas on counterfactual computational results as a test of "representation." Nice!
0
0
0
@rmnpogodin @arna_ghosh @gauthier_gidel @g_lajoie_ @tyrell_turing Great work! At first I was puzzled that the NE potential is convex for w>0 or w<0 but is nonconvex around w=0, while (6) relies on it being convex. But maybe that's a feature, not a bug, of the theory? Eq (7) preserves the sign of w, so exp grad respects Dale's law for free - neat!
1
0
2
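For illustration of the sign-preservation point (my sketch, not the paper's exact eq. (7)), a minimal exponentiated-gradient-style update in numpy: because the update is multiplicative with a strictly positive factor, each weight keeps its initial sign, which is why Dale's law comes for free.

```python
import numpy as np

def eg_step(w, grad, lr=0.1):
    """One exponentiated-gradient-style step (illustrative sketch, not
    eq. (7) from the paper). The multiplier exp(.) is strictly positive,
    so sign(w) never changes: excitatory weights stay excitatory,
    inhibitory weights stay inhibitory."""
    return w * np.exp(-lr * np.sign(w) * grad)

w = np.array([0.5, -1.2, 2.0, -0.1])   # mixed excitatory/inhibitory weights
g = np.array([1.0, -0.5, 0.3, 0.8])    # an arbitrary gradient
w_new = eg_step(w, g)
# signs of w_new match signs of w regardless of the gradient
```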
RT @mattbotvinick: I've written down some purely personal thoughts about AI and what remains special about the human mind. I hope these wil…
0
79
0
@aliceschwarze I made the switch from Mendeley recently. Importing to a Zotero library seems much smoother, especially with the browser plugin.
0
0
1
Remember kids: never plot or inspect your data. That way all of your results can be zero-shot learning!
Remember how Google claimed their system could translate English to Bengali without ever having seen that pair? And we were like, wait, how do you know that it hasn't? Yeah. Below is evidence that that was a lie: Bengali (BN) in bilingual & translation settings was in the training data.
0
0
4
@RNogueiraNeuro This work on V1 coarse orientation discrimination just came out! There's a lot of data collected during training that didn't make it into this first paper.
1
0
2
@patrickmineault This work is great and we have been extending it! Elevator pitch: once you start talking about *distance* between neural reps, you can also start talking about *directions*. We can use tools from geometry to talk about how reps are transformed.
1
0
6
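To make the "distances between neural reps" idea concrete, here is a hedged illustration (my own sketch, not necessarily the metric used in the work being discussed): a rotation-invariant Procrustes-style distance between two response matrices, which is zero exactly when the two representations differ only by an orthogonal transform.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Rotation-invariant distance between two representations
    (rows = conditions/trials, columns = neurons). After centering and
    Frobenius-normalizing, computes min over orthogonal Q of
    ||Xn - Yn @ Q||_F. Illustrative sketch only; the metric in the
    paper may differ."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xn = Xc / np.linalg.norm(Xc)
    Yn = Yc / np.linalg.norm(Yc)
    # Optimal orthogonal alignment comes from the SVD of Yn^T Xn;
    # the minimized squared distance is 2 - 2 * (sum of singular values).
    s = np.linalg.svd(Yn.T @ Xn, compute_uv=False)
    return float(np.sqrt(max(0.0, 2.0 - 2.0 * s.sum())))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
R, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # a random orthogonal matrix
d_same = procrustes_distance(X, X @ R)        # ~0: same rep up to rotation
d_diff = procrustes_distance(X, rng.normal(size=(20, 5)))  # > 0
```

Once representations live in a metric space like this, "directions" of change between them become meaningful too, which is the elevator pitch above.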
@cian_neuro Based on past performance, 3+ years. But deep down I will forever believe that the next one will take two weeks.
0
0
5
RT @patrickmineault: Neuromatch soft-launched its mastodon instance last week. We already have a great local feed, you should totally join…
0
30
0
@rachel_kurchin Check out the Neuromatch course materials! Videos + exercises embedded in Colab. This works best with a flipped-classroom arrangement, so students can go through the notebooks, including some exercises, ahead of class time.
0
0
1
RT @GalaxyKate: declaring a new term: A Bach Faucet is a situation where a generative system makes an endless supply of some content at or…
0
269
0
RT @KordingLab: Who of the famous people in the field would be ready to publicly talk about their non-rigorous research in the past? I thin…
0
43
0
@micahgoldblum Testing my understanding: this is hard constraints (EmpiricalRM) vs. soft complexity penalties (RegularizedRM). Less complex (lower-norm) functions take fewer samples to learn even if your hypothesis space is huge, and VC/Rademacher depend only on the size of the space, right?
0
0
1
@DimitrisPapail A very similar line of questioning (how to think about composing "simple steps" toward implementing some complex function) motivated this: I still have more questions than answers and would be interested to hear what you find out!
0
0
1
@gershbrain These papers draw some really nice links between the prior predictive, posterior predictive, cross-validation, and model comparison:
0
2
12
@KordingLab @SuryaGanguli Credit where credit is due! AlHazen was ahead of his time. Not sure if he counts as part of the "direct/transparent" link that @SuryaGanguli asked about. Did Helmholtz know about AlHazen? Did the authors of the Helmholtz machine? Or were ideas rediscovered?
1
0
1