![Akshay Jagadeesh Profile](https://pbs.twimg.com/profile_images/1258153717448101893/irBdE-cA_x96.jpg)
Akshay Jagadeesh
@akjags
Followers: 637 · Following: 853 · Statuses: 447
computational cognitive neuroscientist. postdoc at harvard studying visual perception in brains and machines. previously: phd in neuro+psych @ stanford. he/him.
Boston, MA
Joined May 2009
RT @SergeyStavisky: New brain-computer interface preprint led by PhD student Tyler Singer-Clark! This video of a man with paralysis accura…
0 replies · 66 retweets · 0 likes
RT @bryan_johnson: New result: The first iPSC-derived corneal stem cells (iCEPS) reverse vision loss. The novelty here is that the cells we…
0 replies · 32 retweets · 0 likes
RT @amrahS_inolaS: Super pumped to announce that our new work “Face cells encode object parts more than facial configuration of illusory f…
0 replies · 21 retweets · 0 likes
RT @Derek_Vis_Nerd: Preprint: It's been suggested that Aphants may be able to visualise, but lack insight, as they can do Mental Rotation (…
0 replies · 6 retweets · 0 likes
RT @SussilloDavid: Is there a name for the hypothesis that task-trained neural networks and biological brains develop similar representatio…
0 replies · 17 retweets · 0 likes
RT @vineettiruvadi: The *action* space is what matters. Successful actions can tell you, compositionally, in situ insights about the syste…
0 replies · 2 retweets · 0 likes
@RylanSchaeffer If these grid-like properties are actually important "fundamental properties", shouldn't they manifest as explained variance in the biological data? Otherwise, how do we know they are not simply epiphenomenal?
0 replies · 0 retweets · 3 likes
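A minimal sketch of the test this question implies, assuming synthetic stand-ins for both the model features and the neural recordings: regress the model's grid-like features against measured responses and check how much cross-validated variance they explain. An R² near zero would be the epiphenomenal outcome.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: responses of one recorded neuron to 500 stimuli,
# and the same stimuli passed through a model with 50 "grid-like" units.
n_stimuli = 500
model_features = rng.standard_normal((n_stimuli, 50))   # hypothetical model activations
neural_response = rng.standard_normal(n_stimuli)        # hypothetical recorded responses

# Cross-validated explained variance: if the model's features carry no
# signal about the biology, R^2 will hover around (or below) zero.
reg = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2 = cross_val_score(reg, model_features, neural_response,
                     cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.3f}")
```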
Sounds like they’re going straight for cortical stimulation… very curious how well this will work, given everything we know about the complexities of the cortex.
> The Blindsight device from Neuralink will enable even those who have lost both eyes and their optic nerve to see. Provided the visual cortex is intact, it will even enable those who have been blind from birth to see for the first time. To set expectations correctly, the vision will at first be low resolution, like Atari graphics, but eventually it has the potential to be better than natural vision and enable you to see in infrared, ultraviolet or even radar wavelengths, like Geordi La Forge. Much appreciated, @US_FDA!
0 replies · 0 retweets · 1 like
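For intuition about the "Atari graphics" claim, here is a toy simulation, assuming the crudest possible model of a cortical implant: an electrode grid that renders one brightness value per electrode. Real phosphene maps in visual cortex are irregular and distorted, which is exactly the complexity flagged above; the function name and grid size are made up for illustration.

```python
import numpy as np

def simulate_phosphene_view(image: np.ndarray, grid: int = 32) -> np.ndarray:
    """Block-average a grayscale image down to a grid x grid 'phosphene' map,
    a crude stand-in for what a grid of stimulating electrodes might convey."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    image = image[: bh * grid, : bw * grid]  # crop to a multiple of the grid
    blocks = image.reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3))  # one brightness value per electrode

# Toy usage with a random "scene"; with grid=32 you get roughly
# Atari-era resolution no matter how sharp the input is.
scene = np.random.default_rng(1).random((256, 256))
print(simulate_phosphene_view(scene, grid=32).shape)  # (32, 32)
```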
RT @ziruichen44: Why do varied DNN designs yield equally good models of human vision? Our preprint with @michaelfbonner shows that diverse…
0 replies · 44 retweets · 0 likes
RT @kfrankelab: New study @CellReports🥳 Using recordings & #deepnet driven image synthesis we find an intriguing difference btw🐭&🐒 visual…
0 replies · 42 retweets · 0 likes
RT @BenucciLab: New paper! Great work by @BolanosFederico, @orlandiResearch and our collaborators at @StanfordPsych,…
0 replies · 7 retweets · 0 likes
@JeanRemiKing @BenchetritYoha1 @hubertbanville @AIatMeta If the decoded output differs from the true image, does that reflect a failure of the reconstruction model, or reveal that the neural representation is non-veridical? Conversely, if the decoded output is perfect, is that because of (e.g., natural image) priors in the generative model?
1 reply · 0 retweets · 1 like
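One way papers in this space try to pull those two explanations apart is pairwise identification scoring: a reconstruction only gets credit if it sits closer to its own target than to distractors in some feature space, so a strong natural-image prior that merely produces plausible pictures gains nothing. A schematic sketch with placeholder feature vectors (nothing here comes from the Meta AI work):

```python
import numpy as np

def identification_accuracy(decoded: np.ndarray, targets: np.ndarray) -> float:
    """Pairwise (2-AFC) identification: for each decoded feature vector,
    the fraction of distractor targets it is farther from than its own target."""
    # Cosine distances between every decoded item and every candidate target.
    d = decoded / np.linalg.norm(decoded, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    dist = 1.0 - d @ t.T                      # (n_items, n_items)
    own = np.diag(dist)[:, None]              # distance to the true target
    wins = (dist > own).sum(axis=1)           # distractors beaten
    return wins.mean() / (len(targets) - 1)   # chance level = 0.5

# Placeholder features (e.g., embeddings of decoded vs. ground-truth images).
rng = np.random.default_rng(2)
true_feats = rng.standard_normal((100, 512))
decoded_feats = true_feats + 0.8 * rng.standard_normal((100, 512))  # noisy decodes
print(f"identification accuracy: {identification_accuracy(decoded_feats, true_feats):.2f}")
```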
@martisamuser @ShahabBakht Shape can also be decoded with very high accuracy from ImageNet-trained CNNs (see Hermann et al., NeurIPS 2020). The most interesting difference between feedforward ImageNet-trained CNNs and IT cortex, in my view, is not the representation but rather the readout.
1 reply · 0 retweets · 3 likes
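The representation-versus-readout point can be made concrete with a linear probe: freeze an ImageNet-trained network and ask whether a simple linear readout of its penultimate features recovers shape, in the spirit of Hermann et al. A hedged sketch using torchvision's ResNet-50 with placeholder images and shape labels (illustrative, not the paper's actual pipeline):

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Frozen ImageNet-trained backbone; replace the classifier head so the
# forward pass returns 2048-d penultimate features (the "representation").
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()
backbone.eval()

# Placeholder batch standing in for a shape-labeled image set
# (e.g., the same silhouettes under varying texture).
images = torch.randn(64, 3, 224, 224)
shape_labels = torch.randint(0, 4, (64,)).numpy()  # hypothetical 4 shape classes

with torch.no_grad():
    feats = backbone(images).numpy()

# The "readout": a linear probe on frozen features. High held-out accuracy
# would mean shape is linearly decodable even if the network never uses it.
probe = LogisticRegression(max_iter=1000).fit(feats[:48], shape_labels[:48])
print("held-out probe accuracy:", probe.score(feats[48:], shape_labels[48:]))
```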
@martisamuser @ShahabBakht Stay tuned for the revision which I hope will address concerns about low-level (color, orientation) confounds with texture!
1 reply · 0 retweets · 2 likes
RT @_reachsumit: Is Cosine-Similarity of Embeddings Really About Similarity? Netflix cautions against blindly using cosine similarity as a…
0 replies · 392 retweets · 0 likes
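The core of that caution is easy to reproduce: in a regularized matrix factorization, a per-dimension rescaling of one factor (with the inverse rescaling on the other) leaves every model prediction unchanged but moves cosine similarities between embeddings arbitrarily. A small NumPy demonstration, with randomly generated factors standing in for learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(3)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A factorization model: predictions = U @ V.T. Any invertible diagonal D
# gives an equivalent model (U @ D) @ (V @ inv(D)).T with identical outputs.
U = rng.standard_normal((5, 8))            # e.g., item embeddings
V = rng.standard_normal((10, 8))           # e.g., user embeddings
D = np.diag(rng.uniform(0.1, 10.0, 8))     # arbitrary per-dimension rescaling

U2 = U @ D
V2 = V @ np.linalg.inv(D)

# Same model behavior...
assert np.allclose(U @ V.T, U2 @ V2.T)

# ...but cosine similarity between items is not preserved:
print("before rescaling:", cosine(U[0], U[1]))
print("after rescaling: ", cosine(U2[0], U2[1]))
```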