Happy to share that beginning Jan 2024, I will be joining UCSD as an Assistant Professor in Cognitive Science! 🌟 I'm super excited to put together a collaborative team to explore the intersection of neuroscience and machine learning together!
1/ Very excited to be co-organizing this GAC focused on identifying key obstacles in NeuroAI's pursuit of comparing artificial and biological neural networks.
Very happy to share this! Check out the paper for the full body of evidence supporting neural selectivity for food, and a further discussion of what food selectivity might tell us about why the brain has the selectivities it does!
I'm psyched to share a new study led by
@meenakshik93
, with
@apurvaratan
collaborating: Data-driven methods applied to the NSD fMRI dataset reveal that the ventral visual pathway contains neural populations that respond highly selectively to images of food:
On a more personal note, my journey into CogSci is a bit of a blur – it took several detours and a late realization that I could actually forge a career from my curiosity about the mind. Looking back, I am grateful for every twist and turn that led me here!
Excited to present at
#OHBM2020
! We show how using joint information about auditory and visual stimuli during naturalistic stimulation leads to remarkable response prediction accuracies. Video walkthrough at . Work w/
@mertrory
@AmyKuceyeski
#OHBM2020Posters
There are 3 main research thrusts in the lab: (1) computational modeling to understand sensory information processing; (2) investigating how biological and artificial networks align in their representations; (3) applying neuro-inspired techniques to improve and understand AI models
Don't miss out on this exceptional opportunity to work with Ratan, who is not only an outstanding scientist but also a genuinely kind individual and a remarkable friend!
🎉 Exciting News! 🎉 I'm thrilled to announce that I will be joining
#GeorgiaTech
as an Assistant Professor starting Jan 1, 2024! I'm actively recruiting members to join the lab. So please spread the word! The lab website also goes live today.
We will build different kinds of computational models (descriptive, predictive, normative) to help explain the ‘what’, ‘how’ and ‘why’ of information processing in the brain, across domains such as vision, audition, language, and multimodal perception.
If our lab's mission speaks to you, please check out the openings and feel warmly encouraged to reach out :) For PhD students, applications are due Dec 1: see
Carmen was an exceptionally kind and wonderful human being. It was an honor to get to know her during graduate school. Please consider donating to support her family if you can.
I just received the devastating news that Carmen Khoo, a former graduate student who I was fortunate enough to work with, passed away recently. She was an amazing person and will be dearly missed. There’s a GoFundMe with additional details:
8/ An optimistic view: the lack of model differentiation is itself scientifically interesting. For example, representational convergence across brains and ANNs with different architectures (e.g., CNNs, transformers) may reflect the strength of the task constraints in narrowing down the solution set.
5/ One view: we lack tools sensitive enough to discriminate between models. In response, our goal is to establish a consensus on what invariances our metrics should have and what neural features (e.g. neural tuning/representational geometry) our tools should be sensitive to.
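To make the invariance question concrete, here is a toy sketch (my own illustration, not from the GAC) of linear CKA, one popular metric for comparing two sets of neural responses. By construction it is invariant to orthogonal transforms and isotropic scaling of either representation — which may or may not be the invariances we actually want:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two response matrices.

    X: (n_stimuli, n_units_a), Y: (n_stimuli, n_units_b).
    Returns a similarity score in [0, 1], invariant to orthogonal
    transforms and isotropic scaling of either representation.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-style numerator / normalizers via cross-covariance Frobenius norms
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))          # 100 stimuli x 50 units
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # random rotation
B = rng.standard_normal((100, 50))          # unrelated representation

print(linear_cka(A, A @ Q))  # ~1.0: rotations are invisible to CKA
print(linear_cka(A, B))      # noticeably lower (chance level depends on n, p)
```

A metric like this "sees through" any rotation of the representation; a tool sensitive to single-neuron tuning would not — exactly the kind of design choice the consensus effort is about.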
4/ Furthermore, current models/tools struggle to differentiate the computational rationale across diverse brain regions within a domain. Join us to explore why!
3/ For example, apparent distinctions in model architectures (e.g., conv nets vs. transformers, feedforward vs. recurrent nets) have limited impact on alignment with biological networks, across both vision and audition.
ctd. Thus, while the architectural form and precise learning objective may differ widely across current models, the emergent representations after learning perhaps do not.
If any of these ideas are of interest to you, please join us at the GAC workshop @ CCN or submit your feedback here: . We’d love to hear your thoughts!
@CogCompNeuro
@kasper_vinken
@apurvaratan
Thanks, great question! The claim is about neural populations subserving single mental processes. Our hypothesis is that the coarse fMRI resolution prevents access to these populations (“components”): different populations could still be nearby and overlap at the voxel scale.
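A toy simulation of that intuition (my own sketch, not from the paper, with purely illustrative numbers): a small food-selective population interleaved with a larger unselective one looks strongly selective at the neuron scale, but a voxel that averages over both populations shows heavily diluted selectivity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim = 200
is_food = np.arange(n_stim) % 2 == 0   # half the stimuli are "food"

# Two spatially interleaved populations sharing one voxel:
# 10 food-selective neurons and 90 unselective neurons
food_pop = 2.0 * is_food + 0.5 * rng.standard_normal((10, n_stim))
other_pop = rng.standard_normal((90, n_stim))

def selectivity(resp):
    """Mean (food - nonfood) response for one stimulus-wise time course."""
    return resp[is_food].mean() - resp[~is_food].mean()

# Population-level selectivity of the food-selective neurons alone
pop_sel = selectivity(food_pop.mean(axis=0))   # ~2.0: clearly food-selective

# A "voxel" averages all 100 neurons, mixing the two populations
voxel = np.concatenate([food_pop, other_pop]).mean(axis=0)
voxel_sel = selectivity(voxel)                 # heavily diluted

print(pop_sel, voxel_sel)
```

The selective signal is still physically present inside the voxel, which is why data-driven decomposition can recover it as a latent component even though no single voxel looks cleanly food-selective.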
6/ A second view: we are limited by our hypotheses, and all of the current models under consideration are essentially wrong. To make progress, we need entirely new candidate models; e.g., we may need to move to active artificial systems trained through interactive learning.
2/ Comparative analysis of AI models and the brain is widely used for understanding perception and high-level cognition. Although initial strides driven by extensive datasets and architectural advancements are promising, recent trends suggest a potential plateau.