Our paper got accepted at #CVPR2023! (w/ @yu_takagi)
We modeled the relationship between human brain activity (early/semantic areas) and Stable Diffusion's latent representations and decoded perceptual contents from brain activity ("brain2image").
How much does it cost to train a state-of-the-art foundational LLM?
$4M.
Facebook's 65B LLaMA trained for 21 days on 2048 Nvidia A100 GPUs. At $3.93/hr on GCP, that's a total of ~$4M.
Google's 540B PaLM was trained on 6144 TPU v4 chips for 1200 hrs. At $3.22/hr, that's a total of ~$24M.
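The arithmetic behind both estimates is just chips × wall-clock hours × hourly price. A minimal sketch using the tweet's quoted on-demand rates (real costs vary with committed-use discounts, preemption, and cluster overhead):

```python
def training_cost(num_chips: int, hours: float, price_per_chip_hour: float) -> float:
    """Total cost = chips x wall-clock hours x hourly price per chip."""
    return num_chips * hours * price_per_chip_hour

# LLaMA-65B: 2048 A100 GPUs for 21 days at $3.93/GPU-hr (GCP on-demand)
llama_cost = training_cost(2048, 21 * 24, 3.93)

# PaLM-540B: 6144 TPU v4 chips for 1200 hrs at $3.22/chip-hr
palm_cost = training_cost(6144, 1200, 3.22)

print(f"LLaMA-65B: ~${llama_cost / 1e6:.1f}M")  # ~$4.1M
print(f"PaLM-540B: ~${palm_cost / 1e6:.1f}M")   # ~$23.7M
```

Note these are compute-rental costs only; they exclude failed runs, ablations, data collection, and staff.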
Transformers are Sample Efficient World Models
“With the equiv. of 2 hours of gameplay…our approach sets a new SOTA for methods without lookahead search, and even surpasses MuZero.”
Love the simplicity of this approach!
pdf
code
From @TomNakai and @NishimotoShinji: Modelling based on fMRI data obtained during more than 100 different cognitive tasks reveals that representation and decoding are preserved across the cortex, cerebellum, and subcortex 🧠
First preprint from our group! Language models and brain alignment: beyond word-level semantics and prediction
Led by fantastic PhD student Gabriele Merlin 🧙‍♂️
Thread below
1/n
The Algonauts Project 2023 Challenge is now live!
Join and build a computational model that best predicts how the human brain responds to complex natural scenes 🧠💻
⏲️ Submission deadline: 26th of July
#algonauts2023 @CCNBerlin @cvnlab @goetheuni
DreamDiffusion: Generating High-Quality Images from Brain EEG Signals
paper page:
The paper introduces DreamDiffusion, a novel method for generating high-quality images directly from brain electroencephalogram (EEG) signals, without the need to translate…
Our paper on mental image reconstruction from human brain activity, led by Naoko Koide-Majima and @majimajimajima, was published in Neural Networks.
We combined Bayesian estimation and generative AI to visualize imagined scenes from human brain activity.
With the InstructGPT paper we found that our models generalized to follow instructions in non-English even though we almost exclusively trained on English.
We still don't know why.
I wish someone would figure this out.
This achievement was made possible thanks to various open source and open data projects, including:
Stable Diffusion: @robrombach @StabilityAI …
Natural Scenes Dataset (NSD): @cvnlab Thomas Naselaris
Follow-up technical paper to our #CVPR2023 paper (). Investigated how different methods affect the performance of visual experience reconstruction. The figure shows three randomly selected images generated by each method.
@BraydonDymm @yu_takagi It is possible to apply the same technique to brain activity during sleep, but the accuracy of such an application is currently unknown.
Our new pre-print is out today! We demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work.
We just posted a new version of our paper showing the emergence of abstract representations in feedforward neural networks trained on multiple classification tasks.
A brief thread highlighting the new pieces: 🧵1/5
I'm really excited to share @MedARC_AI's first paper since our public launch 🥳
🧠👁️ MindEye!
Our state-of-the-art fMRI-to-image approach that retrieves and reconstructs images from brain activity!
Project page:
arXiv:
We used BARseq in situ sequencing to identify genes in ***1.2 million neurons*** throughout the mouse brain
We found that cortical areas with similar cell types are also interconnected. We call this “wire-by-similarity.”
1/n
🥳 Our latest paper is out today in Nature Neuro 🥳
Introducing the Neuro-stack, a wearable platform that records human single-neuron activity during walking 🚶🏻‍♀️
We’ve identified the Mind-Body Interface, a novel distributed network within human primary motor cortex that disrupts the famous—but incorrect—motor homunculus, and that exhibits strong connections to high-level control networks. Preprint here:
All NeuroImage and NeuroImage: Reports editors have resigned over the high publication fee and are starting a new non-profit journal.
This comes with great regret and a huge amount of thought and discussion; please read the announcement for more details.
Assuming Einstein's contribution is steady-state, can the trend in his citation count be used to normalize the value of citations for each year? (e.g., a citation in 2000 would be 3.5 times more precious than one in 2020).
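The proposal above can be sketched in a few lines. The per-year citation counts below are hypothetical, chosen only to reproduce the tweet's 3.5× figure; a real analysis would pull Einstein's actual yearly counts from a bibliometric database:

```python
# HYPOTHETICAL per-year citation counts for Einstein. The premise: if his
# "true" contribution is constant, any growth in his yearly count reflects
# citation inflation rather than new merit.
einstein_citations = {
    2000: 4_000,   # hypothetical
    2010: 8_000,   # hypothetical
    2020: 14_000,  # hypothetical
}

def citation_weight(year: int, reference_year: int = 2020) -> float:
    """Value of one citation in `year`, in units of reference-year citations."""
    return einstein_citations[reference_year] / einstein_citations[year]

# With these made-up counts, a citation in 2000 counts 3.5x a 2020 one.
print(citation_weight(2000))  # 3.5
```

One caveat the "steady-state" assumption hides: Einstein's count also moves with shifts in which of his topics are fashionable, so a basket of long-dead authors would give a more stable deflator than any single one.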
Today, we announce Ego-Exo4D, the largest and most diverse multi-view dataset, showing human experts around the world performing a core set of skilled activities, w/ unprecedented multi-modality, novel video-language resources, and rich annotations
At long last, Dr. @sara_poppop's paper on aligned visual and linguistic semantic representations is out!
I want to briefly explain the context for Sara's work, and why I think this is the most important science that I've ever been a part of ⤵️
Our new study is out today in Nature! We demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work. 1/3
Glad to share our latest work out in @Nature today.
We show that human cortical #organoids transplanted into the somatosensory cortex of newborn rats develop mature cell types, integrate into sensory circuits, and can influence reward-seeking behavior.
We are finally ready to share our Le Petit Prince fMRI Corpus (LPPC–fMRI), a multilingual resource for research in the cognitive neuroscience of speech and language!
Data and annotations:
Paper:
🚨 BIG DATA RELEASE 🚨 We are beyond excited to announce the release of our Brain Wide Map of neural activity during decision making! It consists of 547 Neuropixels recordings of 32,784 neurons across 194 regions of the mouse brain 🐭🧠
(1/7)
New Research: Voxel-Based State Space Modeling Recovers Task-Related Cognitive States in Naturalistic fMRI Experiments: Complex natural tasks likely recruit many different functional brain networks, but it is difficult to predict how such…
#Neuroscience
Short 🧵 on our new @nature article with #SeaHeroQuest, @antoine_coutrot et al.
Entropy of city street networks linked to future spatial navigation ability
Video: footage from two game levels with different levels of entropy in the paths (L: high, R: low)
How to track local microscopic neurovascular responses and systemic blood-flow changes over the entire brain at the same time? Functional Ultrasound Localization Microscopy performs such neuroimaging at the micron scale. @PhysMedParis article in @naturemethods
New preprint w/ @Chris_I_Baker: we scanned people watching memories they recorded w/ @1SecondEveryday from up to 7 yrs ago and find a memory content map w/ subregions for memory age, strength, and content (people & place) info in medial parietal cortex!
Think text-to-image is cool?
Tons of research came out in the last 2 months that will blow your mind:
🗨️ speech-to-text
🔊 text-to-audio
🔊 audio-to-audio
⏯️ text-to-video (2x)
🕺 text-to-motion (2x)
🧊 text-to-3d (3x)
🧠 brain-to-text (2x)
Check these out 👇
BRAIN Initiative Cell Census Network (BICCN) published 17 papers in this week's Nature showing combinations of transcriptome/epigenetics, atlas/connectome, mouse/marmoset/human motor cortex: