Redwood Center for Theoretical Neuroscience

@Redwood_Neuro

1K Followers · 40 Following · 37 Tweets

The Redwood Center for Theoretical Neuroscience is a group of faculty, postdocs and graduate students at UC Berkeley. Account run by students.

Berkeley, CA
Joined January 2018
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 months
New work published by Adrianne Zhong in the Redwood Center's DeWeese group! 🥳🥖
@DeWeeseLab
DeWeese Lab
6 months
Very excited to share our new paper, out in PRL today as an Editors’ Suggestion! We unify thermodynamic geometry 🔥 and optimal transport 🥖, providing a coherent picture of optimal minimal-work protocols for arbitrary driving strengths 💪 and speeds ⏱️!
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
8 months
Thank you for visiting!
@chklovskii
Dmitri "Mitya"
8 months
My recent talk at UC Berkeley: Neurons as feedback controllers. Thanks Bruno Olshausen for inviting me to the terrific Redwood Institute for Computational Neuroscience via @internetarchive
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
8 months
Bruno Olshausen will be speaking tomorrow (Monday) at 1:30 PDT: "Neural computations for geometric reasoning". Abstract:
@shiryginosar
Shiry Ginosar
8 months
Join us next week at our intelligence workshop @SimonsInstitute! Schedule: Register online for both in-person and streaming. FANTASTIC lineup of speakers: 1/3
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
2 years
RT @neur_reps: NeurIPS is here! Come check out the NeurReps Workshop on Sat Dec 3, ft. invited talks by ⭐️ @TacoCohen ⭐️ Irina Higgins @Dee
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
2 years
RT @TrentonBricken: Super super excited for my first day as a visiting researcher at @Redwood_Neuro. Will be here and living in the Bay are…
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
3 years
Congrats! We’re very excited about this workshop @NeurIPSConf
@naturecomputes
Sophia Sanborn @ NeurIPS
3 years
Our workshop on Symmetry and Geometry in Neural Representations has been accepted to @NeurIPSConf 2022! We've put together a lineup of incredible speakers and panelists from 🧠 neuroscience, 🤖 geometric deep learning, and 🌐 geometric statistics.
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
4 years
If you haven't already, scooch on over to our Seminars page to check out a recent talk by Jeff Hawkins and Subutai Ahmad on the Thousand Brains theory of intelligence:
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
4 years
Exciting new research from recent RCTN grads @DylanPaiton, @charles_irl, and others on how neuroscience-inspired recurrent sparse-coding layers for image processing confer increased selectivity and robustness to adversarial inputs
@DylanPaiton
Dylan Paiton
4 years
Hey twitter, check out my new paper! We investigate how competition through network interactions can lead to increased adversarial robustness and selectivity to preferred stimulus. paper: explainer 🧵: 1/8
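For a rough feel of the "competition through network interactions" idea in the tweet above, here is a minimal sparse-coding sketch (my own toy illustration via ISTA, not the paper's actual model or dictionary): coefficients compete through the `D.T @ D` coupling, so only a few dictionary elements stay active.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.01, n_iter=200):
    """Infer sparse coefficients a such that x ~= D @ a via ISTA.

    The D.T @ (x - D @ a) term couples coefficients that share
    dictionary elements, so they compete to explain the input.
    """
    eta = 1.0 / np.linalg.norm(D, 2) ** 2      # step size from spectral norm
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + eta * (D.T @ (x - D @ a))      # gradient step on reconstruction
        a = np.sign(a) * np.maximum(np.abs(a) - eta * lam, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary elements
a_true = np.zeros(32)
a_true[[3, 17]] = [1.0, -0.5]                  # two active sources
x = D @ a_true
a_hat = ista_sparse_code(D, x)                 # recovers a sparse code for x
```

The inhibitory coupling here is the feedforward analogue of the recurrent lateral interactions the paper studies; the mechanism (explaining away) is the same.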
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
4 years
RT @charles_irl: new paper with @DylanPaiton and others in @Redwood_Neuro now out in @ARVOJOV -- connecting ideas in visual neuroscience wi…
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
5 years
...resisted the urge to add thug lyfe sunglasses. Can correct later if necessary
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
5 years
Preprint spotlight: Charles Frye (@charles_irl) et al. demonstrate a surprising weakness in critical point-finding methods used to support the "no bad local minima" hypothesis re: neural net losses. See🧵for a quick explainer, or dive deep🏊‍♀️in the paper:
@charles_irl
Charles 🎉 Frye
5 years
There's a theory out there that neural networks are easy to train because their loss f'n is "nice": no bad local minima. Recent work has cast doubt on this claim on analytical grounds. In new work, we critique the numerical evidence for this claim. 🧵⤵
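As context for "critical point-finding methods": one standard numerical approach (a hedged toy sketch on a hand-picked quartic, not the paper's algorithm or loss) is to minimize g(θ) = ½‖∇L(θ)‖², whose zeros are critical points of L, saddles included.

```python
import numpy as np

# Toy loss L(x, y) = x^4 - 2x^2 + y^2: saddle at (0, 0), minima at (+-1, 0).
def loss_grad(p):
    x, y = p
    return np.array([4 * x**3 - 4 * x, 2 * y])

def loss_hess(p):
    x, _ = p
    return np.array([[12 * x**2 - 4, 0.0], [0.0, 2.0]])

def find_critical_point(p0, lr=0.01, n_steps=5000):
    """Gradient descent on g(p) = 0.5 * ||grad L(p)||^2.

    grad g = H(p) @ grad L(p); zeros of g are critical points of L,
    including the saddles that gradient descent on L itself avoids.
    """
    p = np.array(p0, dtype=float)
    for _ in range(n_steps):
        p -= lr * (loss_hess(p) @ loss_grad(p))
    return p

p_star = find_critical_point([0.2, 0.5])   # converges to the saddle at the origin
```

The preprint's point, as I read the thread, is that on real neural-net losses this kind of method can stall or converge to near-critical points that are not true critical points, which weakens the numerical case for "no bad local minima".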
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
5 years
A special day in the Redwood Center as we host the thesis seminar of Yubei Chen!
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
5 years
@uwcnc
UW Computational Neuroscience Center
5 years
Our weekly CompNeuro journal club is back up and running for the quarter. Today, we discussed Cheung et al’s preprint "Superposition of many models into one" from @berkeley_ai and @Redwood_Neuro. Led by AJ Kruse and Satpreet Singh.
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 years
Congratulations to Sylvia and Spencer who passed their qualifying exams this week! 🎉🎊
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 years
We are looking forward to hearing from our latest seminar speaker, @pkdouglas16, who is coming up from UCLA! Her talk on modeling latency and structural organization of connections in the brain is at noon tomorrow.
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 years
New preprint spotlight: Saeed Saremi and Aapo Hyvärinen develop a framework for density estimation and sampling based on a unification of kernel density estimation and empirical Bayes. You can read all about it on the arXiv:
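A minimal 1-D sketch of how kernel density estimation and empirical Bayes fit together (my own simplified illustration, not the paper's algorithm): estimate the noise-smoothed density with a Gaussian KDE, then apply the empirical Bayes (Tweedie) formula x̂ = y + σ²∇log p(y) to map a noisy sample back toward the data.

```python
import numpy as np

def smoothed_score(y, data, sigma):
    """grad log p(y) for the Gaussian-KDE estimate of the smoothed density.

    p(y) ~ mean_i N(y; x_i, sigma^2), so the score is a softmax-weighted
    average of (x_i - y) / sigma^2 over the data points.
    """
    diffs = data - y
    logw = -0.5 * (diffs / sigma) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.sum(w * diffs) / sigma**2

def empirical_bayes_denoise(y, data, sigma):
    # Tweedie's formula: posterior mean of x given y = x + sigma * noise.
    return y + sigma**2 * smoothed_score(y, data, sigma)

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=0.1, size=500)  # clean samples near 3
sigma = 1.0
y = 3.0 + sigma * rng.normal()                   # one noisy observation
x_hat = empirical_bayes_denoise(y, data, sigma)  # pulled back toward the data
```

Here the KDE plays the role of the smoothed density and Tweedie's formula supplies the denoiser; the preprint develops this connection far more generally.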
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 years
New preprint spotlight: Louis Kang and Mike DeWeese present an attractor network model of grid cells which exhibits replay and theta sequences. Sound interesting? Learn more here!
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 years
New preprint spotlight: Brian Cheung (@thisismyhat), Alex Terekhov, Yubei Chen (@Yubei_Chen), Pulkit Agrawal (@pulkitology) and Bruno Olshausen report a simple but effective technique for building neural networks that can solve many different tasks:
@Redwood_Neuro
Redwood Center for Theoretical Neuroscience
6 years
New preprint spotlight: Charles Frye (@charles_irl), Neha Wadia, Mike DeWeese, and Kris Bouchard tell us about how to estimate the critical points of a deep linear autoencoder, complete with gotchas, tricks, and how this might be extended to other models.