Nasir Ahmad Profile
Nasir Ahmad

@nasiryahm

Followers: 294
Following: 576
Statuses: 179

Nijmegen, Netherlands
Joined May 2016
@nasiryahm
Nasir Ahmad
3 months
RT @1000brainsproj: 🚀 We’re thrilled to announce the launch of our open-source code! Read the docs here: https://t…
0 replies · 22 retweets · 0 likes
@nasiryahm
Nasir Ahmad
3 years
RT @artcogsys: We (Nasir Ahmad; @nasiryahm , Ellen Schrader & Marcel van Gerven @marcelge) are proud to share our recent paper: Constraine…
0 replies · 17 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
RT @ChristianPehle: Happy that our work on Backpropagation in Spiking Neural Networks is published! We show that it is possible to compute…
0 replies · 129 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
RT @artcogsys: Postdoctoral Researcher on SNN Robot Control at the Donders Centre for Cognition via @Radboud_Uni
0 replies · 8 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
RT @ellis_nijmegen: Apply now and join us in exploring brain-inspired computing!
0 replies · 1 retweet · 0 likes
@nasiryahm
Nasir Ahmad
4 years
RT @TimKietzmann: New paper & resource alert: our ecoset project is finally out and you are all invited to use it. A thread. 1/n https://t.…
0 replies · 80 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
@thisismyhat @TimKietzmann The sign of the input to the system will then determine the form of the prediction: for positive inputs you would have negative feedback, for negative inputs you would have positive (cancelling) feedback.
0 replies · 0 retweets · 0 likes
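(A minimal numeric sketch of the sign argument in the reply above, assuming a toy scalar unit whose prediction cancels its input; the function name and gain parameter are illustrative, not from the thread.)

```python
# Toy illustration (assumed, not from the thread): a prediction that cancels
# its input always carries the opposite sign of that input, so the feedback is
# negative for positive inputs and positive for negative inputs.
def cancelling_feedback(x, prediction_gain=1.0):
    """Return the feedback that cancels input x, and the residual (toy model)."""
    prediction = prediction_gain * x   # the prediction tracks the input
    feedback = -prediction             # cancelling feedback = minus the prediction
    residual = x + feedback            # what remains after cancellation
    return feedback, residual

for x in (2.0, -2.0):
    fb, res = cancelling_feedback(x)
    print(f"input={x:+.1f}  feedback={fb:+.1f}  residual={res:+.1f}")
# input=+2.0  feedback=-2.0  residual=+0.0
# input=-2.0  feedback=+2.0  residual=+0.0
```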
@nasiryahm
Nasir Ahmad
4 years
@thisismyhat @TimKietzmann Hmmm, interesting thought -- I scratched my head over it! But multiplying the loss argument by -1 will change the derivative wrt the weights. If we minimised your loss (whilst still providing the network with a positive input), it would provide reinforcing feedback, not inhibiting feedback!
1 reply · 0 retweets · 0 likes
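(A rough numeric sketch of the sign-flip point in the reply above. The actual loss from the thread is not visible here, so a simple activity-energy term stands in: flipping the sign of the term being minimised flips the direction the feedback weight is driven, turning cancelling feedback into reinforcing feedback.)

```python
# Hedged toy example (not the thread's actual model): a scalar unit receives a
# positive input x plus feedback w*x. Minimising an activity "energy" h^2 drives
# the feedback weight toward cancellation (w -> -1); minimising the sign-flipped
# term -h^2 drives it the other way, i.e. reinforcing feedback.
x = 1.0    # positive input, as assumed in the thread
lr = 0.1   # gradient-descent step size (illustrative)

def final_feedback_weight(sign, steps=20):
    w = 0.0                           # feedback weight, starts neutral
    for _ in range(steps):
        h = x + w * x                 # unit activity = input + feedback
        grad = sign * 2.0 * h * x     # d/dw of sign * h^2
        w -= lr * grad                # gradient descent on the chosen term
    return w

print("minimise +h^2:", final_feedback_weight(+1.0))  # approaches -1 (cancelling)
print("minimise -h^2:", final_feedback_weight(-1.0))  # grows positive (reinforcing)
```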
@nasiryahm
Nasir Ahmad
4 years
Check it out! The predictive coding architecture (error and prediction units) simply emerges if you constrain energy (unit output and weight magnitudes) in an RNN model. Super fun work, it was awesome to be part of the team 🤸
@TimKietzmann
Tim Kietzmann
4 years
New preprint alert! We show that predictive coding is an emergent property of input-driven RNNs trained to be energy efficient. No hierarchical hard-wiring required. A thread: 1/
0 replies · 1 retweet · 8 likes
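(A minimal sketch of the kind of training objective described in the tweet above, assuming a vanilla RNN trained on next-step prediction with penalties on unit activity and weight magnitudes; layer sizes, coefficients, and the task are illustrative assumptions, not the paper's actual setup. Minimising a loss of this shape is the setting in which the tweet reports error- and prediction-like units emerging.)

```python
# Hedged sketch: an input-driven RNN whose loss adds "energy" penalties on unit
# output and on weight magnitudes, alongside a simple prediction task.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 3, 16, 3, 10        # illustrative sizes

W_in  = rng.normal(scale=0.1, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))

def energy_regularised_loss(x_seq, y_seq, l_act=1e-2, l_w=1e-3):
    """Task loss plus penalties on unit output and weight magnitudes (toy)."""
    h = np.zeros(n_hid)
    task_loss, activity_energy = 0.0, 0.0
    for x_t, y_t in zip(x_seq, y_seq):
        h = np.tanh(W_in @ x_t + W_rec @ h)       # input-driven recurrent step
        y_hat = W_out @ h
        task_loss += np.mean((y_hat - y_t) ** 2)  # e.g. next-step prediction
        activity_energy += np.mean(h ** 2)        # penalise unit output magnitude
    weight_energy = sum(np.sum(W ** 2) for W in (W_in, W_rec, W_out))
    return task_loss / T + l_act * activity_energy / T + l_w * weight_energy

x_seq = rng.normal(size=(T, n_in))
y_seq = np.roll(x_seq, -1, axis=0)                # next-step prediction targets
print("energy-regularised loss:", energy_regularised_loss(x_seq, y_seq))
```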
@nasiryahm
Nasir Ahmad
4 years
@FleurZeldenrust An instructor I worked with used to teach the Software Carpentry lessons (freely available and Creative Commons licensed). Not a textbook, but more "follow-along guides" for learning the key tools in research computing :) Perhaps helpful! @swcarpentry
1 reply · 0 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
Ooooft. The worst so far: Me: "we developed a biologically plausible learning algorithm ..." System: "we developed a biologically *impossible* learning algorithm ..." ... burned by the machine!
0 replies · 0 retweets · 5 likes
@nasiryahm
Nasir Ahmad
4 years
RT @TimKietzmann: Are you using DNNs in your work? Then our new paper may be of interest to you: "Individual differences among deep neural…
0 replies · 99 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
RT @hisspikeness: I'm accepting applications for PhD students in computational neuroscience who have a keen interest in understanding infor…
0 replies · 46 retweets · 0 likes
@nasiryahm
Nasir Ahmad
4 years
@lexfridman This might crack you up haha -- recently listened to you and Sheldon Solomon talking about death, Ernest Becker, plus more. Captivating conversation --
1 reply · 0 retweets · 1 like
@nasiryahm
Nasir Ahmad
4 years
RT @ostojic_srdjan: During my physics undergrad, I have never heard of Singular Value Decomposition (SVD). Why? Almost all matrices in…
0 replies · 95 retweets · 0 likes
@nasiryahm
Nasir Ahmad
5 years
RT @LBNaumann: 👇 Registration for this year's free and virtual #BernsteinConference is open! 🙌 If you're a junior researcher (master/grad s…
0 replies · 17 retweets · 0 likes
@nasiryahm
Nasir Ahmad
5 years
Really neat idea and the performance looks incredible (even for very sparse nets)! Love the easily digestible blog post explanation btw
@hisspikeness
Friedemann Zenke
5 years
#ICML2020 paper on finding trainable sparse network topologies with neural tangent kernels. Joint work with @tianlinliu0121. Paper: Poster: (containing today's Q&A session schedule) Blog:
0 replies · 0 retweets · 1 like