![Nasir Ahmad Profile](https://pbs.twimg.com/profile_images/744895931280142336/4Qpp9mRF_x96.jpg)
Nasir Ahmad
@nasiryahm
Followers: 294 · Following: 576 · Statuses: 179
RT @1000brainsproj: 🚀 We're thrilled to announce the launch of our open-source code; read the docs here: https://t…
Replies: 0 · Retweets: 22 · Likes: 0
RT @artcogsys: We (Nasir Ahmad; @nasiryahm , Ellen Schrader & Marcel van Gerven @marcelge) are proud to share our recent paper: Constraine…
Replies: 0 · Retweets: 17 · Likes: 0
RT @ChristianPehle: Happy that our work on Backpropagation in Spiking Neural Networks is published! We show that it is possible to compute…
Replies: 0 · Retweets: 129 · Likes: 0
RT @artcogsys: Postdoctoral Researcher on SNN Robot Control at the Donders Centre for Cognition via @Radboud_Uni
Replies: 0 · Retweets: 8 · Likes: 0
RT @TimKietzmann: New paper & resource alert: our ecoset project is finally out and you are all invited to use it. A thread. 1/n https://t.…
Replies: 0 · Retweets: 80 · Likes: 0
@thisismyhat @TimKietzmann The sign of the input to the system will then determine the form of prediction: for positive inputs to the system you would have negative feedback, and for negative inputs you would have positive (cancelling) feedback.
Replies: 0 · Retweets: 0 · Likes: 0
@thisismyhat @TimKietzmann Hmmm, interesting thought -- I scratched my head over it! But multiplying the loss argument by -1 will change the derivative wrt the weights. If we minimised your loss (whilst still providing the network with a positive input) it would provide reinforcing feedback, not inhibiting feedback!
Replies: 1 · Retweets: 0 · Likes: 0
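A minimal sketch of the point in this exchange (PyTorch; the scalar setup and squared energy term are illustrative assumptions, not the loss discussed in the thread): minimizing an energy over the summed input and feedback drives the feedback weight to cancel a positive input, while flipping the sign of the feedback term inside the energy drives it to reinforce the input instead.

```python
# Hypothetical toy setup: one input, one feedback weight, squared energy.
import torch

x = torch.tensor(1.0)                      # positive input to the system
w = torch.tensor(0.0, requires_grad=True)  # feedback weight
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = (x + w * x) ** 2                # energy of input plus feedback
    loss.backward()
    opt.step()
print(w.item())  # ~ -1.0: negative (cancelling) feedback

w = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = (x - w * x) ** 2                # feedback term sign-flipped
    loss.backward()
    opt.step()
print(w.item())  # ~ +1.0: positive (reinforcing) feedback
```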
Check it out! The predictive coding architecture (error and prediction units) simply emerges if you constrain energy (unit output and weight magnitudes) in an RNN model. Super fun work; it was awesome to be part of the team 🤸
New preprint alert! We show that predictive coding is an emergent property of input-driven RNNs trained to be energy efficient. No hierarchical hard-wiring required. A thread: 1/
Replies: 0 · Retweets: 1 · Likes: 8
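A minimal sketch of the kind of objective described above (PyTorch): a task loss plus energy penalties on unit outputs and weight magnitudes. The next-step prediction task, network size, and penalty strengths are illustrative assumptions, not the preprint's actual setup.

```python
# Energy-constrained, input-driven RNN: task loss + unit-output and
# weight-magnitude penalties (all hyperparameters are illustrative).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)
params = list(rnn.parameters()) + list(readout.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.sin(torch.linspace(0.0, 20.0, 200)).reshape(1, 200, 1)  # toy drive

for step in range(500):
    opt.zero_grad()
    h, _ = rnn(x[:, :-1])                        # unit outputs over time
    pred = readout(h)                            # next-step input prediction
    task = ((pred - x[:, 1:]) ** 2).mean()       # prediction error
    e_units = (h ** 2).mean()                    # constrain unit outputs
    e_weights = sum((p ** 2).sum() for p in params)  # constrain weights
    loss = task + 1e-2 * e_units + 1e-4 * e_weights
    loss.backward()
    opt.step()
```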
@FleurZeldenrust An instructor I worked with used to teach the Software Carpentry lessons (freely available and Creative Commons). Not a textbook, but more "follow-along guides" for learning the key tools in research computing :) Perhaps helpful! @swcarpentry
Replies: 1 · Retweets: 0 · Likes: 0
RT @TimKietzmann: Are you using DNNs in your work? Then our new paper may be of interest to you: "Individual differences among deep neural…
Replies: 0 · Retweets: 99 · Likes: 0
RT @hisspikeness: I'm accepting applications for PhD students in computational neuroscience who have a keen interest in understanding infor…
Replies: 0 · Retweets: 46 · Likes: 0
@lexfridman This might crack you up haha -- recently listened to you and Sheldon Solomon talking about death, Ernest Becker, and more. Captivating conversation --
Replies: 1 · Retweets: 0 · Likes: 1
RT @ostojic_srdjan: During my physics undergrad, I never heard of Singular Value Decomposition (SVD). Why? Almost all matrices in…
Replies: 0 · Retweets: 95 · Likes: 0
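For anyone meeting SVD as late as the thread above describes, a quick generic worked example (numpy, not from the thread): any matrix factors as A = U @ diag(s) @ Vt, and truncating the singular values gives the best low-rank approximation.

```python
import numpy as np

A = np.random.randn(5, 3)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vt)   # exact reconstruction

A1 = s[0] * np.outer(U[:, 0], Vt[0])         # best rank-1 approximation
print(np.linalg.norm(A - A1))
```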
RT @LBNaumann: 👇 Registration for this year's free and virtual #BernsteinConference is open! 🙌 If you're a junior researcher (master/grad s…
Replies: 0 · Retweets: 17 · Likes: 0
Really neat idea, and the performance looks incredible (even for very sparse nets)! Love the easily digestible blog post explanation, btw
#ICML2020 paper on finding trainable sparse network topologies with neural tangent kernels. Joint work with @tianlinliu0121. Paper: Poster: (containing today's Q&A session schedule) Blog:
Replies: 0 · Retweets: 0 · Likes: 1
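The tweet doesn't spell out the method, but as a hedged illustration of the general idea (scoring a sparse candidate topology by how closely its empirical neural tangent kernel matches the dense network's), here is a sketch. The architecture, random masking, and mismatch score are assumptions for illustration, not the paper's algorithm.

```python
import torch
import torch.nn as nn

def empirical_ntk(model, x):
    """K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta> for a scalar-output net."""
    params = list(model.parameters())
    rows = []
    for xi in x:
        out = model(xi.unsqueeze(0)).sum()
        grads = torch.autograd.grad(out, params)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)   # (n_points, n_params) Jacobian
    return J @ J.T          # empirical NTK Gram matrix

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(8, 10)
K_dense = empirical_ntk(net, x)

# Sparsify: keep ~10% of the first layer's weights via a random mask.
with torch.no_grad():
    mask = (torch.rand_like(net[0].weight) < 0.1).float()
    net[0].weight *= mask
K_sparse = empirical_ntk(net, x)

# Candidate topologies could be compared (or optimized) on this mismatch.
print(torch.linalg.norm(K_dense - K_sparse))
```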