![Fatih Dinc Profile](https://pbs.twimg.com/profile_images/1325968086315524096/Cu7b6afc_x96.jpg)
Fatih Dinc
@fatihdin4en
Followers
3K
Following
2K
Statuses
674
Theoretical neuroscience + explainable AI. Moving to @KITP_UCSB and @geometric_intel as a postdoc. PhD in Applied Physics @stanford.
Santa Barbara, CA
Joined November 2020
In my final year as a PhD student, at this #SfN23 two of my PhD papers will be presented: SOTA tools to process and analyze brain-wide neural recordings. These tools utilize convex optimization and support real-time interventions on thousands of neurons and BMIs. 1/5
4
11
119
RT @geomstats: Our Python module "Information Geometry" is available! With: the Fisher-Rao Riemannian manifolds of probability distributi…
0
58
0
RT @ninamiolane: Happy New Year! Thrilled to start 2025 with a $1M award from @cziscience @ChanZuckerberg to study the maternal brain usin…
0
20
0
@IsmailGumustop In its current state, likely no. That being said, a good Master's experience plus excellent letters will likely go a long way.
1
0
1
Don't waste your best years chasing things of the past. Set a vision and follow it. If you cannot, surround yourself with people more insightful until you can. For many of us, this involves getting a PhD. Anti-PhD sentiment on X/Twitter is getting out of hand
the best indicator for whether you should do a PhD is that you're hell-bent on doing it regardless of what others say. if you are the kind of person who constantly looks for reasons not to do the PhD (e.g., people on twitter saying LLMs obviate the need for a PhD), don't do a PhD
0
1
43
RT @EkdeepL: Paper alert! *Awarded best paper* at NeurIPS workshop on Foundation Model Interventions! We analyze the (in)abilities of…
0
85
0
Billy, I have been fitting these models for over two years now, so I am one of the main audiences you want to convince, and I am telling you this is not convincing. Each method has a different definition for lambda, so setting them all equal to the same value is not how you should choose them. You are making a claim based on a toy model, and I am letting you know what type of methodology would be convincing and realistic. It is in your interest to show it; no need to argue with me. I will follow up with an email later on, but I do consider myself a friend of @CPehlevan and his lab. I hope this information will help you focus on the scientific concerns. The spirit of the conference, and hopefully why you posted these tweetprints, is to receive feedback that you can incorporate into the final version in January. NeurIPS lets you update the paper after the conference for this exact reason.
0
0
0
Yeah, no. Then you would do splits with sliding windows. This is really an overfitting problem. For the final version, I would appreciate it if you could put an asterisk on CORNN noting that it is used outside its domain of validity (without regularization). Or better, don't label it as CORNN, as one of its key components has been taken off.
4
0
0
Well, that's why one should use cross-validation and multiple trials. I don't think anyone would be surprised that these models overfit when trained on a single trial, nor is this how we use them in experiments. Happy to chat offline or share a relevant chapter from my thesis. Also cc'ing @CPehlevan, as he asked me about details on applications before. Otherwise, good work and congratulations!
1
0
0
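The cross-validation point above can be sketched with a toy walk-forward scheme (all data and function names here are hypothetical, not from CORNN or the paper under discussion): train on one sliding window, test on the block that follows it, and average the held-out error across folds, so a model that merely memorizes one trial is exposed.

```python
import numpy as np

# Hypothetical toy setup: T timepoints of smooth "neural activity" traces;
# we fit a ridge regression to predict the next timestep, a stand-in for
# fitting recurrent dynamics to recorded data.
rng = np.random.default_rng(0)
T, N = 600, 5
X = rng.standard_normal((T, N)).cumsum(axis=0)  # random-walk-like traces

def ridge_fit(A, B, lam):
    """Closed-form ridge regression: argmin_W ||A W - B||^2 + lam ||W||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

def sliding_window_cv(X, train_len=100, test_len=50, lam=1.0):
    """Walk-forward splits: train on a window, score on the next block."""
    errs = []
    for start in range(0, len(X) - train_len - test_len, test_len):
        tr = X[start : start + train_len]
        te = X[start + train_len : start + train_len + test_len]
        W = ridge_fit(tr[:-1], tr[1:], lam)          # one-step prediction
        errs.append(np.mean((te[:-1] @ W - te[1:]) ** 2))
    return float(np.mean(errs))

err = sliding_window_cv(X)
```

Because each test block lies strictly after its training window, this respects the temporal ordering of the recording, unlike a random shuffle of timepoints.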
@b_qian_ Spurious limit cycles often mean insufficient regularization. Reproduced from the public code with CORNN; I simply reinstated the regularization term that was set to effectively zero in the original paper.
2
0
0
RT @geometric_intel: It's December in Santa Barbara... which means it's the best time for a lab beach volleyball tournament! With burritos…
0
5
0
I met Christian last year at NeurIPS, and I can personally attest that he is one of the most brilliant and kind people out there. He has been doing transformative work ever since. Check it out!!!
Excited to talk with the community about what we've been working on! Co-founding @newtheoryai with @colin_odonnell to build new foundational architectures for intelligence. Come join us at #NeurIPS in Vancouver this week for our soft-launch event!
0
3
11
@gunesyagizisik As I said, we have not yet made an announcement about the fellowship itself. We will let everyone know publicly once the details are finalized, around January/February.
0
0
1
RT @akenginorhun: Just $1.5k allowed me to participate in life-changing research at Stanford, which then led me to beat the 1% acceptance…
0
1
0
RT @baricandinc: As an undergrad, financial barriers held me back from doing research many times. Today, many talented students from Turkey…
0
7
0
Any work by @xavierjgonzalez is bound to impress! Looking forward to using this in my own research!
Did you know that you can parallelize *nonlinear* RNNs over their sequence length!? Our @NeurIPSConf paper "Towards Scalable and Stable Parallelization of Nonlinear RNNs" introduces quasi-DEER and ELK to parallelize ever larger and richer dynamical systems! [1/11]
0
1
16
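A toy sketch of the fixed-point view behind parallelizing nonlinear RNNs (this plain Jacobi sweep is only an illustrative stand-in, not the paper's quasi-DEER/ELK Newton-style updates): treat the whole hidden trajectory as the fixed point of the recurrence and update every timestep simultaneously until the iterates converge to the sequential solution.

```python
import numpy as np

# Toy nonlinear RNN: h_t = tanh(W h_{t-1} + x_t), with h_0 = 0.
rng = np.random.default_rng(2)
T, N = 50, 8
W = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)
X = rng.standard_normal((T, N))

def sequential(W, X):
    """Reference: unroll the recurrence one timestep at a time."""
    h, out = np.zeros(X.shape[1]), []
    for x in X:
        h = np.tanh(W @ h + x)
        out.append(h)
    return np.array(out)

def parallel_fixed_point(W, X, iters=60):
    """Jacobi iteration on the whole trajectory: every row of H is
    refreshed at once from the previous iterate's shifted copy. Each
    sweep propagates correct information one more timestep forward,
    so iters >= T recovers the sequential answer exactly."""
    H = np.zeros_like(X)
    for _ in range(iters):
        prev = np.vstack([np.zeros((1, X.shape[1])), H[:-1]])
        H = np.tanh(prev @ W.T + X)  # all T timesteps updated in parallel
    return H

H_seq = sequential(W, X)
H_par = parallel_fixed_point(W, X)
```

The point of the Newton-style schemes in the paper is to converge in far fewer sweeps than this naive one-step-per-iteration propagation.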