Fatih Dinc

@fatihdin4en

Followers
3K
Following
2K
Statuses
674

Theoretical neuroscience + explainable AI. Moving to @KITP_UCSB and @geometric_intel as a postdoc. PhD in Applied Physics @stanford.

Santa Barbara, CA
Joined November 2020
@fatihdin4en
Fatih Dinc
1 year
In my final year as a PhD student, two of my PhD papers will be presented at this #SfN23: SOTA tools to process and analyze brain-wide neural recordings. These tools use convex optimization and support real-time interventions on thousands of neurons, as well as BMIs. 1/5
Tweet media one
4
11
119
@fatihdin4en
Fatih Dinc
16 days
@HansolLim1 @MPIforBI Haha, very well deserved!
1
0
2
@fatihdin4en
Fatih Dinc
1 month
RT @geomstats: Our Python module "Information Geometry" is available! With: The Fisher-Rao Riemannian manifolds of probability distributi…
0
58
0
@fatihdin4en
Fatih Dinc
1 month
RT @ninamiolane: Happy New Year 🎉 Thrilled to start 2025 with a $1M award from @cziscience @ChanZuckerberg to study the maternal brain usin…
0
20
0
@fatihdin4en
Fatih Dinc
1 month
@IsmailGumustop In its current state, likely no. That being said, a good Master's experience plus excellent letters will likely go a long way.
1
0
1
@fatihdin4en
Fatih Dinc
1 month
No way you can bring an experimental work to completion in a year in my field. If we expect UGs to have several top-conference publications, we are telling them not to involve themselves with long experimental projects. How is that helpful for bridging AI and neuroscience?
0
1
9
@fatihdin4en
Fatih Dinc
2 months
Don't waste your best years chasing things of the past. Set a vision, and follow it. If you cannot, surround yourself with people more insightful than you until you can. For many of us, this involves getting a PhD. The anti-PhD sentiment on X/Twitter is getting out of hand
@sh_reya
Shreya Shankar
2 months
the best indicator for whether you should do a PhD is that you're hell-bent on doing it regardless of what others say. if you are the kind of person who constantly looks for reasons not to do the PhD (e.g. people on twitter saying LLMs obviate the need for a PhD), don't do a PhD
0
1
43
@fatihdin4en
Fatih Dinc
2 months
RT @EkdeepL: Paper alert -- *Awarded best paper* at NeurIPS workshop on Foundation Model Interventions! 🧵👇 We analyze the (in)abilities of…
0
85
0
@fatihdin4en
Fatih Dinc
2 months
Billy, I have been fitting these models for over two years now, so I am exactly the audience you want to convince, and I am telling you this is not convincing. Each method has a different definition for lambda, so setting them all equal to the same value is not how you should choose them. You are making a claim based on a toy model, and I am letting you know what kind of methodology would be convincing and realistic. It is in your interest to show it; no need to argue with me.

I will follow up with an email later on, but I do consider myself a friend of @CPehlevan and his lab. I hope this information will help you focus on the scientific concerns. The spirit of the conference, and hopefully why you posted these tweetprints, is to receive feedback that you can incorporate into the final version in January. NeurIPS lets you update the paper after the conference for this exact reason.
0
0
0
@fatihdin4en
Fatih Dinc
2 months
Yeah, no. Then you would do splits with sliding windows. This really is an overfitting problem. For the final version, I would appreciate it if you could put an asterisk on CORNN noting that it is used outside its domain of validity (without regularization). Or better, do not label it as CORNN, since one of its key components has been taken out.
4
0
0
@fatihdin4en
Fatih Dinc
2 months
Well, that's why one should use cross-validation and multiple trials. I don't think anyone would be surprised that these models overfit when trained on a single trial, nor is that how we use them in experiments. Happy to chat offline or share a relevant chapter from my thesis. Also cc'ing @CPehlevan, as he has asked me about details on applications before. Otherwise, good work and congratulations!
1
0
0
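The trial-wise cross-validation mentioned in the tweet above can be sketched with a generic leave-one-trial-out loop for a next-step dynamics fit. This is an illustrative ridge-regression stand-in, not CORNN's actual training procedure; both function names are hypothetical:

```python
import numpy as np

def fit_ridge(X, Y, lam):
    """Ridge regression: solve (X^T X + lam I) W = X^T Y for the dynamics W."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def leave_one_trial_out(trials, lam):
    """Cross-validate a next-step dynamics fit across trials.

    Each trial is an array of shape (T, n): a neural activity time series.
    Activity at time t+1 is regressed onto activity at time t, holding out
    one whole trial per fold so test data is never seen during fitting.
    Returns the mean squared one-step prediction error on held-out trials.
    """
    errors = []
    for k in range(len(trials)):
        train = [tr for i, tr in enumerate(trials) if i != k]
        X = np.concatenate([tr[:-1] for tr in train])
        Y = np.concatenate([tr[1:] for tr in train])
        W = fit_ridge(X, Y, lam)
        test = trials[k]
        pred = test[:-1] @ W  # one-step-ahead prediction on the held-out trial
        errors.append(np.mean((pred - test[1:]) ** 2))
    return float(np.mean(errors))
```

Holding out entire trials (rather than random time points) is what catches the single-trial overfitting described above, since neighboring time points within a trial are strongly correlated.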
@fatihdin4en
Fatih Dinc
2 months
@b_qian_ Spurious limit cycles often mean insufficient regularization. Reproduced from the public code with CORNN: I simply reinstated the regularization term that was set to effectively zero in the original paper.
Tweet media one
2
0
0
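The role of the regularization term discussed in this thread can be illustrated with a minimal sketch, assuming a generic ridge penalty on the fitted recurrent weights (not CORNN's specific formulation): shrinking the weights tames the fitted dynamics, whereas an effectively zero penalty lets noise inflate them.

```python
import numpy as np

def fit_ridge(X, Y, lam):
    """Ridge fit of next-step dynamics: minimizes ||X W - Y||^2 + lam ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def spectral_radius(W):
    """Largest |eigenvalue| of a fitted recurrent matrix. Values above 1 in a
    linearization signal self-sustained (possibly spurious) dynamics."""
    return float(np.max(np.abs(np.linalg.eigvals(W))))
```

With few samples relative to neurons, `fit_ridge(X, Y, 1e-8)` returns a much larger-norm (and typically larger-spectral-radius) matrix than `fit_ridge(X, Y, 10.0)`; the ridge solution's norm is non-increasing in `lam`, which is the mechanism behind "reinstating the regularization term" suppressing spurious limit cycles.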
@fatihdin4en
Fatih Dinc
2 months
RT @geometric_intel: It's December in Santa Barbara... which means it's the best time for a lab beach volleyball tournament! 🏐 With burritos…
0
5
0
@fatihdin4en
Fatih Dinc
2 months
I met Christian last year at NeurIPS and can personally attest that he is one of the most brilliant and kind people out there. He has been doing transformative work ever since. Check it out!!!
@cashewmake2
Christian Shewmake
2 months
Excited to talk with the community about what we've been working on! Co-founding @newtheoryai with @colin_odonnell to build new foundational architectures for intelligence. Come join us at #NeurIPS in Vancouver this week for our soft-launch event 🎊🥂
0
3
11
@fatihdin4en
Fatih Dinc
2 months
RT @adamimos: What computational structure are we building into LLMs when we train them on next-token prediction? Excited to present our #N…
0
46
0
@fatihdin4en
Fatih Dinc
2 months
@gunesyagizisik As I said, we have not yet made an announcement about the fellowship itself. We will announce it publicly once the details are finalized, around January/February
0
0
1
@fatihdin4en
Fatih Dinc
2 months
@gunesyagizisik Around January or February; we will announce it again depending on the donation amount
1
0
0
@fatihdin4en
Fatih Dinc
2 months
RT @akenginorhun: Just $1.5k allowed me to participate in life-changing research at Stanford, which then led me to beat the 1% acceptance…
0
1
0
@fatihdin4en
Fatih Dinc
2 months
RT @baricandinc: As an undergrad, financial barriers held me back from doing research many times. Today, many talented students from Turkey…
0
7
0
@fatihdin4en
Fatih Dinc
2 months
Any work by @xavierjgonzalez is bound to impress! Looking forward to using this in my own research!
@scott_linderman
Scott Linderman
2 months
Did you know that you can parallelize *nonlinear* RNNs over their sequence length!? Check out our @NeurIPSConf paper "Towards Scalable and Stable Parallelization of Nonlinear RNNs," which introduces quasi-DEER and ELK to parallelize ever larger and richer dynamical systems! 🧵 [1/11]
Tweet media one
0
1
16
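The core idea behind parallelizing a nonlinear RNN over its sequence length can be sketched with a simple Jacobi-style fixed-point iteration. This is a hedged illustration of the general principle, not the quasi-DEER/ELK algorithms from the paper, and all names are hypothetical:

```python
import numpy as np

def sequential_rnn(W, x, h0):
    """Standard sequential rollout: h_t = tanh(W h_{t-1} + x_t)."""
    h = [h0]
    for t in range(len(x)):
        h.append(np.tanh(W @ h[-1] + x[t]))
    return np.stack(h[1:])

def parallel_fixed_point(W, x, h0, iters):
    """Jacobi-style fixed-point rollout of the same RNN.

    Guess the whole trajectory at once, then repeatedly update every time
    step from the previous iterate:
        h_t^{k+1} = tanh(W h_{t-1}^k + x_t).
    Each sweep updates all t in parallel; information propagates one step
    per sweep, so after at most T sweeps the iterate equals the sequential
    answer, and contractive dynamics converge in far fewer sweeps.
    """
    T = len(x)
    H = np.zeros((T, len(h0)))
    for _ in range(iters):
        prev = np.vstack([h0, H[:-1]])  # h_{t-1} taken from the last iterate
        H = np.tanh(prev @ W.T + x)     # update every time step at once
    return H
```

Methods like DEER replace this plain fixed-point sweep with Newton-style updates (solving a linearized recurrence with a parallel scan), which is what makes the iteration count small in practice.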