Timoleon (Timos) Moraitis

@timos_m

Followers
1K
Following
3K
Statuses
2K

AI Hardware Acceleration | Bio-inspired Machine Learning | Computational Neuroscience | Neuromorphic Comp Previously: @Huawei @IBMResearch @UZH_en @ETH_en @ntua

Zurich, Switzerland
Joined January 2009
@timos_m
Timoleon (Timos) Moraitis
1 year
Life update: I have stepped down from my role at Huawei. My stay there has been without doubt an extremely rare opportunity. Truly, very few places could offer such growth in such a short time. I am entirely grateful to the company, my colleagues, and especially my team. Thank you all, sincerely! But it is now time to move on. Currently taking some time to take care of family and other matters, as well as simply enjoying life a bit, before going all-in on the next steps. (Pictured: Cape Sounion, Greece) What are the next steps? They are big enough to make it worth leaving the excellent conditions of my last role. Stay tuned.
Tweet media one
2
1
28
@timos_m
Timoleon (Timos) Moraitis
11 days
@PakoVM @jamesob Miners can run a BIP300 patch/CUSF. Problem solved.
2
0
2
@timos_m
Timoleon (Timos) Moraitis
11 days
@Plinz Something along these lines is an argument against AI doomerism.
0
0
0
@timos_m
Timoleon (Timos) Moraitis
15 days
Neural architecture (replacing attention), learning algorithms (replacing backprop), and hardware architecture (replacing von Neumann). Not really new axes, but newly effective. Stay tuned.
@DimitrisPapail
Dimitris Papailiopoulos
16 days
Ok we've read a lot about test-time compute being the new scaling axis, but what's the next scaling axis?
1
1
5
@timos_m
Timoleon (Timos) Moraitis
1 month
@nic__carter "Alfa" means nothing, contrary to alpha, so you owe nothing 😁
0
0
3
@timos_m
Timoleon (Timos) Moraitis
1 month
Amazing work.
@SRSchmidgall
Samuel Schmidgall
1 month
🚀🔬 Introducing Agent Laboratory: an assistant for automating machine learning research Agent Laboratory takes your research ideas and outputs a research paper and code repository, allowing you to allocate more effort toward ideation rather than low-level coding and writing 🧵
Tweet media one
1
0
4
@timos_m
Timoleon (Timos) Moraitis
2 months
Are you aware of the possibility of having a zerocash sidechain of Bitcoin, i.e. Zcash minus the extra token? What do you think about it? There's already an implementation. What's pending is Bitcoin miners' decision to enable a hashrate escrow, which would maintain the two-way peg between the chains and pay them the fees of both chains.
0
0
1
@timos_m
Timoleon (Timos) Moraitis
2 months
@TuckerGoodrich @nicknorwitz Also, I really don't think the "peroxidation index" in the figure is actually that index. It is an empirical measurement of oxygen absorption rate, as the figure caption and the text before the figure explain.
1
0
0
@timos_m
Timoleon (Timos) Moraitis
2 months
@nicknorwitz @TuckerGoodrich I think you @nicknorwitz are wrong that this is the index shown in the figure. From its caption: "Values are from Homan (148), and all were empirically determined as rates of oxygen consumption." But the axis is confusingly labeled, you are right.
0
0
1
@timos_m
Timoleon (Timos) Moraitis
3 months
Somewhat more politically correctly, I also said I'm perplexed.
@__tinygrad__
the tiny corp
3 months
Why do people think this is real? A "transformer" ASIC makes no sense. BS=1 is ram bandwidth bound. BS=big is FLOPS bound. And NVIDIA "Transformer Engine" has everything you might need re: scaling small dtypes. Hype is disgusting. Stop falling for it.
0
1
3
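The batch-size argument in the quoted tweet is standard roofline reasoning, and can be sketched with a back-of-the-envelope calculation. All hardware numbers below are hypothetical, and `arithmetic_intensity` / `bound_regime` are illustrative helpers, not anything from the thread:

```python
# Roofline sketch of the quoted claim: a dense layer y = x @ W at
# batch size 1 is memory-bandwidth bound, while a large batch is
# FLOPS bound. All figures are illustrative, not real hardware specs.

def arithmetic_intensity(batch: int, d_model: int) -> float:
    """FLOPs per byte moved for y = x @ W with fp16 weights.

    FLOPs: 2 * batch * d_model^2 (one multiply-add per weight per row).
    Bytes: weight traffic dominates, 2 bytes per fp16 weight.
    The ratio simplifies to `batch` FLOPs per byte.
    """
    flops = 2 * batch * d_model * d_model
    bytes_moved = 2 * d_model * d_model
    return flops / bytes_moved

def bound_regime(batch: int, d_model: int, ridge_flops_per_byte: float) -> str:
    """Below the ridge point a kernel is memory-bound; above it, compute-bound."""
    if arithmetic_intensity(batch, d_model) < ridge_flops_per_byte:
        return "memory-bound"
    return "compute-bound"

# Hypothetical accelerator: 200 TFLOP/s peak, 1 TB/s memory bandwidth
# -> ridge point at 200 FLOPs/byte.
RIDGE = 200e12 / 1e12

print(bound_regime(1, 4096, RIDGE))    # batch 1: intensity = 1
print(bound_regime(512, 4096, RIDGE))  # batch 512: intensity = 512
```

Since intensity scales with batch size while the ridge point is fixed by the chip, the same silicon flips from bandwidth-limited to FLOPS-limited as batch grows, which is the point being made about a fixed-function "transformer" ASIC.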
@timos_m
Timoleon (Timos) Moraitis
3 months
@KordingLab Following up on this.
@mbateman
Matt Bateman
3 months
The French Polymarket whale commissioned polls with a specific alternate methodology, the “neighbor method” 1. What a baller 2. What a killer example of how betting markets can surface contrarian, high quality signals
Tweet media one
Tweet media two
0
0
2
@timos_m
Timoleon (Timos) Moraitis
3 months
RT @SebastianSeung: With the breakdown of Moore's Law and the rise of parallel computing, computer scientists who devise algorithms and eve…
0
5
0
@timos_m
Timoleon (Timos) Moraitis
3 months
@Jeffaresalan @AliciaCurth Great, Alan! Interesting paper, and nicely written too.
1
0
1
@timos_m
Timoleon (Timos) Moraitis
4 months
RT @Truthcoin: It is not an apples to apples comparison 34% on a poll, means "0% chance of winning" -- a sentence that everyone can under…
0
7
0
@timos_m
Timoleon (Timos) Moraitis
4 months
@LucaAmb @hardmaru @ZyphraAI Depending on what you mean, we probably agree. Could you elaborate?
0
0
0
@timos_m
Timoleon (Timos) Moraitis
4 months
Some examples from our work:
- Hebbian cortical-like microcircuits enable the most efficient and (still?) best-performing non-backprop deep learning algorithm
- Spikes speed up learning and inference in sequence processing
- Efference copies improve and explain self-supervised learning
- Short-lived associative plasticity without training outperforms trained RNNs, LSTMs, etc. in certain scenarios
- Meta-learned short-term plasticity outperforms them in more scenarios
- Several examples of nano-devices performing neural computations extremely efficiently
- etc.
Would they have been figured out by engineering alone? Maybe, at some point. But computational neuroscience was certainly our origin.
@gershbrain
Sam Gershman
4 months
I'd like to teach a paper which shows how a fact about the brain materially improved an AI system in a way that is unlikely to have been figured out by engineering alone. I haven't been able to find a single example of this. Suggestions welcome.
0
2
16
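The first example above (Hebbian, non-backprop learning) can be illustrated with the textbook Oja's rule: a purely local Hebbian update with a decay term that keeps the weight norm bounded. This is a generic sketch of the principle, not the author's actual microcircuit algorithm:

```python
import numpy as np

# Illustrative sketch: Oja's rule, a local Hebbian update with no
# backprop. The weight vector converges (up to sign) toward the
# first principal component of the input data.

def oja_update(w, x, lr=0.01):
    """One Oja's-rule step: Hebbian growth y*x minus decay y^2*w,
    which keeps ||w|| near 1 without any global error signal."""
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=3)
w /= np.linalg.norm(w)

# Input stream with variance concentrated along the first axis.
for _ in range(5000):
    x = rng.normal(size=3) * np.array([3.0, 1.0, 0.5])
    w = oja_update(w, x)

# w now points (up to sign) roughly along [1, 0, 0], the dominant axis.
```

The update uses only quantities available at the synapse (pre- and post-synaptic activity), which is what makes it "non-backprop" in the sense of the tweet.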
@timos_m
Timoleon (Timos) Moraitis
4 months
@DustlerMagnus @MFarajtabar But seven+five does have an answer (and has been padded needlessly)
0
0
0