![Moritz Zaiss Profile](https://pbs.twimg.com/profile_images/1741876638026018816/sYFZ6oTJ_x96.jpg)
Moritz Zaiss
@altustro
Followers: 502 · Following: 2K · Statuses: 863
Heidelberg, Tübingen, Erlangen, MR physicist, science communicator, CEST and Bloch-McConnell fan, Pulseq fan, MR-zero Prof @FAU_Germany
Joined March 2015
MR-zero with phase distribution graphs. #MRI Git and doc: Paper: ...
#MRI simulations of images can only be done with spin simulations, but not with EPG? In their new paper, Jonathan Endres and co-authors introduce a new MRI simulation method, Phase Distribution Graphs, which extends EPG to arbitrary timing and image encoding.
0 · 1 · 14
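For context on what Phase Distribution Graphs extend: below is a minimal, textbook-style sketch of the classic Extended Phase Graph (EPG) operators (RF mixing, gradient dephasing, relaxation) applied to an idealized CPMG echo train. This is not the method from the paper above, only the standard EPG baseline it generalizes; the function names and the T1/T2/echo-spacing values are arbitrary choices for illustration.

```python
import numpy as np

def epg_rf(state, alpha, phi):
    """Mix the EPG configuration states (F+, F-, Z) with the standard
    RF rotation matrix for flip angle alpha and phase phi (radians)."""
    c2, s2, s = np.cos(alpha / 2) ** 2, np.sin(alpha / 2) ** 2, np.sin(alpha)
    ep, em = np.exp(1j * phi), np.exp(-1j * phi)
    T = np.array([
        [c2,             ep**2 * s2,    -1j * ep * s],
        [em**2 * s2,     c2,             1j * em * s],
        [-0.5j * em * s, 0.5j * ep * s,  np.cos(alpha)],
    ])
    return T @ state

def epg_grad(state):
    """Dephase by one gradient 'unit': shift F+ states up, F- states down."""
    state = np.hstack([state, np.zeros((3, 1), dtype=complex)])  # room for one more order
    state[0, :] = np.roll(state[0, :], 1)     # F+_k -> F+_{k+1} (new zero wraps into k=0)
    state[1, :] = np.roll(state[1, :], -1)    # stored F-_k -> F-_{k-1}
    state[1, -1] = 0                          # highest F- order is emptied
    state[0, 0] = np.conj(state[1, 0])        # F+_0 is fed by the former F-_1
    return state

def epg_relax(state, t, T1, T2):
    """Apply T1/T2 relaxation over duration t (same units as T1, T2)."""
    E1, E2 = np.exp(-t / T1), np.exp(-t / T2)
    state = state * np.array([[E2], [E2], [E1]])
    state[2, 0] += 1 - E1                     # longitudinal regrowth toward M0 = 1
    return state

# Idealized CPMG echo train: 90° excitation about x, repeated 180° refocusing about y.
T1, T2, esp, n_echoes = 1000.0, 100.0, 10.0, 8        # ms, ms, ms, number of echoes
state = np.array([[0.0], [0.0], [1.0]], dtype=complex)  # thermal equilibrium
state = epg_rf(state, np.pi / 2, 0.0)

echoes = []
for _ in range(n_echoes):
    state = epg_relax(state, esp / 2, T1, T2)
    state = epg_grad(state)
    state = epg_rf(state, np.pi, np.pi / 2)
    state = epg_relax(state, esp / 2, T1, T2)
    state = epg_grad(state)
    echoes.append(np.abs(state[0, 0]))                 # F+_0 is the measurable signal

print(np.round(echoes, 4))  # ideal 180° refocusing gives a pure exp(-TE/T2) decay
```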
This could be the new speed of science's iteration cycle: a tweet about a new methods paper, then a tweet from a different group reproducing it with new insights 8 h later.
I took a brief look at the Harmonic Loss paper. tl;dr: instead of a dot product with softmax, use Euclidean distance with normalized 1/d**n. I kinda want this to work. I've dabbled with preferring Euclidean distance many times throughout my career (e.g. triplet loss etc.). However...
2 · 0 · 4
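To make the tl;dr above concrete, here is a small sketch contrasting the usual dot-product logits with softmax cross-entropy against the "Euclidean distance with normalized 1/d**n" alternative it describes. The exponent n, the small epsilon, and treating the class weight vectors as prototypes are assumptions based on the tweet, not details verified against the Harmonic Loss paper.

```python
import numpy as np

def softmax_xent(x, W, target):
    """Standard route: dot-product logits + softmax cross-entropy."""
    logits = W @ x                               # (num_classes,)
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[target])

def harmonic_xent(x, W, target, n=2, eps=1e-12):
    """Tweet's tl;dr: Euclidean distance to each class weight vector,
    probabilities proportional to 1/d**n (normalized), then negative log-likelihood."""
    d = np.linalg.norm(W - x, axis=1) + eps      # distance to each class "prototype"
    p = d**(-n) / (d**(-n)).sum()
    return -np.log(p[target])

rng = np.random.default_rng(0)
x = rng.normal(size=16)                          # one feature vector
W = rng.normal(size=(10, 16))                    # 10 class weight vectors
print(softmax_xent(x, W, target=3), harmonic_xent(x, W, target=3, n=2))
```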
RT @schorn_stephan: AI fakes are not always easy to spot, but they all contain some kind of error. Here, for example, a…
0 · 68 · 0
Just learned that trainable bilateral filters are the simplest transformers @maier_ak
Since we're going back to the origins, let me add that Attention, which @karpathy correctly calls a "brilliant (data-dependent) weighted average operation", was not discovered in machine learning - in fact, it dates back to data-dependent "filters" in image processing from the 90s.

Perhaps the most well-known are the Bilateral Filter of Tomasi and Manduchi from 1998 and the SUSAN filter of Smith and Brady from 1997, followed by many variations such as non-local means of Buades, Coll, and Morel in 2005, and more general kernel regression filters in 2007 by me and two of my students: A year or so before the 2014 time-frame mentioned in @DBahdanau's account of Attention, the kernel filters were generalized even further using the RKHS framework, and popularized even more by yours truly, in late 2012: Open access version here:

Neither I - nor, I think, anyone in imaging - could reasonably claim any credit for the idea of attention in the ML context. But it is worth having the complete historical picture, which would be incomplete without understanding the importance and generality of "(data-dependent) weighted average operations" much more broadly.
1 · 0 · 1
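To illustrate the "data-dependent weighted average" point, here is a toy 1-D bilateral filter in NumPy: each output sample is a weighted average of its neighbors, with one weight factor fixed by spatial distance and one determined by the data values themselves, which is the same structural idea behind attention. The function name and parameter values are illustrative only, not taken from any of the cited papers.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Toy 1-D bilateral filter: each output sample is a weighted average of its
    neighbors, where the weights depend on both spatial distance and the
    difference in signal *values* - i.e. a data-dependent weighted average."""
    out = np.empty_like(signal, dtype=float)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-offsets**2 / (2 * sigma_s**2))        # fixed spatial kernel
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i : i + 2 * radius + 1]
        range_w = np.exp(-(window - signal[i])**2 / (2 * sigma_r**2))  # data-dependent part
        w = spatial_w * range_w
        out[i] = np.sum(w * window) / np.sum(w)               # normalized weighted average
    return out

# Noisy step edge: the data-dependent weights smooth the flat parts but preserve the edge.
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.default_rng(1).normal(size=100)
y = bilateral_filter_1d(x)
```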
@agahkarakuzu So here is the current version. After I finally got a response from OpenAI, it seems it was the 'gpt' in the name that hindered publication. This version should work now.
0 · 0 · 3
RT @PerlmanOr: I am happy to share our just-published paper at @iScience_CP, which combines #MRF with #AI for rapid MR imaging of semisolid…
0 · 2 · 0
@ajerschow Could be worse! Next time just use the comment function to note that this sentence should be considered for removal.
0 · 0 · 1