![Alfonso² Peterssen☕ Profile](https://pbs.twimg.com/profile_images/631471376122150912/eY0wiRM3_x96.jpg)
Alfonso² Peterssen☕ (@TheMukel)
Followers: 440 · Following: 822 · Statuses: 115
@GraalVM padawan at Oracle Labs. Working on Java on Truffle aka Espresso: A meta-circular Java bytecode interpreter for GraalVM.
Zurich, Switzerland · Joined August 2015
RT @Stephan007: Looking forward to speaking tomorrow at @VoxxedCERN together with @TheMukel followed by delivering both a keynote and a reg…
RT @fniephaus: We just merged the current status of the upcoming JDWP support for @GraalVM Native Image! 🥳 This will soon provide develope…
RT @Stephan007: As a result I can now use @DevoxxGenie with a pure Java Arm Inference engine running locally on my mac using Llama 3.2 🍏🔥 h…
RT @Stephan007: Modern @Java Project : a Spring Boot wrapper for from @TheMukel supporting OpenAI Chat Completion…
RT @JohanHutting: Earlier today was asked if Java AI integration improved yet, or that we'd still need to rely on Python or C bindings. Was…
RT @Stephan007: Just made the first-ever @DevoxxGenie LLM inference using ONLY @Java, powered by the awesome #Jlama project! ☕🔥 Huge thanks…
@christzolov @vitalethomas @alina_yurenko I have a working prototype with function calling via LangChain4j. Vision is just a matter of implementing an additional component; the rest of the inference remains the same. I'll do my best to implement the missing encoder for vision soon-ish, starting with Llama, then Qwen.
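The reply above mentions function calling via LangChain4j. Below is a minimal sketch of how tool/function calling is commonly wired with LangChain4j's `AiServices` and the `@Tool` annotation; the `Assistant` interface and `WeatherTools` class are hypothetical placeholders, the `ChatLanguageModel` instance (e.g. a local Llama-backed model) is assumed to come from elsewhere, and exact class and builder names vary between LangChain4j versions.

```java
// Minimal sketch of LangChain4j function calling ("tools").
// WeatherTools and Assistant are hypothetical; only @Tool, AiServices
// and ChatLanguageModel come from LangChain4j itself.
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.service.AiServices;

public class FunctionCallingSketch {

    // Hypothetical tool: the model may decide to call this method.
    static class WeatherTools {
        @Tool("Returns the current temperature in Celsius for a city")
        double currentTemperature(String city) {
            return 21.5; // stub; a real tool would call a weather API
        }
    }

    // Hypothetical AI service interface; LangChain4j generates the implementation.
    interface Assistant {
        String chat(String userMessage);
    }

    // Wires any ChatLanguageModel (e.g. a local Llama-backed one) to the tools.
    static Assistant wire(ChatLanguageModel model) {
        return AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .tools(new WeatherTools())
                .build();
    }
}
```

At runtime, LangChain4j inspects the `@Tool`-annotated methods, exposes them to the model as callable functions, and executes the tool the model selects before producing the final answer.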
@diegoasua @bate5a55 @julien_c Why not? It runs Llama 3.2 1B at 40+ tokens/s on my laptop. It also supports GraalVM's Native Image with instant time-to-first-token.
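For context on the numbers quoted above, here is a small hypothetical sketch of how "time-to-first-token" and "tokens/s" are typically measured against a streaming generator. The `TokenStreamer` interface is invented purely for illustration and is not part of Llama3.java, GraalVM, or any specific library.

```java
// Hypothetical micro-benchmark sketch for latency/throughput terminology.
import java.util.function.Consumer;

public class InferenceTimingSketch {

    // Hypothetical streaming generator: emits tokens one by one via a callback.
    interface TokenStreamer {
        void generate(String prompt, int maxTokens, Consumer<String> onToken);
    }

    static void measure(TokenStreamer model, String prompt) {
        long start = System.nanoTime();
        long[] firstToken = {0};
        int[] count = {0};

        model.generate(prompt, 256, token -> {
            if (count[0]++ == 0) {
                firstToken[0] = System.nanoTime(); // time-to-first-token
            }
        });

        long end = System.nanoTime();
        double ttftMs = (firstToken[0] - start) / 1e6;
        double tokensPerSecond = count[0] / ((end - start) / 1e9);
        System.out.printf("time-to-first-token: %.1f ms, throughput: %.1f tokens/s%n",
                ttftMs, tokensPerSecond);
    }
}
```

An ahead-of-time compiled Native Image skips JVM startup and warm-up, which is why the first token can appear almost immediately after launch.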
RT @fniephaus: .@TheMukel and @alina_yurenko talking about practical LLM inference in modern Java with and @GraalVM…
RT @maxandersen: Loving @TheMukel and @alina_yurenko 's talk explaining the various working parts of inference engines - then make it work…
RT @Stephan007: Must see #Devoxx talk : Practical LLM Inference in Modern Java with @TheMukel and @alina_yurenko Btw I see also DevoxxGeni…
RT @alina_yurenko: Fast LLM inference in pure Java? How about ✨instant✨ LLM inference in pure Java?:) Join me and @TheMukel on Thursday at…