Jimmy Smith Profile

Jimmy Smith (@jimmysmith1919)
Followers: 455 · Following: 105 · Tweets: 26

Founding Scientist @LiquidAI_. Previously PhD @Stanford and member of @scott_linderman's lab.

Stanford, CA · Joined October 2017

Jimmy Smith (@jimmysmith1919) · 6 days ago
RT @hackwithtrees: Proud to have @LiquidAI_ as TreeHacks' 2025 Official Edge AI Track Sponsor! 🧵 for prizes ->
0 replies · 3 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 24 days ago
RT @maximelabonne: 🄳 Super happy to share the new model I've been working on: LFM-7B. LFM-7B was designed for exceptional multilingual chat…
0 replies · 15 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 24 days ago
Excited to share LFM-7B! Incredibly proud of the strong work of our @LiquidAI_ team.
Quoting Liquid AI (@LiquidAI_) · 24 days ago:
Introducing LFM-7B, our new best-in-class language model in English, Arabic, and Japanese, optimized to be the substrate for private enterprise chat, code, fast instruction following, and agentic workflows. 1/
0 replies · 5 retweets · 18 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
We are announcing our Series A led by @AMD Ventures! It is a privilege to work with such an amazing team. More to come soon!
Quoting Liquid AI (@LiquidAI_) · 2 months ago:
We raised a $250M Series A led by @AMD Ventures to scale Liquid Foundation Models and accelerate their deployment on-device and at enterprises.
2 replies · 3 retweets · 19 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
Also check out our new blog post!
Quoting Xavier Gonzalez (@xavierjgonzalez) · 2 months ago:
We also just dropped a new blog that steps through both the math and the code! Blog: Poster: With my amazing collaborators @awarr9, @jimmysmith1919, and @scott_linderman
0 replies · 1 retweet · 2 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
I was sadly unable to make the trip to #NeurIPS2024 this year, but check out our poster #2010 with @xavierjgonzalez during the 11 am - 2 pm session! Poster info:
Quoting Xavier Gonzalez (@xavierjgonzalez) · 2 months ago:
Check out our #NeurIPS2024 poster "Towards Scalable and Stable Parallelization of Nonlinear RNNs." When: Today! Friday the 13th (👻), 11 am - 2 pm. Where: East side, poster #2010. Why: Learn how DEER and ELK can parallelize inherently sequential systems like nonlinear RNNs! 🦌🫎
1 reply · 0 retweets · 2 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
RT @dereklim_lzh: Presenting our paper today (Thursday) at NeurIPS at 11am! East Exhibit Hall A-C #4402 Stop by if you want to learn abou…
0 replies · 8 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
RT @scott_linderman: I'm excited to share our #NeurIPS2024 paper, "Modeling Latent Neural Dynamics with Gaussian Process Switching Linear D…
0 replies · 18 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
RT @xavierjgonzalez: So excited by our latest @NeurIPSConf paper on parallelizing nonlinear RNNs! With my amazing collaborators @awarr9, @j…
0 replies · 4 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
RT @LiquidAI_: New Liquid research: STAR -- Evolutionary Synthesis of Tailored Architectures. At Liquid we design foundation models with t…
0 replies · 45 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 2 months ago
Check out our @NeurIPSConf paper that explores the connections between parallelizing *nonlinear* RNNs and Newton's method. Paper: With @xavierjgonzalez, @awarr9, and @scott_linderman. See Scott's thread below for more details.
Quoting Scott Linderman (@scott_linderman) · 2 months ago:
Did you know that you can parallelize *nonlinear* RNNs over their sequence length!? Our @NeurIPSConf paper "Towards Scalable and Stable Parallelization of Nonlinear RNNs" introduces quasi-DEER and ELK to parallelize ever larger and richer dynamical systems! 🧵 [1/11]
0 replies · 4 retweets · 18 likes
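
The core trick referenced in these tweets can be sketched compactly: treat the whole state trajectory of a nonlinear RNN as the root of one large system of equations and apply Newton's method, so that each Newton step reduces to a linear recurrence that a parallel associative scan solves in O(log T) depth. Below is a minimal JAX sketch of that DEER-style iteration on a toy tanh RNN; the model, sizes, function names, and iteration count are invented for illustration, and it deliberately omits the damping and stabilization machinery (quasi-DEER, ELK) that the paper itself contributes.

```python
import jax
import jax.numpy as jnp

# Toy nonlinear RNN (illustrative only): s_t = tanh(W s_{t-1} + U x_t)
D, D_IN, T = 4, 3, 128
k_w, k_u, k_x = jax.random.split(jax.random.PRNGKey(0), 3)
W = 0.2 * jax.random.normal(k_w, (D, D))  # mildly contractive so plain Newton behaves
U = jax.random.normal(k_u, (D, D_IN))

def rnn_step(s_prev, x):
    return jnp.tanh(W @ s_prev + U @ x)

def combine(e1, e2):
    # Compose two affine maps s -> A s + b: the associative operation that
    # lets a parallel scan solve a linear recurrence.
    A1, b1 = e1
    A2, b2 = e2
    return (jnp.einsum('...ij,...jk->...ik', A2, A1),
            jnp.einsum('...ij,...j->...i', A2, b1) + b2)

def parallel_rnn(xs, s0, num_iters=20):
    # The trajectory solves s_t - f(s_{t-1}, x_t) = 0 for every t. Each Newton
    # step linearizes f around the current guess, giving the *linear*
    # recurrence s_t = A_t s_{t-1} + b_t, which associative_scan then solves.
    states = jnp.zeros((xs.shape[0], s0.shape[0]))        # initial guess
    for _ in range(num_iters):
        prev = jnp.concatenate([s0[None], states[:-1]])   # s_{t-1} under the guess
        f_vals = jax.vmap(rnn_step)(prev, xs)             # f(s_{t-1}, x_t)
        A = jax.vmap(jax.jacobian(rnn_step))(prev, xs)    # df/ds at each step
        b = f_vals - jnp.einsum('tij,tj->ti', A, prev)    # offset of the linearization
        A_cum, b_cum = jax.lax.associative_scan(combine, (A, b))
        states = jnp.einsum('tij,j->ti', A_cum, s0) + b_cum  # s_t = A_cum_t s0 + b_cum_t
    return states

# Sanity check against ordinary sequential evaluation.
xs = jax.random.normal(k_x, (T, D_IN))
s0 = jnp.zeros(D)
seq = jax.lax.scan(lambda s, x: (rnn_step(s, x),) * 2, s0, xs)[1]
print(jnp.max(jnp.abs(parallel_rnn(xs, s0) - seq)))  # ~0 once Newton has converged
```

Exact Newton like this can be unstable for stiff or expansive dynamics, which is precisely the gap the quasi-DEER and ELK iterations in the thread above are meant to close.
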
Jimmy Smith (@jimmysmith1919) · 3 months ago
RT @anas_ant: State space models are awesome, but the usually used pre-training scheme really clips their wings. Check out Birdie, which r…
0 replies · 1 retweet · 0 likes

Jimmy Smith (@jimmysmith1919) · 3 months ago
Fun work with @SamBlouir_NLP, @anas_ant, and Amarda Shehu! We show that the long context retrieval performance of SSMs such as Hawk and Mamba can be significantly improved with better training procedures. Paper: See Sam's thread for more details.
Quoting Sam Blouir (@SamBlouir_NLP) · 3 months ago:
🚀 Introducing Birdie 🐤! Our EMNLP 2024 paper supercharges SSMs like Mamba and Hawk on long-range, context-heavy tasks, closing the gap with Transformers. Come see us at 12:30 - 2:00 PM in Riverfront Hall - Lobby Level #EMNLP2024! Proud to collaborate with @jimmysmith1919, @anas_ant, and Amarda Shehu on this work. 📄 Paper: 💻 Code:
0 replies · 4 retweets · 17 likes

Jimmy Smith (@jimmysmith1919) · 4 months ago
Had a great time speaking at the @LiquidAI_ Product Launch. Link to full stream: Many other great talks and demos by CEO @ramin_m_h, Head of Post-Training @maximelabonne, CSO @xanamini, and CTO @mlech26l, as well as fireside chats with industry leaders.
2 replies · 10 retweets · 42 likes

Jimmy Smith (@jimmysmith1919) · 4 months ago
RT @maximelabonne: You'll see our CEO @ramin_m_h, founding scientist @jimmysmith1919, CSO @xanamini, and CTO @mlech26l. We also had many i…
0 replies · 2 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 5 months ago
Excited to launch our first series of Language LFMs. Minimal memory footprints without sacrificing quality make them perfect for edge deployments. A lot more to come from the team soon!
Quoting Liquid AI (@LiquidAI_) · 5 months ago:
Today we introduce Liquid Foundation Models (LFMs) to the world with the first series of our Language LFMs: a 1B, a 3B, and a 40B model. (/n)
4 replies · 8 retweets · 59 likes

Jimmy Smith (@jimmysmith1919) · 7 months ago
RT @jakub_smekal: Excited to share the first paper of my PhD: Towards a theory of learning dynamics in deep state space models https://t.c…
0 replies · 21 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 1 year ago
RT @LiquidAI_: Today we announce our collaboration with Capgemini to build next-generation AI solutions for enterprises. For the last month…
0 replies · 17 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 1 year ago
RT @AllanRaventos: What role does pre-training diversity play in the emergence of in-context learning? Come see our poster #727 at 5:15 to…
0 replies · 6 retweets · 0 likes

Jimmy Smith (@jimmysmith1919) · 1 year ago
Excited to be at #NeurIPS23 this week! I will be presenting our poster on ConvSSMs for modeling long video sequences Tuesday at 5:15pm: Come find me or DM me if you want to discuss SSMs, efficient sequence modeling, transformer alternatives, etc.
1 reply · 6 retweets · 32 likes