![Jimmy Smith Profile](https://pbs.twimg.com/profile_images/1587263836721016832/1zPChRR9_x96.jpg)
Jimmy Smith
@jimmysmith1919
Followers: 455
Following: 105
Statuses: 26
Founding Scientist @LiquidAI_. Previously PhD @Stanford and member of @scott_linderman's lab.
Stanford, CA
Joined October 2017
RT @hackwithtrees: Proud to have @LiquidAI_ as TreeHacks' 2025 Official Edge AI Track Sponsor! 🧵 for prizes ->
RT @maximelabonne: 🥳 Super happy to share the new model I've been working on: LFM-7B LFM-7B was designed for exceptional multilingual chat…
Excited to share LFM-7B! Incredibly proud of the strong work of our @LiquidAI_ team.
Introducing LFM-7B, our new best-in-class language model in English, Arabic, and Japanese optimized to be the substrate for private enterprise chat, code, fast instruction following, and agentic workflows. 1/
Also check out our new blog post!
We also just dropped a new blog that steps through both the math and the code! Blog: Poster: With my amazing collaborators @awarr9, @jimmysmith1919, and @scott_linderman
I was sadly unable to make the trip to #NeurIPS2024 this year, but check out our poster #2010 with @xavierjgonzalez during the 11 am-2pm session! Poster info:
Check out our #NeurIPS2024 poster "Towards Scalable and Stable Parallelization of Nonlinear RNNs." When: Today! Friday the 13th (👻) 11 am - 2pm. Where: East side, poster #2010 Why: Learn how DEER and ELK can parallelize inherently sequential systems like nonlinear RNNs! 🦌🫎
RT @dereklim_lzh: Presenting our paper today (Thursday) at NeurIPS at 11am! East Exhibit Hall A-C #4402 Stop by if you want to learn abou…
RT @scott_linderman: I'm excited to share our #NeurIPS2024 paper, "Modeling Latent Neural Dynamics with Gaussian Process Switching Linear D…
RT @xavierjgonzalez: So excited by our latest @NeurIPSConf paper on parallelizing nonlinear RNNs! With my amazing collaborators @awarr9, @j…
RT @LiquidAI_: New Liquid research: STAR -- Evolutionary Synthesis of Tailored Architectures. At Liquid we design foundation models with t…
Check out our @NeurIPSConf paper that explores the connections between parallelizing *nonlinear* RNNs and Newton's method. Paper: With @xavierjgonzalez, @awarr9, and @scott_linderman. See Scott's thread below for more details.
Did you know that you can parallelize *nonlinear* RNNs over their sequence length!? Our @NeurIPSConf paper "Towards Scalable and Stable Parallelization of Nonlinear RNNs" introduces quasi-DEER and ELK to parallelize ever larger and richer dynamical systems! 🧵 [1/11]
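To see why a sequential rollout can be parallelized at all, here is a toy sketch of the underlying fixed-point view. This is not the DEER/ELK code from the paper: it uses plain Jacobi-style sweeps where DEER/ELK use Newton-type updates, and all names (`W`, `X`, the `tanh` RNN) are illustrative.

```python
import numpy as np

# Toy nonlinear RNN: h_t = tanh(W h_{t-1} + x_t). Hypothetical setup,
# NOT the architecture or algorithm from the paper.
rng = np.random.default_rng(0)
T, d = 32, 4
W = 0.5 * rng.standard_normal((d, d))   # small weights keep the map contractive
X = rng.standard_normal((T, d))

# Reference: the usual sequential rollout, T dependent steps.
h_seq = np.zeros((T, d))
h = np.zeros(d)
for t in range(T):
    h = np.tanh(h @ W.T + X[t])
    h_seq[t] = h

# Fixed-point view: the whole trajectory H solves H_t = f(H_{t-1}, x_t).
# Iterate on every time step at once; each sweep is parallel over t.
# DEER/ELK replace these plain sweeps with Newton-type updates, which
# converge in far fewer iterations.
H = np.zeros((T, d))
for _ in range(T):                      # T sweeps are guaranteed exact
    H_prev = np.vstack([np.zeros((1, d)), H[:-1]])  # shift states by one step
    H = np.tanh(H_prev @ W.T + X)       # update all time steps in parallel

assert np.allclose(H, h_seq, atol=1e-8)
```

Each sweep propagates correct information one step further, so T sweeps always recover the sequential answer exactly; in practice the iteration often converges much sooner, which is what a Newton-style update exploits.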
RT @anas_ant: State space models are awesome, but the usually used pre-training scheme really clips their wings. Check out Birdie, which r…
Fun work with @SamBlouir_NLP, @anas_ant, and Amarda Shehu! We show that the long context retrieval performance of SSMs such as Hawk and Mamba can be significantly improved with better training procedures. Paper: See Sam's thread for more details.
Introducing Birdie! Our EMNLP 2024 paper supercharges SSMs like Mamba and Hawk on long-range, context-heavy tasks, closing the gap with Transformers. Come see us at 12:30 - 2:00 PM in Riverfront Hall - Lobby Level #EMNLP2024! Proud to collaborate with @jimmysmith1919, @anas_ant, and Amarda Shehu on this work. Paper: Code:
Had a great time speaking at the @LiquidAI_ Product Launch. Link to full stream: Many other great talks and demos by CEO @ramin_m_h, Head of Post-Training @maximelabonne, CSO @xanamini, and CTO @mlech26l, as well as fireside chats with industry leaders.
RT @maximelabonne: You'll see our CEO @ramin_m_h, founding scientist @jimmysmith1919, CSO @xanamini, and CTO @mlech26l. We also had many i…
Excited to launch our first series of Language LFMs. Minimal memory footprints without sacrificing quality make them perfect for edge deployments. A lot more to come from the team soon!
Today we introduce Liquid Foundation Models (LFMs) to the world with the first series of our Language LFMs: a 1B, 3B, and a 40B model. (1/n)
RT @jakub_smekal: Excited to share the first paper of my PhD: Towards a theory of learning dynamics in deep state space models https://t.c…
RT @LiquidAI_: Today we announce our collaboration with Capgemini to build next-generation AI solutions for enterprises. For the last month…
RT @AllanRaventos: What role does pre-training diversity play in the emergence of in-context learning? Come see our poster #727 at 5:15 to…
Excited to be at #NeurIPS23 this week! I will be presenting our poster on ConvSSMs for modeling long video sequences Tuesday at 5:15pm: Come find me or DM me if you want to discuss SSMs, efficient sequence modeling, transformer alternatives, etc.