Unsloth AI (@UnslothAI)

Followers: 15K · Following: 2K · Statuses: 267

Open source LLM fine-tuning! 🦥 https://t.co/2kXqhhvLsb

California, USA
Joined November 2023
Unsloth AI (@UnslothAI) · 17 days
Introducing 1.58-bit DeepSeek-R1 GGUFs! 🐋 DeepSeek-R1 can now run in 1.58-bit while remaining fully functional. We shrank the 671B-parameter model from 720GB to just 131GB - an 80% size reduction. Naively quantizing all layers breaks the model entirely, causing endless loops & gibberish outputs. Our dynamic quants solve this. The 1.58-bit quant fits in 160GB VRAM (2x H100 80GB) for fast inference at ~140 tokens/sec. By studying DeepSeek-R1's architecture, we selectively quantized certain layers to higher bits (like 4-bit) and left most MoE layers at 1.58-bit. Benchmarks + Blog: Dynamic GGUFs (131GB–212GB) on Hugging Face:
Tweet media one
139 replies · 625 retweets · 4K likes
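For anyone trying the dynamic quants, a minimal download sketch is below, using huggingface_hub. The repo id and the *UD-IQ1_S* filename pattern for the 1.58-bit quant are assumptions based on Unsloth's usual naming, so check the Hugging Face collection for the exact names.

```python
# Sketch: fetch only the 1.58-bit dynamic-quant shards instead of the full repo.
# Repo id and filename pattern are assumptions - verify them on Hugging Face.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",   # assumed repo name
    local_dir="DeepSeek-R1-GGUF",
    allow_patterns=["*UD-IQ1_S*"],        # assumed pattern for the 1.58-bit quant
)
```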
Unsloth AI (@UnslothAI) · 15 hours
Train your own reasoning LLM using DeepSeek's GRPO algorithm with our free notebook! You'll transform Llama 3.1 (8B) into a model with chain-of-thought reasoning. Unsloth makes GRPO use 80% less VRAM. Guide: GitHub: Colab:
19 replies · 254 retweets · 1K likes
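As a rough picture of what a GRPO setup like the notebook's involves, here is a minimal sketch assuming Unsloth's FastLanguageModel plus TRL's GRPOTrainer. The model id, LoRA rank, reward function, and hyperparameters are illustrative placeholders, not the notebook's actual values.

```python
# Illustrative GRPO setup with Unsloth + TRL; all values are placeholders.
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",  # assumed model id
    max_seq_length=1024,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(model, r=16)  # attach LoRA adapters

# Toy dataset and reward: real runs use math/reasoning prompts and
# verifiable rewards instead of this length heuristic.
dataset = Dataset.from_dict({"prompt": ["What is 2 + 2? Think step by step."]})

def reward_len(completions, **kwargs):
    return [min(len(c) / 1000.0, 1.0) for c in completions]

trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=[reward_len],
    args=GRPOConfig(output_dir="grpo-out", num_generations=8, max_steps=100),
    train_dataset=dataset,
)
trainer.train()
```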
Unsloth AI (@UnslothAI) · 3 days
Unsloth is the #1 trending repo on GitHub! 🦥 It’s been an incredible journey and we couldn’t have done it without you! To celebrate, we’re taking a look back at how it all started and how we got here: GitHub repo:
Tweet media one
24 replies · 57 retweets · 489 likes
Unsloth AI (@UnslothAI) · 7 days
You can now reproduce DeepSeek-R1's reasoning on your own local device! Experience the "Aha" moment with just 7GB VRAM. Unsloth reduces GRPO training memory use by 80%. With just 15GB VRAM, you can transform Llama-3.1 (8B) & Phi-4 (14B) into reasoning models. Blog:
Tweet media one
150 replies · 539 retweets · 3K likes
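The "Aha" moment comes from reward functions that score each sampled completion, so GRPO pushes the model to reason before answering. A hedged sketch of what such rewards can look like is below; the <think>/</think> tag convention and the point values are assumptions for illustration, not the blog's exact recipe.

```python
# Illustrative GRPO reward functions for R1-style reasoning training.
# Tags and scores are assumptions, not Unsloth's exact recipe.
import re

def format_reward(completions, **kwargs):
    # Reward completions that put their reasoning inside <think>...</think>
    # and then give a final answer after the closing tag.
    pattern = r"<think>.*?</think>\s*\S"
    return [1.0 if re.search(pattern, c, re.DOTALL) else 0.0 for c in completions]

def correctness_reward(completions, answer, **kwargs):
    # Reward completions whose text after </think> contains the gold answer.
    # TRL's GRPOTrainer passes extra dataset columns (here "answer") as kwargs.
    scores = []
    for completion, gold in zip(completions, answer):
        final = completion.split("</think>")[-1]
        scores.append(2.0 if str(gold) in final else 0.0)
    return scores
```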
Unsloth AI (@UnslothAI) · 11 days
@cneuralnetwork Thanks for using Unsloth! 😉🦥
0 replies · 0 retweets · 4 likes
Unsloth AI (@UnslothAI) · 12 days
@vega_holdings @OpenWebUI Oooh, maybe - it really depends on demand. The issue is that the imatrix quants will take a lot of time and money, but we'll see. We might release it for V3 first - or maybe not :)
1 reply · 0 retweets · 11 likes
Unsloth AI (@UnslothAI) · 13 days
@zevrekhter @OpenWebUI You'll need at least 20GB of RAM, but that's only the bare minimum. For decent results, aim for a combined VRAM + RAM of 80GB or more.
0 replies · 0 retweets · 1 like
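In practice that VRAM + RAM budget works because llama.cpp can keep some layers on the GPU and the rest in system memory. A sketch with llama-cpp-python is below; the model filename is an assumed example, and n_gpu_layers should be tuned to whatever fits your VRAM.

```python
# Sketch: split a large GGUF between VRAM and system RAM.
# The filename is an assumed example; tune n_gpu_layers to your GPU.
# Pointing llama.cpp at the first shard loads the remaining splits too.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",
    n_gpu_layers=20,  # layers offloaded to VRAM; the rest stay in RAM
    n_ctx=4096,
)
print(llm("What is 1 + 1?", max_tokens=64)["choices"][0]["text"])
```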
Unsloth AI (@UnslothAI) · 14 days
RT @OpenWebUI: 🚀 You can now run 1.58-bit DeepSeek-R1 (non-distilled version) on Open WebUI with llama.cpp, thanks to @UnslothAI! 💻⚡️ (Test…
0 replies · 46 retweets · 0 likes
Unsloth AI (@UnslothAI) · 14 days
RT @ggerganov: DeepSeek-R1 on Mac Studio 192GB 🪄
Tweet media one
Tweet media two
0 replies · 164 retweets · 0 likes
Unsloth AI (@UnslothAI) · 15 days
@ggerganov Looks soo good! 🎇🐋
0 replies · 0 retweets · 10 likes
Unsloth AI (@UnslothAI) · 17 days
@tom_doerr Thanks for the shout-out Tom! We really appreciate it! ♥️🦥
0 replies · 0 retweets · 10 likes
Unsloth AI (@UnslothAI) · 17 days
@0xAsharib @tom_doerr @deepseek_ai We do, but only for the distilled versions. You can read more in our blog here:
2 replies · 0 retweets · 4 likes
Unsloth AI (@UnslothAI) · 17 days
RT @tom_doerr: Unsloth: Faster LLM fine-tuning library
Tweet media one
0 replies · 67 retweets · 0 likes
Unsloth AI (@UnslothAI) · 22 days
@levelsio DeepSeek R1 Distill Llama 8B seems to be the most popular R1 GGUF right now, and it will run great on your laptop. We uploaded ALL of the GGUF files & they can be used directly with Jan AI, llama.cpp, Ollama, HF etc:
0 replies · 0 retweets · 41 likes
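As one concrete way to run the distilled 8B GGUF locally, here is a sketch using llama-cpp-python's Hugging Face loader; the repo id and quant pattern are assumptions, so match them against the actual collection.

```python
# Sketch: pull and chat with the distilled 8B GGUF straight from Hugging Face.
# Repo id and quant pattern are assumptions - check the collection for exact names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF",  # assumed repo name
    filename="*Q4_K_M*",  # assumed 4-bit quant pattern
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why is the sky blue?"}]
)
print(out["choices"][0]["message"]["content"])
```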
Unsloth AI (@UnslothAI) · 22 days
RT @helloiamleonie: You can be GPU poor like me and still fine-tune an LLM. Here’s how you can fine-tune Gemma 2 in a Kaggle notebook on a…
0 replies · 121 retweets · 0 likes
Unsloth AI (@UnslothAI) · 24 days
DeepSeek-R1 GGUFs are now on @HuggingFace! Includes all Llama & Qwen distilled models + 2 to 8-bit quantized versions. How to run R1: DeepSeek-R1 Collection:
20 replies · 122 retweets · 567 likes
Unsloth AI (@UnslothAI) · 26 days
@yar_vol @reach_vb @ollama They're using our version of the GGUF because we fixed many bugs in Phi-4. We uploaded the fixed Phi-4 models to Hugging Face: You can read more about the bug fixes in our blog:
0 replies · 0 retweets · 3 likes
Unsloth AI (@UnslothAI) · 27 days
@reach_vb @ollama Thanks a lot for sharing, Vaibhav - we really appreciate it! 🤗
0 replies · 0 retweets · 3 likes