![MindsDB Profile](https://pbs.twimg.com/profile_images/1836875331199979523/-6YiMH5X_x96.png)
MindsDB
@MindsDB
Followers: 80K · Following: 1K · Statuses: 3K
The #1 enterprise data platform for AI. @Forbes Most Promising AI Companies, @Gartner Cool Vendor
San Francisco
Joined November 2018
RT @SidneyRabsatt: DeepSeek is the #1 app this week. But you can use it AND keep your conversations off the grid with your own private Chat…
RT @SidneyRabsatt: o1, Sora, Codestral, Sonnet, Veo, DeepSeek, so much goodness coming from the models. But private and real-time data are…
At MindsDB, we like to say, “Skate to where the puck is going, not just where it is.” Sure, it's important to solve immediate data & AI challenges, but we also plan for what’s next. Balancing real-time impact with a future-proof roadmap (while keeping compliance in check) is what keeps us ahead.
Bun 1.2 is here.
RT @GregKamradt: DeepSeek @arcprize results - on par with lower o1 models, but for a fraction of the cost, and open. Pretty wild https://t…
AI will soon replace call centers.
YC W25's @Leaping_AI automates entire call centers with self-improving AI voice agents. With a simple platform, you can create agents that automate complex use cases at scale. Congrats on the launch, @kevinwu_hi, @iamshraey_, and @akyshnik!
RT @_akhaliq: This is wild. DeepSeek-R1 coder is building games from different languages. This is by @gclue_akira, prompt (English transla…
Another open-source LLM has come to light! The Moonshot AI team built a multi-modal model pushing reinforcement learning and long context (128k!) to new heights. It’s already outpacing GPT-4o and Claude Sonnet 3.5 on short-CoT tasks by up to 550%. 2025 is going to be interesting.
Introducing Kimi k1.5, an o1-level multi-modal model.
- SOTA short-CoT performance, outperforming GPT-4o and Claude Sonnet 3.5 on AIME and LiveCodeBench by a large margin (up to +550%)
- Long-CoT performance matches o1 across multiple modalities (MathVista, Codeforces, etc.)
Tech report: i-k1.5…
Key ingredients of k1.5:
- Long context scaling. Up to 128k tokens for RL generation. Efficient training with partial rollouts.
- Improved policy optimization: online mirror descent, sampling strategies, length penalty, and others.
- Multi-modalities. Joint reasoning over text and vision.
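One of those ingredients, the length penalty, is easy to picture in code. Below is a minimal, hypothetical sketch of folding a penalty on response length into per-sample rewards during RL generation; the `alpha` knob and batch-relative scaling are illustrative assumptions, not the formulation from the k1.5 tech report.

```python
def length_penalized_rewards(rewards, lengths, alpha=0.1):
    """Fold a simple length penalty into per-sample rewards.

    rewards: scalar rewards for sampled responses (e.g. 1.0 correct, 0.0 wrong)
    lengths: token counts of those responses
    alpha:   penalty strength (hypothetical knob, not from the k1.5 report)
    """
    lo, hi = min(lengths), max(lengths)
    span = max(hi - lo, 1)  # avoid division by zero when all lengths match
    shaped = []
    for r, n in zip(rewards, lengths):
        # Penalize longer responses relative to the shortest one in the batch,
        # nudging the policy away from needlessly long chains of thought.
        penalty = alpha * (n - lo) / span
        shaped.append(r - penalty)
    return shaped

# Example: among two correct answers, the shorter one keeps the higher shaped reward.
print(length_penalized_rewards([1.0, 1.0, 0.0], [200, 800, 500], alpha=0.2))
```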
AGI at home 👀
AGI at home. Running DeepSeek R1 across my 7 M4 Pro Mac Minis and 1 M4 Max MacBook Pro. Total unified memory = 496GB. Uses @exolabs distributed inference with 4-bit quantization. Next goal is fp8 (requires >700GB).
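A quick back-of-the-envelope check on those numbers, assuming DeepSeek R1's roughly 671B parameters; KV cache and runtime overhead are not counted and the comments are rough estimates, not measurements from the post.

```python
# Rough memory estimate for serving DeepSeek R1 at different quantization levels.
# Assumption: ~671B parameters; KV cache and runtime buffers are excluded here.
PARAMS = 671e9
GIB = 1024**3

def weight_memory_gib(params: float, bits_per_weight: int) -> float:
    """GiB needed just to hold the weights at the given precision."""
    return params * bits_per_weight / 8 / GIB

for label, bits in [("4-bit", 4), ("fp8", 8)]:
    print(f"{label}: ~{weight_memory_gib(PARAMS, bits):.0f} GiB for weights alone")

# 4-bit: ~312 GiB -> comfortably inside the 496GB of pooled unified memory.
# fp8:   ~625 GiB -> add KV cache and runtime buffers and the total climbs past
#                    700GB, which lines up with the ">700GB" figure in the post.
```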