Dhanvin Sriram Profile
Dhanvin Sriram

@DhaniSriram

Followers
898
Following
11K
Statuses
340

Currently building and scaling multiple AI products

Joined June 2023
@DhaniSriram
Dhanvin Sriram
1 day
@SarahChieng @perplexity_ai @CerebrasSystems Would love to try the API
1
0
0
@DhaniSriram
Dhanvin Sriram
1 day
@JuanIsidro @krishnanrohit No, the debt-to-GDP ratio is around 120%
1
0
1
@DhaniSriram
Dhanvin Sriram
4 days
0
0
1
@DhaniSriram
Dhanvin Sriram
4 days
@AngelicaOung This is just twitter hype. Model itself is not useful
0
0
0
@DhaniSriram
Dhanvin Sriram
5 days
@k_flowstate @kimmonismus @apples_jimmy Nah we’re going to accelerate more
0
0
3
@DhaniSriram
Dhanvin Sriram
6 days
@onetwoval Is this version of Mistral Large available via the API?
0
0
1
@DhaniSriram
Dhanvin Sriram
6 days
@sama @joannejang @akshaynathan_ Give us the real CoT. Not this fake resummarized one
0
0
0
@DhaniSriram
Dhanvin Sriram
8 days
@OfficialLoganK Any update on imagen 3?
1
0
0
@DhaniSriram
Dhanvin Sriram
8 days
@weswinder @enggirlfriend Yeah I’m curious also
0
0
0
@DhaniSriram
Dhanvin Sriram
8 days
0
0
0
@DhaniSriram
Dhanvin Sriram
10 days
@daniel_nguyenx lol not an AI bot
1
0
2
@DhaniSriram
Dhanvin Sriram
10 days
@theo @FireworksAI_HQ NVIDIA is hosting it
@rohanpaul_ai
Rohan Paul
14 days
NVIDIA just brought DeepSeek-R1 671-bn param model to NVIDIA NIM microservice on build.nvidia.com - The DeepSeek-R1 NIM microservice can deliver up to 3,872 tokens per second on a single NVIDIA HGX H200 system. - Using NVIDIA Hopper architecture, DeepSeek-R1 can deliver high-speed inference by leveraging FP8 Transformer Engines and 900 GB/s NVLink bandwidth for expert communication. - As usual with NVIDIA's NIM, it's an enterprise-scale setup to securely experiment with and deploy AI agents with industry-standard APIs. @NVIDIAAIDev
Tweet media one
0
0
0
@DhaniSriram
Dhanvin Sriram
10 days
“Is deep seeker a good name” lmao
Tweet media one
@OpenAI
OpenAI
10 days
Today we are launching our next agent capable of doing work for you independently—deep research. Give it a prompt and ChatGPT will find, analyze & synthesize hundreds of online sources to create a comprehensive report in tens of minutes vs what would take a human many hours.
0
0
0
@DhaniSriram
Dhanvin Sriram
11 days
@kimmonismus It’s not out yet
0
0
0
@DhaniSriram
Dhanvin Sriram
11 days
@EMostaque This is just a bug. I got it too but nothing happened after. I was still able to use o3
0
0
0
@DhaniSriram
Dhanvin Sriram
13 days
@ludwigABAP NVIDIA is hosting it
@rohanpaul_ai
Rohan Paul
14 days
NVIDIA just brought DeepSeek-R1 671-bn param model to NVIDIA NIM microservice on build.nvidia.com - The DeepSeek-R1 NIM microservice can deliver up to 3,872 tokens per second on a single NVIDIA HGX H200 system. - Using NVIDIA Hopper architecture, DeepSeek-R1 can deliver high-speed inference by leveraging FP8 Transformer Engines and 900 GB/s NVLink bandwidth for expert communication. - As usual with NVIDIA's NIM, it's an enterprise-scale setup to securely experiment with and deploy AI agents with industry-standard APIs. @NVIDIAAIDev
Tweet media one
0
0
4
@DhaniSriram
Dhanvin Sriram
13 days
@AravSrinivas This shouldn’t be a question
0
0
0
@DhaniSriram
Dhanvin Sriram
14 days
@thegenioo @dylhunn Sam said it’s not
1
0
5
@DhaniSriram
Dhanvin Sriram
14 days
@suryakane @chandrarsrikant @AshwiniVaishnaw Nah they have more than 50k
@dylan522p
Dylan Patel
17 days
@timlihk Never said that. Hopper. Some of my clients' executives misinterpreted that as H100, but it includes H20 and H800.
0
0
2