AGF

@XpeaGPU

Followers
2K
Following
12K
Statuses
1K

GPU Veteran

High-NA EUV wafer
Joined March 2022
@XpeaGPU
AGF
3 days
@thejefflutz @I_loves_deep_nn it's BS. The supercomputer is built by Nvidia and Oracle. See the official PR:
0
0
0
@XpeaGPU
AGF
3 days
@I_loves_deep_nn Nvidia is a key partner of Stargate. The PR clearly states that Nvidia and Oracle will build and operate the supercomputer.
0
0
1
@XpeaGPU
AGF
6 days
@Xii_Nyth Well I was wrong and right at the same time. I've been simply cheated by the marketing slides that included the fake frames without any mention of it (5070 as fast as 4090 🤬). But I'm gonna give it to you, only the final result is important👍
1
0
1
@XpeaGPU
AGF
10 days
You just forgot to mention that the vast majority of Chinese EVs use Nvidia Drive SoCs and systems (BYD, Xpeng, Li Auto including the Mega you show in the picture, Xiaomi, Zeekr, Wey, etc). Only Huawei and their HIMA platform is fully Chinese, but it's full of bugs, and a lot of accidents have happened in recent months because of this ADAS. So what is your point exactly?
0
0
2
@XpeaGPU
AGF
5 months
@BrettBurmanPA @kopite7kimi Nah, 600W is as wrong as 800W was for Ada
1
0
7
@XpeaGPU
AGF
7 months
@iggythelad @formula1champ @dnystedt NVDA is building ~70k NVL72 racks with a value above 200B USD to satisfy its customers, and it's still not enough. With such demand for Blackwell, the 4T market cap milestone is not a question of how but when
1
0
2
@XpeaGPU
AGF
7 months
@formula1champ @dnystedt Correct. Reality is that they are all struggling to keep up with Nvidia's pace and they can't get any ROI out of these chips. CAPEX to compete is sky high and these projects are all losing huge money right now, hoping for a brighter future that may or may not come...
1
0
5
@XpeaGPU
AGF
7 months
@BrettBurmanPA Yes, it's what I get on 448/512bit GB202 vs 4090. And with new Blackwell RT features, the gap is much higher. Real question is what the SKUs will be, because there's no competition this round. NV can basically do whatever they want. From a 5080 on 256bit GB203 to a 5080 on 448bit GB202...
1
0
2
@XpeaGPU
AGF
7 months
@Nibosshi0 @compguru910 Read again slowly and you will see that I don't send personal attacks. Cheers. PS: I already explained the Ampere / Samsung situation. But I won't waste my time again, you are biased in picking only what fits your narrative
2
0
0
@XpeaGPU
AGF
7 months
@Nibosshi0 @compguru910 You obviously have difficulties in reading and understanding. But no offence taken, free speech brother. Just keep it respectful
2
0
0
@XpeaGPU
AGF
7 months
@kopite7kimi What's the relationship between a worldwide launch and a China-specific SKU?
0
0
11
@XpeaGPU
AGF
7 months
@MonkeyDrSentech @MonacoRebooted Of course, GTC is an Nvidia event! By public event, I mean one not organized by Nvidia, like CES, GDC, Computex...
1
0
1
@XpeaGPU
AGF
7 months
@jonathannero111 @Muxim4 @Sebasti66855537 @VideoCardz Blame the lack of competition
1
0
1
@XpeaGPU
AGF
7 months
@jonathannero111 @Muxim4 @Sebasti66855537 @VideoCardz More if they dare to sell the full die at high TDP. And it becomes >2x in RT/PT when the new Blackwell features are enabled
1
0
1
@XpeaGPU
AGF
7 months
@Muxim4 @Sebasti66855537 @VideoCardz Gaming Blackwell will bring its bag of surprises, and the large clock increase on the "same" node is one of them. A totally new SM designed from scratch is another one. Just remember what I said: expect a significant performance increase of GB202 over AD102
1
0
3
@XpeaGPU
AGF
7 months
@burkov Problem is that nowadays H100 80GB is well below $30k. In fact even well below $20k for the majority of Nvidia customers...
0
0
0
@XpeaGPU
AGF
8 months
@FurbyPerson @Etched No, and that's the catch. Nvidia is still needed for training. Moreover, it's really a big gamble. AI is a fast-moving market with new software algorithms coming every month. If a major breakthrough arrives, this ASIC will become useless...
0
0
5
@XpeaGPU
AGF
8 months
@TechEpiphanyYT Absolutely not
@XpeaGPU
AGF
8 months
@EnerTuition @AMD They used vLLM for comparison with Nvidia. Really? vLLM is ~1/3rd the speed of LMDeploy, the most popular CUDA-only inference engine:
0
0
3