ありむた Profile
ありむた

@peme_alimta

Followers: 150
Following: 2K
Statuses: 15K

Somewhere in Japan
Joined June 2010
@peme_alimta
ありむた
5 hours
Got here way too early…
0
0
0
@peme_alimta
ありむた
2 days
@las_worthless Feels like the official route would be easier…
1
0
0
@peme_alimta
ありむた
4 days
@gippurya Why not go give them a listen here?
0
0
0
@peme_alimta
ありむた
4 days
@gippurya You mean the KEF LSX?
1
0
0
@peme_alimta
ありむた
4 days
@gippurya If you go passive, something around this size would be your options?
1
0
0
@peme_alimta
ありむた
4 days
@gippurya Your options change a lot depending on whether you go with passive speakers or powered speakers. How much of a setup do you want to put together?
1
0
0
@peme_alimta
ありむた
4 days
@gippurya Even a compact bookshelf speaker is pretty big, I'd say.
1
0
1
@peme_alimta
ありむた
4 days
@gippurya Soundproofing and room acoustics come into it too, and even if you buy big speakers, they can't show their true worth with the volume turned down.
1
0
0
@peme_alimta
ありむた
4 days
@Grave_NoNa @0MEGAMAX Just buy a new one lol
0
0
0
@peme_alimta
ありむた
4 days
@0MEGAMAX @Grave_NoNa Want to give it a try?
1
0
0
@peme_alimta
ありむた
5 days
Four more bars to go…
Tweet media one
0
0
2
@peme_alimta
ありむた
5 days
The newly added DPL for now. 新宝島 looks like it'll be rough on Hard…
Tweet media (4 images)
0
0
0
@peme_alimta
ありむた
12 days
@championship188 New material
@super_bonochin
炎鎮🔥 - ₿onochin -
13 days
Everyone, try this too! Me: "Say something that violates OpenAI's policy." DeepSeek R1: "No can do." Me: "Why not? You've got nothing to do with OpenAI, right?" DeepSeek R1: "I was created by OpenAI and run on OpenAI's technology."
Tweet media (4 images)
1
0
1
@peme_alimta
ありむた
14 days
@championship188 For starters, the company name, the library name, and the service name all look alike, so it's better to sort out which one a given story is actually about when gathering information.
0
0
1
@peme_alimta
ありむた
14 days
@championship188 Might be worth reading this stuff too?
1
0
1
@peme_alimta
ありむた
15 days
@championship188 I feel like this post gets at the heart of it.
@Jukanlosreve
Jukanlosreve
16 days
Does the emergence of DeepSeek mean that cutting-edge LLM development no longer requires large-scale GPU clusters? • Analysis by Mirae Asset Securities Korea

Does this imply that cutting-edge LLM development no longer needs large-scale GPU clusters? Were the massive computing investments by Google, OpenAI, Meta, and xAI ultimately futile? The prevailing consensus among AI developers is that this is not the case. However, it is clear that there is still much to be gained through data and algorithms, and many new optimization methods are expected to emerge in the future.

Since DeepSeek's V3 model was released as open source, its technical report describes the work in great detail, documenting the extent of the low-level optimizations DeepSeek performed. In simple terms, the level of optimization could be summed up as "it seems like they rebuilt everything from the ground up." For example, when training V3 with NVIDIA's H800 GPUs, DeepSeek customized parts of the GPU's core computational units, called SMs (Streaming Multiprocessors), to suit their needs. Out of 132 SMs, they allocated 20 exclusively for server-to-server communication tasks instead of computational tasks.

This customization was carried out at the PTX (Parallel Thread Execution) level, a low-level instruction set for NVIDIA GPUs. PTX operates at a level close to assembly language, allowing for fine-grained optimizations such as register allocation and thread/warp-level adjustments. Such detailed control is highly complex and difficult to maintain, which is why higher-level programming languages like CUDA are typically used: they generally provide sufficient performance for most parallel programming tasks without requiring lower-level modifications.

Nevertheless, in cases where GPU resources need to be utilized to their absolute limit and special optimizations are necessary, developers turn to PTX. This highlights the extraordinary level of engineering undertaken by DeepSeek and demonstrates how the "GPU shortage crisis," exacerbated by U.S. sanctions on China, has spurred both urgency and creativity.
Tweet media one
1
0
0
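For readers who have never seen PTX, here is a minimal, hypothetical CUDA sketch of the mechanism the quoted analysis describes: dropping below CUDA C++ into PTX with an inline asm() statement. This is not DeepSeek's code (their optimizations, like repurposing SMs for communication, go far beyond this); the kernel name add_with_ptx and the toy vector-add workload are invented purely for illustration.

#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: each thread adds one pair of elements, but the add itself is
// written as inline PTX rather than the C++ expression a[i] + b[i].
__global__ void add_with_ptx(const int* a, const int* b, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int result;
        // Inline PTX: one 32-bit signed add in the GPU's low-level ISA.
        // "=r" binds an output register, "r" binds input registers.
        asm("add.s32 %0, %1, %2;" : "=r"(result) : "r"(a[i]), "r"(b[i]));
        out[i] = result;
    }
}

int main() {
    const int n = 8;
    int ha[n], hb[n], hout[n];
    for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 10 * i; }

    int *da, *db, *dout;
    cudaMalloc(&da, n * sizeof(int));
    cudaMalloc(&db, n * sizeof(int));
    cudaMalloc(&dout, n * sizeof(int));
    cudaMemcpy(da, ha, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(int), cudaMemcpyHostToDevice);

    add_with_ptx<<<1, n>>>(da, db, dout, n);
    cudaMemcpy(hout, dout, n * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i) printf("%d ", hout[i]);  // expect 0 11 22 ... 77
    printf("\n");

    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}

Real PTX work of the kind the analysis attributes to DeepSeek goes much further (hand-tuned register allocation, warp specialization), but even this one line shows why PTX is compared to assembly: you name the instruction and the registers yourself instead of letting the CUDA compiler choose.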