Lone_wolf

@2ndtrader

Followers
33K
Following
12K
Statuses
8K

Trader/investor, 10y+. Hosts @Fillorkillpod https://t.co/z1eddrNBj3. Writes the letter https://t.co/FPrGTYzRGO

Stockholm
Joined February 2012
@2ndtrader
Lone_wolf
12 hours
RT @elonmusk: Very important message
0
13K
0
@2ndtrader
Lone_wolf
1 day
To think they get paid for this…
@financialjuice
FinancialJuice
1 day
FED'S GOOLSBEE, IF WE GOT MULTIPLE MONTHS LIKE THIS, ON CPI INFLATION: THEN THE JOB IS CLEARLY NOT DONE.
0
0
18
@2ndtrader
Lone_wolf
9 days
0
4
0
@2ndtrader
Lone_wolf
13 days
@Ellebellesxo Yes, that was a so-called teaser for next week🙏
0
0
0
@2ndtrader
Lone_wolf
14 days
@playerzerozero1 That's probably true. It's the valuation I'm mocking. Elon didn't have an Elon against him, though;)
0
0
2
@2ndtrader
Lone_wolf
15 days
RT @financialjuice: 🔴 ⚠️ BREAKING: $NVDA TRUMP OFFICIALS DISCUSS TIGHTENING CURBS ON NVIDIA CHINA SALES.
0
31
0
@2ndtrader
Lone_wolf
16 days
0
4
0
@2ndtrader
Lone_wolf
16 days
@adamkhootrader Ask what it runs inference on. What if it's Huawei 910?
0
0
2
@2ndtrader
Lone_wolf
16 days
😂
@tekbog
terminally onλine εngineer ~ new era incoming
17 days
i cant believe ChatGPT lost its job to AI
1
1
89
@2ndtrader
Lone_wolf
17 days
RT @DeItaone: $NVDA - MUSK SUGGESTS DEEPSEEK 'OBVIOUSLY' HAS MORE NVIDIA GPUS THAN CLAIMED Elon Musk and Alexandr Wang suggest DeepSeek ha…
0
1K
0
@2ndtrader
Lone_wolf
17 days
@deergod69 😂
0
0
2
@2ndtrader
Lone_wolf
18 days
@Aktiediplomaten Yes, R1 is a reasoning model. Like GPT's o1.
0
0
0
@2ndtrader
Lone_wolf
18 days
RT @pmarca:
[image attachment]
0
275
0
@2ndtrader
Lone_wolf
18 days
@Aktiediplomaten @ABrunkeberg It was probably planned. But still short notice to put something together. Especially 500 yards 😂 See below. Note the date.
@chamath
Chamath Palihapitiya
2 months
DeepSeek, a Chinese AI startup, has released DeepSeek-V3, an open-source LLM that matches the performance of leading U.S. models while costing far less to train. The large language model uses a mixture-of-experts architecture with 671B parameters, of which only 37B are activated for each task. This selective parameter activation allows the model to process information at 60 tokens per second, three times faster than its previous versions. In benchmark tests, DeepSeek-V3 outperforms Meta's Llama 3.1 and other open-source models, matches or exceeds GPT-4o on most tests, and shows particular strength in Chinese language and mathematics tasks. Only Anthropic's Claude 3.5 Sonnet consistently outperforms it on certain specialized tasks. The company reports spending $5.57 million on training through hardware and algorithmic optimizations, compared to the estimated $500 million spent training Llama-3.1.
0
0
0
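The quoted tweet above describes mixture-of-experts selective activation: of 671B total parameters, only about 37B run per token. A minimal sketch of the underlying idea, a top-k expert router, is below; the function names and toy sizes are illustrative assumptions, not DeepSeek's actual code:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score.

    Only the selected experts execute, so most parameters stay
    inactive for any given token -- the 'selective activation'
    the tweet describes. Sizes here are toy values.
    """
    scores = x @ gate_w                    # gate logits, one per expert
    top = np.argsort(scores)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; unchosen experts never run
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)   # only 2 of 8 experts executed
```

With k=2 of 8 experts active, only a quarter of the expert parameters are touched per token, which is how a 671B-parameter model can run at the cost of a much smaller dense one.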
@2ndtrader
Lone_wolf
18 days
@Aktiediplomaten No, it was probably planned. But judging by Stargate, it doesn't seem very structured yet. Probably more than a day, but still not much. See below. Note the date.
@chamath
Chamath Palihapitiya
2 months
[Quoted tweet: same Chamath Palihapitiya post on DeepSeek-V3 as above.]
1
0
3
@2ndtrader
Lone_wolf
18 days
@250GTO_ Put on a lot of Nasdaq shorts this week. Not individual stocks. But I'm looking into it.
0
0
1