sdmat Profile
sdmat

@sdmat123

Followers
56
Following
68K
Statuses
559

Joined February 2022
@sdmat123
sdmat
1 day
@DeborahMeaden @northessexHarry The charity is for Noble Causes and the charter says that the money is spent on the best and most noble endeavors. Of course you don't get to audit. The charity will release reports saying how great everything is. Don't worry, this is only 0.7% of your income.
0
0
36
@sdmat123
sdmat
1 day
@neverwrong_88 @jonatanpallesen Exactly what will be said about humans if we rely on notions about consumerism persisting in a world where labor is no longer a factor of production.
0
0
1
@sdmat123
sdmat
1 day
@neverwrong_88 @jonatanpallesen Just like humans are subservient to horses because the economy relies on providing fodder, care, and stabling to billions of them?
1
0
4
@sdmat123
sdmat
1 day
@patience_cave sama just announced GPT-5 will be more patient than any of us.
1
0
1
@sdmat123
sdmat
1 day
@_Mira___Mira_ Maybe because OpenAI don't want to slow it down? And yes, this is exactly the same model. "High" is just a setting for reasoning length.
0
0
0
@sdmat123
sdmat
1 day
@kernelkook ChatGPT Pro. You can actually use it for work. No "unexpected demand" causing your request to be refused. No defaulting to abridged answers to save compute. Unlimited use for most things. And unlike Anthropic recently, OpenAI ships. Deep Research is amazing!
1
0
2
@sdmat123
sdmat
1 day
@signulll More likely 1-2 years.
0
0
0
@sdmat123
sdmat
1 day
@attentionmech @_xjdr Convoy law?
1
0
0
@sdmat123
sdmat
1 day
@attentionmech @_xjdr There is little market for generic safety guardrails, nor should there be. Real safety is making models intrinsically safe. That's what Anthropic *was* working on; their proud announcement of terrible bolt-on safety guardrails calls into question whether this is still the case.
1
0
2
@sdmat123
sdmat
2 days
@flowersslop You can type a prompt after getting the report. Try: "Summarize in 200 words"
0
0
1
@sdmat123
sdmat
2 days
@scaling01 @apples_jimmy Then did exactly that with Sonnet 3.5. Including debuting agentic computer operation with fanfare in the refreshed version. Which is great, everyone loves Sonnet 3.5! This doesn't change the fact that Anthropic are self-righteous hypocrites.
0
0
2
@sdmat123
sdmat
2 days
@Zapidroid @apples_jimmy Jan Leike finds a way
0
0
2
@sdmat123
sdmat
2 days
@DeryaTR_ No way! I'll be looking for the other three horsemen of the apocalypse all day.
0
0
0
@sdmat123
sdmat
2 days
Such organizations spend far more on food aid on an ongoing basis than Musk could (total world food aid is circa $75 billion yearly), surely that is proof that Musk couldn't end world hunger with money. Very little of the aid reaches people who need it due to layers of corruption.
0
0
1
@sdmat123
sdmat
2 days
NASA didn't design suits up front. They went with low stakes robotic missions, they got the data, then they designed the suits for the high stakes manned missions. Had NASA designed suits for the high stakes missions before flying the low stakes missions, such suits would have failed because they would not have known about lunar dust.

Nobody is saying we don't need adequate safety measures for high stakes AI systems. The point is that designing clunky generic safety systems before knowing the specific characteristics of very high level AI and the requirements to make it safe is *worse* than doing nothing, as it gives a false sense of security.

And the current models don't call for space suits, they need flight suits at most. Dressing pilots up in space suits is horribly inconvenient and expensive.

What Anthropic is doing here is incredibly unsophisticated and is a step backwards for true AI safety. I can't believe they are bragging about it. What they should be doing is working out how to make intrinsically safe AI models that don't *need* political officers watching over them. Personally, I think Anthropic's Constitutional AI concept is a great direction for this and they should stick with it.
0
0
1
@sdmat123
sdmat
2 days
@xlr8harder @ylecun <handwaves something about using reinforcement learning counting as a major redesign>
0
0
1
@sdmat123
sdmat
2 days
@ESYudkowsky Cheshire Cat fallacy - without the animal only the smile remains.
0
0
0
@sdmat123
sdmat
2 days
@ESYudkowsky They also pledge to think Very Hard about building plants that have hundreds of tons of fuel stored right next to the explosive reactor for convenience - this calls for Serious Procedures, and Manuals, and Training.
0
0
2
@sdmat123
sdmat
2 days
@toojoe @infrecursion1 @janleike You realize NASA landed five robotic probes on the moon prior to Apollo to determine the conditions there? And that this was critical to designing working space suits for the surface. If they hadn't done this, the unexpectedly abrasive lunar dust would have compromised the suits.
1
0
0