![sdmat Profile](https://pbs.twimg.com/profile_images/1799186465345220609/1sfP8qiJ_x96.jpg)
sdmat (@sdmat123)
Followers: 56 · Following: 68K · Statuses: 559 · Joined February 2022
@DeborahMeaden @northessexHarry The charity is for Noble Causes and the charter says that the money is spent on the best and most noble endeavors. Of course you don't get to audit. The charity will release reports saying how great everything is. Don't worry, this is only 0.7% of your income.
@neverwrong_88 @jonatanpallesen Exactly what will be said about humans if we rely on the notion that consumerism will persist in a world where labor is no longer a factor of production.
@neverwrong_88 @jonatanpallesen Just like humans are subservient to horses because the economy relies on providing fodder, care, and stabling to billions of them?
@_Mira___Mira_ Maybe because OpenAI don't want to slow it down? And yes, this is exactly the same model. "High" is just a setting for reasoning length.
@kernelkook ChatGPT Pro. You can actually use it for work. No "unexpected demand" causing your request to be refused. No defaulting to abridged answers to save compute. Unlimited use for most things. And unlike Anthropic recently, OpenAI ships. Deep Research is amazing!
@attentionmech @_xjdr There is little market for generic safety guardrails, nor should there be. Real safety is making models intrinsically safe. That's what Anthropic *was* working on; their proud announcement of terrible bolt-on safety guardrails calls into question whether this is still the case.
@scaling01 @apples_jimmy Then they did exactly that with Sonnet 3.5, including debuting agentic computer operation with fanfare in the refreshed version. Which is great, everyone loves Sonnet 3.5! This doesn't change the fact that Anthropic are self-righteous hypocrites.
NASA didn't design suits up front. They flew low stakes robotic missions, got the data, then designed the suits for the high stakes manned missions. Had NASA designed suits for the high stakes missions before flying the low stakes ones, those suits would have failed, because NASA would not have known about lunar dust.

Nobody is saying we don't need adequate safety measures for high stakes AI systems. The point is that designing clunky generic safety systems before knowing the specific characteristics of very high level AI, and the requirements to make it safe, is *worse* than doing nothing, as it gives a false sense of security.

And the current models don't call for space suits; they need flight suits at most. Dressing pilots up in space suits is horribly inconvenient and expensive.

What Anthropic is doing here is incredibly unsophisticated and a step backwards for true AI safety. I can't believe they are bragging about it. What they should be doing is working out how to make intrinsically safe AI models that don't *need* political officers watching over them. Personally, I think Anthropic's Constitutional AI concept is a great direction for this and they should stick with it.
@xlr8harder @ylecun <handwaves something about using reinforcement learning counting as a major redesign>
@ESYudkowsky They also pledge to think Very Hard about building plants that have hundreds of tons of fuel stored right next to the explosive reactor for convenience - this calls for Serious Procedures, and Manuals, and Training.
@toojoe @infrecursion1 @janleike You realize NASA landed five robotic probes on the moon prior to Apollo to determine the conditions there? And that this was critical to designing working space suits for the surface? If they hadn't done this, the unexpectedly abrasive lunar dust would have compromised the suits.