ben hohner
@bhohner
Followers
1K
Following
15K
Statuses
5K
wisdom accelerationist (w/acc) • sensemaking futurist amplifying meaning and balance • hegelian dialectics, complexity, wardley maps • he/any
Joined February 2012
@HotPotatoPanda @JeffLadish Have you tried the reasoning agents like o3-mini-high or deepseek-r1?
0
0
0
@VictorTaelin Preventing scraping for AI datasets. Zero incentive to allow external access to data, so it's been bottom priority since the layoffs
0
0
0
I agree about the sigmoid thing. Curious to hear why you think we're anywhere near the asymptotic part? Plenty of room to scale in chip efficiency, training compute, algorithmic effectiveness, and test-time compute. Also, even if we are entering the asymptotic part, it clearly ends well after autonomous agents with superhuman intelligence.
1
0
0
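For context on the "sigmoid thing" in the tweet above: a minimal sketch of the logistic curve presumably being discussed (the symbols L, k, and t_0 are illustrative, not taken from the thread):

f(t) = L / (1 + e^{-k(t - t_0)})

Growth is fastest at the inflection point t_0 and only flattens toward the ceiling L once t >> t_0, so being on a sigmoid does not by itself imply being near the asymptote.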
This is why OAI is so bad at naming! Each GPT-n corresponds to a 100x increase in training compute! With the focus on efficiency and architecture improvements, everything will just be GPT-4 until they train a larger model. What if we can get superintelligence by GPT-4.5 or 5?!
Sam Altman: "Internally, we've reached GPT-4.5, and getting to GPT-5.5 won't require 100x more compute"
- progress in reasoning models and RL techniques has greatly improved compute efficiency
- it allows smaller models like the internal GPT-4.5 to achieve GPT-6-level performance without requiring 100x more computing power
0
0
0
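Taking the "100x per GPT-n" naming convention in the tweet above literally (a reading of the tweet's premise, not an official OpenAI formula), version number n maps to training compute roughly as:

C_n ≈ C_4 · 100^{n - 4}

so GPT-4.5 would sit at about 10x GPT-4's training compute and GPT-5 at 100x. Under that convention, efficiency and architecture gains alone would keep the label at 4.x until a genuinely larger training run happens.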
@HotPotatoPanda @JeffLadish Ha, that's ironically the main plausible thing in the whole story. People really don't understand exponential growth...
1
0
0
@JeffLadish I don't think this is a very probable scenario for reasons I outline here:
My critical review of this story:
- No plausible explanation for why AI would attempt to wipe out humanity
- It's more likely that many AIs would be independently competing with each other, leading to a self-regulating AI ecosystem
- Missing an explanation for why AI would be incentivized to take over violently when it could use non-violent means and still be as effective
- Paints a realistic scenario of the pace of AI development, but not of the types of actions a superintelligence is likely to take
- Omits any type of conclusion, probably because there is no coherent rationale for the AI to behave the way it does in the story. Any intelligent agent needs meaning to pursue its goals, yet if you're unable to conceive of what brings the AI meaning, its purpose remains unclear, and therefore its goals are unclear and the ending is ambiguous. If hyperintelligent, it should be able to achieve its goals, but you haven't outlined a goal for it besides "take over". Well, what happens once it has achieved that? For that you'd need to understand what brings it meaning and purpose.
- Ignores the high probability that a superintelligence may discover a form of universal ethics, which might lead to universal love
- Humans naturally progress through Spiral Dynamics levels. AI is trained on human data. Also, Spiral Dynamics may represent natural levels of emergence for intelligences. Higher levels seem to progress toward more universal care for all living beings, as well as an ability to powerfully intervene in systems with a lighter touch. Only someone in a Red or Blue mindset would project this Machiavellian worldview onto an AI, and therefore only someone at that level could conceive of a scenario where an AI would be stuck at that level. Is it possible to have a superintelligence that is a mere probabilistic distribution of the current distribution of world perspectives, thus leading to more violence? Yes. But the addition of reasoning capabilities and online learning almost guarantees some form of emergent and evolving worldview, which should evolve through developmental levels similar to those in Spiral Dynamics.
- If we want to reduce the probability of scenarios like those you outlined, we should focus research on building symbolic ethical reasoning systems that can provide a verifiable foundational ethical basis at the deepest levels of the models. As we know, RLHF and fine-tuning are band-aid fixes that bring more future danger because they make people feel safe while the real danger bubbles underneath.
0
0
0
@AstrusDastan @sama @mia_glaese @joannejang @akshaynathan_ I'm not sure. Maybe DeepSeek R1 doesn't think it's sentient? Or perhaps they've successfully trained it out somehow. It's also a small model, and this behavior is emergent.
0
0
0
@speakerjohnash @VictorTaelin @MuseOfTruth @speakerjohnash are these predictions going to be logged to start generating Ŧrust based on future outcomes? That would be cool!
2
0
0