![tino ⏸ Profile](https://pbs.twimg.com/profile_images/1722724008607518720/Vqsso8X6_x96.jpg)
tino ⏸
@bomelino
Followers
28K
Following
40K
Statuses
14K
pause AI | you can also catch me live:
BÜRO
Joined July 2012
@optimysticism @TylerAlterman for me:
mask: ego / your projection of yourself
grass/sky: your projection of the world
black with stripes: consciousness in general
eyes: the now / your awareness
1
0
4
i do agree with you that this cooperative mode exists, and don't get me wrong - i want you to be right. But: how can you be sure that there will be no mode above that? humans are *very* inefficient with their energy usage. we don't cooperate with beavers to build dams, even if beavers might think "I could have done that!" Cooperating with humans/businesses will be excruciatingly slow for an AI - especially after an intelligence explosion.

I get the feeling your intuition is that we simply "pick up the pace" and become smarter and faster humans and businesses - but this obviously hits a ceiling immediately, way below the level on which a never-stopping, self-improving algorithm operates.

regarding your grey goo counterargument: offence is easier than defence. and even if there were a (stable?) balance - why would we want to put ourselves in this situation? this is not a world I want to live in. also: I cannot see how this way of thinking works for these kinds of threats. the AI won't tell us "by the way, I'm gonna release the grey goo in a couple of months, you better prepare!"

in a market you can have paradigm shifts. businesses die and new ones get created. in the context of AI, a paradigm shift can mean you literally die. how can you be sure the business model "humanity" won't get replaced? the universe doesn't care about us. we are not characters in a story.
1
0
1
"... because it's always looking in reverse. The training data (...) contains all sorts of information about the world that is outdated." - this goes out the window the second one of these models creates a new algorithm with a real time feedback-loop. - if the ai operates on a much faster timescale the whole world suddenly looks static and you don't need to cooperate - there can be invariants in a system so that you simply don't need that much feedback. the aerovore self replicating nano drone swarm does not have to react to your fists.
1
0
0
@CKakadan @RogueUAPTF @TOEwithCurt
1. imagine an answer X that would satisfy you
2. ask "why X?"
it is possible to ask inconsistent questions that cannot have an answer
0
0
0
Interesting. What's your model of thinking/understanding, and how does it relate to a ceiling?

My ontology of thinking and understanding is roughly a graph of facts, tools, and abstractions. If something is complicated, it gets divided into simpler parts. Every "edge" in that graph represents a simple relation, which is not inherently complicated. There are no truly "hard" thinking steps, in my opinion. For example: you learn how differentiation works (including the proofs), but then you store the differentiation rules as a shortcut.

In my experience, whenever I encountered a problem that felt like I just couldn't understand it, it always felt easy after it clicked. There was always some missing piece I needed to understand in order to decipher the explanation. I've always thought that such challenges stemmed from ambiguity in the explanation itself: a problem of language and differing assumptions about the listener's world model.

When you try to understand something, you're essentially fitting puzzle pieces into your graph. You map the explanation to your own ontology. However, this process can fail if you pattern-match the explanation to the wrong place in your graph, or if you can't find the right place because the explanation is too abstract and lacks grounding. Additionally, parts of your graph might be disconnected or contain errors in their construction, which can make it harder to add new pieces or even talk clearly about them. Every explanation functions like a pointer in an implicit map. If your map is off, the pointer won't be useful.

"Cognitive horsepower," then, would be the speed and efficiency with which you can modify your graph. A "cognitive ceiling" would represent the limit of the graph's size or complexity that you can construct in your lifetime. I don't think we're anywhere near this limit for any topic, for anyone.
0
0
0
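(A minimal toy sketch of that mental model, purely illustrative: the class and relation names are hypothetical, not anything from the thread. It shows a concept graph where each edge is a simple relation, and "understanding" an explanation means attaching a new node to existing ones; the attempt fails when no grounding node exists, i.e. the explanation is "too abstract.")

```python
# Toy model of the "graph of facts, tools, and abstractions" described above.
# Hypothetical illustration only; names and structure are invented for this sketch.

class ConceptGraph:
    def __init__(self):
        self.nodes = set()   # known facts / tools / abstractions
        self.edges = {}      # node -> {neighbor: relation}; every relation is "simple"

    def knows(self, concept):
        return concept in self.nodes

    def learn(self, concept, relations):
        """Try to attach a new concept via simple relations to existing nodes.

        Returns False when the explanation lacks grounding, i.e. none of the
        referenced concepts already exist in the graph.
        """
        grounded = {c: r for c, r in relations.items() if self.knows(c)}
        if not grounded:
            return False     # "too abstract": nothing to map it onto
        self.nodes.add(concept)
        self.edges[concept] = grounded
        for other, rel in grounded.items():
            self.edges.setdefault(other, {})[concept] = rel
        return True


# Usage: differentiation only "clicks" once the prerequisite pieces are in place.
g = ConceptGraph()
print(g.learn("derivative", {"limit": "is a"}))              # False: missing piece
g.nodes.add("function")                                      # seed a starting node
g.learn("limit", {"function": "defined pointwise over"})
print(g.learn("derivative", {"limit": "is a", "function": "acts on"}))  # True: it clicks
```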
Maybe you are measuring two effects: introspection and ontological confusion (of M2). If M2 is somewhat capable of introspection before finetuning, feeding it data that overrides "you" must be very confusing. It would be very interesting to see whether the accuracy improves if the data explicitly talks about another model (compared to the test you already did).
1
0
2
and it seems impossible to even identify what those weak points are. they get annoyed if you ask, because these kinds of questions seem stupid to them (the answer is "obvious"). in addition to this: the different weak points don't have to be consistent. A and not-A can be "true" in different contexts.
0
0
0
@Johnny2Fingersz @Plinz hm, good point. i meant: if you want to use this as the only "true" benchmark to decide whether something is an AGI, then it's not practical/safe.
1
0
1