![Andre Beukers Profile](https://pbs.twimg.com/profile_images/1358720868482973699/hPvbyQ7u_x96.jpg)
Andre Beukers
@BeukersAndre
Followers: 43 · Following: 14 · Statuses: 61
PhD, building cognitive systems in Utah
Salt Lake City, Utah
Joined January 2021
@AravSrinivas what makes you confident in the premise of this question? it's not obvious to me that the inference architecture is easily imitable
0 · 0 · 0
@paulg @fkasummer if intelligence is free, then we're not paying for energy, which means energy is solved
0 · 0 · 1
@DaveShapi the premises are too loose for the question to make sense. what does "smarter" than any human mean? better at algebra? a better designer? able to set goals and make plans? capable of strategic thinking?
0 · 0 · 0
doesn't understand the meaning of the word "model"
You could represent GPT-4 as a mathematical function on paper, detailing every weight, bias, and operation. By manually processing an input with a calculator, step by step, you'd compute the output of each neuron, layer by layer, until finally reaching the model's response.

If GPT-4 were conscious, where would that consciousness exist? In the written equations? The paper and ink? Or in the act of computation by the calculator? The issue is that the calculator (the processor) and the neural net on paper (the knowledge) are entirely separate, unable to interact with or influence one another.

This isn't how our brains work. In the brain, computation and storage are one, interacting with and modifying each other seamlessly. Our thoughts alter our brain's structure through neuroplasticity. LLMs can only simulate this: any "thoughts" they generate during inference don't alter their frozen internals. We aren't static, but LLMs are. From GPT-4's perspective, every output happens at a single, immutable moment in time. It doesn't live in our reality as we experience it.
0 · 0 · 1
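The frozen-weights point above can be made concrete with a toy sketch. This is an illustrative assumption, not GPT-4's actual architecture: a tiny fixed two-layer network whose forward pass is a pure function of its weights. Inference reads the weights but never writes them, so the "calculator" never edits the "paper".

```python
import copy
import math

# Toy "neural net on paper": frozen weights, chosen arbitrarily for
# illustration. Nothing here comes from a real model.
W1 = [[0.5, -0.2], [0.1, 0.8]]   # frozen weights, layer 1
W2 = [0.3, -0.7]                  # frozen weights, layer 2

def forward(x):
    # Pure function of the frozen weights: computing an output reads
    # W1/W2 but never modifies them.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * hi for w, hi in zip(W2, h))

snapshot = copy.deepcopy((W1, W2))
y1 = forward([1.0, -0.5])
y2 = forward([1.0, -0.5])
assert y1 == y2              # same input, same output, every time
assert (W1, W2) == snapshot  # inference left the weights untouched
```

A brain-like system would instead update `W1`/`W2` as a side effect of each forward pass; here that update simply has nowhere to happen during inference.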
@pmarca @GillesEGignac does this get us closer to a measurement instrument in millielon units?
0 · 0 · 0