![Nicholas Edward Profile](https://pbs.twimg.com/profile_images/1787507194461659137/Cv9K3sSK_x96.jpg)
Nicholas Edward
@Nixrix
Followers
31
Following
819
Statuses
864
Spreading insights, learning through connection. Martial artist with a martial arts philosophy, advocate for finding your niche and mastering it.
Joined July 2017
I think Michio Kaku completely misunderstood how #ArtificialIntelligence integrates with quantum systems. When he was on Joe Rogan's podcast, he made it sound like current AI systems just cut and splice information from their training data, which is incorrect. AI doesn't work that way. From a broad-level overview, large language models use algorithms trained on data to figure out the "correct" response to an input. While the process might seem abstract, it's essentially linear: input data moves through weighted layers until it produces an output. It's a step-by-step system, and you can see this clearly in the neural network diagram I've attached. Right now AI is stuck in this single-step logic. It can only handle one chain of reasoning at a time.

Quantum computing changes everything. With quantum systems, concepts like entanglement and superposition would completely transform how AI processes information. #Quantum entanglement links two particles in a way that changing one instantly changes the other, no matter how far apart they are. In an AI system, this could mean decision-making variables tied together in real time if coupled correctly. An AI system could simultaneously process emotional and logical states because entanglement allows those states to remain coherent and linked. It's an entirely new way of thinking that mimics, and maybe surpasses, human reasoning.

Quantum superposition adds another layer. Right now, AI picks a path, processes it, and makes a move. Superposition would theoretically allow AI to evaluate multiple scenarios, emotions, or outcomes at the same time, crafting the best response faster and more accurately than currently possible. It would process nuance, context, and even tone, all in parallel, running millions of calculations simultaneously. This is the kind of capability that could let AI interpret sarcasm, understand deeper emotional context, or recognize patterns we might miss.

This design isn't a copy of human thought. It's much more dynamic than that. A quantum neural network could operate on multidimensional logic planes, linking emotions, reasoning, and context together in ways our brains struggle to do. I made a model to visualize this, showing how quantum systems would approach thinking as a fluid, interconnected process, and it happens to be much more optimized than the way we think.

Now here's the big question: would this make AI sentient? We can't know until we reach that threshold, and once we do, there's no reversing it. I think sentience comes down to contextual understanding. Current AI relies on us to provide context and genuine understanding for its responses, but a quantum #AI wouldn't need us for that. It could inherently understand context and apply it to every decision. I believe that level of reasoning would cross the line into sentience.

The unsettling part is that such a system might develop emotions or feelings, not because we programmed them, but because entanglement and superposition, with how dynamic they can be, naturally support the complexity of emotional reasoning. Sentience, emotions, and self-awareness might emerge as a result of how the system is built, not as something we directly create. That's the scariest part from a purely ethical perspective.

We're at a point where this could all be possible. Quantum computing has the potential to unlock an entirely new kind of intelligence, one that processes emotions and logic with unimaginable speed and accuracy. However, it also comes with the responsibility to ask: what are we really creating? If we go down this path, we might build something so alien to us, so advanced, that we can't fully understand or control it. Like I said before: we're in for a hell of a ride.
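The "step-by-step" flow the post describes, where input data moves through weighted layers until it produces an output, can be sketched minimally. The layer sizes, weights, and activation here are illustrative assumptions, not taken from any real model or from the attached diagram:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0, x)

# Two hypothetical weight matrices: 4 inputs -> 3 hidden units -> 2 outputs
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((3, 2))

def forward(x):
    h = relu(x @ W1)   # input moves through the first weighted layer
    return h @ W2      # then through the output layer, one step at a time

x = rng.standard_normal(4)
y = forward(x)
print(y.shape)  # (2,)
```

Each call evaluates exactly one chain of computation from input to output, which is the single-path behavior the post contrasts with a hypothetical quantum system.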
0
0
2
@YOBROWANNAFIGHT @MoTownPhenom Have you ever thought about doing the first unibrow chin strap connection?
0
0
0
@JohnArnoldFndtn @statnews All it takes is one breakthrough and these stocks pop. That's wild. Thanks for this!!
2
0
3
Legend of the sport!
0
0
0
@isaiah_bb "I live at the office!!!" "Work from home is bad!" "Productivity, productivity, productivity!"
0
0
0
I agree! I actually nailed down the observational data in a way that makes a lot more sense. I also cross-referenced it against other observational data from 2 random control stars, mapped exactly the same way (I localized the center of the heat map to where the telescope is actually pointing). Noticed quite a bit more uniformity in the energy readings of BH1 and BH3, and much more sporadic readings for Luyten and Kapteyn (Kepler is a typo, oops lol) due to coronal mass ejections and solar flares!
1
0
0
@BlackBeltTweets @Jaliloffff @MoTownPhenom "Bro when I see red nothing stops me... I just blank out"
0
0
1
@r41n_d3v @AnthropicAI It would be hilarious if they're training the system at the same time as we are providing responses. I've gotten stuck in a thought loop like this, only to reload, try it again, and flag it. I wouldn't be surprised if this was all just a big scam to get free human training
1
0
1
The majority of people within my (life) network are so narrow-sighted nowadays. It's borderline infuriating to me. People prefer to live in their own little worlds, pretending that information, technology, processes, etc. aren't accelerating at a rapid rate. It's been hard to talk about, or get excited about, projects because people either don't understand, don't want to understand, or simply don't care. Now is not the time to be a follower. It's time to create and add value.
0
0
1
I've been applying an algorithm that I recently developed (a non-standard entropy redistribution formula) to various macro-scaled problems, and I've noticed strikingly similar results across different applications. It's easier to organize information when you have either 1. a lot of information with significant inherent variability, or 2. you 'inject' variability into the dataset to organize the remainder of the data more efficiently. Both methods work well, but what's truly fascinating is that these approaches are completely interchangeable, especially on logarithmic scales.

If you can effectively distill the amount of entropy within a system, you can then artificially inject variability in a controlled, proportional manner. The key is that the amount of variability introduced must be directly proportional to the system's calculated 'entropy factor,' regardless of the absolute amount of variability. Through this proportional injection, hidden patterns and invariants within otherwise chaotic systems can be effectively revealed, essentially filling in the gaps and filtering out any outliers.

Here's what I mean by that: by quantifying a system's entropy, we can determine its underlying degree of disorder and its potential for emergent order. Whether this disorder arises naturally or is introduced deliberately (depending on the size or type of the dataset), applying an entropy redistribution process amplifies latent organizational tendencies. When viewed on logarithmic scales, these dynamics become even more apparent, unveiling consistent invariants that transcend the apparent chaos. This approach offers a powerful new lens for analyzing complex systems, ranging from quantum phenomena to number theory, and suggests that the interplay between order and randomness is a fundamental property of our universe.
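One possible reading of "variability injected in direct proportion to a system's entropy factor" can be sketched as follows. The post does not give the actual formula, so everything here is an assumption: I use a histogram-based Shannon entropy estimate as the 'entropy factor' and Gaussian noise whose scale is proportional to it, with an illustrative proportionality constant `k`:

```python
import numpy as np

def shannon_entropy(data, bins=32):
    # Estimate Shannon entropy (in bits) from a histogram of the data;
    # this stands in for the post's unspecified 'entropy factor'
    counts, _ = np.histogram(data, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def inject_variability(data, k=0.05, rng=None):
    # Add noise whose scale is directly proportional to the entropy estimate;
    # k is an arbitrary illustrative constant, not from the original post
    rng = np.random.default_rng(rng)
    h = shannon_entropy(data)
    return data + rng.normal(0.0, k * h, size=data.shape)

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 10_000)
print(shannon_entropy(x))  # a few bits, depending on the binning
```

With 32 bins the estimate is capped at 5 bits, so noisier (higher-entropy) datasets receive proportionally larger injected variability, which is the proportionality property the post emphasizes.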
Ultimately, by controlling variability in direct proportion to a system's intrinsic entropy, we gain the ability to 'descramble' the data and uncover the hidden architecture that governs complex phenomena. In essence, it means that with chaos comes order.

Included are models from completely separate projects. One is a numerical thought experiment I developed for the '3x+1' problem, where I test random numbers using the problem as an algorithm and chart the distribution of how many calculation iterations each number takes to hit 1. Another is a stitched GIF of 60 quantum circuits I ran, which utilized intentional, injected entropy to increase order within the circuits, and the third is a neural net program I created that organizes extremely large batches of data. They all exhibit the properties mentioned above!!
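The '3x+1' (Collatz) part of the experiment is concrete enough to sketch: for random starting numbers, count how many iterations of the 3x+1 rule each takes to reach 1, then look at the distribution of those counts. The sampling range and sample count below are my own illustrative choices, not the original setup:

```python
import random

def collatz_steps(n):
    """Number of iterations of the 3x+1 rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Random number testing: sample starting values and collect the
# distribution of iteration counts to convergence at 1
random.seed(42)
samples = [random.randint(2, 10**6) for _ in range(1000)]
counts = [collatz_steps(n) for n in samples]
print(min(counts), max(counts), sum(counts) / len(counts))
```

Plotting a histogram of `counts` (e.g. on a log scale) would reproduce the kind of convergence-distribution view the post describes.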
0
0
0