![Beth Carey Profile](https://pbs.twimg.com/profile_images/1153071654781587456/MKh2PP_z_x96.jpg)
Beth Carey
@BethCarey12
Followers: 731 · Following: 4K · Statuses: 4K
Enabling the #trustworthyAI language interface for devices
Palo Alto
Joined August 2013
'How to Solve AI with Our Brain - the Final Frontier in Science': find out about the journey to ethical AI from cognitive scientist @jbthinking, and his vision for when we have ultimate control of the machine interface - with our language.
@GaryMarcus They don't learn on the fly, hence perpetual training and training data are required.
As 'statistical models of knowledge bases rather than knowledge bases' (Thomas Dietterich), they are lossy and dependent on training data. Why does this matter? The 'lossy' limitation shows up as 'hallucinations': generative AI will come back with some answer that neither you nor the large language model knows is correct or not. Like a low-res JPEG, you can't build it back to its original; it's lost. The 'low-res' language in today's LLMs is impossible to fix after the fact, no matter how much resource is applied. How do we rectify this? We implement a cognitive science approach that emulates what the brain does. Using brain science, Patom Theory, and RRG Linguistics, the meaning is stored in a lossless format.
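A minimal Python sketch of the 'lossy' point above (my own illustration, not part of the thread; the quantisation step of 10 is an arbitrary assumption): once detail is discarded, no later processing can recover the original.

```python
# Illustrative toy example of lossy compression, in the spirit of the
# low-res JPEG analogy: detail discarded here cannot be rebuilt later.

def lossy_compress(values, step=10):
    """Quantise each value to the nearest multiple of `step`, discarding detail."""
    return [round(v / step) * step for v in values]

original = [3, 17, 22, 48, 51]
compressed = lossy_compress(original)

print(compressed)  # [0, 20, 20, 50, 50]
# The originals 17 and 22 both became 20: no decompression step can tell
# them apart again, so the exact source can never be reconstructed.
```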
From the perspective of giving away your 'IP', Dorothea, that is reasonable, because why would you want to expose yourself to misquotes and misinterpretation from AI tools that have hallucinations as a feature? You are in good company: @AnthropicAI famously won't accept job applications written by LLMs. From a @neilturkewitz perspective, I personally try to always state 'notwithstanding copyright infringement exposures' if I refer to Use Cases that today's generative AI appears to be used for, such as generating editable copy.
The brain is *not* a processing engine, unlike today's computer-science generative AI/LLM model. Theoretical neuroscience can demonstrate that with a working machine model for language. When this cognitive approach, #PatomTheory, is used, it shows the path forward for the human-machine interface: a virtuous circle of feedback and adjustment between machine and human brain. It doesn't get any better for guiding research that drives useful applications. For a simplification of the complex research from the cognitive sciences, and its applicability to those applications, @jbthinking writes about it here
This is probably the core question. It seems like it should have a straightforward answer. How would you answer this, @burkov?
Why has nobody trained a good model capable of saying "I don't know"? Is it that hard or is it just that nobody cares?
Great question. Yes, it's hard - 'AI-hard' or 'AI-complete'. The difficulty of language and vision comprehension for machines has been known for ~70 years. Today's powerful generative AI papers over the 'understanding' part because it's the hardest. It's why there is so much talk about new architectures and approaches being needed, like neuro-symbolic, or something that is accurate because it is grounded in real-world knowledge. @jbthinking talks about the phenomenon in his new book 'How to Solve AI with Our Brain' - the problem and the solution for that trustworthy multi-modal interface with language.
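For a sense of why 'saying I don't know' is more than a mechanical tweak, here is a hypothetical Python sketch of confidence-based abstention (my own illustration, not from the book or the thread; the 0.7 threshold and the toy labels are assumptions). The abstention logic itself is trivial; the hard part the thread points to is getting confidence scores that actually track whether the model understands.

```python
# Illustrative sketch only: abstain whenever the model's top confidence is low.
# The threshold of 0.7 is an arbitrary assumption for the example.

def answer_or_abstain(probabilities, labels, threshold=0.7):
    """Return the top label, or "I don't know" if confidence is below the threshold."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["cat", "dog", "bird"]
print(answer_or_abstain([0.92, 0.05, 0.03], labels))  # "cat"
print(answer_or_abstain([0.40, 0.35, 0.25], labels))  # "I don't know"
```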
Agreed @DorotheaBaur - for all the problems of generative AI's architecture when applied to Use Cases requiring trustworthiness rather than 'hallucinations', there is a need for disruptive innovation around the core model. The gold-standard language model is based on the brain rather than on statistics and training data. That focus is the future. How to Solve AI with Our Brain: The Final Frontier in Science
“The metaphor of an AI ‘race’ is deeply entangled w/the ‘bigger is better’ paradigm that’s just seen a serious blow. It also suggests a winner-takes-all dynamic that leads to panicked responses (& may not align with Europe’s broader economic & societal objectives).” @F_Kaltheuner
Galileo's observations, and the 'giants' before him, showed the limitations of starting with a flawed model of planetary motion. Is today's generative AI model from computer science obscuring the real path forward - the one that provides the 'gold standard' language model?
It is hard to see an example of brain function in the media in which the brain isn't portrayed as some kind of computer. It's the same in AI, trying to fit the computer paradigm where it doesn't belong. Why else have we spent 70 years looking to solve AI with such limited progress?
Agreed @MrEwanMorrison - today's paradigm of seeing us humans as computers is holding us back. Today's resources are diverted towards scaling and amassing training data that suit 'processing', but that's not what brains do! When we seek to emulate what brains *do* - a cognitive science approach - we inherit the qualities that come with the brain's comprehension: accuracy in conversation, energy efficiency, learning on the fly, and the list goes on. @jbthinking writes about it in his new book 'How to Solve AI with Our Brain'. Spoiler alert: the brain is not a processing engine like a computer.
RT @BethCarey12: Fluency sufficient for everyday conversation requires very few words in a language. Language educators know this - they ar…