
Beth Carey
@BethCarey12
Followers: 760 · Following: 5K · Media: 553 · Statuses: 4K
Enabling the #trustworthyAI language interface for devices
Palo Alto
Joined August 2013
'How to solve AI with Our Brain - the Final Frontier in Science'. Find out about cognitive scientist @jbthinking's journey to ethical AI and his vision for when we have ultimate control of the machine interface - with our language.
RT @Srini_Pa: What's The One Critical Cognitive Capability That Will Unlock AGI, What LLMs Can Never Do (Part 2 of 2). For all the spectacl…
'Deep language understanding' is only possible with 'Deep Symbolics'. The tide is turning. It's receding fast, and those swimming naked are being exposed. Today's generative AI systems all operate on 'generation' without 'understanding'. So how do they solve their problems? A new…
Are you prepared to update this daily, @marcusarvan, with the new appearances of misalignment, like a whack-a-mole game at society's expense? The next generation of language AI achieves the trustworthy human-machine interface because, by design, it…
"If biological life is, as Hobbes famously said, 'nasty, brutish, and short', LLM counterparts are dishonest, unpredictable, and potentially dangerous." What can we do about it? New essay at Marcus on AI.
Today's generative AI can't replicate human intelligence because of its underlying statistical nature. The only trustworthy human-machine interface with language is the one designed on the gold standard: our brain! @jbthinking cuts through to the solution using brain science.
World models are the new goal and the new holy grail. Language alone can't replicate human intelligence; AI needs to understand and simulate the physical world. Stanford's Fei-Fei Li and Meta's Yann LeCun argue conventional LLMs lack spatial reasoning, memory, and planning. Their…
Perfectly said @MrEwanMorrison. This is the single most fundamental flawed assumption that perpetuates today's erratic Generative AI architecture. The knowledge gap between what we know about the human brain and Large Language Models is vast. So to shrink that chasm AI…
Sabine should know better than to use the 'Human brains are machines too' trick. This linguistic trick is often used by AI pushers. The knowledge gap between what we know about the human brain and Large Language Models is vast. So to shrink that chasm AI pushers say "The brain.
RT @MrEwanMorrison: The 'Godfather of AI' is not talking about Large Language Models but about some hypothetical future AI that our current…
The lowest-energy solution always wins. This means an AI/AGI breakthrough based on how the brain works inherits that ~20 W energy efficiency. Given the speed of smartphone adoption, generative AI will fall just as quickly once the required cognitive-science breakthrough is on show.
Rant. Over the last few years, I've grown to despise Sam Altman. The idea that intelligence will require a significant fraction of the power on earth is the rumination of a self-absorbed megalomaniac who can't see past GPUs and LLMs. The human brain needs about 20W, the last I.
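A rough back-of-envelope sketch of the energy gap these two posts point to. The ~20 W brain figure is the one cited above; the per-GPU draw and cluster size below are illustrative assumptions of my own, not numbers from the thread.

```python
# Back-of-envelope comparison of the ~20 W human-brain power budget (cited in the
# posts above) against an assumed large-scale GPU training setup.
# The GPU figures are illustrative assumptions, not claims from the thread.

BRAIN_WATTS = 20          # approximate human brain power draw, as cited above
GPU_WATTS = 700           # assumed draw of one data-centre training GPU
CLUSTER_GPUS = 100_000    # assumed GPU count for a frontier training cluster

cluster_watts = GPU_WATTS * CLUSTER_GPUS
print(f"One GPU ~= {GPU_WATTS / BRAIN_WATTS:.0f}x a brain's power budget")
print(f"Cluster ~= {cluster_watts / BRAIN_WATTS:,.0f}x a brain's power budget")
```

Under these assumed figures, a single GPU draws roughly 35 brains' worth of power and the cluster millions of brains' worth, which is the scale of the efficiency argument being made.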
RT @TrueAIHound: @pmddomingos Yep. Everyone is sick and tired of the constant BS coming from the Sam Altmans and the Dario Amodeis of AGI m…
New breakthroughs from the cognitive sciences are here for the next generation of tools robust enough for the trustworthy human-machine interface of language. They emulate what the brain does for its accuracy and energy efficiency. @jbthinking writes about it here.
It's becoming increasingly apparent that @GaryMarcus is correct. New LLM models are producing diminishing returns. Current models are powerful and useful, but they fall short of being reliable autonomous agents capable of replacing the vast human workforce. New breakthroughs…
Just as Google wasn't the first search engine, the model for how the brain works has been opened up by cognitive scientists outside the CS paradigm. Using brain science, #PatomTheory and RRG linguistics show the way forward with the accuracy and energy efficiency of the brain.
Is everyone fucking nuts? Did we actually believe that, after decades and decades of studying the brain and coming up with next to zilch, a handful of rich, arrogant midwits in Silicon Valley would come to a full understanding of consciousness and deliver to us artificial…
The next generation? It will avoid inaccuracies and will have the energy efficiency of the brain. As for stealing and scraping training data? It is not required.
@emollick 1. Do you really think model collapse has been resolved? E.g., I still see stuff like this. 2. LLMs have hit a point of diminishing returns; lots of projects like GPT-5 have been delayed and/or didn't return the expected results. The question is what…