Beth Carey

@BethCarey12

Followers: 760 · Following: 5K · Media: 553 · Statuses: 4K

Enabling the #trustworthyAI language interface for devices

Palo Alto
Joined August 2013
@BethCarey12
Beth Carey
6 months
'How to Solve AI with Our Brain - the Final Frontier in Science'. Find out about the journey to ethical AI by cognitive scientist @jbthinking, and his vision for when we have ultimate control of the machine interface: our language.
1 · 2 · 6
@BethCarey12
Beth Carey
10 days
RT @Srini_Pa: What's The One Critical Cognitive Capability That Will Unlock AGI, That LLMs Can Never Do (Part 2 of 2). For all the spectacl…
0 · 1 · 0
@BethCarey12
Beth Carey
10 days
0 · 0 · 0
@BethCarey12
Beth Carey
10 days
'Deep language understanding' is only possible with 'Deep Symbolics'. The tide is turning. It's receding fast, and those swimming naked are being exposed. Today's generative AI systems all operate on 'generation' without 'understanding'. So how do they solve their problems? A new.
@GaryMarcus
Gary Marcus
10 days
Longer discussion with more context here:
1 · 1 · 1
@BethCarey12
Beth Carey
13 days
0 · 0 · 1
@BethCarey12
Beth Carey
13 days
Are you prepared to update this daily, @marcusarvan, with the new appearances of misalignment, like a whack-a-mole game at society's expense 😀 or should I say, 😢? The next generation of language AI achieves the trustworthy human-machine interface because by design it.
@GaryMarcus
Gary Marcus
13 days
"If biological life is, as Hobbes famously said, 'nasty, brutish, and short', LLM counterparts are dishonest, unpredictable, and potentially dangerous." What can we do about it? New essay at Marcus on AI.
2 · 0 · 1
@BethCarey12
Beth Carey
18 days
0 · 1 · 1
@BethCarey12
Beth Carey
18 days
Today's generative AI can't replicate human intelligence because of its underlying statistical nature. The only trustworthy human-machine interface with language is the one designed on the gold standard: our brain! @jbthinking cuts through to the solution using brain science.
@kimmonismus
Chubby♨️
20 days
World models are the new goal and the new holy grail. Language alone can't replicate human intelligence: AI needs to understand and simulate the physical world. Stanford's Fei-Fei Li and Meta's Yann LeCun argue conventional LLMs lack spatial reasoning, memory, planning. Their
1 · 1 · 5
@BethCarey12
Beth Carey
18 days
The brain is a multi-sensory pattern matcher and this is how it works.
0 · 1 · 1
@BethCarey12
Beth Carey
18 days
Perfectly said, @MrEwanMorrison. This is singularly the most fundamental flawed assumption perpetuating today's erratic generative AI architecture. The knowledge gap between what we know about the human brain and Large Language Models is vast. So to shrink that chasm AI.
@MrEwanMorrison
Ewan Morrison
18 days
Sabine should know better than to use the 'Human brains are machines too' trick. This linguistic trick is often used by AI pushers. The knowledge gap between what we know about the human brain and Large Language Models is vast. So to shrink that chasm AI pushers say "The brain.
2 · 1 · 7
@BethCarey12
Beth Carey
18 days
RT @MrEwanMorrison: The "Godfather of AI" is not talking about Large Language Models but about some hypothetical future AI that our current…
0 · 15 · 0
@BethCarey12
Beth Carey
21 days
0 · 0 · 0
@BethCarey12
Beth Carey
21 days
1 · 0 · 0
@BethCarey12
Beth Carey
21 days
The lowest-energy solution always wins. This means an AI/AGI breakthrough based on how the brain works inherits its ~20W energy efficiency. With the speed of smartphone adoption, generative AI will fall once the required cognitive-science breakthrough is on show.
@TrueAIHound
AGIHound
21 days
Rant. Over the last few years, I've grown to despise Sam Altman. The idea that intelligence will require a significant fraction of the power on earth is the rumination of a self-absorbed megalomaniac who can't see past GPUs and LLMs. The human brain needs about 20W, the last I.
2 · 2 · 9
@BethCarey12
Beth Carey
26 days
RT @TrueAIHound: @pmddomingos Yep. Everyone is sick and tired of the constant BS coming from the Sam Altmans and the Dario Amodeis of AGI m…
0 · 1 · 0
@BethCarey12
Beth Carey
27 days
0 · 0 · 0
@BethCarey12
Beth Carey
27 days
New breakthroughs from the cognitive sciences are here for the next generation of tools, robust enough for the trustworthy human-machine interface of language. They emulate what the brain does for its accuracy and energy efficiency. @jbthinking writes about it here.
@JHochderffer
Jeffrey Dean Hochderffer
27 days
It's becoming increasingly apparent that @GaryMarcus is correct. New LLM models are producing diminishing returns. Current models are powerful and useful, but they are short of being reliable autonomous agents capable of replacing the vast human workforce. New breakthroughs.
1 · 0 · 1
@BethCarey12
Beth Carey
27 days
0 · 0 · 0
@BethCarey12
Beth Carey
27 days
Just like Google wasn't the first search engine, the model for how the brain works is being opened up by cognitive scientists outside the CS paradigm. Using brain science, #PatomTheory and RRG linguistics show the way forward with the accuracy and energy efficiency of the brain.
@TLiterarian
The Liturgiologicalpheonician
28 days
Is everyone fucking nuts? Did we actually believe that after decades and decades of studying the brain and coming up with next to zilch, that a handful of rich, arrogant midwits in Silicon Valley would come to a full understanding of consciousness and deliver to us artificial.
1 · 1 · 3
@BethCarey12
Beth Carey
27 days
The next generation? It will avoid inaccuracies and will have the energy efficiency of the brain. As for stealing and scraping training data? It is not required.
@GaryMarcus
Gary Marcus
27 days
@emollick 1. You really think model collapse has been resolved? E.g. I still see stuff like this. 2. LLMs have hit a point of diminishing returns; lots of projects like GPT-5 have been delayed and/or didn't return the expected results. The question is what.
0 · 0 · 1
@BethCarey12
Beth Carey
27 days
0 · 0 · 0