Beth Carey

@BethCarey12

Followers
762
Following
5K
Media
553
Statuses
4K

Enabling the #trustworthyAI language interface for devices

Palo Alto
Joined August 2013
@BethCarey12
Beth Carey
8 months
'How to solve AI with Our Brain - the Final Frontier in Science' Find out about the journey to ethical AI by cognitive scientist @jbthinking and what his vision is when we have the ultimate control of the machine interface - with our language.
1
2
6
@Srini_Pa
Srini Pagidyala
3 months
๐–๐ก๐š๐ญ'๐ฌ ๐“๐ก๐žย ๐Ž๐ง๐ž ๐‚๐ซ๐ข๐ญ๐ข๐œ๐š๐ฅ ๐‚๐จ๐ ๐ง๐ข๐ญ๐ข๐ฏ๐ž ๐‚๐š๐ฉ๐š๐›๐ข๐ฅ๐ข๐ญ๐ฒ ๐“๐ก๐š๐ญ ๐–๐ข๐ฅ๐ฅ ๐”๐ง๐ฅ๐จ๐œ๐ค ๐€๐†๐ˆ, ๐“๐ก๐š๐ญ ๐‹๐‹๐Œ๐ฌ ๐‚๐š๐ง ๐๐ž๐ฏ๐ž๐ซ ๐ƒ๐จ (Part 2 of 2) For all the spectacle surrounding todayโ€™s artificial intelligence, from LLMs that churn out essays on
3
1
10
@BethCarey12
Beth Carey
3 months
'Deep language understanding' is only possible with 'Deep Symbolics'. The tide is turning. It's receding fast and those swimming naked are being exposed. Today's generative AI systems all operate on 'generation' without 'understanding'. So how do they solve their problems? A new
@GaryMarcus
Gary Marcus
3 months
longer discussion with more context here:
1
1
2
@BethCarey12
Beth Carey
3 months
Are you prepared to update this daily @marcusarvan with the new appearances of misalignment like a whack-a-mole game at society's expense 😀 or should I say, 😢? The next generation of language AI achieves the trustworthy human-machine interface because by design it
@GaryMarcus
Gary Marcus
3 months
"If biological life is, as Hobbes famously said, 'nasty, brutish, and short', LLM counterparts are dishonest, unpredictable, and potentially dangerous." What can we do about it? New essay at Marcus on AI.
2
0
1
@BethCarey12
Beth Carey
3 months
Today's generative AI can't replicate human intelligence because of its underlying statistical nature. The only trustworthy human-machine interface with language is the one designed on the gold standard - our brain! @jbthinking cuts through to the solution using brain science
@kimmonismus
Chubby♨️
3 months
World models are the new goal and the new holy grail. Language alone can't replicate human intelligence; AI needs to understand and simulate the physical world. Stanford's Fei-Fei Li and Meta's Yann LeCun argue conventional LLMs lack spatial reasoning, memory, planning. Their
1
1
5
@BethCarey12
Beth Carey
3 months
Perfectly said @MrEwanMorrison. This is singularly the most fundamental flawed assumption that perpetuates today's erratic Generative AI architecture. The knowledge gap between what we know about the human brain and Large Language Models is vast. So to shrink that chasm AI
@MrEwanMorrison
Ewan Morrison
3 months
Sabine should know better than to use the 'Human brains are machines too' trick. This linguistic trick is often used by AI pushers. The knowledge gap between what we know about the human brain and Large Language Models is vast. So to shrink that chasm AI pushers say "The brain
2
1
7
@MrEwanMorrison
Ewan Morrison
3 months
The "Godfather of AI" is not talking about Large Language Models but about some hypothetical future AI that our current AI is not capable of reaching. Note: AI gurus have been making such claims since the 1960s.
8
15
100
@BethCarey12
Beth Carey
3 months
The lowest energy solution always wins. This means that an AI/AGI breakthrough based on how the brain works inherits that ~20W energy efficiency. With the speed of smartphone adoption, generative AI will fall when the required cognitive science breakthrough is on show.
@TrueAIHound
AGIHound
3 months
Rant: Over the last few years, I've grown to despise Sam Altman. The idea that intelligence will require a significant fraction of the power on earth is the rumination of a self-absorbed megalomaniac who can't see past GPUs and LLMs. The human brain needs about 20W, the last I
2
2
9
@TrueAIHound
AGIHound
3 months
@pmddomingos Yep. Everyone is sick and tired of the constant BS coming from the Sam Altmans and the Dario Amodeis of the AGI mafia. 😁
0
1
20
@BethCarey12
Beth Carey
3 months
New breakthroughs from the cognitive sciences are here for the next generation of tools robust enough for the trustworthy human-machine interface of language. They emulate what the brain does for its accuracy and energy efficiency. @jbthinking writes about it here
@JHochderffer
Jeffrey Dean Hochderffer
3 months
It's becoming increasingly apparent that @GaryMarcus is correct. New LLM models are producing diminishing returns. Current models are powerful and useful, but they are short of being reliable autonomous agents capable of replacing the vast human workforce. New breakthroughs
1
0
1
@BethCarey12
Beth Carey
3 months
Just like Google wasn't the first search engine, the model for how the brain works is opened up by cognitive scientists outside of the CS paradigm. Using brain science, #PatomTheory and RRG linguistics show the way forward with the accuracy and energy efficiency of the brain
@TLiterarian
The Liturgiologicalpheonician
3 months
Is everyone fucking nuts? Did we actually believe that after decades and decades of studying the brain and coming up with next to zilch, that a handful of rich, arrogant midwits in Silicon Valley would come to a full understanding of consciousness and deliver to us artificial
1
1
3
@BethCarey12
Beth Carey
3 months
The next generation? It will avoid inaccuracies and will have the energy efficiency of the brain. As for stealing and scraping training data? It is not required. https://t.co/iByU9AVFWt
amazon.com
by John Ball is a groundbreaking exploration of artificial intelligence through the lens of cognitive science. John Ball is a cognitive scientist, computer artisan, and one of the leading innovators...
@GaryMarcus
Gary Marcus
3 months
@emollick 1. you really think model collapse has been resolved? e.g. I still see stuff like this https://t.co/A6ENRlC0MI 2. LLMs have hit a point of diminishing returns; lots of projects like GPT-5 have been delayed and/or didn't return the expected results. The question is what
0
0
1