Mark Worrall

@maw501

Followers
239
Following
372
Statuses
436

Co-founder. Building a science-backed online learning platform. Musings on AI, Knowledge & Education.

Learn Quantum Theory 👉
Joined January 2012
@maw501
Mark Worrall
25 days
Started a Substack: The Infinite Human [link in comments]. Will explore the intersections of AI, knowledge, and education: topics close to my heart as I build an EdTech company. Also want to push back on some of this "AI is gonna take over" narrative.
1
1
3
@maw501
Mark Worrall
3 hours
At last, someone prominent is asking the right question.

For what it's worth, I'm very sceptical of AI's ability to do anything close to human knowledge creation (and hence can't get on board with much of the AI doomer chat). It's far too easy to gloss over or hand-wave away the knowledge creation part. Yet aligning an objective function to discovery isn't just difficult; it's essentially a form of knowledge creation itself. This is an epistemological challenge often (frustratingly) overlooked in these discussions.

For example, scientific breakthroughs often redefine the problem space, meaning any predefined metric for "discovery" risks constraining AI to predictable novelty rather than anything more fundamental. I think this makes aligning AI to discovery a fundamentally recursive problem: we'd need to discover how to define discovery itself.

Until we start seeing true knowledge creation, we should concentrate on the more immediate risks of imperfect AI everywhere rather than endless speculation about creating a perfect one.
@dwarkesh_sp
Dwarkesh Patel
2 days
This question is even more puzzling and salient given the existence of Deep Research
0
0
1
@maw501
Mark Worrall
4 hours
@sama That new domain ain't working out so well, eh?
0
0
0
@maw501
Mark Worrall
4 hours
@RichardSocher Well, good luck with the book - would love to read it once finished (or even anything before!).
0
0
1
@maw501
Mark Worrall
4 hours
Indeed. For what it's worth, I'm very sceptical of their ability to do anything close to human knowledge creation (and hence can't get on board with much of the AI doomer chat). It's far too easy to gloss over or hand-wave away the knowledge creation part. Yet aligning an objective function to discovery isn't just difficult; it's essentially a form of knowledge creation itself. This is an epistemological challenge often (frustratingly) overlooked in these discussions.

For example, scientific breakthroughs often redefine the problem space, meaning any predefined metric for "discovery" risks constraining AI to predictable novelty rather than anything more fundamental. I think this makes aligning AI to discovery a fundamentally recursive problem: we'd need to discover how to define discovery itself.

Until we start seeing true knowledge creation, we should concentrate on the more immediate risks of imperfect AI everywhere rather than endless speculation about creating a perfect one.
0
0
1
@maw501
Mark Worrall
5 hours
@RichardSocher Nice! Highly recommend the writing of David Deutsch for related thinking (though not really to do with AI).
1
0
1
@maw501
Mark Worrall
11 hours
That makes sense! Just to continue on the (somewhat) epistemological point for a second though (i.e. ignoring maximal effectiveness and efficiency for a moment)...

One key difference is that humans don't strictly need data or examples to learn; many concepts can be discovered through reasoning alone. That is, humans can actually generate new knowledge before seeing data or examples (e.g. mathematical proofs, logical inferences). Also, all observation is what's called theory-laden: we don't passively absorb "raw data" like an ML model. Instead, we interpret new information through existing mental models, which means learning isn't just updating parameters but actively refining explanatory frameworks, as alluded to before.

Anyway, good luck with the programming course! I think the early lessons for such a course are a bit subtle and need careful sequencing. I built an intro to Python course last year using knowledge graphs and mastery learning, and it took a lot of care to introduce things without bringing along a lot of implicit prerequisite knowledge.
0
0
3
@maw501
Mark Worrall
1 day
@LaraS_EDU @MrZachG I find it inspiring to own many books which I've not read. Gives me great satisfaction to select the next one from the "home library" and motivates me to keep reading.
0
0
3
@maw501
Mark Worrall
1 day
@panickssery @_MathAcademy_ @ninja_maths @justinskycak I'm not sure if they accept free-form input anywhere in the platform (presumably they do, but I'm not a user). Re. those points - I think that's fair but I guess the argument is that seeing answer options is essentially an extra form of scaffolding.
0
0
1
@maw501
Mark Worrall
1 day
@AlgorithmicBot @_MathAcademy_ Life-affirming stuff
0
0
1
@maw501
Mark Worrall
1 day
@AlgorithmicBot @_MathAcademy_ Nice, which copy of Spivak do you have? I have the third edition and it's one of the most lovely (physical) textbooks I own.
1
0
1
@maw501
Mark Worrall
2 days
Indeed. If you think scaling solves intelligence without a credible and specific explanation as to how then you're taking a leap of faith. That's your prerogative, but don't try to scare the shit out of everyone else with it.
@dwarkesh_sp
Dwarkesh Patel
2 days
This question is even more puzzling and salient given the existence of Deep Research
0
0
2
@maw501
Mark Worrall
2 days
@anushkmittal @dwarkesh_sp Emergence of what? Let's be specific.
0
0
1
@maw501
Mark Worrall
2 days
A very Deutsch-ian view. And I haven't seen a good explanation either.

AI is no more likely to independently create the knowledge of the next 100 years than it would have been to create the breakthroughs of the 20th century if trained on pre-1900 data. Without the ability to create new knowledge, AI won't develop a "runaway" intelligence. Current AI systems perform well because we gave them humanity's knowledge and built them to optimise it.

Humans, using AI as a tool, will drive the creation of new knowledge in the coming decades (and beyond). Even if models continue to improve, my bet is they are likely to plateau somewhere around human-level capabilities, with further progress bottlenecked by environmental integration: challenges that could take decades to overcome.
1
1
13
@maw501
Mark Worrall
2 days
Sure, but I think "turning them into reasoners is apparently quite trivial" is glossing over some of the key practical challenges then, no? E.g. when to trust the approximate reasoning they use, how to know when to use the exact solver (how to formulate the input from natural language), etc.
0
0
0