![Mark Worrall Profile](https://pbs.twimg.com/profile_images/993581276589412353/OErG3axi_x96.jpg)
Mark Worrall
@maw501
Followers: 239 · Following: 372 · Statuses: 436
Co-founder. Building a science-backed online learning platform. Musings on AI, Knowledge & Education.
Learn Quantum Theory 👉
Joined January 2012
At last, someone prominent is asking the right question. For what it's worth, I'm very sceptical of AI's ability to do anything close to human knowledge creation (and hence can't get on board with much of the AI doomer chat). It's far too easy to gloss over or hand-wave away the knowledge-creation part.

Yet aligning an objective function to discovery isn't just difficult - it's essentially a form of knowledge creation itself. This is an epistemological challenge that is often (frustratingly) overlooked in these discussions. For example, scientific breakthroughs often redefine the problem space, meaning any predefined metric for "discovery" risks constraining AI to predictable novelty rather than anything more fundamental. I think this makes aligning AI to discovery a fundamentally recursive problem: we'd need to discover how to define discovery itself.

Until we start seeing true knowledge creation, we should concentrate on the more immediate risks of imperfect AI everywhere rather than endless speculation about creating a perfect one.
Replies: 0 · Retweets: 0 · Likes: 1
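To make the "predictable novelty" worry above concrete, here is a minimal, hypothetical sketch (every name and the feature space are invented for illustration) of a novelty-search-style objective. Because the metric is defined over a fixed representation chosen in advance, it can only reward differences along the dimensions we thought to encode; a candidate that is revolutionary along an unrepresented dimension scores nothing.

```python
# Hypothetical sketch: a "discovery" metric fixed in advance can only
# reward novelty expressible in its predefined feature space.

import random

def features(candidate):
    # The predefined representation: everything the metric can "see".
    return (candidate["size"], candidate["speed"])

def novelty(candidate, archive):
    # Novelty = Euclidean distance to the nearest previously selected
    # candidate, measured only in the predefined feature space.
    if not archive:
        return float("inf")
    fc = features(candidate)
    return min(
        sum((a - b) ** 2 for a, b in zip(fc, features(seen))) ** 0.5
        for seen in archive
    )

random.seed(0)
archive = []
for step in range(5):
    proposals = [
        {
            "size": random.random(),
            "speed": random.random(),
            # This dimension exists, but the metric never looks at it,
            # so no amount of optimisation will explore it:
            "elegance": random.random(),
        }
        for _ in range(20)
    ]
    best = max(proposals, key=lambda c: novelty(c, archive))
    archive.append(best)
    print(f"step {step}: size={best['size']:.2f} speed={best['speed']:.2f}")
```

The optimiser dutifully spreads out over (size, speed), but "elegance" is invisible to it by construction. Redefining the feature space is exactly the kind of move the metric cannot reward, which is the recursive problem described above.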
@RichardSocher Well, good luck with the book - would love to read it once finished (or even anything before!).
Replies: 0 · Retweets: 0 · Likes: 1
Indeed. For what it's worth, I'm very sceptical of their ability to do anything close to human knowledge creation (and hence can't get on board with much of the AI doomer chat). It's far too easy to gloss over or hand-wave away the knowledge-creation part.
Replies: 0 · Retweets: 0 · Likes: 1
@RichardSocher Nice! Highly recommend the writing of David Deutsch for related thinking (though not really to do with AI).
Replies: 1 · Retweets: 0 · Likes: 1
That makes sense! Just to continue on the (somewhat) epistemological point for a second (i.e. ignoring maximal effectiveness and efficiency for a moment)...

One key difference is that humans don't strictly need data or examples to learn - many concepts can be discovered through reasoning alone. That is, humans can generate new knowledge before seeing any data or examples (e.g. mathematical proofs, logical inferences). Also, all observation is theory-laden: we don't passively absorb "raw data" like an ML model. Instead, we interpret new information through existing mental models, which means learning isn't just updating parameters but actively refining explanatory frameworks, as alluded to before.

Anyway, good luck with the programming course! The early lessons of such a course are a bit subtle and need careful sequencing. I built an intro to Python course last year using knowledge graphs and mastery learning, and it took a lot of care to introduce things without bringing along a lot of implicit prerequisite knowledge.
Replies: 0 · Retweets: 0 · Likes: 3
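Since "knowledge graphs and mastery learning" may be unfamiliar, here is a minimal, hypothetical sketch of the idea (the lesson graph below is invented for illustration, not the actual course): lessons form a directed acyclic graph of prerequisites, and a lesson is only offered once everything it depends on has been mastered.

```python
# Hypothetical sketch of prerequisite-gated sequencing: lessons form a
# DAG, and a lesson becomes available only once all of its
# prerequisites are mastered.

prerequisites = {
    "values_and_types": [],
    "variables": ["values_and_types"],
    "expressions": ["values_and_types"],
    "conditionals": ["variables", "expressions"],
    "loops": ["conditionals"],
    "functions": ["variables", "expressions"],
}

def available_lessons(mastered):
    """Lessons whose prerequisites are all mastered but which are
    not yet mastered themselves."""
    return [
        lesson
        for lesson, prereqs in prerequisites.items()
        if lesson not in mastered and all(p in mastered for p in prereqs)
    ]

# Example: a learner who has mastered the first two topics.
mastered = {"values_and_types", "variables"}
print(available_lessons(mastered))
# -> ['expressions']
```

Sequencing then amounts to walking the graph in an order where every implicit prerequisite has been made explicit, which is exactly the "careful sequencing" problem mentioned above.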
@LaraS_EDU @MrZachG I find it inspiring to own many books which I've not read. Gives me great satisfaction to select the next one from the "home library" and motivates me to keep reading.
Replies: 0 · Retweets: 0 · Likes: 3
@panickssery @_MathAcademy_ @ninja_maths @justinskycak I'm not sure whether they accept free-form input anywhere in the platform (presumably they do, but I'm not a user). Re: those points - I think that's fair, but I guess the argument is that seeing answer options is essentially an extra form of scaffolding.
Replies: 0 · Retweets: 0 · Likes: 1
@AlgorithmicBot @_MathAcademy_ Nice, which copy of Spivak do you have? I have the third edition and it's one of the most lovely (physical) textbooks I own.
Replies: 1 · Retweets: 0 · Likes: 1
Indeed. If you think scaling solves intelligence without a credible and specific explanation as to how, then you're taking a leap of faith. That's your prerogative, but don't try to scare the shit out of everyone else with it.
Replies: 0 · Retweets: 0 · Likes: 2
A very Deutschian view. And I haven't seen a good explanation either.

AI is no more likely to independently create the knowledge of the next 100 years than it would have been to create the breakthroughs of the 20th century if trained only on pre-1900 data. Without the ability to create new knowledge, AI won't develop a "runaway" intelligence. Current AI systems perform well because we gave them humanity's knowledge and built them to optimise over it.

Humans, using AI as a tool, will drive the creation of new knowledge in the coming decades (and beyond). Even if models continue to improve, my bet is that they plateau somewhere around human-level capability, with further progress bottlenecked by environmental integration - challenges that could take decades to overcome.
Replies: 1 · Retweets: 1 · Likes: 13