Announcing Maven: We’ve created a new kind of social network - a serendipity network - that’s directly inspired by insights from open-endedness and Why Greatness Cannot Be Planned.
iPhone:
Android:
It's very different....
More 👉
I'm thrilled to announce that I will be joining the superb team at
@OpenAI
in June, where I will be starting a group (and indeed hiring) focused on achieving open-endedness in machine learning. Looking forward to exploring a novel path!
New paper from our team
@OpenAI
: “Evolution through Large Models.” Idea: Large language models trained on code can propose mutations to programs of unprecedented coherence. Broad implications for EC, genetic programming, RL, deep learning, and open-endedness. 1/4
Though it does some optimization, natural evolution is not primarily an optimization algorithm. It's unfortunate that classical genetic algorithms created a misapprehension among computer scientists that evolution is best abstracted as optimization. So what is evolution? 1/2
I'd say *optimization* is the only way to obtain complexity from primeval simplicity.
There are many ways to optimize.
Darwinian evolution (mutate and select among a population) is just one particularly simple and *inefficient* way to perform zeroth-order optimization.
But there
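The mutate-and-select loop described above can be sketched as a minimal zeroth-order optimizer. This is a toy illustration, not any specific published algorithm; the sphere objective, population size, and mutation scale are all assumptions for the sketch:

```python
import random

def sphere(x):
    # Toy objective: minimize the sum of squares (lower is better).
    return sum(v * v for v in x)

def mutate(x, sigma=0.1):
    # Gaussian perturbation of every coordinate -- no gradients used.
    return [v + random.gauss(0.0, sigma) for v in x]

def evolve(dim=5, pop_size=20, generations=200, seed=0):
    random.seed(seed)
    # Start from a random population.
    population = [[random.uniform(-1, 1) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select: keep the better half (truncation selection).
        population.sort(key=sphere)
        parents = population[: pop_size // 2]
        # Mutate: refill the population with perturbed copies of parents.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return min(population, key=sphere)

best = evolve()
```

Only objective evaluations drive the search, which is what makes it zeroth-order: the loop never sees a gradient, and its progress comes entirely from sampling and selection.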
Mostly missing from current ML discourse: the order in which things are learned profoundly affects their ultimate representation, & representation is the true heart of “understanding.” But we just shovel in the data like order doesn’t matter. No AGI without addressing this. 1/2
NEAT (NeuroEvolution of Augmenting Topologies) is now available to work with PyTorch, thanks to Alex Gajewski
@UberAILabs
! Also includes CPPNs and HyperNEAT capabilities.
Some news: It has been a great privilege to contribute to the outstanding innovation at OpenAI and our team did exciting work (some out soon!), but I've decided the time is right for me to move on and to take this opportunity to really reflect on what I want to do next!
I’ve seen some talk recently about the virtues of hallucination, implying that it’s a “good” thing because it’s “creative” to make things up or simply what LMs are “trained to do.” I see my own work sometimes come up in these discussions since I’ve written a lot on creativity
Accepted at ICLR: You can train plastic neural networks to adapt to changing circumstances through neuromodulation with backprop! See our paper on "Backpropamine," pioneered by
@ThomasMiconi
, working with Aditya Rawal,
@jeffclune
and myself.
@iclr2019
The reason experts are so split on whether or not we're on an exponential path to AI is that intelligence is multi-dimensional. You can be both exponentially exploding/scaling in one dimension and stagnating in another. You can be both superhuman and subhuman at the same time.
Excited to announce positions available in the Open-Endedness Team I'm starting
@OpenAI
: Research Engineer Research Scientist - looking forward to exploring new frontiers! 1/2
Evolution is actually a set of entangled search processes that supports multiple abstractions. Among them: quality diversity, novelty search, competition escape, minimal criterion search, and the "spilled milk" abstraction discussed in Why Greatness Cannot Be Planned. 2/2
You don't need a corporate-sized warehouse of servers to run deep neuroevolution! For what formerly took an hour with 720 CPUs, we just released code to train on Atari in 4 hours on a single 48-core desktop: (with
@felipesuch
and
@jeffclune
)
Little-discussed challenge for Large Language Models: No clear path to learn the concept of novelty from data alone - novelty is a function of global chronology (not captured in dataset) and *all* that came before. Note the novelty instinct is the foundation of human creativity.
Some beautiful infinite-resolution images produced using CPPNs (compositional pattern producing networks) from Google Brain at the link. They differentiated back through an image classifier and then through the CPPN to produce the images:
Though comforting, following a path illuminated by a set of clear metrics is risky because other people can follow those metrics just as easily. Through its defiance of easy measure, the path of interestingness is the path of the brave, and most often walked alone.
Introducing Differentiable Plasticity: Pleased to share along with
@ThomasMiconi
and
@jeffclune
new work at Uber AI Labs on meta-training neural networks through gradient descent to be able to adapt their weights *after* SGD-based training is over:
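The core mechanism in differentiable plasticity: each connection has a fixed component `w` plus a plastic component, a Hebbian trace scaled by a learned coefficient `alpha`. A minimal numpy sketch of the forward pass and trace update follows; the layer sizes, the decay rate `eta`, and the random placeholder parameters are illustrative assumptions (in the actual method, `w` and `alpha` are meta-trained by gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

# Meta-trained parameters (random placeholders in this sketch):
w = rng.standard_normal((n_in, n_out)) * 0.1      # fixed weights
alpha = rng.standard_normal((n_in, n_out)) * 0.1  # plasticity coefficients
eta = 0.1                                         # Hebbian trace rate

hebb = np.zeros((n_in, n_out))  # plastic trace, reset each "lifetime"

def step(x, hebb):
    # Effective weight = fixed part + learned plasticity * Hebbian trace.
    y = np.tanh(x @ (w + alpha * hebb))
    # Hebbian update: the trace moves toward the outer product of
    # pre- and post-synaptic activity, so the effective weights keep
    # adapting even after meta-training is over.
    hebb = (1 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb

x = rng.standard_normal(n_in)
y1, hebb = step(x, hebb)
y2, hebb = step(x, hebb)  # same input, but the trace has shifted the weights
```

Because the whole update is differentiable, gradients can flow through the trace during meta-training, which is what lets SGD shape how the network will later adapt on its own.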
For those who have confidence in the path to AGI because you believe you know its signs, it’s worth remembering what John Stuart Mill wrote in 1843: “There remains one a priori fallacy or natural prejudice, the most deeply-rooted, perhaps, of all which we have enumerated; one
@EugeneVinitsky
But there’s a deeper question here: why do we assume that an algorithm is only “any good” if it’s definitively better than something else? The value of an algorithm can be as the seed of a chain of new ideas, a potential that benchmark/baseline obsession stifles.
Interesting to contemplate creating AI algorithms as *itself* an art form, i.e. not only a scientific discipline. Finally can share freely the manuscript of my position piece, "Art in the Sciences of the Artificial," published in
@LeonardoISAST
journal:
Ever wanted to see what the whole population is doing in neuroevolution? We just released an interactive visualization tool called VINE for neuroevolution from Uber AI Labs:
I was just reading a book to my 6-year-old that briefly discussed Einstein. His incredulous response: "Einstein is real!?" That made me realize that the true mark of success is when your achievements reach such a level that people assume you must be a fictional character.
The fact that a human-created architecture became ascendant before one from architecture search suggests that something has been wrong with the formulation of NAS, both in the formalization of the search space and in the conception of success, which is likely too focused on
What happened to architecture search? It had so much potential to help AI self improve but afaics nobody is using a model found by it.
Maybe it'll get unlocked at another scale since it's just so expensive?
Announcing Synthetic Petri Dish for architecture search: An NN slice goes into a "dish" where tiny synthetic data is learned to make variants perform like the full NN. Then rapid search in the dish: beyond a model because it IS part of the real NN 1/2
The solutions to almost all great puzzles (e.g. AGI) are counter-intuitive. Otherwise, they would be solved! So we should be suspicious when a solution is offered that sounds too intuitive. Reality is a radical!
“Foundation model” is a poor metaphor for LLMs. A 3-year-old is a foundation model. A 3-year-old has a foundation of skills and basic concepts upon which almost anything can be built. An LLM is a multiple-personality mega-vac - it literally sucks up every concept and
I did not contribute, but this NEAT-Gym package from Simon Levy and Coletta Fuller (neither on Twitter) may be interesting to some. It lets you run Python-based NEAT/HyperNEAT/ES-HyperNEAT/novelty search in OpenAI Gym:
Much discussion of AI's future extrapolates from current approaches, but things like pre-training and RLHF can become archaic surprisingly fast. One day there will be no "pre" and no "prompt," and enduring weight-level updates will happen for as long as "life" continues, just
Just published through Open Access at Nature Machine Intelligence: a review of promising ideas and connections to deep learning from neuroevolution: “Designing neural networks through neuroevolution” w/
@jeffclune
@joelbot3000
& Risto Miikkulainen
My most popular talk on Why Greatness Cannot Be Planned, from TTI/Vanguard, which many used as a reference, was unfortunately taken down recently when their YouTube channel was closed. So I obtained the video from them & reposted it to keep it available:
Major advances in Go-Explore: far more SoTA results, superhuman across Atari, stochastic training, hard-exploration simulated robotics. pdf: & code updated soon: . with
@AdrienLE
(lead)
@Joost_Huizinga
(lead)
@joelbot3000
@jeffclune
Wow interesting—I had not realized before this chart what a phenomenon open-endedness is becoming within AI and ML. As it should be—intelligence without open-endedness is like flight without the ability to take off. This will increasingly matter! (Thanks
@_samvelyan
!)
The surge in
#OpenEndedness
research on arXiv marks a burgeoning interest in the field!
The ascent is largely propelled by the trailblazing contributions of visionaries like
@kenneth0stanley
,
@jeffclune
, and
@joelbot3000
, whose work continues to pave new pathways.
For current approaches to GPT/LLMs and chatbots, what is important for progress is not what they don’t do well, but rather what they can *never* do well and why that is, as well as which “nevers” will require the most fundamental architectural or training overhauls.
On consciousness I notice many dismiss it for lacking objective measure. But that’s precisely its fascination: it is the phenomenon of subjectivity itself, literally what it’s like from the inside. If science lacks the tools it is an indictment of science, not of consciousness.
Introducing Generative Teaching Networks - they generate synthetic data optimized to train learners as fast as possible - a new way to generate data for architecture search & more! Led by
@felipesuch
w/
@jeffclune
,
@joelbot3000
, &
@AdityaRawal
One issue that I haven’t seen raised is that once you have AI at a genuinely human level you already have super-human capabilities because a group of many humans can achieve vastly beyond what a single human can alone (including in science and technology). So the more copies of
I asked ChatGPT to come up with a new programming language where you just write code in plain English. It wrote code for a couple of games as examples. When will this be available? (I'm not sure what the principle behind the color coding in this language is!)
Thank you
@MLStreetTalk
and its fantastic host
@ecsquendor
for this chance to explain why my experiences after publishing Why Greatness Cannot Be Planned led me to start the Maven social network - an open-ended social network that for once isn't based on likes or follows.
Kenneth Stanley has launched a brand new social network called Maven based on the ideas from "Why Greatness Cannot Be Planned"!
@kenneth0stanley
@HeyMaven_
Something interesting to try: If you feel strongly about something--say some controversial topic where emotions are high--instead of thinking about the reasons your feelings are justified (which is the natural impulse), instead think about why your feelings are your feelings. In
@mathemagic1an
Optimization is a type of search that attempts to move towards a better score with respect to some defined (usually) global metric, but search can be guided by other principles.
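One concrete example of a search guided by a different principle is novelty search, from Stanley and Lehman's own work: candidates are selected for how different their behavior is from everything the search has seen so far, with no objective score at all. A minimal sketch, where the one-dimensional "behavior," the candidate count, and the k-nearest novelty measure are simplifying assumptions:

```python
import random

def novelty(behavior, archive, k=5):
    # Novelty = mean distance to the k nearest behaviors seen so far.
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(steps=100, seed=0):
    random.seed(seed)
    archive = []
    x = 0.0
    for _ in range(steps):
        # Propose mutations and keep the most *novel* one -- there is
        # no objective function anywhere in this loop.
        candidates = [x + random.gauss(0.0, 0.5) for _ in range(10)]
        x = max(candidates, key=lambda c: novelty(c, archive))
        archive.append(x)
    return archive

archive = novelty_search()
```

Instead of converging to a single point, the archive spreads out over the behavior space, which is exactly the contrast with metric-driven optimization.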
.
@jbrant1983
& I changed the minimal criterion coevolution (MCC) open-ended alg to replace speciation with a simple resource limit with surprising benefit: MCC, now extremely simple, actually works better (& won Best Paper in Complex Systems
@GeccoConf
!)
As some of you know, I've been "doing something new" that I hope to share soon! It happens that being in the Lux Capital family is part of that journey, and I couldn't be happier to congratulate Lux on the launch of their latest fund - Lux ventures 8
When LLMs fail at simple reasoning, many posit an inability to genuinely reason as the culprit. But there’s a deeper ingredient that underpins reasoning and many other faculties as well: representation. Before worrying about reasoning, worry about representation.
Announcing POET: continually inventing new and ever more complex training environments while optimizing and transferring solutions among them! Led by
@ruiwang2uiuc
, with
@joelbot3000
&
@jeffclune
@UberAILabs
blog: vid:
Could not be more thrilled to be working together again with my amazing collaborator
@joelbot3000
, now at
@OpenAI
, and on so exciting a topic as open-endedness! Congrats to Joel on the new role and looking forward to reaching the next stepping stone together!
I’m excited to announce that I have joined
@OpenAI
as of last month, and am happy to again team up with the singular
@kenneth0stanley
to push forward the frontiers of open-endedness. Unexpected where the stepping stones lead (image from 1st academic presentation at ALife 2008). 1/3
Open-endedness is among the most fascinating of challenges, yet massively underappreciated.
@joelbot3000
@err_more
and I wrote an article to broaden the discussion:
I did it again - second time on
@MLStreetTalk
! It was so much fun the first time that I couldn’t resist. Just as stimulating the second time around, with entirely new topics covered. Thanks to
@ecsquendor
and
@DoctorDuggar
!
This is a special day! We speak with
@kenneth0stanley
for the second time about subjectivity, art and open-endedness. Ken gave us a much deeper insight into his philosophy on the second time round.
This point from
@hardmaru
is why understanding open-endedness will ultimately prove fundamental to establishing a healthy dynamic as creative proliferation is increasingly dominated by machines.
But unlike previous trends, machine learning models are constantly updated with new data, which is produced by our collective intelligence reflecting the current state of our culture. If most of this new creative content is made using ML, it will lead to this weird feedback loop.
Thank you to the organizers of the Meta-Learning & Multi-Agent Learning Workshop for the opportunity to give a talk and to participate in your panel on open-endedness. That was a great discussion and inspiring to see so much interest and novel thinking on open-endedness!
The Importance of Open-Endedness in AI & Machine Learning, Kenneth Stanley (
@kenneth0stanley
) of
@OpenAI
is currently giving a talk on open-endedness at the Meta-Learning & Multi-Agent Learning Workshop. Video available tomorrow.
My conversation with
@kenneth0stanley
about why you shouldn’t pursue big hairy audacious goals
I hope this stirs up some controversy b/c the implications of the concept are large
That he “discovered” this idea via AI research fascinates me
Enjoy!
This paragraph from a letter Jeff Bezos wrote to shareholders strongly reminds me of the whole idea behind following the gradient of interestingness that I learned about in the great podcast by
@ecsquendor
on
@kenneth0stanley
ideas.
Nobody called LLMs, as far as I know. One guy at OpenAI built GPT-1 while the others were working on deep RL, OpenAI Gym and such.
Technology in general and AI in particular is not that predictable! People thought at one point that defeating human grandmasters at chess would
I created a browser based environment where humanoids are trained to walk through NeuroEvolution of Augmenting Topologies (NEAT)
@kenneth0stanley
| Tools used:
@TensorFlow
, Neataptic.js, Planck.js (a Box2D rewrite)
Appearing on this show was a great experience, exhilarating discussion, and also at times thought-provoking debate - thank you
@MLStreetTalk
! (And it leads with a very well-produced intro to our work going back a long ways. Much appreciation to
@ecsquendor
and all involved!)
Great time reminiscing about the days of NEAT (and how things have changed since then) on
@wellecks
’ Thesis Review podcast! Also a great idea for a show, to look back years later with the authors of selected dissertations! Thank you
@wellecks
for having me.
Episode 9 of The Thesis Review:
Kenneth Stanley (
@kenneth0stanley
), "Efficient Evolution of Neural Networks through Complexification"
We discuss neuroevolution and the NEAT algorithm, open-endedness and POET, and how the field has developed over time
Congratulations
@_joelsimon
on such a creative and socially relevant application of concepts from novelty search and quality diversity. I hope this has the impact on our democracy that it deserves to. Very inspiring work!
Excited to share 💻🇺🇸
A paper, interactive blog, and open-source initiative to help expose gerrymandering by generating many optimized alternatives. Motivated by how generative design can make complex tradeoffs understandable. 1/
In case you didn’t have time for our arxiv paper on Enhanced POET, a quicker blog post with videos and pictures was just released. See enhancements like domain-general environmental diversity metrics and a new open-ended progress measure: 1/
Just a quick reminder that you can win a Nobel with an h-index = 29 (according to the usually inflated Google Scholar). Quantitative measures of scientific output are not the way to a bright future for humanity.
Novelty search, quality diversity (QD), open-endedness, and indirect encoding, all in an
#icml2019
tutorial on Monday morning! Excited to be presenting with
@jeffclune
and
@joelbot3000
:
Pleased to introduce with
@NavidKardaan
an intriguingly simple fix to overgeneralization in
#NeuralNetworks
that also reduces fooling and helps with adding new classes: The Competitive Overcomplete Output Layer (COOL) -
(separations on same prob in pic)
Happy to be part of this very interesting extension of novelty search that combines it with ES to adjust the amount of emphasis on novelty dynamically over the course of search.
Introducing a new algorithm that automatically, intelligently adjusts exploration vs. exploitation for
#DeepRL
. NSRA-ES exploits until stuck, then increasingly explores. It's newly added to 'Improving Exploration in ES' Great work on it Vashisht Madhavan!
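The exploit-until-stuck behavior described above comes down to a single adaptive weight that blends reward pressure with novelty pressure. The sketch below is a simplified illustration of that weighting rule, not the paper's exact procedure; the patience threshold and step size `delta` are assumptions:

```python
def update_weight(w, best_so_far, recent_best, stagnation,
                  patience=10, delta=0.05):
    """Return the updated (w, best_so_far, stagnation) triple.

    w = 1.0 means pure reward pressure (exploit);
    w = 0.0 means pure novelty pressure (explore).
    """
    if recent_best > best_so_far:
        # Progress: shift back toward exploiting reward.
        return min(1.0, w + delta), recent_best, 0
    stagnation += 1
    if stagnation >= patience:
        # Stuck for too long: shift toward novelty to escape.
        return max(0.0, w - delta), best_so_far, 0
    return w, best_so_far, stagnation
```

Each generation would then follow a combined update direction like `w * reward_gradient + (1 - w) * novelty_gradient`, so the search smoothly slides between the two pressures rather than switching abruptly.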
The question is not whether LLMs are intelligent but rather whether they are a stepping stone along one of many paths. Are our worm-like ancestors more “intelligent” than LLMs? Arguments about stepping stones are far harder, hence less amenable to certitude.
Personal Announcement! I’m launching
@SakanaAILabs
together with my friend, Llion Jones (
@YesThisIsLion
).
is a new R&D-focused company based in Tokyo, Japan.
We’re on a quest to create a new kind of foundation model based on nature-inspired intelligence!
📣 New paper! Open-Endedness is Essential for Artificial Superhuman Intelligence 🚀 (co-lead
@MichaelD1729
).
🔎 We propose a simple, formal definition for Open-Endedness to inspire rapid and safe progress towards ASI via Foundation Models.
📖
🧵[1/N]
Congratulations to
@enasmel
for his (and my!) first accepted
#NeurIPS
paper! 🎉
We added an experiment that shows that Hebbian networks can also sometimes generalize to robot morphologies not seen during training.
PDF:
Code:
Evolving Curricula with Regret-Based Environment Design
Website:
Paper:
TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below).
Highlights ⬇️! [1/N]
This was a really fun talk and discussion at
@UCL_DARK
- very honored to speak! And the topics I covered are ones I've rarely spoken about previously (including new work), so it's quite unique. Thanks to the whole lab and especially
@LauraRuis
for the organizing!
We extend our deepest gratitude to Kenneth Stanley (
@kenneth0stanley
) for his amazing and thought-provoking talk on “Novel Opportunities in Open-Endedness” during the
@UCL_DARK
Invited Speaker Series.
Now available on our YouTube channel:
@EugeneVinitsky
Does it really matter if algorithm x is “really” 2% better vs. 2% worse than algorithm y if x is a whole new way of thinking? The numbers become an excuse for us not to think.
Thank you again to the organizers of the Meta-Learning & Multi-Agent Learning Workshop for the opportunity to talk (now available to watch below) on why open-endedness is important for AI and machine learning!
The Importance of Open-Endedness in AI and Machine Learning a fascinating talk by Kenneth Stanley (
@kenneth0stanley
) of
@OpenAI
at the Meta-Learning & Multi-Agent Learning Workshop.
Video of our ICML tutorial (with
@jeffclune
and
@joelbot3000
) covering topics including Quality Diversity (first), Open-Endedness (1:14:20), and Indirect Encoding (1:51:50) is at . If you haven't heard of these topics, they're worth a look!
@TonyZador
there is actually a whole active area of research within the field of neuroevolution on evolving the "genetic bottleneck," which in ML is called an *indirect encoding*. See for example HyperNEAT: ...(more)
I spent 2 hours of in-depth conversation with the
@experilearning
community about implications of Why Greatness Cannot Be Planned for *education*. First time to have such a great opportunity to go deeply on this important topic: Thanks
@experilearning
!
A lot of excitement about NVIDIA's new GAN () but it raises the interesting question of what it takes to genuinely infer strict regularity in structure, which remains slippery even for such a striking success. (image annotated by me from their video)
Speaking of AI risks, if I gave a cutting-edge LLM (say with visual inputs too) an email address, access to a browser, a place to store notes, $1M in the bank with a credit card, and told it to “improve the human condition” in a continual loop, what would it do?
@WilliamCB
By the way, the fact that it's hard for many people to even conceive of making choices that do not maximize something reveals a profound cultural undercurrent in our society. Optimization is so worshipped that it is inconceivable to many that anything else is even possible.
Nice coverage by
@SilverJacket
of some of the big ideas behind our work and some of the history of neuroevolution for
@QuantaMagazine
at ...thanks for the thoughtful article!
Thank you
@vijayants
for the sentiment- I don't think
@joelbot3000
or I had realized the book might be worth more than one read! Glad to hear you get something new from it each time!
Highly recommended book for most of us; every time I re-read it (but of course, sans any objective or plan) I get something new & marvellous... Today, it was the validation of an idea.
@kenneth0stanley
& Joel Lehman.
#ComputerScience
#AI
#Life
#Books
Giving a keynote this morning at 8:45am at the
#NeurIPS2018
Workshop on Machine Learning for Creativity and Design (room 518). Looking forward to seeing people who managed to get up early!
Finally ready to share Biomaker CA, a Biome Maker project using Neural Cellular Automata. w/
@zzznah
See live article (with lots of videos) and arxiv paper. 1/N
Kicking off our speaker spotlights is
@kenneth0stanley
, a pioneer of open-endedness in ML. He co-authored Why Greatness Cannot Be Planned w/
@joelbot3000
(also speaking). They argue open-ended search can lead to better solutions than direct optimization.
I just had a very interesting and wide-ranging conversation on Why Greatness Cannot Be Planned with the
@LTCWRK
community, hosted thoughtfully by
@BlasMoros
. Most of it is forward focused on what comes next after we accept the book's perspective.
More exciting work announced pushing progress forward in open-endedness (this time from
@mitrma
and colleagues) - third day in a row of such an announcement! The field is really amping up!
I’m excited to announce Craftax, a new benchmark for open-ended RL!
⚔️ Extends the popular Crafter benchmark with Nethack-like dungeons
⚡Implemented entirely in Jax, achieving speedups of over 100x
1/
Great concept and implementation from the Open-Endedness Team at
@GoogleDeepMind
. Open-endedness remains a critical way forward for AI! I'm thrilled to see it thriving!
I am really excited to reveal what
@GoogleDeepMind
's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.