Kenneth Stanley Profile
Kenneth Stanley

@kenneth0stanley

Followers: 12,099
Following: 900
Media: 31
Statuses: 1,208

Maven: Prev: Team Lead @OpenAI, Uber AI, prof @UCF. NEAT, HyperNEAT, novelty search, POET. Book: Why Greatness Cannot Be Planned

San Francisco, CA
Joined March 2016
Pinned Tweet
@kenneth0stanley
Kenneth Stanley
6 months
Announcing Maven: We’ve created a new kind of social network - a serendipity network - that’s directly inspired by insights from open-endedness and Why Greatness Cannot Be Planned. iPhone: Android: It's very different.... More 👉
Tweet media one
39
96
425
@kenneth0stanley
Kenneth Stanley
4 years
I'm thrilled to announce that I will be joining the superb team at @OpenAI in June, where I will be starting a group (and indeed hiring) focused on achieving open-endedness in machine learning. Looking forward to exploring a novel path!
43
36
751
@kenneth0stanley
Kenneth Stanley
2 years
New paper from our team @OpenAI : “Evolution through Large Models.” Idea: Large language models trained on code can propose mutations to programs of unprecedented coherence. Broad implications for EC, Genetic Programming, RL, deep learning, open-endedness. 1/4
Tweet media one
15
114
656
@kenneth0stanley
Kenneth Stanley
10 months
Though it does some optimization, natural evolution is not primarily an optimization algorithm. It's unfortunate that classical genetic algorithms created a misapprehension among computer scientists that evolution is best abstracted as optimization. So what is evolution? 1/2
@ylecun
Yann LeCun
10 months
I'd say *optimization* is the only way to obtain complexity from primeval simplicity. There are many ways to optimize. Darwinian evolution (mutate and select among a population) is just one particularly simple and *inefficient* way to perform zeroth-order optimization. But there
87
79
657
34
53
409
@kenneth0stanley
Kenneth Stanley
2 years
Mostly missing from current ML discourse: the order in which things are learned profoundly affects their ultimate representation, & representation is the true heart of “understanding.” But we just shovel in the data like order doesn’t matter. No AGI without addressing this. 1/2
24
37
329
@kenneth0stanley
Kenneth Stanley
6 years
NEAT (NeuroEvolution of Augmenting Topologies) is now available to work with PyTorch, thanks to Alex Gajewski @UberAILabs ! Also includes CPPNs and HyperNEAT capabilities.
4
98
314
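For readers new to NEAT, here is a minimal sketch of evolving a network with the separate neat-python package, which (to my understanding) the PyTorch integration announced above builds on. The XOR fitness task and the "config-feedforward" file name are illustrative assumptions, not part of the release described in the tweet.

```python
import neat  # pip install neat-python

# Minimal neat-python sketch (illustrative only). The XOR task and the
# "config-feedforward" path are assumptions for this example.

XOR_INPUTS = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
XOR_OUTPUTS = [0.0, 1.0, 1.0, 0.0]

def eval_genomes(genomes, config):
    # Assign a fitness to every genome in the population.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = 4.0
        for xi, xo in zip(XOR_INPUTS, XOR_OUTPUTS):
            output = net.activate(xi)
            genome.fitness -= (output[0] - xo) ** 2

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "config-feedforward")   # standard neat-python config file
pop = neat.Population(config)
pop.add_reporter(neat.StdOutReporter(True))
winner = pop.run(eval_genomes, 50)           # evolve for up to 50 generations
print("Best fitness:", winner.fitness)
```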
@kenneth0stanley
Kenneth Stanley
2 years
Some news: It has been a great privilege to contribute to the outstanding innovation at OpenAI and our team did exciting work (some out soon!), but I've decided the time is right for me to move on and to take this opportunity to really reflect on what I want to do next!
16
3
268
@kenneth0stanley
Kenneth Stanley
1 year
Interesting tidbit from @sama on the role of Why Greatness Cannot Be Planned in the history of OpenAI. (thanks to @actualMahmoud for highlighting)
18
47
266
@kenneth0stanley
Kenneth Stanley
7 months
I’ve seen some talk recently about the virtues of hallucination, implying that it’s a “good” thing because it’s “creative” to make things up or simply what LMs are “trained to do.” I see my own work sometimes come up in these discussions since I’ve written a lot on creativity
24
49
236
@kenneth0stanley
Kenneth Stanley
6 years
Accepted at ICLR: You can train plastic neural networks to adapt to changing circumstances through neuromodulation with backprop! See our paper on "Backpropamine," pioneered by @ThomasMiconi , working with Aditya Rawal, @jeffclune and myself. @iclr2019
2
60
246
@kenneth0stanley
Kenneth Stanley
24 days
The reason experts are so split on whether or not we're on an exponential path to AI is that intelligence is multi-dimensional. You can be both exponentially exploding/scaling in one dimension and stagnating in another. You can be both superhuman and subhuman at the same time.
17
49
247
@kenneth0stanley
Kenneth Stanley
4 years
Excited to announce positions available in the Open-Endedness Team I'm starting @OpenAI : Research Engineer  Research Scientist   - looking forward to exploring new frontiers! 1/2
6
41
235
@kenneth0stanley
Kenneth Stanley
7 years
Welcoming the Era of Deep Neuroevolution: deep GAs, ES enhancements, safe mutation, novelty, and more, all with big, deep neural networks
1
70
227
@kenneth0stanley
Kenneth Stanley
10 months
Evolution is actually a set of entangled search processes that supports multiple abstractions. Among them: quality diversity, novelty search, competition escape, minimal criterion search, and the "spilled milk" abstraction discussed in Why Greatness Cannot Be Planned. 2/2
12
25
230
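To make one of the abstractions named above concrete, here is a toy quality-diversity sketch in the MAP-Elites style (Mouret & Clune), not Stanley's own code: instead of keeping a single best solution, it keeps the best solution found in every behavioral niche. The genome, fitness, and behavior functions are invented placeholders.

```python
import random

# Toy quality-diversity sketch (MAP-Elites style). Genome, fitness, and
# behavior below are placeholders, not any published implementation.

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(8)]

def mutate(genome):
    return [g + random.gauss(0.0, 0.1) for g in genome]

def fitness(genome):
    # "How good" (placeholder objective).
    return -sum(g * g for g in genome)

def behavior(genome):
    # "What kind" - a coarse, low-dimensional behavior descriptor.
    return (round(genome[0], 1), round(genome[1], 1))

archive = {}  # behavior niche -> (fitness, genome)
for _ in range(10000):
    if archive:
        parent = random.choice(list(archive.values()))[1]
    else:
        parent = random_genome()
    child = mutate(parent)
    f, b = fitness(child), behavior(child)
    if b not in archive or f > archive[b][0]:
        archive[b] = (f, child)  # keep the best genome found per niche

print(len(archive), "distinct behavior niches filled")
```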
@kenneth0stanley
Kenneth Stanley
9 months
Not a bad choice!
@ClementDelangue
clem 🤗
9 months
Reading of the week!
Tweet media one
29
46
888
9
10
211
@kenneth0stanley
Kenneth Stanley
6 years
You don't need a corporate-sized warehouse of servers to run deep neuroevolution! For what formerly took an hour with 720 CPUs, we just released code to train on Atari in 4 hours on a single 48-core desktop: (with @felipesuch and @jeffclune )
4
78
181
@kenneth0stanley
Kenneth Stanley
2 years
Little-discussed challenge for Large Language Models: No clear path to learn the concept of novelty from data alone - novelty is a function of global chronology (not captured in dataset) and *all* that came before. Note the novelty instinct is the foundation of human creativity.
19
15
170
@kenneth0stanley
Kenneth Stanley
6 years
Some beautiful infinite-resolution images produced using CPPNs (compositional pattern producing networks) from Google Brain at the link. They differentiated back through an image classifier and then through the CPPN to produce the images:
0
47
161
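A rough illustration of why CPPN images have effectively infinite resolution: the image is a function of (x, y) coordinates built from composed pattern-producing activations, so it can be sampled at any density. The particular functions and constants below are arbitrary choices for the sketch, not the Google Brain setup referenced above.

```python
import math

# Toy CPPN sketch: the "image" is a function of coordinates composed of
# pattern-producing activations (sine, Gaussian, tanh), so it can be
# sampled at any resolution. Weights/constants here are arbitrary.

def cppn(x, y):
    r = math.sqrt(x * x + y * y)                 # radial input -> symmetry
    h = math.sin(4.0 * x) + math.cos(4.0 * y)    # periodic structure
    g = math.exp(-4.0 * r * r)                   # Gaussian bump at the center
    return 0.5 * (math.tanh(h + 2.0 * g) + 1.0)  # intensity in [0, 1]

def render(size):
    # Sample the same function on a size x size grid; any size works.
    return [[cppn(2.0 * i / size - 1.0, 2.0 * j / size - 1.0)
             for j in range(size)]
            for i in range(size)]

thumb = render(16)    # coarse sampling
poster = render(512)  # the identical pattern at much higher resolution
print(thumb[0][:4])
```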
@kenneth0stanley
Kenneth Stanley
2 years
Though comforting, following a path illuminated by a set of clear metrics is risky because other people can follow those metrics just as easily. Through its defiance of easy measure, the path of interestingness is the path of the brave, and most often walked alone.
4
22
154
@kenneth0stanley
Kenneth Stanley
6 years
Introducing Differentiable Plasticity: Pleased to share along with @ThomasMiconi and @jeffclune new work at Uber AI Labs on meta-training neural networks through gradient descent to be able to adapt their weights *after* SGD-based training is over:
1
50
151
@kenneth0stanley
Kenneth Stanley
7 months
For those who have confidence in the path to AGI because you believe you know its signs, it’s worth remembering what John Stuart Mill wrote in 1843: “There remains one a priori fallacy or natural prejudice, the most deeply-rooted, perhaps, of all which we have enumerated; one
12
28
146
@kenneth0stanley
Kenneth Stanley
4 years
@EugeneVinitsky But there’s a deeper question here: why do we assume that an algorithm is only “any good” if it’s definitively better than something else? The value of an algorithm can be as the seed of a chain of new ideas, a potential that benchmark/baseline obsession stifles.
5
14
136
@kenneth0stanley
Kenneth Stanley
6 years
Interesting to contemplate creating AI algorithms as *itself* an art form, i.e. not only a scientific discipline. Finally can share freely the manuscript of my position piece, "Art in the Sciences of the Artificial," published in @LeonardoISAST journal:
2
29
132
@kenneth0stanley
Kenneth Stanley
3 years
We're hiring a software engineer at OpenAI who will have the opportunity to work closely with our team, among others! More here:
1
29
131
@kenneth0stanley
Kenneth Stanley
3 months
Optimization alone can't get you novelty or interestingness.
16
11
126
@kenneth0stanley
Kenneth Stanley
6 years
Ever wanted to see what the whole population is doing in neuroevolution? We just released an interactive visualization tool called VINE for neuroevolution from Uber AI Labs:
1
35
124
@kenneth0stanley
Kenneth Stanley
3 years
I was just reading a book to my 6-year-old that briefly discussed Einstein. His incredulous response: "Einstein is real!?" That made me realize that the true mark of success is when your achievements reach such a level that people assume you must be a fictional character.
5
3
121
@kenneth0stanley
Kenneth Stanley
6 months
The fact that a human-created architecture became ascendant before one from architecture search suggests that something has been wrong with the formulation of NAS, both in the formalization of the search space and in the conception of success, which is likely too focused on
@RichardSocher
Richard Socher
6 months
What happened to architecture search? It had so much potential to help AI self improve but afaics nobody is using a model found by it. Maybe it'll get unlocked at another scale since it's just so expensive?
35
15
166
7
17
119
@kenneth0stanley
Kenneth Stanley
4 years
Announcing Synthetic Petri Dish for architecture search: An NN slice goes into a "dish" where tiny synthetic data is learned to make variants perform like the full NN. Then rapid search in the dish: beyond a model because it IS part of the real NN 1/2
Tweet media one
2
29
113
@kenneth0stanley
Kenneth Stanley
1 year
The solutions to almost all great puzzles (e.g. AGI) are counter-intuitive. Otherwise, they would be solved! So we should be suspicious when a solution is offered that sounds too intuitive. Reality is a radical!
8
5
102
@kenneth0stanley
Kenneth Stanley
19 days
“Foundation model” is a poor metaphor for LLMs. A 3-year-old is a foundation model. A 3-year-old has a foundation of skills and basic concepts upon which almost anything can be built. An LLM is a multiple-personality mega-vac - it literally sucks up every concept and
5
27
104
@kenneth0stanley
Kenneth Stanley
4 years
Excited to announce a major enhancement of the POET algorithm - broadening and simplifying its potential application to make almost any conceivable domain open-ended. With @ruiwang2uiuc @joelbot3000 @AdityaRawaI @_calio Yulun Li and @jeffclune
1
20
96
@kenneth0stanley
Kenneth Stanley
3 years
I did not contribute, but this NEAT-Gym package from Simon Levy and Coletta Fuller (neither on Twitter) may be interesting to some. It lets you run Python-based NEAT/HyperNEAT/ES-HyperNEAT/novelty search in OpenAI Gym:
0
15
93
@kenneth0stanley
Kenneth Stanley
7 months
Much discussion of AI's future extrapolates from current approaches, but things like pre-training and RLHF can become archaic surprisingly fast. One day there will be no "pre" and no "prompt," and enduring weight-level updates will happen for as long as "life" continues, just
5
20
89
@kenneth0stanley
Kenneth Stanley
5 years
Just published through Open Access at Nature Machine Intelligence: a review of promising ideas and connections to deep learning from neuroevolution: “Designing neural networks through neuroevolution” w/ @jeffclune @joelbot3000 & Risto Miikkulainen
0
25
87
@kenneth0stanley
Kenneth Stanley
2 months
My most popular talk on Why Greatness Cannot Be Planned, from TTI/Vanguard, which many used as a reference, was unfortunately taken down recently when their YouTube channel was closed. So I obtained the video from them & reposted it to keep it available:
2
16
86
@kenneth0stanley
Kenneth Stanley
8 months
For me, the question might be more whether I write any other book in my life! But thank you @BetterCallMedhi !
@BetterCallMedhi
Mehdi (e/flλ)
8 months
If you read no other book in your life: this is the one.
Tweet media one
9
9
130
5
5
83
@kenneth0stanley
Kenneth Stanley
4 years
Major advances in Go-Explore: far more SoTA results, superhuman across Atari, stochastic training, hard-exploration simulated robotics. pdf: & code updated soon: . with @AdrienLE (lead) @Joost_Huizinga (lead) @joelbot3000 @jeffclune
4
16
78
@kenneth0stanley
Kenneth Stanley
6 months
Wow interesting—I had not realized before this chart what a phenomenon open-endedness is becoming within AI and ML. As it should be—intelligence without open-endedness is like flight without the ability to take off. This will increasingly matter! (Thanks @_samvelyan !)
@_samvelyan
Mikayel Samvelyan
6 months
The surge in #OpenEndedness research on arXiv marks a burgeoning interest in the field! The ascent is largely propelled by the trailblazing contributions of visionaries like @kenneth0stanley , @jeffclune , and @joelbot3000 , whose work continues to pave new pathways.
Tweet media one
3
19
121
1
14
79
@kenneth0stanley
Kenneth Stanley
1 year
For current approaches to GPT/LLMs and chatbots, what is important for progress is not what they don’t do well, but rather what they can *never* do well and why that is, as well as which “nevers” will require the most fundamental architectural or training overhauls.
3
11
78
@kenneth0stanley
Kenneth Stanley
2 years
On consciousness I notice many dismiss it for lacking objective measure. But that’s precisely its fascination: it is the phenomenon of subjectivity itself, literally what it’s like from the inside. If science lacks the tools it is an indictment of science, not of consciousness.
3
14
76
@kenneth0stanley
Kenneth Stanley
5 years
Introducing Generative Teaching Networks - they generate synthetic data optimized to train learners as fast as possible - a new way to generate data for architecture search & more! Led by @felipesuch w/ @jeffclune , @joelbot3000 , & @AdityaRawal
1
18
77
@kenneth0stanley
Kenneth Stanley
1 month
One issue that I haven’t seen raised is that once you have AI at a genuinely human level you already have super-human capabilities because a group of many humans can achieve vastly beyond what a single human can alone (including in science and technology). So the more copies of
17
9
78
@kenneth0stanley
Kenneth Stanley
2 years
I asked ChatGPT to come up with a new programming language where you just write code in plain English. It wrote code for a couple games as examples. When will this be available? (I'm not sure of the principle behind the color coding in this language!)
Tweet media one
6
13
72
@kenneth0stanley
Kenneth Stanley
4 months
Thank you @MLStreetTalk and its fantastic host @ecsquendor for this chance to explain why my experiences after publishing Why Greatness Cannot Be Planned led me to start the Maven social network - an open-ended social network that for once isn't based on likes or follows.
@MLStreetTalk
Machine Learning Street Talk
4 months
Kenneth Stanley has launched a brand new social network called Maven based on the ideas from "Why Greatness Cannot Be Planned"! @kenneth0stanley @HeyMaven_
Tweet media one
5
10
52
3
11
76
@kenneth0stanley
Kenneth Stanley
6 months
Something interesting to try: If you feel strongly about something--say some controversial topic where emotions are high--instead of thinking about the reasons your feelings are justified (which is the natural impulse), instead think about why your feelings are your feelings. In
8
11
76
@kenneth0stanley
Kenneth Stanley
10 months
@mathemagic1an Optimization is a type of search that attempts to move towards a better score with respect to some defined (usually) global metric, but search can be guided by other principles.
3
6
72
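As a concrete contrast with score-driven optimization, here is a minimal novelty-search-style sketch in the spirit of the distinction drawn above: candidates are selected for how far their behavior is from an archive of previously seen behaviors rather than for a fitness score. The genome and behavior characterization are invented placeholders, and this is a simplification of the published algorithm.

```python
import math
import random

# Minimal novelty-search-style sketch: search is guided by behavioral
# novelty (distance to an archive of past behaviors) instead of a global
# objective score. Genome and behavior functions are placeholders.

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(4)]

def mutate(genome):
    return [g + random.gauss(0.0, 0.05) for g in genome]

def behavior(genome):
    # Behavior characterization (placeholder): a 2-D summary of what the
    # genome "does" rather than how well it scores.
    return (sum(genome), sum(abs(g) for g in genome))

def novelty(b, archive, k=10):
    # Sparseness: mean distance to the k nearest archived behaviors.
    if not archive:
        return float("inf")
    dists = sorted(math.dist(b, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

population = [random_genome() for _ in range(50)]
archive = []
for generation in range(100):
    scored = sorted(((novelty(behavior(g), archive), g) for g in population),
                    key=lambda t: t[0], reverse=True)
    archive.extend(behavior(g) for _, g in scored[:3])   # archive the most novel
    parents = [g for _, g in scored[:25]]                # select by novelty, not score
    population = [mutate(random.choice(parents)) for _ in range(50)]

print("archive size:", len(archive))
```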
@kenneth0stanley
Kenneth Stanley
4 years
. @jbrant1983 & I changed the minimal criterion coevolution (MCC) open-ended alg to replace speciation with a simple resource limit with surprising benefit: MCC, now extremely simple, actually works better (& won Best Paper in Complex Systems @GeccoConf !)
Tweet media one
1
9
70
@kenneth0stanley
Kenneth Stanley
1 year
As some of you know, I've been "doing something new" that I hope to share soon! It happens that being in the Lux Capital family is part of that journey, and I couldn't be happier to congratulate Lux on the launch of their latest fund - Lux ventures 8
5
6
71
@kenneth0stanley
Kenneth Stanley
1 month
What does it mean that an AI with the naivety yet aptitude of a 3-year-old is harder to create than a simulacrum of an adult polymath?
26
10
72
@kenneth0stanley
Kenneth Stanley
2 years
Worth considering: Open-ended systems (which include infants and children) learn in a different order than objective-driven systems. 2/2
6
6
71
@kenneth0stanley
Kenneth Stanley
4 years
Happy to share that our paper "Enhanced POET: Open-ended RL through Unbounded Invention of Learning Challenges & their Solutions" will appear at #icml2020 Led by @ruiwang2uiuc w/ @joelbot3000 , @AdityaRawaI , @_calio , Yulun Li, @jeffclune , and myself 1/2
1
18
68
@kenneth0stanley
Kenneth Stanley
14 days
When LLMs fail at simple reasoning, many posit an inability to genuinely reason as the culprit. But there’s a deeper ingredient that underpins reasoning and many other faculties as well: representation. Before worrying about reasoning, worry about representation.
10
10
67
@kenneth0stanley
Kenneth Stanley
4 years
Could not be more thrilled to be working together again with my amazing collaborator @joelbot3000 , now at @OpenAI , and on so exciting a topic as open-endedness! Congrats to Joel on the new role and looking forward to reaching the next stepping stone together!
@joelbot3000
Joel Lehman
4 years
I’m excited to announce that I have joined @OpenAI as of last month, and am happy to again team up with the singular @kenneth0stanley to push forward the frontiers of open-endedness. Unexpected where the stepping stones lead (image from 1st academic presentation at ALife 2008). 1/3
Tweet media one
11
4
149
3
2
62
@kenneth0stanley
Kenneth Stanley
7 years
Open-endedness is among the most fascinating of challenges, yet massively underappreciated. @joelbot3000 @err_more and I wrote an article to broaden the discussion:
1
38
64
@kenneth0stanley
Kenneth Stanley
2 years
I did it again - second time on @MLStreetTalk ! It was so much fun the first time that I couldn’t resist. Just as stimulating the second time around, with entirely new topics covered. Thanks to @ecsquendor and @DoctorDuggar !
@MLStreetTalk
Machine Learning Street Talk
2 years
This is a special day! We speak with @kenneth0stanley for the second time about subjectivity, art and open-endedness. Ken gave us a much deeper insight into his philosophy on the second time round.
Tweet media one
1
3
26
8
11
62
@kenneth0stanley
Kenneth Stanley
2 years
This point from @hardmaru is why understanding open-endedness will ultimately prove fundamental to establishing a healthy dynamic as creative proliferation is increasingly dominated by machines.
@hardmaru
hardmaru
2 years
But unlike previous trends, machine learning models are constantly updated with new data, which is produced by our collective intelligence reflecting the current state of our culture. If most of this new creative content is made using ML, it will lead to this weird feedback loop.
15
14
206
2
8
61
@kenneth0stanley
Kenneth Stanley
4 years
Thank you to the organizers of the Meta-Learning & Multi-Agent Learning Workshop for the opportunity to give a talk and to participate in your panel on open-endedness. That was a great discussion and inspiring to see so much interest and novel thinking on open-endedness!
@GoodAIdev
GoodAI
4 years
The Importance of Open-Endedness in AI & Machine Learning, Kenneth Stanley ( @kenneth0stanley ) of @OpenAI is currently giving a talk on open-endedness at the Meta-Learning & Multi-Agent Learning Workshop. Video available tomorrow.
Tweet media one
0
6
33
4
9
61
@kenneth0stanley
Kenneth Stanley
4 years
Generative Teaching Networks accepted at ICML! blog: paper: GTNs learn to generate data/experience to teach other networks! Led by @felipesuch w @AdityaRawaI @joelbot3000 @jeffclune
0
18
60
@kenneth0stanley
Kenneth Stanley
2 years
Original and thoughtful questions, broad-ranging discussion - a real privilege to appear on @InvestLikeBest with @patrick_oshag !
@patrick_oshag
Patrick OShaughnessy
2 years
My conversation with @kenneth0stanley about why you shouldn’t pursue big hairy audacious goals. I hope this stirs up some controversy b/c the implications of the concept are large. That he “discovered” this idea via AI research fascinates me. Enjoy!
Tweet media one
16
22
158
2
9
61
@kenneth0stanley
Kenneth Stanley
2 years
Great work @carperai ! Evolution through Large Models (ELM) in open source!
1
7
60
@kenneth0stanley
Kenneth Stanley
3 years
Very interesting how this paragraph from @JeffBezos does sound like something out of Why Greatness Cannot Be Planned. Good catch @MAghajohari !
@MAghajohari
Milad Aghajohari
3 years
This paragraph from a letter Jeff Bezos wrote to shareholders strongly reminds me of the whole idea behind following the gradient of interestingness that I learned about in the great podcast by @ecsquendor on @kenneth0stanley ideas.
Tweet media one
1
4
34
2
9
60
@kenneth0stanley
Kenneth Stanley
7 months
Sounds like you might enjoy Why Greatness Cannot Be Planned.
@ESYudkowsky
Eliezer Yudkowsky ⏹️
7 months
Nobody called LLMs, as far as I know. One guy at OpenAI built GPT-1 while the others were working on deep RL, OpenAI Gym and such. Technology in general and AI in particular is not that predictable! People thought at one point that defeating human grandmasters at chess would
36
12
248
2
2
58
@kenneth0stanley
Kenneth Stanley
6 years
I wasn't part of this project but it sounds potentially interesting: Browser-based NEAT with @TensorFlow
@mishig25
Mishig Davaadorj
6 years
I created a browser based environment where humanoids are trained to walk through NeuroEvolution of Augmenting Topologies (NEAT) @kenneth0stanley | Tools used: @TensorFlow , Neataptic.js, Planck.js (a Box2D rewrite)
3
57
190
1
8
59
@kenneth0stanley
Kenneth Stanley
3 years
Appearing on this show was a great experience, exhilarating discussion, and also at times thought-provoking debate - thank you @MLStreetTalk ! (And it leads with a very well-produced intro to our work going back a long ways. Much appreciation to @ecsquendor and all involved!)
@MLStreetTalk
Machine Learning Street Talk
3 years
Epic special edition. @kenneth0stanley on why greatness cannot be planned, abandoning objectives and open-endedness. @joelbot3000 @jeffclune @ykilcher
Tweet media one
2
23
78
2
8
57
@kenneth0stanley
Kenneth Stanley
4 years
Great time reminiscing about the days of NEAT (and how things have changed since then) on @wellecks ’ Thesis Review podcast! Also a great idea for a show, to look back years later with the authors of selected dissertations! Thank you @wellecks for having me.
@thesisreview
The Thesis Review Podcast
4 years
Episode 9 of The Thesis Review: Kenneth Stanley ( @kenneth0stanley ), "Efficient Evolution of Neural Networks through Complexification" We discuss neuroevolution and the NEAT algorithm, open-endedness and POET, and how the field has developed over time
Tweet media one
1
4
31
1
5
54
@kenneth0stanley
Kenneth Stanley
4 years
Congratulations @_joelsimon on such a creative and socially relevant application of concepts from novelty search and quality diversity. I hope this has the impact on our democracy that it deserves to. Very inspiring work!
@_joelsimon
Joel Simon
4 years
Excited to share 💻🇺🇸 A paper, interactive blog, and open-source initiative to help expose gerrymandering by generating many optimized alternatives. Motivated by how generative design can make complex tradeoffs understandable. 1/
8
120
367
1
10
52
@kenneth0stanley
Kenneth Stanley
6 years
More progress in deep neuroevolution, this time from Sentient. This research area is heating up:
0
26
54
@kenneth0stanley
Kenneth Stanley
4 years
In case you didn’t have time for our arxiv paper on Enhanced POET, a quicker blog post with videos and pictures was just released. See enhancements like domain-general environmental diversity metrics and a new open-ended progress measure: 1/
1
17
50
@kenneth0stanley
Kenneth Stanley
9 months
Just another example of the Myth of the Objective. Maximization does not lead to greatness.
@sentefmi
Michael Sentef
2 years
Just a quick reminder that you can win a Nobel with an h-index = 29 (according to the usually inflated google scholar). Quantitative measures of scientific output are not the way to go to a bright future for humanity.
Tweet media one
88
1K
7K
3
0
50
@kenneth0stanley
Kenneth Stanley
5 years
Novelty search, quality diversity (QD), open-endedness, and indirect encoding, all in an #icml2019 tutorial on Monday morning! Excited to be presenting with @jeffclune and @joelbot3000 :
0
12
48
@kenneth0stanley
Kenneth Stanley
6 years
Pleased to introduce with @NavidKardaan an intriguingly simple fix to overgeneralization in #NeuralNetworks that also reduces fooling and helps with adding new classes: The Competitive Overcomplete Output Layer (COOL) - (separations on same prob in pic)
Tweet media one
2
17
51
@kenneth0stanley
Kenneth Stanley
6 years
Happy to be part of this very interesting extension of novelty search that combines it with ES to adjust the amount of emphasis on novelty dynamically over the course of search.
@jeffclune
Jeff Clune
6 years
Introducing a new algorithm that automatically, intelligently adjusts exploration vs. exploitation for #DeepRL . NSRA-ES exploits until stuck, then increasingly explores. It's newly added to 'Improving Exploration in ES'. Great work on it, Vashisht Madhavan!
1
25
128
0
14
49
@kenneth0stanley
Kenneth Stanley
1 year
The question is not whether LLMs are intelligent but rather whether they are a stepping stone along one of many paths. Are our worm-like ancestors more “intelligent” than LLMs? Arguments about stepping stones are far harder, hence less amenable to certitude.
4
4
48
@kenneth0stanley
Kenneth Stanley
11 months
It’s the right time for someone to do this! Congrats @hardmaru , glad it will be you (as well as @YesThisIsLion )!
@hardmaru
hardmaru
11 months
Personal Announcement! I’m launching @SakanaAILabs together with my friend, Llion Jones ( @YesThisIsLion ). is a new R&D-focused company based in Tokyo, Japan. We’re on a quest to create a new kind of foundation model based on nature-inspired intelligence!
Tweet media one
148
419
3K
2
1
48
@kenneth0stanley
Kenneth Stanley
30 days
Great to see continuing deep thinking on the role of open-endedness in the current era of AI - congrats to @edwardfhughes and all the coauthors!
@edwardfhughes
Edward Hughes
1 month
📣 New paper! Open-Endedness is Essential for Artificial Superhuman Intelligence 🚀 (co-lead @MichaelD1729 ). 🔎 We propose a simple, formal definition for Open-Endedness to inspire rapid and safe progress towards ASI via Foundation Models. 📖 🧵[1/N]
Tweet media one
9
29
109
0
10
49
@kenneth0stanley
Kenneth Stanley
6 years
ES with novelty search is coming to NIPS!
@jeffclune
Jeff Clune
6 years
Our paper was accepted @NipsConference ! Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents. Congrats to leads @vashmadhavan , Edoardo Conti, & the whole team @felipesuch @joelbot3000 @kenneth0stanley @UberAILabs
7
33
155
0
3
46
@kenneth0stanley
Kenneth Stanley
4 years
Intriguing demonstration of the potential of Hebbian plasticity in large networks at #NeurIPS . Congrats @risi1979 and @enasmel !
@risi1979
Sebastian Risi
4 years
Congratulations to @enasmel for his (and my!) first accepted #NeurIPS paper! 🎉 We added an experiment that shows that Hebbian networks can also sometimes generalize to robot morphologies not seen during training. PDF: Code:
12
45
226
2
4
46
@kenneth0stanley
Kenneth Stanley
2 years
Inspiring work here for anyone interested in POET or open-endedness! Congrats to @jparkerholder and his coauthors! I recommend taking a look:
@jparkerholder
Jack Parker-Holder
2 years
Evolving Curricula with Regret-Based Environment Design. Website: Paper: TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below). Highlights ⬇️! [1/N]
3
47
226
1
8
45
@kenneth0stanley
Kenneth Stanley
2 years
This was a really fun talk and discussion at @UCL_DARK - very honored to speak! And the topics I covered are ones I've rarely spoken about previously (including new work), so it's quite unique. Thanks to the whole lab and especially @LauraRuis for the organizing!
@UCL_DARK
UCL DARK
2 years
We extend our deepest gratitude to Kenneth Stanley ( @kenneth0stanley ) for his amazing and thought-provoking talk on “Novel Opportunities in Open-Endedness” during the @UCL_DARK Invited Speaker Series. Now available on our YouTube channel:
Tweet media one
0
5
46
0
8
45
@kenneth0stanley
Kenneth Stanley
4 years
@EugeneVinitsky Does it really matter if algorithm x is “really” 2% better vs. 2% worse than algorithm y if x is a whole new way of thinking? The numbers become an excuse for us not to think.
2
5
45
@kenneth0stanley
Kenneth Stanley
4 years
Thank you again to the organizers of the Meta-Learning & Multi-Agent Learning Workshop for the opportunity to talk (now available to watch below) on why open-endedness is important for AI and machine learning!
@GoodAIdev
GoodAI
4 years
The Importance of Open-Endedness in AI and Machine Learning - a fascinating talk by Kenneth Stanley ( @kenneth0stanley ) of @OpenAI at the Meta-Learning & Multi-Agent Learning Workshop.
0
7
21
2
6
45
@kenneth0stanley
Kenneth Stanley
5 years
Video of our ICML tutorial (with @jeffclune and @joelbot3000 ) covering topics including Quality Diversity (first), Open-Endedness (1:14:20), and Indirect Encoding (1:51:50) is at . If you haven't heard of these topics, they're worth a look!
1
5
44
@kenneth0stanley
Kenneth Stanley
5 years
@TonyZador @TonyZador there is actually a whole active area of research within the field of neuroevolution on evolving the "genetic bottleneck," which is called in ML an *indirect encoding*. See for example HyperNEAT: ...(more)
1
8
40
@kenneth0stanley
Kenneth Stanley
2 years
I spent 2 hours of in-depth conversation with the @experilearning community about implications of Why Greatness Cannot Be Planned for *education*. First time to have such a great opportunity to go deeply on this important topic: Thanks @experilearning !
2
5
43
@kenneth0stanley
Kenneth Stanley
6 years
A lot of excitement about NVIDIA's new GAN () but it raises the interesting question of what it takes to genuinely infer strict regularity in structure, which remains slippery even for such a striking success. (image annotated by me from their video)
Tweet media one
2
9
42
@kenneth0stanley
Kenneth Stanley
1 year
Speaking of AI risks, if I gave a cutting-edge LLM (say with visual inputs too) an email address, access to a browser, a place to store notes, $1M in the bank with a credit card, and told it to “improve the human condition” in a continual loop, what would it do?
13
2
42
@kenneth0stanley
Kenneth Stanley
10 months
@WilliamCB By the way, the fact that it's hard for many people to even conceive of making choices that do not maximize something reveals a profound cultural undercurrent in our society. Optimization is so worshipped that it is inconceivable to many that anything else is even possible.
3
9
42
@kenneth0stanley
Kenneth Stanley
5 years
Nice coverage by @SilverJacket of some of the big ideas behind our work and some of the history of neuroevolution for @QuantaMagazine at ...thanks for the thoughtful article!
1
13
40
@kenneth0stanley
Kenneth Stanley
4 years
Thank you @vijayants for the sentiment- I don't think @joelbot3000 or I had realized the book might be worth more than one read! Glad to hear you get something new from it each time!
@vijayants
Baba Yoda
4 years
Highly recommended book for most of us, every time I re-read it (but of course, sans any objective or plan) I get something new & marvellous... Today, it was the validation of an idea. @kenneth0stanley & Joel Lehman. #ComputerScience #AI #Life #Books
Tweet media one
0
3
19
2
3
39
@kenneth0stanley
Kenneth Stanley
6 years
Giving a keynote this morning at 8:45am at the #NeurIPS2018 Workshop on Machine Learning for Creativity and Design (room 518). Looking forward to seeing people who managed to get up early!
4
3
40
@kenneth0stanley
Kenneth Stanley
1 year
There are many ways to research open-endedness. Artificial life (alife) is still full of possibilities.
@RandazzoEttore
Ettore Randazzo
1 year
Finally ready to share Biomaker CA, a Biome Maker project using Neural Cellular Automata. w/ @zzznah See live article (with lots of videos) and arxiv paper. 1/N
6
79
396
2
4
40
@kenneth0stanley
Kenneth Stanley
2 years
Looking forward to joining a bunch of great speakers to talk about open-endedness at @aloeworkshop at ICLR 2022!
@aloeworkshop
ALOE Workshop
2 years
Kicking off our speaker spotlights is @kenneth0stanley , a pioneer of open-endedness in ML. He co-authored Why Greatness Cannot Be Planned w/ @joelbot3000 (also speaking). They argue open-ended search can lead to better solutions than direct optimization.
Tweet media one
2
10
36
1
7
40
@kenneth0stanley
Kenneth Stanley
2 years
I just had a very interesting and wide-ranging conversation on Why Greatness Cannot Be Planned with the @LTCWRK community, hosted thoughtfully by @BlasMoros . Most of it is forward focused on what comes next after we accept the book's perspective.
2
5
39
@kenneth0stanley
Kenneth Stanley
4 months
More exciting work announced pushing progress forward in open-endedness (this time from @mitrma and colleagues) - third day in a row of such an announcement! The field is really amping up!
@mitrma
Michael Matthews
4 months
I’m excited to announce Craftax, a new benchmark for open-ended RL! ⚔️ Extends the popular Crafter benchmark with Nethack-like dungeons ⚡Implemented entirely in Jax, achieving speedups of over 100x 1/
10
58
259
0
7
37
@kenneth0stanley
Kenneth Stanley
4 months
Great concept and implementation from the Open-Endedness Team at @GoogleDeepMind . Open-endedness remains a critical way forward for AI! I'm thrilled to see it thriving!
@_rockt
Tim Rocktäschel
4 months
I am really excited to reveal what @GoogleDeepMind 's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.
145
573
3K
0
7
38