Jack Parker-Holder
@jparkerholder
Followers
5K
Following
6K
Media
39
Statuses
927
Research Scientist @GoogleDeepMind & Honorary Lecturer @UCL_DARK interested in generating worlds from video data. Dad (👶🐶), CFC fan, BJJ. Views are my own :)
London, England
Joined October 2018
🚨🚨 We are hiring a student researcher to start in Q1 2025!! If you're interested in exploring the potential for world models to fuel the next wave of embodied AI, apply here by *DECEMBER 13TH*.
Introducing 🧞Genie 2 🧞 - our most capable large-scale foundation world model, which can generate a diverse array of consistent worlds, playable for up to a minute. We believe Genie 2 could unlock the next wave of capabilities for embodied agents 🧠.
5
33
203
When we started this project the idea of training world models *exclusively* from Internet videos seemed wild, but it turns out latent actions are the key and the bitter lesson holds. Now we have a viable path to generating the rich diversity of environments we need for AGI. 🚀.
I am really excited to reveal what @GoogleDeepMind's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.
8
23
166
I always love hearing from former ML PhD students about the days before TensorFlow/PyTorch. Maybe in a few years we will tell current PhD students about the time before free MuJoCo 🙌.
We’ve acquired the MuJoCo physics simulator and are making it free for all, to support research everywhere. MuJoCo is a fast, powerful, easy-to-use, and soon-to-be open-source simulation tool, designed for robotics research:
4
11
133
Feel very fortunate to have contributed to this as my first project @DeepMind! It is amazing to see what can be done when combining Transformer models with meta-RL and PLR in a vast, open-ended task space!
I’m super excited to share our work on AdA: An Adaptive Agent capable of hypothesis-driven exploration which solves challenging unseen tasks with just a handful of experience, at a similar timescale to humans. See the thread for more details 👇 [1/N]
3
8
100
Amazing to see such fast progress in video generation, congrats to the Veo 2 team!!
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts 🎥. We’re also releasing an improved version of our text-to-image model, Imagen 3, available to use in ImageFX through
2
2
73
Excited to announce that Genie will be presented as an Oral at @icmlconf #ICML2024, see you all in Vienna!!
I am really excited to reveal what @GoogleDeepMind's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.
5
7
71
Super exciting time to work on population-based methods! We already have fast data collection, now this paper shows vectorizing agent updates can lead to huge speedups (on a GPU): Looking forward to discussing with the authors (@instadeepai) at #ICML2022😀.
3
7
60
In 2024 we saw huge advances in video + world models. For me the most exciting result was the SIMA agent interacting in worlds generated by Genie 2 + Imagen 3 and solving novel tasks. Now we have some key ingredients for an open-ended system… and it’s the worst it’ll ever be 😀.
🤯🤯🤯… And just like that, we have a path to unlimited environments for training and evaluating our embodied agents! We tried creating another world with three arches, and once again Genie 2 was able to simulate the world and SIMA solved the task ✅.
3
8
61
I think industry labs will always want to recruit people with new scalable ideas that show a possible path to the next big leap. To me, the issue is students being falsely encouraged to iterate on toy benchmarks with increasingly complex methods, for the sake of "papers".
feeling a bit under the weather this week … thus an increased level of activity on social media and blog:
2
4
59
Working closely with many amazing members of @UCL_DARK (and @robertarail) over the past few years has been a privilege and I am *also* super excited to make this official!! 😎🚀
We are super excited to announce that Dr Roberta Raileanu (@robertarail) and Dr Jack Parker-Holder (@jparkerholder) have joined @UCL_DARK as Honorary Lecturers! Both have done impressive work in Reinforcement Learning and Open-Endedness, and our lab is lucky to get their support.
4
5
56
A perfect way to cap what has been the highlight of my career, so far… 🫶🧞🧞🧞
4
2
54
Heading to Baltimore for #ICML2022 ✈️ Will be presenting ACCEL on Thursday and would love to chat about unsupervised environment design and open-endedness with many of you there! DM if you're around and want to catch up 😀.
Evolving Curricula with Regret-Based Environment Design. Website: Paper: TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below). Highlights ⬇️! [1/N]
1
3
52
Heading to @NeurIPSConf tomorrow, would be great to chat about open-endedness, RL, world models or England’s chances at the World Cup 😀 DMs open! #NeurIPS2022
4
4
51
If you're thinking of applying for PhDs, interested in open-endedness/foundation models and don't mind rainy weather 🇬🇧, then consider applying to @UCL_DARK! My DMs are open and I'll be in New Orleans for NeurIPS so please get in touch if this sounds like you! 😀.
We (@_rockt, @egrefen, @robertarail, and @jparkerholder) are looking for PhD students to join us in Fall 2024. If you are interested in Open-Endedness, RL & Foundation Models, then apply here: and also write us at ucl-dark-admissions@googlegroups.com.
3
8
43
I’ll be ✈️ to #NeurIPS2023 on Monday and hoping to discuss:
- open-endedness and why it matters for AGI #iykyk
- world models
- why it’s never been a better time to do a PhD in ML (especially @UCL_DARK 😉)!
Find me at two posters + @aloeworkshop + hanging around the GDM booth 🤪
Everyone from @GoogleDeepMind's Open-Endedness Team and almost the entire @UCL_DARK Lab are going to be at @NeurIPSConf 2023 next week. You will find most of us at the @ALOEworkshop on Friday. Come and say hi!
0
3
41
Super excited to be heading to #ICML2024 next week!!! We will be presenting Genie as an Oral on Tuesday morning, and a poster straight after. If you can't make it, we will be doing a demo at the GDM booth on Wednesday at 10am 😀.
Heading to @icmlconf next week? 🇦🇹. Our teams are presenting over 8️⃣0️⃣ papers and hosting live research demos on AI assistants for coding, football tactics, and more. We’ll also be showcasing Gemini Nano: our most efficient model for on-device tasks. →
2
3
41
Not sure who needs to hear this, but, effectively filtering large and noisy datasets is a gift that keeps on giving!! 🎁 Often more impactful than fancy new model architectures 😅 We found this same thing in RL with autocurricula (e.g. PLR, ACCEL), and I'd bet it works elsewhere.
In our new paper (oral ICCV23), we develop a concept-specific pruning criterion (Density-Based-Pruning) which reduces the training cost by 72%. Joint work with @amrokamal1997 @kushal_tirumala @wielandbr @kamalikac @arimorcos (1/5).
1
6
38
Going for action-free training is a total game changer and it helps to do it with someone who has been thinking about this for years (who happens to also be one of the nicest people ever).
This was such a fun and rewarding project to work on. Amazing job by the team! The most exciting thing for me is that we were able to achieve this without using a single doggone action label, which believe me, was not easy!
1
2
35
Super cool work showing QD algorithms at scale 🚀 Congrats to the team!! May be of interest @CULLYAntoine @tehqin17 @jeffclune @MinqiJiang @_samvelyan.
I'm super excited to share AlphaZeroᵈᵇ, a team of diverse #AlphaZero agents that collaborate to solve #Chess puzzles and demonstrate increased creativity. Check out our paper to learn more! A quick 🧵 (1/n)
1
6
34
Super excited to be in Seattle for @CVPR! Will be presenting Genie as a keynote tomorrow at the AI for Content Creation Workshop at 3:15pm and on Tues at @cveu_workshop in the afternoon session. If you can’t make it, fear not! I’ll also be at the Google booth Thurs 1:30pm! 🧞🧞.
0
4
35
Super excited for this, see you in Singapore :D.
🎤 World Models Workshop at #ICLR2025! 🌍✨ We invited key contributors to the #Google #Genie foundation world model, Open-Endedness Team Lead Tim Rocktäschel @_rockt and Jack Parker-Holder @jparkerholder, to share their groundbreaking work with us! 🙌 🔥 Keynote Highlights: 🤖
0
1
34
For anyone interested in finding diverse solutions for exploration or generalization, this is worth checking out! Was awesome to work on this project and I'm excited to see where the next ridges take us!! 🚀
The gradient is a locally greedy direction. Where do you get to if you follow the eigenvectors of the Hessian instead? Our new paper, “Ridge Rider”, explores how to do this and what happens in a variety of (toy) problems (if you dare to do so). Thread 1/N
1
4
31
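The Ridge Rider idea above can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's implementation: the loss, its analytic Hessian, and the step size are all made up for demonstration, and a real use would compute Hessian-vector products on a neural network.

```python
import numpy as np

# Toy sketch of the Ridge Rider direction (hypothetical example, not
# the paper's code): at a saddle point the gradient is zero, but
# eigenvectors of the Hessian with negative eigenvalues ("ridges")
# still point downhill.
def loss(p):
    x, y = p
    return x ** 2 - y ** 2  # saddle point at the origin

def hessian(p):
    # Analytic Hessian of the toy loss (constant here).
    return np.array([[2.0, 0.0], [0.0, -2.0]])

p = np.array([0.0, 0.0])                      # start exactly on the saddle
eigvals, eigvecs = np.linalg.eigh(hessian(p)) # eigenvalues in ascending order
ridge = eigvecs[:, 0]                         # eigenvector of the most negative eigenvalue
p = p + 0.1 * ridge                           # step along the ridge, not the gradient
```

Following different ridges (different negative eigenvalues, or either sign of an eigenvector) is what yields the diverse solutions the thread describes.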
👋 @MinqiJiang and I will be presenting ACCEL today @icmlconf, come by!
Talk: Room 327 at 14:35 ET
Poster: Hall E #919
Hopefully see you there 😀 #ICML2022
Evolving Curricula with Regret-Based Environment Design. Website: Paper: TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below). Highlights ⬇️! [1/N]
0
9
31
The Open-Endedness team is growing 🌱 come and join us!! Exciting times 😀.
In addition to a Research Engineer, we are also looking for a Research Scientist 🧑🔬 to join @DeepMind's Open-Endedness Team! If you are excited about the intersection of open-ended, self-improving, generalist AI and foundation models, please apply 👇
1
2
29
We often refer to Genie as an action controllable video model, but it is arguably more like an image generation model with memory. Super cool to see a similar approach go back to the original World Models inspo from @hardmaru and model Doom 😎 Congrats @shlomifruchter and team!.
Google presents Diffusion Models Are Real-Time Game Engines. discuss: We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality.
0
1
29
Looking forward to discussing Genie tomorrow!! 🧞♀️🧞🧞♂️.
📢 Happy to share our UMD MARL talk @ Apr 16, 12:00 pm ET 📢
by @GoogleDeepMind Research Scientist @jparkerholder
on "Generative Interactive Environments (GENIE)"
in-person: IRB-5137
virtually: @johnpdickerson @umdcs @umiacs @ml_umd #RL #AI #ML
0
4
29
With Bayesian Generational PBT we can update *both* architectures and >10 hyperparameters on the fly in a single run 😮 Even better, it’s fast with parallel simulators ⚡️… great time to work in this area!!
(1/7) Population Based Training (PBT) has been shown to be highly effective for tuning hyperparameters (HPs) for deep RL. Now with the advent of massively parallel simulators, there has never been a better time to use these methods! However, PBT has a couple of key problems….
0
2
27
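The PBT loop described above can be sketched in miniature. This is a hedged toy example, not the BG-PBT method from the paper: the single hyperparameter, the fake evaluation function, and the exploit/explore rates are all illustrative assumptions.

```python
import random

# Minimal PBT sketch (toy): each worker carries one hyperparameter
# (lr); "training" is a stand-in eval that peaks at lr = 0.1.
# Periodically the worst workers copy the best worker's
# hyperparameters (exploit) and perturb them (explore).
def evaluate(lr):
    return -(lr - 0.1) ** 2  # higher is better, optimum at lr = 0.1

random.seed(0)
population = [{"lr": random.uniform(0.0, 1.0)} for _ in range(8)]

for step in range(30):
    scored = sorted(population, key=lambda w: evaluate(w["lr"]))
    best = scored[-1]
    for worker in scored[:2]:                       # bottom two workers
        worker["lr"] = best["lr"] * random.choice([0.8, 1.2])  # exploit + explore

best_lr = max(population, key=lambda w: evaluate(w["lr"]))["lr"]
```

Because the best worker is never overwritten, population quality is monotone, and the multiplicative perturbations let the population drift toward the optimum without any gradient information.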
By curating *randomly generated* environments we can produce a curriculum that makes it possible for a student agent to transfer zero-shot to challenging human-designed ones, including Formula One tracks 🏎️. Maybe one day F1 teams will use PLR? 😀 Come check it out @NeurIPSConf.
🏎️ Replay-Guided Adversarial Environment Design. Prioritized Level Replay (PLR) is secretly a form of unsupervised environment design. This leads to new theory improving PLR + impressive zero-shot transfer, like driving the Nürburgring Grand Prix. paper:
1
5
27
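The curation step in PLR can be sketched as rank-based replay sampling. A hedged toy example: the level names and random scores here stand in for the learned regret estimates the paper actually uses, and `beta` is an assumed temperature parameter.

```python
import random

# PLR-style curation sketch (toy): sample replay levels in proportion
# to their score rank, so high-"regret" randomly generated levels
# dominate the curriculum.
random.seed(0)

levels = [f"level_{i}" for i in range(10)]
scores = {lvl: random.random() for lvl in levels}  # stand-in regret scores

def replay_distribution(scores, beta=1.0):
    # Rank-based prioritisation: weight 1/rank^beta, normalised.
    ranked = sorted(scores, key=scores.get, reverse=True)
    weights = [1.0 / (r + 1) ** beta for r in range(len(ranked))]
    total = sum(weights)
    return {lvl: w / total for lvl, w in zip(ranked, weights)}

dist = replay_distribution(scores)
```

Rank-based weighting (rather than raw scores) keeps the distribution robust to the scale of the regret estimates.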
Was super fun chatting with @kanjun and @joshalbrecht, hopefully I said something useful in there somewhere! Also interesting to see how much has changed since we spoke in August (both in the field and for @genintelligent 🚀) what a time to be an AI researcher!!😀.
Had a really fun convo with @jparkerholder about co-evolving RL agents & environments, alternatives & blockers to population-based training, and why we aren't thinking properly about data efficiency in RL. We also discussed how Jack managed so many papers during his PhD 💪!.
0
5
27
We introduce ACCEL, a new algorithm that extends replay-based Unsupervised Environment Design (UED) by including an *editor*. The editor makes small changes to previously useful levels, which compound over time to produce complex structures. [2/N]
🏎️ Replay-Guided Adversarial Environment Design. Prioritized Level Replay (PLR) is secretly a form of unsupervised environment design. This leads to new theory improving PLR + impressive zero-shot transfer, like driving the Nürburgring Grand Prix. paper:
2
8
27
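The edit-and-replay loop above can be sketched with a stand-in level format. A toy illustration under assumptions: the bit-string "levels", the density-based `regret` proxy, and the single-bit `edit` mutation are all hypothetical placeholders for the paper's environments, learned regret estimates, and level edits.

```python
import random

# ACCEL-style editing loop (toy sketch, not the paper's implementation):
# keep a buffer of useful levels and grow complexity by making small
# edits to replayed levels rather than generating from scratch.
random.seed(1)

def regret(level):
    # Stand-in for the learned regret estimate: density of filled cells.
    return sum(level) / len(level)

def edit(level):
    # Small mutation: flip one random cell.
    i = random.randrange(len(level))
    child = list(level)
    child[i] ^= 1
    return child

buffer = [[0] * 16]                    # start from an empty level
for _ in range(100):
    parent = max(buffer, key=regret)   # replay a high-regret level
    child = edit(parent)               # make a small edit
    if regret(child) >= regret(parent):
        buffer.append(child)           # keep edits that stay useful
```

The key design choice mirrored here is that edits compound: each kept child becomes a candidate parent, so structure accumulates over iterations.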
Open source world models + real world data to train them with. What a time to be alive 🚀.
@1x_tech @JackMonas For people interested in getting started with world models, we've also open-sourced a PyTorch implementation of GENIE here
1
5
28
Check out our #NeurIPS2022 paper showing we can train more general world models by collecting data with a diverse population of agents! Great work by @YingchenX and team!! Come chat to us in New Orleans 😀.
Interested in learning general world models at scale? 🌍 Check out our new #NeurIPS2022 paper to find out! Paper: Website: [1/N]
0
10
27
Join us!! 😀.
We are hiring for @DeepMind’s Open-Endedness team. If you have expertise in topics such as RL, evolutionary computation, PCG, quality diversity, novelty search, generative modelling, world models, intrinsic motivation etc., then please consider applying!.
1
3
27
As we see with Genie - foundation world models trained from videos offer the potential for generating the environments we need for AGI 🎮. New paper by @mengjiao_yang laying out all the possibilities in the space, exciting times 🚀.
Video as the New Language for Real-World Decision Making. Both text and video data are abundant on the internet and support large-scale self-supervised learning through next token or frame prediction. However, they have not been equally leveraged: language models have had
1
2
27
This is also what we see with Genie, predicting the future is sufficient to learn parallax and consistent latent actions.
Predicting the next word "only" is sufficient for language models to learn a large body of knowledge that enables them to code, answer questions, understand many topics, chat, and so on. This is clear to many researchers now, and there are nice tutorials on why this works by
1
4
27
Working on this paper has been super helpful for me to have a concrete definition of open-endedness, kudos to @edwardfhughes and @MichaelD1729 who did an amazing job! This field seems to get even more exciting every week as we build on ever more capable foundation models 🚀.
📣 New paper! Open-Endedness is Essential for Artificial Superhuman Intelligence 🚀 (co-lead @MichaelD1729). 🔎 We propose a simple, formal definition for Open-Endedness to inspire rapid and safe progress towards ASI via Foundation Models. 📖 🧵[1/N]
0
5
27
PSA: you can use linear models in deep RL papers and still get accepted at #ICML2021!! Congrats to @philipjohnball and @cong_ml. Now let’s try and beat ViT with ridge regression :)
The case for offline RL is clear: we often have access to real world data in settings where it is expensive (and potentially even dangerous) to collect new experience. But what happens if this offline data doesn’t perfectly match the test environment? [1/8].
0
2
25
Genie + @UCL_DARK = 🫶🚀.
We're excited to announce that the Genie Team from @GoogleDeepMind will be our next invited speakers!
Title: Genie: Generative Interactive Environments
Speakers: @ashrewards, @jparkerholder, @YugeTen
Sign up: 📌 90 High Holborn 📅 Tue 30 Apr, 17:00
1
2
24
We can now scale UED to competitive multi-agent RL!! This plot is my favorite, showing that the agent-level dependence clearly matters 🤹♂️ Come check out the paper at #ICLR2023.
A key insight for multi-agent settings is that, from the perspective of the teacher, maximising the student’s regret over co-players independently of the environment (and vice versa) doesn’t guarantee maximising regret in the joint space of co-players and environments.
0
3
24
Thank you @maxjaderberg!! XLand was super inspiring for us, it showed that our current RL algorithms are already capable of amazing things when given sufficiently rich and diverse environments. Can't wait to push this direction further with future versions of Genie 🚀🚀.
Very cool to see the @GoogleDeepMind Genie results: learning an action-conditional generative model purely unsupervised from video data. This is close to my heart in getting towards truly open-ended environments to train truly general agents with RL 1/
0
0
23
Probably the shortest reviews I’ve ever seen for a top tier conference… maybe we can use them as a prompt for a language model to generate more thorough reviews?? 🤔 #ICML2022.
0
1
21
Great news!! ALOE is back and in person. If you’re heading to @NeurIPSConf and interested in open-endedness, adaptive curricula or self-driven learning systems then hopefully see you there 🕺.
🌱 The 2nd Agent Learning in Open-Endedness Workshop will be held at NeurIPS 2023 (Dec 10–16) in magnificent New Orleans. ⚜️. If your research considers learning in open-ended settings, consider submitting your work (by 11:59 PM Sept. 29th, AoE).
0
1
22
As someone who finds it hard to tell what LLM benchmarks really mean, this has been a super cool project to see clear daylight between models and also clear areas where a step-change improvement is needed to solve it *cough* NetHack *cough*. Awesome work @PaglieriDavide et al!
Tired of saturated benchmarks? Want scope for a significant leap in capabilities? 🔥 Introducing BALROG: a Benchmark for Agentic LLM and VLM Reasoning On Games! BALROG is a challenging benchmark for LLM agentic capabilities, designed to stay relevant for years to come. 1/🧵
0
1
23
Was fun giving this talk! It’s a super exciting time to be working on world models and amazing to think of the potential use cases in the next few years 🫶.
We kicked off the first day of the MLx Generative AI track with Jack Parker-Holder (@jparkerholder), Research Scientist at the Google DeepMind (@GoogleDeepMind) Open-Endedness Team. 🧠🚀He presented "Generative World Models", covering key topics including reinforcement learning,
0
0
21
It turns out foundation world models are the stepping stone required for converting children's sketches into interactive experiences 🌱.
One amazing thing Genie enables: anyone, including children, can draw a world and then *step into it* and explore it!! How cool is that!?! We tried this with drawings my children made, to their delight. My child drew this, and now can fly the eagles around. Magic!🧞✨
0
2
22
Thanks to a fantastic effort from @MinqiJiang all the code from our recent work on UED is now public!! Excited to see the new ideas that come from this! 🍿.
We have open sourced our recent algorithms for Unsupervised Environment Design! These algorithms produce adaptive curricula that result in robust RL agents. This codebase includes our implementations of ACCEL, Robust PLR, and PAIRED.
0
1
20
Super excited about this, more info to follow 😀. #NeurIPS2022.
Interested in foundation models + RL? Keep an eye out for the 1st "Foundation Models for Decision Making" workshop at NeurIPS 2022: Call for submissions will soon follow. w. @du_yilun @jparkerholder @siddkaramcheti @IMordatch @shaneguML @ofirnachum
1
2
21
🧞♀️🫶.
How can we learn a foundational world model directly from Internet-scale videos without any action annotations? @YugeTen, @ashrewards and @jparkerholder from @GoogleDeepMind's Open-Endedness Team are presenting "Genie: Generative Interactive Environments" at the @UCL_DARK Seminar
1
0
19
Was great to chat to @shiringhaffary about Genie 2, and world models in general. Exciting times 🚀.
NEW: For this week's Q&AI newsletter I wrote about what "world models" are and why AI companies like Google, NVIDIA, and others are investing in them. Featuring insight from Google DeepMind's @jparkerholder and OpenAI's @billpeeb.
0
4
20
💯 and as many have pointed out, this is the worst video models are ever going to be. Super exciting to see the impact these models will have when used as world simulators with open-ended learning.
So, rather than considering video models as a poor approximation to a real simulation engine, I think it's interesting to also consider them as something more: a new kind of world simulation that is in many ways far more complete than anything we have had before. 3/3.
0
0
20
Super exciting to see improved techniques for generating synthetic data for agents! Awesome work from @JacksonMattT and team, plenty more to be done in this space 🚀🚀🚀.
🎮 Introducing the new and improved Policy-Guided Diffusion! Vastly more accurate trajectory generation than autoregressive models, with strong gains in offline RL performance! Plus a ton of new theory and results since our NeurIPS workshop paper. Check it out ⤵️
0
3
20
Looking forward to seeing all the creative ideas submitted to this workshop! Submit by September 22nd 😀.
We are open for submissions! I know there are lots of people working on large models, pretraining, cross-domain/agent generalization for RL. Please submit your papers to the 1st FMDM workshop at NeurIPS 2022!
0
1
20
This was very much a collective effort from a great group of people! @RaghuSpaceRajan @XingyouSong @AndreBiedenkapp @yingjieMiao @The_Eimer @BaoheZhang1 @nguyentienvu @RCalandra @AleksandraFaust @FrankRHutter @LindauerMarius Look forward to seeing future progress here 📈! [4/4].
0
1
17
Come to break the agents… stay to read about our new approach for unsupervised environment design 😀.
🧬 For ACCEL, we made an interactive paper to accompany the typical PDF we all know and love. "Figure 1" is a demo that lets you challenge our agents by designing your own environments! Now you can also view agents from many training runs simultaneously.
0
4
19
Super excited about this, we are only just beginning to see the potential for controllable video models!! #ICML2024.
We are pleased to announce the first *controllable video generation* workshop at @icmlconf 2024! 📽️📽️📽️. We welcome submissions that explore video generation via different modes of control (e.g. text, pose, action). Deadline: 31st May AOE.Website:
0
2
18
PSA: we are super excited to announce the workshop on Agent Learning in Open-Endedness (ALOE) at #ICLR2022! If you're interested in open-ended learning systems then check out the amazing speaker line-up and the CfP 😀.
Announcing the first Agent Learning in Open-Endedness (ALOE) Workshop at #ICLR2022! . We're calling for papers across many fields: If you work on open-ended learning, consider submitting. Paper deadline is February 25, 2022, AoE.
0
1
19
Hate to steal your thunder @pcastr …but I got 8!! I genuinely enjoy reviewing but this makes it impossible to do a good job @iclr_conf #ICLR2023.
seven #ICLR2023 papers to review in 12 days (8 business days) is too much, imho.
2
1
16
🥚Eggsclusive🥚… introducing the first workshop on Environment Generation for Generalizable robots at #RSS2023!! This workshop brings together many topics close to my heart: PCG, large offline datasets, generative modelling and much more! More info from @vbhatt_cs ⬇️⬇️⬇️.
We are excited to announce the first workshop on Environment Generation for Generalizable Robots (EGG) at #RSS2023! Consider submitting if you are working in any area relevant to environment generation for robotics. Submissions due on May 17, 2023, AoE.
0
3
17
Welcome @_samvelyan!! Great to have an amazing researcher join the team… *and* one more vote for a football match as the next team social 🤣.
Thrilled to share that I’ve joined @GoogleDeepMind as a Research Scientist in the Open-Endedness Team. Excited to continue working with my brilliant PhD advisor @_rockt and the incredible team, including @jparkerholder & @MichaelD1729 who have greatly shaped my research journey.
1
1
18
This is really great work from @_samvelyan & @PaglieriDavide … and… it’s applied to football 🫶😀 bucket list item ✅.
Uncovering vulnerabilities in multi-agent systems with the power of Open-Endedness! . Introducing MADRID: Multi-Agent Diagnostics for Robustness via Illuminated Diversity ⚽️. Paper: Site: Code: 🔜. Here's what it's all about: 🧵👇
0
4
16
This looks like a great tool for RL research!!.
Thanks to @Bam4d, we now have a MiniHack Level Editor inside a browser which allows you to easily design custom MiniHack environments using convenient drag-and-drop functionality. Check it out at
0
1
16
Our recent talk on Genie is now on YouTube 📽️ Check it out!!
We were honored to have @ashrewards, @jparkerholder and @YugeTen from @GoogleDeepMind's Open-Endedness Team presenting their foundation world model Genie at @ai_ucl. Video available on our YouTube channel.
0
6
17
Welcome @aditimavalankar! Exciting times for the Open-Endedness team 🙌.
Super stoked to be back at @DeepMind in London this time as a Research Scientist in the Open-Endedness team! I look forward to working with all my brilliant colleagues here!.
1
0
16
Exciting new approach for generating diverse co-players in cooperative games! This is a super hard problem and the solution required some flair 😀.
Access to diverse partners is crucial when training robust cooperators or evaluating ad-hoc coordination. In our top 25% #iclr2023 paper, we tackle the challenge of generating diverse cooperative policies and expose the issue of "sabotages" affecting simpler methods. A 🧵!
0
0
15
If only real footballers got up that quickly… 😅.
Soccer players have to master a range of dynamic skills, from turning and kicking to chasing a ball. How could robots do the same? ⚽. We trained our AI agents to demonstrate a range of agile behaviors using reinforcement learning. Here’s how. 🧵
1
1
15
Happy to share that we contributed to these numbers with at least 10 viewings in 2024 🎉.
Moana was the #1 streaming movie in America in 2024 as well as the last 5 years with 45B total mins 🤯. Some IPs age like fine wine. Moana stands out with:
- banger soundtrack (ty Lin-Manuel)
- hero’s journey not a love story (for girls + boys)
- we all love islands 🏝️
h/t @WSJ
2
0
16
It has been a pleasure to collaborate with @ucl_dark on so many exciting projects… come say hi at the conference!! #NeurIPS2021.
We're excited to present @UCL_DARK's work at #NeurIPS2021 and look forward to seeing you at the virtual conference! Check out all poster sessions and activities by our members below 👇
0
1
16
Learned adversaries are back 😎. After some amazing work from @ishitamed, a variant of PAIRED can now match our previous SOTA UED algorithms (ACCEL and Robust PLR). This should unlock some exciting new research directions for autocurricula and environment generation 🚀
📢 Exciting News! We're thrilled to announce our latest paper: "Stabilizing Unsupervised Environment Design with a Learned Adversary" 📚🤖 accepted at #CoLLAs 2023 @CoLLAs_Conf as an Oral presentation! 📄 Paper: 💻 Code: 1/🧵 👇
0
1
16
Thanks @jeffclune, was fun giving this talk… check it out!.
The ICML talk on Genie is now available. Great presentation by @jparkerholder and @ashrewards! Congrats to the whole team again on the Best Paper Award! Well-deserved. Video:
0
1
16
Great summary of Ridge Rider!!.
📉 GD can be biased towards finding 'easy' solutions 🐈 By following the eigenvectors of the Hessian with negative eigenvalues, Ridge Rider explores a diverse set of solutions 🎨 #mlcollage [40]. 📜: 💻: 🎬:
0
0
15
Returning home early from #ICML2024 to help my wife and daughter who are unwell. To all the folks I was hoping to chat to: don’t be a stranger, we can speak on a call soon! 🫶 If you’re in Vienna, check out the controllable video generation workshop on Saturday, it’ll be great!!
6
0
16
Didn't make it to Hawaii, but, *just* made it into the fireside chat photo. I guess this is my 15 seconds of fame 😎.
Spent time with the Google DeepMind team in London this week, including the people working on our next generation models. Great to see the exciting progress and talk to @demishassabis and the teams about the future of AI.
0
0
16
Why generate one adversarial prompt when you can instead generate them all… and then train a drastically more robust model 🌈🌈🌈. Amazing work from @_samvelyan @_andreilupu @sharathraparthy and team!!
Introducing 🌈 Rainbow Teaming, a new method for generating diverse adversarial prompts for LLMs via LLMs. It's a versatile tool 🛠️ for diagnosing model vulnerabilities across domains and creating data to enhance robustness & safety 🦺. Co-lead w/ @sharathraparthy & @_andreilupu
1
6
15
Super inspiring talk (ft 🧞), check it out!.
Video of my Oxford talk, including OMNI-EPIC. It describes how much of my work over the last two decades fits together, including Quality Diversity, Open-Ended, AI-Generating algorithms, POET, VPT, Thought Cloning, etc. Thanks to @j_foerst for hosting! 🥂.
1
0
15
If you do apply, email me at my twitter handle at google dot com. Include [student researcher] in the title. And if you want more info on what it's like to be part of the team, you can find @ckaplanis1 (and others) at #NeurIPS2024!!
0
1
15