Jack Parker-Holder

@jparkerholder

Followers: 5K
Following: 6K
Media: 39
Statuses: 927

Research Scientist @GoogleDeepMind & Honorary Lecturer @UCL_DARK interested in generating worlds from video data. Dad (👶🐶), CFC fan, BJJ. Views are my own :)

London, England
Joined October 2018
@jparkerholder
Jack Parker-Holder
2 months
Introducing 🧞Genie 2 🧞 - our most capable large-scale foundation world model, which can generate a diverse array of consistent worlds, playable for up to a minute. We believe Genie 2 could unlock the next wave of capabilities for embodied agents 🧠.
294
484
3K
@jparkerholder
Jack Parker-Holder
3 years
I'm super excited to be joining @DeepMind today as a Research Scientist, working with @_rockt! Thank you to everyone who helped make this possible! Watch this space 🌱.
35
7
437
@jparkerholder
Jack Parker-Holder
3 years
🤖 Introducing the first survey on AutoRL: methods for automatically discovering multiple components of the RL training pipeline, from tuning hyperparameters and architectures to learning algorithms or automatically designing environments. Link 👉 [1/4]
2
110
313
@jparkerholder
Jack Parker-Holder
3 years
Evolving Curricula with Regret-Based Environment Design. Website: Paper: TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below). Highlights ⬇️! [1/N]
3
47
217
@jparkerholder
Jack Parker-Holder
2 months
🚨🚨 We are hiring a student researcher to start in Q1 2025!! If you're interested in exploring the potential for world models to fuel the next wave of embodied AI, apply here by *DECEMBER 13TH*.
@jparkerholder
Jack Parker-Holder
2 months
Introducing 🧞Genie 2 🧞 - our most capable large-scale foundation world model, which can generate a diverse array of consistent worlds, playable for up to a minute. We believe Genie 2 could unlock the next wave of capabilities for embodied agents 🧠.
5
33
203
@jparkerholder
Jack Parker-Holder
11 months
When we started this project the idea of training world models *exclusively* from Internet videos seemed wild, but it turns out latent actions are the key and the bitter lesson holds. Now we have a viable path to generating the rich diversity of environments we need for AGI. 🚀.
@_rockt
Tim Rocktäschel
11 months
I am really excited to reveal what @GoogleDeepMind's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.
8
23
166
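The idea above, training a world model from video alone by inferring latent actions between consecutive frames, can be sketched in a few lines. This is purely illustrative and not the Genie implementation: the module names, sizes and simple MSE loss are assumptions, and the VQ/straight-through details needed to actually train the discrete bottleneck are omitted.

```python
# Illustrative sketch only (not the Genie implementation): a latent action model
# discovers a small discrete "action" vocabulary purely from consecutive video frames.
import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    def __init__(self, frame_dim=512, num_actions=8, hidden=256):
        super().__init__()
        # Encoder: which of `num_actions` discrete codes best explains frame_t -> frame_{t+1}?
        self.action_encoder = nn.Sequential(
            nn.Linear(2 * frame_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_actions))
        self.action_embed = nn.Embedding(num_actions, hidden)
        # Dynamics: predict the next frame from the current frame plus the latent action.
        self.dynamics = nn.Sequential(
            nn.Linear(frame_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, frame_dim))

    def forward(self, frame_t, frame_next):
        logits = self.action_encoder(torch.cat([frame_t, frame_next], dim=-1))
        action = logits.argmax(dim=-1)  # discrete bottleneck (straight-through/VQ tricks omitted)
        pred = self.dynamics(torch.cat([frame_t, self.action_embed(action)], dim=-1))
        return ((pred - frame_next) ** 2).mean(), action  # trained on video alone, no action labels
```

The point is that the "action" vocabulary is discovered from the data itself, so no action labels are ever required.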
@jparkerholder
Jack Parker-Holder
2 months
🤯🤯🤯… And just like that, we have a path to unlimited environments for training and evaluating our embodied agents! We tried creating another world with three arches, and once again Genie 2 was able to simulate the world and SIMA solved the task ✅.
2
13
149
@jparkerholder
Jack Parker-Holder
2 months
From first person real world scenes, to third person driving environments, Genie 2 generates worlds in 720p 📷. Given an image, Genie 2 simulates world dynamics, creating a consistent environment playable with keyboard and mouse inputs ⌨️.
4
8
140
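As a rough mental model of "playable from a single image": the world model behaves like an environment whose reset takes an image prompt and whose step takes a keyboard/mouse input. The sketch below is a hypothetical, stubbed-out interface only; Genie 2 is not a public API and every name here is invented.

```python
# Hypothetical interface only; Genie 2 is not public and these names are invented.
# The point: a world model can be wrapped like an environment, reset from an image prompt.
class WorldModelEnv:
    def __init__(self, world_model):
        self.world_model = world_model   # any callable: (frame, action) -> next frame
        self.frame = None

    def reset(self, image_prompt):
        self.frame = image_prompt        # the prompt image becomes the first frame
        return self.frame

    def step(self, action):
        self.frame = self.world_model(self.frame, action)
        return self.frame

# Usage with a trivial stand-in model that just returns the current frame:
env = WorldModelEnv(world_model=lambda frame, action: frame)
obs = env.reset(image_prompt=[0.0] * 16)
for key in ["W", "W", "mouse_left"]:     # keyboard and mouse inputs
    obs = env.step(key)
```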
@jparkerholder
Jack Parker-Holder
2 months
Genie 2 can also turbocharge environment design for humans, making it possible to step in and play from concept art 🎨, such as the beautiful work below from one of our rockstar designers.
5
12
139
@jparkerholder
Jack Parker-Holder
3 years
I always love hearing from former ML PhD students about the days before tensorflow/pytorch. maybe in a few years we will tell current PhD students about the time before free MuJoCo 🙌.
@GoogleDeepMind
Google DeepMind
3 years
We’ve acquired the MuJoCo physics simulator and are making it free for all, to support research everywhere. MuJoCo is a fast, powerful, easy-to-use, and soon to be open-source simulation tool, designed for robotics research:
4
11
133
@jparkerholder
Jack Parker-Holder
2 months
To illustrate the potential of this for embodied agents, consider the world below, generated using Imagen 3. The SIMA team tested whether their latest agent could follow language instructions, such as going to the red or blue door 🚪.
2
12
131
@jparkerholder
Jack Parker-Holder
2 years
Interested in scaling open-ended learning systems? Check out the new RE posting in our team 🚀. Feel free to DM with any questions!
0
17
102
@jparkerholder
Jack Parker-Holder
2 years
Feel very fortunate to have contributed to this as my first project @DeepMind! It is amazing to see what can be done when combining Transformer models with meta-RL and PLR in a vast, open-ended task space!.
@FeryalMP
Feryal
2 years
I’m super excited to share our work on AdA: An Adaptive Agent capable of hypothesis-driven exploration which solves challenging unseen tasks with just a handful of experience, at a similar timescale to humans. See the thread for more details 👇 [1/N]
3
8
100
@jparkerholder
Jack Parker-Holder
2 months
Finally, this would not have been possible without the amazing diversity of incredible collaborative people at Google DeepMind 🫶🫶🫶. Shout out to the team that made this possible, from the Genie 2 team, the Generalist Agents team and SIMA. Exciting times ahead!!.
6
1
96
@jparkerholder
Jack Parker-Holder
4 years
The case for offline RL is clear: we often have access to real world data in settings where it is expensive (and potentially even dangerous) to collect new experience. But what happens if this offline data doesn’t perfectly match the test environment? [1/8].
1
14
81
@jparkerholder
Jack Parker-Holder
2 months
Amazing to see such fast progress in video generation, congrats to the Veo 2 team!!.
@GoogleDeepMind
Google DeepMind
2 months
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥. We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through
2
2
73
@jparkerholder
Jack Parker-Holder
7 months
Excited to announce that Genie will be presented as an Oral at @icmlconf #ICML2024, see you all in Vienna!!.
@_rockt
Tim Rocktäschel
11 months
I am really excited to reveal what @GoogleDeepMind's Open Endedness Team has been up to 🚀. We introduce Genie 🧞, a foundation world model trained exclusively from Internet videos that can generate an endless variety of action-controllable 2D worlds given image prompts.
5
7
71
@jparkerholder
Jack Parker-Holder
4 years
First day @facebookai working with @_rockt and @egrefen . should be a great summer! 😀.
5
3
72
@jparkerholder
Jack Parker-Holder
3 years
Super exciting time to work on population-based methods! We already have fast data collection, now this paper shows vectorizing agent updates can lead to huge speedups (on a GPU): Looking forward to discussing with the authors (@instadeepai) at #ICML2022😀.
3
7
60
@jparkerholder
Jack Parker-Holder
1 month
In 2024 we saw huge advances in video + world models. For me the most exciting result was the SIMA agent interacting in worlds generated by Genie 2 + Imagen 3 and solving novel tasks. Now we have some key ingredients for an open-ended system… and it’s the worst it’ll ever be 😀.
@jparkerholder
Jack Parker-Holder
2 months
🤯🤯🤯… And just like that, we have a path to unlimited environments for training and evaluating our embodied agents! We tried creating another world with three arches, and once again Genie 2 was able to simulate the world and SIMA solved the task ✅.
3
8
61
@jparkerholder
Jack Parker-Holder
4 years
Population Based Training (PBT) has been shown to be successful in a variety of RL settings, but often requires vast computational resources 💰. To address this, last year we introduced Population Based Bandits (PB2 [1/N].
2
15
58
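For context on the tweet above, here is a minimal sketch of the standard PBT exploit/explore step, the part whose compute appetite PB2 targets by replacing the random perturbation with a bandit model. Purely illustrative: population size, truncation fraction and perturbation factors are arbitrary choices here, not the paper's.

```python
# A minimal sketch of the Population Based Training loop the tweet refers to
# (simplified; PB2 replaces the random `explore` perturbation with a GP-bandit
# model so that small populations can still tune hyperparameters well).
import random

def pbt_step(population):
    """population: list of dicts with 'hparams' (dict) and 'score' (float)."""
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    k = len(ranked) // 4 or 1
    top, bottom = ranked[:k], ranked[-k:]
    for weak in bottom:
        strong = random.choice(top)
        weak["hparams"] = dict(strong["hparams"])            # exploit: copy a good member
        for name, value in weak["hparams"].items():          # explore: perturb hyperparameters
            weak["hparams"][name] = value * random.choice([0.8, 1.2])
    return population

pop = [{"hparams": {"lr": 3e-4, "entropy_cost": 0.01}, "score": random.random()} for _ in range(8)]
pop = pbt_step(pop)
```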
@jparkerholder
Jack Parker-Holder
1 month
I think industry labs will always want to recruit people with new scalable ideas that show a possible path to the next big leap. To me, the issue is students being falsely encouraged to iterate on toy benchmarks with increasingly complex methods, for the sake of "papers".
@kchonyc
Kyunghyun Cho
1 month
feeling a bit under the weather this week … thus an increased level of activity on social media and blog:
2
4
59
@jparkerholder
Jack Parker-Holder
1 year
Working closely with many amazing members of @UCL_DARK (and @robertarail) over the past few years has been a privilege and I am *also* super excited to make this official!! 😎🚀.
@UCL_DARK
UCL DARK
1 year
We are super excited to announce that Dr Roberta Raileanu (@robertarail) and Dr Jack Parker-Holder (@jparkerholder) have joined @UCL_DARK as Honorary Lecturers! Both have done impressive work in Reinforcement Learning and Open-Endedness, and our lab is lucky to get their support.
4
5
56
@jparkerholder
Jack Parker-Holder
6 months
A perfect way to cap what has been the highlight of my career, so far…. 🫶🧞🧞🧞.
@ashrewards
Ashley Edwards
6 months
Yayy congrats to the Genie team for receiving best paper award at @icmlconf!! 🎉🧞‍♂️
4
2
54
@jparkerholder
Jack Parker-Holder
3 years
Heading to Baltimore for #ICML2022 ✈️ Will be presenting ACCEL on Thursday and would love to chat about unsupervised environment design and open-endedness with many of you there! DM if you're around and want to catch up 😀.
@jparkerholder
Jack Parker-Holder
3 years
Evolving Curricula with Regret-Based Environment Design. Website: Paper: TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below). Highlights ⬇️! [1/N]
1
3
52
@jparkerholder
Jack Parker-Holder
2 years
Heading to @NeurIPSConf tomorrow, would be great to chat about open-endedness, RL, world models or England’s chances at the world cup 😀 DMs open! #NeurIPS2022.
4
4
51
@jparkerholder
Jack Parker-Holder
1 year
If you're thinking of applying for PhDs, interested in open-endedness/foundation models and don't mind rainy weather 🇬🇧, then consider applying to @UCL_DARK! My DMs are open and I'll be in New Orleans for NeurIPS so please get in touch if this sounds like you! 😀.
@UCL_DARK
UCL DARK
1 year
We (@_rockt, @egrefen, @robertarail, and @jparkerholder) are looking for PhD students to join us in Fall 2024. If you are interested in Open-Endedness, RL & Foundation Models, then apply here: and also write us at ucl-dark-admissions@googlegroups.com.
3
8
43
@jparkerholder
Jack Parker-Holder
1 year
I’ll be ✈️ to #NeurIPS2023 on Monday and hoping to discuss:
- open-endedness and why it matters for AGI #iykyk
- world models
- why it’s never been a better time to do a PhD in ML (especially @UCL_DARK 😉)!
Find me at two posters + @aloeworkshop + hanging around the GDM booth 🤪
@_rockt
Tim Rocktäschel
1 year
Everyone from @GoogleDeepMind's Open-Endedness Team and almost the entire @UCL_DARK Lab are going to be at @NeurIPSConf 2023 next week. You will find most of us at the @ALOEworkshop on Friday. Come and say hi!.
0
3
41
@jparkerholder
Jack Parker-Holder
7 months
Super excited to be heading to #ICML2024 next week!!! We will be presenting Genie as an Oral on Tuesday morning, and a poster straight after. If you can't make it, we will be doing a demo at the GDM booth on Wednesday at 10am 😀.
@GoogleDeepMind
Google DeepMind
7 months
Heading to @icmlconf next week? 🇦🇹. Our teams are presenting over 8️⃣0️⃣ papers and hosting live research demos on AI assistants for coding, football tactics, and more. We’ll also be showcasing Gemini Nano: our most efficient model for on-device tasks. →
2
3
41
@jparkerholder
Jack Parker-Holder
1 year
Not sure who needs to hear this, but, effectively filtering large and noisy datasets is a gift that keeps on giving!! 🎁 Often more impactful than fancy new model architectures 😅 We found this same thing in RL with autocurricula (e.g. PLR, ACCEL), and I'd bet it works elsewhere.
@evgenia_rusak
Evgenia Rusak
1 year
In our new paper (oral ICCV23), we develop a concept-specific pruning criterion (Density-Based-Pruning) which reduces the training cost by 72%. Joint work with @amrokamal1997 @kushal_tirumala @wielandbr @kamalikac @arimorcos (1/5).
1
6
38
@jparkerholder
Jack Parker-Holder
11 months
Going for action-free training is a total game changer and it helps to do it with someone who has been thinking about this for years, who happens to also be one of the nicest people ever.
@ashrewards
Ashley Edwards
11 months
This was such a fun and rewarding project to work on. Amazing job by the team! The most exciting thing for me is that we were able to achieve this without using a single doggone action label, which believe me, was not easy!.
1
2
35
@jparkerholder
Jack Parker-Holder
1 year
Super cool work showing QD algorithms at scale 🚀 Congrats to the team!! May be of interest @CULLYAntoine @tehqin17 @jeffclune @MinqiJiang @_samvelyan.
@TZahavy
Tom Zahavy
1 year
I'm super excited to share AlphaZeroᵈᵇ, a team of diverse #AlphaZero agents that collaborate to solve #Chess puzzles and demonstrate increased creativity. Check out our paper to learn more! A quick 🧵 (1/n)
1
6
34
@jparkerholder
Jack Parker-Holder
8 months
Super excited to be in Seattle for @CVPR! Will be presenting Genie as a keynote tomorrow at the AI for Content Creation Workshop at 3:15pm and on Tues at @cveu_workshop in the afternoon session. If you can’t make it, fear not! I’ll also be at the Google booth Thurs 1:30pm! 🧞🧞.
0
4
35
@jparkerholder
Jack Parker-Holder
2 years
Already the second day of the year and no huge breakthroughs in AI… what’s going on?.
3
2
33
@jparkerholder
Jack Parker-Holder
2 months
Super excited for this, see you in Singapore :D.
@Mengyue_Yang_
Mengyue Yang
2 months
🎤 World Models Workshop at #ICLR2025! 🌍✨. We invited the foundation World Models #Google #Genie key contributors, Open-Endedness Team Lead Tim Rocktäschel @_rockt and Jack Parker-Holder @jparkerholder to share their groundbreaking work with us! 🙌. 🔥 Keynote Highlights:. 🤖
0
1
34
@jparkerholder
Jack Parker-Holder
4 years
For anyone interested in finding diverse solutions for exploration or generalization, this is worth checking out! Was awesome to work on this project and I'm excited to see where the next ridges take us!! 🚀.
@j_foerst
Jakob Foerster
4 years
The gradient is a locally greedy direction. Where do you get if you follow the eigenvectors of the Hessian instead? Our new paper, “Ridge Rider”, explores how to do this and what happens in a variety of (toy) problems (if you dare to do so). Thread 1/N
1
4
31
@jparkerholder
Jack Parker-Holder
3 years
👋 @MinqiJiang and I will be presenting ACCEL today @icmlconf, come by!
Talk: Room 327 at 14:35 ET
Poster: Hall E #919
Hopefully see you there 😀 #ICML2022
@jparkerholder
Jack Parker-Holder
3 years
Evolving Curricula with Regret-Based Environment Design. Website: Paper: TL;DR: We introduce a new open-ended RL algorithm that produces complex levels and a robust agent that can solve them (e.g. below). Highlights ⬇️! [1/N]
0
9
31
@jparkerholder
Jack Parker-Holder
2 years
The Open-Endedness team is growing 🌱 come and join us!! Exciting times 😀.
@_rockt
Tim Rocktäschel
2 years
In addition to a Research Engineer, we are also looking for a Research Scientist 🧑‍🔬 to join @DeepMind's Open-Endedness Team! . If you are excited about the intersection of open-ended, self-improving, generalist AI and foundation models, please apply 👇.
1
2
29
@jparkerholder
Jack Parker-Holder
5 months
We often refer to Genie as an action controllable video model, but it is arguably more like an image generation model with memory. Super cool to see a similar approach go back to the original World Models inspo from @hardmaru and model Doom 😎 Congrats @shlomifruchter and team!.
@_akhaliq
AK
5 months
Google presents Diffusion Models Are Real-Time Game Engines. discuss: We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality.
0
1
29
@jparkerholder
Jack Parker-Holder
10 months
Looking forward to discussing Genie tomorrow!! 🧞‍♀️🧞🧞‍♂️.
@Saptarashmi
Saptarashmi Bandyopadhyay ✈️ NeurIPS 2024
10 months
📢 Happy to share our UMD MARL talk @ Apr 16, 12:00 pm ET 📢
by @GoogleDeepMind Research Scientist @jparkerholder on "Generative Interactive Environments (GENIE)"
in-person: IRB-5137, virtually:
@johnpdickerson @umdcs @umiacs @ml_umd #RL #AI #ML
0
4
29
@jparkerholder
Jack Parker-Holder
3 years
With Bayesian Generational PBT we can update *both* architectures and >10 hyperparameters on the fly in a single run 😮 even better it’s fast with parallel simulators ⚡️… great time to work in this area!!.
@wanxingchen_
Xingchen Wan
3 years
(1/7) Population Based Training (PBT) has been shown to be highly effective for tuning hyperparameters (HPs) for deep RL. Now with the advent of massively parallel simulators, there has never been a better time to use these methods! However, PBT has a couple of key problems….
0
2
27
@jparkerholder
Jack Parker-Holder
1 year
I think the most exciting thing about the current research paradigm is a shift in focus from *solutions* -> *stepping stones*. Every time a new LLM or VLM comes out it immediately enables new capabilities in a variety of unexpected downstream areas. What a time to be alive 🌱.
0
0
28
@jparkerholder
Jack Parker-Holder
3 years
By curating *randomly generated* environments we can produce a curriculum that makes it possible for a student agent to transfer zero-shot to challenging human designed ones, including Formula One tracks 🏎️. maybe one day F1 teams will use PLR? 😀 come check it out @NeurIPSConf.
@MinqiJiang
Minqi Jiang
3 years
🏎️ Replay-Guided Adversarial Environment Design. Prioritized Level Replay (PLR) is secretly a form of unsupervised environment design. This leads to new theory improving PLR + impressive zero-shot transfer, like driving the Nürburgring Grand Prix. paper:
1
5
27
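A minimal sketch of the curation idea in the tweet above (not the paper's code): keep a buffer of randomly generated levels, score each with some learning-potential proxy (e.g. an estimated-regret score), and mostly replay the highest-scoring ones while occasionally trying a fresh random level. The replay probability, capacity and scoring function are placeholder assumptions.

```python
# Illustrative PLR-style curation sketch: replay high-learning-potential levels,
# occasionally inject new random ones, evict the least useful when the buffer is full.
import random

def plr_choose_level(buffer, generate_level, score_fn, p_replay=0.8, capacity=128):
    """buffer: list of (level, score) pairs. Returns the level to train on next."""
    if buffer and random.random() < p_replay:
        # Replay: sample an existing level with probability proportional to its score.
        total = sum(score for _, score in buffer)
        r, acc = random.uniform(0, total), 0.0
        for level, score in buffer:
            acc += score
            if acc >= r:
                return level
    # Otherwise generate a brand-new random level, score it, and add it to the buffer.
    level = generate_level()
    buffer.append((level, score_fn(level)))
    if len(buffer) > capacity:
        buffer.sort(key=lambda item: item[1])
        buffer.pop(0)          # evict the level with the lowest learning potential
    return level
```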
@jparkerholder
Jack Parker-Holder
2 years
Was super fun chatting with @kanjun and @joshalbrecht, hopefully I said something useful in there somewhere! Also interesting to see how much has changed since we spoke in August (both in the field and for @genintelligent 🚀) what a time to be an AI researcher!!😀.
@kanjun
Kanjun 🐙🏡
2 years
Had a really fun convo with @jparkerholder about co-evolving RL agents & environments, alternatives & blockers to population-based training, and why we aren't thinking properly about data efficiency in RL. We also discussed how Jack managed so many papers during his PhD 💪!.
0
5
27
@jparkerholder
Jack Parker-Holder
3 years
We introduce ACCEL, a new algorithm that extends replay-based Unsupervised Environment Design (UED) by including an *editor*. The editor makes small changes to previously useful levels, which compound over time to produce complex structures. [2/N]
@MinqiJiang
Minqi Jiang
3 years
🏎️ Replay-Guided Adversarial Environment Design. Prioritized Level Replay (PLR) is secretly a form of unsupervised environment design. This leads to new theory improving PLR + impressive zero-shot transfer, like driving the Nürburgring Grand Prix. paper:
2
8
27
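The editing step described above can be illustrated with a toy grid level. This is a sketch under invented assumptions (a binary wall grid and a single-tile mutation), not the ACCEL codebase: take a level that already showed high learning potential and flip a couple of tiles, so complexity compounds across generations.

```python
# Illustrative sketch of an ACCEL-style editor: lightly mutate a previously useful level.
import random

def edit_level(level, n_edits=1):
    """level: 2D list of 0 (empty) / 1 (wall). Returns a lightly mutated copy."""
    new_level = [row[:] for row in level]
    h, w = len(new_level), len(new_level[0])
    for _ in range(n_edits):
        r, c = random.randrange(h), random.randrange(w)
        new_level[r][c] = 1 - new_level[r][c]   # flip one tile: add or remove a wall
    return new_level

# Usage: pick a high-regret level from the replay buffer, edit it, and keep the
# edited child only if it also shows high learning potential.
parent = [[0] * 5 for _ in range(5)]
child = edit_level(parent, n_edits=2)
```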
@jparkerholder
Jack Parker-Holder
5 months
Open source world models + real world data to train them with. what a time to be alive 🚀.
@ericjang11
Eric Jang
5 months
@1x_tech @JackMonas For people interested in getting started with world models, we've also open-sourced a PyTorch implementation of GENIE here
1
5
28
@jparkerholder
Jack Parker-Holder
2 years
Check out our #NeurIPS2022 paper showing we can train more general world models by collecting data with a diverse population of agents! Great work by @YingchenX and team!! Come chat to us in New Orleans 😀.
@YingchenX
Yingchen Xu
2 years
Interested in learning general world models at scale? 🌍 Check out our new #NeurIPS2022 paper to find out! .Paper: Website: [1/N]
0
10
27
@jparkerholder
Jack Parker-Holder
2 years
Join us!! 😀.
@_rockt
Tim Rocktäschel
2 years
We are hiring for @DeepMind’s Open-Endedness team. If you have expertise in topics such as RL, evolutionary computation, PCG, quality diversity, novelty search, generative modelling, world models, intrinsic motivation etc., then please consider applying!.
1
3
27
@jparkerholder
Jack Parker-Holder
11 months
As we see with Genie - foundation world models trained from videos offer the potential for generating the environments we need for AGI 🎮. New paper by @mengjiao_yang laying out all the possibilities in the space, exciting times 🚀.
@_akhaliq
AK
11 months
Video as the New Language for Real-World Decision Making. Both text and video data are abundant on the internet and support large-scale self-supervised learning through next token or frame prediction. However, they have not been equally leveraged: language models have had
1
2
27
@jparkerholder
Jack Parker-Holder
10 months
This is also what we see with Genie, predicting the future is sufficient to learn parallax and consistent latent actions.
@NandoDF
Nando de Freitas
10 months
Predicting the next word "only" is sufficient for language models to learn a large body of knowledge that enables them to code, answer questions, understand many topics, chat, and so on. This is clear to many researchers now, and there are nice tutorials on why this works by
1
4
27
@jparkerholder
Jack Parker-Holder
8 months
Working on this paper has been super helpful for me to have a concrete definition of open-endedness, kudos to @edwardfhughes and @MichaelD1729 who did an amazing job! This field seems to get even more exciting every week as we build on ever more capable foundation models 🚀.
@edwardfhughes
Edward Hughes
8 months
📣 New paper! Open-Endedness is Essential for Artificial Superhuman Intelligence 🚀 (co-lead @MichaelD1729). 🔎 We propose a simple, formal definition for Open-Endedness to inspire rapid and safe progress towards ASI via Foundation Models. 📖 🧵[1/N]
0
5
27
@jparkerholder
Jack Parker-Holder
4 years
PSA: you can use linear models in deep RL papers and still get accepted at #ICML2021!! Congrats to @philipjohnball and @cong_ml . now let’s try and beat ViT with ridge regression :).
@jparkerholder
Jack Parker-Holder
4 years
The case for offline RL is clear: we often have access to real world data in settings where it is expensive (and potentially even dangerous) to collect new experience. But what happens if this offline data doesn’t perfectly match the test environment? [1/8].
0
2
25
@jparkerholder
Jack Parker-Holder
9 months
Genie + @UCL_DARK = 🫶🚀.
@UCL_DARK
UCL DARK
9 months
We're excited to announce that the Genie Team from @GoogleDeepMind will be our next invited speakers!
Title: Genie: Generative Interactive Environments
Speakers: @ashrewards, @jparkerholder, @YugeTen
Sign up: 📌 90 High Holborn 📅 Tue 30 Apr, 17:00
1
2
24
@jparkerholder
Jack Parker-Holder
2 years
We can now scale UED to competitive multi-agent RL!! This plot is my favorite, showing that the agent-level dependence clearly matters 🤹‍♂️ come check out the paper at #ICLR2023.
@_samvelyan
Mikayel Samvelyan
2 years
A key insight for multi-agent settings is that, from the perspective of the teacher, maximising the student’s regret over co-players independently of the environment (and vice versa) doesn’t guarantee maximising regret in the joint space of co-players and environments.
0
3
24
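The quoted point, that maximising regret over co-players and environments independently need not maximise it over the joint space, can be written as a simple inequality. The notation below is invented for this note, not taken from the paper:

```latex
% For any fixed environment \bar{e}, maximising over co-players alone only lower-bounds
% the joint maximum; equality holds only if \bar{e} is part of a jointly maximising pair.
\[
\max_{c \in \mathcal{C}} \operatorname{Regret}(\pi, c, \bar{e})
\;\le\;
\max_{c \in \mathcal{C},\, e \in \mathcal{E}} \operatorname{Regret}(\pi, c, e),
\qquad \bar{e} \in \mathcal{E} \text{ fixed.}
\]
```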
@jparkerholder
Jack Parker-Holder
11 months
Thank you @maxjaderberg!! XLand was super inspiring for us, it showed that our current RL algorithms are already capable of amazing things when given sufficiently rich and diverse environments. Can't wait to push this direction further with future versions of Genie 🚀🚀.
@maxjaderberg
Max Jaderberg
11 months
Very cool to see the @GoogleDeepMind Genie results: learning an action-conditional generative model purely unsupervised from video data. This is close to my heart in getting towards truly open-ended environments to train truly general agents with RL 1/
0
0
23
@jparkerholder
Jack Parker-Holder
3 years
Probably the shortest reviews I’ve ever seen for a top tier conference… maybe we can use them as a prompt for a language model to generate more thorough reviews?? 🤔 #ICML2022.
0
1
21
@jparkerholder
Jack Parker-Holder
2 years
Great news!! ALOE is back and in person. If you’re heading to @NeurIPSConf and interested in open-endedness, adaptive curricula or self-driven learning systems then hopefully see you there 🕺.
@aloeworkshop
ALOE Workshop
2 years
🌱 The 2nd Agent Learning in Open-Endedness Workshop will be held at NeurIPS 2023 (Dec 10–16) in magnificent New Orleans. ⚜️. If your research considers learning in open-ended settings, consider submitting your work (by 11:59 PM Sept. 29th, AoE).
0
1
22
@jparkerholder
Jack Parker-Holder
2 months
As someone who finds it hard to tell what LLM benchmarks really mean, this has been a super cool project to see clear daylight between models and also clear areas where a step change improvement is needed to solve it *cough* Nethack *cough*. awesome work @PaglieriDavide et al!.
@PaglieriDavide
Davide Paglieri
2 months
Tired of saturated benchmarks? Want scope for a significant leap in capabilities?. 🔥 Introducing BALROG: a Benchmark for Agentic LLM and VLM Reasoning On Games!. BALROG is a challenging benchmark for LLM agentic capabilities, designed to stay relevant for years to come. 1/🧵
0
1
23
@jparkerholder
Jack Parker-Holder
5 months
Was fun giving this talk! It’s a super exciting time to be working on world models and amazing to think of the potential use cases in the next few years 🫶.
@GlobalGoalsAI
AI for Global Goals
5 months
We kicked off the first day of the MLx Generative AI track with Jack Parker-Holder (@jparkerholder), Research Scientist at the Google DeepMind (@GoogleDeepMind) Open-Endedness Team. 🧠🚀He presented "Generative World Models", covering key topics including reinforcement learning,
0
0
21
@jparkerholder
Jack Parker-Holder
11 months
It turns out foundation world models are the stepping stone required for converting children's sketches into interactive experiences 🌱.
@jeffclune
Jeff Clune
11 months
One amazing thing Genie enables: anyone, including children, can draw a world and then *step into it* and explore it!! How cool is that!?! We tried this with drawings my children made, to their delight. My child drew this, and now can fly the eagles around. Magic!🧞✨
0
2
22
@jparkerholder
Jack Parker-Holder
1 year
Loved this part of the documentary and so glad it has become a meme. also totally true 😅.
@PhD_Genie
PhD_Genie
1 year
The academic way.
0
1
21
@jparkerholder
Jack Parker-Holder
5 months
Having played with Genie on the RT1 data, it seems obvious that world models could unlock huge new capabilities in robotics, both for training (e.g. with autocurricula) and evaluation (e.g. probe tasks). Super excited to see this pushed to new levels, congrats!!.
@1x_tech
1X
5 months
hello, world model.
0
0
21
@jparkerholder
Jack Parker-Holder
3 years
Thanks to a fantastic effort from @MinqiJiang all the code from our recent work on UED is now public!! Excited to see the new ideas that come from this! 🍿.
@MinqiJiang
Minqi Jiang
3 years
We have open sourced our recent algorithms for Unsupervised Environment Design! These algorithms produce adaptive curricula that result in robust RL agents. This codebase includes our implementations of ACCEL, Robust PLR, and PAIRED.
0
1
20
@jparkerholder
Jack Parker-Holder
6 months
🧞🧞‍♀️🫶.
@NandoDF
Nando de Freitas
6 months
Generating interactive environments 🧞‍♂️🧞‍♀️🧞
0
2
21
@jparkerholder
Jack Parker-Holder
3 years
Super excited about this, more info to follow 😀. #NeurIPS2022.
@sherryyangML
Sherry Yang
3 years
Interested in foundation models + RL? Keep an eye out for the 1st "Foundation Models for Decision Making" workshop at NeurIPS 2022: Call for submissions will soon follow. w. @du_yilun @jparkerholder @siddkaramcheti @IMordatch @shaneguML @ofirnachum
1
2
21
@jparkerholder
Jack Parker-Holder
9 months
🧞‍♀️🫶.
@_rockt
Tim Rocktäschel
9 months
How can we learn a foundational world model directly from Internet-scale videos without any action annotations? @YugeTen, @ashrewards and @jparkerholder from @GoogleDeepMind's Open-Endedness Team are presenting "Genie: Generative Interactive Environments" at the @UCL_DARK Seminar
1
0
19
@jparkerholder
Jack Parker-Holder
23 days
Was great to chat to @shiringhaffary about Genie 2, and world models in general. Exciting times 🚀.
@shiringhaffary
Shirin Ghaffary
23 days
NEW: For this week's Q&AI newsletter I wrote about what "world models" are and why AI companies like Google, NVIDIA, others are investing in them. Featuring insight from Google DeepMind's @jparkerholder and OpenAI's @billpeeb .
0
4
20
@jparkerholder
Jack Parker-Holder
11 months
💯 and as many have pointed out, this is the worst video models are ever going to be. Super exciting to see the impact these models will have when used as world simulators with open-ended learning.
@phillip_isola
Phillip Isola
11 months
So, rather than considering video models as a poor approximation to a real simulation engine, I think it's interesting to also consider them as something more: a new kind of world simulation that is in many ways far more complete than anything we have had before. 3/3.
0
0
20
@jparkerholder
Jack Parker-Holder
10 months
Super exciting to see improved techniques for generating synthetic data for agents! Awesome work from @JacksonMattT and team, plenty more to be done in this space 🚀🚀🚀.
@JacksonMattT
Matthew Jackson
10 months
🎮 Introducing the new and improved Policy-Guided Diffusion!. Vastly more accurate trajectory generation than autoregressive models, with strong gains in offline RL performance!. Plus a ton of new theory and results since our NeurIPS workshop paper. Check it out ⤵️
0
3
20
@jparkerholder
Jack Parker-Holder
3 years
Looking forward to seeing all the creative ideas submitted to this workshop! Submit by September 22nd 😀.
@ofirnachum
Ofir Nachum
3 years
We are open for submissions! I know there are lots of people working on large models, pretraining, cross-domain/agent generalization for RL. Please submit your papers to the 1st FMDM workshop at NeurIPS 2022!
0
1
20
@jparkerholder
Jack Parker-Holder
3 years
This was very much a collective effort from a great group of people! @RaghuSpaceRajan @XingyouSong @AndreBiedenkapp @yingjieMiao @The_Eimer @BaoheZhang1 @nguyentienvu @RCalandra @AleksandraFaust @FrankRHutter @LindauerMarius Look forward to seeing future progress here 📈! [4/4].
0
1
17
@jparkerholder
Jack Parker-Holder
3 years
Come to break the agents… stay to read about our new approach for unsupervised environment design 😀.
@MinqiJiang
Minqi Jiang
3 years
🧬 For ACCEL, we made an interactive paper to accompany the typical PDF we all know and love. "Figure 1" is a demo that lets you challenge our agents by designing your own environments! Now you can also view agents from many training runs simultaneously.
0
4
19
@jparkerholder
Jack Parker-Holder
5 years
@hardmaru Lol at everyone trying to explain monetary policy to a former rates trader.
1
0
20
@jparkerholder
Jack Parker-Holder
10 months
Super excited about this, we are only just beginning to see the potential for controllable video models!! #ICML2024.
@cvgworkshop
Controllable Video Generation Workshop @ ICML2024
10 months
We are pleased to announce the first *controllable video generation* workshop at @icmlconf 2024! 📽️📽️📽️ We welcome submissions that explore video generation via different modes of control (e.g. text, pose, action). Deadline: 31st May AOE. Website:
0
2
18
@jparkerholder
Jack Parker-Holder
3 years
PSA: we are super excited to announce the workshop on Agent Learning in Open-Endedness (ALOE) at #ICLR2022! If you're interested in open-ended learning systems then check out the amazing speaker line-up and the CfP 😀.
@aloeworkshop
ALOE Workshop
3 years
Announcing the first Agent Learning in Open-Endedness (ALOE) Workshop at #ICLR2022! . We're calling for papers across many fields: If you work on open-ended learning, consider submitting. Paper deadline is February 25, 2022, AoE.
0
1
19
@jparkerholder
Jack Parker-Holder
8 months
An occasion that lived up to the name @FLAIR_Ox.
@jeffclune
Jeff Clune
8 months
It was magical to return to Oxford to give a talk, seeing old friends and making new ones. That's especially true because Jakob @j_foerst is a great host that really knows how to roll out the red carpet!
1
0
18
@jparkerholder
Jack Parker-Holder
2 years
Hate to steal your thunder @pcastr …but I got 8!! I genuinely enjoy reviewing but this makes it impossible to do a good job @iclr_conf #ICLR2023.
@pcastr
Pablo Samuel Castro
2 years
seven #ICLR2023 papers to review in 12 days (8 business days) is too much, imho.
2
1
16
@jparkerholder
Jack Parker-Holder
2 years
🥚Eggsclusive🥚… introducing the first workshop on Environment Generation for Generalizable robots at #RSS2023!! This workshop brings together many topics close to my heart: PCG, large offline datasets, generative modelling and much more! More info from @vbhatt_cs ⬇️⬇️⬇️.
@vbhatt_cs
Varun Bhatt
2 years
We are excited to announce the first workshop on Environment Generation for Generalizable Robots (EGG) at #RSS2023! Consider submitting if you are working in any area relevant to environment generation for robotics. Submissions due on May 17, 2023, AoE.
0
3
17
@jparkerholder
Jack Parker-Holder
2 months
Typical understated comment from Phil, one of the driving forces of Genie 2 🚀🚀.
@philipjohnball
Philip J. Ball
2 months
This was fun to work on 😃.
1
0
18
@jparkerholder
Jack Parker-Holder
4 months
Welcome @_samvelyan!! Great to have an amazing researcher join the team… *and* one more vote for a football match as the next team social 🤣.
@_samvelyan
Mikayel Samvelyan
4 months
Thrilled to share that I’ve joined @GoogleDeepMind as a Research Scientist in the Open-Endedness Team. Excited to continue working with my brilliant PhD advisor @_rockt and the incredible team, including @jparkerholder & @MichaelD1729 who have greatly shaped my research journey.
1
1
18
@jparkerholder
Jack Parker-Holder
1 year
This is really great work from @_samvelyan & @PaglieriDavide … and… it’s applied to football 🫶😀 bucket list item ✅.
@_samvelyan
Mikayel Samvelyan
1 year
Uncovering vulnerabilities in multi-agent systems with the power of Open-Endedness! . Introducing MADRID: Multi-Agent Diagnostics for Robustness via Illuminated Diversity ⚽️. Paper: Site: Code: 🔜. Here's what it's all about: 🧵👇
0
4
16
@jparkerholder
Jack Parker-Holder
3 years
This looks like a great tool for RL research!!.
@_samvelyan
Mikayel Samvelyan
3 years
Thanks to @Bam4d, we now have a MiniHack Level Editor inside a browser which allows to easily design custom MiniHack environments using a convenient drag-and-drop functionality. Check it out at
0
1
16
@jparkerholder
Jack Parker-Holder
9 months
Our recent talk on Genie is now on YouTube 📽️ check it out!!.
@UCL_DARK
UCL DARK
9 months
We were honored to have @ashrewards, @jparkerholder and @YugeTen from @GoogleDeepMind's Open-Endedness Team presenting their foundation world model Genie at @ai_ucl. Video available on our YouTube channel:
0
6
17
@jparkerholder
Jack Parker-Holder
6 months
Come along to the poster to find out more, Hall C #614 #ICML2024
@jparkerholder
Jack Parker-Holder
6 months
A perfect way to cap what has been the highlight of my career, so far…. 🫶🧞🧞🧞.
1
0
16
@jparkerholder
Jack Parker-Holder
3 years
Despite starting simple, levels in the replay buffer quickly become complex. Not only that, but ACCEL agents are capable of transfer to challenging human designed out-of-distribution environments, outperforming several strong baselines! [3/N]
1
2
15
@jparkerholder
Jack Parker-Holder
2 years
Welcome @aditimavalankar! Exciting times for the Open-Endedness team 🙌.
@aditimavalankar
Aditi Mavalankar
2 years
Super stoked to be back at @DeepMind in London this time as a Research Scientist in the Open-Endedness team! I look forward to working with all my brilliant colleagues here!.
1
0
16
@jparkerholder
Jack Parker-Holder
2 years
Exciting new approach for generating diverse co-players in cooperative games! This is a super hard problem and the solution required some flair 😀.
@_andreilupu
Andrei Lupu
2 years
Access to diverse partners is crucial when training robust cooperators or evaluating ad-hoc coordination. In our top 25% #iclr2023 paper, we tackle the challenge of generating diverse cooperative policies and expose the issue of "sabotages" affecting simpler methods. A 🧵!
0
0
15
@jparkerholder
Jack Parker-Holder
11 months
It has been a dream to work on Genie with such fantastic people, I’ve learned so much from all of them. We've also had a lot of fun, for example, using our model trained on platformers to convert random pictures of our pets into playable worlds 🤯🐶
0
2
16
@jparkerholder
Jack Parker-Holder
12 days
If I had $1 for every time I wanted to say this :)
@_rockt
Tim Rocktäschel
13 days
Easy to remember:
Video model: f: S ➔ S
World model: f: S × A ➔ S
1
1
16
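The signatures quoted above translate directly into types. A tiny sketch (illustrative names only) of why the extra action argument is what makes a model playable:

```python
# Tiny typed sketch of the distinction above: a video model rolls frames forward
# unconditionally, a world model is action-conditioned and can therefore be driven.
from typing import Callable, TypeVar

S = TypeVar("S")   # a state / frame
A = TypeVar("A")   # an action

VideoModel = Callable[[S], S]      # f: S -> S
WorldModel = Callable[[S, A], S]   # f: S x A -> S

def rollout(world_model: WorldModel, state: S, actions: list) -> list:
    """Because the model is action-conditioned, a user (or agent) can steer the rollout."""
    states = [state]
    for action in actions:
        state = world_model(state, action)
        states.append(state)
    return states

# Usage with a toy "world": states are integers, actions are +1 / -1.
print(rollout(lambda s, a: s + a, 0, [1, 1, -1]))   # [0, 1, 2, 1]
```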
@jparkerholder
Jack Parker-Holder
10 months
If only real footballers got up that quickly… 😅.
@GoogleDeepMind
Google DeepMind
10 months
Soccer players have to master a range of dynamic skills, from turning and kicking to chasing a ball. How could robots do the same? ⚽. We trained our AI agents to demonstrate a range of agile behaviors using reinforcement learning. Here’s how. 🧵
1
1
15
@jparkerholder
Jack Parker-Holder
1 month
Happy to share that we contributed to these numbers with at least 10 viewings in 2024 🎉.
@Tocelot
Jon Lai
1 month
Moana was the #1 streaming movie in America in 2024 as well as the last 5 years with 45B total mins 🤯. some IPs age like fine wine. Moana stands out with:
- banger soundtrack (ty Lin-Manuel)
- hero’s journey not a love story (for girls + boys)
- we all love islands 🏝️
h/t @WSJ
2
0
16
@jparkerholder
Jack Parker-Holder
3 years
It has been a pleasure to collaborate with @ucl_dark on so many exciting projects… come say hi at the conference!! #NeurIPS2021.
@UCL_DARK
UCL DARK
3 years
We're excited to present @UCL_DARK's work at #NeurIPS2021 and look forward to seeing you at the virtual conference! Check out all poster sessions and activities by our members below 👇
0
1
16
@jparkerholder
Jack Parker-Holder
1 year
Learned adversaries are back 😎. after some amazing work from @ishitamed a variant of PAIRED can now match our previous sota UED algorithms (ACCEL and Robust PLR). This should unlock some exciting new research directions for autocurricula and environment generation 🚀
@ishitamed
Ishita Mediratta
1 year
📢 Exciting News! We're thrilled to announce our latest paper: "Stabilizing Unsupervised Environment Design with a Learned Adversary” 📚🤖 accepted at #CoLLAs 2023 @CoLLAs_Conf as an Oral presentation!. 📄Paper: 💻Code: 1/🧵 👇.
0
1
16
@jparkerholder
Jack Parker-Holder
5 months
Thanks @jeffclune, was fun giving this talk… check it out!.
@jeffclune
Jeff Clune
5 months
The ICML talk on Genie is now available. Great presentation by @jparkerholder and @ashrewards! Congrats to the whole team again on the Best Paper Award! Well-deserved. Video:
0
1
16
@jparkerholder
Jack Parker-Holder
3 years
Great summary of Ridge Rider!!.
@RobertTLange
Robert Lange
3 years
📉 GD can be biased towards finding 'easy' solutions 🐈 By following the eigenvectors of the Hessian with negative eigenvalues, Ridge Rider explores a diverse set of solutions 🎨 #mlcollage [40]. 📜: 💻: 🎬:
0
0
15
@jparkerholder
Jack Parker-Holder
6 months
Returning home early from #ICML2024 to help my wife and daughter who are unwell - to all the folks I was hoping to chat to, don’t be a stranger we can speak on a call soon! 🫶 if you’re in Vienna check out the controllable video generation workshop on Saturday, it’ll be great!!.
6
0
16
@jparkerholder
Jack Parker-Holder
2 years
Didn't make it to Hawaii, but, *just* made it into the fireside chat photo. I guess this is my 15 seconds of fame 😎.
@sundarpichai
Sundar Pichai
2 years
Spent time with the Google DeepMind team in London this week, including the people working on our next generation models. Great to see the exciting progress and talk to @demishassabis and the teams about the future of AI.
0
0
16
@jparkerholder
Jack Parker-Holder
11 months
Why generate one adversarial prompt when you can instead generate them all…. And then train a drastically more robust model 🌈🌈🌈. Amazing work from @_samvelyan @_andreilupu @sharathraparthy and team!!.
@_samvelyan
Mikayel Samvelyan
11 months
Introducing 🌈 Rainbow Teaming, a new method for generating diverse adversarial prompts for LLMs via LLMs. It's a versatile tool 🛠️ for diagnosing model vulnerabilities across domains and creating data to enhance robustness & safety 🦺. Co-lead w/ @sharathraparthy & @_andreilupu
1
6
15
@jparkerholder
Jack Parker-Holder
8 months
Super inspiring talk (ft 🧞), check it out!.
@jeffclune
Jeff Clune
8 months
Video of my Oxford talk, including OMNI-EPIC. It describes how much of my work over the last two decades fits together, including Quality Diversity, Open-Ended, AI-Generating algorithms, POET, VPT, Thought Cloning, etc. Thanks to @j_foerst for hosting! 🥂.
1
0
15
@jparkerholder
Jack Parker-Holder
2 months
If you do apply - email me at my twitter handle at google dot com. Include [student researcher] in the title. And if you want more info on what it's like to be part of the team, you can find @ckaplanis1 (and others) at #NeurIPS2024!!
0
1
15