Excited to share our
#ICML2023
paper on Symbolic Regression! We introduce a method that combines a Transformer mutation model with Monte-Carlo Tree Search. We pre-train our model on synthetic example mutations and fine-tune it from online experience. (1/4)
Life update: moved to SF (Lower Haight) 2 months ago with my gf and joined
@GoogleDeepMind
. Excited to make new friends!
ps: who wants a climbing partner (Mission Cliffs)? 😬
Will defend my PhD on "Efficient adaptation of reinforcement learning agents: from model-free exploration to symbolic world models" tomorrow at 2pm CET at Meta Paris. If you want to attend in person or join remotely, feel free to DM me!
@mlia_isir
@AIatMeta
Our work on learning exploration strategies without extrinsic rewards was accepted at
@iclr_conf
!
We learn a tree-structured policy that composes skills to reach increasingly distant states.
Work with
@jean_tarbou
@alelazaric
@LudovicDenoyer
.
Camera-ready paper coming soon.
good time to dust off our work on LLM+MCTS from 1.5y ago on symbolic regression, a program synthesis task whose goal is to find a math equation f s.t. y=f(x)
tldr: pre-train a policy, generate "quality" synthetic data w/ MCTS, finetune π and V, then 🔂
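To make the task itself concrete, here's a toy sketch: a brute-force search over a tiny hand-written grammar that recovers f from (x, y) samples. This illustrates only the problem setup, not the paper's Transformer+MCTS pipeline; the grammar, scoring, and data below are all made up.

```python
import itertools
import math

# Toy symbolic regression: given samples (x, y), find an expression
# f such that y = f(x). Brute force over a tiny grammar stands in
# for the far more capable pre-trained-policy + MCTS approach.

UNARY = {"sin": math.sin, "cos": math.cos, "id": lambda v: v}
BINARY = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def candidates():
    # Enumerate expressions of the form u1(x) op u2(x).
    for u1, u2, op in itertools.product(UNARY, UNARY, BINARY):
        name = f"{u1}(x) {op} {u2}(x)"
        fn = lambda x, u1=u1, u2=u2, op=op: BINARY[op](UNARY[u1](x), UNARY[u2](x))
        yield name, fn

def fit(xs, ys):
    # Return the candidate minimizing mean squared error on the data.
    def mse(fn):
        return sum((fn(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return min(candidates(), key=lambda c: mse(c[1]))

xs = [0.1 * i for i in range(1, 20)]
ys = [math.sin(x) + x for x in xs]   # ground truth: f(x) = sin(x) + x
best_name, best_fn = fit(xs, ys)
print(best_name)                     # → sin(x) + id(x)
```

In the real setting the candidate space is far too large to enumerate, which is exactly why a learned policy proposes mutations and MCTS allocates the search budget.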
Poster session at 12:05 and 17:10 in room 388-390 for the
#AI4science
workshop.
“Symbolic Model-Based Reinforcement Learning” with
@SylvainLamprier
“Privileged Deep Symbolic Regression” with
@BiggioLuca
and Tommaso Bendinelli.
Only in SF do founders spend their Saturday writing and deconstructing a single tweet like an Eminem punchline. Watch out for cool upcoming project launches 🚀🚀
Sharing my experience as a beginner with
#midjourney
: crafting the most personalized and budget-friendly 𝗖𝗵𝗿𝗶𝘀𝘁𝗺𝗮𝘀 𝗴𝗶𝗳𝘁!
Create 🎨, print 🖨️, gift 🎁
Hope to inspire some people and gather feedback. Let's get started!
Really cool work 👏! Didn't know about Manim (or that so many YT educators use it), awesome tool! Opens up lots of opportunities for generating high-quality videos with more complex mathematical concepts and "story-telling".
At the
@the_builderclub
hackathon, we built Batman 🦸♂️, a life-saving (literally) app. A simple audio recording triggers an SOS call, alerting authorities and loved ones to your situation.
real-time demo (Hollywood, call me pls) with
@abhav_k
@ma_as_
How to pre-train deep generative symbolic regression models to incorporate priors, e.g. symmetries or sub-expressions known to appear.
Work led by Tommaso &
@BggLc
. Visit us at our poster session next Thursday
@icmlconf
! 😚
HF demo:
Code:
Interested in online adaptation in dynamic environments?
Come check our work "Probing Dynamic Environments with Informed Policy Regularization" at
#icml2020
's BIG workshop.
Presentation: Saturday 18th July, 6:15-7:30am (PDT), 13:15-14:30 (UTC)
Recent deep neural models have shown competitive performance compared to more classical Genetic Programming algorithms. Unlike GP approaches, they are trained to generate expressions from observations given as context using next-token prediction … (2/4)
hey, some questions for the users of json mode and structured generation ⤵️
i want to better understand the inner workings and benefits of popular libraries that enable sampling structured responses, e.g. instructor
@dottxtai
hope some people can learn from this
cc
@jxnlco
@remilouf
… however, they usually do not benefit from search abilities, resulting in lower performance on out-of-distribution datasets compared to GP. We use MCTS for planning and fine-tune the mutation and critic models on satisfactory mutations (~ RLHF). (3/4)
𝗭𝗲𝗿𝗼-𝘀𝗵𝗼𝘁.
❌Describing the entire scene in a single prompt failed, likely because both the zebra and the hippo were in it. So did "multi-prompts".
What I wanted: "a zebra reading a book, a hippo drinking wine in a pool, Palm Springs, a Citroen traction car"
🆚
What I got:
if you i) are French, ii) want to learn about European bills, and iii) want to make up your mind about who to vote for in the next European elections, my friends released a cool Tinder-like open-source project,
We created a site to choose your candidate for the European elections.
The twist? It is based on parliamentary votes rather than programs.
Launched it last week, already 40,000 users!
Website :
X account :
@VoteFinder_eu
Our new paper "Learning Adaptive Exploration Strategies in Dynamic Environments Through Informed Policy Regularization" is now online:
@pa_kamienny
,
@teopir
, Alessandro Lazaric, Thibault Lavril, Nicolas Usunier,
@LudovicDenoyer
not sure I understand the skepticism around integrating LLMs into the OS.
like is it that surprising?
isn't that what "AI agents" are for?
or maybe the provider (or not getting the deal) is the problem?
Encountered some bumps on my creative journey with
@midjourney
, but here are insights and potential solutions I think could improve the creation experience.
We're sharing progress on our video-to-audio (V2A) generative technology. 🎥
It can add sound to silent clips, matching the acoustics of the scene, accompanying on-screen action, and more.
Here are 4 examples - turn your sound on. 🧵🔊
Structured decoding.
main idea: leverage priors on model outputs to constrain decoding to a subset of tokens.
guaranteed valid json comes for free (small overhead for building the finite state machine), and inference might even be sped up.
cool blog:
𝗘𝘅𝘁𝗲𝗻𝗱𝗶𝗻𝗴 𝘁𝗿𝗶𝗮𝗹 𝟯.
• Pan “a zebra is reading a book on a chair” until its pose is OK
• Improve the costume and sit the zebra on “a chair from the 60s”
• Pan again to add "a Citroen traction car"
⚠️ Forced by Midjourney to square the image after each pan
𝗧𝗿𝗶𝗮𝗹 𝟭: 𝗘𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁 𝗳𝗶𝗿𝘀𝘁.
• Generate the Palm Springs background
• In-paint "a zebra reading a book in a nice costume"
❌Failed
😬Conditioning the in-painting on generated images of sitting zebras improved results, but still not enough
when is it useful?
first things i see are json entries whose values can be enumerated or have special types, e.g. integers or booleans, as well as speeding up the decoding of keys. anything else?
i don't yet understand how it can improve general tasks, e.g. summarization?
@shreyaskapur
Nice work 🤌 Conditioning the policy model on textual descriptions of the change would be cool!
Small q: did you find that the value function learned using the edit distance between target and current programs generalized to drawn images? or did you maybe train on drawings?
𝗚𝗲𝘁𝘁𝗶𝗻𝗴 𝘀𝘁𝗮𝗿𝘁𝗲𝗱.
⏱️In 5 minutes, you can subscribe to Midjourney ($30/mo) 💸 and get impressive results using basic features (generation, upscaling, in-painting)
For instance, “classy zebras reading in a Palm Springs house” yields the following image:
Our model is trained on a vast dataset of synthetic examples and scales to input dimensions up to ten. After several million examples, the attention maps start to reveal intricate mathematical structure: in the example f(x)=sin(x)/x below, we see Fourier-like patterns. (4/4)
𝗙𝗶𝗻𝗮𝗹 𝗽𝗼𝗹𝗶𝘀𝗵.
• Improve quality: notice how the sky got darker due to image squaring?
➡️In-paint the sky, careful not to erase pixels from the hippo
• Adjust the last details: palm trees, cactus and costume colors, and the car grille
@justinsunyt
@MultiOn_AI
what's up with the scrolls? are you using screenshots or something? (if so, how do you avoid having the chat box in the screenshots?)
do you use a multimodal model to match with the unstructured HTML?
thanks!
as models become better at following instructions, what is the future for structured decoding?
i only see the inference speedup?
also, is there a cost to merging tokens (distribution shift)?
𝗧𝗿𝗶𝗮𝗹 𝟯: 𝗛𝗮𝗿𝗱 𝗰𝗼𝗺𝗲𝘀 𝗳𝗶𝗿𝘀𝘁.
• Generate “A feminine hippo on a pool mattress, drinking wine in a pool, Palm Springs house”
• In-paint the hippo in “an orange pool ring”, then improve her “arms resting on the ring”
👌 Final result
Prompting.
is json mode just prompting the model to output JSON, giving the json schema as input, and json.loads'ing the response?
are there 1 or 2 model calls? if 1, has anyone studied the performance difference from not allowing the model to "think"?
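For reference, the naive "just prompting" variant that question describes can be sketched like this; `call_model` is a stub standing in for a real LLM API call, and the retry logic is a guess at common practice, not any provider's actual implementation.

```python
import json

# Naive "json mode": put the schema in the prompt, json.loads the
# reply, and retry when parsing or a minimal key check fails.

def call_model(prompt, attempt):
    # Fake model: fails once with trailing prose, then complies.
    if attempt == 0:
        return 'Sure! Here you go: {"name": "Ada"}'
    return '{"name": "Ada", "age": 36}'

def json_mode(schema, max_retries=3):
    prompt = f"Answer ONLY with JSON matching this schema: {schema}"
    for attempt in range(max_retries):
        reply = call_model(prompt, attempt)
        try:
            obj = json.loads(reply)
            if all(k in obj for k in schema):  # minimal schema check
                return obj
        except json.JSONDecodeError:
            pass  # real systems would append the error to the prompt
    raise ValueError("model never produced valid JSON")

result = json_mode({"name": "string", "age": "integer"})
print(result)  # {'name': 'Ada', 'age': 36}
```

Whether real providers implement json mode this way, with constrained decoding, or with an extra model call is exactly what the question above is probing.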
𝗜𝗺𝗽𝗿𝗼𝘃𝗶𝗻𝗴 𝗶𝗻-𝗽𝗮𝗶𝗻𝘁𝗶𝗻𝗴.
Generation quality 📉 with larger images, especially as the in-painting surface 📉. The attention mechanism struggles with out-of-domain prompts, e.g. "a monkey sitting on a lion's head", & with too many elements.
💡Manual Attention Region Specification
These were the first ideas I had while doing my first
#midjourney
project, which I highlighted here. Curious to see if the community has developed any of these or if you've got some cool tips to share!
𝗦𝘂𝗿𝗽𝗿𝗶𝘀𝗶𝗻𝗴 𝗺𝘆 𝗾𝘂𝗶𝗿𝗸𝘆 𝗴𝗳'𝘀 𝗳𝗮𝗺𝗶𝗹𝘆!
Each member is represented by a distinctive animal. Dad: 🦓, mom: 🦛
Their interests can be summed up in the goal prompt:
"a zebra reading a book, a hippo drinking wine in a pool, Palm Springs, a Citroën Traction car”
are libraries like instructor, llama_index, ... mainly wrapping json mode by abstracting jsons w/ clean pydantic objects? they also seem super useful to catch validation error traces to do recursive calls. anything else?
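A stdlib-only sketch of that validation-and-retry pattern (a dataclass stands in for a Pydantic model, and `call_model` is a stub; this is an assumption about how such libraries work, not instructor's actual code):

```python
from dataclasses import dataclass
import json

# Instructor-style pattern: parse the model's reply into a typed
# object and, on validation failure, feed the error message back
# into a retry prompt so the model can correct itself.

@dataclass
class User:
    name: str
    age: int

    def __post_init__(self):
        if not isinstance(self.age, int):
            raise TypeError(f"age must be int, got {self.age!r}")

def call_model(prompt):
    # Stub model: returns a bad type unless the prompt mentions the error.
    if "must be int" in prompt:
        return '{"name": "Ada", "age": 36}'
    return '{"name": "Ada", "age": "thirty-six"}'

def extract(prompt, retries=3):
    for _ in range(retries):
        reply = call_model(prompt)
        try:
            return User(**json.loads(reply))
        except TypeError as err:
            prompt += f"\nYour last answer was invalid: {err}. Try again."
    raise ValueError("validation kept failing")

user = extract("Extract the user as JSON with fields name (str), age (int).")
print(user)  # User(name='Ada', age=36)
```

The value-add over raw json mode is that the validation error trace itself becomes the repair signal for the recursive call.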
𝗧𝗿𝗶𝗮𝗹 𝟮: 𝗭𝗼𝗼𝗺-𝗼𝘂𝘁 (𝗭𝗢) & 𝗣𝗮𝗻.
• Generate “Zebra reading, Palm Springs, car”
• ZOs and Pans with “a hippo drinking wine in a pool” and “a Bull Terrier dog”
✅ Better thanks to large in-painting spaces
⚠️ Abusing ZOs and Pans disrupts the composition