I have finally received my PhD from
@UniofOxford
@oxcsml
(and felt incredibly cool in the gown)!!
On a slightly belated note, I have been working as a Research Scientist on language models and privacy at
@GoogleDeepMind
. Stay tuned for the research from my amazing team 🥳🎉
Thank you,
@MSFTResearch
!! I am so incredibly grateful and honoured to receive the MSR PhD Fellowship in Health Intelligence. A huge congratulations to the other recipients 🥳
My first oral!! Come around to Eindhoven at 4 pm today
@UncertaintyInAI
#UAI2022
. The stars of my talk: four different estimators of differentially private importance weights. I promise that at least one of them will make your synthetic data look more real!
This would not have been possible without the help of many, many people. For the purpose of tagging accounts with many followers, I want to thank
@StatMLIO
@OxfordStats
@Novartis
@MSFTResearch
and most importantly my supervisor
@cholmesuk
for supporting my studies.
@vdwild
@SGhalebikesabi
@sejDino
and I rigorously connected Bayesian methods to deep ensembles using Wasserstein gradient flows!
Our results are summarised in a recent NeurIPS oral paper (), a 30-minute talk (), and below:
Proud to announce that we had ≥ 0 people attending our AISTATS poster! We (
@rob_cornish
, Luke Kelly,
@cholmesuk
) bring back nonignorable missingness models, which were proposed by Little and Rubin at a time when neural networks were not as popular
It was an amazing experience to work with
@cholmesuk
, Edwin Fong and
@BrieucLehmann
on this new kind of Bayesian density estimation. If you’ve never heard of iterative Bayesian predictives, check out this thread.
#notAtISBA
📣 Preprint alert! 📣
Huge congrats to
@SGhalebikesabi
for this great piece of work 🎉
So, what is 'Density Estimation with Autoregressive Bayesian Predictives'...?
🧵 1/6
Have you ever been curious to know how your fancy NN behaves at a single prediction? Then check out our (
@TerLucile
*,
@karlado
,
@cholmesuk
) paper where we improve upon local explanation models (e.g. SHAP) by imposing a neighbourhood distribution as reference
🚨Have you ever wanted to attend a top-notch tutorial on explainable AI? 🚨
Not sure I'll get there, but I will give it my best. Sign up here for the event hosted by
@QUANTESSLONDON
:
📅 Thursday, March 24, 2022
🕧 6:00 pm UTC
Never again feel like you wasted weeks working on a hypothesis you couldn’t support empirically. Just write it up in the next 1.5 months and submit it here:
Our CfP for our
@NeurIPSConf
workshop is officially out! Submit your negative results, summarised in 4-6 pages, by
*Sep 22, 2022*
#NeurIPS2022
Especially insightful negative results will be published in a special edition of PMLR.
Planning is underway for
#NeurIPS2023
(New Orleans, Dec 10-16, 2023).
You can nominate organisers (incl. yourself) by Jan. 13. More details in our blog:
This would not have been possible without the constant support of my supervisor
@cholmesuk
, and the amazing mentorship I received from
@javiergonzh
. Lastly, thank you to my collaborator, Jack Jewson, for helping with the application!!
Student researcher position applications are open at Google DeepMind!
I'm hosting an SR at the intersection of bias and generative models. If you're an interested PhD student, please reach out!
…
We have two opportunities to work at the intersection of machine learning and immunology as part of an amazing project with
@MSFTResearch
and
@AdaptiveBiotech
.
Curious about novel risks that AI introduces for patients and healthcare providers?
We're hosting an
#AIUK
Fringe event to explore Privacy and Fairness in AI for Health – Register now!
Date: 27 March, 10:00-16:30 GMT
Location:
@turinginst
, London
1/6
🚀 Ever had a research idea that seemed brilliant but turned out to not work in practice? 🚀
Don’t move on to something else; your research has built a hypothesis and then disproved it! Present your work at our 🥳 now accepted 🥳 hybrid
@NeurIPSConf
workshop!
#NeurIPS2022
Come check out my poster “On Measuring Private Model Disparities” at the AFCP workshop today! A sneak preview of my work from my summer internship! Yay!
4:20 pm
Room 392
#NeurIPS2022
We are creating a repo of negative results so that all beautiful ML ideas that did not work are easily searchable. If you know of a negative result, tag
@ICBINBWorkshop
or fill out this form. The more researchers engage, the more useful this becomes!
A yearly workshop is not enough to summarise all the beautiful ideas that did not work. So we are starting a *Repository of Negative Results*. Tag
@ICBINBWorkshop
whenever you see an idea that did not work, and we will add it to our repo! Feedback welcome!
Shapley values explain the behaviour of a black box at a single prediction x by assigning each feature a numeric value. Roughly speaking, the Shapley value of feature j is the expected difference between evaluating the black box on an observation where ...
feature j takes the value x_j and on an observation where feature j is sampled from a pre-specified distribution. These distributions have so far been chosen globally, so we propose neighbourhood distributions that come with many nice properties!
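The single-feature contrast described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: `black_box`, `single_feature_effect`, and the standard-normal reference distribution are all hypothetical names chosen here, and a full Shapley value would additionally average this contrast over feature coalitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black box: a simple linear model of two features,
# f(x) = 2*x_0 + 1*x_1, so the true effect of fixing feature 0 is known.
def black_box(X):
    return 2.0 * X[:, 0] + 1.0 * X[:, 1]

def single_feature_effect(model, x, j, reference, n_samples=1000):
    """Monte Carlo estimate of the expected difference between the model
    evaluated with feature j fixed to x[j] and with feature j drawn from a
    reference distribution, all other features held at x."""
    X_fixed = np.tile(x, (n_samples, 1))
    X_ref = X_fixed.copy()
    X_ref[:, j] = reference(n_samples)  # resample only feature j
    return float(np.mean(model(X_fixed) - model(X_ref)))

# A *global* reference distribution (standard normal); a neighbourhood
# reference would instead sample close to x[j].
x = np.array([1.0, -0.5])
effect = single_feature_effect(
    black_box, x, j=0, reference=lambda n: rng.normal(0.0, 1.0, n)
)
```

For this linear sketch the contrast should come out near 2 * (x_0 - E[reference]) = 2, up to Monte Carlo noise.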
Wonderful surprise: my work done at
@OxfordStats
on Posterior Bootstrap received the
#isba2021
Best Student/Postdoc Contributed Paper Award. Thank you
@ISBA_events
and everyone who was supporting me with this project!
#WomenInSTEM