Kevin Swersky

@kswersk

Followers: 8K · Following: 2K · Tweets: 390

Research Scientist at DeepMind.

Toronto, Ontario
Joined June 2015
@kswersk
Kevin Swersky
22 days
RT @priyankjaini: Do generative video models learn physical principles from watching videos? Very excited to introduce the Physics-IQ bench…
@kswersk
Kevin Swersky
3 months
@wgrathwohl @mo_norouzi @rob_fergus Sad to see you go Will, but it was a privilege to work with you. Your wild and brilliant ideas, boundless enthusiasm, and unique style made everything so much fun!
@kswersk
Kevin Swersky
3 months
RT @awiltschko: Well, we actually did it. We digitized scent. A fresh summer plum was the first fruit and scent to be fully digitized and r…
@kswersk
Kevin Swersky
8 months
RT @HanieSedghi: 🆕🔥We show that LLMs *can* plan if instructed well! 🔥Instructing the model using ICL leads to a significant boost in planni…
@kswersk
Kevin Swersky
8 months
RT @_akhaliq: Greedy Growing Enables High-Resolution Pixel-Based Diffusion Models We address the long-standing problem of how to learn eff…
@kswersk
Kevin Swersky
9 months
@ilyasut You have been inspirational. Good luck with whatever you have planned next, I’m sure it is going to be incredible!
@kswersk
Kevin Swersky
9 months
@aidangomez Congratulations Aidan!!
@kswersk
Kevin Swersky
1 year
RT @priyankjaini: We have a student researcher opportunity in our team @GoogleDeepMind in Toronto 🍁 If you’re excited about research on di…
@kswersk
Kevin Swersky
1 year
RT @PaulVicol: Check out @clark_kev’s and my paper on fine-tuning diffusion models on differentiable rewards! We present DRaFT, which compu…
@kswersk
Kevin Swersky
1 year
I’m really excited about this project! Backpropagation and its variations are extremely effective at fine-tuning diffusion models on downstream rewards.
@clark_kev
Kevin Clark
1 year
@PaulVicol and I are excited to introduce DRaFT, a method that fine-tunes diffusion models on rewards (such as scores from human preference models) by backpropagating through the diffusion sampling! With @kswersk, @fleet_dj. arXiv: (1/5)
[image attached]
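The DRaFT recipe described above, fine-tuning a diffusion model on a differentiable reward by differentiating through the sampling chain, can be sketched in miniature. Everything below is a hypothetical toy, not the paper's implementation: a one-dimensional "sampler" with a single learned parameter `theta`, a quadratic reward, and central finite differences standing in for autodiff through the denoising steps.

```python
# Toy stand-in for reward fine-tuning through a sampling chain (hypothetical names).

def sample(theta, noise, T=10, step=0.3):
    """Deterministic toy 'denoising': T steps pulling x toward theta."""
    x = noise
    for _ in range(T):
        x = x + step * (theta - x)
    return x

def reward(x, target=2.0):
    """Differentiable reward, maximized when the sample equals `target`."""
    return -(x - target) ** 2

def finetune(theta, noise, iters=200, lr=0.1, eps=1e-5):
    """Gradient ascent on reward(sample(theta)) w.r.t. theta; finite
    differences stand in for backprop through the sampling steps."""
    for _ in range(iters):
        g = (reward(sample(theta + eps, noise))
             - reward(sample(theta - eps, noise))) / (2 * eps)
        theta += lr * g
    return theta

theta = finetune(0.0, noise=-1.0)
print(round(sample(theta, -1.0), 3))  # the tuned sampler lands at the reward's peak
```

The point of the toy: the reward gradient flows through every sampling step, so the sampler's parameter is pulled toward whatever the reward prefers, which is the mechanism the quoted tweet describes.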
@kswersk
Kevin Swersky
1 year
@wchan212 Congrats Will!
@kswersk
Kevin Swersky
1 year
@mo_norouzi @hojonathanho @wchan212 @Chitwan_Saharia Congrats to you and the team! Can’t wait to see what you make!
@kswersk
Kevin Swersky
1 year
@elliot_creager Congratulations Dr. Wise Guy!
@kswersk
Kevin Swersky
2 years
@itsIanMac @YorkUniversity Congrats!! 🎓
@kswersk
Kevin Swersky
2 years
@aidangomezzz @nickfrosst Congratulations Aidan and team! Incredible work 😀
@kswersk
Kevin Swersky
2 years
This is a really natural framework to improve Bayesian optimization when you have access to related optimization tasks. Joint work with @ziwphd, @GeorgeEDahl, Chansoo Lee, @zacharynado, @jmgilmer, @latentjasper, @ZoubinGhahrama1
@GoogleAI
Google AI
2 years
Hyper Bayesian optimization (HyperBO) is a highly customizable interface that pre-trains a Gaussian process model and automatically defines model parameters, making Bayesian optimization easier to use while outperforming traditional methods. Learn more →
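The pre-training recipe described above can be illustrated with a toy (a hypothetical setup, not the HyperBO implementation): choose GP hyperparameters, here just an RBF lengthscale, by maximizing marginal likelihood on data from related tasks, then reuse the pre-trained GP for Bayesian optimization on a new task.

```python
import numpy as np

def rbf(a, b, ls):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def log_marginal(x, y, ls, noise=1e-4):
    # GP log marginal likelihood up to a constant; enough to compare lengthscales.
    K = rbf(x, x, ls) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum()

rng = np.random.default_rng(0)
xs = np.linspace(0, 5, 30)

# "Related tasks": functions drawn from a GP whose lengthscale we pretend not to know.
Lk = np.linalg.cholesky(rbf(xs, xs, 0.5) + 1e-8 * np.eye(30))
tasks = [Lk @ rng.standard_normal(30) for _ in range(5)]

# "Pre-train": grid-search the lengthscale against the related tasks.
grid = [0.1, 0.25, 0.5, 1.0, 2.0]
ls = max(grid, key=lambda l: sum(log_marginal(xs, y, l) for y in tasks))

def gp_posterior(xtr, ytr, xte, ls, noise=1e-4):
    K = rbf(xtr, xtr, ls) + noise * np.eye(len(xtr))
    Ks = rbf(xtr, xte, ls)
    mu = Ks.T @ np.linalg.solve(K, ytr)
    var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

f = lambda x: np.sin(3 * x)          # the new task to optimize
X, Y = np.array([0.0]), f(np.array([0.0]))
for _ in range(15):
    mu, var = gp_posterior(X, Y, xs, ls)
    nxt = xs[np.argmax(mu + 2 * np.sqrt(var))]   # UCB acquisition
    X, Y = np.append(X, nxt), np.append(Y, f(nxt))
print(round(Y.max(), 2))  # best value found; sin's maximum is 1.0
```

The design choice mirrored here is that the GP prior is fit once on related data rather than hand-tuned per task, which is what makes the resulting Bayesian optimization "easier to use" in the quoted tweet's sense.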
@kswersk
Kevin Swersky
2 years
RT @tingchenai: 📢Introducing Pix2Seq-D, a generalist framework casting panoptic segmentation as a discrete data generation task conditioned…
@kswersk
Kevin Swersky
2 years
This was a really interesting project for me to learn about and apply neural fields, with some great collaborators!
@taiyasaki
Andrea Tagliasacchi 🇨🇦
2 years
📢📢📢 CUF – Continuous Upsampling Filters. Neural fields beat classical CNNs in (regressive) super-res: AFAIK, a first for @neural_fields in 2D deep learning? Mostly their wins are in sparse, higher-dim signals ~NeRF
[image attached]
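The coordinate-based ("neural field") view of super-resolution mentioned above can be sketched with a toy. This is an assumption-laden illustration, not the CUF architecture: fit a model of Fourier features with a linear readout to low-res samples, then query it at any continuous coordinate, i.e. at an arbitrary upsampling factor.

```python
import numpy as np

def features(x, freqs):
    # Fourier positional encoding of continuous coordinates, plus a bias column.
    ang = x[:, None] * freqs[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang), np.ones((len(x), 1))], axis=1)

signal = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)

x_lo = np.linspace(0, 1, 16)              # the low-res grid we get to observe
freqs = 2 * np.pi * np.arange(1, 8)       # encoding frequencies
w, *_ = np.linalg.lstsq(features(x_lo, freqs), signal(x_lo), rcond=None)

x_hi = np.linspace(0, 1, 200)             # off-grid queries: a 12.5x "upsampling"
err = np.abs(features(x_hi, freqs) @ w - signal(x_hi)).max()
print(err < 0.05)  # the field reconstructs the signal between the low-res samples
```

Because the fitted model is a function of continuous coordinates rather than a fixed-stride filter bank, the upsampling factor is chosen at query time, which is the "continuous" property the tweet highlights.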
@kswersk
Kevin Swersky
3 years
@AidanNGomez Wonderful, congratulations all!!
@kswersk
Kevin Swersky
3 years
@cjmaddison Chris, you’re absolutely right… There, much better.
[image attached]