• Improving pretrained models without training or data,
• Getting foundation models to learn skills faster,
• Figuring out how self-training techniques like autolabeling work,
• A new way to exchange model components,
• Making weak supervision fair,
• And a lot more!