Title: Learning and transferring visual representations with few labels: BYOL + CrossTransformers
Speaker: Carl Doersch, DeepMind
Date and Time: Tuesday, June 22, 1-2 pm
Place: Zoom Meeting
Meeting ID: 693 9289 5889 Pass code: 259475
Abstract: When encountering novelty, such as new tasks and new domains, visual representations trained on standard tasks like ImageNet classification struggle to transfer their knowledge. This talk explores how to build representations that better capture the visual world and transfer better to new tasks. I’ll first discuss Bootstrap Your Own Latent (BYOL), a self-supervised representation learning algorithm that builds on the ‘contrastive’ method SimCLR yet outperforms it without ‘contrasting’ its predictions against any ‘negative’ data. Second, I’ll present CrossTransformers, which achieves state-of-the-art few-shot fine-grained recognition on Meta-Dataset via a self-supervised representation that is aware of spatial correspondence.
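To make the "no negatives" point concrete, here is a minimal sketch of BYOL's regression-style objective: the online network's prediction is matched to the target network's projection of another view, with no negative pairs anywhere in the loss. This is an illustrative NumPy version under my own assumptions (function names are hypothetical; the real method also uses an exponential-moving-average target network and a stop-gradient, noted in comments):

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Normalize each row (embedding) to unit length.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def byol_loss(online_pred, target_proj):
    """BYOL-style objective: mean squared error between L2-normalized
    online predictions and target projections, which equals
    2 - 2 * cosine_similarity per pair. Crucially, only positive pairs
    (two augmented views of the same image) appear; there is no
    contrasting against negatives. In the real algorithm, target_proj
    comes from a slow exponential-moving-average copy of the online
    network and receives no gradient (stop-gradient)."""
    p = l2_normalize(online_pred)
    z = l2_normalize(target_proj)  # treated as a constant target
    return float(np.mean(np.sum((p - z) ** 2, axis=-1)))

# Identical embeddings for the two views give zero loss.
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
print(round(byol_loss(v, v), 6))  # -> 0.0
```

Because both inputs are unit-normalized, the loss is bounded in [0, 4] per pair and is minimized when the online prediction points in the same direction as the target projection.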
Bio: Carl Doersch is a research scientist at DeepMind, working closely with Andrew Zisserman. He received his PhD from Carnegie Mellon under the supervision of Alexei Efros and Abhinav Gupta, during which time he received a Google Fellowship and an NDSEG Fellowship and spent two years as a visiting scholar at UC Berkeley. His current interests span computer vision and machine learning, with a particular focus on neural representations, including self-supervised learning and transfer across domains and tasks.
Organizer: Josephine Sullivan