Contextualizing Self-Supervised Learning: A New Path Ahead

Speaker

Yifei Wang
CSAIL

Host

Thien Le
CSAIL

Abstract: Self-supervised learning (SSL) has achieved remarkable progress over the years, particularly in the visual domain. However, recent advances have plateaued due to performance bottlenecks, and attention has increasingly shifted toward generative models. In this talk, we step back to analyze existing SSL paradigms and identify the lack of context as their most critical obstacle. To address this, we explore two approaches that incorporate contextual knowledge into SSL:

1. Contextual Self-Supervised Learning: Here, learned representations adapt their inductive biases to diverse contexts, enhancing the flexibility and generality of SSL.
2. Self-Correction: This method allows foundation models to refine themselves by reflecting on their own predictions within a dynamically evolving context.

Together, these insights chart new paths for crafting self-supervision and highlight context as a key ingredient for building general-purpose SSL.


Paper Links:

* In-Context Symmetries: Self-Supervised Learning through Contextual World Models (https://arxiv.org/pdf/2405.18193)
* A Theoretical Understanding of Self-Correction through In-context Alignment (https://arxiv.org/pdf/2405.18634)

Both papers covered in this talk were accepted to NeurIPS 2024. The theoretical work on understanding self-correction also received the Spotlight Award at the ICML 2024 ICL Workshop.


Bio: Yifei Wang is a postdoc at CSAIL, advised by Prof. Stefanie Jegelka. He earned his bachelor's and Ph.D. degrees from Peking University. Yifei is broadly interested in machine learning and representation learning, with a focus on bridging the theory and practice of self-supervised learning. His first-author work has been recognized with multiple best paper awards, including the Best ML Paper Award at ECML-PKDD 2021, the Silver Best Paper Award at the ICML 2021 AdvML Workshop, and the Spotlight Award at the ICML 2024 ICL Workshop.