Speakers:
- Andrej Risteski, Assistant Professor, Machine Learning Department, Carnegie Mellon University
- Ankur Moitra, Norbert Wiener Professor of Mathematics, MIT
TALK INFORMATION
Andrej Risteski: Towards Understanding the Statistical Landscape of Score-based Losses
Score-based losses have emerged as a more computationally appealing alternative to maximum likelihood for fitting (probabilistic) generative models with an intractable likelihood (for example, energy-based models and diffusion models). What is gained by forgoing maximum likelihood is a tractable gradient-based training algorithm. What is lost is less clear: in particular, since maximum likelihood is asymptotically optimal in terms of statistical efficiency, how suboptimal are score-based losses?
I will survey a recent connection relating the statistical efficiency of broad families of generalized score losses to the algorithmic efficiency of a natural inference-time algorithm: namely, the mixing time of a suitable diffusion using the score that can be used to draw samples from the model. This “dictionary” allows us to elucidate the design space of score losses with good statistical behavior, by “translating” techniques for speeding up Markov chain convergence (e.g., preconditioning and lifting). I will also touch upon a parallel story for learning discrete probability distributions, in which the role of score-based losses is played by masked-prediction-like losses. Finally, time permitting, I will speculate on co-designing pre-training and inference-time procedures in foundation models, in light of recent interest in inference-time algorithms.
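To make the inference-time algorithm in the abstract concrete: a standard way to draw samples from a model given only its score is (unadjusted) Langevin dynamics, whose mixing time is exactly the algorithmic quantity the talk connects to statistical efficiency. The sketch below is illustrative only, not the talk's construction; the toy Gaussian target and all parameter choices are assumptions for demonstration.

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=5000, seed=0):
    """Unadjusted Langevin dynamics:
        x <- x + step * score(x) + sqrt(2 * step) * noise.
    Its stationary distribution is (approximately, for small step sizes)
    the model whose score function is `score`; how fast it mixes is the
    "algorithmic efficiency" side of the dictionary in the abstract.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy target: N(mu, sigma^2), whose score is (mu - x) / sigma^2.
mu, sigma = 2.0, 0.5
score = lambda x: (mu - x) / sigma**2

# 500 independent chains run in parallel as one vector.
samples = langevin_sample(score, np.zeros(500))
print(samples.mean(), samples.std())  # close to mu = 2.0 and sigma = 0.5
```

Running many chains as a single vector works here because both the score and the injected noise act coordinate-wise, so each coordinate is an independent chain.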
Ankur Moitra: Vignettes in Learning Theory
In this tutorial I will revisit two classic learning problems in a new light:
- Learning sequence models. Can we hope for algorithms that work in greater generality when we are given access to a conditional sampling oracle?
- Learning graphical models. Is learning from trajectories of the Glauber dynamics actually computationally easier than learning from iid samples?
Both are examples of what I hope is a more general theme: new, and arguably more natural and modern, problem formulations can help us overcome intransigent barriers.
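For the second vignette, the Glauber dynamics of an Ising-type graphical model is the single-site Markov chain that repeatedly resamples one spin from its conditional distribution given the rest; the question in the abstract is whether observing a trajectory of this chain makes learning the model easier than observing i.i.d. samples. The sketch below, assuming an Ising model p(x) ∝ exp(x'Jx/2 + h'x) on spins in {-1, +1} (the parameterization and all names are illustrative), generates such a trajectory.

```python
import numpy as np

def glauber_step(x, J, h, rng):
    """One step of Glauber dynamics for the Ising model p(x) ∝ exp(x'Jx/2 + h'x):
    pick a uniformly random site and resample its spin from its conditional."""
    n = len(x)
    i = rng.integers(n)
    # Local field at site i from its neighbors plus the external field.
    field = J[i] @ x - J[i, i] * x[i] + h[i]
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(x_i = +1 | rest)
    x[i] = 1 if rng.random() < p_plus else -1
    return x

# Demo: zero couplings, strong positive external field, so spins drift to +1.
rng = np.random.default_rng(0)
n = 10
J = np.zeros((n, n))
h = np.full(n, 2.0)
x = rng.choice([-1, 1], size=n)
traj = np.array([glauber_step(x, J, h, rng).copy() for _ in range(500)])
mag = traj.mean()
print(mag)  # average magnetization along the trajectory; near +1 here
```

The learner in the talk's setting would observe a trajectory like `traj` (successive states differing in at most one coordinate) rather than independent draws from p.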