We are currently in the midst of a data revolution. Massive and ever-growing datasets, arising in science, health, and even everyday life, are poised to impact many areas of society. Many of these datasets are not only large but high-dimensional, in the sense that each data point may consist of millions or even billions of numbers. To take an example from imaging, a single image can contain millions of pixels; a video may easily contain a billion “voxels”. There are fundamental reasons (“curses of dimensionality”) why learning in high-dimensional spaces is challenging.

A basic challenge spanning signal processing, statistics, and optimization is to leverage lower-dimensional structure in high-dimensional datasets. Low-dimensional signal modeling has driven developments in both theory and applications across a vast array of areas: medical and scientific imaging, low-power sensors, and the modeling and interpretation of bioinformatic datasets, to name just a few.

However, massive modern datasets pose an additional challenge: as datasets grow and data collection becomes increasingly uncontrolled, it is common to encounter noise, missing data, and even gross errors or malicious corruptions. Classical techniques break down completely in this setting, and new theory and algorithms are needed. Wright’s group develops efficient computational tools for recovering low-complexity models from noisy, incomplete, or corrupted observations, proves their correctness, and collaborates with a wide range of colleagues to apply these tools to problems in data science, imaging, vision, health, and communications.
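To make the last point concrete, below is a minimal, illustrative sketch (not the group’s actual software) of one canonical low-dimensional recovery problem: estimating a sparse vector from far fewer noisy linear measurements than the ambient dimension, via ℓ1-regularized least squares solved with iterative soft-thresholding (ISTA). All dimensions, parameter values, and function names here are illustrative assumptions, not taken from the text above.

```python
import numpy as np

def soft_threshold(v, t):
    # Entrywise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iters=500):
    # Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by proximal gradient descent.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)             # gradient of the smooth data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy experiment: a 20-sparse signal in 1000 dimensions, observed through only
# 200 random measurements plus noise. Without exploiting the signal's
# low-dimensional (sparse) structure, recovery from m < n measurements
# would be hopeless.
rng = np.random.default_rng(0)
n, m, k = 1000, 200, 20
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(A, y, lam=0.02)
print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Under suitable conditions on the measurement matrix (for instance, random measurements as in this toy setup), convex programs of this kind provably recover the underlying sparse signal; handling missing entries or gross corruptions leads to closely related formulations such as matrix completion and robust PCA.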