Seminar: Probability and Stochastic Processes
On the Convex Gaussian Minimax Theorem and Minimum-Norm Interpolation
We revisit sharp risk bounds for the minimum-l1-norm interpolant (basis pursuit) in high-dimensional linear regression. These bounds were first obtained by Wang, Donhauser and Yang (2022) via the Convex Gaussian Minimax Theorem (CGMT), in an analysis motivated by the widely used Gaussian-width and uniform-convergence framework of Koehler et al. (2021); in particular, their results disprove a conjecture of Chinot, Löffler and van de Geer (2021). In contrast, our proof, inspired by the work of Fleury (2010), does not rely on Gaussian comparison inequalities or the CGMT. Instead, it uses tools from high-dimensional geometry, superconcentration, and the geometry of Gaussian polytopes. Finally, we argue that the CGMT may not always be the most appropriate tool for studying the mean-squared error of minimum-norm interpolants.