Published in

Information and Inference: A Journal of the IMA, 2021

DOI: 10.1093/imaiai/iaaa037

Approximate separability of symmetrically penalized least squares in high dimensions: characterization and consequences

Journal article published in 2021 by Michael Celentano
Distributing this paper is prohibited by the publisher

Full text: Unavailable

Preprint: archiving forbidden
Postprint: archiving forbidden
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

We show that the high-dimensional behavior of symmetrically penalized least squares with a possibly non-separable, symmetric, convex penalty in both (i) the Gaussian sequence model and (ii) the linear model with uncorrelated Gaussian designs nearly agrees with the behavior of least squares with an appropriately chosen separable penalty in these same models. This agreement is established by finite-sample concentration inequalities which precisely characterize the behavior of symmetrically penalized least squares in both models via a comparison to a simple scalar statistical model. The concentration inequalities are novel in their precision and generality. Our results help clarify the role non-separability can play in high-dimensional M-estimation. In particular, if the empirical distribution of the coordinates of the parameter is known, exactly or approximately, there are at most limited advantages to using non-separable, symmetric penalties over separable ones. In contrast, if the empirical distribution of the coordinates of the parameter is unknown, we argue that non-separable, symmetric penalties automatically implement an adaptive procedure, which we characterize. We also provide a partial converse which characterizes the adaptive procedures which can be implemented in this way.
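To make the separable case concrete (this is an illustrative sketch, not the paper's construction): with a separable penalty the penalized least-squares problem in the Gaussian sequence model decouples coordinate-wise, so the estimator is a scalar map applied to each observation. The example below uses the lasso penalty, whose scalar map is soft-thresholding; the observation vector, noise level, and penalty parameter are all hypothetical choices.

```python
import numpy as np

def soft_threshold(y, lam):
    # Proximal map of the separable penalty lam * ||x||_1:
    # argmin_x 0.5 * ||y - x||^2 + lam * ||x||_1 decouples across
    # coordinates, giving sign(y_i) * max(|y_i| - lam, 0) per coordinate.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Gaussian sequence model: y = theta + noise (illustrative values).
rng = np.random.default_rng(0)
theta = np.array([3.0, 0.0, -2.0, 0.0, 1.0])
y = theta + 0.1 * rng.standard_normal(theta.size)

lam = 0.5
xhat = soft_threshold(y, lam)

# Separability check: solving each scalar problem alone gives the
# same answer as solving the joint problem.
xhat_per_coord = np.array([soft_threshold(yi, lam) for yi in y])
assert np.allclose(xhat, xhat_per_coord)
```

A non-separable symmetric penalty (e.g. one depending on the sorted coordinates) would not decouple this way; the paper's point is that its high-dimensional behavior is nevertheless nearly matched by an appropriately chosen separable penalty.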