Oxford University Press, Information and Inference: A Journal of the IMA, 11(4), pp. 1389–1456, 2022
Abstract: Concentration inequalities form an essential toolkit in the study of high-dimensional statistical methods. Most of the relevant statistics literature in this regard is, however, based on the assumptions of sub-Gaussian or sub-exponential random variables/vectors. In this paper, we first bring together, through a unified exposition, various probabilistic inequalities for sums of independent random variables under much more general exponential-type (namely sub-Weibull) tail assumptions. These results extract a partly sub-Gaussian tail behavior of the sum in finite samples, matching the asymptotics governed by the central limit theorem, and are compactly represented in terms of a new Orlicz quasi-norm, the Generalized Bernstein–Orlicz norm, that typifies such tail behaviors. We illustrate the usefulness of these inequalities through the analysis of four fundamental problems in high-dimensional statistics. In the first two problems, we study the rate of convergence of the sample covariance matrix in terms of the maximum elementwise norm and the maximum $k$-sub-matrix operator norm, which are key quantities of interest in bootstrap procedures and high-dimensional structured covariance matrix estimation, as well as in high-dimensional and post-selection inference. The third example concerns the restricted eigenvalue condition, required in high-dimensional linear regression, which we verify for all sub-Weibull random vectors through a unified analysis, proving in the process a more general result related to restricted strong convexity. In the final example, we consider the Lasso estimator for linear regression and establish its rate of convergence to be generally $\sqrt{k\log p/n}$, for $k$-sparse signals, under tail assumptions (on the errors as well as the covariates) much weaker than usual, while also allowing for misspecified models and both fixed and random designs. To our knowledge, these are the first such results for the Lasso obtained in this generality. The common feature of our results across all the examples is that the convergence rates under most exponential tails match the usual (optimal) ones obtained under sub-Gaussian assumptions. Finally, we also establish some complementary results on analogous tail bounds for the suprema of empirical processes indexed by sub-Weibull variables. All our results are finite-sample in nature.
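As a rough guide to the objects named above (a sketch only; the symbols $\psi_\alpha$, $\Psi_{\alpha,L}$, $L$, $\eta$ and $t$ are not fixed by the abstract and follow standard conventions, with constants possibly differing from those in the paper): a random variable $X$ is sub-Weibull of order $\alpha > 0$ when its $\psi_\alpha$-norm
$$ \|X\|_{\psi_\alpha} := \inf\bigl\{\eta > 0 : \mathbb{E}\,\exp\bigl(|X|^{\alpha}/\eta^{\alpha}\bigr) \le 2\bigr\} $$
is finite, and the Generalized Bernstein–Orlicz norm $\|\cdot\|_{\Psi_{\alpha,L}}$ (with scaling parameter $L \ge 0$) is constructed so that finiteness of the norm yields a mixed tail bound of the form
$$ \mathbb{P}\Bigl(|X| \ge \|X\|_{\Psi_{\alpha,L}}\bigl(\sqrt{t} + L\,t^{1/\alpha}\bigr)\Bigr) \le 2e^{-t} \quad \text{for all } t \ge 0, $$
i.e., sub-Gaussian behavior for moderate deviations (matching the central limit theorem asymptotics) and a sub-Weibull tail of order $\alpha$ for large deviations.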