ORF 550 Problem Statements!
General Principles
Concentration
If $X_1, …, X_n$ are independent (or weakly dependent) random variables, then the random variable $f(X_1, …, X_n)$ is “close” to its mean $\mathbb{E}[f(X_1, …, X_n)]$ provided that the function $f(x_1, …, x_n)$ is not too “sensitive” to any of the coordinates $x_i$. (important: this only tells us under what conditions concentration occurs; it doesn’t give the mean around which we concentrate.)
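A quick numerical sketch of this principle (my own illustration, not from the course): take $f$ to be the empirical mean of $n$ independent uniforms, so changing any one coordinate moves $f$ by at most $1/n$. The low sensitivity shows up as tiny fluctuations around the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 2000
# f(x_1, ..., x_n) = mean: changing any single coordinate moves f
# by at most 1/n, so f concentrates tightly around E[f] = 1/2.
samples = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)
empirical_std = samples.std()
print(f"std of f over {trials} trials: {empirical_std:.4f}")
```

The true standard deviation here is $1/\sqrt{12n} \approx 0.009$, far smaller than the fluctuation of any single coordinate.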
Suprema
If the random process $\{X_t\}_{t\in T}$ is “sufficiently continuous,” then the magnitude of the supremum $\sup_{t \in T} X_t$ is controlled (in the sense that we have estimates from above, and in some cases also from below) by the “complexity” of the index set $T$.
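A sketch of the simplest case (my own illustration, with parameters chosen arbitrarily): for a finite index set $T$ of $n$ i.i.d. standard Gaussians, the relevant “complexity” is the log-cardinality, and $\mathbb{E}[\max_i X_i] \le \sqrt{2\log n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 2000
# For n i.i.d. N(0,1) variables, E[max] grows like sqrt(2 log n):
# the supremum is controlled by the (log-)size of the index set.
emax = {}
for n in (10, 100, 1000):
    emax[n] = rng.standard_normal((trials, n)).max(axis=1).mean()
    print(f"n={n}: E[max] ~ {emax[n]:.3f}, sqrt(2 log n) = {np.sqrt(2 * np.log(n)):.3f}")
```

Note the upper bound holds for every $n$, and the estimate is tight up to lower-order terms as $n$ grows.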
Universality
If $X_1, …, X_n$ are independent (or weakly dependent) random variables, then the expectation $\mathbb{E}[f(X_1,...,X_n)]$ is “insensitive” to the distribution of $X_1, …, X_n$ when the function $f$ is “sufficiently smooth.”
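A small demonstration of universality (my own example, not from the course): take the smooth function $f(x) = \cos\big(\sum_i x_i / \sqrt{n}\big)$ and compare its expectation under Gaussian versus Rademacher ($\pm 1$) coordinates. Both should be close to $\mathbb{E}[\cos Z] = e^{-1/2}$ for $Z \sim N(0,1)$, even though the coordinate distributions differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 500, 5000

def f(x):
    # Smooth function of the normalized sum of the coordinates.
    return np.cos(x.sum(axis=1) / np.sqrt(n))

gauss = f(rng.standard_normal((trials, n))).mean()
rademacher = f(rng.choice([-1.0, 1.0], size=(trials, n))).mean()
print(f"Gaussian: {gauss:.3f}, Rademacher: {rademacher:.3f}, e^(-1/2): {np.exp(-0.5):.3f}")
```

The two estimates agree up to Monte Carlo error: for smooth $f$, only the first two moments of the coordinates matter in the limit.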
Sharp Transitions
If $X_1, …, X_n$ are independent (or weakly dependent) events with probability $p$, then the probability of an event $f(X_1, …, X_n)$ undergoes a “sharp transition” in $p$ if $f(x_1, …, x_n)$ is monotone and depends in a “sufficiently symmetric” manner on the coordinates $x_i$. (important: only tells us under what conditions a transition occurs, and doesn’t give the value $p^*$ at which the transition happens)
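A toy example (mine, with arbitrary parameters): the majority function on $n$ i.i.d. Bernoulli($p$) bits is monotone and fully symmetric in its coordinates, and its probability jumps sharply across $p^* = 1/2$ (a threshold we happen to know in this case).

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1001, 4000
# Majority vote: monotone and symmetric, so P(majority of ones)
# transitions sharply from ~0 to ~1 as p crosses 1/2.
maj = {}
for p in (0.45, 0.49, 0.51, 0.55):
    maj[p] = ((rng.random((trials, n)) < p).sum(axis=1) > n / 2).mean()
    print(f"p={p}: P(majority of ones) ~ {maj[p]:.3f}")
```

Even a 5% move in $p$ away from the threshold pushes the probability to nearly 0 or nearly 1 once $n$ is large.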
Chapter 2 - Variance Bounds and Poincare Inequalities (Concentration)
Trivial variance bounds for vector-valued random variables suck at capturing high-dimensional concentration! We would like bounds that depend on the number of independent degrees of freedom (the effective dimension).
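To see the dimension dependence concretely (my own sketch): the Euclidean norm of a standard Gaussian vector in $\mathbb{R}^n$ has mean of order $\sqrt{n}$ but fluctuations of order $1$, so the relative fluctuation shrinks like $1/\sqrt{n}$ — exactly the effect a trivial, dimension-free bound misses.

```python
import numpy as np

rng = np.random.default_rng(0)
# ||X|| for X ~ N(0, I_n): mean ~ sqrt(n), std ~ O(1),
# so relative fluctuations shrink as the dimension grows.
stats = {}
for n in (10, 100, 1000):
    norms = np.linalg.norm(rng.standard_normal((2000, n)), axis=1)
    stats[n] = (norms.mean(), norms.std())
    print(f"n={n}: mean ~ {stats[n][0]:.2f}, std ~ {stats[n][1]:.2f}")
```

Each of the $n$ independent coordinates contributes a degree of freedom, and the concentration improves accordingly.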
2.1 - Tensorization of Variance
We present tensorization below, and apply it to the variance, which is a quantity that tensorizes.
Def: We say that a quantity/statistic of a function $f(X_1, …, X_n)$ of independent random variables tensorizes if we can bound it by one-dimensional bounds for functions $f_i(X_i)$ of each random variable separately. Quantities that tensorize behave well in high dimension!
We can prove that variance is such a quantity, leading to the following useful concentration bound straight off the rip. There’s also a corollary that uses the $\inf$ and $\sup$ of the $f_i$’s as bounded differences to bound the variance as well. This result is sharp in that equality holds for linear functions $f$.
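For reference, the standard formulation of the tensorization inequality being described here (notation may differ slightly from the course's) is

$$\mathrm{Var}\big(f(X_1, \dots, X_n)\big) \le \sum_{i=1}^n \mathbb{E}\big[\mathrm{Var}_i f\big],$$

where $\mathrm{Var}_i$ denotes the variance with respect to $X_i$ only, with all other coordinates held fixed. The corollary mentioned above then bounds each one-dimensional variance by the squared range of $f$ in that coordinate:

$$\mathrm{Var}(f) \le \frac{1}{4} \sum_{i=1}^n \mathbb{E}\Big[\big(\sup_z f(X_1, \dots, z, \dots, X_n) - \inf_z f(X_1, \dots, z, \dots, X_n)\big)^2\Big].$$

For a linear function $f(x) = \sum_i a_i x_i$ the cross terms in the variance vanish by independence, which is why equality holds in the tensorization bound.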