1 Kjos-Hanssen (2010)
Excerpts from the paper The probability distribution as a computational resource for randomness testing.
1.1 Introduction
The fundamental idea of statistics is that by repeated experiment we can learn the underlying distribution of the phenomenon under investigation. In this paper we partially quantify the amount of randomness required to carry out this idea. We first show that ordinary Martin-Löf randomness with respect to the distribution is sufficient. Somewhat surprisingly, however, the picture is more complicated when we consider a weaker form of randomness where the tests are effective, rather than merely effective relative to the distribution. We show that such Hippocratic randomness actually coincides with ordinary randomness in that the same outcomes are random for each notion, but the corresponding test concepts do not coincide: while there is a universal test for ordinary ML-randomness, there is none for Hippocratic ML-randomness.
For concreteness we will focus on the classical Bernoulli experiment; since the statistical tools we need are limited to Chebyshev’s inequality and the strong law of large numbers, our results also apply in the general situation of repeated experiments in statistics, where an arbitrary sequence of independent and identically distributed random variables is studied.
When using randomness as a computational resource, the most convenient underlying probability distribution may be that of a fair coin. In many cases, fairness of the proverbial coin may be only approximate. Imagine that an available resource generates randomness with respect to a distribution for which the probability of heads is \(p\ne 1/2\). It is natural to assume that \(p\) is not a computable number if the coin flips are generated with contributions from a physical process such as the flipping of an actual coin. The non-computability of \(p\) matters strongly if an infinite sequence of coin flips is to be performed. In that case, the gold standard of algorithmic randomness is Martin-Löf randomness, which essentially guarantees that no algorithm (using arbitrary resources of time and space) can detect any regularities in the sequence. If \(p\) is non-computable, then \(p\) may itself be a valuable resource, and so the question arises whether a “truly random” sequence should look random even to an adversary equipped with the distribution as a resource. In this article we will show that the question is to some extent moot, as these types of randomness coincide. On the other hand, while there is a universal test for randomness in one case, in the other there is not. This article can be seen as a follow-up to Martin-Löf’s paper where he introduced his notion of algorithmic randomness and proved results for Bernoulli measures [10].
It might seem that when testing for randomness, it is essential to have access to the distribution we are testing randomness for. On the other hand, perhaps if the results of the experiment are truly random we should be able to use them to discover the distribution for ourselves, and then once we know the distribution, test the results for randomness. However, if the original results are not really random, we may “discover” the wrong distribution. We show that there are tests that can be effectively applied, such that if the results are random then the distribution can be discovered, and the results will then turn out to be random even to someone who knows the distribution. While these tests can individually be effectively applied, they cannot be effectively enumerated as a family. On the other hand, there is a single such test (due to Martin-Löf) that will reveal whether the results are random for some (Bernoulli) distribution, and another (introduced in this paper) that if so will reveal that distribution.
In other words, one can effectively determine whether randomness for some distribution obtains, and if so determine that distribution. There is no need to know the distribution ahead of time to test for randomness with respect to an unknown distribution. If we suspect that a sequence is random with respect to a measure given by the value of a parameter (in an effective family of measures), there is no need to know the value of that parameter, as we can first use Martin-Löf’s idea to test for randomness with respect to some value of the parameter, and then use the fundamental idea of statistics to find that parameter. Further effective tests can be applied to compare that parameter \(q\) with rational numbers near our target parameter \(p\), leading to the conclusion that if all effective tests for randomness with respect to parameter \(p\) are passed, then all tests having access to \(p\) as a resource will also be passed. But we need the distribution to know which effective tests to apply. Thus we show that randomness testing with respect to a target distribution \(p\) can be done by two agents each having limited knowledge: agent 1 has access to the distribution \(p\), and agent 2 has access to the data \(X\). Agent 1 tells agent 2 which tests to apply to \(X\).
The more specific point is that the information about the distribution \(p\) required for randomness testing can be encoded in a set of effective randomness tests; and the encoding is intrinsic in the sense that the ordering of the tests does not matter, and further tests may be added: passing any collection of tests that include these is enough to guarantee randomness. From a syntactic point of view, whereas randomness with respect to \(p\) is naturally a \(\Sigma ^0_2(p)\) class, our results show that it is actually an intersection of \(\Sigma ^0_{2}\) classes.
The Bernoulli measure \(\mu_{p}\) is defined by the stipulation that for each \(n\in \omega =\{ 0,1,2,\ldots \} \),
\[ \mu_p\bigl(\{X\in 2^\omega : X(n)=1\}\bigr)=p, \]
and \(X(0),X(1),X(2),\ldots \) are mutually independent random variables.
If \(X\) is a \(\{ 0,1\} \)-valued random variable such that \(\mathbb P(X=1) = p\) then \(X\) is called a Bernoulli(\(p\)) random variable.
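For readers who want to experiment, here is a minimal Python sketch (added to these excerpts, not part of the paper) of the underlying process: an i.i.d. sequence of Bernoulli(\(p\)) flips whose empirical frequency approximates the bias \(p\). The helper name `bernoulli_flips`, the value of \(p\), and the sample size are arbitrary choices for the example.

```python
import random

def bernoulli_flips(p, n, seed=0):
    """n independent Bernoulli(p) flips: 1 ("heads") with probability p, else 0."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# The empirical frequency of heads approximates the (possibly unknown) bias p.
p = 0.7310585786  # a stand-in bias; a genuinely non-computable p cannot, of course, be typed in
flips = bernoulli_flips(p, 100_000)
print(sum(flips) / len(flips))  # close to 0.731
```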
A \(\mu_p\)-ML-randomness test is a sequence \(\{ U^p_n\} _n\) that is uniformly \(\Sigma ^0_1(p)\) with \(\mu_p(U^p_n)\le 2^{-n}\), where \(2^{-n}\) may be replaced by any computable function that goes to zero effectively.
A \(\mu_p\)-ML-randomness test is Hippocratic if there is a \(\Sigma ^0_1\) class \(S\subseteq 2^\omega \times \omega \) such that \(S=\{ (X,n):X\in U_n^p\} \). Thus, \(U_n=U_n^p\) does not depend on \(p\) and is uniformly \(\Sigma ^0_1\). If \(X\) passes all \(\mu_p\)-randomness tests (that is, \(X\not\in\bigcap_n U^p_n\) for each such test) then \(X\) is \(\mu_p\)-random. If \(X\) passes all Hippocratic tests then \(X\) is Hippocrates \(\mu_p\)-random.
To explain the terminology: like the ancient medic Hippocrates we are not consulting the oracle of Delphi (\(p\)) but rather looking for “natural causes”. This level of randomness recently arose in the study of randomness extraction from subsets of random sets [8].
We will often write “\(\mu _p\)-random” instead of “\(\mu _p\)-ML-random”, as we work in the Martin-Löf mode of randomness throughout, except when discussing a conjecture at the end of this paper.
1.2 Chebyshev’s inequality
We develop this basic inequality from scratch here, in order to emphasize how generally it holds. For an event \(A\) in a probability space, we let \(\mathbf 1_A\), the indicator function of \(A\), equal \(1\) if \(A\) occurs, and \(0\) otherwise. The expectation of a discrete random variable \(X\) is
\[ \mathbb E(X)=\sum_{\omega} X(\omega)\,\mathbb P(\omega), \]
where \(\mathbb {P}\) denotes probability and the sum is over all outcomes \(\omega\) in the sample space. Thus \(\mathbb {E}(X)\) is the average value of \(X\) over repeated experiments. It is immediate that
\[ \mathbb E(\mathbf 1_A)=\mathbb P(A). \]
Next we observe that the random variable that is equal to \(a\) when a nonnegative random variable \(X\) satisfies \(X\ge a\), and \(0\) otherwise, is always dominated by \(X\). That is,
\[ a\cdot\mathbf 1_{\{X\ge a\}}\le X. \]
Therefore, taking expectations of both sides,
\[ a\cdot\mathbb P(X\ge a)\le\mathbb E(X). \]
In particular, for any random variable \(X\) with \(\mathbb {E}(X)=\mu \in \mathbb R\) and variance \(\sigma^2:=\mathbb E\bigl((X-\mu)^2\bigr)\), we have
\[ \mathbb P\bigl(|X-\mu|\ge a\bigr)=\mathbb P\bigl((X-\mu)^2\ge a^2\bigr)\le\frac{\mathbb E\bigl((X-\mu)^2\bigr)}{a^2}, \]
so
\[ \mathbb P\bigl(|X-\mu|\ge a\bigr)\le\frac{\sigma^2}{a^2}. \]
If we let \(k\in \omega \) and replace \(a\) by \(k\sigma \), then
\[ \mathbb P\bigl(|X-\mu|\ge k\sigma\bigr)\le\frac{\sigma^2}{(k\sigma)^2}=\frac1{k^2}. \]
This is Chebyshev’s inequality, which in words says that the probability of deviating from the mean \(\mu \) by at least \(k\) standard deviations \(\sigma \) is at most \(1/k^2\), i.e., rather small.
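As a quick illustration (added here, not part of the paper), the following Python sketch estimates the left-hand side of Chebyshev’s inequality for the number of heads in \(n\) Bernoulli(\(p\)) flips and compares it with the bound \(1/k^2\); the helper name `chebyshev_check` and all parameter values are arbitrary choices for the example.

```python
import random

def chebyshev_check(p=0.3, n=400, k=3, trials=5000, seed=1):
    """Estimate P(|X - mu| >= k*sigma) for X = number of heads in n Bernoulli(p) flips,
    and return it together with the Chebyshev bound 1/k**2."""
    rng = random.Random(seed)
    mu = n * p
    sigma = (n * p * (1 - p)) ** 0.5
    hits = sum(1 for _ in range(trials)
               if abs(sum(1 for _ in range(n) if rng.random() < p) - mu) >= k * sigma)
    return hits / trials, 1 / k ** 2

print(chebyshev_check())  # the empirical tail probability sits well below the bound 1/9
```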
1.3 Results for ordinary randomness
We first prove a version of the phenomenon that for samples of sufficiently fast growing size, the sample averages almost surely converge quickly to the mean.
Proposition 3. Consider a sequence \(Y=\{ Y_n\} _{n\in \omega }\) of independent Bernoulli(\(p\)) random variables, with the sample average
\[ \overline Y_n:=\frac1n\sum_{i=0}^{n-1}Y_i. \]
Let \(N(b)=2^{3b-1}\) and let
\[ U_d=\bigl\{Y:\exists b\ge d\;\;|\overline Y_{N(b)}-p|\ge 2^{-b}\bigr\}. \]
Then \(U_d\) is uniformly \(\Sigma ^0_1(p)\), and \(\mu _p(U_d)\le 2^{-d}\), i.e., \(\{ U_d\} _{d\in \omega }\) is a \(\mu _p\)-ML-test.
The idea of the proof is to use Chebyshev’s inequality and the fact that the variance of a Bernoulli(\(p\)) random variable is bounded (in fact, bounded by \(1/4\)).
The fact that \(U_d\) is \(\Sigma ^0_1(p)\) is immediate, so we prove the bound on its \(\mu _p\)-measure. We have
\[ \sigma^2(\overline Y_n)=\frac{\sigma^2}{n}, \]
where \(\sigma ^2=p(1-p)\le 1/4\) is the variance of \(Y_0\) and \(\sigma ^2(\overline Y_n)\) denotes the variance of \(\overline Y_n\). Thus \(\sigma \le 1/2\), and by Chebyshev’s inequality (which holds for any real \(k>0\)),
\[ \mathbb P\Bigl(|\overline Y_n-p|\ge \frac{k\sigma}{\sqrt n}\Bigr)\le\frac1{k^2}, \]
so
\[ \mathbb P\Bigl(|\overline Y_n-p|\ge \frac{k}{2\sqrt n}\Bigr)\le\frac1{k^2}. \]
Now, taking \(k=k_b:=2^{(b+1)/2}\), we claim that \(2^{-b}\ge \frac{k}{2\sqrt{n}}\) by taking \(n\) large enough as a function of \(b\):
\[ 2^{-b}\ge\frac{k}{2\sqrt n}\iff 2\sqrt n\ge k\,2^{b}=2^{(3b+1)/2}\iff n\ge 2^{3b-1}. \]
Thus, if \(n\ge N(b):=2^{3b-1}\),
\[ \mathbb P\bigl(|\overline Y_n-p|\ge 2^{-b}\bigr)\le\frac1{k_b^2}=2^{-(b+1)}, \]
so
\[ \mu_p(U_d)\le\sum_{b\ge d}\mathbb P\bigl(|\overline Y_{N(b)}-p|\ge 2^{-b}\bigr)\le\sum_{b\ge d}2^{-(b+1)}=2^{-d}. \]
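The following Monte Carlo sketch (an illustration added here, not from the paper) samples finitely many prefix averages \(\overline Y_{N(b)}\) of a single Bernoulli(\(p\)) sequence and estimates the measure of a finite approximation of \(U_d\); the cutoff `b_max`, the parameter values, and the helper names are all choices made for the example.

```python
import random

def prefix_averages(rng, p, checkpoints):
    """Sample one Bernoulli(p) sequence and return {n: Ybar_n} at the given prefix lengths."""
    n_max = max(checkpoints)
    wanted = set(checkpoints)
    total, avgs = 0, {}
    for i in range(1, n_max + 1):
        total += 1 if rng.random() < p else 0
        if i in wanted:
            avgs[i] = total / i
    return avgs

def lands_in_U_d(rng, p, d, b_max):
    """Does a freshly sampled Y fall into the finite approximation of U_d, i.e. does some
    b in [d, b_max] satisfy |Ybar_{N(b)} - p| >= 2**-b, where N(b) = 2**(3*b - 1)?"""
    N = {b: 2 ** (3 * b - 1) for b in range(d, b_max + 1)}
    avgs = prefix_averages(rng, p, N.values())
    return any(abs(avgs[N[b]] - p) >= 2 ** -b for b in N)

rng = random.Random(0)
p, d, trials = 0.3, 2, 2000
freq = sum(lands_in_U_d(rng, p, d, b_max=4) for _ in range(trials)) / trials
print(freq, "should be at most", 2 ** -d)
```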
The following result in a sense encapsulates the essence of statistics.
Theorem 4. If \(Y\) is \(\mu_p\)-ML-random then \(Y\) Turing computes \(p\).
We may assume \(p\) is not computable, else there is nothing to prove; in particular we may assume \(p\) is not a dyadic rational.
Let \(\{ U_d\} _{d\in \omega }\) be as in Proposition 3. Since \(Y\) is \(\mu _p\)-random, \(Y\not\in \cap _d U_d\), so fix \(d\) with \(Y\not\in U_d\). Then for all \(b\ge d\), we have
\[ |\overline Y_{N(b)}-p|<2^{-b},\qquad(*) \]
where \(N(b)=2^{3b-1}\).
If the real number \(p\) is represented as a member of \(2^\omega \) via
\[ p=\sum_{n\in\omega}p_n\,2^{-n-1},\qquad p_n\in\{0,1\}, \]
that is, \(p=.p_0p_1p_2\cdots\) in binary notation, then we have to define a Turing functional \(\Psi _d\) such that \(p_n=\Psi _d^Y(n)\).
We pick \(b\ge n+1\) such that \(\overline Y_{N(b)}=.y_0\cdots y_n\cdots \) is not of either of the forms
\[ .y_0\cdots y_n\,1^{k}\cdots\qquad\text{or}\qquad .y_0\cdots y_n\,0^{k}\cdots,\qquad k=b-n-1, \]
where as usual \(1^k\) denotes a string of \(k\) ones. Since \(p\) is not a dyadic rational, such a \(b\) exists. Then by (*) it must be that the bits \(y_0\, \cdots \, y_{n}\) are the first \(n+1\) bits of \(p\). In particular, \(y_n=p_n\). So we let \(\Psi _d^Y(n)=y_n\).
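To make the functional \(\Psi_d\) concrete, here is a small Python sketch (added for illustration; not the paper’s code). It recovers the binary digits of \(p\) from approximations guaranteed to lie within \(2^{-b}\) of \(p\), by searching for a \(b\) whose expansion avoids the two excluded forms. In the paper the approximations are the sample averages \(\overline Y_{N(b)}\); here a truncation of an exactly known rational \(p\) stands in for them, and the names `bits_of`, `bit_of_p`, and `approx` are invented for the example.

```python
from fractions import Fraction

def bits_of(x, m):
    """First m binary digits of x in [0,1) after the point."""
    digits = []
    for _ in range(m):
        x *= 2
        d = int(x)          # next binary digit, 0 or 1
        digits.append(d)
        x -= d
    return digits

def bit_of_p(n, approx):
    """Recover digit n of p from approximations with |approx(b) - p| < 2**-b, as Psi_d does:
    search for b whose digits in positions n+1 .. b-1 are neither all zeros nor all ones."""
    b = n + 3
    while True:
        digits = bits_of(approx(b), b)
        tail = digits[n + 1:]
        if 0 in tail and 1 in tail:
            return digits[n]
        b += 1

# Toy oracle: in the paper the approximations are the sample averages Ybar_{N(b)};
# here a truncation of an exactly known rational p plays that role (same guarantee).
p = Fraction(1, 3)
approx = lambda b: Fraction(int(p * 2 ** (3 * b - 1)), 2 ** (3 * b - 1))
print([bit_of_p(n, approx) for n in range(8)])  # digits of 1/3 in binary: 0, 1, 0, 1, ...
```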
1.4 Hippocratic results
In the last section we made it too easy for ourselves; now we will obtain the same results assuming only Hippocratic randomness.
Theorem 5. There is a Hippocratic \(\mu _p\)-test such that if \(Y\) passes this test then \(Y\) computes an accumulation point \(q\) of the sequence of sample averages
\[ \{\overline Y_{N(b)}\}_{b\in\omega}. \]
The point is that the usual proof that each convergent sequence is Cauchy gives a \(\Sigma ^0_1\) class that has small \(\mu _p\)-measure for all \(p\) simultaneously. Namely, let
\[ V_d=\bigl\{Y:\exists a,b\ge d\;\;|\overline Y_{N(a)}-\overline Y_{N(b)}|\ge 2^{-a}+2^{-b}\bigr\}. \]
Then \(\{ V_d\} _{d\in \omega }\) is uniformly \(\Sigma ^0_1\). Recall from Proposition 3 that we defined
\[ U_d=\bigl\{Y:\exists b\ge d\;\;|\overline Y_{N(b)}-p|\ge 2^{-b}\bigr\}. \]
If there is a \(p\) such that \(|\overline Y_{N(b)}-p|{\lt}2^{-b}\) for all \(b\ge d\), then
\[ |\overline Y_{N(a)}-\overline Y_{N(b)}|\le|\overline Y_{N(a)}-p|+|p-\overline Y_{N(b)}|<2^{-a}+2^{-b} \]
for all \(a,b\ge d\); thus we have
\[ V_d\subseteq U_d, \]
and therefore
\[ \mu_p(V_d)\le\mu_p(U_d)\le 2^{-d} \]
for all \(p\). Thus if \(Y\) is Hippocrates \(\mu _p\)-random then \(Y\not\in V_d\) for some \(d\).
We next note that, still assuming \(Y\not\in V_d\), for any numbers \(c{\gt}b\ge d\),
\[ |\overline Y_{N(c)}-\overline Y_{N(b)}|<2^{-c}+2^{-b}\le 2^{-(b-1)}, \]
so \(\{ \overline Y_{N(c)}\} _{c\ge d}\) will remain within \(2^{-(b-1)}\) of \(\overline Y_{N(b)}\) for all \(c{\gt}b\). That is, \(\{ \overline Y_{N(n)}\} _{n\ge d}\) is a Cauchy sequence (for each \(b\ge d\) and all \(c,c'\ge b\) we have \(|\overline Y_{N(c)}-\overline Y_{N(c')}|<2^{-c}+2^{-c'}\le 2^{-(b-1)}\)), hence \(q:=\lim _n \overline Y_{N(n)}\) exists. Write \(q=.q_0q_1q_2\cdots \). Then, letting \(c\to\infty\) above,
\[ |\overline Y_{N(b+1)}-q|\le 2^{-(b+1)}<2^{-b}\qquad\text{for all }b\ge d, \]
so if we define \(\Theta _d\) as \(\Psi _d\) in Theorem 4 except with \(N(\cdot )\) replaced by \(N(\cdot +1)\), then
\[ \Theta_d^Y(n)=q_n\qquad\text{for all }n\in\omega, \]
and so \(Y\) computes \(q\) using the Turing reduction \(\Theta _d\).
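The distinguishing feature of \(V_d\) is that it can be evaluated without consulting \(p\). The sketch below (an added illustration with invented helper names, checking only finitely many indices) samples the prefix averages \(\overline Y_{N(b)}\) of one Bernoulli(\(p\)) sequence and then applies the \(p\)-free Cauchy criterion defining \(V_d\).

```python
import random

def checkpoint_averages(rng, p, d, b_hi):
    """Prefix averages Ybar_{N(b)}, N(b) = 2**(3*b-1), of one Bernoulli(p) sequence, for b = d..b_hi."""
    targets = {2 ** (3 * b - 1): b for b in range(d, b_hi + 1)}
    total, avgs = 0, {}
    for i in range(1, max(targets) + 1):
        total += 1 if rng.random() < p else 0
        if i in targets:
            avgs[targets[i]] = total / i
    return avgs  # keyed by b

def in_V_d(avgs, d, b_hi):
    """Witness membership in V_d: some pair a, b >= d with
    |Ybar_{N(a)} - Ybar_{N(b)}| >= 2**-a + 2**-b.  Note that p is never consulted."""
    return any(abs(avgs[a] - avgs[b]) >= 2 ** -a + 2 ** -b
               for a in range(d, b_hi + 1) for b in range(d, b_hi + 1))

rng = random.Random(2)
avgs = checkpoint_averages(rng, p=0.3, d=2, b_hi=4)
print(in_V_d(avgs, d=2, b_hi=4))  # typically False: the checkpoint averages are already Cauchy
```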
To argue that the accumulation point \(q\) of Theorem 5 is actually equal to \(p\) under the weak assumption of Hippocratic randomness, we need:
An analysis of the strong law of large numbers.
Let \(\{ X_n\} _{n\in \omega }\) be independent and identically distributed random variables with mean 0, and let \(S_n=\sum _{i=0}^{n-1} X_i\). Then \(S_n^4\) will be a linear combination (with binomial coefficients as coefficients) of the terms
\[ \sum_i X_i^4,\qquad \sum_{i\ne j} X_i^3X_j,\qquad \sum_{i\ne j} X_i^2X_j^2,\qquad \sum_{i\ne j\ne k} X_i^2X_jX_k,\qquad \sum X_iX_jX_kX_\ell\ \ (i,j,k,\ell\text{ distinct}), \]
where the indices range over \(\{0,\ldots,n-1\}\). Since \(\mathbb {E}(X_i)=0\), and \(\mathbb {E}(X_i^aX_j^b)=\mathbb {E}(X_i^a)\mathbb {E}(X_j^b)\) by independence, and each \(X_i\) is identically distributed with \(X_1\) and \(X_2\), we get
\[ \mathbb E(S_n^4)=n\,\mathbb E(X_1^4)+3n(n-1)\,\mathbb E(X_1^2)^2, \]
since all terms containing a factor \(X_i\) to the first power have expectation \(0\). Since \(0\le \sigma ^2(X_1^2) = \mathbb {E}(X_1^4)-\mathbb {E}(X_1^2)^2\), this is (writing \(K:=\mathbb {E}(X_1^4)\)) at most
\[ nK+3n(n-1)K\le 3n^2K, \]
so \(\mathbb {E}(S_n^4/n^4) \le \frac{3K}{n^2}. \) Now
\[ \mathbf 1_{\{\exists n\ge N\;|S_n/n|\ge a\}}\le\sum_{n\ge N}\mathbf 1_{\{|S_n/n|\ge a\}}\le\sum_{n\ge N}\frac{(S_n/n)^4}{a^4} \]
surely, so (as in the proof of Chebyshev’s inequality)
\[ \mathbb P\bigl(\exists n\ge N\;\;|S_n/n|\ge a\bigr)\le\sum_{n\ge N}\frac{\mathbb E(S_n^4/n^4)}{a^4}, \]
giving
\[ \mathbb P\bigl(\exists n\ge N\;\;|S_n/n|\ge a\bigr)\le\frac{3K}{a^4}\sum_{n\ge N}\frac1{n^2}. \]
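As a sanity check on the estimate \(\mathbb E(S_n^4)\le 3n^2K\) (an illustration added here, with arbitrary parameter values and an invented helper name), the following sketch estimates \(\mathbb E(S_n^4)\) by simulation for centered Bernoulli variables and reports its ratio to \(3n^2K\).

```python
import random

def fourth_moment_ratio(p=0.3, n=200, trials=20000, seed=3):
    """Monte Carlo check of E(S_n^4) <= 3 * n**2 * K for X_i = Y_i - p with Y_i ~ Bernoulli(p),
    where K = E(X_1^4) = p*(1-p)**4 + (1-p)*p**4."""
    rng = random.Random(seed)
    K = p * (1 - p) ** 4 + (1 - p) * p ** 4
    total = 0.0
    for _ in range(trials):
        s = sum((1 if rng.random() < p else 0) - p for _ in range(n))
        total += s ** 4
    return (total / trials) / (3 * n ** 2 * K)

print(fourth_moment_ratio())  # comfortably below 1
```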
We now apply this to \(X_n=Y_n-\mathbb {E}(Y_n) = Y_n - p\) (so that \(K=K_p\)). Note that (writing \(\overline p=1-p\))
\[ K_p=\mathbb E\bigl((Y_0-p)^4\bigr)=p\,\overline p^{\,4}+\overline p\,p^4=p\,\overline p\,(\overline p^{\,3}+p^3)\le p\,\overline p\le\frac14, \]
so \(\mathbb {P}(\exists n\ge N\, \, |\overline Y_n-p|\ge a)\) is bounded by
\[ \frac{3K_p}{a^4}\sum_{n\ge N}\frac1{n^2}\le\frac{3}{4a^4}\sum_{n\ge N}\frac1{n^2}, \]
which goes to \(0\) effectively as \(N\to\infty\).
This bound suffices to obtain our desired result:
Theorem 6. If \(Y\) is Hippocrates \(\mu_p\)-random then \(Y\) satisfies the Strong Law of Large Numbers for \(p\).
Let \(q_1\), \(q_2\) be rational numbers with \(q_1{\lt}p{\lt}q_2\). Let
\[ W_N=\bigl\{Y:\exists n\ge N\;\;\overline Y_n\notin(q_1,q_2)\bigr\}. \]
Then \(\{ W_N\} _{N\in \omega }\) is uniformly \(\Sigma ^0_1\), and \(\mu _p W_N\rightarrow 0\) effectively:
\[ \mu_p W_N\le\mathbb P\bigl(\exists n\ge N\;\;|\overline Y_n-p|\ge a\bigr)\le\frac{3K_p}{a^4}\sum_{n\ge N}\frac1{n^2},\qquad\text{where }a:=\min(p-q_1,\,q_2-p)>0. \]
Thus if \(Y\) is Hippocrates \(\mu _p\)-random then \(Y\not\in \cap _n W_n\), i.e., \(\overline Y_n\) is eventually always in the interval \((q_1,q_2)\).
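To see how the bound is used, here is a small Python sketch (added for illustration; the helper names and the crude tail estimate \(\sum_{n\ge N}1/n^2\le 1/(N-1)\) are choices made here, not notation from the paper). It evaluates the fourth-moment bound on \(\mathbb P(\exists n\ge N\;|\overline Y_n-p|\ge a)\) and searches for an \(N\) making it smaller than a given \(\varepsilon\).

```python
def tail_bound(p, a, N):
    """The fourth-moment bound on P(exists n >= N with |Ybar_n - p| >= a):
    3 * K_p / a**4 * sum_{n >= N} 1/n**2, estimated crudely via sum_{n>=N} 1/n**2 <= 1/(N-1)."""
    K_p = p * (1 - p) ** 4 + (1 - p) * p ** 4
    return 3 * K_p / a ** 4 / (N - 1)

def N_for(p, a, eps):
    """A sample size N (found by doubling) after which, with probability at least 1 - eps,
    every sample average Ybar_n with n >= N stays within a of p (per this bound)."""
    N = 2
    while tail_bound(p, a, N) > eps:
        N *= 2
    return N

print(N_for(p=0.3, a=0.01, eps=0.01))  # how far out the averages are pinned down, per this bound
```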
Corollary 7. If \(Y\) is Hippocrates \(\mu_p\)-random then \(Y\) Turing computes \(p\).
Indeed, by Theorem 6 applied to each pair of rationals \(q_1<p<q_2\), we have \(\overline Y_n\to p\), so the accumulation point \(q\) provided by Theorem 5 equals \(p\).
Note that the randomness test in Theorem 6 depends on the pair \((q_1,q_2)\), so we actually needed infinitely many tests to guarantee that \(Y\) computes \(p\). This is no coincidence. Let \(Y\ge _T p\) abbreviate the statement that \(Y\) Turing computes \(p\), i.e., \(p\) is Turing reducible to \(Y\).
Theorem 8. For all \(p\), if there is a Hippocratic \(\mu_p\)-test \(\{ U_n\} _{n\in \omega }\) such that \(\{ X:X\not\ge _T p\} \subseteq \cap _n U_n\), then \(p\) is computable.
Let \(\{ U_n\} _{n\in \omega }\) be such a test. By standard computability-theoretic basis theorems, the complement \(U_1^c\) (a nonempty \(\Pi^0_1\) class, since \(\mu_p(U_1)\le 1/2\)) has a low member \(X_1\) and a hyperimmune-free member \(X_2\). By assumption \(X_1\ge _T p\) and \(X_2\ge _T p\), so \(p\) is both low and hyperimmune-free, hence by another basic result of computability theory, \(p\) is computable.
Corollary 9. There is no universal Hippocratic \(\mu_p\)-test, unless \(p\) is computable.
If there is such a test \(\{ U_n\} _{n\in \omega }\), then by Corollary 7 every \(X\) with \(X\not\ge_T p\) fails it, i.e., \(\{ X:X\not\ge_T p\}\subseteq\cap_n U_n\); so \(\{ U_n\} _{n\in \omega }\) is a test as in the hypothesis of Theorem 8, whence \(p\) is computable.