1 Kjos-Hanssen (2010)
Excerpts from the paper The probability distribution as a computational resource for randomness testing.
1.1 Introduction
The fundamental idea of statistics is that by repeated experiment we can learn the underlying distribution of the phenomenon under investigation. In this paper we partially quantify the amount of randomness required to carry out this idea. We first show that ordinary Martin-Löf randomness with respect to the distribution is sufficient. Somewhat surprisingly, however, the picture is more complicated when we consider a weaker form of randomness where the tests are effective, rather than merely effective relative to the distribution. We show that such Hippocratic randomness actually coincides with ordinary randomness in that the same outcomes are random for each notion, but the corresponding test concepts do not coincide: while there is a universal test for ordinary ML-randomness, there is none for Hippocratic ML-randomness.
For concreteness we will focus on the classical Bernoulli experiment, although as the statistical tools we need are limited to Chebyshev’s inequality and the strong law of large numbers, our result works also in the general situation of repeated experiments in statistics, where an arbitrary sequence of independent and identically distributed random variables is studied.
When using randomness as a computational resource, the most convenient underlying probability distribution may be that of a fair coin. In many cases, fairness of the proverbial coin may be only approximate. Imagine that an available resource generates randomness with respect to a distribution for which the probability of heads is some real number $p$. It is natural to assume that $p$ is not a computable number if the coin flips are generated with contributions from a physical process such as the flipping of an actual coin. The non-computability of $p$ matters strongly if an infinite sequence of coin flips is to be performed. In that case, the gold standard of algorithmic randomness is Martin-Löf randomness, which essentially guarantees that no algorithm (using arbitrary resources of time and space) can detect any regularities in the sequence. If $p$ is non-computable, it is possible that $p$ may itself be a valuable resource, and so the question arises whether a “truly random” sequence should look random even to an adversary equipped with the distribution as a resource. In this article we will show that the question is to some extent moot, as these types of randomness coincide. On the other hand, while there is a universal test for randomness in one case, in the other there is not. This article can be seen as a follow-up to Martin-Löf’s paper where he introduced his notion of algorithmic randomness and proved results for Bernoulli measures [10].
It might seem that when testing for randomness, it is essential to have access to the distribution we are testing randomness for. On the other hand, perhaps if the results of the experiment are truly random we should be able to use them to discover the distribution for ourselves, and then once we know the distribution, test the results for randomness. However, if the original results are not really random, we may “discover” the wrong distribution. We show that there are tests that can be effectively applied, such that if the results are random then the distribution can be discovered, and the results will then turn out to be random even to someone who knows the distribution. While these tests can individually be effectively applied, they cannot be effectively enumerated as a family. On the other hand, there is a single such test (due to Martin-Löf) that will reveal whether the results are random for some (Bernoulli) distribution, and another (introduced in this paper) that if so will reveal that distribution.
In other words, one can effectively determine whether randomness for some distribution obtains, and if so determine that distribution. There is no need to know the distribution ahead of time to test for randomness with respect to an unknown distribution. If we suspect that a sequence is random with respect to a measure given by the value of a parameter $p$ (in an effective family of measures), there is no need to know the value of that parameter, as we can first use Martin-Löf’s idea to test for randomness with respect to some value of the parameter, and then use the fundamental idea of statistics to find that parameter. Further effective tests can be applied to compare that parameter with rational numbers near our target parameter $p$, leading to the conclusion that if all effective tests for randomness with respect to parameter $p$ are passed, then all tests having access to $p$ as a resource will also be passed. But we need the distribution to know which effective tests to apply. Thus we show that randomness testing with respect to a target distribution $\mu_p$ can be done by two agents each having limited knowledge: agent 1 has access to the distribution $\mu_p$, and agent 2 has access to the data $X$. Agent 1 tells agent 2 which tests to apply to $X$.
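To make the division of labor concrete, here is a small Python sketch of the two agents, under the assumption that the underlying family is the Bernoulli family: agent 1, who knows $p$, issues effective tests in the form of shrinking rational intervals around $p$; agent 2, who sees only the data, checks the sample averages against each issued interval. The function names, the particular intervals, the burn-in stage and the finite truncation are all illustrative choices, not part of the paper.

    import random

    def agent1_tests(p, count=5):
        """Agent 1 knows p and issues effective tests: intervals (q1, q2) with
        q1 < p < q2, shrinking around p (an illustrative choice of tests)."""
        return [(p - 2.0 ** -j, p + 2.0 ** -j) for j in range(1, count + 1)]

    def agent2_applies(data, tests, burn_in=10000):
        """Agent 2 sees only the data.  For each issued test it checks whether the
        running sample average stays inside the interval from stage burn_in on.
        (A genuine test inspects all n; here only a finite prefix is available.)"""
        verdicts = []
        for q1, q2 in tests:
            total, passed = 0, True
            for n, bit in enumerate(data, start=1):
                total += bit
                if n >= burn_in and not (q1 < total / n < q2):
                    passed = False
                    break
            verdicts.append(passed)
        return verdicts

    random.seed(0)
    p = 0.7234977  # stand-in for a non-computable bias
    data = [1 if random.random() < p else 0 for _ in range(200000)]
    print(agent2_applies(data, agent1_tests(p)))  # typically all True for mu_p-distributed data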
The more specific point is that the information about the distribution required for randomness testing can be encoded in a set of effective randomness tests; and the encoding is intrinsic in the sense that the ordering of the tests does not matter, and further tests may be added: passing any collection of tests that includes these is enough to guarantee randomness. From a syntactic point of view, whereas randomness with respect to $\mu_p$ is naturally a $\Sigma^0_2(p)$ class, our results show that it is actually an intersection of $\Sigma^0_2$ classes.
Definition 1
The Bernoulli measure $\mu_p$ is defined by the stipulation that for each $n \in \omega$,
$$\mu_p(\{X : X(n) = 1\}) = p, \qquad \mu_p(\{X : X(n) = 0\}) = 1 - p,$$
and the coordinates $X(n)$, $n \in \omega$, are mutually independent random variables.
If $Y$ is a $\{0,1\}$-valued random variable such that $\mathbb{P}(Y = 1) = p$ then $Y$ is called a Bernoulli($p$) random variable.
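For concreteness, a minimal sketch of sampling an initial segment of a sequence distributed according to $\mu_p$; the particular bias and seed are arbitrary stand-ins.

    import random

    def sample_mu_p_prefix(p, length, seed=None):
        """Initial segment X(0), ..., X(length-1) of a sequence distributed according
        to the Bernoulli measure mu_p: independent bits, each 1 with probability p."""
        rng = random.Random(seed)
        return [1 if rng.random() < p else 0 for _ in range(length)]

    prefix = sample_mu_p_prefix(p=1 / 3, length=20, seed=42)
    print(prefix, sum(prefix) / len(prefix))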
Definition 2
A $\mu_p$-ML-randomness test is a sequence $\{U_n\}_{n \in \omega}$ that is uniformly $\Sigma^0_1(p)$ with $\mu_p(U_n) \le 2^{-n}$, where $2^{-n}$ may be replaced by any computable function that goes to zero effectively.
A $\mu_p$-ML-randomness test $\{U_n\}_{n \in \omega}$ is Hippocratic if there is a $\Sigma^0_1$ class $S \subseteq 2^\omega \times \omega$ such that $S = \{(X, n) : X \in U_n\}$. Thus, $U_n$ does not depend on $p$ and is uniformly $\Sigma^0_1$. If $X$ passes all $\mu_p$-randomness tests (i.e., $X \notin \bigcap_n U_n$ for each such test) then $X$ is $\mu_p$-random. If $X$ passes all Hippocratic tests then $X$ is Hippocrates $\mu_p$-random.
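The following schematic Python sketch is only meant to display the difference between the two notions, under the simplifying convention that a test level $U_n$ is presented by an enumeration of cylinders (finite binary strings), with $X \in U_n$ meaning that some enumerated string is a prefix of $X$. The helper names and the particular sets are illustrative; a Hippocratic enumeration never consults $p$, while a general $\Sigma^0_1(p)$ enumeration may.

    def in_level(x_prefix, cylinder_enumeration):
        """x_prefix: a finite binary string standing in for an infinite sequence X.
        cylinder_enumeration: binary strings generating the open set U_n.
        True if some enumerated cylinder is a prefix of X (as far as we can see)."""
        return any(x_prefix.startswith(s) for s in cylinder_enumeration)

    # A Hippocratic test level: the enumeration below never mentions p.
    def hippocratic_level(n):
        # e.g. "the first 2**n bits are all zeros" -- an effective, p-free description
        return ["0" * (2 ** n)]

    # A general mu_p-test level may consult (an approximation to) p while enumerating;
    # relative to the oracle p this is still Sigma^0_1(p).
    def oracle_level(n, p_approx):
        k = 2 ** n
        return ["1" * k] if p_approx < 0.5 else ["0" * k]

    print(in_level("000010110", hippocratic_level(2)))   # True: the cylinder 0000 is a prefix
    print(in_level("000010110", oracle_level(2, 0.33)))  # False: 1111 is not a prefix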
To explain the terminology: like the ancient medic Hippocrates we are not consulting the oracle of Delphi ($p$) but rather looking for “natural causes”. This level of randomness recently arose in the study of randomness extraction from subsets of random sets [8].
We will often write “$\mu_p$-random” instead of “$\mu_p$-ML-random”, as we work in the Martin-Löf mode of randomness throughout, except when discussing a conjecture at the end of this paper.
1.2 Chebyshev’s inequality
We develop this basic inequality from scratch here, in order to emphasize how generally it holds. For an event $A$ in a probability space, we let $\mathbf{1}_A$, the indicator function of $A$, equal $1$ if $A$ occurs, and $0$ otherwise. The expectation $\mathbb{E}(X)$ of a discrete random variable $X$ is
$$\mathbb{E}(X) = \sum_s X(s)\,\mathbb{P}(s),$$
where $\mathbb{P}$ denotes probability and the sum is over all outcomes $s$ in the sample space. Thus $\mathbb{E}(X)$ is the average value of $X$ over repeated experiments. It is immediate that
$$\mathbb{E}(\mathbf{1}_A) = \mathbb{P}(A).$$
Next we observe that the random variable that is equal to $a$ when a nonnegative random variable $X$ satisfies $X \ge a$, and $0$ otherwise, is always dominated by $X$. That is,
$$a \cdot \mathbf{1}_{\{X \ge a\}} \le X.$$
Therefore, taking expectations of both sides,
$$a \cdot \mathbb{P}(X \ge a) \le \mathbb{E}(X).$$
In particular, for any random variable $X$ with mean $\mu = \mathbb{E}(X)$ and variance $\sigma^2 = \mathbb{E}\big((X - \mu)^2\big)$ we have
$$\mathbb{P}\big((X - \mu)^2 \ge a\big) \le \frac{\mathbb{E}\big((X - \mu)^2\big)}{a} = \frac{\sigma^2}{a},$$
so
$$\mathbb{P}\big(|X - \mu| \ge \sqrt{a}\big) \le \frac{\sigma^2}{a}.$$
If we let $k = \sqrt{a}/\sigma$ and replace $a$ by $k^2\sigma^2$, then
$$\mathbb{P}\big(|X - \mu| \ge k\sigma\big) \le \frac{1}{k^2}.$$
This is Chebyshev’s inequality, which in words says that the probability that we deviate from the mean by many standard deviations is rather small.
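A quick Monte Carlo sanity check of the inequality for the sample average of Bernoulli trials; the parameters are arbitrary, and the empirical frequency should not exceed the bound $1/k^2$.

    import random

    def chebyshev_check(p=0.3, n=400, k=2.0, trials=20000, seed=1):
        """Empirically compare P(|Xbar_n - p| >= k*sigma) with the bound 1/k^2,
        where sigma is the standard deviation of the sample average Xbar_n."""
        rng = random.Random(seed)
        sigma = (p * (1 - p) / n) ** 0.5
        hits = 0
        for _ in range(trials):
            xbar = sum(1 if rng.random() < p else 0 for _ in range(n)) / n
            if abs(xbar - p) >= k * sigma:
                hits += 1
        return hits / trials, 1 / k ** 2

    print(chebyshev_check())  # empirical frequency vs. Chebyshev bound 0.25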
1.3 Results for ordinary randomness
We first prove a version of the phenomenon that for samples of sufficiently fast growing size, the sample averages almost surely converge quickly to the mean.
Proposition 3
Consider a sequence $X_0, X_1, X_2, \ldots$ of independent Bernoulli($p$) random variables, with the sample average
$$\overline{X}_n := \frac{1}{n}\sum_{i=0}^{n-1} X_i.$$
Let $n_e := 2^{3e}$ and let
$$U_d := \big\{X : \exists e \ge d \ \ |\overline{X}_{n_e} - p| \ge 2^{-e}\big\}.$$
Then $\{U_d\}_{d \in \omega}$ is uniformly $\Sigma^0_1(p)$, and $\mu_p(U_d) \le 2^{-d}$, i.e., $\{U_d\}_{d \in \omega}$ is a $\mu_p$-ML-test.
The idea of the proof is to use Chebyshev’s inequality and the fact that the variance of a Bernoulli($p$) random variable is bounded (in fact, bounded by $\tfrac14$).
Proof.
The fact that $U_d$ is $\Sigma^0_1(p)$ is immediate, so we prove the bound on its $\mu_p$-measure. We have
$$\mathrm{Var}(\overline{X}_n) = \frac{1}{n^2}\sum_{i=0}^{n-1}\mathrm{Var}(X_i) = \frac{\sigma^2}{n},$$
where $\sigma^2 = p(1-p) \le \tfrac14$ is the variance of each $X_i$ and $\mathrm{Var}$ denotes the variance. Thus $\sigma_{\overline{X}_n} := \sqrt{\mathrm{Var}(\overline{X}_n)} \le \frac{1}{2\sqrt{n}}$, and by Chebyshev’s inequality
$$\mathbb{P}\big(|\overline{X}_n - p| \ge k\,\sigma_{\overline{X}_n}\big) \le \frac{1}{k^2},$$
so
$$\mathbb{P}\Big(|\overline{X}_n - p| \ge \frac{k}{2\sqrt{n}}\Big) \le \frac{1}{k^2}.$$
Now, we claim that $\mu_p(U_d) \le 2^{-d}$ by taking $n_e$ large enough as a function of $e$:
$$\mu_p(U_d) \le \sum_{e \ge d}\mathbb{P}\big(|\overline{X}_{n_e} - p| \ge 2^{-e}\big).$$
Thus, if $n_e = 2^{3e}$, then taking $k = 2^{-e}\cdot 2\sqrt{n_e} = 2^{e/2+1}$ gives
$$\mathbb{P}\big(|\overline{X}_{n_e} - p| \ge 2^{-e}\big) \le 2^{-e-2},$$
so
$$\mu_p(U_d) \le \sum_{e \ge d} 2^{-e-2} \le 2^{-d}.$$
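In this reconstruction, membership of $X$ in $U_d$ is witnessed by finitely much information: an index $e \ge d$ together with the first $n_e$ bits of $X$, checked against a good enough rational approximation to $p$. The finitely truncated sketch below searches for such a witness, with $n_e = 2^{3e}$ as above; the truncation bound and the parameters are artifacts of the sketch.

    import random

    def in_U_d(x_bits, p_approx, d, e_max):
        """Search for a witness e in [d, e_max] with |Xbar_{n_e} - p| >= 2**-e,
        where n_e = 2**(3*e) and x_bits is a finite prefix of X.  Finding such an e
        certifies membership of X in U_d (the bound e_max is only a truncation)."""
        for e in range(d, e_max + 1):
            n_e = 2 ** (3 * e)
            if n_e > len(x_bits):
                break  # not enough of X observed to examine this witness
            xbar = sum(x_bits[:n_e]) / n_e
            if abs(xbar - p_approx) >= 2 ** -e:
                return True, e
        return False, None

    rng = random.Random(3)
    p = 0.6180339887  # stand-in bias
    x_bits = [1 if rng.random() < p else 0 for _ in range(2 ** 12)]
    print(in_U_d(x_bits, p, d=2, e_max=4))  # a typical mu_p-sample is unlikely to be captured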
The following result in a sense encapsulates the essence of statistics.
Theorem 4
If $X$ is $\mu_p$-ML-random then $X$ Turing computes $p$.
Proof.
We may assume $p$ is not computable, else there is nothing to prove; in particular we may assume $p$ is not a dyadic rational.
Let $\{U_d\}_{d\in\omega}$ be as in Proposition 3. Since $X$ is $\mu_p$-random, $X \notin \bigcap_d U_d$, so fix $d$ with $X \notin U_d$. Then for all $e \ge d$, we have
$$|\overline{X}_{n_e} - p| < 2^{-e}, \qquad (*)$$
where $n_e = 2^{3e}$.
If the real number $p$ is represented as a member of $2^\omega$ via its binary expansion
$$p = \sum_{i \in \omega} p_i\, 2^{-(i+1)}, \qquad p_i \in \{0,1\},$$
then we have to define a Turing functional $\Psi$ such that $\Psi(X) = (p_i)_{i\in\omega}$.
To find the first $j$ bits of $p$, we pick $e \ge \max(d, j+2)$ such that the binary expansion of $\overline{X}_{n_e}$ is not of either of the forms
$$\sigma\,0^{k}\cdots \qquad\text{or}\qquad \sigma\,1^{k}\cdots,$$
where $\sigma$ is a string of length $j$, $k = e - j$, and as usual $1^{k}$ denotes a string of $k$ ones. Since $p$ is not a dyadic rational, such an $e$ exists. Then by (*) the interval of radius $2^{-e}$ around $\overline{X}_{n_e}$, which contains $p$, crosses no multiple of $2^{-j}$, so it must be that the first $j$ bits of $\overline{X}_{n_e}$ are the first $j$ bits of $p$. In particular, $p \upharpoonright j = \sigma$. So we let $\Psi(X) \upharpoonright j = \sigma$ for the least such $e$.
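A sketch of the bit-extraction step as reconstructed above: from approximations $a_e$ promised to satisfy $|a_e - p| < 2^{-e}$, the first $j$ binary digits of $p$ are read off once an approximation is found whose digits beyond position $j$ are neither all zeros nor all ones. The helper functions are illustrative, and the role of $\overline{X}_{n_e}$ is played by the supplied approximations (the fixed index $d$ of the proof is not modelled).

    from fractions import Fraction

    def binary_digits(x, k):
        """First k binary digits of x in [0, 1), as a list of 0/1."""
        digits = []
        for _ in range(k):
            x *= 2
            bit = int(x)
            digits.append(bit)
            x -= bit
        return digits

    def first_bits_of_p(approximations, j):
        """approximations[e] is a rational a_e with |a_e - p| < 2**-e.
        Return the first j bits of p once a safe approximation is found."""
        for e in range(j + 2, len(approximations)):
            digits = binary_digits(approximations[e], e)
            tail = digits[j:e]
            if any(b == 0 for b in tail) and any(b == 1 for b in tail):
                # the interval (a_e - 2**-e, a_e + 2**-e), which contains p, does not
                # cross a multiple of 2**-j, so p shares its first j digits with a_e
                return digits[:j]
        return None  # no safe witness among the supplied approximations

    # toy usage: p is known here only in order to build the promised approximations
    p = Fraction(5, 7)
    approximations = [p + Fraction((-1) ** e, 2 ** (e + 1)) for e in range(40)]
    print(first_bits_of_p(approximations, j=8), binary_digits(p, 8))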
1.4 Hippocratic results
In the last section we made it too easy for ourselves; now we will obtain the same results assuming only Hippocratic randomness.
Theorem 5
There is a Hippocratic $\mu_p$-test such that if $X$ passes this test then $X$ computes an accumulation point of the sequence of sample averages $\{\overline{X}_n\}_{n\in\omega}$.
Proof.
The point is that the usual proof that each convergent sequence is Cauchy gives a class that has small $\mu_p$-measure for all $p$ simultaneously. Namely, let
$$V_d := \big\{X : \exists e, e' \ge d \ \ |\overline{X}_{n_e} - \overline{X}_{n_{e'}}| \ge 2^{-e} + 2^{-e'}\big\}.$$
Then $V_d$ is uniformly $\Sigma^0_1$. Recall from Proposition 3 that we defined
$$U_d = \big\{X : \exists e \ge d \ \ |\overline{X}_{n_e} - p| \ge 2^{-e}\big\}.$$
If there is a $p$ such that for all $e \ge d$, $|\overline{X}_{n_e} - p| < 2^{-e}$, then by the triangle inequality
$$|\overline{X}_{n_e} - \overline{X}_{n_{e'}}| \le |\overline{X}_{n_e} - p| + |p - \overline{X}_{n_{e'}}| < 2^{-e} + 2^{-e'}$$
for all $e, e' \ge d$; thus we have
$$V_d \subseteq U_d,$$
and therefore
$$\mu_p(V_d) \le \mu_p(U_d) \le 2^{-d}$$
for all $d$ and all $p$. Thus if $X$ is Hippocrates $\mu_p$-random then $X \notin V_d$ for some $d$.
We next note that for any numbers $e' \ge e \ge d$,
$$|\overline{X}_{n_e} - \overline{X}_{n_{e'}}| < 2^{-e} + 2^{-e'} \le 2^{-e+1},$$
so $\overline{X}_{n_{e'}}$ will remain within $2^{-e+1}$ of $\overline{X}_{n_e}$ for all $e' \ge e$. That is, $\{\overline{X}_{n_e}\}_{e \ge d}$ is a Cauchy sequence (for each $\varepsilon > 0$ there is an $e$ such that for all $e', e'' \ge e$, $|\overline{X}_{n_{e'}} - \overline{X}_{n_{e''}}| \le \varepsilon$), hence $\lim_e \overline{X}_{n_e}$ exists. Write $q := \lim_e \overline{X}_{n_e}$. Then
$$|\overline{X}_{n_e} - q| \le 2^{-e+1} \quad\text{for all } e \ge d;$$
if we define $\Psi$ as in Theorem 4 except with $p$ replaced by $q$ and $2^{-e}$ replaced by $2^{-e+1}$ (we may assume $q$ is not a dyadic rational, since otherwise $X$ trivially computes $q$), then
$$\Psi(X) = q,$$
and so $X$ computes $q$ using the Turing reduction $\Psi$.
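A sketch of how $X$ computes the accumulation point $q$ in this reconstruction: once the averages along the sparse subsequence are known to be Cauchy with modulus $2^{-e+1}$, reading the first $n_e = 2^{3e}$ bits of $X$ already pins $q$ down to within $2^{-e+1}$. The bias and the sequence length below are arbitrary; for a Hippocrates $\mu_p$-random $X$ the value approached is $p$ itself, by Theorem 6 below.

    import random

    def q_to_precision(x_bits, e):
        """Return Xbar_{n_e} with n_e = 2**(3*e): for X passing the test {V_d} with
        some d <= e, this value is within 2**-(e-1) of the accumulation point q,
        and only the first n_e bits of X are read."""
        n_e = 2 ** (3 * e)
        return sum(x_bits[:n_e]) / n_e

    rng = random.Random(7)
    p = 0.414213562  # stand-in bias
    x_bits = [1 if rng.random() < p else 0 for _ in range(2 ** 15)]
    for e in range(2, 6):
        print(e, q_to_precision(x_bits, e))  # successively better approximations to q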
To argue that the accumulation point $q$ of Theorem 5 is actually equal to $p$ under the weak assumption of Hippocratic randomness, we need:
An analysis of the strong law of large numbers.
Let $Y_0, Y_1, Y_2, \ldots$ be independent and identically distributed random variables with mean $0$, and let $S_n = \sum_{i=0}^{n-1} Y_i$. Then $\mathbb{E}(S_n^4)$ will be a linear combination (with binomial coefficients as coefficients) of the terms
$$\sum_i \mathbb{E}(Y_i^4), \quad \sum_{i \ne j} \mathbb{E}(Y_i^3 Y_j), \quad \sum_{i \ne j} \mathbb{E}(Y_i^2 Y_j^2), \quad \sum_{i \ne j \ne k} \mathbb{E}(Y_i^2 Y_j Y_k), \quad \sum_{i \ne j \ne k \ne \ell} \mathbb{E}(Y_i Y_j Y_k Y_\ell).$$
Since $\mathbb{E}(Y_i) = 0$, and expectations of products over distinct indices factor by independence, and each $Y_i$ is identically distributed with $\mathbb{E}(Y_i^2) = \sigma^2$ and $\mathbb{E}(Y_i^4) = \tau$, we get
$$\mathbb{E}(S_n^4) = n\,\tau + 3\,n(n-1)\,\sigma^4.$$
Since $n(n-1) \le n^2$, this is (writing $C := \tau + 3\sigma^4$) at most
$$C\,n^2,$$
so $\mathbb{E}\big((S_n/n)^4\big) \le C/n^2$. Now
$$a \cdot \mathbf{1}_{\{\exists n \ge m \ (S_n/n)^4 \ge a\}} \le \sum_{n \ge m} (S_n/n)^4$$
surely, so (as in the proof of Chebyshev’s inequality)
$$a \cdot \mathbb{P}\big(\exists n \ge m \ (S_n/n)^4 \ge a\big) \le \sum_{n \ge m} \mathbb{E}\big((S_n/n)^4\big) \le C \sum_{n \ge m} \frac{1}{n^2} \le \frac{C}{m-1},$$
giving
$$\mathbb{P}\big(\exists n \ge m \ |S_n/n| \ge \varepsilon\big) \le \frac{C}{\varepsilon^4\,(m-1)}.$$
We now apply this to $Y_i := X_i - p$ (so that $\mathbb{E}(Y_i) = 0$ and $S_n/n = \overline{X}_n - p$). Note that (writing $\sigma^2 = p(1-p)$ for the variance of $X_i$)
$$\tau = \mathbb{E}\big((X_i - p)^4\big) = p(1-p)^4 + (1-p)p^4 \le \tfrac14 \qquad\text{and}\qquad \sigma^4 \le \tfrac1{16},$$
so $C = \tau + 3\sigma^4$ is bounded by
$$\tfrac14 + \tfrac{3}{16} = \tfrac{7}{16} < 1.$$
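As a sanity check of the fourth-moment bound just derived, the following Monte Carlo estimate of $\mathbb{E}\big((S_n/n)^4\big)$ for centered Bernoulli summands can be compared with $(\tau + 3\sigma^4)/n^2$; the parameters are arbitrary.

    import random

    def fourth_moment_check(p=0.3, n=200, trials=20000, seed=5):
        """Monte Carlo estimate of E[(S_n/n)^4] for S_n a sum of n centered
        Bernoulli(p) variables, compared with the bound (tau + 3*sigma^4)/n^2."""
        rng = random.Random(seed)
        sigma2 = p * (1 - p)                       # variance of each centered summand
        tau = p * (1 - p) ** 4 + (1 - p) * p ** 4  # fourth central moment
        bound = (tau + 3 * sigma2 ** 2) / n ** 2
        acc = 0.0
        for _ in range(trials):
            s = sum((1 if rng.random() < p else 0) - p for _ in range(n))
            acc += (s / n) ** 4
        return acc / trials, bound

    print(fourth_moment_check())  # the estimate should fall below the bound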
This bound suffices to obtain our desired result:
Theorem 6
If $X$ is Hippocrates $\mu_p$-random then $X$ satisfies the Strong Law of Large Numbers for $p$, i.e., $\lim_n \overline{X}_n = p$.
Proof.
Let $q_1$, $q_2$ be rational numbers with $q_1 < p < q_2$. Let
$$W_m := \big\{X : \exists n \ge m \ \ \overline{X}_n \not\in (q_1, q_2)\big\}.$$
Then $W_m$ is uniformly $\Sigma^0_1$, and $\mu_p(W_m) \to 0$ effectively: by the bound just derived, with $\varepsilon := \min(p - q_1,\, q_2 - p)$,
$$\mu_p(W_m) \le \mathbb{P}\big(\exists n \ge m \ \ |\overline{X}_n - p| \ge \varepsilon\big) \le \frac{C}{\varepsilon^4\,(m-1)}.$$
Thus if $X$ is Hippocrates $\mu_p$-random then $X \notin W_m$ for some $m$, i.e., $\overline{X}_n$ is eventually always in the interval $(q_1, q_2)$. Since the rationals $q_1 < p < q_2$ were arbitrary, $\lim_n \overline{X}_n = p$.
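A rough numerical illustration of the measure bound on $W_m$ in this reconstruction, with the infinite search over $n$ truncated at a finite horizon; the interval, the stage $m$ and the horizon are arbitrary, and the bound is far from tight.

    import random

    def estimate_W_m(p=0.5, q1=0.3, q2=0.7, m=1500, horizon=3000, trials=1000, seed=11):
        """Estimate mu_p(W_m) = P(exists n >= m with Xbar_n outside (q1, q2)),
        truncating the search at `horizon`, and compare with C / (eps^4 (m-1))."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            total, bad = 0, False
            for n in range(1, horizon + 1):
                total += 1 if rng.random() < p else 0
                if n >= m and not (q1 < total / n < q2):
                    bad = True
                    break
            hits += bad
        eps = min(p - q1, q2 - p)
        sigma2 = p * (1 - p)
        tau = p * (1 - p) ** 4 + (1 - p) * p ** 4
        bound = (tau + 3 * sigma2 ** 2) / (eps ** 4 * (m - 1))
        return hits / trials, bound

    print(estimate_W_m())  # the (truncated) estimate should fall well below the bound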
Corollary 7
If $X$ is Hippocrates $\mu_p$-random then $X$ Turing computes $p$.
Proof.
By Theorem 5, $X$ computes the limit $q$ of the subsequence $\{\overline{X}_{n_e}\}_{e\in\omega}$ of sample averages. By Theorem 6, this limit must be $p$.
Note that the randomness test in Theorem 6 depends on the pair $(q_1, q_2)$, so we actually needed infinitely many tests to guarantee that $X$ computes $p$. This is no coincidence. Let $X \ge_T p$ abbreviate the statement that $X$ Turing computes $p$, i.e., $p$ is Turing reducible to $X$.
Theorem 8
For all $p$, if there is a Hippocratic $\mu_p$-test $\{U_n\}_{n\in\omega}$ such that $\{X : X \not\ge_T p\} \subseteq \bigcap_n U_n$, then $p$ is computable.
Proof.
Let $\{U_n\}_{n\in\omega}$ be such a test. By standard computability-theoretic basis theorems, the complement of $U_1$, a nonempty $\Pi^0_1$ class, has a low member $X_0$ and a hyperimmune-free member $X_1$. By assumption $p \le_T X_0$ and $p \le_T X_1$, so $p$ is both low and of hyperimmune-free degree, hence by another basic result of computability theory, $p$ is computable.
Corollary 9
There is no universal Hippocratic $\mu_p$-test, unless $p$ is computable.
Proof.
If there is such a test then by Corollary 7 there is a test as in the hypothesis of Theorem 8, whence $p$ is computable. □