Marginis

1 Kjos-Hanssen (2010)

Excerpts from the paper The probability distribution as a computational resource for randomness testing.

1.1 Introduction

The fundamental idea of statistics is that by repeated experiment we can learn the underlying distribution of the phenomenon under investigation. In this paper we partially quantify the amount of randomness required to carry out this idea. We first show that ordinary Martin-Löf randomness with respect to the distribution is sufficient. Somewhat surprisingly, however, the picture is more complicated when we consider a weaker form of randomness where the tests are effective, rather than merely effective relative to the distribution. We show that such Hippocratic randomness actually coincides with ordinary randomness in that the same outcomes are random for each notion, but the corresponding test concepts do not coincide: while there is a universal test for ordinary ML-randomness, there is none for Hippocratic ML-randomness.

For concreteness we will focus on the classical Bernoulli experiment, although as the statistical tools we need are limited to Chebyshev’s inequality and the strong law of large numbers, our result works also in the general situation of repeated experiments in statistics, where an arbitrary sequence of independent and identically distributed random variables is studied.

When using randomness as a computational resource, the most convenient underlying probability distribution may be that of a fair coin. In many cases, fairness of the proverbial coin may be only approximate. Imagine that an available resource generates randomness with respect to a distribution for which the probability of heads is $p \ne 1/2$. It is natural to assume that p is not a computable number if the coin flips are generated with contributions from a physical process such as the flipping of an actual coin. The non-computability of p matters strongly if an infinite sequence of coin flips is to be performed. In that case, the gold standard of algorithmic randomness is Martin-Löf randomness, which essentially guarantees that no algorithm (using arbitrary resources of time and space) can detect any regularities in the sequence. If p is non-computable, it is possible that p may itself be a valuable resource, and so the question arises whether a “truly random” sequence should look random even to an adversary equipped with the distribution as a resource. In this article we will show that the question is to some extent moot, as these types of randomness coincide. On the other hand, while there is a universal test for randomness in one case, in the other there is not. This article can be seen as a follow-up to Martin-Löf’s paper where he introduced his notion of algorithmic randomness and proved results for Bernoulli measures [10].

It might seem that when testing for randomness, it is essential to have access to the distribution we are testing randomness for. On the other hand, perhaps if the results of the experiment are truly random we should be able to use them to discover the distribution for ourselves, and then once we know the distribution, test the results for randomness. However, if the original results are not really random, we may “discover” the wrong distribution. We show that there are tests that can be effectively applied, such that if the results are random then the distribution can be discovered, and the results will then turn out to be random even to someone who knows the distribution. While these tests can individually be effectively applied, they cannot be effectively enumerated as a family. On the other hand, there is a single such test (due to Martin-Löf) that will reveal whether the results are random for some (Bernoulli) distribution, and another (introduced in this paper) that if so will reveal that distribution.

In other words, one can effectively determine whether randomness for some distribution obtains, and if so determine that distribution. There is no need to know the distribution ahead of time to test for randomness with respect to an unknown distribution. If we suspect that a sequence is random with respect to a measure given by the value of a parameter (in an effective family of measures), there is no need to know the value of that parameter, as we can first use Martin-Löf’s idea to test for randomness with respect to some value of the parameter, and then use the fundamental idea of statistics to find that parameter. Further effective tests can be applied to compare that parameter q with rational numbers near our target parameter p, leading to the conclusion that if all effective tests for randomness with respect to parameter p are passed, then all tests having access to p as a resource will also be passed. But we need the distribution to know which effective tests to apply. Thus we show that randomness testing with respect to a target distribution p can be done by two agents each having limited knowledge: agent 1 has access to the distribution p, and agent 2 has access to the data X. Agent 1 tells agent 2 which tests to apply to X.

The more specific point is that the information about the distribution p required for randomness testing can be encoded in a set of effective randomness tests; and the encoding is intrinsic in the sense that the ordering of the tests does not matter, and further tests may be added: passing any collection of tests that include these is enough to guarantee randomness. From a syntactic point of view, whereas randomness with respect to p is naturally a $\Sigma^0_2(p)$ class, our results show that it is actually an intersection of $\Sigma^0_2$ classes.

Definition 1

The Bernoulli measure $\mu_p$ is defined by the stipulation that for each $n \in \omega = \{0, 1, 2, \ldots\}$,

$$\mu_p(\{X : X(n) = 1\}) = p \quad\text{and}\quad \mu_p(\{X : X(n) = 0\}) = 1 - p,$$

and $X(0), X(1), X(2), \ldots$ are mutually independent random variables.

If $X$ is a $\{0,1\}$-valued random variable such that $P(X = 1) = p$ then $X$ is called a Bernoulli($p$) random variable.
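As a concrete illustration of Definition 1 (not part of the paper), the following Python sketch samples a finite prefix of a $\mu_p$-distributed sequence; the function name and the bias $0.3$ are illustrative choices, and the interesting case in the paper is of course a non-computable $p$.

```python
import random

def bernoulli_prefix(p: float, length: int, seed: int = 0) -> list[int]:
    """Sample X(0), ..., X(length-1) as independent Bernoulli(p) bits,
    i.e. a finite prefix of a sequence distributed according to mu_p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(length)]

# Illustrative bias; in the paper the interesting case is a non-computable p.
prefix = bernoulli_prefix(0.3, 20)
print(prefix, sum(prefix) / len(prefix))  # the sample average approximates p
```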

Definition 2

A $\mu_p$-ML-randomness test is a sequence $\{U^p_n\}_{n \in \omega}$ that is uniformly $\Sigma^0_1(p)$ with $\mu_p(U^p_n) \le 2^{-n}$, where $2^{-n}$ may be replaced by any computable function that goes to zero effectively.

A $\mu_p$-ML-randomness test is Hippocratic if there is a $\Sigma^0_1$ class $S \subseteq 2^\omega \times \omega$ such that $S = \{(X, n) : X \in U^p_n\}$. Thus, $U_n = U^p_n$ does not depend on $p$ and is uniformly $\Sigma^0_1$. If $X$ passes all $\mu_p$-randomness tests then $X$ is $\mu_p$-random. If $X$ passes all Hippocratic tests then $X$ is Hippocrates $\mu_p$-random.

To explain the terminology: like the ancient medic Hippocrates we are not consulting the oracle of Delphi ($p$) but rather looking for “natural causes”. This level of randomness recently arose in the study of randomness extraction from subsets of random sets [8].

We will often write “μp-random” instead of “μp-ML-random”, as we work in the Martin-Löf mode of randomness throughout, except when discussing a conjecture at the end of this paper.
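To make Definition 2 more concrete, one standard representation (assumed here, not notation from the paper) of a $\Sigma^0_1$ class is as a computably enumerable set of finite binary strings whose cylinders it is the union of; the $\mu_p$-measure of a cylinder is determined by the counts of 1s and 0s in the string. A Hippocratic test enumerates such strings without ever consulting $p$; only the measure bound $\mu_p(U_n) \le 2^{-n}$ refers to the true $p$. A minimal Python sketch:

```python
def cylinder_measure(sigma: str, p: float) -> float:
    """mu_p-measure of the cylinder [sigma] = {X : X extends the finite string sigma}."""
    return p ** sigma.count("1") * (1 - p) ** sigma.count("0")

def seen_inside(prefix: str, enumerated: list[str]) -> bool:
    """Whether a sequence with this prefix is already covered by the cylinders
    enumerated so far; the enumeration itself never consults p."""
    return any(prefix.startswith(sigma) for sigma in enumerated)

# A toy stage of a test component U_n: two incompatible strings, so the
# mu_p-measure covered so far is just the sum of the cylinder measures.
U_n = ["11", "000"]
print(sum(cylinder_measure(s, 0.3) for s in U_n))  # must stay below 2^{-n} for the true p
print(seen_inside("110101", U_n))                  # True: this prefix is captured
```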

1.2 Chebyshev’s inequality

We develop this basic inequality from scratch here, in order to emphasize how generally it holds. For an event $A$ in a probability space, we let $1_A$, the indicator function of $A$, equal $1$ if $A$ occurs, and $0$ otherwise. The expectation of a discrete random variable $X$ is

$$E(X) = \sum_x x \cdot P(X = x),$$

where P denotes probability and the sum is over all outcomes in the sample space. Thus E(X) is the average value of X over repeated experiments. It is immediate that

$$E(1_A) = P(A).$$

Next we observe that the random variable that is equal to $a$ when a nonnegative random variable $X$ satisfies $X \ge a$, and $0$ otherwise, is always dominated by $X$. That is,

$$a \cdot 1_{\{X \ge a\}} \le X.$$

Therefore, taking expectations of both sides,

$$a \cdot P\{X \ge a\} \le E(X).$$

In particular, for any random variable $X$ with $E(X) = \mu \in \mathbb{R}$ we have

$$a^2 \cdot P\{(X - \mu)^2 \ge a^2\} \le E\left((X - \mu)^2\right) =: \sigma^2,$$

so

$$P\{|X - \mu| \ge |a|\} \le \sigma^2/a^2.$$

If we let $k \in \omega$ and replace $a$ by $k\sigma$, then

$$P\{|X - \mu| \ge k\sigma\} \le \sigma^2/(k\sigma)^2 = 1/k^2.$$

This is Chebyshev’s inequality, which in words says that the probability that we deviate from the mean $\mu$ by $k$ or more standard deviations $\sigma$ is rather small.
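A quick Monte Carlo sanity check of Chebyshev's inequality (an illustration only; the distribution and parameter values are arbitrary choices, not taken from the paper):

```python
import random

def chebyshev_check(k: float, n: int = 30, p: float = 0.3, trials: int = 100_000) -> None:
    """Compare the empirical value of P(|X - mu| >= k*sigma) with the bound 1/k^2,
    where X is a sum of n Bernoulli(p) flips (mean n*p, variance n*p*(1-p))."""
    rng = random.Random(1)
    mu, sigma = n * p, (n * p * (1 - p)) ** 0.5
    hits = sum(abs(sum(rng.random() < p for _ in range(n)) - mu) >= k * sigma
               for _ in range(trials))
    print(f"k={k}: empirical {hits / trials:.4f}  vs  Chebyshev bound {1 / k ** 2:.4f}")

chebyshev_check(2)  # the empirical tail is well below the (crude) bound 0.25
chebyshev_check(3)
```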

1.3 Results for ordinary randomness

We first prove a version of the phenomenon that for samples of sufficiently fast growing size, the sample averages almost surely converge quickly to the mean.

Proposition 3

Consider a sequence $Y = \{Y_n\}_{n \in \omega}$ of independent Bernoulli($p$) random variables, with the sample average

$$\overline{Y}_n := \frac{1}{n}\sum_{i=0}^{n-1} Y_i.$$

Let $N(b) = 2^{3b-1}$ and let

$$U_d = \bigcup_{b \ge d} \left\{Y : \left|\overline{Y}_{N(b)} - p\right| \ge 2^{-b}\right\}.$$

Then $U_d$ is uniformly $\Sigma^0_1(p)$, and $\mu_p(U_d) \le 2^{-d}$, i.e., $\{U_d\}_{d \in \omega}$ is a $\mu_p$-ML-test.

The idea of the proof is to use Chebyshev’s inequality and the fact that the variance of a Bernoulli(p) random variable is bounded (in fact, bounded by 1/4).
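The following sketch simulates the events appearing in $U_d$, using the checkpoint lengths $N(b) = 2^{3b-1}$ as reconstructed above; the bias and trial counts are illustrative, and the bound quoted in the comments is just the Chebyshev bound from the proof idea.

```python
import random

def exceeds_tolerance(b: int, p: float, rng: random.Random) -> bool:
    """Simulate the event |Ybar_{N(b)} - p| >= 2^{-b} from Proposition 3,
    with N(b) = 2^{3b-1} and Ybar_n the average of n Bernoulli(p) flips."""
    n = 2 ** (3 * b - 1)
    avg = sum(rng.random() < p for _ in range(n)) / n
    return abs(avg - p) >= 2 ** (-b)

# Chebyshev bounds each event's probability by 2^{-b-1} (uniformly in p, since
# p(1-p) <= 1/4), so the union over b >= d has mu_p-measure at most 2^{-d}.
rng = random.Random(0)
p = 0.3  # illustrative bias
for b in range(1, 5):
    freq = sum(exceeds_tolerance(b, p, rng) for _ in range(200)) / 200
    print(f"b={b}: empirical frequency {freq:.3f}, Chebyshev bound {2 ** (-b - 1):.3f}")
```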

Proof

The following result in a sense encapsulates the essence of statistics.

Theorem 4

If $Y$ is $\mu_p$-ML-random then $Y$ Turing computes $p$.

Proof
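The proof is omitted in this excerpt, but the computational content of the reduction can be sketched as follows (a hypothetical illustration with my own function names, not the paper's proof): assuming $Y$ avoids the component $U_d$ of the test in Proposition 3 for some constant $d$, the checkpoint averages $\overline{Y}_{N(b)}$ with $b \ge d$ approximate $p$ to within $2^{-b}$, which is exactly what an oracle machine with oracle $Y$ and the finite advice $d$ needs in order to compute $p$.

```python
import random

def approximate_p(Y, b: int, d: int) -> float:
    """With oracle access to Y and a (non-uniform) constant d such that
    |Ybar_{N(b')} - p| < 2^{-b'} for all b' >= d -- which holds when Y avoids
    the component U_d of the test in Proposition 3 -- return a value within
    2^{-b} of p.  Here N(b') = 2^{3b'-1} as in Proposition 3."""
    b = max(b, d)                 # below level d there is no guarantee
    n = 2 ** (3 * b - 1)
    return sum(Y(i) for i in range(n)) / n

# Stand-in for a mu_p-random oracle: a pseudo-random Bernoulli(p) sequence.
rng = random.Random(42)
p = 0.3
bits = [1 if rng.random() < p else 0 for _ in range(2 ** 11)]
Y = lambda i: bits[i]
print([round(approximate_p(Y, b, d=1), 4) for b in range(1, 5)])
```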

1.4 Hippocratic results

In the last section we made it too easy for ourselves; now we will obtain the same results assuming only Hippocratic randomness.

Theorem 5

There is a Hippocratic $\mu_p$-test such that if $Y$ passes this test then $Y$ computes an accumulation point $q$ of the sequence of sample averages

$$\{\overline{Y}_n\}_{n \in \omega}.$$
Proof
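The proof is again omitted here. One natural construction in this spirit, though not necessarily the paper's exact test, compares consecutive checkpoint averages: the event that they differ by more than the combined tolerances has small $\mu_p$-probability for every $p$ simultaneously (by the triangle inequality and Chebyshev), and its definition never mentions $p$, so it is Hippocratic. The sketch below is an assumption-laden illustration of that idea.

```python
import random

def make_checkpoint_averages(bits):
    """Return b -> Ybar_{N(b)}: the average of the first N(b) = 2^{3b-1} bits of Y."""
    def Ybar(b: int) -> float:
        n = 2 ** (3 * b - 1)
        return sum(bits[:n]) / n
    return Ybar

def gap_event(Ybar, b: int) -> bool:
    """A p-free event: consecutive checkpoint averages differ by more than the
    combined tolerances 2^{-b} + 2^{-(b+1)}.  By the triangle inequality and
    Chebyshev, its mu_p-probability is small for EVERY p, so enumerating it
    needs no oracle for p."""
    return abs(Ybar(b) - Ybar(b + 1)) >= 2 ** (-b) + 2 ** (-(b + 1))

# If from some level d on no gap event occurs, the checkpoint averages are Cauchy
# and converge to a value q computable from Y; q is an accumulation point of the
# sample averages.
rng = random.Random(7)
bits = [1 if rng.random() < 0.3 else 0 for _ in range(2 ** 11)]
Ybar = make_checkpoint_averages(bits)
print([round(Ybar(b), 3) for b in range(1, 5)], [gap_event(Ybar, b) for b in range(1, 4)])
```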

To argue that the accumulation point q of Theorem 5 is actually equal to p under the weak assumption of Hippocratic randomness, we need:

An analysis of the strong law of large numbers.

Let $\{X_n\}_{n \in \omega}$ be independent and identically distributed random variables with mean $0$, and let $S_n = \sum_{i=0}^{n} X_i$. Then $S_n^4$ will be a linear combination (with binomial coefficients as coefficients) of the terms

$$\sum_i X_i^4, \quad \sum_{i<j} X_i^3 X_j, \quad \sum_{i<j<k} X_i^2 X_j X_k, \quad \sum_{i<j<k<\ell} X_i X_j X_k X_\ell, \quad\text{and}\quad \sum_{i<j} X_i^2 X_j^2.$$

Since $E(X_i) = 0$, and $E(X_i^a X_j^b) = E(X_i^a)\,E(X_j^b)$ for $i \ne j$ by independence, and each $X_i$ is identically distributed with $X_1$ and $X_2$, we get

$$E(S_n^4) = n\,E(X_1^4) + \binom{n}{2}\binom{4}{2}E(X_1^2 X_2^2) = n\,E(X_1^4) + \binom{n}{2}\binom{4}{2}E(X_1^2)\,E(X_2^2) = n\,E(X_1^4) + \binom{n}{2}\binom{4}{2}E(X_1^2)^2.$$

Since $0 \le \sigma^2(X_1^2) = E(X_1^4) - E(X_1^2)^2$, this is (writing $K := E(X_1^4)$) at most

$$n\,E(X_1^4) + \binom{n}{2}\binom{4}{2}E(X_1^4) = \left(n + 3n(n-1)\right)E(X_1^4) = (3n^2 - 2n)K,$$

so $E(S_n^4/n^4) \le 3K/n^2$. Now

$$S_n^4/n^4 \ge a^4 \cdot 1_{\{S_n^4/n^4 \ge a^4\}}$$

surely, so (as in the proof of Chebyshev’s inequality)

$$E(S_n^4/n^4) \ge a^4 \cdot E\left(1_{\{S_n^4/n^4 \ge a^4\}}\right) = a^4 \cdot P(S_n^4/n^4 \ge a^4),$$

giving

$$P\left(|\overline{X}_n| = |S_n/n| \ge a\right) \le \frac{3K}{n^2 a^4}.$$
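The moment computation above can be checked exactly for small cases; the sketch below enumerates all outcomes of a sum of $m$ i.i.d. centered Bernoulli variables with exact rational arithmetic (here $m$ denotes the number of summands), verifying both the identity and the bound $E(S^4) \le (3m^2 - 2m)K$.

```python
from fractions import Fraction
from itertools import product

def fourth_moment_identity(p: Fraction, m: int) -> None:
    """Exactly verify E(S^4) = m*E(X^4) + 3*m*(m-1)*E(X^2)^2 for a sum S of m i.i.d.
    mean-zero variables (centered Bernoulli(p): value 1-p w.p. p, value -p w.p. 1-p),
    together with the bound E(S^4) <= (3m^2 - 2m)*K used in the text, K = E(X^4)."""
    outcomes = [(1 - p, p), (-p, 1 - p)]           # (value, probability)
    ES4 = Fraction(0)
    for combo in product(outcomes, repeat=m):      # all 2^m joint outcomes, exactly
        s = sum(v for v, _ in combo)
        prob = Fraction(1)
        for _, q in combo:
            prob *= q
        ES4 += s ** 4 * prob
    EX4 = sum(v ** 4 * q for v, q in outcomes)
    EX2 = sum(v ** 2 * q for v, q in outcomes)
    assert ES4 == m * EX4 + 3 * m * (m - 1) * EX2 ** 2
    assert ES4 <= (3 * m * m - 2 * m) * EX4
    print(f"m={m}: E(S^4) = {ES4}  <=  (3m^2-2m)K = {(3 * m * m - 2 * m) * EX4}")

for m in range(1, 7):
    fourth_moment_identity(Fraction(1, 3), m)
```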

We now apply the above bound to $X_n = Y_n - E(Y_n) = Y_n - p$ (so that $K = K_p$). Note that (writing $\bar{p} = 1 - p$)

$$K_p = E\left[(Y_1 - p)^4\right] = (1-p)^4\,p + p^4\,(1-p) = p\,\bar{p}\,(\bar{p}^3 + p^3) \le \frac{1}{4}\cdot 2 = \frac{1}{2},$$

so $P\left(\exists n \ge N \;\; |\overline{Y}_n - p| \ge a\right)$ is bounded by

$$\sum_{n \ge N} \frac{3K_p}{n^2 a^4} \le \frac{3}{2a^4}\sum_{n \ge N}\frac{1}{n^2} \le \frac{3}{2a^4}\int_{N-1}^{\infty}\frac{1}{x^2}\,dx = \frac{3}{2a^4(N-1)}.$$
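As a rough finite-horizon Monte Carlo illustration of this tail bound (the parameters are arbitrary, and the simulation can only lower-bound the event, which ranges over all $n \ge N$):

```python
import random

def tail_frequency(p: float, a: float, N: int, horizon: int, trials: int) -> float:
    """Monte Carlo estimate of P(exists n with N <= n <= horizon and |Ybar_n - p| >= a);
    this lower-bounds the event over all n >= N, which the text bounds above by
    3 / (2 * a^4 * (N - 1))."""
    rng = random.Random(3)
    hits = 0
    for _ in range(trials):
        s = 0
        for n in range(1, horizon + 1):
            s += rng.random() < p
            if n >= N and abs(s / n - p) >= a:
                hits += 1
                break
    return hits / trials

p, a, N = 0.3, 0.25, 800  # illustrative parameters, chosen so the bound is nontrivial
print(tail_frequency(p, a, N, horizon=2000, trials=500), "<=", 3 / (2 * a ** 4 * (N - 1)))
```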

This tail bound suffices to obtain our desired result:

Theorem 6

If $Y$ is Hippocrates $\mu_p$-random then $Y$ satisfies the Strong Law of Large Numbers for $p$.

Proof
Corollary 7

If $Y$ is Hippocrates $\mu_p$-random then $Y$ Turing computes $p$.

Proof

Note that the randomness test in Theorem 6 depends on the pair $(q_1, q_2)$, so we actually needed infinitely many tests to guarantee that $Y$ computes $p$. This is no coincidence. Let $Y \ge_T p$ abbreviate the statement that $Y$ Turing computes $p$, i.e., $p$ is Turing reducible to $Y$.

Theorem 8

For all $p$, if there is a Hippocratic $\mu_p$-test $\{U_n\}_{n \in \omega}$ such that $\{X : X \not\ge_T p\} \subseteq \bigcap_n U_n$, then $p$ is computable.

Proof
Corollary 9

There is no universal Hippocratic $\mu_p$-test, unless $p$ is computable.

Proof