On 14/02/2014 14:31, Mike Speciner wrote:
I think my argument can be made rigorous rather straightforwardly, although perhaps it assumes that there is an answer. It is rather trivial to see that if you start with a uniformly distributed random variable on [0,1] and express it as a binary fraction, then each bit must have equal probability of being 0 or 1 (just look at the part of the distribution with all the other bits fixed). If you take the bits as a sequence a[n] of 0s and 1s, then 2*a[n]-1 is your sequence of +/-1s, and it's clear that the new sum is twice the old one minus 1, transforming uniform on [0,1] into uniform on [-1,1].
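Mike's bit-flipping argument is also easy to sanity-check numerically. Here is a quick Python sketch (the truncation length n_bits, the sample count, and the function name are my own choices, purely for illustration): it draws fair bits, forms both the binary fraction U and the +/-1-weighted sum X, confirms X = 2U - 1 up to truncation, and eyeballs uniformity on [-1,1].

import random

def sums_from_bits(n_bits=53):
    """Draw i.i.d. fair bits a[n]; return (U, X) = (binary fraction, +/-1-weighted sum)."""
    u = 0.0  # U = sum of a[n] * 2^-(n+1), uniform on [0,1]
    x = 0.0  # X = sum of (2*a[n]-1) * 2^-(n+1), claimed uniform on [-1,1]
    for n in range(n_bits):
        a = random.getrandbits(1)
        u += a * 2.0 ** -(n + 1)
        x += (2 * a - 1) * 2.0 ** -(n + 1)
    return u, x

# Up to the ~2^-53 truncation of the tail, X should equal 2U - 1 exactly.
u, x = sums_from_bits()
assert abs(x - (2 * u - 1)) < 1e-12

# Crude uniformity check: each half-unit interval of [-1,1] should get ~25% of samples.
samples = [sums_from_bits()[1] for _ in range(100_000)]
for lo in (-1.0, -0.5, 0.0, 0.5):
    frac = sum(lo <= s < lo + 0.5 for s in samples) / len(samples)
    print(f"P({lo:+.1f} <= X < {lo + 0.5:+.1f}) ~= {frac:.3f}")

Each printed fraction should come out near 0.25, which is exactly the uniform-on-[-1,1] prediction.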
Easier and maybe more watertight is to go in the other direction:

0. X cannot lie outside [-1,+1].
1. Pr(X < 0) = Pr(X0 = -1) = 1/2.
2. Likewise Pr(X < -1/2) = Pr(X0 = X1 = -1) = 1/4, etc.
3. Continuing, by induction we find that for dyadic rationals a <= b between -1 and +1, Pr(a <= X < b) = (b-a)/2.
4. So for any a, b between -1 and +1, Pr(a <= X < b) = (b-a)/2, because we can squeeze it between two arbitrarily close probabilities made from dyadic rationals.
5. This is just the same thing as saying that X is uniformly distributed on [-1,+1].

-- g
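P.S. The dyadic-interval step in 3 and 4 also checks out empirically. A minimal Python sketch along the same lines (the sampler draw_x, the truncation length, and the particular endpoints are my own choices, just for illustration), estimating Pr(a <= X < b) for a few dyadic a, b and comparing with (b-a)/2:

import random

def draw_x(n_terms=53):
    """X = sum over n of s[n] * 2^-(n+1), with independent fair +/-1 signs s[n]."""
    return sum((2 * random.getrandbits(1) - 1) * 2.0 ** -(n + 1)
               for n in range(n_terms))

samples = [draw_x() for _ in range(200_000)]

# Dyadic intervals [a, b) inside [-1, +1]; steps 3 and 4 predict Pr = (b - a)/2.
for a, b in [(-1.0, -0.5), (-0.5, 0.0), (-0.25, 0.75), (0.5, 1.0)]:
    est = sum(a <= x < b for x in samples) / len(samples)
    print(f"Pr({a:+.2f} <= X < {b:+.2f}): estimated {est:.3f}, predicted {(b - a) / 2:.3f}")

The estimates land within Monte Carlo noise of the predicted values, consistent with X being uniform on [-1,+1].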