[math-fun] The probability distribution arising from a decreasing function
Let F(n) be any real-valued function of the integer n>0 which decreases to the limit value 0 and has F(1)=1. Let E>0 be fixed, and let

    R = limit (sum +-F(n)) / (sum F(n)^E)^(1/E)

be a real-valued random variable, where all the +- signs are independent fair coin tosses, the sums run over n=1..N, and the limit is N-->infinity. In this way, any such function F and number E>0 is associated with a probability distribution on the real line. What can we say about these distributions?

1. Normality. If F(n)=n^(-k) for fixed k>0, then R is normally distributed if and only if k<=1/2 and E is large enough that R is defined. [Proof: Lindeberg-Feller central limit theorem.] If k>1/2 then we can take E=infinity if we want to jettison the scaling factor, but in any case we get a smooth but non-normal density with finite variance (the variance is expressible using the zeta function).

2. Uniform. If F(n)=2^(-n), then we get the uniform density on a certain real interval (and zero density outside it).

3. Funny fractals. If F(n)=3^(-n), or indeed A^(-n) for any A>2, then we get a distribution supported on a Cantor set.

4. But if 1<A<2, then we get a smooth but non-normal density with finite variance (the variance is expressible in closed form).

Are there, e.g., fast ways to evaluate these (new?) density functions? One numerical sketch is below.
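Here is a minimal Python sketch (not from the original post) of both ingredients: sample_R draws Monte Carlo samples of R with the sums truncated at N terms, and density_via_cf evaluates the A^(-n) densities by Fourier inversion of the characteristic function prod_{n>=1} cos(t*A^(-n)), which is one candidate "fast way" to tabulate them. The function names, truncation depths, and grid sizes are illustrative choices, not anything claimed by the post.

import numpy as np

def sample_R(F, E, N=1000, trials=10_000, rng=None):
    # Monte Carlo sketch of R: truncate both sums at N terms and draw the
    # +- signs as independent fair coin flips.  F maps an integer array to F(n).
    rng = np.random.default_rng() if rng is None else rng
    f = F(np.arange(1, N + 1))
    if np.isinf(E):
        scale = f.max()                    # (sum F(n)^E)^(1/E) -> max_n F(n) as E -> infinity
    else:
        scale = np.sum(f ** E) ** (1.0 / E)
    signs = rng.choice([-1.0, 1.0], size=(trials, N))
    return (signs @ f) / scale

def density_via_cf(A, x, N=60, T=200.0, M=4001):
    # Density of the unscaled sum over n>=1 of +-A^(-n), by Fourier inversion of its
    # characteristic function phi(t) = prod_{n=1..N} cos(t*A^(-n)).  Intended for the
    # 1 < A < 2 regime; N, T, M are heuristic truncation/grid parameters.
    t = np.linspace(-T, T, M)
    phi = np.prod(np.cos(np.outer(t, A ** -np.arange(1.0, N + 1))), axis=1)
    x = np.atleast_1d(x).astype(float)
    integrand = phi[None, :] * np.cos(np.outer(x, t))   # imaginary part cancels by symmetry
    return integrand.sum(axis=1) * (t[1] - t[0]) / (2.0 * np.pi)

if __name__ == "__main__":
    # Case 1: F(n) = n^(-1/3), E = 2 -- samples should look standard normal.
    r = sample_R(lambda n: n ** (-1.0 / 3.0), E=2.0)
    print("F(n)=n^(-1/3), E=2 : mean %+.3f  var %.3f" % (r.mean(), r.var()))

    # Case 2: F(n) = 2^(-n), E = 1 -- R should be uniform on [-1, 1] (variance 1/3).
    r = sample_R(lambda n: 2.0 ** (-n), E=1.0, N=60)
    print("F(n)=2^(-n),   E=1 : mean %+.3f  var %.3f" % (r.mean(), r.var()))

    # Case 4: A = 1.5 -- tabulate the density on a small grid.
    xs = np.linspace(-2.0, 2.0, 9)
    print("A=1.5 density      :", np.round(density_via_cf(1.5, xs), 4))

The knobs that control accuracy are N (how deep the sums and the cosine product are truncated) and T, M (the extent and resolution of the Fourier grid); raising them trades running time for accuracy.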
Warren D Smith