From Osher Doctorow Ph.D. mdoctorow@comcast.net
I recently proved on sci.stat.math that the equation of an Attractor, using Probable Influence, is:

1) P(A-->B) = P(A)

where P(A) is the probability of the attractor A, and A-->B, the influence of A on B, turns out to be A' U B, which is roughly speaking "either 'not A' or B," or, for those who know about set-theoretic complements, "the union of the complement of A with B." P(A-->B) is called the Probable Influence (PI) of A on B.

Intuitively, equation (1) says that A gives "all its influence to B," or, put slightly differently, that the result of the probable influence of A on B is just the probability of A. Roughly, all that is left from the influence of A on B is A itself, as we would expect, in a sense, if A is an Attractor. This is the first time, to my knowledge, that probability-statistics has been related to dynamical systems/chaos/fractals in regard to an Attractor, and where there is an equation, numerical analysis and approximations will eventually be close behind.

This is a good opportunity to show readers the remarkable intuitive quality of the mathematics by giving an example "opposite" to an Attractor, namely one in which the influence of some event/set/process A on B has no effect on B. This is "independence" of A and B, and there are two equations for it, depending on whether one uses Mainstream (Bayesian) Conditional Probability-Statistics (BCP) or PI.
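The identity A-->B = A' U B can be checked numerically on a small finite sample space. Below is a minimal Python sketch; the four-point uniform space and the particular events A and B are hypothetical, chosen so that equation (1) holds exactly.

```python
from fractions import Fraction

def prob(event, omega):
    # Uniform probability on a finite sample space: |event| / |omega|,
    # kept as an exact fraction.
    return Fraction(len(event), len(omega))

def probable_influence(a, b, omega):
    # P(A-->B) = P(A' U B): the complement of A, unioned with B.
    return prob((omega - a) | b, omega)

# Hypothetical four-point space in which A is a PI-attractor for B.
omega = {1, 2, 3, 4}
A = {1, 2, 3}
B = {1, 2}

# Equation 1): P(A-->B) = P(A).  Here A' U B = {4} U {1, 2} = {1, 2, 4},
# so both sides equal 3/4.
assert probable_influence(A, B, omega) == prob(A, omega)
```

Using exact fractions rather than floats avoids spurious rounding when comparing the two sides of the equation.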
Using PI, the equation is:

2) P(A-->B) = P(B)

In BCP, P(B/A) means "the probability of B given A," that is, given that A has occurred or that A is held fixed, and the result is:

3) P(B/A) = P(B) (provided P(A) is not 0)

Since P(B/A) turns out to be P(AB) divided by P(A), where AB is the event that A and B both occur together, (3) is equivalent to the usual expression:

4) P(AB) = P(A)P(B)

for what are called "statistically independent events," such as tossing two fair coins simultaneously, without one coin interfering with the other and without the tossing affecting one coin more than the other.

Whether we use "statistical independence" as in (3) or (4), or PI-independence as in (2), readers can see that P(A-->B) = P(A) is the intuitive opposite of P(A-->B) = P(B), just as we would expect "independent" events to be the opposite of "enormously affected" events, the latter being the case for an Attractor. Likewise, P(B/A) = P(A) is the opposite of P(B/A) = P(B), and instead of leading to P(AB) = P(A)P(B) as the latter does, the former leads to P(AB) = P(A)^2, where ^2 means "squared," that is, P(A)^2 is P(A) times P(A).
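Both flavors of independence, and the contrasting P(B/A) = P(A) case, can also be verified on small finite spaces. The sketch below uses the two-fair-coins space for equations (3) and (4); the eight-point space at the end is an invented example, constructed so that P(B/A) = P(A) holds while P(B) differs from P(A).

```python
from fractions import Fraction
from itertools import product

def prob(event, omega):
    # Uniform probability: |event| / |omega| as an exact fraction.
    return Fraction(len(event), len(omega))

# Two fair coins tossed without interference: the classic statistically
# independent pair from equations 3) and 4).
omega = set(product("HT", repeat=2))
A = {w for w in omega if w[0] == "H"}  # first coin shows heads
B = {w for w in omega if w[1] == "H"}  # second coin shows heads

# 4) P(AB) = P(A)P(B):  1/4 == 1/2 * 1/2
assert prob(A & B, omega) == prob(A, omega) * prob(B, omega)
# 3) P(B/A) = P(AB)/P(A) = P(B):  (1/4)/(1/2) == 1/2
assert prob(A & B, omega) / prob(A, omega) == prob(B, omega)

# The contrasting case P(B/A) = P(A), which forces P(AB) = P(A)^2.
# Hypothetical eight-point space with P(A2) = 1/2 and P(B2) = 5/8,
# so A2 and B2 are NOT statistically independent.
omega2 = {1, 2, 3, 4, 5, 6, 7, 8}
A2 = {1, 2, 3, 4}
B2 = {1, 2, 5, 6, 7}
assert prob(A2 & B2, omega2) / prob(A2, omega2) == prob(A2, omega2)  # P(B/A) = P(A)
assert prob(A2 & B2, omega2) == prob(A2, omega2) ** 2                # P(AB) = P(A)^2
```

Osher Doctorow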