There are two reasons your concerns are mitigated.

First, you assume that causality is to be demonstrated experimentally purely by statistical correlation. That is common in, for example, medical trials (where double-blind tests are standard). But in the hard sciences it is more often done by direct intervention in a well-controlled, isolated system, so that only one variable is changed at a time. No statistics are needed.

Second, where statistical analysis is needed, most researchers consider Bayesian analysis more relevant than Neyman-Pearson experimental designs, and a Bayesian analysis is independent of when or by whom an hypothesis is formed. But a Bayesian analysis requires quantifying prior probabilities. If the experimenter reports that he tested 20 different possible causes but just happened to assign a prior of 0.5, instead of a uniform 0.05, to the one that seemed to work, his bias would be obvious. There are also post-hoc correction factors for frequentist statistics, collectively called Bonferroni corrections; failure to use them is a sign of poor, if not fraudulent, analysis.

Of course personal bias can distort any scheme (cf. Blondlot and N-rays, which also illustrates my first point); that is why a detailed description of the experiment is required, so that independent experimenters can replicate it. That is why the Large Hadron Collider has two independent detectors, ATLAS and CMS, designed by different groups and operating on different principles, to study the same physics.

Brent

On 6/24/2018 10:59 AM, Henry Baker wrote:
I'm not an expert in statistical analysis, and I'm having a hard time reconciling all of the features of the modern scientific method.
In particular, the usual process goes something like the following:
1. A scientist observes some phenomena and detects some correlations between observations of type A and observations of type B.
2. The scientist *hypothesizes* some causality among the observations.
3. The scientist designs some *experiment* to try to determine causality.
4. But since the scientist already has preconceived notions about the causality, he/she is not the appropriate person to *perform* the experiment; better to *double blind* the study and have someone *completely ignorant of the experimental design* perform the experiment on subjects (in the case of animate subjects) who are also *completely ignorant of the experimental design*.
5. The data from the experiment can be analyzed by yet another party who is *completely ignorant of the experimental design*, so that his/her biases cannot affect the analysis.
In a perfect, causal world, such a proper experiment should show causality if and only if the causality exists. In particular, a "proper" experiment should have N large enough that the probabilities of false positives and false negatives are unbelievably small.
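The "N large enough" requirement can be made concrete with a standard power calculation. Here is a minimal sketch, assuming a two-sided, two-sample comparison of means under a normal approximation; the effect size and error rates are made-up numbers for illustration, not anything from this thread.

# Minimal sketch: how large must N be so that both error probabilities are
# "unbelievably small"?  Assumes a two-sided, two-sample z-test on means with
# a common (known) standard deviation; all numbers below are illustrative.
from statistics import NormalDist

def n_per_group(effect_size, alpha, beta):
    """Approximate sample size per group.

    effect_size: true difference in means, in units of the standard deviation
    alpha:       tolerated false-positive probability
    beta:        tolerated false-negative probability (power = 1 - beta)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = z.inv_cdf(1 - beta)         # quantile delivering the desired power
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# Conventional thresholds vs. "unbelievably small" error rates, for a
# moderate effect of half a standard deviation:
print(round(n_per_group(0.5, alpha=0.05, beta=0.20)))   # ~63 per group
print(round(n_per_group(0.5, alpha=1e-6, beta=1e-6)))   # ~744 per group

The point of the sketch is only that driving both error rates down is a matter of sample size, not of who holds the hypothesis.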
-----

Here's my problem:
Scientists have been accused of *fitting to the facts* -- i.e., coming up with hypotheses *after the experiment* that match the experimental results. Furthermore, some have recommended that all such "a posteriori" papers be firmly rejected as scientific fraud.
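One way to see why the selection (rather than the timing per se) matters statistically: if twenty candidate hypotheses are in play and only the best-looking one is reported as though it were the sole test, the nominal false-positive rate is badly understated. Below is a minimal simulation sketch, using the fact that a continuous p-value is uniform on [0, 1] under a true null; the counts of tests and trials are made-up numbers. It also shows the effect of the Bonferroni-style adjustment mentioned in the reply above.

# Sketch of the selection effect: 20 candidate hypotheses, all of them actually
# false, yet the best-looking one is "significant" most of the time if it is
# reported as though it were the only test run.
import random

random.seed(0)
TESTS, ALPHA, TRIALS = 20, 0.05, 100_000

naive_hits = bonferroni_hits = 0
for _ in range(TRIALS):
    p_values = [random.random() for _ in range(TESTS)]  # 20 true-null p-values
    best = min(p_values)                                # the one "that seemed to work"
    naive_hits += best < ALPHA                          # judged at the nominal threshold
    bonferroni_hits += best < ALPHA / TESTS             # judged at the adjusted threshold

print(f"cherry-picked false-positive rate: {naive_hits / TRIALS:.3f}")       # ~0.64 (= 1 - 0.95**20)
print(f"Bonferroni-corrected rate:         {bonferroni_hits / TRIALS:.3f}")  # ~0.05

Nothing here depends on the universe knowing when the hypothesis was formed; the inflation comes from reporting a different test than the one actually performed.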
My question is: "how can our universe possibly tell whether the hypothesis was suggested before or after the experiment?"
In a classically causal universe, the timing of the hypothesis and the timing of the experiment should make no difference, because the mental state of the scientist can't possibly affect the results of the experiment.
If an experiment is indeed performed completely blind by disinterested third parties, why should anyone care how or *when* the hypothesis was obtained?
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun