Probability is increasingly used in modern science, notably in
medicine and in physics, to support proof claims. But probability
is often used poorly in science, and then gives little or no proof of what is claimed.
In medicine, probability is now commonly used in survey data analysis, as where a 10% correlation between people's reported illness and people's reported behaviour is said to prove, for example, that general behaviour A carries a 10% risk of causing illness B. But often the general behaviour A has no actual effect on illness B; it merely has some correlation with the use of some unidentified product C which is the actual cause, itself correlating 100% with illness B.
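The confounding pattern described above can be sketched in a small simulation. All the variables and figures here are hypothetical: a surveyed behaviour A, an illness B, and an unmeasured product C that is the sole real cause:

```python
import random

random.seed(0)
n = 10_000
records = []
for _ in range(n):
    c = random.random() < 0.10                   # unmeasured product C: the real cause
    a = random.random() < (0.6 if c else 0.2)    # behaviour A: commoner among C users, no effect of its own
    b = c                                        # illness B is caused by C alone
    records.append((a, b))

ill_given_a = sum(b for a, b in records if a) / sum(a for a, b in records)
ill_given_not_a = sum(b for a, b in records if not a) / sum(1 for a, b in records if not a)
# illness looks markedly commoner among those reporting behaviour A
print(ill_given_a, ill_given_not_a)
```

Such a survey would report behaviour A as a 'risk factor' for illness B, although in this sketch A has no causal effect at all.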
In physics, probability is now commonly used in experiment data analysis, as where a 95% correlation between photon emissions and some general magnetic event is said to prove, for example, that general magnetic event A always causes photon emission B. But the general event A may have no actual effect on emission B; it may merely correlate strongly with some unidentified specific event C which is the actual cause, itself correlating 100% with emission B. Probability is also used as the basis of Quantum Mechanics.
In science today probability is widely used in data analysis proof claims that are never reviewed by statisticians. It is used in experiment data analysis and in survey data analysis, and in both areas it is also used in error estimation. But probability is commonly used wrongly in science, as major statisticians like R. A. Fisher have noted. It is commonly used by scientists who are at best amateur statisticians and who consult no statistician, so much so that the journal Nature has now started asking statisticians to review some submitted papers.
In medicine, probability is now used both in experiment data analysis and in
survey data analysis, but here we will consider chiefly the latter
(the former is considered below under Physics). The chief problem with survey
data is that it always involves some limited number of selected people being
asked some limited number of selected questions. It may be that the illness
being studied is caused by ACME soap, but the survey asked no question about
ACME soap, or it did but none of the people surveyed used it. But still
that survey will be probability-tested for that illness, and may well give some
correlations for that illness. It will be announced that some behaviours 'are a risk
for the illness', while ACME soap passes unmentioned.
(PS: this is NOT a claim that ACME soap causes any illness; the name is used here only to stand for some hypothetical product.)
We can now consider a hypothetical medical survey being probability-tested regarding a hypothetical disease A:
If there is some strong evidence for a hypothesis, then additional weak evidence will now commonly be taken as confirming and strengthening it. And even if there is only weak evidence for a hypothesis, additional weak evidence will still commonly be taken as confirming and strengthening it. But logically only strong evidence should count towards proof; weak evidence should only ever count as an indicator of a need to look for strong evidence.

Generally there are no 20% causes and so no 20% risks: mostly A actually causes B, or actually does not cause B. There may commonly be dose effects, and more rarely there may be multiple causes. But much too commonly medicine reports, and governments spread concern about, relatively low illness 'risks' that are not actual scientific truths, like 'eating fat causes heart problems' - and scientific journal 'peer review' has tended to create and keep backing such false discipline prejudices. Journals might do better having chemists review physics papers, physicists review chemistry papers and astronomers review biology papers, because discipline peer review as practised can promote prejudice instead of real science.
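One way such weak 'evidence' multiplies is sketched below: testing many pure-noise behaviour questions against one illness will, by chance alone, make some questions look like weak evidence. All the figures and the crude cutoff here are hypothetical:

```python
import random

random.seed(1)
n_people, n_questions = 200, 200
illness = [random.random() < 0.3 for _ in range(n_people)]

spurious = 0
for _ in range(n_questions):
    # each behaviour question is pure noise, entirely unrelated to the illness
    behaviour = [random.random() < 0.5 for _ in range(n_people)]
    yes = [i for b, i in zip(behaviour, illness) if b]
    no = [i for b, i in zip(behaviour, illness) if not b]
    if abs(sum(yes) / len(yes) - sum(no) / len(no)) > 0.12:  # crude 'weak evidence' cutoff
        spurious += 1

# a handful of noise questions 'correlate' with the illness by chance alone
print(spurious)
```

Each such chance correlation, if taken as confirming evidence, 'strengthens' a hypothesis that is in fact false.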
Probability testing in Physics and Astronomy is now commonly used in experiment data analysis or observation data analysis. This can have some of the problems seen in probability testing of survey data: where surveys can omit questions, experiments or observations can omit relevant factors, and this can have great impact in the more contentious areas like Particle Physics and Astronomy.
Probability is also widely used in accuracy estimation, but often ignoring the fact that, of several experiments or observations, it is often NOT the one with the best quoted accuracy that gives the most reliable evidence; other significant issues, such as unrecognised systematic error, are often also involved.
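The point can be sketched with a small simulation: a very precise instrument with an unrecognised systematic bias quotes a far tighter spread than a noisy unbiased one, yet lands further from the truth. All the numbers here are hypothetical:

```python
import random
import statistics

random.seed(3)
true_value = 10.0

# Instrument A: tiny scatter (looks most 'accurate') but an unnoticed +0.5 bias.
a_readings = [true_value + 0.5 + random.gauss(0, 0.01) for _ in range(20)]
# Instrument B: much noisier, but unbiased.
b_readings = [true_value + random.gauss(0, 0.3) for _ in range(20)]

a_mean, b_mean = statistics.mean(a_readings), statistics.mean(b_readings)
a_spread, b_spread = statistics.stdev(a_readings), statistics.stdev(b_readings)
# A's quoted spread is far smaller, yet B's mean is closer to the truth
print(a_spread, b_spread, abs(a_mean - true_value), abs(b_mean - true_value))
```

Picking the result with the smallest quoted error bar would here pick the wrong answer.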
More recent Physics and Astronomy theories also commonly try to incorporate aspects of probability theory, correctly or incorrectly. Deductive assumptions involving infinities or limits often give false answers: a theory handling the infinitely small and infinitely large can ultimately require that the sum of an infinite set of zero probabilities adds to a probability of one, which is plainly false. Physics deductions about the infinitely small or the infinitely large can generally be valid only if derived correctly relative to some well-proven specified finites. More recent physics theories can often involve error related to this issue.
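The limiting behaviour in question can be sketched numerically for a uniform distribution on [0, 1]: as the interval is divided into ever more equal cells, each cell's probability shrinks toward zero while the cells still sum to one (standard measure theory handles the continuum case by assigning probability to sets via integration rather than by summing point probabilities):

```python
# Uniform distribution on [0, 1]: divide it into n equal cells.
for n in (10, 1_000, 1_000_000):
    per_cell = 1.0 / n      # probability of any single cell -> 0 as n grows
    total = per_cell * n    # yet the cells always sum to 1
    print(n, per_cell, total)
```

In the limit each single point has probability zero, which is where the finite intuition about sums breaks down.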
False probability deductions can be due to a failure to specify the data involved, or to a failure to specify the assumed prior information involved. So there can often be no valid probability comparison between two physics theories regarding given data if both involve assumptions about, say, 'mass' but neither specifies the prior-information properties of 'mass' that its theory involves. Or phenomena that seem probabilistic may simply have some unseen or uncomputed non-probabilistic causes, which may currently be unseeable or uncomputable.
Statistics-based 'experiments' commonly rely on computer analyses or computer 'models' that are not fully specified, and so such 'experiments' are not fully replicable for verification or challenge. And even replicable experiments, though involving one set of statistical probabilities, can all be interpreted differently in terms of different theory paradigms; statistics often cannot offer any valid evidence as to which of several alternative interpretations of an experiment is correct.
For some physicists the two-slit light experiment was taken as supporting a Heisenberg probabilistic quantum mechanics, in which there is some probability that an object at a specified time occupies one space location and, in contradiction, at the same specified time occupies some other space location.
In such a probabilistic physics the universe actually behaves probabilistically, whereas in a determinate physics the universe actually involves fully specifiable causes giving fully determinate effects, though that may not always appear to be the case. Probabilistic physics is claimed to be backed by other supporting evidence too, with claims that microscopic quantum processes such as 'superposition', 'entanglement' and 'virtual particle exchange' are involved.
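The two-slit statistics discussed above can be sketched by sampling individual photon detections from the standard interference intensity cos²(π d sin θ / λ) - a sketch assuming illustrative values for the slit separation and wavelength:

```python
import math
import random

random.seed(2)
d, wavelength = 5e-5, 5e-7     # illustrative slit separation and wavelength (metres)

def intensity(theta):
    # relative detection probability from two-slit interference
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

# Rejection-sample photon arrival angles: each detection is individually 'random',
# but together the detections build up the fringe pattern.
hits = []
while len(hits) < 1000:
    theta = random.uniform(-0.01, 0.01)
    if random.random() < intensity(theta):
        hits.append(theta)

central = sum(1 for t in hits if abs(t) < 0.002)        # bright central fringe
dark = sum(1 for t in hits if 0.004 < abs(t) < 0.006)   # around the first minima
print(central, dark)
```

Whether such detection statistics show the underlying process is actually probabilistic, or merely reflect some unidentified determinate causes, is exactly the question at issue.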
Heisenberg's Uncertainty Principle basically assumes that every possible way of determining an object's motion and position at some instant must involve changing the object's motion or position. But the Rudolphine Tables of Kepler allow determining the position and motion of a planet at some instant by calculation alone (which has no impact on the planet's position or motion), and the position and motion of a body continuously emitting light can be determined for some instant from its emitted light signals (having no impact on the body's position or motion, though maybe limited by light having a quantal nature). It seems that there will be some cases where such determinations cannot in principle be done accurately, but also some cases where they can.
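The kind of purely computational position determination that tables like Kepler's represent can be sketched with Kepler's equation M = E - e·sin(E), solved here by Newton iteration; the orbit values used are merely illustrative:

```python
import math

def eccentric_anomaly(mean_anomaly, eccentricity, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = mean_anomaly if eccentricity < 0.8 else math.pi
    for _ in range(50):
        delta = (E - eccentricity * math.sin(E) - mean_anomaly) / (1 - eccentricity * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E

def orbital_position(a, e, M):
    """In-plane position on an ellipse (semi-major axis a, eccentricity e) at mean anomaly M."""
    E = eccentric_anomaly(M, e)
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1 - e * e) * math.sin(E)
    return x, y

# Mean anomaly M is proportional to elapsed time, so position at any chosen
# instant follows by calculation alone, with no disturbance of the body.
x, y = orbital_position(1.0, 0.0167, math.pi / 3)   # roughly Earth-like eccentricity
print(x, y)
```

No measurement interacts with the orbiting body at the chosen instant; the position follows from earlier orbital elements and arithmetic.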
Some physicists do not support probabilistic physics, including Einstein, who rejected probability physics 'because God does not play dice' (though that is maybe no scientific disproof, and Einstein still accepted duality contradiction physics). Probabilistic physics is also rejected by others like Schrödinger, who rejected all contradiction physics, including Einstein's dualism, as in his probability-exposing Schrödinger's Cat 'thought experiment' - which is perversely often quoted to help 'explain' probabilistic quantum physics. For those who reject contradiction in science, it exposes probability physics as contradiction nonsense; but for those who accept contradiction in science, it helps explain probability physics!?

Of course it can be said that any claimed evidence for a contradiction must be contradictory evidence, and contradictory evidence may reasonably be taken as not being valid factual evidence - e.g. evidence that Jane is in Paris now AND that Jane is in Tokyo now, or evidence that Jane is alive now AND that Jane is dead now?! Logically it would seem that 'evidence' for a contradiction must be data being misinterpreted. But statistical associations often allow multiple alternative causal explanations or Image Theories, and in some cases necessarily do. See http://psych-networks.com/theoretically-distinct-mechanisms-can-generate-identical-observations/?utm_content=buffercae71&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer and http://www.new-science-theory.com/general-image-theory-1.php
Probability methods generally are widely used in particle and quantum physics, and have some use in almost all areas of physics today, even by physicists who reject actual probability physics. But where it is claimed that some physics has been proved probabilistic, it is maybe best taken as meaning that at most it has been proved to be either probabilistic OR to involve some as yet unidentified non-probabilistic causation. In 2014 Christopher Ferrie and Joshua Combes, supported by Rainer Kaltenbaek and Franco Nori, threw major doubt on Quantum Mechanics, and especially its 'weak measurement', as being based on bad statistics (see http://physicsworld.com/cws/article/news/2014/oct/09/are-weak-values-quantum-after-all).
While arguing for one-theory-only science, E. T. Jaynes concluded that probability theory has 'been fooled by a subtle mathematical correspondence between stochastic and dynamical phenomena'. But that rather supports multiple-theory science, like Newton's blackbox-theory science or perhaps preferably our General Image Theory science. See http://bayes.wustl.edu/etj/articles/prob.in.qm.pdf
Some of these physics probability issues were considered at the 2007 CERN conference 'Statistical Issues for LHC Physics'; see http://physicsworld.com/cws/article/indepth/43309
Of course misuse of statistics is far from the only problem with science, but hard-science Physics is the leading edge of science - and unfortunately it has long led in bad science too.
If you have any view or suggestion on the content of this site, please contact :- New Science Theory
Vincent Wilmot 166 Freeman Street Grimsby Lincolnshire DN32 7AT UK.
© new-science-theory.com, 2017.