The empirical movement of modern science raised questions about knowledge. Hume, Kant, and others asked how we can know, and how we might verify that what we know is true. Hume began with empiricism but found a problem in the reliability of induction. Kant attempted to resolve the question by (among other things) separating reason from experience.[i] He did not resolve all the issues of induction but instead opened the door to an empiricism in which a priori language is applied to develop principled rules and laws for nature. This application of language has allowed empiricists to construct theories which appear to be natural laws but which are in fact merely the conclusions of induction.
Then science changed. Einstein’s abstractions about the nature of nature forced the development of theories which were far less specific in both structure and fruit. It was no longer sufficient for a scientist to manage chemicals in a laboratory and thereby conclude something about the earth’s atmosphere. The study of the atmosphere became a statistical concern with probabilistic results. So while a laboratory experiment might be shown to work or not work under various conditions and limitations on language, these new fields of inquiry were semantic and rarely, if ever, conclusive.
Let’s go back to the laboratory experiment for a moment, though. When hydrogen is combined with oxygen, the result will be both heat and water. But the theory “combining hydrogen and oxygen yields heat and water” is broad. On the surface of the sun the results would be different. But obtaining this result does not mean that the theory is false. It only means that the language used was too imprecise. The language required additional precision to limit the scope of reliable results.
If we add to this theory the (hypothetical) conditions “between 0.1 and 2.0 earth atmospheres of pressure” and “between -300 and +200 degrees Fahrenheit,” we might reach a condition under which this always occurs. But even then there are potentially missing conditions and constraints. If we used a closed container or a large volume of each chemical, there might be a violent explosion which otherwise would not have occurred. The precision required for verification and falsification provides a higher sense of reliability in the results.[ii] Falsification is less a true-false test to “conclusively establish a universal generalization”[iii] than a challenge to precision and reliability.
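The role of these added conditions can be sketched in a few lines of code. This is only an illustration of the logic, not real chemistry: the function name and the numeric ranges are taken from the hypothetical conditions above, and a result outside the constrained region simply falls outside the theory’s scope rather than falsifying it.

```python
# A minimal sketch of refining a hypothesis by adding constraints.
# The ranges are the essay's hypothetical conditions, not chemical fact.

def reaction_predicted(pressure_atm: float, temp_f: float) -> bool:
    """Return True only inside the constrained region where the theory
    'hydrogen + oxygen -> heat and water' claims to be reliable."""
    return 0.1 <= pressure_atm <= 2.0 and -300 <= temp_f <= 200

# Inside the constraints: the theory makes a prediction.
print(reaction_predicted(1.0, 70))        # True
# On the surface of the sun: outside scope, so no prediction is made.
print(reaction_predicted(0.0001, 9900))   # False
```

Each added constraint shrinks the region where the theory speaks, which is precisely how precision buys reliability.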
Little is certain in science. Science is now in the business of models rather than managed observational analysis. Cleland is correct that “falsificationism cannot be used to justify the superiority of one science over another” since the methods of the new sciences vary greatly from traditional empiricism.
A question that must be considered is whether a model approach can be subjected to falsification. Since the hypothesis of a model differs significantly from the hypothesis of an empirical test, it would seem to follow that falsification occurs differently. In our example of combining oxygen and hydrogen we reach precision and reliability by exclusion. That is, we refine the conditions of the hypothesis: the conditions under which the experiment would more likely fail are added as constraints, so that within the remaining scope the success of the test reaches a high level of reliability. The greater the precision of the constraints, the higher the reliability of the results. Constraints reduce the possibility of false positives and false negatives.
Of the various models available, the predictive and the explanatory (historical) models seem most amenable to some sort of falsification. The predictive model most familiar to us is the weather forecast. This type of model gains its accuracy through an inclusive approach. Since it attempts to discern effect from cause, the more potential causes added on the front end, the greater the precision of the predicted outcome. The results are not scored as simply true or false. The goal is not to predict what will happen on every square foot of ground but to predict what will happen in different areas at approximate times. The forecast (as of last evening) for this morning was +8F but the actual was +3F. It was wrong in the specifics but useful in general. It was precise enough for the needs of a population.
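This graded notion of success can be sketched as a tolerance check rather than a strict true/false test. The +8F forecast and +3F actual are taken from the example above; the 10-degree tolerance is an assumed value chosen only for illustration.

```python
# Hedged sketch: a forecast is "useful" if it lands within a tolerance,
# rather than being scored strictly true or false.
# The tolerance of 10 degrees F is an assumption for illustration.

def useful_forecast(predicted_f: float, actual_f: float,
                    tolerance_f: float = 10.0) -> bool:
    """Judge a forecast by degree of error, not exact match."""
    return abs(predicted_f - actual_f) <= tolerance_f

print(useful_forecast(8, 3))    # True: off by 5 degrees, still serviceable
print(useful_forecast(8, 30))   # False: off by 22 degrees, misleading
```

The point is that the model’s verdict depends on the precision demanded, not on a binary match with reality.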
Historical models are more difficult to falsify. The answer in the historical model is the question of the predictive model: the predictive model is composed of causes and seeks an effect, while the historical model is composed of effects and seeks a cause. The historical model is also limited by the insights of the theory posited. As Popper noted, the assumptions taint the results.[iv] Feyerabend took this further to suggest that much of science is impossible. In the same period the theologian Cornelius Van Til raised the same suspicion but went further by suggesting that knowledge itself is tainted and as such suspect.
One characteristic shared by these two model constructs is that they depend upon a volume of information to obtain greater accuracy at the output. We can ask, for instance, what were the causes of the American Civil War? The resolution of the slavery issue is the first that comes to mind. But the result gets more specific and accurate when other preceding events are added to the argument. The question of states’ rights (constitution vs. confederation) becomes significant even though the document change was about 80 years prior. Theological differences played a part on both sides, with the South being in a (globally) minority position on this issue.[v] A multitude of factors played into the cause of the conflict, and no single event or condition is sufficient to fully explain the situation.
Underdetermination is a problem with all historical models. Much data is missing, lost forever. As a result, what might have been is inferentially predicted to fill in the blanks. But these points have even less support than the initial model hypothesis.
In a court case it is suggested that person A killed person B. The historical hypothesis is that “person A committed this act against person B” and the historical evidence is lined up by the prosecution: Person A was the only person in the room with person B. Nobody else entered the room or exited the room. Person A was the only person in position to commit this act.
But now the defense makes its case: Did the room have windows? Did anyone else have cause to commit this act? Why did person A not possess the weapon necessary to commit the act? A room with a window provides opportunity for another to commit the act. The lack of a weapon means that another person may have been the perpetrator. If person B had made a sufficient number of enemies in life, then others would have had cause to commit the act. So the prosecution goes back and gathers additional information. The initial hypothesis was underdetermined. Falsification of the hypothesis does not show it false, but it does show it insufficiently precise to account for the effect. Like the predictive model, this is an inclusive approach to modeling.
Historical analysis often leads to false positives as well as false negatives. As noted, an incompetent defense attorney might allow a prosecutor to lead the jury to a false positive conclusion regarding the guilt of a defendant (the conclusion “guilty” matches the hypothesis even though missing evidence would indicate otherwise). Likewise, a false negative might occur where the guilty party goes free on account of underdetermined evidence.
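The two courtroom outcomes above map directly onto the standard false-positive/false-negative labels. The sketch below is only an illustration of that mapping; the function and its names are hypothetical.

```python
# Sketch mapping verdict outcomes onto false-positive / false-negative
# labels, following the courtroom example in the text.

def classify_verdict(verdict_guilty: bool, actually_guilty: bool) -> str:
    """Label a verdict against the (unknowable in practice) ground truth."""
    if verdict_guilty and not actually_guilty:
        return "false positive"   # innocent defendant convicted
    if not verdict_guilty and actually_guilty:
        return "false negative"   # guilty party goes free
    return "correct"

print(classify_verdict(True, False))   # false positive
print(classify_verdict(False, True))   # false negative
```

Of course, in a real historical case the second argument is never available, which is exactly why underdetermination matters.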
These differ from the self-verifying results of the Marxists and Freudians noted by Popper. Conclusions reached under underdetermination are one thing, but conclusions which merely reflect the hypothesis always find cause for self-justification. This is not science.[vi] A hypothesis may be correct or incorrect, reliable or unreliable. But without a richly determined model to act as support, it serves as nothing more than a hypothesis. A theory which will not account for data, or which does so in an insufficient manner, is not one whose conclusions should be taken seriously. It may be accurate but it is not precise.
The term falsification itself is unreliable. Induction and abduction are about what is likely or frequently accurate. It may snow or rain today, but it may not snow or rain at my house. Such is the nature of induction in science. The truthfulness or falsehood of these models comes by degree; it is not often a yes-or-no question.
This does not mean that there are no general laws. It only means that these laws are not fixed and unbending. They are general; they are principles. If the general laws are understood as the conclusions of induction, then we are free to modify them as additional evidence allows fuller determination. This approach avoids the skepticism of Kant, Hume, and Feyerabend, instead allowing induction to reach not specific facts but working conclusions with respect to these two model-type theory structures.
[i] … Kant (as we have seen) clearly states, in § 29 of the Prolegomena (the very passage where he gives his official “answer to Hume”), that there is a fundamental difference between a mere “empirical rule” (heat always follows illumination by the sun) and a genuine objective law (the sun is through its light the cause of heat) arrived at by adding the a priori concept of cause to the merely inductive rule. Any law thus obtained is “necessary and universally valid,” or, as Kant also puts it, we are now in possession of “completely and thus necessarily valid rules.”
http://plato.stanford.edu/entries/kant-hume-causality/
[ii] Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
http://www.stephenjaygould.org/ctrl/popper_falsification.html
[iii] Cleland, Carol E., “Historical science, experimental science, and the scientific method,” Geology, November 2001, p. 987.
[iv] I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appear to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated.
http://www.stephenjaygould.org/ctrl/popper_falsification.html
[v] See Noll, Mark, The Civil War as a Theological Crisis for a fuller treatment.
[vi] The Marxist theory of history, in spite of the serious efforts of some of its founders and followers, ultimately adopted this soothsaying practice. In some of its earlier formulations (for example in Marx’s analysis of the character of the “coming social revolution”) their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation; but they did so at the price of adopting a device which made it irrefutable.
McGrew, Timothy, Alspector-Kelly, Marc, and Allhoff, Fritz, editors, Philosophy of Science: An Historical Anthology, Wiley-Blackwell, 2009, p. 474.