This book primarily deals with spotting humbug. However, it is also important to be able to put forward rational reasons for believing or not believing a particular claim. There are numerous techniques one can keep in the "intellectual toolbox" which can help in this debunking process. This section of the book outlines some of these techniques and rules of thumb.
Occam's Razor

If there are two (or more) equally valid but conflicting hypotheses for an observed phenomenon, one should choose the hypothesis that requires the fewest steps to explain it. If you have different explanations for something, all of which explain it equally well, then you have no reason to choose a complex explanation over a simple one.
Note that the essential criterion - explain it equally well - must be met before applying Occam’s razor. (Obviously it is always preferable to accept the hypothesis that explains "the something" best.) Occam’s razor is no guarantee of truth or even of likelihood, but it is not invoked as such. It is a "rule of thumb" that is used to give provisional acceptance to a hypothesis. The provision is that until evidence comes along to give greater weight to an alternative hypothesis, it is best to stick with the simplest one. As Occam once said: "It is vain to do with more what can be done with fewer."
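The rule of thumb above can be sketched as a toy decision procedure. Everything here (the hypothesis names, the "explains" scores and the assumption counts) is invented purely for illustration:

```python
# A toy sketch of Occam's razor as a tie-breaking rule.
# Each hypothesis is scored on how well it explains the observation
# (0.0-1.0) and on how many independent assumptions it requires.
# All names and numbers are invented for illustration.

def occams_razor(hypotheses):
    """Among hypotheses that explain the data equally well,
    provisionally accept the one with the fewest assumptions."""
    best_fit = max(h["explains"] for h in hypotheses)
    # The essential criterion: only hypotheses that explain the
    # observation equally well are eligible for the razor.
    candidates = [h for h in hypotheses if h["explains"] == best_fit]
    return min(candidates, key=lambda h: h["assumptions"])

hypotheses = [
    {"name": "keys misplaced",            "explains": 1.0, "assumptions": 1},
    {"name": "keys stolen by burglar",    "explains": 1.0, "assumptions": 4},
    {"name": "keys taken by poltergeist", "explains": 0.2, "assumptions": 9},
]

print(occams_razor(hypotheses)["name"])  # the simplest adequate hypothesis
```

Note that the poltergeist loses twice over: it is filtered out before the razor is even applied, because it fails the "explains equally well" criterion; the razor then only breaks the tie between the remaining candidates.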
Spinning Another Hypothesis
In order to use Occam’s razor it is, of course, necessary to have more than one hypothesis. The technique of spinning another hypothesis is very useful in combination with Occam’s razor. When commentators propose reasons for a claim, are their reasons the only ones possible? If you are able to think of another reason, then the answer is "obviously not"; and even if you can’t think of another reason, it doesn’t mean there isn’t one - you might not be that smart (although given the book you're reading, this is likely not the case). When thinking of alternative reasons you are, in the words of Carl Sagan, spinning another hypothesis.
Once other possible explanations have been proposed, the next step is to eliminate (if we can) each hypothesis until only one remains. Consider all the possible tests that could be done to demonstrate that the alternative explanations are false. If it isn’t possible to whittle the explanations down to one, then apply Occam’s razor. Spinning another hypothesis is essential if one wants to remain open-minded and undogmatic. People tend to make up their minds about things very early on and ignore evidence that doesn’t fit their worldview - or, as Francis Bacon observed, they count the hits and forget the misses.
Hume's Razor

In his work Of Miracles, David Hume formulated a principle which has since been called Hume’s razor:
No testimony is sufficient to establish a miracle unless that testimony be of such a kind that its falsehood would be more miraculous than the fact which it endeavours to establish.
If we are asked to believe X, in deciding whether to believe it (give provisional assent) or not we should ask: "Is it more likely that X is true, or that the evidence for X is mistaken (or can be interpreted in a different or more realistic way)?" If it seems that it’s more likely that the evidence is wrong, then we don’t believe X.
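This weighing-up can be read in a loosely Bayesian way: the testimony could arise either because X really happened, or because the evidence is mistaken, and we believe X only if the first route is the more likely source of the testimony we actually have. The sketch below illustrates the idea; all the probabilities are invented for illustration:

```python
# A toy Bayesian reading of Hume's razor (all numbers invented).
# The testimony we have could arise two ways: the claim is true,
# or the witness is mistaken (or lying). Believe the claim only
# if the "claim is true" route better explains the testimony.

def believe_claim(prior_claim_true, p_testimony_if_true,
                  p_testimony_if_false):
    """Return True only if the claim being true is the more likely
    explanation of the testimony we actually have."""
    weight_true = prior_claim_true * p_testimony_if_true
    weight_false = (1 - prior_claim_true) * p_testimony_if_false
    return weight_true > weight_false

# A miracle: vanishingly improbable to begin with, while honest
# error and deception are depressingly common.
print(believe_claim(prior_claim_true=1e-9,
                    p_testimony_if_true=0.99,
                    p_testimony_if_false=0.01))  # False: doubt the testimony

# A mundane claim ("I saw a bus this morning"): quite probable
# to begin with, so the same testimony now carries the day.
print(believe_claim(prior_claim_true=0.5,
                    p_testimony_if_true=0.9,
                    p_testimony_if_false=0.01))  # True: provisional assent
```

The point of the sketch is that identical testimony warrants belief in one case and doubt in the other: what changes is how probable the claim was before the testimony arrived.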
Playing the Devil’s Advocate
This is a tool which can be used to promote open-minded skepticism. It involves strong advocacy of positions that negate the proposition in question. Like a good debater, the devil’s advocate employs reason to examine a position or argument in order to test its validity, bringing up facts or points that are unfavourable to it. In this sense it is similar to spinning another hypothesis.
The role of the devil’s advocate is to be skeptical and to find flaws in the proponent’s argument despite the fact that this person often also believes the argument. Whether the devil’s advocate ultimately believes any of the points being made is irrelevant. This is a process to go through in order to ensure an argument is as flawless as possible - through the proponent’s systematic rebuttal of the points made by the devil’s advocate. (For those with an etymological bent, the expression "devil’s advocate" comes from Catholicism. It was an argumentative position taken up in order to challenge claims of advocates putting forward individuals of saintly reputation for beatification and canonisation.)
Let's consider an example that demonstrates a few of these things.

After the tragedy of the September 11 attacks in the US, it was a given that the conspiracy nuts would be out in full force. Their claims are as numerous and detailed as they are paranoid, single-minded and simple-minded. Broadly, they believe the US government, CIA and military orchestrated the attacks. They make claims such as that the US military flew planes into the World Trade Centre and used a bunker buster bomb to attack the Pentagon. Al-Qaeda, in contrast to what the "official" version of events would have us believe, had nothing to do with it.

We are not going to entertain any specific claims of 9-11 conspiracy theorists. That has been done elsewhere with far greater knowledge and skill than we could hope to muster. Moreover, it's not really possible to reason with such people, even with a point-by-point take-down. However, here are four points about such beliefs in general.

First, playing the devil's advocate and spinning another (or rather, the official) hypothesis: if it's a choice between a conspiracy and a "stuff up", go with the stuff up every time. 9-11 was a stuff up by intelligence agencies. (To be fair to conspiracy theorists, there was a conspiracy - by al-Qaeda.)

Second, the White House was incapable of covering up something as small as Bill Clinton's "extra-curricular" activities with an intern and a cigar - something known to only a few people. Yet conspiracy theorists would have us believe it was capable of covering up a plot that would require the perfect compliance and secrecy of hundreds of people?

Third, let us apply Occam's razor to both these points. To be justified in applying Occam's razor, the conspiracy needs explanatory power equal to that of the standard explanation. For the sake of argument, we will grant the 9-11 conspiracy equal explanatory validity to the official version. We can apply Occam's razor and still refute it. By definition, conspiracies are always more complex than the standard explanation - conspiracies involve extra steps to cover them up. More steps, more complexity. To rationally believe a conspiracy, it must therefore have better explanatory power than the standard explanation. But, of course, they never do (which, presumably, is why they're not the standard version!).

Fourth, following from this we can now see a place for Hume's razor. We are asked to believe a 9-11 conspiracy theory. In deciding whether to believe it (give provisional assent) or not we should ask: "Is it more likely that the conspiracy is true, or that the evidence for the conspiracy (as outlined by the conspiracy theorists) is mistaken or can be interpreted in a different or more realistic way?" Putting it simply - extraordinary claims require extraordinary evidence. Conspiracies such as 9-11 certainly fall under the category of an "extraordinary claim", yet their evidence is completely unconvincing (and at times, quite laughable).

The official version - al-Qaeda (who, of course, have claimed responsibility for 9-11) hijacked and crashed four planes - is far more convincing. It's not that extraordinary: Muslim terrorists exist, have carried out suicide attacks before and continue to do so, and state their hatred of America and the West on a regular basis. It would be extraordinary, however, if the evidence from 9-11 was faked, mistaken or could be interpreted in a more convincing way.
Falsification

When someone proposes a theory about something, ask: could it be tested? Karl Popper argued that science proceeds by refuting false hypotheses, not by confirming true ones.
Popper's work has been criticised, but what we can say about his idea of falsification is that, at the very least, for a hypothesis to be worthy of consideration it needs to be testable by some method. There needs to be some criterion, some kind of evidence (at least in theory), that would allow us to judge the hypothesis false. The more detailed and specific the hypothesis the better, as this means there is potentially more evidence that would warrant us judging it false. If we can find no such evidence, then the explanation looks quite strong. Killing erroneous hypotheses is the paradox by which science proceeds. As Popper said: "Our belief in any particular natural law cannot have a safer basis than our unsuccessful critical attempts to refute it." (Conjectures and Refutations, p. 75)
Falsification should be applied to any idea about how the physical world works. Any hypothesis about the physical world should be falsifiable - expressed as a predictive statement that through experimental investigation could be shown to be false. Applying falsification to the non-physical world - such as politics - is a little more problematic. However, it's always worth having the idea of falsification in the back of your mind when someone proposes an explanation for something. That is, "How could I tell if what you're saying is horse shit or not?"
When a claim is not falsifiable, one is dealing with an immunised hypothesis. Claims about the physical world that are based on immunised hypotheses are pseudo-scientific. Examples of pseudo-sciences that Popper became suspicious of are psychoanalytic beliefs such as those of Freud, and political ideologies such as Marxism. In Popper's view, these theories could never go wrong, as they were sufficiently flexible to accommodate any type of new behaviour. No observation or test could show these theories to be false, as their proponents were able to invent 'just-so' stories to account for any possible behaviour. These theories gave the appearance of being able to explain everything, but in fact they explained nothing, as they could rule out nothing. Of course there are other pseudo-scientific beliefs we can easily include with the previously mentioned ones - intelligent design, astrology, fortune telling, tarot cards…
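The contrast between a falsifiable hypothesis and an immunised one can be sketched with toy predicates. The swan examples below (including the deliberately empty "fate intended" hypothesis) are invented for illustration:

```python
# A toy illustration of falsifiability (examples invented).
# A hypothesis is represented as a prediction about observations;
# it is falsifiable only if some conceivable observation could
# contradict that prediction.

def is_refuted(prediction, observations):
    """A hypothesis survives only until one observation breaks
    its prediction; a single counterexample refutes it."""
    return any(not prediction(obs) for obs in observations)

# Falsifiable: "all swans are white" forbids a non-white swan,
# so a single black swan kills it.
all_swans_white = lambda swan: swan == "white"
print(is_refuted(all_swans_white, ["white", "white", "black"]))  # True

# Immunised: "every swan is the colour fate intended" forbids
# nothing, so no observation can ever refute it - and for the
# same reason it explains nothing.
fate_intended = lambda swan: True
print(is_refuted(fate_intended, ["white", "black", "green"]))    # False
```

The second hypothesis is unrefutable not because it is strong but because it rules nothing out, which is exactly Popper's complaint about theories that appear to explain everything.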
An important point to note about falsification is that it can only discriminate between scientific and non-scientific propositions. On its own it cannot establish the truth of those propositions. It is also difficult to apply to very complex phenomena, where the chain of cause and effect can be lost in a dizzying number of feedback loops, often occurring across many spatial and temporal scales.
Inversion

Inversion involves using an opponent's argument against them by reversing the direction of their reasoning - that is, turning the argument on its head. If the reversal leads to a dubious or absurd conclusion (reductio ad absurdum), then we have reason to assume the original argument is flawed.
Consider the following fictitious example.
Shamoo Shabang is opposed to embryonic stem cell research. She says: "You can’t destroy an embryo as each is a ‘potential life’."
Her friend Olive Olivers responds by pointing out: "The basics of biology tell us that there are many stages in the development of life. Given this, it seems rather arbitrary to say the ‘potential’ begins with fertilisation."
She goes on to invert Shamoo’s argument by heading in the other direction: "Before fertilisation we have the ‘potential life’ of each sperm and ovum (each of them, potentially, could fuse and start off the whole process). If we follow the ‘potential life’ reasoning we ought to attribute them this moral status too. This is patently absurd."
Though Olive has given a reason to doubt the "potential life" argument, Shamoo understands inversion also, and inverts the inversion: "If I granted that argument any validity, then surely we can go the other way? Concluding that there is no point at which we can attribute ‘potential life’ to a gestating human, and as a consequence award no moral status to any unborn child? Clearly, you’d agree, that’s an abhorrent position?"
Substitution

Substitution involves replacing the group, person or phenomenon in the proposition someone is arguing for (or against) with another group, person or phenomenon about which they hold a different (often inverse) position. Assuming the chain of reasoning ought to be comparable, it follows that a comparable conclusion ought to be reached.
The following example illustrates how substitution can be used.
Lee N Wright is arguing with his girlfriend Ima Green about the justification for military pre-emptive strikes on terror cells and rogue states. He says: "We might not be one hundred percent sure that there will be a terrorist attack, but the precautionary principle means we ought to act before it’s too late."
Ima is aware that Lee is an anthropogenic climate change contrarian who thinks there is no need to worry about greenhouse gas emissions. She points out Lee’s inconsistency by simple substitution: "We might not be one hundred percent sure that climate change is man-made, but the precautionary principle means we ought to act before it’s too late."
If Lee invokes the precautionary principle in one case, it could be argued that he needs to uphold it in all cases. He either needs to back away from his argument for pre-emptive strikes, or alter his position on climate change. (This assumes there are no significant differences between the two cases; otherwise one could be guilty of the fallacy of false analogy.)
Of course, Ima is in exactly the same position as Lee, given that she thinks we ought to act to minimise man-made climate change. (We are often left wondering how people are able to function whilst suffering from such obvious cognitive dissonance.)
The Socratic Method
In all these cases, it is more than likely that an argument between proponents will go back and forth. In this sense it can be thought of as a dialectic inquiry. The Socratic method is a form of dialectic that, in its strictest sense, isn’t of much use in real life. However, it is worth discussing given its historical importance, and as a precursor to a "looser" version which is of greater practical use in humbug hunting.
Essentially the Socratic method is a running dialogue between two people, one taking the lead role, in a question and answer format. The purpose is to establish the truth of the matter under consideration by proposition, contradiction of the proposition, repetition of this process and then eventual synthesis (the truth). In its strict sense, the requirement is for both people to have an agreed upon topic, to remain on topic, and to proceed by a question (from the lead person) and response (from the minor person). The lead looks for fallacies and contradictions in the minor’s responses, which are then used to drive the discussion forward until, eventually, the truth of the matter is attained.
The non-strict interpretation, which is entirely more useful, is any kind of thorough question and answer dialogue, with all involved agreeing that the questions are answered and that the goalposts remain firmly in place. This sets up the "ground rules" for a discussion, and requires a strong lead who refuses to allow the dialogue to go off topic and will not tolerate non-answers (so perhaps even this interpretation is somewhat unrealistic). The best example of this is actually the epitome of its converse: any interview with any politician, anytime in the past, present or future. The second best example is an excellent teacher asking a series of questions of a bright student.
As a tool for truth seeking, the Socratic method is useful in particular domains of knowledge - in ethics and epistemology (for example) but not (usually) science. And even then the Socratic method will more than likely not yield the truth. It is best viewed as a tool used to clarify ideas, spin other hypotheses, defend a position and remove contradictions and fallacies from that position. Much like playing the devil’s advocate, the Socratic method is guaranteed to keep you "on your toes" and challenge unthinking dogmatism.