The thinking error at the root of science denial

by Jeremy Shapiro edited by O Society August 8, 2019

Currently, there are three important issues on which there is scientific consensus but controversy among laypeople: climate change, biological evolution and childhood vaccination. On all three issues, prominent members of the Trump administration, including Trump himself, have lined up against the conclusions of research.

This widespread rejection of scientific findings presents a perplexing puzzle to those of us who value an evidence-based approach to knowledge and policy.

Yet many science deniers do cite empirical evidence. The problem is that they do so in invalid, misleading ways, and psychological research illuminates those ways.


No shades of grey

As a psychotherapist, I see a striking parallel between a type of thinking involved in many mental health disturbances and the reasoning behind science denial. Dichotomous thinking, also called black-and-white and all-or-none thinking, is a factor in depression, anxiety, aggression and, especially, borderline personality disorder.

In this type of cognition, a spectrum of possibilities is divided into two parts, with a blurring of distinctions within those categories. Shades of gray are missed; everything is considered either black or white. Dichotomous thinking is not always or inevitably wrong, but it is a poor tool for understanding complicated realities because these usually involve spectra of possibilities, not binaries.

Spectra are sometimes split in very asymmetric ways, with one-half of the binary much larger than the other. For example, perfectionists categorize their work as either perfect or unsatisfactory; good and very good outcomes are lumped together with poor ones in the unsatisfactory category. In borderline personality disorder, relationship partners are perceived as either all good or all bad, so one hurtful behavior catapults the partner from the good to the bad category. It’s like a pass/fail grading system in which 100 percent correct earns a P and everything else gets an F.

In my observations, science deniers engage in dichotomous thinking about truth claims. In evaluating the evidence for a hypothesis or theory, they divide the spectrum of possibilities into two unequal parts: perfect certainty and inconclusive controversy. Any bit of data that does not support a theory is misunderstood to mean the formulation is fundamentally in doubt, regardless of the amount of supportive evidence.

Similarly, science deniers perceive the spectrum of scientific agreement as divided into two unequal parts: perfect consensus and no consensus at all. Any departure from 100 percent agreement is categorized as a lack of agreement, which is misinterpreted as indicating fundamental controversy in the field.



There is no ‘proof’ in science

In my view, science deniers misapply the concept of “proof.”

Proof exists in mathematics and logic but not in science. Research builds knowledge in progressive increments. As empirical evidence accumulates, there are more and more accurate approximations of ultimate truth but no final end point to the process. Deniers exploit the distinction between proof and compelling evidence by categorizing empirically well-supported ideas as “unproven.” Such statements are technically correct but extremely misleading, because there are no proven ideas in science, and evidence-based ideas are the best guides for action we have.

I have observed deniers use a three-step strategy to mislead the scientifically unsophisticated. First, they cite areas of uncertainty or controversy, no matter how minor, within the body of research that invalidates their desired course of action. Second, they categorize the overall scientific status of that body of research as uncertain and controversial. Finally, deniers advocate proceeding as if the research did not exist.

For example, climate change skeptics jump from the realization that we do not completely understand every climate-related variable to the inference that we have no reliable knowledge at all. Similarly, they give equal weight to the 97 percent of climate scientists who accept human-caused global warming and the 3 percent who do not, even though many of the latter receive support from the fossil fuel industry.

This same type of thinking can be seen among creationists. They seem to misinterpret any limitation or flux in evolutionary theory to mean that the validity of this body of research is fundamentally in doubt. For example, the biologist James Shapiro (no relation) discovered a cellular mechanism of genomic change that Darwin did not know about. Shapiro views his research as adding to evolutionary theory, not upending it. Nonetheless, his discovery and others like it, refracted through the lens of dichotomous thinking, result in articles with titles like, “Scientists Confirm: Darwinism Is Broken” by Paul Nelson and David Klinghoffer of the Discovery Institute, which promotes the theory of “intelligent design.” Shapiro insists that his research provides no support for intelligent design, but proponents of this pseudoscience repeatedly cite his work as if it does.

For his part, Trump engages in dichotomous thinking about the possibility of a link between childhood vaccinations and autism. Despite exhaustive research and the consensus of all major medical organizations that no link exists, Trump has repeatedly asserted that there is one, and he advocates changing the standard vaccination protocol to protect against this nonexistent danger.

There is a vast gulf between perfect knowledge and total ignorance, and we live most of our lives in this gulf. Informed decision-making in the real world can never be perfectly informed, but responding to the inevitable uncertainties by ignoring the best available evidence is no substitute for the imperfect approach to knowledge called science.


False Dichotomy and Science Denial

by Stephen Novella

Psychologist Jeremy Shapiro makes an interesting argument that one of the pillars of science denial is the false dichotomy. I agree, and this point is worth exploring further. He also points out that the same fallacy in thinking is common in several mental disorders he treats.

The latter point may be true, but it may be perceived as inflammatory or irrelevant. For example, he says that clients with borderline personality disorder often split the people in their world into all bad or all good: if you do one thing wrong, then you are a “bad” person. Likewise, for perfectionists, any outcome or performance less than perfect gets lumped into one category of “unsatisfactory.”

I do think these can be useful examples to show how dichotomous thinking can lead to or at least support a mental disorder. Part of the goal of therapy for people with these disorders is cognitive therapy, to help them break out of their pattern of approaching the world as a simple dichotomy. But we have to be careful not to imply science denial itself is a mental illness or disorder.


Denialism and False Dichotomy

A false dichotomy is a common logical fallacy in which many possibilities, or a continuum of possibilities, are rhetorically collapsed into only two choices. People are either tall or short, with no other option. There are just Democrats and Republicans.


While some physical properties may be truly dichotomous (electric charge is either positive or negative), people and the world itself usually display much more complex features. Most traits exist along a continuum. Yet our minds like simplicity, and we like to categorize and pigeon-hole things in order to mentally grapple with them. Using schematics and categories is fine, but we have to recognize they are not reality, which is often more messy.


These principles are especially true when dealing with very complex systems, like people. People are rarely if ever all good or all bad, for example. People generally are a complex combination of traits that range from vice to virtue, are often context dependent, and exist along a continuum.


Likewise, scientific understanding also cannot be understood as any simple dichotomy. I have written previously about the demarcation problem between science and pseudoscience, for example. We cannot divide all claims to science into two clean categories – with pristine science on one end and pure pseudoscience at the other. There is a continuum with no clear dividing line between the two.


However, we can identify methods and features that are scientifically valid and others that are flawed. The more valid features any scientific endeavor has, the more legitimate a science it is; the more dubious features it has, the more pseudoscientific it is. So while there is no sharp demarcation line, there are two recognizable ends of the spectrum. Denying this reality is also a logical fallacy – the false continuum.


Scientific knowledge also falls along a continuum. No fact is established to 100% metaphysical certitude, nor can we assign a 0% probability to any claim. This is because human knowledge is limited and depends on our perspective, our frame of reference, and perhaps unknown assumptions.


Still, this does not mean that we cannot be 99.99% certain some basic fact about the universe is probably true. The world is roughly a sphere. We can be certain of that (despite the delusions of flat-earthers) to such a high degree that we can treat it as 100%. Similarly, we can say that homeopathy has as close to a 0% chance of having a real medical effect as we can get in medicine. You can place every scientific claim along this spectrum, based on existing evidence, competing theories, known unknowns, and other factors. The more well-established independent lines of evidence point to one conclusion, the more confident we can be in that conclusion.


So while there is a continuum of confidence in scientific facts and theories, we can divide that continuum up into practical categories. There are well-established facts that we can use as a solid foundation. There are theories that are sufficiently well established that we can act upon them, even if some small uncertainty or room for doubt remains. Other claims are possibly true, but we should treat them with caution. Some claims in the middle are a toss-up: we really cannot say with any confidence one way or the other. Then there are claims that are probably not true, but there is room for a minority opinion and we shouldn’t write them off just yet. And finally there are claims and theories that have been sufficiently disproved that we can move on and stop wasting further resources on pursuing them.
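The binning described above can be sketched as a simple function. The numeric thresholds below are hypothetical illustrations of the idea, not values from the article; the point is that the continuum maps to six practical categories, not to a denialist's two.

```python
# Sketch of binning a continuum of scientific confidence into the
# practical categories described above. The numeric cutoffs are
# hypothetical -- what matters is the mapping, not the thresholds.

def confidence_category(p: float) -> str:
    """Map a confidence level p in [0, 1] to a practical category."""
    if p >= 0.99:
        return "well-established fact"
    elif p >= 0.90:
        return "established enough to act on"
    elif p >= 0.60:
        return "possibly true, treat with caution"
    elif p >= 0.40:
        return "toss-up"
    elif p >= 0.01:
        return "probably not true, minority opinion possible"
    else:
        return "sufficiently disproved"

# Six categories, not two: the "proven vs. controversial" dichotomy
# erases everything in the middle.
print(confidence_category(0.9999))  # well-established fact
print(confidence_category(0.95))    # established enough to act on
```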


We can quibble about where exactly to draw the lines, and about exactly where any one scientific claim exists on this spectrum, and this debate is healthy. It is part of the scientific process. Designations are also moving targets, revised as new evidence and new ideas are brought to bear.

Shapiro is correct that science denialism, as one of its strategies, collapses this continuum into a false dichotomy: scientific conclusions are either rock solid, or they are suspect and controversial at best and bogus at worst. Denialists ignore the huge part of the spectrum where we can treat theories as probably true, even if minor uncertainty remains. The purpose of this strategy is to point to unknowns, apparent anomalies, apparent contradictions, or any dissent among scientists (no matter how minor) as evidence that a theory is not 100% rock solid. Therefore, the theory is controversial and suspect, even lunacy and quackery.


So evolution deniers will point to “gaps” in the fossil record as if that calls the entire theory into question. Or they will point to disagreements among scientists about some of the details of evolution to claim that the entire theory is controversial and there is no consensus. Any chink, any flaw, and the whole theory collapses, in their view.

Scientists often inadvertently feed this strategy, because we operate in the real world where scientific knowledge is a continuum. We sometimes make statements about how disruptive a new discovery is, or how little we understood prior to a breakthrough, without realizing how easily such statements can be misused to attack the science itself. This is an important principle of effective science communication: give an accurate portrayal of how science progresses. This means resisting the urge to overhype your own research.

Scientists operate within a scientific paradigm, so when we make casual statements like, “We have no idea how this works,” we unconsciously assume that people will put such statements into the same scientific context in which they were meant. But often this is not the case. Usually such absolute statements are not literally true – we often have lots of ideas, and lots of evidence, but there may still be competing theories, or we may lack solid confirming evidence.

Science needs to be understood as the messy, flawed, but at its best rigorous, thorough, and careful endeavor that it is. We don’t know everything, and we don’t necessarily know anything 100%. This does not mean we know nothing, or that you can casually dismiss any scientific conclusion you don’t like. We do know stuff, and some stuff we know to such a high degree of confidence that we can treat it as a fact. Other things we can say with sufficient confidence to base important decisions on those conclusions. I practice medicine, so this is my daily life.


The Problem of Induction

Climate change is a perfect example. There are significant uncertainties in exactly what is happening and will happen with the climate, all the feedback mechanisms at play, and what the net results will be. But we do have a fairly high degree of confidence that releasing large amounts of previously sequestered carbon into the atmosphere is forcing rising average global temperatures, with potentially inconvenient effects. The consensus on the evidence is strong enough to act, even with the lingering uncertainty.


Waiting for 100% certainty is rarely practical. If you approached health care this way, you would be paralyzed into inaction with very bad outcomes. If we were only 95% confident that an asteroid was going to wipe out all life on Earth, I think we should act on that 95%, and not quibble about the 5%.
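The asteroid example can be made concrete with a toy expected-value calculation. All numbers here are invented for illustration; the point is that when the downside is catastrophic, 95% confidence is far more than enough to justify acting.

```python
# Toy expected-value comparison for the asteroid example above.
# Costs are invented for illustration.

p_impact = 0.95            # confidence the asteroid will hit
cost_extinction = 1e15     # arbitrary units: loss if we do nothing and it hits
cost_deflection = 1e10     # cost of a deflection mission, paid either way

expected_cost_inaction = p_impact * cost_extinction
expected_cost_action = cost_deflection

# Acting is cheaper in expectation by a factor of ~95,000.
print(expected_cost_inaction / expected_cost_action)
```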
