There is No Absolute Truth: On Covid-19, Misinformation, and Bullshit

Julia Wong interviews Carl Bergstrom. Edited by O Society, May 2, 2020.

Carl Bergstrom is uniquely suited to understanding the current moment. A professor of biology at the University of Washington, he has spent his career studying two seemingly disparate topics: emerging infectious diseases and networked misinformation. The two merged into one the moment reports of a mysterious respiratory illness emerged from China in December 2019.

The coronavirus touched off a pandemic and an “infodemic” of hoaxes, conspiracy theories, honest misunderstandings, and politicized scientific debates. Bergstrom jumps into the fray to help the public and the press navigate the world of epidemiological models, statistical uncertainty, and the topic of his forthcoming book: bullshit.

The following interview is edited for length and clarity.

You teach a course and co-wrote a book about the concept of bullshit. What do you mean by bullshit?

The formal definition we use is “language, statistical figures, data, graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener with a blatant disregard for truth or logical coherence.”


The idea with bullshit is that it tries to appear authoritative and definitive in a way that isn’t really about communicating accurately and informing the reader, but rather about overwhelming them, persuading them, impressing them. If it’s done without any allegiance to truth or accuracy, then it’s bullshit.

Why people believe bullshit is profound

Unfortunately, we’re all used to verbal bullshit. We’re all used to campaign promises and weasel words, and maybe we’re pretty good at seeing through it because we’ve had a lot of practice.

Yet as the world becomes increasingly quantified and the currency of arguments becomes statistics, facts and figures and models and such, we’re increasingly confronted, even in the popular press, with numerical and statistical arguments.

And this area is really ripe for bullshit, because people don’t feel qualified to question information given to them in quantitative form.

Are there bullshit narratives about the coronavirus you are concerned about right now?

What’s happened with this pandemic, and what we’re not accustomed to in the epidemiology community, is that it’s become really heavily politicized. Even when scientists are very well-intentioned and not trying to support any side of the narrative, when they do work and release a paper, it gets picked up by actors with political agendas.

Whether it’s talking about seroprevalence or estimating the chance this is going to overwhelm the United States, each study gets picked up and placed into this little political box and sort of used as a cudgel to beat the other side with.

So even when the material itself isn’t being produced as bullshit, it’s being picked up and used in the service of bullshit by overstating its claims, by cherry-picking the information out there, and so on. And I think that’s the biggest problem we’re facing.

One example of intentional bullshit is this insistence for a while on graphing the number of cases on a per-capita basis, so people could say the US response is so much better than the rest of the world because we have a slower rate of growth per capita. That is basically graphical malfeasance or bullshit.

Graphing COVID-19 Cases Per Capita Is Bullshit

When a wildfire starts spreading, you’re interested in how it’s spreading now, not whether it’s spreading in a 100-acre wood or millions of square miles of national forest.
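To make the per-capita point concrete, here is a minimal numerical sketch, with all figures invented for illustration rather than taken from the interview. Two countries with identical outbreaks, one of them fifty times more populous, show the same absolute counts and the same doubling time, yet the larger country’s per-capita curve sits fifty times lower:

```python
# Toy illustration (invented numbers): two countries with identical
# epidemics look very different once cases are divided by population.

cases = [1_000, 2_000, 4_000, 8_000]   # same counts, doubling each interval
pop_large = 330_000_000                # roughly US-sized population (assumed)
pop_small = 6_600_000                  # a country 1/50th that size (assumed)

for day, c in enumerate(cases):
    print(f"t={day}: absolute={c:>6,}  "
          f"large country per capita={c / pop_large:.2e}  "
          f"small country per capita={c / pop_small:.2e}")

# Absolute counts and doubling time are identical, but the large country's
# per-capita curve sits 50x lower -- the fire is the same size and
# spreading just as fast.
```

On a log scale the two per-capita curves even have the same slope; the normalization only shifts them downward, which is why it says nothing about how fast the fire is spreading.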

Is there one big lesson that you think the media should keep in mind as we communicate science to the public? What mistakes are we making?

When I’m talking about how to avoid misinformation around this I’m constantly telling people to trust the professional fact-based media. Rather than looking for the latest rumor spreading across Facebook or Twitter so you can have information up to the hour, recognize it’s much better to have solidly sourced, well-vetted information from yesterday.

Hyper-partisan media are making a huge mess of this, but that’s on purpose. They’ve got a reason to promote hydroxychloroquine or whatever it is and just run with it. They’re not even trying to be responsible.

Brandolini’s law: The amount of energy necessary to refute bullshit is an order of magnitude bigger than it takes to produce it.

One of the biggest things people can do to improve is to recognize scientific studies, especially in a fast-moving situation like this, are provisional. That’s the nature of science. Anything can be corrected. There’s no absolute truth there. Each model, each finding is just adding to a weight of evidence in one direction or another.

Early on, for example, the Imperial College team modeled an uncontrolled epidemic and projected on the order of a million or more deaths in the US. Then Imperial came back and modeled a completely different scenario, where the disease was actually brought under control and suppressed in the US, and they released a subsequent model that said, ‘If we do this, something like 50,000 deaths will occur.’ That was picked up by the other side and used to try to discredit the Imperial College team entirely by saying, ‘A couple of weeks ago they said a million; now they’re saying 50,000. They can’t get anything right.’ And the answer, of course, is that they were modeling two different scenarios.

We’re also not doing enough to deliberately stress the possible weaknesses of our interpretations. That varies enormously from researcher to researcher and team to team.

It requires a lot of discipline to argue really hard for something but also be scrupulously open about all of the weaknesses in your own argument.

But it’s more important than ever, right? A really good paper will lay out all the most persuasive evidence it can and then in the conclusion section or the discussion section say, ‘OK, here are all the reasons that this could be wrong and here are the weaknesses.’

When you have something that’s so directly policy-relevant, and there are a lot of lives at stake, we’re still learning how to find the right balance.

It is a bit of a nightmare to put out data that are truthful while also being aware there are bad-faith actors who will pounce on them and use them in a way you didn’t intend.

There’s a spectrum. You have outright bad-faith actors, and then you have misinformed actors.

There’s so much that goes unsaid in science in terms of context and what findings mean that we don’t usually write in papers. If someone does a mathematical forecasting model, you’re usually not going to have a half-page discussion on the limitations of forecasting.

We’re used to writing for an audience of 50 people in the world, if we’re lucky, who have backgrounds that are very similar to our own and have a huge set of shared assumptions and shared knowledge. And it works really well when you’re writing on something that only 50 people in the world care about and all of them have comparable training, but it is a real mess when it becomes pressing, and I don’t think any of us have figured out exactly what to do about that, because we’re also trying to work quickly and it’s important to get this information out.

One area that is contentious and in some ways politicized is the serology surveys, which are supposed to show what percentage of the population has antibodies to the virus. What are some of the big picture contextual caveats and limitations that we should keep in mind as these surveys come out?

The seroprevalence in the US is a political issue, and so the first thing is to recognize that when anyone is reporting on that stuff, there’s a political context to it. Maybe some of the research is being done with an implicitly political context, depending on who the funders are or what the orientations and biases of some of the researchers are.


On the scientific side, I think there are really two things to think about. The first one is the issue of selection bias. You’re trying to draw a conclusion about a population by sampling from a subset of that population, and you want to know how close to random your subset is with respect to the thing you’re trying to measure. The Santa Clara study recruited volunteers off of Facebook. The obvious source of sampling bias there is that people desperately want to get tested, and the people who want it are, of course, people who think they have it.
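A toy simulation shows how strong this effect can be. Everything below is invented for illustration: assume a true seroprevalence of 2% and assume seropositive people are ten times more likely to volunteer for testing.

```python
import random

random.seed(0)

TRUE_PREVALENCE = 0.02   # assumed: 2% of the population is seropositive
VOLUNTEER_BOOST = 10.0   # assumed: positives are 10x likelier to volunteer
POPULATION = 100_000
SAMPLE_SIZE = 3_000

# Build a population; seropositive people get a higher volunteering weight
# (e.g. because they suspect they were infected and want confirmation).
population = [random.random() < TRUE_PREVALENCE for _ in range(POPULATION)]
weights = [VOLUNTEER_BOOST if positive else 1.0 for positive in population]

# Recruit "volunteers" in proportion to those weights, then estimate
# prevalence from the self-selected sample.
sample = random.choices(population, weights=weights, k=SAMPLE_SIZE)
estimate = sum(sample) / SAMPLE_SIZE

print(f"true prevalence:  {TRUE_PREVALENCE:.1%}")
print(f"biased estimate:  {estimate:.1%}")  # roughly 17%, about 8x too high
```

Under these assumptions the survey overstates prevalence severalfold without anyone fabricating anything; the bias is baked into who shows up.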

The other big piece is understanding the notion of positive predictive value and the way false positive and false negative error rates influence the estimate. And that depends on the incidence of infection in the population.

If you have a test that has a 3% error rate, and the incidence in the population is below 3%, then most of the positives that you get are going to be false positives. And so you’re not going to get a very tight estimate about how many people have it. This has been a real problem with the Santa Clara study. From my read of the paper, their data are actually consistent with nobody being infected.

A New York City study, on the other hand, found about 21% seropositive, so even if there is a 3% error rate, the majority of those positives have to be true positives.
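The arithmetic behind both cases is a short Bayes calculation. The sketch below uses the 3% false-positive rate mentioned above plus an invented 10% false-negative rate, and treats the raw positive rates as rough stand-ins for prevalence:

```python
def positive_predictive_value(prevalence, false_pos_rate, false_neg_rate):
    """Probability that a positive test result is a true positive."""
    true_pos = prevalence * (1 - false_neg_rate)    # infected, test positive
    false_pos = (1 - prevalence) * false_pos_rate   # uninfected, test positive
    return true_pos / (true_pos + false_pos)

# Low-prevalence setting (Santa Clara-like): when prevalence is near the
# false-positive rate, a large share of positives are false.
print(positive_predictive_value(0.02, 0.03, 0.10))  # ~0.38

# High-prevalence setting (New York-like): with the same test, most
# positives must now be genuine.
print(positive_predictive_value(0.21, 0.03, 0.10))  # ~0.89
```

Same test, same error rates; only the underlying prevalence differs, and it flips whether a positive result is more likely true or false.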

Now that we’ve all had a crash course in models and serosurveys, what are the other areas of science where it makes sense for the public to start getting educated on the terms of the debate?

One that I think will come along sooner or later is interpreting studies of treatments. We’ve dealt with that a little bit with the hydroxychloroquine business but not in any serious way because the hydroxychloroquine work has been pretty weak and the results have not been so positive.

But there are ongoing tests of a large range of existing drugs. And these studies are actually pretty hard to do. There’s a lot of subtle technical issues: what are you doing for controls? Is there a control arm at all? If not, how do you interpret the data? If there is a control arm, how is it structured? How do you control for the characteristics of the population on whom you’re using the drug or their selection biases in terms of who’s getting the drug?

Unfortunately, given what we’ve already seen with hydroxychloroquine, it’s likely this will be politicized as well. There’ll be a parallel set of issues to come around with vaccination, but that’s more like a year off.

If you had the ability to arm every person with one tool – a statistical tool or scientific concept – to help them understand and contextualize scientific information as we look to the future of this pandemic, what would it be?

I would like people to understand there are interactions between the models we make, the science we do and the way we behave. The models we make influence the decisions we make individually and as a society, which then feed back into the models and the models often don’t treat that part explicitly.

Once you put a model out there, it creates changes in behavior that pull you out of the domain the model was trying to describe in the first place. We have to be very attuned to that as we try to use the models for guiding policy.
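One way to picture this feedback is a toy epidemic model in which people cut their contacts whenever prevalence gets alarming. All parameters below are invented for illustration; the point is only that the “nothing changes” projection never comes true once people react to it:

```python
def sir(beta0, gamma=0.1, days=300, feedback=False, threshold=0.01):
    """Discrete-time SIR model. With feedback=True, the contact rate drops
    to 30% of normal whenever prevalence exceeds the threshold, mimicking
    people reacting to alarming numbers and projections."""
    s, i, r = 0.999, 0.001, 0.0
    peak = 0.0
    for _ in range(days):
        beta = beta0 * (0.3 if feedback and i > threshold else 1.0)
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak, r  # peak prevalence, cumulative share infected

for label, fb in [("no behavior change", False), ("with feedback", True)]:
    peak, total = sir(beta0=0.3, feedback=fb)
    print(f"{label:>20}: peak={peak:.1%}  total infected={total:.1%}")
```

The first run is the scenario a naive model projects; the second is what happens once the scary projection itself changes behavior, which is exactly the loop most models leave implicit.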

That’s very interesting, and not what I expected you to say.

What did you expect?

That correlation does not imply causation.

That’s another very good one. Seasonality is a great example there. We’re trying a whole bunch of things at the same time. We’re throwing all kinds of possible solutions at this and lots of things are changing. It’s remarkable to me actually, that so many US states are seeing the epidemic curve decrease. And so there’s a bunch of possibilities there. It could be because people’s behavior is changing. There could be some seasonality there. And there are other possible explanations as well.

But what is really important is that just because the trend you see is consistent with a story someone’s selling, there may be many other stories that are also consistent with it, so inferring causality is dangerous.

Calling Bullshit: The Art of Scepticism in a Data-Driven World by Carl Bergstrom and Jevin West will be published 4 August 2020 by Penguin.
