Covid vs. Climate Modeling: Cloudy With a Chance of Politics

Are statistics-driven computer models accurate? Let's ask Governor Cuomo.

COVID-19 has proved to be a crisis not only for public health but for public policy. As credentialed experts, media commentators, and elected officials have insisted that ordinary men and women heed “the science,” the statistical models cited by scientists to predict the spread of contagion and justify the lockdown of the national economy have proven to be far off-base.

Gov. Andrew Cuomo of New York complained this week about the “guessing business” experts had presented to him dressed up as scientific fact: “All the early national experts [said]: Here's my projection model. Here's my projection model,” Cuomo said. “They were all wrong. They were all wrong.”
The COVID models were clearly wrong. Yet they were used to make public policy that locked Americans down for months and crushed our economy.

What other statistics-driven computer models have an outsized effect on public policy? Climate change models. Are they just as accurate as the COVID models? Yes. Only the mildest of them have come close to reality; the rest have been wildly off.

As Texas Sen. John Cornyn tweeted: “After #COVID-19 crisis passes, could we have a good faith discussion about the uses and abuses of ‘modeling’ to predict the future? Everything from public health, to economic to climate predictions. It isn't the scientific method, folks.”


Scientific American sought to dismiss such concerns in an April 15 article headlined “Climate Science Deniers Turn to Attacking Coronavirus Models.” While not exactly defending the methodology used in the models, the article said they were wrong “because millions of Americans responded to pleas for social distancing.” It then invoked newer models that would also prove to be wrong – forecasting only 60,000 U.S. deaths; there are now more than 107,000 – before defending the original alarmist numbers with what almost sounds like an argument for the politicization of science, from the coronavirus to climate change: “Health experts say the models worked the way they were supposed to – by providing a glimpse into a dire future that was partially averted because of collective action.”


Building complex models is both a science and an art. It requires vast amounts of data representing a range of factors that might influence a particular question. To predict the spread of COVID-19, for example, researchers need reliable data on a wide range of factors, including how infectious the virus is, how it is transmitted, and how much of the population is susceptible to the worst outcomes. They have to assign a weight to each factor in the model, and then crunch the numbers with powerful computers to produce probabilities of possible outcomes.
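To make the ingredients concrete, here is a minimal sketch of the kind of compartmental (SIR) model epidemiologists build on. Every number in it – the transmission rate, the recovery rate, the population size – is a hypothetical assumption chosen for illustration, not a value fitted to real COVID-19 data:

```python
# Minimal discrete-time SIR epidemic sketch (illustrative only; all
# parameter values are hypothetical assumptions, not real estimates).

def run_sir(population, beta, gamma, initial_infected, days):
    """Simulate an SIR model; return the daily count of infected people.

    beta  - assumed transmission rate (contacts x infection probability per day)
    gamma - assumed recovery rate (1 / average infectious period in days)
    """
    s = float(population - initial_infected)  # susceptible
    i = float(initial_infected)               # infected
    r = 0.0                                   # recovered
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

curve = run_sir(population=1_000_000, beta=0.3, gamma=0.1,
                initial_infected=10, days=200)
peak_day = max(range(len(curve)), key=lambda d: curve[d])
print(f"peak infections on day {peak_day}: {curve[peak_day]:.0f}")
```

Real forecasting models layer far more structure on top of this skeleton (age groups, contact patterns, hospitalization rates), but the basic shape – a handful of weighted assumptions driving the whole curve – is the same.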


Models may be helpful in thinking about the results of various policies. But they are easily oversold as providing answers with mathematical certainty. Writing in the BMJ (formerly the British Medical Journal), Devi Sridhar, a professor of public health at Edinburgh University, and Maimuna Majumder, a computational epidemiologist affiliated with Harvard Medical School, chide the “modeling community” for failing to make the limitations of models clear. Sridhar and Majumder call for transparency about the assumptions modelers make and clarity about how much the predictions shift when even small changes are made to the assumptions. Most of all, they urge humility about just how uncertain such models are.
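The point Sridhar and Majumder make about small changes in assumptions can be shown with a toy calculation. In the sketch below, nudging an assumed daily growth rate from 20% to 25% – both figures hypothetical, chosen only for illustration – changes a 60-day projection by roughly an order of magnitude:

```python
# Sensitivity sketch: a small change in one assumed parameter swings a
# projection enormously. All numbers here are hypothetical.

def project_cases(initial_cases, daily_growth_rate, days):
    """Project cumulative cases under simple compounding exponential growth."""
    return initial_cases * (1 + daily_growth_rate) ** days

low = project_cases(initial_cases=100, daily_growth_rate=0.20, days=60)
high = project_cases(initial_cases=100, daily_growth_rate=0.25, days=60)
print(f"20% daily growth after 60 days: {low:,.0f} cases")
print(f"25% daily growth after 60 days: {high:,.0f} cases")
print(f"ratio between the two projections: {high / low:.1f}x")
```

This is exactly why the authors insist on transparency about assumptions: a reader who sees only the headline number has no way to know it sits on such a knife's edge.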