Politicization of the Coronavirus Pandemic
Thoughts on coronavirus modeling
A response to a Washington Post story on disease modelers and the political response to their models
Today, March 26, the Washington Post published an article discussing the politicization of actions taken to deal with the coronavirus outbreak. In particular, the models and science behind the recommended measures have been called into question and, at various times and in various places, declared a hoax.
A “leading epidemiologist, Neil Ferguson of Imperial College in Britain,” developed a model predicting a probable outcome for the UK and the United States if no action were taken. The model predicted over 500,000 deaths in Britain and 2.2 million deaths in the United States. Later, Ferguson published projections assuming drastic action was taken to slow the spread of the virus, which produced dramatically lower predicted death tolls. Conservative pundits immediately seized on this as a revision and called into question the original estimates and the entire program designed to reduce the impact of the pandemic. Here is a quote from the article:
“But in fact, Ferguson had not revised his projections in his testimony, which he made clear in interviews and Twitter. His earlier study had made clear the estimate of 500,000 deaths in Britain and 2.2 million in the United States projected what could happen if both took absolutely no action against the coronavirus. The new estimate of 20,000 deaths in Britain was a projected result now that Britain had implemented strict restrictions, which this week came to include a full lockdown.” [Emphasis added]
This concerted attack prompted my reaction to the controversy and to the way such information and projections are misused by some pundits. That misuse seems to encourage Trump to draw unwarranted conclusions from this sort of modeling.
Modeling is a craft grounded in established statistical methods and scientific procedure. In this instance, as in the equally controversial climate change denial of this administration and its followers, I detect a serious misunderstanding of models, of modeling, and of their use to project future conditions.
What Ferguson is doing in his modeling is attempting to predict future events based upon well-defined assumptions (rates of change and so forth) and parameters (events, contacts, characteristics and qualities of individuals, and so forth). By declaring the assumptions (nothing is going to be done, estimated rates, etc.) and stating the parameters, the modeler produces a projection that attempts to show the possible outcomes.
Simple examples are the row of matches where the first is lit, the heat of the match ignites the next one within a given range, and so forth. Then the example is repeated, but this time with one match stepping out of line and creating a distance between the lit match and the next available match. Or consider a simple graphic model diagramming the contagion from one person to another, then modifying the model by changing the number of successive contacts and transmissions. These are simple visual representations of the much more complex computer models that Ferguson and other epidemiologists are using to demonstrate the progression of the infection. Neither of the later displays is a revision of the first; each is a run of the same model under the same initial assumptions but with changed parameters.
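To make the distinction concrete, here is a minimal sketch in Python, my own illustration rather than Ferguson’s actual model, of a toy contagion model. The transmission probability and the number of generations are assumptions invented for the example; only the contacts-per-case parameter changes between runs, exactly as in the match demonstrations.

```python
# A toy branching-process contagion model (illustrative only, not
# Ferguson's model). Assumed numbers: 10% chance of transmission per
# contact, 10 generations of spread, starting from a single case.
import random

def simulate_outbreak(contacts_per_case, transmission_prob=0.10,
                      generations=10, seed=1):
    """Return the cumulative number of cases in a simple branching process."""
    random.seed(seed)
    infected, total = 1, 1
    for _ in range(generations):
        # Each infected person meets contacts_per_case people; each contact
        # transmits the infection with probability transmission_prob.
        new_cases = sum(1 for _ in range(infected * contacts_per_case)
                        if random.random() < transmission_prob)
        infected = new_cases
        total += new_cases
    return total

# Same model, same assumptions -- only the contact parameter changes.
for contacts in (20, 10, 5):
    print(f"{contacts} contacts per case -> {simulate_outbreak(contacts)} total cases")
```

Running the three cases is not a “revision” of the model; it is the same model answering three different what-if questions, which is precisely what Ferguson did when he added projections under strict restrictions.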
To address Trump and company: in this world, hunches don’t create outcomes until you can define exactly what is included in the hunch and then put those elements into a working model to see exactly what results the hunch produces. Clearly we can have a hunch and then assume an outcome that confirms what we believe to be the consequences of those inputs, but that is pure unguided speculation. If you want me to believe in your hunch, you must show me what the parameters are, why they will work in the way you think they will, and how they are going to produce the outcome you want.*
Hunches are also employed by physicists, biologists, epidemiologists, sociologists, and economists when they first work on an issue. Fact-based researchers assemble a database containing the kinds of information that will help them build the model and see what the results are. The preliminary model is then tested against historical data to see whether what the model assumed to be the case can be replicated with existing data, known parameters, and known outcomes. Experiments are designed with this in mind, as the sketch below illustrates.
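Here is an equally minimal sketch, again in Python and with entirely made-up case counts, of what testing a preliminary model against historical data can look like: fit a growth rate to the first week of observations, then check how well the fitted model replicates a held-out second week.

```python
# Hypothetical daily case counts; the first week fits the model,
# the second week is held out to test it.
observed = [10, 14, 19, 27, 37, 52, 72,          # week 1: fitting data
            100, 140, 196, 274, 383, 536, 750]   # week 2: held-out test data

# Estimate the average daily growth rate from week 1 alone.
rate = (observed[6] / observed[0]) ** (1 / 6)

# Project forward from day 6 and compare against the held-out data.
for day in range(7, 14):
    predicted = observed[6] * rate ** (day - 6)
    print(f"day {day}: predicted {predicted:6.0f}, observed {observed[day]}")
```

If the projections track the held-out data reasonably well, confidence in the model’s assumptions grows; if they diverge badly, the assumptions go back to the drawing board.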
Occasionally we have “natural” human experiments, where events in comparable physical settings unfold with different parameters and known outcomes. The 1918 flu pandemic is such a “natural” experiment. The graph of the progression of the flu in St. Louis, Missouri, presented in the article is exactly that. What I would like to see is a comparison of St. Louis against a comparable city where the mitigation efforts were continued, with the outcomes shown side by side. That would be, to me, convincing evidence and demonstration (not proof; you do not prove in science, you demonstrate) of the effectiveness of the mitigation efforts.
Models of the type Ferguson and other epidemiologists are using to project the coronavirus outcome are statistical experiments. The method has its issues; these are, after all, models. How well they demonstrate what we are seeking depends largely on the inputs and on the quality of the data on which they are based. Every one of these epidemiologists has been trained in the use of the models and the statistical procedures underlying them. We have to presume they have chosen their data sets well and fully understand the assumptions inherent in the models. We also expect other experts to review and attempt to verify what each has claimed. In the words of Ronald Reagan: trust, but verify.
I am a retired sociologist with an interest in modeling and statistical projections, and in human communities and how they work. Modeling human behavior has improved greatly in the 50 years since I began my study, teaching, and interpretation. Today we have powerful computers capable of managing large sets of data, describing trends, and suggesting where those data say we are headed, often in the hands of people like me who no longer have an active affiliation with a research institution, private or public. The point here is that the change in computational and modeling capability is very large, with interesting consequences.
Fifty years ago we had to rely on hand calculators or pencil-and-paper techniques. Some were fortunate to work at one of the 25 to 50 universities and research labs with access to giant computers capable of handling large numbers and large sets of data and producing results in a reasonable time frame. These computers were expensive and obviously limited in what they could be used for.
The statistical techniques and models were quite simple and very limited in the kinds of problems they could analyze and the solutions they could achieve. Universities and national laboratories such as Stanford, Michigan, California-Berkeley, MIT, and Lawrence Livermore began to develop procedures to handle these large data sets quickly and efficiently. As they did so, they used these databases and large computers to model things like fluid dynamics and weather patterns, building models that began to predict future events and making it possible to forecast weather with modest accuracy over modest time frames. Through years of experience matching their models to increasingly accurate and complete data sets, they are now able to give moderately accurate projections of future states of the empirical (real) world.
Between 30 and 40 years ago a revolution in computing occurred: suddenly small computers were relatively inexpensive and easy to place on desktops everywhere. Programs and computing techniques were developed that brought access to, and analysis of, large data sets into the hands of more and more scientists and citizens. New modeling, analysis, and statistical techniques were created that allowed us to go where many had been unable to go before.
About 35 years ago (the mid-1980s) I had sitting on my desktop, at home and in the office, a computer with more data storage and analytical power than I had been able to access on the IBM 360 at the university where I earned my master’s and PhD, or on the DEC machine at the university where I began teaching and doing research on my own. Today I own and use computers that are faster and more powerful still, and that handle far more data.
My point here is that more people have access to computational speed and models than could have been imagined even 30 years ago. Furthermore, these computers are now connected in a way unknown then. Access to data and information is as simple as typing in a name and clicking: voila, there is your answer. We no longer consult an encyclopedia, where the articles are written or vetted by professionals in the topic being covered. Instead we are sent by Google or Bing to Wikipedia, where interested amateurs write the articles. Sure, they are checked, but by whom and how often?
Just as likely, the curious may be sent to a for-profit site with its own ideas as to what is real and what is false. Opinion? We have lots of that, including right here in what I am writing. Is it accurate? Have the views presented been demonstrated? Do they come from reliable sources? How do we know they are reliable?
In these circumstances a Steve Bannon or a Stephen Miller is given equal standing with a Sandra Day O’Connor or a Neil deGrasse Tyson.
To conclude this essay: I think each of us will find it incumbent upon us to read carefully, think seriously, and apply our own analytic skills to what we see and read. If you are unsure, find and get to know reliable sources and agencies that you trust. But keep in mind how you build your trust and how that trust is supported. Even if you vehemently disagree with a source, agent, organization, or person, stop and think: why do I trust it, or not? Is it because it speaks to my preconceived ideas, or because it challenges them? What is the nature of the challenge? Is it fact, or is it opinion?
Finally, I encourage you to read or re-read the article and this essay. Think critically, and verify.
*As an aside, I will put trickle-down economics in this category. It is a model based on a hunch: that if we drop taxes on the rich and on corporations, they will naturally invest more in business, creating more goods and services, and by doing so magically make the economy grow, thus offsetting the loss of federal revenue that inevitably follows the reduction in taxes.