Weighing Our Losses
by Theodore Dalrymple
Nothing is more entertaining than an apocalypse—provided, of course, that it happens elsewhere or remains in an imaginary realm and does not intrude into our everyday lives.
Intermittently over the past few decades, I have read for enjoyment books about future viral pandemics, thrilling to them with that mixture of fear and disbelief essential to the pleasure of any horror story. Appearing in 1996, for example, Virus X, by Frank Ryan, a British physician and evolutionary biologist, warned that “new plagues every bit as deadly as anything seen in previous history threaten our species” and foresaw the possibility of human extinction. In 2005, a French book, Pandémie, by Jean-Philippe Derenne and François Bricaire—respectively, a pulmonologist and a tropical disease doctor—accurately foretold all the measures that most Western countries would take 15 years later during the Covid-19 pandemic. And in 2011, Stanford biology professor Nathan Wolfe published The Viral Storm, which, among other things, quoted the prediction by Sir Martin Rees (onetime president of the Royal Society) that, by 2020, bio-error (viral escape from a laboratory) or bioterror would have killed 1 million people. But having read these books, all by highly informed men, I got on with life, as if they referred to life on a planet not my own: for the trouble with prophecies of a forthcoming apocalypse, as with those of stock-market crashes, is that one needs to know not just that they will happen, but when they will happen.
Then came Covid. I was not sure what to think of the pandemic when it struck, and am still not quite sure. Like many, I suspect, I find myself veering, or careening, from one opinion to another. Sometimes, I think that it is not so much the illness as the response to it that is the more damaging. At other times, I think that governments had little choice but to act as they did. On this subject, I lack fixed convictions.
Epidemiologists, regarded along with pathologists as the nerds and autists of the medical profession, suddenly rose to prominence, even to stardom, with the pandemic—some because of their apocalyptic predictions of what would happen if the most extreme protective measures were not taken. We had grown almost inured to such predictions, however: Imperial College London’s Neil Ferguson, for example, who predicted up to 500,000 deaths from Covid-19 in the United Kingdom alone if his advice on strict lockdowns was not heeded, had in 2001 predicted (admittedly, as a worst-case scenario) up to 150,000 deaths in the coming years from variant Creutzfeldt-Jakob disease, contracted from eating meat infected with mad cow disease. Since then, approximately 50 deaths from the disease have occurred—just two since 2011.
But dismissing apocalyptic predictions or projections because they have always proved wrong in the past is imprudent. (Predictions are assertions of what will happen, while projections are assertions of what might happen if present trends continue; the two are often conflated.) The problem is that of induction, brilliantly expounded by Bertrand Russell in The Problems of Philosophy:
Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.
Thus, it would not be wise to conclude that, because all previous dire predictions or projections proved false or exaggerated, we may safely dismiss present or future ones as false or exaggerated: so long as they are founded upon reasonable assumptions and the current state of knowledge rather than on ignorance and superstition, they might, after all, be true.
This, in turn, means that we can never entirely free ourselves from fear, even if it is merely subliminal. It is not true that the only thing we have to fear is fear itself: insouciance in the face of real danger can be just as devastating, if not more so. But even the wisest, best-informed person will need luck to be right in situations with many unknowns.
My first instinct about the Covid-19 pandemic was that it had been blown out of all proportion. Had we not been here before, with viral epidemics, such as bird flu, that were expected to decimate the world’s population but passed more or less unnoticed, apart from an initial panic? And had we not been through the Asian flu of 1957 and the Hong Kong flu of 1968, which had left almost no trace in our memories, despite having killed far more people than Covid seemed likely to kill (at least, early in the pandemic), at a time when the population was only half or two-thirds of what it is now? This was no Black Death that wiped out a third of Europe’s population; it was not even the 1720 plague in Marseille that killed a third of the population of Provence but spread no farther (and never returned). And seasonal flu kills many thousands each year—the variation in the death rate is considerable, by at least a factor of five—without the general public batting an eyelid. It gets on with life. Why all the fuss, then, over Covid?
To overestimate a threat can be dangerous. In September 1939, at the outset of World War II, the blackout decreed that month caused nearly 600 more deaths in accidents on British roads than had occurred during the same period the previous year, without a single air raid having taken place or a single bomb having fallen. If road traffic deaths had continued at this rate throughout the war, almost half as many people would have died in such accidents as died from the bombing itself. This did not happen, because a substantial decrease in traffic ensued; as to how many more would have died in the bombing had no blackout been imposed, we cannot know. Controlled experiments during a crisis are not easy to perform.
The costs of the economic blackout that was an important part of the response to Covid-19 almost everywhere in the West—costs not only in money but in misery and ill health—cannot be known for certain, but the attempt to measure them will occupy academics for years to come. Of course, they will need double-entry bookkeeping to offset any benefits against the costs. Academic exercises of this nature, however, can never quite capture reality: future accounts will have to navigate between the Scylla of exaggeration and the Charybdis of underestimation—just as policymakers have had to navigate during the epidemic.
When it became obvious, as it soon did, that the illness did not affect every section of the population equally—but that in its dangerous form, it was largely confined to the elderly and a few other, easily identified, groups—I thought that the focus should be on preventing the transmission of the virus to these populations, with everyone else left to go about business as usual to develop the natural herd immunity that would end the epidemic. Instead, it often seemed as if a policy of euthanasia of the very old (except that the death procured was far from easeful) had been adopted in Britain, France, Sweden, and the United States. No attempts were made to protect old people’s homes; on the contrary, elderly people known to be carrying the virus were sent back to such homes, seeding the disease like colonial troops giving smallpox-infected blankets to hostile natives. If a deliberate policy had been adopted to cull a population that was a burden on society, as elephants are culled in national parks in Africa when they grow too numerous, the results would have been indistinguishable.
Whether my imagined policy would have worked is unknowable, since it was not tried. There were obvious difficulties with it, but no policy could ever have been perfect or without its disadvantages. As Doctor Johnson put it in Rasselas, if one must meet every objection in advance, nothing will ever be ventured. Obviously, not all old people lived in such a way that they could easily be protected from contact with younger carriers of the virus—for example, those in multigenerational households, more prevalent among people originating from the Indian subcontinent, who also suffer disproportionately from diabetes, another predisposing factor for serious Covid disease. That such people have had higher death rates than the general population, as was only to be expected, became grist to the antiracists’ mill, as if all differences in outcome between groups could result only from racial prejudice and discrimination, which the self-appointed engineers of human souls must root out. (See “Does Covid Discriminate?,” Summer 2020.)
It was likely, besides, that many of the elderly would have preferred to run the risk of death rather than live in total isolation, and would not have followed mere advice to self-isolate, even had it been possible to do so. The objection that my policy, if followed, might have led to hospitals being quickly stretched beyond their capacity by an influx of desperately ill old people was surely a perfectly reasonable one—though whether it would have proved correct, we shall never know.
Lockdowns applied to everyone except “essential” workers—not only hospital staff but delivery people, shop assistants, maintainers of public utilities, and factory employees—in short, a gamut of individuals necessary to the continuation of daily life, who were mostly of low social status. This reminded me of George Orwell’s famous remark that our civilization is based upon coal and that the coal miner was second in civilizational importance only to the man who plowed the soil.
Commentators tend to regard such workers—often immigrants and typically poorly paid—as archetypal victims of exploitation. Certainly, their lives must be hard, especially in large, high-rent cities, where they must live in cramped conditions or else endure long and exhausting daily journeys to work. Spending the lockdowns and quarantines alternately in a small town in England and in Paris, I learned to appreciate such people’s helpfulness and good humor as a triumph of the human spirit over what I would have considered great adversity, had I faced it myself. If nothing else, the pandemic gave us all an opportunity to reflect on much that we had previously taken for granted.
Like many, I began daily to examine the statistics from around the world, like an anxious investor scanning the share prices to find happiness, foresee doom, or face disaster. The strange thing about numbers is that they confer a reality, a solidity, on whatever is supposed to be enumerated by them, irrespective of their reliability. When one considers a statistical table, critical thought is apt to decline, or even to evaporate. The meaning of the numbers seems evident at first glance: country X is doing better than country Y, the reason being that country X has adopted a policy that country Y has not.
In this world of ideal comparative statistics, no margin of error exists, let alone any danger that what is being measured and compared is not strictly comparable. For example, in comparing death rates per million of the population, we give no thought to the possibility that causes of death are recorded differently in different countries, or that the care with which certification is carried out is unequal between countries, or even within them. We simply conclude that if the death rate in one country is, say, 1,400 per million and, in another, 1,800 per million, the first must have a better policy than the second. We allow no intervening variables, such as the age structure or density of the population, or the prevalence of obesity or diabetes, to obtrude on our interpretation. Moreover, at any one time, we regard the statistics as definitive, as if the whole episode were over and the latest statistics were the last word on it. And we derive a certain satisfaction—though we don’t admit to it—that some country has statistics worse than our own.
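To see how just one such intervening variable, age structure, can upend a crude comparison, consider a minimal sketch in Python. The numbers are invented for illustration and correspond to no real country’s figures: a country whose age-specific death rates are worse in every age band can still show the better crude rate per million, simply because its population is younger.

```python
# Illustrative sketch with invented numbers: how age structure can make
# crude death rates per million mislead (an instance of Simpson's paradox).

# Hypothetical age-specific death rates per million. Country A is worse
# than country B in EVERY age band.
rates = {
    "under_65": {"A": 200, "B": 100},
    "over_65": {"A": 8000, "B": 4000},
}

# Hypothetical population shares: country B's population is much older.
shares = {
    "under_65": {"A": 0.90, "B": 0.70},
    "over_65": {"A": 0.10, "B": 0.30},
}

def crude_rate(country):
    """Deaths per million across the country's whole population."""
    return sum(rates[band][country] * shares[band][country] for band in rates)

def standardized_rate(country, reference):
    """Deaths per million if the country had the reference age structure."""
    return sum(rates[band][country] * reference[band] for band in rates)

# A common reference age structure, so the two countries are comparable.
reference = {"under_65": 0.80, "over_65": 0.20}

for c in ("A", "B"):
    print(c, crude_rate(c), standardized_rate(c, reference))
# A 980.0 1760.0
# B 1270.0 880.0
# Crudely, A (980 per million) looks better than B (1,270); standardized
# to a common age structure, B (880) beats A (1,760), in line with the
# age-specific rates. The crude comparison pointed the wrong way.
```

None of this settles which real policy was better; it only shows why a single crude figure cannot.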
Suddenly, the world grew full of epidemiologists, clinical pharmacologists, and immunologists—amateurs, of course, but thanks to the Internet, all with access to immense quantities of information (including misinformation)—often endowed with the belief that a statistic, by the mere fact of having been published, must refer to a corresponding reality. Not surprisingly, a cacophony of opinion soon erupted, ranging from the reasonable and the well-informed to the frankly psychotic. Piers Corbyn, for example, brother of the former leader of the British Labour Party and holder of a degree in astrophysics, has variously claimed that the virus did not exist; that the illness was no worse than the flu; that George Soros and Bill Gates created it with the intention of reducing women’s fertility, thereby gaining control over the world; and that the use of Covid vaccines is comparable with the Nazi genocide. Corbyn distributed leaflets to this effect in an area of London with a high proportion of ethnic minority groups, some already inclined to believe that vaccines were intended as an assault on their fertility. Mad as all this may be, Corbyn has presided over rallies of as many as 10,000 people—and this in a country, Britain, with one of the lowest rates of vaccine skepticism in the West.
No doubt governmental confusion was both cause and consequence of this cacophony of opinion. Just listening to the science, as Greta Thunberg has enjoined us to do with regard to global warming, was impossible for governments because science is not straightforwardly a body of doctrine, especially in a situation, such as the pandemic, with so many unknown variables. Masks initially were not made compulsory in France; the government claimed that they were ineffective. Only a little later, when mask-wearing had been made compulsory, did it emerge that the reason masks had previously not been advised was that there weren’t any available at the time. This lack of veracity only added to the mistrust of government that inevitably occurs when officials make a policy U-turn: the policy changes, but the supposed authority with which it is made remains, like the grin of the Cheshire cat. People may be prepared to forgive and forget error, but they are less willing to overlook mendacity.
In France, the mask fiasco was all the worse because, between 2005 and 2010, the government had stockpiled 2.2 billion masks precisely in preparation for a pandemic such as the current one. France was thus fully prepared, in this respect, for the 2009 H1N1 influenza pandemic, which, as it turned out, killed just 342 people in the country. The minister of health was severely criticized, and then demoted, for having spent (and supposedly wasted) about $1.2 billion on the masks, which proved unneeded. The lesson was learned, but unfortunately the wrong one: that it was better, politically, to be unprepared for something that did happen than prepared for something that didn’t.
During the first lockdown (I am beginning to forget how many we have had), I traveled from France to the Netherlands by train to speak at a colloquium, my attendance granting me exemption from the travel restrictions then in force for most of the population. The restrictions differed in certain respects between the two countries: in France, for example, all shops other than grocery stores and pharmacies were closed; in the Netherlands, they remained open. Was this because of a difference in the two countries’ situations, dictated by “the science,” or was it arbitrary, the result of two ideas plucked from the air by politicians, civil servants, and their advisors? Clearly, the Dutch policy was preferable, insofar as it was slightly less restrictive and less economically damaging, but was it any the worse from the standpoint of public health? And how would one determine whether this was so?
The obvious answer is: “By the results.” On the crude measure of Covid mortality, keeping the shops open was certainly no worse, and perhaps better, than keeping them closed, for mortality was consistently higher in France than in the Netherlands. But so many intervening variables might have led to this result that even limited conclusions about the differing policies would be almost impossible to draw. Yet if retrospective conclusions are impossible, prospective decisions must rely on something other than pure evidence: intuition, perhaps, or prejudice, the Dutch attaching more importance to commerce than the French, who prefer powerful, centralized administration.
Where uncertainty is inevitable but the stakes are high, tempers are likely to flare and people to claim insights into the nature of things that they do not have. Humankind, said T. S. Eliot, cannot bear too much reality, but it also cannot bear too much uncertainty: humans then turn to conspiracy theories or cults to alleviate their sense of helplessness. That is why discussions of Covid so quickly become arguments: most people who are not sure of their ground make up for it by dogmatism.
There is one small thing that, to my shame, I have learned thanks to the epidemic. An old person whom I know well has mildly irritated me for some years by claiming that, for him, eating is a terrible corvée. In fact, getting him to eat, in order to keep body and soul together, has been a terrible corvée—for others. Then a doctor friend of mine, aged 71, contracted Covid and lost his sense of smell, and therefore could taste almost nothing. Several months later, it has not returned. This has caused me, for the first time, to enter imaginatively into the world of someone who tastes nothing. In such circumstances, eating must be a corvée rather than a pleasure, a tedious and repetitive task akin to brushing one’s teeth or tying one’s shoelaces. The old man who irritated me had probably lost his sense of smell, too, but it was only when my friend lost his that I took the trouble to think about what such a loss would mean. Perhaps I will learn to be less easily irritated by others if I henceforth say to myself, “Remember anosmia”—the absence of the sense of smell.
First published in City Journal.