I have a research background in the social sciences and dozens of peer-reviewed publications to my name. There’s a lot that sets off my crap detector in Ferguson’s comments – mostly to do with overestimating the validity of his own data, and using this, in effect, to depoliticise political questions and naturalise a kind of technocratic despotism under the guise of neutral science. I don’t think this is a deliberate conspiracy; I think it’s a predictable result of a particular way of seeing.
The political assumption is that ‘we’ as a society make decisions for the whole society (i.e., society is not an aggregate of individuals), that within this range of decisions anything goes (the only criteria are quantitative), and that the decisions should be made based on expert data. These are highly contentious beliefs: they are not apolitical or scientific. I believe lockdowns are always wrong because people are autonomous beings with a need for freedom, and acts such as threatening violence if a person leaves their home are abusive regardless of circumstances (I don’t believe there is any significant moral difference between a Government, a terrorist group, or an individual abuser making such threats, and I don’t believe the ends justify the means). But I could also cite dozens of political theories which oppose the general model that the Government should do whatever it likes on behalf of the entire society based on expert guidance. Literally everything from right-libertarianism to the Marxist class model of society, from Kantian deontology to participatory and deliberative democracy, from conservatism to deep ecology to postcolonial theory runs against this view. The closest philosophical forerunner is probably Hobbes: the idea that we need to submit to tyranny or our lives will be nasty, brutish and short – though I think the current version is a novel ideology which has developed out of cybernetic information theory and behaviourist psychology, and which reaches us mainly through the Third Way. There is also a background here in disaster management theory (e.g., Quarantelli): the idea that the main problem in disasters is the public response, and that this response should be managed through media and behavioural manipulation, with the goal of preventing the disaster – which by definition is already horrific for the human beings affected – from overwhelming the state’s ability to cope. In other words, it’s a strategy based on damage reduction, permitting or increasing human suffering so as to preserve state/Government stability (again clearly a contentious view, and again with Hobbesian and behaviourist roots). Yet Ferguson embeds this view of politics in such a way as to make it seem obvious, apolitical. It isn’t. It is a choice in favour of technocratic governance.
Ferguson’s desire not to ‘politicise’ science involves effectively making policy decisions based on the ‘expert’ conclusions arising from computer modelling. This kind of technocratic model is perfectly compatible with how countries like China are run. There is a current tendency to turn western democracies into electorally competitive technocracies in which changes in elected Government have little impact on the ‘evidence-based’ functioning of the policy machines – a narrowing of political space which dates back at least to the early 2000s. It is not a desirable trend, and it likely reflects the economic success of China and the resultant appeal of its model over the same period. I don’t think this requires secret conspiracies or Chinese manipulation; it might just be a matter of elites/experts seeing what works and copying it. But it means that, if we follow the path of ‘what works’, and China clearly works (or keeps up a good enough appearance of working), we will end up copying China. This happens both because China ‘works’ and because China has a model of governance based on experts applying ‘what works’.
Having decided to defer to ‘experts’ in making policy, there is then a second political decision as to which data counts. The choice to rely on computer modelling – and to treat it as if it were impartial, apolitical expertise – is itself a political choice. Different methods would have produced different outcomes. Suppose, for instance, that the response had been based on the knowledge provided by historians who have studied previous epidemics. The Government and public would have been told that non-medical interventions do no good, that even such an intuitive measure as closing borders between affected and unaffected regions only delays spread by a few weeks, and that one of the biggest dangers is public panic. Suppose the discussion had been driven by virologists. The focus might have been on rapidly testing promising drugs and fast-tracking these into use with Covid patients. In this scenario, Remdesivir might have been confirmed effective back in March (say), instead of only in autumn, and lives might have been saved. Or suppose a decision had been taken early on to test virus transmission and the impact of interventions on small but substantial communities of volunteers drawn from the low-risk population. One would, within a month of the outbreak, have had clear evidence as to whether (for example) masks or distancing or Vitamin D have any effect. If the ‘experts’ had been people working in the sociology of health, they would likely have recommended avoiding compulsion and encouraging community support. The response might then have been more like Venezuela’s or Kerala’s. It’s also worth noting here that had scientists, including modellers, been consulted earlier about preparedness, NHS beds per capita might be nearer to those of Sweden and Belarus, which never feared their health systems being overwhelmed. Ferguson suggests a novel pandemic was the Government’s number one priority risk, yet neither the current nor the previous Governments ensured there were enough ICU beds to handle a pandemic on the scale of the 1918 flu. If the central focus were preparedness, this failing would be at the centre of the public debate – and lockdowns could also cost lives if they incentivise future Governments to keep under-resourcing healthcare without accepting the resultant risks.
Medical data arising from methods such as clinical trials is (in my view rightly) highly trusted. Clinical trials are what I would call a ‘conservative’ method: they are far more likely to produce false negatives than false positives in terms of effectiveness, and thus, any positive result is highly likely to be true (provided of course that there is no foul play). However, this trust is being squandered through the misleading association of medical authority with less-developed, less-reliable methods.
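A toy illustration of what I mean by ‘conservative’ (a simulation of my own, not anything from the clinical literature; the trial sizes and recovery rates are invented for the purpose): with a conventional 5% significance threshold, the false positive rate is capped by construction, while a modestly-sized trial of a real but moderate effect will fail to detect it most of the time.

```python
import random
from scipy.stats import fisher_exact

def trial_p_value(p_control, p_treatment, n=100):
    """Simulate one two-arm trial (n patients per arm) and return a Fisher-exact p-value."""
    recovered_c = sum(random.random() < p_control for _ in range(n))
    recovered_t = sum(random.random() < p_treatment for _ in range(n))
    _, p = fisher_exact([[recovered_t, n - recovered_t],
                         [recovered_c, n - recovered_c]])
    return p

random.seed(1)
runs = 2000
# No real effect: how often does chance alone produce a 'positive' result?
false_pos = sum(trial_p_value(0.5, 0.5) < 0.05 for _ in range(runs)) / runs
# Real effect (recovery 50% -> 60%): how often is it missed?
false_neg = sum(trial_p_value(0.5, 0.6) >= 0.05 for _ in range(runs)) / runs
print(f"false positives: {false_pos:.1%}, false negatives: {false_neg:.1%}")
# Typically a few per cent false positives, but well over half false negatives.
```

On those toy assumptions, a ‘positive’ result is rare unless the effect is real, which is exactly why positive trial results command trust; the price is that many real effects go undetected.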
Any discussion of non-medical interventions is a social science discussion, not a medical science discussion. Pandemic modelling is similar in scientific reliability to the slew of statistics on things like the crime rate, the impact of trade liberalisation, development strategies in poor countries, causes of mental health problems, the impact of internet use on children, and so on. None of this evidence is very reliable: there is always substantial scholarly debate, competing models which show different things, and vastly different outcomes from different quantitative and qualitative methods. For example, it’s quite common that, when I look into the impact of structural adjustment policies in a country like Uganda or Egypt, I find one set of data showing that the SAP improved economic indicators, another showing that it did terrible harm, and a third, qualitative data-set suggesting that, from the perspective of the worse-off, the impact was entirely negative. There are those on both sides of the debate who point to their own ‘science’ as conclusive data and denounce the politicisation or bias of the other side. But the truth is, there just isn’t currently a way to provide, in human/social settings, the kind of decisive evidence people expect in the physical sciences (in these cases, I lean towards trusting the qualitative). Even a strongly reliable method like clinical trials starts to fail in the social sciences, because the criteria of ‘success’ become more questionable: when testing psychiatric interventions, for instance, one often has to rely on self-reports.
Once a political decision has been made to treat statistical modelling as gold-standard expertise, the devastating consequences of lockdown come to seem a ‘tragedy’, a ‘random, terrible event’, and ‘no-one’s fault’ – a series of perverse disavowals typical of a certain style of technocratic politics. The (unconscious) sleight-of-hand here is to confuse the virus with the responses to it. It’s plausible to argue that the virus is no-one’s fault (though it may well have human causes, whether these be eating undercooked meat, a lab leak, or something else). The policies, however, result from human agency, and from the choice to rely on computer modelling as the main source of ‘data’. There is no avoiding the fact that all deaths and other harms caused by lockdowns are due to human agency. At most Ferguson can appeal to a kind of ‘necessity defence’: yes, we killed all these people, but it was necessary in order to save a greater number of others. Right away this opens a can of ethical worms outside the domain of ‘science’ on Ferguson’s definition (and one which is certainly not limited to the question of whether it was ‘worth it’). Assuming lockdown passes this test, we’re then in a situation where, if lockdown doesn’t work, there has been a cock-up of monumental proportions, and methodologies and assumptions need to be revised to avoid a repetition (which is what the computer-modelling establishment are currently resisting). I assume Ferguson realises that a group of people causing large numbers of deaths through well-meaning but flawed methods in which they had excessive confidence is more serious than a mere academic disagreement.
A few standard problems with statistical approaches. Firstly, it is easy to miss, at the statistical level, things which are obvious on the ground at a qualitative level, or to deduce things which are completely fallacious (as the most attractive or parsimonious explanation). Secondly, the very fact of presenting things numerically has the effect of depersonalising them, of making inhumane, barbaric actions seem sensible and reasonable (planning for nuclear wars is a classic example). And thirdly, the reliability of the outputs depends on the inputs. (I have seen, for example, statisticians claiming very high lockdown compliance in India based on mobile phone tracking; every on-the-ground source I’ve found says completely the opposite.) Quantitative approaches mean the worse-off get observed but do not get any input into the process; their voices are not heard, and often the reasons they act a certain way are not deduced or understood – reasons which could easily be established with a little simple qualitative research. The overemphasis on quantitative research over qualitative tends to produce a set of top-down findings which reinforce the widespread impression among the worse-off that the political and scientific elites are out of touch. Hence responses like those of the first correspondent: that Ferguson seems to be living in a parallel world where the things she’s seeing day-to-day don’t exist. Statistics cannot see either the microsocial or the intrapersonal; human relations and human suffering are largely invisible to it. Hence a massive cost at these levels registers either weakly or not at all at the statistical level. I don’t think people like Ferguson have the slightest idea of the human effects of lockdown (or, most likely, of Covid): the sheer misery and desperation, the rage or despair arising from the violation of assumed rights or the loss of sources of survival or wellbeing, the existential collapse of known parameters, the violations of fairness and trust, the ripping-away of the simple ways people stay sane.
These are problems with statistical approaches based on existing data. With statistical modelling, there is the additional problem of dealing with the unknown. Computer modelling is not a longstanding, established science. It has very low reliability compared to (say) laboratory virology or double-blind clinical trials. Computer modelling looks scientific because it’s mathematical and the decision is made by the machine. But it depends completely on the inputs and the model, which are at base human inventions (however much algorithmic learning is layered on top of them). It’s basically the same procedure which is used to tailor targeted ads, Amazon recommendations and Facebook feeds. That gives something of a sense of its reliability as a method. It’s a bit like leaving a bunch of AIs playing Risk and concluding from this that we’re about to be nuked. And it is also possible to use computer modelling to show that lockdowns will cause millions of deaths. For example, there’s a study in America which predicts 800,000 suicides, based on previous evidence that a 1% increase in annual unemployment generates 21 extra suicides per 100,000, and the fact that the crisis had at that point caused a 12% increase. This isn’t necessarily any more reliable than the Imperial modelling, but it’s not much less reliable either (nobody really knows whether the unemployment-to-suicide ratio still holds at very high unemployment levels). At least in this case I can check the maths (and the maths works). The point is, however, that an expert on suicides in a position like Ferguson’s could very easily have told the Government, “even if you’re expecting 200,000 Covid deaths and you can prevent these with a lockdown, you must not lock down because you will cause 800,000 suicide deaths”. It might be true, it might not; it’s just what happens when you choose this particular method.
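For anyone who wants to check that maths themselves, here is the back-of-envelope calculation as I ran it. It is a minimal sketch: the population figure, and the assumption that the 21-per-100,000 rate applies to the whole US population, are mine for illustration, not necessarily the study’s.

```python
# Rough sanity check of the 800,000 figure. The population number and the
# assumption that the rate applies population-wide are illustrative guesses.
excess_per_100k_per_point = 21   # extra suicides per 100,000 per 1-point unemployment rise
unemployment_rise_points = 12    # the rise in unemployment at that time
us_population = 330_000_000      # approximate US population

excess = excess_per_100k_per_point * unemployment_rise_points * (us_population / 100_000)
print(f"{excess:,.0f}")          # 831,600 -- in the region of the 800,000 predicted
```

On those assumptions the arithmetic comes out at roughly 830,000, which is indeed in the region of the figure the study quotes.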
I read the notorious Imperial study back in March, and two things stood out for me. The first is that all the claims about the characteristics of Covid and the impact of lockdowns were derived from WHO studies in Wuhan, i.e., from data the Chinese Government were feeding to the WHO. I’m sure you were privately aware that the Chinese Government were probably manipulating statistics, and it’s been reported that the British Government expected the Chinese figures to understate the true numbers, but still, these were the statistics that underpinned the model. The second is that the paper explicitly admitted that it was only considering impacts on Covid, bracketing out other economic and social effects and ethical implications – leaving it to the Government to ‘weigh’ these. I can understand leaving out ethics, but excluding other quantifiable variables was quite methodologically irresponsible, given that these effects could have been modelled in similar ways, and that the predictable impact of the paper was pressure to implement lockdowns to avoid consequences which seemed (to a non-specialist) so great as to override everything else. By the way, these excluded effects would include not only suicides (spread over many years, not just during the acute crisis), but also deaths arising from economic crisis, cancelled operations, and increases in coping strategies (drugs, drink, overeating, thrill-seeking) developed during or after lockdown. We are probably never going to be able to calculate these, because it will be impossible to unpack the impact of Covid, the impact of lockdown, the impact of economic crisis (which may or may not have happened anyway), and the impact of war or civil unrest or whatever else might happen, so as to definitively say “lockdown killed this many people”.
A third point: the initial messaging was ‘flatten the curve’, the idea being that fewer people would die if infection rates could be slowed. Lockdown sceptics pointed out from the start that any delay in infections would simply be displaced into second or subsequent waves. This seems to have been confirmed. But lockdown supporters have not turned around and said, “OK, you were right, we just postponed infections until later waves and ended up in an indefinite cycle of lockdowns.”
Ferguson and Imperial College have (rightly or wrongly) come under suspicion of using faulty models. (It doesn’t matter much here whether the suspicion is valid, whether it rests on a reasonable misunderstanding, or whether it’s just a ‘conspiracy theory’.) Ferguson’s response is basically that critics don’t understand the model because it hasn’t been published – therefore they cannot run the model for Sweden or other cases, cannot talk about ‘the’ model (because, unbeknownst to us, there are actually four Imperial models plus several from other institutions), and so on. But here’s the problem: the model’s secrecy does nothing to rebut suspicion, particularly when this model (or these models) has led to politically contentious outcomes and to predictions which do not seem to have been borne out. One is caught in a situation where any criticism is a ‘conspiracy theory’, or at least mistaken because it might not be referring to the real model.
Now, it’s not a scientific norm that people can escape scepticism (whether justified or not) simply by concealing the basis for their claims. Even if the suspicion is unjustified or malicious, it’s important that scientists take measures to forestall it. In academic publishing, the way to mitigate such suspicion is to publish details of the entire method used, so that it can be replicated if necessary and any flaws pointed out. In computer programming, the best method is open-source release of the code, so that any programmer can review it for bugs, followed by inspection by independent experts. Ferguson and the Imperial group are refusing to do either of these things – thus making it impossible to test whether or not critics of the model are right about its flaws. And in fact, it’s becoming more common for researchers working with corporations or Governments to avoid scrutiny by using closed-source proprietary software to reach conclusions which nobody (sometimes including the researcher) can account for. Sometimes the software is produced and owned by the same entity that benefits from the research. This generates, in practice, a perverse incentive structure in which companies and academic clusters use secret algorithms to produce ‘data’ which cannot be tested by others, and thus claim for themselves an expertise akin to that of oracles. Whether or not Ferguson’s group are doing this, it’s a danger which needs to be reduced. It is simply good practice for scientists’ methods to be fully public so they can be replicated if necessary.
Now, clearly there is a world of difference between saying “this is what we should do because a highly tried-and-tested method has shown it” and “this is what we should do because this relatively new, dubiously reliable method, applied to complex material and fed data of dubious reliability, indicates it”. It seems to me that people in the computer-modelling sector (so to speak) have an interest in exaggerating the reliability of their methods and data, in promoting themselves as the definitive gold-standard experts ahead of the hundreds of other species of academics, and in promoting worldviews (technocracy) and policies (‘behavioural’ interventions) which direct status, power and resources towards their own discipline. I have no idea how far this is at work at an individual or group level with Ferguson and his research cluster, or whether it also speaks to a general propensity towards risk-aversion, scaremongering and assuming the worst (a propensity which certainly seems, historically, to encourage abject endorsement of authority).