27 March 2021  /  Updated 17 July 2021

Sorry to break this: lockdown worked!

Page 11 / 20

jmc
Posts: 597
(@jmc)
Joined: 1 year ago

a set of coupled differential equations to solve for the various dynamical variables you include. One of those variables you may wish to analyse would be mortality as a function of time. Another might be the R number. And so on. These variables are all going to be dynamically coupled and parametrized by time... Lockdown measures would be modelled by limiting population mobility in the model, which would tend to have the effect of reducing 'collisions' of infected people with non-infected people. But again you're looking at a complex set of coupled differential equations to solve - and models of this kind of multivariate complex interactive behaviour are notoriously difficult to get right. Understanding this dynamical behaviour mathematically is the key to the whole thing - and not at all fruitless.
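The coupled-ODE picture described above can be sketched very compactly. The following is a minimal illustrative SIR model - all parameter values are assumptions chosen for illustration, not fitted to any real epidemic - with "lockdown" crudely modelled as a cut in the contact rate:

```python
# Minimal SIR model: the simplest instance of the coupled ODEs described above.
# beta, gamma and the initial conditions are illustrative assumptions.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    s, i, r = y
    new_infections = beta * s * i
    return [-new_infections, new_infections - gamma * i, gamma * i]

beta, gamma = 0.30, 0.10            # contact and recovery rates (assumed)
y0 = [0.999, 0.001, 0.0]            # fractions: susceptible, infected, recovered

# Baseline vs a crude "lockdown" that halves beta (reduced mobility/contacts).
sol = solve_ivp(sir, (0, 300), y0, args=(beta, gamma))
sol_ld = solve_ivp(sir, (0, 300), y0, args=(beta * 0.5, gamma))

peak = sol.y[1].max()               # peak infected fraction, baseline
peak_ld = sol_ld.y[1].max()         # peak infected fraction, under "lockdown"
```

Halving beta lowers R = beta/gamma from 3 to 1.5, which both delays and flattens the infection peak - exactly the "reducing collisions" effect described.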

yes of course, certainly I faintly recall a fella, whatisname... ah got it Neil Ferguson, tried all that, Rudolph Rigger. But guess what: it was completely fruitless and his predictions were miles off and he was 'pissing in the wind' it was all just gobbledegook 😀 😀 😀

You know why? Because GIGO.

Having looked at quite a few of the published epidemiological model papers, my reaction when the Imperial College people finally produced their source code was WTF - what a bunch of utterly unprofessional amateurs. Even by the very low bio-science math standards it was pathetically bad. It was looking through that material that proved to me that Ferguson was not just some kind of media spoofer but a total charlatan. And a dangerous one at that.

The actual models tend to be some mix of random diffusion models, borrowed from fluid dynamics, and mix-and-match Markov chains, with the more sophisticated trying their hand at stochastic differential equations - which have their own serious problems for the unwary. The simpler models are usually implemented in something like R, the more sophisticated in various libraries tied together by Python etc.
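For the Markov-chain flavour mentioned above, a minimal sketch is a discrete stochastic SIR simulated event by event (Gillespie-style). The population size and rates here are illustrative assumptions:

```python
# Sketch of a discrete stochastic SIR - a continuous-time Markov chain.
# Each event is either one infection or one recovery; parameters are assumed.
import random

def stochastic_sir(n=1000, i0=10, beta=0.3, gamma=0.1, seed=42):
    random.seed(seed)
    s, i, r, t = n - i0, i0, 0, 0.0
    while i > 0:
        rate_inf = beta * s * i / n      # infection event rate
        rate_rec = gamma * i             # recovery event rate
        total = rate_inf + rate_rec
        t += random.expovariate(total)   # time to next event
        if random.random() < rate_inf / total:
            s, i = s - 1, i + 1          # one infection
        else:
            i, r = i - 1, r + 1          # one recovery
    return s, r, t

s_left, recovered, duration = stochastic_sir()
```

Unlike the deterministic ODE version, each run gives a different outbreak size and duration, so any honest use of such a model has to look at distributions over many runs - one of the "serious problems for the unwary" when results are reported as single curves.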

I never got the sense in reading any of these papers that any of the authors had a firm enough grasp of the underlying mathematics to understand the limitations and pitfalls of the models they were creating: sensitivity to initial conditions, edge conditions, unexpected variable sensitivity, errors introduced by numerical solution approximations, etc. Hardly surprising, as the majority of papers in very heavy-duty math areas like theoretical physics and astrophysics have exactly the same problems. Models often use the mathematics the authors understand rather than the mathematics that is most suitable, or make unreasonable extrapolations from simplistic assumptions and the resulting numerical solutions.
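One of those pitfalls - errors introduced by the numerical solution scheme - is easy to demonstrate. A sketch: forward Euler applied to a simple logistic-growth ODE gives the right answer with a small step and a qualitatively wrong, non-convergent answer with a coarse step, because the fixed point of the discrete scheme becomes unstable once r*h exceeds 2 (the equation and values are illustrative, not from any of the papers discussed):

```python
# Forward Euler on dy/dt = r*y*(1 - y/k): step size alone can wreck the result.
def euler_logistic(r, k, y0, h, steps):
    y = y0
    for _ in range(steps):
        y += h * r * y * (1 - y / k)
    return y

# Same ODE, same parameters, same end time; only the step size h differs.
fine = euler_logistic(r=0.5, k=1000.0, y0=1.0, h=0.1, steps=4000)   # r*h = 0.05
coarse = euler_logistic(r=0.5, k=1000.0, y0=1.0, h=5.0, steps=400)  # r*h = 2.5
```

The fine solution settles at the carrying capacity k = 1000; the coarse one never converges, oscillating around it indefinitely. An author who doesn't know to check this would happily report the oscillation as model dynamics.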

So when reading epidemiological model papers I look for the level of mathematical sophistication shown, the assumptions made, and whether the initial values used in the models actually conform to reasonable published values. Some models are actually quite plausible within very strict boundaries - the Norwegian FNI one comes to mind. Some are quite interesting and informative despite some mistakes in initial values - some Italian papers on the early spread in Lombardy and Piedmont, for example. And some are just utterly worthless garbage: pretty much everything from Ferguson's group at Imperial and Bill Gates' bozo epidemiological modelling group associated with the University of Washington, who seemed to have got little beyond linear extrapolation.

These mathematical models can be very useful. After all, the car you drive, the plane you fly in, and the hardware inside your computer were all developed and manufactured using exactly this kind of mathematical model. The problem is that most bio-science mathematics is exactly the same as the mathematics used in economics: lots of very fancy equations used by people who have no real understanding of the limitations and pitfalls of those equations, which they then feed with data sets that are partial, heterogeneous, incomplete, or polluted and of very low quality. From which they then make sweeping claims and extrapolations.

Would you fly in an aircraft designed and built by economists? Or rather, by people who use mathematics with the louche disregard shown in most published economics papers? The same goes for epidemiological modellers. Their models can be very informative on occasion, but only if you keep in mind at all times the serious limitations of every facet of the modelling process as used by these researchers.

As I said, it's mostly GIGO. Garbage In, Garbage Out.

Reply
fon
Posts: 1356
Topic starter
(@fon)
Joined: 12 months ago

You know why? Because GIGO.

Having looked at quite a few of the published epidemiological model papers, my reaction when the Imperial College people finally produced their source code was WTF - what a bunch of utterly unprofessional amateurs. Even by the very low bio-science math standards it was pathetically bad.

I worked for some years to maintain the code of the Cambridge University Computational Pharmacology department. We had a million lines of Fortran code for molecule docking that "looked bad" because it represented molecules' atoms and bonds in multiple two-dimensional arrays, but this old software had this habit of assembling active compounds from a fragment library and docking them into a site. De novo active compounds are as rare as hen's teeth, but we found them for Roche, Novartis and Genentech… we even did retrosynthetic analysis to find the cheapest route to synthesise the new drugs. The difference between Cambridge and the likes of Ferguson etc. is that we took pride in our work. Any tosser can get a copy of Numerical Recipes and knock some rubbish up.

We treated our code like the Crown Jewels. People like Ferguson treat their code like Play Dough, and that's the difference, mate, to tell you the truth.
As I said, it's mostly GIGO. Garbage In, Garbage Out.

OK, makes sense - thanks for your input.

Reply
MikeAustin
Posts: 1193
(@mikeaustin)
Joined: 1 year ago

I worked for some years to maintain the code of the Cambridge University Computational Pharmacology department, we had a million lines of Fortran code ...

It is a long time since I heard that word - Fortran. I am a retired aircraft stress engineer who worked on design methods, mainly on aircraft wings. I wrote the Fortran code from scratch for a few of the main analysis programs for stiffened panel analysis. We had to regularly check the calculations against structural tests to show that strength was never over-predicted - and only marginally under-predicted. Sometimes I had to revisit idealised modes of failure after discovering different behaviour in structures that strayed outside normal design parameters.

Regarding GIGO, I had to ensure there were error traps and warnings for input data to inform users of something inappropriate, or simply a typo. One of the problems was actually getting users to read such errors and warnings!
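The kind of error traps described above translate directly into modern code. A sketch, using hypothetical epidemic-model parameters - the names, thresholds, and the R0 sanity check are all illustrative assumptions, not from any real package:

```python
# Sketch of input error traps and warnings for hypothetical model parameters.
# Hard errors block the run; warnings flag values that are probably typos.
def validate_params(beta, gamma, population):
    errors, warnings = [], []
    if population <= 0:
        errors.append("population must be positive")
    if beta < 0 or gamma <= 0:
        errors.append("rates must satisfy beta >= 0 and gamma > 0")
    r0 = beta / gamma if gamma > 0 else float("nan")
    if r0 > 20:        # arbitrary plausibility threshold for illustration
        warnings.append(f"R0 = {r0:.1f} looks implausibly high - typo?")
    return errors, warnings

# A plausible typo: beta entered as 30.0 instead of 0.30.
errs, warns = validate_params(beta=30.0, gamma=0.1, population=1_000_000)
```

Of course, as noted above, writing the checks is the easy part - getting users to actually read the warnings is another matter.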

Reply
CoronanationStreet
Posts: 598
(@coronanationstreet)
Joined: 1 year ago

If you look at the mortality graph for London over the whole year until Christmas it's clear the lockdown had no effect on deaths, which have followed the expected death curve from the end of the pandemic in May until now save for a slight blip in Sept.

Hancockdown had zero effect on deaths.

Reply
Rudolph Rigger
Posts: 180
(@rudolph-rigger)
Joined: 1 year ago

yes of course, certainly I faintly recall a fella, whatisname... ah got it Neil Ferguson, tried all that, Rudolph Rigger. But guess what: it was completely fruitless and his predictions were miles off and he was 'pissing in the wind' it was all just gobbledegook 😀 😀 😀

😆

Just because one team of modellers managed to bollox things up does not mean that all modelling is useless! I doubt very much whether Ferguson's model was completely erroneous* either.

There were other models, other teams. I think the problem was we paid far too much attention to them. The models are (potentially) useful for getting an idea of what's driving things. Adjust this parameter - see how it changes things. That sort of thing.

Like I said, these models are notoriously difficult to get fully right. Have all the relevant variables been included? What parameters do we need to assume? For example, as I understand it, the initial models assumed a susceptibility of something like 97% of the population. I think this was a crazy assumption from the outset. Other parameters used would similarly have to have been assumed.
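The sensitivity to that susceptibility assumption is easy to illustrate with the standard SIR final-size relation, z = s0 * (1 - exp(-R0 * z)), where z is the attack rate and s0 the initially susceptible fraction. The R0 and s0 values below are assumptions for illustration only:

```python
# Final-size relation z = s0*(1 - exp(-R0*z)), solved by fixed-point iteration.
# Starting from z = s0 converges to the nonzero root when s0*R0 > 1.
import math

def final_size(r0, s0):
    z = s0
    for _ in range(200):
        z = s0 * (1 - math.exp(-r0 * z))
    return z

high = final_size(r0=2.5, s0=0.97)   # near-universal susceptibility (assumed)
low = final_size(r0=2.5, s0=0.60)    # substantial prior immunity (assumed)
```

With these illustrative numbers, dropping the assumed susceptible fraction from 97% to 60% cuts the predicted attack rate from roughly 85% to roughly 35% - which is why that single assumed parameter deserved far more scrutiny than it got.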

Models can be useful - but they should be treated with caution - and definitely should not have been used as some kind of "holy writ" as they appear to have been for the covid epidemiological models. They definitely need to be checked against the data and adjusted accordingly in the light of new information.

They also need to be checked against other models. For example, if one model predicts a curve that peaks at one time, and another model of the same thing predicts this peak at a later time - that difference needs to be understood (and debated).

The problem isn't so much with modelling as a useful and worthwhile technique, it's that we've treated these models as if they're accurate predictive 'science' - instead of useful tools to give us insights into things.

*I mean this in a slightly more technical sense as follows. Most of you might vaguely remember looking at projectile motion. You throw something and the shape of its trajectory looks like a parabola. You can model this using Newton's laws and demonstrate that under ideal conditions this is indeed a parabola. So you get some insight into the dynamics with this model. Are the model's predictions perfect? No - because these initial treatments neglect air resistance, for example. In order to develop a more realistic (and predictive) model we need to include air resistance. Just because the ideal model's predictions are not quite right does not mean we can just dismiss it as useless.
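As a sketch of that footnote: the ideal drag-free range formula versus a crude numerical integration with linear air drag. The drag coefficient and launch values are arbitrary illustrative choices:

```python
# Ideal projectile range vs the same launch with linear drag -k*v (per unit
# mass), integrated by forward Euler. k, v and the angle are arbitrary choices.
import math

def range_ideal(v, angle_deg, g=9.81):
    a = math.radians(angle_deg)
    return v * v * math.sin(2 * a) / g

def range_with_drag(v, angle_deg, k=0.1, g=9.81, dt=1e-4):
    a = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v * math.cos(a), v * math.sin(a)
    while y >= 0.0:                     # stop when it comes back to the ground
        x += vx * dt
        y += vy * dt
        vx += -k * vx * dt
        vy += (-g - k * vy) * dt
    return x

ideal = range_ideal(20.0, 45.0)
dragged = range_with_drag(20.0, 45.0)
```

Drag always shortens the range relative to the ideal parabola, yet the ideal model still tells you the essential shape of the motion - which is exactly the point: imperfect predictions don't make the model useless, they tell you which effect to add next.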

Reply