“The question is,” says Prof. James Alexander, “is probability a property of reality or a property of our awareness of reality? In other words, is it objective, or subjective: is it to do with our minds, or not?”

A fascinating question, indeed. And Prof. Alexander adds a moral dimension, claiming that entrepreneur Mike Lynch, who died aboard his yacht *Bayesian* in a freak weather incident earlier this month, “claimed that probabilities were not properties of the world, and, like Nimrod, challenged [God] to prove him wrong. He was proved wrong.”

While I doubt that Mr. Lynch, who had built his fortune on the theorems of ‘subjectivist’ Bayesian probability theory, would have regarded his work as any kind of challenge to God (or reality), the question Prof. Alexander poses remains a pertinent one.

In fact, it’s not difficult to show that probability is often concerned with our ‘subjective’ expectations of future events. That’s what makes it useful to us, after all: it quantifies how risky or otherwise a course of action is, at least insofar as we can tell from what we currently know. Consider the following scenario.

“You will face a test one day next week,” the teacher tells you, “but I’m not telling you when it will be.”

You ponder when the test might occur. At this point, you know each day is as likely as the others: there are five days in the week, so each has a likelihood of a fifth or 20%.

Monday comes and goes; no test. Heading into class on Tuesday morning you know there are now four days remaining, so you recalculate: each remaining day has a quarter or 25% chance of being test day.

Again, the day passes without a test. Wednesday morning, with three possible test days remaining, you work out that each has a third or 33% chance of being it.

But it’s now Thursday and still no test. Has the teacher forgotten? With just two days remaining, you calculate each has a half or 50% likelihood.

But Thursday, too, passes without a test. Assuming the teacher was being truthful, you turn up to class on Friday, 100% certain that the test must occur today. Sure enough, the test appears.

This is ‘subjective’ probability in action. At the start of the week you have a particular expectation of how likely the test is to occur on any given day. As the week goes on, the possible days left for it to happen reduce, and this changes the information you have and thus your expectations for the remaining days.
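The day-by-day updating described above is simple enough to put into a few lines of code. Here is a minimal Python sketch (the function name is my own, not from the scenario): each morning without a test, the remaining days are equally likely, so the chance the test is today is one over the number of days still in play.

```python
# A minimal sketch of the day-by-day updating described above.
# Each morning with no test yet, the remaining days are equally likely.

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def chance_of_test_today(day_index: int) -> float:
    """Probability the test is today, given no test on earlier days."""
    remaining = len(days) - day_index  # days still in play, including today
    return 1 / remaining

for i, day in enumerate(days):
    print(f"{day}: {chance_of_test_today(i):.0%}")
# Mon: 20%, Tue: 25%, Wed: 33%, Thu: 50%, Fri: 100%
```

The numbers printed match the week walked through above: 20%, 25%, 33%, 50% and finally 100% on Friday morning.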

Here’s a trickier scenario that has caught out many people over the years: it’s the so-called ‘Monty Hall’ problem, named for an old U.S. gameshow host. Imagine you are a contestant on a gameshow and Monty presents you with three doors, telling you that behind one is a car but the other two are empty. You pick a door, but before opening it, Monty tells you he is going to open one of the doors you haven’t picked to reveal an empty door. Having done so, he asks if you want to switch to the other remaining door. Should you stick or switch?

Many people would stick, thinking each of the two remaining doors is as likely as the other, and wanting to avoid the regret that would come if they switched and were wrong. But in fact you should switch to the other door: the chance that your first pick found the car was one in three (33%, because there were three doors to choose from), but the chance that the other remaining door hides the car is twice that, i.e., two-thirds (67%). Why? Because, before Monty opened a door for you, the two doors you didn’t pick together had a two-thirds chance of hiding the car, and Monty has just eliminated one of them that doesn’t have it, leaving the other with the full 67% likelihood. In other words, Monty has just given you key information about the remaining door that you didn’t have when you made your first pick – eliminating one door that is definitely wrong, and leaving one that is now twice as likely to be right.

If you’re still confused by this, think of it this way. There was a one in three chance that your first pick had the car behind it. But there was a two in three chance that your first pick *didn’t* have the car behind it. In this latter scenario, the remaining door, after Monty has opened one, must have the car behind it because Monty has just eliminated the one that didn’t. Of course, you don’t know that you’re in this scenario – the one where your first pick didn’t have the car. But you do know that *there is a two-thirds chance that you are in it*. And hence there is a two-thirds probability that if you switch to the door Monty has left you will find the car.
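If even that doesn’t convince you, the game is easy to simulate. Here is a hedged Python sketch (the `play` helper and the door-picking details are my own construction, simplified so that Monty always opens the lowest-numbered empty door he is allowed to): running many rounds of stick versus switch shows the win rates settling near one-third and two-thirds respectively.

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """One round of the Monty Hall game; returns True if the car is won."""
    car = rng.randrange(3)   # door hiding the car
    pick = rng.randrange(3)  # contestant's first pick
    # Monty opens an empty door that is neither the pick nor the car
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        # move to the one door that is neither the pick nor the opened door
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(42)  # fixed seed so the run is repeatable
trials = 100_000
stick = sum(play(False, rng) for _ in range(trials)) / trials
switch = sum(play(True, rng) for _ in range(trials)) / trials
print(f"stick: {stick:.2f}, switch: {switch:.2f}")  # roughly 0.33 vs 0.67
```

No probability theory is assumed by the simulation itself – it just plays the game over and over – which is what makes it persuasive to sceptics of the two-thirds argument.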

Again, then, we see how probability is about expectations, and how expectations change as we gain more information. This can produce outcomes that are counterintuitive, at least at first glance.

What about the *objective* probabilities in the gameshow scenario? They are of little interest or use to us. The ‘objective’ probability (i.e., the one based on the state of the world rather than the state of our beliefs about the world) of a door concealing the car is 1 for where the car is and 0 for where it isn’t. This doesn’t tell us anything helpful, of course, as we don’t know which is which. Probability in such a situation is about quantifying expected outcomes so we can make informed decisions. It’s about what we should logically expect to be the case given what we currently know about the structure of the situation. And as we ‘subjectively’ learn more, our expectations change accordingly.

Objective probability, on the other hand, is about quantifying the propensity of a thing to behave in certain ways, and is essentially a description of how the thing has behaved in the past. While of no use in a gameshow, we might use it to study, for instance, how often obese people have heart disease, in order to quantify the risk of heart disease in obese people (the past prevalence and the future risk being, for objective probability, basically the same thing). We might then attempt to find the efficacy of a medical treatment by looking at prevalence of heart disease in a treatment versus a control group. This is obviously a very important thing to do. However, even here there is a ‘subjective’ element, in that the more information a person gains about different risk factors, the more his assessment of his own risk will change depending on how many risk factors he deems himself to have. And how can you be sure that you’ve fully captured all your risk factors, along with any confounders? These were key issues during the Covid pandemic, of course, as we were bombarded with studies purporting to prove the “statistical significance” (a term of objective probability) of various treatment-based outcomes.

But in truth, even nature engages in a spot of ‘subjective’ probability. This is one of the unexpected (and mind-bending) discoveries of quantum mechanics. For example, if you send a single photon of light (i.e., the smallest physical unit of light) into a beam splitter, so that half of its wave goes one way and half goes another, what you won’t ever detect is half a photon in each direction. Instead, you find that the photon has a 50-50 chance of being detected in either direction. Furthermore, if the photon fails to be detected in one direction it will certainly be detected in the other – the lack of detection of one half of the split wave changing the probability of detection of the other half. You may think this is just because the photon went one way rather than the other. But quantum mechanical waves don’t work that way – they are probability waves that define the likelihood of a particle being detected across a region of space.

Hence with physical matter, too, on a quantum level the probabilities evolve as time goes by and the occurrence or non-occurrence of events alters the likelihood of events elsewhere. If you think this “spooky action at a distance” (as Einstein memorably put it – he wasn’t a fan of quantum theory) sounds like science fiction then you should know that it is the basis of the very real science of quantum computing.

Subjective probability does throw up some knotty logic problems, it would be fair to say. Let’s return to the test the teacher set for us and suppose that she stipulates it will be a *surprise* test, i.e., we will not know on the day of the test that it is going to take place. This apparently innocuous stipulation messes with the logic of the probabilities in a weird way and generates a paradox. To see why, note that such a surprise test *cannot* occur on the Friday. This is because, once we get to the Friday, we are 100% certain the test will occur today (as it must happen at some point this week). Therefore the test would not be a surprise to us – and the teacher has stipulated that it *will be* a surprise to us. Hence we deduce the teacher will not set it on the Friday.

Having ruled out the surprise test occurring on Friday, it seems we can continue the train of logic and rule out Thursday as well. For with Friday ruled out, when we now arrive at Thursday morning we will be certain that the test will occur today, as we have already established that it cannot occur on Friday. But this would mean that, again, it will not be a surprise to us, and since the teacher has stipulated it will be a surprise, we conclude that it cannot occur on Thursday either.

The inductive logic then appears to continue: having ruled out both Thursday and Friday, upon reaching Wednesday morning we deduce the test must occur today, and thus again will not be a surprise to us and so cannot take place. And so on: having ruled out Wednesday, Thursday and Friday, we do the same for Tuesday – on Tuesday morning the surprise test would again be anticipated, and thus is once more ruled out. Ultimately we conclude that a surprise test cannot be set on any day of the week.
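The elimination steps above are mechanical enough to automate. Here is a small Python sketch of the backward induction – flawed reasoning, as the paradox shows, but the steps themselves are easy to replay in code: working backwards from Friday, any day that is the last one still possible would be anticipated, so it is struck off.

```python
# Mechanically replaying the backward induction described above.
# (The reasoning is flawed -- that is the paradox -- but the steps
# themselves are easy to automate.)

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
possible = set(days)

# Work backwards from Friday: if no later day is still possible,
# the test on this day would be certain, hence no surprise.
for day in reversed(days):
    later = {d for d in days[days.index(day) + 1:] if d in possible}
    if not later:              # test would be anticipated on this day...
        possible.discard(day)  # ...so rule it out as a surprise test day

print(possible)  # set() -- every day gets ruled out
```

The loop faithfully reproduces the argument’s conclusion: every day of the week is eliminated, which is exactly where the paradox bites.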

Then the teacher surprises us with a test on Tuesday.

What went wrong with our apparently sound reasoning? The problem is that the requirement that the test must be a surprise left our probabilistic reasoning with a self-referential element (because ‘surprise’ is itself about what is expected, the very thing we are trying to deduce). Consequently, each step in our reasoning contains a paradox or contradiction: we conclude that the test *must* occur that day (100% likelihood) and therefore that it *cannot* occur that day (0% likelihood). This is obviously not sound probabilistic logic.

The surprising upshot is that it is not in fact possible for us to quantify the probability of a surprise test (at least on the terms of this scenario) as the probabilistic reasoning breaks down owing to its inherent contradictions. This means that while we know that there will be a surprise test during the week (the teacher has told us so), we cannot put any numbers on how strongly we should expect it on any given day. This is unexpected because we are used to being able to put figures on our expectations, calculating the odds from the structure of the scenario facing us. But some scenarios, it appears – in particular, those that pose circular questions about our expectations of our expectations – do not allow us to do that.

Maybe, then, there is something dubious about ‘subjective’ probability, after all. Or more likely, just a limit to what it can tell us. But certainly, there’s no reason to regard it as a challenge to God, or reality. Indeed, it’s an intrinsic part of reality, both because our minds and beliefs are themselves ‘real’ (real beliefs, that is, not necessarily true ones), and also because nature itself is known to dabble in evolving probabilities – just ask the photons (or indeed, any particles small enough to retain their quantum mechanical properties).

There is, I confess, no great political point to be made from this, or theological one for that matter (though Einstein may disagree, as he felt it was important to argue that “God does not play dice with the universe” as he fought his losing battle with quantum theory). But if you’ve made it this far, then presumably, like me, you find it all somewhat fascinating.
