Rational Expectations, Intelligence, And War

'Rational Expectations' is a problematic theory in economics. Here I want to focus away from economics, and more on the meanings of 'rationality' in decision-making than on the problematic ambiguity of the word 'expect' (and its derivatives, such as 'expectations'). 'Expectation' here means what we believe 'will' happen, not what 'should' happen; a rational expectation is a prediction, an unbiased average of possibilities, formed through a (usually implicit) calculation of possible benefits and costs – utilities and disutilities, to be technical – and their associated probabilities.

A rational decision is one that uses all freely available information in unbiased ways – plus some researched information, bearing in mind the cost of information-gathering – to reach an optimal conclusion, or to decide on a course of action that can be 'expected' to lead to an optimal outcome for the decision-maker.

All living beings are rational to a point, in that they contain an automatic intelligence (AutoI) which exhibits programmed rationality. For most beings, AutoI is fully pre-programmed, so is not 'intelligence' as we would normally understand it; for others, that programming is subject to continuous reprogramming through a process of 'learning' – true intelligence. In addition, beings of at least one species – humans – have a 'manual override' intelligence (ManualI), which is our consciousness or awareness.

AutoI is an imperfect, though subversive, process of quasi-rational decision-making. Brains make calculations about optimal behaviour all the time; calculations of which we are not aware. (Richard Dawkins – eg in The Selfish Gene – would argue that these calculations serve the interest of the genotype rather than the individual phenotype.) For humans at least, full rationality means the capacity to use ManualI to override the amoral limitations of AutoI.

Rational decision-making, through learning, may be called 'intelligence'. Intelligence has another meaning, though: 'information', as in the 'Central Intelligence Agency' (CIA). It is perfectly possible to use unintelligent (stupid?) processes to gather and interpret intelligence!

Even when rational processes are used, many good decisions will, with hindsight, have inferior outcomes; or many good forecasts will prove partly or fully incorrect. It's mostly bad luck, but also partly because intelligence is rarely completely unbiased, and partly because the cost of gaining extra information can be too high.

Expected Value, aka Expected Outcome

There is a simple rationality formula – familiar to students of statistics and of finance – which can yield a number called an 'expected value'. In this formula, a high positive number represents a good decision, and a higher positive number represents a better decision. A negative number represents a bad (ie adverse) expected outcome, although sometimes all available expected outcomes are 'bad', meaning that the better course of action is the 'lesser evil'. A positive number indicates an expected benefit, though not a guaranteed benefit. Negative possible outcomes represent 'downside risk', whereas positive possible outcomes represent 'upside risk'.

(It is important to note that, in many contexts, a negative number does not denote something bad. A negative number may indicate 'left', as in the left-side of a Bell Curve; or 'south' or 'west' as in latitude and longitude. In accounting, a 'deficit' by no means indicates something bad, though President Trump and many others are confused on that point [see Could US tariffs cause lasting damage to the global economy? Al Jazeera 7 April 2025, where he says "to me a deficit is a loss"]; and we note that the substitution of the term 'global south' for 'third world' suggests an inferiority of southern latitudes. In double-entry bookkeeping, items must add to zero; one side of any balance sheet has negative values by necessity. A deficit, in some contexts, represents a 'shortfall' which is probably 'bad'; but a 'longfall' – or 'surplus' – is also often bad: just think of the games of lawn bowls and pétanque.)

A simple example of rational decision-making is to decide between doing either something or nothing; for example, when contemplating asking someone out on a date. The expected outcome of doing nothing – not asking – has a value of zero. But, if you ask the person for the date, and you evaluate the chance of a 'yes' as 0.2, the utility of a 'yes' as +10, and the disutility of a 'no' as -1, then the expected value calculates to 1.2; so, the rational decision is to ask (the calculation is 10×0.2–1×0.8). This example is interesting, because the more probable outcome is a 'no', and a 'no' would make you less happy than if you had not asked the question; nevertheless, the rational decision here is to 'take the risk'. ('Risk averse' persons might have rated the consequence of 'rejection' as a -4 rather than a -1; they would calculate an expected value of -1.2, so would choose to not ask for the date.)
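To make the arithmetic explicit, here is a minimal sketch of that calculation (in Python; the function name expected_value is my own, not a standard library call), using the numbers from the example above:

```python
# Expected value: the probability-weighted sum of (probability, utility) pairs.
def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

ask = expected_value([(0.2, 10), (0.8, -1)])          # 1.2: ask
risk_averse = expected_value([(0.2, 10), (0.8, -4)])  # -1.2: don't ask

# Doing nothing scores zero, so the rational choice is whichever is higher.
print(ask, risk_averse)
```

The same one-line sum underlies every calculation in this article; only the probability and utility assignments change.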

Political Decision-Making when Catastrophic Outcomes are Possible

A rational calculation allocates values and probabilities to each identified possible outcome. A favourable outcome is represented by a positive number, a neutral outcome has a zero value, and an adverse outcome has a negative value.

A basic favourable outcome may be designated a value of one; an outcome twice-as-good has a value of two. An outcome an 'order-of-magnitude' better has a utility or happiness value of ten. The same applies to adverse outcomes; the equivalent disutility scores are minus-one, minus-two, and minus-ten.

An aeroplane crash might incur a score of minus fifty to society and minus ten million to an individual. The probability of dying in such a crash, for an individual getting on a plane, is probably about one in 100 million. If it were as high as one-in-a-million, hardly anybody would get on a plane. (The chance of winning NZ Lotto first division is about one in four million.)

We should be thinking like this when we think about war. What kind of risk would we be willing to take? A problem is that the people who provoke wars do not themselves expect to be fatal victims.

A catastrophic outcome could range from minus 100 (say a small war) to minus infinity. An outcome which meant the total eradication of all life on Earth would come close to minus infinity. However, because of the mathematics of infinity (∞), any outcome of minus infinity with a non-zero probability yields an expectation of minus infinity. So for the following example, I will use minus one billion (-1b) as the disutility score for such a total catastrophe. A catastrophe that leads 'only' to human extinction might have a value of minus ten million (-10m). A holocaust the size of the 1943 RAF firebombing of Hamburg might have a catastrophe-value of minus one thousand (-1,000). A catastrophe the size of the 1932-1945 Bloodlands of Eastern Europe (which included 14 million murders, including the Holocaust, and much additional non-fatal suffering) might have an overall catastrophe-value of minus a hundred thousand (-100,000).
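The way minus infinity swamps the average is easy to verify. This minimal sketch (the tiny probability below is an arbitrary illustration, not drawn from the article) shows why a finite cap such as minus one billion is needed if the calculation is to discriminate between options:

```python
# Any non-zero probability of a minus-infinity outcome drags the whole
# expected value to minus infinity, however good the other outcomes are.
p_catastrophe = 1e-6  # illustrative: tiny, but non-zero

ev_uncapped = (1 - p_catastrophe) * 1 + p_catastrophe * float('-inf')
print(ev_uncapped)  # -inf: the formula can no longer rank decisions

# Capping total catastrophe at minus one billion keeps the arithmetic usable.
ev_capped = (1 - p_catastrophe) * 1 + p_catastrophe * -1_000_000_000
print(ev_capped)  # about -999: finite, so alternatives can still be compared
```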

(Could we imagine an outcome of plus infinity: +∞? Maybe not, though certain evangelical Christians – extreme dispensationalists – pray for Armageddon; "dispensationalism views the progression of history in stages that begin in the Garden of Eden and ends in the paradise of the New Heavens and New Earth". Thus, what might be minus infinity to most of us could be plus infinity for a few. There is an analogy of 'wrap-around-mathematics' in geospace; a longitude of +180° is the same as a longitude of -180°. And, in another example, some people believe that there is little difference between extreme-far-right politics and extreme-far-left politics. On this topic of extremes, the mainstream media should avoid the mindless repetition of hyperbole – as in a comment recently heard that President Trump's tariffs may amount to an "economic nuclear winter".)

My Example – the Ukraine War

In an example with some relevance to today, we might consider the NATO-backed 'defence of Ukraine'. I could assign a modestly favourable outcome of +1 with a 50% probability, a very favourable outcome of +10 with a 10% probability, and a catastrophic outcome of -1,000,000 with a 1% probability. (All other possibilities I will treat here as neutral, although my sense is that they are mostly adverse.) I calculate an expected value of minus 9,998.5 – practically, minus 10,000; this is an average of all the identified possibilities, a catastrophic risk rather than a prediction of a major catastrophe.
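As a check on that arithmetic, here is the same probability-weighted sum as a minimal sketch, using only the illustrative numbers assigned above:

```python
# Expected value of the 'defence' option; unlisted outcomes count as zero.
outcomes = [
    (0.50, 1),           # modestly favourable
    (0.10, 10),          # very favourable
    (0.01, -1_000_000),  # catastrophic
]
expected = sum(p * v for p, v in outcomes)
print(expected)  # -9998.5, i.e. practically minus 10,000
```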

This decision to persevere with the NATO-backed 'defence of Ukraine' is only rational if the only alternative decision – to abandon the NATO-backed 'defence of Ukraine' – comes up with an even lower expected value. (These two alternative decisions would be characterised by New Zealand's former High Commissioner to the United Kingdom – Phil Goff – as 'standing up for Good in the face of Evil' versus 'appeasement of Putin'.) It seems to me that catastrophe becomes much less probable, in my example, with the 'appeasement' option than with the 'defence' option. (In the case that Goff was commenting on, his implication was that the 1938 'appeasement' of Adolf Hitler by Neville Chamberlain led to either an increase in the probability of catastrophic war, or an increase in the size of catastrophe that might ensue.)

Morality Fallacy

One view of morality is the identification of some Other as Evil, and that any subsequent calling out of that (Evil) Other must therefore be Good. Further, in this view of morality, the claim is that, if and when hostilities break out between Good and Evil, then Good must fight to the 'bitter end' at 'any cost'. (When we see Evil fighting to the bitter end – as per the examples of Germany and Japan in World War Two – we tend to think that's stupid; but Good fighting to the bitter end is seen as righteous.)

Of course, this kind of morality is quite wrong. The idea that one must never surrender to Evil is a moral fallacy, based on the false (binary) idea that one side (generally 'our side') of a dispute or conflict has the entire 'moral-high-ground' and the other side has the entire 'moral-low-ground'. Further, a victory to 'Evil' is surely less catastrophic than annihilation; a victory to Evil may be a lesser evil. Choosing annihilation can never be a Good choice.

Most conflict is nothing like Good versus Evil, though many participants on both (or all) sides believe that their side is Good. Most extended conflict is Bad versus Bad, Bad versus Stupid, or Stupid versus Stupid; although there are differing degrees of Bad and Stupid. Further, in the rare case when a conflict can objectively be described as Good versus Evil, it can never be good to disregard cost.

Morality in Practice

True morality requires a broadening of the concepts of 'self' and 'self-interest'.

The important issues are benefits and costs to whom (or to what), and the matter of present benefits/costs versus future benefits/costs. In a sense, morality is a matter of 'who', 'where' and 'when'. Is it beneficial if something favourable happens 'here' but not 'there'? 'Now', but not 'then'? To 'me' or 'us', but not to 'you' or to 'them'?

Human ManualI is very good at inclusive morality; AutoI is not.

It is natural, and not wrong, to prioritise one's own group; and to prioritise the present over the future. The issue is the extent to which we 'discount' benefits to those who are not 'us', and future benefits vis-à-vis present benefits. And costs, which we may regard as negative benefits. A very high level of discounting is near-complete indifference towards others, or towards the future. An even higher level of discounting is to see harm to others as being beneficial to us; anti-altruism, being cruel to be cruel.
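As a minimal sketch of that idea (the weighting scheme and parameter names are my own illustration, not a standard formula), discounting can be written as an explicit weight on others' benefits:

```python
# Illustrative moral discounting: how much others' benefits count in 'our'
# calculus. delta = 1 is full inclusion of others, 0 is indifference, and
# a negative delta is anti-altruism (their harm scored as our gain).
def discounted_value(benefit_to_us, benefit_to_others, delta):
    return benefit_to_us + delta * benefit_to_others

print(discounted_value(1, 10, 1.0))    # 11: others fully included
print(discounted_value(1, 10, 0.0))    # 1: complete indifference
print(discounted_value(1, -10, -0.5))  # 6: others' harm scored as our gain
```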

Then there is the 'straw man' morality much emphasised by classical liberals. 'Libertarians' claim that certain people with a collectivist mindset believe in an extreme form of altruism, where benefits to others take priority over benefits to self; such an ethos may be called a 'culture of sacrifice', benefitting by not-benefitting. While this does happen occasionally, what is more common is for people to emphasise public over private benefits; this is the sound moral principle that libertarians really disapprove of.

Thus, an important part of our 'rational calculus' is the private versus public balance; the extent to which we might recognise, and account for, 'public benefits' in addition to 'private benefits'.

So, when we complete our matrix of probabilities and beneficial values, what weight do we give to the benefits that will be enjoyed by people other than ourselves – by other people in both their private and public capacities? Should we care if another group experiences genocide? Do we gloat? Should we empathise, or – more accurately – sympathise, and incorporate others into a more broadly-defined 'community of self'?

If we have a war against a neighbouring country, should we care about how it affects other more distant countries through 'collateral damage'? Should we care about a possible catastrophe if it can be postponed until the end of the life-expectancy of our generation? Should we care about the prosperity of life forms other than our own? Should we care about the well-being of our environments? Should we care more about our 'natural resources' – such as 'land' – than we care about other people who might be competing for the use of those same resources? If we have knowledge that will allow us to make improvements to the lives of others so that they catch up to our own living standards, should we make that knowledge public and useful? Should we account for the well-being of people who live under the rule of rulers who we have cast as 'Evil' (such as the burghers of Hamburg in 1943)?

One important morality concept is that of 'reciprocation'. If we accept that others have the right to think of us in ways that compare with how we think of them, then we must value their lives much as we value our own lives. If I live in Auckland, should I value the life of a person who lives in New Delhi nearly as much as I value the life of someone who lives in Wellington? I should, if I expect persons in New Delhi to value my life nearly as much as they value the lives of people in Mumbai.

Reciprocal morality can easily fail when someone belongs to a group which has apparent power over another group. We may cease to care whether the other group suffers our wrath, if we perceive that the 'lesser' group has no power to inflict their wrath onto our group. We may feel that we have immunity, and impunity. They should care about us, but we need not care about them.

It is through our ManualI – our manual override, our consciousness, our awareness – that we have the opportunity to make rational valuations which incorporate morality. Our AutoI, while rational in its own terms, is also amoral. We can behave in amoral self-interested ways – even immoral ways – without being aware of it. Our automatic benefit-cost analyses drive much of our behaviour, without our awareness; we cannot easily question what drives our Auto-Intelligence.

Our AutoI systems may – in evolutionary terms – select for degrees of ignorance, stupidity, blindness as ways of succeeding, of coping. AutoI protects us from having to face-up to the downsides of our actions and our beliefs; especially downsides experienced more by others than by ourselves. And they tell us that we are Good, and that some others are Bad.

Pavlovian Narratives

We come to believe in other people's narratives through habit or conditioning. AutoI itself has a cost-cutting capacity that allows speedy decision-making; it adopts reasoning shortcuts, because shortcuts save costs. We build careers – indeed our careers as experts in something – by largely accepting other people's narratives as truths that should not be questioned and that should be passed on. We enjoy belonging to 'belief communities'; and we are 'pain-minimisers' at least as much as we are 'pleasure-maximisers'; it may be 'painful' to be excluded from a community. We too easily appease unsound public-policy decisions without even knowing that we are appeasing. We turn off the bad news rather than confronting it.

Our beliefs are subject to Pavlovian conditioning. And one of the most painful experiences any human being can suffer is to have beliefs cancelled as 'stupid'. So we unknowingly – through AutoI – program our auto-intelligences to protect our beliefs from adverse exposure; and, if such protection fails, to denounce those who challenge our belief-narratives.

One form of cost-cutting-rationality is 'follow-the-leader'. It's a form of 'conclusion free-riding'. We choose to believe things if we perceive that many others believe those things. An important form of 'follow-the-leader' is to simply take our cues from authority figures, saving ourselves the trouble of 'manual' self-reasoning.

With AI – Artificial Intelligence – we delegate even more of our decision-making away from our moral centres, our consciousnesses, our manual overrides. We allow automatic and artificial intelligence to perform ever more of our mental labour. It's more a matter of people becoming robot-like than being replaced by robots.

Pavlovian rationalisation is heavily compromised by unconscious bias. Beliefs that arise from uncritical 'follow-the-leader' strategies are unsound. They lead us to make suboptimal decisions.

Why War?

Many people, including people in positions of influence, make decisions that are sub-rational, in the sense that they allow auto-biases to prevail over reflective 'manual' decision-making. There are biases in received information, and further biases in the way we interpret/process information.

Unhelpful, biased and simplistic narratives lead us into wars. And, because wars end in the future, we forever discount the problem of finishing wars.

When we go to war, how much do we think about third parties? In the old days, when an attacker might lay siege to a castle, it was very much 'us' versus 'you'. But today is the time of nuclear weapons and other potential weapons of mass destruction, of civilian targeting, and of drone warfare. Proper consideration of third parties – including non-human parties – becomes paramount. A Keir Starmer might feel cross towards a Vladimir Putin; but should that be allowed to have a significant adverse impact on the people of, say, Sri Lanka; let alone the people of Lancashire or Kazan?

We should undertake proper reflective and conscious consideration of the costs and benefits of our actions which impact on others. Smaller losses are better than bigger losses, and the world doesn't end if the other guy believes he has 'won'. Such considerations, which minimise bias, do allow for a degree of weighting in favour of the protagonists' communities. But our group should never be indifferent to the wellbeing of other groups – including but not only the antagonist group(s) – and should forever understand that if we expect our opponents to not commit crimes, then we should not commit crimes either.

War escalates conflicts rather than resolves them. And it exacerbates other public 'bads' such as disease, famine, and climate change. War comes about because of lazy unchecked narratives, and unreasoned loyalty to those narratives.

Further Issues about Rational Expectations

Poor People

It is widely believed by middle-class people that people in the precariat (lower-working-class) and the underclass should not gamble – for example, by buying lottery tickets and playing the 'pokies'. But 'lower-class people' generally exhibit quite rational behaviour. In this case, rare but big wins make a real difference to people's lives, whereas regular small losses make little difference to people already in poverty or in poverty-traps.

The expected return on gambling is usually negative, though the actual value of a big-win cannot simply be measured in dollar-terms. $100,000 means a much greater benefit to a poor person than to a rich person. Further, the expected value of non-gambling for someone stuck in a poverty-trap is also negative. It is rational to choose the least-negative option when all options are adverse.
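A minimal sketch of that reasoning, with entirely hypothetical numbers (the ticket price, prize, and odds below are illustrative, not actual lottery figures): the expected dollar return is negative, but once utilities reflect a poverty trap – small losses barely register, a big win is transformative – the expected utility of gambling can become the less-negative, or even a positive, option.

```python
# Hypothetical weekly lottery, seen from inside a poverty trap.
p_win = 1e-6                  # assumed chance of the big prize
ticket, prize = 5, 1_000_000  # assumed ticket price and prize

# Expected dollar return: negative, as gambling usually is.
ev_dollars = p_win * prize - ticket  # -4.0

# Illustrative utilities: a $5 loss barely registers for someone whose
# situation cannot get much worse, while a life-changing win is worth
# far more than its dollar value suggests.
u_win, u_loss = 10_000, 0.001
ev_utility = p_win * u_win - (1 - p_win) * u_loss  # about +0.009

print(ev_dollars, ev_utility)  # negative in dollars, positive in utility
```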

Policy Credibility

Here I have commented on the rationality of decision-making, and how rational decisions are made in a reflective, conscious, moral, and humane way. However, there is also an issue around the meaning of 'expectations'. While the more technically correct meaning of expectation is a person's belief in what will happen, the word 'expectation' is also used to express a person's belief in what should happen.

(An expectation can be either what someone will do, or should do. Consider: 'Russia will keep fighting' and 'Russia should stop fighting'. To 'keep fighting' and to 'stop fighting' are both valid expectations; though only the first is a rational expectation from the viewpoint of, say, Keir Starmer; the second is an 'exhortation'.)

The phrase 'rational expectations' is used most widely in the macroeconomics of interest rates and inflation. The job of Reserve Banks ('central banks') in the post-1989 world is to condition people (in a Pavlovian sense) into believing that an engineered increase in interest rates will lead to a fall in the inflation rate. This is called 'credibility'. The idea is that if enough people believe a proposition to be true, then it will become true, and hence the conditioned belief becomes a rational belief. If people come to believe that the rate of inflation this year will be less than it was last year – however they came to that belief – then it should douse their price-raising ardour; it becomes a contrived 'self-fulfilling prophecy'.

War

The same reasoning may be applied to warfare. If, by one side (especially 'our' side) talking tough (and waving an incendiary stick), people on both sides come to believe that the other side will douse its asset-razing ardour (due to fear or 'loss of morale'), then the belief that a war is more likely to end may in itself lead to a cessation of hostilities. While unconvincing – because humans are averse to humiliation – it's an appeal to 'our' AutoI (automatic intelligence) over our less credulous ManualI (manual override, our reflective intelligence). It's the 'credible' 'tough-man' (or iron-lady) narrative. In this sense, Winston Churchill was a credible wartime leader.

-------------

Keith Rankin (keith at rankin dot nz), trained as an economic historian, is a retired lecturer in Economics and Statistics. He lives in Auckland, New Zealand.
