The new health indicators framework was promoted as a key transformational feature of the Labour government’s health system restructuring, announced by Minister of Health Andrew Little last April. But to be worthy of the name, indicators have to indicate what is important to the health system.
The myth of National’s health targets
These indicators replace the former National-led government’s five health targets, which were misnamed “national performance measures”. The targets were intended to improve health services, but they were not performance measures. They shared one common feature – they could be counted.
However, much of what the health system does can’t be counted as a performance measure, including acute hospital admissions, chronic illnesses and mental health. But target activities were held up as the measure of productivity and performance and what made the system accountable – when they were nothing of the sort.
Having a certain percentage of children immunised was a good target, although narrow in scope. Shorter stays in emergency departments and cancer treatment access were also good targets, but their achievement was hampered by workforce shortages (including a 24 per cent shortfall of hospital specialists) and increasing patient demand. The other targets should have been business as usual.
Indicators framework
The latest “health system indicators” framework purportedly measures how well the system serves New Zealanders. It outlines high-level indicators of performance that also take into account the different health challenges each community faces.
The change from targets is justified by stating that the emphasis is on continuous improvement at a local level to lift overall health system performance. The framework confidently affirms that local solutions will be “developed, measured and tracked to ensure positive change on each of the indicators”.
The indicators
There are 12 initial national high-level indicators, each with a brief description:
1. Percentage of children who have all their age-appropriate schedule vaccinations by two years (immunisation was formerly a target, although with a narrower focus).
2. Ambulatory (outpatient) sensitive hospitalisations for children under five (rate of hospital admissions for children for an illness that might have been prevented or better managed in the community).
3. Percentage of under-25-year-olds able to access specialist mental health services within three weeks of referral.
4. Access to primary mental health and addiction services (in development).
5. Ambulatory (outpatient) sensitive hospitalisations for adults aged 45 to 64 (rate of hospital admissions for an illness that might have been prevented or better managed in the community).
6. Participation in the National Bowel Screening Programme (in development).
7. Acute hospital bed-day rate (number of days spent in hospital for unplanned care, including emergencies).
8. Access to planned care (people who had surgery or care that was planned in advance, as a percentage of the agreed number of events in the delivery plan).
9. Percentage of people who say they can get primary care from a GP or nurse when they need it (in development).
10. Percentage of people who say they felt involved in their own care and treatment with their GP or nurse (in development).
11. Annual surplus or deficit at financial year end (net surplus or deficit as a percentage of total revenue).
12. Variance between planned budget and year-end actuals (budget versus actuals variance as a percentage of budget).
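Taken at face value, the two financial indicators are simple ratios. A minimal sketch of how they might be computed – the function names and dollar figures below are illustrative assumptions, not drawn from the framework itself:

```python
def surplus_pct(revenue: float, expenditure: float) -> float:
    """Net surplus (or deficit, if negative) as a percentage of total revenue."""
    return (revenue - expenditure) / revenue * 100

def budget_variance_pct(planned_budget: float, actual: float) -> float:
    """Year-end actuals against planned budget, as a percentage of budget."""
    return (actual - planned_budget) / planned_budget * 100

# Illustrative figures only (NZ$ millions) – not real DHB data.
print(round(surplus_pct(1000.0, 1040.0), 1))         # spending $40m more than revenue: -4.0
print(round(budget_variance_pct(980.0, 1040.0), 2))  # 6.12 per cent over budget
```

On these definitions a deficit-running DHB shows a negative surplus percentage, and overspending against plan shows a positive variance – which is why, as argued below, aggregating such figures nationally says little about where performance is actually failing.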
Glaring omissions
These indicators are a mature improvement on the former targets. It is good that their scope is widened beyond what can be counted. But there are glaring omissions.
Addressing the severe workforce shortages is neglected, despite their detrimental effect across the health system and the fatigue they impose on health professionals, who are the prime driver of health system improvement. Workforce wellbeing, through retention and recruitment, should be an indicator.
Further, there is nothing about improving allied workforce engagement. This has been dealt a body blow by central government’s destructive assault on the leadership culture of Canterbury DHB. Most of the expertise and experience necessary for health system improvement resides in the workforce, yet that workforce is marginalised. Strengthened engagement through distributed leadership should be an indicator.
One of the biggest drivers of hospital operating costs – and a contributor to DHB deficits – is acute patient demand, which is increasing at a faster rate than population growth. Through its clinically led engagement between hospital and community, Canterbury was the most successful DHB in addressing this driver. There should be, but is not, an indicator on first slowing the rate of increase in acute demand for hospital treatment, and then reducing it.
Unfortunately, the Government is abolishing the structure best suited to facilitate this (DHBs responsible for defined populations) and has crushed the engagement leadership culture best able to achieve it (Canterbury DHB).
It is difficult to go past access to cancer treatment as an indicator of how well the health system is performing. This is because of the prevalence of cancer as a cause of sickness and death affecting the spectrum from community to hospital.
While it was one of the former targets, access to cancer treatment is inexplicably excluded from the new indicators. The fact it can be counted would suggest its exclusion is political, although Andrew Little has been reported as saying the target of 90 per cent of patients starting treatment within 62 days of diagnosis remains. But it is now buried.
Shorter stays in emergency departments (six hours) was a former target but again does not appear as an indicator. Emergency departments tell you what is happening in the hospital itself (whether or not there is bed-blocking, for example). The shorter-stays target led to hospital-wide system improvements. Like access to cancer treatment, shorter ED stays are more measurable and therefore more likely to expose failures.
Non-alignment
But what makes a mockery of the indicators is the inclusion of the final two on financial performance. DHB abolition makes this more difficult to meaningfully assess, outside hospitals at least. Nationally aggregated data is not an indicator, especially when it deliberately excludes government accountability for funding.
Going beyond what can be counted is good, but it doesn’t mean you exclude what can be counted. At best, several of the indicators are positive work in progress, but their highly qualitative nature makes them vulnerable to being overly subjective. They are weak on government accountability and remind me of school reports describing children’s progress as “could do better” or “has improved”.
We don’t need indicators whose objective is to fudge performance. The indicators must be seen in the context of the Government’s leftfield decision to abolish DHBs, without sufficient thought to what this means.
Posted on otaihangasecondopinion. This blog is based on a slightly revised column published in New Zealand Doctor on 15 September 2021.