One of the principal tenets of systems thinking is that setting targets to drive improvements in performance doesn’t work, and I recently came across an absurd example that I thought I’d share. But first, let me explain what I mean when I say that targets don’t work.
It’s not the numbers that bother me – I like evidence-based decision-making. Measures can be useful – as long as they really are measures that relate to the goals you are trying to achieve. But that’s not as easy as it sounds, and we often end up with targets which drive the wrong behaviours. A common example is setting a performance target to answer customers’ calls more quickly, for example 95% of all calls answered within 5 rings. Sounds reasonable, but people find all sorts of ways to achieve the target while losing sight of the true goal. How about answering the calls really quickly, but not actually answering the customers’ enquiries? Or, calling each other repeatedly, picking up and hanging up, to drive up the call-monitoring statistics! It’s called “gaming”.
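To make the gaming concrete, here’s a minimal sketch (with entirely made-up numbers) of how the call-answering metric can look healthy while the real goal is missed:

```python
# Hypothetical call log: (rings_before_answer, enquiry_resolved)
calls = [
    (2, False), (3, False), (1, False), (4, True),
    (2, False), (5, False), (3, True), (1, False),
    (2, False), (4, False),
]

# The target metric: fraction of calls answered within 5 rings
answered_fast = sum(1 for rings, _ in calls if rings <= 5) / len(calls)

# The actual goal: fraction of enquiries actually resolved
resolved = sum(1 for _, ok in calls if ok) / len(calls)

print(f"Answered within 5 rings: {answered_fast:.0%}")  # 100% - target met
print(f"Enquiries resolved:      {resolved:.0%}")       # 20% - goal missed
```

Every call beats the 5-ring target, so the monitored statistic looks perfect – yet only a fifth of customers got what they phoned for.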
Examples of failed targets include:
- The 4-hour maximum waiting time in A&E – which leads to patients being admitted, or referred for further assessment, simply so that they fall outside the target parameters, with the result that those patients wait even longer.
- The target for social landlords to complete urgent repairs within 5 days – which leads to convoluted definitions designed to avoid categorising repairs as urgent, contractors cancelling jobs to restart the clock, and incomplete jobs being recorded as finished.
I was doing some work with an NHS Trust, to understand the demoralising impact of top-down controls, target- and standard-setting, and continual redefinition of processes. Bureaucratic management and scrutiny look a lot like mistrust, and can result in people losing (or perhaps abdicating) autonomy, responsibility and job satisfaction. There may be a grudging acceptance of the need to tick boxes and fill in forms, but that’s not what gets healthcare professionals up in the morning. They want to care for others and be valued for the difference they make, but they get less and less time to do this, because they’re too busy coping with the latest initiatives and monitoring their ‘performance’.
One of the Trust’s measures shockingly showed that nursing staff spent only about 40% of their time with their patients. So, in response, the management team set a target to increase this to 60% and introduced a new monitoring system to track it! Oh, the irony! (Not to mention the lack of ambition – why is 60% enough?)
A systems approach might include measuring contact time, if this is seen to be a valuable part of the service – and, if it is too low, working with the nurses to identify what’s getting in their way: which non-valuable activities take up the rest of their time, and how those obstacles might be removed. But actually, contact time is an input – is it really the desired outcome? Or is the aim that patients feel well cared for? In which case, what is the best measure?
As I said earlier, I don’t think it’s easy to set meaningful measures. I did a bit of digging around to see if I could find a practical guide to a proven methodology, but no luck yet. What I have found are some common themes – good measures:
- relate to the desired outcome, not the processes followed to get there
- include qualitative elements, not just statistics
- are customer-focused (not provider-driven)
- describe what matters most, not what’s easy to measure
- are not seen as targets to be achieved, but evidence to drive continuous improvement
If you have any other tips, or know of a useful guide for setting great measures, or illuminating stories about the impact of measures and targets, then please share.