Are “outcomes” always outcomes? What are the data implications?

What outcomes are we trying to measure?

Good performance management requires priorities, objectives and outcome expectations to be aligned at all organisational levels. It requires the development of information that is useful for decision-making and contributes to better outcome reporting.

The terminology is not always consistent, but it is reasonable to assume that outcomes are the consequences of an operation or activity undertaken to achieve an objective. Of course, not all outcomes will be intended or desired: unintended or undesirable consequences of our actions are far from unusual. Hence we are trying to measure intended, quantifiable, desirable outcomes, and for that we need to identify an outcome indicator. An outcome indicator, like the closely related performance and benefit indicators, is a yardstick used in forming judgements about the extent to which an objective has been achieved (though it does not, it should be noted, make the judgement itself).

What are outcome indicators?

Outcome indicators are indicators of effectiveness. We need to put analysis and thought into getting them right – that is, making them unambiguous and fit for purpose. Are the indicators we choose really outcome indicators, or just workload indicators? They are useless if they do not allow us to measure what we are trying to achieve.

Because these terms are used inconsistently, and too often without adequate thought, it is worth drawing a comparison between three types of indicator:

 
[Figure: activity, efficiency and effectiveness indicators compared]
 

Activity indicators allow you to measure what has happened during an activity, for example the use of time or resources. These are also called workload indicators.

Efficiency indicators allow you to measure how well this has happened, for example how many outputs were created per unit of input, and at what cost.

Effectiveness indicators allow you to measure whether you have got what you wanted, that is, whether you have achieved your objective.
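As a minimal sketch of the distinction (in Python, with entirely hypothetical figures for a bus service; none of the variable names or targets come from a prescribed measure set), the three types might be computed like this:

```python
# Hypothetical figures for one reporting period of a bus service.
services_run = 1_200        # activity: what happened (workload)
staff_hours = 3_400         # activity: resources used
operating_cost = 510_000.0  # dollars

passengers_carried = 54_000
passengers_last_period = 52_000
target_passenger_growth = 0.05  # objective: grow patronage by 5%

# Activity (workload) indicator: raw volume of work done.
print(f"Services run: {services_run}")

# Efficiency indicators: outputs per unit of input, and at what cost.
print(f"Cost per passenger: ${operating_cost / passengers_carried:.2f}")
print(f"Passengers per staff hour: {passengers_carried / staff_hours:.1f}")

# Effectiveness (outcome) indicator: did we achieve the objective?
growth = passengers_carried / passengers_last_period - 1
met = "met" if growth >= target_passenger_growth else "not met"
print(f"Patronage growth: {growth:.1%} against a {target_passenger_growth:.0%} target ({met})")
```

Note that a busy service (high activity) run cheaply (high efficiency) can still fail its objective; only the last indicator tells you that.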

The most appropriate outcome indicators are determined through consultation and the use of indicator assessment criteria. The broader an objective, the more difficult it usually is to agree an outcome indicator for it. We also need to consider the timeframe for our desired outcomes: with large projects and programs, achieving them could take years.

A successful organisation will use a combination of lag and lead indicators. Traditional corporate reporting generally gives a picture of the longer-term outcomes after the event – that is, lag indicators. Lead indicators report on the results that drive longer-term performance.

To use the example of a commercial bus operator: if customers are not happy with the punctuality of the service, they are likely to switch to alternative modes of transport, causing customer numbers to drop. “On-time service” is therefore a lead indicator of “Customer satisfaction”, which in turn is a lead indicator of eventual customer usage of the service. The value of defining and monitoring lead indicators is that they hold the clue to what will happen to the lag indicators if the status quo is maintained.
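A sketch of how a lead indicator can serve as an early warning, again with invented monthly figures and an illustrative service standard:

```python
# Hypothetical monthly on-time rates for the bus example, most recent last.
monthly_on_time_rate = [0.94, 0.93, 0.89, 0.86, 0.84]
ON_TIME_STANDARD = 0.90  # illustrative service standard, not a real benchmark

# A sustained fall in the lead indicator signals that the lag indicators
# (customer satisfaction, then patronage) are likely to fall later.
recent = monthly_on_time_rate[-3:]
if all(rate < ON_TIME_STANDARD for rate in recent):
    print("Warning: on-time performance below standard for 3 months running; "
          "expect satisfaction and patronage to decline if nothing changes.")
```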


What data is needed?

Outcome measurement consists of identifying the appropriate data sources, collecting the requisite data, and measuring your achievement against agreed targets. The process of determining the data and algorithm required to measure each indicator works in parallel with the indicator definition.

If we have a clear understanding of what we are trying to measure, then we can determine how to measure it. For any indicator, this can take the form of an algorithm that identifies the component items of data needed for measurement. This is the case even for qualitative measures, which might be based on surveys or inter-subjective evaluations.
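For illustration only, here is one way an indicator’s algorithm might be written down so that its component data items are explicit; the indicator, field names and figures below are hypothetical:

```python
def refuge_placement_rate(requests: int, placements: int) -> float:
    """Hypothetical outcome indicator: proportion of refuge requests that
    resulted in a placement within the reporting period."""
    if requests == 0:
        raise ValueError("No requests recorded for the period")
    return placements / requests

# Writing the algorithm down makes explicit exactly which data items must be
# collected ('requests' and 'placements'), each needing a precise definition.
print(f"Placement rate: {refuge_placement_rate(requests=240, placements=198):.1%}")
```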

Once you know what data is needed, you can assess whether you already hold it in one or more of your databases, or whether it needs to be collected, and whether it is cost effective to do so. In either case, it is critical to be clear about how each item of data is defined. To use the old maxim, you need to be counting and comparing apples with apples to get any sensible result. And if you are using the results for budget allocation, or for comparisons between organisations, it is best to get them right. An enterprise architecture and/or a corporate thesaurus will help you here.

When you have established your outcome indicators and determined the necessary data, data quality becomes a consideration. A number of criteria may be applied here: for example, how precise the data is, how reliable it is, and how timely it is.
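As a rough sketch of what applying such criteria can look like in practice (the record layout, fields and thresholds below are assumptions, not a standard):

```python
from datetime import date, timedelta

# Hypothetical service records; in practice these come from your databases.
records = [
    {"client_id": "C001", "service": "counselling", "date": date(2017, 6, 1)},
    {"client_id": None,   "service": "counselling", "date": date(2017, 6, 2)},
    {"client_id": "C002", "service": "refuge",      "date": date(2016, 1, 5)},
]

# Completeness: every record has the fields the indicator's algorithm needs.
complete = [r for r in records if r["client_id"] and r["service"] and r["date"]]
print(f"Complete records: {len(complete)}/{len(records)}")

# Timeliness: flag records too old to inform the current reporting period.
cutoff = date(2017, 6, 30) - timedelta(days=365)  # illustrative one-year window
timely = [r for r in complete if r["date"] >= cutoff]
print(f"Timely records: {len(timely)}/{len(complete)}")
```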

To use community services examples of the need for clearly defined indicators and quality data: if your intended outcome is that women and children are safe from domestic abuse, what data is needed, and in what form, to measure this? And if your funding is based on “instances of a service”, do two client visits to a provider for the same service on the same day count as one instance or two?
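A small sketch of how much that counting rule matters; the visit data is invented, and both counting rules are purely illustrative:

```python
# Hypothetical visit records: (client, service, date).
visits = [
    ("C001", "counselling", "2017-06-01"),
    ("C001", "counselling", "2017-06-01"),  # same client, service and day
    ("C002", "counselling", "2017-06-01"),
]

# Rule A: every visit counts as an instance of service.
print(f"Instances (per visit): {len(visits)}")

# Rule B: one instance per client, per service, per day.
print(f"Instances (deduplicated): {len(set(visits))}")
# 3 versus 2: a material difference if funding is allocated per instance.
```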

The bottom line

Having identified the outcomes that you want to achieve:

  • Are you clear on your objectives?
  • Are your outcome indicators relevant, appropriate and practical?
  • How do you measure them?
  • What data is needed to do this?
  • Where does this data come from?
  • Is the data of good enough quality?

If you don’t know the answers to these questions, Doll Martin Associates can help!
