Measurement System

About Measurement Systems

However thorough an investor’s initial impact analysis, ongoing monitoring of the investment and oversight of the actual impact achieved will rely upon data collected and reported by the investee organisation. The measurement system the organisation uses is therefore crucial, as it will largely define what the investor is subsequently able to know and report about the impact.

While a multitude of impact measurement and reporting systems currently exist, there is no universal standard. And while an increasing number of common tools are available for different sectors or outcome areas to draw upon, innovation among social purpose organisations is likely to make rigid definitions of what impact is, and how it must be measured, restrictive and misplaced. The organisations themselves, with their unique understanding of their own mission and of the beneficiaries they work with, are often best placed to identify and select what matters most for tracking and evidencing their own impact. And as the organisations are the systems’ primary users, it is essential that they find the results their systems produce valuable and instructive.

Most investors prefer not to take a prescriptive approach to the measurement systems used by their investees, and instead let the organisations set out what they propose to measure, how they will measure it, and how they will subsequently report on it. However, for the investor to ensure that the system in use will provide meaningful and reliable information, and information that can support their own impact investment strategies, reviewing the measurement system is an essential part of working through the organisation’s impact plan. Often this involves collaborative development of the system with the organisation, both to strengthen it in its own right, and to find ways for it to key into the investor’s own framework for measuring impact across their investment portfolio.

Although any individual measurement system may be unique to an organisation in its detail, certain principles and structures are consistently indicative of quality, and can be referred to for reviewing purposes, namely:

  • commitment to evidence
  • use of indicators
  • data collection methods
  • targets and objectives
  • proportionality

Commitment to Evidence

Does the organisation show a strong commitment to being able to evidence the work it is doing, and the impact it is having?
Is the measurement system fit for purpose, and capable of confirming the success or failure of the impact plan?

The first consideration is whether the organisation is committed to finding out whether its approach to impact generation is actually working, and to being able to evidence this. The maxim that can be applied is that if an organisation wishes to claim an impact, it has to prove it. To do so, the organisation’s measurement system must be fit for purpose, and capable of producing results that will confirm the success or failure of the impact plan. This implies not only taking measurements of individual elements within the plan (outputs, outcomes), but also testing the validity of the links between these elements. A complete measurement system will provide evidence both of the impact being achieved, and of the connections within the impact chain, thereby substantiating the underlying theory of change.

A review looks for:

  • evidence is defined

    For the evidence collected to prove the initial theory, it is necessary to specify in advance what the anticipated results look like (i.e. “if the plan is working, we expect the measurements to show x”).

  • scope or materiality is defined

    The measurement system will most likely not capture everything, so it is important to set out the scope of the system, specifying which aspects of the impact are and are not covered, and providing adequate reasons for omissions (e.g. issues judged to be of low relevance, or barriers to measurement). The scope is set in relation to the conditions for change and the context of change, and is designed to take account of factors beyond the organisation’s operations that play a role in the observed outcomes. Factors deemed material to the changes taking place, and which therefore need to be accounted for in a calculation of the impact, are said to fall within the bounds of materiality.

  • timeline is defined

    The measurement system anticipates when the various outputs and outcomes will be observable, and corresponds with their timelines. Where anticipated outcomes are expected to occur after the intervention has finished (and possibly beyond the term of the investment), the maxim that to claim it, you have to prove it, requires the measurement system to continue tracking beneficiaries (or a representative sample of them) after the intervention or investment has ended.

  • reporting is defined

    Beyond being gathered, the evidence must be made available through regular, transparent reporting, including verifiable results and auditing where appropriate.

Use of Indicators

Has the organisation selected a high quality set of indicators?

Indicators are the linchpin of any measurement system. They are the specific variables that are tracked to demonstrate the delivery of outputs and the positive change that follows.

An effective impact measurement system will incorporate a number of indicators, or an “indicator set”, which taken as a whole tracks information about both outputs and outcomes, and includes both quantitative and qualitative data. The precise indicator set used by any organisation will depend upon its mission and focus, as well as its scale and resourcing capacity. As with the choice of the measurement system itself, the organisation is often best placed to select its own indicators, rather than being handed a prescriptive list by an investor. However, investors have an important role in reviewing the proposed indicator set, and in working with the organisation to ensure its quality.

The concept of SMART indicators (Specific, Measurable, Attainable, Relevant, Time-bound) is often borrowed from business, and can be helpful. The components of the SMART acronym can be given a little extra definition in relation to impact, and the qualities of Standard and Stakeholder-Inclusive can be added, to yield SMARTSSI; an illustrative sketch of an indicator record framed against these qualities follows the list below.

A review looks for indicators that are:

  • specific

    The indicators are specific as to what is being measured, and how it is measured, such that repeat measurements are made of the same thing in the same way.

  • measurable

    The indicators track values that are meaningfully measurable. To produce useful data, indicators must be:

    • responsive to change, and so do not always produce the same result
    • consistent, i.e. the measurement is taken in a consistent fashion and there is consistency as to what the measurement means, thus forming the basis for comparison from one set of measurements to the next
    • relative, such that results relate to a scale which can distinguish higher and lower

  • attainable

    The indicators set goals that are ambitious but attainable. Equally, the process of taking measurements using the indicators is practical and attainable. It is crucial that indicators are reasonably simple, quick and cheap to use, and therefore suitable for taking regular measurements (at least once a year).

  • relevant

    The indicators address the things that are most important to the organisation (in terms of its mission and goals) and to beneficiaries (as expressed through consultation — see outcomes, and understanding beneficiary needs in context): i.e. they serve to demonstrate the outcomes and impacts that really matter.

  • time-bound

    Indicator measurements relate to the reporting period (providing readings at least from one year to the next), and serve to demonstrate change that has taken place over that time. Organisations working with long-term outcomes, and not able to provide results on these within a reporting year, may look to intermediate outcomes that can be used to show progress (while maintaining planned measures to monitor the long-term outcomes). For organisations working with shorter-term outcomes, it may still be appropriate to track beneficiaries over a longer period to ensure outcomes are having a sustained impact.

  • standard

    Indicators align with established standards within their specific fields or outcome areas and, wherever possible, common indicators are used in a way that supports comparison. Standard indicators offer the benefits of being available, up-to-date and of assured quality, as well as compatibility and the potential for benchmarking. The outcomes matrix provides a useful resource for standard indicators.

  • stakeholder-inclusive

    The indicator set as a whole will provide more convincing evidence if it incorporates the perspective of stakeholders. Foremost is the beneficiary perspective, typically represented through surveys or beneficiary feedback. If surveys are used, basic principles of good data collection must apply (neutral questioning, representative samples etc.). Feedback from those around beneficiaries (e.g. family members, social workers, carers, employers of beneficiaries) may also be valuable, including reported or observed changes in attitude, feelings or behaviour of beneficiaries.
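
To make these qualities concrete, the sketch below shows one way an indicator record might be captured and given a first-pass review against the SMARTSSI checklist. It is a minimal illustration in Python: the Indicator fields, the review_indicator checks and all example values are hypothetical, and not drawn from any particular framework.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Indicator:
        """A single indicator record, with fields loosely mapped to the SMARTSSI qualities."""
        name: str                          # e.g. "Participants entering employment"
        definition: str                    # Specific: what is measured, and how
        unit: str                          # Measurable: the scale results relate to
        collection_method: str             # Attainable: how measurements are taken in practice
        linked_outcome: str                # Relevant: the outcome it serves to evidence
        measurements_per_year: int         # Time-bound: at least one reading per reporting period
        aligned_standard: Optional[str]    # Standard: a common indicator it aligns with, if any
        stakeholder_source: Optional[str]  # Stakeholder-inclusive: e.g. "beneficiary survey"

    def review_indicator(ind: Indicator) -> List[str]:
        """Return a list of review concerns; an empty list means no obvious gaps."""
        concerns = []
        if not ind.definition:
            concerns.append("not specific: what is measured, and how, is undefined")
        if not ind.unit:
            concerns.append("not measurable: no scale to distinguish higher and lower")
        if ind.measurements_per_year < 1:
            concerns.append("not time-bound: fewer than one measurement per year")
        if ind.aligned_standard is None:
            concerns.append("not standard: no alignment with a common indicator")
        if ind.stakeholder_source is None:
            concerns.append("not stakeholder-inclusive: no beneficiary or stakeholder input")
        return concerns

    employment = Indicator(
        name="Participants entering employment",
        definition="Participants in paid work six months after programme exit",
        unit="count of participants",
        collection_method="follow-up phone survey",
        linked_outcome="sustained employment",
        measurements_per_year=2,
        aligned_standard=None,
        stakeholder_source="beneficiary survey",
    )
    print(review_indicator(employment))  # flags the missing standard alignment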

Data Collection

Is there a plan and clear processes for data collection?
Is there a review of the data collected?

A measurement system implies not only the selection of a set of indicators, but also processes for the actual taking of measurements. Planning and embedding these processes significantly improves the reliability and simplicity of data collection, and reduces its cost.

  • planned data collection

    The measurement system sets out a plan that anticipates ongoing data collection, and specifies both the processes for taking measurements and when measurements will be taken. Ad hoc or post hoc collection of data is invariably less consistent and more difficult to do. People’s memories (including beneficiaries’ own) of how beneficiaries were progressing at various stages over a reporting period are notably prone to error; reports and measurements taken at the time, according to a planned schedule, are far more reliable. An appropriate minimum requirement is often that measurements are made both before and after an intervention, ensuring that what is being measured is the change between the two (a simple sketch of this before-and-after comparison follows this list).

  • embedded processes

    The processes for data collection are embedded into operating processes, and form an essential part of how activities are run. Staff know and follow the processes, understand how results feed into the wider measurement system, and are aware of the organisation’s overall commitment to evidence.
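
As a minimal illustration of the before-and-after measurements mentioned above, the sketch below assumes a simple numeric score recorded for each beneficiary at planned collection points before and after the intervention; the identifiers and values are invented for the example.

    from statistics import mean

    # Hypothetical pre- and post-intervention scores, recorded at planned collection points.
    pre_scores = {"B001": 3, "B002": 5, "B003": 2, "B004": 4}
    post_scores = {"B001": 6, "B002": 6, "B003": 5, "B004": 7}

    # What is being measured is the change between the two readings, not the final score alone.
    changes = {bid: post_scores[bid] - pre_scores[bid]
               for bid in pre_scores if bid in post_scores}

    print(f"Beneficiaries with both readings: {len(changes)}")
    print(f"Average change: {mean(changes.values()):.2f}")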

It is important to review the data periodically as it is collected, checking for identifiable errors or consistent differences in the quality of data collection (e.g. among different beneficiaries or by different operators). The review must also encompass the data collection processes themselves, to ensure these are operating well and not introducing biases into the results. In particular, attention should be paid to the beneficiary sample, asking whether it is representative of the population as a whole (e.g. is data being collected only from the most engaged beneficiaries?), and if not, how the results might differ for a different sample of beneficiaries.
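
One simple way to probe whether the sample is representative is to compare the distribution of a known characteristic among those the data was collected from against the full beneficiary population. The sketch below is illustrative only: the “engagement” grouping, the records and the ten per cent threshold are assumptions made for the example.

    from collections import Counter

    def share_by_group(records, key):
        """Proportion of records falling into each group for the given key."""
        counts = Counter(r[key] for r in records)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    # Illustrative records: the full beneficiary population versus those who returned surveys.
    population = [{"engagement": "high"}] * 40 + [{"engagement": "low"}] * 60
    respondents = [{"engagement": "high"}] * 30 + [{"engagement": "low"}] * 20

    pop_share = share_by_group(population, "engagement")
    resp_share = share_by_group(respondents, "engagement")

    # Flag groups whose share among respondents differs markedly from the population,
    # which would suggest the sample over-represents the most engaged beneficiaries.
    for group in pop_share:
        gap = resp_share.get(group, 0.0) - pop_share[group]
        if abs(gap) > 0.10:
            print(f"{group}: respondent share differs from population share by {gap:+.0%}")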

Targets and Objectives

Are there specific and genuinely demanding targets and objectives in place?

Targets and objectives relate to measurements taken using the selected indicators. Their role is to provide clear markers by which to assess subsequent results, and to help determine whether the organisation has carried out its plan as intended, and whether that plan has been successful.

A review of targets and objectives looks for:

  • timeline

    The targets and objectives sit on the timeline alongside the outputs and outcomes, making it apparent when they are expected to be reached

  • defined success

    The targets and objectives set explicit goals (e.g. how many, how much, for whom, how good, what is the aimed-for quality), linking back to the mission and to beneficiary needs and expectations (a sketch of such target records follows this list)

  • genuinely demanding

    The targets and objectives set an ambitious level for the organisation to aim for
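
As a rough illustration of how such targets can be recorded and later assessed, the sketch below links each target to an indicator and a point on the timeline, then checks reported results against it. Everything in it (the Target structure, indicator names, values and dates) is hypothetical.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Target:
        indicator: str       # which indicator the target relates to
        target_value: float  # defined success: how many, how much
        due: date            # where the target sits on the timeline

    # Illustrative targets, linking back to the mission and beneficiary needs.
    targets = [
        Target("Participants entering employment", 50, date(2025, 3, 31)),
        Target("Participants reporting improved wellbeing", 120, date(2025, 3, 31)),
    ]

    # Results reported at the end of the period (values are made up).
    actuals = {
        "Participants entering employment": 44,
        "Participants reporting improved wellbeing": 131,
    }

    for t in targets:
        achieved = actuals.get(t.indicator, 0)
        status = "met" if achieved >= t.target_value else "not met"
        print(f"{t.indicator}: {achieved}/{t.target_value:g} by {t.due} -> {status}")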

Proportionality

Is the measurement system proportional to the organisation?

The above sections cover the various aspects of impact measurement systems, and the points they pose for review, in some detail. However, when working through an organisation’s measurement system, it is important to retain a sense of proportionality: in relation to the organisation’s size and stage of development, and to the relative maturity, with respect to measurement, of the sector or outcome area in which it is active. The primary requirement of a measurement system is that it is useful to the organisation, for understanding, monitoring and responding to its impact, and that it does not cripple the organisation.

Generally, smaller organisations, with fewer resources to devote to impact measurement, will have less sophisticated measurement systems, and will potentially cover fewer of the points above. Younger organisations will naturally have less of a track record and, if pursuing a new approach, less research to draw on when assembling their measurement systems. Conversely, while larger, more mature organisations may have more capacity for producing polished reports, they potentially face a more complicated task when reporting across multiple activities and operations than a smaller organisation with a single tangible project that it runs directly.

The review of an organisation’s measurement system must therefore be sensitive to the characteristics of the organisation itself, comparing it to a sense of what would be proportional and appropriate (for this, the investor may look to other organisations of similar size, stage of development and sector or outcome area).

In working collaboratively through the impact plan and measurement system, the investor and the organisation may identify a number of ways to strengthen the system, and select some points for immediate implementation, and some for gradual introduction. As the organisation draws down the investment capital and scales, and gains in experience, it may be expected to build up its measurement system proportionally.
