4.2 Impact Reporting

Monitoring and evaluation of reporting looks to ensure that the reporting is forthcoming, that it is of high informational quality, and that it is verified as appropriate.

4.2.1 Reporting Forthcoming

Is the reporting being delivered on time?
Is it complete?
Is the measurement system being used and developed as anticipated?
If reporting is not forthcoming, is there a process to deal with it?

During the analysis stage, and in making the initial investment deal, the investor considered the organisation’s measurement system, and agreed on specific points for it to report against (typically a set of KPIs), as well as the reporting frequency. This naturally defines the frame for subsequent monitoring.

Over the course of the investment, it is important to ensure that the organisation’s initial commitment to evidencing its impact is borne out in practice, and that it is devoting the appropriate human and financial resources to the task. (If additional funds within the investment, or grant funds, have been provided for measurement, the investor may seek assurance that this is in fact how the funds are being used.) The measurement system as defined in the impact plan, and agreed with the investor, should set out the scope of the system and define the bounds of materiality, i.e. what information needs to be included to give a fair picture of the organisation’s performance. Reporting can then be compared against these standards and checked for completeness, and for whether it is sufficient to support reasoned conclusions about results.

If at the time of investment the organisation had comparatively little by way of impact measurement, but envisaged putting the necessary systems and processes in place, it must be able to show that these have been devised and implemented. The principle of proportionality (see measurement system) suggests that as organisations grow, mature and scale, so should the sophistication and comprehensiveness of their impact measurement and reporting. Any agreements or expectations around the organisation’s development of its impact measurement should likewise be followed up.

Together, these considerations set three clear points to look for regarding the delivery of reporting:

  • reporting is delivered on time (according to schedule)
  • reporting is complete (according to the previously determined scope and bounds of materiality)
  • reporting is developing as envisaged (the anticipated time and funds are being spent on measurement and reporting, and the system is developing proportionally with the organisation)

If impact reporting is not forthcoming or satisfactory on any of these points, the investor may call an “impact default” and protection measures may be brought to bear.
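
Where reporting obligations are specific, these checks can even be made mechanical. The minimal sketch below (in Python) tests a single submission against an agreed deadline and set of KPIs. The KPI names, dates and structure are hypothetical illustrations, not a prescribed format; a real arrangement would reflect whatever was agreed at the time of investment.

    from datetime import date

    # Illustrative assumptions: the agreed reporting scope and deadline.
    AGREED_KPIS = {"participants_completing", "jobs_secured", "retention_6m"}
    DUE_DATE = date(2024, 3, 31)  # agreed quarterly reporting deadline

    def check_reporting(received, reported_kpis):
        """Return a list of flags against the agreed schedule and scope."""
        flags = []
        if received > DUE_DATE:
            flags.append(f"late: received {received}, due {DUE_DATE}")
        missing = AGREED_KPIS - set(reported_kpis)
        if missing:
            flags.append(f"incomplete: missing KPIs {sorted(missing)}")
        return flags

    # A report arriving ten days late and omitting one agreed KPI:
    print(check_reporting(date(2024, 4, 10),
                          ["participants_completing", "jobs_secured"]))
    # flags both problems: late delivery and the missing 'retention_6m' KPI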

Beyond the direct reporting requirements of the investor, the organisation’s results may be of wider interest. Sharing results can play a valuable role in contributing to industry standards, learning and best practice, as well as providing a basis for comparability. In addition to monitoring results for their own purposes, investors may look to see if their investee organisations are working to communicate results more widely.

4.2.2 Quality of Information

Are the indicators tracking impact effectively, and telling us what we need to know?
Is the data strong — i.e. properly collected, and treated appropriately?
Is the perspective of beneficiaries and others being included?
Are questions relating to the context of change being addressed?

Separate from what the results suggest in terms of performance (considered under 4.3 Impact Delivery), it is important to check the results for their informational quality, and to ensure they are sufficiently sound to support reasoned conclusions about the impact. Investors are unlikely to want to go far toward auditing the results themselves (though they may wish to see independent auditing; see verification of results below). However, it can be useful to think through the kinds of problems or weaknesses results may have, and consider whether any apply.

Monitoring the informational quality of the results involves:

  • reviewing indicators

    The indicators, and in particular the designated KPIs (where these are being used), are the primary source of data the investor has about the impact the organisation is generating. It is therefore crucial to review these indicators periodically, and ensure they are both fit for purpose and up to date. In particular it is important to ensure that, in application, the indicators are able to pick up both positive and negative aspects of performance, and thereby give a balanced picture.

    A useful process may be to return to the standards set out in the impact plan for the use of indicators (see measurement system), and to check whether, once in use, the indicators live up to these (i.e. indicators are specific, measurable, attainable, relevant, time-bound, standard, stakeholder-inclusive). An implicit aspect of this is monitoring developments in the field, and being aware of any shifts in the standard indicators being used.

  • reviewing data collection

    The methods for data collection are laid out in the impact plan, and these too can be reviewed to check whether they have been followed and are proving effective. Data collection can be checked for:

    • protection against double-counting
    • the counting of drop-off or failure rates
    • where samples are used, that sample sizes are adequate and sample selection is transparent and appropriate (i.e. there is no selection bias, with favourite stories being collected, told and used as data). N.B. If the role of the organisation is to make a small contribution toward a large change, rather than being the major driving factor of the change, then making a strong case that the organisation’s outputs are related to the observed effect requires a correspondingly larger sample size.
    • that the treatment of data is appropriate to the consistency and strength of the data itself (e.g. numbers are aggregated only when the quantities are genuinely like-for-like; calculations are sensible and acknowledge realistic margins of error)
    • where consultation processes are used, it may be appropriate to review the questions being asked (e.g. are the questions neutral? are they able to capture both positive and negative information? are the answers always the same, or completely different every time?), and to consider whether the people being asked really know what they think, and are in a position to give a meaningful and lasting response (i.e. one which is not based wholly on how they happen to be feeling at the moment when the consultation is conducted)

  • reviewing for multiperspective information

    The overall quality of the results, and the extent to which they can support a confident assessment of performance, is considerably increased if they are able to incorporate more than one perspective. Data collected from different sources on different measures, and showing agreement regarding the outcomes, greatly strengthens the evidence base. Perspectives may include:

    • the beneficiary perspective is the most vital, and incorporation of feedback from beneficiaries is likely to be the most important validation of both the approach and the impact
    • feedback from those around beneficiaries (e.g. family members, social workers, carers, employers, other organisations) can offer a valuable perspective on beneficiary progress and behaviour
    • changes in the context coming about through impacts upon beneficiaries, and captured by third-party information sources, offer a valuable form of secondary data (as opposed to primary data collected directly by the organisation itself). This may include e.g. figures for local crime, the local economy, or exam results

  • adjusting for the context of change

    The impact an organisation is generating represents a real change only in so far as this change is truly the result of its interventions, and exceeds or is additional to what would have happened anyway, what is happening elsewhere, and the role of other factors. Therefore, before results can confidently be attributed to the organisation, some review of the context of change is required.

    Factors such as deadweight, displacement and attribution are hard to estimate, and impossible to know for sure. Randomised Controlled Trials (RCTs), or the monitoring of control groups, provide the best evidence, but these are costly to conduct and require specialist skills. Sometimes there are pre-existing studies to draw on, but often not. However, the fundamental lesson of control experiments still stands: a degree of change may be expected to take place irrespective of the organisation’s activities, and the observed change must therefore be adjusted for this in order to arrive at the true impact.

    An organisation’s treatment of the context of change is likely to be incomplete, but it is important that the key issues are thought through, and that there is clarity as to what is left assumed. A plausible minimum is often to review the typical outcomes that target beneficiaries could be expected to experience without the organisation’s support, to compare these with the recorded results, and to make the necessary adjustments (a worked sketch of such an adjustment follows below). In addition, a check should be performed for displacement, attribution and unintended consequences, which can indicate whether or not further research and adjustments are needed.

    Organisations may not be keen to make these adjustments, especially when doing so means subtracting from their recorded results. However, it is an essential step if the investor is to avoid a situation where all the carefully measured change has nothing to do with what they have invested in, would have occurred without the investee organisation, or where the organisation is in fact doing more harm than good while reporting only the good.
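
The arithmetic of such an adjustment can be made concrete. The short sketch below follows the practice, familiar from SROI-style analysis, of deducting percentage estimates for deadweight, displacement and attribution from a recorded outcome. The function and all figures are illustrative assumptions only, not a prescribed method.

    def adjusted_impact(recorded_outcome, deadweight, displacement, attribution):
        """Deduct context-of-change estimates from a recorded outcome.

        Each parameter after the first is a fraction between 0 and 1:
        deadweight   - share of the change that would have happened anyway
        displacement - share merely displaced from elsewhere
        attribution  - share attributable to other actors
        """
        impact = recorded_outcome
        impact *= (1 - deadweight)    # remove what would have happened anyway
        impact *= (1 - displacement)  # remove change displaced from elsewhere
        impact *= (1 - attribution)   # remove others' contribution
        return impact

    # Illustrative only: 200 recorded job outcomes, of which an estimated 30%
    # would have occurred anyway, 10% displaced other jobseekers, and 20% is
    # attributable to partner organisations.
    print(adjusted_impact(200, deadweight=0.30, displacement=0.10, attribution=0.20))
    # approximately 100.8, i.e. roughly half the headline figure survives adjustment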

4.2.3 Verification of Results

Is there any independent verification of results or processes?
Do the results and information provided by the organisation leave an audit trail such that they could be verified?

As the investor is primarily reliant on the organisation for information about impact, and especially if there are clear expectations — and possibly incentives and penalties — around impact performance, the investor may wish to consider if there are any ways to verify the results being submitted.

The most obvious and complete form of verification is an independent audit, which investors may expect, and possibly include among the terms of the investment. This implies a further expense, however, and is not at present standard practice. Less intensive forms of accreditation of the organisation’s practices may be available through industry groups or labels (e.g. fairtrade, certified organic), which are independently monitored. On a different front, the organisation may adopt an independent standard for its reporting; while the contents of the report may remain unverified, the use of the methodology may still lend a degree of external validity to the presentation of results.

Results may also enjoy a degree of implied verification through the incorporation of multiple information streams, including secondary data gathered and published by others that corroborates the organisation’s own results (see reviewing for multiperspective information under quality of information above).

While reported results may remain without independent verification, a complete and transparent presentation of results includes sufficient information to provide an audit trail. An audit trail allows readers of the results to follow how they have been arrived at, to review the processes involved, and to check any calculations that have been performed (whether these use primary data collected by the organisation, or figures drawn from supporting evidence and assumptions).
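
As an illustration of what such a trail might contain, the sketch below pairs a single reported figure with its source, method and underlying assumptions, so that a reader could retrace the calculation. The structure and field names are assumptions made for the example, not a reporting standard.

    from dataclasses import dataclass, field

    @dataclass
    class ReportedFigure:
        """One reported figure together with the trail needed to verify it."""
        name: str        # the indicator being reported
        value: float     # the headline figure
        source: str      # primary data and/or third-party references
        method: str      # how the value was collected or calculated
        assumptions: list = field(default_factory=list)

    # Hypothetical example, reusing the illustrative adjustment from 4.2.2:
    figure = ReportedFigure(
        name="jobs_secured_adjusted",
        value=100.8,
        source="case management records, Q1; regional labour statistics",
        method="recorded outcomes less deadweight (30%), displacement (10%) "
               "and attribution (20%)",
        assumptions=["deadweight estimated from a benchmark study of comparable groups"],
    )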
