4.3 Impact Delivery

Monitoring processes seek first to ensure: that the capital is being used appropriately; that the reporting is forthcoming; and that the results are of sufficient informational quality to support evaluation. With these three conditions met, evaluation itself can take place.

Evaluation essentially revisits the questions first asked in the initial impact analysis, and looks to see if evidence from the reality of operations bears out the theory and projections of the impact plan. Evaluation thus turns on the two main points of impact risk (is the approach working?) and impact generation (how much impact is being generated?).

4.3.1 Evaluation of Impact Risk

Do the results boost confidence in the initial impact plan, and the implied theory of change?
Is the impact risk clearly either falling or rising (rather than just remaining hazy)?
Is there evidence to show outputs, outcomes and impact are all forthcoming, and to support the links made between these three?

At the time of investment, the impact was yet to be seen, and expectations were based on the impact plan — in essence a proposed theory of change, supported by a combination of research, reasoning, and prior evidence. With the investment capital now drawn down and the organisation carrying out its activities, the expectation shifts to become one of results and new evidence that serve to substantiate the theory, demonstrate the impact (in so far as is possible at this stage), and ultimately boost the investor’s confidence in the plan, and thereby lower the risk.

The initial assessment of impact risk considers the extent to which the impact plan is explicit, reasoned, feasible, integral, evidenced, and evidenceable. It is important to revisit these points over the course of monitoring and evaluation, and to update the estimation of the impact risk. As part of this, there may be “evidence targets” to review performance against — targets setting out which parts of the impact plan initially identified as evidenceable are expected to become evidenced, and when. Also, as the organisation responds to changing circumstances and its own results, there may be changes to the plan. The modified plan should then be reassessed for impact risk.

An investor should expect impact risk to decrease over the term of the investment. Ultimately, an investor who exits an impact investment no surer whether or not it has worked as a means to generate social and environmental benefits is in a poor position to know whether or not to continue investing in a similar fashion. If impact risk is not falling, it should be rising — i.e. results should be showing that the approach is not an effective means to generate the desired outcomes, in which case there are lessons to be learned, and changes and improvements to be made. This need not be a bad thing. Failure is a considerably better result than continuing ignorance, and a prolonged and hazy sense of maybe generating some impact, with little sense of how, or how much of it is really the organisation’s doing.

In revisiting the evaluation of impact risk, the immediate focus is on the central impact chain:

Organisation → Activities → Outputs → Outcomes → Impact

Assuming the capital has been used appropriately and the activities resourced (i.e. the “organisation to activities” link in the chain is confirmed), the key questions, following each of the subsequent links, are:

Are the activities delivering the outputs?

Evaluation looks to whether the results indicate there is effective service delivery, including consideration of:

  • evidence of the delivery of outputs (typically output numbers)
  • evidence that the outputs are reaching the right beneficiaries — i.e. those identified in the impact plan as the target group. It may be relevant to consider if the organisation can show that the beneficiary group is: aware of the organisation; able to access the activities (with respect to affordability, transport, etc.); and further, that the beneficiaries the organisation is reaching are diverse, representative, and include those who are hardest to reach (see vulnerability of beneficiaries under direct impact).
  • evidence that the activities are the right activities — i.e. they are an effective way to deliver the outputs to the target beneficiaries (as suggested by e.g. beneficiary uptake and utilisation of the services, low drop-off rates).

Do the outputs give rise to the desired outcomes?

Evaluation of outcomes looks to the organisation’s ability to demonstrate both the presence of outcomes, and the relationship with outputs:

  • outcomes are often most effectively demonstrated by feedback from beneficiaries, confirming that beneficiaries’ experience is in line with their own and the organisation’s expectations. While including the beneficiary perspective can be a powerful aspect, depending on the outcomes involved, other sources can also be valuable (e.g. feedback from others working with beneficiaries, indicators following changes in the context as a result of beneficiary outcomes). When long-term outcomes are involved, attention moves to signs that show progress is being made within the agreed reporting, monitoring and evaluation period. An organisation’s measurements may be limited largely to outputs, in which case the link from outputs to outcomes needs to be that much stronger, and should accordingly be reviewed closely.
  • considering the relationship between outputs and outcomes requires returning to the conditions for change, and the assumptions involved. A review encompasses the other factors initially identified as being needed for the outcomes to take place — i.e. the assumptions regarding the context — and asks if these have been stable and forthcoming. There are also the assumptions around how beneficiaries are expected to respond to the outputs, and thereby change; the question is whether, in practice, these assumptions are proving correct (i.e. are the beneficiaries responding as anticipated?). If the organisation is not itself proving the link between outputs and outcomes, then attention may turn to any further evidence from elsewhere that has been brought forward in support of it.

Is a contribution to the overall impact being achieved?

Impact, properly understood, is the sum of the organisation’s outcomes adjusted for the context of change — i.e. the change that would have occurred anyway (deadweight), as well as any displacement effects, issues of attribution, drop-off, and unintended consequences (see context of change). The extent to which the organisation has addressed these is considered in the adjusting for the context of change section of 4.2.2 Quality of Information. The implication for a re-evaluation of impact risk is to consider the extent to which new information about the context of change boosts confidence that the organisation’s activities are ultimately leading to a significant positive impact. The key questions are:

  • Do you have a good sense of what the context of change implications are for the organisation’s impact?
  • If yes, does the assessment of the context of change suggest that the organisation’s outcomes do produce a significant impact, thereby lowering impact risk?
  • If no, given the uncertainty, is there a risk that aspects of the context of change would significantly diminish the organisation’s impact if appropriate adjustments were made (e.g. if most of the recorded outcomes would quite possibly have happened anyway), thereby increasing impact risk?

The relationship between the organisation’s outcomes and the understanding of its impact, and the extent to which adjustments have to be made, will have been considered in the initial assessment of impact risk. In revisiting this at the monitoring and evaluation stage, the focus is on whether or not, over the course of its operations, the organisation has been able to produce further data or evidence on this front.
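The adjustments described above can be sketched as a simple calculation. The function, figures, and parameter names below are purely illustrative (the text does not prescribe a formula); the sketch assumes each adjustment is expressed as a fraction of the gross outcomes and applied multiplicatively:

```python
def adjusted_impact(gross_outcomes, deadweight, displacement, attribution, drop_off):
    """Adjust a gross outcome figure for the context of change.

    All adjustment parameters are fractions in [0, 1]:
      deadweight   - share of the change that would have happened anyway
      displacement - share of the change merely shifted from elsewhere
      attribution  - share of the remaining change due to the organisation
      drop_off     - share of the change expected to fade after measurement
    """
    net = gross_outcomes
    net *= (1 - deadweight)    # remove change that would have occurred anyway
    net *= (1 - displacement)  # remove change displaced from elsewhere
    net *= attribution         # keep only the organisation's contribution
    net *= (1 - drop_off)      # discount change expected to fade over time
    return net

# Illustrative figures only: 1,000 recorded outcomes, 30% deadweight,
# 10% displacement, 80% attribution, 15% drop-off.
print(round(adjusted_impact(1000, 0.30, 0.10, 0.80, 0.15), 1))
```

With these illustrative figures, 1,000 gross outcomes reduce to roughly 428 net outcomes; even modest adjustments compound quickly, which is why uncertainty about the context of change can materially raise impact risk.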

As more information about the impact becomes apparent, it is appropriate to return to the original problem, and ask if the intervention is proving useful and effective. This involves reviewing the context and the mission to ensure they both still relate meaningfully to each other, and considering any other developments that have taken place, to ensure this approach is still the right thing to be doing. The high level aim is to be able to witness a change taking place in the context in line with the original mission.

Equally, and again as the impact is increasingly established in practice, it is important to ensure that the changes the organisation is achieving are sustainable and sustaining — i.e. that changes last beyond the immediate intervention, and the initial measurement, and are upheld and truly absorbed into beneficiaries’ lives. If the intervention is relatively short (relative to the desired change), it may be necessary to track beneficiaries beyond the period of direct contact with the organisation to ensure there hasn’t been a swift regression. Tracking adds an additional task and cost, and to manage this, organisations may track a sample of beneficiaries. Appropriate periods for validating the sustainability of the impact vary, but a typical figure is two years.
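As a minimal sketch of what sample-based tracking might yield, assuming hypothetical beneficiary identifiers and a simple "outcome still present" flag recorded at follow-up:

```python
# Hypothetical follow-up records for a tracked sample of beneficiaries,
# recorded roughly two years after direct contact with the organisation ended.
# True means the measured outcome was still present at follow-up.
follow_up_sample = {
    "b01": True,
    "b02": True,
    "b03": False,  # outcome regressed after the intervention ended
    "b04": True,
    "b05": True,
}

sustained = sum(follow_up_sample.values())
retention_rate = sustained / len(follow_up_sample)
print(f"{sustained}/{len(follow_up_sample)} outcomes sustained ({retention_rate:.0%})")
```

A low retention rate in the tracked sample would suggest the change is not being truly absorbed into beneficiaries' lives, and that the outcomes figures recorded at the point of intervention overstate the lasting impact.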

4.3.2 Evaluation of Impact Generation

Is the organisation meeting the set objectives, and generating the anticipated impact?
Are the results being explained? Do we understand why they are as they are?

The evaluation of impact generation turns to the results themselves to assess performance. The obvious reference point is the impact identified in the initial investment analysis (on the basis of which the investment was made), and the specific objectives that were set. This will involve reviewing the three major aspects of the impact (though to varying degrees depending on what kinds of impact the investment set out to achieve, and what indicators are being used):

  • direct impact on beneficiaries, using data on outputs, and relating to outcomes
  • wider impact, for example on the community, on public cost, on other organisations and the sector at large, and on the government
  • investor impact, for example on the organisation’s growth and strength, its financial and impact management, and its access to capital. In evaluating the investor impact, it is relevant in particular to consider if the initial investment is proving sufficient, and the organisation has the capital it needs to carry out its impact plan effectively. If an anticipated part of the investor impact was to leverage in more capital, or to make the organisation more creditworthy, again these are obvious points to follow up in the monitoring and evaluation. In addition to business-orientated KPIs, the organisation’s perspective is important in relation to investor impact, and investors need to ensure there are clear feedback channels from investee organisations regarding their experience of the investment (what they feel the effect has been, what has been useful, what they have found difficult, what could be helpful etc.), potentially using formal surveys.

The initial objectives set a useful benchmark against which to compare results and evaluate the organisation’s performance. Comparisons with further reference points can, where possible, strengthen the evaluation, including comparisons with: performance in previous years; other organisations working in comparable fields (and possibly within the investor’s portfolio); and sector-wide benchmarks and standards. The use of standard indicators greatly facilitates this process, and thereby enhances both the investor’s and the organisation’s understanding of success.
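To illustrate, a single standard indicator might be set against several reference points at once. All of the names and figures below are hypothetical:

```python
# Hypothetical figures: one standard indicator (outputs delivered this year)
# compared against several reference points rather than a single target.
result = 1180

reference_points = {
    "initial objective": 1000,
    "previous year": 950,
    "peer organisation": 1250,
    "sector benchmark": 1100,
}

for name, baseline in reference_points.items():
    deviation = (result - baseline) / baseline
    print(f"vs {name}: {deviation:+.1%}")
```

Read together, such a comparison gives a far richer picture than a single target: the organisation may be ahead of its own objective and the sector benchmark, yet still behind a stronger peer, and each gap prompts its own narrative question.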

It is important that the evaluation of results is not merely a target-meeting exercise. Changes in the context, and things learned over the course of implementing the approach, may make the initial targets less realistic or appropriate, and these may therefore prove to be poor anchors against which to make judgements (indeed, a significant advantage of having a wider field of comparisons is precisely to have more than one anchor point). In view of this, the explanation of the results is potentially as important as the results themselves. Evaluation looks to why the results are as they are, and to answering the narrative questions implied by comparisons with targets and objectives, with benchmarks, and with the results of other organisations.

A useful exercise in this regard can be to draw out explicitly, at regular review intervals (e.g. every six months, or annually), not only the results, but also the findings — i.e.:

  • what do the results say about performance?
  • what are the conclusions, and the lessons that can be learned?