
Kenneth Fee


Evaluation: Capturing impact


To kick off evaluation month, here's another great rumination on the subject from our resident experts, Kenneth Fee and Dr Alasdair Rutherford.

People often speak, quite loosely, about 'evaluating learning', when what they really mean is evaluating the impact of learning. Meanwhile, far too much effort in evaluating learning goes into collecting satisfaction scores and reactions from learners via 'happy sheets', and too little into what really matters to organisations – the difference learning makes to their key performance indicators or business results.

One of the most effective techniques for capturing impact is the Success Case Method (SCM), devised more than ten years ago by Robert O Brinkerhoff, emeritus professor at Western Michigan University. Despite having been used to great effect by leading organisations such as Hewlett-Packard, Anheuser-Busch and the World Bank, and despite a decade of popularity in the United States, the SCM is rarely applied in Britain or Europe, and remains relatively unknown.

One of the key ideas underpinning the SCM is that training, learning and development are inextricably linked with performance improvement. The goal of learning for work is not simply to acquire new knowledge, skills, or even competences, but to apply them in ways that improve individual, team and organisational performance, and achieve business results. The mistake people often make is to try to evaluate training/learning in isolation, instead of working with the whole business process to show a causal relationship in the form of a contribution to success. Brinkerhoff characterises the right approach as evaluating a marriage (a long-term commitment) rather than a wedding (a one-off event): performance improvement is a long-term commitment, whereas training tends to take the form of one-off events.

It’s important to note that this is a quest for meaningful evidence, not absolute proof. It’s a question of identifying what sort of success measures the business needs, and looking for what is agreed in advance to be sufficient evidence that training/learning contributes usefully to that success.

"The goal of learning for work is not simply to acquire new knowledge, skills, or even competences, but to apply them in ways that improve individual, team and organisational performance, and achieve business results."

Brinkerhoff observed that typical success rates in training are highly predictable, and a group of learners can usually be divided into three broad categories:

  • A few learners usually learn little or nothing and are unable to apply the learning at all.
  • A few learners usually find the learning enlightening and worthwhile, and are able to apply it to obtain substantial results.
  • And the great majority of learners use some of the learning, but accomplish little and/or give up trying.

The trick, therefore, is to find out what made the critical difference for the success cases in the second category, and try to apply that to the others.

The SCM does this quickly, simply and affordably, by following just a few steps.

The first step is to plan the evaluation. Clarify the learning under consideration, and decide whose learning will be evaluated, when, and by whom. Make sure you have a baseline of current performance as a starting point, and that the evaluation will make a comparison with that baseline. Agree what constitutes success, what evidence you are looking for, and what will amount to sufficient evidence of success.

The second step is to get a sense of the place of the learning intervention in the performance improvement process, and this involves developing an impact map, or a representation of which learning inputs and outputs contribute to which performance indicators and thence to business results. We use a version of this we call Business Impact Modelling, which was the subject of a previous TrainingZone feature in November 2012 (LINK). The impact map or model is a picture or description of the chain of events leading from the beginnings of a learning intervention to the ultimate organisational impact. The impact map or model helps people understand what value the learning is aiming to add, and facilitates measurement of what value it in fact adds.
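For readers who like to see the idea made concrete, here is a minimal, purely illustrative sketch (in Python) of an impact map as a simple chain from a learning intervention through learning outputs and performance indicators to business results. The course, indicators and results named in it are invented for illustration, and it is not a representation of Airthrey's Business Impact Modelling tool.

```python
# Purely illustrative: a hypothetical impact map for an invented course,
# represented as a simple chain from learning outputs to performance
# indicators to business results. All names below are made up.
impact_map = {
    "learning_intervention": "Negotiation skills workshop",
    "learning_outputs": ["Structured negotiation technique", "Objection handling"],
    "performance_indicators": ["Quote-to-order conversion rate", "Length of sales cycle"],
    "business_results": ["Quarterly revenue", "Gross margin"],
}

def describe(chain):
    """Print the chain of events from intervention to business results."""
    print(chain["learning_intervention"])
    for key in ("learning_outputs", "performance_indicators", "business_results"):
        print("  ->", "; ".join(chain[key]))

describe(impact_map)
```

Whatever form it takes, the point of the map is simply to make the intended chain of cause and effect explicit before any measurement begins.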

The third step is to conduct a survey – a short, simple survey of just a few questions, perhaps as few as three. The shorter the survey, the better, as this helps to ensure the highest possible number of returns. The aim of the survey is to sort the learners into the three categories identified above, in order to select a sample for interview. This means the survey needs to include some ‘success questions’, derived from the impact model and written to get to the heart of whether the learning worked or not, in terms of the kind of success already defined. This is where the SCM differs from many other evaluation techniques: it is a sampling method, rather than an exhaustive investigation that could be both time-consuming and costly.
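To make the sorting step concrete, here is a minimal sketch, assuming a hypothetical three-question survey scored on a 1-5 scale; the scale and the thresholds are assumptions made for illustration, not part of the SCM itself.

```python
# Purely illustrative: sorting survey returns into the three broad categories.
# The 1-5 response scale and the thresholds below are assumptions.
def categorise(responses):
    """Place one learner in a category from their answers to the 'success questions'
    (1 = not applied at all, 5 = applied to substantial effect)."""
    average = sum(responses) / len(responses)
    if average >= 4:
        return "success case"      # applied the learning, obtained substantial results
    if average <= 2:
        return "non-success"       # learned little or nothing, unable to apply it
    return "middle majority"       # used some of the learning, accomplished little

print(categorise([5, 4, 5]))  # success case
print(categorise([3, 2, 3]))  # middle majority
print(categorise([1, 2, 1]))  # non-success
```

However the sorting is done, the output is the same: a small number of apparent success cases and non-successes to invite to interview.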

The fourth step is to conduct interviews. The idea here is to interview the success cases in some depth, to understand what factors contributed to making the learning work. Some interviews with the non-successes help to understand what can go wrong, and indeed what has gone wrong, while a small number of interviews with the third category of learners, the great majority who had just a little success, can help confirm the validity of the sampling. The interviews should yield real evidence, in the form of stories of how individuals used the learning to achieve success. This is what is so attractive about the SCM: the sample can be used quantitatively to project a realistic estimate of the overall value added, and it is also a rich qualitative source of convincing real-life stories.

The fifth and final step is to collect and analyse the data, and report the results. This should help in two ways: by giving senior management and other stakeholders a real sense of the impact of the learning, and by enabling learning and development professionals to use the stories to improve learning the next time around, not least by trying to replicate with the great majority what worked for the successful minority. And this should lead to greater success in the future.
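As an illustration of the quantitative side of that analysis, here is a simple sketch of how survey counts and interview evidence might be combined into a conservative projection of overall value added. All of the figures are invented; a real projection would rest on the success measures and baseline agreed at the planning stage.

```python
# Purely illustrative: a conservative projection of overall value added,
# combining survey counts with interview evidence. All figures are invented.
learners_surveyed = 120
success_cases = 18                 # sorted into the top category by the survey
verified_proportion = 0.75         # share of claimed successes confirmed at interview
value_per_verified_success = 4000  # e.g. estimated saving per verified success case

projected_value = success_cases * verified_proportion * value_per_verified_success
success_rate = success_cases / learners_surveyed

print(f"Success rate: {success_rate:.0%}")
print(f"Projected value added: £{projected_value:,.0f}")
```

Counting only the successes that interviews can verify keeps the estimate deliberately cautious, which is usually what makes it credible to senior management.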

Capturing impact, as the Success Case Method demonstrates, is about combining quantitative and qualitative measures to build a business case that is convincing because it is objective and evidence-based, and at the same time to provide a practical tool for improving the impact of learning.

This article also appears on the Airthrey website. Kenneth Fee and Dr Alasdair Rutherford are the founding directors of learning evaluation firm Airthrey Ltd. Ken is a career learning and development professional; Alasdair is an evaluation and econometrics specialist, and a lecturer at the University of Stirling. Airthrey can help organisations implement Business Impact Modelling and the Success Case Method.
