Robin Hoyle, Head of Learning Innovation at Huthwaite International

What gets measured in L&D gets done … or does it?

Robin Hoyle examines why, in L&D, data-informed decision-making comes down to measuring genuine impact and performance.

“What gets measured gets done!” It seems to be one of those oft-cited phrases: a throwaway cliché designed to focus attention on what’s important, or on how we can motivate individuals to do the things that matter.

In some respects it’s true: by setting measures – or targets – for certain activities or outcomes, we communicate to the organisation what’s important. Ideally, this signals what the organisation cares about and encourages individuals to focus on what matters most.

But it’s only part of the story. 

How much do measures matter?

Measures are not always all they are cracked up to be. Yes, we can count things, but are we counting the right things? Just because we can attach a quantity or a number to some aspect of performance, does that mean it matters?

In learning and development, we are far from immune to the malady of measurement missteps. 

You’re probably familiar with the idea of ‘vanity measurement’. This refers to situations where we collect data on how many people looked at something or attended an event. Clearly, if people viewed that video or downloaded that infographic or logged on to our Learning Experience Platform, then it must be good – right?

Not right. At least, not necessarily right. 

Vanity doesn’t equate to impact

The metrics used on digital platforms have often been adapted from marketing tools. However, counting clicks and views is not the same as measuring learning, and it is some distance from assessing impact.

That, I would have thought, was obvious. Yet when I am asked to judge submissions for learning awards, I frequently come across these same vanity numbers being described as proof of impact.

These days, the marketing metrics L&D folks value are not even seen as informative by marketers. They realised some time ago that capturing eyeballs and attention is only useful if you want to sell advertising.

Now the focus is on conversion. In other words, marketers have recognised that looking at the number of views is only important if the viewers then go on to do something like make a purchase. The same is true in learning. 

Conversion counts

Any number of colleagues viewing our video or digital module doesn’t amount to a hill of beans unless some kind of action follows. And the impact can only be satisfactorily measured if that action positively changes workplace behaviour.

We have also accorded importance to measures which, time and again, have been shown not to correlate with changes in behaviour or the development of skills.

Chief among these is the end-of-module quiz. Completing a bit of e-learning and then answering badly written multiple-choice questions is not ‘proving learning’.

Doing so before logging off from the module you logged on to 30 minutes earlier is not an example of spaced practice. Potentially, it proves you have a slightly longer memory than a goldfish, but other than that it is a bureaucratic process designed by someone who thinks a test tells them something about what people have learned.


Happy sheets are another measure that is much referenced without being significant in terms of quality, learning or impact. Giving someone a survey asking how satisfied they are with the course is far from objective, and it probably says more about the comfort of the seats, the lunch provided and who they were sitting next to than it does about the effectiveness of the event.

Making your happy sheet into some kind of faux Net Promoter Score only compounds the error. Net Promoter Scores are designed to work with large numbers of respondents: the score is simply the percentage of promoters minus the percentage of detractors, so in a cohort of six each person is worth nearly 17 points. Drawing an NPS of +67 from that cohort before they leave the classroom is about as meaningless as numbers can get.

Targets

What’s more, some of these questionable measurements become even more questionable when they are converted into targets.

Charles Goodhart was a Bank of England economist. In the 1970s, he was credited with coining the adage most often expressed as: “When a measure becomes a target, it ceases to be a good measure”. Goodhart was talking about monetary policy, but at about the same time, Donald Campbell came up with what became known as Campbell’s Law, which said much the same thing about testing in US schools.

Effectively, these ideas suggest that as soon as targets are set, they have the effect – and often the intention – of skewing behaviour towards achieving the target, while the purpose of the original measurement and monitoring is often lost. In other words, achieving the target becomes an end in itself, however it is achieved, regardless of the effect on the original purpose of the measure.

So, while we may believe that engagement in – and completion of – a specific course or sequence of learning activities is a good thing, it should never be elevated to the status of ‘Good Thing’ in its own right. Learning must serve the greater goal of performance improvement, however that performance is measured.


Targets for completion of modules or numbers of video views are just the digital equivalent of counting bums on seats. Being at the end of a piece of e-learning and in possession of a pulse has little to do with performance improvement and nothing to do with measuring impact.

So what do we measure?

First, what is the baseline? How are people performing before the learning intervention? From that assessment, what shift in that performance would represent positive and valuable progress?

Those are important numbers, and they are difficult to gather unless you are close to the business and understand the roles your people play. Is it about errors in using software? Is it about time to complete a task? Is it about the number of service users satisfactorily served? And what does ‘satisfactorily’ mean in that context?

Understanding these business metrics and how your learning intervention potentially shifts those metrics is the essence of learning design and the only worthwhile basis for measuring the impact of what you do.

In more ways than most people think, performance before and after can not only be measured, but a return on investment can be built into that measurement. In other words, how much did the learning intervention cost (including participant time) and what was the value of the performance uplift to the organisation?
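As a purely illustrative example (the figures are hypothetical, not drawn from any real programme): if an intervention costs £20,000 to design and deliver, including £5,000 of participant time, and the measured uplift in performance – fewer errors, faster task completion, more satisfied service users – is worth £60,000 a year to the organisation, then the return on investment is (£60,000 − £20,000) ÷ £20,000, or 200%.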

Performance, performance, performance

If it is too difficult to isolate the performance uplift attributable to your learning activity, then how about a control group? One group does not participate in the learning intervention and one does. What happens differently in those two groups? Do people leave, get promoted, or take on new responsibilities? Is money saved? Is efficiency increased? Do surveys suggest higher employee satisfaction?

The measures which matter in your organisation already exist. If someone somewhere is monitoring a performance metric, then the best way of showing our relevance and our value to the organisation is to positively shift the dial on those measures – not achieve metrics we have invented to make us look important. 


Don’t get me wrong: L&D are not the only people who grasp at the straw of quantitative measurement. I have said before: ‘Are we measuring what is important, or according significance to the things we can easily measure?’

As L&D people, we should be reflecting back to our colleagues in other departments where they are measuring the wrong things. Because if ‘what gets measured gets done’ is true, then the chances are they could be measuring – and therefore doing – different and more impactful things.

And isn’t helping people do better things better our job?

I think it is.
