Robin Hoyle, Head of Learning Innovation at Huthwaite International


Your six-step L&D guide to delivering impact and value

To make a genuine business impact, L&D teams must look beyond much-loved vanity metrics and assess behaviour changes required to support the organisation-wide strategy. Here, L&D expert Robin Hoyle offers a six-step guide on how to do just this.

Before getting stuck into this six-step L&D guide to delivering impact and value, let me give you some context and key headlines.

A former colleague reminded me the other day about how long I have been banging on about measuring impact and designing for learning transfer. She and I first worked together 20 years ago. It’s been a while.

For those of you who haven’t had the dubious benefit of the last two decades of exposure to my frequent rants about this stuff, here are the headlines:

  • Our job in learning and development is to enable people to do things differently and do different things.
  • We frequently accord importance to the things we can measure rather than spend time measuring the things that are important.
  • Learning and development activities have little or no impact without some level of support by leaders and teams with whom programme participants work.
  • Learning designs focus on delivering content, when they should extend to the application of what has been learned to improve performance.  

If these four simple points resonate with you, then this six-step L&D guide to delivering impact and value may prove helpful.

1. Start with the end in mind

Often credited to Stephen Covey, this concept has been expressed throughout history. Sun Tzu (c. 500 BCE), Aristotle (c. 350 BCE), Seneca (1st Century CE), Benjamin Franklin (18th Century) and numerous 20th Century writers all recognised the importance of having a clear goal that informs planning. This stuff ain’t new!

Whenever we are designing a learning intervention – and whatever methods we select to enable the required learning – we should know what it is we want people to do at the end. We need to know:

  • What will be different.
  • What people will need to stop doing (or do less of) to make space for the new tasks, skills and behaviours.
  • What good looks like.

2. Determine measures and monitoring processes

Next, consider the following questions:

  • How will you work out what has changed?
  • What do you expect to see? When? How frequently?
  • How will you gather data about how your people are undertaking their work?
  • Is it sufficient to demonstrate a causal effect between participating in the programme and changed behaviour? 

Organisations frequently tell me how many people participated in a course, watched a video, or completed a piece of e-learning. They believe it shows the impact of what they have done. But if we seek to change behaviours in a way that improves performance, then these metrics are clearly insufficient.

Consumption of content is not learning. Passing an end-of-module test is not impact. Value is not derived from how many of your colleagues logged on to your LMS, attended a class or downloaded a document. Value depends on something changing as a result of your intervention.

Of course, L&D is not adding much value if no one completes your programme or accesses the content you have spent so much time (and money) creating. So monitoring usage is part of the story – but it’s not anywhere close to being the whole story.

We have to be able to show that the goal we set in step one has been achieved, or at least that progress has been made towards it.

3. Involve your leaders

L&D activity does not happen in a vacuum. Occasionally, you’ll be able to involve the whole organisation in a change initiative in which learning has a significant role to play. (And in my experience all change initiatives involve people doing things differently and doing different things. As such, learning should always be part of the mix).

In those ‘transformation’ initiatives, you may have the luxury of designing cross-functional approaches for leaders, team members and new recruits. Involving your leaders is an essential part of the process and expected by everyone. But this is a rare opportunity. Often we are addressing a specific group or a particular function with a need to change some aspect or aspects of their current behaviour.

In these situations, the leaders, networks and cross-functional groups that form the ecosystem in which our participants operate need to know what is happening, why, and with what intended results. These intended results need to be communicated in a compelling way. In other words: how does the output of your programme, the new skills for a particular group, contribute to the wider strategy of that network?

Why most L&D programmes fall short

How individuals can contribute to achieving these results should be discussed, defined and implemented. When we define our primary learning goal – the behaviour we want to see changed and the routes we will use to gain evidence of it working (or not) – we define measurable, observable outcomes.

Constructive and enabling behaviour by leaders needs a similar focus. Most learning initiatives fall short of their intended goal not because the participants are disengaged or the course material and content is poor. They fail because team leaders do not support the application of new skills or do not make specific space available for people to continue learning on the job. The idea that the training is ‘done’ when someone leaves a classroom or logs off the LMS is patently daft, but one which is still evidenced through the actions of those managing our intended audience. 

4. Focus on ‘How’, not just ‘What’

Your learning content and experiences should not simply focus on the ‘what’ and ‘why’. These are important to include but far from the whole story.

Typically, L&D is pretty good at focusing on the ‘how’ when designing technology or software training. We define a job that needs doing, we specify a technology tool or piece of software that can help, and we show people – step by step – what they need to do.

Often, we skip the demonstration part and, instead, provide people with just-in-time guides to completing specific tasks or using specific software functions. We recognise that creating a pivot table in Excel (for example) is not something people need to do every day and that, next time they come to that task, they should have easy access to a guide that will remind them of the key steps.

But when it comes to other tasks – such as working with customers or stakeholders, collaborating with colleagues, or handling difficult conversations – we rely on communicating what people should do, and why. We rarely provide the same tools and guides about how these tasks can be carried out effectively. 

Communication conundrums

This is particularly true with activities involving communication with others. We might focus on terms such as ‘displaying empathy’ or ‘listening actively’ or ‘writing succinctly’. But these terms are open to interpretation. They are imprecise. They frequently mean whatever someone wants them to mean.

To help people achieve a minimum standard of performance, unpack ‘how’ people can change what they do. Help people build a repertoire of objectively defined skills and behaviours that equip them to improve these human interactions.

5. Plan to gather and analyse the data

Monitoring how people are doing things and achieving your definition of good performance takes time. You need a robust approach to investigating what changed from before to after. The resources required in people, time and data analysis need to be part of your project plan and design.

  • Do you work with your participants to plan action that both supports ongoing learning and generates evidence of performance?
  • Do you follow up on these plans?
  • Does your learning design extend to the workplace at all? 

You can hardly complain that nothing much has changed after a learning intervention if you didn’t set out what you expected to see and rigorously follow up with those involved.

The ‘crawl, walk, run’ approach

Many students of learning impact – including Professor Robert Brinkerhoff – advocate a ‘crawl, walk, run’ approach. In other words, tasks that show application of new skills and behaviours are scaffolded.

After an intervention, or between one learning activity and the next, we expect people to complete task A. Two weeks later, we would expect them to be working on task B, and by week four, task C should be part of the performance repertoire.

This opportunity to implement change in stages is good practice. My colleagues and I at Huthwaite International strongly advise working on one behaviour at a time, gaining confidence and a sense of achievement, before moving on to the next priority behaviour.

Overwhelming people who have just been introduced to new ways of performing and expecting them to change everything, all at once and by last week, is not helpful. Providing ongoing support as people are on the nursery slopes of change should be part of your initial design and how you reflect on, and improve, the learning design as it is rolled out to a wider audience. 

6. Take the long view

Now that you’ve gathered and analysed the data, ask yourself: what does it mean? This question isn’t about participant behaviour change – although that’s important. It’s about uncovering whether the behaviour change delivered – or supported the delivery of – the organisation’s strategy. This is the long view.

You’re not going to achieve this with the happy sheets at the end of a session or a quick check-in with participants a month later. This is the 3-month, 6-month or 12-month study of impact over time. This is where you can quantify the impact you have achieved and – most importantly – the value you have delivered. 


Value IS quantifiable

That’s a wrap on this six-step L&D guide to delivering impact and value, but before you go, here are some additional (hopefully helpful) nuggets.

People think defining value is tricky or a matter of opinion. It isn’t. 

Value is the benefit gained from the change you have achieved minus the cost of its achievement.

Once you start to think in terms of benefit minus cost, making the case for your planned initiatives becomes much more straightforward.
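By way of illustration only, with entirely invented figures: suppose a programme costs £30,000 to design, deliver and cover participants’ time away from the job, and the resulting change in behaviour can credibly be linked to £100,000 of additional margin or avoided cost over the following year. The value delivered is £100,000 minus £30,000, or £70,000, and that is the number the business will recognise.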

Set up a control group

Finally, think about cause and effect. Did the training add the value you claim? To find out, consider identifying a control group. Assuming the wider work environment remains unaltered for both group A (involved in the programme) and group B (not involved), you should be able to compare performance data and show a difference. Notice I refer to performance data. If it matters to the organisation, the organisation will be gathering data on how well, how often and how much. If you don’t move the dial on these metrics, then don’t expect much support from the organisation.
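If it helps to picture that comparison, here is a minimal sketch in Python. Everything in it is hypothetical: the metric, the group sizes and the numbers are invented, and in practice you would want a much larger sample and a check that any difference is more than noise before claiming cause and effect.

```python
# Illustrative only: invented per-person performance figures (e.g. a monthly
# conversion rate) for people who went through the programme and a comparable
# group who did not.
group_a = [0.62, 0.71, 0.66, 0.74, 0.69]  # group A: participated in the programme
group_b = [0.58, 0.61, 0.57, 0.63, 0.60]  # group B: not involved

def mean(values):
    """Simple average of a list of numbers."""
    return sum(values) / len(values)

uplift = mean(group_a) - mean(group_b)
print(f"Group A average: {mean(group_a):.2f}")
print(f"Group B average: {mean(group_b):.2f}")
print(f"Difference: {uplift:.2f}")
```

Nothing here is more sophisticated than a spreadsheet could handle; the point is simply that the comparison rests on the organisation’s own performance data, not on attendance figures.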

What if your organisation doesn’t measure what truly counts?

Of course, not every organisation is good at defining the data they need and the metrics that capture high performance, or where it is lacking. They, too, fall victim to valuing what is easy to count rather than counting the things that are important.

If that’s true for you and your organisation, you may need to implement some change.

That means embarking on another project in which people do different things and do things differently! I hope it has impact and delivers value.
