
Feature: Understanding the Need for Evaluation


Following a snapshot survey of his clients, Mike Taylor offers a glimpse into their thinking on when and how to evaluate training.


Understanding the impact of training, especially development training (focused on attitudes and behaviours), is a complex issue. On the one hand, there are people who work hard to measure training’s impact; on the other, there are people who are happy to invest in training simply because they believe it is the right thing to do.

As a provider of experiential learning programmes to corporate and public sector clients, we see a mixed picture in our clients’ approaches to evaluation. Organisations often make much of evaluation in the tendering process. But suggest payment by results or ongoing measurement of return on investment (ROI), and they seem far less enthusiastic.

And so the debate keeps running.

I’ve looked at current viewpoints in the training press, and spoken to a selection of our senior clients. It leaves me feeling we need a more sophisticated debate about evaluation and an acceptance of a broader set of beliefs around what matters, when, and to whom.

We asked our clients what they thought about ROI in development.

Can ROI be measured?
The short answer is yes. The closer the learners’ activities are to the bottom line, the easier it is. Our clients all believed they could come up with a financial model of sorts that helped the argument for some of their training. However, many clients said it was difficult to measure much of the training that they deliver.

Training is one of many complex and interdependent factors affecting organisational performance as demonstrated by numerous models of organisational change. This means that the ripples of training are often harder to find in the larger waves of business dynamics.

When we run a programme, we recognise that so many factors influence how well managers develop that it is difficult, and arguably pointless, to measure the ROI of a training intervention in isolation. It is far better to measure an organisation’s overall ability to develop managers through a range of organisational surveys, succession plans, turnover figures and so on.

Should ROI be measured?
The overall view seemed to be “yes in an ideal world” but generally, “no, given the difficulty involved”.

A 2005 TrainingZONE feature referred to a survey of professionals at HRD, carried out by MaST, which appeared to indicate a desire among professionals to spend up to 10% of the training budget on evaluation. When I checked this with our clients, they all felt that devoting such a large proportion of the budget to evaluation would be unrealistic and a poor use of money.

Interestingly the more senior the people I spoke to, the less concerned they were about measuring ROI.

In terms of getting support from other members of the board, one HR director (who does demonstrate ROI in skills training of frontline staff) said: “When it comes to the management development stuff, I say to the board ‘you either believe in this, or you don’t – we’re not going to resolve this by arguing over the numbers’.” He pointed out that his research suggested they invest more in training than their competitors.

I asked another client about the procurement department: “They know they are going to be held to account for much bigger projects than these. As long as we can demonstrate good value for money, they don’t challenge us on ROI.”

So what really matters?
For our clients, the interventions they chose to make had to have “face validity”: they had to be defensible in the context of an organisational and HR strategy. But this is not the same as demonstrating ROI.

“If I can sit down with the financial director and convince him with a broad model of why something makes sense, then that’s the job done. If the HR director and the FD are in agreement, there needn’t be too much debate in the board room,” said one client.

Another commented: “We make our decisions intuitively - we absorb ourselves in the organisation and the people, we watch and we listen, and we act accordingly.”

The overriding message seems to be: If we as training professionals can prove we do a good job with some of our interventions, we can build the trust of our colleagues in the rest of what we do. If colleagues buy into the rationale, and the course evaluations and anecdotal evidence are positive, we don’t need to use up people’s time proving the point further.

Most of what I read about measurement of training impact discusses it as a means to prove or justify the activity. I think this is distracting us from a more valuable focus.

Where measurement really works is when the desired business impact is fully understood at the outset. Ideally, those intended results become the basis of the training objectives.

For example, we recently won a training award for work we did with a client. From the outset we were very clear about what the training needed to achieve in terms of business impact. We clarified where we were starting from and where we needed to end up before the training began.

Our experience of well-measured development training is that it sets out to focus the organisation on delivering business performance outcomes on the back of the training. In other words, the training is only the start of the process. This takes the development activity back into the organisation, with managers focused and involved in supporting the learners.

The crux of successful training is to be found in implementation by the organisation back in the workplace. Our experience is that having some metrics in place helps this process. So, for this reason alone, I would like more of our work to be more closely measured. On the other hand, I believe we can and should also trust business leaders to follow their instincts and do what they know is right.

To suggest that organisations should, or will, only sponsor activities whose financial benefit is proven does business leaders a disservice. Good managers - leaders - often do things because they believe them to be right as part of a broad strategy.

Our research suggests that it isn’t always possible to justify training and development in pure monetary terms.

* Mike Taylor is director of Interaction Learning and Development.
