



You can’t design effective training without effective evaluation


EvaluationZone aims to build up a complete and systematic approach to training which enables trainers and developers to make as big a contribution to organisational effectiveness as possible. This short article gives a glimpse of part of that continuing professional development and looks at the specific area of training design.

Design is just one part of a holistic process.

Training designers take a brief from the training needs analyst and turn it into a training programme, right? Wrong.

For a start, training design cannot be detached from any other part of the training process (needs analysis, delivery and evaluation). Training design is not just about a choice of training method, content or materials. The best training designers know that it is an iterative process, constantly checking with the business sponsors and other stakeholders to ensure the training is designed to deliver what they need. And that is where the problems really start.

Business sponsors of training don't know what they really need.

Take the example of an organisation I came across very recently that decided to put virtually all of its employees (about 1000 in total) through a 'problem solving' training programme. This would include such techniques as cause and effect analysis and using the Pareto principle. If you were asked to design this type of training from scratch you might immediately start getting out the reference books on problem solving techniques and put together a 'professional' piece of training. This might appear to be a very reasonable approach, which should produce some benefits both for the trainees and the organisation.

However, those who know anything about effective training design would not approach it in this way. You see, effective training designers acknowledge the part that evaluation plays. Evaluators would immediately regard this as an unfocused approach, which is unlikely to result in any lasting benefits. Why should that be?

Designers need to ask evaluation questions.

Trainers who understand evaluation do not design training until the objectives are as clear as possible. In the case above what are the objectives? Well, "to give participants an understanding of problem-solving techniques and how to use them." This might seem innocuous enough, but is this a real training need?

Has anybody asked the 1000 participants whether they know anything about such techniques already? Some of them may have just joined from an organisation where they used these techniques all the time. Other employees may never be in a position to use problem solving even if they wanted to. Answers to these two questions would immediately start to make this training more focused and therefore more effective.

Now let us assume that we have identified that only 500 of these staff have a need for these skills. Is this the time to get the problem solving training course books off the shelf? No, not yet. We still have not answered the other questions posed by the evaluators – how will we know if this training has worked?

The first way to answer this would be to decide exactly what knowledge the participants need to have after the training is completed. Perhaps they need to know what a fishbone diagram is and how to draw one. The designer needs to design a test into the training, to indicate what has been learned. But that is not the end of the story. You can only say 'the training has worked' when employees start to actually use the problem solving techniques in their workplace.

At this stage the designer may try to satisfy the evaluator by offering to design a 'follow-up questionnaire'. That may suit old-fashioned evaluators but those who use the baseline model espoused by EvaluationZone will want to approach this from a different angle. They will want the trainees to actually use the problem solving techniques to solve problems they have already identified themselves. In other words the training now becomes 'action-centred'. The evaluation becomes part of the learning experience.

Learning designers, not training designers.

Here is just a simple example of what I mean. All trainees will be asked to collect some basic data before they attend the training, such as the number of queries they have to deal with each day, for a week. These will be categorised under a series of simple headings, as shown in the table below for an insurance customer service adviser.

Problem category                 Queries per week
Wrong details on cover note      10
Cover note sent out unsigned      3
No premium shown                  2
Incorrect premium                 1
Spelling error                    1

Once the data is in place any problem solving technique can be applied directly. For example, Pareto tells us to look at the problem with the biggest impact first.
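The tally data above lends itself directly to this kind of analysis. As a minimal sketch (not from the article, just an illustration using the adviser's counts from the table), ranking the categories by frequency shows how quickly the biggest problems account for most of the errors:

```python
# Rank the adviser's query categories in Pareto fashion.
# Counts are the illustrative figures from the table above.
tallies = {
    "Wrong details on cover note": 10,
    "Cover note sent out unsigned": 3,
    "No premium shown": 2,
    "Incorrect premium": 1,
    "Spelling error": 1,
}

total = sum(tallies.values())
cumulative = 0
# Sort by frequency, largest first; the cumulative share shows how
# much of the overall problem each successive category accounts for.
for category, count in sorted(tallies.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category}: {count} ({100 * cumulative / total:.0f}% of all errors)")
```

Here the single biggest category, wrong details on cover notes, accounts for nearly 60 per cent of all the errors on its own, which is exactly where Pareto says to start.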

This simple approach has many design advantages:

  • you get more buy-in from the trainees
  • you get more buy-in from their boss/supervisor
  • the numbers make the training relevant
  • you have set up a feedback loop to check the training is working
  • positive feedback will reinforce learning

There are many more advantages, but the main point is this – evaluation is turning this from an old-fashioned ‘training’ event into a learning programme. Something good trainers have been trying to do for years.

Designers need to think about the bottom line.

You may already like this approach, but there is one key piece of information still missing. One that only an evaluator will be focused on. What impact is this training actually having on the bottom line? What is it worth in £s?

Here the evaluator can help the designer. What would it be worth to reduce the incidence of incorrect cover notes by, say, 20 per cent? We know it costs most organisations about £100 to produce an invoice, so let us just use that figure for now. Then we ask how many cover notes have to be re-written (we already have that data – 10 for this one adviser) and it produces the following figure:

2 cover notes a week (i.e. 20 per cent of 10) times £100, times 50 weeks per year equals £10,000. If the cost of the training is £500, the ROI is 1,900 per cent. We would soon get a payback on this training.
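As a quick check on that arithmetic, the same calculation can be sketched in a few lines of Python (the figures are the article's illustrative ones, not real data):

```python
# The ROI arithmetic above, using the article's illustrative figures.
rewrites_per_week = 10      # incorrect cover notes from the tally data
reduction = 0.20            # assumed 20% fewer incorrect cover notes
cost_per_rewrite = 100      # assumed £100 cost to reproduce a document
weeks_per_year = 50
training_cost = 500         # illustrative cost of the training

annual_saving = rewrites_per_week * reduction * cost_per_rewrite * weeks_per_year
roi_percent = 100 * (annual_saving - training_cost) / training_cost
print(f"Annual saving: £{annual_saving:,.0f}, ROI: {roi_percent:.0f}%")
# → Annual saving: £10,000, ROI: 1900%
```

Note that ROI here is calculated as the net gain (saving minus cost) divided by the cost, which is why £10,000 against a £500 spend gives 1,900 per cent rather than 2,000.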

There is an awful lot more to evaluation than evaluation itself – it influences every part of the learning process.

If you want to find out more about the role evaluation plays in training design or any other aspect of the training cycle, subscribe to EvaluationZone and see how much more you can get out of your training and development.

Paul Kearns
