Evaluation – how much do you know?

What better place to start learning about evaluation than to establish how much you already know? Try answering the following questions, giving yourself a score for each. If you agree 100%, give yourself the maximum score shown in brackets; if you agree only 60%, give yourself 3 out of 5 or 6 out of 10. The maximum scores add up to 100, so your total is your percentage. (A short worked example of the scoring follows the question list.)

All of these questions relate to the key elements of an effective evaluation and ROI system.

1. All of our training is clearly linked to business objectives. (10)
2. All of our training has clearly defined learning/training objectives. (5)
3. I can clearly define the difference between validation and evaluation. (5)
4. I know what the PDCA cycle is, how it generates continuous improvement and where evaluation fits into it. (10)
5. I know what a feedback loop is and how it encourages learning. (10)
6. I can distinguish between basic training and improvement (added value) training. (5)
7. All of our added value training is subject to an ROI calculation. (5)
8. Reactions to all training are measured. (5)
9. Tests are used after all training to measure how much trainees have learned. (5)
10. Observations are made to check how much learning is being transferred to the workplace. (5)
11. We evaluate the impact of all training on the organisation. (10)
12. We do not design any training until we have already established the evaluation measures to be used. (10)
13. I can clearly define added value. (5)
14. The amount of money we spend on training could not achieve a greater return if used elsewhere in this organisation. (5)
15. We have set a pass mark for all training programmes. (5)
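
If it helps to see the scoring arithmetic, here is a minimal sketch. The maximum scores are the ones shown in brackets above; the agreement levels are invented purely for illustration.

```python
# Minimal sketch of the scoring arithmetic described above.
# MAX_SCORES are the bracketed maximums for questions 1-15; they sum to 100,
# so the total score is already a percentage.
MAX_SCORES = [10, 5, 5, 10, 10, 5, 5, 5, 5, 5, 10, 10, 5, 5, 5]

# Hypothetical self-assessment: how far you agree with each statement (0.0 to 1.0).
agreement = [0.6, 1.0, 0.4, 0.8, 0.5, 1.0, 0.2, 1.0, 0.6, 0.4, 0.3, 0.2, 0.8, 0.5, 0.0]

total = sum(a * m for a, m in zip(agreement, MAX_SCORES))
print(f"Your score: {total:.0f}%")
```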

Here are some guideline answers.

Thank you for completing the questionnaire. It was intended to set a very high standard, so please don't feel disappointed if your score was low; just regard it as a great opportunity for improvement! I follow the principle that as long as you measure your performance and produce a baseline before you learn, you can check your performance improvement afterwards. (Have you tried doing the questionnaire again recently?)

Maybe the questionnaire was not scientific enough for you? It was not intended to be. My view is that although measurement is important in training, it is not as important as getting the fundamental principles of training right. The questions are really geared to checking the principles you follow, not only in training evaluation but in training analysis and design as well.

I have given my own guideline ‘answers’ to the questions below. You may not agree with all of them but they should provide a great deal of food for thought. Also, even if you just improve in one or two areas, that is still improvement and that is the main reason for producing the questionnaire (and any other evaluation tool).

1. All our training is clearly linked to business objectives.

Not as difficult a question as it may seem. Probably as much as 75% of training is done simply because the business has to do it (e.g. safety, induction, systems, product knowledge). This is automatically done for an obvious business reason. The rest of the training spend is discretionary, and this is where problems arise. What was the business reason for your leadership programme (better leadership does not qualify as a business reason)? Any training for which you have identified a business improvement, in monetary terms, qualifies as having a business reason, and you should be able to produce an ROI for it.
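
As a rough illustration of such a calculation (every figure below is invented, not taken from any real programme), ROI is simply the net monetary benefit divided by the programme cost:

```python
# Illustrative ROI calculation for an added value programme.
# Both figures are hypothetical, purely to show the arithmetic.
programme_cost = 20_000     # design, delivery and trainee time, in $
monetary_benefit = 50_000   # measured business improvement attributed to the training

roi_percent = (monetary_benefit - programme_cost) / programme_cost * 100
print(f"ROI: {roi_percent:.0f}%")   # prints: ROI: 150%
```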

For more insights please ask for a copy of ‘The 3 Box System’ article, which provides a simple, practical answer to this question.

2. All our training has clearly defined learning/training objectives.

Relatively straightforward, hopefully. Even coaching and mentoring exercises can establish some pretty clear learning objectives. This is basic good practice. If you gave yourself a low score, try to tighten up on this one straight away; it is a good discipline.

3. I can clearly define the difference between validation and evaluation.

I'm sure we could all argue semantics here. For what it is worth, I regard levels 1-3 as validation only: they check that the training delivered its training objectives, but they will never tell you how much value the training added. Only level 4 is evaluation, that is, it checks that the training delivered its business objectives. Evaluation is about putting a real value on training, and the only true value I know is one that has a $ sign.

When most people talk about evaluation they usually mean validation.

4. I know what the PDCA cycle is, how it generates continuous improvement and where evaluation fits into it.

PDCA stands for Plan, Do, Check, Act. It is a model that has been around since the 1920s, and Deming used it to great effect in quality management. It only works if you use improvement measures. A key part of it is the Check stage: here you check whether the planned improvement has happened and then feed back the results, good or bad. It is very similar to Kolb's learning cycle. If you replace the word Check with Evaluate, you have a perfect, simple and very powerful system for iterative, continuous improvement and learning. If you have never tried it, try it tomorrow.
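
If it helps to picture the loop in action, here is a toy sketch. Every number in it is invented and the 'training' is just a simulated gain; only the Plan-Do-Evaluate-Act shape matters.

```python
# Toy illustration of the PDCA loop with Check renamed to Evaluate.
# Scores and gain factors are invented; only the loop structure matters.
def pdca(baseline, target, cycles=4):
    score = baseline
    for cycle in range(1, cycles + 1):
        planned_gain = (target - score) * 0.5    # Plan: set an improvement measure
        score += planned_gain * 0.8              # Do: deliver (some planned gain is lost)
        gap = target - score                     # Evaluate: measure against the target
        print(f"Cycle {cycle}: score {score:.1f}, remaining gap {gap:.1f}")
        # Act: the measured gap feeds into the next cycle's plan, good news or bad
    return score

pdca(baseline=50, target=90)
```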

5. I know what a feedback loop is and how it encourages learning.

This follows on from the PDCA cycle. Without feedback loops the organisation does not know what is working and what isn't. Do you have effective feedback loops on all training? It is just as important to feed back when something isn't working as when it is. We learn from our mistakes.

6. I can distinguish between basic training and improvement (added value) training.

Basic training is training that the organisation needs just to stay in operation. Think of airline pilot training: the airlines could not operate without it. The only way to put an ROI on basic training is to think negatively (how many planes would crash if we didn't train the pilots properly?).

Added value training identifies an improvement gap. So, for example, you train to improve sales or profit. This training is easy to do ROI calculations for.

(See also Question 1).

7. All our added value training is subject to an ROI calculation.

You can only score well on this if you scored well on Question 6.

8. Reactions to all training are measured.

Straightforward Level 1 happy sheets will do for a maximum score; even sampling will qualify. I never set too much store by happy sheets, but they can still provide some useful indications.

9. Tests are used after all training to measure how much trainees have learned.

Level 2. If you really take training seriously then some testing should always take place. However, I will be the first to agree that testing is often highly contentious. Nevertheless, we do it when we have to for legal reasons, so why not all the time?

10. Observations are made to check how much learning is being transferred to the workplace.

Level 3, the most time-consuming level. Very meaningful, but it can be very disappointing when you see how limited the transfer to the workplace is. Sampling is highly recommended and would qualify for a maximum score.

11. We evaluate the impact of all training on the organisation.

I mean evaluation here, but for basic training (Box 1 in the 3 Box System) validation will suffice. I don't know any organisation that does this 100%, so if you gave yourself a maximum score you're the tops (or bending the truth?).

12. We do not design any training until we have already established the evaluation measures to be used.

For me this is a crucial question. The biggest problem trainers have when they first try to evaluate is that they don't realise the measures must be established before the training is designed. Trying to produce measures afterwards, without baseline measures, is a rather pointless exercise. Again, validation measures will suffice for Box 1 training.

For more insights please ask for a copy of ‘Evaluating backwards?’

13. I can clearly define added value.

Actually a great deal simpler than you might have thought, even though added value can be a slippery concept. If you make widgets, you can only add value by increasing widget production/sales (without increasing average cost), reducing the average cost of widgets, or charging a higher price. I would also accept an improvement in the quality of widgets, in the belief that this will feed through to lower costs, higher prices or higher sales.
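
To make those three levers concrete, here are some hypothetical widget figures (all invented):

```python
# Baseline: 1,000 widgets at $10 each, costing $8 each to make.
baseline = 1_000 * (10.0 - 8.0)          # margin: $2,000

more_volume  = 1_200 * (10.0 - 8.0)      # lever 1: sell more at the same average cost -> $2,400
lower_cost   = 1_000 * (10.0 - 7.5)      # lever 2: cut the average cost -> $2,500
higher_price = 1_000 * (11.0 - 8.0)      # lever 3: charge a higher price -> $3,000

for label, value in [("baseline", baseline), ("more volume", more_volume),
                     ("lower cost", lower_cost), ("higher price", higher_price)]:
    print(f"{label}: ${value:,.0f}")
```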

Obviously the same applies just as much to the provision of services and to the public sector, although they rarely have an opportunity to dictate prices or charges.

If you defined added value in terms of creativity and innovation, you are probably right. But these things will only really add value if they produce the improvements cited above. Added value always, always, always (is that enough emphasis?) has a $ sign attached.

For more insights please ask for a copy of ‘What is added value?’

14. The amount of money we spend on training could not achieve a greater return if used elsewhere in the organisation.

All added value training should produce an ROI greater than any other investment, and in practice you will often find this is easier to achieve than you might think: effective added value training generates significant returns (at least 100%).

All basic training, assuming you validate its effectiveness, should be delivered as efficiently as possible.

15. We have set a pass mark for all training programmes.

Ideally this is what we should do, but it is such a contentious issue that maybe we will leave this one for now.

If you have any queries, comments or other feedback on this questionnaire, please post them on the Forum page.
