Kenneth Fee

The top ten evaluation mistakes


Kenneth Fee and Dr Alasdair Rutherford address this month's theme from an evaluation perspective. 

There’s some great work being done in L&D, but not so much when it comes to evaluating it. The worst thing is, HR and L&D professionals often seem to be unaware they’re making fundamental errors. We’ve compiled a list of the top ten mistakes made in evaluating learning and development – are you guilty of any of these?

1. Never making it a priority

Evaluation is usually seen as important, but rarely as urgent. This leads to it being neglected. Some even argue that if you’ve correctly designed a learning intervention to meet correctly identified learning needs, then you don’t need to do any evaluation. Wrong. See our previous article, Making Evaluation a Priority, for the five reasons people put off evaluation, and the five things you can do about it. Sheer neglect is one of the most common mistakes.

2. Starting too late

Some people think, perhaps because evaluation comes after needs analysis, design and delivery in the systematic training cycle, that you can leave evaluation until the end. Wrong. If you wait until after training has started, or even after it has been completed, then you haven’t aligned your needs analysis and design with your evaluation. Even worse, you have no baseline to compare against, and if you have no baseline (how well things were done before the learning intervention), how do you know whether it’s made any difference?
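The baseline point can be made concrete in a few lines of Python. All the scores below are invented purely for illustration:

```python
# Hypothetical scores for the same five learners, measured before and
# after a course. Without the baseline column there is nothing to
# compare the post-training results against.
baseline = [62, 58, 71, 66, 60]
post     = [70, 64, 75, 73, 69]

changes = [after - before for before, after in zip(baseline, post)]
mean_change = sum(changes) / len(changes)
print(f"Mean improvement: {mean_change:.1f} points")  # Mean improvement: 6.8 points
```

Delete the baseline list and the calculation becomes impossible, which is exactly the position you are in if evaluation starts after delivery.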

3. Evaluating the wrong things

Robert Brinkerhoff says evaluating training is like evaluating the wedding instead of the marriage. Training is a means to an end, and the evaluation focus should be on performance improvement and business results. A hugely disproportionate amount of evaluation effort checks learner satisfaction and immediate reactions to training, when this effort would often be better expended looking instead at the difference learning makes. If you regularly and routinely evaluate all learner reactions to all training, then most of that effort is wasted. Evaluation should be about finding meaningful evidence.

4. Partisan treatment

Sometimes you need to be committed to your cause, such as when you’re making a business case for something to be funded. However, this sort of partisan approach is completely wrong when it comes to measuring the results of learning and development. Jack Phillips is right when he argues that your return on investment (ROI) calculations should be conservative in estimating benefits and liberal in estimating costs. If you want to be taken seriously, you need to be objective. This is a good reason to secure external help with evaluation: independent, impartial expertise.
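As a rough sketch of Phillips-style ROI arithmetic (the figures are hypothetical, chosen only to illustrate the conservative stance):

```python
def roi_percent(benefits, costs):
    """ROI as a percentage: net programme benefits divided by
    fully loaded programme costs."""
    return (benefits - costs) / costs * 100

# Conservative: count only benefits you can credibly attribute to the
# programme. Liberal: include every cost (design, delivery, venue,
# participants' time away from work).
benefits = 60_000   # hypothetical annual benefit, deliberately understated
costs = 40_000      # hypothetical fully loaded cost, deliberately overstated
print(f"ROI: {roi_percent(benefits, costs):.0f}%")  # ROI: 50%
```

If a partisan estimate doubles the benefits and halves the costs, the same programme reports 500% ROI, which is precisely the kind of number that destroys credibility.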

5. Lack of the right skills

How many HR and L&D professionals have applied research skills? Do they even realise these are needed for robust evaluation? Consider, for example, whether you are clear about the difference between precision and accuracy, or between causation and correlation. Can you decide when it’s best to use qualitative analysis and when to prefer quantitative? Can you make an ROI calculation? Can you construct a Business Impact Model? If your answer to any of these questions is no, you lack some of the skills needed to conduct proper evaluation.

6. Mixing purposes

It’s amazing how common it is for people to collect data without having a plan for analysing it! Evaluation should always have a clear purpose, and it’s important not to confuse different purposes. Do you want to prove the value of learning by measuring its results? Or do you want to investigate ways to improve the learning? Or do you want to quality assure your learning, for consistency, or to meet agreed standards? All of these are different purposes and require different approaches, and unless all stakeholders are in agreement about what you’re aiming to accomplish, these are likely to get mixed up.

7. Not relating learning to business outcomes

This mistake arises when the L&D function becomes divorced from the business, and forgets what it’s there for. Learning, in business terms, is not an end in itself. It is a means to achieve business goals, or to achieve them faster, or to exceed them, or to help clarify new business goals. Successful learning interventions are linked to the outcomes and values that matter for the business, and can demonstrate the contribution they make. Unless you can clearly show your stakeholders the difference L&D makes to the business, you’re making this mistake.

8. Overevaluating

It’s a big mistake to exhaustively evaluate everything, especially when this alienates learners. How often do we see reluctance to fill out survey forms, and low returns? The solution is sampling. You need to know how to select a big enough, representative sample of learners, or other stakeholders, and find out what you need to know from them. Deeper investigation is usually better than wider. Stop carpet bombing everyone with happy sheets, stop wasting resources, and focus your efforts on evaluating what really matters.
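To show how affordable sampling can be, here is a common rule-of-thumb sample-size formula, assuming a simple random sample and a proportion estimate; the learner count is invented:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within `margin`,
    with a finite-population correction. z=1.96 gives 95% confidence;
    p=0.5 is the most cautious assumption about the true proportion."""
    n0 = (z ** 2 * p * (1 - p)) / margin ** 2      # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# For 400 learners, about 197 well-chosen respondents give a 5%
# margin of error at 95% confidence; no need to survey everyone.
print(sample_size(400))  # 197
```

Note how slowly the number grows with the population: the point of sampling is that a representative few can speak for the many.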

9. Badly designed questions and questionnaires

One of the biggest mistakes is badly designed questions and questionnaires. You need to know why you’re asking questions, what the different types of questions are - such as discrete, dichotomous, open or contingency questions - and when to use them. Badly designed scales can also skew responses. If your questions aren’t the right ones to ask, then the answers won’t yield the information you need.

10. Ineffective reporting

Finally, mistakes are made in evaluation reporting. These include boring documents, misusing diagrams, lacking visualisation, and missing the key aspect of personalisation – as Donald Kirkpatrick says, facts tell, but stories sell. Reports need to have findings, conclusions and recommendations, and they need to be illustrated with real people’s experiences and viewpoints. Reports need to communicate effectively, packing an emotional punch as well as revealing all the essential facts and figures, and need to be in the right media for the audience’s expectations. Can you honestly say that of all your reports?

Do you recognise any of the ten most common mistakes? Save your blushes – nearly everyone is making some of them. But do something about it. Get help. Now.

This article first appeared on the Airthrey website. Kenneth Fee and Dr Alasdair Rutherford are the founding directors of learning evaluation firm Airthrey Ltd. Ken is a career learning and development professional, whose latest book, 101 Learning & Development Tools, deals with evaluation among other topics. Alasdair is an evaluation and econometrics specialist, and a Research Fellow at the University of Stirling.
