Godfrey Parkin explains why trainers can’t get away with smile sheets and the occasional ROI calculation.
I have just returned from Dallas (howdy y’all), where I made two presentations on evaluation strategy at the annual conference of the International Society for Performance Improvement (ISPI). ISPI conferences are very different from the big American training conferences in so many ways. The attendance is smaller (around 1,800 people) and more cosmopolitan.
The tone is both more casual and more collegial. Because most of those attending tend to be senior people in their organisations, plenty of strategic issues are mixed in among the tactical topics. At an ISPI conference, the discussions that take place both in-session and in the breaks are characterised by both infectious enthusiasm and earnestness.
People network not because they are hustling for business but because they are eager to share experiences and learn from others. Admittedly, there is more of an academic or theoretical flavour to much of the content, and you can’t make an affirmative statement without having its basis challenged, but it makes for a much more engaging and useful experience than one finds in vendor-loaded training conferences.
It was both encouraging and distressing to discover from talking with people from some of the world’s largest employers that learning evaluation is indeed as badly done as my experience of a few dozen clients had indicated.
Encouraging, because the awareness of the need to take a more pro-active strategic approach to evaluation is being felt at higher levels in big organisations; distressing because so many of those charged with doing something about it are struggling to make sense of the task.
I talked with people from global pharmaceutical, financial services, and retail companies, and even divisions of the US Department of Homeland Security. All are facing growing demands on their training resources along with mounting scrutiny of their effectiveness. They all readily agree that the way evaluation was done five years ago was inadequate then and is hopeless today.
You just can’t get by on smile sheets and the occasional ROI calculation. Just because everybody else is doing that doesn’t make it ‘world class’, and it is a poor excuse for simply buying into the ubiquitous mediocrity of the profession.
An awful lot of what passes for learning evaluation is meaningless, irrelevant, and worthless, and it quite rightly lacks credibility with senior management. Our data are not derived from solid evaluative practices; we measure the wrong things in the wrong place in the wrong way; the resulting information is not actionable or disappears into a black hole; and it is reported (if at all) in ways which communicate poorly or do not tie back to the strategic goals of the organisation. In most organisations, the state of learning evaluation is, frankly, pathetic.
Our evaluation is invariably intervention-specific and activity-focused instead of impact-focused. If the purpose of training is to improve the performance of the organisation, then the purpose of training evaluation is to measure performance improvement, not to report on bums-in-seats, learner happiness, or test results.
Yet only eight companies in every hundred make any attempt to measure the impact of their efforts. How can training expect to gain the respect and stature that warrant a ‘seat at the table’ when we are incapable of convincingly demonstrating our ongoing contribution to the performance of the company?