
Holy Grail or Empty Vessel?


Garry Platt responds to a recent article, The Holy Grail of Evaluation? on TrainingZONE, in which Kevin Lovell explained the principles behind an automated evaluation system developed by Knowledge Pool.


Kevin Lovell, representing Knowledge Pool, recently published a short article on TrainingZONE entitled “The Holy Grail of Evaluation?”. It was claimed in this article that a new system they have introduced is ‘ground breaking’ and that it ‘automated higher level evaluation’. It was also asserted that the system provides ‘hard evidence of business benefit’.

These extravagant claims are worthy of further investigation, and it is my contention that some of them fail under closer examination and that this “Holy Grail” is wholly frail.

Essentially, the system described by Kevin Lovell is a questionnaire sent out three months after the developmental programme, which asks the candidate a series of questions about their line manager and their use and application of the learning in the workplace. It focuses on several areas, including quality, cost reduction and customer satisfaction.

Self-assessment
The first issue here is that, apparently, in Knowledge Pool’s experience learners are ‘very good at recognising when they have used what they learned, also whether that learning has helped them improve their job skills’. If this is so, it runs contrary to much of the research into people’s ability to self-calibrate and objectively assess their own achievements and performance; I reference the work of Le Torneau1, Church2 and Silvia and Duval3 as evidence of this. On a more pragmatic and immediate level, one only has to watch some of the people auditioning for ‘X Factor’ to realise that many people haven’t got a clue about their own ability, or about whether they can and do successfully apply the most basic of techniques. Alternatively, they might have frail egos to defend, careers to promote or practised bone idleness to cover up. Consequently I cannot and do not concur that people can consistently and widely make objective judgements about whether what they have learnt is being used, and used to good effect.

The second issue is whether, when an individual applies their learnt skills, it is being done in an environment and situation that both warrants it and leads to better workplace results. The fact that someone can achieve cost reductions or improve quality in the workplace does not necessarily mean that an improvement is achieved for the business. On the contrary, it might even lead to a reduction in overall business performance; we simply do not know from this questionnaire. It is assumed, but on no reliable premise.

Validity
The third issue centres on validity. It is stated in the article that the reliability and consistency of the answers were checked by ‘comparisons with reaction feedback and anecdotal evidence from the L&D team’. Unless something was lost in the editing of this piece, the proposition is that the validity of the answers gathered was confirmed by contrasting the questionnaires with reaction level feedback and stories told by the L&D team, across 1,000 responses in ten different organisations and an untold number of subject matter areas (that’s an awful lot of anecdotes and stories). Validity in this context would be confirmation that all or most of the claims made by the responders were verified and assured correct by solid evidence and factual data; anecdotes and reaction level questionnaires are neither.

The fourth issue is the claim that Knowledge Pool has produced a system that can ‘paint a meaningful picture of how learning impacts a business’. If by meaningful picture they mean a representative analysis of what the business achieved as a result of the learning, then no, in my opinion it doesn’t even come close. The article outlines that the questions in the questionnaire ask how the individual thinks they have improved their own performance. Does this tell us how the training has impacted the business? No. It doesn’t even tell us how the training has actually impacted the individual, only how they ‘think’ it has, and the reliability of this is clearly open to question.

ROI
Finally, the article states that the proposed methodology cannot provide ‘hard ROI statistics’. Let us be absolutely clear: this approach cannot supply even the softest of ROI evidence, not even of the Andrex toilet tissue variety of softness. There is no costing undertaken in this system, no substantive analysis of savings and most definitely no tracking of financial benefits achieved via a comparison of pre- and post-course results.
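To illustrate what a genuine ROI calculation actually demands, here is a minimal sketch using the standard return-on-investment formula; the figures and the function name are entirely hypothetical and are not drawn from Knowledge Pool’s system, which, as noted above, collects none of these inputs.

```python
def training_roi(total_cost, pre_course_value, post_course_value):
    """Return ROI as a percentage: (net benefit - cost) / cost * 100.

    A hard ROI figure needs three things the questionnaire never captures:
    a full costing of the programme, a measured pre-course baseline, and
    a measured post-course financial result.
    """
    net_benefit = post_course_value - pre_course_value
    return (net_benefit - total_cost) / total_cost * 100

# Hypothetical example: a £20,000 programme, a £100,000 pre-course
# baseline and a measured £150,000 post-course result.
print(f"{training_roi(20_000, 100_000, 150_000):.0f}%")  # 150%
```

The point of the sketch is simply that every argument to the function is a measured financial quantity; a self-report questionnaire supplies none of them.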

In terms of evaluation this is no ‘Holy Grail’, and it is not ‘ground breaking’ either. Years ago the Westinghouse Corporation produced a system called TOTEM, an acronym for Transfer of Training Evaluation Model, and it did exactly what it said on the tin: it was a process and system for measuring the transfer of training into the workplace. Anyone interested in finding out about their approach can visit http://tinyurl.com/2hnvqd. You will discover that it is administratively simple yet sophisticated in its outputs, and makes no extreme claims for its results.

So if this methodology is not the ‘Holy Grail of Evaluation’, what is it? In terms of Kirkpatrick’s model it is in fact a Level 1 (Reaction) evaluation tool masquerading as a Level 3+ evaluation strategy. It is not possible to escape the fact that you are asking participants for their views and opinions about their own achievements. This is not proof, or even reliable indicative evidence, of workplace performance and business level outcomes.

Evaluation at the organisational level is difficult to accomplish. Extraordinary claims like those made for this system require extraordinary proof; I find the evidence in this case lacking.