Sean Errington of People Projects looks at the best ways to evaluate that intangible metric, trainer performance.
If you are reading this article, the chances are you regularly read the professional HR and training press. When doing so, and when reading articles about measuring the effectiveness of training, do you recall any reference to observation of trainer performance? Quite probably not. You are about as likely to see a reference to observation of trainer performance as you are a declaration that "all measurement is a waste of time and resources". Yet observation of trainer performance does take place, and if it takes place, it is reasonable to assume that those who carry it out deem it to be of worth. Certainly in the government-funded training world the process is very well established. If observation is a valuable practice, why is there no reference to it on the agenda of conferences focusing on evaluation, or in journals and learned articles on the subject? Is it a professional practice that dare not speak its name? Is there a coven of practitioners determined to keep this dark art a secret? Perhaps I am one of those people who all too readily see a conspiracy in the most innocent of situations.
An alternative, and perhaps more rational, perspective is that there is an established evaluation orthodoxy. This orthodoxy suggests that the only - or most significant - approach to evaluation is through one or several indirect techniques, particularly those associated with Donald Kirkpatrick. These are indirect measures because they do not measure the process as it happens. Perhaps it is time to challenge the status quo and explore what observation of trainer performance could bring to the evaluation party, and how it might do this.
The proposition presented here is not that observation of trainer performance should replace established evaluation processes, but rather that it adds a significant dimension, complementing and enhancing those processes.
Indirect measurement – current practice
Perhaps it would be useful to look at what typical existing evaluation activities deliver. These of course include the ubiquitous 'happy sheet', or learner perception questionnaire. This in essence tells us the extent to which participants enjoyed and valued the content, the way it was delivered, the lunch, the accommodation and the resources used. All of which is very important, but it does not provide any objective evidence that learning took place; the fact that participants enjoy a learning session does not in itself prove the session was worthwhile. But of course we have other evaluation tools, namely Kirkpatrick level 2 evaluation. Pre- and post-course skills / knowledge testing clearly identifies whether learning has taken place and what impact the training has had. It does not, however, deliver any significant insight into how effectively learning took place.
It could be argued that evaluation tools do capture participants' comments about training effectiveness. Written or verbal comments, for example that a particular topic was not well taught, reveal simply that. Such feedback identifies that a problem exists, but does not provide the information necessary to identify its root cause. Talking to trainers about written or verbal comments from learners may well be most unenlightening, particularly if the trainer does not recognise why participants have had difficulties with a topic.
So, to conclude this section: established evaluation methods tell us many important and valuable things, but they do not tell us significant things about a trainer's performance, or about how efficiently learning takes place during learning sessions.
Professional development and ROI
It is of course reasonable to say that managers should expect trainers to be effective at analysing their own performance. The difficulty here is that tutors may:
- have a limited understanding of what good and better training looks like, which will inhibit the accuracy of their judgements on their own performance
- recognise what they do not do well, but have little idea why or what causes the difficulties they experience – subsequent discussions with a manager about the problems therefore become very hypothetical
Let us now take an ROI perspective on what an organisation invests in its trainers. It is quite likely that considerable resources have been invested in an individual trainer. This could include investment in:
- their general professional development
- gaining trainer and advanced trainer qualifications
- subject-specific update training
And possibly even secondments, where trainers return to the work environment to maintain the currency of their subject knowledge / skills; the latter could be significant in maintaining their credibility as trainers. Without a well-structured evaluation of their technical training competence, which an observation delivers, it is not unreasonable to suggest that an organisation is not robustly monitoring whether the investment made is delivering the return it should.
Sean is a passionate educator and advanced skills trainer delivering training organisation improvement training. Sean has worked at all levels in public education from primary schools to universities, and is involved in the inspection of publicly-funded learning. He also works with organisations as diverse as Hanson Aggregates and the Football Association. For more information have a look at www.people-projects.co.uk/store/index.php and www.portal4learning.co.uk