

Evaluation of training delivery pt1: A dark secret


Sean Errington of People Projects looks at the best ways to evaluate that intangible metric, trainer performance.

If you are reading this article, the chances are you regularly read the professional HR and training press. When doing so, and when reading articles about measuring the effectiveness of training, do you recall any reference to observation of trainer performance? Quite probably not. You are about as likely to see a reference to observation of trainer performance as you are a declaration that "all measurement is a waste of time and resources". Yet observation of trainer performance does take place, and if it takes place, it is reasonable to assume that the practice is deemed to be of worth by those who engage in it. Certainly in the government-funded training world the process is a very well-established one. If observation is a valuable practice, why is there no reference to it on the agenda of conferences focusing on evaluation, or in journals and learned articles on the subject? Is it a professional practice that dare not speak its name? Is there a coven of practitioners determined to keep this dark art a secret? Perhaps I am one of those people who all too readily see a conspiracy in the most innocent of situations.
"If observation is a valuable practice, why is there no reference to it on the agenda for conferences focusing on evaluation, or in journals and learned articles on the subject?"

An alternative and perhaps more rational perspective is that there is an established evaluation orthodoxy. This orthodoxy suggests that the only - or most significant - approach to evaluation is through one or several indirect techniques, particularly those associated with Donald Kirkpatrick. These are indirect measures because they do not measure the process as it happens. Perhaps it is time to challenge the status quo and explore what observation of trainer performance could bring to the evaluation party, and how it might do so.

The proposition presented here is not that observation of trainer performance should replace established evaluation processes, but rather that it adds a significant dimension, complementing and enhancing those processes.


Indirect measurement – current practice

Defending the orthodoxy, you might ask why we should waste time exploring the potential of observation when what already exists more than adequately meets evaluation needs. But then we didn't know we needed deodorant until somebody suggested that perhaps we should not go around smelling like ageing French cheese.

Perhaps it would be useful to look at what typical existing evaluation activities deliver. These of course include the ubiquitous 'happy sheet' or learner perception questionnaire. This in essence tells us the extent to which participants enjoyed and valued the content, how it was delivered, the lunch, the accommodation and the resources used. All of which is very important, but it does not provide any objective evidence that learning took place, and the fact that participants enjoy a learning session does not itself prove the session was worthwhile. But of course we have other evaluation tools, namely Kirkpatrick level 2 evaluation. Pre- and post-training skills/knowledge testing clearly identifies whether learning has taken place and what impact the training has had. It does not, however, deliver any significant insight into how effectively learning took place.

It could be argued that evaluation tools do capture participants' comments about training effectiveness. Written or verbal comments, for example that a particular topic was not well taught, reveal simply that. Such feedback identifies that a problem exists, but does not provide the information necessary to identify its root cause. Talking to trainers about written or verbal comments from learners may well be unenlightening, particularly if the trainer does not recognise why participants had difficulties with a topic.

So, to conclude this section: established evaluation methods tell us many important and valuable things, but they do not tell us anything significant about a trainer's performance, or about the efficiency of learning during learning sessions.


Professional development and ROI

In many working environments, managers and supervisors have considerable contact with those they manage during the work process; they have frequent opportunities to see for themselves how well an individual performs. In other contexts managers and supervisors may not often see their staff perform, but they see the products of that performance, e.g. completed work documents. Where managers manage trainers, unless they choose to observe them, they are not seeing them do what they are being paid to do. Looking at the design of training materials, the quality of session plans and so forth contributes to the overall performance picture, but covers only a small part of the whole training process. Participant evaluations and other evaluation techniques, as we have established, do not definitively tell us what tutors do well and in what respects they could improve. Where trainers deliver qualifications, you could argue the results should speak volumes. They certainly do, but results do not tell us which techniques individual tutors excel at, or which aspects of content they may find challenging. In other words, when we are talking about trainer performance management and development, how can managers have a complete understanding of the capability of their trainers if they have not observed them?

It is of course reasonable to say that managers should expect trainers to be effective at analysing their own performance. The difficulty here is that tutors may:
  • have a limited understanding of what good and better training looks like, which will inhibit the accuracy of their judgements about their own performance
  • recognise what they do not do well, but have little idea why, or what causes the difficulties they experience – subsequent discussions with a manager about the problems therefore become very hypothetical

Let us now take an ROI perspective, in relation to what an organisation invests in its trainers. It is not unlikely that considerable resources have been invested in an individual trainer. This could include investment in:
  • their general professional development
  • gaining trainer and advanced trainer qualifications
  • subject-specific update training

And possibly even secondments, where trainers return to the work environment to maintain the currency of their subject knowledge and skills. The latter could be significant in maintaining their credibility as trainers. Without a well-structured evaluation of their technical training competence, which an observation delivers, it is not unreasonable to suggest that an organisation is not robustly monitoring whether the investment made is delivering the return it should.

Sean is a passionate educator and advanced skills trainer delivering training organisation improvement training. Sean has worked at all levels in public education, from primary schools to universities, and is involved in the inspection of publicly-funded learning. He also works with organisations as diverse as Hanson Aggregates and the Football Association.

