
Measuring Trainers' Performance


Hello, my name is Viviene Petersen and I am the Head of Sabre Training at Sabre Pacific in Sydney, Australia. Sabre Pacific provides technology solutions to the travel industry. Part of the service we offer to both new and existing customers (travel agents) includes software training, the cost of which is incorporated into the contractual agreement. The training team's primary responsibility is to train customers to use the Sabre reservation system.

As part of a company-wide performance measurement drive, I need to create appropriate, ongoing performance measures for both the national training team and the individual trainers. The tricky part is that the training department has little or no control over who attends training. Although we actively promote our customer training program at every possible opportunity, sending staff to training courses is entirely at the discretion of the customer. Generating new business is also dependent on our sales team.

Current situation:
(1) Some of our courses have practical tests with an 85% pass mark. Of the 2,000-plus students trained annually, only a handful do not pass. They are, however, given the opportunity to re-sit at a later stage for a small fee.
(2) Course evaluations are consistently rated very good to excellent across all key areas, including facility, trainer, course material, expectations, relevancy etc.
(3) Service Level Agreements are in place for training administration processing times.
(4) Specific time frames are set for trainers to come up to speed on new courses. (All team members have training qualifications.)
(5) In a company-wide survey in 2003, the training department's customer satisfaction rating rose ten percentage points, from 88% to 98%.

I would greatly appreciate any feedback or suggestions you may be able to offer.


Viviene Petersen

10 Responses

  1. Trainers' performance
    It seems as though you have everything in place already, Viviene.
    In the end, the success criterion is “Do the delegates end up being able to use the Sabre reservation system?” And you already have the practical tests that show whether that has happened.
    You also have the course evaluations, which show how the training was received, and the customer satisfaction scores.
    The only possible addition, which might be interesting, is to ask trainers what they would consider to be fair performance measures for teams and individuals, and work with them.
    If they feel confident with you, you might get some useful material to work with.

  2. Missing the point

    Hi Viviene

    Please bear with me if this looks a little harsh – the details you give demonstrate that you have almost certainly missed the point.

    The result of a survey carried out by IBM at their Rochester training facility, over a decade ago, demonstrated a very simple yet crucial finding:

    Trainers who consistently get a rating of over 80% may actually be extremely expert in social skills (i.e. creating rapport with the trainees) – and mediocre as trainers.
    That is to say, the best results – in terms of what was learned, retained and translated into practical skills when assessed 3, 6 and 9 months later – were associated with trainers who consistently generated ratings in the 61%-80% band.

    Those trainers who consistently got scores of 81% and above were found to be relating to their trainees very successfully, but not actually achieving as much in the way of utilisable knowledge transfer as those in the preceding band.

    Of course trainers should create a good WORKING relationship with their trainees. But after that, the bottom line of trainer assessment is surely: When the trainees get back to the workplace, and after a “settling in” phase, are they able to perform the tasks they were trained to do with the required level of skill?

    This means getting feedback from managers *at least* three months after the course. Immediate post-course ratings are, in practice, next to useless except on the (hopefully rare) occasions where they highlight a trainer who is having major difficulties.

    Hope this is of use

    Paul

  3. follow-up evaluation
    I agree with Paul, Viviene. We always send out follow-up evaluation forms four weeks after the training to see if the skills taught are being used and how well trainees are getting on. It’s the effectiveness of the training you should be trying to measure, not how good trainees felt after the session.

  4. Assessing trainers and training
    Viviene
    I think Paul’s comments are just a wee bit harsh, as he intimates at the start of his message. I think what you are doing now is ahead of many others and should be applauded. That said, Paul is quite right to distinguish between popularity and effectiveness. I would certainly like to see the IBM research he refers to and examine the precise findings.
    In my organisation we do get qualitative and quantitative feedback on the trainers. We also do three-month post-event evaluation of impact (and have done for over 12 years, so we have quite a lot of data). I am pleased to say that we have very few poor ratings of trainers, but those poor ratings do correlate slightly with poor application of learning. Top ratings of trainers do not automatically predict effective application. However, I think it is perfectly reasonable to expect high end-of-course ratings (perhaps reflecting good interpersonal skills etc.) AND good results back in the workplace. I profoundly disagree with Paul that end-of-event comments are next to useless. They are good indications of the customer service you offer your customers – the learners – and can be important in marketing, reputation building and highlighting good training practice.
    I cannot comment on the 85% pass mark you use – this obviously varies according to subject matter, organisational standards and test mechanisms. I can comment on the 98% customer satisfaction rating – this is good, even though these types of ratings are only a litmus test. I would be surprised if the trainers, the designs or the delivery were poor with a result like that.
    If you want to look more holistically at the training operation, you could do worse than use the Australian public sector framework, which can be found at
    http://www.apsc.gov.au/publications03/capability2.htm
    Best of luck
    Graham

  5. Linking measures, Customer feedback & Repeat business
    Hi Viviene

    A few ideas to chew over.

    I would suggest building a process that tracks and links your measures across the different areas, agreeing a target as to what the impact from one to another should be. Quite a bit of work would be needed to develop and test a model that is challenging, yet achievable.

    I would look to extended customer feedback to get more quantitative numbers on what delegates are achieving for the business post-attendance.

    Track how much repeat business is a result of the training, rather than attributing it all to the sales team. An agreement with Sales would need to be thrashed out.

    Look into the value that would be lost if your training was not effective. Work would be needed with colleagues in other departments and with customers. Then flip this to assess the value added.

    Regards

    Phil Lowde

  6. Just a thought
    Just a simple suggestion: why not observe your trainers in action? This is common practice over here, since most NVQ training providers are inspected by government bodies who insist on observing training sessions. I use it all the time as an assessment tool for trainee trainers and it works really well. If you would like me to forward a copy of the documentation we use, let me know.

  7. Happy Sheets – Bah humbug!
    In reply to Graham’s comment about the usefulness of “happy sheets” I would refer you to Andrew Bradbury’s excellent (in my opinion) book “Successful Presentation Skills,” pages 134-135.

    Bradbury indicates that the use of post-course evaluation sheets is at best questionable and gives as evidence four sets of two unabridged comments from evaluation sheets relating to four runs of the same course, by the same person, using the same course materials. In each case the two comments contradict each other. I will quote just two of the sets:

    2a. “Pace generally too slow and notes do not add more detail to the lectures.”
    2b. “Very detailed – lots to absorb in a day, but good notes to take away.”

    3a. “Good clear concise material.”
    3b. “Could do with better presentation material.”

    This certainly gels with my own experience, and on the assumption that the author is telling the truth about these being real life quotes, I rest my case.

    Best wishes

    Paul

  8. “Live” supervision
    Sporadic observation of the trainer (with the trainees’ permission) can give you first-hand information on your trainers’ performance. Ensure there is a debriefing with your trainer after the event. Any notes would ideally be written up and shared with the trainer so the process is transparent. I’ve heard this process called “live” supervision, as it occurs on the job.

    One suggestion for the post course evaluation is to do it by phone rather than in writing. You may get a better response rate and be able to delve into any issues that arise.

  9. Happy sheets can have their uses
    I am involved with monitoring long-term training delivery in a large voluntary organisation. Most of the trainers are volunteers, so monitoring the quality of their training delivery is sensitive but very important.

    I agree that long-term evaluation against the original training objectives is the best approach.

    I find a Happy Sheet (HS), filled in on the day, can be a very useful addition to long term evaluation.

    It is clear that if participants like their trainer, they are reluctant to give them bad marks on any kind of score sheet. But a well designed HS can pick up when you have a disaster on your hands – and it does happen – particularly when you have a large number of inexperienced trainers delivering a new training project.

    I find the most useful Happy Sheet question is to ask participants to write down the three words which best describe the training experience they have just had. The range of answers to an open question like this gives a useful snapshot of genuine participant reaction.

    Words like useful, thought-provoking and practical are what I look for.

    Complicated, tiring, or boring are the kind of words which set my alarm bells ringing.

    Waiting for long-term evaluation may be too late. A trainer who is presenting badly or does not understand their material needs quick intervention.

    I believe both Happy Sheets and long term evaluation have a part to play in this important process.

  10. Evaluation Tool for Assessing Trainers
    We, at Matrix FortyTwo, have worked with a number of clients now to develop an evaluation tool for training managers and trainers to assess each other and provide valuable feedback.

    We also work with the client’s assessors to train them to assess and provide feedback, but providing a structured tool in the organisation has been invaluable for ongoing assessment, feedback and coaching. If you would like to see our own tool (we also assess our own trainers, as you would imagine), I am happy to share it with you.

    The advantage of doing this is that you are not relying on users who do not necessarily know what they should be looking for (as mentioned in many of the earlier comments); it also encourages support and ownership within the team, particularly if you opt for peer assessment.

    Hope you get what you need.

    Kind regards.

    Jooli Atkins