
Measuring Trainer / business effectiveness


We are reviewing our bonus structure for our Trainers and want to include hard measures which both positively impact the business and which the Trainers can influence and measure themselves against on an ongoing basis. We have some ideas based on what we currently measure as a department - training attrition, productivity / task controls - but many of them are subject to external influences. Can anyone help with what they measure and how Trainers’ bonuses are awarded, please?
Gemma Gibson

5 Responses

  1. Looking the wrong way
    Addressing the issue rather than the person, this question seems to indicate a most profound misunderstanding of the training process, *suggesting*, as it does, a “jug and mug” process in which the trainers are the only active participants.

    Mind you, it also sounds like the company “reward” system is based on a “carrot and stick” mentality. And that is also long past its sell-by date.

    What your company may find it can benefit from is a proper (effective) training evaluation programme – Leslie Rae (amongst others) has written extensively on this subject.

    FWIW, in our company there is a single bonus scheme – profit sharing. On the basis that we only employ people who are fit to do the job, the bonus pool is divided up so that each person gets the same percentage of their annual salary.
    It’s simple, it’s transparent and whilst it may not be perfect, we’ve not yet found a system to beat it.

    (Unless you know better?)

  2. Only part of the picture
    Hard measures include: no. of courses run, no. of courses designed, timekeeping, on-the-job observation of delivery, etc.

    Hard/soft: happy sheets, no. of TNAs, no. of evaluations plus recommendations, etc., and 360-degree appraisals.

    In the past I have been bonused on happy sheets and appraisals, the rationale behind happy sheets being that, although very flawed, they do provide some indication of how trainees reacted to the trainer’s style and an insight into the ‘perception/usefulness’ of the training dept to the wider organisation.

    All in all I agree with Paul: training is a more qualitative skill to measure, and to apply business metrics to it is to misunderstand how training affects the business and the trainer’s influence within learning. Training and its success will always involve a combination of factors, only one of which is the trainer, so to award a bonus simply on ‘outputs’ appears ineffective and unfair.

  3. Evaluating Trainers
    Gemma – if you send me your e-mail address I’ll send you a set of Microsoft Word articles which introduce a number of systems that you can either laugh at or utilise. Amongst these is an approach which illustrates a method of analysing the training being delivered and where and what it should be impacting on in the business: what is called 1st, 2nd and 3rd Order ROI analysis. It helps explore what impacts should actually be occurring, and it contributes to understanding where problems are appearing – either in the systematic approach the organisation has adopted for delivering learning and development within the company, or perhaps somewhere in the approach of the trainer. It would be my contention that the core value and main purpose of a trainer in a commercial environment is the contribution they make to the organisation’s bottom line, and this approach helps isolate where there might be breakdowns in this.

    The other system is LEAP (LEarning and APplication). This is a system whereby you can track participants entering the learning milieu, determine the impacts on the learner during the process and then, most importantly, track the subsequent changes back in the workplace. Identification of failure may have nothing to do with the trainer and more to do with the system, but nevertheless it will give you clear feedback.
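    The thread does not spell out the 1st/2nd/3rd Order ROI method itself, but the basic ROI arithmetic any such evaluation builds on can be sketched as follows. This is a minimal illustration only – the function name and figures are hypothetical, not taken from the systems described above:

    ```python
    # Hedged sketch: the conventional training ROI calculation
    # ((net benefits / costs) * 100), which order-of-impact analyses refine
    # by tracing WHERE in the business the benefits actually appear.
    def training_roi_percent(monetary_benefits: float, programme_costs: float) -> float:
        """Return ROI as a percentage of programme costs."""
        if programme_costs <= 0:
            raise ValueError("programme costs must be positive")
        return (monetary_benefits - programme_costs) / programme_costs * 100

    # Illustrative figures: £15,000 of measured benefit from a £10,000 programme.
    print(training_roi_percent(15_000, 10_000))  # -> 50.0
    ```

    The hard part in practice is not this division but attributing a credible monetary value to the benefits – which is exactly where the evaluation systems discussed in this thread come in.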

    Finally, trainers who don’t know what they should be doing may not always do it. The goals and targets given to Learning Specialists are often vague and unclear. SMART objectives can be extremely helpful here, but whilst many managers know what SMART is, very few of them can actually write SMART objectives – because SMART as a mnemonic is unhelpful, not at all definitive and not very descriptive. I have therefore produced a system to help write SMART objectives so that they are indeed SMART and comply with all the relevant criteria. As a consequence it is much easier to assess at a later date whether the trainer has achieved what was asked of them.

    I neither see nor experience anything wrong in linking bonuses to either individual or group-based performance, provided it’s done sensitively, sensibly and with a dollop of common sense. If you like the sound of all this, drop me an e-mail headed Evaluating Trainers and I’ll send you the material. My e-mail address is garry.platt@wgrange.com

    Relevant Web Sites:
    http://www.trainingjournal.com/abstract/2002/10702.htm
    http://www.mce.be/download/newsletter/august/rightroi.pdf
    http://www.mce.be/download/newsletter/august/roiservant.pdf

  4. The Figures Say…
    To the best of my knowledge, Woodland Grange is a highly respected training provider, and if Garry is a senior member of staff there I’m going to assume that his answer reflects a substantial degree of relevant knowledge and experience.

    But, I did have reasons for making the comments that I did, which can be partially summarised with a single statistic:

    From a study carried out in the late 1970s by IBM, it emerged that when training results were measured 3, 6 and 9 months after the training and compared with the “happy sheet” ratings, it was trainees who marked the course and trainer between 61–80% who showed the most improvement, NOT those who gave a rating of 81% and above. It seems that trainers who get the supposedly ideal ratings of 90% and over are mostly being marked according to their SOCIAL skills, NOT on the real value of the training.

    Yet how many trainers are criticised, or even find their jobs are on the line if they regularly get happy sheet averages in the very band of ratings which indicate a high performing trainer?

    If trainers are rated according to the ROI of the training they carry out then, IMO, whoever is doing the assessing really should know what REALLY determines the effectiveness of the training.

  5. The Figures Say – Response
    Paul makes a number of comments which I would like to respond to, and first is the issue of ‘happy sheets’. Happy sheet ratings (or, as I know them, Reaction Level Validation Questionnaires) play a useful if very minor role in the process of validating training procedures, but they are not intended to indicate whether the training delivered and learning received has had any impact or value in the workplace. ANYONE who attempts to make any correlation between the two is missing the point and misunderstands the purpose of the instrument. But Reaction Level Validation Questionnaires ‘might’ indicate where there were delivery/reception problems in the learning process, which is of course a prime responsibility of the trainer.

    Paul continued: “Yet how many trainers are criticised, or even find their jobs are on the line if they regularly get happy sheet averages in the very band of ratings which indicate a high performing trainer?”

    I have no idea what the answer is to this but I am absolutely certain that it would be a rather misinformed organisation or individual that made any assessment of a trainer based predominantly or wholly on just this information for the reasons outlined in my opening paragraph.

    Paul concluded with “If trainers are rated according to the ROI of the training they carry out then, IMO, whoever is doing the assessing really should know what REALLY determines the effectiveness of the training.”

    This may be semantics, but it’s important our language is precise here. IMO, what REALLY determines the effectiveness of the training is its impact on the organisation’s bottom line; no impact = no effectiveness. (Only training which is deemed ‘compulsory’ by the host organisation, whether from a cultural, legal or legislative perspective, ‘might’ stand outside this determination.)

    If however the question is; What CONTRIBUTES or LEADS to this effectiveness, then the following would be major factors:

    1. The competence with which the trainer has identified the learning need.
    2. The skill and creativity the trainer utilised in designing the response.
    3. The effectiveness with which the trainer undertook the delivery.
    4. The appropriateness of the methods the trainer developed for assisting transference to the workplace.
    5. Finally, the efficacy and worth of the evaluation systems the trainer employed.

    The methods and techniques I outlined in my first posting will help identify where a trainer might be succeeding or failing in any of the areas listed from 1 to 5 above, and consequently why they may or may not be impacting on the ROI of the training.

    To repeat a line from my first posting: ‘It would be my contention that the core value and main purpose of a trainer in a commercial environment is the contribution they make to the organisation’s bottom line’ – and that’s what we should be seeking to measure.