
Let’s have a heated debate!!


"Those who can - do. Those who can't - teach." Is there any truth in this old saying? Given the audience of this forum, I suspect that most will disagree.

I don't know about you, but during my career I think I've been on the receiving end of the full spectrum of trainers. From the totally gifted and inspiring, with obvious love, enthusiasm and expertise in their subject - to the tedious and boring with little in-depth knowledge, who think that because they have read a book on the subject they can teach it.

Qualifications or a string of letters after someone's name is no guarantee of their training abilities - so how can we raise the bar on quality in the training profession?

Hopefully, personal recommendation from clients means that the good get more work and the bad fade away. However, is this really the case? I don't think so, as a bit of slick advertising can bring in new clients.

Raising standards in the training arena can only benefit the profession as a whole. So what would you consider the best way to do this?

I sometimes send people "undercover" on the courses we run to check quality and content. If it is not possible to do this, does anyone ever video their own performance and critically appraise it?

Feedback questionnaires can prove useful, depending on the questions asked and the honesty of the audience. For us, trainers and course content have to reach a 95% rating of OK or better (of which 80% must be good or excellent) - if not, we take action to correct this. We also read every single comment we receive.

Should there be some sort of standardised "marking" system for trainers, so you can display your "score"? I can envisage numerous pitfalls with a system such as that.

So back to the question - how do we raise the standards in training?


Tracy Murray

2 Responses

  1. Link to business improvements
    Honestly, I’m not sure that feedback ratings count for a damn, nor ratings based on how happy anyone was when they left.

    All that measures is that people enjoyed the training and perceived it as useful.

    You need to measure the actual business and performance improvements if you want a measure of the “value” of training.

    Of course, these measures only work if you define what you want to see prior to training, ensure that the learning is properly supported in the workplace afterwards (train a group without involving their management in coaching and support afterwards, and you lose a big chunk of the “effectiveness” of training) and measure changes appropriately (at the right time, as well as measuring the right thing).

    Then you have a real measure of effectiveness – happy sheets and undercover “agents” are a bit of a waste of time in comparison.

    If you can’t measure performance improvement – why are you training in the first place?

    If you can’t be bothered with that, then at least don’t work from happy sheets – have all your delegates assessed on the day for a change in performance. At least that would be mildly more useful data.

    Evaluation: it’s not just for Christmas – it’s for the life of the training.

  2. Assessing trainer effectiveness
    Tracy
    I would distinguish between training effectiveness and trainer effectiveness.
    Training effectiveness is, as Nik says, about results, outcomes and ultimate impact. It is partly down to the trainer/deliverer, but is also dependent on other things such as the needs analysis, the training design, getting the right people on the programme at the right time, the mix of course and non-course methods and how they are managed, the post-course support, the focus on applying, adapting and extending the learning, etc.
    If you want to isolate the contribution of the trainer, then customer feedback has a part to play. Simple end-of-event evaluations are not the whole picture, but should be part of it. I am sure that feedback from one customer or one course is fairly meaningless. But consistently excellent or dreadful ratings over time, and over different courses, can be a fair indicator.
    Qualifications and experience (in the subject matter and in training) are no guarantee, but the absence of all four would cause me concern.
    ‘Mystery shopping’ or sitting in is, as you suggest, a further option. But one of the best ways is personal recommendation. Getting the view of at least one other independent person whom you know and can trust, or who ‘knows their stuff’, is often regarded as immensely valuable. It is not scientific. It is not fool-proof. But what is?
    Those who can’t do, teach? I don’t think so. But maybe the link between doing and teaching, for some subjects, is not a strong one.
    What about: those who can’t teach, train trainers? – I couldn’t possibly comment!
    Graham