
Trainers’ Tips: Measuring success


Daviesr4 asked how best to measure the success of a training session for the delegates themselves, and was looking for tips on things to try and things to avoid. Here’s what the community said.

Observe - don't test

Both Paul Kearns and Nkellingley suggest setting baselines prior to training and then observing the differences at the follow-up stage, or assisting managers/colleagues in doing this if you are unable to do it directly. As Nkellingley writes: “Testing is fine as a measure of knowledge, but not much cop as a measure of practice.”

What will change?

Pauluk advises that if you are training students to do something new, you must establish with managers what performance they expect as a result of the training; if you are training them to do something differently, establish what improvements are expected. He also referenced Prof. Kurt Kraiger as a good source of relevant advice.

Ideas for evaluating training

BryanEdwards recommends using a knowledge/skill checklist that lists all the competencies being developed on the course. Delegates complete the checklist pre- and post-course, and their line managers then test out the same competencies at a later date. He says this gives delegates more ownership of their development and acts as a neat 'handover tool' between off-job trainer and on-job manager, helping the manager support (as well as test) their delegate in the development planning that results from the course. Bryan also suggested visiting www.abctrainingsolutions.biz/tnaevaluationtools.html and scrolling down to 'TNA & Evaluation - Coaching Skills' for a sample.

You should always have a 'benchmark'

Angela Dickinson says benchmarks are a must in order to measure the effectiveness of any form of training and to evaluate its success. Ask yourself: what do you expect to see people doing differently as a result of your training, and by when? If you don’t know the outcomes you expect from the training, it is extremely difficult to identify whether it has delivered what you wanted.

She also says that while it’s great to test the learning directly after the training course, the real question should be: are the participants applying the skills they have learned? You should always test the learning, but more importantly you should track the implementation of the skills learned. Are they achieving the impact you desired?

Are you asking the right question here?

Finally, Schma_m says that if the success of training is defined as improved performance, then learners must have the opportunity to apply their learning in situations that have the chance to deliver better performance. This requires the line manager or team leader to agree not just *what* performance outcomes the training must support but also *how* to facilitate that in terms of pre-, during- and post-event activities. Unless the culture and environment of the organisation allow for this, it can cause a blockage.

One way of overcoming this is to test knowledge as well as observe learners demonstrating their competence in class. If they come up to standard at this point, you know they are leaving the training at the required standard; any failure to perform after the class is then almost certainly caused by something outside the training.

He recommends Robert Brinkerhoff's book on the Success Case Method, as it offers detailed examples both of finding successful transfer of learning into improved performance and of determining the causes when improvement is lacking.

He also recommends encouraging and supporting participants and their managers in drawing up a plan and agreement covering the required performance outcomes from the training and how, between them, they will make this happen.

Finally, he adds that as part of the contracting and sales process a trainer goes through to win this 'gig', it simply must be addressed with the client. Developing reliable and credible baseline measures is often trickier than many think, as is getting sufficient clarity about just which issues the training is intended to address. And if you bring a business brain to this problem as well as your training/consulting brain, well, then you'll be 'quids in'!

If you have other suggestions for how to measure success, please do share them here with the community.
