Evaluation of the impact of training

My client has bought and run several behavioural training programmes on coaching and other aspects of development. He wants to think about how to evaluate the impact of the training.

We are interested in doing this creatively and I am attracted to appreciative evaluation using appreciative inquiry. Does anyone have any relevant experience of using Appreciative Inquiry for this or of any other interesting ways of evaluating the impact of training?

Any help or ideas would be most welcome!

Best wishes
Nick Heap

8 Responses

  1. Reality and Desire
    Nick,

    Behaviour-change training only works when the management of a company changes as well. So if you are working with line managers, the business unit managers need to go through the change too.

    There are a few things you need to measure.
    Certainly measure the appreciative side of things, but what is much more empirical is measuring the culture of the organisation. This breaks down into measuring the culture of management, the culture of the workers, and also measuring the desire of teams against the reality of teams.
    So the people may say the training was great, but they do not change because higher management are still doing things the old way.

    To measure this you could use Human Synergistics tools, a relatively low-cost way of assessing whether coaching and other OD training will actually work. I am not an expert in this area, but having looked at what Human Synergistics can do based on their 30 years of work, it makes more sense. From that you can then use creativity to work out how to get all levels to make the change, rather than just nod in agreement and then carry on in the usual UK way of ‘holding on’.

  2. Impact of Management Development Training
    My recent MBA dissertation looked at the impact of management development training (MDT) on the SME sector.

    Having studied over 150 up-to-date, independent, peer-reviewed academic research papers as part of the literature review, I found that not one could establish a direct causal link between MDT programmes and bottom-line profitability. The waters are just too muddy to make such a link. It was, however, possible to show a link between MDT and survivability.

    As a trainer myself I found this a surprise and in contradiction to current mantras promoted by the training business.

    Are we in danger of making claims that cannot be substantiated? And do we risk our standing by not doing what it says on the tin when our clients ask the questions and our claims are subjected to rigorous testing?

    Leslie

  3. Links between bottom line and management development
    Last year I was tasked with tackling poor staff retention in a major department of a large local authority – the dept had some 5000 staff. Poor retention was costing an extra £600K in unplanned recruitment costs and there was a danger that the authority’s ‘star rating’ could go down due to the dept not being able to fully deliver its services. High stakes indeed.

    In determining the root causes of the poor staff retention, we discovered that a lack of key people-management competences and low clarity about line managers’ roles and priorities were major contributors to the problem. Together with a poor approach to induction, these factors accounted for over 85% of the problem.

    From this we developed a highly structured induction experience, involved line managers in coaching front-line staff and supervisors, and also provided a very focused development programme in people-management skills. At the same time we put together an education programme to inform all staff and managers of the various roles, responsibilities and priorities, where they fit into the dept’s strategy, and their part in all of this.

    The entire initial costs for this were approx £67K, with a confident expectation (because we had done a good root cause analysis) that there would be approx £300K in savings on recruitment costs in year 1 alone (a rough payback sketch is appended at the end of this response).

    I also engaged successfully with the unions in order to identify issues from their perspective. For them a key indicator of success would be a dramatic reduction in the grievances staff brought to them.

    The programme is in the early stages of roll-out so I can’t comment on final results, but last I heard it was on track.

    I think Leslie (Spiers) is right – trainers could be in danger of talking a good fight but not actually fighting a good fight.

    It is a poor excuse to say that management development is not tangible enough to link firmly and explicitly to the bottom line.
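    For illustration, here is a rough back-of-the-envelope sketch of that bottom-line link, using the figures quoted above. The approx £67K cost and approx £300K year-1 saving are taken from this response and simply assumed to hold; this is illustrative only, not a substitute for proper pre- and post-intervention measurement.

        # Rough payback sketch using the figures quoted in this response.
        # The cost and saving figures are assumptions from the post, not audited results.
        programme_cost = 67_000    # initial cost of the intervention (GBP)
        year1_saving = 300_000     # expected year-1 saving on recruitment costs (GBP)

        net_benefit = year1_saving - programme_cost             # 233,000
        roi = net_benefit / programme_cost                       # ~3.48, i.e. roughly 348% in year 1
        payback_months = 12 * programme_cost / year1_saving      # ~2.7 months

        print(f"Year-1 ROI: {roi:.0%}, payback: {payback_months:.1f} months")

    Even if the £300K estimate proved optimistic by half, the payback period would still fall comfortably inside the first year.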

  4. Should have thought about this earlier
    Buying programmes and then asking the evaluation questions is putting the cart before the horse. Imagine trying to buy a train ticket without having a destination?!
    Moreover, appreciative enquiry is not an evaluation methodology, no matter when it is undertaken.

  5. Evaluation on the impact of training
    I feel that the response by Kearns has completely missed the point raised at the beginning of this thread, which was how to evaluate behavioural training. The response was, I feel, an insult to the originator of the thread. Nick was given no opportunity to take the obvious pre-training action, and if people wish to respond to a question they should stick to the point. Leslie

  6. Appreciative inquiry and evaluation
    Thank you all for your help. I have now done some research and come up with some alternatives to discuss with the client.

    The most attractive is about finding stories about what people have done with the training and then attempting to put some numbers to the benefits. This is an Ai-ish approach.

    You may be interested to know that Appreciative Inquiry is being used for evaluation. There is an issue of Ai Practitioner Feb 2005 #12 on it. This is available via Anne Radford on http://www.aipractitioner.com

    If you have any more thoughts, I would be glad to hear them.

    Best wishes

  7. What advice do you give on evaluation
    My original advice still stands. Either you do pre-intervention measurement for evaluation purposes or you cannot evaluate. The AI people might call their approach evaluation, but it is best described as “reverse engineering”, “retro-fitting” or even “post-rationalisation”; it certainly cannot be described as evaluation.
    The pre-measurement approach is based on sound theory. If you don’t agree then you need to offer your client an equally sound, alternative theory. If you have one I would be very interested to know what it is.

    Regards

    Paul