Stakeholders really only need to understand what return training and development provides on their investment. Why, therefore, is so much time and effort spent on complex evaluation processes when a carefully constructed cost/benefit analysis gives a clearer picture?
I Catchpole
3 Responses
What’s in a word?
Dear I,
Hamblin defined evaluation as “any attempt to obtain information on the effects of a training programme and to assess the value of training in the light of that information” – your cost/benefit analysis is therefore evaluation under this definition.
Why make it more complex? For greater depth, presumably. Does cost/benefit analysis assess effectiveness, and to what degree? Does it give the trainer feedback about their own performance? Does it link back to the learning objectives? Does it reveal weaknesses in the training? Does it evaluate every stage and process in the training?
If one is part of a training department or an external supplier, these factors are critical in giving the department confidence in its ability to meet its own targets and those of other stakeholders.
“Stakeholders really only need to understand what return training and development provides on their investment”
Rather than pre-judging what they really need to understand – simplicity and clarity – isn’t the depth, clarity and complexity of evaluation merely a reflection of listening to, and exceeding, their needs?
Both!
My opinion on this is quite clear: I strongly believe both are needed. They address different issues, play to different audiences and support different streams of the training profession.
Firstly, doing a cost/benefit analysis is a must. If any trainer or training function is going to be taken seriously, or considered in strategic terms by a business, then it needs to act in a businesslike way. Good-quality training is not cheap, and fiscal responsibility is an underpinning criterion for developing credibility in the function (have you ever seen an IT project go ahead without a business plan?). If money is spent by the training/HR function ‘willy-nilly’, with no way of quantifying it in terms the decision makers of the business understand, then it is not surprising that the function is treated like a second (and often scorned) cousin. Anyone who wants to be taken seriously in a business context needs to behave in a businesslike manner.
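To make that concrete, here is a minimal sketch in Python of the sort of quantification I mean. Every figure and the simple (benefit − cost) / cost formula are hypothetical assumptions of mine for illustration, not anyone’s prescribed method:

# Hypothetical cost/benefit sketch for a single training programme.
# All figures below are illustrative assumptions, not real data.

def training_roi(total_cost: float, annual_benefit: float) -> float:
    """Return ROI as a percentage: (benefit - cost) / cost * 100."""
    return (annual_benefit - total_cost) / total_cost * 100

# Assumed costs: design fees, delivery, and delegates' time off the job.
costs = 12_000 + 8_000 + 5_000   # 25,000 in total
# Assumed benefit: estimated productivity gain over the first year.
benefit = 40_000

print(f"ROI: {training_roi(costs, benefit):.0f}%")  # prints "ROI: 60%"

Even a calculation this crude forces the function to state its costs and claimed benefits in the language decision makers use; the hard part, of course, is evidencing the benefit figure, which is exactly where the deeper evaluation discussed next comes in.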
On the second issue – in-depth evaluation of training outcomes, behavioural change and future development capability – how else will the function be able to fulfil its role as the enabler of future business performance if it does not know where it stands in terms of the skills and attitudinal base it is trying to build, and which interventions are value for money and which are not? Good-quality, in-depth evaluation is like gold dust – but only if it is used in a strategic and constructive way.
More attention by the training function to the skills and disciplines of needs analysis and post-intervention evaluation, set within a strong commercial skill set (how many training teams have actually read their business’s business plan and discussed how they will align their input to it?), would, I feel, do the credibility of the function a world of good.
Evaluation theory – what’s new in training evaluation?
I am a newcomer to TrainingZone, so forgive me if I cover old ground. Although I am currently working in a training role, I am a social researcher by trade. I can’t help noticing how little of the debate on evaluation mentions recent methodological developments in the field, and how these could be applied to training evaluation. It seems to me that training could learn a lot from evaluation generally, particularly in the fields of programme and policy evaluation. On the other hand, many trainers still regard the Kirkpatrick model as the best available, even though it originated decades ago. If anyone could point to examples of where other approaches have been used (for example, stakeholder evaluation, theories of change or Realistic Evaluation), I would be very interested. L