


Donald Kirkpatrick answers members’ evaluation concerns


Donald Kirkpatrick, the 'father of evaluation', has kindly answered more of your evaluation conundrums, sent to us by members after we interviewed him last year.





Question: First of all, thanks so much for a great contribution to Learning and Development.
I would very much appreciate any tips on how I can get definitive figures at Level 4, when all other business areas are fighting for a slice of recognition to justify their existence too.

E.g., a new sales training programme is put into place. When looking at the increase in sales and profits (which training could claim is down to the increased performance as a result of the programme):


  • The technology team also want to claim a stake in the increase, due to more effective technology recently introduced
  • Managers who have coached and mentored feel it is they who have brought about the change
  • Back office staff claim their new processes enable sales staff to spend more time on sales
  • Recruitment teams claim that they've used better processes and recruited higher-calibre staff over the last few years

So, how can training put a figure on what is essentially seen in the business as a joint venture?

I am also very clear on how we can measure Levels 3 and 4 when the right conversations with management have taken place prior to the learning. Any tips for having to 'pick up' evaluation after someone else has started the process (i.e., has run the training, evaluated Levels 1 & 2) but moved on before undertaking any further evaluation? The conversations with management haven't taken place, and I am expected to pluck figures out of the air to justify training's existence!

Lastly (if allowed), where on earth do I start trying to evaluate programmes such as 'influencing skills' or 'team work'?

Thanks so much for your time,
Katy Walton

Answer: You have asked some challenging questions; the first has to do with 'proving' that changes in results came from a training programme for salespeople rather than from other sources.

I use the term 'proving' with great caution, because 'evidence' is the more accurate word. Let me refer you to a sales training programme at Hobart Corporation. They had 10 training regions. They decided to give the training to salespeople in five of the regions (the experimental group) and not to the other five regions (the control group). They did their best to be 'sure' that the experimental and control groups were the same in all important categories, the critical factor in comparing experimental with control groups. Six months later they compared sales in both groups. The figures were an increase in sales in the experimental group of $987,885 and a decrease in sales in the control group of $356,920, a difference of $1,344,805! I would call this 'proof beyond a reasonable doubt'.
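
For readers who want to reproduce the comparison, here is a minimal sketch (in Python; the two sales figures come from the Hobart example above, while the variable names and the confound list are purely illustrative) of how the experimental-versus-control difference is worked out and where the confound checks would sit:

# Minimal sketch of the experimental vs. control comparison described above.
# The two sales figures come from the Hobart example; everything else is illustrative.
experimental_change = 987_885    # sales increase across the five trained regions
control_change = -356_920        # sales decrease across the five untrained regions

# The evidence attributed to training is the gap between the two groups.
difference = experimental_change - control_change
print(f"Difference attributable to training: ${difference:,}")  # $1,344,805

# Before treating this as 'proof beyond a reasonable doubt', check each group
# for the kinds of unusual events mentioned in the text.
possible_confounds = [
    "regional economy",
    "loss of major customers",
    "increased competition from other suppliers",
]
for confound in possible_confounds:
    print(f"Check both groups for: {confound}")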

If someone claimed that the results could have come from other causes, it would be pretty easy to analyse each group and see if there were any unusual happenings, such as changes in the regional economies, the loss of major customers, or increased competition from other suppliers.

Another example is a reduction in turnover in a company. The turnover rate among new employees had been between 4 and 6% for the previous six months. The training department thought this was because the supervisors and foremen of the new employees had never been trained in how to orient and train them; as a result, new employees became discouraged and quit. So they conducted a training programme teaching the supervisors and foremen how to orient and train new employees. The goal was to reduce turnover to 2% or less. After the programme, the objective was accomplished, and for the next six months turnover rates were 1-2%.

The question was raised as to whether other things might have caused the reduction, and the answer was 'yes'. So they identified other possible causes, including the economy, the hiring procedures, and any special benefits offered to the new employees. In this case it was quite easy to check those three possible sources and see if they had changed.
So, the general answer to 'proving' that the results came from a training programme is to measure the results and then check out any other changes that could have affected them. Good luck! It is easy to say but a real challenge to do.

For other ideas, please read the chapter on 'Results' in one of my books for case examples from many organisations.

Regarding your second question, about taking over from another trainer: get whatever information you can from your predecessor. For example, they probably used reaction sheets to measure Level 1, and perhaps used some way to try to measure Level 2.

Also, I suggest that you organise a focus group of six to eight people who attended, and conduct a well-organised discussion of their evaluation of the programme. Try to get some agreement on their reaction to the programme, what they learned, and to what extent they have implemented, or will implement, what they learned. From there, you can start to measure Level 3, Behaviour. My books have guidelines, forms and procedures for doing it.

Your last question has to do with evaluating 'soft' skills programmes, including such popular courses as teamwork, leadership, and coaching. Forget about trying to attach monetary results to these programmes unless you can relate attitudes and morale to things such as turnover, which can be reduced to monetary terms. But you can evaluate change in behaviour, which is usually followed by positive results.


Question: I have just started a postgraduate course at the University of Sussex on elearning design and am keen to integrate evaluation features into a 'theoretically sound' course; I will need a design that the user can engage with whilst learning. The end result would be some measure of how successful or unsuccessful the course has been in achieving its objectives.

I imagine this would be useful for anyone out there who authors their own content so would welcome any features/mechanisms you would advocate to incorporate this into the learning experience.

Best Regards,
Conrad Hamer

Answer: First of all, try to set your objectives in some kind of matrix which can be measured. For example, you may be teaching my four levels. Set objectives such that each participant will be able to state the four levels, define them, and state the guidelines for measuring each.

Some students may already know about the four levels, so a pre-test and post-test will be necessary.

As a final evaluation, ask the following question (try to get some figures by following the question with the checklist below):

To what extent do you plan to use ideas from this course?
___ To a large extent   ___ To some extent   ___ Not at all

If you answered 'to a large extent' or 'to some extent', please give two or three examples.
___________________________________________

If you answered 'not at all', please check why not:
___ The programme didn't provide any ideas I can use
___ I won't have time – too many higher priorities
___ My supervisor will prevent/discourage me from making changes
___ Other. Please state
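
If the course is being authored in software, one way to wire this in is to represent the pre-test/post-test comparison and the final evaluation question as simple data structures that the course engine can score. The sketch below is in Python; the function name, field names and example scores are assumptions for illustration, not part of Kirkpatrick's model or any particular authoring tool.

# Illustrative sketch only: names, fields and scores are assumptions.
def learning_gain(pre_score: float, post_score: float) -> float:
    """Level 2 evidence: change in test score from pre-test to post-test."""
    return post_score - pre_score

# Final evaluation question, mirroring the wording of the form above.
intent_question = {
    "prompt": "To what extent do you plan to use ideas from this course?",
    "options": ["To a large extent", "To some extent", "Not at all"],
    "follow_up_if_positive": "Please give two or three examples.",
    "follow_up_if_negative": [
        "The programme didn't provide any ideas I can use",
        "I won't have time - too many higher priorities",
        "My supervisor will prevent/discourage me from making changes",
        "Other. Please state",
    ],
}

# Example usage with one hypothetical participant.
print(learning_gain(pre_score=45.0, post_score=80.0))  # 35.0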


Question: Don, as a proud owner of one of your autographed books, I have two questions. First, could you share why public education in America (K-16) continues to ignore your four levels of evaluation? Given that most evaluation is at Levels 1 and 2, when Level 3 is initiated, such as through state or national legislation, why do educators fear the accountability embedded within Levels 3 and 4?

The second question has probably been asked before, but please help me understand why so many HR professionals and consultants still insist that you cannot track Level 4 in quantifiable terms. In other words, why do they fear Level 3, and even more so Level 4?

Leanne Hoagland Smith

Answer: The reason that public education, apart from MBA and other practical programmes, has essentially ignored my four levels is that educators generally don't think they can learn anything from outside sources, especially anything connected with business and industry. They have their own collection of 'experts'. And, except in many vocational schools, they do not consider me an expert!

Many HR professionals, excluding training professionals, have their own problems and agendas including performance appraisals. I have written a book on this and related it to training, but most PA programmes are strictly geared to merit increases and other personnel decisions. Also, many are unhappy because training no longer falls under HR. The creation of Corporate Universities has taken away a lot of power and prestige from HR managers, and I used to be one in industry so I know this.

If you are referring to training professionals, there is growing interest and activity in Levels 3 and 4. My son Jim and his wife Wendy are writing a book called 'Training on Trial'. This will help to stimulate more efforts to evaluate at Levels 3 and 4, because the 'jury', i.e. those who will decide whether or not to approve the training budget, will be asking for more evidence at those levels. And more and more trainers are already doing more of it, because they know that the day of reckoning is coming.


Question: I have used your model in different companies, and what I have found challenging is making the process sustainable, with constant changes not only in management teams but also in the HRD function. What would your thoughts be on making the evaluation process sustainable?

Bhavna

Answer: In order to make the evaluation sustainable, there are several items to consider:


  • When you get the contract as a consultant, see if you can include some follow-up work to see what has happened in terms of your suggestions
  • When you are working with an organisation, I assume you are working with a particular person. Be sure that that person is in agreement with what you are doing, and leave some suggestions for follow-up, including you coming back to evaluate progress
  • Hopefully, your consulting includes specific actions for both trainers and managers to take to complete what you suggested. The first item above is the one I would particularly suggest

Question: Thanks so much for your simple but effective (and enduring) contribution to focusing on L&D outcomes! Since our business is all about pushing quantifiable goal achievement and behaviour change, we are constantly talking about the four levels.

A while back, Jack Phillips postulated there was a level 5 to be added, which addressed ROI. I have maintained that there isn't really another level, and his idea is simply a measurement approach to the existing four.

I believe a better approach is implementing a baseline measurement, before L&D activities take place, and that this will generally facilitate the needed effectiveness measurement. This seems to inherently fit with your model. I am curious to hear your thoughts. Thanks again for your highly valued contribution!

Tery Tennant

Answer: Thanks for your comments, including your view that there is no level 5 of the kind Mr. Phillips has suggested. Any monetary measurement is included in Level 4, Results.

The interesting thing is that he at least acknowledges my four levels, or there would be no level 5.

In relation to your suggestion, my son Jim, who teaches and writes books with me, recommends ROE, return on expectations. This means that before the training programme is launched, trainers should ask the 'jury', those who will decide whether or not the training budget should be approved: 'What do you expect from the training programme?' and 'How would you define success?'

This information can then become the basis for planning programmes that are well received, that teach the knowledge, attitudes, and skills necessary to do an effective job, and that clarify what behaviour is necessary to accomplish the desired results. In fact, these three items should be considered in reverse order, starting with the desired results.
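
As a rough illustration of that reverse-order planning, the sketch below (Python; the mapping format is assumed, and the example expectation is borrowed from the turnover case earlier in this piece) works backwards from a stakeholder expectation to the behaviour and learning that should support it:

# Hypothetical sketch of ROE-style planning in reverse order.
# The structure and wording are illustrative, not a prescribed format.
roe_plan = [
    {
        # Level 4: what the 'jury' said success looks like
        "expected_result": "Reduce turnover among new employees to 2% or less",
        # Level 3: behaviour needed on the job to produce that result
        "required_behaviour": "Supervisors orient and coach every new employee",
        # Level 2: knowledge, attitudes and skills the programme must teach
        "learning_objectives": ["How to run an orientation", "Coaching basics"],
    },
]

# Planning runs results -> behaviour -> learning; evaluation then builds
# evidence back up from Level 1 to Level 4.
for item in roe_plan:
    print(item["expected_result"], "<-", item["required_behaviour"])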

Jim and his wife Wendy are writing a book for AMA called 'Training on Trial'. This will help to challenge trainers to consider their jury and be prepared to be judged by its members.


Question: I love the simplicity of this model. I also consider the following: evaluate both the results and the processes, and learn from both success and failure. It is important to review with an eye to the contribution to customers' satisfaction, the overall success of the company, and our own personal development. We need to learn from both successes and failures, and accumulate this as knowledge and know-how. So, in summary:

  • Evaluate the results and the processes, and share them with the members involved
  • Evaluate from the key perspectives: the customers', the business's, and your own
  • Understand the reasons for success and failure

Kon Stoilas

Answer: Thanks for your helpful comments. I would add one item to your three practical suggestions.
In order to minimise 'failures', be sure to study what other organisations have done, and don't be shy about borrowing their ideas and adapting them to your own situation. Each of our three books provides case studies of successful approaches to evaluating at all four levels.

'Implementing the Four Levels' provides a summary of the four levels, guidelines for evaluating, a special chapter on getting managers on board, and case studies of evaluation implementation at each of the levels.


Question: I have been working on introducing a broader process of evaluation and have secured up to Level 2 of the process. I am currently working on the implementation of Level 3 and would really benefit from some advice on how I might make the transition successful, and as with Bhavna's comments, what is the secret of sustaining the process?

Jo-Anne Phillips

Answer: My son Jim and I have written three books, listed at the end of my comments. One of them, 'Transferring Learning to Behavior', provides the answer to your problem, which we call the 'missing link'.

I got a call recently from a training professional at MGM in Las Vegas. She said they had evaluated a programme at Levels 1 and 2 and asked if it was OK to skip Level 3 and go straight to Level 4, Results. I said 'no!'

We strongly urge all trainers to build a 'chain of evidence', which means evaluating at all four levels. It is impossible to determine whether or not any results came from a training programme unless Level 3, Behaviour, is measured. This book shows how to evaluate at Level 3, as well as how to be as certain as possible that change in behaviour will occur. Tery's comments above will also be helpful.
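
As a rough sketch of what such a chain of evidence might look like when recorded for a single programme, the example below (Python; the field names and the Level 1-3 entries are hypothetical, while the Level 4 entry echoes the turnover case earlier in this piece) keeps one piece of evidence per level so that a Level 4 claim can be traced back through Levels 1-3:

# Hypothetical 'chain of evidence' record for one programme.
# Field names and example entries are illustrative only.
chain_of_evidence = {
    "level_1_reaction": "Average reaction-sheet rating of 4.3 out of 5",
    "level_2_learning": "Post-test scores well above pre-test scores",
    "level_3_behaviour": "Managers observed trainees applying the new approach on the job",
    "level_4_results": "Turnover among new employees fell from 4-6% to 1-2%",
}

# A Level 4 claim is only credible if evidence exists at every earlier level.
complete = all(chain_of_evidence.values())
print("Chain of evidence complete" if complete else "Evidence missing at some level")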

Our great thanks to Donald for answering so many members' questions.

For more information:
You can find out more about Donald Kirkpatrick's theory of evaluation in the following books:

  • 'Evaluating Training Programs: The Four Levels' (3rd edition), with many case studies of successful implementation
  • 'Transferring Learning to Behavior' with details on being sure that the training is applied on the job and suggesting a number of ways for measuring it
  • 'Implementing the Four Levels', our latest book which makes it easy to evaluate at all four levels
  • Read our interview with the great man himself and read his first set of answers
