
Struggling with evaluation and ROI?


Looking at Any Answers postings over the last few years, it's evident that many members find themselves thrown in the deep end when asked to develop a process for evaluating training in their organisation. Kirkpatrick's model is often cited, but does it actually work? Is there any point in 'happy sheets'? Is ROI something that should be left to the accountants?

TrainingZONE is planning to launch a unique online specialist area dedicated to helping you develop your understanding of what it means to evaluate training, and to build evaluation fully into the training cycle. We'd like to hear from you: firstly, if this sounds like just the thing you need, but also if there are specific aspects of evaluation that are leaving you confused, so we can take account of them when building the new area. Simply e-mail your comments through, or add your thoughts at the bottom of this page.
Stephanie Phillips

24 Responses

  1. Training Evaluation
    We are currently reviewing and redeveloping training for our software. Previously we had no evaluation process at all and relied totally on unsolicited client feedback. I would appreciate any suggested feedback formats that encourage constructive criticism and honest feedback on both the content and the delivery of our training.

  2. Strategic evaluation process
    I am currently devising an evaluation strategy to look at the efficacy of staff development courses. I am particularly interested in using the notion of ‘self-efficacy’ as the main evaluation criterion, as the literature suggests this produces the most accurate results. I would like to hear from others who have an effective evaluation strategy and process.

  3. Evaluation forms
    I would like to know if there’s a better way to evaluate training than merely using evaluation forms. Perhaps there’s a better way to design evaluation forms? My experience in software training is that people are keen to tick any box just so the training can be over! This doesn’t encourage honest or helpful feedback. There must be a better way?!

  4. Training Evaluation
    ‘Happy sheets’ provide a ‘snapshot in time’ of delegates’ immediate response to a training event. However, as they always appear at the end of a course, there is a temptation to complete them hastily in order to ‘escape’. I find that an open discussion with delegates (prior to happy sheets being distributed), reviewing the highs and lows of the day(s), often generates more honest and detailed responses.
    What I believe is a more challenging issue is how the benefits of that training are applied and evaluated once a delegate returns to the workplace. The use of practical skills acquired through training is easier to demonstrate than that of soft skills, which focus on attitude. I would be interested to hear others’ views on this point.

  5. Evaluation
    First principles need to be addressed here: what is the evaluation testing? ‘Happy sheets’ really only test whether your attenders had a nice day. While that’s an important consideration (it is important that people associate learning with a pleasant environment), it isn’t actually about the training. To refocus, it’s essential that we maintain a ‘training audit trail’. First, set aims and objectives clearly, closely related to the context in which attenders work, and circulate them to the trainees early. Second, structure the happy sheet around those, to assess to what extent the aims and objectives were met. Finally, run a follow-up evaluation six months or so after the training, to get attenders to assess to what extent the intervention has affected the way in which they work. This is certainly the approach I have used in trying to assess whether workshops have been successful.

  6. Measure the outcomes – not the inputs!
    Training is surely about helping bring about changes in skills, knowledge or behaviour levels. Identify where you’re aiming for these levels to be and measure whether they have been achieved in practice.

    Measuring the training input is meaningless; it’s like assessing the quality of petrol – it could well be the finest, but it’s no good in a diesel car!

  7. comments re training evaluation
    I think the theory and difficulties have been well documented, and I would be interested in some basic practical information, such as: the pro formas that people use; whether and how they use electronic means of evaluation; how they get people and their managers to do post-course evaluation in terms of how the training/development has made a difference to that person’s work; and how they integrate training evaluation with appraisal.

    Also, evaluation must fit with the organisation’s culture and requirements, so I would assume there is variation between organisations in which aspects of training are evaluated. How do you make it appropriate to your organisation as well as useful to you in training terms?

  8. Evaluation resources
    As Judy Rowley comments, this subject has been well and truly documented – including by myself, giving not only the theory, difficulties and easements, but practical help on achieving full and realistic evaluation, plus a selection of evaluation instruments for use in a variety of situations. If anyone wants to follow this up, please email me and I shall be able to give you a full (realistic) set of references that will tell you all about the subject.

  9. Honest Evaluations
    I agree with Helen Palmer that written evaluations are not always candid. They will not give any constructive criticism of the content of the class, but rather a personal critique of the instructor – and not of the instructor’s teaching ability. Is there a method of receiving truly honest and constructive evaluations on which to base future training curricula?

  10. A way forward with evaluations in non-profit organisations
    I have been working for some time now on linking the EFQM Excellence Model to training evaluation.
    This works well at the lower levels of Kirkpatrick, and appears to work at levels 3 and 4 with some effort applied. If you want more information please contact me.

  11. Qualitative Evaluation of Training
    I think I have a reasonable understanding of the theory underlying training evaluation. I even appreciate and value the aims outlined within the Kirkpatrick model. My problem is that I am struggling to develop methods that allow qualitative evaluation of the impact of training interventions on organisational performance without making spurious claims that can easily be challenged.
    Has anybody out there any ideas about how to evaluate the impact of training and development on the performance of an organisation?

  12. New feedback forms
    I have recently introduced some new feedback forms in order to address some of the issues already mentioned, especially in relation to many forms being regarded as ‘happy sheets’. Tick boxes have been replaced with questions which, depending on the answer given, encourage the student to justify that answer.

    The forms were piloted through August and the results have exceeded my expectations. They provide a lot of information upon which positive action can be taken. There is also a separate sheet for the trainer, which needs a little more work. Feedback forms should always be designed with what you want to know in mind: ask the relevant questions, prompt for evidence to support answers, and take action.

  13. Impact on organisational performance
    I agree with Kevin McDermott: I too could make spurious claims about the added value of training, but I’m not sure that I could actually prove it! We have the processes (and the forms!!!) in place, but I’m not convinced that this really tells me that training activities are contributing to a better service and an overall improvement in organisational performance. I would really value more information and debate on this topic.

  14. ROI responsibility
    Chris Walton raises significant and interesting points, and I agree with him that this topic (one of the two most valuable in training and development) should be aired – but not only among trainers. Line managers have a VERY important role to play, but all too rarely do a) trainers seek their involvement and b) line managers consider being involved. If Chris is a trainer, he should be aware of the different responsibilities of the various people (5 roles) who should be involved in training and development; if he is a line manager, he should be aware that training is not just the responsibility of the trainer – in most cases the trainer’s direct involvement should end when they have completed an effective training programme at which the participants have learned. The line manager RESUMES his/her responsibility when the learner returns to work, to ensure that there has been learning and that it is implemented in the working situation. The trainer can certainly support the line manager in this, but the responsibility for ensuring that learning implementation, and therefore ROI, takes place rests with the line manager. The thousand-dollar question is ‘How many line managers actually do this?’.
    The 5 roles I have mentioned above are what I have titled in my writings ‘The Training Quintet’, describing the 5 roles – Learner, Line manager, Senior Manager, Training Manager and Trainer – who should be fulfilling their responsibilities (the quintet can also be called the Evaluation Quintet).
    Leslie Rae

  15. Evaluation and validation approaches
    I apologise for another entry, but re-reading the comments made so far triggered a couple of points I would like to make.
    1. Helen Palmer and Judith Ingression question alternatives to useless tick boxes. There is an answer – don’t use them! Instead ask relevant questions: ‘What have you learned? What have you been reminded of? What are you going to do with this learning? What have you not learned that you needed to/expected to?’ See Teg Griffiths’ response of 29 August – he appears to be going the right way.
    2. Judy Rowley – me being egotistic!! Echelon recently published a resource ringbinder plus CD of mine, the Training Evaluation Toolkit, with the instruments also available for purchase individually online. This resource contains 39 copiable and modifiable instruments for TNA and evaluation.

  16. The Quintet – music to my ears!
    I agree with Leslie: the answer lies not just with the learner, but with those who work with them. We try to make sure that learning/training is closely linked to business objectives identified by staff with their managers (and helped by mentors), so these people are also in the loop for evaluation feedback. How do we do it? I ask them! We do have forms, but these are given out about a month after the learning/training, so there has been time for it to be put into practice/embedded (or not, as the case may be). The forms are a checklist with comments: some feedback is on the quality of the training, but most is focused on work outcomes, and comments of note are followed up one-to-one. But then again, we are only 20 staff, all of whom are happy to express an opinion.

  17. My right to know?
    We too have evaluation forms that ask managers to sit down with those who have participated in training activities to assess whether or not the skills, knowledge and approaches “learned” on the course are being / have been applied. This is supposed to take place c. 3 months after the training has happened. I’ve been asked on countless occasions to simplify the form and the process – this evaluation is still seen as a “training thing”! I feel I have a right to know that the training and development activities that I co-ordinate and deliver add value to the service. If they don’t then either there’s something wrong with the training – so why bother – or it’s not being applied! Just how do we create a truly learning culture that is involved in all aspects of the learning cycle – from learning to application?

  18. A way forward with Kirkpatrick
    There is a lot of concern about how to carry out the various levels of Kirkpatrick. What is required is a separation of the levels and a decision on what one is looking for at each level. As I commented before, by using the EFQM Excellence Model one can ‘get a handle’ on the structure and form of an evaluation strategy for any programme. The most important task is for the client, not the training provider, to make a definitive statement that should be phrased along the lines of,
    “At the end of the project, a specified target group will benefit by having more of ……..(something) or will have a better standard of …….(something).”
    This statement will take one beyond just training more people to do something; it will address why they are being trained and what final outcomes are expected.
    This is sometimes very difficult for managers and strategic decision makers to do, but it can be done with practice.
    If anyone needs more information please contact me.
    (I work as an evaluator within the Public sector).

    Mike Griffiths.

  19. Yes – but what of qualitative evaluation
    I have read with interest the postings thus far. We also have immediate post-course evaluation and follow-up evaluation to identify how the learning has been used. This is great for tracking performance improvement by individuals.
    However, what IiP are asking for is to determine the impact of the training investment on the performance of the team and the organisation as a whole. I should be able to say that because I trained n staff in y activity, this has had a measurable effect on the business in terms of profitability, service quality, recruitment and retention of staff etc. This is where I am having a problem. Does anyone out there have any qualitative evaluation methods that would help me to demonstrate the impact of training in this way?

  20. Implementation of learning

    I go along with Chris Walton to a substantial extent on his organisation’s encouragement (?) for line managers and training participants to sit down together to discuss implementation. However, I feel that for them to do this after three months is leaving it too late. I suggest that a more effective programme is for the learners to return to work with their Action Plan and, within a week, meet with their managers to agree what they are going to do and how, and, importantly, when they are going to meet again to review the process. That is the important part of the whole process and takes implementation and ROI away from being a ‘training thing’. If you are informed when the review meeting is to take place, you can send a simple enquiry to the line manager asking about progress and offering further assistance to their staff. As I have said on many occasions before, if the trainer is convinced that the training programme is effective, and feedback on learning validation from the participants (not a Happy Sheet) confirms this, implementation at work is the responsibility of the learners’ line managers.
    Leslie Rae

  21. Subjective assessments
    Kevin, I have an instrument that can be used to assess the effects – i.e. the efficiency – of subjective/soft forms of training, rather like validating objective or quasi-objective training. It doesn’t give you an ROI for subjective training, although it can start the process by letting you know how well you are training. The rest is up to the line manager, to assess the value of the implementation – where have I heard that before! If you email me at wrae804418@aol.com we can discuss this in more detail, if you wish.
    Leslie Rae

  22. ROI on technical training
    Few of the ROI examples out there talk about technical training. What about the ROI of technical training – does anyone have any experience of this? The Return on Investment workshop on 14th January skimmed the topic, but I would like to know if anyone has more detailed comments.

  23. EvaluationZone can help…
    EvaluationZone can explore these issues further – it’s designed for members to raise specific issues like these, which can be addressed with other members or on a one-to-one basis. Add your questions now at http://forum.trainingzone.co.uk/~EvaluationZone

    Stephanie Phillips
    Editor, TrainingZONE

  24. The baseline is crucial in evaluation
    The key to effective evaluation is establishing a clear baseline before the training starts. If you want to evaluate in £s, then you need a £-sign baseline (sales, costs, profit etc.).

    If you want qualitative validation (sorry, anything without a £ sign does not qualify as evaluation – only the lower standard of validation) then you need a baseline to suit (e.g. customer satisfaction ratings, complaints etc.).

    With regard to the question above – first you have to decide whether the technical training is really just basic training (i.e. getting an engineer up to a minimum company standard) or whether some performance improvement is expected (e.g. shorter callout times). The first can only be ROI’d by asking what the cost would be if we didn’t do the training; this will usually be hypothetical and is not to be recommended. The second type of training requires a callout time/cost baseline. A worked illustration follows.
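
    To illustrate with purely hypothetical figures: suppose the baseline shows 1,000 callouts a year at an average cost of £200 each (£200,000 a year), the training costs £10,000, and afterwards the average falls to £185 per callout (£185,000 a year). The annual benefit is £15,000, so:

        ROI = (benefit – training cost) / training cost x 100
            = (£15,000 – £10,000) / £10,000 x 100
            = 50% in the first year

    Without the baseline there is nothing to subtract from, which is why the measurement has to be set up before the training starts.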

    Hope this helps – all the real, detailed answers are available at EvaluationZone

    Best regards

    Paul Kearns