Happy sheets rear up again... I am writing a dissertation on evaluation sheets, and was wondering if anyone was brave enough to send me a copy of theirs? (nikki_brun@tiscali.co.uk)
The general theme is that if generic evaluation sheets mean very little (we all know we should be using long-term evaluation), why do so many companies use them, and if you have to use them, what are the best questions to ask?
I would really appreciate any thoughts from you on the subject. Also if you have changed from them to another system or vice versa, I would like to know the thought process behind the change and how it has worked.
Many thanks for your help,
Nikki
Nikki Brun
17 Responses
One measure is better than no measure
They are a measure, just one measure – better than no measure. Use them (certainly keep their limitations in mind), but track them so that you can spot any trends. That way you are comparing like with like; so even where feedback shows people as being ‘happy’ with the training, you will know there is a problem if previous ratings averaged ‘extremely happy’.
Obviously you also need other measures and indicators to really evaluate.
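The tracking idea above can be sketched in a few lines. This is a minimal illustration with hypothetical ratings and an arbitrary threshold – not anyone's actual system:

```python
from statistics import mean

# Hypothetical average satisfaction ratings (1-5) for like-for-like
# runs of the same course, oldest first, latest run last.
history = [4.8, 4.7, 4.8, 4.6, 4.2]

baseline = mean(history[:-1])   # average of the previous runs
latest = history[-1]

# Even a 'happy' score can signal a problem if it falls well below
# the established baseline.  The 0.3 threshold is just an example.
if latest < baseline - 0.3:
    print(f"Investigate: latest {latest} vs baseline {baseline:.2f}")
```

The point is not the arithmetic but the comparison: a single sheet says little, while a drop against your own history says something worth looking into.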
Arian Associates Ltd
Like any training organisation we use evaluation sheets. And like a lot of people we question their value at times, but you are welcome to a copy of ours.
Email us on info@arian-associates.co.uk and we will reply with a copy attached
Limitations of Reactionnaires
In my book ‘Assessing the Value of Your Training’, Leslie Rae, Gower, 2002 (Chapter 8) I give 26 limitations of Reactionnaires (I refuse to use the phrase ‘Happy Sheet’) and 6 values of them – enough said.
I can’t give them here as they exceed 2000 characters.
Too many are too long
Only yesterday I was at an all-day event, for which the feedback sheet asked us to rate each of 20 sessions on 6 aspects using a scale of 1 to 4. I doubt many people bothered. What were they expecting to do with all this data anyway? Trainers sometimes make this mistake too – a long form asking for all sorts of data which will never be used. So keep it short and simple.
not all reactionnaires are equal
I have worked with many organisations, consultancies and employers and have seen many different examples. Some are really quite good and some are B****y awful.
I have known some organisations who just file them… and others who analyse them in intelligent detail. I have seen them used to justify crass financial waste and to evaluate the effectiveness of training, trainers, venues and company policies.
“Happy sheets don’t waste resources, people waste resources”.
Sorry Nikki, that is more a soapbox stance than useful data!
A sheet with a clear purpose
Getting participants’ reactions to “relevant aspects” is essential. The data is then used as part of the continuous improvement feedback loop in the training system.
What is a “relevant aspect”? It includes:
* Pre-Course work (relevance to course, difficulty, length, resource difficulties, time-frame, etc.)
* Pre-Course Administration (application/registration process, confirmation details, location/map/access information, payment methods, etc.)
* Facilities (venue, breakout rooms, lighting, air conditioning, mobile phone coverage, landline and laptop online access, toilets, parking facilities, internal signage, on-site registration, etc.)
* Presentation (audio quality, screen image clarity, comfort with Course activities, apparent “skills/ability” of facilitator, suitability of handouts/resources, etc.)
The participants’ reactions to these “relevant aspects” inform the decision-making process for future Courses.
N.B. The reactions to the facilitator need to be treated with care. Sometimes very effective learning with great ROI comes from experiences that participants did not enjoy, or from a facilitator that they did not like at all. It is only much later that participants realise how much they gained from the experience. A great example is a Platoon Sergeant on a Recruit Course – he or she moves from hated to respected to loved throughout the training.
The above all fits with the first of D. Kirkpatrick’s 4 Levels of Evaluation.
It is not relevant to seek reactions to how much was learnt, how they or their businesses will benefit, etc.
Use the end-of-course evaluation for what it is designed for and it is a great tool. Do not let it be corrupted to inappropriate uses.
Regards,
Radcliff
happy to send you ours
Drop me a line and I’ll send you ours. To some extent, the discussion on how useful they are is irrelevant if the CLIENT wants to see them used. Obviously we try to make ours as useful as we can….
Stop me if you’ve heard this before…
Nikki
Firstly I’d like to echo Leslie Rae’s comment. This is a man who has been around training for more years than I’ve had hot dinners – and definitely (IMO) knows what he is talking about.
In contrast, and with all respect to Radcliff, take this comment, which I believe is typical of the views of people who use immediate post-course questionnaires – use the sheet to assess the:
‘apparent “skills/ability” of facilitator’
This is an honest-to-God, full and unedited quote from a book by someone I know and can vouch for. It consists of two comments by people at the same presentation:
“4a. ‘[the presenter] covered a tremendous volume of material quickly and lucidly.’
4b. ‘Disappointing – felt instructor’s knowledge was too limited.'”
Which demonstrates nothing at all except that the two writers had different points of view.
End of story.
And the reason why companies still indulge in these fruitless exercises?
Because, as far as I can tell, most people outside the training community, and a good few inside, don’t know their gluteus maximus from the conjunction of their ulna, radius and humerus when it comes to evaluation.
Alternatively, something is better than nothing – not in the sense that vague evaluation is better than none at all (it isn’t – it’s worse) but because there’s often some twit higher up the management tree (all too often the head of HR) who wants to see “results” in black and white and who is most easily placated by a nice set of statistics, even though they are, in reality, meaningless.
Best wishes
Paul
Feedback plus Attendance
You are getting lots of valuable comments about evaluation, but another angle is that the form is a useful way of ‘proving’ that the person was there at the training event (in case of any dispute!) and also gives them the opportunity to comment on what went on (in case of any contractual query later).
Four quick questions
I just ask four questions which help me decide how to improve the session next time (what to change/what to keep). Once I’ve reviewed the comments and actioned them (if necessary) I then bin them.
1) What was the most significant thing you learned on the session today?
2) What was good about the session?
3) How could the next session be better?
4) Are there any other comments you’d like to make?
I find these questions particularly useful when reviewing a course someone else has run, e.g. SMEs or staff reporting to me.
Long, long, long term
In my organisation we use extensive evaluation sheets but recognise that no single sheet can be acted on in isolation. Instead we compile the evaluation sheets over a minimum of six months and try to spot trends in appreciation. Upward trends need to be analysed: what have we changed? Downward trends the same. Stable trends need to be examined for ways to improve.
We use a 6-month/1-year/1.5-year model, as we provide the same training every month.
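Compiling sheets over a period and comparing windows, as described above, might look something like this minimal sketch (the monthly figures and the 0.1 threshold are invented for illustration):

```python
from statistics import mean

# Hypothetical monthly average ratings for the same course,
# compiled over 12 months, oldest first.
monthly = [3.9, 4.0, 4.1, 4.0, 4.2, 4.1, 4.3, 4.2, 4.4, 4.3, 4.5, 4.4]

half = len(monthly) // 2
earlier, recent = mean(monthly[:half]), mean(monthly[half:])

# A sustained shift between the two half-year windows suggests a
# trend worth analysing: what did we change, and when?
if recent > earlier + 0.1:
    direction = "upward"
elif recent < earlier - 0.1:
    direction = "downward"
else:
    direction = "stable"
print(f"{earlier:.2f} -> {recent:.2f}: {direction} trend")
```

Either direction of movement prompts the same question the post asks – what have we changed? – while a stable trend is the cue to look for improvements.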
Trends and emotions
I agree with the previous reply – I use my sheets to view trends. Their value to the organisation may be limited but they help me maintain standards. I am prone to tinker, and sometimes the feedback tells me to leave things alone!
Some sheets I have seen ask delegates to ring words from a long list of “feelings”, for example bored, happy, refreshed, motivated, indifferent, etc. I think these are useful as long as the trainer has identified the feeling they wish to achieve. For example, I would not target time management to be exhilarating.
just a few thoughts
Good luck
Pete
happy for feedback!
Not sure I’d call them ‘happy sheets’… anyone in business should want feedback on the good and not-so-good aspects.
The ‘good’ remarks are useful for testimonials, and the not-so-good for learning and marketing policies.
The key is to make the feedback form as creative and as enjoyable to fill in as the actual training session.
Happy Sheets -yuk
Just a comment on Annah Ross’s ‘improver sheet’. Only question 4, to me, has any use whatsoever. 1 suggests there was only one significant thing in the session. 2 tells the learner that the session was good, but asks them to say it was even better than good – no request for them to say how ‘bad’ it was. 3 tells them that the session had failings. Why be so apologetic?
Happy sheets -yuk, yuk
Martin
One MEASURE is better than no measure at all, but sheets giving each individual’s personal feelings, attitudes (at that time of day), opinions (guided or misguided), etc. are not ‘measures’ of training or learning at all. One MEASURE of ‘what’ they have learned is better than no measure at all – because knowing how well the training/learning has gone is evaluation, i.e. determining learning, its extent and what they are going to do with it. Not ‘Oh yes, Mr/Ms Trainer, I thought you were wonderful!’
Evaluation
Pascal – what is wrong with one real evaluation sheet or other measurement for one course with one set of learners? Or do you have several courses of the same kind with the same group of learners each time? Evaluation is finding out what, and to what extent, that group of learners have learned from that event, whether it accords with the agreed objectives for the learning event, and what they are going to do with their learning. The next event (say of the same kind) will have a different set of learners (who may show they have learned different things – but that evidence should vary little from event to event = valid validation).
Initial reaction is important in context
I am surprised by those who do not see the value in the 'happy sheet' – incidentally I don't like the name – as it is stage 1 of a more comprehensive analysis of the learning experience. I have always found huge value in the Kirkpatrick model of evaluation, which allows me to measure 1) reaction on the day (critical anecdotal or 'word of mouth' feedback from one key person can kill a very good programme if this feedback is not gathered from all participants), 2) perceived 'learning', 3) perceived use and productivity increase, 4) return on investment.
Without all four dimensions it is difficult to assess value, and anecdotal feedback is an important part of that process – how people 'feel' at the end of the experience. If we do not assess how people feel and what they think at the end of a training course, then I don't think much continuous improvement is possible.
I liked the comment someone mentioned about 'tinkering', the happy sheet can validate the direction of a programme, or show the need for improvement.
In terms of rating the trainer in this way, it can be very 'hit and miss', as we all know how subjective this can be. However it does have value if the responses have a recurring trend.
Using the Kirkpatrick model can be challenging – ensuring that you get the required feedback/measurement without too many forms for the participants to fill in, and without too much administration for you – however it can not only enhance your programmes, but it can validate the need for investment in L&D! 🙂
A very interesting topic!