Can you automate higher level evaluation to make it time and cost efficient? Kevin Lovell explains a system that does just that...
Take any group of learning and development (L&D) professionals, ask them who measures learning activity (the quantity and quality of training delivery) and nearly all the hands go up. Then ask who measures learning outcomes (what happens after learners return to their day jobs) and most of those hands go down.
Why is that? Everyone agrees that the measurement of business benefit is highly valuable, and return on investment (ROI) seems to be the holy grail of learning and development. There’s no shortage of books and articles on how to conduct higher level evaluations (Kirkpatrick level 3 or 4, ROI, or what I will call ‘learning outcomes’). So why are learning outcomes so rarely measured when they are reckoned to be so valuable?
Barriers to higher level evaluations
The Chartered Institute of Personnel and Development’s annual learning and development report 2006 identifies the two main barriers to higher level evaluations as lack of resources (76%) and time (67%). Furthermore, 80% of L&D professionals believe learning delivers more value than they can demonstrate. In short, we don’t have the resources or the time to do it, and we don’t know how to demonstrate the benefits of learning effectively.
Drivers for an evaluation strategy
At KnowledgePool we’re managing huge amounts of learning investment on behalf of our clients. In different ways they all ask the same basic question: “Have you spent our money wisely?” Like any professional L&D organisation we measure learning activity, but our evaluation strategy needed to address learning outcomes as well.
To achieve this we had to do two things. First, identify information that would be easy to capture, yet paint a meaningful picture of how learning impacts a business. Second, find an inexpensive yet robust means to collect that data on a large scale.
What information to collect?
Our experience of evaluating learning face-to-face told us that learners are very good at recognising when they have used what they learned, and whether that learning has helped them improve their job skills. So we piloted a questionnaire, sent out three months after a learning intervention, which explores three areas:
1. How much they have used what they learned (to assess the transfer of learning to the workplace).
2. How much their line manager helped them to use what they learned (recognising the pivotal role played by line managers).
3. How much the learning has improved their performance at work (seven questions probing generic areas such as quality, cost reduction, customer satisfaction, etc.).
These questions reveal the impact of learning on the business, yet are simple enough to capture via a questionnaire.
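To illustrate how a generic questionnaire of this kind might be structured for automated delivery, here is a minimal sketch. The question wording, the rating scale and four of the seven performance areas are assumptions for the example; the article names only quality, cost reduction and customer satisfaction.

```python
# Minimal sketch of a generic learning-outcomes questionnaire.
# Question wording, the 1-5 scale and several performance areas are
# illustrative assumptions, not the actual instrument.

QUESTIONNAIRE = {
    "transfer": [
        "How much have you used what you learned?",
    ],
    "manager_support": [
        "How much did your line manager help you use what you learned?",
    ],
    "performance": [
        # seven generic performance areas; only the first three are named
        # in the article, the rest are placeholders
        "Quality of work",
        "Cost reduction",
        "Customer satisfaction",
        "Productivity",
        "Sales",
        "Teamwork",
        "Personal effectiveness",
    ],
}

RATING_SCALE = range(1, 6)  # e.g. 1 = not at all ... 5 = a great deal
```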
Data collection
We used an online questionnaire and invited learners to respond by email, with reminders sent if no response was received. The whole process was automated using our LiveBooker learning administration system: the emails went out at the appropriate time, responses were collated automatically, and reporting was handled through our data warehouse. There was no paper, no photocopying, no stamps and no re-keying of data. The questionnaire is generic and has been applied across a wide range of interventions.
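As an illustration of the scheduling logic involved (this is not the LiveBooker implementation, whose internals are not described here), the follow-up can be sketched as a daily job that emails delegates roughly three months after course completion and reminds non-responders. The data model, reminder interval and send_email helper are assumptions.

```python
# Sketch of automated follow-up scheduling. The Delegate model, the
# 14-day reminder interval and send_email() are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

FOLLOW_UP_DELAY = timedelta(days=90)   # roughly three months after the course
REMINDER_DELAY = timedelta(days=14)    # assumed reminder interval

@dataclass
class Delegate:
    email: str
    course_end: date
    invited_on: date | None = None
    responded: bool = False

def send_email(address: str, template: str) -> None:
    print(f"sending '{template}' to {address}")   # stand-in for a real mailer

def run_daily_job(delegates: list[Delegate], today: date) -> None:
    for d in delegates:
        if d.responded:
            continue
        if d.invited_on is None and today >= d.course_end + FOLLOW_UP_DELAY:
            send_email(d.email, "learning-outcomes-questionnaire")
            d.invited_on = today
        elif d.invited_on and today >= d.invited_on + REMINDER_DELAY:
            send_email(d.email, "questionnaire-reminder")
            d.invited_on = today   # restart the reminder clock
```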
Pilot results
We gathered around 1,000 responses from ten different organisations, covering many different subject matter areas. The results were checked for reliability and consistency, including, for specific courses, comparison with reaction feedback and anecdotal comments from the L&D team. Our conclusion is that the results provide an acceptably accurate picture of what happens when learners return to work.
The generic nature of the questionnaire enabled us to calculate ‘national average’ scores for each question, against which clients can benchmark their own results. Specific pilot results are listed below.
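As a sketch of the benchmarking calculation (the column names, scores and rating scale are invented for illustration), the ‘national average’ is simply each question’s mean score across all responses, and each client’s averages are then compared against it:

```python
# Sketch of the benchmarking step. Column names and scores are
# illustrative assumptions, not real pilot data.
import pandas as pd

responses = pd.DataFrame({
    "client": ["A", "A", "B", "B", "C"],
    "q_applied_learning": [4, 5, 3, 2, 4],
    "q_manager_support":  [4, 3, 2, 2, 5],
    "q_quality":          [4, 4, 3, 3, 4],
})

question_cols = [c for c in responses.columns if c != "client"]

national_average = responses[question_cols].mean()             # mean per question, all clients
client_average = responses.groupby("client")[question_cols].mean()
vs_benchmark = client_average - national_average               # positive = above benchmark

print(vs_benchmark.round(2))
```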
Transfer of learning to the workplace and the role of line managers
- 70% of delegates applied what they learned (hence 30% did not).
- 53% believed their line manager helped them apply what they learned (47% said no).
Significantly, of the 53% who did get line manager support, nearly all applied their learning. However, of the 47% without line manager support, more than half failed to apply what they learned. Overall, 25% of learners neither applied what they had learned nor got line manager support to do so.
The picture is clear: where line manager support exists, the transfer of learning to the workplace is much higher. This is reflected in the performance figures: individual performance improvement was significantly higher where learning was applied in the workplace, so line manager support translates directly into improved performance.
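The stated figures imply a rough two-by-two breakdown. Treating the rounded percentages as exact, the split works out approximately as follows:

```python
# Deriving the implied 2x2 breakdown from the stated figures
# (percentages are rounded, so the result is approximate).
applied_total = 70       # % who applied what they learned
supported_total = 53     # % whose line manager helped them
neither = 25             # % who neither applied nor were supported

unsupported_not_applied = neither                               # 25%
unsupported_applied = (100 - supported_total) - neither         # 47 - 25 = 22%
supported_applied = applied_total - unsupported_applied         # 70 - 22 = 48%
supported_not_applied = supported_total - supported_applied     # 53 - 48 = 5%

print(f"With support:    {supported_applied}% applied, {supported_not_applied}% did not")
print(f"Without support: {unsupported_applied}% applied, {unsupported_not_applied}% did not")
# Roughly 48 of the 53 supported learners (about nine in ten) applied their
# learning, against 22 of the 47 without support (under half).
```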
Performance improvement
Comparing the results from specific courses shows where learning has contributed to performance improvement. We found the results beneficial in these ways:
1. High-impact and low-impact courses. We could see which courses delivered above-average or below-average performance improvement.
2. Areas of high or low impact. Across the seven performance questions, we could assess whether a course improves performance in the right areas. For example, an ‘Effective Selling Techniques’ course should achieve a high score on the Sales Improvement question.
3. Who gained the most. Analysing performance improvement against job title for a specific course showed whether a course hit the mark for its intended audience and what difference, if any, it made for other delegates (see the sketch below).
4. Good reaction feedback (happy sheets) is no guarantee of performance improvement. We have identified courses where the trainer did an excellent job, the learning objectives were met and the delegates had a great time. However, the learners did not (or could not) apply the learning, and there was little or no sign of performance improvement.
All this gives the L&D team hard evidence of business benefit, informs decisions on course design, and provides feedback on the targeting of learning. And it is all based on learning outcomes rather than activity.
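For illustration, here is a sketch of the breakdowns behind points 1 and 3 above. The course names, job titles and scores are hypothetical, and the overall average stands in for the national average benchmark.

```python
# Sketch of breaking down performance-improvement scores by course and by
# job title. All data and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "course":    ["Effective Selling Techniques", "Effective Selling Techniques", "Time Management"],
    "job_title": ["Account Manager", "Administrator", "Administrator"],
    "perf_improvement": [4.2, 2.8, 3.5],   # average across the seven questions
})

overall_average = df["perf_improvement"].mean()

# 1. High- vs low-impact courses, relative to the overall average
by_course = df.groupby("course")["perf_improvement"].mean() - overall_average

# 3. Who gained the most: the same scores split by job title within each course
by_audience = df.groupby(["course", "job_title"])["perf_improvement"].mean()

print(by_course.round(2))
print(by_audience.round(2))
```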
Conclusion
This generic ‘Learning Outcomes’ questionnaire has become part of our standard service offering, issued to any delegate three months after completing any formal learning.
We believe this is a groundbreaking step towards making higher level evaluations possible on a wide scale, through a combination of carefully chosen questions and technology that automates the data collection.
It cannot hope to match an in-depth evaluation using interview techniques and detailed analysis of business metrics, nor can it deliver hard ROI statistics. However, it can provide L&D with valuable information about learning outcomes, often where none is currently available, and at minimal cost.