Most learning and development professionals are great at building content focused on the learner. We can also evaluate and test retrospectively whether that knowledge has been retained. However, we don’t have the data and insights to know whether behavioural and skill development is actually being displayed in the workplace.
Being able to measure learning transfer and look at impact rather than ROI is key to influencing the future direction of L&D processes and culture in your organisation.
Painting a picture…
We work really hard to deliver a great learning experience: everyone on the course feels positive, energised and ready to take these new skills and behaviours back into the workplace - they ‘are’ going to be better leaders.
Learners are 100% committed to the change process. As the person who designed, and probably delivered parts of, this course, you feel proud and energised that everyone had such a great time, and you feel positive that the learners will go and act on what they’ve learnt and that performance will improve because of it.
However, on returning to work the following week, nothing much changes! Why?
The results from the latest Global Learning Transfer Research 2017 show that only 7.7% of respondents felt that their approach to behavioural change was highly effective, and 29% don’t actually know if their learning interventions are benefiting job performance. Yet still 30% believe their function is to “support their employees in improving job performance”. This is a huge disconnect!
So why is there such a void between what we in L&D believe is happening, and what is actually happening in the real world?
Evaluation is important but only a small part of the story. It allows us to ask questions about the course and trainer, or test ‘knowledge’, to see if the learner can actually retain information they have been ‘given’, however it’s delivered.
Practice & reinforcement is absolutely key to allowing performance improvement to take place & flourish.
Learning transfer looks specifically to see if that new skill or behaviour is actually being displayed in the workplace, which is surely why we send our employees on courses.
34% of respondents to the research agree that their main aim is to ‘drive improved business results’.
Data
I would also argue that the data and insights we can gather from measuring the impact of effective learning transfer are hugely more valuable than those we get from our evaluation techniques. I use the word ‘can’ deliberately; the vast majority of organisations are not gathering any data in this area at all.
In the future, this will be key to influencing the direction of an organisation’s learning approach. If we don’t have the correct data (or any data at all) how are we supposed to really find out what L&D activity works & what doesn’t?
L&D teams get hung up on ROI and attempting to measure in monetary terms and that’s understandable as professionals take steps to being more business-oriented. Let’s take it as a given that training is, the vast majority of the time, beneficial all round.
As we move into a data-driven world, we as L&D professionals will be expected to prove that: the intervention has had an impact on the individual; new skills/behaviours are being displayed; and that the training added value and had an impact, not just to the individual but to the organisation who invested in it.
This moves L&D into the role of performance consultants as well as content curators.
How does this look from the learner’s perspective?
By way of an example, take a leadership development course. It typically costs in the region of £10,000+ per learner and runs over a series of months, with the various modules focusing on the areas in which it has been identified that the leader needs to improve.
Both during and after the course, if the individual is not made accountable for applying what they’ve learnt, they simply won’t - even with the best of intentions! Accountability is fundamental to successful learning transfer. We can’t say that purely delivering the programme means it was a success.
What measurements have we got to say that new skills and competencies are applied? Where’s the proof that this intervention has benefitted the organisation?
What are top-performing organisations doing?
Behavioural change is not easy. It can be done over time, but there needs to be a process of accountability all round, and also the ability to track & monitor these improvements. Creating the right conditions will certainly aid that process a great deal.
The Towards Maturity Benchmark 2016/17 analysis found that top-performing organisations (those in the top quartile) align their L&D activity with business need far more often than the rest: 93% vs 62%.
These companies are also building a culture of learning by allowing their staff to make mistakes, 72% vs 35%. This is one of the biggest areas of thinking we should explore further. As adults, we are so frightened of making mistakes that we can almost paralyse ourselves into not changing at all.
We need to accept that change happens over time and mistakes will be made along the way. Surely it’s far better to allow those steps towards improvement than to instil a feeling of fear that everything has to be perfect all the time! That’s simply not the way of the world, so let’s embrace failure and learn from it.
94% of top-quartile organisations also allow for new skills to be practised, vs 41% of other organisations.
Practice & reinforcement is absolutely key to allowing performance improvement to take place & flourish. If we are not giving our learners the opportunity to use these new skills how can we expect them to hone & improve upon them.
Finally, we look at engagement: 82% of top companies allow for the sharing of success stories, a very simple but effective way of highlighting to the rest of the business the value that L&D departments bring to the organisation.
Where to next?
With all of this evidence, why are we not concentrating more on learning transfer than evaluation? Good question!
Perhaps it’s because the evaluation piece is perceived to be easier and less complex, therefore cheaper and less time consuming. Could it be that because we as L&D professionals are not being asked for the data that we simply choose to ignore it?
Or is it that we often use external providers, and once they have delivered “their bit” and have been paid, they don’t see it as their responsibility to embed the learning?
I don’t actually believe there is one single answer; rather, it’s a combination of these elements.
However, what I am sure of is that in the future we will be asked for the data. The sooner we start collecting it, the better our decisions will be and the more insight we will have into all our learning & development activity and effective learning transfer, ultimately improving the performance of learners and of the business overall.
One Response
Good points all around – you’re really stating the obvious: learning transfer is evaluation, or at least part of it. No evaluation is complete without a sense of learning transfer.