
Loss of new skills after training


I'm preparing a training session for 60 managers in a local authority on ensuring new skills are transferred into the workplace after a training session.

I want to stress to them how quickly new skills and knowledge are lost if the student is not given an opportunity to use them immediately.

I know studies have been carried out which state the percentage of knowledge lost after training at 1 week, 2 weeks, 3 weeks and so on if the skills are not applied.

Does anybody have these statistics or can they point me to the info?

thanks

Mike Harbon

6 Responses

  1. Ebbinghaus curve of forgetting
    Hello Mike,
    One piece of research often quoted dates from 1885 (!), when Hermann Ebbinghaus discovered that our ability to recall information drops rapidly over a very short space of time, and that within a month up to 80% may be forgotten. This piece of research, known as the Ebbinghaus Curve of Forgetting, still holds true despite its age, if steps aren’t taken to combat this natural tendency to forget.
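
    For a rough feel of the curve’s shape, here is a minimal sketch (in Python) of the exponential-decay model often used to describe Ebbinghaus-style forgetting, R = e^(-t/S). The strength value is an assumption, calibrated only to the ‘80% forgotten within a month’ figure above; it is not fitted to Ebbinghaus’s own data:

    ```python
    import math

    def retention(t_days, strength=18.6):
        """Simple forgetting-curve model: R = e^(-t/S).

        strength (S) is an illustrative 'memory strength' value;
        larger S means slower forgetting. S = 18.6 is an assumption
        chosen so that roughly 80% is lost by day 30, matching the
        figure quoted above rather than Ebbinghaus's measured data.
        """
        return math.exp(-t_days / strength)

    # Retention without any review, at a few checkpoints:
    for days in (0, 1, 7, 14, 30):
        print(f"day {days:>2}: {retention(days):.0%} retained")
    ```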

    There’s also some research, apparently by Tony Buzan, that demonstrates how well things are retained if you review the learning at 24 hours, 1 week, 1 month, 3 months and 6 months, and our experience definitely backs that up.
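
    As an illustration only (this is not Buzan’s own method, just one way to operationalise those review points), here is a minimal sketch that turns them into calendar dates for a given training day; treating a month as 30 days is an assumption:

    ```python
    from datetime import date, timedelta

    # Review points quoted above; months are approximated
    # as 30 days for simplicity (an assumption).
    REVIEW_INTERVALS = {
        "24 hours": timedelta(days=1),
        "1 week": timedelta(weeks=1),
        "1 month": timedelta(days=30),
        "3 months": timedelta(days=90),
        "6 months": timedelta(days=180),
    }

    def review_schedule(training_day):
        """Map each review point to a calendar date after training."""
        return {label: training_day + delta
                for label, delta in REVIEW_INTERVALS.items()}

    for label, when in review_schedule(date.today()).items():
        print(f"{label:>8}: review on {when.isoformat()}")
    ```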

    Happy to discuss if you want to. Email me at [email protected]

  2. Retention of knowledge and skills
    Mike

    There is only one answer to this: it depends.

    It depends on the quality of the training, the quality of the learning, the topic, context, prior experience, line manager support, consolidation and application opportunities, and so on. Treat any claims of percentages with a very large dose of salt. One thing is for sure: ‘use it or lose it’ is a good adage.

    One of the classic reasons learning interventions fail, and the potential value from otherwise good L&D is lost, is the lack of transfer of learning into action, behaviour change and impactful results.

    It is well worth looking closely at every step in the chain from learning to application. Some issues to consider include:
    clear post-event objectives (linked to the desired success criteria);
    giving space for meaningful action planning;
    strong support and challenge from a manager, mentor or peers;
    creating opportunities for consolidation, practice with feedback, and experimentation;
    passing the learning on (teaching others is a great way to embed learning as well as multiplying the potential impact);
    presenting back – reporting back what they have done and what impact it has had;
    further learning – not seeing the main event as the end but as a springboard for further learning such as reading, shadowing or a work-based project.

    One final point. Some learning moves out of our conscious memory and gets embedded into our habits, espoused values and reactions. So, even if we cannot recall the learning precisely, that does not mean it has had no impact on what we think and feel or how we may act. Though I suspect that we forget more than we remember, change less than we might, and fritter away learning gems as if they were free and easily replaced.

    Graham

  3. Couldn’t agree more with Graham
    Hi Mike

    I think Graham’s comments are extremely helpful. As a specialist in stress management and wellbeing, I find the key is engaging people sufficiently that they put into practice what they learn.

    I couldn’t quote any exact percentages, but I do know that unless people can see personal benefits in adopting the learning, it won’t happen, and even then it requires commitment and motivation to change old habits.

    I have personally found that asking delegates to set their own objectives at the end of seminars and then following through with 1-2-1 coaching is effective.

    This means that people choose aims that are directly relevant to them and once they start to see positive results from adopting what they learn, they are keen to continue with the process.

    I guess it’s the old adage that I use a lot: it’s fine sitting in the training, but what counts is the action, and if you want things to change, you have to do something different. If people grasp this early on in the training session, they find it easier to adopt the learning.

    I hope this helps.

    Annie Lawler

  4. Beware of Ebbinghaus’s numbers
    Ebbinghaus used mostly nonsense syllables, which, because they are meaningless, are likely to have very low retention.

    That’s why “it depends” is the correct answer. It depends on the learning material, previous learning, learners, etc., etc., etc.

  5. wary of Ebbinghaus
    Hello Will,
    Thanks for your comments about Ebbinghaus – I’d forgotten about the nonsense syllables.
    I agree that it ‘all depends’, but it’s interesting how often statistics are asked for. And most statistics come from academic studies, which are not the same as a real learning environment.
    I do know that we test our learners 24 hours after they have learnt 52 facts, and we regularly get retention levels of over 70% (and occasionally up to 100%), but it’s not a rigorously designed experiment, so it’s unlikely to pass the type of scrutiny required for publication.
    Have you got any other retention statistics?

    Warm regards,
    Stella Collins

  6. Retention Statistics
    First, here are retention statistics I WOULD NOT RECOMMEND:

    http://www.willatworklearning.com/2009/01/another-example-of-the-bogus-percentages-now-by-qube-learning.html

    Let me help us think about this. Suppose you ask easy questions on trivial information and people can remember 80% of that a day later. Alternatively, suppose you ask authentic relevant scenario-based questions (asking people to decide what to do in real-world situations) and they only remember 50% a day later. Which is better? Suppose the trivial was remembered at 50% and the scenario-based were remembered at 80%. Which is better? I can’t tell from this, can you?

    Here’s what I would do. Create an authentic test that really separates expert, good, and novice performers from each other in terms of what they should be doing on the job.

    Aim for (in this order):
    Real-world performance.
    High-fidelity simulated performance.
    High-fidelity scenario-based decision.
    Low-fidelity simulated performance.
    Low-fidelity scenario-based decision.
    Recall of critical information.
    Recall of perfunctory information.
    This, by the way, is stolen from my research-to-practice report Measuring Learning Results… (available for free at http://www.work-learning.com/catalog).

    The higher the level, the more authentic.

    Making a long, long, long story short: My point is that you probably should be testing your own retention.
