
data on learning retention from interactive multimedia


Does anyone know of any research or evaluation data on learning retention among learners using interactive CBTs and/or e-learning approaches that make heavy use of multimedia, versus traditional PowerPoint presentations?

We've been asked by a client to provide some data - is there anything out there to support or demolish my prejudices?
Alyson Morley

5 Responses

  1. Sight – Sound – and once more, with feeling!
    Hi Alyson

    Here are some AVERAGE figures which might be useful:

    – People will remember about 70% of a purely verbal presentation, 3 hours later; but as little as 10% after 3 days

    – People will remember about 75% of a purely visual presentation, 3 hours later; and around 20% after 3 days

    – People will remember about 85% of a mixed verbal/visual presentation 3 hours later; and as much as 66% after 3 days.

    And making the presentation interactive is reckoned to support an even higher rate of retention.

    These figures are from a book I wrote a little over 10 years ago, and I know they came from a reliable source – I just can’t remember what it was 🙁

    Still, you might try Googling those figures.

    As to “death by PowerPoint”, I suspect that nowadays (given a constant programme of improvement to the software) it is more a commentary on the lack of creativity of some of the people using PowerPoint than an accurate reflection of what the software can do in the right hands.

    You might want to look at this book which explains various ways of creating PowerPoint presentations that capture the audience’s interest rather than just bombarding them with bullet points:

    “Killer Presentations” by Nicholas B. Oulton, How To Books (2005).

    Sorry, I didn’t realise anecdotal evidence was being considered.
    In that case, over 25 years as a sixth form tutor, training course designer and trainer in the IT sector, all using mixed media, indicates that this approach improves learning for a majority of students/adult learners.

    Good luck

  2. Ask the learners
    No doubt those with multimedia axes to grind can find some data to help you, but if I tried I’m sure I could find data to show that some folk will not learn much from either approach – learning styles and all that. Then there’s the content, the background, previous experience and disposition of the learners…it gets a bit silly.

    So based on no data but lots of experience, what you propose makes intuitive sense. As to cost-effectiveness/ROI of each, well that’s yet another tin of oligochaetes.

    Maybe your intuition is what s/he calls prejudice or vice versa. Learning media decisions are never that binary.

  3. Quality of instruction more important than the medium
    Hi Alyson,

    I believe the first media study was conducted by the US army in 1947 (Hall and Cushing 1947). It concluded that the quality of instruction is more important than the medium when it comes to learning. This has been backed up by other studies. You may wish to read this article: http://www.hull.ac.uk/php/edskas/edtech/mtc2.pdf

  4. No Significant Difference!
    Dear Alyson

    Thomas L. Russell’s book, The No Significant Difference Phenomenon: A Comparative Research Annotated Bibliography on Technology for Distance Education (2001, IDECC, fifth edition), is a fully indexed, comprehensive research bibliography of 355 research reports, summaries and papers that document no significant difference (NSD) in student outcomes based on the mode of education delivery (face to face or at a distance).

    The No Significant Difference (NSD) website (http://www.nosignificantdifference.org) has been designed to serve as a companion piece to the book. The primary purpose of the NSD website is to expand on the offerings from the book by providing access to appropriate studies published or discovered after the release of the book. In addition to studies that document no significant difference (NSD), the website includes studies which do document significant differences in student outcomes based on the mode of education delivery. The significant difference (SD) entries on the website are further classified into three categories:

    – better results through technology – improvement in outcomes when curriculum is delivered at a distance;

    – better results in the classroom – improvement in outcomes when curriculum is delivered face to face; or

    – mixed results – some variables indicate improvement when curriculum is delivered at a distance, while others indicate improvement when curriculum is delivered face-to-face.

    I am sure your client will find the book and website an invaluable resource.

    Best wishes

  5. Learning retention
    Alyson
    The ‘no significant difference’ evidence is interesting and certainly left me much more informed in my confusion. But this is an area where I suspect there are no easy answers.
    The more I read about this the more convinced I am that there is ‘no significant similarity’. That is to say, the variables are so many that to draw firm conclusions about any one delivery method, or medium, compared to another is risky.
    Learning may comprise understanding, short and long term memory, application to the job, application to a new situation, degrees of competence or skill, attitudinal changes and so on.
    How well people learn will depend on motivation, the environment, organisational culture, personality, mood, social context, the topic, prior knowledge, group dynamics and peer pressure, familiarity with the method or technology, learning or cognitive style, opportunity to ask questions, practice, timing, style of the delivery…I could go on.
    In order to isolate the difference between any two methods (e.g. e-learning v PowerPoint, facilitated group work v coaching), the other factors would need to be the same in each case.
    The fact is that each learning experience is unique and comprises a rich mix of enabling and disabling factors.
    I used to think that if your sample was large enough to even out these other factors you could still make some generalisable rules, but I am less sure now.
    I now tend to think that we are best to focus on just three main things:
    1. Appropriateness – making sure the content, method, timing, etc. are as appropriate as possible
    2. Excellence – whatever you do, however you do it, make sure that it is of the highest quality you can realistically deliver (quality in this context being as a support for learning, not glitz)
    3. Psychological factors – creating the right climate, understanding why people learn or don’t learn, working with motivational issues, self doubt, resistance, etc.
    These are not easy to address but I wonder whether we should put more energy into understanding and facilitating this complexity rather than trying to set up any one method as being inherently better than another.
    Graham
