Seb Anthony

Evaluation studies into elearning

If anyone has any data on studies that have been carried out into the evaluation of e-learning methods of delivering training, in comparison with more traditional methods, I would be grateful. I would be particularly interested in any research that has been carried out in colleges of further education.
Maggie Young

4 Responses

  1. Best Practice College
    Maggie

    I believe Swindon College is very active in the area of e-learning and is hosting an event for the British Association of Open Learning to tell everyone about its work.

  2. No significant difference
    I am grateful to Adrian Snook, who first pointed the way to this link. All the research you want (or, in some cases, perhaps vendors would rather you didn't see) showing that there is no significant difference between e-learning, distance learning and other learning delivery methods in terms of effectiveness. Research annotated from 1928 to 2002; that should do it for you…
    http://teleeducation.nb.ca/nosignificantdifference/

    Clive Hook
    Clearworth – a class apart
    http://www.clearworth.com

  3. Evaluation studies into elearning
    Hi Maggie,

    Call me old-fashioned, a maverick or just plain narrow-minded, but I disagree with the 'No Significant Difference Phenomenon' argument. Whilst many accept these statistics, I challenge their merits.

    It is fair to say that whatever method is used to deliver a training/learning solution, the expected outcome can be achieved. However, the best method to adopt is dependent on several variables. The most important issue to address is that of individual learning styles. Combined with this element are individual skill-set, culture, background, language, attitude, gender, abilities and disabilities.

    When considering the use of e-learning, I would always look seriously at the expected outcome. If this method is intended to increase knowledge alone, or to enhance a training solution, it may well be appropriate. If I were looking at soft skills training, I would opt for a more traditional training scenario. However, if one is to learn a new system, especially new technology, a blended approach is often the best method.

    The difficulty in measuring effective training/learning is compounded by the subjective nature of many evaluation processes. The use of 'happy sheets' to evaluate training is, in most cases, a fruitless exercise.
    The true measurement of a learning experience is not confined to test results, but lies in the change in skills, knowledge and attitude. Training and learning are iterative processes that evolve through the interaction of people, environment and technology.

    A true TNA (training needs analysis) would give any training professional the answers as to which method is effective for individuals and what it costs the company in the long term.

    I would be very interested in other views on this subject.

    Kind regards,

    Clive

  4. Evaluation Studies
    This is not an easy one! But both Clives are correct. The "No Significant Difference" study, based largely, I believe, on Level 2 evaluation, seems to support the idea that the medium chosen can be a red herring. But of course it is Levels 3 and 4 that we are most interested in, and work still needs to be done in this area, although at a college level perhaps Level 2 would be sufficient. Not knowing the target audience, it's hard to say.

    Looking to the corporate sector, the Thomson Group conducted a study comparing the use of different methods employed in IT skills training, which was interesting. The website link is http://www.netg.com/NewsAndEvents/PressReleases/view.asp?PressID=49.

    What is needed is for more institutions and organisations to work on their evaluation strategies and to share their findings.