
Consistent measurement of Quality


We are a call centre operating across three locations. We have three teams of Quality Analysts at each site, responsible for listening to and checking the quality of interactions between our agents and customers. Currently, these teams measure the correct application of technical information. We are now introducing checks on how well our customers are engaged as far as soft skills are concerned. Our agents have been trained in the skills that we will now measure. Our task now is to train the quality teams on how to measure and rate the application of soft skills. The biggest challenge I see is how to achieve consistency and objectivity between the assessors on a subject that is, by definition, subjective. Can you help please? Thanks in advance...
Kevin Green

4 Responses

  1. Can be done
    I've done this, and I disagree that quality is subjective. You can make a list of objective indicators that agents have to demonstrate, and that list defines your quality scale.
    e.g. Did the agent greet the customer?
    Did the agent listen, i.e. remain silent, make verbal acknowledgements, pick up on what was said, summarise?

    Things like 'did the agent show empathy' are, I agree, somewhat subjective, but if you limit the rating to yes/no/not applicable rather than 'how much empathy', this overcomes the problem.

    You can use these to build a scale if you wish. Alternatively (and easier), use them all and then create a cumulative score at the end (there is a rough scoring sketch at the end of this thread).

  2. I agree plus…
    In addition to Juliet's thoughts, which are sensible, could I add the following. Having identified the aspects on which to judge agents' performance, consider a priority order (which aspects are most and least important), so that you don't rely too much on the overall score but also compare performance on the more important ratings.

    Two other things: get different QAs to rate the same performance (perhaps by videoing a few agents) and look for variation in their judgements to test the reliability of the process (there are statistical tests you can use to measure inter-rater agreement and improve the consistency of scores; see the sketch at the end of this thread), and tell your agents exactly how they are being scored and give them the chance to discuss their scores in order to help them improve.

    It's worth considering the possible types of rating scale. There is the Y/N/NA scale; the Likert type, which uses a range from strongly positive to strongly negative, with or without a central neutral score; and the semantic differential, which uses opposed statements (e.g. very talkative/very quiet) and scores people on a scale from one extreme to the other (again with or without a neutral midpoint). The advantage of the latter is that you can use scales where the midpoint is desirable rather than the extremes.

    If you want to chat through any of these points, I'm happy to do so.
    Good luck
    David
    (PS Have you thought of using the national occupational standards as the basis for your rating scales?)

  3. Further thoughts
    Both Juliet's and David's thoughts are correct. If you focus on behaviours rather than skills, you can make the observation of those behaviours much more objective, particularly if you use scalable responses rather than Y/N observations. Compiling a competence model based on behaviours might be a good way to proceed, and you can use this model to prepare a development centre for your proposed assessors so that you can build a common approach to the assessment of observed behaviours. We have done this many times and would be happy to share our experiences with you.

    Derek Day

  4. I agree
    I agree. We have quality scores in place across 12 headings. All of our call centre advisors get a bonus dependent on their scores over a three-month period. We train them towards these goals and retrain them according to their needs. We have noticed that all calls are dealt with at a much higher level and our customers are much happier with the response and care given.
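
A rough sketch of the cumulative-score idea from the first two responses, in Python. The indicator names, weights and yes/no/n-a ratings below are invented purely for illustration; they are not taken from any real scorecard.

```python
# Illustrative only: indicator names and weights are invented for this sketch.
# Each indicator is rated yes / no / not applicable (None); n/a items are
# excluded from both the achieved score and the maximum possible score.

INDICATORS = {
    # indicator: weight (higher = more important, per the priority-order point)
    "greeted the customer": 1,
    "listened (stayed silent, acknowledged, summarised)": 2,
    "showed empathy": 2,
    "gave correct technical information": 3,
}

def call_score(ratings):
    """ratings maps indicator -> True (yes), False (no) or None (n/a).
    Returns the weighted percentage score for one call."""
    achieved = 0
    possible = 0
    for indicator, weight in INDICATORS.items():
        rating = ratings.get(indicator)
        if rating is None:          # not applicable on this call
            continue
        possible += weight
        if rating:
            achieved += weight
    return 100.0 * achieved / possible if possible else 0.0

if __name__ == "__main__":
    example_call = {
        "greeted the customer": True,
        "listened (stayed silent, acknowledged, summarised)": True,
        "showed empathy": None,      # no opportunity on this call
        "gave correct technical information": False,
    }
    print(f"Weighted score: {call_score(example_call):.0f}%")   # prints 50%
```

Keeping each item to yes/no/n-a, as suggested above, and weighting the more important items means the final number can be compared between assessors and across sites.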
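David's point about statistical tests for score reliability can be made concrete with Cohen's kappa, a standard measure of agreement between two assessors rating the same calls, corrected for chance agreement. The sketch below uses invented yes/no ratings for ten calls; in practice you would feed in the two QAs' actual decisions on the same recorded calls. (If scikit-learn is available, sklearn.metrics.cohen_kappa_score performs the same calculation.)

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.
    1.0 = perfect agreement, 0.0 = no better than chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

if __name__ == "__main__":
    # Invented example: two QAs rate the same ten calls on
    # "did the agent show empathy" (y = yes, n = no).
    qa_one = list("yynyyynnyy")
    qa_two = list("yyyyynnnyy")
    print(f"Cohen's kappa: {cohens_kappa(qa_one, qa_two):.2f}")  # about 0.52
```

A kappa close to 1 suggests the assessors are applying the scale consistently; a low kappa flags the items (or the assessors) that need a calibration session before the scores are used for coaching or bonuses.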