I am working with an organisation that is linking performance to pay for the first time this year. My role is to help managers use their judgement when rating staff. The objectives were set last year with their staff under the old scheme, alongside capability levels, so the "how" and the "what" will form the basis of the rating. Working from that starting point, how do they have the conversation when reviewer and reviewee each have evidence that supports "their" assessment?
Ratings will range between "Improvement needed", "Strong performance" (consistently meets objectives and capabilities), and "Excellent and outstanding performance".
Are there criteria for "judgement"?
Any books or thoughts that might help my research, or anyone out there who has faced similar challenges?
Vicki Evans
2 Responses
Criteria for Judgement
Seems as though the die was cast last year. Using SMART objectives or something similar would have helped. Also, if you are helping managers to learn how to set objectives, it helps to ask them "How could a person re-interpret this objective?" to try to narrow the gap between intention and interpretation. However, it may be that for this year they will have to negotiate a compromise based on the performance evidence that is available – the quality of that evidence will determine how far this is possible.
If you can persuade them to set aside the time, another method for the future is to get both reviewer and reviewee to agree the indicators of success for each objective at the time the objective is set. If the objectives are repeated tasks and/or common to several people, then they could work on these indicators together and come up with positive/negative indicators of performance. As a way of generating the "criteria" you seek, you might be able to get them to do this for last year's objectives – but I suspect it may be too late. Interested to know what others think, though.
Good luck with it anyway.
Colin
Case Studies?
You could write a series of short case studies, each describing an individual and their behaviours over a given period. Then get the reviewers to read them and assess the individuals described against the "capability levels" you have set. The results can then be compared and significant variances explored to find out why an "incorrect" marker assessed the candidate in the way they did. I have used this process to establish whether people understand the definitions of the competences they are being asked to review, and also to get some view of their ability to judge situations objectively. The whole process also contributes to calibrating the scoring group and achieving some degree of consistency in their marking.
In my opinion, with a system using definitions like "Improvement needed, Strong performance (consistently meets objectives and capabilities), Excellent and outstanding performance", you are always going to be hostage to inconsistencies amongst the marking group.
If I were going to recommend one book in this area, the one I immediately think of is Marianne Talbot's Make Your Mission Statement Work.
http://www.amazon.co.uk/Make-Your-Mission-Statement-Work/dp/1857038207/ref=sr_1_1/202-9202100-3191820?ie=UTF8&s=books&qid=1176829113&sr=8-1