Adding to the conversation around this month's theme, Sarah Lewis poses a few evaluation questions.
To design the most impactful evaluation process for your project, workshop or other intervention, there are some questions that need careful consideration:
Who is the audience?
Who is the audience for the data or information you intend to produce? Possible answers include: the participants, the process or project commissioners, participants’ managers, future participants, the event designers, or an outside audience such as regulatory bodies or external funders. Each of these audiences will regard different information as valuable and will be looking for different things from your evaluation process. You need to be clear about who the information is for before you can begin to answer the questions that follow.
What is the purpose of the evaluation?
There are more answers to this question than may at first be apparent. Sometimes the answer is the straightforward one of measuring some change, but not always. For instance, if the audience is future participants, then the purpose of the evaluation may be to create interest, curiosity or excitement about the workshop. If the audience is the event designers, then the comparative impact of different sections may be of prime importance. If the audience is participants’ managers, then consequent actions may be more important to capture than internalised learning.
What should you measure?
This choice needs to be made in the context of the first two questions. For example, if my primary concern is a team-building exercise, then my priority is to demonstrate the improvement in team relations or dynamics. For this I will take before-and-after measures of the things we are supposed to be working on. Any objective for a workshop can be framed as a question, and any question can be designed so that people can give a rating answer. For example: ‘On a scale of 1-10, how well do we understand how to get the best out of each other?’ The questions can be asked and scored before and after a workshop. It is a rough-and-ready but highly illuminating measure that creates a shared awareness of both the current state and the progress made on ‘touchy-feely’ topics.
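To make the arithmetic concrete, here is a minimal sketch in Python. The questions and scores are invented for illustration, not drawn from a real workshop; it simply averages each question’s 1-10 ratings before and after the event and reports the shift:

```python
# Illustrative sketch: averaging before/after ratings (scale of 1-10) per question.
# The questions and scores below are invented examples, not real workshop data.

def average(scores):
    """Mean of a list of 1-10 ratings."""
    return sum(scores) / len(scores)

# Each evaluation question maps to the ratings gathered before and after the workshop.
ratings = {
    "How well do we understand how to get the best out of each other?": {
        "before": [3, 4, 5, 4, 3],
        "after":  [7, 6, 8, 7, 7],
    },
    "How openly do we raise problems with one another?": {
        "before": [4, 5, 3, 4, 4],
        "after":  [6, 7, 6, 7, 6],
    },
}

for question, scores in ratings.items():
    before, after = average(scores["before"]), average(scores["after"])
    print(question)
    print(f"  before: {before:.1f}  after: {after:.1f}  shift: {after - before:+.1f}")
```

Even this simple comparison gives the group a shared, visible record of movement on topics that are otherwise hard to pin down.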
Alternatively, if your workshop is about the safe handling of dangerous materials, then your priority will be to measure knowledge gained through an end-of-course test. If it is more about ongoing performance, for example supervisor coaching-skills training, then your evaluation will need to extend into the post-workshop period.
How long a period does your evaluation need to cover?
The shorter the time period, the more confidently you can say that the changes observed are down to the specific intervention. With team development workshops I want to show that change has already taken place by the end of the day, to demonstrate the value of the time everyone has invested in working together. So I design an evaluation around the day’s events: the audience is me, the participants and, to a lesser extent, the HR commissioner.
Were I the team’s manager, I would be much more interested in longer-term changes in team behaviour. The evaluation would need to extend over a period of months, with regular conversations in which participants are asked: ‘Give me an example of how you have used the experience of...in the last two weeks.’ People don’t automatically make these connections, so if you just ask the bald question ‘Have you put that learning into practice?’ you may well get a false reading of the impact of the event.
How can you triangulate your evaluation data?
The problem with extended evaluations is that the variables become ever more confounded. In other words, as time passes, you can say with less and less confidence that the outcomes you are seeing are to do with your intervention and not some other intervening factor.
One way around this is to triangulate your data. This means that, before you run your training, you predict a number of ways in which its impact will show. For example, with phone-sales training these might be: more time spent researching and preparing each call; fewer calls made, but of longer duration; lots of short calls when the right contact is not available; more time spent talking to budget holders; clearer articulation of product benefits; specific needs-assessment questions being asked; a higher percentage of calls leading to meetings; and so on. If you measure all of these, and they move in the direction anticipated for a successful intervention, then you have a better basis for saying it was the training that made the difference.
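As a rough sketch of that triangulation logic, again in Python and with invented indicator names and figures, you might record the predicted direction for each indicator alongside its before-and-after measurements and count how many moved the way a successful intervention would predict:

```python
# Illustrative triangulation check: did each indicator move in the anticipated
# direction? Indicator names and figures are invented for the example.

# Each indicator: (measure before training, measure after, predicted direction).
indicators = {
    "average call duration (minutes)":        (4.2, 6.8, "up"),
    "calls made per day":                     (38, 29, "down"),
    "share of calls reaching budget holders": (0.22, 0.31, "up"),
    "share of calls leading to meetings":     (0.08, 0.14, "up"),
}

confirmed = 0
for name, (before, after, direction) in indicators.items():
    moved_as_predicted = after > before if direction == "up" else after < before
    confirmed += moved_as_predicted  # a bool counts as 0 or 1
    status = "as predicted" if moved_as_predicted else "NOT as predicted"
    print(f"{name}: {before} -> {after} ({status})")

print(f"\n{confirmed} of {len(indicators)} indicators moved in the anticipated direction.")
```

The more independent indicators that move as predicted, the harder it becomes to attribute the pattern to some other intervening factor.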
How can you direct people’s attention to the value created?
The questions you ask in your evaluation will direct people’s attention to some features of the experience more than others, so it helps to be clear about the effect or knowledge you are hoping to create. If you are running a pilot, and it is important to effect improvements, then ask people to discriminate between sections of the event and to suggest improvements. However, if it is more important that they put what they learn into action, then direct your evaluation questions to usefulness and intended use. If you need to demonstrate change, then before-and-after measures are crucial. When a change is likely to unfold over time, it is a good idea to measure small initial changes in behaviour to help people notice that change is happening. And if it is about creating a vibe or a buzz around an event or experience, then ask people about highlight moments, feelings, excitement and so on.
Evaluation is a socially constructed process. Understanding it as a co-created, dynamic, value-laden social process means that we can thoughtfully design evaluations that give information, add value, affect perceptions, and create potential for action.
Sarah Lewis M.Sc. C.Psychol is an Associate Fellow of the British Psychological Society and a principal member of the Association of Business Psychologists. She is an acknowledged Appreciative Inquiry expert, a regular conference presenter and a published author; her books include ‘Positive Psychology at Work’ (Wiley) and ‘Appreciative Inquiry for Change Management’ (Kogan Page). Sarah specialises in working with organisations to co-create organisational change using methodologies such as Appreciative Inquiry and the practical application of positive psychology. Contact: sarahlewis@appreciatingchange.co.uk