Has 360 feedback gone amok?
David A Waldman; Leanne E Atwater; David Antonioni
Reprinted with permission
Executive Overview
Three hundred and sixty degree feedback programs have been implemented in a growing number of American firms in recent years. A variety of individual and organizational improvement goals have been attributed to these feedback processes. Despite the attention given to 360 feedback, there has been much more discussion about how to implement such programs than about why organizations have rushed to join the bandwagon or even what they expect to accomplish. Are companies doing 360 degree feedback simply because their competitors are? What evidence exists to suggest that 360 degree feedback prompts changes in managers' behavior? This article explores the outcomes that organizations can realistically expect and provides recommendations for implementing innovations such as 360 feedback to best ensure improvements will be realized and the process will be a success.
Such programs can involve feedback for a targeted employee or manager from four sources: (1) downward from the target's supervisor, (2) upward from subordinates, (3) laterally from peers or coworkers, and (4) inwardly from the target himself. Studies show that about 12 percent of American organizations are using full 360 degree programs, 25 percent are using upward appraisals, and 18 percent are using peer appraisals. Furthermore, it appears that the trend is growing. The most obvious reasons for this growth include the desire on the part of organizations to enhance management development, employee involvement, communication, and culture change.
The rise of 360 degree feedback can be traced to the human relations movement of the 1950s and 1960s, when organizations attempted to improve organizational processes and communication through various forms of what came to be known as organizational development. One popular form of organizational development was survey/feedback.
Survey/feedback involves a general employee survey of such factors as jobs, benefits, pay, and organizational communication. Traditional survey/feedback was geared toward overall organizational processes, while 360 degree feedback programs are targeted toward supplying information to specific individuals, e.g., supervisors and managers, about their work behaviors. Traditional survey/feedback was an upward feedback process. While 360 degree programs have relied heavily on upward feedback, at least some attempts have been made to gather peer, supervisor, and/or customer feedback.
Reasons for Adopting 360 Degree Feedback
A key purpose driving the present use of 360 degree feedback is the desire to further management or leadership development. Providing feedback to managers about how they are viewed by direct subordinates, peers, and customers/clients should prompt behavior change. Many managers have not received as much honest feedback as is necessary for an accurate self-perception. When anonymous feedback solicited from others is compared with the manager's self- evaluations, the manager may form a more realistic picture of his or her strengths and weaknesses. This may prompt behavior change if the weaknesses identified were previously unknown to the manager, especially when such change is encouraged and supported by the organization.
Other potential benefits of 360 degree initiatives are targeted ultimately toward organizational change and improvement. These initiatives reflect resource dependence theory, which views organizational change as a rational response to environmental pressures for change or strategic adaptation. The expectation is that by increasing managerial self-awareness through formalized 360 degree or upward feedback, an organization's culture will become more participatory and the organization will be able to react more quickly to the needs of internal and external customers. This should ultimately lead to increasing levels of trust and communication between managers and their constituents, fewer grievances, and greater customer satisfaction.
In addition to the logical, performance-based reasons for pursuing a 360 degree feedback program, at least three other reasons account for its proliferation.
Imitation
Institutional theory suggests that organizations make attempts to imitate their competition or other firms in an organizational network. This suggests that the choice to adopt 360 degree feedback reflects a response to environmental pressures. Such conformity gives a firm a sense of external legitimacy.
As an example, we worked closely with a large telecommunications firm to implement an upward feedback program. From the beginning, we sought to determine the precise reasons why the firm wanted to pursue this program. A consistent reason simply seemed to be a desire to keep up with the competition. Managers asked us to provide lists of other companies using upward feedback, almost as if that alone were reason to adopt it. Performance-based thinking was not absent, but a form of "satisficing" might have been at play whereby improved performance was expected simply by imitating others.
A similar phenomenon of imitation occurred years ago regarding quality circles and has occurred more recently with TQM. TQM can be implemented in a number of different ways. These include the use of various scientific and statistically-oriented approaches to solving quality problems, and an increase in activities directed toward understanding customers' perceptions and desires pertaining to quality. In an attempt to achieve external legitimacy, later adopters have often not been overly concerned about the specifics of TQM implementation and how or whether such specifics are actually linked with performance outcomes.
The recent implementation of teams in organizations provides another relevant example. Organizations have created teams to mimic the competition. Managers reason that if the competition is using teams and they are doing well, they should use them or fall behind. Little thought has gone into determining what improvements can be expected, or how technical and managerial systems would require change to support teams.
Institutional theory and imitation become more and more relevant as organizations face uncertain situations. Indeed, this may still be the case for 360 feedback since little research evidence exists regarding the precise methods and contexts in which it can positively affect organizational outcomes. In such situations, attempts to copy the actions of reputable others seem reasonable, and late adopters may not seriously question the potential effectiveness of 360 feedback. That is, they see no need to systematically demonstrate performance improvements before engaging in a widespread rollout.
A case can be made that given the increasing uncertainty, rapid change, and increasing competition facing organizations, managers feel that spending additional time and money testing the usefulness of such innovations prior to full implementation is not cost effective. It may be smarter in the long run to adopt the innovation and simply drop it later if it is unsuccessful. In short, we acknowledge the logic of attempting to imitate what other firms are doing with regard to 360 feedback initiatives. But imitating without clearly understanding what other firms have accomplished, or the likely outcomes for one's own firm, may be a questionable strategy.
360 Degree Feedback as Part of Performance Appraisal
A second alternative reason for the proliferation of 360 degree feedback is the desire to expand formal appraisal processes by making such feedback evaluative, thereby linking it directly with a manager's or employee's performance appraisal. Our most recent experiences suggest that there are pressures to make 360 feedback evaluative because companies want to get their money's worth.
In theory, the use of 360 feedback for evaluative purposes seems logical. An individual held directly accountable for ratings received will be more motivated to take action to make improvements based on the feedback. Unfortunately, problems exist that may negate the possible benefits of 360 degree feedback if it is made evaluative. Employees may rebel and try to sabotage the program. For example, in the case of upward feedback, implicit or even explicit deals may be struck between managers and subordinates to exchange high ratings. Such maneuvering is less likely when the feedback is being provided strictly for developmental purposes.
Research has demonstrated that when ratings become evaluative rather than purely developmental, some raters (up to 35 percent) change their ratings.
UPS tested the potential of using 360 ratings for evaluation. The company asked employees after they had provided upward ratings whether they would have altered the ratings if they knew they would be used as part of their managers' formal performance evaluations. The findings suggested that some individuals would raise, and some would even lower, ratings if they were to be used for evaluation. Changes in ratings were made primarily in order to affect outcomes, i.e., keep the manager out of trouble, or in some cases to get the manager in trouble.
Three hundred sixty degree ratings are typically collected anonymously. Ratings that are not anonymous may differ from those that are. Ratings become less genuine if the rater believes he or she will be identified. Not surprisingly, some raters indicate that they would raise their ratings if they were going to be identified to their managers. Anonymous ratings also have potential drawbacks. If anonymous 360 ratings were used as part of the documentation for a personnel action involving a manager (e.g., demotion, dismissal, or a denied promotion or pay raise), that manager could potentially make a legal case against the firm. Since the ratings are anonymous, they cannot be traced to specific individuals, and hence their validity could come into question in a court action. In contrast, traditional performance appraisal ratings are typically signed by the rater, i.e., one's supervisor, making them more verifiable.
A rating should be used for appraisal purposes only when the raters are committed to the goals of the organization, rather than merely to their own personal goals. This is often not the case, as the rater is primarily concerned with his or her own short-term needs. For example, a subordinate may only provide high upward feedback ratings to a manager who maintains the status quo, even though the individual and the organization could use a high degree of challenge.
This suggests another caution regarding ratings: be careful what you measure. If a manager's 360 ratings depend on creating a positive or even relaxed climate, these factors may actually detract from work directly geared toward bottom line results. For example, customers may call the manager away from the office frequently, or necessitate many hours on the phone, thus making the manager less available to employees. If this customer-oriented behavior is not part of the criteria measured and availability to subordinates is part of the criteria, customer-oriented behavior will diminish over time and be replaced by more frequent interactions with employees. Yes, relationships with employees may improve, but at what cost?
Some companies have abandoned the use of 360 feedback for appraisal purposes. For example, half of the companies surveyed in 1997 that had implemented 360 degree feedback for appraisal had removed it because of the negative attitudes from employees and the inflated ratings.
Not all experts agree that using 360 degree feedback for evaluation is a problem. If traditional appraisal depends on the opinion of a supervisor who is not always in the best position to judge, and is never anonymous, wouldn't 360 appraisal be an improvement even if not always totally honest? Ratings from multiple sources also usually produce more reliable data. Data from a variety of organizations indicated that ratees were more satisfied with multi-rater appraisal than single-rater appraisal. Obviously, some ratees believe that 360 appraisal is an improvement over traditional appraisal, while others do not. This belief likely stems from such factors as levels of trust in the organization, and the type of traditional appraisal used.
We would suggest caution in adopting 360 appraisal. Use 360 feedback strictly for development at first. Let managers and others become comfortable with the process. Once employees see that negative repercussions are unlikely and managers see that the information truly is helpful, they will be less apprehensive about using 360 ratings for evaluation.
Using 360 Degree Feedback for Political Purposes
A third reason that companies engage in 360 feedback is politics. There is often competition among individuals and groups over ideas and the individuals or groups pushing those ideas. Individuals or groups try to impress higher level management with their innovative ideas and plans. A manager with authority to make an implementation decision may attempt to appropriate credit. In an organization that we helped to implement upward feedback, we communicated initially with a training director. Once his boss bought into the plan, the boss assumed ownership and credit. Indeed, the training director eventually left the organization.
Similarly, a company as a whole may adopt 360 degree feedback to manage an impression. Organizations may embrace 360 feedback to convey an impression of openness and participation to clients or recruits when, in fact, this is not part of the organization's culture. While the innovations themselves may not be very successful, the political gains from impression management may be valuable.
Where are the Data?
A problem related to the absence of purpose in implementing 360 feedback is the absence of data, as well as the resulting dearth of knowledge on how or even whether 360 feedback really works. Recent telephone interviews with individuals who had spearheaded the implementation of 360 degree feedback in a number of Fortune 500 companies revealed that the availability of effectiveness data was discouraging. The only data available were employee and manager perceptions of the process, random anecdotes, or, on rare occasions, changes in employee ratings of managers before and after upward feedback. Recent research in a retail store setting has shown that subordinate and peer ratings of managers increased after managers received 360 feedback, but managers' ratings from their supervisors and customers did not change. In addition, this research revealed that store sales volume was unaffected by the 360 feedback intervention.
There are some data suggesting productivity improvements among university faculty and improved customer satisfaction ratings following the implementation of 360 feedback. However, the research generating these data did not include a control group, so it is difficult to conclude that the 360 process was solely responsible for the improvements.
We expect that in the future, few organizations will be able to afford to engage in costly training or development activities purely altruistically, or on the basis of speculative success. Rather, decision makers and participants will need to be convinced that the development effort can be expected to have a positive impact on the bottom line.
Evaluating 360 Degree Feedback Efforts
The above arguments suggest that little is known about the effects of 360 degree feedback programs in organizations. The following recommendations are offered in the hope of realizing more systematic knowledge regarding ways to ensure the effectiveness of 360 degree feedback programs. These recommendations should apply equally well to other organizational innovations, such as TQM and teams.
Make Consultants/Internal Champions Accountable for Results and Customization
How often are the people who are pushing an organizational innovation told that they must go into the process with specific goals, realistic timetables, and a plan for measuring results? We would argue that this is a rare event. Instead, consultants may jump on the 360 bandwagon, put together enticing packages, and subsequently feel reluctant to charge companies for evaluation. They may also fear demands from managers to explain the need for evaluation.
The result is a rush to implementation without a clear understanding of needs or expected results. Consultants, both internal and external, may simply implement the programs or activities of other organizations without systematic testing. One common example is the use of off-the-shelf 360 surveys. Although leadership may be a common factor of importance in, say, a mining organization, a police agency, and a high-tech think tank, a one-size-fits-all approach to survey items is not likely to be effective. The items will need to be customized.
Conflicts of interest can result when program evaluation is left in the hands of people who have either marketed or championed a process. Care must be taken to make sure that the evaluation process is objective, and that the data are verifiable.
Engage in a Pilot Test Initiative
Firms should learn to crawl before they walk. Managers tend to want immediate action, while a pilot study may last a year or longer. However, the benefits of a pilot study can be immense. In organizations with traditional hierarchies, the inversion of the organizational pyramid that accompanies 360 degree feedback can be threatening and problematic. Pilot studies can identify these threats and problems.
A pilot test we ran in a few departments before full-scale implementation of upward feedback in a large telecommunications firm identified problems with our original survey items, which we were then able to modify. We discovered both employee and managerial resistance and fear, which we were able to counteract with general information sessions for all employees in the targeted departments. We identified concerns with confidentiality and anonymity that stemmed from an earlier survey intervention by another company where breaches of confidentiality were suspected. We were able to present our strategies for ensuring anonymity and confidentiality to ease these concerns. Because these problems were corrected, we were able to implement a relatively smooth rollout across the division. In addition, we were able to follow up the pilot group before full-scale implementation and obtain some initial effectiveness data. Our ability to demonstrate at least some success on a small scale helped convince reluctant managers that the rollout could be beneficial to the company.
Create Focus Groups to Identify Effectiveness Criteria Measures
The list of possible effectiveness criteria measures for an intervention such as a 360 degree feedback program can be quite extensive. Measures should focus on activity levels as well as results. Possibilities include:
1. ratee and rater reactions to the program, i.e., the extent to which they believe the process is valuable;
2. response rates (obviously, a program cannot succeed if potential raters do not respond when surveyed);
3. grievance rates;
4. customer satisfaction;
5. employee satisfaction;
6. absenteeism/turnover;
7. recruiting success, e.g., strong qualifications of applicants and new hires;
8. work behaviors, e.g., leadership, communication, employee development efforts;
9. work performance, e.g., individual work output or contributions to work unit output; and
10. positive image with clients, customers, competitors, and suppliers.
One way to identify criteria is to form focus groups. The groups could be asked what they think would improve if those being rated got better at the dimensions on which they were being rated. The groups should be pressed for specifics and then guided to systematically monitor progress on the identified criteria before and after the innovation is fully implemented.
Evaluate Using a Pre-Post Control Group Design
Evaluation of the process is crucial to ensure that it is aiding in the accomplishment of the organization's goals, and working as intended. At least in the early stages, the organization should adopt a pre-post control group design to assess the impact of the process. Behaviors and outcomes should be measured prior to feedback, as well as after feedback, and some individuals should be selected to take part while others are not.
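To make the design concrete, a pre-post control group evaluation can be summarized with a simple difference-in-differences calculation: the change in the feedback group's mean rating minus the change in the control group's mean rating. The sketch below is a minimal illustration; the function name and the rating data are hypothetical, not figures from any study discussed here:

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate: the treatment group's change
    in mean score minus the control group's change in mean score."""
    mean = lambda xs: sum(xs) / len(xs)
    treat_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treat_change - control_change

# Hypothetical mean upward-feedback ratings (1-5 scale) for managers
# who received 360 feedback (treatment) and managers who did not (control).
treat_pre = [3.1, 3.4, 2.9, 3.0]
treat_post = [3.6, 3.8, 3.3, 3.5]
control_pre = [3.2, 3.0, 3.3, 3.1]
control_post = [3.3, 3.1, 3.3, 3.2]

effect = diff_in_diff(treat_pre, treat_post, control_pre, control_post)
print(effect)
```

A positive estimate would suggest the feedback group improved more than the control group; in practice, significance tests and controls for confounding factors would also be needed before drawing conclusions.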
This recommendation may cut against the grain of typical managerial thinking. Many managers assume that if something is worth doing, it is worth doing for everybody right now. Managers also do not like their people being used as guinea pigs. We urge a reconsideration of this line of thought. In fact, this evaluation design could be implemented simply by beginning the process in stages in various parts of the organization.
Clearly, more experimental field studies on 360 degree feedback are needed. Research partnerships between academic institutions and business organizations should be established. Research is needed on whether improvements in managerial or leadership behaviors cause improvements in performance and on whether the improvements have an effect on employee satisfaction, absenteeism, and turnover. With proper control for other factors that could affect the results, it is possible to determine what needs to be done to improve the 360 degree feedback process and, ultimately, whether the process is worth the time, money, and effort.
Be Careful What You Measure and How It's Used
What gets measured (and rewarded) drives behavior. Even when 360 degree feedback ratings are used strictly for developmental purposes, individuals will tend to modify behaviors in ways to receive more positive ratings. Therefore, it is extremely important that 360 degree surveys reflect those behaviors that the organization values most highly. Care should also be taken to ensure that behaviors measured are closely tied to the accomplishment of the organization's goals.
Student evaluations of teaching should encourage better teaching styles and classroom relationships. Communication between instructors and students should also improve. However, it can be argued that the process may also encourage behaviors and outcomes that are not always beneficial. Instructors may avoid challenging students for fear of upsetting them and obtaining lower student evaluations at the end of the semester. Assignments and readings may be made easier, and faculty may be hesitant to disagree with students' comments or concerns for fear of appearing disagreeable. Moreover, sensing that students dislike ambiguity, instructors may "teach the test" (i.e., virtually announce what will be on exams through the use of study guides) and provide a lockstep method of accomplishing assignments and research projects. The growing phenomenon of grade inflation should not be surprising. However, the ultimate customers, society and future employers, need and seek students who have been challenged and can adequately deal with ambiguity in solving problems. Future employers and graduate schools want to be able to look at grade point averages that have meaning. Although this example of upward feedback can provide valuable information for its recipients, we need to realize that people generally modify their behavior toward what gets measured and rewarded. Such behavior may not always lead to the realization of long-term goals and outcomes.
Train Raters
Almost all 360 degree instruments rely on rating scales. Research has clearly established that raters commit different types of rating errors, such as rating too leniently or too harshly. Some raters play it safe by consistently using the central rating point. Other errors include halo effects (generalizing from doing well in one area to perceptions of doing well in other areas) and recency effects (weighting heavily behavior observed most recently). Raters need training in how to complete forms and how to avoid rating errors. Training should also cover the objectives of the surveys and the overall process. UPS, for example, explains the appraisal feedback process, and discusses how data will be used.
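As a rough illustration of how some of these error patterns might be screened for in collected ratings, the sketch below flags a rater whose scores show a very high mean (possible leniency) or very low spread (possible central tendency). The function name and thresholds are hypothetical assumptions, not validated cutoffs:

```python
from statistics import mean, pstdev

def flag_rating_patterns(ratings, lenient_mean=4.5, central_sd=0.3):
    """Return warning labels for one rater's scores on a 1-5 scale.
    Thresholds are illustrative assumptions, not validated cutoffs."""
    flags = []
    if mean(ratings) >= lenient_mean:
        flags.append("possible leniency")
    if pstdev(ratings) <= central_sd:
        flags.append("possible central tendency")
    return flags

# Hypothetical raters' scores across ten survey items.
print(flag_rating_patterns([5, 5, 4, 5, 5, 4, 5, 5, 5, 5]))  # high mean
print(flag_rating_patterns([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]))  # zero spread
```

Flagged raters would not be penalized; the point is to target training where it is needed. Halo and recency effects require richer information, such as the behavior logs discussed below, to detect.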
A few medium-sized organizations in the Midwest have indicated to us that they are providing raters with frame-of-reference training and teaching raters how to keep a log of observed behaviors that correspond with survey items. Frame-of-reference training covers the roles, responsibilities, and accountabilities of the ratee. Survey items are linked to roles and responsibilities in an attempt to help raters create a common frame of reference when they rate a ratee.
To improve observations, raters are given surveys to keep throughout the year and are instructed to record their observations of incidents that they would use to help them determine their final ratings. Raters are encouraged to take their record of work incidents and supplement their ratings with written feedback. According to the HRM directors in these organizations, raters thus far are willing to take risks to provide ratees with specific written comments.
Furthermore, the amount of written feedback has remained about the same over the last three years. Finally, ratees have indicated that the written feedback is more valuable to them than numerical ratings.