This is to certify that the dissertation entitled

PERFORMANCE-APPRAISAL SYSTEMS IN LAW ENFORCEMENT AGENCIES

presented by Frank V. Hughes has been accepted towards fulfillment of the requirements for the Ph.D. degree in Criminal Justice and Criminology.

Major professor

MSU is an Affirmative Action/Equal Opportunity Institution.

PERFORMANCE-APPRAISAL SYSTEMS IN LAW ENFORCEMENT AGENCIES

By

Francis Hughes

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

School of Criminal Justice

1990

ABSTRACT

PERFORMANCE-APPRAISAL SYSTEMS IN LAW ENFORCEMENT AGENCIES

By

Francis Hughes

The purpose of the study was to assess the state of the art of performance-appraisal systems in police agencies throughout the United States. In addition to examining the component aspects of individual systems, the writer examined the possible effect of agency size, type, region, and presence or absence of a collective-bargaining agreement on the design and implementation of various types of performance-appraisal systems. The writer also examined internal relationships that might exist within the survey data. Three specific internal variables were chosen for analysis: level of training provided raters, presence of a systematic job analysis, and the appraisal format itself.
A final purpose was to conduct a qualitative analysis on individual performance-appraisal instruments submitted by the sample population.

The archival data for the study were taken from a 19-question survey developed by the Michigan Department of State Police. The sample population was 400 police agencies (50 state, 200 municipal, and 150 county). The sample population had representation from all 50 states and reflected small, medium, and large departments in terms of total enlisted officers. Three hundred police agencies returned the survey questionnaire, and a majority of those included with it a copy of their appraisal instrument.

Findings from the research study indicated that a large majority of police agencies had had their appraisal system in place for at least five years and used it primarily for employee feedback and motivation. Half of the sample population had conducted a job analysis before developing their appraisal system. The demographic variables of size, type, region, and presence or absence of a collective-bargaining agreement had little effect on the design of performance-appraisal systems. The data showed that agencies that conducted a systematic job analysis were more likely also to provide formal training to raters within the organization and more likely to use a behaviorally anchored rating scale. Qualitative analysis of the actual performance-appraisal instruments indicated that the most common method used by police agencies was the graphic rating scale, with variance in the descriptive technique used to indicate the degree of a given activity or trait.

ACKNOWLEDGMENTS

I would like to thank my children, Sean, Brigid, and Colin, for standing behind me through the six long years of classes and weekend trips to the library. Without their willingness to sacrifice so that I might have the time for school, this dissertation would not have been written. I also would like to thank my chairman, Dr.
John Hudzik, for his guidance and steady hand during the dissertation stage, as well as the committee members, Drs. Robert Trojanowicz, Cleo Cherryholmes, and Theodore Curry. Most of all I want to thank my wife, Ann, for all her support and love during the last six years. Her limitless patience and undying belief in my ability helped me over the hurdles and through the times of doubt and indecision.

TABLE OF CONTENTS

LIST OF TABLES .......................

Chapter

I. INTRODUCTION ....................
     Purpose and Framework of the Study ........
     Research Questions ................
     Overview of the Study ...............

II. LITERATURE REVIEW ..................
     Performance Appraisal: A Critical Tool ......
       Public-Sector Pressures ............
       Police Agency Performance Appraisal ......
     Performance Appraisal: General Issues ......
       Goal Setting and Control ............
     Uses of Performance Appraisal ...........
       Feedback ....................
       Rewards .....................
       Criteria for Validation ............
       Promotion ....................
       Uses in the Public Sector ...........
       Uses in the Private Sector ...........
       Uses in Law Enforcement ............
       Conclusions Regarding Uses of Performance Appraisals ....
     Development of Performance-Appraisal Systems: Analyzing the Job ....
       Methods of Obtaining Job Information ......
       Methods of Job Analysis ............
       Purpose of Job Analysis ............
       Criterion Development ..............
       Objective Measures of Performance .......
       Subjective Measures of Performance .......
     Development of Performance-Appraisal Systems: Issues of Reliability ....
       Rater Biases: Effect on Reliability ......
       Training to Increase Reliability ........
     Development of Performance-Appraisal Systems: Issues of Validity ....
     Development of Performance-Appraisal Systems: Sources of Rater Information ....
       Immediate Supervisor ..............
       Multiple Raters .................
       Peer Assessment .................
       Self-Assessment .................
       Subordinate Assessment .............
     Development of Performance-Appraisal Systems: Methods of Appraisal ....
       Trait-Based Methods ..............
       Behavior-Based Methods .............
       Forced-Choice Method ..............
       Behaviorally Anchored Rating Scales ......
       Management by Objectives (MBO) .........
       Conclusions ...................
     Development of Performance-Appraisal Systems: Legal Considerations ....
       Uniform Guidelines on Employee-Selection Procedures ....
       The Role of Job Analysis ............
       Behavior-Based Recommendations .........
       Communication of Standards ...........
       Supervisor Training ...............
       Documentation ..................
       Conclusions ...................
     Performance-Appraisal Checklist ..........

III. METHODOLOGY .....................
     Background ....................
     The Sample Population ...............
     Survey Questionnaire ...............
     Research Questions ................

IV. ANALYSIS AND DISCUSSION OF RESULTS .........
     Introduction ...................
     General Findings .................
       Summary of Findings ..............
     Demographic Variables ...............
       Agency Type ...................
       Agency Size ...................
       Geographic Region ................
       Collective-Bargaining Agreement ........
       Summary .....................
     Internal Variables ................ 98
       Level of Training ................ 98
       Systematic Job Analysis ............ 103
       Performance-Appraisal Format .......... 109
       Summary ..................... 110
     Qualitative Findings ............... 112
       Comparison of Formats .............. 112
       Performance Dimensions/Traits ......... 114
       Academic Standard or Definition ........ 116
       Summary ..................... 117

V. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS ...... 119
     Introduction ................... 119
     Summary ...................... 119
       Purpose and Method of the Study ........ 119
       The Sample Population ..............
121
       Survey Questionnaire .............. 122
       Research Questions ............... 123
       Qualitative Findings .............. 132
     Conclusions .................... 133
     Policy Recommendations for Law Enforcement Administrators ....... 148
     Recommendations for Future Research ........ 153

APPENDICES

A. POLICE AGENCIES IN SAMPLE POPULATION ........ 155
B. SURVEY QUESTIONNAIRE ................ 162
C. MICHIGAN STATE POLICE COVER LETTER ......... 166
D. QUALITATIVE ANALYSIS SUBSAMPLE ........... 167

BIBLIOGRAPHY ........................ 169

LIST OF TABLES

Table                                                           Page

 1. Summary of Police Agency Survey Returns ............. 83
 2. Employee Performance-Evaluation Systems in Law Enforcement ... 84
 3. Demographic Variables x Survey Questions ............ 94
 4. Informal Training Provided to Rater x Survey Questions ..... 99
 5. Formal Written Instructions Provided to Rater x Survey Questions ... 101
 6. Formal Training Session Provided to Rater x Survey Questions ... 101
 7. Job Analysis Conducted on Officer Rank x Survey Questions ... 104
 8. Job Analysis Conducted on Corporal Rank x Survey Questions ... 105
 9. Job Analysis Conducted on Sergeant Rank x Survey Questions ... 106
10. Job Analysis Conducted on Lieutenant Rank x Survey Questions ... 107
11. Job Analysis Conducted on Captain Rank x Survey Questions ... 107
12. Job Analysis Conducted on Major Rank x Survey Questions .... 108
13. Behaviorally Anchored Rating Scale x Survey Questions ..... 110
14. Performance Dimensions/Traits ................... 115

CHAPTER I

INTRODUCTION

Organizations are judged by their records of achievement. The private enterprise is judged by its capacity to make a profit for stockholders, the public agency by its ability to provide prompt and courteous service, and the police agency by its ability to suppress unlawful activity.
In each of these activities, the factor that determines profit or loss, efficiency or ineffectiveness, order or disorder, is the personnel who make up the organization. For any organization to accomplish its mission and goals in the most efficient manner, it must be able to use effectively the knowledge, skills, and abilities of its most costly and important resource--manpower. To do this, the capabilities of employees must be known, and a means must be provided to assess each employee's level of performance in comparison with what is desired. This process is performance appraisal.

It may seem obvious that employees must be evaluated. Almost everyone who has had work experience will, nonetheless, observe that it is equally obvious that performance is rarely seriously or systematically evaluated. This paradox was an issue during civil service reform efforts that took place in the United States in the 1970s. A major concern of these reforms was to increase productivity in government while at the same time limiting the number of government employees. This meant that performance levels of individual employees and of agencies had to rise. Consequently, performance had to be evaluated, diagnosed, and improved.

The evaluation of employees and their performance occurs naturally. Almost as a matter of course, we judge each other's work and attitudes, and rank people along some scale of best to worst. What does not occur naturally and frequently is a systematic evaluation and the communication of results to those being evaluated.

The use of systematic methods of appraising employee performance began in the public sector. The federal government developed forms in the 1850s to use in rating employees according to personal traits and work habits. When New York City established its civil service system in 1883, it also adopted a procedure for employee evaluation. School systems began using forms for evaluating teachers in 1896.
In contrast, the private sector did not seriously use performance evaluation until just before World War II.

The federal Civil Service Reform Act of 1978 includes a provision mandating performance evaluations:

     Each agency shall develop one or more performance appraisal systems which: (1) provide for periodic appraisals of job performance of employees; (2) encourage employee participation in establishing performance objectives; and (3) use the results of performance appraisal as a basis for training, rewarding, reassigning, promoting, demoting, retaining, and separating employees.

Reform legislation at the state and local levels during the 1970s also included language mandating or encouraging performance appraisals. Specific provisions varied, but the basic intention was to affirm the importance of performance appraisals.

A major reason for the ineffectiveness of performance evaluations is the lack of a response to supervisors and employees who have asked, "So what?" If a supervisor is not convinced that a performance appraisal will be put to some use, he or she is not likely to treat the appraisal process seriously. It is not pleasant to face an employee and point out where he or she has failed. The incentive generally is to avoid such unpleasant confrontations and either exaggerate the positive aspects of an employee's performance or fail to conduct the appraisal at all. In fact, performance evaluations frequently have been divorced from the rest of personnel management, thereby inviting employees and supervisors to treat the appraisal process frivolously.

Ironically, the potential uses of sound performance evaluations are so numerous and important that there should never be a problem responding to the question, "So what?" The possible and desirable uses include rewards and sanctions that clearly affect both the organization and employees within the organization.
Performance appraisal is a critical element of human resource planning that must occur within any organization. Planning should be based in part on an understanding of the strengths and weaknesses of the existing workforce. Part of the planning process is for top management to set the goals for the organization. Once goals are established, performance appraisal becomes an auditing procedure that generates the necessary information to control and direct the process of an organization. Within the process, overall goals are translated into more specific objectives, objectives that should serve as guideposts for employee performance.

When performance appraisal is considered within its utility to an organization, several operational objectives are critical. The first is the ability to provide adequate feedback to employees to improve subsequent performance. This function is so simple that it is often overlooked in the day-to-day world of supervision. It requires not only an understanding of what is expected by both employee and supervisor but, more important, a communication system that provides constructive feedback so that future performance can be improved. An outgrowth of this objective is the identification of employee training needs. Unsatisfactory performance is often the result of lack of training, and when these needs are identified, positive corrective steps can be taken to give employees every opportunity to succeed in their jobs.

A second objective of performance appraisal is the validation of examinations used to select employees. The validation of selection tests depends, in part, on data indicating how well current employees are performing. The performance-appraisal process requires measures of employee output or job-related dimensions that tap the behavioral domain of the job analysis, the facilitation of interrater reliability, professional and objective administration, and continual rater observation of the employee's performance.
A third objective of performance appraisal is the identification of criteria used to allocate organizational rewards such as merit pay, and punishments such as demotions and dismissals. Effective reward allocation may require a valid performance-appraisal system that ranks employees according to a quantifiable scoring system. Sufficient variance in scores is essential to differentiate among performers. In allocating rewards, the system must also have credibility with all employees. The same performance-appraisal format may also be used for disciplinary action, ranging from a verbal warning to dismissal. Thus, the documentation required for such action must also be facilitated by the same performance format. With the passage of the federal Civil Service Reform Act, with its provisions tying performance to merit pay and bonuses, the importance of performance appraisal in the public sector has been greatly heightened.

A final objective of performance appraisal is the identification of promotional potential. This requires that job-related performance appraisals have several dimensions in the incumbent's job, the same as or similar to the job to which the incumbent may be promoted. This indicates the employee's ability to assume increasingly difficult assignments. When used for promotional purposes, the performance appraisal should also rank employees comparatively, measure their contribution to organizational objectives, and perhaps document their career aspirations and long-term goals.

Regardless of the numerous and important uses for sound performance-appraisal information, many employees, supervisors, and administrators still ask the question "So what?" Perhaps a partial answer lies within the unique mission and goals of a particular organization. Or perhaps it is the level of importance given to the performance-appraisal process by top management within the organization.
It was this paradox that first sparked the writer's interest in performance-appraisal systems. Police agencies were chosen as the organizational model primarily because of the writer's 18 years' experience in the Michigan Department of State Police. As a career officer, the writer has experienced both sides of the performance-appraisal process. This experience has provided some insight into the reasons why performance appraisal, for the most part, is not systematically designed and implemented among police agencies in the United States.

One way to gain insight into the level of importance attributed to performance appraisal within the law enforcement community is to review current police supervision and administration texts. In their introductory police administration text, Sheehan and Cordner (1979) discussed performance appraisal within the context of the staffing function, one of the six basic management functions. The two-page discussion focused on feedback being the evaluative tool by which the performance of people within an organization is measured. Mention was made that a variety of techniques are used to evaluate employees, but there was no discussion as to what techniques might be more suited to a particular type or size of police agency. The major emphasis seemed to be that police departments, not administered from a systems perspective, make little or no effort to evaluate employee performance. The nuts and bolts of designing, implementing, and evaluating a formal performance-appraisal system within police agencies were not discussed.

In The Police Manager, Lynch (1986) failed to discuss the role of performance appraisal within police agencies. There was a rather thorough discussion of management by objectives (MBO) and the assessment-center process.
MBO was discussed as a management tool to aid in the decision-making process, a process whereby police managers and their subordinates can identify areas of growth, set standards to be reached, and measure the results against standards that have been set. The assessment-center process was discussed as a means to observe and evaluate candidates for promotion over a specific period of time. Positive and negative aspects of the assessment process were detailed as a guide for police managers. Both of these techniques are within the context of performance appraisal but fall short of providing necessary information on the design, implementation, and evaluation of a performance-appraisal system within a police agency.

In Police Administration: Structures, Processes and Behavior, Swanson and Territo (1982) treated performance appraisal with a similar emphasis. The three pages of text dedicated to performance appraisal were presented within the context of human resource management. The various purposes of performance appraisal were reviewed, and the authors stressed that the first priority of police management is to install an evaluation system that has a good chance of delivering. How that should be accomplished, in terms of a systematic step-by-step process, was not discussed.

A more thorough discussion of performance appraisal was presented by Iannone (1987) in Supervision of Police Personnel. Two chapters were dedicated to the subject. Topics included objectives of the rating system, reasons for system failure, techniques for gathering performance data, setting standards for performance, and the validity and reliability of various rating systems. The complexity involved in any performance-appraisal system was stressed, as was the need for any police agency to approach the design, implementation, and evaluation of a particular system as a crucial personnel function.
It is not the writer's intention to use the previous discussion to indict police supervision and management textbooks. However, the disparity that is evident in terms of the importance given to performance appraisal may be a partial explanation for its not being viewed as a crucial personnel function by many police departments.

Almost all police departments in the United States conduct some form of performance appraisal on their officers. Public accountability and prioritization of resources require police managers to make at least a minimal effort to document what officers are doing on a daily, monthly, or yearly basis. The problem lies not in the decision to appraise employees, but rather in the manner in which a performance-appraisal system is designed, implemented, and evaluated.

Many problems or stumbling blocks stand in the way of an effective performance-appraisal system. Many are common to all organizations, but the writer's interest was directed toward those that are particular to police departments. The first stumbling block is the lack of commitment to performance appraisal from the chief, director, or sheriff. Too often the performance-appraisal system is viewed as a necessary evil that has to be done once a year, rather than a positive mechanism to accomplish departmental goals and objectives. If there is commitment from the top, the necessary time and personnel will also be given to the design, implementation, and evaluation of a performance-appraisal system.

A second stumbling block is police departments' reluctance to admit that performance appraisal is a technical and complicated personnel function. It is no longer a process of sitting down once a year with officers and reviewing their "numbers," such as tickets, arrests, complaints taken, accidents investigated, and so on. It requires expertise that, for the most part, is not available within the rank and file of most departments.
Rather than seek assistance . . .

. . . conducted, the method of performance-appraisal training that is used, type of employee input obtained before design, how results are communicated to ratees, and the types of formats used to collect the appraisal information. This aspect of the study was descriptive in nature; the writer examined frequency data obtained from a survey questionnaire mailed to 400 state, municipal, and county agencies by the Michigan State Police (MSP) in February 1988. The MSP collected the data as part of an organizational effort to evaluate its current system and to develop a new performance-appraisal system for its uniform division.

A second purpose of the study was to examine those features of police organizations and their environments that influence the type and design of performance-appraisal systems. Four specific features of police organizations were examined: (a) type of agency, (b) size of agency, (c) geographic region, and (d) presence or absence of a collective-bargaining agreement. For this aspect of the study, the writer examined the correlation between the four demographic variables and the sample data collected by the MSP.

A third purpose of the study was to examine internal relationships that might exist within the survey data. Unlike demographic variables, internal variables are considered component parts of a formal performance-appraisal system. Three specific variables were chosen: (a) level of performance-appraisal training provided to raters, (b) presence of a systematic job analysis for ranks evaluated within the appraisal system, and (c) the appraisal format itself. These variables were derived from survey questions 9, 10, and 16, respectively, and were correlated with the remaining survey questions to ascertain possible relationships.

A fourth and final purpose of the study was to conduct a qualitative analysis of the performance-appraisal instruments returned with the completed survey questionnaire.
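The internal-variable analyses described above amount to cross-tabulating categorical survey responses and testing the resulting contingency tables for association. The sketch below illustrates such a test with a hand-computed chi-square statistic; the 2x2 counts (presence of a job analysis versus formal rater training) are invented for illustration only and do not come from the study's data.

```python
# Hypothetical cross-tabulation of two dichotomous survey responses:
# rows = systematic job analysis conducted (yes/no),
# columns = formal rater training provided (yes/no).
observed = [[62, 28],   # job analysis: yes
            [35, 55]]   # job analysis: no

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

stat = chi_square(observed)
# For a 2x2 table, df = (2-1)*(2-1) = 1; the .05 critical value is 3.841.
print(f"chi-square = {stat:.2f}, significant at .05: {stat > 3.841}")
```

With one degree of freedom, a statistic above the 3.841 critical value would indicate an association between the two responses at the .05 level, which is the kind of relationship the study's internal-variable correlations were designed to surface.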
Of the 300 departments in the sample population that returned the questionnaire, 182 enclosed a copy of their departmental performance-appraisal instrument. The qualitative analysis was guided by the methodological standards and recommendations set forth in the performance-appraisal literature.

Research Questions

In the classical research endeavor, testable and falsifiable hypotheses are deduced from the theory guiding the study, and empirical data are used to challenge the hypotheses. In contrast to the classical endeavor, this study is best described as exploratory. According to Katz and Kerlinger, the aims of exploratory field studies are to discover the significant variables and to develop the foundation for eventual hypothesis testing and theory construction. These aims equate quite closely with the limited purposes of the study.

Although not based on formal hypotheses, the study was guided by a set of research questions derived from available information about the practice of performance appraisal among police agencies and the writer's personal experience in the field. The research questions established an overall framework for the study, identifying the variables and relationships for which measures were developed. The research questions and a brief rationale for each question follow.

1. What is the present level of performance-appraisal activity being undertaken in police organizations?

This research question reflects the state-of-the-art purpose of the study. The aim was to determine to what extent police agencies engage in the activities that comprise performance appraisal.
Specification included length of current system, ranks evaluated, single or multiple instruments, purposes of the system, frequency of evaluation and by whom, type of training provided to raters, presence or absence of job analysis before system use, type of employee input, presence or absence of a collective-bargaining agreement and effect on system, type of feedback to employee, format used to collect performance-appraisal information, and the level of employee acceptance of the formal appraisal system.

2. What is the relationship between police agency size and the design and implementation of various types of performance-appraisal systems?

Although all police agencies conduct some form of performance appraisal, it seems likely that the number of employees in an organization at least partially contributes to the form and extent of performance appraisal it undertakes. Certainly, many of the sophisticated activities of performance appraisal are not likely to be found in very small police organizations.

The writer expected to find that larger police departments were more likely to engage in systematic job analysis, provide a higher level of training to raters on the performance-appraisal process, and use different instruments for the various ranks within the agency. These activities are usually associated with larger departments because they are more likely to have personnel departments with employees who possess the necessary technical skills to design, implement, and evaluate a performance-appraisal system.

3. Is there a relationship between geographic distribution of police agencies and the design and implementation of various types of performance-appraisal systems?

There is little evidence in the performance-appraisal literature or in the writer's career experience that would indicate that the form or extent of performance-appraisal systems would vary by the geographic region in which they are located.
However, the writer suspected that police agencies in New England, Middle Atlantic, and East North Central states are more likely to have sophisticated performance-appraisal systems because they have been in existence the longest and, for the most part, have more enlisted officers than those agencies in the other regions.

4. What is the relationship between the presence or absence of a collective-bargaining agreement and the design and implementation of various types of performance-appraisal systems?

Unionization of police agencies seems to be increasing, and police union contracts frequently have provisions pertaining to personnel-related matters. Performance-appraisal systems affect the "terms and conditions of employment"; they help determine salaries, promotions, transfers, and opportunities for training. Given this relationship, one would expect to find those agencies that have collective-bargaining agreements more likely to use the information for compensation, promotion, and training purposes.

5. What is the relationship between police agency type (state, municipal, county) and the design and implementation of various performance-appraisal systems?

Apart from agency size, geographic distribution, and the presence or absence of collective-bargaining agreements, performance-appraisal activities among police agencies may vary by governmental level. Because performance is the direct result of job duties and responsibilities, differences in the job may show a varying relationship to performance appraisal.

6. How does the presence of a systematic job analysis influence the design and implementation of various types of performance-appraisal systems?

Because a systematic analysis of all jobs within a police agency is such a critical first step in the performance-appraisal process, one would expect to find that those agencies that do it are more likely to incorporate other critical aspects in their systems.
For example, they should be more likely to use different instruments to evaluate the various ranks, and more likely to solicit a high level of employee input into the design of their appraisal system.

7. How does the presence of formal training for raters influence the design and implementation of various types of performance-appraisal systems?

Another critical step in the appraisal process is formal training for raters on the purpose and use of a particular appraisal instrument. One would expect that those police agencies that engage in formal training of raters would be more likely to have conducted a systematic job analysis for all ranks, and would be more likely to have an appeal process in place. The presence of formal training for raters should also correlate with a higher level of employee acceptance of the performance-appraisal system.

8. How does the appraisal format itself influence the design and implementation of the various types of performance-appraisal systems?

The performance literature indicated that certain appraisal formats such as behaviorally anchored rating scales (BARS) require the initial identification of all relevant job dimensions before they can be properly designed. One would expect to find that those agencies in the sample population that indicated using a BARS format conducted a systematic job analysis on all ranks within the department.

Overview of the Study

A literature review of performance-appraisal systems is the subject of Chapter II. Performance appraisal in both the public and private sectors is examined because the technical aspects that comprise a given system are found in both settings. There is also a practical reason for examining both sectors since the majority of publications and research studies have focused on the private sector.
It is evident that attention to the public sector has increased in the last 15 years, probably because of the increased numbers of federal, state, municipal, and township employees in the work force during that time. But even with the increased attention paid to performance appraisal in the public sector, little research is available specific to the law enforcement community. Perhaps this is due to a lack of interest on the part of police agencies in general in addressing the difficult problems that are encountered when designing and implementing performance-appraisal systems. In the literature review, special attention is paid to research studies that have analyzed the components of a performance-appraisal system (i.e., objectives or purposes, job analysis, training of raters, and strengths and weaknesses of particular appraisal formats). Chapter II concludes with a checklist that summarizes the features of a performance-appraisal system and the issues it should address.

The methodological design of the study is fully described in Chapter III. It includes a discussion of the limitations of the survey instrument designed by the Michigan Department of State Police and of the sample population. Reasons for choosing the particular demographic and internal variables are presented. The methodology used to conduct a qualitative analysis on the performance-appraisal instruments returned with the survey questionnaire is also discussed.

Results of the data analyses are discussed in Chapter IV. In Chapter V, a summary of the study is presented within the context of the eight research questions. Conclusions from the study are presented by comparing the results against methodological standards in the literature review and summarized within the Performance Appraisal Checklist. Chapter V concludes with policy recommendations for law enforcement administrators and the writer's recommendations for future research.
CHAPTER II

LITERATURE REVIEW

Performance Appraisal: A Critical Tool

Performance evaluation, the identification, evaluation, and development of individual performance in organizations, has always been a critical component of managerial practice. Evaluations are used by management in making vital decisions in the areas of selection, placement, training, compensation, and promotion, among others. Yet performance evaluation is a problematic and difficult managerial activity. For example, human judgment, inescapable in evaluation, is often fallible and easily influenced by factors other than the behavior of those being rated. In addition, the effective use of a performance-evaluation system requires considerable time, energy, and commitment on the part of management, many times seen as being "taken away" from their "real" job.

The importance and necessity of performance-appraisal systems is accepted in both the public and private sectors, although there is far less than full agreement on the manner in which the appraisal is conducted and the form it will take. Formally structured management performance-appraisal systems are a conventional part of American companies and have been for years. In a 1977 survey of almost 300 firms of varying sizes in many different industries, three out of four reported having such systems covering at least some of their managers. Among the larger firms the practice was almost universal, and they had used such systems for many years (Lazer & Wikstrom, 1977).

Public-Sector Pressures

In the public sector, the pressures of citizen demands for public accountability, decreased funding, increased services with decreasing budgets, and governmental regulations have forced public administrators to focus on performance-appraisal systems as a technique to influence and control employee behavior, with the hope of increasing productivity and effectiveness. The increasing emphasis on performance appraisal is not without merit.
Between 1950 and 1978, the number of public employees grew 130%, whereas private-sector employment rose only 55%. The federal sector grew 35%, while state and local government employment rose 200%. In 1978, government could be viewed as the country's major employer, with 2.8 million federal civilian workers, 3.5 million state employees, and 9.2 million local government and school employees. In addition, in a recent study it was found that performance appraisals were used to evaluate as many as 117,000 state employees in California and 1,100 in Oklahoma (Huber, 1983).

Police Agencies and Performance Appraisal

Police agencies are one of many organizations operating in the public sector. They, too, are affected by citizen demands for more accountability and by decreased funding, and at the same time they are expected to provide services in an efficient manner even though the role, duties, and responsibilities of police officers have changed considerably within the last two decades. However, the methods and procedures for evaluating individual police officers have remained relatively static over the same period of time.

In a monograph entitled "Performance Appraisal in Police Departments," published in 1972 by the Police Foundation, Landy summarized the status of performance appraisal as discouragingly low, both in individual municipal police agencies and in the law enforcement community as a whole. He attributed this status to perceived inadequacies in some existing performance-appraisal systems and to confusion about the place of performance appraisal in a total personnel operation. Such difficulty can cause both supervisors and subordinate police officers to object to and even resist particular appraisal systems.

Within law enforcement agencies, performance measurement is a powerful tool because it links three distinct functions: determining what ought to be, determining what is, and determining a process to change.
To gain acceptance of a set of measures for police performance is to establish what police ought to do. Performance refers to a valued action or the accomplishment of some valued state of affairs. Measurement is the description of an aspect of something according to an explicit criterion. Performance measurement is thus the process by which values are attached to criteria, and the criteria are then used as a basis for describing events (Landy, 1978).

The Federal Bureau of Investigation is one police agency that has a keen interest in performance measurement and its associated problems. Its interest is based on a number of reasons:

1. It is one of the primary vehicles by which organizational effectiveness is gauged.

2. There is a healthy level of concern as to whether it is being accomplished in the most effective manner possible.

3. It serves as a feedback mechanism on various organizational systems, subsystems, and strategies.

4. In periods of resource scarcity it provides a rationale for resource-allocation decisions, both internal and external.

5. It can serve as a warning indicator of significant changes in the internal or external environment (Colwell & Koletar, 1984).

Effective performance appraisal requires a strong commitment of manpower and time on the part of any organization, police agencies notwithstanding. Police agencies, however, have traditionally allocated little manpower and time to this effort, perhaps due to the technical and complex nature of the task. Formal literature dealing with performance-appraisal systems in police administration is relatively small in scope when compared to fields such as industry, education, the military, and the public sector at large. As in any occupation, the need for effective performance-appraisal systems in law enforcement is matched only by the difficulty of meeting the need (Knowles & DeLadurantey, 1974).
Performance Appraisal: General Issues

Performance-appraisal literature, while somewhat limited in documenting the types of systems in use throughout the public sector, is rather extensive in examining the issues critical to successful implementation. The issues constantly raised over performance appraisal can be categorized into the following questions: (a) Why is performance evaluation important in organizations? (b) What purposes are served by performance evaluation? (c) Whose performance should be appraised? (d) What should managers evaluate? (e) Who should appraise performance? (f) How frequently should performance be appraised? (g) How should performance-evaluation results be communicated? (h) What are the major problems with performance appraisal? (i) How can performance appraisal be improved? and (j) How should managers choose among alternative performance-evaluation methods or designs? (Szilagy, 1983).

Human-resource accountability distinguishes managerial from nonmanagerial jobs. Despite this distinction, and despite the fact that human resources are still the single largest expenditure in many organizations, systems for the management of human resources are in a primitive state as compared with systems for the management of other resources. Consider, for example, the sophisticated budgeting systems that organizations use for the management of financial resources, or the sophisticated systems of inventory control and materials management. This sophistication is further demonstrated by the formal schedules by which supervisors are held accountable for the management of time and information. Only in the area of human resources do organizations seem to lack the management systems and consistent adherence to procedures for holding supervisors accountable for resources (Bernardin & Beatty, 1984).

Goal Setting and Control

Performance evaluation is the single most important device available to an organization for setting and obtaining goals.
Typically, goal setting is a process that involves many parties at all levels within the organization. Top management and the board of directors usually formulate goals in terms of broad, global outcomes to be achieved by the organization. Each of the organization's units, in turn, must translate these overall goals into specific objectives. Such a process can be described as a four-step cycle: (a) establish standards, (b) record performance levels, (c) review performance records in light of standards, and (d) determine corrective action (Szilagy, 1983).

These four steps, taken together, constitute a control function. Performance evaluation plays an important role in control because it serves as an audit facilitating control. Performance evaluation, then, becomes an auditing procedure that generates the information necessary to control and direct the processes of an organization. Typically, the review procedure would start at the first level of operations, each employee's performance being reviewed by his or her immediate supervisor in each department. Entire departmental performance would be reviewed at the next managerial level, followed by the overall performance of the organization being reviewed by the chief executive officer, board of directors, or trustees.

Performance-Appraisal Objectives

When performance appraisal is considered in terms of its utility to an organization, several operational performance-appraisal objectives seem critical. These include (a) the ability to provide adequate feedback to employees to improve subsequent performance, (b) the identification of employee training needs, (c) the identification of criteria used to allocate organizational rewards, (d) the validation of selection techniques to meet Equal Employment Opportunity Commission (EEOC) requirements, and (e) the identification of promotable employees from within the organization. To accomplish these objectives, the performance-appraisal system must, of course, be an accurate measure of performance.
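The four-step control cycle described earlier in this chapter (establish standards, record performance levels, review records in light of standards, determine corrective action) can be illustrated with a minimal sketch. The rating dimensions, standards, and scores below are hypothetical and serve only to make the sequence concrete.

```python
# A minimal sketch of the four-step control cycle described above.
# All dimensions, standards, and scores are hypothetical illustrations.

STANDARDS = {                 # (a) establish standards (1-5 scale)
    "report_quality": 3.0,
    "case_clearance": 3.0,
}

def record_performance(officer_scores):
    # (b) record performance levels for one rating period
    return dict(officer_scores)

def review(recorded, standards):
    # (c) review performance records in light of the standards
    return {dim: recorded[dim] - standards[dim] for dim in standards}

def corrective_actions(gaps):
    # (d) flag dimensions that fall below standard for corrective action
    return [dim for dim, gap in gaps.items() if gap < 0]

recorded = record_performance({"report_quality": 2.5, "case_clearance": 3.5})
gaps = review(recorded, STANDARDS)
print(corrective_actions(gaps))  # dimensions falling below standard
```

The same loop repeats at each organizational level: individual scores roll up into departmental reviews, and departmental results into the organization-wide audit.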
Feedback

A performance-appraisal system's adequacy to provide feedback and improve performance requires that it have the following characteristics: be unambiguous and clearly specify the job-related performance expected, use behavioral terminology, set behavioral expectations for employees to work toward, and use a problem-solving focus that results in a specific plan for improvement. If performance appraisals are to identify training needs, the format must specify ratees' deficiencies in behavioral terms, include all relevant job dimensions, and identify environmental deterrents to desired performance levels.

Rewards

Performance appraisals are also used in the allocation of organizational rewards, such as merit pay, and punishments, such as disciplinary action. Effective reward allocation may require a valid performance-appraisal system that ranks employees according to a quantifiable scoring system. Sufficient variance in scores is essential to differentiate across various performers. In allocating rewards, the system must also have credibility with all employees. The same performance-appraisal format may also be used for disciplinary action, ranging from a verbal warning to dismissal. Thus, the documentation required for such decisions must also be facilitated by the same performance format. With the passage of the Civil Service Reform Act, with its provisions tying performance to merit pay and bonuses, the importance of performance appraisal in the public sector has been greatly heightened.

Validation of Selection Techniques

Performance appraisal must be designed to facilitate the validation of selection techniques. The process requires measures of employee output or job-related dimensions that tap the behavioral domain of the job analysis, the facilitation of interrater reliability, professional and objective administration of the appraisal system, and continual rater observation of the employees' performance (Schneier, 1978).
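The point above about sufficient score variance can be made concrete with a minimal sketch. The officer names, scores, and variance threshold below are hypothetical; the idea is simply that when every employee receives nearly the same rating, the scores cannot support reward allocation.

```python
# Minimal sketch: checking whether appraisal scores show enough
# variance to differentiate performers, as discussed above.
# Names, scores, and the threshold are hypothetical.

from statistics import pvariance

scores = {"Officer A": 4.8, "Officer B": 4.7, "Officer C": 4.9}

def differentiates(score_map, min_variance=0.05):
    # Near-zero variance means everyone receives about the same rating,
    # leaving no basis for distinguishing merit-pay candidates.
    return pvariance(score_map.values()) >= min_variance

print(differentiates(scores))  # tightly clustered ratings -> False
```

A distribution such as {2.0, 4.0, 3.0} would pass the same check, since the ratings actually separate stronger from weaker performers.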
Promotion Potential

The identification of promotional potential requires that job-related performance appraisals have several dimensions in the incumbent's job that are the same as, or similar to, those of the job to which the incumbent may be promoted. This indicates the incumbent's ability to assume increasingly difficult assignments. The appraisal should also rank employees comparatively, measure their contribution to organizational objectives, and perhaps document their career aspirations and long-term goals.

Uses in the Public Sector

In their 1975 study, Field and Holly examined the uses of appraisal information in state government. They found the most frequently listed purposes of appraisal, in descending order, to be the following:

1. Promotions, demotions, and layoffs
2. Manpower planning
3. Salary adjustments
4. Communication between supervisors and subordinates
5. Determination of management-development needs
6. Updating position descriptions
7. Validation of selection and promotion procedures

The seven uses identified by Field and Holly were included in a 1983 study of state government appraisal systems. Respondents were asked to rate the importance of each use in their state. The results showed a slightly different ordering, with higher ratings given to communication and salary-adjustment uses (Tyer, 1983).

A 1978 survey by the Bureau of National Affairs emphasized the importance of appraisals for evaluation purposes. Appraisals were used to determine salary increases and promotions for more than 80% of all office employees and more than 65% of all production employees. Similarly, in 1981, Huber found that appraisal information was used by 75% of the states surveyed to determine salary increases and by 66% to make promotional decisions. More important, it was found that appraisals were used by all states surveyed to decide whether probationary employees should be retained (Huber, 1983).
Uses in the Private Sector

The uses of performance-appraisal information in the private sector tend to parallel those identified in the public sector. In a 1977 Conference Board Report (Lazer & Wikstrom, 1977), 293 companies were asked to state the objectives for which their appraisal systems were developed. Their answers can be grouped into seven general categories, and the most frequent responses are presented in descending order:

1. Performance measurement
2. Performance improvement
3. Compensation
4. Promotion
5. Feedback
6. Manpower planning
7. Communication between superior and subordinate

The frequency with which companies stated these objectives was consistent across the middle-management levels, with only a slight variation when applied to top management.

Uses in Law Enforcement

Police agencies are extremely labor intensive; therefore, police administrators are very concerned with assessing the performance of enlisted as well as civilian employees. However, the research literature is scant in terms of studies that have examined the types of performance-appraisal systems currently in place, and specifically the uses of appraisal information within police agencies. A 1979 survey of 196 police departments revealed that although 89% gathered performance information in the form of supervisory ratings, only 34% used this appraisal information as input into personnel decisions. In addition, only 42% used rating information as a source of feedback for counseling and/or coaching purposes (Lee, Malone, & Greco, 1981).

When considering changes in the performance-appraisal objectives of police agencies over recent years, it is interesting to compare two studies. A 1958 study of 100 police agencies in Southern California reported the following uses of performance-appraisal information, in rank order from the most frequently reported to the least frequently mentioned (Knowles & DeLadurantey, 1974):

1.
Promotion of employees
2. Means of improving employee performance
3. Placement and/or dismissal of employees
4. Development of performance standards
5. Evaluating selection and/or training techniques

A 1969 study of 120 Los Angeles police officers reported the following uses of performance-appraisal information, in rank order:

1. Training of employees
2. Promotion or transfer
3. Assignment or transfer within the agency
4. Discipline

Conclusions Regarding Uses of Performance Appraisals

From the foregoing studies, it may be seen that performance-appraisal information was used primarily for personnel transactions and for employee improvement through training or assignment, as opposed to other uses such as discipline and discharge. It is unlikely that these traditional uses of performance-appraisal information have changed in contemporary police agencies.

An indirect issue that surfaces when examining the objectives of performance appraisal is whether an organization should use a single multipurpose format or develop individual formats for each use. Results of the 1977 Conference Board Report (Lazer & Wikstrom, 1977) indicated that when a panel of 25 managers responsible for various human-resource activities was asked to generate a list of problems encountered with performance-appraisal systems, the first item mentioned was "conflicting multiple uses." The data also showed that more than 75% of the respondents used their single instrument for three or more purposes. Further, more than 30% of the companies surveyed reported that they tried to accommodate five or more uses with their single appraisal.

It is obvious that the single-format, multipurpose appraisal system, even with its faults, will appeal to most organizations in both the private and public sectors. The appeal is directly related to the simplicity and economy of using a single format. Performance appraisal, when properly done, is an expensive and time-consuming activity for any organization.
When one considers all the problems with implementing a performance-appraisal system, it is no wonder that single-format, multipurpose appraisal systems take a back seat to other issues and concerns.

Development of Performance-Appraisal Systems: Analyzing the Job

The first step in a sequence designed to provide information about job performance is job analysis, which identifies the important elements of a particular job. The assumption is that a supervisor must identify the essence of a job before deciding to place a particular individual in that job, determining a specific research strategy for identifying potential candidates for that job, or counseling individuals already in the job. A job analysis is a description of a job, not of the person in that job. A job analysis helps determine what standards will later be used to measure the performance of individuals in that job.

For any performance-appraisal system to work, job content must be exhaustively examined. Certainly, the determination of job content is of obvious importance, but it is frequently overlooked or not sufficiently assessed in the development of a performance-appraisal instrument. The following excerpt from a court case involving performance appraisal shows how this can occur:

The analyst did not verify the description by making an on-site inspection of the employee who actually performed the job. . . . The former procedure was flawed insofar as it created the possibility of inconsistent descriptions, over- or under-inflation of job duties or requirements, and was associated with the lack of EE awareness of the evaluation procedure. The criteria actually employed by the defendants were not developed by professional consultants, but rather adapted from a commercially available method of job analysis from which defendants borrowed what they believed to be pertinent to their needs. (Greenspan v. Automobile Club of Michigan, 1980, p.
195)

Methods of Obtaining Job Information

There are four general methods for collecting information on jobs: (a) questionnaires, (b) written narratives or daily diaries, (c) observation, and (d) interviews.

The questionnaire method is frequently used to collect data about jobs because it requires less staff time than the other techniques. The employee, the supervisor, or both complete a questionnaire that includes questions about all phases of the job or the environment in which the work is done. The data are examined, and information is extracted for writing job descriptions. Caution should be taken when using questionnaires as the only method for gathering job information. A follow-up interview should be used to supplement and verify the information gathered with the questionnaire.

Written narratives and daily diaries are self-reporting tools that may be used to document the activities included in a particular job, as well as the amount of time spent on them throughout the day. If done carefully, these techniques provide an accurate description of a job, while eliminating the recall error characteristic of questionnaires and interviews. However, daily diaries are tedious to complete and put an additional work load on the employee. An employee may simply not have the time to complete a diary during regular work hours, and if the employee is pressed for time the information may be incomplete or inaccurate. Although this method can elicit a general synopsis of the activities of a job, it usually cannot supply very much detailed hard data about the skills or expertise employed.

The most direct method used for obtaining job-related information is actually observing employees during their working hours. The supervisor will most likely be a good source for selecting the best work stations where the most profitable job observations can be made. The observer should be primarily concerned with determining what the employees do, how
they do it, why they do it, what skill is involved, and the physical demands put on employees by the job. This technique is most effective for gathering information about routine jobs in the organization.

Conducting interviews with employees can supply information relative to all aspects of the job, including the nature and sequence of various component tasks. The skilled interviewer can ask many probing questions that otherwise might have gone unanswered or unasked when the employee simply received a questionnaire to complete. This method is highly recommended as a data-gathering mechanism, supplementary to the questionnaire and to observation. Questionnaires can be incomplete, and observation can leave an analyst confused about some job tasks. Interviewing the job holder or supervisor can help fill in the blanks.

Methods of Job Analysis

In terms of performance appraisal, there is no one best way to conduct a job analysis. To some extent all the various methods have advantages and disadvantages that can be weighed only in the context of answers to the following questions: (a) What other purposes must the job analysis serve, in addition to appraisal development? (b) What level of specificity is required for rating performance (e.g., broad functions, duties, activities, or tasks)?

Generally, the kinds of information sought in the job-analysis process can be characterized as either job focused (the what of the job) or employee focused (the how of the job). Many job-analysis methods have been developed that incorporate one or both of these general categories. The following discussion highlights three specific methods.

One employee-focused method of job analysis breaks the job down into the most important knowledge, skills, abilities, and other characteristics required to do the job. Many organizations use subject-matter experts from inside and outside of the agency to gather these types of data.
An example of this method is the Position Analysis Questionnaire (PAQ), which contains 194 job elements. It is based on the idea that there is a common behavioral structure for a wide variety of jobs and that small discrete elements can be identified and quantified for individual jobs. This commonality across jobs results from workers doing similar tasks rather than from the technology being used or the product being produced. The PAQ is generally best suited for setting compensation rates within an organization and not for performance appraisal.

A job-focused method identifies the tasks most frequently performed on the job (task inventories). The tasks are arranged in logical groups, such as those encompassing some broad duty category. For example, in police agencies these might be tickets, arrests, complaints investigated, cars assisted, report writing, and so on. Four types of data about each task are collected in a task inventory: (a) How often is the task done? (b) How important is the task to overall performance? (c) When was the task learned? and (d) To what extent do successful employees do this task better than marginal or poor employees? These types of data are usually gathered from supervisors or from employees performing the job.

A third general method of job analysis is the Critical Incident Technique (CIT). This method focuses on what an employee does that characterizes him or her as a "good" or "bad" employee. Critical incidents provide information that is both job centered and behavior centered. Overall, the primary value of the critical incident method is that it generates a record of specific behaviors from those persons who are in the best position to make the necessary and sufficient observations and evaluations. The emphasis is on observing behaviors that are critically important to doing the job well. CIT methods are easy for employees to use and understand.
The major limitation of the CIT is that, because it employs a narrative form, incident reports may be biased. Also, if jobs are looked at only in terms of isolated incidents, it may be difficult to ascertain how various tasks fit together. Finally, this method is quite time consuming and expensive to administer.

Due to the many purposes of job analysis and the varying degrees of knowledge, skill, and ability required in most jobs, a multi-method approach to job analysis is almost always preferable and superior to any single method. An example of a multi-method approach to job analysis would be to administer a task inventory in conjunction with a CIT.

Another important aspect of the job-analysis process is the validity of the information obtained. The issue of validity in job analysis is best summarized by the following questions:

1. Do the duties and tasks described in job descriptions represent the actual and important duties and tasks in type and proportion?

2. Are the environment and conditions of the job accurately conveyed, in addition to job content?

3. Do persons who possess the requisite knowledge, skills, and abilities assumed to be important to the job actually perform better than those who do not have such knowledge?

If individuals in the organization can respond in the affirmative to these questions at the conclusion of the job-analysis process, the performance-appraisal system stands a much better chance of being legally defensible.

Purpose of Job Analysis

The best defense against litigation in both the public and private sectors is to ensure that any performance-appraisal system is fundamentally related to the critical aspects of the job. As the federal government's Uniform Guidelines on Employee Selection Procedures (1978) stated:

There shall be a job analysis which includes an analysis of the important work behaviors required for successful performance. . . . Any job analysis should focus on work behavior(s) and tasks associated with them. (Sec.
14.C.2)

Thus, it appears that the most important criterion is that appraisals be the outgrowth of job analysis. Unfortunately, such a tie-in to job analysis is apparently lacking in the public sector. Huber (1983) noted that only three states (Utah, Idaho, and Colorado) either directly or indirectly linked their performance-appraisal systems to job analysis. The growing number of court cases is indicative of the growing importance being placed on appraisal reliability and validity. Public agencies, like their private-sector counterparts, must be able to demonstrate conclusively the job relatedness of their appraisal instruments. Failure to do so may result in violations of EEOC regulations and an invalidation of the entire appraisal process.

Criterion Development

Performance appraisal involves two distinct processes: (a) observation and (b) evaluation of what is observed. In practice, however, observation and evaluation represent the last elements of a four-part sequence: job analysis, followed by criterion development, followed by criterion validation, followed by performance appraisal. Job analysis is a description of a job, not of the person in that job. The goal of performance appraisal is not to make distinctions among jobs, but rather to make distinctions among people, especially those in the same job. Criterion development is the critical intermediary process. In criterion development, the important job components identified in the job analysis are transformed into measures suitable for describing individual behavior. Performance appraisal, the last step in the sequence, represents the actual process of gathering information about individuals based on critical job requirements. It is a process intended to identify the strengths and weaknesses of individuals engaged in their work roles (Cascio, 1978).

Objective Measures of Performance

One of the major issues in performance measurement is the nature of the information gathered. Should it be objective or subjective?
Objective measures include production data such as dollar volume of sales and units produced; in the case of police personnel, these might be the number of arrests made or tickets written within a given time. These variables directly define the goals of the organization but often suffer from several glaring weaknesses, the most serious of which are performance unreliability and the modification of performance by situational characteristics. For example, dollar volume of sales is influenced by several factors outside of the individual's control, such as territory location, the nature of the competition, and distances between accounts. The primary focus of performance appraisal is to judge an individual's performance, not factors beyond his or her control. Objective measures focus not on behavior but rather on outcomes or "results of behavior." Although there will be some degree of overlap between behavior and results, the two remain qualitatively different.

Police agencies have traditionally collected objective information in order to measure their officers' performance. The quantitative nature of objective data lends itself to the compilation of crime statistics, which so often are the standard by which the public gauges a given police department's efficiency. Although this trend does not appear to be changing throughout the police community, several progressive police agencies are questioning whether a better evaluation mechanism could be used. The Federal Bureau of Investigation (FBI) is one such agency, and it cited the following weaknesses of the objective quantitative approach (Colwell & Koletar, 1984):

1. A system that is purely quantitative encourages suboptimization. That is, less important criminal problems could be overemphasized to produce numerical indications of success, while more important criminal activity was slighted in comparison.

2. There is, indeed, a bias toward that which is capable of quantification, thus potentially overstating its import.
A serious danger with such partial measurement is that attention might be paid only to those output items that can be measured.
3. A quantitative approach is insensitive to important qualitative considerations. Thus, a person who robbed a bank with a toy gun and got $250 was recorded as one conviction and given equal statistical weight with the person who robbed a bank with an automatic weapon, killed two people, and escaped with $250,000. Obviously, from a law enforcement perspective, one criminal is in a different league from the other.
4. The emphasis on quantitative measures would invariably lead to distorted resource-use patterns so as to place supervisors in the best possible light. Directives and urgings to the contrary, managers would tend to respond most to the bottom-line, relatively short-term results they knew would be the basis for their individual and organizational evaluation.
The discussion of objective measures of performance is not intended to downgrade the importance of gathering objective information. Results or work outputs are an important aspect of any job, especially in police departments. However, objective information often has to be qualified by examining factors that may be beyond the employees' control.

Subjective Measures of Performance

The disadvantages of objective measures have led researchers and managers to place more emphasis on subjective measures of job performance. Although the number of arrests per month is not a matter of judgment on the part of the supervisor, an estimate of the patrol officer's initiative is a judgment. Because subjective measures depend on human judgment, they are prone to certain kinds of errors associated with the rating process. To be useful, they must be based on a careful analysis of the behaviors viewed as necessary and important for effective job performance. There is enormous variation in the types of subjective performance measures used by organizations.
Regardless of their form, however, subjective performance appraisals frequently suffer from various behavioral barriers that limit their effectiveness. Behavioral barriers to successful performance appraisal may be political or interpersonal. Political barriers stem from supervisory apprehension concerning the use to which the results of performance appraisal will be put by their peers. In this type of climate, personal values and biases can replace organizational standards. Unfairly low ratings may be given to highly valued subordinates so they will not be promoted out of a rater's department; likewise, personal bias may lead to favored treatment for other employees (Cascio, 1978). Interpersonal barriers arise from the actual face-to-face encounter between subordinate and superior. Because of a lack of communication, employees may think they are being judged according to one set of standards, when their superiors actually use different or shifting standards. Furthermore, supervisors typically resist making face-to-face appraisals. Such resistance can reduce appreciably the validity of the ratings because, rather than confront substandard performers with low ratings, supervisors often find it easier to give average or above-average ratings to inferior performers.

Development of Performance-Appraisal Systems: Issues of Reliability

Reliability in performance appraisal becomes possible only when there is substantial agreement among raters as to good and bad employee performance. In light of this, who does the rating is important. In addition to being cooperative and trained in the techniques of rating, raters must have direct experience with, or first-hand knowledge of, the individual to be rated. Typically, a rater will be the employee's first-line supervisor. Higher-level managers are often too far removed from the scene of operations to rate accurately employees several levels below them.
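The inter-rater agreement just described is commonly quantified as the correlation between two raters' scores for the same employees. A minimal sketch, using hypothetical ratings (the officers, scale, and numbers are illustrative, not from the study data):

```python
# Illustrative sketch: inter-rater reliability as the Pearson correlation
# between two supervisors' ratings of the same officers (hypothetical data).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Ratings of six officers (1-5 scale) by two first-line supervisors.
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 2, 5, 2, 3, 3]

r = pearson_r(rater_a, rater_b)  # close to 1.0 indicates substantial agreement
```

A value near 1.0 indicates the substantial agreement the text treats as a precondition for reliability; a value near zero indicates the raters are not judging the same thing.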
Rater Biases: Effect on Reliability

Researchers in organizational behavior have identified a number of errors or biases in performance appraisal that can detract from the validity of the process. These include (a) halo effect, (b) central tendency, (c) leniency, (d) personal prejudice, and (e) recency effect (Werther, 1985). The halo effect occurs when the rater's personal opinion of the employee sways his or her measurement of performance. For example, if a supervisor personally likes or dislikes an employee, that opinion may distort the supervisor's estimate of the employee's performance. This problem is most severe when raters must evaluate their friends or those they strongly dislike. Some raters do not like to rate employees as above average or unsatisfactory, so the rating is distorted to make each employee appear to be average. Instead of checking either extreme, raters place their marks near the center of the scale; thus the term "error of central tendency" has been applied to this bias. Personnel departments sometimes unintentionally encourage this behavior by requiring raters to justify, in writing, extremely high or low ratings. The leniency bias results when raters tend to be easy in evaluating employee performance. Such raters see all employees as good, and because of this predisposition, they fail to be objective. The strictness bias is just the opposite effect and results from raters being too harsh in their evaluation of employee performance. Sometimes the strictness bias results because the raters want peers or supervisors to see them as tough judges of subordinates. Both extremes of this bias more commonly occur when performance standards are vague. A rater's dislike for a group or class of people may distort the rating employees receive. For example, male supervisors in police departments may give undeservedly low ratings to female patrol officers because they hold "traditionally male jobs."
Sometimes raters are unaware of their prejudice, which makes such biases even more difficult to overcome. Whereas the halo bias affects one's judgment of an individual, prejudice affects entire groups. When prejudice affects the ratings of protected class members, this form of discrimination can lead to violation of equal employment laws. Finally, when subjective performance measures are used, ratings are affected strongly by the employee's most recent actions. Recent actions, either good or bad, are more likely to be remembered by the rater and thus may bias the performance measurement. When subjective performance measures must be used, personnel specialists can reduce the distortion from biases through training, feedback, and the proper selection of performance-appraisal techniques within the organization.

Training to Increase Reliability

Rating errors are errors in judgment that occur in a systematic manner when an individual observes and evaluates another. What makes these errors so difficult to correct is that the observers or raters are usually unaware that they are making them. Even in those instances when raters are aware of errors, they are frequently unable to correct them themselves. For years, psychologists have stressed the importance of organizations providing training to raters to improve objectivity and accuracy in evaluating an employee's performance. But it is only recently that training programs for reducing rater errors have appeared within organizations (Latham & Wexley, 1981). The importance of training those individuals within an organization who are responsible for conducting performance appraisals cannot be overstated. This holds true in both the public and private sectors, but, unfortunately, few organizations have formal training programs in place. Training of police supervisors who are raters must primarily deal with the identification of clear performance criteria that are part of the performance-appraisal system.
To be sure, a given system may not be effective because of the subjective rater errors previously discussed. Much of the literature has seemed to underrate this as a problem that successful training must solve (Lawther, 1984). Yet these kinds of errors may only be symptomatic of greater problems that are due to the lack of clearly defined performance standards. A rater who is not sure what properly defines excellent performance may be more apt to give a high rating to an officer because of friendship or other personal feelings. Training methods to reduce rater error are varied, but they usually incorporate some form of lecture, workshop, or group-discussion technique. A recent review of the literature indicated that the majority of approaches to reducing rater errors suffer from one or more methodological problems. For example, many training programs do not provide trainees an opportunity to practice the skills learned, nor do they provide them feedback on how well they are performing. Other studies have failed to include a control group, whereas still others have not evaluated the effects of the training at all (Latham & Wexley, 1981). Lack of formalized training for raters is probably most prevalent in the public sector among police agencies. First-line supervisors are usually promoted from the patrolman rank and, as such, are instructed that one of their supervisory responsibilities is performance appraisal of those under their command. It is assumed that they are aware of the performance standards set by the agency and therefore require no further explanation or assistance in filling out the appraisal instrument. Many police agencies rely on the instructions provided as part of the appraisal instrument to answer any questions the rater might have. Unfortunately, bias errors are not understood by most police organizations, and raters are generally unaware that they are making them.
Development of Performance-Appraisal Systems: Issues of Validity

Validity in the context of performance appraisal is the extent to which ratings on an appraisal instrument correspond to actual performance levels for those who are rated. Just as in interpreting a test we wish to infer the degree to which a person possesses a certain trait or quality measured by the test, so also in interpreting a performance rating we wish to infer that a person's actual level of performance on a dimension is reflected by his or her performance rating on that dimension. Although validity may seem fairly simple to achieve in appraisal, it is probably the most difficult of all effectiveness criteria to attain. Construct validity is most appropriate for data in an appraisal context. It has been defined as the degree to which variability in a measure is a function of variability in some underlying construct. Thus, in an appraisal context, variability in the ratings of individual performance levels should be a function of the variability of the actual performance levels of those individuals. What should be stressed in the definition of validity is the notion of inference. Validity does not refer to any specific measurement strategy. For example, it should never be said that behaviorally anchored rating scales are a valid appraisal technique because that implies that validity is a constant across uses, users, and situations. Rather, the inferences made from the use of such scales may be said to be valid or invalid according to the evidence presented. One of the most important circumstances is the purpose of the appraisal. For example, if ratings are to be used as a basis for promotion, it must be shown that the ratings are valid predictors of future performance. Criterion-related validity is the concern that the performance measure relates to other measures of important outcomes. The central issue is the relationship between predictor variables (selection criteria) and criterion variables (subsequent job performance).
Is there a high and positive correlation between what we expect performance to be based on--selection criteria--and subsequent job performance? The greater the magnitude of the correlation, the more valid the performance measure. Another traditional type of validity is content validity. It ensures that the performance-appraisal measure and its administration derive logically from the conceptual definition of the performance dimension. This aspect of validity is tied directly to the job-analysis process. To achieve content validity, the performance-appraisal instrument must measure the actual and important duties and tasks of the job in type and proportion. The evidence presented to support inferences of validity in any performance-appraisal system can and should be from many different sources, such as multi-raters, multi-criteria, and multi-methods. It should also be based on methodologies that have been recommended for content, criterion-related, and construct validity.

Development of Performance-Appraisal Systems: Sources of Rater Information

A key component of any performance-appraisal system is determining who will conduct the appraisal. Five possible parties can do the appraising: the supervisor(s) of the person to be appraised, organizational peers of the appraisee, the appraisee him/herself, subordinates of the appraisee, and persons outside the immediate work environment of the appraisee (Cummings & Schwab, 1973). Each of these individuals might be appropriate, depending on the purpose of the appraisal and the dimensions or performance standards being appraised.

Immediate Supervisor

There are two primary justifications for placing responsibility for performance appraisal in the hands of the individual's immediate supervisor. The hierarchy of formal authority that exists in most organizations, and especially within police agencies, legitimates the superior to make both evaluative and developmental decisions concerning subordinates.
In addition, the superior generally controls the magnitude and scheduling of many of the rewards and punishments received by the subordinate. To the extent that performance is enhanced when rewards are based on performance, the appraisal and reward-punishment authority should be in the same hands. Several potential liabilities of appraisal by the superior have been studied and discussed in the literature. First, a subordinate may feel threatened by a rater who has control over any rewards or punishments he or she may receive. Moreover, the appraisal communication process often tends to flow from superiors down, so that subordinates feel they must defend themselves and justify their actions. As a result, little feedback or coaching tends to take place (Cummings & Schwab, 1973). In addition, superiors frequently feel uncomfortable in the appraisal role because the role demands skills that they do not possess, they have an ethical objection to "playing God," communication of negative appraisals may alienate subordinates, or they realize that in some cases the superior does not have sufficient reward-punishment authority to implement the results of an appraisal even if he or she does a thorough job of it. As a consequence, the superior often conducts the appraisal in a routine and lackadaisical manner. These four reasons are especially applicable to police agencies. Most first-line supervisors are sergeants and were promoted from the patrolman rank. They receive little, if any, training in the area of performance appraisal or even in effective coaching and development techniques. First-line supervisors also work very closely with their subordinates and will avoid situations that alienate them from the rest of the officers. Finally, sergeants for the most part have little effect on the rewards or discipline of officers under their command and may develop an "it doesn't matter" attitude when appraising subordinates' performance.
Multiple Raters

The literature on appraisal has suggested that the process can be made more valid by the use of several superiors (multi-raters) at the same organizational level or at successive levels (Lee et al., 1981). The rationale is that it is unlikely that even the immediate supervisor will observe all the relevant dimensions of a subordinate's behavior. Supervisors who are familiar with a subordinate's work performance and who have observed him or her for an appreciable amount of time either submit independent ratings, which may be combined into a single rating, or meet together to develop a joint rating of each subordinate. Multiple rating stresses participation, joint effort, and cooperation in contrast to the more dictatorial methods of employing rating systems and, if properly used, can emphasize future development rather than past performance (Knowles & DeLadurantey, 1974).

Peer Assessment

An alternative to the superior appraisal method used in some organizations is the use of peer assessment. This approach would appear to be most effective in organizations where a high level of trust exists among peers, coupled with a noncompetitive reward system. An additional factor would be situations in which information about the appraisee's performance is uniquely available to his or her peers. Peer appraisals are generally found among highly professional organizations, such as physicians in clinics and scientists in industrial organizations. Drawbacks to this method focus on the competitive nature of many organizations and the possible psychological conflicts in which such a system places an employee. One comparison study of peer assessment methods among 145 police officers indicated negative reactions to this technique. Officers indicated that peer assessment was not fair, not accurate, not liked by them, and should not be used in promotion decisions.
On a more positive note, the peer assessors indicated that even though peer assessments were not liked, these would not generate too much competition among fellow workers and would not be used by the peer assessor to present a fellow worker in a bad light to lessen competition (Love, 1981). Perhaps the causes for the negative reactions toward peer assessment lie in its implication of a redistribution of power within the organization. The shift of power would potentially affect the personnel system of the organization if fellow employees were given substantial input into promotion and other performance-based decisions. This would require modification of the traditional belief that management holds the sole right to performance assessment.

Self-Assessment

Self-appraisal is another option available for evaluating performance. The justification for its use is strongest where employees are in the best position to evaluate their own methods of work; for example, where the employee is working in extreme physical isolation or is the unique possessor of a rare skill. The impetus for self-appraisals comes from the trend, beginning in the 1960s, in which major emphasis was placed on personal growth, self-motivation, and organizational potential of the employee. The systematic biases associated with self-analysis have severely limited its use within organizations for appraisal purposes. Self-analysis has, however, grown in popularity as a mechanism for employees to better understand the performance standards within a given system. Disagreements between the supervisor's rating and that of the employee can and should form the basis for important discussions in the appraisal interview.

Subordinate Assessment

Performance appraisal can also be conducted by subordinates or individuals outside of the immediate work environment. Subordinate appraisals are generally used to augment information initially gathered through more traditional methods.
The information is also helpful to accommodate change in a supervisor's behavior or to assist organizations in assessing the leadership potential of their lower-level managers. Use of outside appraisers is justified when there is a need for specialized expertise in a particular content area, or when the objectives of the appraisal can only be ensured by someone without a vested interest in the outcomes of the appraisal. Organizational practice and empirical evidence would indicate that, most typically, the appraisal process is conducted by the employee's immediate supervisor. The perceived threats and biases associated with this method have led some organizations to use peer, self, subordinate, and outside sources of appraisal. The appropriateness of a particular method will depend on the purpose of the appraisal and the dimensions or performance standards being appraised.

Development of Performance-Appraisal Methods

The importance of performance appraisal to the success and continued growth of any organization has led both practitioners and academicians to create many methods to appraise past performance. Most of the techniques are a direct attempt to minimize some particular problem found in other approaches. No one technique is perfect; each one has inherent advantages and disadvantages. Perhaps the most basic method of appraising past performance is a unidimensional global rating, which uses a rater's overall estimate of performance without distinguishing between critical job elements or dimensions. There are numerous problems with the use of unidimensional formats, and they are also questionable as measures of performance from a legal standpoint because they are not based on job analysis and thus are not job related.

Trait-Based Methods

Trait-based scales are numerous multi-dimensional (or graphic) approaches to measuring performance. They are more useful than global scales because they recognize that job performance consists of separate dimensions or job elements.
The first of these is the familiar trait-based scale using dimensions such as loyalty, dependability, and so on. Other dimensions traditionally found on these formats are cooperation, initiative, and self-confidence (Beatty & Schneier, 1979). There are problems in the use of trait-based scales, centering on potential ambiguity and subjectivity. That is, what specifically is meant by "lack of cooperation"? Thus, many of these scales are generally evaluated as only poor to fair relative to performance-appraisal objectives. More important, trait-based scales are typically not sufficiently job related or based on a thorough job analysis. Thus, an organization is vulnerable to litigation by a disgruntled employee. Another built-in problem with this type of scale is the tendency for the rater to fall victim to the halo effect and central-tendency biases previously discussed. The graphic rating scale, with its almost endless varieties, is the most common method of evaluating police officer performance (Landy, 1977). First, a list of personality or behavioral traits is arrived at by analyzing factors that appear to lead to success or failure on a particular job. Second, various descriptive phrases or adjectives reflecting degrees of a given activity are prepared. The rater is instructed to place a check mark along a line at a point that, in his or her judgment, represents the degree of that particular quality possessed by the ratee. The graphic rating scale is open to criticism on its method of evaluating or scoring results. Too often, the various items making up the scale are given numerical values and are weighted. The process of translating judgments of an individual's personality or ability into a single numerical score is questionable. A follow-up interview between rater and ratee is basic to the graphic rating system, as it is with every other format discussed here.
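The weighted scoring the text questions can be made concrete with a minimal sketch. The trait names, weights, and marks below are hypothetical, chosen only to show how per-trait check marks collapse into the single numerical score whose meaningfulness the passage doubts:

```python
# Sketch of weighted graphic-rating-scale scoring (hypothetical traits,
# weights, and 1-5 check-mark values): each trait rating is multiplied
# by its weight and the products are summed into one overall number.

TRAIT_WEIGHTS = {
    "initiative": 2.0,
    "dependability": 1.5,
    "cooperation": 1.0,
}

def overall_score(ratings):
    """Collapse per-trait check marks into a single weighted score."""
    return sum(TRAIT_WEIGHTS[trait] * mark for trait, mark in ratings.items())

officer = {"initiative": 4, "dependability": 3, "cooperation": 5}
score = overall_score(officer)  # 2.0*4 + 1.5*3 + 1.0*5 = 17.5
```

The sketch also illustrates the criticism: the final 17.5 conveys nothing about which judgments, or which weighting choices, produced it.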
Behavior-Based Methods

A significant step beyond global and trait-based scales is the behavior-based scale. These scales are based on a job analysis and an attempt to determine what an employee does at work. A behavior-based scale provides specific feedback to employees because it is based on the activities required of the job. It captures specific information across employees for reward allocations and about each employee specifically in the assessment of training needs. Behavior-based scales are often seen as more accurate than the two previously mentioned performance-appraisal formats because of their job relatedness and specificity. Finally, because dimension-based scales can meet the legal requirements for a criterion measure, they certainly can be an improvement for the validation of selection procedures.

Forced-Choice Method

A special type of behavioral checklist is known as the forced-choice system, which was designed specifically to reduce leniency errors and to establish objective standards of comparison between individuals. A well-known variation of this format is the Ohio State Highway Patrol forced-choice inventory. In a typical forced-choice procedure the supervisor examines a list of four statements and picks one of the four as most descriptive of the officer and one as least descriptive. The four items in each group have been carefully chosen for certain properties. Two of the items (one positive, one negative) have previously been shown to identify good and poor performers accurately. The other two items have previously been shown to vary in favorability (one favorable and one unfavorable) but to have no value in identifying good and poor performers. In terms of its development and scoring, the forced-choice instrument is quite complicated. However, it separates the various levels of performance from a supervisor's attempts to judge an officer favorably or unfavorably, regardless of performance level.
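The scoring logic behind this separation can be sketched as follows. The statements and scoring keys here are hypothetical illustrations of the item properties the text describes, not items from any actual inventory: only the two pre-validated discriminating items carry weight, while the two favorability-matched fillers score zero.

```python
# Hedged sketch of forced-choice scoring: the rater marks one statement in
# each block of four as "most" and one as "least" descriptive. Only items
# previously shown to discriminate good from poor performers are keyed
# (+1 positive, -1 negative); filler items are keyed 0. Hypothetical items.

block = {
    "handles calls calmly": +1,       # discriminates; positive
    "writes incomplete reports": -1,  # discriminates; negative
    "is well liked": 0,               # favorable filler, no keyed value
    "seems distant": 0,               # unfavorable filler, no keyed value
}

def score_block(items, most, least):
    """Add the key of the 'most' pick, subtract the key of the 'least' pick."""
    return items[most] - items[least]

s = score_block(block, most="handles calls calmly",
                least="writes incomplete reports")  # +1 - (-1) = 2
```

Because a rater trying to flatter (or punish) an officer cannot tell the keyed items from the equally favorable fillers, picking fillers moves the score not at all, which is how the format blunts leniency.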
Forced-choice systems are generally suitable for specific administration and research purposes but seldom are useful for individual counseling or personal development of officers (Landy, 1977). The major drawback with dimension-based scales is that although they provide specification of the particular activities of the employee, the scale points are of limited use if they are only numerically and/or adjective anchored. They provide little specific feedback on what behavior led to the rating given. Thus, a dimension-based performance appraisal may be deficient in assessing an employee's specific behavior within the job dimensions because only adjective anchors (e.g., poor, fair, good, excellent) or numerical anchors are used.

Behaviorally Anchored Rating Scales

Behaviorally anchored rating scales (BARS) are also dimensional scales. However, the scale points are behavioral statements, illustrating varying degrees of performance, not merely adjectives or numbers. BARS are far more specific in terms of identifying employee behavior relative to performance in a specific job dimension. They are also more sophisticated than dimension-based formats and require considerably more time to develop. BARS seem to provide excellent feedback to employees in specifying not only what activities employees are to engage in, but also the behavior a rater perceives that a ratee has demonstrated during the performance period (Beatty, Schneier, & Beatty, 1977). BARS have received considerable attention in the recent literature as researchers have attempted to improve on the more traditional methods of appraising past performance. The development of a BARS system is generally accomplished in four stages and requires a considerable commitment of organizational time and manpower, if done correctly. Each stage usually involves a high degree of interaction between a cross-section of employees and those supervisors who will be doing the rating.
1.
Stage one consists of a detailed task analysis, which results in behavioral descriptions of the major activity areas and tasks that comprise a particular job.
2. Stage two entails the assignment of "importance" scores to each task through systematic collection of data concerning both the frequency of performance of the task and the criticality of the task to the job.
3. Stage three involves the development and refinement of behavioral statements describing various levels of proficiency for the important tasks.
4. The final stage investigates the validity and reliability of the instrument (Rosinger, Myers, & Levy, 1982).
In a 1976 study, 58 municipal police agencies cooperated in constructing and field testing supervisory and peer ranking scales. The findings pointed out that behaviorally anchored scales possess some properties not possessed by the more commonly used graphic rating scales. The most important of these properties is the potential for counseling and feedback. The role of the patrol officer is complicated and often subtle. The traditional graphic rating scale, with its use of arbitrary verbal anchors such as "good," "satisfactory," "poor," and so on, is not equal to the task of providing useful feedback for the improvement and maintenance of patrol officers' performance. The present scales represent a framework for measuring and describing such performance. Furthermore, it appeared that BARS can be developed in one setting and effectively used in other settings (Landy, Farr, & Saal, 1976).

Management by Objectives

Unlike many of the traditional performance-appraisal methods, MBO may be viewed as a management tool to aid in the decision-making process. Although it is a multi-dimensional approach in that there are often many objectives to be accomplished, it is unique in that it provides a measure of employees' contributions to the goals and objectives of the organization.
Ratees evaluated with an MBO method are being evaluated not so much on what they do, but on what they contribute to the goals and objectives of the organization. Obviously, it is difficult to develop specific indicators of employee contribution, but it can be done for many jobs. It is accomplished with more ease in lower-level and entry-level jobs within an organization than in higher-level jobs. MBO can be described as a six-part process: (a) statement of agency mission, (b) establishment of long-term goals, (c) establishment of short-term objectives, (d) development of a step-by-step process that will be used to reach each objective, (e) development and implementation of action plans, and (f) evaluation of the system (Lynch, 1986).

Conclusions

Because performance appraisal serves so many purposes, there can be no general method appropriate for all purposes. The problem for management is to determine what kind of performance-appraisal method is adequate, given the purpose to be served. It is important to remember that performance criteria consist of many dimensions, only part of which may be relevant to a specific auditing or compensation purpose. In addition, the specific purposes of performance appraisal vary widely between different kinds of organizations. Hospitals, insurance firms, universities, and police departments, for example, vary widely in terms of most environmental, organizational, and individual factors influencing performance. Specifically, the problem for managers is to select a performance-appraisal method that is appropriate, given the following considerations:
Specific organizational and environmental properties, such as technology, the design of the organization, and the firm's industry.
Unique individual characteristics influencing performance, including specific skills, abilities, and motivation levels of employees.
The mix of specific work behaviors that are appropriate, given organizational and individual considerations.
The mix of relevant performance dimensions, given a consideration of the organization and the individuals involved.
The specific set of goals to be achieved at departmental and organizational levels (Szilagyi, 1983).
Each of these conditions must be specified in turn, in order to choose an appropriate system for evaluating performance. There are no universal methods of evaluation that can be applied in all organizations for all purposes. The central problem in performance appraisal is the design of a system that suits the purpose for the appraisal and is tailored to the unique characteristics of each organization.

Development of Performance-Appraisal Systems: Legal Considerations

A discussion of performance-appraisal systems is not complete without examining the legal requirements and concerns applicable to performance appraisal. Although no performance-appraisal system is completely safe from litigation, organizations can take steps to reduce the likelihood of being found guilty of discrimination in their employment practices. The development, implementation, and use of a performance-appraisal system falls under the rubric of employee-selection activity and consequently under the dictates of the federal Uniform Guidelines on Employee Selection Procedures (1978). Although there has been increased awareness among employers of the necessity of selecting employees fairly, many organizations still regard selection as merely the decision to hire or not to hire. This is a mistake. Postemployment decisions like the ones discussed throughout the literature review also involve selection. Failure to adhere to the Uniform Guidelines and related laws when making such decisions can result in costly litigation and reinstatement with back-pay awards (Burchett & DeMeuse, 1985). The Uniform Guidelines on Employee Selection Procedures have essentially gained the force of law in recent years.
When Congress passed the Civil Rights Act of 1964, it explicitly authorized the use of any professionally developed test as long as it was not designed, intended, or used to discriminate. In an attempt to interpret and clarify how testing could satisfy the requirements of the law under Title VII, the EEOC and Department of Labor released guidelines in 1966 and 1968, respectively. Their view was that if a test had an adverse effect (screened out a higher proportion of minorities or females than white males), it was illegal unless it was used because of business necessity. In 1971, the Supreme Court held that even if discrimination is unintentional, employment tests that have an adverse effect and cannot be justified as a business necessity are still in violation of Title VII (Griggs v. Duke Power Company, 401 U.S. 424 [1971]). The EEOC and Department of Labor guidelines were subsequently combined into the Uniform Guidelines on Employee Selection Procedures in 1978. Two other government agencies, the Civil Service Commission and the Department of Justice, also adopted these guidelines in 1978. The Uniform Guidelines do not apply only to written tests; they cover all selection procedures that are used in making employment decisions. They apply to preemployment and postemployment practices. Therefore, the Uniform Guidelines are clearly applicable to performance evaluations when the results of these evaluations are used in making employment decisions. Since 1971, the Guidelines have been the authoritative source for requirements of employment testing. It appears that the primary objective of the Guidelines is to require employers to demonstrate that their tests measure the behaviors necessary for successful on-the-job performance whenever such tests disproportionately screen out particular groups. The courts have generally judged the legality of employment procedures under Title VII according to the principles laid out by the Guidelines.
The Role of Job Analysis

The courts have repeatedly condemned the use of performance-evaluation instruments that have not been developed from a systematic analysis of the job. In a landmark case, Albemarle Paper Company v. Moody (422 U.S. 405 [1975]), the Albemarle Company's ranking instrument had not been developed from an analysis of the job, and supervisors had not been given specific directions about how to rank employees; furthermore, employees were ranked together, irrespective of their job duties. The court noted that there was no way of knowing what aspects of performance supervisors were evaluating, or whether they were even evaluating the same aspects of performance. In Wade v. Mississippi Cooperative Extension Service (372 F. Supp. 126 [1974]), a statewide service organization was found in violation of Title VII because the appraisal instrument was not derived from a job analysis. Similarly, the court reiterated the job-analysis directive in Patterson v. American Tobacco Company (568 F.2d 300 [1978]). In the more recent case of Carpenter v. Stephen F. Austin State University (706 F.2d 608 [1983]), the university was chastised by the court for using obsolete job descriptions in evaluating its employees. Consequently, periodic job analysis also seems to be important.

Behavior-Based Standards

The Uniform Guidelines clearly specify, and the courts have generally upheld, that employee evaluation should concentrate on job-specific behaviors rather than on potentially relevant traits, abilities, and psychological characteristics. In the early 1970s, the courts ruled that such aspects as appearance, ethical habits, and loyalty are vague and subjective and may not have any effect on job performance (Brito v. Zia Company, 478 F.2d 1200 [1973]). The courts, however, have not forbidden the use of subjective supervisory ratings altogether.
In a recent case, the court agreed that the Houston Police Department could use subjective judgment in deciding on qualification for a position (Ramirez v. Hofheinz, 619 F.2d 442 [1980]). In general, the courts have looked favorably on subjective evaluations when they are supplemented by more objective, behavioral measures of performance.

Communication of Standards

Courts have reacted negatively to performance-evaluation systems when standards have not been communicated to employees. For example, in Rowe v. General Motors (457 F.2d 348 [1972]), the court ruled that one of the discriminatory aspects of the motor company's performance-appraisal policy was that criteria on which promotions were based were not clearly communicated to hourly employees. In contrast, the court more recently decided that a performance-appraisal system was legal because, among other things, employees were explicitly informed of the standards on which they would be evaluated (Zell v. United States). Therefore, the communication of clear, specific performance standards is essential in any performance-appraisal system.

Rater Training

Training supervisors to evaluate employees properly also seems to be an important consideration in avoiding legal problems. For example, in Carpenter v. Stephen F. Austin State University, the court ordered the development of written, objective guidelines to assist supervisors in making promotion and hiring decisions. In Rowe v. General Motors, the court noted that, without some sort of evaluation guidelines, it was impossible to determine whether employees were being judged by the same criteria.

Documentation

One of the most important considerations in a legally defensible performance-appraisal system is documentation. Reasons for personnel evaluations and subsequent actions must be properly recorded in writing if employers are to defend themselves adequately in the courts. In Marquez v.
Omaha District Sales Office, Ford Division of the Ford Motor Company (440 F.2d 1157 [1971]), the court found the company guilty of racial discrimination when it could not document its reason for removing an employee from a promotion list. Conversely, the court recently ruled in favor of a company's decision not to promote an employee because the personnel file contained many specific instances of inadequate performance (Turner v. State Highway Commission of Missouri, 31 EPD 33,352 [1982]).

The development, implementation, and evaluation of a performance-appraisal system are difficult undertakings for any organization, public or private. Because so many critical issues must be considered, organizations often settle for any system that will accommodate the short-term need. Often organizations, especially police agencies, lack the qualified staff to conduct the critical steps of performance appraisal, such as job analysis and criterion development. But the difficulty encountered in performance appraisal is far outweighed by the importance of doing it right, not only from a legal perspective, but for the overall health of any organization. A review of the law provides an excellent overview of performance-appraisal issues and points to several policies organizations should follow to increase the accuracy and appropriateness of performance-appraisal procedures. Analyze the job to ascertain characteristics important to successful performance. These characteristics should be specific and behaviorally anchored and incorporated into a rating instrument. General skills, abilities, and personality traits are criteria that should be avoided. Once performance standards are developed, they must be communicated to all employees. Since raters are the key to obtaining performance information, they must be trained on how to use the rating instrument.
At a minimum, raters should be provided with written instructions on the proper use of the instrument, as well as with written criteria upon which they are to base their judgments. Documentation of evaluations is critical because it usually leads to some form of personnel action. As with any system, performance appraisal must be monitored. There should periodically be a formal review of work performance behaviors and output measures to ensure that they have not become irrelevant or obsolete. Any system that relies on human judgment is susceptible to error, but a concerted effort to make the performance-appraisal system fit the needs and unique characteristics of the organization will go a long way toward improving productivity.

Performance-Appraisal Checklist

Included at the end of this chapter is a Performance-Appraisal Checklist. This was compiled to summarize the features a performance-appraisal system should have and the issues it should address.

PERFORMANCE-APPRAISAL CHECKLIST

1. Is performance appraisal conducted for all levels, occupations, or ranks within the organization?
2. Are the various objectives of the performance appraisal specified?
3. Is one performance-appraisal instrument used for all the various objectives?
4. Was a thorough job analysis conducted for each position before the development of the performance-appraisal instrument?
5. Was a single or multi-method process used to conduct the job analysis?
6. Is the rating conducted by the employee's immediate supervisor?
7. Is the rating conducted by more than one individual, i.e., multiple raters?
8. Is training provided to raters before the appraisal takes place?
9. Is the training provided to raters evaluated?
10. Are performance criteria communicated to the employee before the appraisal?
11. Are the results of the performance appraisal communicated to the employee in a face-to-face interview?
12. Is the performance-appraisal system objective, subjective, or both?
13.
Is the performance-appraisal system single or multi-method?
14. Has the performance-appraisal system been evaluated for construct, criterion, and content validity?

CHAPTER III

METHODOLOGY

Background

In October 1987, the Michigan State Police (MSP) established a Performance Evaluation Task Force with the responsibility for developing a new performance-appraisal system to replace or modify its current one. A subcommittee was given the task of determining what other police agencies throughout the United States were doing in the performance-appraisal arena. The subcommittee subsequently developed a 19-question survey that was mailed to 400 police agencies (50 state, 200 municipal, and 150 county) around the country. The sample population had representation from all 50 states and reflected small, medium, and large departments in terms of total enlisted officers. Survey respondents were also asked to include with their completed questionnaire a copy of their performance-appraisal instrument. Of the total sample population, 300 agencies returned the survey, and a majority of those agencies also included a copy of their appraisal instrument. These data were reviewed by the subcommittee to assist them in the eventual design of a new system for the Michigan State Police. The Director of the Michigan State Police was contacted and asked whether the data collected by the subcommittee could be used as the data base for this research study. Approval was granted by the department, and all relevant information was provided to the researcher. It is acknowledged that the data base for this research is archival in nature and that the writer had no conceptual input into the design or administration of the survey instrument.

The Sample

The sample population for this study was 400 police agencies located throughout the United States: 50 state, 200 municipal, and 150 county agencies (see Appendix A).
The rationale for this distribution was a combination of representativeness and accessibility. The MSP thought that they would be able to collect data from all of the state agencies in the country because of greater organizational ties and more similar working environments than with municipal and county police agencies. Because there are only 50 such agencies, surveying all of them was also not cost prohibitive. Since nationally there are far more municipal than county agencies, the sample population was weighted more heavily with this type. The size of the sample population was considered large enough to provide representative data without being cost prohibitive or too burdensome for coding and analysis responsibilities. The criteria for choosing individual police agencies for the sample population included agency size, regional distribution, and random selection. In terms of agency size, the MSP operated on the assumption that larger police organizations were more likely to have formal performance-appraisal systems. The minimum agency size was arbitrarily set at not less than 80 enlisted officers, and no limit was set for maximum enlisted strength. Although this criterion left a significant number of municipal and county agencies unrepresented, it did correlate with the stated purposes of the research study. The MSP wanted regional distribution among the sample population and therefore skewed the selection process so that each of the 50 states had municipal and county agencies represented. The rationale for regional distribution was to eliminate bias toward any one particular region of the country in the selection process. Beyond this regional criterion and the size variable already discussed, individual agencies were selected at random until the population of 400 police agencies was achieved. The source document used to make the random selection was the 1987 Uniform Crime Report, published by the Federal Bureau of Investigation.
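The selection procedure just described — an 80-officer floor, guaranteed per-state representation, and random fill from the UCR roster — can be sketched in a few lines. This is an illustrative reconstruction, not the MSP's actual procedure; the roster structure and field names are assumed.

```python
import random

def draw_sample(roster, n_total, min_enlisted=80, seed=1987):
    """Illustrative sketch of the MSP selection criteria: enforce the
    80-officer floor, guarantee each state at least one agency, then
    fill the remainder at random. Field names are hypothetical."""
    rng = random.Random(seed)
    eligible = [a for a in roster if a["enlisted"] >= min_enlisted]
    # One eligible agency per state first, to guarantee regional spread.
    by_state = {}
    for agency in eligible:
        by_state.setdefault(agency["state"], []).append(agency)
    sample = [rng.choice(group) for group in by_state.values()]
    # Fill the remainder by simple random selection without replacement.
    remaining = [a for a in eligible if a not in sample]
    sample.extend(rng.sample(remaining, n_total - len(sample)))
    return sample
```

The per-state pass mirrors the MSP's decision to skew selection so that every state is represented before random selection takes over.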
This national report provides various law enforcement statistics, one being the number of enlisted and civilian employees for all police agencies in the United States. There were several advantages and disadvantages to using this sample population for purposes of the research study. A major advantage was the size of the sample population. Of the 400 police agencies that received the survey questionnaire, 300, or 75%, returned a completed questionnaire. Coupled with the volume of data available was the fact that small, medium, and large police agencies from all over the United States were represented in the sample population. A major disadvantage to using this sample was inherent in archival data. The writer had no input into the size of the sample population or in selecting which police agencies would be surveyed. Also, the decision to include only those agencies with 80 or more enlisted officers weighted the sample heavily toward states with larger populations. However, given the purposes of the research study, the size and diversity of the sample population were considered adequate to draw conclusions on the extent of performance appraisal among police agencies in the United States.

Survey Questionnaire

In deciding what format to use to gather the necessary appraisal data, the subcommittee decided on a mail questionnaire (see Appendix B). The 19-question instrument was mailed to the appropriate department head with a cover letter signed by the Director of the Michigan State Police. The cover letter explained the formation of the Task Force and the purpose of the questionnaire; it pointed out that respondents would receive an executive summary of the findings. Although this was a minor detail, it was thought this approach would increase both the response rate and the quality of the data received (see Appendix C). The 19 questions were designed not only to gather information on the extent to which other police agencies had formal
performance-appraisal systems, but also to determine how long these systems had been in place. Specifically, the Department was interested in finding out what other agencies were using the appraisal information for, and whether or not they were using one instrument to appraise all the different ranks or levels within their respective police agency. Questions were included to assess how often other departments were conducting the appraisals, and who in the agency was responsible for doing the rating. Other areas of interest included the type of training other agencies were providing their raters, the kind of employee input that took place before implementing their performance-appraisal system, and whether or not the police agency conducted a job analysis of the individual ranks or levels that were part of the performance-appraisal process. Since troopers and sergeants in the MSP are members of a collective-bargaining unit, the Department was interested in determining whether the presence of a collective-bargaining agreement was affecting performance-appraisal systems within other police agencies. In terms of the survey instrument, a closed-answer format was selected, with an "other" category provided for any number of responses that fell outside of the answers provided. The questionnaire was prefaced with an instruction sheet that (a) stated the purpose of the research, (b) noted that the questionnaire was designed to be completed in no more than 15 minutes, (c) explained the use of the "other" category, and (d) provided a space for the name of a contact person to whom the executive summary could be mailed. This approach incorporated sound survey methods that would likely increase the response rate and help reduce rater error. The closed-statement format used in the survey instrument requires less time for respondents to complete and thus may increase response rate.
Open statements, such as "Describe the performance-appraisal method used to evaluate sergeants in your agency," were avoided because they elicit too much variance in responses. It is acknowledged that this approach also has its drawbacks in terms of capturing the actual methods used within the sample population. It does, however, decrease considerably the time and cost of coding the data. By providing an "other" category at the end of appropriate questions, respondents were given an opportunity to provide information outside of the answers provided. The disadvantages of the closed-statement format were also partially offset by the MSP instructing respondents to include with their completed survey a copy of their performance-appraisal instrument. Examination of the actual appraisal instrument provided additional information outside of the answers provided in the questionnaire. Because the questionnaire format requires the respondent to identify and comprehend the variables being presented, pretesting is imperative. Before initial mailing is even considered, the questionnaire must be pretested among respondents similar to those in the sample population. This procedure helps determine whether the respondent comprehends the questions in the manner intended and clears up question-wording and order problems. The MSP was cognizant of this need and used area county and municipal agencies as well as neighboring state departments to pretest the questionnaire. Before being finalized, the instrument was also carefully reviewed by several consultants who had been retained as part of the Task Force responsibility. Respondents were also provided with the name of a contact person in the department to call if they had any questions or problems with the survey instrument. There are several methodological advantages and disadvantages of the survey questionnaire format used by the MSP.
The writer believes that the format used by the MSP increased the response rate to the survey questionnaire. It was designed to be completed in 15 minutes or less, it was prefaced by a personal letter from the Director, and it advised respondents that they would receive an executive summary of the findings. Although the survey incorporated a closed-answer format, the inclusion of an "other" category with each question and the request that respondents include a copy of their appraisal instrument with the completed questionnaire partially offset the limited nature of the data. A major disadvantage of the survey questionnaire is the lack of control over who in the police agency provided the performance-appraisal information. For example, Question 18 asked the individual filling out the questionnaire to "Rate the level of acceptance by employees of your formal performance evaluation system." It is very possible that the answer given would vary, depending on the position or responsibilities of the person filling out the questionnaire. This format also assumes that the individual completing the questionnaire is knowledgeable about all areas of the performance-appraisal process.

To accomplish the purposes of this study, the raw data collected by the MSP were coded and specific variables identified for each of the 19 survey questions. Unique identification numbers were then assigned to each of the police agencies in the sample population. To adjust the data set obtained from the MSP to meet the goals of the research study, several modifications were made. Although the actual number of enlisted officers was coded into the computer data base, it was impractical to use such a wide disparity for correlation purposes. Therefore, the sample population was grouped in the following manner: Small agencies were those with 75 to 199 enlisted officers, medium those with 200 to 499, and large those with 500 or more.
This grouping, although somewhat arbitrary, was selected for both statistical and content reasons. With the sample population comprising 400 police agencies, there was an almost equal distribution of individual departments within each of the three groups. Large departments (500 or more enlisted) are more likely to have specialized units in areas that affect performance appraisal, such as recruiting, personnel, training, and planning and research. In contrast, it was assumed that small departments (75 to 199 enlisted) would have few or no resources dedicated to specialized areas outside of general patrol. There are, however, medium-size departments (200 to 499 enlisted) that do have varying degrees of in-house specialization assigned to personnel-related areas. A similar grouping for police agency size was incorporated into the Michigan State University Manpower Planning Development Project (1981). The 1987 Uniform Crime Report incorporated a nine-region breakdown for police agencies in the United States, and this format was initially coded into the computer data base. This large a breakdown also proved impractical for correlation purposes. Therefore, the variable of geographic region was reduced to three categories. The first region included the New England (CT, ME, MA, NH, RI, VT), Middle Atlantic (NJ, NY, PA), and East North Central (IL, IN, MI, OH, WI) states. The second region included the South Atlantic (DE, DC, FL, GA, MD, NC, SC, VA, WV) and East South Central (AL, KY, MS, TN) states. The third and final region included the West North Central (IA, KS, MN, MO, NE, ND, SD), West South Central (AR, LA, OK, TX), Mountain (AZ, CO, ID, MT, NV, NM, UT, WY), and Pacific (AK, CA, HI, OR, WA) states. The remaining demographic variables of agency type (state, municipal, and county), as well as the presence or absence of a collective-bargaining agreement, did not require any modification for correlational purposes.
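The size and region recodes described above amount to two small lookup functions. The following sketch is illustrative only; the category labels are invented here, while the cutpoints and state lists follow the text.

```python
# Recode of enlisted strength into the study's three size categories
# (75-199 small, 200-499 medium, 500 or more large).
def size_category(enlisted):
    if enlisted < 200:
        return "small"
    if enlisted < 500:
        return "medium"
    return "large"

# Collapse of the UCR's nine Census divisions into the study's three regions.
REGION_1 = {"CT", "ME", "MA", "NH", "RI", "VT",   # New England
            "NJ", "NY", "PA",                     # Middle Atlantic
            "IL", "IN", "MI", "OH", "WI"}         # East North Central
REGION_2 = {"DE", "DC", "FL", "GA", "MD", "NC", "SC", "VA", "WV",  # South Atl.
            "AL", "KY", "MS", "TN"}               # East South Central

def region_category(state):
    if state in REGION_1:
        return 1
    if state in REGION_2:
        return 2
    return 3  # West North/South Central, Mountain, and Pacific states
```

Recoding continuous or many-valued variables into a few ordered categories in this way is what makes the chi-square cross-tabulations reported in Chapter IV practical.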
Unlike demographic variables, internal variables are considered component parts of a formal performance-appraisal process. Three specific internal variables were chosen for analysis: (a) level of training provided to raters, (b) presence of a systematic job analysis, and (c) the appraisal format itself. These variables were derived from Survey Questions 9, 10, and 16, respectively, and were correlated with the remaining survey questions to ascertain possible relationships. These three internal variables were chosen because they are critical component parts of any performance-appraisal system. Reliability in performance appraisal becomes possible only when there is substantial agreement among raters as to what constitutes good and bad employee performance. This cannot be accomplished unless there is adequate training for supervisors. Training increases reliability and improves objectivity and accuracy in evaluating an employee's performance. Job analysis and the appraisal format itself are crucial to the issue of content validity within a performance-appraisal system. Content validity ensures that the appraisal measure and its administration derive logically from the conceptual definition of the performance dimension. This aspect of validity is tied directly to the job-analysis process. To achieve content validity, the performance-appraisal instrument must measure the actual and important duties and tasks of the job in type and proportion. Although other internal variables were present in the survey data, i.e., frequency of rating, purposes of rating, and type of employee input, the writer did not think they were as critical to the performance-appraisal process as the three selected for analysis. The final purpose of the study was to conduct a qualitative analysis on the performance-appraisal instruments returned by the sample population with their completed survey questionnaires.
Of the 300 departments that returned the questionnaire, 182, or 60%, enclosed a copy of their performance-appraisal instrument as requested. Because it was unnecessary to conduct a qualitative analysis on all 182 performance-appraisal instruments, a random sample of 50 instruments was chosen (see Appendix D). The analysis was conducted as a three-step process. First, the 50 instruments were categorized by responses given to Question 16 (i.e., indicate the format(s) used to evaluate your employees: graphic rating scale, BARS, written essay, forced choice, etc.). Instruments were then examined to determine differences and similarities among the categories in terms of instructions for raters, justification required for an unsatisfactory rating, requirement for discussion of the rating with the employee, and the measurement scale used. This criterion was chosen because it provided information not captured in the survey questionnaire, and which the writer thought was important to the performance-appraisal process. In the second step of the qualitative analysis, the various traits or performance dimensions used to evaluate departmental personnel, i.e., written communication, report writing, and interpersonal skills, were examined. In the third and final step, the actual performance-appraisal instruments in the subsample were compared against an academic standard. This standard was derived from a thorough review of the performance-appraisal literature. For example, the performance-appraisal literature has generally defined a graphic rating scale as a list of personality or behavioral traits arrived at by analysis of factors that appear to lead to success or failure in a particular job. The rater is instructed to indicate along a scale the degree to which the ratee possesses that particular quality. Do instruments categorized by respondents as graphic rating scales actually meet this academic standard?
CHAPTER IV

ANALYSIS AND DISCUSSION OF RESULTS

Introduction

In this chapter, the findings of the study are presented within the framework of the survey questionnaire and in the general context of the research questions in four sections: (a) general findings; (b) demographic variables of police agency size, type, geographic region, and presence of a collective-bargaining agreement; (c) internal relationships; and (d) qualitative findings. Data for the general findings section of the results were derived from the completed survey questionnaires and from the 1987 FBI Uniform Crime Report. Survey responses were coded and used to create the composite variables. Frequency distributions, percentages, and descriptive summary statistics were computed. The chi-square statistic was used as the primary test for the existence of zero-order relationships between the factors under consideration. These analyses provided evidence that documented the effects of a number of environmental agency features on the development and use of a particular performance-appraisal system. For purposes of this study, statistical significance was defined as a probability level of less than .05. The actual strength of each relationship was interpreted through use of the gamma statistic.

General Findings

The survey questionnaire was mailed to 400 police agencies: 50 state police, 200 municipal, and 150 county. The sample included representation from all 50 states, with a minimum enlisted strength of 80 officers and a maximum of 27,425. Of the total sample, 300 agencies returned a completed questionnaire: 47 state police, 154 municipal, and 99 county departments. Table 1 provides a summary of agency returns.

Table 1.--Summary of police agency survey returns (N = 300).
                                                 Frequency   Percent

Agency Level
  State                                               47        16
  County                                              99        33
  Municipal                                          154        51

Agency Size (full-time enlisted)
  75-199                                             105        35
  200-499                                             95        32
  500+                                               100        33

Geographic Region
  New England, Middle Atlantic,
    East North Central states                        105        35
  South Atlantic, East South Central states           83        28
  West South Central, West North Central,
    Mountain, and Pacific states                     112        37

The data were divided for each of the three agency types and are presented for each question in the survey in Table 2. The table indicates a percentage of total responses rounded off to equal 100%. Questions 3, 5, 11, 15, and 16 asked the respondent to check all foils that were applicable; therefore, these totals do not equal 100%.

Table 2.--Employee performance-evaluation systems in law enforcement (percentages).

                                   State  Municipal  County  All Agencies

1. Does your agency have a formal performance evaluation system
   for enlisted officers (i.e., a written, documented process)?
     Yes                             92      84        91        87
     No                               8      16         9        13

2. How long has your current formal performance evaluation system
   been in place?
     Less than one year               2       8         6         6
     One to three years              14      19        18        18
     Three to five years             16      11        14        13
     More than five years            67      62        62        63

3. Indicate the enlisted ranks that are evaluated within your
   present system. Check all that apply.
     Trooper, etc.                  100      95        98        96
     Corporal                        51      30        55        42
     Sergeant                        95      92        96        94
     Lieutenant                      86      88        88        88
     Captain                         72      71        76        73
     Major                           54      22        34        31
     Other                           26      40        36        36

4. Does your agency utilize the same written instrument for all the
   ranks checked in question 3?
     Yes                             77      57        77        67
     No                              23      43        23        33

5. What are the major purposes of your performance evaluation
   system? Check all that apply.
     Promotion                       70      63        72        76
     Retention                       55      51        65        57
     Training/development            69      69        63        67
     Compensation                    52      39        50        45
     Feedback/motivation             91      85        85        86
     Discipline/discharge            62      54        55        56
     Other                            2       9         6         7

6. Does your agency utilize the same written instrument for all the
   purposes checked in question 5?
     Yes                             86      82        90        85
     No                              14      18        10        15

7.
According to your agency policy, how often is a formal performance
   evaluation conducted? (Check the appropriate box for each rank.)
     Officer
       Monthly                        2       4         0         2
       Quarterly                      2       5         1         3
       Semi-annually                 12      35        28        29
       Annually                      81      54        68        73
       Other                          2       2         2         2
     Corporal
       Monthly                        0       5         0         0
       Quarterly                      0      30         2         3
       Semi-annually                 17      65        27        26
       Annually                      83       0        69        70
       Other                          0       0         2         1
     Sergeant
       Monthly                        0       1         0         0
       Quarterly                      0       4         1         3
       Semi-annually                 15      38        28        30
       Annually                      83      57        69        66
       Other                          3       0         2         1
     Lieutenant
       Monthly                        0       1         0         0
       Quarterly                      0       3         1         2
       Semi-annually                 11      35        26        28
       Annually                      87      61        70        69
       Other                          3       1         3         2
     Captain
       Monthly                        0       1         0         1
       Quarterly                      0       2         1         2
       Semi-annually                  7      32        21        24
       Annually                      90      64        72        71
       Other                          3       1         6         3
     Major
       Monthly                        0       3         0         1
       Quarterly                      0       3         3         2
       Semi-annually                  4      34        24        23
       Annually                      92      61        68        71
       Other                          4       0         5         3

8. Typically, who in your agency initially completes the formal
   performance evaluation instrument?
     Immediate supervisor            91      98        99        97
     Supervisor 2 levels above        2       1         1         0
     Supervisor 3 levels above        7       1         0         0
     Other                            0       0         0         2

9. Indicate the most common method of training your agency provides
   supervisors who complete the formal performance evaluation
   instrument. Check all that apply.
     No training                      1       3         6         3
     Informal training               17      30        51        29
     Formal written instructions     37      34        49        34
     Formal training sessions        42      32        49        33
     Other                            3       1         0         1

10. Did your agency conduct a systematic job/task analysis prior to
    the development and implementation of your formal performance
    evaluation system? Check the appropriate answer for each rank.

                            Yes                        No
                   State  Mun.  Cty.  All    State  Mun.  Cty.  All
     Officer         70    57    54    58      30    43    46    42
     Corporal        52    48    57    52      48    52    43    48
     Sergeant        67    53    53    55      33    47    47    45
     Lieutenant      66    52    49    53      34    48    51    47
     Captain         69    47    46    50      31    53    54    50
     Major           65    33    49    46      35    67    51    54

11. What type of employee input was obtained in developing your
    formal performance evaluation system? Check all that apply.
   No input                        21        12         23         14
   Oral comments                   36        22         35         23
   Written comments                29        25         30         23
   Structured group input          45        24         25         23
   Formal written survey           27        10          6          9
   Other                            7         7         20          9

12. Does your agency have a collective bargaining agreement with uniform personnel? If you answer no, go to question 14.
   Yes                             38        64         52         56
   No                              62        36         48         44

13. If you answered yes to question 12, does the collective bargaining agreement impact your formal performance evaluation system (i.e., time periods, who does the rating, purpose of the instrument, etc.)?
   Yes                             22        15         23         18
   No                              78        85         77         82

14. Is there a formal appeal process for the employee who is dissatisfied with their rating?
   Yes                             83        70         85         77
   No                              17        30         15         23

15. How are the results of the evaluation process communicated to the employee? Check all that apply.
   Oral feedback only               9         7          6          6
   Copy given to employee          33        20         34         21
   Summary given to employee        9         5         10          6
   Copy with interview             70        52         82         51
   Summary with interview          21        15         21         14
   Other                            2         2          3          2

16. What format(s) is utilized for your formal performance evaluation instrument? Check all that apply.
   Global ranking                  10         4          0          4
   Single graphic rating scale     12         9         15         11
   Separate graphic rating scale   50        65         74         66
   BARS                            14        10          4          8
   Forced choice                   21        24         18         22
   Written essay                   43        32         32         34
   Goal setting/MBO                41        23         14         23
   Other                           49         6          9          9

17. Check which of the following statements best describes your formal performance appraisal system.
   New/still being evaluated        5        10         12          9
   Acceptable/no plans to change   67        58         56         59
   Presently being reevaluated     21        27         28         26
   Other                            7         5          5          5

18. Rate the level of acceptance by employees of your formal performance evaluation system.
   Poor                             2         3          6          4
   Fair                            24        27         24         25
   Good                            31        56         46         48
   Very good                       29        13         21         18
   Excellent                       14         1          3          4

Note: Forty-seven out of 50 state agencies surveyed responded. Average size = 980, ranging from a minimum of 115 to a maximum of 5,624 enlisted personnel.
One hundred fifty-four out of 200 municipal agencies surveyed responded. Average size = 855, ranging from a minimum of 84 to a maximum of 27,425 enlisted personnel. Ninety-nine out of 150 county agencies surveyed responded. Average size = 392, ranging from a minimum of 86 to a maximum of 5,418 enlisted personnel.

The data indicated that a majority of agencies (82%) had a formal performance-appraisal system for enlisted officers that had been in place for more than five years. These systems generally encompassed all ranks within the department, up to and including captain. The same written instrument was used for all ranks by the majority of respondents, with a high of 77% for state and county departments and a low of 57% for municipal ones.

In terms of the purpose of the performance-appraisal system, more than 80% of the respondents indicated employee feedback/motivation, followed by 76% promotion, 67% training/development, 57% retention, 56% discipline/discharge, and 45% compensation. A high percentage of the sample population (80%) used the same instrument for all the various purposes and conducted the appraisal annually.

The most common method of training provided to raters was a combination of informal and formal in-house programs. This trend was also apparent in the type of employee input obtained before the implementation of a particular performance-appraisal system. The majority of respondents used oral and written comments from employees in conjunction with a structured group process.

In terms of those police departments having collective-bargaining agreements, municipal departments reflected a markedly higher percentage: 64%, compared to 38% for state and 52% for county departments.

Slightly more than half of the respondents conducted a systematic job analysis before the development and implementation of their formal performance-appraisal system. This trend remained constant regardless of the different ranks evaluated.
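The "All Agencies" column in Table 2 can be cross-checked as the response-weighted average of the three agency-type columns, using the respondent counts given in the table note (47 state, 154 municipal, 99 county). The sketch below does this for the Question 1 "Yes" row; the small discrepancy from the printed figure presumably reflects rounding and item non-response.

```python
# Recombining the per-type percentages from Table 2 into an overall
# figure, weighted by the respondent counts in the table note.
counts = {"state": 47, "municipal": 154, "county": 99}     # respondents
yes_pct = {"state": 92, "municipal": 84, "county": 91}     # Q1 "Yes" row

def pooled_percent(counts, pct):
    """Respondent-weighted average percentage across agency types."""
    total = sum(counts.values())
    return sum(counts[t] * pct[t] for t in counts) / total

pooled = pooled_percent(counts, yes_pct)  # ~87.6, vs. 87 printed in Table 2
```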
The nature and quality of the job-analysis process, however, was not gathered as part of the survey questionnaire. Nearly all of the respondents indicated that the immediate supervisor initially completed the formal performance-evaluation instrument.

Slightly more than half of the sample population used a separate graphic rating scale format to measure several performance areas or traits of their officers. This type of format generally incorporated a written comment section for the rater to justify an unacceptable rating. The types of formats least used by respondents were global ranking, forced choice, and behaviorally anchored rating scales (BARS). It was apparent from examining the actual instruments that were returned with the questionnaire that many departments were using formats that combined more than one type of design, i.e., graphic rating scale, written essay, and goal setting.

Fifty-one percent of the respondents communicated the results of the appraisal process in a face-to-face interview with the employee, along with a copy of the completed rating. Slightly more than half of the sample population viewed their performance-appraisal system as "generally acceptable" and had no immediate plans to modify or replace it. In terms of employee acceptance of the performance-appraisal system, 30% rated their system as "fair," and approximately 50% rated it as "good."

Summary of Findings

A "typical" performance-appraisal system among the sample population had been in place more than five years and evaluated all ranks through captain using the same instrument. It was generally used for the purposes of employee feedback, training, and promotion and was conducted annually by the employee's immediate supervisor.
Job analysis was not always conducted before the development of the system (approximately 52% of the time), but employees generally were given the opportunity to provide input in the form of oral and written comments and through structured groups, i.e., quality circles. Although most police agencies in the sample population provided training for their raters, it was either very informal in nature, i.e., a brief pep talk just before the appraisal took place, or formal written instructions included within the appraisal format. Only one-third conducted formal training sessions for raters.

A "typical" performance-appraisal system also used a separate graphic scale to evaluate various traits and performance areas. The format included written essay and goal-setting components. The system was described as acceptable, and there were no plans to change it in the future. The level of acceptance by employees of their performance-appraisal system was characterized as "good" to "very good."

It is worthwhile to note that the "level of acceptance" by employees of their performance-appraisal system might have been rated "poor to fair" if the question had been answered by a patrolman or sergeant. It has been the writer's experience that very few rank-and-file police officers like any appraisal system. It is most likely that the person completing the questionnaire had a management-level position within the police department and therefore was more inclined to portray the level of acceptance by employees as favorable.

Demographic Variables

One of the purposes of the study was to examine various features of police agencies per se that might influence the type and design of their formal appraisal systems. Four specific features or demographic variables were chosen for analysis: (a) agency size, (b) type, (c) geographic region in which it is located, and (d) the presence or absence of a collective-bargaining agreement.
The three main agency types were state, municipal (city), and county. The size of the agencies was determined by "enlisted strength": small agencies having 75 to 199 officers, medium having 200 to 499, and large having more than 500 enlisted officers. The sample population was grouped into three main geographic regions: (a) New England, Middle Atlantic, and East North Central states; (b) South Atlantic and East South Central states; and (c) West South Central, West North Central, Mountain, and Pacific states. Summary statistics for demographic variables are presented in Table 3.

Table 3.--Demographic variables x survey questions.

Agency Type

A significant relationship existed between the type of police agency and the inclusion of the rank of major within the formal performance-appraisal system. The relationship was strongest among state and municipal agencies, the former including the rank of major more often and the latter including it less often than the expected value. It was possible, however, that this finding reflected the fact that the rank of major is much more common among state police agencies than it is among municipal departments. Also, the enlisted strength of state police agencies within the sample population was generally much larger than that of municipal departments. The larger the agency, the more likely the presence of higher ranks, owing to span of control.

A significant relationship was found between the type of agency and the presence or absence of a collective-bargaining agreement. Municipal departments had a collective-bargaining agreement for their enlisted officers more often than the expected value, whereas state agencies did less often.
Municipal police agencies have traditionally been unionized and would therefore reflect a higher level of collective-bargaining agreements. State police agencies, on the other hand, have traditionally been part of a civil service system covering all state employees and, for the most part, have not had the constitutional right to engage in collective bargaining.

The type of police agency and the particular format used within the performance-appraisal instrument were significantly related. State agencies were more likely to engage in goal-setting techniques with their officers as part of the appraisal process. County agencies, on the other hand, were less likely to incorporate this appraisal technique. A possible explanation for this finding is that many state governments have incorporated an MBO or goal-setting approach within their budget process. Perhaps this practice has proven somewhat successful and has carried over into other areas such as performance appraisal. It may also be explained by the fact that state civil service agencies tend to be more sophisticated than their county and municipal counterparts.

Agency Size

The size of the police agency was related to the inclusion of the rank of major within the performance-appraisal process. Medium and large agencies (those having an enlisted strength of more than 200 officers) were more likely to include the rank of major in the appraisal process. As previously indicated, larger agencies generally have a greater span of control and are more likely to have the rank of major in their organizational hierarchy. Smaller agencies (those having fewer than 200 officers) most likely will not have positions above the captain rank.

A significant relationship was found between police agency size and the particular format used to evaluate officers. Large agencies within the sample population were more likely to use a BARS.
Because BARS require considerable time and expertise to develop and implement, it is reasonable to assume that the larger agencies were more likely to have the necessary resources. Many large departments have personnel departments and, as such, are better equipped to develop and implement a BARS format within their respective police agencies.

Geographic Region

The regional distribution of the sample population was related to whether or not respondents had a collective-bargaining agreement with enlisted personnel. Respondents in the New England, Middle Atlantic, and East North Central states were more likely to have collective-bargaining agreements than those in the other two regions. This geographic region was made up of predominantly industrial states that tended to have larger municipal police departments than respondents in the other two regions. As discussed under agency type, municipal agencies have traditionally been unionized, thus explaining the relationship.

Collective-Bargaining Agreement

A significant relationship was found between the presence of a collective-bargaining agreement and the enlisted ranks not included in the appraisal process. Those agencies that had a collective-bargaining agreement were less likely to appraise the captain and major ranks within their formal system. A possible explanation for this finding is that the ranks of captain and major are considered high-level supervision or administration and thus are not normally part of the bargaining unit.

The writer expected to find a significant relationship between this variable and the purposes of the appraisal system because police contracts frequently have provisions pertaining to personnel issues, i.e., compensation, promotion, and training opportunities. The lack of any correlation may be explained by "standard operating procedure" within police agencies.
Regardless of the presence or absence of a collective-bargaining agreement, police agencies traditionally use appraisal information for similar purposes.

Summary

Overall, the demographic variables of agency type, size, geographic region, and presence of a collective-bargaining agreement seemed to have little significant effect on the performance-appraisal systems of the sample population. The inclusion of the rank of major within the formal appraisal seemed to be affected by the type of agency, its enlisted strength, and the presence or absence of a collective-bargaining agreement.

The type of format chosen to appraise enlisted officers was somewhat influenced by the type of agency and the relative size of agencies in the sample population. The presence of a collective-bargaining agreement correlated with a specific geographical region and with a particular type of agency.

Internal Variables

Unlike demographic variables, internal variables were considered component parts of a formal appraisal process. Three specific variables were chosen for analysis: (a) level of training provided to raters, (b) systematic job analysis, and (c) the appraisal format itself. These variables were derived from Survey Questions 9, 10, and 16, respectively, and were correlated with the remaining survey questions to ascertain possible relationships. Summary statistics for the internal variables are presented in Tables 4 through 13.

Level of Training

Question 9 asked respondents to indicate the most common method of training provided to supervisors charged with completion of the formal performance-appraisal instrument. Potential responses were "no training," "informal training (i.e., oral instructions)," "formal written instructions," "formal training session (i.e., workshop, seminar, in-service, etc.)," and "other." Respondents were directed to check all answers that applied to their agency.
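The cross-tabulations reported in Tables 4 through 13 pair a chi-square test of independence with a Goodman-Kruskal gamma coefficient. For a 2x2 contingency table, both statistics have simple closed forms; the sketch below illustrates them with invented counts (the labels and figures are not taken from the study's data).

```python
# Chi-square statistic and Goodman-Kruskal gamma for a 2x2 table,
# the pairing reported in Tables 4-13. Counts below are illustrative.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def gamma_2x2(a, b, c, d):
    """Gamma: (concordant - discordant) / (concordant + discordant) pairs."""
    return (a * d - b * c) / (a * d + b * c)

# 200 hypothetical agencies cross-classified by, say,
# "formal rater training" (rows) x "job analysis conducted" (columns).
a, b, c, d = 40, 60, 70, 30
chi2 = chi_square_2x2(a, b, c, d)  # ~18.18, well past the .05 critical value (3.84, 1 df)
g = gamma_2x2(a, b, c, d)          # ~-0.56, a moderate negative association
```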
A significant relationship was found between training and use of the same written instrument to evaluate all ranks in the agency (see Table 4). Those agencies in the sample population that did not use the same written instrument were more likely to provide their supervisors with formal training.

Table 4.--Informal training provided to rater x survey questions.

Survey Question                           X2/Prob      Gamma      n
 4. Same instrument for all ranks        10.0/.001     .4250     244
11d. Formal written survey of all         7.6/.005    -.3680     244
     employees
17. Status of performance-appraisal      15.2/.001    -.2900     256
    system
16e. Forced-choice format                 5.2/.021    -.3528     252
16g. MBO format                           4.0/.045    -.2986     252

The performance-appraisal literature has supported the use of different instruments for evaluation when the knowledge, skills, or abilities vary among jobs or, in this study, among ranks within police agencies. If, in fact, different appraisal systems are used to evaluate various ranks, the level of training provided to supervisors should also increase so that employees' knowledge, skills, and abilities can be adequately measured. It may also indicate a greater level of sophistication among certain agencies in the sample population. In this instance, the sophistication is evident through formal training for supervisors in conjunction with various methods to appraise employees.

A significant relationship was also found between supervisor training and the subsequent use of performance-appraisal information (see Table 5). Those agencies that gave their supervisors formal training were more likely to use the results for employee training and development. It has been the writer's experience that supervisors who are better trained will exhibit a more positive approach to the performance-appraisal process because they understand its value to the organization. They are also better equipped to identify employees' strengths and weaknesses, which is crucial to identifying training needs and fostering employees' development.
This is especially true when a component of the training focuses on "people skills" that enhance the rater's coaching and counseling techniques.

The relationship between the type of employee input respondents obtained in developing their appraisal system and the level of training provided to supervisors was statistically significant (Table 6). Agencies that solicited input in the form of written comments or through structured focus groups were more likely to provide their supervisors with written instructions or a formal training session.

Table 5.--Formal written instructions provided to rater x survey questions.

Survey Question                           X2/Prob      Gamma      n
 4. Same instrument for all ranks         9.2/.002     .4063     244
11b. Oral comments from selected          6.5/.010     .3390     244
     employees
11c. Written comments from selected      14.1/.000     .4829     244
     employees
11d. Structured group input               5.8/.015     .3191     244
17. Status of performance-appraisal       9.1/.026     .2597     256
    system

Table 6.--Formal training session provided to rater x survey questions.

Survey Question                           X2/Prob      Gamma      n
 4. Same instrument for all ranks         1.9/.162     .1911     244
11a. No employee input                   19.1/.000    -.6272     243
11b. Oral comments from selected          4.8/.027     .2932     244
     employees
11c. Written comments from selected       8.1/.004     .3758     244
     employees
11d. Structured group input              27.9/.000     .6477     244
11e. Formal survey of all employees       6.7/.009     .5745     245
14. Formal appeal process                17.1/.000     .5758     253
17. Status of performance-appraisal      31.3/.000     .5386     256
    system
18. Employee acceptance of system        17.2/.001    -.3479     252

Police agencies that accept the organizational value of obtaining employee input may also realize the value of providing a higher level of training to supervisors who conduct the appraisal. Both steps go hand in hand to make a performance-appraisal system successful (i.e., increasing long-term employee acceptance and reducing rater bias and error).
A significant relationship also existed between training and the current status of performance-appraisal systems within the sample population (Table 6). Those agencies that conducted formal training sessions for their supervisors were more likely to rate their performance-appraisal system as acceptable, with no immediate plans to modify or replace it. Well-trained supervisors generate a level of confidence in a performance-appraisal system. This confidence is shared not only by the raters but by those being evaluated as well.

Finally, the data indicated that the level of training and the level of employee acceptance of the appraisal system were significantly related (Table 6). When formal training was given to supervisors, there was a greater likelihood that the acceptance level was "very good." This relationship can be explained by the confidence factor discussed previously. Employee acceptance will generally correspond with the confidence employees have in their supervisors' ability to evaluate their performance accurately and fairly. When supervisors are properly trained in their respective appraisal systems, they are more likely to work out problems with employees and provide constructive steps for improved performance.

Systematic Job Analysis

Question 10 in the survey asked respondents to indicate whether they conducted a systematic job analysis on the ranks of officer, corporal, sergeant, lieutenant, captain, and major before development and implementation of their formal appraisal system. Chi-square analyses yielded a significant relationship between the presence of a job analysis for the ranks of officer, sergeant, and lieutenant and a greater likelihood that respondents used a different performance-appraisal instrument to appraise each rank (see Tables 7 through 12). This relationship has been strongly supported in the performance-appraisal literature.
A systematic job analysis identifies the knowledge, skills, and abilities required to perform successfully in a given job (or rank). Because the officer, sergeant, and lieutenant ranks incorporate different traits and performance areas, one would expect to see police agencies using different performance-appraisal instruments.

A significant relationship was also found between the presence of a systematic job analysis and the agency's desire to obtain employees' input before developing their performance-appraisal system. Respondents who conducted a job analysis for all ranks were more likely to solicit written comments or conduct a formal written survey of all employees.

Table 7.--Job analysis conducted on officer rank x survey questions.

Survey Question                           X2/Prob      Gamma      n
 4. Same instrument for all ranks         3.5/.000     .4856     241
 5c. Purpose is training/development      4.0/.044     .2730     241
 9b. Informal training                    9.6/.001    -.3902     239
 9c. Formal written instructions         11.5/.000     .4241     239
 9d. Formal training session             19.4/.000     .5326     239
11a. No employee input                   34.2/.000    -.7796     233
11c. Written comments from selected       5.8/.016     .3330     234
     employees
11d. Structured group input              19.1/.000     .5787     234
11e. Formal survey of all employees      19.4/.000     .8631     234
14. Formal appeal process                14.3/.000     .5341     238
16d. BARS format                          7.3/.006     .6604     237
17. Status of performance-appraisal      11.0/.011     .2945     239
    system
18. Employee acceptance of system        26.3/.000    -.5482     236

Table 8.--Job analysis conducted on corporal rank x survey questions.

Survey Question                           X2/Prob      Gamma      n
 3b. Appraises corporal rank              9.4/.002     .6599     117
 4. Same instrument for all ranks         5.7/.017    -.4540     117
 5d. Purpose is compensation              6.3/.011     .4432     117
 9b. Informal training                    4.0/.037    -.3584     115
 9c. Formal written instructions         11.8/.001     .5824     115
 9d. Formal training session              9.3/.002     .5290     115
11a. No employee input                   14.4/.000    -.7431     115
11c. Written comments from selected       3.4/.047     .3773     115
     employees
11d.
     Structured group input              4.3/.036     .4070     115
11e. Formal survey of all employees      12.7/.000    1.0000     114
14. Formal appeal process                 4.5/.033     .4790     115
16g. MBO format                           4.0/.043     .4253     115
18. Employee acceptance of system        14.3/.001    -.5524     114

Table 9.--Job analysis conducted on sergeant rank x survey questions.

Survey Question                           X2/Prob      Gamma      n
 4. Same instrument for all ranks         8.9/.002    -.4062     227
 5d. Purpose is compensation              4.5/.032     .2819     227
 9b. Informal training                    6.9/.008     .3416     225
 9c. Formal written instructions          8.8/.003     .3812     225
 9d. Formal training session             17.2/.000     .5161     225
11a. No employee input                   30.0/.000     .7674     219
11c. Written comments from selected       7.3/.006     .3803     220
     employees
11d. Structured group input              10.4/.001     .4448     220
11e. Formal survey of all employees      16.1/.000     .7552     220
14. Formal appeal process                 7.0/.007     .4059     224
16d. BARS format                          9.1/.002     .7020     224
17. Status of performance-appraisal       9.3/.024     .2640     225
    system
18. Employee acceptance of system        27.6/.000    -.5736     222

Table 10.--Job analysis conducted on lieutenant rank x survey questions.

Survey Question                           X2/Prob      Gamma      n
 4. Same instrument for all ranks         8.5/.003     .4288     214
 9b. Informal training                    7.9/.004    -.3753     212
 9c. Formal written instructions          8.1/.004     .3785     212
 9d. Formal training session             16.7/.000     .5214     212
11a. No employee input                   24.4/.000    -.7328     206
11c. Written comments from selected       6.3/.011     .3628     207
     employees
11d. Structured group input               9.4/.002     .4325     207
11e. Formal survey of all employees      15.9/.000     .7568     207
16d. BARS format                         10.2/.001     .7244     211
18. Employee acceptance of system        22.3/.000    -.5334     210

Table 11.--Job analysis conducted on captain rank x survey questions.

Survey Question                           X2/Prob      Gamma      n
 9d. Formal training session             15.1/.000     .5287     187
11a. No employee input                   16.1/.000    -.6740     182
11e. Formal survey of all employees      10.1/.001     .6253     183
16d. BARS format                          6.3/.001     .6030     186
16g. MBO format                           7.0/.008     .4363     186
18.
    Employee acceptance of system        17.2/.001    -.5046     185

Table 12.--Job analysis conducted on major rank x survey questions.

Survey Question                           X2/Prob      Gamma      n
 9d. Formal training session              8.7/.003     .5603     101
11a. No employee input                    5.0/.025    -.5721     100
11e. Formal survey of all employees       6.3/.011     .7269     100
14. Formal appeal process                 5.5/.018     .6321     101
16g. MBO format                           6.1/.013     .5066     102
18. Employee acceptance of system        10.6/.004    -.5285     100

Police agencies that understand the importance of a systematic job analysis also understand the long-term benefit of obtaining employee input. It creates an environment in which employees believe they have a stake or ownership in the performance-appraisal system. This environment also generates a higher level of confidence in the overall appraisal process.

Police agencies in the sample population that conducted a systematic job analysis for the officer, sergeant, lieutenant, and captain ranks were also more likely to use a BARS system to appraise their employees. This relationship has been strongly supported in the performance-appraisal literature, since one of the critical steps in the development of a BARS system is the identification of all relevant job dimensions. This identification process is inherent in a systematic job analysis.

The data revealed a strong and consistent relationship between the presence of job analysis and the type of training given to supervisors. When a job analysis was completed, respondents were less likely to provide informal training and more likely to provide either formal written instructions or formal training sessions for their supervisors. This relationship remained strong for all ranks. Job analysis and supervisory training are two cornerstones of any successful performance-appraisal system. It may be suggested that any agency that would spend the time and expertise to conduct a systematic job analysis would also see the value in formal training for supervisors.
Performance-Appraisal Format

The data revealed a significant relationship between the type of input respondents obtained before developing their performance-appraisal system and the use of a BARS. Those agencies that indicated having structured group input were more likely to incorporate a BARS format (Table 13). Because BARS require the identification of all relevant job dimensions, employees are often used to provide input into the process. The use of supervisors and subordinates in this process ensures that all relevant job dimensions are identified before the behavioral anchors for each job dimension are written.

Table 13.--Behaviorally anchored rating scale x survey questions.

Survey Question                           X2/Prob      Gamma      n
11d. Structured group input from          7.4/.006     .5465     241
     employees

The writer expected to see the type of performance-appraisal format correlate with individual performance-appraisal objectives. For example, the literature would support that BARS are excellent formats to use if the objective of the performance appraisal is employee feedback/development and assessing training needs, because they are far more specific in terms of identifying employee behavior relative to performance on a specific job dimension. The absence of any correlation in the data can perhaps be explained by the fact that police agencies generally choose a particular format based on cost, time, and expediency factors, not as a result of first identifying objectives and then choosing the appropriate format. One would expect to find this relationship within a sophisticated police department where the necessary financial and personnel resources are available.

Summary

Of the internal variables, the presence of a systematic job analysis appeared to have the greatest effect on performance-appraisal systems within the sample population. Those police agencies that conducted a job analysis for the various ranks were more likely to use different instruments to evaluate each rank.
They were also more likely to obtain formal input from all employees before developing their performance-appraisal system. Respondents who incorporated job analysis in their performance-appraisal process were more likely to provide either formal written instructions or formal training sessions for their supervisors. In terms of the type of instrument used to appraise employees, completion of a job analysis showed a strong relationship with the use of a BARS.

The level of training provided to supervisors is another internal variable that had a significant effect on the performance-appraisal process. When supervisors received formal written instructions or training sessions, respondents were more likely to use the appraisal information for employee development and more likely to have an appeal process in place. When this higher level of training was present, agencies in the sample population were more likely to rate their systems as "adequate" and to characterize the employee acceptance level as "very good."

The data demonstrated a strong relationship between obtaining formal input from all employees before developing a performance-appraisal system and providing formal training for supervisors. Those agencies that gave supervisors formal training were also more likely to use different instruments to evaluate each rank.

Qualitative Findings

The purpose of this section is to describe and examine the performance-appraisal instruments returned by police agencies with their completed survey questionnaires. Of the 300 departments in the sample population that returned the questionnaire, 182, or approximately 60%, enclosed a copy of their departmental performance-appraisal instrument. A random sample of 50 instruments was chosen for analysis. The 50 instruments chosen at random for analysis were representative of the sample population.
The ratio of agencies that returned the survey questionnaire to agencies selected for qualitative analysis was 51% to 58% for municipal, 33% to 26% for county, and 16% to 16% for state police agencies.

Comparison of Formats

Twenty-three agencies in the subsample indicated that they used only a separate graphic rating scale, eight incorporated a separate graphic rating scale with a written essay, and five indicated using both formats along with goal setting or MBO. Five of the subsample members indicated using only a BARS, while the remaining nine indicated a multiple format, usually consisting of all of the previously mentioned formats.

The five categories of instruments had numerous similarities but very few differences in their overall format. All of the appraisal instruments in the subsample included directions for the rater on how to complete the appraisal. The length of the instrument ranged from one to ten pages. The length of the instrument did not correspond to any particular category, although those of agencies that indicated using a BARS ranged from a minimum of three pages to a maximum of ten pages.

All of the subsample except two (separate graphic rating scale) instructed the rater to include in the appraisal examples of "work well done." This was documented, for the most part, in a separate section of the appraisal form. Approximately half of the subsample required raters to provide written documentation when they indicated unsatisfactory performance or when the employee did not meet expectations. This technique was not particular to any one of the five categories but was present to some extent in all five. Even though a goal-setting or MBO format was indicated in only 7 of the 50 questionnaires, 38 instruments incorporated a variation of goal setting.
This aspect was generally included in a separate section of the appraisal and was worded as "areas to be worked on next appraisal period" or "individual performance objectives, remarks, or significant planning information." Forty-three agencies in the subsample required that the rater discuss the results of the appraisal with the employee before review by the next level of supervision. All 50 agencies provided a section within the appraisal form where employees could provide written comments after the review process had been completed. The type of scale used to measure employee performance within a given dimension, such as report writing, showed the greatest variance. Scales ranged from three categories (proficient, needs improvement, unsatisfactory) to five categories (exceeds standard, very good, meets standard, needs improvement, below standard). Twenty-five of the agencies incorporated a numerical range within each level, usually three points but as high as seven points. Those agencies that incorporated a point scale totaled the points as part of an overall evaluation, whereas those that did not merely summarized the ratings given within each performance area. Many of the instruments, more than 80%, provided behavioral statements for the rater within each performance area. For example, under dependability, below expectations would be characterized as "constantly has to be watched or prodded." Exceeds expectations in the same area would be characterized as "never has to be watched over or prodded." This technique was reflected in the total subsample except for two agencies, but it was most predominant in those instruments where BARS was indicated as the only format used.

Performance Dimensions/Traits

Examination of the performance-appraisal instruments in the subsample indicated a high degree of commonality in terms of performance dimensions.
For the most part, police officers were evaluated on similar knowledge, skills, and abilities, regardless of the type of performance-appraisal format. "Physical condition" and "appearance" appeared most often, with "loyalty" appearing least often. It was apparent from examination of the instruments that the terminology and definitions provided to clarify these dimensions were consistent throughout the subsample. Performance dimensions similar in definition or scope were grouped together, based on the writer's experience in the field. For example, judgment was defined in one instrument as "decision-making and problem-solving ability." In another instrument the same dimension was called "decision making" and was defined as "officer's ability to solve problems." Seventy-two percent of the subsample measured their officers on at least 15 separate performance dimensions or traits, whereas 25% incorporated more than 15 dimensions. The performance dimensions/traits and their percentages of inclusion are presented in Table 14.

Table 14.--Performance dimensions/traits (n = 50).

Dimension/Trait                              Percent Inclusion
Physical condition/appearance                       70
Care and use of equipment                           62
Interpersonal skills/teamwork                       60
Report writing                                      54
Departmental knowledge/rules/regulations            52
Decision making/problem solving                     52
Attendance                                          50
Public relations                                    48
Initiative                                          48
Oral communication                                  42
Written communication                               40
Job knowledge                                       40
Attitude                                            38
Handling stress                                     36
Dependability                                       34
Quality of work                                     32
Criminal investigation                              28
Quantity of work                                    28
Officer safety                                      28
Leadership                                          20
Traffic enforcement                                 14
Time management                                     10
Courtesy                                            10
Driving skill                                        6
Loyalty                                              2

Academic Standard or Definition

As discussed in the literature review chapter, the graphic rating scale, with its almost endless varieties, is the most common method of evaluating police officers' performance.
A list of personality or behavioral traits is arrived at by analyzing the factors that appear to lead to success or failure on a particular job. Various descriptive phrases or adjectives reflecting degrees of a given activity, such as "judgment," are prepared. The rater is instructed to indicate along a scale the degree to which the ratee possesses that particular quality. BARS are an attempt to reduce the ambiguity that often exists within graphic rating scales. The development of a BARS system is generally accomplished in four stages and requires a considerable commitment of organizational time and manpower. Stage one consists of a detailed task analysis, which results in behavioral descriptions of the major activity areas and tasks that comprise a particular job. Stage two entails the assignment of "importance scores" to each task through systematic collection of data concerning the frequency of performance of the task and the criticality of the task to the job. Stage three involves the development and refinement of behavioral statements describing various levels of proficiency for the important tasks. The final stage investigates the validity and reliability of the instrument. When comparing the 50 instruments in the subsample against these academic standards, it was obvious that the majority of them fell within the definition of a graphic rating scale, regardless of the five categories discussed earlier. The only real differences among the instruments can be seen in the method of description used to indicate the degree of a given activity. Some of the instruments provided simple numerical and adjectival anchors, whereas others provided rather descriptive behavioral anchors (i.e., "officer occasionally loses composure; has difficulty making decisions; frequently requires assistance for routine matters"). Those instruments that provided behavioral anchors can best be described as a "hybrid" format, combining aspects of both graphic rating and BARS systems.
This is not surprising because the design and implementation of a BARS is a very costly and time-consuming enterprise for any organization to undertake. It also requires considerable expertise that traditionally is not found within police agencies.

Summary

A qualitative examination of the performance-appraisal instruments in the subsample showed that, regardless of the format(s) indicated in the survey questionnaire, the majority of police agencies used a similar approach to evaluate their officers. It included instructions for the rater and required a written explanation when the rating was less than satisfactory. It incorporated some form of goal setting or performance objectives for the employee and allowed for written comments by the employee after the review process had been completed. It used behavioral statements to reflect various levels of performance and was best categorized as a graphic rating scale. A possible explanation for the similarity in format is that police agencies are very similar in the duties they perform. The knowledge, skills, and abilities necessary to perform successfully in the job are relatively standard and have not changed to a great extent over the years. Police agencies also tend to rely on each other for direction and assistance regarding personnel issues. In an area such as performance appraisal, it is not uncommon for agencies to incorporate a system borrowed from another police agency that is similar in type and size. It is the writer's experience that perhaps another reason for this similarity is the lack of initiative on the part of top administrators to design and implement a performance-appraisal system that fits the needs of their department. Without commitment from the top of the agency, realistic drawbacks such as time, money, and expertise will generally inhibit any attempt to break from tradition.
CHAPTER V

SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

Introduction

This chapter contains a summary of the findings within the context of the eight research questions and the qualitative analysis of performance-appraisal instruments returned with the completed questionnaires. Conclusions from the study are presented by comparing the research data against methodological standards and criteria discussed in the literature review and summarized in the Performance Appraisal Checklist. Recommendations for law enforcement administrators and for future research are presented in the final sections. The chapter begins with a brief summary of the purpose and method of the study.

Summary

Purpose and Method of the Study

The first purpose of the study was to assess the state of the art of performance appraisal in police organizations throughout the United States. The data collected included the purposes of the various systems, ranks evaluated, who does the evaluation, how often it is conducted, the method of performance-appraisal training provided to raters, type of employee input obtained before designing the system, how the results of the appraisal are communicated to employees, and the types of formats used to collect appraisal information. This aspect of the study was descriptive in nature and examined frequency data obtained from a survey questionnaire mailed to 400 state, municipal, and county agencies by the Michigan Department of State Police in February 1988. The data were collected by the MSP as part of an organizational effort to evaluate its current system and to develop a new performance-appraisal system for its uniform division. A second purpose of the study was to examine those features of police organizations and their environments that influence the type and design of performance-appraisal systems.
Four specific features of police organizations were chosen: (a) type of agency: state, municipal, and county; (b) size or enlisted strength; (c) geographic region; and (d) presence or absence of a collective-bargaining agreement. This aspect examined the correlation between the four demographic variables and the sample data collected by the MSP. A third purpose of the study was to examine internal relationships that may exist within the survey data. Unlike demographic variables, internal variables were considered component parts of a formal performance-appraisal system. Three specific variables were chosen for analysis: (a) level of performance-appraisal training provided to raters, (b) presence of a systematic job analysis, and (c) the appraisal format itself. These variables were derived from Survey Questions 9, 10, and 16, respectively, and were correlated with the remaining survey questions to ascertain possible relationships. A fourth and final purpose of the study was to conduct a qualitative analysis on the performance-appraisal instruments returned with the completed survey questionnaire. Of the 300 departments in the sample population that returned the questionnaire, 182 enclosed a copy of their departmental performance-appraisal instruments.

The Sample Population

The sample population was 400 police agencies located throughout the United States (50 state, 200 municipal, and 150 county agencies). The rationale for this distribution was a combination of representativeness and accessibility. The Michigan Department of State Police believed it would be able to collect data from all state agencies in the country because of stronger organizational ties and more similar working environments than it had with municipal and county police agencies. Because there are only 50 such agencies, this choice was also not cost prohibitive. Since nationally there are far more municipal than county police agencies, the sample population was weighted more heavily with this type.
The size of the sample population was considered large enough to provide representative data without being too burdensome for coding and analysis. The criteria for choosing individual police agencies within the sample population were agency size, regional distribution, and random selection. In terms of agency size, the MSP operated on the assumption that larger police agencies were more likely to have formal performance-appraisal systems. The minimum agency size was set arbitrarily at 80 enlisted officers, and no limit was set for maximum enlisted strength. Although this criterion left some municipal and county agencies unrepresented, it did correlate with the stated purposes of the research study. The MSP wanted regional distribution among the sample population and therefore skewed the selection process so that each of the 50 states had municipal and county agencies represented. The rationale for regional distribution was to eliminate bias toward any one particular region of the country in the selection process. The source document used to make the random selection of 400 agencies was the 1987 Uniform Crime Report published by the Federal Bureau of Investigation. This national report provides various law enforcement statistics, one being the number of enlisted and civilian employees for all police agencies in the United States.

Survey Questionnaire

In deciding on what format to use to gather the necessary appraisal data, the MSP decided on a mail questionnaire. The 19-question instrument was mailed to the appropriate department head with a cover letter signed by the Director of the Michigan Department of State Police. The cover letter explained the formation of a task force and the purpose of the questionnaire; it pointed out that respondents would receive an executive summary of the findings. It was believed this approach would increase both the response rate and the quality of the data received.
The survey instrument used a closed-answer format with an "other" category for any responses that fell outside the answers provided. The questionnaire was prefaced with an instruction sheet that (a) stated the purpose of the survey, (b) noted that the questionnaire was designed to be completed in no more than 15 minutes, (c) explained the use of the "other" category, and (d) provided a space for the name of a contact person to whom the executive summary could be mailed. This approach incorporated sound survey methods that would likely increase the response rate and help reduce rater error.

Research Questions

Research Question 1. What is the present level of performance-appraisal activity being undertaken in police organizations?

A "typical" performance-appraisal system among the sample population had been in place for more than five years and evaluated all ranks through and including captain using the same instrument. It was generally used for the purposes of employee feedback, training, and promotion and was conducted annually by the employee's immediate supervisor. Job analysis was not always conducted before developing the performance-appraisal system, but employees were generally given the opportunity to provide input into the system in the form of oral and written comments and structured groups (i.e., quality circles). Although most police agencies in the sample population provided training for their raters, it was either very informal in nature (i.e., a brief pep talk before the appraisal took place) or formal written instructions included in the appraisal instrument. Only one-third of the respondents conducted some type of formal training for their raters. A "typical" performance-appraisal system also used a separate graphic rating scale to evaluate various traits and performance dimensions. The format also included some form of written essay and goal-setting components.
The system was described by most respondents as "acceptable," and they had no plans to change it in the future. Employees' level of acceptance of their performance-appraisal system was characterized as "good" to "very good." It is worthwhile to note that this level of acceptance might have been rated differently if the survey had been completed by a rank-and-file officer. It has been the writer's experience that very few rank-and-file officers like any type of performance-appraisal system. It is most likely that the person completing the survey had a management-level position with the department and was therefore more inclined to portray the level of employee acceptance as favorable.

Research Question 2. What is the relationship between police agency size and the design and implementation of various types of performance-appraisal systems?

The writer expected the data to show that large police agencies (500 or more enlisted officers) had a more sophisticated performance-appraisal system than medium and small agencies because they very often have personnel departments within the organizational structure--sophisticated in the sense that they were more likely to conduct job analysis for each rank, use different instruments to evaluate each rank, and provide formal training for their raters. On the contrary, the data showed a correlation only in a greater likelihood to include the rank of major and to use a BARS. The lack of any additional correlation with the variable of agency size is, in the writer's opinion, attributable to a "traditional" police approach to the design and implementation of a performance-appraisal system. Few police agencies, regardless of enlisted strength, approach performance appraisal in a serious and technical manner. Instead they rely on performance-appraisal techniques and methods that have been used for years and that are common within the police community.
Unfortunately, they fail to commit the necessary time and resources to design and implement performance-appraisal systems that incorporate sound and current evaluation techniques.

Research Question 3. Is there a relationship between geographic distribution of police agencies and the design and implementation of various types of performance-appraisal systems?

The only relationship that was evident in the data was between the regional distribution of the sample population and the presence of a collective-bargaining agreement with enlisted personnel. Respondents in the New England, Middle Atlantic, and East North Central states were more likely to have collective-bargaining agreements than were police agencies in the other two regions. This geographic region is made up of predominantly industrial states (New York, New Jersey, Ohio, Pennsylvania, Illinois) that tend to have larger municipal departments than those in the other two regions. Large municipal police departments in this region of the country were the first to seek collective-bargaining agreements. The combination of an industrial setting and the relative size of the police agencies in this region created an environment more conducive to collective bargaining.

Research Question 4. What is the relationship between the presence or absence of a collective-bargaining agreement and the design and implementation of various types of performance-appraisal systems?

Agencies in the sample population with collective-bargaining agreements were less likely to appraise the ranks of captain and major. A possible explanation for this finding is that the ranks of captain and major are classified as high-level supervision or administration and thus are not normally considered part of the bargaining unit.
The writer expected to find a significant relationship between the presence of a collective-bargaining agreement and the purposes of the appraisal system because police contracts frequently have provisions that pertain to personnel issues (i.e., compensation, promotion, and training opportunities). The lack of any such correlation may be explained by "standard operating procedure" within police agencies around the country. Regardless of the presence or absence of a collective-bargaining agreement, police agencies generally use appraisal information for similar purposes.

Research Question 5. What is the relationship between police agency type (state, municipal, county) and the design and implementation of various performance-appraisal systems?

The data revealed a relationship between this variable and the particular format used by the sample population to evaluate their officers. State agencies were more likely to engage in goal-setting techniques (i.e., MBO) as part of the appraisal process than were county agencies. A possible explanation for this finding is that many state agencies have incorporated an MBO approach within their budget process. Perhaps the practice has proved somewhat successful and has carried over into other areas such as performance appraisal. It may also be explained by the fact that state agencies are generally part of a large civil service system and tend to be more sophisticated than county agencies.

Research Question 6. How does the presence of a systematic job analysis influence the design and implementation of various types of performance-appraisal systems?

When job analysis was conducted for the ranks of officer, sergeant, and lieutenant, there was a greater likelihood that respondents used a different performance-appraisal instrument to appraise each rank. This relationship has been strongly supported in the performance-appraisal literature.
A systematic job analysis identifies the knowledge, skills, and abilities required to perform a given job (or rank) successfully. Since the officer, sergeant, and lieutenant ranks incorporate different performance dimensions, one would expect to see police agencies using different performance-appraisal instruments to appraise enlisted officers. Police agencies that conducted job analysis were also more likely to obtain employee input in the form of a formal written survey before developing their appraisal system. Police agencies that understand the importance of a systematic job analysis also understand the long-term benefit of soliciting employee input. It creates an environment in which employees believe they have a stake or ownership in the overall system and usually corresponds to a higher level of confidence in the validity of the instrument. Police agencies in the sample population that conducted systematic job analysis for the officer, sergeant, lieutenant, and captain ranks were also more likely to use a BARS to appraise their employees. This relationship has been strongly supported in the performance-appraisal literature because one of the critical steps in the development of a BARS system is the identification of all relevant job dimensions. This identification process is inherent in a systematic job analysis. The data revealed a strong and consistent relationship between the presence of job analysis and the type of training given to supervisors. When job analysis was conducted, respondents were less likely to provide informal training and more likely to provide either formal written instructions or formal training sessions for their supervisors. This relationship remained strong for all ranks. Job analysis and supervisory training are two cornerstones of any successful performance-appraisal system.
It may be suggested that any agency willing to spend the time and expertise to conduct a systematic job analysis would also see the value of formal training for supervisors to increase reliability.

Research Question 7. How does the presence of formal training for raters influence the design and implementation of various types of performance-appraisal systems?

Police agencies that provided their supervisors with formal training (i.e., seminars or workshops) were more likely to use different instruments to evaluate the various ranks within their agencies. The performance-appraisal literature has supported the use of different instruments for evaluation when the knowledge, skills, or abilities vary among jobs or, in this study, among ranks within police agencies. If, in fact, different appraisal systems are used to evaluate various ranks, the level of training provided to supervisors should also increase so that different levels of employee knowledge, skills, and abilities can be adequately measured. A significant relationship also existed between supervisor training and the subsequent use of performance-appraisal information. Those agencies that gave their supervisors formal training were more likely to use the results for employee training and development. Supervisors who are better trained may exhibit a more positive approach to the performance-appraisal process because they understand its value to the organization. They may be better equipped to identify employee strengths and weaknesses, which is crucial to identifying training needs and fostering employee development. The level of training provided to supervisors was also related to the type of employee input respondents obtained in developing their appraisal system. Agencies that solicited input in the form of written comments or through structured focus groups were more likely to provide their supervisors with written instructions or a formal training session.
Police agencies that accept the organizational value of obtaining employee input may also realize the value of providing a higher level of training to supervisors who conduct the appraisal. Both steps go hand in hand to make a performance-appraisal system successful (i.e., increasing long-term employee acceptance and reducing rater bias and error). A final relationship existed between the level of training and the level of employee acceptance of the appraisal system. When formal training was given to supervisors, the likelihood was greater that the acceptance level was "very good." Employee acceptance will generally correspond to the confidence employees have in their supervisors' ability to evaluate their performance accurately and fairly. When supervisors are properly trained in their respective appraisal systems, they are more likely to work out problems with employees and provide constructive steps for improved performance.

Research Question 8. How does the appraisal format itself influence the design and implementation of the various types of performance-appraisal systems?

The data revealed a significant relationship between the type of input respondents obtained before developing their performance-appraisal system and the use of BARS. Those agencies that indicated having structured group input were more likely to incorporate a BARS format. Since BARS require the identification of all relevant job dimensions, employees are often asked to provide input into the process. The use of supervisors and subordinates in this process ensures that all relevant job dimensions are identified before the behavioral anchors for each job dimension are written. The writer expected to see the type of performance-appraisal format correlate with individual performance-appraisal objectives.
For example, the literature would support that BARS are excellent formats to use if the purpose of the appraisal is employee feedback/development and assessing training needs because they are far more specific in terms of identifying employee behavior relative to performance on a specific job dimension. The absence of any correlation in the data can perhaps be explained by the fact that police agencies generally select a particular format based on cost, time, and expediency factors, not to satisfy performance-appraisal objectives. One might only see this type of relationship within a highly sophisticated police agency where the necessary financial and personnel resources are available.

Qualitative Findings

Examination of the performance-appraisal instruments showed that, regardless of the format(s) indicated in Question 16 of the questionnaire, the majority of police agencies used a similar approach to evaluate their officers. This approach included instructions for the rater and required a written explanation within a given dimension when the rating was less than "satisfactory" or "average." It incorporated some form of goal setting or performance objectives for the employee and allowed for written comments by the employee after the review process had been completed. It used behavioral statements to reflect various levels of performance and is best categorized as a graphic rating scale. A possible explanation for this similarity in format is that police agencies are very similar in the duties they perform. The knowledge, skills, and abilities necessary to perform successfully in the job are relatively standard and have not changed to a great extent over the years. Police agencies also tend to rely on each other for direction and assistance regarding personnel issues. In an area such as performance appraisal, it is not uncommon for agencies to implement a particular system that was borrowed from another agency of similar type and size.
Perhaps another explanation for this similarity is the lack of initiative on the part of top police administrators to design and implement a performance-appraisal system that fits the needs of their departments. Without commitment from the top of the agency, realistic drawbacks such as time, money, and expertise will generally inhibit any attempt to break from a traditional approach to performance appraisal.

Conclusions

The following conclusions are presented by comparing the research findings against methodological standards and criteria discussed in the literature review and summarized in the Performance Appraisal Checklist. It is acknowledged that the archival data collected by the Michigan Department of State Police do not provide a complete picture of performance-appraisal systems within police agencies. Other relevant information as to the design, implementation, and evaluation of a particular performance-appraisal system within the sample population was outside the scope of the survey questionnaire. These limitations were taken into consideration when comparing the research findings against criteria in the performance-appraisal literature.

Is performance appraisal conducted for all levels or ranks within the organization?

The research findings suggested that police agencies included all enlisted ranks within their formal performance-appraisal system (i.e., officer, sergeant, lieutenant, captain, and major). The highest frequency occurred within the ranks of officer and sergeant because their duties and responsibilities are easily quantifiable (i.e., number of tickets, complaints taken, investigative arrests, and public contacts). Higher ranks within police agencies fall more into the supervisory and administrative category, where appraisal techniques usually take the form of subjective ratings.
For the most part, these ratings are not job specific but measure general performance traits such as leadership, planning and decision making, initiative, and supervisory skills.

Are the various objectives of the performance appraisal specified, and are different rating methods used for different objectives?

Frequency data from the research indicated that police agencies used appraisal information primarily for feedback/motivation purposes, followed by promotion, training/development, retention, discipline, discharge, and compensation. Just how or to what extent the appraisal information was used within individual departments is difficult to determine. For example, police agencies often use appraisal information in a very informal manner to evaluate an employee's suitability for promotion to the next rank. The process is informal because the employee is not advised before the evaluation that the results can be used to determine promotional suitability. The literature would suggest that since different rating methods are better suited to different objectives, police agencies that have multiple performance-appraisal objectives should use multiple rating methods. Research findings revealed that just the opposite occurred within police agencies. Eighty-six percent of the sample population used the same performance-appraisal method for feedback/motivation, promotion, training/development, compensation, and so on. The failure of police agencies to use multiple rating methods for various performance-appraisal objectives is most likely due to a lack of expertise within the department. Few police agencies have skilled personnel on staff who are qualified to design and implement multiple rating methods. This procedure is also very time consuming and costly, two drawbacks that will generally keep even large departments from following sound methodological standards.
The lack of multiple rating methods for various objectives is not limited to police agencies but is very common within both public and private organizations.

Is a thorough job analysis conducted for each rank before developing a performance-appraisal system? If so, is a single or multi-method used?

Police agencies in the sample population conducted a job analysis for their enlisted ranks, on average, less than 50% of the time before developing and implementing their formal performance-appraisal system. The rank of major showed the highest frequency, 54%, whereas the rank of officer was 42%. Because job analysis is the cornerstone of a valid performance-appraisal system, the results are somewhat alarming.

One of the possible reasons for these findings is a false assumption on the part of many police administrators that they already know the knowledge, skills, and abilities required to perform successfully the job of officer, corporal, sergeant, lieutenant, captain, and major. This assumption often leads to the development or continuation of a "traditional" performance-appraisal system in which the difficult task of job analysis is set aside, or at best conducted in a perfunctory manner.

Another possible reason for the lack of job analysis within police agencies is the technical and time-consuming nature of the task. Most police agencies do not have the necessary staff who are knowledgeable about the various methods and techniques of effective job analysis. Even those large agencies that do have the staff may fail to provide them with the time or support necessary to analyze all enlisted ranks. Rather than hire outside personnel specialists to assist in the job-analysis process, many police agencies may simply skip this crucial step and rely on their in-house expertise to validate their individual appraisal system.

In terms of whether police agencies use a single or multiple job-analysis method, the answer lies outside the scope of this research study.
However, it has been the writer's experience that most police agencies, at best, use a single method to conduct job analysis. It usually takes the form of administrators reviewing the duties and responsibilities of a given rank and then ensuring that these "dimensions" are evaluated within the appraisal process. Due to the many purposes of job analysis and the varying degrees of knowledge, skill, and ability required in most jobs, a multi-method approach is almost always preferable and superior to any single method. The reality in police agencies, however, is that the time, expertise, and cost associated with a multi-method approach often outweigh the obvious benefits. The writer wonders whether many police administrators even perceive the benefits of a multi-method approach to job analysis. If they did, the frequency of job analysis in the sample population would have been higher.

Is the performance-appraisal rating conducted by the employee's immediate supervisor, and to what extent are multiple raters used?

Within the sample population, an employee's immediate supervisor was generally responsible for completing the performance-appraisal instrument. This practice is based on the principle that an officer's performance is best judged by the person who directs and reviews his or her activities on a daily basis. This practice, however, does have built-in limitations, especially when applied to the rank of officer. Most first-line supervisors are sergeants and have been promoted from the officer rank. They receive little, if any, training in the area of performance appraisal or in effective coaching and counseling techniques. First-line supervisors also work very closely with their subordinates and will often avoid situations that alienate them from the rest of the officers (i.e., an unsatisfactory appraisal rating).
For the most part, sergeants have little effect on the rewards or discipline of officers under their command and may develop an "it doesn't matter" attitude when appraising subordinates' performances.

Even though this research study did not capture data on the presence or extent of multiple raters, the writer speculates that most police agencies do not use multiple raters. The hierarchy of formal authority and chain of command are traditional principles within the law enforcement community. The use of multiple raters requires a much more flexible approach to performance appraisal and therefore is limited to the more progressive police departments. Multiple rating also requires considerably more time to complete than the traditional approach and necessitates good communication and consensus among various supervisors.

Is training provided to raters before the appraisal takes place?

The importance of training those individuals in a police agency who are responsible for conducting performance appraisal cannot be overstated. Training of raters must deal primarily with the identification of clear performance criteria. It must also identify a number of biases or errors in performance appraisal that can detract from the validity and reliability of the process (i.e., halo effect, central tendency, personal prejudice, and recency effect). What makes these errors so difficult to correct is that raters are usually unaware that they are making them. Even in those instances in which raters are aware of potential errors, they are frequently unable to correct them without formal training.

Results from the study showed that police agencies did, in fact, provide training for their supervisors, but the type of training was evenly split among informal programs, formal written instructions, and formal training sessions. Unfortunately, the first two fall short of providing raters with adequate knowledge and skills to evaluate accurately the performance of subordinates.
Both approaches treat performance appraisal as just another administrative responsibility, not much different from supervising complaints or conducting a roll call briefing. The lack of formal training for raters is ironic because police agencies, more than most of their public-sector counterparts, understand the inherent value of sound training programs. Countless hours are spent annually training employees in first aid, firearms, stress management, defensive tactics, crisis intervention, evidence collection, and so on. Unfortunately, training supervisors in performance appraisal is not considered as high a priority with most police agencies.

In Performance Planning and Evaluation, Ilgen (1986) conducted an extensive review of the literature and summarized a number of different methods considered to be applicable to the training of managerial personnel. The techniques all fall into three basic categories: (a) information-presentation techniques, (b) simulation methods, and (c) on-the-job practice.

Information-presentation techniques include lectures, seminars, closed-circuit television, programmed instruction, and films. The common thread tying these methods together is that they attempt to impart skills, facts, and concepts without requiring either simulated or actual practice on the job itself. These techniques are the most common, but the learning is generally passive and little time is given to practicing new skills or providing feedback.

The second approach attempts to simulate various aspects of the trainee's job, providing realistic training in the actual types of work activities the trainee might encounter without incurring the risks of costly mistakes during the learning process in a real job situation. For example, the trainee may be presented with the problem of an argumentative subordinate during the appraisal interview. Trainees elicit information and guidance from the trainer and then offer a solution to the problem.
On-the-job methods assume that practice of the actual task to be performed is preferred to practice on some contrived task. These methods rely on an apprenticeship program in which the trainee is cast in the role of student and his or her supervisor is cast in the role of teacher. This has the advantage of one-on-one tutoring. However, the method provides very little control over the quality or quantity of information and coaching the trainee receives. Also, the supervisor must have well-developed appraisal skills and be willing to spend the time necessary to pass those skills on, in order for this technique to be effective.

Many performance-appraisal programs are designed to provide employees with feedback about their strengths and weaknesses so that they can build on their strengths and work on their weak spots in the future. In the same way, evaluators can benefit from feedback about the nature and quality of their performance as evaluators. This may be in the form of information about (a) how their ratings compared with those of other supervisors, (b) the reactions of employees on the usefulness of the performance feedback and counseling they received, and (c) data on the accuracy of decisions based on those performance ratings (for example, promotions).

Are performance criteria for individual ranks communicated to the employee before the appraisal takes place?

An important aspect of any appraisal system is ensuring that the departmental expectations for successful performance are communicated to, and understood by, officers of all ranks. To what extent this takes place within the law enforcement community was outside the scope of this research study. However, the writer speculates that supervisors often take this critical component for granted. It is not uncommon to hear a supervisor tell a subordinate, "You have been a cop for 20 years; I should not have to tell you what a good day's work is!"
The attitude of assuming a subordinate knows and understands what is expected is less threatening to a supervisor than being confronted with questions as to the validity or practicality of specific expectations. What generally happens in police agencies is that any communication or clarification of performance criteria is accomplished after the appraisal has taken place.

Progressive police departments, however, incorporate several approaches to better ensure that performance criteria are understood by their officers. One is to have employees also fill out the appraisal instrument based on how they believe they are performing in the job. This is followed by a face-to-face interview in which the subordinate and supervisor compare the two ratings. This form of "self-assessment" provides an excellent opportunity to clarify departmental expectations and to reach consensus about what is expected for successful performance.

Another approach is to implement the performance-appraisal system with a stipulation that after one year the entire process will be evaluated. At that time, representatives from each rank will review performance criteria to determine whether they reflect departmental goals and objectives, whether they are understood by officers, and whether the training provided to raters was adequate. If the appraisal system is not accomplishing the desired result, corrections or modifications can be made with input from employees of all ranks. One would think that this evaluation component would be common within any organization that has a performance-appraisal system. It is the writer's experience that evaluation is often neglected in the area of performance appraisal, most likely due to the personnel time required to accomplish it properly.

Are the results of the performance appraisal communicated in a face-to-face interview?
Results from the survey questionnaire showed that only 51% of the police agencies in the sample population communicated the appraisal findings to the employee in a face-to-face interview. Other common techniques were providing the employee with either a copy or a summary of the completed appraisal. For the reasons previously discussed, it is important that supervisors sit down with employees to discuss the performance appraisal. Officers must understand performance criteria if they are to carry out their job duties and responsibilities successfully.

The greatest drawback to this occurring is the lack of training for raters in "people skills." Many supervisors are intimidated by the interview process because they are ill equipped to discuss performance expectations due to a lack of training in coaching and counseling techniques. They also feel inadequate to set and justify attainable goals for employees to reach in the next appraisal period. If employees are merely given a copy or a summary of the appraisal findings, it sends a negative message that the entire process is not all that important.

Is the performance-appraisal system objective, subjective, or both?

Examination of the performance-appraisal instruments returned with the survey questionnaire provided a partial picture of the types of performance measures used by police agencies. It is a combination of both objective and subjective measures. Police agencies have traditionally collected objective information to measure their officers' performance (i.e., number of tickets, arrests, public contacts, complaints taken, and so on). The quantitative nature of objective data lends itself to the compilation of crime statistics, which so often are the standard by which the public gauges a given police department's efficiency. Police agencies must be careful when using objective measures of performance that they take into consideration situational factors outside of the officer's control.
For example, a supervisor must take into consideration the particular shift an officer works when measuring the number of drunk-driving arrests effectuated in the appraisal period. The likelihood of encountering this type of activity is greatly enhanced during nighttime hours. To expect an officer who works straight days to perform comparably in this area with one who works straight afternoons is unrealistic.

Many of the police agencies in the sample population also included subjective measures of performance in their appraisal systems (i.e., initiative, leadership, dependability, and so on). Although the number of drunk-driving arrests within an appraisal period is not a matter of judgment on the part of the supervisor, an estimate of a patrol officer's initiative is. Because subjective measures depend on human judgment, they are prone to biases and errors associated with the rating process. To be useful they must be based on a careful analysis of the behaviors viewed as necessary and important for effective job performance.

Two things must occur if police agencies are to use subjective measures of performance successfully. The first is to train raters, in order to increase reliability and lessen the opportunity for rater bias. The second is to assist raters in the evaluation process by providing behavioral descriptors within a given dimension rather than asking them to arbitrarily rate the employee on a numerical or adjectival scale. The lack of formal training has already been discussed and would indicate that the reliability of subjective measures of performance by police agencies without formal rater training is questionable. The value of using behavioral anchors is discussed under the next question.

Is the performance-appraisal format single or multi-method?

This issue was discussed under the question, Are different rating methods used by the sample population for different objectives?
Examination of the actual instruments returned with the questionnaire showed that police agencies for the most part used a similar method to evaluate their officers. It is best categorized as a "hybrid" format, combining aspects of both graphic rating and behaviorally anchored rating scales. Unfortunately, most police agencies still rely on traditional numerical or adjectival anchors to measure an employee's performance level within a given dimension. This puts a heavy burden on raters to adequately measure subjective dimensions such as leadership or dependability. Without clear and concise behavioral descriptors and subsequent training in their application, raters are susceptible to making numerous reliability errors.

The lack of multiple performance-appraisal formats in the sample population was probably due to a lack of sophistication on the part of most police agencies. Selection and implementation of the appropriate appraisal method, given the objectives of the system, is a technical field. It requires personnel who are knowledgeable about the research literature and the techniques of a particular method. It also requires a considerable amount of time spent conducting a thorough job analysis of each position or rank within the department.

The conclusions presented in this study are not intended to indict the law enforcement community for not having methodologically sound performance-appraisal systems. It is very likely that a similar research study in other public-sector or even private-sector organizations would show similar findings. The purpose was to assess the state of the art of performance appraisal in police agencies and to examine the effect of demographic and internal variables on those systems. In drawing conclusions from the research study, it is worthwhile to discuss the role of performance appraisal within a larger management perspective.
In this context, performance appraisal is viewed as a much broader process (performance-management cycle), one in which employee performance is tied directly to organizational goals and objectives. The cycle begins with job analysis, in which the work performed by individual employees is related to organizational goals and unit objectives. The next phase involves setting performance standards and enumerating employee expectations to meet the standards. As subsequent performance is monitored, it is imperative for the supervisor to give ongoing feedback to the employee on his or her progress. A review stage follows, in which performance ratings are prepared and an interview is held with the employee to discuss the results. The review stage accomplishes two purposes: to define and solve performance-related problems that may have developed during the rating period, and to identify and develop the career goals of the employee.

The performance-management cycle is an ongoing process in which each step feeds into the next. For example, if the interview stage reveals that employee expectations are not compatible with organizational goals or that they cannot be practically achieved, the supervisor must make the necessary changes or modifications.

The MBO process incorporates this broader approach to performance appraisal to a greater extent than does the graphic rating scale commonly found within police agencies. It usually involves supervisor and employee meeting once a year to develop a performance plan. The plan is then used by the subordinate as a basis for directing his or her activities and eventually by the supervisor as the basis for evaluating and rewarding the subordinate's performance.

The concept of employee involvement as a critical factor in the success or failure of an organization has received considerably more attention in the private than in the public sector.
Many of the participatory management programs that began in the private sector grew out of a greater emphasis on and need for quality control. One such effort, statistical quality control, was introduced in Japan around 1951 through visits by such authorities as Drs. Deming and Juran from the United States. Early efforts were concentrated on educating and training top and middle management in the use of statistical quality control and its implementation. However, leaders in Japan came to realize that product quality is actually determined by the workers, supervisors, and foremen who work on the production-plant floor. This realization brought about the introduction of quality circles in Japan and later in the United States and marked the beginning of a grass-roots movement that stressed the importance of a "team approach" to organizational success.

A total quality-control philosophy holds that today's workers have a strong need to be bona fide participants in the responsibilities and benefits of running the organization. The greater the degree of involvement, the stronger becomes the employee's commitment to succeed. In this setting, performance monitoring is an accepted practice and feedback for all employees is timely and relevant, providing a basis for swift corrective action. Although the application of total quality control has been most evident in product-oriented organizations (e.g., the automobile industry), its "team approach" to organizational success may soon find its way into the public sector.

Policy Recommendations for Law Enforcement Administrators

Before presenting several policy recommendations for law enforcement administrators and middle managers, it is worthwhile to acknowledge what the writer believes are inherent organizational restraints in police agencies that may have contributed to the research findings.
A major restraint is that most police agencies do not relate the performance of line officers to the broad goals and objectives of their respective departments. As discussed under the context of a performance-management cycle, performance standards and specific officer expectations must relate to unit objectives and organizational goals. If this does not occur, the feedback stage of the cycle has less meaning for the employee and the supervisor who is conducting the rating. Subsequently, defining and solving performance-related problems may be treated in a perfunctory manner, with little attention paid to improving overall performance. This situation is further aggravated by the organizational structure of many police agencies, where decisions are made at the top and carried out at the bottom. Police agencies operate on the premise of chain of command and, as such, seldom create an environment in which officers believe they are participants in the responsibilities and benefits of running the organization.

Another organizational restraint inherent in many police agencies is the lack of personnel who are knowledgeable about the design, implementation, and evaluation of performance appraisal. Traditionally, police agencies have promoted from within, and their managerial positions are occupied by officers who have proven themselves in their field. Although such experience is invaluable in running a police department, it does not provide the types of knowledge and skills required to develop and monitor effective performance-appraisal systems.

A final organizational restraint is the "crisis-management" style of many police agencies. Due to the nature of police work, managers often get so caught up in reacting to day-to-day problems that little time is spent on organizational planning. Effective performance appraisal requires a large degree of organizational planning, with input from all levels of the police agency.
This requires a large time commitment that many agencies think they can ill afford, given the numerous day-to-day problems they must face.

Because more than 63% of the sample population had had their performance-appraisal system in place for more than five years, it is appropriate that the first recommendation for law enforcement administrators is to reevaluate their current system. The most critical step in this process would be to ensure that a recent job analysis has been conducted for all ranks within the department. Job content should be exhaustively examined to ensure that the present standards used to measure the performance of individuals in that job are valid. Police administrators should not rely on "traditional" descriptions of individual jobs, but should strive to identify unique job characteristics that may exist for their employees. Whenever possible, multi-methods of job analysis should be employed to capture the varying degrees of knowledge, skill, and ability required in most police jobs. If police agencies do not have the in-house expertise to conduct multi-method job analysis, they should solicit assistance from their state and national associations or from academic institutions in their area.

The International Association of Chiefs of Police and the National Sheriffs Association have staff available to assist police agencies in the design and implementation of performance appraisal. Many academic institutions also have faculty who are knowledgeable about the personnel field, who can provide assistance to police agencies in the technical aspects of job analysis. Another alternative is to hire qualified consultants to administer or assist in the job-analysis process.

A second recommendation concerns training for raters. The research substantiated that police agencies provide very little formal training for supervisors in the purpose and design of their individual performance-appraisal systems.
Training for raters increases the reliability of the appraisal information and must be a priority in police departments. A practical approach is to incorporate in a supervision school for newly promoted sergeants an 8- to 16-hour block on the department's performance-appraisal process. All aspects of the appraisal process should be discussed to ensure that participants are clear on the purpose of the appraisal and the criteria used to measure successful job performance. Also included should be a "people skills" component that instructs supervisors in coaching and counseling techniques. These skills are invaluable for raters in the performance-appraisal interview and in the day-to-day job of supervision.

Supervisors must also be given an opportunity to practice their newly acquired skills. Use of videotape to evaluate mock performance-appraisal interviews can identify weaknesses in people skills as well as areas in which the rater is unclear on the performance criteria. Hands-on training and evaluation in performance appraisal far exceeds the more traditional lecture approach and can be implemented practically in many agencies. This approach not only stresses the importance of the performance-appraisal system but also gives supervisors a greater level of confidence in their ability to evaluate officers under their command.

A final recommendation is the direct result of the writer's experience in the law enforcement field. If performance appraisal is to be a successful component of any police agency, it must be integrated into supervision on a daily basis. Too often performance appraisal is viewed as a once-a-year exercise instead of a daily responsibility of supervisors. If the employee is aware of and understands departmental expectations, the supervisor is in an excellent position to provide feedback to reinforce positive performance and to correct negative performance daily.
If approached in this vein, the appraisal rating is a culmination of ongoing supervision and feedback, and the employee will most likely receive a fair evaluation.

Recommendations for Future Research

One factor that may be limiting the usefulness of existing rating scales for police officers is the general use of graphic rating scales with poorly defined verbal anchors. This belief is partially substantiated by the findings of this research study. This type of scale has been widely criticized in the literature as an inadequate measure of job performance in police departments (Landy, 1977).

Behaviorally anchored rating scales (BARS) have received considerable attention in the recent literature as researchers have attempted to improve on the more traditional methods of performance appraisal. BARS possess some properties not possessed by the more commonly used graphic scales. The most important of these properties is the potential for counseling and feedback for officers. The role of the patrol officer is complicated and often subtle. The traditional graphic rating scale, with its dependence on the use of arbitrary verbal anchors such as "below average," "average," "excellent," and so on, is not equal to the task of providing useful feedback for the improvement and maintenance of patrol officers' performance.

Perhaps future researchers can answer the question, Is it possible to develop a BARS for the job of police officer that will be accepted by individual superiors and officers, that can be modified to fit the needs of any department, and that is technically sound? Earlier research by Landy, Farr, Saal, and Freytag (1976) explored this very possibility. In their study, 58 municipal police agencies cooperated in constructing and field testing supervisory and peer rating scales. Eight supervisory and nine peer scales were developed. The researchers believed that BARS could be developed in one setting and effectively used in other police settings.
Whereas they would agree with other researchers who have pointed out the value of rater involvement in the scale-development process, their data suggested that many police agencies were able to use the supervisory and peer scales effectively, despite the fact that raters from those agencies were not directly involved in the development of the rating scales.

The possibility of answering the previous research question in the affirmative is an exciting proposition for police agencies across the country. It would, of course, necessitate further research, perhaps duplicating the Landy et al. study in another area of the country or following the same methodology with a group of county police departments and comparing the scales developed.

APPENDICES

APPENDIX A

POLICE AGENCIES IN SAMPLE POPULATION

[Appendix A is a multi-page directory table of the agencies in the sample population, listing for each entry the addressee's title (Chief of Police, Sheriff, or Commissioner), department name, street address, city, state, and ZIP code.]
LOUIS POLICE DEP SPRINGFIELD POLICE D BILLINGS POLICE DEPA LINCOLN POLICE DEPAR OMAHA POLICE DEPARTM LAS VEGAS MEIROPOLIT RENO POLICE DEPARIME MANCHESTER POLICE DE NASHUA POLICE DEPARI ATLANTIC CITY POLICE BAYONNE POLICE DEPAR CAMDEN POLICE DEPARI EAST ORANGE POLICE D ELIZABETH POLICE DEP JERSEY CITY POLICE D NENARK POLICE DEPARI TREHION POLICE DEPAR ALBUQUERQUE POLICE D ALBANY CITY POLICE D BUFFALO CITY POLICE NEH ROCHELLE CITY P0 MEN YORK CITY POLICE ROCHESTER CITY POLIC SCHENECTADY CITY POL NHITE PLAINS CITY PO YONKERS CITY POLICE ASHVILLE POLICE DEPA CHARLOTTE POLICE DEP DURHAM DEPARTMENT OF FAYETTEVILLE POLICE GREENSBORO POLICE DE RALEIGH POLICE DEPAR NINSTON-SALEM POLICE FARGO POLICE DEPARTM AKRON POLICE DEPARTM CANTON POLICE DEPARI CINCINNATI POLICE DE ADDRESS ............. 120 NEST MICHIGAN 15050 FARMINGTON ROA 110 EAST PIKE STREET 612 FEDERAL STREET 26000 EVERGREEN 40333 DODGE PARK 29900 CIVIC CENTER 0 20 NORTH DIVISION ST 31555 11 MILE ROAD 216 EAST NASHINGTOM 980 JEFFERSON STREET 221 EAST 3RD STREET 27665 JEFFERSON 11075 PINE 36701 FORD ROAD 2650 DEHOOP SOUTHNES CITY HALL 129 EAST FIRST STREE 101 EAST 10TH STREET P.O. BOX 17 1125 LOCUST STREET 1200 CLARK STREET 321 EAST CHESTNUT EX 220 NORTH 27TH STREE 233 SOUTH IOIH STREE 505 SOUTH 15TH STREE 400 EAST SIENART SIR 455 EAST SECOND SIRE 351 CHESTNUT STREET 0 PANTHER DRIVE. P.O 1300 BACHARACH BOULE 630 AVENUE C 800 FEDERAL STREET 61 NORTH MUNN AVENUE 33 MORRELL STREET 8 ERIE STREET 31 GREEN STREET 225 NORTH CLINTON AV 401 MARQUEITE STREET MORTON AVENUE 1 BROA 74 FRANKLIN STREET 90 BEAUFORT PLACE 1 POLICE PLAZA 150 SOUTH PLYMOUTH A 531 LIBERTY 279 HAMILTON AVENUE 10 ST. CASIMER AVENU P.O. BOX 7148 825 EAST 4TH 314 NORTH MANGUM SIR P.O. BOX 966 300 NEST NASHINGTON 110 SOUTH MCDONELL P.O. BOX 3114 201 NORTH 4TH STREET 217 SOUTH HIGH 221 3RD SOUTH NEST 310 EZZARD CHARLES 0 CITY ................ 
LANSING LIVONIA PONTIAC SAGINAN SOUINFIELD STERLING HEIGHTS NARREN BATTLE CREEK FARMINGTON HILLS JACKSON MUSKEGON ROYAL OAK ST. CLAIR SHORES TAYLOR NESTLAND NYOHING DULUTH MINNEAPOLIS ST. PAUL JACKSON KANSAS CITY ST. LOUIS SPRINGFIELD BILLINGS, LINCOLN OMAHA LAS VEGAS RENO MANCHESTER NASHUA ATLANTIC CITY BAYONNE CAMDEN EAST ORANGE ELIZABETH JERSEY CITY NENARK TRENION ALBUQUERQUE ALBANY BUFFALO CITY HEN ROCHELLE CITY HEN YORK CITY ROCHESTER SCNENECTADY CITY NHITE PLAINS CITY YONKERS CITY ASHVILLE CHARLOTTE DURHAM FAYETTEVILLE GREENSBORO RALEIGH NINSTON'SALEM FARGO AKRON CANTON CINCINNATI STATE ZIP.. MI 48933 MI 48154 MI 48058 MI 48607 MI 48076 MI 48078 MI 48093 MI 49016 MI 48018 MI 49201 HI 49440 MI 48067 MI 48081 MI 48180 MI 48185 MI 49509 MN 55802 MN 56264 MN 55101 MS 39205 MO 64106 MO 63103 MD 65802 MT 59103 NE 68508 NE 68102 NV 89101 NV 89502 NH 03101 NH 03061 NJ 08401 NJ 07002 NJ 08102 NJ 07017 NJ 07202 NJ 07302 NJ 07102 NJ 08629 NM 87102 NY 12202 NY 14202 NY 10801 NY 10038 NY 14614 NY 12305 NY 10601 NY 10701 NC 28807 NC 28201 NC 27701 NC 28302 NC 27402 NC 27603 NC 27102 MD 58107 OH 44308 OH 44702 OR 45214 SYS:RECORD 119 I20 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 I48 I49 150 151 152 153 154 155 156 DELETED No TIIIF' CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE SUPERINIENDENI CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE DIRECTOR CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE DIRECTOR CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE 
CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE T57 DEPT CLEVELAND POLICE DEP DAYTON POLICE DEPARI HAMILTON POLICE DEPA YOUNGSTONN POLICE DE TOLEDO POLICE DEPARI SPRINGFIELD POLICE D KETTERING POLICE DEP LIMA POLICE DEPARIME PARHA POLICE DEPARTM LANTON POLICE DEPARI NORMAN POLICE DEPARI OKLAHOMA CITY POLICE TULSA POLICE DEPARTM EUGENE POLICE DEPARI PORTLAND POLICE DEPA SALEM POLICE DEPARTM ALLENTONH POLICE DEP BETHLEHEM POLICE DEP ERIE POLICE DEPARTME HARRISBURG POLICE DE LANCASTER POLICE DEP PHILADELPHIA POLICE PITTSBURGH DEPARTMEN READING POLICE DEPAR SCRANTON POLICE DEPA CRANSTON POLICE DEPA PANIUCKEI POLICE DEP PROVIDENCE POLICE DE NARNICK POLICE DEPAR CHARLESTON CITY POLI COLUMBIA POLICE DEPA GREENVILLE POLICE DE SPARTANBURG DEPARIME SIOUX FALLS POLICE D CHATTAHOOGA POLICE 0 JACKSON POLICE DEPAR KNOXVILLE POLICE DEP MEMPHIS POLICE DEPAR NASHVILLE MEIROPOLII ABILENE POLICE DEPAR AMARILLO POLICE DEPA ARLINGTON POLICE DEP AUSTIN POLICE DEPARI BEAUMONT POLICE DEPA CORPUS CHRISTI POLIC DALLAS POLICE DEPARI EL PASO POLICE DEPAR FORT NORTH POLICE DE HOUSTON POLICE DEPAR IRVING POLICE DEPARI LUBBOCK POLICE DEPAR MIDLAND POLICE DEPAR ODESSA POLICE DEPARI PASADENA POLICE DEPA SAN ANTONIO POLICE D TYLER POLICE DEPARTM OGDEN CITY POLICE DE SALT LAKE CITY POLIC BURLINGTON POLICE DE ADDRESS ............. 1300 ONTARIO STREET 335 NEST 3RD STREET 331 SOUTH FRONT SIRE 116 NEST BOAROHAH ST 525 NORTH ERIE 130 NORTH FOUNTAIN A 3600 SHROYER ROAD 117 EAST MARKET 5750 NEST 54TH STREE 10 SOUTHNEST 4TH SIR 201 NEST GRAY STREET 701 COLCORD DRIVE 600 CIVIC CENTER 777 PEARL STREET. R0 1111 SOUTHNESI SECOH 555 LIBERTY STREET S 425 HAMILTON STREET 10 EAST CHURCH STREE MUNICIPAL BUILDING. I23 NALHUT STREET. 
P 39 EAST CHESTNUT SIR POLICE ADM BLDG.RM 3 PUBLIC SAFETY BUILDI 815 NASHIHGTON STREE CITY HALL, MULBERRY 275 ATNOOD AVENUE 121 ROOSEVELT AVENUE 209 FOUNTAIN STREET 99 VETERANS BOULEVAR P.O. BOX 98 1409 LINCOLN STREET 4 MCGEE STREET 145 BROAD STREET, P. 501 NORTH DAKOTA AVE 3300 AHNICOLA HIGHHA 234 INSTITUTE STREET 800 EAST CHURCH AVEN 201 POPLAR AVENUE 200 JANE ROBERSTON P 555 NALNUI, P.O. BOX 609 SOUTH PIERCE SIR 717 NEST MAIN STREET 715 EAST 8TH STREET 255 COLLEGE STREET. I616 MARTIN LUTHER K 2014 MAIN STREET 109 SOUTH CAMPBELL 1000 THROCKMORION 61 RIESNER STREET 845 NEST IRVING BOUL 1015 9TH STREET. P.0 406 EAST FEYAS. P.O. 221 NORTH LEE STREET 1114 DAVIS 214 NEST NUEVA STREE 711 NEST FERGUSON 2549 NASHIHGTON BLVD 450 SOUTH 300 EAST S 82 SOUTH NINOOSKI CITY ................ CLEVELAND DAYTON HAMILTON YOUNGSTONN TOLEDO SPRINGFIELD DAYTON LIMA PARHA LANTON ORMAN OKLAHOMA CITY TULSA EUGENE PORTLAND SALEM ALLENTONH BETHLEHEM ERIE HARRISBURG LANCASTER PHILADELPHIA PITTSBURGH READING SCRANTON CRANSTON PANIUCKEI PROVIDENCE NARNICK CHARLESTON CITY COLUMBIA GREENVILLE SPARTANBURG SIOUX FALLS CHAIIANOOGA JACKSON KNOXVILLE MEMPHIS NASHVILLE ABILENE AMARILLO ARLINGTON AUSTIN UMOHI CORPUS CHRISTI DALLAS EL PASO FORT NORTH HOUSTON IRVING lUBBOCK MIDLAND ODESSA PASADENA SAN ANTONIO - m > _. -< ,— f" OGDEN CITY SALT LAKE CITY BURLINGTON STATE ZIP.. 
OH 44113 OR 45402 OH 45013 OH 44503 OH 43604 OH 45502 OH 45429 OR 45801 OH 44134 OK 73501 OK 73069 OK 73102 OK 74103 OR 97401 OR 97204 OR 97301 PA 18101 PA 18018 PA 16501 PA 17101 PA 17602 PA 19106 PA 15219 PA 19601 PA 18501 RI 02920 RI 02860 RI 02903 RI 02886 SC 29402 SC 29201 SC 29601 SC 29304 50 57102 IN 37406 IN 38301 IN 37915 IN 38101 IN 37201 TX 79602 TX 79101 TX 76010 TX 78701 TX 77701 TX 78408 TX 75201 TX 79999 TX 76102 TX 77002 TX 75060 TX 79401 TX 79701 TX 79760 TX 77501 TX 78204 TX 75702 UT 84409 UT 84111 VT 05401 SYS:RECORD 178 179 180 181 182 183 184 185 186 187 188 189 190 191 I92 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 DELETED TITLE ............... NO NO No No No No No No No No No No NO N0 No No No No No Ho NO NO No NO No No No No No No No No Ho Ho No No No No No No No No Ho No No No No No NO NO M0 NO NO No No NO NO No No CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE‘ CHIEF OF POLICE CHIEF OF POLICE CHIEF OF POLICE SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF 158 DEPT ................ 
ALEXANDRIA POLICE DE ARLINGTON COUNTY POL CHESAPEAKE CITY POLI HAMPTON POLICE DEPAR NEWPORT NENS POLICE NORFOLK POLICE DEPAR PORTSMOUTH POLICE DE RICHMOND CITY POLICE ROANOKE CITY POLICE VIRGINIA BEACH POLIC SEATTLE POLICE DEPAR SPOKANE POLICE DEPAR TACOMA POLICE DEPARI CHARLESTON POLICE DE HUNTINGTON POLICE DE GREEN BAY POLICE DEP KENOSHA POLICE DEPAR MADISON POLICE DEPAR MILWAUKEE POLICE DEP RACINE POLICE DEPARI NEST ALLIS POLICE DE CHEYENNE POLICE DEPA JEFFERSON COUNTY SHE MOBILE COUNTY SHERIF MONTGOMERY COUNTY SH MARICOPA COUNTY SHER PIMA COUNTY SHERIFF' PULASKI COUNTY SHERI ALAMEDA COUNTY SHERI CONTRA COSTA COUNTY FRESNO COUNTY SHERIF KERN COUNTY SHERIFF’ LOS AHGELES COUNTY S MONTEREY COUNTY SHER ORANGE COUNTY SHERIF RIVERSIDE COUNTY SHE SACRAMENTO COUNTY SH SAN BERNARDINO COUNT SAN DIEGO COUNTY SHE SANTA CLARA COUNTY S IULARE COUNTY SHERIF VENTURA COUNTY SHERI ADAMS COUNTY SHERIFF EL PASO COUNTY SHERI JEFFERSON COUNTY SHE PUEBLO COUNTY SHERIF NEH CASTLE COUNTY SH ALACHUA COUNTY SHERI BRONARD COUNTY SHERI COLLIER COUNTY SHERI DADE COUNTY SHERIFF' ESCAHBIA COUNTY SHER MILLSBOROUGH COUNTY LEE COUNTY SHERIFF'S LEON COUNTY SHERIFF’ ORANGE COUNTY SHERIF PALM BEACH COUNTY SH PINELLAS COUNTY SHER POLK COUNTY SHERIFF' ADDRESS ............. 400 NORTH PITT 2100 NORTH 15TH SIRE P.O. BOX 5225 40 EAST LINCOLN SIRE 224 26TH STREET 811 EAST CITY HALL A 711 CRAWFORD STREET 501 NORTH 9TH STREET 309 3RD STREET SOUTH PUBLIC SAFETY BLDG.M PUBLIC SAFETY BLDG.. NEST 1100 MALLON STR 930 TACOMA AVENUE SO P.O. BOX 2749 P.O. BOX 1659 307 SOUTH ADAMS 1000 55TH STREET 211 SOUTH CARROLL ST 749 NEST STATE STREE 730 CENTER STREET 7310 NEST NATIONAL A 1915 PIONEER STREET 4600 COMMERCE AVENUE P.O. BOX 113 142 NASHIHGTON AVE.. 120 SOUTH FIRST AVEN 1801 SOUTH MISSION 2900 SOUTH NOODROW 1225 FALLON STREET 651 PINE STREET. P.O 2200 FRESNO STREET, 1415 TRUXTUH AVENUE. 211 NEST TEMPLE. R00 142 NEST ALISAL, P.O 550 NORTH FLONER. P. 4050 MAIN STREET. P. 711 G STREET. P.O. B 251 N. 
ARRONHEAO AVE 222 NEST C STREET, P 180 NEST HEDDING ST. COUNTY CIVIC CENTER 800 SOUTH VICTORIA A 1831 EAST BRIDGE 15 EAST CUCHARRAS SI 1600 ARAPAHOE STREET 909 COURT STREET ELEVENTH 4 KING SIRE 913 SOUTHEAST 5TH ST 2600 SOUTHNEST 4TH A P.O. DRAWER 1277 1320 NORTHWEST 14TH P.O. BOX 18770 2008 8TH AVENUE. P.O 2055 ANDERSON AVENUE 1117 THOMASVILLE RD. P.O. BOX 1440 3228 GUHCLUB ROAD 250 NEST ULMERION RD 455 NORTH BROADWAY CITY ................ ALEXANDRIA ARLINGTON CHESAPEAKE CITY HAMPTON NENPORT NENS NORFOLK PORTSMOUTH RICHMOND CITY ROANOKE CITY VIRGINIA BEACH SEATTLE SPOKANE TACOMA CHARLESTON HUNTINGTON GREEN BAY KENOSHA MADISON MILNAUKEE RACINE NEST ALLIS CHEYENNE FAIRFIELD MOBILE MONTGOMERY PHOENIX TUCSON LITTLE ROCK OAKLAND MARZINEZ FRESNO BAKERSFIELD LOS ANGELES SALINAS SANTA ANA RIVERSIDE SACRAMENTO SAN BERNAROINO SAN DIEGO SAN JOSE VISALIA VENTURA BRIGHTON COLORADO SPRING GOLDEN PUEBLO NILMINGION GAINSVILLE FORT LAUDERDALE NAPLES MIAMI PENSACOLA TAMPA FORT MYERS TALLAHASSEE ORLANDO NEST PALM BEACH LARGO BARTON STATE ZIP.. 
VA 22310 VA 22201 VA 23320 VA 23669 VA 23607 VA 23501 VA 23704 VA 23219 VA 24011 VA 23456 NA 98184 NA 99260 NA 98402 NV 25330 NV 25717 NI 54301 HI 53140 WI 53709 HI 53233 WI 53403 HI 53214 NY 82001 AL 35064 AL 36601 AL 36101 AZ 85003 AZ 85713 AR 72204 CA 94612 CA 94553 CA 93717 CA 93303 CA 90012 CA 93902 CA 92702 CA 92502 CA 95805 CA 92401 CA 92112 CA 95110 CA 93291 CA 93009 CO 80601 CO 80901 CO 80419 CO 81003 DE 19801 FL 32602 FL 33310 FL 33939 FL 33125 FL 32523 FL 33601 FL 33901 FL 32302 FL 32802 FL 33406 FL 34294 FL 33830 SYS:RECORD 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 DELETED IIIlF No NO No No No SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF SHERIFF COUNTY PROSECUTOR SHERIFF SHERIFF SHERIFF SHERIFF COUNTY PROSECUTOR DEPT SARASOTA COUNTY SHER CLAYTON COUNTY SHERI COBB COUNTY SHERIFF' DEKALB COUNTY SHERIF FULTON COUNTY SHERIF RICHMOND COUNTY SHER COOK COUNTY SHERIFF' DUPAGE COUNTY SHERIF LAKE COUNTY SHERIFF' NILL COUNTY SHERIFF' ALLEN COUNTY SHERIFF LAKE COUNTY SHERIFF' MARION COUNTY SHERIF POLK COUNTY SHERIFF' JOHNSON COUNTY SHERI SEDGHICK COUNTY SHER JEFFERSON COUNTY SHE CALCASIEU PARISH SHE CADDO PARISH SHERIFF EAST BATON ROUGE PAR JEFFERSON PARISH SHE LAFAYETTE PARISH SHE SAINT TAMMANY PARISH TERREBONHE PARISH SH ANNE ARUHDEL COUNTY BALTIMORE COUNTY SHE HONARD COUNTY SHERIF HARFORD COUNTY SHERI MONTGOMERY COUNTY SH PRINCE GEORGES COUNT WAYNE COUNTY SHERIFF OAKLAND COUNTY SHERI MACOMB COUNTY SHERIF LIVINGSTON 
COUNTY SH INGHAM COUNTY SHERIF WASHIENAW COUNTY SHE KALAMAZOO COUNTY SHE KENT COUNTY SHERIFF' BERRIEN COUNTY SHERI JACKSON COUNTY SHERI NENNEPIN COUNTY SHER RAMSEY COUNTY SHERIF ST. LOUIS COUNTY SHE HARRISON COUNTY SHER MINDS COUNTY SHERIFF CLAY COUNTY SHERIFF' GREENE COUNTY SHERIF DOUGLAS COUNTY SHERI NASHOE COUNTY SHERIF BERGEN COUNTY SHERIF BURLINGTON COUNTY SH CAMDEN COUNTY SHERIF ESSEX COUNTY PROSECU ESSEX COUNTY SHERIFF HUDSON COUNTY SHERIF MONMOUTH COUNTY SHER MORRIS COUNTY SHERIF PASSAIC COUNTY PROSE 159 ADDRESS 2200 MAIN STEET. P.O COURTHOUSE SQUARE P.O. BOX 649 556 NORTH MCDOMOUGH 136 PRYOR STREET CITY COUNTY BUILDING RICHARD J. DAILY CEN 501 NORTH FARM ROAD 10 NORTH COUNTY SIRE 14 NEST JEFFERSON 1 MAIN STREET 2293 NORTH MAIN SIRE 220 EAST MARYLAND ST CO. COURTHOUSE.51M K COURTHOUSE TOWERS 525 NORTH MAIN STREE 600 FISCAL COURT BUI P.O. BOX V COURTHOUSE, 500 TEXA 223 ST. LOUIS. P.O. P.O. BOX 327 P.O. BOX 3508 510 EAST BOSTONI P.0 COURTHOUSE ANNEX. P. CHURCH CIRCLE. P.O. 401 BOSLEY AVENUE COURTHOUSE P.O. BOX 150 50 COURTHOUSE SQUARE COURTHOUSE 525 CLINTON STREET 1201 NORTH TELEGRAPH 43565 ELIZABETH ROAD 510 HIGHLAHDER WAY 630 NORTH CEDAR SIRE 2201 HOGBACK AVENUE 1500 LAMONT AVENUE 701 BALL AVENUE NORT 919 PORT STREET 212 NEST NESLEY COURTHOUSE. 300 S. 4 3401 NORTH RICE SIRE COURTHOUSE. ROOM 103 P.O. BOX 1480 P.O. BOX 1452 COURT STREET. P.O. B COURTHOUSE. P.O. BOX 505 HALL OF JUSTICE 170 SOUTH SIERRA ST. ONE COURT STREET BUILDING 4'7. GRANT PARKAGE BLDG.SUITE 2 ESSEX COUNTY COURT B COURIHOUSE.NEH COURT 595 NENARK AVENUE COURT STREET. P.O. B COURTHOUSE 77 HAMILTON STREET CITY SARASOTA JONESBORO MARIETTA DECATUR ATLANTA AUGUSTA CHICAGO NHEATON NAUKEGAH JOLIET FORT NAYNE CRONH POINT INDIAHOPOLIS DES MOIHES OLATHE NICHITA LOUISVILLE LAKE CHARLES SHREVEPORI BATON ROUGE GRETNA LAFAYETTE COVINGTON HOUMA ANNAPOLIS TONSON ELICOTI CITY ELAIR ROCXVILLE UPPER MARLBORO DETROIT PONTIAC MT. CLEMENS HONELL MASON ANN ARBOR KALAMAZOO GRAND RAPIDS ST. 
JOSEPH JACKSON MINNEAPOLIS SHOREVIEN DULUTH GULFPORT JACKSON NEST POINT LEAKESVILLE OMAHA RENO NACKENSACK MOUNT HOLLY CAMDEN NENARK NENARK JERSEY CITY FREEHOLD MORRISTONN PAIERSON STATE ZIP.. FL 33578 GA 30236 GA 30061 GA 30030 GA 30303 GA 30911 IL 60602 IL 60187 IL 60085 IL 60431 IN 46802 IN 46307 IN 46204 IA 50303 KS 66061 KS 67203 KY 40202 LA 70602 LA 71101 LA 70821 LA 70054 LA 70502 LA 70434 LA 70361 MB 21204 MD 21204 MB 21043 MB 21014 MD 20850 MD 20772 MI 48226 MI 48053 MI 48043 MI 48843 MI 48854 MI 48104 MI 49001 MI 49503 MI 49058 HI 49201 MN 55415 MN 55112 MN 55802 MS 39501 MS 39205 MS 39773 MS 39451 NE 68183 NV 89505 NJ 07601 NJ 08060 NJ 08101 NJ 07102 NJ 07102 NJ 07306 NJ 07728 NJ 07960 NJ 07505 SYS:RECORD 296 N 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 DELETED O 160 III! F DEPT ADDRESS CITY SHERIFF BERNALILLO COUNTY SH 401 MARQUEITE STREET ALBUQUEROUE SHERIFF MONROE COUNTY SHERIF 130 SOUTH PLYMOUTH A ROCHESTER SHERIFF NASSAU COUNTY SHERIF 240 OLD COUNTRY ROAD MINEOLA SHERIFF SUFFOLK COUNTY SHERI 1 CENTER DRIVE BIVERHEAD SHERIFF NESTCHESTER COUNTY S 110 GROVE STREET NHIIE PLAINS SHERIFF BUHCOMBE COUNTY SHER P.O. BOX 7218 ASHEVILLE SHERIFF CUMBERLAND COUNTY SH 131 DICK STREET FAYETTEVILLE SHERIFF FORSYTH COUNTY SHERI P.O. BOX 2100 NINSION SALEM SHERIFF GUILFORD COUNTY SHER P.O. BOX 3427 GREENSBORO SHERIFF MECKLENBURG COUNTY S 800 EAST 4TH STREET CHARLOTTE SHERIFF NAKE COUNTY SHERIFF' P.O. 
BOX 550 RALEIGH SHERIFF CLARK COUNTY SHERIFF 120 NORTH FOUNTAIN A SPRINGFIELD SHERIFF FRANKLIN COUNTY SHER 370 SOUTH FRONT SIRE COLUMBUS SHERIFF LUCAS COUNTY SHERIFF 1622 STIELBUSCH TOLEDO SHERIFF HAMILTON COUNTY SHER 1000 MAIN STREET CINCINNATI SHERIFF MONTGOMERY COUNTY SH 41 NORTH PERRY STREE DAYTON SHERIFF STARK COUNTY SHERIFF 4500 ATLANTIC BLVD N CANTON SHERIFF OKLAHOMA COUNTY SHER 321 PARK AVENUE OKLAHOMA CITY SHERIFF TULSA COUNTY SHERIFF 500 SOUTH DENVER STR TULSA SHERIFF CLACKAMAS COUNTY SHE 2223 SOUTH KAEN ROAD OREGON CITY SHERIFF MULTNOMAH COUNTY SHE 12240 NORTHEAST GLIS PORTLAND SHERIFF NASHIHGTON COUNTY SH 146 NORTHEAST LINCOL HILLSBORO SHERIFF ALLEGHENY COUNTY SHE 111 COURTHOUSE PITTSBURGH SHERIFF CHARLESTON COUNTY SH P.O. BOX 605 CHARLESTON SHERIFF GREENVILLE COUNTY SN 4 MCGEE STREET GREENVILLE SHERIFF RICHLAND COUNTY SHER I400 HUGER STREET, P COLUMBIA SHERIFF SPARTANBURG COUNTY S P.O. BOX 771 SPARTANBURG SHERIFF KNOX COUNTY SHERIFF' 400 MAIN AVENUE KNOXVILLE SHERIFF SHELBY COUNTY SHERIF 201 POPLAR. NINTH FL MEMPHIS SHERIFF SULLIVAN COUNTY SHER P.O. BOX 305 BLOUTVILLE SHERIFF BEXAR COUNTY SHERIFF 200 SOUTH MAIN SAN ANTONIO SHERIFF BRAZORIA COUNTY SHER P.O. BOX 1046 ANGLEIOH SHERIFF DALLAS COUNTY SHERIF 600 COMMERCE STREET DALLAS SHERIFF EL PASO COUNTY SHERI 600 EAST OVERLAND, P EL PASO SHERIFF FORT BEND COUNTY SHE P.O. BOX 40 RICHMOND SHERIFF GALVESTON COUNTY SHE 715 19TH STREET GALVESTON SHERIFF HARRIS COUNTY SHERIF 1301 FRANKLIN STREET HOUSTON SHERIFF HIDALGO COUNTY SHERI 3500 SOUTH CLOSNER. EDINBURG SHERIFF JEFFERSON COUNTY SHE 1149 PEARL STREETI P BEAUMONT SHERIFF NUECES COUNTY SHERIF 901 LEPARD STREET. P CORPUS CHRISTI SHERIFF TARRANI COUNTY SHERI 300 NEST BELKNAP FORT WORTH SHERIFF TRAVIS COUNTY SHERIF 1010 GUADALUPE STREE AUSTIN SHERIFF NEBB COUNTY SHERIFF' 1002 HOUSTON. P.O. B LAREDO SHERIFF CHESTERFIELD COUNTY COURTHOUSE. P.O. BOX CHESTERFIELD SHERIFF FAIRFAX COUNTY SHERI 4110 CHAINBRIDGE ROA FAIRFAX SHERIFF HENRICO COUNTY SHERI P.O. 
BOX 27032 RICHMOND SHERIFF PRINCE NILLIAM COUNT 9250 LEE AVENUE MANASSAS SHERIFF ROAHOKE COUNTY SHERI 305 EAST MAIN STREET SALEM SHERIFF KING COUNTY SHERIFF' 516 THIRD AVENUE SEATTLE SHERIFF PIERCE COUNTY SHERIF 930 TACOMA AVENUE SO TACOMA SHERIFF SNOHOMISH COUNTY SHE 3015 NETMORE EVERETT SHERIFF DANE COUNTY SHERIFF' 210 MONONA AVENUE MADISON SHERIFF MILNAUKEE COUNTY SHE 821 NEST STATE STREE MILNAUKEE SHERIFF NAUKESHA COUNTY SHER 515 WEST MORELAND BO NAUKESHA STATE NM ZIP.. 87102 14614 11501 11901 10601 28807 28301 27102 27402 28202 27602 45502 43207 43624 45202 45422 44711 73102 74103 97045 97230 97124 15219 29402 29609 29201 29304 37902 38103 37617 78205 77515 75202 79941 77469 77550 77002 78539 77704 78403 76102 78767 78040 23832 22030 23273 22110 24153 98104 98402 98201 53709 53233 53186 COMPANY .............. Departaent of Public Delaware State Polic Maine State Police Naryland State Polic Departaent of Public State Police Neadqua New Jersey State Pol New York State Polic Pennsylvania State P Hhode Is. Div. of St State Dept. of Publi Virginia DeptVof St, Departeent of Public Alaska State Trooper Highway Patrol Burea California Highway P Colorado State Patro Idaho State Police Montana Highway Patr Nevada Highway Patro New Mexico State Pol North Dakota Highway Oregon State Police South Dakota Highway Utah Highway Patrol Nashington State Pat Hyoaing Highway Patr Illinois State Troop Indiana state Police Departaent of Public Kansas Highway Patro Minnesota State Patr Hissouri State Nighw Nebraska State Patro Ohio State Highway P Division of State Pa Departaent of Public Arkansas State Polic Florida Highway Patr Georgia Dept. of Pub Kentucky State Polic Dept. of Public Safe Mississippi DPS North Carolina Highw ranIHv Oklahooa Highway Pat SC State Highway Pat Tennessee Highway Pa Texas DPS 161 srnrrr 100 Nashington Stree P.O. Box 430 36 Hospital street Headquarters Office 1010 Coaaonwealth Av Jaaes Hayes Safety B Box 7068 State Caepus. Buildi 1900 Elaerton Avenue P. 
O. Box 185 103 S. Main St.. Nat P. 0. Box 27472 725 Jefferson Road 5700 Tudor load P. O. Box 6638 P.O. Box 898 700 Kipling Street P.O. Box 55 301 Roberts Avenue 555 Nright Nay P.O. Box 1628 Capitol Building 107 Public Service B 500 E. Capitol Stree 4501 South 2700 Vest General Adainistrati P.O. Box 1708 401 Araory Building 301 State Office Bui Nallace State office 122 S. N. 7th DPS-Transportation B 1510 East [In Box 94907 660 East Main Street P. O. Box 7912 P.O. Box 1511 P.O. Box 5901 Neil Kirk-an Buildin P.O. B01 1456 919 Versailles Road P.O. Box 66614 P.O. Box 958 512 N. Salisbury Str starrr P.O. Box 11415 P.O. Box 191 1150 Foster Ave.. Na P.O. Box 4087 crrv Hartford Dover Augusta Pikesville Boston Concord Nest Trenton Albany Harrisburg North Scituate Waterbury Hichaond South Charleston Anchorage Phoenix Sacra-ento Denver ' . Carson City Santa Fe Bisaarck Salea Pierre Salt Lake City OIy-pia Cheyenne Springfield Indianapolis Des Hoines Topeka St. Paul Jefferson City tincoln Coluabus Madison Montgoeery Little Rock Tallahassee Atlanta Frankfort Baton Rouge Jackson Raleigh CITY ................ Oklahoaa City Coluabia Nashville Austin STATE ZIP.. 061 CT 01 DE 19901 ME 04333 MD 21208 MA 02215 NH 03301 NJ 08625 NY 12226 PA 17110 RI 02857 VT 05676 VA 23261 NV 25309 AK 99507 AZ 85005 CA 95804 CO 80215 ID 83707 MT 59620 NV 89711 NM 87501 ND 58505 OR 97310 SD 57501 UT 84114 NA 98504 NY 82001 IL 62706 IN. 46204 IA 50319 KS 66603 MN 55155 MO 65102 NE 68509 OH 43205 NI 53707 AL 36130 AR 72215 FL 32304 GA 30317 KY 40601 LA 70896 MS 39205 NC 27611 STATE ZIP.. OK 73116 SC 29202 IN 37210 TX 78773 SALUTATION.... Colonel Forst Mr. Craviet Colonel Deeers Hr. Iippett Coaaissioner N Colonel Iverso Colonel Pagano Hr. 
APPENDIX B

SURVEY QUESTIONNAIRE

INSTRUCTIONS

The enclosed survey is designed to gather data on the nature and type of performance evaluation systems currently in use within law enforcement agencies throughout the nation. Your cooperation in completing and returning the survey is greatly appreciated and will have a significant impact on the design of a performance evaluation system for the Michigan Department of State Police. This questionnaire is designed to be completed in approximately fifteen minutes.

For each of the following questions, please respond by placing an "X" by the answer which best represents your current system. If one of the choices does not fit your agency's system, please check "Other" and provide a short description. After you have answered all the questions on the survey, please attach a copy of your written performance evaluation instrument and any other instructional material you feel might help in our efforts.

We would be pleased to provide you with an Executive Summary of our study findings. Simply provide the name and address of a contact person in your agency, and we will send the summary to you as soon as our study is completed. The department will also be making follow-up contacts with various agencies after initial review of returned data, and having a contact person will be of great assistance.
CONTACT PERSON: ______________
AGENCY: ______________
ADDRESS: ______________

Thank you in advance for your prompt response to this survey. We would appreciate the completed questionnaire returned by FEBRUARY 15, 1988. Direct any questions you may have to:

F/Lt. Stephen P. DeBoer
Michigan State Police
Executive Division
714 S. Harrison Road
East Lansing, MI 48823
(517) 337-6148

(TURN TO REVERSE SIDE)

QUESTIONNAIRE

1. Does your agency have a formal performance evaluation system for enlisted officers (i.e., a written, documented process)?
__ Yes (If yes, go on to question 2.)
__ No (If no, disregard the remaining questions and return the survey.)

2. How long has your current formal performance evaluation system been in place?
__ Less than one year
__ One to three years
__ Three to five years
__ More than five years

3. Indicate the enlisted ranks that are evaluated within your present system. Check all that apply.
__ Trooper, patrolman, or deputy
__ Corporal
__ Sergeant
__ Lieutenant
__ Captain
__ Major
__ Other: ______________

4. Does your agency utilize the same written instrument for all the ranks checked in question 3?
__ Yes
__ No (If no, return all instruments with the completed questionnaire.)

5. What are the major purposes of your formal performance evaluation system? Check all that apply.
__ Promotion
__ Retention
__ Training and development
__ Compensation (i.e., merit pay, etc.)
__ Feedback/employee motivation
__ Discipline/discharge
__ Other: ______________

6. Does your agency utilize the same written instrument for all the purposes checked in question 5?
__ Yes
__ No (If no, return all instruments with the completed questionnaire.)

7. According to your agency policy, how often is a formal performance evaluation conducted? (Check the appropriate box for each rank.)

              Monthly   Quarterly   Semi-Annual   Annual   Other
Officer         [ ]        [ ]          [ ]         [ ]      [ ]
Corporal        [ ]        [ ]          [ ]         [ ]      [ ]
Sergeant        [ ]        [ ]          [ ]         [ ]      [ ]
Lieutenant      [ ]        [ ]          [ ]         [ ]      [ ]
Captain         [ ]        [ ]          [ ]         [ ]      [ ]
Major           [ ]        [ ]          [ ]         [ ]      [ ]

8. Typically, who in your agency initially completes the formal performance evaluation instrument?
__ Immediate supervisor
__ Supervisor two levels above the employee
__ Supervisor three levels or more above the employee
__ Other: ______________

9. Indicate the most common method of training your agency provides supervisors who complete the formal performance evaluation instrument. Check all that apply.
__ No training provided
__ Informal training (i.e., oral instructions from the rater's supervisor or peers)
__ Formal written instructions from the agency
__ Formal training session (i.e., workshop, seminar, in-service, etc.)
__ Other: ______________

10. Did your agency conduct a systematic job/task analysis prior to the development and implementation of your formal performance evaluation system? Check the appropriate answer for each rank.

             Yes   No                 Yes   No
Officer      [ ]   [ ]   Lieutenant   [ ]   [ ]
Corporal     [ ]   [ ]   Captain      [ ]   [ ]
Sergeant     [ ]   [ ]   Major        [ ]   [ ]

11. What type of employee input was obtained in developing your formal performance evaluation system? Check all that apply.
__ No employee input
__ Oral comments from selected employees
__ Written comments from selected employees
__ Structured group input (i.e., quality circles, focus groups, etc.)
__ Formal written survey of all employees
__ Other: ______________

12. Does your agency have a collective bargaining agreement with enlisted personnel? If you answer No, go to question 14.
__ Yes
__ No

13. If you answered Yes to question 12, does the collective bargaining agreement impact your formal performance evaluation system (i.e., time periods, who does the rating, purpose of the instrument, etc.)?
__ Yes (If yes, enclose specific contract language.)
__ No

14. Is there a formal appeal process for the employee who is dissatisfied with their rating?
__ Yes
__ No

15. How are the results of the evaluation process communicated to the employee? Check all that apply.
__ Oral feedback only
__ Copy of completed format given to employee
__ Summary of results given to employee
__ Copy of completed format provided with face-to-face interview
__ Summary of results provided with face-to-face interview
__ Other: ______________

16. What format(s) is utilized for your formal performance evaluation instrument? Check all that apply.
__ Global ranking (i.e., rank all employees from best to worst)
__ Single graphic rating scale combining all performance areas
__ Separate graphic rating scales for numerous performance areas (i.e., attendance, job knowledge, work quality, etc.)
__ Behaviorally anchored rating scale (BARS)
__ Forced choice method (i.e., rater selects one or more statements from a specified set which best describe the employee)
__ Written essay (i.e., rater describes the performance of the employee in a story-like written narrative)
__ Goal setting (i.e., rater and employee agree upon performance goals which are reviewed regularly, as in MBO)
__ Other: ______________

17. Check which of the following statements best describes your formal performance appraisal system.
__ The system is relatively new and is still being evaluated.
__ The system is generally acceptable and there are no immediate plans to modify or replace it.
__ The system is presently being reevaluated and may be modified or replaced in the future.
__ Other (please explain): ______________

18. Rate the level of acceptance by employees of your formal performance evaluation system.
__ Poor
__ Fair
__ Good
__ Very good
__ Excellent

19. Does your agency wish to receive an Executive Summary of the survey results?
__ Yes
__ No

PLEASE ATTACH A COPY OF YOUR PERFORMANCE EVALUATION INSTRUMENT(S), PERTINENT CONTRACT LANGUAGE, AND INSTRUCTIONAL MATERIALS WITH THE COMPLETED SURVEY. A PREADDRESSED LABEL HAS BEEN ENCLOSED TO ASSIST YOU WITH THE MAILING.

APPENDIX C

MICHIGAN STATE POLICE COVER LETTER

STATE OF MICHIGAN
JAMES J. BLANCHARD,
GOVERNOR

DEPARTMENT OF STATE POLICE
714 SOUTH HARRISON ROAD, EAST LANSING, MICHIGAN 48823
COL. R. T. DAVIS, DIRECTOR

January 25, 1988

DEAR :

Our agency is currently undergoing a re-evaluation of its performance appraisal system for enlisted officers. With this letter, I am requesting your assistance in completing the enclosed questionnaire, which is designed to gather data on the nature and type of appraisal systems currently in use within law enforcement agencies. Yours is one of the 400 largest state, county, and municipal agencies selected from around the country to assist in gathering data. The questionnaire is short and should take no more than fifteen minutes to complete. An executive summary of the results will be made available to you upon request.

It would also be of great benefit to our task if you would provide samples of forms, guidelines, policies, and procedures used in the administration of your system. These materials will not be distributed to any other agency.

Thank you in advance for your cooperation in this important effort.
Sincerely,

DIRECTOR

Enclosure

APPENDIX D

QUALITATIVE ANALYSIS SUBSAMPLE

Qualitative Analysis Subsample

Nashua Police Department -- Nashua, NH
Spartanburg Department of Public Safety -- Spartanburg, SC
Kettering Police Department -- Kettering, OH
Kenosha Police Department -- Kenosha, WI
Spokane Police Department -- Spokane, WA
Honolulu Police Department -- Honolulu, HI
Fargo Police Department -- Fargo, ND
Salt Lake City Police Department -- Salt Lake City, UT
Elgin Police Department -- Elgin, IL
Philadelphia Police Department -- Philadelphia, PA
Greenville Police Department -- Greenville, SC
Royal Oak Police Department -- Royal Oak, MI
Kansas City Police Department -- Kansas City, KS
Memphis Police Department -- Memphis, TN
Oak Park Police Department -- Oak Park, IL
Burlington Police Department -- Burlington, VT
Des Moines Police Department -- Des Moines, IA
Portland Police Department -- Portland, ME
Baltimore Police Department -- Baltimore, MD
Farmington Hills Police Department -- Farmington Hills, MI
Phoenix Police Department -- Phoenix, AZ
Los Angeles Police Department -- Los Angeles, CA
Seattle Police Department -- Seattle, WA
Rochester Police Department -- Rochester, NY
Cincinnati Police Department -- Cincinnati, OH
Muskegon Police Department -- Muskegon, MI
Knoxville Police Department -- Knoxville, TN
Corpus Christi Police Department -- Corpus Christi, TX
Washington County Sheriff Department -- Hillsboro, OR
Leon County Sheriff Department -- Tallahassee, FL
Nueces County Sheriff Department -- Corpus Christi, TX
Maricopa County Sheriff Department -- Phoenix, AZ
El Paso County Sheriff Department -- El Paso, TX
El Paso County Sheriff Department -- Colorado Springs, CO
Hillsborough County Sheriff Department -- Tampa, FL
Monmouth County Sheriff Department -- Freehold, NJ
Arlington County Sheriff Department -- Arlington, VA
Waukesha County Sheriff Department -- Waukesha, WI
Sarasota County Sheriff Department -- Sarasota, FL
Washtenaw County Sheriff Department -- Ann Arbor, MI
Clackamas County Sheriff Department -- Oregon City, OR
Tarrant County Sheriff Department -- Fort Worth, TX
Minnesota State Patrol -- St. Paul, MN
Colorado State Patrol -- Denver, CO
Illinois State Police -- Springfield, IL
Indiana State Police -- Indianapolis, IN
New Jersey State Police -- Trenton, NJ
Montana Highway Patrol -- Helena, MT
Ohio State Highway Patrol -- Columbus, OH
California Highway Patrol -- Sacramento, CA

BIBLIOGRAPHY

Balch, D. E. "Performance Rating Systems--Suggestions for the Police." Journal of Police Science and Administration 2,1 (1974).

Bartlett, C. J. "What's the Difference Between Valid and Invalid Halo? Forced Choice Measurement Without a Choice." Journal of Applied Psychology 64 (1982): 218-26.

Beatty, R. W.; Schneier, C. E.; and Beatty, J. R. "An Empirical Investigation of Perceptions of Rater Behavior Frequency and Rater Behavior Change Using Behavior Expectation Scales (BES)." Personnel Psychology 30 (1977): 647-58.

Berk, Ronald A., ed. Performance Assessment: Methods and Applications. Baltimore, Md.: Johns Hopkins University Press, 1986.

Bernardin, H., and Beatty, R. Performance Appraisal: Assessing Human Behavior at Work. Boston, Mass.: Kent Publishing, 1984.

Bopp, W. J. "Performance Evaluation." Police Chief (July 1981): 66-67.

Borman, W. C. "Effects of Instructions to Avoid Halo Error on Reliability and Validity of Performance Evaluation Ratings." Journal of Applied Psychology 60 (1975): 556-60.

________. "Format and Training Effects on Rating Accuracy and Rater Errors." Journal of Applied Psychology 64 (1979): 410-21.

Burchett, Shelley R., and DeMeuse, Kenneth P. "Performance Appraisal and the Law." Personnel 62 (1985): 29-37.

Carroll, S. J., and Schneier, Craig E. Performance Appraisal and Review Systems: The Identification, Measurement, and Development of Performance in Organizations. Glenview, Ill.: Scott, Foresman, 1982.

Cummings, L. L., and Schwab, D. Performance in Organizations: Determinants and Appraisal. Glenview, Ill.: Scott, Foresman, 1973.

Cascio, Wayne. Applied Psychology in Personnel Management. Reston, Va.: Reston Publishing, 1978.

________, and Valenzi, E. R.
"Relations Among Criteria of Police Performance." Journal of Applied Psychology 63 (1978): 210-26.

Cascio, Wayne, and Zedeck, Sheldon. "Performance Appraisal Decisions as a Function of Rater Training and Purpose of the Appraisal." Journal of Applied Psychology 67 (1982): 752-58.

Colwell, W. L., and Koletar, J. W. "Performance Measurement for Criminal Justice." Journal of Police Science and Administration 12 (1984): 146-56.

Eichel, E., and Bender, H. E. Performance Appraisal--A Study of Current Techniques. New York: American Management Association Research and Information Service, 1984.

Epstein, Sidney, and Layman, Richard. Guidelines for Police Performance Appraisal, Promotion, and Placement Procedures. Washington, D.C.: National Institute of Law Enforcement and Criminal Justice, 1973.

Fournies, F. Performance Appraisal--Design Manual. Bridgewater, N.J.: F. Fournies and Associates, 1983.

Glueck, William F. Personnel: A Diagnostic Approach. Dallas, Tex.: Business Publications, 1978.

Henderson, Richard I. Practical Guide to Performance Appraisal. Reston, Va.: Reston Publishing, 1984.

Holley, W. H., and Feild, H. S. "Analyzing Performance Appraisal Systems." Personnel Journal 55 (1986): 457-63.

Huber, Vandra L. "An Analysis of Performance Appraisal Practices in the Public Sector: A Review and Recommendation." Public Personnel Management 12 (Fall 1983): 258-67.

Jacobs, R. "Behavioral Criteria for Evaluating Police Performance." Police Chief 36 (January 1979).

________; Kafry, D.; and Zedeck, S. "Expectations of Behaviorally Anchored Rating Scales." Personnel Psychology 33 (1980): 595-640.

Klimoski, R. J., and London, M. "Role of the Rater in Performance Appraisal." Journal of Applied Psychology 59 (1974): 445-51.

Knowles, L., and DeLadurantey, J. "Performance Evaluation." Journal of Applied Psychology 2 (1974): 28-33.

Kobrmzi, E. W. "Measuring Officer Efficiency." Police Chief 63,4 (1976).

Landy, F. J. "Performance Appraisal in Police Departments." Police Foundation. Washington, D.C.: Library of Congress, 1977.

________, and Farr, J. L.
"Performance Rating." Psychological Bulletin 87 (1980): 72-107.

________. Police Performance Appraisal. Technical Report NI-71-063-6. University Park: Penn State University, 1973.

Landy, F. J.; Farr, J. L.; and Saal, F. E. "Behaviorally Anchored Scales for Rating the Performance of Police Officers." Journal of Applied Psychology 61 (1976): 50-58.

Landy, F. J.; Zedeck, S.; and Cleveland, J. (Eds.). Performance Measurement and Theory. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1983.

Latham, Gary P., and Wexley, Kenneth N. Increasing Productivity Through Performance Appraisal. Reading, Mass.: Addison-Wesley, 1981.

________. "Training Managers to Minimize Rating Errors in the Observation of Behavior." Journal of Applied Psychology 60 (1975): 550-55.

Lawther, W. C. "Successful Training for Police Performance Evaluation Systems." Journal of Police Science and Administration 12,1 (1984): 41-46.

Lazer, R. I., and Wikstrom, W. S. Appraising Managerial Performance: Current Practices and Future Directions. New York: Conference Board, 1977.

Lee, Raymond; Malone, Michael; and Greco, Susan. "Multitrait-Multimethod-Multirater Analysis of Performance Ratings for Law Enforcement Personnel." Journal of Applied Psychology 66 (1981): 625-32.

Linenberger, Patricia, and Keaveny, Timothy. "Performance Appraisal Standards Used by the Courts." Personnel Administrator 26 (1981): 89-94.

Love, K. "Accurate Evaluation of Police Officer Performance Through the Judgment of Fellow Officers." Journal of Police Science and Administration 12 (1984): 146-56.

________. "Comparison of Peer Assessment Methods: Reliability, Validity, Friendship Bias, and User Reaction." Journal of Applied Psychology 66,4 (1981): 451-57.

________. "Empirical Recommendations for the Use of Peer Rankings in the Evaluation of Police Officer Performance." Public Personnel Management 12,1 (1983): 25-32.

McCall, Morgan W., and DeVries, David L. Appraisal in Context: Clashing With Organizational Realities. Technical Report No. 4, 19.
Center for Creative Leadership.

Moore, P. Public Personnel Management--A Contingency Approach. Lexington, Mass.: D. C. Heath, 1985.

Morrisey, George L. Performance Appraisals in the Public Sector: Key to Effective Supervision. Reading, Mass.: Addison-Wesley, 1983.

Morrison, Ann M. "Shape of Performance Appraisal in the Coming Decade." Personnel 58 (1981): 12-22.

National Management Survey of Police Collective Bargaining Agreements. Washington, D.C.: Police Executive Research Forum, 1981.

Odom, Vernon J. Performance Appraisal: Legal Aspects. Technical Report No. 3. Center for Creative Leadership, 1977.

Olsey, F. "A New Approach to Performance Appraisal." Personnel Administrator 26 (1981): 64.

Passonte, J. A. "Implementing the BARS Approach to Appraisals." Personnel Administrator 26 (1981): 64.

Patterson, B. Performance Evaluation and Collective Bargaining in Other State Law Enforcement Agencies. Springfield, Ill.: Personnel Bureau, Division of Administration, Illinois State Police, 1986.

Pearce, Jon E. "Employee Responses to Formal Appraisal Feedback." Journal of Applied Psychology 71 (1986): 211-18.

Performance Appraisal: The Latest Legal Nightmare. New York: Alexander Hamilton Institute, 1986.

Peterson, Richard B. Systematic Management of Human Resources. Reading, Mass.: Addison-Wesley, 1979.

Rosinger, G.; Myers, L. B.; and Levy, Gerard. "Development of a Behaviorally Based Performance Appraisal System." Personnel Psychology 35,1 (1982): 57-88.

Schneier, D. B. "The Impact of EEO Legislation on Performance Appraisal." Personnel 55 (1978): 24-34.

Schwab, D. P.; Heneman, H. G.; and DeCotiis, T. A. "Behaviorally Anchored Rating Scales: A Review of the Literature." Personnel Psychology 24 (1971): 419-34.

Spielberger, Charles, ed. Police Selection and Evaluation. Washington, D.C.: Hemisphere Publishing, 1979.

Stahl, O. G. Public Personnel Administration. New York: Harper and Row, 1971.

Steinman, Michael. "Managing and Evaluating Police Behavior."
Journal of Police Science and Administration 14 (1986): 285-92.

Swank, C. J., and Conser, J. A. The Police Personnel System. New York: John Wiley and Sons, 1983.

Szilagyi, Andrew. Organizational Behavior and Performance. Glenview, Ill.: Scott, Foresman, 1983.

Tyer, C. B. "Employee Performance Appraisal in American State Governments." Public Personnel Management 11 (1983): 199-212.

Vaughn, Jerald R. "Peer Evaluation in Multidimensional Performance Evaluation." Police Chief (August 1981): 58-60.

Walsh, W. F. "Standards of Performance and the Appraisal Process." Police Chief (June 1982): 31-32.

Wells, Ronald. "Guidelines for Effective and Defensible Performance Appraisal Systems." Personnel Journal 61 (1982): 67-82.

Werther, William. Personnel Management and Human Resources. New York: McGraw-Hill, 1985.

Whitaker, G. P. Washington, D.C.: National Institute of Justice, 1982.

________; Mastrofski, Stephen; and Ostrom, Elinor. Basic Issues in Police Performance. Washington, D.C.: National Institute of Justice, 1982.

Wiatrowski, M. D. "Issues in Police Performance Evaluation." Police Journal 58,1 (1985): 49-59.