THE PERSONNEL ASSESSMENT CENTER: A STUDY OF EFFECTS UPON ASSESSEES

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
DAVID SELLERS VOGELS, JR.
1973

This is to certify that the thesis entitled "The Personnel Assessment Center: A Study of Effects Upon Assessees," presented by David Sellers Vogels, Jr., has been accepted towards fulfillment of the requirements for the Ph.D. degree in Business Administration.

Major professor
Date: May 18, 1973

ABSTRACT

THE PERSONNEL ASSESSMENT CENTER: A STUDY OF EFFECTS UPON ASSESSEES

By David Sellers Vogels, Jr.

Within the past few years, many organizations have turned to the personnel assessment center as a method for identifying management potential. Much research has been devoted to ascertaining the predictive validity of this method. Most of the research evidence indicates that, with varying degrees of validity, prediction of useful criteria can be made from the multiple evaluation information obtained at a personnel assessment center. However, despite the ever-increasing use of personnel assessment techniques and methods, virtually no research has been devoted to determining the existence or extent of any impact upon the assessee arising from his personnel assessment center experience.

The present research has been concerned primarily with the effects of a personnel assessment experience upon the assessee's job performance and job satisfaction after assessment. It is logical to assume that assessee reactions to assessment will differ and that these differences might be accounted for, at least in part, by personality characteristics, particularly self-esteem. This assumption led to the development of the hypotheses. Basically, these state that after an assessment center experience, job performance and job satisfaction would:

1. Increase for those assessees with high self-esteem who attained above median assessment ratings.

2. Not change for those assessees with high self-esteem who attained below median assessment ratings.

3. Not change for those assessees with low self-esteem who attained above median assessment ratings.

4. Decrease for those assessees with low self-esteem who attained below median assessment ratings.

The subjects were 60 managers who participated in a personnel assessment center conducted by the parts division of a manufacturing firm in the automotive industry. As part of the assessment process, assessment ratings were obtained on ten variables for each subject. These were combined for use as a variable here. Other data were collected as part of the research project. These were measures of:

1. Personality characteristics, where the measuring instrument used was the Ghiselli Self-Description Inventory (SDI). The SDI was administered as part of the assessment center battery.

2. Job performance, where a supervisor's rating was used. A rating was obtained on the assessee before assessment and again about six months after assessment.

3. Job satisfaction, where the measuring instrument used was the Job Descriptive Index (JDI). The JDI was administered as part of the assessment center battery and again about six months after assessment.

To test the hypotheses, the measures of job performance and job satisfaction were examined before assessment and six months after assessment. A three-way analysis of variance was used; that is, a 2 x 2 x 2 factorial design was employed. In the design, the main effects were: (1) assessment rating; (2) level of self-esteem; and (3) time. The dependent variables were job performance and job satisfaction. Where the analysis indicated a significant F ratio, the Scheffé method of means comparison was used to test the significance of differences between all possible combinations of pairs of means.
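By way of illustration only, and not as part of the original 1973 analysis, the sketch below shows how a three-factor ANOVA of this kind, followed by Scheffé comparisons of cell means, might be computed with present-day tools. The long-format data layout and the column names (performance, rating_group, esteem, time) are assumptions, and the sketch treats all three factors as between-subjects for simplicity.

```python
# Minimal sketch of a 2 x 2 x 2 ANOVA with a Scheffe follow-up.
# Assumed long-format columns: performance (DV), rating_group, esteem, time.
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm


def three_way_anova_with_scheffe(df: pd.DataFrame, dv: str = "performance"):
    """Return the ANOVA table and Scheffe tests on all pairs of cell means."""
    model = smf.ols(f"{dv} ~ C(rating_group) * C(esteem) * C(time)", data=df).fit()
    anova_table = anova_lm(model, typ=2)        # F ratios for main effects and interactions
    mse, df_err = model.mse_resid, model.df_resid

    # Scheffe criterion: |mean_i - mean_j| must exceed
    # sqrt((k - 1) * F_crit * MSE * (1/n_i + 1/n_j)) to be significant.
    cells = df.groupby(["rating_group", "esteem", "time"])[dv]
    means, sizes = cells.mean(), cells.size()
    k = len(means)
    f_crit = stats.f.ppf(0.95, k - 1, df_err)
    comparisons = []
    for a, b in combinations(means.index, 2):
        diff = abs(means[a] - means[b])
        margin = np.sqrt((k - 1) * f_crit * mse * (1 / sizes[a] + 1 / sizes[b]))
        comparisons.append((a, b, round(diff, 3), bool(diff > margin)))
    return anova_table, comparisons
```

Since the thesis examined several dependent measures (the supervisor performance rating and the separate JDI satisfaction scales), a sketch like this would simply be run once per measure.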
As a result of the hypothesis testing, Hypotheses 1 and 4 were rejected and Hypothesis 3 was accepted. Hypothesis 2 was accepted in part, since job performance did not change; and it was rejected in part, since job satisfaction with promotions decreased. As just indicated, only one significant change, which was contrary to the hypothesized direction, was found: satisfaction with promotions declined for those assessees with high self-esteem who attained below median assessment ratings. This finding suggests that those who think highly of themselves may be disturbed by their low assessment rating or "failure". They then apparently externalize this failure to the promotion system, a system which may be associated in their minds with the assessment program.

If the research findings are replicated in other organizations, supervisors who have pondered the impact of assessment ratings on job performance and job satisfaction may be reassured that the only area of concern seems to be promotion satisfaction. Other than that, there is no apparent impact six months after assessment.

THE PERSONNEL ASSESSMENT CENTER: A STUDY OF EFFECTS UPON ASSESSEES

By
David Sellers Vogels, Jr.

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Management

1973

ACKNOWLEDGMENTS

It is with a deep sense of gratitude that I acknowledge the assistance received during the course of my doctoral studies. While many people certainly warrant a "thank you", I believe that I must particularly single out several groups of people for special thanks because of the support and encouragement they gave me.

I thank the personnel at the Air Force Institute of Technology for giving me the opportunity to undertake a doctoral program in business administration. Colonel Robert H. McIntire, Director of Civilian Institutions, always stood ready to assist me in accomplishing the various duties I had by virtue of being the senior Air Force student on campus. Major James S. Austin, Jr., Program Manager, rendered much invaluable support and advice as he guided me through my three-year academic tour.

I thank the personnel of Detachment 380, Air Force ROTC, for their fine administrative support during my three years at Michigan State University. Colonel LeRoy M. Wenstrom, Professor of Aerospace Studies, rendered vast personal support and relieved me of many "paperwork" burdens, thereby enabling me to devote more attention to my student tasks.

I thank Mrs. Josephine McKenzie for her excellent service in typing this thesis under the severe pressure of time. Her skill made the final "wrap-up" of the thesis an accomplished task.

I thank my thesis committee for their ever-ready assistance and guidance. Professor Thomas H. Patten, Jr., committee member, with his warm personal friendship sustained me through some of the troubled times I experienced during my doctoral program. Professor Frederic R. Wickert, committee member, with his many constructive comments saved me from several potential "traps" I had set for myself during the writing of the thesis. Professor Henry L.
Tosi, committee chairman, with his unstinting efforts on my behalf, literally carried me through my entire doctoral program. To Dr. Tosi I must give a special "thank you", because his energy, zeal, ready availability, and constant attention to my thesis really made it possible for me to ultimately attain the Ph.D. degree.

I thank my family for their many sacrifices on my behalf. My sons David, Bob, Tad, and Jon have never enjoyed having a father who was not a student in some program of academic endeavor. But despite the many drawbacks being a student places on one's family life, they have never complained. My wife Mimi, through over twenty years of our married life, has borne a heavy burden as I have pursued my personal objectives for advanced education. While at times she may have wondered about the validity of my goals, she never hesitated in backing my desire to achieve a Ph.D. degree. To her I owe a very special "thank you". It is to her that I dedicate this thesis, for her patience, understanding, encouragement, faith, and love, which ultimately made it all come true.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
TABLE OF APPENDICES

Chapter 1 - THE PERSONNEL ASSESSMENT CENTER
  Introduction
  Development of Assessment Centers
  Typical Assessment Center
  Research on Assessment Centers
  Research Objective

Chapter 2 - RESEARCH DESIGN AND METHODOLOGY
  Theoretical Concepts
  Model and Hypotheses
  Research Design
  Research Sample
  Measuring Instruments
  Statistical Methods
  Summary

Chapter 3 - RESEARCH RESULTS
  The Impact of Assessment on Performance and Satisfaction
  Relationships of Ghiselli SDI Traits
  Summary

Chapter 4 - SUMMARY, DISCUSSION, AND CONCLUSIONS
  The Impact of Assessment on Performance and Satisfaction
  Relationships of Ghiselli SDI Traits
  Implications for Management
  Suggestions for Future Research
  Conclusion

LIST OF REFERENCES
APPENDICES

LIST OF TABLES

  Subjects Involved in Analysis of Variance of Various Effects of Assessment
  Subjects Involved in Analyses of Ghiselli Characteristics and Various Criteria
  Coefficients of Correlation Between the Scores of Managers, Supervisors, and Workers on the Various SDI Scales and Their Job Success
  Assessment Rating Scale Inter-Item Correlations
  Reliabilities and Validities of JDI Scales
  Self-Rating Performance Rating Scale Inter-Item Correlations
  Supervisor Performance Rating Scale Inter-Item Correlations
  Peer-Average Performance Rating Scale Inter-Item Correlations
  Intercorrelations Among Assessment Rating and Various Job Performance Ratings
  Relationship Between: (A) Before and After Assessment, (B) Supervisor Job Performance Rating, and (C) Assessment Rating for Assessees at Different Levels of Self-Esteem
  3-2 Relationship Between: (A) Before and After Assessment, (B) Job Satisfaction (Work), and (C) Assessment Rating for Assessees at Different Levels of Self-Esteem
  3-3 Relationship Between: (A) Before and After Assessment, (B) Job Satisfaction (Supervision), and (C) Assessment Rating for Assessees at Different Levels of Self-Esteem
  3-4 Relationship Between: (A) Before and After Assessment, (B) Job Satisfaction (Promotions), and (C) Assessment Rating for Assessees at Different Levels of Self-Esteem
  3-5 Relationship Between: (A) Before and After Assessment, (B) Job Satisfaction (Co-Workers), and (C) Assessment Rating for Assessees at Different Levels of Self-Esteem
  3-6 Correlations of Ghiselli Traits with Overall Job Performance Ratings
  3-7 Correlations of Ghiselli Traits with Job Satisfaction Scores
  3-8 Correlations of Ghiselli Traits with Assessment Ratings

LIST OF FIGURES

  OSS Assessment Staff's Principles of Assessment
  Model of Assessment Effects Upon an Assessee
  Identification and Designation of Assessee Groups
  Schematic of Analysis of Variance
  Variables and Techniques of the Assessment Center

TABLE OF APPENDICES

  A  Ghiselli Self-Description Inventory (SDI)
  B  Job Descriptive Index (JDI)
  C  Performance Self-Rating Form
  D  Supervisor/Co-worker Performance Rating Form (Before Assessment)
  E  Supervisor Performance Rating Form (After Assessment)

CHAPTER 1

THE PERSONNEL ASSESSMENT CENTER

Introduction

In the United States today, most people are concerned about the utilization of resources. This concern is often focused on the environmental impact of resource management, but other facets of resource management are not entirely ignored. One such facet is the management of human resources. For an indication of this particular concern, one need only consider the increase over the past decade of federal-state legislation regulating employment practices (e.g., in the area of civil rights). After such consideration, it is apparent that American society is indeed concerned with the management of human resources. Accepting the presence of this societal concern, one can understand why,

The key occupational group in an industrial society is management. Effective direction of human efforts -- whether in the public or private sectors of the economy -- is central to the wise and efficient utilization of human and material resources (Campbell, et al., 1970, p. 1).

Accordingly, as Dunnette (1971) asserts, there is a chronic need for more and better managers. In an effort to overcome this short supply, within the past few years some industrial firms and government agencies have turned to the personnel assessment center as a method of identifying managerial talent.

An assessment center is a place where judgments are made about the managerial potential or developmental needs of personnel in the organization.
'Assessments', in this context, are the pooled judgments of several specially trained managers who use a variety of criteria to evaluate a man's performance as he goes through several different 'test' situations. Usually some paper-and-pencil tests are also used, and an intensive interview is a normal part of the assessment procedure. It is this matter of multiple judgments based upon observations of performance in several situations that is the crux of the assessment center method (Wikstrom, 1967, p. 39).

It is estimated that more than one hundred organizations are using the assessment method for evaluating employee potential (Byham, 1971). Many thousands of individuals have been participants at an assessment center. One estimate is that some 100,000 have been assessed (Jaffee, et al., 1970). The widespread use of this approach represents the latest stage in the development of the assessment method -- a method which first evolved in the mid-1930's.

Development of Assessment Centers

The first use of the assessment method for evaluating individuals is generally credited to Murray (Taft, 1959). He used a series of interviews, tests, and experimental procedures which were devised for research in the area of the psychology of personality. The research was conducted during the period 1934-37 and involved mainly Harvard male undergraduates who were paid to participate as subjects. Assessors were members of the Harvard Psychological Clinic, who rated assessees as a committee. Murray credits these assessment procedures with contributing to the study of personality in three ways:

1. A great deal of information is assembled which can be used to interpret the reactions of each subject in each experiment. In this manner, the experimenter is able to discover many of the operating variables, rather than having to content himself merely with crude statistical results, such as are obtained in most experiments.

2. Different aspects of personality are brought to light by the different situations that are presented.

3. Errors which arise from the experimenter's personal viewpoint are minimized (Murray, 1938, pp. 705-706).

World War II marked the beginning of the use of this approach for selection purposes. Initially, German military psychologists used the assessment method for non-commissioned officer selection (Eysenck, 1953). In 1942, the British established War Office Selection Boards (WOSBs) to select candidates for training leading to an officer's commission in the British Army. The WOSB assessment program involved a variety of standardized, simulated real-life situations, interviews, and standardized pencil-and-paper tests. The assessees lived at the assessment centers during the program, which usually lasted three days. Assessors were British Army officers, psychologists, and psychiatrists. Judgments of the assessors were pooled, with the president of the WOSB rendering the final decision on selection (Morris, 1949).

A similar program was conducted by the Office of Strategic Services (OSS) during 1944-45 to select candidates for training as intelligence and espionage agents. As with the WOSB assessees, the candidates lived at the assessment center for three days. During this period, each assessee completed a number of paper-and-pencil intelligence and personality tests and a detailed personal history questionnaire.
Also, assessees were given two outdoor situational tests; an extensive personal interview; and tests of propaganda skills, observation and memory, and mechanical comprehension. In addition, each assessee underwent a stress interview -- a procedure designed to test the candidate's capacity to tolerate severe emotional and intellectual strain. Assessors were psychologists, psychiatrists, and other social scientists. During the assessment program, assessors independently rated the assessees. At the completion of the program, their judgments were pooled for a staff decision on an assessee's final rating.

The OSS method of assessment was perhaps the first to reflect an emerging emphasis on group and situational exercises to assess individual characteristics. This emphasis on a man-group focus represented a shift from earlier assessment methods, which emphasized an individual-personality focus. Based upon their experience with assessment, the OSS staff formulated a set of principles, which have provided useful guidelines in establishing assessment centers (Figure 1-1).

FIGURE 1-1
OSS ASSESSMENT STAFF'S PRINCIPLES OF ASSESSMENT

1. Make a preparatory analysis of all the jobs for which candidates are to be assessed.
2. On the basis of the preparatory analysis of jobs, list all the personality determinants of success or failure in the performance of each job; and from this list select the variables to be measured by the assessment process.
3. Define (in words that are intelligible to the personnel officers and administrators of the organization) a rating scale for each personality variable on the selected list as well as for the one over-all variable.
4. Design a program of assessment procedures which will reveal the strength of the selected variables.
4.1. Plant the assessment procedures within a social matrix composed of staff and candidates.
4.2. Select several different types of procedures and several procedures of the same type for estimating the strength of each variable.
4.3. Include in the program a number of situational tests in which the candidate is required to function at the same level of integration and under somewhat similar conditions as he will be expected to function under in the field.
5. Construct a sufficient formulation of the personality of each assessee before making specific ratings, predictions, and recommendations.
6. Write, in nontechnical language, a personality sketch of each assessee, which predictively describes him as a functioning member of the organization.
7. At the end of the assessment period, hold a staff conference for the purpose of reviewing and correcting the personality sketch and of deciding on the ratings and recommendations for each assessee.
8. Construct experimental designs as frames for assessment procedures so that all the data necessary for the solution of strategic problems will be systematically obtained and recorded.

Source: OSS Assessment Staff, Assessment of Men (New York: Rinehart, 1948), pp. 28-56.

In the 1940's (1946-49), the Veterans Administration conducted a clinical assessment program for the primary purpose of validating certain assessment techniques. Assessees were 471 clinical psychology graduate students, who were rated by clinical and non-clinical psychologists (Taft, 1959). While this assessment program was not designed to identify managerial potential, it did have an impact on the kinds of assessment methods which are used for this purpose today.
For example, the evaluation of the program led to the conclusion that both psychologists and non-psychologists can competently act as assessors. This finding is the basis for the widespread use of non-psychologists as assessors in industrial assessment centers. In general, the findings of this particular research program with respect to the validity of assessment did much to foster confidence in this method of evaluating the future potential of individuals in varying positions (Kelly, 1951).

Several additional clinical assessment programs were conducted in the early 1950's, again for the purpose of validating various assessment techniques. Some of the more important of these were: (1) the California Institute of Personality Assessment and Research (IPAR) program assessing advanced graduate students; (2) the University of Chicago program assessing students in theology, education, and arts; (3) the Menninger School of Psychiatry program assessing psychiatric candidates; (4) the British Civil Service Selection Boards (CISSB) program assessing civil service candidates; and (5) the United States Air Force program assessing Officer Candidate School (OCS) applicants (Bray & Grant, 1966; Taft, 1959). These programs served to point out that the kind of assessment center is quite different for different types of organizations and organizational situations. For example, the "buddy" rating technique used to assess officer candidates, where individuals are in close and continuous contact with one another for some 90 days, is not a technique which could be readily used to assess individuals in an industrial assessment program.

The American Telephone and Telegraph Company (AT&T) in 1956 made the first industrial application of a personnel assessment center. It was used as a major research methodology for its long-term Management Progress Study. This study was intended to uncover information about the personal development of men as they worked as managers within the Bell System. As part of the study, AT&T operated assessment centers in the summers of 1956 through 1960 to obtain information concerning the backgrounds and abilities of individuals starting management careers in the Bell System. The first assessees were young men, recently recruited into the Michigan Bell Telephone Company (Wikstrom, 1967).

Managers in the firm were not permitted to use the assessment reports in making promotion decisions. But they did learn enough of the assessment procedures involved to believe that these procedures would be useful in selecting foremen from the skilled craftsmen working for the company. So in 1958, acting upon this belief, Michigan Bell opened an assessment center for the appraisal of candidates for promotion to management from vocational occupations. This was the first operational assessment center, organized by an industrial firm, to provide information to line managers (Wikstrom, 1967). This assessment program -- the Personnel Assessment Program -- is still in existence, assessing approximately 500 participants a year.

Currently, nineteen Bell System companies operate 70 assessment centers and evaluate thousands of management candidates each year. In addition, many organizations, both large and small, have followed the lead of Michigan Bell in establishing assessment centers. Some of these organizations are Standard Oil (Ohio), J. C.
Penney, International Business Machines, General Electric, Sears, Caterpillar Tractor, Olin-Mathieson, Ford Motor, Wolverine Tube, Peace Corps, Internal Revenue Service, and Union Carbide (Jaffee, et al., 1970). Basically, these assessment centers all follow a similar pattern. Hence, a description of the "typical" assessment center will serve as an adequate description of most assessment centers.

Typical Assessment Center

The basic purpose of an organizational assessment center is the identification of individuals with potential for first-level supervisory positions in the organization. A secondary purpose of assessment is the development of individuals in the organization, enhancing their management potential by providing important feedback so that they can improve their effectiveness. That is, assessment results are used to plan a program of development for the assessee to overcome weaknesses detected at the assessment center. For example, the assessee may demonstrate difficulty in expressing his ideas to others. To overcome this deficiency, a course in public speaking or more participation in group meetings could become part of the development plan for this individual.

Assessees and Measures

Usually a small number of assessees, probably not more than twelve, nominated by their supervisors as having shown management potential through their job performance, participate in an assessment session. A session lasts two days, and during this time the assessee is subjected to a wide variety of individual and group measurement techniques. Individual measurement techniques evaluate the individual without reference to his interaction with others. Group measurements, on the other hand, evaluate the individual as he interacts with others in a group setting.

Individual measures typically include:

1. Interview. The most common interview situation employed in assessment centers involves one interviewer (an assessor) and one interviewee (an assessee). The interview usually follows a structured outline guide consisting of a number of carefully designed, open-ended questions. These questions assure coverage of desired subject areas, while providing the interviewee an opportunity to respond freely without structuring imposed by the question itself.

2. In-Basket Exercise. The in-basket used as an assessment technique usually contains such items as reports, memoranda, letters, and other materials requiring simulated action on the part of the assessee. These materials are especially prepared to reflect a realistic operating situation to the individual being evaluated. The task involved is to take the required actions to deal with the problems presented by the materials, which are designed to present a wide variety of problems with differing degrees of complexity.

3. Psychological Tests. Various paper-and-pencil tests are used in assessment, such as the School and College Aptitude Test (SCAT), the Contemporary Affairs Test, and the Strong Vocational Interest Blank. Some projective tests, such as the Rorschach and the Thematic Apperception Test (TAT), are used, but rather infrequently, in assessment centers.

Group measures typically include:

1. Leaderless Group Discussion (LGD) Exercises. In the LGD a half-dozen or more assessees discuss some problem on which they all have approximately equal information. The problem is a controversial one, so it lends itself to discussion and interpersonal actions.
Assessors rate the behavior of the assessees on a number of specific characteristics which have been previously chosen as important elements of the job for which the candidates are being considered.

2. Management Games. Small group games provide a live demonstration of factors such as the assessee's ability to plan, communicate, organize, and reach decisions in a realistic setting. For example, one situational game is the supervisory meeting in which each assessee sponsors a fictitious candidate for a promotion. Since there is only one promotion available, all assessees must agree on the promotion of one individual. This game is intended to test each assessee's ability to lead, compete, and cooperate.

Assessors

Typically, six line managers one or more levels above the individuals being assessed do the evaluating. In some companies assessors get little more than an orientation on the assessment process. But most companies take three or four days to train their assessors. The emphasis of assessor training is usually on observing behavior, interviewing assessees, and conducting the in-basket exercise. In addition, assessors usually practice on an exercise given at the assessment center. Seldom are professional psychologists used as assessors, since,

The little research available indicates that professionals do no better than trained line managers in performing their tasks. While the professional psychologists may have some superior observational skills, this is probably negated by their lack of company knowledge (Byham, 1971, p. 13).

In most large companies, assessors serve only once. However, smaller companies usually establish a pool of trained assessors, drawing upon this pool for individual assessors to serve as needed. One notable exception to the general practice is in the Bell System, where assessors serve for six months.

Evaluation Processes

The data collected on each assessee are discussed at a staff conference conducted at the end of the assessment session. Assessors then rate the individual assessee on several preestablished dimensions thought to be relevant for managerial performance in the organization. In many organizations, an overall rating of management potential is also determined for each assessee. After all the ratings have been decided upon by the staff, an assessment report is prepared.

Practices with respect to assessment reports vary from organization to organization. The most common practice is to have the assessment staff prepare a fairly detailed written narrative report on each assessee, evaluating the individual's strengths and weaknesses for future management positions. Emphasis sometimes is placed on how to overcome any noted weaknesses, which enables the report to be used as the basis for individual development. Generally, the assessment data are made available to the individual, as well as being retained for organization use.

One of the most important, yet most hazardous, aspects of assessment center operation is feeding the reports back to the candidates. Companies handle this in widely different ways, depending on the purpose of their centers (Byham, 1970, p. 158).

Some companies offer assessees the option of receiving or not receiving feedback. Other companies provide feedback to all assessees as a part of their assessment process. In some companies, assessees receive feedback prior to departing the assessment center. In other organizations, they may wait some time for their feedback.
Typically, an assessor or a former assessor conducts the feedback interview. However, if a staff psychologist is available, in some organizations he has the responsibility of discussing the assessment results with the assessee.

Research on Assessment Centers

There are several questions which need to be answered before the value of assessment can be appropriately determined. Among these questions are:

1. To what extent is the assessment process a valid predictor of management potential?

2. To what extent is the individual assessee affected by the assessment process?

Validity of Assessment

With regard to the validity of assessment, there appear to be a number of research reports which support the assessment method of identifying management potential. Undoubtedly the most extensive research is that conducted by the American Telephone and Telegraph Company (AT&T) in conjunction with its Management Progress Study. This study was initiated in 1956 as a longitudinal study of the development of young men in a business environment. "Its purpose is very general -- to learn more than is now known about the characteristics and growth of men as they become, or try to become, the middle and upper managers of a large concern" (Bray, 1964, p. 420).

Subjects of this continuing study are 422 men, employed in six of the Bell System's telephone companies. About two-thirds of the men are college graduates, who were assessed soon after their employment. The other third are men who started in the Bell System as vocational employees and advanced into management positions early in their careers. Each subject spent three and one-half days at an assessment center, going through in groups of twelve. The assessment center process was intended to discover the abilities, aptitudes, motivational and personality characteristics, attitudes, and interpersonal competence of potential managers. To determine these factors, some 25 assessment variables were developed. Techniques used to measure these variables were: interview, in-basket exercise, small-business game, group discussions, questionnaires, projective tests, and paper-and-pencil tests.

The assessment staff (usually nine professionally trained individuals) assembled, reviewed, and discussed the results. Typically, one to one and one-half hours were devoted to evaluating each subject separately. The subject was independently rated by each staff member on the 25 variables. The staff also evaluated the man's potential as a management person in the Bell System.

Since the data from assessment have not been used as bases for promotion decisions, it seems reasonable to argue that the relationship between assessment results and promotions is not contaminated by the fact that the decision maker has previous knowledge of assessment ratings, and thus is using these data as a basis for promotion rather than the traditional managerial evaluations.

There have been various research reports by AT&T personnel concerning their Management Progress Study. Bray and Grant (1966) reported on the management level achieved and the current salary of the 422 subjects who were still with the Bell System as of July 1965. Approximately one-fifth (21%) of the assessees had achieved middle-management positions. In general, the college group had progressed more rapidly than the non-college group. Of the 55 men achieving middle management, 43 (78%) were predicted correctly by the assessors.
In contrast, of the 73 men who have not advanced beyond the first level of management, the assessment staffs predicted that 69 (95%) would not reach middle management within 10 years (Bray & Grant, 1966, p. 18).

In a later report, Campbell and Bray (1967) examined the subsequent job performance of men assessed five years earlier. They reported that 55% of the men promoted before the assessment program began were considered "above average performers", while 68% of the men who were assessed as "acceptable" and later promoted to management were considered "above average performers".

This is a statistically significant finding.... It indicates that the assessment program has been a definite aid in the selection of better performers at the first level of management (Campbell & Bray, 1967, pp. 10-11).

In yet another report from AT&T, Bray and Campbell (1968) described the application of the assessment center method to the selection of prospective communication consultants (salesmen). The assessment center evaluated individuals who had been recently hired as salesmen. The assessees had met all employment standards and had been screened by their local Bell company as qualified for the job of communications consultant. Assessment techniques used were paper-and-pencil tests, an interview, and individual/group simulations. Assessment staff judgments as to acceptability for sales employment were used to place each assessee into one of four categories: (1) more than acceptable; (2) acceptable; (3) less than acceptable; and (4) unacceptable.

Great care was taken that the results of the man's performance at the assessment center did not affect his assignment or appraisal on the job. All of the men in the study were Bell System employees at the time of their assessment and there was no feedback to their trainers, their supervisors, or to the men themselves on their performance at the assessment center (Bray & Campbell, 1968, p. 37).

Assessment center judgments were compared against the primary criterion of first-hand observation of actual sales contacts made approximately six months after assessment. The job performance observations were made by a special observational team working out of the AT&T headquarters in New York. A total of 78 assessees were evaluated against review standards which included preparation, usage prospecting, recommendations, closing, and implementation. As a result of the special observations, each man's performance was classified as either meeting standards or failing to meet standards.

Of the 9 men judged "more than acceptable" by the assessment center staff, all met review standards. And of the 21 men judged "unacceptable", only 2 passed the field review. For the middle groups ("acceptable" and "less than acceptable"), 26 of 48 assessees met the review standards. These data indicated that assessment center judgments were highly correlated with the field review for the two extreme groups ("more than acceptable" and "unacceptable"). Bray and Campbell reported that the overall correlation between assessment center judgment and subsequent field-performance ratings was .51.

These three research reports, and others reported as part of the AT&T management studies (Grant & Bray, 1968; Grant, Katovsky, & Bray, 1967), indicate a relatively high degree of predictive validity for assessment centers. Other research studies provide corroboration of the AT&T results.
For example, at IBM, Wollowick and McNamara (1969) reported a study where 94 lower and middle managers were selected for assessment on the basis of having above-average potential for advancement. Despite this restriction of range, they found a correlation of .37 between the global assessment rating and the criterion of increase in management responsibility three years after assessment.

Also at IBM, Kraut and Scott (1972) reported their findings from a review of the career progress of 1,086 employees in sales, service, and administrative functions of the Office Products Division of IBM. The subjects of this review were employees who had participated in the IBM assessment program from 1965 through the end of 1970. The employees involved were from nonmanagement positions and were candidates for first-level management jobs. The review was intended to evaluate the validity of the assessment program. Two major criteria were used: (1) second-level promotion; and (2) demotion from first-level management.

Kraut and Scott found that, "among those who were rated higher in the program, significantly greater proportions go on to second positions" (Kraut & Scott, 1972, p. 126). Kraut and Scott also found that none of the sales employees in the highest assessment rating group were demoted, compared with a 19% demotion rate in the lowest assessment rating group. However, this difference was not statistically significant. On the basis of their review, Kraut and Scott concluded:

The data collected thus far indicate this large-scale assessment program appears useful by making discriminations of management potential which are later confirmed by the rate of promotions, as well as demotions (Kraut & Scott, 1972, p. 128).

Other organizations, such as Standard Oil of Ohio (Carleton, 1970; Finkle and Jones, 1970), General Electric (Meyer, 1970), and Sears Roebuck (Bentz, 1967), also have established the validity of the assessment method for identifying management potential.

However, there is one research report critical of the assessment process. Hinrichs (1969) showed that ratings of management potential based on a review, by two experienced managers, of the personnel records of 47 IBM managers correlated .46 with the ratings of management potential these 47 managers had received after attending the IBM assessment center. Based upon his finding, Hinrichs (1969, p. 431) concluded,

The data suggest that traditional approaches to the assessment of management potential in the form of a careful evaluation of personnel records and employment history ... can perhaps provide much of the same information which evolves from the lengthy and expensive 2-day assessment program.

However, Dunnette (1971, p. 106) disagreed with this conclusion, and stated,

In my opinion Hinrichs' argument, though reasonable, cannot be sustained on the basis of the .46 he reports in his investigation. Nearly 80 percent of the variance in the assessment program rating remains unassociated with the ratings based on the personnel records; therefore, it seems highly probable ... that the 'lengthy and expensive' assessment program does contribute independent, valid, and useful diagnostic information about men's abilities and behavioral tendencies that is not contributed by ratings based merely on file information.

(Dunnette's figure follows from the squared correlation: 1 - (.46)^2 is approximately .79, so roughly 79 percent of the variance in the assessment ratings is not shared with the record-based ratings.)

Impact of Assessment

Most research on assessment is aimed at the question of the validity of the technique. There is little research which examines the impact of the assessment procedure on
either the organization or the individual. One study concerning the effects of an assessment center experience is a research report prepared by the Pacific Telephone and Telegraph Company. It was primarily concerned with determining if there were any long-range negative effects for participants attributable to their assessment experience. In particular, the researchers were interested in the effects on those participants who had been rated "below average" or "not acceptable".

A random sample of 99 men was selected for intensive interviews by staff psychologists. Of this sample, 47 were "unsuccessful" (had attended an assessment center but had not been promoted) and 52 were "successful" (had been promoted after attending an assessment center). The interviewers used a focused interview technique to explore eight specific areas for possible change in the assessee's life.

The most significant conclusion was that unsuccessful assessees did find ways of adjusting to their poor performance in assessment. But the ways were not those expected. Instead of becoming frustrated and giving up, the unsuccessful assessees adjusted in more constructive ways, with many of them (38%) appearing to use some form of rationalization. There was some evidence of long-range negative effects on the unsuccessful assessees, in comparison with the effects on the successful assessees; but on the whole, positive effects seemed to predominate. Unsuccessful participants appeared to have undergone a self-appraisal which resulted in expanded involvement and increased self-development activity. The research study concluded that "there seems to be very little reason for concern about any permanent damage that assessment has caused with those who have not been promoted" ("PAR Effects Study", 1968, p. 30).

There are other research findings which tend to support this conclusion. For instance, Jaffee, Bender, and Calvert (1970) indicated that participants in the Union Carbide Company assessment program felt the program gave an individual a fair chance to prove himself. And the participants generally accepted the premise that poor performance was the "fault" of the individual and not of the system. Byham & Pentecost (1970) indicated that they have found no conclusive evidence that candidates who do poorly in assessment centers start looking around for another job.

Research Objective

The research discussed in the last section provides substantial evidence that, with varying degrees of validity, prediction of useful criteria can be made from multiple assessment information obtained at a personnel assessment center (Campbell, 1972). This is supported by Byham (1970), who concluded that the accumulation of research findings from a variety of centers lent considerable credibility to the overall validity of the assessment technique. Byham (1970, p. 154) states:

In a survey of the 20 companies that operated centers, I uncovered some 22 studies in all that showed assessment more effective than other approaches and only one that showed it as effective as some other approaches. None showed it less effective. As I suggested before, these studies exhibit correlations between center prediction and achievement criteria such as advancement, salary grade, and performance ratings that range as high as .64.

However, some psychologists have questioned the predictive validity of assessment centers.
Cronbach (1960) claimed there is a definite problem in reconciling the statistical evidence with the claimed "clinical" validity of assessment techniques. Taft (1959) agreed that problems arise with respect to clinical versus statistical predictions. He also asserted that problems arise from conditional factors that affect the criteria. Hardesty and Jones (1968) criticized most validity methods because they are done after many personnel decisions, based on the assessment information, have been made. Bray and Grant (1966) supported this criticism, stating,

. . . where prior screening has been effective and/or the assessment results have influenced personnel decisions, allowance for the consequent restrictions on range of subsequent performance has been inadequate (Bray & Grant, 1966, p. 4).

Nevertheless, despite these criticisms, there are strong supporters of the assessment center method of evaluating management potential. Dunnette (1971, p. 16), in his review of the literature on assessment centers, concluded that "multiple assessment procedures for identifying managerial talent have been shown to possess the particular advantages suggested by their advocates". Byham (1971, p. 16) agreed, asserting that "the assessment center is a superior method of predicting management potential -- compared with methods such as supervisor appraisals and tests".

If the increasing use of assessment centers is an indication, many organizations also appear to support the assessment method of identifying management potential. Perhaps a reason for this support, arguments about predictive validity notwithstanding, is that assessment has face validity. That is, assessment provides,

. . . more 'real' measures of what a manager might run into on the job than paper-and-pencil inventories. Instead of measuring traits or getting at behavior tendencies indirectly, simulated procedures allow direct observation of a man's behavior in approaching what appears to be a highly job-relevant but still fairly well-structured and standardized stimulus configuration (Campbell, et al., 1970, p. 142).

Assessment appears, then, to be an approach to the identification of management potential which is more readily understood and accepted, by both those undergoing assessment and those using assessment results, than the previous approaches which used a "traits" or "predictor" concept to identify managerial talent. The attributes of the assessment method (e.g., better prediction of potential, assessee exposure to managerial demands and responsibilities, identification of development needs, and assessor training), and the strong support assessment centers have received from many industrial psychologists and personnel managers, may account for the ever-increasing use of the personnel assessment method for identifying management potential.

However, despite this increasing use of assessment, virtually no research has been directed at examining the impact of such a process on the assessee. If the assessment procedure does have an effect upon individuals who are exposed to it, it may result in both direct and indirect effects on the organization. If it has negative effects, turnover may increase. Assessment may heighten expectations which may not be met in the future. This could lead to frustration and dissatisfaction with the organization.

The primary objective of the research is to examine the relationship between the effects of assessment and job performance and attitudes.
Basically, the research question is whether or not individuals who are exposed to assessment and get different kinds of feedback react differently in terms of the way they feel or the level of their job performance. It is expected that this relationship might be affected by certain personality characteristics. In addition, these characteristics may affect the individual's job performance and job satisfaction.

A secondary objective of the research is to examine the relationship between personality characteristics and assessment center ratings. The reason for the interest in this relationship stems from the fact that personality characteristics or traits, as measured by personality inventories, have been used in the past and are still being used to some extent today to determine management potential. Since the assessment center is also used to determine management potential, some relationship might exist between the two methods -- assessment and traits -- of identifying management potential. Should such a relationship be found, then it is possible that results of a personality inventory may be predictive of assessment ratings. If this is so, management could substitute a fairly inexpensive personality measurement for the more costly assessment process for purposes of identifying management potential.

CHAPTER 2

RESEARCH DESIGN AND METHODOLOGY

In this chapter the underlying theoretical concepts of the research are developed. Then the model and hypotheses generated by these concepts are presented. Finally, the research design, research sample, measuring instruments, and statistical methods employed in the research are discussed.

Theoretical Concepts

As noted in the last chapter, with one exception no research was found which dealt with effects on the assessee resulting from participation at an assessment center. Since the main purpose of this research is to determine the effects on the assessee of an assessment center experience, it appears that the research will be delving into relatively "unexplored" aspects of the assessment center method of identifying management potential. Accordingly, it is important to develop some concepts within which to conduct the research.

A convenient starting point is to consider the purpose of a personnel assessment center. The primary purpose of the typical center, as explained previously, is the identification of individuals with potential for supervision. The underlying premise of this purpose is that those who do well at an assessment center will move into positions of greater supervisory responsibility. Those who do poorly will remain in their present position in the organization. Some may believe that those who do poorly are "deadwood", and that if they become discouraged and leave the organization, the organization benefits. This latter position may not necessarily be sound, since it disregards the fact that the usual practice is for supervisors to nominate individuals who have shown management potential through their job performance to attend an assessment center. Thus, to the on-the-job supervisor, the assessee who does poorly is certainly not "deadwood"; rather, he is a good worker who may lack supervisory potential. Also, the assessee may represent a sizeable investment in terms of experience or technical competence, and losing him may actually hurt the organization.
If the basic purpose of personnel assessment is to identify those with management potential and not to eliminate those who do not exhibit this potential, it is understandable that management concern over assessees' reactions to an assessment center experience does exist (Byham, 1971). Just what particular reaction may cause concern is difficult to discern precisely, because there are many phases of the assessment method, and each of these may cause varying reactions on the part of the assessee. For example, the nomination to attend an assessment center may well generate a sense of satisfaction with the organization's procedure for selecting individuals for supervisory positions. In addition, the physical facilities of the center, the measuring techniques used, the extent of assessor qualification, and the type of feedback interview may all bring about an assessee reaction. Such reaction could be satisfaction or dissatisfaction, or could be encouragement or discouragement with the assessment procedures.

However, perhaps the single most important aspect of assessment to both the assessee and the organization is the final ratings given each individual by the assessment staff. For this reason, this research will focus on the effect of assessment ratings on the reaction of individuals. This seems reasonable since it has been found that "typically the greatest concern of management is the individual who attends an assessment center and does poorly" (Byham, 1971, p. 17).

This concern appears to be important because management experience has suggested that those who do poorly, with respect to some event which closely affects their working career, often become discouraged about their future with the organization. Such discouragement may generate aftereffects impinging upon the assessee's interrelationships with the organization. These aftereffects may vary, but if they affect the individual's performance they are likely to be a problem. This is based on the premise that performance is the keystone of productivity and, in turn, that productivity is the basic ingredient of an efficient and effective organization. If assessment does have an effect, the manager who sends someone to a center may justifiably be concerned with the effects of assessment on the subordinate's subsequent job performance. In addition to effects on performance, the typical manager is also concerned with the subordinate's job satisfaction. Consequently, the manager may well ponder the impact of assessment results both on his subordinate's job performance and on his attitude toward the job, because the impact can alter the effectiveness of the entire work unit.

These, then, are the problems with which the research is concerned. When an individual undergoes a personnel assessment, to what extent do job performance and job satisfaction change after he receives feedback about assessment ratings? It is likely that those who do poorly may experience negative effects. That is, they may be said to have suffered adverse effects from the assessment experience. However, it is unlikely that all individuals are affected in the same fashion by feedback. The individual's personality, especially his conception of himself, may be an important factor which affects his reaction to assessment. One approach to conceptualizing the notion of self-concept is the idea of self-esteem.
One's self-esteem refers to the extent to which the individual perceives himself to be effective in dealing with the problems that confront him (Ghiselli, 1955). That is, one's self-esteem is the extent to which the individual sees himself as a competent, need-satisfying individual (Korman, 1970). So self-esteem is the value the individual places on his image or concept of himself (Kay, et al., 1962).

Some individuals see themselves as being sound in judgment and able to cope with almost any situation (Ghiselli, 1955). These are considered to have high self-esteem (HSE). Other individuals think of themselves as slow to grasp things, making many mistakes, and being generally inept (Ghiselli, 1955). This group would be considered as having low self-esteem (LSE). These individual self-perceptions may be considered relatively persistent personality traits which occur relatively consistently across various situations (Korman, 1970). A person's self-esteem affects the evaluation he places on his performance in a particular situation and the manner in which he behaves when in interaction with others.

Self-esteem concerns the amount of value an individual attributes to various facets of his person and may be said to be affected by the successes and failures he has experienced in satisfying central needs. It may be viewed as a function of the coincidence between an individual's aspirations and his achievement of these aspirations. Self-esteem, then, may be defined as the degree of correspondence between an individual's ideal and actual concepts of himself (Cohen, 1968, p. 383).

In general, HSE persons expect to be successful in meeting their aspirations, while LSE persons expect to encounter failure experiences. This implies that individuals with HSE might well react to new situations with expectations of success, since in the past they have been successful in meeting their needs. And conversely, individuals with LSE might well react to new situations with expectations of failure, since in the past they have been unsuccessful in meeting their needs. Hence LSE's are more vulnerable to the effects of failure experiences, which in turn reinforces the general discrepancy between self-ideals and self-percept (Cohen, 1968).

The levels of self-esteem may generate different kinds of expectations. First, HSE persons may be less affected by the communication of failure experiences and more responsive to success experiences than are LSE persons, since they may protect themselves from negative self-evaluation and be less vulnerable to the impact of outside events. Second, LSE persons may be more affected by what others communicate to them concerning their performance, since they are more apt to indulge in negative self-evaluation and may be more vulnerable to the impact of outside events (Cohen, 1968).

The level of self-esteem might also be important in reactions to assessment. Korman has demonstrated that self-esteem is a useful moderator in the study of other aspects of organizational life. For example, his research has produced evidence showing that self-esteem is a moderating variable in vocational choice (Korman, 1966) and in task success and task satisfaction (Korman, 1970). Also, Korman (1970) has hypothesized that the level of self-esteem affects the satisfaction-performance relationship in that at high levels of self-esteem performance predicts satisfaction, whereas at low levels of self-esteem satisfaction predicts performance.
Self-Esteem and Assessment

These considerations lead to some tentative hypotheses concerning self-esteem as it may affect an individual's reaction to assessment ratings and the change in job performance and job satisfaction after assessment.

First, considering those who attain "good" (above median) assessment ratings, there should be different reactions and aftereffects from the HSE person and from the LSE person. For the HSE person, the ratings reinforce his self-image because his expectation that he can successfully overcome most situations in his work environment is confirmed. Thus his behavior patterns which led to his ratings are reinforced, and consequently his performance and satisfaction would be expected to rise as he continues to follow what has been a successful path for him. For the LSE person, positive assessment results indicate that he has met with a "success". However, a success is basically not congruent with his self-image, and he may tend to disregard the assessment ratings unless they are reinforced later by other positive indications of his behavior patterns (e.g., a promotion). Consequently, his performance and satisfaction would be expected to remain the same as before assessment.

Second, those who attain "poor" (below median) assessment ratings are likely to react differently as a function of self-esteem. For the HSE person, low assessment ratings are incongruent with his self-image, and he will tend to reject or "shake off" this poor showing on his part. While his self-perception of his behavior is not reinforced in this instance, his performance and satisfaction should tend to remain the same as before assessment, because he believes that the path he has followed in the past has been generally successful for him and will remain so in the future. For the LSE person, the ratings confirm his self-belief that he is inept and unable to cope with a new situation. He is now "proved unqualified" for increased management responsibilities by the organization he works for. And since the LSE person is vulnerable to failure, it can be expected that his performance and satisfaction will decrease after assessment as he follows the path he believes he fits.

Model and Hypotheses

Based upon the theoretical concepts developed in the last section, a model of the effects on an assessee of an assessment center experience has been formulated (Fig. 2-1).

FIGURE 2-1
MODEL OF ASSESSMENT EFFECTS UPON AN ASSESSEE
[Diagram not reproducible from the source. The model charts, for each combination of self-esteem (high, low) and assessment rating (above median, below median), the predicted direction of change in job performance and job satisfaction after assessment.]

In the model, a moderating variable, self-esteem, is hypothesized to affect the nature of the reaction to assessment ratings. Essentially, it is hypothesized that where assessment ratings are congruent with the assessee's self-esteem (reinforcing either a positive or a negative self-image), job performance and job satisfaction change after assessment, and that where assessment ratings are incongruent with the assessee's self-esteem, job performance and job satisfaction remain unchanged after assessment. More specifically, it is hypothesized:

1. For the HSE person who receives above median assessment ratings, job performance and job satisfaction will increase or change in a positive direction after assessment.

2.
For the HSE person who receives below median assessment ratings, job performance and job satisfaction will not change after assessment.

3. For the LSE person who receives above median assessment ratings, job performance and job satisfaction will not change after assessment.

4. For the LSE person who receives below median assessment ratings, job performance and job satisfaction will decrease or change in a negative direction after assessment.

The hypotheses are based on two assumptions:

1. Job satisfaction is in some way related meaningfully to job performance. This assumption is supported by Porter and Lawler (1968), who took the position that there is a relationship between job performance and job attitudes, which they assert is not necessarily a causal relationship but is a consistency-of-direction relationship.

2. Before assessment, job performance levels for all assessees are satisfactory or better. Hence, if the assessee maintains this level of performance after assessment, he may still be regarded as a "good performer". The assessee is not considered to have experienced an adverse effect, or to have "failed", if his job performance level does not increase after assessment.

Research Design

The hypotheses presented in the last section basically assert that an assessee's job performance and job satisfaction will be affected by assessment center ratings and the assessee's level of self-esteem. Expressed in other words, it is hypothesized that an assessee's level of performance and satisfaction may be different or may remain unchanged after assessment, when compared with his before-assessment levels of performance and satisfaction. So, in order to test the hypotheses, it is necessary to compare assessee performance and satisfaction at two points in time: (1) before the assessee knows his assessment center ratings; and (2) after the assessment center ratings have been given to the assessee.

Primary Research Objective

The research design for testing the hypotheses requires a "before-after" measure of assessee job performance and job satisfaction. Any changes in performance and satisfaction can then be analyzed to determine the extent to which these changes fit the pattern predicted by the hypotheses.

Because it is hypothesized that changes in job performance and job satisfaction will vary according to assessees' assessment rating and level of self-esteem, it is necessary to divide assessees into groups according to these two factors. This grouping of subjects was accomplished by first dividing subjects according to self-esteem and then according to assessment results. In both cases the median score or rating was used as the dividing point (see the sketch following the description of the research sample below).

1. Self-esteem: The median score on the self-assurance scale of the Ghiselli SDI (used to determine self-esteem) was 30. This score is at the 56th percentile rank according to Ghiselli's norms (Ghiselli, 1971, p. 145). The next lower score, 29, is at the 47th percentile rank. So the median used here approximates that of the population from which Ghiselli derived his norms. Subjects falling above the median score were placed in the above median group and were designated high self-esteem (HSE). Subjects falling below the median score were placed in the below median group and were designated low self-esteem (LSE).

2. Assessment results: The median overall assessment rating for the assessees was 5.25.
Those falling above the median were considered as demonstrating above norm (+) management potential, while those falling below the median were considered as demonstrating below norm (-) management potential. Both the HSE and LSE groups were further divided according to assessment rating. At this point four groups of subjects had been identified and designated as indicated in Figure 2-2.

FIGURE 2-2
IDENTIFICATION AND DESIGNATION OF ASSESSEE GROUPS

    Group                                           Designation
    1. high self-esteem, high assessment rating     HSE+
    2. high self-esteem, low assessment rating      HSE-
    3. low self-esteem, high assessment rating      LSE+
    4. low self-esteem, low assessment rating       LSE-

Since it was hypothesized that assessees would differ systematically as to assessment ratings and self-esteem over time, a three-way analysis of variance was used to test the hypotheses. That is, a 2 x 2 x 2 factorial design was employed. The main effects were assessment center rating (high: +, low: -), self-esteem (high: HSE, low: LSE), and time (before assessment, after assessment). The dependent variables were job performance and job satisfaction. The design may be portrayed as shown in Figure 2-3.

FIGURE 2-3
SCHEMATIC OF ANALYSIS OF VARIANCE
[The schematic portrays the 2 x 2 x 2 layout: self-esteem (HSE, LSE) crossed with assessment rating (high +, low -) and time (before assessment, after assessment), with job performance and job satisfaction measured in each cell.]

Secondary Research Objective

The basic research design examines the primary objective of the research: determining the effects of assessment ratings and self-esteem upon assessee job performance and job satisfaction before and after assessment feedback. The relationships among personality characteristics and various measures of job performance and job satisfaction were also considered. This phase of the research was conducted using: (1) correlational analyses of personality traits and various job performance measures; and (2) correlational analyses of personality traits and various job satisfaction measures.

The second objective of the research was to determine whether measures of personality characteristics could potentially be substituted for an assessment center evaluation as a method for identifying management potential. Such an objective required examination of the relationships among personality traits and assessment ratings. Accordingly, the research design for this phase of the research called for correlational analyses of personality characteristics and assessment ratings rendered by the assessment center staff.

Research Sample

Subjects for the research were 60 males employed by the parts division of a large manufacturing firm in the automotive industry. On the average these men were 35 years old, had completed college, had been with the organization nine years, and earned $20,000 a year. Twenty-four of the men worked in district sales offices located throughout the United States. They represented such functional areas as customer service, marketing, advertising, merchandising, and sales. All could be considered potential district sales managers. The other 36 men worked in parts depots, also located throughout the United States. They represented such functional areas as traffic, warehouse operations, packaging, operations planning, and systems coordination. All of these men could be considered potential depot managers.
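Given the sample just described, the median-split grouping outlined under the primary research objective can be illustrated computationally. The following is a minimal sketch and not part of the original study: the cutoff values are those reported above, while the function name, the handling of scores falling exactly at a median, and the example subjects are assumptions made only for illustration.

    # Minimal sketch of the median-split grouping (illustrative only).
    SELF_ESTEEM_MEDIAN = 30.0    # median of the Ghiselli SDI self-assurance scale
    ASSESSMENT_MEDIAN = 5.25     # median composite assessment center rating

    def classify(self_assurance, assessment_rating):
        """Return the group designation used in Figure 2-2 (HSE+, HSE-, LSE+, LSE-).

        Scores exactly at a median are assigned to the lower group here; the
        thesis does not state how such cases were handled.
        """
        esteem = "HSE" if self_assurance > SELF_ESTEEM_MEDIAN else "LSE"
        rating = "+" if assessment_rating > ASSESSMENT_MEDIAN else "-"
        return esteem + rating

    # Hypothetical subjects: (self-assurance score, composite assessment rating).
    subjects = [(33, 6.1), (28, 5.8), (31, 4.7), (26, 4.9)]
    print([classify(sa, ar) for sa, ar in subjects])   # ['HSE+', 'LSE+', 'HSE-', 'LSE-']

The two dichotomies produced by this split supply the assessment and self-esteem factors of the 2 x 2 x 2 design; a companion sketch of the analysis of variance itself appears at the end of the statistical methods section.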
Subject shrinkage was experienced during the research because some of the before-assessment measures were not completed by all subjects, and because some supervisors and subjects did not complete the after-assessment measures. For the analysis of variance portion of the research design, Table 2-1 indicates the N's of the four groups of assessees for the measurement of the dependent variables.

TABLE 2-1
SUBJECTS INVOLVED IN ANALYSIS OF VARIANCE OF VARIOUS EFFECTS OF ASSESSMENT

    Group*       Job performance (N)    Job satisfaction (N)
    HSE+                 11                     11
    HSE-                  6                      7
    LSE+                 10                      5
    LSE-                 15                      9
    Total                42                     32

    * HSE+ : high self-esteem, high assessment rating
      HSE- : high self-esteem, low assessment rating
      LSE+ : low self-esteem, high assessment rating
      LSE- : low self-esteem, low assessment rating

For the correlational analysis phase of the research design, Table 2-2 indicates the N's involved in the examination of the various measures.

TABLE 2-2
SUBJECTS INVOLVED IN ANALYSES OF GHISELLI CHARACTERISTICS AND VARIOUS CRITERIA

    Measure                    N
    Assessment Evaluation     56
    Job Performance           56
    Job Satisfaction          34

Assessment Program

The division's assessment program is much like the "typical" assessment center described in the first chapter. However, it should be noted that the primary objective of the assessment program was to identify managerial strengths and weaknesses for the purpose of planning developmental, training, or experience needs in order to build a more effective managerial force in the organization. The center was thus oriented toward the identification of specific developmental needs in order to enhance the organization's pool of managerial talent, as opposed to the mere identification of individual managerial potential, as is the case for most centers. Nevertheless, the personnel staff of the division recognized that information generated at the center would be a part of the decision process relating to promotions. The assessees themselves regarded center results as important to their future careers in the organization.

Other than that, the center was operated much like other assessment centers. Assessees were selected on a random basis to attend an assessment session. The assessees, who went through the center in groups of twelve, began their experience at the center with a group discussion meeting, during which the center administrator outlined in detail the purpose and conduct of the center and the dimensions or variables to be measured. Also discussed at this meeting were the roles of participants and assessors.

Assessors were six managers or supervisors, usually two levels removed from the assessees and not related to the assessees in a direct supervisor-subordinate relationship. The assessors received a week of intensive training to develop skills of evaluation. They usually served as evaluators for one assessment session.

Assessees were evaluated over a period of two days on ten variables. Assessment techniques used were: (1) in-basket; (2) problem solving; (3) leaderless group discussion cases; and (4) two leaderless group discussions with assigned roles. Figure 2-4 indicates the specific variables evaluated, the techniques used for evaluating each variable, and the assessee behavior the assessors were instructed to look for during assessment.
FIGURE 2-4
VARIABLES AND TECHNIQUES OF THE ASSESSMENT CENTER
[Figure not reproducible from the source. For each of the ten variables evaluated, the figure lists the techniques used to evaluate the variable (in-basket, problem solving, and the assigned-role and non-assigned-role leaderless group discussion cases) and the assessee behavior the assessors were instructed to look for.]

During the second two days of an assessment session, the assessors evaluated the assessees' assessment center performance. Each assessee was rated on each of the ten variables on a nine-point scale. No overall rating was given. Subjectively, an assessee was considered to have attained "good" results if he received a rating above 6 for a measurement factor. He was considered to have attained "poor" results if he received a rating of 3 or below for a measurement factor. (See Figure 2-4 for the list of factors.) Those receiving ratings in between were considered "average".

A written report, summarizing the findings and opinions of the assessors and containing developmental suggestions and recommendations, was prepared for each assessee. Only an original of the report was made, and it was retained by the personnel planning office of the division as a permanent part of the firm's management development file.
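The rating conventions just described can be summarized in a short computational sketch. This is not part of the original study; in particular, treating the composite assessment rating as the simple average of the ten individual ratings is an assumption here, consistent with the way the overall performance ratings are formed later in this chapter, and the example ratings are hypothetical.

    # Minimal sketch of summarizing one assessee's ten nine-point ratings (illustrative only).
    def label(rating):
        """Subjective label used by the center: 'good' above 6, 'poor' at 3 or below."""
        if rating > 6:
            return "good"
        if rating <= 3:
            return "poor"
        return "average"

    ratings = [7, 5, 6, 8, 4, 5, 6, 7, 5, 6]   # hypothetical ratings on the ten variables
    composite = sum(ratings) / len(ratings)     # assumed composite: average of the ten ratings
    print(composite)                            # 5.9
    print([label(r) for r in ratings])          # per-variable labels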
The results of the center were given to the assessee by a member of the center staff about two weeks after an assessment session. Each assessee was given a list of the ten variables evaluated and the assessment center staff's ratings on these variables. The assessee was encouraged to take notes, but he was not given a copy of the final report. In keeping with the primary purpose of the assessment center, emphasis during feedback was placed on overcoming weak areas noted by the assessment staff. Stress was placed on how a development program could increase the assessee's on-the-job effectiveness, which in turn would enhance his managerial capability and potential. The approach used in feedback appeared to be that it is better to detect and correct weaknesses now than to be embarrassed, and possibly hindered, by them later. No report was given to the assessee's supervisor, other than an oral summary of the assessment ratings and an outline of a recommended development program.

Measuring Instruments

Various measures were used to determine: (1) personality characteristics; (2) assessment performance; (3) job satisfaction; and (4) job performance.

Personality Characteristics

The Ghiselli Self-Description Inventory (Appendix A) was used to determine the individual assessee's personality characteristics. The Self-Description Inventory (SDI) was administered during an assessment session, before assessment results were known.

For determining the effects of assessment on assessee job performance and job satisfaction, the SDI self-assurance scale was used, as it was by Korman (1970), to measure the individual assessee's level of self-esteem. For the secondary objectives of examining the relationships among personality characteristics, job performance ratings, job satisfaction scores, and assessment ratings, the SDI provided a personality inventory which has been frequently used by other researchers.

The SDI, according to Ghiselli (1971), measures individual abilities, traits, and motivations found in the successful manager. As defined by Ghiselli (1971), these are:

I. Abilities:

1. Supervisory ability: capacity to direct the work of others, and to organize and integrate their activities so that the goal of the work group can be attained.

2. Intelligence: cognitive capacity of the mind involving such capacities as judgment and reasoning, and the capacity to deal with ideas, abstractions, and concepts.

3. Initiative: has two aspects: (a) the ability to act independently and to initiate actions without stimulation and support from others; (b) the capacity to see courses of action and implementations that are not readily apparent to others.

II. Personality Traits:

4. Self-assurance: extent to which the individual perceives himself to be effective in dealing with the problems that confront him.

5. Decisiveness: extent to which an individual sees that a decision must be made and goes ahead and makes it.

6. Masculinity-femininity: extent to which an individual of one sex manifests the traits, perceptions, or other qualities associated with members of the opposite sex.

7. Maturity: that state where the processes of development are complete so that there is no further natural growth or improvement.

8. Working class affinity: extent to which the individual is likely to be accepted or rejected by those of the working class as a suitable person to associate with.

III. Motivations:
9. Need for occupational achievement: desire to achieve the responsibility and the prestige which is associated with high position. (This trait is sometimes referred to as achievement motivation.)

10. Need for self-actualization: desire to utilize one's talents to the fullest extent.

11. Need for power: desire to direct and control the activities of others.

12. Need for high financial reward: desire for monetary gain from one's work.

13. Need for job security: extent to which an individual is fearful of his circumstances and wants protection from adverse forces.

Ghiselli (1971) has reported validity coefficients between SDI scores and job success as shown in Table 2-3.

TABLE 2-3
COEFFICIENTS OF CORRELATION BETWEEN THE SCORES OF MANAGERS, SUPERVISORS, AND WORKERS ON THE VARIOUS SDI SCALES AND THEIR JOB SUCCESS

                                         Managers   Supervisors   Workers
    Supervisory ability                     .46         .34          .10
    Intelligence                            .27         .06          .03
    Initiative                              .15        -.07          .02
    Self-assurance                          .19         .18         -.03
    Decisiveness                            .22         .15          .05
    Masculinity-femininity                 -.05        -.07         -.09
    Maturity                               -.03         .13          .02
    Working class affinity                 -.17         .07         -.03
    Need for occupational achievement       .34         .08          .01
    Need for self-actualization             .26        -.03          .05
    Need for power over others              .03         .12         -.16
    Need for high financial reward         -.18        -.05         -.10
    Need for job security                  -.30        -.05         -.11

    Source: Edwin E. Ghiselli, Explorations in Managerial Talent (Pacific Palisades, California: Goodyear Publishing, 1971), p. 150.

In developing the data shown in Table 2-3, Ghiselli used 306 managers, 111 line supervisors, and 238 line workers drawn from a wide assortment of business and industrial firms located in various parts of the United States. These individuals were administered the SDI and were also rated by their superiors. The SDI scores were correlated with the judgments of the superiors, which had been divided into two categories: (1) more successful; and (2) less successful (Table 2-3).

Assessment Performance in the Present Study

The assessment center ratings given each assessee in the present study by the cooperating organization's assessment staff were used as a measure of assessment performance. The staff evaluated assessees on variables or dimensions which in general included the following:

    The ability to plan, organize, and control effectively.
    The ability to work with others and to influence them to a course of action.
    The ability to exercise leadership in a group, or to contribute materially to a group's goals.
    The capacity to learn from, or to use, written and oral communications.
    The ability to adjust to changing conditions.
    Sensitivity toward the opinions and feelings of others.
    The ability to analyze data, solve problems, or arrive at decisions.

The specific variables evaluated are shown in Figure 2-4. As previously indicated, each assessee was rated on all ten variables by the assessment staff, with each rating made on a nine-point scale. The points on the scale were defined for the assessors as follows:

1 -- Low: Shows considerable negative behavior in this particular skill area, or has consistently not displayed expected behavior when the situation required it.

3 -- Below Average: Shows little of this skill and would definitely need development in this area.

5 -- Satisfactory: Displays an adequate amount of the skill but could probably use some development in the area; there is nothing to indicate that he has difficulties in the skill.

7 -- Above Average: Displays the particular skill to a greater degree than many presently functioning managers. The skill is displayed strongly.
9 -- Exceptional: Shows as much of the skill as could be expected. Very well prepared for a management job, considering only the particular skill in question.

An inter-item analysis, using Pearson's product-moment correlation, was made of these assessment ratings (Table 2-4).

[Pages 62 through 68 of the source are not legible. They contain Table 2-4, the descriptions of the remaining measuring instruments, and Tables 2-5 through 2-8; the discussion resumes with the internal reliabilities of the three job performance rating scales.]

Since the internal reliabilities of all three scales were high, it was decided that one overall rating would be used for each of the scales. This overall rating was the average of the ten individual ratings on each scale.

As explained before, self-ratings and peer-average ratings were used to help estimate the validity of the supervisor's judgment of assessee job performance. To make this estimate, the intercorrelations of the three scales were computed; they are shown in Table 2-9. Also shown in Table 2-9 are the intercorrelations of the overall assessment center rating with the three before-assessment overall job performance ratings.

TABLE 2-9
INTERCORRELATIONS AMONG ASSESSMENT RATING AND VARIOUS JOB PERFORMANCE RATINGS
(Before Assessment - N = 56)

    Rating          Assessment   Peer-Average   Supervisor   Self
    Assessment         1.00
    Peer-Average        .29*         1.00
    Supervisor          .30*          .52**        1.00
    Self                .44**         .08           .30*      1.00

    * Significant at .05 level
    ** Significant at .01 level

Table 2-9 indicates that a relatively high degree of agreement exists between supervisors and peers, and a moderate degree of agreement exists between supervisors and self-ratings, as to assessee job performance. While peer and self-ratings were not correlated, the pattern of multi-rater agreement suggests that the use of the supervisor's performance rating is an acceptable measure of before-assessment and after-assessment job performance.

Statistical Methods

Three statistical analyses were used for the research: (1) analysis of variance; (2) Scheffe's post hoc method for comparison of means; and (3) correlational analysis.

Analysis of Variance

Analysis of variance was used because it provides a method for the simultaneous comparison of many means in order to determine whether some statistical relation exists among the variables involved. A major advantage of this statistical method is that reasonable departures from the statistical assumptions of normality and homogeneity will not seriously affect the validity of the inferences drawn from the data.
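The 2 x 2 x 2 analysis of variance described here can be sketched with present-day statistical software. The following is a minimal sketch and not the author's computation: it treats the before and after measurements as separate observations and fits the main effects plus all two-way interactions, matching the structure of the ANOVA tables reported in Chapter 3; the randomly generated data frame and the column names are assumptions made only for illustration.

    # Minimal sketch of the 2 x 2 x 2 analysis of variance (illustrative only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Build a small hypothetical data set: two subjects per cell of
    # assessment rating (+/-) x self-esteem (HSE/LSE) x time (before/after).
    rng = np.random.default_rng(0)
    rows = []
    for a in ("+", "-"):
        for s in ("HSE", "LSE"):
            for t in ("before", "after"):
                for _ in range(2):
                    base = 7.0 + (0.4 if a == "+" else -0.4) + (0.2 if s == "HSE" else -0.2)
                    rows.append({"assessment": a, "self_esteem": s, "time": t,
                                 "performance": base + rng.normal(scale=0.3)})
    df = pd.DataFrame(rows)

    # Main effects plus all two-way interactions, with no three-way term,
    # as in Tables 3-1 through 3-5.
    model = smf.ols(
        "performance ~ (C(assessment) + C(self_esteem) + C(time)) ** 2",
        data=df,
    ).fit()
    print(anova_lm(model, typ=2))   # F ratio and significance level for each effect

Because the thesis pools the before and after measurements into a single error term (an error d.f. of 77 for 42 subjects measured twice in Table 3-1), the sketch does the same rather than fitting a repeated-measures model.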
Scheffe Test

After an analysis of variance, when a significant F ratio is present, interpretation of the data often requires a comparison of pairs of means. The differences between some pairs may be significant, while the differences between others may not be. The research design used in this research calls for this type of data interpretation. Accordingly, the Scheffe method of means comparison was used when a significant F ratio was found.

The Scheffe method was selected because no special problems arise from unequal N's, and results are not seriously affected by moderate violations of the assumptions of normality and homogeneity of variance. Finally, the method uses more rigorous standards than other multiple comparison methods, so fewer significant differences result. Because it is more rigorous than other procedures, the common practice when using the Scheffe method is to use the .10 level of significance instead of the .05 level (Ferguson, 1966, p. 297). This practice was followed in analyzing the present research data. All combinations of means were subjected to the Scheffe test; however, only those means with significant differences are reported.

Correlational Analysis

Correlation coefficients were obtained using Pearson's product-moment correlation. The degree of relationship is given by the product-moment correlation coefficient "r", which is the average product of the first moments of two distributions. The sign and size of r, which cannot be greater than +1.00 or less than -1.00, provide a very understandable indication of the direction and degree of relationship between two variables (Nunnally, 1967).

Correlational analyses were used to determine the inter-item correlations among the various performance scales, and to determine the extent of relationship among these scales. In addition, correlational analyses were used to determine the extent of the relationship among personality traits and assessment ratings, among personality traits and job performance measures, and among personality traits and job satisfaction measures.

Summary

In this chapter, the underlying theoretical concepts of the research, and the model and hypotheses generated by the concepts, were presented. The research design, the research sample, the measuring instruments used, and the various statistical methods employed in analyzing the research data were discussed. Now to be considered are the results of the research, which are presented in the next chapter.

CHAPTER 3
RESEARCH RESULTS

This research has two objectives. The primary objective is to determine the extent to which an assessment center experience affects assessee job performance and job satisfaction. In Chapter 2, certain effects on individuals were hypothesized, and these were expected to differ as a function of the assessee's level of self-esteem and level of assessment rating. To test the hypotheses, a three-way analysis of variance was performed. The independent variables were self-esteem, assessment rating, and time. The dependent variables were job performance and job satisfaction. This chapter reports the results of the analyses of variance.

This chapter also reports the results of the examination of the relationships among the characteristics measured by the Ghiselli SDI and job performance and job satisfaction. Additionally, Ghiselli SDI dimensions were correlated with assessment ratings. Since these latter analyses were exploratory in nature, no specific hypotheses were developed.
More extensive discussion and interpretation of the research results follow in Chapter 4.

The Impact of Assessment on Performance and Satisfaction

In this section, a series of five tables presents the results of the analyses of variance concerning assessment, self-esteem, and time effects on assessee job performance and job satisfaction. Also shown in each table are the means for the four groups of assessees. Where a significant F ratio occurs, the Scheffe comparison of means is used to determine which differences between all combinations of means are significant.

Impact of Assessment on Job Performance

Table 3-1 reports the ANOVA results with respect to supervisor job performance ratings, examining the relationship between the dependent variable, supervisor performance rating, and the main effects of assessment rating, self-esteem, and before-after. For this analysis, performance data were available for 42 subjects who were rated both before assessment and after assessment by their supervisors.

TABLE 3-1
RELATIONSHIP BETWEEN: (A) BEFORE AND AFTER ASSESSMENT, (B) SUPERVISOR JOB PERFORMANCE RATING, AND (C) ASSESSMENT RATING, FOR ASSESSEES AT DIFFERENT LEVELS OF SELF-ESTEEM (N=42)

I. ANOVA Results

    Source              d.f.      SS         MS        F       sig
    Assessment (A)        1      3.5323     3.5323   6.5221   0.013
    Self-Esteem (SE)      1      0.9421     0.9421   1.7027   0.196
    A x SE                1      2.8515     2.8515   5.2650   0.024
    Before-After (BA)     1      0.0426     0.0426   0.0787   0.780
    A x BA                1      0.1565     0.1565   0.2890   0.592
    SE x BA               1      0.7907     0.7907   1.4600   0.231
    Error                77     41.7022     0.5416
    Total                83     52.5870

II. Mean Scores for Each Group in ANOVA

                  Before Assessment        After Assessment
                  Lo(-)      Hi(+)         Lo(-)      Hi(+)
    HSE           7.5        7.0           7.2        7.3
                  (N=6)      (N=11)        (N=6)      (N=11)
    LSE           6.5        7.5           6.5        7.1
                  (N=15)     (N=10)        (N=15)     (N=10)

III. Scheffe Comparisons (p ≤ .10)

    1. Assessees who received high assessment ratings received significantly higher supervisor job performance ratings than did assessees who received low assessment ratings.
    2. High self-esteem assessees with high assessment ratings received significantly higher supervisor job performance ratings than did low self-esteem assessees with low assessment ratings.
    3. No other differences between all possible combinations of means were significant.

The assessment variable effect on the supervisor job performance rating was significant at the .01 level. The Scheffe tests confirmed that there was a significant difference between the mean job performance rating of assessees who received above median assessment ratings and the mean performance rating of those assessees who received below median assessment ratings. This finding suggests that there was significant agreement between assessors and supervisors as to the "high" and "low" performers (Table 3-1).

Also, a significant F ratio (.02) was present for the interaction of assessment and self-esteem (A x SE). The Scheffe test confirmed that there was a significant difference in mean job performance ratings between the HSE+ group of assessees and the LSE- group (Table 3-1).

Impact of Assessment on Satisfaction with Work

Table 3-2 reports the ANOVA results with respect to job satisfaction (work scale), examining the relationship between the dependent variable, satisfaction with work, and the main effects of assessment rating, self-esteem, and before-after. For this analysis, data were available for 32 subjects who completed the JDI during assessment and again six months after assessment.
TABLE 3-2
RELATIONSHIP BETWEEN: (A) BEFORE AND AFTER ASSESSMENT, (B) JOB SATISFACTION (WORK), AND (C) ASSESSMENT RATING, FOR ASSESSEES AT DIFFERENT LEVELS OF SELF-ESTEEM (N=32)

I. ANOVA Results

    Source              d.f.       SS          MS         F       sig
    Assessment (A)        1      96.4405     96.4405    2.4513   0.123
    Self-Esteem (SE)      1     258.2827    258.2827    6.5649   0.013
    A x SE                1      11.5030     11.5030    0.2924   0.591
    Before-After (BA)     1      19.8376     19.8376    0.5042   0.481
    A x BA                1      11.5836     11.5836    0.2944   0.590
    SE x BA               1      15.2542     15.2542    0.3877   0.536
    Error                57    2242.5393     39.3428
    Total                63    2609.7500

II. Mean Scores for Each Group in ANOVA

                  Before Assessment        After Assessment
                  Lo(-)      Hi(+)         Lo(-)      Hi(+)
    HSE           42.1       37.5          2.6        40.4
                  (N=7)      (N=11)        (N=7)      (N=11)
    LSE           35.8       37.8          38.8       33.4
                  (N=9)      (N=5)         (N=9)      (N=5)

III. Scheffe Comparisons (p ≤ .10)

    1. High self-esteem groups were significantly more satisfied with work than were low self-esteem groups.
    2. No other differences between all possible combinations of means were significant.

The self-esteem effect was significant at the .01 level, indicating a tendency for high self-esteem persons to be more satisfied with their work than low self-esteem persons. The Scheffe test confirmed that the two groups of HSE assessees were significantly higher than the two LSE groups of assessees in their satisfaction with work (Table 3-2).

Impact of Assessment on Satisfaction with Supervision

Table 3-3 reports the ANOVA results with respect to job satisfaction (supervision scale), examining the relationship between the dependent variable, satisfaction with supervision, and the main effects of assessment rating, self-esteem, and before-after. For this analysis, data were available for 32 subjects who completed the JDI during assessment and again six months after assessment.

TABLE 3-3
RELATIONSHIP BETWEEN: (A) BEFORE AND AFTER ASSESSMENT, (B) JOB SATISFACTION (SUPERVISION), AND (C) ASSESSMENT RATING, FOR ASSESSEES AT DIFFERENT LEVELS OF SELF-ESTEEM (N=32)

I. ANOVA Results

    Source              d.f.       SS          MS         F       sig
    Assessment (A)        1      98.2572     98.2572    1.7974   0.185
    Self-Esteem (SE)      1       5.7784      5.7784    0.1057   0.746
    A x SE                1     339.5707    339.5707    6.2118   0.016
    Before-After (BA)     1       5.4383      5.4383    0.0995   0.754
    A x BA                1       0.1665      0.1665    0.0030   0.956
    SE x BA               1       0.0320      0.0320    0.0006   0.981
    Error                57    3115.9277
    Total                63    3621.7344

II. Mean Scores for Each Group in ANOVA

                  Before Assessment        After Assessment
                  Lo(-)      Hi(+)         Lo(-)      Hi(+)
    HSE           50.9       42.7          50.6       43.9
                  (N=7)      (N=11)        (N=7)      (N=11)
    LSE           44.6       48.0          46.0       47.0
                  (N=9)      (N=5)         (N=9)      (N=5)

III. Scheffe Comparisons (p ≤ .10)

    1. Low self-esteem assessees who received high assessment ratings and high self-esteem assessees who received low assessment ratings were significantly more satisfied with supervision than were high self-esteem assessees who received high assessment ratings and low self-esteem assessees who received low assessment ratings.
    2. No other differences between all possible combinations of means were significant.

The interaction of assessment and self-esteem (A x SE) was significant at the .02 level. The Scheffe test confirmed that the LSE+ and HSE- groups of assessees were significantly more satisfied with their supervision than the HSE+ and LSE- groups (Table 3-3).

Impact of Assessment on Satisfaction with Promotions

Table 3-4 reports the ANOVA results concerning job satisfaction (promotions scale), examining the relationship between the dependent variable, satisfaction with promotions, and the main effects of assessment rating, self-esteem, and before-after.
For this analysis, data were available for 32 subjects who completed the JDI during assessment and again six months after assessment.

TABLE 3-4
RELATIONSHIP BETWEEN: (A) BEFORE AND AFTER ASSESSMENT, (B) JOB SATISFACTION (PROMOTIONS), AND (C) ASSESSMENT RATING, FOR ASSESSEES AT DIFFERENT LEVELS OF SELF-ESTEEM (N=32)

I. ANOVA Results

    Source              d.f.       SS          MS         F       sig
    Assessment (A)        1      13.5951     13.5951    0.2510   0.618
    Self-Esteem (SE)      1      39.1862     39.1862    0.7236   0.399
    A x SE                1      72.0675     72.0675    1.3308   0.253
    Before-After (BA)     1     274.4924    274.4924    5.0687   0.028
    A x BA                1      36.1126     36.1126    0.6669   0.418
    SE x BA               1      26.2778     26.2778    0.4852   0.489
    Error                57    3086.7784     54.1540
    Total                63    3576.8594

II. Mean Scores for Each Group in ANOVA

                  Before Assessment        After Assessment
                  Lo(-)      Hi(+)         Lo(-)      Hi(+)
    HSE           21.6       20.2          13.3       17.1
                  (N=7)      (N=11)        (N=7)      (N=11)
    LSE           23.0       19.6          19.6       16.6
                  (N=9)      (N=5)         (N=9)      (N=5)

III. Scheffe Comparisons (p ≤ .10)

    1. High self-esteem assessees who received low assessment ratings were significantly lower in satisfaction with promotions after assessment.
    2. No other differences between all possible combinations of means were significant.

The before-after (time) effect was significant at the .03 level. The Scheffe test confirmed that there was a significant decrease after assessment in satisfaction with promotions on the part of the high self-esteem assessees who received low assessment ratings (Table 3-4).

Impact of Assessment on Satisfaction with Co-Workers

Table 3-5 reports the ANOVA results with respect to job satisfaction (co-workers scale), examining the relationship between the dependent variable, satisfaction with co-workers, and the main effects of assessment rating, self-esteem, and before-after. For this analysis, data were available for 32 subjects who completed the JDI during assessment and again six months after assessment.

TABLE 3-5
RELATIONSHIP BETWEEN: (A) BEFORE AND AFTER ASSESSMENT, (B) JOB SATISFACTION (CO-WORKERS), AND (C) ASSESSMENT RATING, FOR ASSESSEES AT DIFFERENT LEVELS OF SELF-ESTEEM (N=32)

I. ANOVA Results

    Source              d.f.       SS          MS         F       sig
    Assessment (A)        1      67.9095     67.9095    1.2599   0.266
    Self-Esteem (SE)      1      66.1171     66.1171    1.2266   0.273
    A x SE                1      61.7411     61.7411    1.1454   0.289
    Before-After (BA)     1      17.3040     17.3040    0.3210   0.573
    A x BA                1       9.2326      9.2326    0.1713   0.681
    SE x BA               1       5.4915      5.4915    0.1019   0.751
    Error                57    3072.3882     53.9015
    Total                63    3370.0000

II. Mean Scores for Each Group in ANOVA

                  Before Assessment        After Assessment
                  Lo(-)      Hi(+)         Lo(-)      Hi(+)
    HSE           49.1       44.5          46.7       43.0
                  (N=7)      (N=11)        (N=7)      (N=11)
    LSE           47.3       47.6          48.8       46.2
                  (N=9)      (N=5)         (N=9)      (N=5)

III. Scheffe Comparisons (p ≤ .10)

    1. No differences between all possible combinations of means were significant.

There were no significant effects found in this particular analysis of variance (Table 3-5).

Relationships of Ghiselli SDI Traits

In this section, results are reported of the correlational analyses of the relationships of the 13 Ghiselli SDI traits with: (1) job performance; (2) job satisfaction; and (3) assessment ratings.

Ghiselli Traits and Job Performance

The correlations of the 13 Ghiselli SDI traits with the three measures of before-assessment job performance are reported in Table 3-6. Fifty-six (56) subjects responded to the Ghiselli SDI and completed a self-rating form as part of the assessment battery of tests. These 56 subjects also were rated by two co-workers and their supervisor before assessment results were known. For purposes of this analysis, co-worker ratings were averaged for a "peer" rating.
TABLE 3-6
CORRELATIONS OF GHISELLI TRAITS WITH OVERALL JOB PERFORMANCE RATINGS OBTAINED BEFORE ASSESSMENT (N=56)

                                       Performance Rating Correlations
    Ghiselli Trait                     Peer      Supervisor    Self
    Supervisory Ability                -.34*        -.10        .31*
    Intelligence                       -.06          .07        .46**
    Initiative                          .02          .02        .28*
    Self-Assurance                     -.12          .04        .38**
    Decisiveness                        .08          .14        .16
    Masculinity-Femininity             -.13          .09        .19
    Maturity                           -.12         -.10        .04
    Working Class Affinity              .05         -.03        .06
    Achievement Motivation              .01          .06        .37**
    Need for Self-Actualization         .00         -.11        .22
    Need for Power                      .10          .08        .39**
    Need for High Reward               -.01          .05       -.13
    Need for Security                  -.06          .02       -.46**

    * Significant at .05 level
    ** Significant at .01 level

The Ghiselli SDI characteristics had no significant correlation with supervisor ratings. With the exception of the supervisory ability trait, the SDI characteristics had no significant correlation with peer ratings (Table 3-6).

Seven of the SDI traits did have significant correlations with self-ratings. This suggests that the assessee has a fairly constant image of himself as measured by two assessment instruments, one evaluating his self-concept of job performance and the other his concept of his personality (Table 3-6).

Ghiselli Traits and Job Satisfaction

The correlations of the 13 Ghiselli SDI traits with the four measures of before-assessment job satisfaction are reported in Table 3-7. These correlations were obtained from the JDI administered to 34 subjects as part of the assessment battery of tests. Four scales of the JDI were used to measure the assessees' attitude toward the job.

TABLE 3-7
CORRELATIONS OF GHISELLI TRAITS WITH JOB SATISFACTION SCALES OBTAINED BEFORE ASSESSMENT (N=34)

                                       Job Satisfaction Scale Correlations
    Ghiselli Trait                     Work    Supervision   Promotions   Co-Workers
    Supervisory Ability                 .24       -.17          .12          .01
    Intelligence                        .15        .09         -.14          .05
    Initiative                          .30       -.13          .03          .12
    Self-Assurance                      .22        .03         -.14          .07
    Decisiveness                        .31        .09          .19          .12
    Masculinity-Femininity              .09       -.26         -.16          .23
    Maturity                            .30       -.04         -.04          .24
    Working Class Affinity              .20       -.10          .18          .07
    Achievement Motivation              .30        .03          .05          .22
    Need for Self-Actualization         .02       -.23          .12         -.09
    Need for Power                      .16        .37*        -.24         -.07
    Need for High Reward                .37*       .01         -.07         -.18
    Need for Security                   .27       -.00         -.12         -.11

    * Significant at .05 level

Only two traits had significant correlations with job satisfaction: (1) need for power with the supervision scale; and (2) lack of need for high financial reward with the work scale (Table 3-7). In essence, then, the SDI had virtually no relationship with the job satisfaction measures.

Ghiselli Traits and Assessment Ratings

Correlations of the 13 Ghiselli traits with assessment ratings are reported in Table 3-8. For this analysis, assessment ratings for 56 subjects were used.
TABLE 3-8
CORRELATIONS OF GHISELLI TRAITS WITH ASSESSMENT RATINGS (N=56)
[The table, which reports the correlations of the 13 Ghiselli traits with the ratings on each of the ten assessment variables and with the overall assessment rating, is not legible in the source, as is part of the accompanying discussion. The legible portion of the discussion ends with a reference to initiative and the assessment rating on the leadership variable (Table 3-8).]

Summary

The basic results and findings of the statistical analyses of the research data were presented in this chapter. The next chapter presents a more extensive discussion of the research findings.

CHAPTER 4
SUMMARY, DISCUSSION, AND CONCLUSIONS

This research has been concerned primarily with the effects of a personnel assessment experience, which could be a truly significant event in an individual's organizational life, with respect to its impact on his job performance and job satisfaction.

It is logical to assume that assessee reaction to assessment will differ and that these differences might be accounted for, at least in part, by personality characteristics, particularly self-esteem. This logical assumption led to the development of the hypotheses presented in Chapter 2. Basically these state that after an assessment center experience, job performance and job satisfaction would:

1. Increase for those assessees with high self-esteem who attained above median assessment ratings (the HSE+ group of assessees).

2. Not change for those assessees with high self-esteem who attained below median assessment ratings (the HSE- group of assessees).

3. Not change for those assessees with low self-esteem who attained above median assessment ratings (the LSE+ group of assessees).

4. Decrease for those assessees with low self-esteem who attained below median assessment ratings (the LSE- group of assessees).

The subjects were 60 managers who participated in a personnel assessment center conducted by the parts division of a manufacturing firm in the automotive industry. As part of the assessment process, assessment ratings were obtained on ten variables for each subject. These were combined for use as a dependent variable here. Additional data, identified by the company research staff as being gathered for research purposes, were also collected for the present study. These data were measures of:

1. Personality characteristics, where the measuring instrument used was the Ghiselli Self-Description Inventory (SDI). The SDI was administered as part of the assessment center battery.

2. Job performance, where a supervisor's rating was used. A rating was obtained on the assessee before assessment and again about six months after assessment.

3. Job satisfaction, where the measuring instrument used was the Job Descriptive Index (JDI). The JDI was administered as part of the assessment center battery and again about six months after assessment.

To test the hypotheses, a three-way analysis of variance was used. That is, a 2 x 2 x 2 factorial design was employed. In the design, the main effects were: (1) assessment rating (above median or +, below median or -); (2) level of self-esteem (above median or HSE, below median or LSE); and (3) time (before assessment, after assessment).
Adjunct to the primary purpose of the research was a determination of the relationship between personality characteristics and two types of criteria: performance ratings and satisfaction scores. The Ghiselli SDI mea- sured personality characteristics. Performance ratings used were self, peer, and supervisor ratings completed before assessment results were known. Satisfaction scores were from work, supervision, promotions, and co-workers scales of the JDI administered before assessment results were known. The relationship between personality characteristics and assessment center evaluation was also examined. The Ghiselli SDI was used to determine personality character- istics. The assessment center staff's ratings of ten variables were used as a measure of assessment. Correla- tions between SDI traits and assessment ratings were 95 obtained. The results of the statistical analyses are reported in the preceding chapter. These will be interpreted and discussed in this chapter. The Impact of Assessment on Performance and Satisfaction The analysis of variance led to the rejection of two hypotheses, partial support for one hypothesis, and sup- port for one hypothesis. 1. No significant changes were found in after assessment job performance and job satisfac- tion for the HSE+ assessees. Thus this hypothesis was rejected. 2. No significant changes were found in after assessment job performance for the HSE- assessees. This finding supports the hypothesis. But a significant decrease was found in satisfaction with promotions after assessment. The change in job satisfaction is contrary to the effect hypothesized. Accordingly, the hypothesis is supported in part and rejected in part. 3. No significant changes were found in after assessment job performance and job satisfac- tion for the LSE+ assessees. These findings support the hypothesis. 4. No significant changes were found in after assessment job performance and job satisfac- tion for the LSE— assessees. Thus this hypothesis was rejected. In general then, results of the analysis lead to a conclusion that for the most part an assessment center 96 experience does not affect assessee job performance, at least in the time span measured. It also appears that assessment does not affect job satisfaction, except perhaps in the case of satisfaction withppromotions. For satisfaction with promotions, the findings indi- cated that the high self—esteem individuals who received low assessment ratings (HSE- assessees) became signifi- cantly less satisfied with promotions after assessment (Table 3-4). The finding may indicate an underlying dis— satisfaction with promotions which has come to the surface after the assessment center experience. If so, there may be many reasons for the dissatisfaction. To uncover the reasons would require further in-depth research. From the data, it seems reasonable to conclude that the dissatis- faction with promotions may arise because the HSE- assessees are disappointed by their low assessment ratings. Such ratings would conflict with their self-perceived competence, and might be regarded as a "failure". Since an HSE person tends to externalize failure, he may blame low ratings on the assessment process. In turn, since they may relate the assessment program with promotions, this could lead to more dissatisfaction with promotions. This finding could be of significance to managers. It indicates that perhaps concern should be directed to 97 the HSE person who does poorly at an assessment center. 
This could have important implications, for as shown in Table 3-1, supervisors tend to rate the job performance of high self-esteem assessees with low assessment ratings (the HSE- group) as being basically the same as the job performance of those who do well in assessment (the HSE+ and LSE+ groups). Thus it appears that supervisors regard the HSE- individual who gets a low rating in assessment as a "good performer". Therefore, it is reasonable to expect that it may be desirable to retain this individual, and not lose him because of dissatisfaction stemming from his poor assessment showing.

One way to maintain the HSE- assessee's job satisfaction is to "tailor" the feedback of assessment results. Such feedback should stress the importance of a development program to overcome weak areas. It should minimize the impact of assessment results on future promotions. For example, the assessee could be told that while the assessment results are undoubtedly disappointing, it is better to uncover weaknesses at this point in a career rather than later; now they can be corrected if effort is devoted to a development program. If he were promoted and the managerial weaknesses then showed up, he might suffer embarrassment and frustration, and perhaps even a demotion.
For example, there were only 5 assessees in the LSE+ group for job satisfac- tion analyses. If one assessee reacted to assessment 100 differently from the other four in the group, the mean for the group would change considerably. With a large number of subjects, individual differences would not impact on the group mean. Also, with a large number of subjects, small changes in performance and satisfaction would likely be more statistically significant wherever ET they occur. Unfortunately, sample size is often a problem with this kind of research. Rather than forego research, it seems advisable to proceed even with a small group of subjects. 4. The time span between measures may not have been such that any effects were observable. The time dimension could be extremely important. For example, there well may be an immediate effect on job performance and job satisfaction after feedback of assess- ment results. This effect may not be present six months later, when the after assessment measures in this research were taken. Or, there may be a long-range effect on per- formance and satisfaction if the assessee becomes con- vinced that the assessment results are indeed affecting his career with the organization. 5. The moderators selected may not be important as they relate to the effects of assessment. This research suggests that self—esteem does not moderate the effects of assessment on subsequent job 101 performance and job satisfaction. But other variables might. In Table 3-6 it is noted that three personality traits, other than self-assurance (self-esteem) have significant correlations with assessment ratings. Certainly the search for moderating variables should not be abandoned. 6. Assessment has limited or no impact on an individual. The reason for this limited or no impact is that an assessment center experience does not affect assessee job performance and job satisfaction. A possible explanation for this conclusion may lie in the nature of the assess- ment center used in the research. By stressing assessment as a tool for develOpment, and deemphasizing it as a tool for promotion, the organization may have convinced assessees that assessment was not a "threat" to their careers. Consequently, assessees had no reactions to assessment, except for the HSE— assessees who were appar- ently disturbed by the fact that they did not do as well as they had anticipated. _§ignificant Findings Now turning to a discussion of what the research did Jreveal, several statistically significant findings did emerge from the analyses of data. 102 1. Assessees who attained high assessment ratings received higher before-after supervisor job performance ratings than those assessees who attained low assess- ment ratings (Table 3-1). This finding indicates that there is agreement between assessors and supervisors as to which group of assessees were ”high performers" and which were "low performers". The finding suggests that assessors and supervisors, using the same set of ten items on a rating scale, are able to identify relative effectiveness in a similar fashion, independent of the situation in which they are rating. Supervisors saw assessees perform over a period of time, and rated them on differing sets of tasks than assessment staff would. In assessment, indi- viduals performed for a short time and were rated on a standard set of tasks. Despite these different rating situations, both supervisors and assessors agree as to who are "high performers" and as to who are "low per- formers". 
The close agreement implies that perhaps an assessment center may not be a necessary method for identifying management potential. If so, benefits other than the identification of management potential should be derived from assessment for the assessment program to be worthwhile. One such benefit may be that assessment reports provide data from which recommendations for assessee development can be made. Another may be that assessment provides a standardized evaluation of individuals from all parts of the organization; without assessment, it would be difficult to evaluate management potential by comparing supervisor ratings of those who work in sales offices with ratings of those who work in parts depots. Perhaps the most convincing argument for the assessment center is that, despite the findings here, only a moderate correlation (r = .30, p ≤ .05) exists between assessment ratings and before-assessment supervisor performance ratings (Table 2-9). This correlation suggests that the assessment center may be evaluating aspects of management potential which supervisors are unable to discriminate.

2. High self-esteem persons who attained high assessment ratings (HSE+ assessees) received higher before-after supervisor job performance ratings than did low self-esteem persons who attained low assessment ratings (LSE- assessees) (Table 3-1).

This finding is another indication of the agreement between assessors and supervisors as to who are "high performers" and who are "low performers." But the agreement is not complete, which may account for the moderate correlation discussed under the preceding finding. For example, as shown in Table 3-1, supervisors and assessors disagreed about the "performance" of the HSE- group of assessees: supervisors considered the group "high performers," while assessors considered it "low performers."

The implication of this finding is that there may be a group of assessees who do poorly at assessment and whose loss to the organization may not be detrimental. This group is the LSE- assessees. Both assessors and supervisors agree that these assessees are "poor performers"; it is the only group of assessees for which such agreement exists.

The pattern of agreement between assessors and supervisors suggests that concern for the person who does poorly at an assessment center should perhaps be directed to the HSE person, and not to the LSE person as conceptualized in Chapter 2.

3. Assessees with high self-esteem were more satisfied with their work than assessees with low self-esteem (Table 3-2).

The explanation for this finding may lie in the very concept of self-esteem. The HSE individual perceives himself as competent to perform virtually any task; in essence, he enjoys his work and consequently is satisfied with his work environment. The LSE individual, on the other hand, is less confident of his ability to perform tasks. He may even dread his work and consequently be less satisfied with the work environment than his HSE fellow worker. From the finding, it appears that the HSE person likes his work, while the LSE person does not.

4. Low self-esteem assessees who attained high assessment ratings (LSE+ group) and high self-esteem assessees who attained low assessment ratings (HSE- group) were more satisfied with supervision than high self-esteem assessees with high assessment ratings (HSE+ group) and low self-esteem assessees with low assessment ratings (LSE- group) (Table 3-3).

There are few data available to help explain this finding, but some implications may be drawn. Since the finding is complex, its implications are considered for each assessee group separately.

a. HSE+ assessees appear dissatisfied with the style of their supervisors, although they are considered "high performers" by both supervisors and assessors. The dissatisfaction may be a result of these subjects' high level of self-esteem, which leads to a feeling that supervisors in general are less competent than they are.

b. HSE- assessees appear to like their supervision, as they liked their work. Their satisfaction in these two areas contrasts with their lack of satisfaction with promotions, which was discussed earlier in the chapter. It may be that the "threat" to their careers which they perceived as arising from their low assessment ratings has affected their thinking only with respect to promotions and not with respect to other aspects of the job.

c. LSE+ assessees appear to like their supervisors' style. The supervisor may want to be aware that there are some "good" performers who, because of their low self-esteem, may need encouragement.

d. LSE- assessees appear to be a group that is not going anywhere in the organization. All indications are that they are "poor" performers, and they apparently are not satisfied with their supervision or their work. Supervisors may want to take a "hard look" at this group of individuals.

Relationships of Ghiselli SDI Traits

The relationships of the SDI personality characteristics with job performance ratings, job satisfaction scores, and assessment ratings can be discussed briefly, because only a few significant relationships were found.

Performance Ratings

The findings indicate that virtually no relationship exists between the SDI and peer-average performance ratings, or between the SDI and supervisor performance ratings (Table 3-6). However, there are a number of significant correlations between SDI traits and self-ratings of performance (Table 3-6). Such correlations could occur because both are self-ratings; it is to be expected that an individual holds a fairly constant image of himself which might bias his responses to all the instruments.

These correlations reveal a pattern of personality characteristics similar to that found in Table 3-8. They indicate that the self-rated "high performer" has traits of intelligence, achievement motivation, and lack of need for security. In addition, the individual who rated himself a "high performer" revealed strong traits of self-assurance and lack of need for power and, to a lesser extent, traits of supervisory ability and initiative. The correlations between SDI traits and self-ratings (Table 3-6) suggest that the individual has a fairly constant image of himself as measured by two instruments, one determining a variety of personality traits, the other a self-concept of job performance.

Satisfaction Scores

The findings indicate that virtually no relationship exists between the SDI and the work, supervision, promotions, and co-workers scales of the JDI, despite the fact that both are self-ratings.
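Checking for such relationships amounts to computing a correlation for every SDI trait paired with every JDI scale and noting which coefficients reach the .05 level. The fragment below is a generic sketch of that bookkeeping with simulated data and placeholder column names; it is not a rerun of the study's analysis.

    # Generic sketch: correlate every SDI trait with every JDI scale and flag p < .05.
    # Column names and data are placeholders, not the study's variables.
    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n = 60
    sdi_traits = pd.DataFrame(rng.normal(size=(n, 3)),
                              columns=["self_assurance", "intelligence", "initiative"])
    jdi_scales = pd.DataFrame(rng.normal(size=(n, 4)),
                              columns=["work", "supervision", "promotions", "co_workers"])

    # With many trait-scale pairs, a few correlations will clear .05 by chance alone.
    for trait in sdi_traits:
        for scale in jdi_scales:
            r, p = pearsonr(sdi_traits[trait], jdi_scales[scale])
            if p < 0.05:
                print(f"{trait} x {scale}: r = {r:.2f}, p = {p:.3f}")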
In essence, the findings are inconclusive, although they perhaps suggest that one's self-estimate of personality characteristics has no relationship with one's attitude toward the various facets of his job.

These results are not entirely consistent with Eran (1966). He used the decision-making approach (now labeled decisiveness) scale of the SDI to separate managers into "highs" and "lows." Using the Porter Management Positions Questionnaire to measure job attitudes, Eran found that those with high self-ratings of decisiveness were significantly more satisfied with their jobs as managers. In the present research it was found that high self-esteem assessees were significantly more satisfied with their work than were low self-esteem assessees. But this is only one significant correlation out of a possible six on this satisfaction scale.

Assessment Ratings

The personality characteristics of the individual who does well at an assessment center do emerge from the data. As shown by the correlations, the assessee who attains high assessment ratings describes himself as intelligent, achievement motivated, and lacking in need for security. To a lesser extent he is also self-assured.

Of the four traits just cited, self-ratings of intelligence appear to be the most highly correlated with assessment results. In turn, assessment results are predictive of management potential, as discussed in the review of the research literature in Chapter 1. Thus the intelligence scale may be the most useful in predicting management potential. This view is supported by such researchers as Stogdill (1948), Goode (1951), Randle (1956), and Harrell (1961), who indicate that intellectual ability is an important trait in successful managers.

However, the findings also indicate some disagreement between the assessment ratings and the Ghiselli SDI over what appear to be similar variables. There is no correlation between some variables where, at least on the face of it, there should be one. For example, both the assessment staff and the SDI purport to measure "initiative" and "decisiveness," but the findings (Table 3-8) indicate that these traits as viewed in assessment are not the same as measured by the SDI. "Leadership" and "supervisory ability" connote similar abilities, but the findings (Table 3-8) indicate that they are not the same. Likewise, "organizational ability" and "supervisory ability" connote similar abilities, but again the findings (Table 3-8) indicate that they are not the same. This suggests the need for better conceptualization of the meaning of the characteristics rated.

While the assessment center and the SDI both attempt to identify management potential, each may be looking at a different slice of that potential. Hence one method cannot readily be substituted for the other and still yield the same assessment results. In addition, as suggested previously, assessment provides identification of management potential for a diverse group of candidates on a similar set of tasks. Assessment also provides information useful to the candidate for development and to the organization in providing developmental opportunities.

Implications for Management

The results of the present study indicate that management should be aware of the impact an assessment program may have upon individuals and the organization. The analysis suggests several implications.

1. High self-esteem persons who received low assessment ratings are rated high by supervisors (Table 3-1). This group became significantly less satisfied with promotions after assessment (Table 3-4), but they were significantly satisfied with work and supervision (Tables 3-2 and 3-3).

Should this dissatisfaction remain high, it could lead to turnover of those regarded as "good" performers. The pattern of satisfaction may indicate that this group "fits" the organization despite their poor assessment showing. Hence a development program may prove beneficial to both the individual and the organization.

2. Low self-esteem persons who received low assessment ratings are rated low by supervisors (Table 3-1). This group indicated dissatisfaction with work and with supervision (Tables 3-2 and 3-3).

This group of assessees appears low on nearly all measures. Consistently low rankings in comparison with the other three groups of assessees suggest that this particular group should be evaluated. In other words, a "good hard look" may identify some individuals who might be more effective with a job or career change.

3. Low self-esteem persons who received high assessment ratings are rated high by supervisors (Table 3-1). This group indicated dissatisfaction with work (Table 3-2) and satisfaction with supervision (Table 3-3).

This pattern of dissatisfaction and satisfaction may indicate that one group of "good" performers needs further praise and encouragement from supervisors for their work efforts. Such action may enhance the group's overall job satisfaction.

4. High self-esteem persons who received high assessment ratings are rated high by supervisors (Table 3-1). This group indicated dissatisfaction with supervision (Table 3-3) and satisfaction with work (Table 3-2).

This pattern of dissatisfaction and satisfaction may indicate that another group of "good" performers needs a modification in the management style to which they are exposed. This may occur through training for their supervisors, or perhaps through a job change.

5. There was a significant correlation (r = .30, p ≤ .05) between assessor and supervisor identification of "high performers" and "low performers" (Tables 2-9 and 3-1).

The agreement between supervisors and assessors may raise questions concerning the need for an assessment program. However, it should be recognized that benefits over and above the identification of "good" and "poor" performers may accrue from an assessment program. For example, assessment provides information which in some organizations has proved useful in establishing development programs for assessees.

Suggestions for Future Research

It is suggested that this research be replicated. A larger sample size would be desirable. Consideration might also be given to using an assessment center where the emphasis is on the use of assessment results for future progression in the organization. Where such an objective exists, findings may differ from those in the present study, where the assessment center's objective was supposed to be development of the assessee.

Different moderating variables may also yield different findings. For this reason, it is suggested that traits determined by instruments other than the SDI be used as possible moderators of the effects of assessment ratings on assessee job performance and job satisfaction. These moderators could be examined singly and in combination to determine whether any effects different from those in the present research are present.
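One way to screen such candidate moderators, different from the median-split analysis of variance used here but aimed at the same question, is moderated regression: test whether a trait interacts with the assessment rating in predicting the before-to-after change score. The sketch below is hypothetical; the variable names (perf_change, rating, trait) and the data are placeholders, not the study's measures.

    # Hypothetical moderated-regression screen for one candidate moderator trait.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 60
    df = pd.DataFrame({
        "rating": rng.normal(size=n),   # standardized composite assessment rating
        "trait": rng.normal(size=n),    # standardized candidate moderator (e.g., an SDI trait)
    })
    df["perf_change"] = 0.3 * df["rating"] + rng.normal(scale=0.5, size=n)  # after minus before

    model = smf.ols("perf_change ~ rating * trait", data=df).fit()
    print(model.params["rating:trait"], model.pvalues["rating:trait"])  # the moderation term

A significant interaction coefficient would indicate that the trait moderates the effect of the assessment rating, which is the same question the median-split factorial design asks in categorical form.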
The time dimension should also be better controlled. In the present research, measures were obtained at only two points in time. A more comprehensive longitudinal study, with measures at several points in time, may reveal effects of assessment ratings on assessee job performance and job satisfaction.

For any future research, the same basic research design is believed adequate; that is, a 2 x 2 x 2 factorial design appears suitable for analyzing the data. However, a repeated-measures design, with the before-after time dimension as the repeated variable, should be considered, since it may provide a more complete analysis of the data in the sense that both within-subject and between-subject variance would be accounted for.
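In present-day terms, such a mixed (split-plot) analysis could be set up as a linear mixed model with two between-subject factors, one repeated factor, and a random intercept for each subject. The sketch below uses simulated data and is meant only to illustrate the suggested design; it is not the 2 x 2 x 2 analysis of variance with Scheffé comparisons actually reported in this study.

    # Sketch of a mixed (split-plot) analysis of the 2 x 2 x 2 design with time repeated.
    # All data are simulated; the factor labels mirror the study's design, nothing more.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for s in range(60):
        rating = "high" if s % 2 == 0 else "low"         # above/below median assessment rating
        esteem = "high" if (s // 2) % 2 == 0 else "low"  # above/below median self-esteem
        base = rng.normal(5, 1)                          # stable level for this subject
        for time in ("before", "after"):
            rows.append({"subject": s, "rating": rating, "esteem": esteem,
                         "time": time, "performance": base + rng.normal(0, 0.5)})
    df = pd.DataFrame(rows)

    # The random intercept per subject absorbs stable between-subject differences,
    # so the time effect is tested against within-subject variation.
    model = smf.mixedlm("performance ~ rating * esteem * time", data=df,
                        groups=df["subject"]).fit()
    print(model.summary())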
Conclusion

The reported research has attempted to determine the extent to which participation in an assessment center affects assessee job performance and job satisfaction. The findings indicated that, with one exception, no significant changes occurred in assessee job performance and job satisfaction after assessment. The exception was that high self-esteem assessees who received low assessment ratings (the HSE- group) were significantly less satisfied with promotions after assessment.

Accordingly, the management concern mentioned in the second chapter for the assessee who does poorly at an assessment center appears to be an appropriate concern for the HSE- group of assessees, at least within the time dimension (six months after assessment) covered by the research and as measured by the instruments used.

If the research findings are replicated in other organizations, supervisors who have pondered the impact of assessment ratings on job performance and job satisfaction may be reassured that the only area of concern seems to be promotion satisfaction. Other than that, there is no apparent impact six months after assessment.

LIST OF REFERENCES

Albrecht, Paul A.; Glaser, Edward M.; and Marks, John. "Validation of a Multiple-Assessment Procedure for Managerial Personnel." Journal of Applied Psychology, 48 (December, 1964), 351-360.

Bass, Bernard M. "The Leaderless Group Discussion." Psychological Bulletin, 51 (September, 1954), 465-492.

Bentz, V. Jon. "The Sears Experience in the Investigation, Description, and Prediction of Executive Behavior." in Predicting Managerial Success. Edited by John A. Myers, Jr. Ann Arbor, Michigan: Foundation for Research on Human Behavior, 1968, pp. 59-152.

Bray, Douglas W. Issues in the Study of Talent. New York: Columbia University Press, 1954.

________. "The Management Progress Study." American Psychologist, 19 (June, 1964), 419-420.

________. "The Assessment Center: Opportunities for Women." Personnel, 48 (September-October, 1971), 30-34.

________ and Campbell, Richard J. "Selection of Salesmen by Means of an Assessment Center." Journal of Applied Psychology, 52 (February, 1968), 36-41.

________ and Grant, Donald L. "The Assessment Center in the Measurement of Potential for Business Management." Psychological Monographs, 80 (1966), Whole No. 625.

________ and Moses, Joseph L. "Personnel Selection." Annual Review of Psychology, 23 (1972), 545-576.

Byham, William C. "Assessment Centers for Spotting Future Managers." Harvard Business Review, July-August, 1970, pp. 150-160, 162-168, 171-172.

________. "The Assessment Center as an Aid in Management Development." Training and Development Journal, 25 (December, 1971), 10-22.

________ and Pentecost, Regina. "The Assessment Center: Identifying Tomorrow's Managers." Personnel, 47 (September-October, 1970), 17-28.

Campbell, John P.; Dunnette, Marvin D.; Lawler, Edward E., III; and Weick, Karl E., Jr. Managerial Behavior, Performance, and Effectiveness. New York: McGraw-Hill, 1970.

Campbell, Richard J. and Bray, Douglas W. "Assessment Centers: An Aid in Management Selection." Personnel Administration, 30 (March-April, 1967), 6-13.

Carleton, Frederick O. "Relationship Between Follow-Up Evaluations and Information Developed in a Management Assessment Center." Proceedings of the American Psychological Association, 78 (1970), 565-566.

Cohen, Arthur R. "Some Implications of Self-Esteem for Social Influence." in The Self in Social Interaction. Edited by Chad Gordon and Kenneth J. Gergen. New York: Wiley, 1968.

Cronbach, Lee J. Essentials of Psychological Testing. New York: Harper & Row, 1960.

DiCostanzo, Frank and Andretta, Thomas. "The Supervisory Assessment Center in the Internal Revenue Service." Training and Development Journal, 24 (September, 1970), 12-15.

Dunnette, Marvin D. "Predictors of Executive Success." in Measuring Executive Effectiveness. Edited by Frederic R. Wickert and Dalton E. McFarland. New York: Appleton-Century-Crofts, 1967, pp. 7-48.

________. "The Assessment of Managerial Talent." in Advances in Psychological Assessment, Vol. 2. Edited by Paul McReynolds. Palo Alto, California: Science & Behavior Books, 1971, pp. 79-108.

Eran, Mordechai. "Relationships Between Self-Perceived Personality Traits and Job Attitudes in Middle Management." Journal of Applied Psychology, 50 (October, 1966), 424-430.

Eysenck, Hans J. Uses and Abuses of Psychology. Baltimore: Penguin Books, 1953.

Ferguson, George A. Statistical Analysis in Psychology and Education. New York: McGraw-Hill, 1966.

Finkle, Robert B. and Jones, William S. Assessing Corporate Talent: A Key to Managerial Manpower Planning. New York: Wiley, 1970.

Finley, Robert M., Jr. "Evaluation of Behavior Predictions From Projective Tests Given in a Management Assessment Center." Proceedings of the American Psychological Association, 78 (1970), 567-568.

Garrett, Henry E. Statistics in Psychology and Education. New York: McKay, 1966.

Ghiselli, Edwin E. "Self-Description Inventory." Berkeley: University of California, 1955 (Mimeographed).

________. "Managerial Talent." American Psychologist, 18 (October, 1963), 631-642.

________. Explorations in Managerial Talent. Pacific Palisades, California: Goodyear Publishing, 1971.

Gordon, Chad and Gergen, Kenneth J. (Editors). The Self in Social Interaction. New York: Wiley, 1968.

Grant, Donald L. and Bray, Douglas W. "Contributions of the Interview to Assessment of Managerial Potential." Journal of Applied Psychology, 53 (February, 1969), 24-34.

________; Katkovsky, Walter; and Bray, Douglas W. "Contributions of Projective Techniques to Assessment of Management Potential." Journal of Applied Psychology, 51 (June, 1967), 226-232.

Guilford, J. P. Psychometric Methods. New York: McGraw-Hill, 1954.

Hardesty, D. L. and Jones, W. S. "Characteristics of Judged High Potential Management Personnel -- The Operations of an Industrial Center." Personnel Psychology, 21 (Spring, 1968), 85-98.

Hinrichs, J. R. "Comparison of 'Real Life' Assessments of Management Potential with Situational Exercises, Paper-and-Pencil Ability Tests, and Personality Inventories." Journal of Applied Psychology, 53 (October, 1969), 425-432.

Jaffee, Cabot L. "A Tridimensional Approach to Management Selection." Personnel Journal, 46 (July-August, 1967), 453-455.

________. Effective Management Selection: The Analysis of Behavior by Simulation Techniques. Reading, Massachusetts: Addison-Wesley, 1971.

________; Bender, Joe; and Lynn, Calvert O. "The Assessment Center Technique: A Validation Study." Management of Personnel Quarterly, 9 (Fall, 1970), 9-14.

Kay, Emanuel; French, John R. P., Jr.; and Meyer, Herbert H. "A Study of Threat and Participation in an Industrial Performance Appraisal Program." General Electric Company, Behavioral Research Service, May, 1962.

Kelly, E. Lowell. Assessment of Human Characteristics. Belmont, California: Brooks/Cole, 1967.

________ and Fiske, Donald W. "The Prediction of Success in the V.A. Training Program in Clinical Psychology." American Psychologist, 5 (1950), 395-406.

________. The Prediction of Performance in Clinical Psychology. Ann Arbor: University of Michigan Press, 1951.

Korman, Abraham K. "Self-Esteem Variable in Vocational Choice." Journal of Applied Psychology, 50 (December, 1966), 479-486.

________. "The Prediction of Managerial Performance: A Review." Personnel Psychology, 21 (Autumn, 1968a), 295-322.

________. "Task Success, Task Popularity and Self-Esteem as Influences on Task Liking." Journal of Applied Psychology, 52 (December, 1968b), 484-490.

________. "Toward a Hypothesis of Work Behavior." Journal of Applied Psychology, 54 (February, 1970), 31-41.

Kraut, Allen L. "A Hard Look at Management Assessment Centers and Their Future." Personnel Journal, 51.

________ and Scott, Grant J. "Validity of an Operational Management Assessment Program." Journal of Applied Psychology, 56 (April, 1972), 124-129.

Laurent, Harry. "Research on the Identification of Management Potential." in Predicting Managerial Success. Edited by John A. Myers, Jr. Ann Arbor, Michigan: Foundation for Research on Human Behavior, 1968, pp. 1-34.

Lawshe, C. H. and Balma, Michael J. Principles of Personnel Testing. New York: McGraw-Hill, 1966.

Lindzey, Gardner (Editor). Handbook of Social Psychology. Cambridge, Massachusetts: Addison-Wesley, 1954.

Lopez, Felix M., Jr. Evaluating Executive Decision Making: The In-Basket Technique. AMA Research Study No. 75. New York: American Management Association, 1966.

McConnell, John H. "The Assessment Center in the Smaller Company." Personnel, 46 (March-April, 1969), 40-46.

________. "The Assessment Center: A Flexible Program for Supervisors." Personnel, 48 (September-October, 1971), 35-40.

Meyer, Herbert H. "An Evaluation of a Supervisory Selection Program." Personnel Psychology, 9 (Winter, 1956), 499-513.

________. "The Validity of the In-Basket Test as a Measure of Managerial Performance." Personnel Psychology, 23 (Autumn, 1970), 297-307.

Morris, Ben S. "Officer Selection in the British Army, 1942-1945." Occupational Psychology, 23 (October, 1949), 219-234.

Murray, Henry A. Explorations in Personality. New York: Oxford University Press, 1938.

Myers, John A., Jr. (Editor). Predicting Managerial Success. Ann Arbor, Michigan: Foundation for Research on Human Behavior, 1968.

Nunnally, Jum C. Psychometric Theory. New York: McGraw-Hill, 1967.

OSS Assessment Staff. Assessment of Men. New York: Rinehart, 1948.

"PAR Effects Study." Pacific Telephone and Telegraph Company, 1968.

"Personnel Assessment Program." Detroit, Michigan: Michigan Bell Telephone Company, undated.

Randle, C. Wilson. "How to Identify Promotable Executives." Harvard Business Review, May-June, 1956, pp. 122-134.
"The Assessment Center: Breakthrough in Management Appraisal and DevelOpment." Personnel Journal, 51 (April, 1972), 255-261. Smith, Patricia Cain; Kendall, Lorne M.: and Hulin, Charles L. The Measurementtof Satisfaction in WOrk and Retirement. Chicago: Rand McNally, 1969. Taft, Ronald. "Multiple Methods of Personality Assess- ment." Psychological Bulletin, 56 (September, 1959), 333-352. Thomson, Harvey A. "Comparison of Predictor Criterion Judgments of Managerial Performance Using the Multitrait-Multimethod Approach." Journal of Applied Psychology, 54 (December, 1970), 496-502. Wickert, Frederic R. and McFarland, Dalton E. (Editors). Measuring Executive Effectiveness. New York: Appleton-Century-Crofts, 1967. Wikstrom, Walter S. "Assessing Managerial Talent." The Conference Board Record, 4 (March, 1967), 9-44 0 Wollowick, Herbert B. and McNamara, W. J. "Relationship of the Components of an Assessment Center to Manage- ment Success." Journal of Applied Psychology, 53 (October, 1969), 348-352. .8 fic". APPENDICES I. 4 . 4 N .4- IIIICI..|IIJ 1‘1. m .0 . ..Ddlr APPENDIX A Ghiselli Self-Description Inventory 121 msoumcmw umm oE ucmw mucH msomwmusoo an S SH HHH SH S umoco: ucmmmmHa manHuduwucm pmcHaumumc m w mmuo mHSuNCI cow 2 quu w m comm c u: H m Hm p 6 mm H m mH Hg 6 c m mHanm o>HumHoouddm oHumHHmmu HSHOH so we a mean mo a unumm u use comm : u Hu on 6 mm Hx H; H «H p p H o oHuwnumdamm memHuom ucmHonwm wcHum>mmuma m mudm m moo cow: % 0 mac u Em HS S mm H H Hm HH H mH H HS S HmUHwOH pmmHOQ chx oHumwuocm u so 0 c Sun m oumum o a How u x H mm p c H H m cm nHH p «H H w n s mDOHucoHomcoo Homunwsocu pmuuHaldpmnm mecmHum pmHHouucooleom ano uuon m>Hucw>cH m>HumcmeaH mm muwoch mH pouomummc: HH m>HumHodooo m on H> o So u u : wousomwu woouo u p HH. H om HH 5 wH H m 0H s n N pmeHcch m>Hmwmuwoua Homcde wchcmumumcco m m on own mo uumud ummuom Hp HH mm x m NH H H m Hp H mHnHchQmmu wHSCOHuommwm mDOHuumoch oHnmdmo .som mmbHuummp umoE xchu so» pucB moo ocu x0050 onwn mcuo3 mo mHHma mnu mo comm cH .muHmd wcHBOHHow mnu mo comm cH ppoB mco mchmn :x: cm mumHm .cmo :0» mm mHummcon com mHouwH Isuom mm memuso> mnHHomop ou mug om .mum3mcm mocha no uLwHH o: mum ouwcH .wHomH:o% mnHHommc 90% so: mom cu pom mmwmmom 30> o>mHHmb so» muHmHu wnu mo muouoHd m chuno ou mH >H0uco>cH chu mo uncouod och VMOBzm>ZH ZOHHmHmUmMQIMHmm 122 ucwHumdsH mHnHchQSwHuH mHnmuHoxo %Hm Howuchmu prmccmdmpco cmeQmuso pmemHummch o>Hmmwuwwm Hmocho wchqulemm pmuummnupumc Hmuqu SSSHSSHSS oHumHEHmmmd pwumCOHcHdo ‘mht.’ 11. . blind so mo No He oo mm mm mm .30» m>HmmHEnsw om acmumHOucH we pHou ammSH mao>umc chonpsum % pm wow m and % 005 H c Hp mm 5 H8 no u pochHnlmHuumu mHnmmwcwco pouommwm cmumucmolmHmm an ucopcodwp be wamemmlemm opsu hammuv thcwHumco m mm o mu up maom muumsv c HwH mm Ha H H ms H xmma mmcou muaumaaH mo c m dEoo % coco wcouum mm H H H mm a as p ; oHnHuomuuch m>Hmcmwop HmaOHuoEm w> mm>w woo o>Hu ucmwouum H Hm H . w ms unopCOQmmp mHnmumcs AmHo: mo um uowm zwcHum mmm omu H H H on . Ne Hx UHumnumdm soHHmnm maoHanamca mH oo o How a mum r H m as HH w H Hq H mmmHmHmo pwuHmucoo %nm mwbHHommc ummmH xcHLu 30% who: moo mnu sumac 30H0n mpuo3 mo HHmd 0:» mo comm oq mm mm mm on mm «m mm CH APPENDIX B Job Descriptive Inventory (JDI) 123 unmaanHdaooom Ho wwcwm mm>HU ucmmmem mmmecm uom oHdaHm 1|III1 pouomdmmm wcHumuuwsum IIIII1 m>Hummuu umow know so .IIIII coco .IIIII onwcmHHmno IIIII1 wcHuom H3818: Ill wcHaHSHSSS waomwuHH IIIIII wcHusom Hummus IIIII. 
Section I. Work. For each item below: (a) if the item describes a particular aspect of your work, place a "Y" beside the item; (b) if the item does not describe a particular aspect of your work, place an "N" beside the item; (c) if you cannot decide about the item, place a "?" beside the item.

[checklist of words and phrases describing the work itself, e.g., "Fascinating," "Routine," "Satisfying," "Gives sense of accomplishment"]

Section II. Supervision. For each item below: (a) if the item describes an aspect of your supervisor, place a "Y" beside the item; (b) if it does not, place an "N"; (c) if you cannot decide, place a "?".

[checklist of words and phrases describing the supervisor, e.g., "Asks my advice," "Praises good work," "Tells me where I stand"]

Section III. Promotions. For each item below: (a) if the item describes a particular aspect of your promotion system, place a "Y" beside the item; (b) if it does not, place an "N"; (c) if you cannot decide, place a "?".

[checklist of phrases describing promotion opportunities, e.g., "Good opportunity for advancement," "Promotion on ability," "Unfair promotion policy"]

Section IV. Co-workers. For each item below: (a) if the item describes a particular aspect of your co-workers, place a "Y" beside the item; (b) if it does not, place an "N"; (c) if you cannot decide, place a "?".

[checklist of words describing co-workers, e.g., "Stimulating," "Ambitious," "Talk too much"]

APPENDIX C

Performance Self-Rating Form

SELF-RATING FORM FOR: (Enter your name here)

INSTRUCTIONS

This form serves two purposes. First, it will introduce you to the skills which are evaluated in the Assessment Center. You will rate yourself on these skills with regard to your own performance back on your job. As you read through the skills and their definitions, you should be aware that they have been identified by Division management as critical to managerial success.

The second function of this rating form is purely research. The Division, in connection with the Personnel and Organization Staff, is conducting an evaluation study of the Center to determine how much of the information collected at the Center could be obtained from people who merely observe you on your job. That obviously includes yourself as an observer. This week your supervisor and a few of your own co-workers will also rate you on a form almost identical to this one. When all the ratings are collected on approximately 60 Center participants this year, the different sources of information will be compared.

NONE OF THE RATINGS GIVEN BY YOUR SUPERVISOR, YOUR CO-WORKERS, OR YOURSELF WILL BE A PART OF YOUR PERMANENT RECORD. THEY WILL ONLY ASSIST US IN EVALUATING THE CENTER.

Follow these steps in rating yourself:
1. Read the definition provided for each skill.
2. Read the definition a second time and think about the meaning we are trying to convey.
3. Think of your own job requirements and decide whether you have had enough opportunities to test your performance on the particular skill in order to give yourself a rating.
4. If you feel that your job has not given you ample opportunities to test and measure the skill, place a check in the box to indicate this.
5. If you do have a good idea of your job performance on the skill, rate yourself by circling a number along the scale.
6. Briefly give an example of your typical behavior which led you to give the particular rating. Be specific.

FOR RESEARCH PURPOSES ONLY

Skill: Leadership
o Are you able to get people to follow you in the solution of a task without creating hostility?
o Do people look to you for direction?
o Do people respond to you as a leader, not just a boss?
Rating: [ ] My job has not given me the opportunity to test and measure this ability.
(If you have not checked the box above, circle your rating of yourself.)
1  2  3  4  5  6  7  8  9
LOW: Although I have had the opportunity, I have shown none of this skill in my work.
BELOW AVERAGE: I have shown little of this skill in my work.
SATISFACTORY: I have shown an adequate amount of this skill in my work.
ABOVE AVERAGE: I have shown an above-average amount of this skill in my work.
EXCEPTIONAL: I have consistently shown as much of this skill as could be expected of a person in my job.
Example:

Skill: Decision Making
o Are you able to seek out and evaluate pertinent facts, and make sound judgments?
o Are you able to put elements of your work into meaningful priorities?
o Are you capable of discriminating between relevant and irrelevant facts?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Decisiveness
o Do you recognize when a decision is necessary immediately and respond on the basis of the information available, rather than putting off the decision?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Organizational Ability
o Do you show the ability to plan and organize the work of others?
o Do you delegate, when given the opportunity, and establish administrative controls?
o Do you set up schedules so that deadlines can be met?
o Do you take into account the long-range effects of your plans?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Initiative
o Do you actively attempt to influence people or events, or do you passively go along with the group?
o Are you a self-starter? Are you the one to get the ball rolling?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Response to Changing Conditions
o When faced with changing conditions or new information, are you able to generate alternative actions or directions?
o Do you become flustered when a sudden change occurs, or are you able to deal with the situation successfully?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Discernment
o Are you able to accurately perceive the strengths and weaknesses of others with whom you must interact?
o Are you able to size up the effectiveness of co-workers?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Oral Communications
o Can you clearly and effectively present your point of view in a meeting situation?
o Do people understand what you are talking about when engaged in a conversation with you?
o Do you sound confident and well organized?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Written Communications
o Can you present written information in a logical and understandable form?
o Do you highlight major points?
o Is your grammar adequate?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

Skill: Delegation
o Do you delegate appropriately to others when given the opportunity?
Rating: [ ] My job has not given me the opportunity to test and measure this ability. (1-9 scale as above.) Example:

APPENDIX D

Supervisor/Co-worker Performance Rating Form (Before Assessment)

This form is to be used to rate the performance of ______________________.

Your relationship to this individual (check one): ___ His supervisor   ___ His co-worker

Have you ever attended the Assessment Center? ___ No   ___ Yes, as an observer   ___ Yes, as a participant

INSTRUCTIONS

This rating form is part of a research study to assist the Division in evaluating the Assessment Center. The ratings which you are asked to give the individual named above can neither help nor hurt him; your ratings will not become a part of his permanent record.

On the following pages you will find a number of job skills which are important for management personnel.
1. Read the definition provided for each skill.
2. Read the definition a second time and think about the meaning we are trying to convey.
3. Think of your experiences with the individual being rated and the requirements of his job, and decide whether you have seen enough of his performance to enable you to give him a rating.
4. If you feel that you have not observed the individual with regard to a specific skill, place a check in the box to indicate this.
5. If you have observed him sufficiently, give him a rating by circling a number along the scale.
6. Briefly give an example of his typical behavior which led you to give him the particular rating. Be specific.

FOR RESEARCH PURPOSES ONLY

Skill: Leadership
o Is the individual able to get people to follow him in the solution of a task without creating hostility?
o Do people look to him for direction?
o Do people respond to him as a leader, not just a boss?
Rating: [ ] I have not observed him enough to give a rating on this skill.
(If you have not checked the box above, circle your rating of this individual.)
1  2  3  4  5  6  7  8  9
LOW: Although he has had the opportunity, he has shown none of this skill in his work.
BELOW AVERAGE: He has shown little of this skill in his work.
SATISFACTORY: He has shown an adequate amount of this skill in his work.
ABOVE AVERAGE: He has shown an above-average amount of this skill in his work.
EXCEPTIONAL: He has consistently shown as much of this skill as could be expected of a person in his job.
Example:

Skill: Decision Making
o Is the individual able to seek out and evaluate pertinent facts, and make sound judgments?
o Is he able to put elements of his work into meaningful priorities?
o Is he capable of discriminating between relevant and irrelevant facts?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Decisiveness
o Does he recognize when a decision is necessary immediately and respond on the basis of the information available, rather than putting off the decision?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Organizational Ability
o Does he show the ability to plan and organize the work of others?
o Does he delegate, when given the opportunity, and establish administrative controls?
o Does he set up schedules so that deadlines can be met?
o Does he take into account the long-range effects of his plans?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Initiative
o Does he actively attempt to influence people or events, or does he passively go along with the group?
o Is he a self-starter? Is he the one to get the ball rolling?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Response to Changing Conditions
o When faced with changing conditions or new information, is he able to generate alternative actions or directions?
o Does he become flustered when a sudden change occurs, or is he able to deal with the situation successfully?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Discernment
o Is he able to accurately perceive the strengths and weaknesses of others with whom he must interact?
o Is he able to size up the effectiveness of co-workers?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Oral Communications
o Can he clearly and effectively present his point of view in a meeting situation?
o Do people understand what he is talking about when engaged in a conversation with him?
o Does he sound confident and well organized?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Written Communications
o Can he present written information in a logical and understandable form?
o Does he highlight major points?
o Is his grammar adequate?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

Skill: Delegation
o Does he delegate appropriately to others when given the opportunity?
Rating: [ ] I have not observed him enough to give a rating on this skill. (1-9 scale as above.) Example:

APPENDIX E

Supervisor Performance Rating Form (After Assessment)

This form is to be used to rate the job performance of ______________________.

This individual ___ does   ___ does not   report to me now.

Please circle the appropriate number to indicate your rating on each skill. Check the box if you feel that you cannot give a rating.

Have you seen any overall change in this subordinate's job performance since he attended the Assessment Center?
Great deterioration (1)   Slight deterioration (2)   No change (3)   Some improvement (4)   Great improvement (5)

Skill: Leadership
o Is the individual able to get people to follow him in the solution of a task without creating hostility?
o Do people look to him for direction?
o Do people respond to him as a leader, not just a boss?
Rating: [ ] I have not observed him enough to give a rating on this skill.
(If you have not checked the box above, circle your rating of this individual.)
1  2  3  4  5  6  7  8  9
LOW: Although he has had the opportunity, he has shown none of this skill in his work.
BELOW AVERAGE: He has shown little of this skill in his work.
SATISFACTORY: He has shown an adequate amount of this skill in his work.
ABOVE AVERAGE: He has shown an above-average amount of this skill in his work.
EXCEPTIONAL: He has consistently shown as much of this skill as could be expected of a person in his job.

Skill: Decision Making
o Is the individual able to seek out and evaluate pertinent facts, and make sound judgments?
o Is he able to put elements of his work into meaningful priorities?
o Is he capable of discriminating between relevant and irrelevant facts?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Decisiveness
o Does he recognize when a decision is necessary immediately and respond on the basis of the information available, rather than putting off the decision?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Organizational Ability
o Does he show the ability to plan and organize the work of others?
o Does he delegate, when given the opportunity, and establish administrative controls?
o Does he set up schedules so that deadlines can be met?
o Does he take into account the long-range effects of his plans?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Delegation
o Does he delegate appropriately to others when given the opportunity?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Initiative
o Does he actively attempt to influence people or events, or does he passively go along with the group?
o Is he a self-starter? Is he the one to get the ball rolling?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Response to Changing Conditions
o When faced with changing conditions or new information, is he able to generate alternative actions or directions?
o Does he become flustered when a sudden change occurs, or is he able to deal with the situation successfully?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Discernment
o Is he able to accurately perceive the strengths and weaknesses of others with whom he must interact?
o Is he able to size up the effectiveness of co-workers?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Oral Communications
o Can he clearly and effectively present his point of view in a meeting situation?
o Do people understand what he is talking about when engaged in a conversation with him?
o Does he sound confident and well organized?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9

Skill: Written Communications
o Can he present written information in a logical and understandable form?
o Does he highlight major points?
o Is his grammar adequate?
Rating: [ ] I have not observed him enough to give a rating on this skill.   1  2  3  4  5  6  7  8  9