A SURVEY OF THE PROCEDURES FOR EVALUATING THE PERFORMANCE OF SECONDARY PUBLIC SCHOOL PRINCIPALS IN MICHIGAN

By Robert Mayfield Towns

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Administration and Higher Education

1974

ABSTRACT

A SURVEY OF THE PROCEDURES FOR EVALUATING THE PERFORMANCE OF SECONDARY PUBLIC SCHOOL PRINCIPALS IN MICHIGAN

By Robert Mayfield Towns

The Problem

This study was designed to determine the status of performance evaluation of secondary public school principals in Michigan as perceived by the principals; to obtain criticisms, suggestions, and recommendations for the improvement of evaluation techniques; to evaluate these data and use the results to suggest implications for performance evaluation improvement; and to gather additional data for later analysis.

The Method

A research instrument was developed to collect data from a stratified random sample of secondary public school principals in Michigan. Each public high school was ordered by Michigan Education Association geographical region and by Michigan Athletic Enrollment Classification. A stratified random sample consisting of 50 per cent of the public secondary schools in each stratum was then drawn.

Completed instruments were returned by 254 principals. This number represented approximately 87 per cent of the sample. Responses to the questionnaires were coded for computer use, and the Control Data Corporation 6500 computer was used to tabulate and analyze the data. Tables of distribution recording the frequency, percentage, and standard deviation were constructed for several items in the instrument. Chi-square tables of distribution and the one-way analysis of variance statistical technique were used for data comparisons. The .05 alpha level was chosen as the criterion for determining statistical significance.
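The two procedures named in the Method, a 50 per cent draw within each stratum and chi-square comparisons judged at the .05 alpha level, can be illustrated with a brief sketch. The sketch below is an editorial illustration in modern Python (the original analysis ran on a Control Data Corporation 6500, not on any of this); the strata, school counts, and yes/no tallies are hypothetical stand-ins chosen only to echo the class-by-class percentages reported in the Findings, not the study's actual data.

```python
import random
from scipy.stats import chi2_contingency

# Hypothetical sampling frame: number of high schools per stratum,
# where a stratum is an (MEA region, athletic class) pair. The real
# frame came from the 1972-73 Michigan Education Directory (N = 583).
frame = {("Region 1", "A"): 40, ("Region 1", "B"): 36,
         ("Region 2", "C"): 28, ("Region 2", "D"): 22}

# Stratified draw: 50 per cent of the schools within each stratum.
sample = {stratum: random.sample(range(count), k=count // 2)
          for stratum, count in frame.items()}

# Hypothetical yes/no tallies of formal-evaluation use by class,
# chosen so the row percentages echo the Findings (71, 38, 31, 7).
observed = [[50, 20],   # Class A: formal procedures yes, no
            [27, 44],   # Class B
            [17, 38],   # Class C
            [2, 26]]    # Class D

chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("significant at the .05 level" if p < 0.05 else "not significant")
```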
Findings of the Study

Thirty-eight per cent of the respondent schools indicated the use of formal performance evaluation procedures. This included 71 per cent of the Class A school respondents, 38 per cent of the Class B school respondents, 31 per cent of the Class C school respondents, and 7 per cent of the Class D school respondents.

Fifty-six per cent of the metro county school respondents and 23 per cent of the nonmetro county school respondents reported the use of formal performance evaluation procedures.

The prescribed rating scale method of formal performance evaluation was reported used by 42 per cent of the respondents. Thirty-eight per cent of the respondents indicated the use of the performance objective method of evaluation.

A significant relationship at the .05 alpha level was found between school athletic enrollment classification and the principals' perception of whether formal performance evaluations help to improve their administrative efficiency. Seventy per cent of the 96 respondents indicated that evaluations helped to improve their administrative efficiency.

Ninety-six per cent of the 96 principals who indicated the use of formal performance evaluations reported that they favor formal evaluations of secondary public school principals. This included 100 per cent of the Class C-D school respondents. A significant relationship at the .05 alpha level was found between principals' perceptions of whether formal evaluations help improve administrative efficiency and principals' support of formal evaluations. One hundred per cent of the principals who support formal evaluations also indicated that evaluations helped them improve their efficiency as administrators.

Suggestions offered by respondents dealt with such concerns as bargaining units for administrators, incorporating due process in the use of evaluations, statewide use of evaluations to improve administrative performance, and a close working relationship with the board of education to allow for formal evaluations in informal settings.

Conclusions

1. Principals who have experienced formal performance evaluations strongly support the concept of administrative evaluations.

2. Principals who have experienced formal performance evaluations consider evaluations to be helpful in their administrative efficiency.

3. The prescribed rating scale method of evaluation was used slightly more often than the performance objective method of evaluation.

4. Principals indicated a high level of interest in formal administrative performance evaluation, as evidenced by the percentage of respondents and the many requests for the results of the study.

Recommendations

1. Local school districts should give careful consideration to the establishment of formal performance evaluation procedures for administrators.

2. The evaluation philosophy and technique of the smaller schools should be studied in order to identify those characteristics which contribute to the strong support by the principals of these schools.

3. The evaluatee should be directly involved and have considerable input in the evaluation technique.

4. Formal performance evaluation techniques should be designed and developed specifically for the purpose of promoting performance effectiveness.

5. Schools should seriously consider the performance objective method approach to the formal performance evaluation of school administrators.

DEDICATION

This dissertation is dedicated to my parents, Walker and Ruth Towns, for their abiding love, patience, support, and understanding. Without their influence and example this study would not have been possible.

ACKNOWLEDGMENTS

The writer wishes to express his sincere appreciation to those who have contributed to the development of this study:

To Dr. Van Johnson, Committee Chairman, College of Education, for his sincere counsel, guidance, encouragement, and personal interest during the development and completion of this study.

To the other members of the Guidance Committee who provided assistance: Dr. Max Raines, College of Education; Dr. Robert Nolan, Highway Traffic Safety Center; and Dr. James B. McKee, College of Social Science.

To Miss Linda Glendening of the Office of Research Consultation, College of Education, for her excellent assistance in the statistical analysis of the data.

To Mr. Paul Henry for his assistance in reading the rough draft.

To my wife, Joyce, and children, Karen, Kevin, and Kent, a special acknowledgment is extended for their patience, understanding, sacrifice, and cooperation during the many months of intense work involved in this dissertation.
TABLE OF CONTENTS

Chapter

I. NATURE OF THE STUDY
     The Problem
          Introduction
     Need for the Study
     Purpose of the Study
     Questions for Study
     Limitations of the Study
     Assumptions Underlying the Study
     Definition of Terms
     Organization of the Study

II. REVIEW OF SELECTED LITERATURE
     Introduction
     Survey Method of Research
     Questionnaire Development
     Questionnaire Returns
     Purposes of Evaluation
     Review of Related Studies
     Summary

III. METHODS OF PROCEDURE
     Introduction
     Selection of the Sample
          Description of the Sampling Technique Used
          Sampling Distribution by Area in the State
     The Questionnaire Approach
          Development of the Questionnaire
          Pilot Administration
          Questionnaire Format
     Questions for Study
     Data Collection Procedures
          Administration of the Questionnaire
          Treatment of the Data
     Summary

IV. PRESENTATION AND ANALYSIS OF DATA
     Introduction
     Purpose of the Study
     Preliminary Information Statements
     Questions for Study
          Question One
          Question Two
          Question Three
          Question Four
          Question Five
          Question Six
          Question Seven
          Question Eight
          Question Nine
          Question Ten
          Question Eleven
     Responses to Write-in Statements
          Purposes for Which Principals Are Evaluated
          Purposes for Which Evaluations Ideally Should Be Used
          Comments and Remarks
     Summary
          Summary of Findings

V. SUMMARY, CONCLUSIONS, RECOMMENDATIONS, AND IMPLICATIONS FOR FURTHER RESEARCH
     Purpose of the Study
     The Sample
     Questions for Study
     Design and Procedures
     Results of the Analysis
     Conclusions
     Recommendations
     Implications for Further Research

APPENDICES
     A. Michigan Education Association Geographic Regions
     B. Map of Metro/Nonmetro Counties
     C. Sampling Distribution Map of Public High Schools Selected for the Study
     D. Principal's Performance Evaluation Questionnaire
     E. Cover Letter
     F. Follow-Up Letter
     G. Sample of Public High Schools

SELECTED BIBLIOGRAPHY

LIST OF TABLES

Table

1.1. Michigan athletic enrollment classification
3.1. Metro counties in the state of Michigan
3.2. Nonmetro counties in the state of Michigan
3.3. Michigan Education Association geographical regions listing population and sample percentages
3.4. Michigan athletic enrollment classification listing population and sample percentages
3.5. Response distribution by athletic enrollment classification
3.6. Response distribution by Michigan Education Association Region
3.7. Response distribution by metro/nonmetro county
4.1. Distribution of schools with formal evaluation procedures by athletic enrollment classification
4.2. Distribution of schools with formal evaluation procedures by geographic area
4.3. Distribution of schools with formal evaluation procedures by metro/nonmetro county status
4.4. Distribution of the methods of evaluation and the school enrollment classification
4.5. Distribution of the principals' perceptions of formal evaluations and the school enrollment classification
4.6. Comparative data on the number of years evaluations have been used according to the school enrollment classification
4.7. One-way analysis of variance of school enrollment classification and the number of years evaluations have been practiced
4.8. Comparative data on the frequency of evaluations per year according to the school enrollment classification
4.9. One-way analysis of variance of school enrollment classification and the frequency of evaluations per year
4.10. Comparison of evaluation purposes as experienced by principals and purposes for which principals feel evaluations ideally should be used
4.11. Comparative data on the grievance procedures accessible to principals and the use of evaluations in establishing evidence where dismissal from service is an issue
4.12. One-way analysis of variance of grievance procedures accessible to principals and the use of evaluations in the dismissal process
4.13. Comparison of those who evaluate principals and principals' support of formal evaluations
4.14. Comparison of methods of evaluation and principals' support of formal evaluations
4.15. Comparison of those who evaluate and improvement in administrative efficiency as perceived by the principals
4.16. Comparison of purposes for which principals are evaluated and improvement in administrative efficiency as perceived by the principals
4.17. Distribution of schools using the prescribed rating scale method of evaluation and schools using the performance objective method of evaluation by athletic enrollment classification
4.18. Distribution of schools using the prescribed rating scale method of evaluation and schools using the performance objective method of evaluation by geographic area
4.19. Distribution of schools using the prescribed rating scale method of evaluation and schools using the performance objective method of evaluation by county status
4.20. Comparative data on the comprehensive evaluation technique scores according to the school athletic enrollment classification
4.21. One-way analysis of variance of school enrollment classification and the comprehensive evaluation technique scores
4.22. Comparative data on the comprehensive evaluation technique scores and principals' perceptions of whether formal evaluations help improve administrative efficiency
4.23. One-way analysis of variance of principals' perceptions of whether formal evaluations help to improve administrative efficiency and comprehensive evaluation technique scores
4.24. Comparative data on the comprehensive evaluation technique scores and principals' support of formal evaluations
4.25. One-way analysis of variance of principals' support of formal evaluations and comprehensive evaluation technique scores

CHAPTER I

NATURE OF THE STUDY

The Problem

Introduction

Public schools in the seventies are being confronted with the accountability syndrome. Client reaction to school systems has been expressed by the term accountability. While the word "accountability" has several interpretations, one of its implications is that schools today are not functioning as outstandingly effective delivery systems in terms of their major purposes. Clients are demanding better schools, and school officials are seeking better appraisal systems to assist them in the process of motivating administrative personnel to consistently higher levels of performance. As Nicholson observes, the connotations of "accountability in education" have been broadened to include evaluation of administrative performance.¹

¹Everett W. Nicholson, "The Performance of Principals in the Accountability Syndrome," The Bulletin of the National Association of Secondary School Principals, LVI (May, 1972), 94.

The purpose of this exploratory study is to examine the status of performance evaluation of secondary public school principals in Michigan as perceived by the principals and to provide preliminary criteria for developing improved techniques of evaluation based on an analysis of the responses given to a questionnaire.

Nicholson,¹ Redfern,² Barrilleaux,³ Castetter and Heisler,⁴ Niehaus,⁵ DeVaughn,⁶ and Stufflebeam⁷ variously support the theme that principal performance can be evaluated and that principals must be highly involved and have considerable input in the evaluation technique.

¹Ibid., p. 96.

²George B. Redfern, "Principals: Who's Evaluating Them, Why, and How?" The Bulletin of the National Association of Secondary School Principals, LVI (May, 1972), 86-87.

³Louis E. Barrilleaux, "Accountability Through Performance Objectives," The Bulletin of the National Association of Secondary School Principals, LVI (May, 1972), 105.

⁴William B. Castetter and Richard S. Heisler, "Approving and Improving the Performance of School Administrative Personnel," Center for Field Studies, Graduate School of Education, University of Pennsylvania, pp. 9-10.

⁵Stanley W. Niehaus, "The Anatomy of Evaluation," The Clearing House, XLII (February, 1968), 332.
⁶J. Everett DeVaughn, "Policies, Procedures, and Instruments in Evaluation of Teacher and Administrator Performance" (paper presented at the AASA Annual Convention, Atlantic City, N.J., February 16, 1972), p. 3.

⁷Daniel Stufflebeam, "The Relevance of the CIPP Evaluation Model for Educational Accountability" (paper presented at the Annual Meeting of the American Association of School Administrators, Atlantic City, N.J., February 24, 1971), p. 14.

The concept of administrative performance evaluation and principal involvement in the procedure is succinctly stated by Nicholson when he writes:

So what can secondary principals do at this time? Probably the most important thing is to be active in the process of developing accountability schemes for the secondary school principalship. The types of principal evaluation formats will be numerous and fittingly adapted in large measure to local conditions. Whatever the scheme is, however, the principal must be highly involved and have considerable input; for who knows better than the principal himself what criteria should be utilized in the determination of effective administrative performance?¹

¹Nicholson, "The Performance of Principals in the Accountability Syndrome," p. 97.

Need for the Study

Strickler observes that evaluation, when properly implemented, is a useful tool for self-improvement. "Evaluation," he continues, "as an end unto itself is meaningless, but as a means whereby an individual is able to judge, initially and periodically, his progress toward established goals, it has an importance that cannot be exaggerated."²

²Robert W. Strickler, "The Evaluation of the Public School Principal," The Bulletin of the National Association of Secondary School Principals, XLI (February, 1957), 55.

Howsam and Franco further emphasize the importance of evaluation of administrative performance when they write:

If evaluation is concerned with the improvement of service, the consequences of neglect are serious. If it is concerned with deciding who should be allowed to continue to administer, failure to evaluate will have tragic long-term consequences. In any event, what is done should be by design rather than by default. And it should be based on the soundest evidence that is available.¹

¹Robert B. Howsam and John M. Franco, "New Emphasis in Evaluation of Administrators," National Elementary Principal, XLIV (April, 1965), 36.

The need for a study like this is further supported by DeVaughn, who states:

Most appraisal procedures and instruments have been inadequate and highly subjective and have been administered under an assumption that the superior somehow possessed the required competence to make the correct judgment, usually without the involvement of the evaluatee in the process through self-appraisal, when the evaluatee perhaps best knows his strengths and weaknesses and could adequately state his professional need for help if invited to do so in an open, relatively threat-free climate.²
²DeVaughn, "Policies, Procedures and Instruments in Evaluation of Teacher and Administrator Performance," p. 4.

Redfern suggests that defining leadership productivity in education is more complex than in many other managerial endeavors. Principalship productivity cannot be measured in terms of units produced. The need to assess the principal's productivity, despite the inherent perplexities, is of the utmost urgency. Methods must be found to evaluate leadership output and to stimulate higher levels of achievement.¹

¹Redfern, "Principals: Who's Evaluating Them, Why, and How?" p. 87.

The implications of accountability are so inclusive that it is important for educators not to move in haste without serious debate and thought. Barrilleaux² further observes that despite caution, the accountability movement is sufficiently massive that principals should not consider themselves immune to its immediate effects.

²Barrilleaux, "Accountability Through Performance Objectives," p. 103.

Secondary school principals have an important role in the development of evaluative techniques. It seems imperative that they be active in the process of developing accountability schemes for the secondary school principalship. The principal knows best what criteria should be utilized in the determination of effective administrative performance.

If school systems are to remain viable and relevant to the society which they serve, the necessity is at hand for engaging in a process of evaluating principals.

Purpose of the Study

The purpose of this study was:

(1) To determine the status of performance evaluation of secondary public school principals in Michigan;

(2) To obtain criticisms, suggestions, and recommendations for the improvement of evaluation techniques;

(3) To evaluate these data and use the results to suggest implications for performance evaluation improvement;

(4) To gather additional data for later analysis.

Questions for Study

The questions this study attempted to answer were:

1. How do secondary public schools with formal evaluation procedures distribute themselves in terms of school enrollment, geographic area, and metro/nonmetro status?

2. What is the relationship between the method of formal evaluation practices as experienced by principals and school enrollment?

3. What are principals' perceptions of formal evaluations as expressed by their responses to (a) the role of formal evaluations in improving administrative efficiency, (b) their support of formal evaluations, and (c) the role of formal evaluations in offsetting negative unofficial informal evaluations?
How are comprehensive evaluation technique scores related to principals' perceptions of whether formal evaluations help improve administrative efficiency and to principals' support of formal evaluations? Limitations of the Study This study was limited to include only the prin­ cipals of secondary public schools in Michigan (N=583).1 Principals of the public junior high schools and elementary schools were excluded. No attempt was made to generalize beyond the total population included in the study. The survey questionnaire was constructed accord­ ing to prescribed principals for such instruments which were found to have support in the literature reviewed Michigan Education Directory and Buyers Guide (Michigan Education Directory, 701 Davenport Building, Lansing, Michigan, 1972-73). 2 Infra., pp. 12-16. and 9 thus makes claim to face validity. The committee and the researcher decided that this kind of validity met the requirement for this study. Assumptions Underlying the Study The following assumptions are essential to this study: (1) That principals have insights and/or perceptions which they will share concerning the character­ istics of administrative performance evaluations they have experienced; (2) That principals' perceptions, while they may be influenced by personal experiences and current personal situations at the time of responding, will be honestly shared; (3) That survey questionnaires, when carefully designed, have certain face value, thus making possible the use of data so gathered for pur­ poses of analyzing administrative performance evaluations. Definition of Terms Metropolitan Area.--A metropolitan area is a county or group of contiguous counties which contains at least one central city of 50,000 inhabitants or more. Counties contiguous to the one containing such a city 10 are included in a standard metropolitan area if they are essentially metropolitan in character and socially and economically integrated with the central city.1 Evaluation.— "Consideration of evidence in the light of value standards and in terms of the particular situation and the goals which the group or individual is striving to attain." 2 Questionnaire.— The questionnaire refers to a document containing a list of planned, written questions which required a response from the secondary school principal. For the purposes of this project, the term "questionnaire" was used interchangeably with the term "diagnostic instrument." Secondary School Principal.--The secondary school principal was the administrator directly responsible for the management and supervision of the secondary school program involving grades 7-12, 8-12, 9-12, or 10-12. Comprehensiveness.— "That characteristic of a point of view which strives for a maximum of inclusiveness so that the whole picture rather than scattered or isolated segments is in view." Yorks ^Carter V. Good, ed., Directory of Education (New McGraw-Hill Book Company^ Inc., 1955), p. 545. 2Ibid., p. 209. 3lbid., p. 117. 11 Comprehensive Evaluation Technique Score.— The sum of weighted responses to questionnaire items five and six. Michigan Athletic Enrollment Classification.--The Michigan Athletic Enrollment Classification is a ranking (A, B, C, D) of all public high schools in the state of Michigan according to the number of students enrolled in grades 9-12 by the fourth Friday of the school year. The Michigan Athletic Enrollment Classification hereafter will be referred to as school classification or school enroll­ ment. 
A description of these classification categories is listed in Table 1.1. TABLE 1.1.— Michigan athletic enrollment classification Class A B C D Number of Students 1361 or more students 651 to 1360 students 339 to 650 students Less than 339 students Organization of the Study Chapter I presents an introduction to the study and a discussion of the need for such a study. This is followed by a statement concerning the purpose of the study and a listing of questions for which answers were sought. The limitations and underlying assumptions of 12 the study are presented. The special terms used in the study are defined and the chapter closes with an overview of the organization of the study. Chapter II reviews selected literature under the following headings: (1) the survey method of research, (2) questionnaire development, (3) questionnaire returns, (4) the purposes of evaluation, and (5) the review of related studies. A conceptual frame of reference is developed for application in the analysis of the data. Chapter III describes the design of the study, the development of the questionnaire, data collection procedures, and the plan for the analysis of the data. The design describes the population selected, and a description of the sampling technique used. The section on data collection procedures describes the adminis­ tration of the questionnaire and methods of tabulation. The plan for analysis describes the ways in which recom­ mendations and suggestions will be examined. Chapter IV contains the presentation and analysis of the data. Chapter V summarizes the study and draws con­ clusions from the analysis of the data. Recommendations are made for further study and some possible improvements of evaluation procedures are suggested. 13 Copies of the questionnaire, the cover letter, the follow-up letter, Michigan Education Association geographic regions, a map of the metro/nonmetro counties, a sampling distribution map and a list of sample schools are included in the Appendices. CHAPTER II REVIEW OF SELECTED LITERATURE Introduction This chapter presents a review of selected litera­ ture and attempts to develop a theoretical framework in which to study selected aspects of the status of per­ formance evaluation of secondary public school principals in Michigan. The chapter is sub-divided under the following topics: (1) Survey method of research, Questionnaire development, (2) (3) Questionnaire returns, (4) Purposes of evaluation, and (5) Review of related studies. Survey Method of Research For certain kinds of educational research, the survey method of research is especially recommended. Good, Barr and Scates suggest that "the normative-survey approach is appropriate whenever the objects of any class vary among themselves and one is interested in knowing the extent to which different conditions obtain among 14 15 these o b j e c t s T h e y further point out that the term "survey" suggests the gathering of data about current conditions. The term "normative" suggests an attempt to ascertain what is the normal or typical condition or practice. "The survey attack is always appropriate," they continue, "when information concerning current conditions is desired in any field, however well explored, in which there are changes of condition or changes of population frequently from time to time." 2 Herriott refers to the survey research method as a form of scientific inquiry. He notes that it is par­ ticularly useful in the study of social and socialpsychological relationships. 
In descriptive survey research, he writes, The sample is selected to describe a welldefined population in terms of its characteristics, attitudes, or behavior. . . . Probability theory is utilized to assess the sampling error surrounding the se de scr iption s. The most basic element in the survey research method is that of "reasoning." Through this process the survey objectives and design are determined. In descriptive studies, reasoning may involve merely the Carter V. Good, A. S. Barr, and Douglas E. Scates, The Methodology of Educational Research (Wew York: Appleton-Century-Crofts, Inc., 1941), p. 289. 2Ibid., p. 295 16 careful identification of the population to be described and the variables on which this description is to take place.1 Herriott further suggests that in survey research the investigator faces a complex problem of reducing his data to reliable and valid indexes of the concepts sug­ gested by his reasoning. The researcher must usually develop his own measures of key concepts. This can be done in an ad hoc manner by assigning assumed numerical weights to different responses chosen in terms of their "face validity" and summing them to form a "total score" for a particular index. 3 Slonim suggests some advantages in using the sampling technique. He lists such advantages as: reduced costs, (2) reduced manpower, information more quickly, (1) (3) gathering initial (4) obtaining data unavailable otherwise, and (5) an actual increase of the accuracy in some instances. The risk that an estimate made from sample data does not truly represent the total population under study can be greatly reduced if probability sampling methods are combined with a sufficiently large ■'"Robert E. Herriott, "Survey Research Method," Encyclopedia of Educational Research, ed. by Robert L. Ebel (4th ed.; New York: The Macmillan Company, 1969) , p. 1400. 2Ibid., p. 1402. 3 Morris James Slonim, Sampling in a Nutshell (New York: Simon and Shuster, 1960) , pp. T~, TI 17 sample. He further suggests that "sampling is only one component, but undoubtedly the most important one, of that broad field of scientific method known as statistics." Slonim lists the following steps in the development of a sample surveys "(1) determine as precisely as possible the population, or universe, to be surveyed, (2) set up a sampling 'frame,' questionnaire, (3) give thought to the (4) carry out a small-scale pretest, and (5) conduct the survey."^Questionnaire Development The literature reviewed indicated that question­ naires were used frequently in a variety of research studies. Good, Barr, and Scates 2 quote Koos1 report that out of 581 studies of all kinds which he has reviewed, one-fourth had made use of the questionnaire. Several lists of criteria which provided guide3 lines for the construction of questionnaires were dis­ covered in the literature. Wise, Nordburg, and Reitz presented the following set of guidelines: 1Ibid., p. 19. 2 Good, Barr, and Scates, Methodology of Edu­ cational Research, p. 325. 3 See also Carter V. Good, Essentials of Educational Research (New Yorks Appleton-Century-Crofts, 1566) , p. 221. 18 1. 2. 3. 4. 5. 6. 7. Individual items should be phrased or expressed so that they are easily understood by the respondent. The questions should be programmed in such a manner that the sequence of questions helps the respondent. Questionnaire items should assist the respondent to determine the character of his response. Questions should not invite bias or prejudice or predetermine the respondent's answer. 
The questionnaire should not be constructed in such a way that it appears to over-burden the respondent. The items on a questionnaire should never alienate the respondent. The respondent ought to be made to feel that he is an important part of the research project.1 Good 2 suggests that the responses to the question­ naire should be valid so that the entire body of data taken as a whole will answer the basic question for which it is designed. He then presents a series of questions dealing with decisions about question content, question wording, and form of response to the question. Validity should also be considered when constructing a questionnaire. The following questions. Good feels, should be considered in any attempt to establish validity: 1. 2. 3. Is the question on the subject? Is the question perfectly clear and unambiguous? Does the question get to something stable, which is typical of the individual or of the situation? John E. Wise, Robert Nordburg, and Donald J. Reitz, Methods of Research in Education (Boston: D. C. Heath and Company, 1967), p. 101. 2 Good, Essentials, p. 223. 3Ibid., pp. 223-24. 19 4. Does the question pull or have extractive power? Will it be answered by a large enough proportion of respondents to have validity? 5. Do the responses show a reasonable range of variation? 6. Is the information consistent, in agreement with what is known, and in agreement with expectancy? 7. Is the item sufficiently inclusive? 8. Is there a possibility of obtaining an external criteria to evaluate the questionnaire?1 Wise, Nordburg, and Reitz 2 claim that a balanced questionnaire should include some open-end questions which are more likely to shed light on the respondent's true feelings. Questionnaire Returns Herriott 3 observes that the major weakness of the questionnaires is the low percentage of return to the researcher. Purcel, Nelson, and Wheeler 4 report that Scott found, in his study of incentives, that stamped envelopes and official sponsorship were effective in securing 1Ibid., pp. 224-25. 2 Wise, Nordburg, and Reitz, Methods of Research, p. 100. 3 Herriott, "Survey Research Method," p. 1402. 4 David J. Purcel, Howard F. Nelson, and David N. Wheeler, Questionnaire Follow-Up Returns as a Function of Incentives and Responder Characteristics (Minnesota: University of Minnesota, Project MINI-SCORE, 1970), p. 2. 20 returns. A study by Orr and Neyman1 found that the length of the questionnaire affected the return rate. A 37 per cent response to a four-page questionnaire as compared to a 30 per cent response to an eight-page questionnaire was reported. They also found that the peak return rate occurred twelve days after mailing. Analysis of the time interval data seems to indi­ cate that the greatest response comes near the end of the second week after the mailing of the questionnaire. As the number of incentives were increased the time interval was shortened slightly. 2 Sex seems also to be a factor in the likelihood that questionnaires will be returned. Purcel, et a l . , report that in one sample period 60 per cent of females 3 had responded versus 41 per cent of males. Incentives were found to be more effective with males than with females. Other researchers found that: (1) a typewritten letter of transmittal increased the return rate signifi­ cantly over a duplicated letter; (2) the nature of the appeal for assistance made in the cover letter affected the rate of return, with the most effective for his group of former college students being an appeal to help improve 1Ibid. 
3Ibid., p. 8. 2Ibid., p. 12 21 education for others; (3) whether or not the respondent was asked to sign the questionnaire made little difference in item response.^" Based on the findings of Purcel, Nelson, and Wheeler there was both evidence and opinion that returns would be increased by constructing a questionnaire that: (1) is logical in question organization; (2) is clear and unambiguous in wording— unbiased in phrasing; (3) is non-repetitive and non-trivial; (4) is as brief as possible; (5) is attractively reproduced; (6) avoids the use of the word "ques­ tionnaire"; (7) keeps directions brief, clear and distinct; (8) is printed on colored p a p e r . 2 In studies where questionnaires were used, concern for follow-up procedures was found to be necessary. The literature suggested that certain procedures were more likely to result in a higher return rate than others. The following procedures were recommended: a return self-addressed stamped envelope, (1) include (2) use a stamped rather than a business reply envelope, (3) include official sponsorship by a party respected by the potential respondent, letter, (4) include a personalized accompanying (5) consider the time (day of week and time of year) of mailing the questionnaire, (6) include assurance Studies by Moore; Sletto; and Gerberich and Mason cited by Purcel, Nelson, and Wheeler, Questionnaire, p. 2. 2 Purcel, Nelson, and Wheeler, Questionnaire, p. 3. 22 of confidentiality, (7) offer a summary of results, and (8) contain a deadline date for returning.^Good and Scates support the questionnaire as a tool for research when they write: The use of a questionnaire in descriptive-survey studies extends the investigators' powers of observation by serving to remind the respondent of each item, to help insure response to the same item from all cases, and to keep the investigator from collecting only the unique, exceptional or unusual facts particularly interesting to him. The question­ naire tends to standardize and objectify the obser­ vations of different enumerators, by singling out particular aspects of the situation.2 Purposes of Evaluation Concern for the purposes of evaluation of pro­ fessional performance was quite evident in the literature. An attempt was made in this review of literature to briefly survey this issue, with special interest in the purposes of performance evaluation of the principalship. The theme of performance effectiveness as the goal of evaluation was found repeatedly in the literature. 3 Campbell and Gregg suggest that the general pur­ pose of evaluation is to improve the effectiveness of 1Ibid. 2 Carter V. Good and Douglas E. Scates, Methods of Research (New York: Appleton-Century-Crofts, Inc., 1554), p. 606. 3 Roald F. Campbell and Russell T. Gregg, eds., Administrative Behavior in Education (New York: Harper and Brothers, Publisher, 1957), p. 512. 23 goal achievement. By means of the evaluating process, strengths can be discovered and maintained, weaknesses can be identified and minimized. They conclude that effective evaluation should result in the continuing improvement of organizational plans and procedures and of individual and group efforts in the accomplishment of accepted purposes. 
Strickier supports the theme of performance effec­ tiveness through evaluation when he writes: If the assumption that the principalship is one of the most important positions of educational leader­ ship in the public system is valid, it must follow that continuous professional and personal develop­ ment is prerequisite to the fulfillment of his responsibilities. It also follows that he must not only be encouraged and stimulated to improve; the school system which he serves must also pro­ vide for an evaluation of his principalship to insure the professional and personal growth the position demands.1 2 3 Howsam and Franco and Tolle also stress the theme that evaluation should emphasize the improvement of performance effectiveness. Robert W. Strickier, "The Evaluation of the Public School Principal," The Bulletin of the National Association of Secondary School Principals, XLI (February, 7""””5 " 5 T " " " ” ~ 2 Robert B. Howsam and John M. Franco, "New Empha­ sis in Evaluation of Administrators," National Elementary Principal. XLIV (April, 1965) , 37. ^Donald J. Tolle, "Evaluation: Who Needs It?" (paper presented at a faculty workshop held at Mineral Area College, Flat River, Missouri, September 3, 1970), p. 3. 24 Iwamoto and Hearn^ observe that evaluation in education is becoming increasingly important. Educators are being called upon to prove the merit of their pro­ grams with objective evidence. They further note that evaluation is more than a measure of past progress. It is the basis for building better programs in the future. Niehaus declares that "an evaluation must be an illuminating thing, and as it illuminates, it must yield understanding, knowledge, and a realistic sense of security and an awareness of fulfillment of what has already been accomplished," 2 He also observes that if those who are charged with responsibility in educational research or other kinds of operational programs do not evaluate appropriately, someone will evaluate for them. Unfortunately, the degree and intensity of noise which some evaluations generate are by no means predicated by qualifications of the evaluators. Educators must evaluate to know where they have been, to know at what point they have arrived, and to have an idea of where they are going. Niehaus concludes that "there is specific need for some new and practical innovations in evaluation ^David Iwamoto and Norman E. Hearn, "Evaluation Is A Pull Time Job," American Education, U.S. Department of Health, Education, and Welfare, V (April, 1969), 18-19. 2 Stanley W. Niehaus, "The Anatomy of Evaluation," The Clearing House. XLII (February, 1968), 332. 25 procedures. Structured evaluation instruments must yield objective, definitive, and clear-cut information. They must illuminate rather than compound and confuse."* Nolte 2 suggests the appraisal of administrators should be done in terms of process and of outcomes. Means and ends cannot be evaluated separately. How we do what we do conditions the ends which will be secured and, since the ends of the education effort are often far removed and subtle in character, appraisal of administration through the study of outcome alone is not possible. In an analysis of major principles of the evaluation process, Lewis 3 makes note that a major concern is the role of a given value system in the establishment of 4 goals and in the assessment of their attainment. Heier suggests the use of training programs to explain the 1Ibid., p. 334. 2 M. 
Chester Nolte, An Introduction to School Administration (New York: The Macmillan Company, 1967), pT 133. 3Leslie Lewis, "Evaluation: A Relationship of Knowledge, Skills, and Values" (from the Symposium "An Interdisciplinary Look at Evaluation," presented at the Joint Annual Meeting of the American Education Research Association and the National Council on Measurement in Education, Minneapolis, Minnesota, March, 1970). 4 H. D. Hexer, "Implementing an Appraisal-ByResults Program," Personnel, XLVII (November-December, 1970), 25. 26 evaluation process including the results expected, administrative procedures, dates and time frames, and the use of forms required in the evaluation process. In his comments on the purpose of evaluation, Kelly'*’ observes that the evaluator needs to learn how to guard against over-simplification. be able to describe complexity. To do this means to He continues, "to borrow a phrase from the researcher, within the develop­ ment process the evaluator must work to avoid the type one error or the too quick rejection of the null hypothesis that says: no difference." 2 Kelly further argues that developmental evaluation works to guard against over-simplification. He concludes that the evaluator must develop a series of data sets that will allow judgments to be made as to whether or not the intentions of development have been fulfilled in practice. It is in this way that the evaluator will guard against over-simplification. He will guard against the notion that wishing makes it so. Edward F. Kelly, "Extending the Countenance: A Comment for Evaluators" (paper presented at the Association for Educational Communications and Technology Annual Con­ vention, Minneapolis, Minnesota, April 16, 1972), p. 2. 2Ibid. 3Ibid., p. 6. 27 Demeke^ presents the theme that evaluation should be developed in terms of specifically explained areas of competence. He lists seven specific areas of competence to be used as evaluation criteria while Adams 2 suggests fourteen criteria to be included in the evaluation pro­ cedure . Culbreth declares that "if we misjudge the capacity and performance of our subordinates, we will fail to develop their full potential and fail to realize the full benefit of a valuable asset." 3 The National Education Association and the American Association of School Administrators have sup­ ported the evaluation of educational services. The National Education Association believes that it is a major responsibility of the teaching pro­ fession , as of other professions, to evaluate the quality of its services. To enable educators to meet this responsibility more effectively, the Association calls for continued research and Howard J. Demeke, "Guidelines for Evaluation: The School Principalship— Seven Areas of Competence," Department of Educational Administration and Supervision, Arizona State University, Tempe, Arizona, 1971. 2Velma A. Adams, "In West Hartford It's the Kids That Count," School Management, XV (September, 1971), 22. 3 George Culbreth, "Appraisals That Lead to Better Performance," Supervisory Management, XVI (March, 1971), 8* 28 experimentation to develop means of objective evaluation of the performance of all professional personnel. . . . 1 The American Association of School Administrators has declared: If growth is not static, sporadic, or unilinear, then the appraisal of what is happening becomes more important than what has happened. If this is true, then evaluation is an integral part of the whole process of becoming. 
Evaluation processes are significant factors in the development of the person who accepts and understands the process of becoming. Evaluation should be a continuous examination of the immediate experience rather than a procedure used at the end of a unit of work or at a specified time.2 Engleman, Cooper, and Ellena describe evaluation in terms of (1) determining the extent to which objectives have been attained, (2) pointing out the discrepancies between the results obtained and the standards set for 3 each objective, and (3) interpreting the results. They suggest that effective evaluation is a continuous, 1National Education Association, Addresses and Proceedings of the 105th, Annual Meeting (Minneapolis, Minnesota, July, 1967), p. 498.-------2 American Association of School Administrators, "Inservice Education for School Administrators" (Report of the AASA Commission on Inservice Education for School Administrators, Washington, D.C., 1963), p. 194. 3 Francis E. Engleman, Shirley Cooper, and William J . Ellena, Vignettes on the Theory and Practice of School Administration (New York: The Macmillan Company, 1963), p. 58. 29 comprehensive, cooperative process^- and predict that through adequate evaluation depicting strengths and weaknesses in existing practice, the exceptional practice of today will become the common practice of tomorrow. Culbreth continues support of the theme that per­ formance effectiveness is the goal of evaluation when he argues that making objective setting a part of every appraisal interview will improve the effectiveness of goal achievement. He suggests two kinds of objectives: (1) improvement goals that will help administrators become more productive in their present position and (2) personal development goals that will help the administrator achieve the private growth to which he aspires. Both the organization and the individual are helped through evaluation according to Castetter and Burchell. The organization is able to communicate to individuals the goals of the system, the specific objectives of the position, the plans which have been made to support the individual as he performs his role, the standards of performance the organi­ zation has established, the criteria it will employ in assessing the performance, the information it will gather to make the evaluation, and the steps it will take to improve individual effectiveness on the basis of the appraisal. The individual will be helped by the appraisal by providing him with information and counsel on changes which may be needed in his performance and 1Ibid., p. 62. 2Ibid., p. 63. 3 Culbreth, ’"Appraisals That Lead to Better Per' formance," p. 10. 3 2 30 methods for implementing the changes. There is also value in the opportunity the administrator has to feed back to the evaluator, facts and feelings about obstacles which prevent more effective individual performance. The evaluation process is conducive to creating better understanding between evaluator and evaluatee and to developing a positive influence on the feelings of evaluatees.i Stufflebeam defines evaluation as "the process of delineating, obtaining, and providing useful information for judging decision alternatives." 
points for consideration: 2 He suggests three (1) evaluation is conceived of as a systematic, continuing process, (2) the evalu­ ation process includes three basic steps, (a) the deline­ ation of questions to be answered and information to be obtained, (b) the obtaining of relevant information, and (c) the providing of information to decision makers so that they can use it to make decisions and thereby improve on-going programs, and (3) evaluation is conceived of as 3 a process to serve decision making. The concept that evaluation and accountability are interrelated is supported by stufflebeam. He defines William B. Castetter and Helen R. Burchell, "Edu­ cational Administration and the Improvement of Instruction," Educational Research and Service Bureau, Graduate School of Education, University of Pennsylvania, p. 62. 2 Daniel Stufflebeam, "The Relevance of the CIPP Evaluation Model for Educational Accountability" (paper read at the Annual Meeting of the American Association of School Administrators, Atlantic City, N.J., February 24, 1971), p. 4. 3Ibid 31 accountability as "the ability to account for past actions in terms of the decisions which precipitated the actions, the wisdom of those decisions, the extent to which they were adequately and efficiently implemented and the value of their effects."^ He concludes that "evaluation studies provide the kind of information needed for accountability." 2 The four kinds of evalu- ation serve particular accountability needs. 3 Lessinger defines accountability . . . as the product of a process. At its most basic level, it means that an agent, public or private, entering into a contractual agreement to perform a service will be held answerable for per­ forming according to agreed-upon terms within an established time period, and with a stipulated use of resources and performance standards.4 Howsam and Franco suggest that evaluation has two fundamental concerns, responsibility and accounta­ bility. They identify the basic questions to be answered as (1) the nature and extent of the responsibility under­ taken by the evaluatee and (2) the ability of the evaluatee 5 to account for his execution of the responsibility. ^•Ibid. , p. 14. ^Ibid. , p. 18. 3 The acronym CIPP was derived from the first letters of the names of four kinds of evaluation; context, input, process, and product. 4 Leon Lessinger, "Engineering Accountability for Results in Public Education," Phi Delta Kappan, LII (December, 1970), 217. 5 Howsam and Franco, "New Emphasis in Evaluation of Administrators," p. 37. 32 Young declares that accountability is causing many educators to think more precisely about their goals, how they can be achieved, and how they can determine the degree to which they have been achieved. In the past, quality in education has been described in terms of input--courses, dollars spent, and numbers of teachers. Today, the public is concerned about output— the results in terms of actual student learning. People want to know the quality of the return on their educational investment.'*' The major reasons, according to Young, for the call for accountability include the high costs of edu­ cation and low pupil achievement. Stenner 2 discusses education in terms of big business. A Gallop Poll of public attitudes toward education has shown that Americans rate the financial crisis as the number one problem of the public schools."* Local tax­ payers want to know how wisely their education dollars are being spent. 
^Stephen Young, "Accountability and Evaluation in the 70*s: An Overview" (paper presented at the Annual Meeting of the Speech Communication Association, San Francisco, California, December 27, 1971), p. 2. 2 Jack Stenner, "Accountability by Public Demand," American Vocational Journal, XLVI (February, 1971) , 34. 3 George Gallup, "The Third Annual Survey of the Public's Attitudes Toward the Public Schools, 1971," Phi Delta Kappan, LIII (September, 1971) , 35. 33 We do not know what it costs on the average to increase a student*s reading ability by one year. All we know is what it costs to keep him seated for one year. Advocates of accountability feel it would make more sense if we moved from a "per-pupil" cost to a "learning unit" cost.'*' One reason for demanding accountability is to determine the cost-effectiveness of the schools. Young further suggests that educators have made few moves to measure results and proclaim their success in terms of output— the performance of students. At the same time, educational failures have been glaringly recognized.2 Lessinger 3 suggests that hxgh dropout rates are one indicator of low pupil achievement. As further evidence of low pupil achievement, Lessinger cites the 30,000 plus functional illiterates— people with less than a fifth-grade reading ability— in the U.S. today who hold high school diplomas. 4 1Leon M. Lessinger, "Robbing Dr. Peter to Pay Pauls Accounting for Our Stewardship of Public Education," Educational Technology, XI (January, 1971) , 11. 70's: 2 Young, "Accountability and Evaluation in the An Overview," p. 6. 3 Lessinger, "Robbing Dr. Peter to Pay Paul: Accounting for Our Stewardship of Public Education," p. 12. 4 Leon M. Lessinger, "Accountability for Results: A Basic Challenge for America's Schools," American Edu­ cation, V (June-July, 1969) , 2. 34 Many schools are not providing the kind of edu­ cation that provides rational, responsible citizens. Ivor Berg's thesis is that public education does not give students the skills they need.1 While educators have avoided the measurement and display of their success, their failures have been measured and displayed outside the school system. Culbreth warns of the basic faults of evaluation programs when he lists the following items. 1. 2. 3. 4. Overemphasis on forms— If forms take precedence, an appraisal becomes a report card. Through this the evaluator may lose sight of the objective, proper evaluation with an eye to improvement. Poor communication— There must be two-way com­ munication. The evaluatee needs to explain why he performed the way he did. The evaluator needs to listen objectively and with an open mind. Reason, not emotion, should guide the dis­ cussion. Adhering to the once-a-year approach— The appro­ priate time for an evaluation rarely if ever coin­ cides with a timetable. Looking to the past and ignoring the future— Goals should be set for future development. Evaluation should motivate evaluatee toward improvement.2 Review of Related Studies The purpose of this section of the review was to examine related studies. Based on the review of the "^Ivor Berg, Education and Jobs; The Great Train­ ing Robbery (New YorKl Prueger Publishers, 1970). 2 George Culbreth, "Appraisals That Lead to Better Performance," Supervisory Management, XVI (March, 1971), 8-9. 35 literature, very few studies have been done for the pur­ pose of examining the status of performance evaluation of secondary public school principals. 
The studies that were reviewed dealt with limited aspects of performance evaluation and were indirectly related to the specific interests of this study.

In the field of business and industrial personnel management, stress is given to the necessity of accurate evaluations for salary purposes; hence the popularity of such techniques as the rank order method, paired comparison techniques, and others which result in a list of employees in order of desirability.

There is a good deal of discussion, both in educational literature and outside the profession, which stresses that evaluation of personnel is likely to do more harm than good in terms of productivity and morale if its primary objective is not to improve performance.

As early as 1897, Brooks[1] reported the reaction of teachers to supervision, merely generalizing her presentation of conclusions reached through analysis of accumulated data.

[1] Sarah Brooks, "Supervision as Viewed by the Supervised," National Education Association Proceedings (Washington, D.C., 1897), pp. 225-32.

Bird[1] sought to discover some of the qualities of supervisors most appreciated by teachers by obtaining the reactions of experienced teachers enrolled in various college classes. Bell,[2] Nutt,[3] and Saunders[4] carried on similar studies but obtained their data directly from teachers in service. Gist and King[5] utilized a questionnaire to obtain information from Seattle teachers with respect to how principals may be most helpful. Gray[6] gathered similar information from teachers regarding help received from principals.

[1] G. E. Bird, "Teachers' Estimates of Supervisors," School and Society, V (June 16, 1917), 717-20.
[2] A. D. Bell, "Grade Principal as Seen From the Teacher's Desk," Popular Education, XLII (September, 1924), 12-13.
[3] H. W. Nutt, "The Attitude of Teachers Toward Supervision," Education Research Bulletin, Ohio State University (February 6, 1924), 59-64.
[4] Olga Saunders, "What Teachers Want from the Principal in His Capacity as a Supervisor," School Review, XXX (October, 1925), 610-15.
[5] A. S. Gist and W. A. King, "The Efficiency of the Principal from the Standpoint of the Teacher," Elementary School Journal, XXIII (October, 1922), 120-26.
[6] W. S. Gray, "Methods of Improving the Technique of Teaching," Elementary School Journal, XX (December, 1919), 273-75.

Hubbard[1] used a questionnaire to obtain from Detroit teachers what they expected from supervisors. Hart[2] used the same technique to sample teachers' appraisal of supervision by the high school principal. Kyte[3] used the questionnaire technique to obtain teachers' appraisal of the helpfulness of principals.

Strickler[4] reports the following analysis of a questionnaire study in regard to principal evaluation among school districts of a population over 100,000 for the school year 1955-56. The questionnaire was sent to 81 school districts of a population of 100,000 to 499,999, and to 17 school districts of a population over 500,000, a total of 98 districts. The questionnaire was returned by 52 of the 81 districts (64.2%) and by 14 of the 17 districts (82.4%), a total return of 66 of 98 districts, or 67.3 per cent.

[1] Evelyn B. Hubbard, "What Teachers Expect of Supervisors," Detroit Journal of Education, III (May, 1923), 416-17.
[2] M. C. Hart, "Supervision from the Standpoint of the Supervised," School Review, XXXVII (September, 1929), 537-40.
Kyte, "The Elementary School Principal as a Builder of Teacher Morale," First Yearbook of the Department of Elementary School Principals (Michigan Education Association, 1927), pp. 44-52. 4 Strickler, "The Evaluation of the Public School Principal," pp. 55-58. 38 The analysis of the data indicated no significant difference in evaluation procedure between the school systems of the two sizes. Consequently, for schools of over 100,000 student population, practically all systems, over 96 per cent, do evaluate the principal and the majority of school districts make the evaluation at regular intervals throughout his tenure of office. The evaluation is infrequently done according to a rating scale or device and more often represents a purely subjective judgment on the part of the individuals making the evaluation. The evaluation is based, with few exceptions, upon the principal's executive ability, pro­ fessional leadership, community leadership, professional growth and personal qualities. The purpose of the evalu­ ation is generally to determine the principal's retention in the position or his promotion within the system. Salary advancement is very seldom based upon the evalu­ ation. Strickler, in his conclusions, suggests the need for two specific studies: one of the attitudes of the public school principal toward his evaluation and a second to establish criteria for the evaluation of the principalship and their application to an experimental group of public school principals.1 1Ibid., p. 58 39 Educational Research Service has conducted three surveys on procedures for evaluating the performance of administrators and supervisors in local school systems. ERS Circular No. 5, 1964, identified only 50 plans for appraising administrative personnel, and some of the plans were quite informal.1 A 1968 survey of all systems enrolling 25,000 or more pupils and a selected group of 31 smaller systems uncovered 62 formal programs of administrative evaluation. 2 For the 1971 survey, the decision was made to limit the mailing list only to systems enrolling 25,000 or more pupils, omitting the sampling of smaller systems included in the previous surveys.^ Although the sample and the number of replies in the 1971 survey were less than in the 1968 survey, the 1971 survey revealed 84 systems which have formal Educational Research Service, American Associ­ ation of School Administrators and NEA Research Division, Evaluation of School Administrative and Supervisory Per­ sonnel, ERS Circular No. 5 (Washington, D.C., the Service, October, 1964), pp. 1-40. 2 Educational Research Service, American Associ­ ation of School Administrators and NEA Research Division, Evaluating Administrative Performance, ERS Circular No. 7 (Washington, D.C., the Service, November, 1968), pp. 1-56. 3 Educational Research Service, American Associ­ ation of School Administrators and NEA Research Division, Evaluating Administrative/Supervisory Performance, ERS Circular No. 5 (Washington, D.C., the Service^November, 1971), pp. 1-60. 40 procedures for assessing the performance of administrative/ supervisory personnel. These 84 represent 54.5 per cent of the 154 responding systems, whereas the 62 systems identified in 1968 were only 39.5 per cent of the total response in that survey. The 1971 survey figures appear to indicate that the larger the school system, the more likely it is to have an evaluation program for adminisi trative and supervisory employees. 
From the responses of this survey, it is evident that in educational circles administrative evaluations are seldom used to make salary determinations. Only 12 of the 84 systems indicated that evaluations are used to determine regular or merit increments in salary.[2]

[1] Ibid., p. 1.
[2] Ibid., p. 3.

There are 12 general types of evaluation procedures identified by ERS among the 84 submitted. The 12 procedures are grouped into two general types: those which assess the evaluatee against prescribed performance standards (indicators of character, skill, and performance which have been chosen as standards against which all personnel in a similar position will be assessed), and procedures which are based on individual job targets or performance goals, against which each evaluatee will be rated as to degree of accomplishment of each goal (the management by objectives approach).[1]

[1] Ibid., p. 6.

Despite the difficulty in developing and implementing a performance goals procedure, a growing number of systems are adopting it in one form or another: 25 per cent (21 systems) in the 1971 survey, as compared with 13 per cent (8 systems) in the 1968 study and only one system in 1964.

Bernstein and Sawyer[2] advocate the job-target approach to the evaluation of principals. They suggest that the contemporary principal's success should be measured by how well he performs the activities and discharges the responsibilities encompassed in his assignment. A traditional problem is that this measurement has been made by means of objective evaluation instruments.[3] When measured by these standards, the principal is generally regarded solely as an administrator by objective; i.e., he is evaluated according to the degree to which he satisfies pre-determined task-performance criteria. The principal's true effectiveness often depends on how well he administers by exception, i.e., how he anticipates, identifies, and copes with the myriad of intangible but critical factors that influence the achievement of successful job-targets.[1]

[2] Julius C. Bernstein and Willard Sawyer, "Evaluating the Principal," The Principalship: Job Specifications and Considerations for the 70's (Washington, D.C.: National Association of Secondary School Principals, 1970), pp. 11-18.
[3] Ibid., p. 11.

A "task" is defined as some concrete duty that the principal must perform as part of his ordinary, day-to-day routine. Tasks may not be closely related to the larger issues of education; indeed, they might impede the principal as he tries to address these issues. A "job-target" is defined as an objective that relates to the long-range issues of school improvement. Job-targets are likely to have significant impact on such areas as curriculum or community relations. They are goals that are worthy of being the core concerns of the modern principal.[2] The modern principal should be evaluated in terms of how well he organizes the resources at his command, first to define and then to achieve truly important job-targets.

[1] Ibid.
[2] Ibid.

Summary

This review of literature was divided into four areas of concern. There was general agreement that the survey method of research was an acceptable way to gather data. Questionnaires were found to be used frequently in all kinds of research. There are certain guidelines which, when followed, tend to produce better and more reliable results. A low rate of return was one of the major problems experienced in the use of questionnaires. Here also the literature suggests ways of improving the percentage of return.
The use of probability theory in selecting the sample has greatly increased the value of the findings of questionnaire studies.

The literature emphasized concern for the purposes of evaluation of professional performance. The theme of performance effectiveness as the goal of evaluation was found repeatedly in the literature. Evaluation is more than a measure of past progress; it is the basis for building better programs in the future. There was general agreement that there is specific need for some new and practical innovations in evaluation procedures.

Evaluation and accountability are interrelated. Evaluation studies provide the kind of information needed for accountability.

Very few studies have been conducted for the purpose of examining the status of performance evaluation of secondary public school principals. The studies that were reviewed dealt with limited aspects of performance evaluation. There was general agreement that evaluation of personnel is likely to do more harm than good in terms of productivity and morale if its primary objective is not to improve performance.

CHAPTER III

METHODS OF PROCEDURE

Introduction

This chapter provides a detailed presentation of the research design, including: (1) selection of the sample of public high schools, (2) description of the sampling technique used, (3) outline of the sampling distribution by area, (4) the questionnaire approach, (5) development of the questionnaire, (6) questions for study, (7) data collection procedures, (8) treatment of the data, and (9) summary.

Selection of the Sample

The population of this study comprised the secondary public school principals in the state of Michigan, from whom a random stratified sample was drawn. The basic sampling unit was the public high school, not the school district. The public high schools listed in the 1972-73 Michigan Education Directory and Buyer's Guide[1] comprised the total population (N=583) of this study. A random stratified sample (n=293) of public high schools from this population was chosen for study. The rationale for the random stratified sample of public high schools was derived from Sampling Opinions[2] by E. J. Stephen and P. J. McCarthy and Sample-Size Determination[3] by Arthur E. Mace.

Each public high school in Michigan is classified by the Michigan High School Athletic Association as either A, B, C, or D according to the number of pupils enrolled.[4] Michigan Statistical Abstracts[5] identifies the type of county, metro or nonmetro, in which each public high school in the state was located. A list of the metro counties is found in Table 3.1 and the nonmetro counties in Table 3.2. Appendix A identifies the Michigan Education Association geographical regions. Appendix B contains a map of the metro and nonmetro counties in Michigan.

[1] Michigan Education Directory and Buyer's Guide (Michigan Education Directory, 701 Davenport Building, Lansing, Michigan, 1972-73).
[2] E. J. Stephen and P. J. McCarthy, Sampling Opinions (New York: John Wiley and Sons, 1958), pp. 10-32.
[3] Arthur E. Mace, Sample-Size Determination (New York: Reinhold Publishing Co., 1964), pp. 2-3.
[4] Michigan High School Athletic Association Bulletin, Directory Issue 1972-1973 School Year, XLIX, No. 3 (Michigan High School Athletic Association, November, 1972), 245-50.
[5] Michigan Statistical Abstracts (Michigan State University, Graduate School of Business Administration, 1968), pp. 535-36.
TABLE 3.1.--Metro counties in the state of Michigan

Clinton, Eaton, Genesee, Ingham, Jackson, Kalamazoo, Kent, Lapeer, Macomb, Monroe, Muskegon, Oakland, Ottawa, Saginaw, Washtenaw, Wayne

TABLE 3.2.--Nonmetro counties in the state of Michigan

Alcona, Alger, Allegan, Alpena, Antrim, Arenac, Baraga, Barry, Bay, Benzie, Berrien, Branch, Calhoun, Cass, Charlevoix, Cheboygan, Chippewa, Clare, Crawford, Delta, Dickinson, Emmet, Gladwin, Gogebic, Grand Traverse, Gratiot, Hillsdale, Houghton, Huron, Ionia, Iosco, Iron, Isabella, Kalkaska, Keweenaw, Lake, Leelanau, Lenawee, Livingston, Luce, Mackinac, Manistee, Marquette, Mason, Mecosta, Menominee, Midland, Missaukee, Montcalm, Montmorency, Newaygo, Oceana, Ogemaw, Ontonagon, Osceola, Oscoda, Otsego, Presque Isle, Roscommon, Sanilac, Schoolcraft, Shiawassee, St. Clair, St. Joseph, Tuscola, Van Buren, Wexford

The public high schools were then grouped into strata according to the Michigan Education Association geographical regions and further grouped into the four athletic enrollment classifications, A, B, C, and D, within each stratum.

The population percentage for each Michigan Education Association geographical region was then computed. Listed in Table 3.3 are the population and sample percentages according to the Michigan Education Association geographical regions. Each population figure represents the percentage of public high schools in the state, and each sample figure represents the percentage of public high schools in the study, for each Michigan Education Association geographical region.

A random stratified sample representing 50 per cent of the public high schools in each athletic enrollment classification was then drawn. The population and sample percentages according to the athletic enrollment classification are included in Table 3.4.

The sample drawn was then sub-divided into categories based on the Michigan Education Association geographical regions, with data identifying the athletic classification of each school selected and the metro or nonmetro county in which the school selected was located. A random stratified sample of 293 public high schools was drawn.

TABLE 3.3.--Michigan Education Association geographical regions listing population and sample percentages

[The paired population and sample percentages for Regions 1 through 16 and the combined Regions 17 and 18 could not be recovered from the scan in their original row alignment; each column totaled 99.89 per cent, the shortfall from 100 per cent being due to rounding.]

TABLE 3.4.--Michigan athletic enrollment classification listing population and sample percentages

Athletic Enrollment Classification    Population    Sample
Class A                                  28.3%       28.6%
Class B                                  27.6%       27.6%
Class C                                  23.0%       22.8%
Class D                                  21.1%       21.0%
Totals                                  100.0%      100.0%

This design enabled reliable descriptive statistical comparisons to be made for each of the Michigan Education Association geographical regions outlined, and for the different classifications of public high schools as determined by the athletic conference enrollment classification.

Description of the Sampling Technique Used

All public high schools in the state of Michigan comprised the population (N=583) of this study. A stratified random sample (n=293) from this population was chosen for study.
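The 50 per cent draw within each stratum can be expressed compactly in code. The sketch below, in Python, is an editorial illustration of the technique described above, not the procedure actually used in the study; the school records and the field names ("mea_region", "athletic_class") are hypothetical.

    import random

    def stratified_half_sample(schools, seed=1973):
        """Draw 50 per cent of the schools within each stratum,
        where a stratum is an (MEA region, athletic class) pair."""
        random.seed(seed)
        strata = {}
        for school in schools:  # each school is a dict with the two keys below
            key = (school["mea_region"], school["athletic_class"])
            strata.setdefault(key, []).append(school)
        sample = []
        for members in strata.values():
            # Every school in a stratum has an equal chance of selection.
            sample.extend(random.sample(members, round(len(members) / 2)))
        return sample

    # Hypothetical usage: the 583 schools would reduce to roughly 293.
    # sample = stratified_half_sample(schools)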
Each public high school in the state of Michigan was ordered by Michigan Education Association geographical region and Michigan High School Athletic Association classification. Each public high school in a given region and athletic classification category had an equal probability of being selected. This technique was derived from the texts Statistics[1] by Hays and Introduction to Statistical Analysis and Inference[2] by Armore.

[1] William L. Hays, Statistics (New York: Holt, Rinehart and Winston, 1963), pp. 64, 215.
[2] Sidney J. Armore, Introduction to Statistical Analysis and Inference (New York: John Wiley and Sons, Inc., 1967), pp. 236-37, 309.

Sampling Distribution by Area in the State

A sample of 293 public high schools in Michigan was drawn, using the technique outlined above. A sampling distribution map of the public high schools selected was then developed, showing the distribution of selected schools. This sampling distribution map is found in Appendix C. It should be noted that the concentration of schools in the southeast portion of the state, as illustrated on the sampling distribution map, directly reflects the large number of public high schools located in this geographic area.

The Questionnaire Approach

It was decided to use the questionnaire approach in gathering data from the 293 secondary school principals selected in the random stratified sample of public high schools in the state of Michigan. The survey of literature provided the rationale for the questionnaire approach. The writing of Good and Scates exemplifies this rationale:

The questionnaire is a major instrument for data gathering in descriptive-survey studies and is used to secure information from varied and widely scattered sources. The questionnaire is particularly useful when one cannot readily see personally all of the people from whom he desires responses or where there is no particular reason to see the respondent personally. This technique may be used to gather data from any range of territory, sometimes national or international.[1]

The validity of the questionnaire in a descriptive survey was pointed out by Spahr and Swenson.[2] Remmers[3] also indicated that the questionnaire approach is a useful method for the collection of data. The use of the questionnaire approach in research studies has been endorsed by Parten,[4] Cronbach,[5] and Scates and Yeomans[6] as an effective method for the collection of information.

[1] Carter V. Good and Douglas E. Scates, Methods of Research (New York: Appleton-Century-Crofts, Inc., 1954), p. 32.
[2] Walter E. Spahr and Rinehart J. Swenson, Methods and Status of Scientific Research (New York: Harper and Brothers, 1930), pp. 232-33.
[3] H. H. Remmers, Introduction to Opinion and Attitude Measurement (New York: Harper and Brothers, 1954), p. 52.
[4] Mildred Parten, Surveys, Polls, and Samples: Practical Procedures (New York: Harper and Brothers, 1950), p. 57.
[5] Lee J. Cronbach, Essentials of Psychological Testing (New York: Harper and Brothers, 1960), p. 125.
[6] Douglas E. Scates and Alice V. Yeomans, The Effect of the Questionnaire Form on Course Requests of Employed Adults (Washington, D.C.: American Council on Education, 1960), pp. 2-4.

Development of the Questionnaire

An opinion-survey type questionnaire was designed to gather information concerning the status of performance evaluation of secondary public school principals in Michigan as perceived by the principals.
The questionnaire included the following content areas: (1) practices included in evaluation procedures, (2) purposes for which principals are evaluated, (3) recommended purposes for which evaluations should ideally be used, (4) principals' opinions of evaluations, (5) personnel who serve as evaluators, and (6) the status of written grievance procedures for principals.

A review of the literature[1] dealing with the development of questionnaires provided the necessary theoretical background. A number of questionnaires, used to gather data in similar types of studies, were reviewed, and items for possible use were selected. These items were circulated among fellow administrators in the Beecher School District, where comments and suggestions were solicited.

[1] See Chapter II for the review of the literature.

A rough draft of the questionnaire was prepared incorporating the suggestions offered by fellow administrators. Fifteen public high school principals were asked to respond to the questionnaire. Following this preliminary trial administration of the questionnaire, the participants were encouraged to react verbally to the instrument. Several helpful suggestions were made and later incorporated into a further revision. Consultations with members of the researcher's doctoral committee and with staff members from the Office of Research Consultation resulted in still further revisions of certain questions prior to the pilot administration.

Pilot Administration

Printed copies of the revised questionnaire were presented personally to twelve secondary school principals in Genesee County. The pilot study involved three secondary school principals in each of the four athletic enrollment classifications, namely Class A, B, C, and D size schools. The selection process involved a random sample of secondary school principals in Genesee County not previously selected in the study sample.

The purpose of the pilot study was to refine the questionnaire as an instrument to be used in gathering data for the study. The results of the responses were carefully tallied, analyzed, and combined with the suggestions of several colleagues. These suggestions resulted in some minor changes in the general format of the questionnaire, along with the deletion of some items and the addition of others.

Questionnaire Format

The final form of the questionnaire is presented in Appendix D. The questionnaire included the following content areas:

(1) Demographic data
(2) Practices included in evaluation procedures
(3) Purposes for which principals are evaluated
(4) Purposes for which principal performance evaluations should ideally be used
(5) Principals' opinions of performance evaluations
(6) Personnel, by position, who serve as evaluators
(7) Status of written grievance procedures for principals

Three types of questions were used in the instrument. In the first type the respondent provided the requested short-answer response. The second type asked the respondent to check all responses that applied to him. In the third type, the respondent was requested to check the yes or no response.

Two preliminary information statements were included at the beginning of the questionnaire requesting the school enrollment and the Michigan Education Association region of the school. Questionnaire item one relates to research question one.
This item is combined with the responses from the preliminary information statements (see above) and provides data concerning characteristics of schools with evaluation procedures as compared to characteristics of schools which do not have evaluation procedures.

Questionnaire item five deals with specific practices of evaluation procedures and provides information for research question two. Items eight, nine, and eleven provide data about the principals' opinions of evaluations and relate to research question three. Research question four was answered through data gathered by responses to questionnaire items two and four. These items deal with the period of time evaluations have been used and the frequency of evaluations. Questionnaire items six and seven deal with evaluation purposes experienced and evaluation purposes recommended. These items provide data for research question five.

Research question six deals with the relationship between grievance procedures experienced by principals and the role of evaluations in the dismissal process. This question relates to items six-f and twelve in the questionnaire. Research question seven was answered through data gathered from questionnaire items five, nine, and ten. Questionnaire items six, eight, and ten provide the data which relate to research question eight. Questionnaire items five-a and five-b provide data for research question nine, which asks for the basic type of evaluation form used in each school. Questionnaire items five and six provide an answer to research question ten, which asks for the relationship between comprehensive evaluation technique scores and school enrollment.

Research question eleven dealt with the relationship between comprehensive evaluation technique scores and principals' opinions of performance evaluations. The response was provided through data gathered from items five, six, eight, and nine.

Questions for Study

This study attempted to answer these questions:

1. How do secondary public schools with formal evaluation procedures distribute themselves in terms of school enrollment, geographic area, and metro/nonmetro status?

2. What is the relationship between the method of formal evaluation practices as experienced by principals and school enrollment?

3. What are principals' perceptions of formal evaluations as expressed by their responses to (a) the role of formal evaluations in improving administrative efficiency, (b) their support of formal evaluations, and (c) the role of formal evaluations in offsetting negative unofficial informal evaluations?

4. How are the number of years formal evaluations have been practiced and the frequency of formal evaluations related to school enrollment?

5. What is the relationship between the purposes for which principals are formally evaluated and the purposes for which principals feel evaluations ideally should be used?

6. What is the relationship between grievance procedures as experienced by principals and the use of evaluations to establish evidence where dismissal from service is an issue?

7. How are those who evaluate secondary public school principals and the method of evaluation related to principals' support of formal evaluations?

8. How are those who evaluate secondary public school principals and the purposes for which principals are formally evaluated related to the principals' perceived improvement in administrative efficiency?
9. How do schools which use a prescribed rating scale method of evaluation differ from schools which use the performance objective method of evaluation in terms of enrollment, geographic area, and metro/nonmetro status?

10. What is the relationship between comprehensive evaluation technique scores and school enrollment?

11. How are comprehensive evaluation technique scores related to principals' perceptions of whether formal evaluations help improve administrative efficiency and to principals' support of formal evaluations?

Data Collection Procedures

Administration of the Questionnaire

A revised, printed copy of the questionnaire (see Appendix D), together with a cover letter (see Appendix E) and a stamped, self-addressed envelope, was mailed to the 293 secondary public school principals in Michigan who were included in the sample. The questionnaires were mailed on May 10, 1973. Consideration was given to the choice of mailing time and date as suggested in the literature. Each questionnaire was coded to identify the following: (1) name of the high school, (2) athletic classification by student enrollment, and (3) metro or nonmetro county in which the school was located. A follow-up letter (see Appendix F), another copy of the questionnaire, and a second stamped, self-addressed envelope were mailed on May 24, 1973, to those who had not responded.

Considerable interest was indicated by the respondents. Eighty-seven per cent of the sample (n=254) responded. The number and percentage of responses by athletic enrollment classification are shown in Table 3.5. The number and percentage of responses by Michigan Education Association region are shown in Table 3.6.[1] The number and percentage of responses by metro/nonmetro county are shown in Table 3.7. See Appendix G for the sample schools.

[1] The Michigan Education Association geographic regions were grouped into ten areas to provide a minimum of two public high schools in each athletic class category. (See Appendix G.)

TABLE 3.5.--Response distribution by athletic enrollment classification

Athletic Enrollment
Classification      Number in Sample    Number of Respondents    Percentage
Class A                    83                    59                 71.1
Class B                    76                    76                100.0
Class C                    78                    75                 96.2
Class D                    56                    44                 78.2
Total                     293                   254                 86.7

TABLE 3.6.--Response distribution by Michigan Education Association region

MEA Area                           Number in Sample    Respondents    Percentage
Area I (MEA Regions 1, 2, 3)              56               47            83.9
Area II (MEA Region 4)                    13               10            76.9
Area III (MEA Region 5)                   23               20            86.9
Area IV (MEA Regions 6, 7)                35               26            74.3
Area V (MEA Region 8)                     19               17            89.5
Area VI (MEA Region 9)                    26               21            80.8
Area VII (MEA Regions 10, 11)             35               35           100.0
Area VIII (MEA Regions 12, 13)            33               31            93.9
Area IX (MEA Regions 14, 15)              24               19            79.2
Area X (MEA Regions 16, 17, 18)           29               28            96.6
Total                                    293              254            86.7

TABLE 3.7.--Response distribution by metro/nonmetro county

County Status    Number in Sample    Number of Respondents    Percentage
Metro                  137                   114                 83.9
Nonmetro               156                   140                 90.4
Total                  293                   254                 86.7

Treatment of the Data

The data of this research project were treated with descriptive statistics. Procedures recommended through consultations with the Office of Research Consultation, College of Education, Michigan State University, were used to establish the plan for the analysis and treatment of the data. The data from the questionnaires were keypunched onto computer data cards. The Michigan State University Control Data Corporation 6500 computer was used to tabulate and analyze the data.
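The entries of Tables 3.5 through 3.7 are straightforward frequency-and-percentage tabulations. A minimal sketch of the computation, using the Table 3.5 figures, follows; it is an editorial illustration, and it should be noted that 44/56 rounds to 78.6 rather than the 78.2 printed for Class D.

    # (number in sample, number of respondents) per classification, Table 3.5
    responses = {"Class A": (83, 59), "Class B": (76, 76),
                 "Class C": (78, 75), "Class D": (56, 44)}

    for label, (sampled, returned) in responses.items():
        print(f"{label}: {returned}/{sampled} = {100 * returned / sampled:.1f}%")

    total_sampled = sum(s for s, _ in responses.values())
    total_returned = sum(r for _, r in responses.values())
    print(f"Total: {total_returned}/{total_sampled} = "
          f"{100 * total_returned / total_sampled:.1f}%")   # 254/293 = 86.7%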
The methods used to analyze the data obtained from the questionnaires were: (1) tables of distribution recording the frequency, percentage, and standard deviation; (2) chi-square tables; and (3) one-way analysis of variance. The .05 alpha level was chosen for this research study to establish statistical significance. This level indicates that observed differences between groups as large as those found would be likely to occur by chance only five times in every 100 cases. No hypotheses were tested, since it was agreed by the research committee that the study was a normative survey and was exploratory in nature.

Summary

The population and design of the study, development and administration of the questionnaire, data collection procedures, and treatment of data were described in this chapter.

This was a normative survey study. A questionnaire was used to gather data through a random stratified sample of secondary public school principals in Michigan. Data were gathered in six areas: (1) practices included in evaluation procedures, (2) purposes for which principals are evaluated, (3) purposes for which evaluations should ideally be used as perceived by principals, (4) principals' opinions of evaluations, (5) personnel who serve as evaluators, and (6) the status of written grievance procedures for principals. Three methods of analysis were described: (1) tables of distribution providing the frequency, percentage, and standard deviation; (2) chi-square tables; and (3) the one-way analysis of variance.

CHAPTER IV

PRESENTATION AND ANALYSIS OF DATA

Introduction

This chapter presents the results of the study according to the data received from the principals. The respondents were secondary school principals in the state of Michigan. Data are presented from the responses of the ninety-six principals who indicated the use of formal performance evaluation procedures.

Purpose of the Study

The purpose of this study was: (1) to determine the status of performance evaluation of secondary public school principals in Michigan; (2) to obtain criticisms, suggestions, and recommendations for the improvement of evaluation techniques; (3) to evaluate these data and use the results to suggest implications for performance evaluation improvement; and (4) to gather additional data for later analysis.

This chapter presents the results of the study in terms of responses received from the principals included in the sample who reported the use of formal performance evaluation procedures.

Preliminary Information Statements

Through demographic data and interpretation of the returned questionnaires, several items of information were solicited from the respondents. These data are presented in Table 3.5, Table 3.6, and Table 3.7, but are summarized here in order to describe the sample.

Table 3.5 gives the response distribution by athletic enrollment classification. There were fifty-nine Class A school respondents, which represented slightly over 71 per cent of the schools in the Class A sample. One hundred per cent of the seventy-six Class B schools in the sample responded to the questionnaire, while 96.2 per cent of the seventy-eight Class C schools in the sample responded. Forty-four Class D schools responded, which represented 78.2 per cent of the schools in the Class D sample.

Table 3.6 indicates the response distribution by Michigan Education Association region.
In MEA Regions 4, 6, 7, 14, and 15, 74 to 80 per cent of the sample schools responded to the questionnaire, while 80 to 90 per cent of the sample schools responded in MEA Regions 1, 2, 3, 5, 8, and 9. In MEA Regions 12 and 13, 93.9 per cent of the sample schools responded, while 96.6 per cent of the sample schools responded in MEA Regions 16, 17, and 18. In MEA Regions 10 and 11, 100 per cent of the 35 sample schools responded to the questionnaire.

Table 3.7 gives the response distribution by metro and nonmetro county. There were 114 metro county school respondents, which represented 83.9 per cent of the metro school sample. Ninety per cent of the 156 nonmetro county sample schools responded to the questionnaire.

The preliminary information presented above has been summarized from Table 3.5, Table 3.6, and Table 3.7 in order to describe the sample. The remaining portion of the chapter will present an analysis of the data gathered from the respondents (n=96) who reported the use of a formal method of performance evaluation of secondary public school principals.

Questions for Study

This study attempted to answer these questions:

1. How do secondary public schools with formal evaluation procedures distribute themselves in terms of school enrollment, geographic area, and metro/nonmetro status?

2. What is the relationship between the method of formal evaluation practices as experienced by principals and school enrollment?

3. What are principals' perceptions of formal evaluations as expressed by their responses to (a) the role of formal evaluations in improving administrative efficiency, (b) their support of formal evaluations, and (c) the role of formal evaluations in offsetting negative unofficial informal evaluations?

4. How are the number of years formal evaluations have been practiced and the frequency of formal evaluations related to school enrollment?

5. What is the relationship between the purposes for which principals are formally evaluated and the purposes for which principals feel evaluations ideally should be used?

6. What is the relationship between grievance procedures as experienced by principals and the use of evaluations to establish evidence where dismissal from service is an issue?

7. How are those who evaluate secondary public school principals and the method of evaluation related to principals' support of formal evaluations?

8. How are those who evaluate secondary public school principals and the purposes for which principals are formally evaluated related to the principals' perceived improvement in administrative efficiency?

9. How do schools which use a prescribed rating scale method of evaluation differ from schools which use the performance objective method of evaluation in terms of enrollment, geographic area, and metro/nonmetro status?

10. What is the relationship between comprehensive evaluation technique scores and school enrollment?

11. How are comprehensive evaluation technique scores related to principals' perceptions of whether formal evaluations help improve administrative efficiency and to principals' support of formal evaluations?

Question One

1. How do secondary public schools with formal evaluation procedures distribute themselves in terms of school enrollment, geographic area, and metro/nonmetro status?

The first question was analyzed by describing secondary public schools that have formal evaluation procedures according to their athletic enrollment classification, geographic area, and metro/nonmetro status.
Table 4.1 shows the distribution of schools with formal evaluation procedures[1] according to their athletic enrollment classification. Including all athletic enrollment classifications, 37.8 per cent (n=96) of the respondent schools (n=254) reported the use of some type of formal evaluation procedure. Class A school respondents indicated a 71.2 per cent use of formal evaluation procedures. Class B school respondents showed a 38.2 per cent use of evaluation procedures, while 30.7 per cent of the Class C school respondents indicated the use of evaluation procedures. Class D school respondents indicated a 6.8 per cent use of formal evaluation procedures. Due to the low frequency of response from Class D schools (3), athletic enrollment classification will hereafter be redefined as: Class A schools, Class B schools, and Class C and D schools combined.

[1] The term "formal evaluation procedures" refers to the procedures for evaluating the performance of secondary public school principals in Michigan.

TABLE 4.1.--Distribution of schools with formal evaluation procedures by athletic enrollment classification

Athletic Enrollment    Number in    Number of      With Formal Evaluation Procedures
Classification         Sample       Respondents    Number    Percentage
Class A                    83            59           41        71.2
Class B                    76            76           29        38.2
Class C                    78            75           23        30.7
Class D                    56            44            3         6.8
Total                     293           254           96        37.8

Table 4.2 presents the distribution of schools with formal evaluation procedures according to their Michigan Education Association geographic area.[2]

[2] Supra, p. 52.

TABLE 4.2.--Distribution of schools with formal evaluation procedures by geographic area

MEA Area                           Number in    Number of      With Formal Evaluation Procedures
                                   Sample       Respondents    Number    Percentage
Area I (MEA Regions 1, 2, 3)           56           47            25        53.2
Area II (MEA Region 4)                 13           10             3        30.0
Area III (MEA Region 5)                23           20             9        45.0
Area IV (MEA Regions 6, 7)             35           26            16        61.5
Area V (MEA Region 8)                  19           17             6        35.3
Area VI (MEA Region 9)                 26           21             8        38.1
Area VII (MEA Regions 10, 11)          35           35             9        25.7
Area VIII (MEA Regions 12, 13)         33           31            10        32.3
Area IX (MEA Regions 14, 15)           24           19             6        31.6
Area X (MEA Regions 16, 17, 18)        29           28             4        14.3
Total                                 293          254            96        37.8

The percentage of respondent schools with formal evaluation procedures ranged from a low of 14.3 for Area X (MEA Regions 16, 17, 18) to a high of 61.5 for Area IV (MEA Regions 6, 7). Areas II, V, VII, VIII, and IX ranged from 25 to 35 per cent of respondent schools with formal evaluation procedures. Area VI had 38 per cent of respondent schools with formal evaluation procedures, while Area III had 45 per cent and Area I had 53 per cent.

Table 4.3 shows the distribution of schools with formal evaluation procedures according to their metro/nonmetro county status. The metro county school respondents indicated a 56.1 per cent use of formal evaluation procedures, while 22.9 per cent of the nonmetro county school respondents indicated the use of formal evaluation procedures.

This question was answered by presenting the number of schools in the sample, the number of respondents, and the number and percentage of respondent schools with formal evaluation procedures for each of the athletic enrollment classifications, the geographic areas, and the metro/nonmetro county statuses. Class A schools located in Detroit and in the metropolitan counties of Wayne, Washtenaw, Jackson,
Clair have better than a 50 per cent possibility of having formal evaluation procedures. This compares to a 37.8 per cent possibility for the 254 respondent schools. TABLE 4.3.— Distribution of schools with formal evaluation procedures by metro/nonmetro county status Number of Respondent Schools with Formal Evalu­ ation Pro­ cedures Percentage of Respondent Schools with Formal Evalu­ ation Pro­ cedures Number in Sample Number of Respondents Metro Nonmetro 137 156 114 140 64 32 56.1 22.9 Total 293 254 96 37.8 County Status Question Two Question Two 2. What is the relationship between the method of formal evaluation practices as experienced by principals and school enrollment? Because each principal could indicate use of more than one of the fourteen stated methods of evalu­ ation, the researcher attempted to answer this general question by looking at fourteen different relationships (the relationship between the use of a particular method 74 of evaluation and the school athletic enrollment classifi­ cation) . Each relationship was analyzed by using the chi-square statistic. Table 4.4 presents the sample distribution of enrollment classification by use of a particular evalu­ ation method for each of the fourteen methods of evalu­ ation. Included in the table is the chi-square statistic for each relationship (use of a particular method of evaluation and school enrollment classification), the number of schools and the percentage of schools. The only significant relationship at the .05 alpha level was for the evaluation procedure wherein the evaluatee signs the evaluation form. Contributing to this is the fact that a greater percentage of Class A schools marked the "evaluatee signs the evaluation form" category than did those in Class B and Class C-D schools. There were forty-one Class A school respondents, twenty-nine Class B school respondents, and twenty-six Class C-D school respondents. Eight of the fourteen methods of evaluation were reported used by over 50 per cent of the Class A school respondents. Five of the fourteen methods of evaluation were reported used by over 50 per cent of the Class B schools and four of the fourteen methods of evaluation were reported used by over 50 per cent of the Class C-D schools. Data from the ninety-six respondent schools indicate that the lowest percentage TABLE 4.4.— Distribution of the methods of evaluation and the school enrollment classification School Enrollment Classification Method of Evaluation 1. Prescribed rating scale 2. Performance objec­ tives 3. Narrative form 4. Self-evaluation 5. Pre-evaluation conference 6. Conference during evaluation process 7. Post evaluation conference 8. Automatic evalu­ ation review by third party 9. Evaluatee receives copy of evaluation form 10. Evaluatee may only examine copy of evaluation form 11. Evaluatee signs evaluation form 12. Evaluatee*s signa­ ture does not sig­ nify agreement with evaluation 13. Evaluatee may file a dissenting statement 14. 
Data from the ninety-six respondent schools indicate that the lowest percentage of response was reported for the procedure whereby the evaluatee may only examine a copy of the evaluation form. Only 5 per cent of the ninety-six respondents indicated use of this method.

Forty-two per cent of the total schools reported use of a prescribed rating scale, while 38 per cent indicated use of performance objectives. A narrative form of evaluation was reported used by 51 per cent of the respondents, while 47 per cent indicated the use of self-evaluation. Pre-evaluation conferences were reported by 20 per cent of the schools, 47 per cent indicated the use of conferences during the evaluation, and 73 per cent reported the use of post-evaluation conferences. The evaluation is automatically reviewed by a third party in 27 per cent of the schools, while the evaluatee receives a copy of the evaluation form in 86 per cent of the schools. The evaluatee's signature does not signify agreement with the evaluation in 54 per cent of the schools. Fifty per cent of the schools indicated the evaluatee may file a dissenting statement to the evaluation, while 47 per cent reported the evaluatee may discuss the evaluation with the evaluator's superior.

The frequency of the common use of specific methods of evaluation was greatest for Class A school respondents. The frequency for Class B school respondents was slightly greater than for Class C-D schools. Only three of the fourteen methods of evaluation were used by over 50 per cent of all the schools in each of the three athletic enrollment classifications.

Question Three

3. What are principals' perceptions of formal evaluations as expressed by their responses to (a) the role of formal evaluations in improving administrative efficiency, (b) their support of formal evaluations, and (c) the role of formal evaluations in offsetting negative unofficial informal evaluations?

The overall objective of this question was to look at the principals' perceptions of formal evaluations. The respondents were the principals who indicated the use of formal evaluation procedures. To fulfill this objective, respondents were instructed to answer (yes, no) the three separate questions stated above in question three. The relationship between their answer to each question and their schools' athletic enrollment classification was then described. Each relationship was tested using the chi-square statistic.

Table 4.5 shows the sample distribution of the principals' perceptions of formal evaluations and the three school enrollment classifications. The table includes the chi-square statistic for each relationship,
The table includes the chi-square statistic for each relationship, TABLE 4.5.--Distribution of the principals' perceptions of formal evaluations and the school enrollment classification School Enrollment Classification --------------------------------A B C-D % n % n % n Principals' Perception of Formal Evaluations 1. Role of evaluations in improving admin­ istrative efficiency Chi-Square Statistic Yes No 56 44 (23) (18) 72 28 (2 1 ) ( 2) 88 12 (23) ( 3) 8.04* 2. Support of evalu­ ations Yes No 93 7 (38) ( 3) 96 4 (27) ( 1) 100 0 (26) ( 0) 2.15 3. Role of evaluations in offsetting nega­ tive unofficial informal evaluations Yes No 63 37 (26) (15) 63 37 (17) (1 0 ) 83 17 (19) ( 4) 2.97 * Statistically significant at the .05 alpha level* 79 the number of principal responses, and the percentage of principal responses. The only significant relation­ ship at the .05 alpha level was found between school enrollment classification and the principals' perception of the role of evaluations in improving administrative efficiency. Seventy per cent of the ninety-six princi­ pals indicated that evaluations helped to improve their administrative efficiency. Contributing to this sig­ nificance was the fact that within Class A schools only about half the principals (56%) indicated that formal evaluations improved administrative efficiency, while within both Class B and Class C-D schools a very high percentage of principals indicated that formal evalu­ ations improved administrative efficiency. Within each of the three athletic enrollment categories, a very high percentage of the principals (96%) indicated that they favored formal evaluations. Sixty-eight per cent of the principals indicated that official positive evaluations helped them offset unofficial negative informal evaluations. Sixty-three per cent was the lowest affirmative response and was reported by both the Class A and the Class B principals. Class C-D principals reported an 83 per cent affirmative response. Principals in Class C-D schools reported a greater affirmative response to all three categories 80 thus indicating a more positive perception of formal evaluations than reported by principals in Class A and Class B schools. Also of interest is the relationship between the responses to each of the three questions which indicate the principals' perceptions of formal evaluations. Seventy-one per cent of the principals who favor formal evaluations also agree that official positive formal evaluations help offset unofficial negative informal evaluations. One hundred per cent of the principals who support formal evaluations also indicate that evaluations helped them improve their efficiency as an administrator. Seventy-nine per cent of the principals who indicate that evaluations help improve administrative efficiency also report that official positive formal evaluations help offset unofficial negative informal evaluations. Through the principals' (yes, no) responses to the three separate questions, it was determined that 78 per cent of the ninety-six respondents indicating use of formal evaluation procedures have a favorable perception of formal evaluations. 81 Question Four Question Four 4. How are the number of years formal evaluations have been practiced and the frequency of formal evaluations related to school enrollment? 
The objective of this question was to determine whether the period of time formal evaluations have been practiced in each of the respondent schools, or the number of times formal evaluations occur, is directly or indirectly related to the enrollment of the school. Respondents were instructed to state, in years, the period of time formal evaluations had been practiced in their school. The frequency of evaluations in their schools was analyzed in terms of the number of formal evaluations experienced each year. Each relationship was tested by the one-way analysis of variance technique.

Table 4.6 shows the relationship between enrollment classification and the number of years evaluations have been practiced. Class A schools (mean = 3.63) have used evaluations slightly longer than Class C-D schools (mean = 3.23). Class B schools (mean = 2.72) were found to have used formal evaluations for the shortest period of time. The overall mean (3.19 years) suggests that secondary public school principal performance evaluation is a relatively recent development in Michigan.

TABLE 4.6.--Comparative data on the number of years evaluations have been used according to the school enrollment classification

Athletic Enrollment Classification    n     Mean    Standard Deviation
Class A                               41    3.63          2.80
Class B                               29    2.72          1.77
Class C-D                             26    3.23          1.77

Table 4.7 shows no significant relationship between school enrollment classification (category variable) and the number of years evaluations have been used (dependent variable). The number of years secondary public school principal performance evaluations have been used in a school does not appear to be directly or indirectly related to the school-size factor.

TABLE 4.7.--One-way analysis of variance of school enrollment classification and the number of years evaluations have been practiced

Source of Variation    d.f.    M.S.
Between Categories       2     7.04
Within Categories       93     5.16
Total                   95
F statistic = 1.36; p < 0.26

Table 4.8 presents the summary of data for school enrollment classification and the frequency of secondary public school principal evaluations per year. Class B schools have a slightly higher frequency of evaluations per year (mean = 1.22) than Class A and Class C-D schools. The overall mean frequency (1.18) suggests that secondary public school principals are evaluated slightly more than once each year. It is noted that there were missing data for nine respondents, who were excluded from this analysis.

TABLE 4.8.--Comparative data on the frequency of evaluations per year according to the school enrollment classification

Athletic Enrollment Classification    n     Mean    Standard Deviation
Class A                               39    1.15          0.54
Class B                               27    1.22          0.58
Class C-D                             21    1.19          0.68

Table 4.9 shows that there was no significant relationship between school enrollment classification (category variable) and the frequency of principal evaluations per year (dependent variable). A school's athletic enrollment classification does not appear to relate directly or indirectly to the number of years formal evaluations have been used in a school or to the frequency of formal evaluations.

TABLE 4.9.--One-way analysis of variance of school enrollment classification and the frequency of evaluations per year

Source of Variation    d.f.    M.S.
Between Categories       2     0.04
Within Categories       84     0.35
Total                   86
F statistic = 0.11; p < 0.90
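The analysis-of-variance entries in Table 4.7 can be recovered from the group summaries in Table 4.6 alone. The sketch below is an editorial check, assuming the standard deviations in Table 4.6 are sample (n - 1) standard deviations:

    # (n, mean, sample SD) per classification, from Table 4.6.
    groups = [(41, 3.63, 2.80), (29, 2.72, 1.77), (26, 3.23, 1.77)]

    n_total = sum(n for n, _, _ in groups)
    grand_mean = sum(n * m for n, m, _ in groups) / n_total

    ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
    ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)

    ms_between = ss_between / (len(groups) - 1)      # 7.04, as in Table 4.7
    ms_within = ss_within / (n_total - len(groups))  # 5.16, as in Table 4.7
    print(f"F = {ms_between / ms_within:.2f}")       # 1.36, as in Table 4.7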
Question Five

5. What is the relationship between the purposes for which principals are formally evaluated and the purposes for which principals feel evaluations ideally should be used?

Respondents were instructed to answer (yes, no) to the six purposes for which principals in their school are formally evaluated and to answer (yes, no) whether, in their opinion, each of the same six purposes should ideally be used for formal evaluations. Because each principal could indicate use and support of more than one of the six purposes of evaluation, an attempt was made to answer this question by looking at the six different relationships between purposes for which principals are evaluated and principals' opinions of these purposes as being ideally used in evaluations. Each relationship was analyzed by using the chi-square statistic.

Table 4.10 shows the relationships between the purposes for which principals are evaluated and the purposes for which principals feel evaluations ideally should be used. There is a significant relationship between each of the six purposes of evaluation and principals' opinions of these purposes as ideally being used in formal evaluations. Included in the table are the chi-square statistic for each comparison and the frequency and percentage of responses for evaluation purposes as experienced by principals and as recommended for being ideally used in evaluations.

TABLE 4.10.--Comparison of evaluation purposes as experienced by principals and purposes for which principals feel evaluations ideally should be used

Purposes of Evaluation -- Agree[a] n (%); Disagree n (%); Chi-Square Statistic
1. Assessing the evaluatee's present performance in accordance with prescribed standards -- 67 (70); 29 (30); 14.07*
2. Helping the evaluatee establish relevant performance goals -- 59 (62); 37 (38); 9.20*
3. Identifying areas in which improvement is needed -- 87 (91); 9 (9); 8.11*
4. Determining qualifications for permanent status -- 68 (71); 28 (29); 11.03*
5. Keeping records of performance to determine qualifications for promotion -- 55 (53); 41 (47); 8.07*
6. Establishing evidence where dismissal from service is an issue -- 64 (67); 32 (33); 15.61*
Total -- 400 (70); 176 (30)

[a] Indicates either a positive or a negative response to both the corresponding experienced and recommended purposes of evaluation.
*Statistically significant at the .05 alpha level.

There was a total of ninety-six respondents in this category. Ninety-five per cent of the respondents who experienced the use of evaluations in assessing the evaluatee's performance in accordance with prescribed standards recommended that evaluations be used for this purpose, while 98 per cent of those who experienced the use of evaluations in helping the evaluatee establish relevant performance goals recommended that evaluations include this factor. Ninety-eight per cent of the respondents who experienced the use of evaluations to identify areas in which improvement is needed suggested this to be an ideal role of evaluations.

Table 4.10 indicates that principals tend to agree strongly that the evaluation purposes which they have experienced, as identified in the table, should ideally be a part of evaluation purposes. A very high percentage of the respondents (95 to 100 per cent) who experienced evaluations to assess performance in accordance with prescribed standards, to establish relevant performance goals, and to identify areas in which improvement is needed indicated these same purposes should ideally be used in evaluations.
Question Six

6. What is the relationship between grievance procedures as experienced by principals and the use of evaluations to establish evidence where dismissal from service is an issue?

The objective of this question was to compare the specific grievance procedure accessible to each respondent with the use of evaluations in the dismissal process in the respondent's school. Respondents were instructed to identify the grievance procedure used in their school or to indicate that principals are not covered by a grievance procedure, and to answer (yes, no) whether evaluations are used to establish evidence where dismissal from service is an issue in their school. This question was tested by the one-way analysis of variance technique.

Table 4.11 presents the relationships between the use of evaluations in establishing evidence where dismissal from service is an issue and the grievance procedures accessible to principals. Included in this table are the frequency and percentage responses of grievance procedures experienced by principals according to the use of evaluations in the dismissal process.

TABLE 4.11.--Comparative data on the grievance procedures accessible to principals and the use of evaluations in establishing evidence where dismissal from service is an issue

                                                 Use of Evaluations in the Dismissal Process
                                                 Yes                No
Principal Grievance Procedures                   n     %            n     %
1. Principals are covered by their own
   grievance procedure                           3     3.2          14    14.8
2. Principals are covered by a grievance
   procedure which covers all professional
   personnel                                     2     2.1          8     8.4
3. Principals are covered by a grievance
   procedure which covers all school
   employees                                     2     2.1          0     0.0
4. Principals are covered by the teachers'
   grievance procedure but only in
   grievances involving teachers                 1     1.1          0     0.0
5. Principals are not covered by any
   grievance procedure                           25    26.0         40    42.1
Total                                            33    34.5         62    65.3

There were ninety-five respondents in this analysis. Thirty-three respondents indicated the use of evaluations in the dismissal process. Twenty-five of these thirty-three respondents were not covered by any grievance procedure. An additional forty respondents, also not covered by any grievance procedure, reported they had not experienced the use of evaluations in the dismissal process. Seventeen respondents reported they had formal grievance procedures specifically designed for principals. Fourteen of these seventeen reported they had not experienced the use of evaluations in the dismissal process.

Thirty of the ninety-five respondents reported they had access to a formal written grievance procedure. Twenty-two of the thirty principals indicated that evaluations are not used to establish evidence for dismissal from service. Sixty-five of the ninety-five respondents reported they were not covered by any grievance procedure. Forty of the sixty-five principals reported that evaluations are not used to establish evidence for dismissal from service.

Table 4.12 shows that there was no significant relationship between grievance procedures accessible to principals (dependent variable) and the use of evaluations in establishing evidence where dismissal from service is an issue (category variable).

TABLE 4.12.--One-way analysis of variance of grievance procedures accessible to principals and the use of evaluations in the dismissal process

Source of Variation    d.f.    M.S.    F Statistic
Between Categories     1       7.58    2.80 (p < .098)
Within Categories      93      2.71
Total                  94
The use of evaluations in the dismissal process does not appear to be directly or indirectly related to grievance procedures accessible to principals.

Question Seven

7. How are those who evaluate secondary public school principals and the method of evaluation related to principals' support of formal evaluations?

The objective of this question was to determine whether the principals' support of formal evaluations is directly or indirectly related to those who evaluate the principals or to the method of evaluation used. Respondents were instructed to answer (yes, no) for each of the fourteen methods of evaluation and for each of the eight listed evaluators, and to indicate support or opposition to formal evaluations of secondary public school principals. Each relationship was tested using the chi-square statistic.

Table 4.13 presents the relationship between those who evaluate secondary public school principals and principals' support of formal performance evaluations of secondary public school principals. Respondents to "those who evaluate principals" were instructed to indicate all evaluators that apply to their school. In the analysis of data, when a respondent reported the use of two or more evaluators, the evaluator with the highest professional position was tabulated. The only listed nonprofessional evaluator was the "community," which was not reported as being used by any of the ninety-five respondents. All respondents reported "those who evaluate" in either the superintendent or assistant superintendent category.

TABLE 4.13.--Comparison of those who evaluate principals and principals' support of formal evaluations

                                Principals' Support of Formal Evaluations
                                Yes              No
Evaluators by Position          n     %          n     %
Superintendent                  67    74         4     100
Assistant Superintendent        24    26         0     0

Chi-Square Statistic: 1.412

Ninety-one (96%) of the ninety-five respondents indicated support of formal evaluations. Seventy-four per cent (n=67) of these ninety-one respondents reported evaluations by the superintendent. Four respondents expressed a negative opinion of formal evaluations. Each of the four reported evaluations by the superintendent. Table 4.13 shows no significant relationship between those who evaluate principals and principals' support of formal evaluations.

Table 4.14 shows the relationship between each method of evaluation (yes, no, for each method of evaluation) and principals' support (yes, no) of formal evaluations. The table includes the chi-square statistic for each of the fourteen comparisons, the number and percentage of principals experiencing the stated method of evaluation who support formal evaluations, and the number and percentage who oppose formal evaluations. The table shows the following two significant relationships. The use of the narrative form of evaluation was reported by forty-nine of the ninety-five respondents, each of whom indicated support of formal evaluations. The practice wherein the evaluatee may file a dissenting statement to the evaluation was reported by forty-eight of the ninety-five respondents, forty-four of whom also indicated support of formal evaluations. Principals tend to favor support of formal evaluations when given an opportunity to respond to the narrative style of evaluation.

TABLE 4.14.--Comparison of methods of evaluation and principals' support of formal evaluations

                                                 Principals' Support of Formal Evaluations
                                     Method          Yes           No          Chi-Square
Method of Evaluation                 Experienced     n     %       n     %     Statistic
1. Prescribed rating scale           Yes             38    95      2     5     .11
                                     No              53    96      2     4
2. Performance objectives            Yes             37    100     0     0     2.66
                                     No              54    93      4     7
3. Narrative form                    Yes             49    100     0     0     4.45*
                                     No              42    91      4     9
4. Self-evaluation                   Yes             43    96      2     4     .01
                                     No              48    96      2     4
5. Pre-evaluation conference         Yes             18    95      1     5     .07
                                     No              73    96      3     4
6. Conference during evaluation      Yes             43    96      2     4     .01
   process                           No              48    96      2     4
7. Post evaluation conference        Yes             68    97      2     3     1.21
                                     No              23    92      2     8
8. Automatic evaluation review       Yes             24    92      2     8     1.08
   by third party                    No              67    97      2     3
9. Evaluatee receives copy of        Yes             78    95      4     5     .66
   evaluation form                   No              13    100     0     0
10. Evaluatee may only examine       Yes             2     100     0     0     .09
    copy of evaluation form          No              89    96      4     4
11. Evaluatee signs evaluation       Yes             59    94      4     6     2.12
    form                             No              32    100     0     0
12. Evaluatee's signature does not   Yes             48    92      4     8     3.45
    signify agreement with           No              43    100     0     0
    evaluation
13. Evaluatee may file a dissenting  Yes             44    91      4     9     4.09*
    statement                        No              47    100     0     0
14. Evaluatee may discuss            Yes             41    93      3     7     1.38
    evaluation with evaluator's      No              50    98      1     2
    superior

*Statistically significant at the .05 alpha level.
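As an aside (not part of the original analysis), the significant statistic for the narrative form in Table 4.14 can be reproduced from its four cell counts; setting correction=False matches the uncorrected chi-square used in the table:

```python
from scipy.stats import chi2_contingency

# Cell counts for the narrative form of evaluation from Table 4.14:
# rows = experienced the narrative form (yes, no),
# columns = support formal evaluations (yes, no).
observed = [[49, 0],
            [42, 4]]

# correction=False (no Yates correction) recovers the reported 4.45, which
# exceeds the .05 critical value of 3.84 for one degree of freedom.
chi2, p, dof, _ = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```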
Eighty-six per cent (n=78) of those principals who support formal evaluations (n=91) reported experiencing the method of evaluation wherein the evaluatee receives a copy of the evaluation, while 75 per cent (n=68) of those principals who support formal evaluations reported experiencing the post evaluation conference method. Principals who are involved in follow-up techniques of evaluations tend to be supportive of formal evaluations.

The use of a prescribed rating scale was reported by forty of the ninety-five respondents, thirty-eight of whom indicated support of formal evaluations. In comparison, the use of performance objectives was reported by thirty-seven of the ninety-five respondents, each of whom indicated support of formal evaluations. Principals evaluated by the use of performance objectives reported just slightly greater support of formal evaluations than did principals evaluated by a prescribed rating scale.

The four respondents opposing formal evaluations indicated common use of four of the fourteen stated methods of evaluation and no use of three of the fourteen stated methods of evaluation. The support of formal evaluations by principals who indicated the use of formal performance evaluation procedures does not appear to be directly or indirectly related to either the one who evaluates principals or to the method of evaluation.

Question Eight

8. How are those who evaluate secondary public school principals and the purposes for which principals are formally evaluated related to the principals' perceived improvement in administrative efficiency?

The purpose of this question was to determine if improvement in administrative efficiency, as perceived by the principals involved in formal evaluation procedures, is directly or indirectly related to those who formally evaluate secondary public school principals or to the purposes for which principals are evaluated. Respondents were instructed to answer (yes, no) to each of the six purposes of evaluation, to each of the eight listed evaluators, and to indicate support or opposition to formal evaluations of secondary public school principals. Each relationship was tested by the chi-square statistic.
Table 4.15 shows the comparison between those who evaluate secondary public school principals and improvement in administrative efficiency as perceived by the principals. Respondents to "those who evaluate principals" were instructed to indicate all evaluators that apply to their school. In the analysis of data, when a respondent reported the use of two or more evaluators, the evaluator in the highest professional position was tabulated. All respondents reported "those who evaluate" in either the superintendent or assistant superintendent category.

TABLE 4.15.--Comparison of those who evaluate and improvement in administrative efficiency as perceived by the principals

                                Perceived Improvement in Administrative Efficiency
                                Yes              No
Evaluators by Position          n     %          n     %
Superintendent                  51    76         21    72
Assistant Superintendent        16    24         8     28

Chi-Square Statistic: .148

There was a total of ninety-six respondents. Sixty-seven (70%) of the respondents indicated evaluations helped to improve their administrative efficiency. Fifty-one (76%) of these sixty-seven respondents reported evaluations by the superintendent. Twenty-nine respondents reported evaluations did not help to improve their administrative efficiency. Twenty-one (72%) of these twenty-nine respondents reported evaluations by the superintendent. Table 4.15 shows no significant relationship between "those who evaluate principals" and improvement in administrative efficiency as perceived by the principals involved in the formal evaluation procedure.

Table 4.16 presents the relationships between purposes for which principals are evaluated and improvement in administrative efficiency as perceived by the principals. The table includes the chi-square statistic for the above relationships and the number and percentage of principals indicating the use of the stated purpose of evaluation who perceive improvement in administrative efficiency and those who perceive no improvement in administrative efficiency.

TABLE 4.16.--Comparison of purposes for which principals are evaluated and improvement in administrative efficiency as perceived by the principals

                                              Improvement in Administrative Efficiency
Purposes for Which Principals    Purpose          Yes           No          Chi-Square
Are Evaluated                    Used             n     %       n     %     Statistic
1. Assessing the evaluatee's     Yes              42    75      14    25    1.73
   present performance in        No               25    63      15    37
   accordance with prescribed
   standards
2. Helping the evaluatee         Yes              37    74      13    26    .88
   establish relevant            No               30    65      16    35
   performance goals
3. Identifying areas in which    Yes              62    71      25    29    .96
   improvement is needed         No               5     56      4     44
4. Determining qualifications    Yes              8     73      3     27    .05
   for permanent status          No               59    69      26    31
5. Keeping records of            Yes              21    81      5     19    2.04
   performance to determine      No               46    66      24    34
   qualifications for promotion
6. Establishing evidence where   Yes              24    73      9     27    .21
   dismissal from service is     No               43    68      20    32
   an issue

The table shows no significant relationship between purposes for which principals are evaluated and improvement in administrative efficiency as perceived by the respondents. The percentage of principals who experienced the stated purpose of evaluation and who perceived improvement in administrative efficiency ranged from 71 per cent to 81 per cent.
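As a further aside (not part of the original analysis), each chi-square statistic in Table 4.16 can be recomputed from the four cell counts in its row; the sketch below loops over the six purposes using the uncorrected statistic:

```python
from scipy.stats import chi2_contingency

# Cell counts from Table 4.16: for each purpose,
# [[used: improved yes, improved no], [not used: improved yes, improved no]].
# correction=False matches the uncorrected statistics reported in the table
# (e.g. 1.73 for purpose 1).
tables = {
    "1. Prescribed standards": [[42, 14], [25, 15]],
    "2. Performance goals":    [[37, 13], [30, 16]],
    "3. Improvement areas":    [[62, 25], [5, 4]],
    "4. Permanent status":     [[8, 3], [59, 26]],
    "5. Promotion records":    [[21, 5], [46, 24]],
    "6. Dismissal evidence":   [[24, 9], [43, 20]],
}

for purpose, observed in tables.items():
    chi2, p, dof, _ = chi2_contingency(observed, correction=False)
    print(f"{purpose}: chi-square = {chi2:.2f}, p = {p:.2f}")
```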
Seventy-one per cent (n=62) of those principals who indicated the use of the purpose of evaluation of "identifying areas in which improvement is needed" (n=87) indicated they perceived improvement in administrative efficiency. Seventy-five per cent (n=42) of those principals who indicated the use of the purpose of evaluation of "assessing the evaluatee's present performance in accordance with prescribed standards" (n=56) also perceived improvement in administrative efficiency. Eighty-one per cent (n=21) of those principals who indicated the use of the purpose of evaluation of "keeping records of performance to determine qualifications for promotion" (n=26) indicated they perceived improvement in administrative efficiency.

Ninety-three per cent (n=62) of those principals who perceived improvement in administrative efficiency (n=67) reported the use of the evaluation purpose of "identifying areas in which improvement is needed." When evaluations are used to identify areas in which improvement is needed, principals tend to perceive improvement in their administrative efficiency.

Improvement in administrative efficiency, as perceived by the principals involved in formal evaluation procedures, does not appear to be directly or indirectly related to those who formally evaluate principals or to the purposes for which principals are evaluated.

Question Nine

9. How do schools which use a prescribed rating scale method of evaluation differ from schools which use the performance objective method of evaluation in terms of enrollment, geographic area, and metro/nonmetro county status?

This question was analyzed by describing secondary public schools that use the prescribed rating scale method of evaluation and schools that use the performance objective method of evaluation according to their athletic enrollment classification, geographic area, and metro/nonmetro county status. Respondents were instructed to answer (yes, no) to the prescribed rating scale method of evaluation or the performance objective method of evaluation if either method was used in their school. Due to the nature of these two methods of evaluation, they could not be used simultaneously.

The prescribed rating scale method of evaluation refers to the evaluation procedure whereby the evaluatee is assessed by prescribed performance standards, such as indicators of character, skill, and performance, which have been chosen as standards against which all personnel in a similar position will be assessed. The performance objectives method of evaluation refers to evaluation procedures which are based on individual job targets or performance goals, against which each evaluatee will be rated as to the degree of accomplishment of each goal.

Table 4.17 presents the distribution of schools using a prescribed rating scale method of evaluation and schools using performance objectives method of evaluation according to their athletic enrollment classification. Eighty-one per cent (n=77) of the ninety-six respondents reported using either a prescribed rating scale or performance objectives in their evaluation procedure. Class A school respondents indicated that thirty-seven of forty-one schools (90%) use either the prescribed rating scale method (n=19) or performance objectives (n=18).
TABLE 4.17.--Distribution of schools using the prescribed rating scale method of evaluation and schools using performance objective method of evaluation by athletic enrollment classification

Athletic Enrollment    Number of      Prescribed Scale    Performance Objectives    Other Methods
Classification         Respondents    n     %             n     %                   n     %
A                      41             19    46            18    44                  4     10
B                      29             11    38            8     28                  10    34
C-D                    26             10    38            11    42                  5     20
Total                  96             40    42            37    39                  19    19

Eleven of the twenty-nine Class B school respondents reported using a prescribed rating scale while eight of the twenty-nine respondents indicated use of performance objectives. Ten of the twenty-six Class C-D school respondents reported use of the prescribed rating scale while eleven of the twenty-six respondents indicated use of performance objectives. Respondents from Class A and Class C-D schools, who marked one of the two categories, were nearly evenly divided in their use of the two stated methods of evaluation. Class B school respondents slightly favored the use of the prescribed rating scale method of evaluation.

Table 4.18 shows the distribution of schools using a prescribed rating scale method of evaluation and schools using performance objectives method of evaluation according to their Michigan Education Association geographic area (see Appendix A). Respondents using a prescribed rating scale method of evaluation ranged from a low of 17 per cent for Area IX (MEA Regions 14, 15) to a high of 67 per cent for Area II (MEA Region 4). Respondents using the performance objectives method of evaluation ranged from nonuse for Area II (MEA Region 4) to a high of 52 per cent for Area I (MEA Regions 1, 2, 3). Schools in the Southeast section of the state (Areas I and IV) favor the use of the performance objective evaluation method (see Appendix A).

TABLE 4.18.--Distribution of schools using the prescribed rating scale method of evaluation and schools using performance objective method of evaluation by geographic area

                                     Number of      Prescribed Scale    Performance Objectives    Other Methods
Geographic Area                      Respondents    n     %             n     %                   n     %
Area I (MEA Regions 1, 2, 3)         25             11    44            13    52                  1     4
Area II (MEA Region 4)               3              2     67            0     0                   1     33
Area III (MEA Region 5)              9              5     56            3     33                  1     11
Area IV (MEA Regions 6, 7)           16             5     31            9     56                  2     19
Area V (MEA Region 8)                6              3     50            2     33                  1     17
Area VI (MEA Region 9)               8              5     63            3     37                  0     0
Area VII (MEA Regions 10, 11)        9              2     22            2     22                  5     56
Area VIII (MEA Regions 12, 13)       10             4     40            2     20                  4     40
Area IX (MEA Regions 14, 15)         6              1     17            1     17                  4     66
Area X (MEA Regions 16, 17, 18)      4              2     50            2     50                  0     0
Total                                96             40    42            37    39                  19    19

Other geographic areas of the state do not show a clear preference between the two methods of evaluation.

Table 4.19 presents the distribution of schools using a prescribed rating scale and schools using performance objectives according to their metro/nonmetro status. Eighty-one per cent (n=51) of the sixty-three metro school respondents and 79 per cent (n=26) of the thirty-three nonmetro school respondents reported using either the prescribed rating scale method of evaluation or performance objective method of evaluation. Twenty-seven metro school respondents and thirteen nonmetro school respondents reported using the prescribed rating scale method in the evaluation procedure while twenty-four metro schools and thirteen nonmetro schools indicated the use of performance objective method of evaluation.
The use of the prescribed rating scale was slightly favored by the metro school respondents while the nonmetro school respondents were evenly divided between the two methods.

The use of the prescribed rating scale method of evaluation and the use of the performance objective method of evaluation do not appear to be directly or indirectly related to the school athletic enrollment classification. Schools in the Southeast section of the state (Geographic Areas I and IV) favor the use of the performance objective method of evaluation (see Appendix A). Other geographic areas of the state do not show a clear preference between the two methods of evaluation.

TABLE 4.19.--Distribution of schools using the prescribed rating scale method of evaluation and schools using performance objective method of evaluation by county status

County      Number of      Prescribed Scale    Performance Objectives    Other Methods
Status      Respondents    n     %             n     %                   n     %
Metro       63             27    43            24    38                  12    19
Nonmetro    33             13    39            13    39                  7     22
Total       96             40    42            37    39                  19    19

Metro school respondents using one of the two methods of evaluation slightly favor the prescribed rating scale method of evaluation while nonmetro schools were evenly divided between the two methods. Of the seventy-seven respondents using one of the two methods of evaluation, forty (42 per cent of all ninety-six respondents) reported using the prescribed rating scale while thirty-seven (39 per cent) indicated using the performance objective method of evaluation. Nineteen respondents reported using other methods of evaluation.

Question Ten

10. What is the relationship between comprehensive evaluation technique scores and school enrollment?

The comprehensive evaluation technique score is defined as the sum of responses to the fourteen stated practices which are included in the respondents' evaluation procedures and the seven stated purposes for which the respondents are evaluated. Responses were weighted according to the researcher's assessed importance of the item in determining the comprehensiveness of the evaluation technique. Eight evaluation practices (items 5a-5h) were weighted with a value of 2 (see Appendix D). The remaining six evaluation practices (items 5i-5n) and each of the seven stated evaluation purposes (items 6a-6g) were assigned a value of 1 by the researcher (see Appendix D). Respondents were instructed to answer (yes, no) to each of the stated practices included in their evaluation procedures and to each of the stated purposes for which they are evaluated. Three basic evaluation forms were listed (items 5a-5c) from which respondents made a single choice. In accordance with the weighted items and respondents' choice of one of the three basic evaluation forms, 25 was established by the researcher as the maximum comprehensive evaluation technique score.

The objective of this question was to determine if the comprehensiveness of the evaluation technique, as measured by the comprehensive evaluation technique score, was directly or indirectly related to the school athletic enrollment classification. This question was tested by the one-way analysis of variance technique.
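To make the weighting rule concrete, the short sketch below is a hypothetical scoring helper (the function name and the example respondent are illustrative, not taken from the study); it sums the weighted yes-responses for the items defined above:

```python
# Hypothetical scoring helper (illustrative; not the study's actual program).
# Practices 5a-5h carry a weight of 2; practices 5i-5n and purposes 6a-6g a
# weight of 1. Items 5a-5c are mutually exclusive evaluation forms, so only
# one of the three form items can contribute, giving a maximum score of 25.

PRACTICE_WEIGHTS = {
    "5a": 2, "5b": 2, "5c": 2, "5d": 2, "5e": 2, "5f": 2, "5g": 2, "5h": 2,
    "5i": 1, "5j": 1, "5k": 1, "5l": 1, "5m": 1, "5n": 1,
}
PURPOSES = {"6a", "6b", "6c", "6d", "6e", "6f", "6g"}  # each weighted 1

def technique_score(practices_checked, purposes_checked):
    """Comprehensive evaluation technique score for one respondent."""
    score = sum(PRACTICE_WEIGHTS[item] for item in practices_checked)
    score += len(PURPOSES & purposes_checked)
    return score

# Example: performance-objective form (5b), post-evaluation conference (5g),
# evaluatee keeps a copy of the form (5i), and three purposes checked.
print(technique_score({"5b", "5g", "5i"}, {"6a", "6b", "6c"}))  # -> 8
```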
Table 4.20 shows the relationship between the school athletic enrollment classification and the comprehensive evaluation technique score. Class A school respondents (X=13.73) and Class C-D school respondents (X=13.38) reported comprehensive evaluation technique scores slightly higher than Class B school respondents.

APPENDIX B

[Map of Michigan counties; * denotes metropolitan counties.]

APPENDIX C

SAMPLING DISTRIBUTION MAP OF PUBLIC HIGH SCHOOLS SELECTED FOR THE STUDY

APPENDIX D

PRINCIPAL'S PERFORMANCE EVALUATION QUESTIONNAIRE

May 10, 1973

High School Enrollment ______________    MEA Region of your school ______

1. Does your school system have a formal method of periodically evaluating the performance of high school principals? YES______ NO______
If NO, please so indicate and return the questionnaire in the self-addressed stamped envelope.
If YES, please complete the remainder of the questionnaire and return it in the self-addressed stamped envelope.

2. How long has your school used a formal evaluation procedure for high school principals? __________ years.

3. Must high school principals serve a probationary period? YES______, for a ______ year period. NO______

4. How frequent are evaluations for high school principals? During probation, how often? ________ Thereafter, how often? ________

5. Which of the following practices are included in your evaluation procedures? CHECK ALL THAT APPLY
a. Use form which calls for rating in terms of a prescribed scale.
b. Use form which calls for specific performance objectives.
c. Use narrative form (providing space for evaluator's comments only).
d. Self-evaluation is required.
e. Conference on the upcoming evaluation is held before the evaluation period begins.
f. Informal evaluator-evaluatee "conferences" are held during the evaluation process.
g. Conference is held after evaluation is completed.
h. Evaluation is automatically reviewed by someone other than the original evaluator.
i. The evaluatee receives a copy of the completed evaluation for his files.
j. The evaluatee is shown, but may not keep, a copy of the evaluation.
k. The evaluatee signs the evaluation form.
l. The evaluatee's signature does not signify that he concurs with the assessment.
m. If he is not satisfied with the assessment, the evaluatee may file a dissenting statement, which is appended to the evaluation form.
n. The evaluatee may request a conference with the evaluator's superior if he is not satisfied with the evaluation.

6. For what purposes are principals evaluated? (In the list below, please check each purpose for which, in your experience, evaluations have actually been applied in your system--NOT the purposes for which evaluations ideally should be used.)
a. To assess the evaluatee's present performance in accordance with prescribed standards.
b. To help the evaluatee establish relevant performance goals.
c. To identify areas in which improvement is needed.
d. To determine qualifications for permanent status.
e. To have records of performance to determine qualifications for promotion.
f. To establish evidence where dismissal from service is an issue.
g. Other, e.g. salary increments, compliance with board policy (please specify):

7. For what purposes do you feel evaluations ideally should be used? (In the list below, CHECK ALL THAT APPLY.)
a. To assess the evaluatee's present performance in accordance with prescribed standards.
b. To help the evaluatee establish relevant performance goals.
c. To identify areas in which improvement is needed.
d. To determine qualifications for permanent status.
e. To have records of performance to determine qualifications for promotion.
f. To establish evidence where dismissal from service is an issue.
g. Other, e.g. salary increments, compliance with board policy (please specify):

8. Have evaluations helped to improve your efficiency as an administrator? YES______ NO______

9. Do you favor formal evaluations of high school principals? YES______ NO______

10. Who formally evaluates high school principals in your school? CHECK ALL THAT APPLY:
The Superintendent______          Supervisors______
Assistant Superintendent______    Teachers______
Other Principals______            Students______
Assistant Principals______        Community______
Others, including central office personnel (please list):

11. Do official positive formal evaluations help you offset unofficial negative informal evaluations? YES______ NO______

12. Are high school principals in your school covered by a formal, written grievance procedure?
a. Principals are covered by their own grievance procedure.
b. Principals are covered by a grievance procedure which covers all professional personnel.
c. Principals are covered by a grievance procedure which covers all school employees.
d. Principals are covered by the teachers' grievance procedure but only in grievances involving teachers.
e. Principals are not covered by any grievance procedure.

13. Comments / Remarks:

APPENDIX E

COVER LETTER

May 10, 1973

Dear Principal:

Your school has been selected as part of a statewide sample based upon enrollment and geographical location to participate in a study concerning evaluation of secondary public school principals in Michigan.

This study is being conducted to identify the current procedures for evaluating the performance of secondary public school principals in Michigan and to provide preliminary criteria for developing improved techniques of evaluation. Principals are the only personnel being surveyed for this study.

Please return the completed questionnaire in the enclosed self-addressed stamped envelope. It has been designed so that it can be completed in approximately six minutes. Your responses will remain both confidential and anonymous. Questionnaires are coded only for statistical purposes. No school or principal will be individually identified.

In order that the study be meaningful, it is important for you to participate. Your cooperation and assistance are greatly appreciated. If you would like to examine the results of the study, please so indicate in item 13 of the questionnaire.

Sincerely yours,

Robert M. Towns, Principal
Beecher High School

APPENDIX F

FOLLOW-UP LETTER

May 24, 1973

Dear Principal:

To date, the completed Principal's Performance Evaluation Questionnaire mailed on May 10, 1973, has not been returned. Please find enclosed, for your convenience, a second questionnaire and a self-addressed stamped envelope. Your response is urgently needed in order for the study to be meaningful.

Your assistance in completing and returning the enclosed questionnaire will be sincerely appreciated and will contribute greatly toward defining the status of secondary school principal evaluation procedures in Michigan.

This study is being conducted to provide data for my doctoral dissertation at Michigan State University under the direction of Dr. Van Johnson in the Department of Administration and Higher Education.

Sincerely yours,

Robert M.
Towns, Principal Beecher High School 160 APPENDIX 6 SAMPLE OF PUBLIC HIGH SCHOOLS APPENDIX G SAMPLE OF PUBLIC HIGH SCHOOLS County Name of School AREA 1 (MEA Regions 1 , 2 Athletic Classifi­ cation MEA Region Grades Metre NonMetre , and 3) Region #1 Detroit, Cass Tech­ nical Wayne A 1 9-12 M Detroit, Central Wayne A 1 9-12 M Detroit, Cody Wayne A 1 10-12 M Detroit, Denby Wayne A 1 9-12 M Detroit, Finney Wayne A 1 9-12 M Detroit, Ford Wayne A 1 10-12 M Detroit, Mackenzie Wayne A 1 9-12 M Detroit, Mumford Wayne A 1 9-12 M Detroit, North­ western Wayne A 1 10-12 M Detroit, South­ eastern Wayne A 1 9-12 M Detroit, South­ western Wayne A 1 9-12 M Detroit, Western Wayne A 1 9-12 M Ecorse Wayne B 1 8-12 M Allen Park Wayne A 2 10-12 M Dearborn, Fordson Wayne A 2 10-12 M Garden City, Garden City East Wayne A 2 10-12 M Region #2 161 162 Athletic Classifi­ cation „ “ ?A Region Grades Metro/ “?n_ Metro Name of School County Garden City, West Senior Wayne A 2 10-12 M Grosse Pointe, Grosse Pointe North Wayne A 2 10-12 M Inkster, Cherry Hill Wayne A 2 9-12 M Lincoln Park Wayne A 2 10-12 M Livonia, Bentley Wayne A 2 10-12 M Livonia, S tevenson Wayne A 2 10-12 M Melvindale Wayne A 2 9-12 M Plymouth Wayne A 2 10-12 M Southgate, Schafer Wayne A 2 9-12 M Taylor, John F. Kennedy Wayne A 2 10-12 M Wayne, John Glenn Wayne A 2 9-12 M Wyandotte, Theo­ dore Roosevelt Wayne A 2 10-12 M Dearborn Heights, Riverside Wayne B 2 8-12 M Flat Rock Wayne B 2 10-12 M Grosse Isle Wayne B 2 10-12 M Inkster Wayne B 2 9-12 M Livonia, Churchill Wayne A 2 10-12 M Rockwood, Calson Wayne B 2 7-12 M Adrian Lenawee A 3 9-12 M Ann Arbor, Pioneer Washtenaw A 3 10-12 M Region #3 163 Athletic Classifi­ cation Name of School County Monroe Monroe Temperance, Bed­ ford MOnroe A Blissfield Lenawee Chelsea MEA Region Grades Metro/ NonMetro 9-12 M 3 10-12 M C 3 9-12 NM Washtenaw B 3 9-12 M Dexter Washtenaw B 3 9-12 M Jackson, North­ west Jackson B 3 10-12 M Milan Washtenaw B 3 9-12 M Monroe, Jefferson Monroe B 3 10-12 M Parma, Western Jackson B 3 10-12 M Clinton Lenawee C 3 7-12 NM Concord Jackson C 3 9-12 M Dundee Monroe C 3 7-12 M Grass Lake Jackson D 3 7-12 M Morenci Lenawee C 3 9-12 NM Onsted Lenawee C 3 7-12 NM Ottawa Lake, White* ford Monroe C 3 7-12 M Sand Creek Lenawee D 3 9-12 M Springport Jackson C 3 9-12 M Petersburg, Sum­ mer fie Id Monroe D 3 9-12 M Whitmore Lake Washtenaw C 3 7-12 M 164 County Name of School Athletic Classifi­ cation mpa „ Region Grades Metro/ NonMetro AREA 2 (MEA Region 4) Region #4 Hastings Barry B 9-12 NM Battle Creek, Harper Creek Calhoun B 10-12 NM Battle Creek, Pennfield Calhoun B 9-12 NM Coldwater Branch B 10-12 NM Marshall Calhoun B 9-12 NM Battle Creek, Springfield Calhoun C 9-12 NM Jonesville Hillsdale C 7-12 NM Middleville Barry c 9-12 NM Olivet Eaton c 7-12 M Athens Calhoun c 8-12 NM Litchfield Hillsdale D 8-12 NM Tekonsha, Rose D. Warwick Calhoun D 7-12 NM Waldron Hillsdale D 7-12 NM Kalamazoo, Loy Norrix Kalamazoo A 5 10-12 M Niles Berrien A 5 10-12 NM St. Joseph Berrien A 5 10-12 NM Comstock Kalamazoo B 5 9-12 AREA 3 (MEA Region 5) Region #5 M 165 Athletic Classifi­ cation MEA Region Grades Metro/ NonMetro Name of School County Dowagiac, Union Cass, Berrien Van Buren B 5 10-12 NM Edwardsburg Cass B 5 9-12 NM South Haven Van Buren B 5 9-12 NM Stevensville, Lakeshore Berrien B 10-12 NM Three Oaks, River Valley Berrien B 5 9-12 NM Vicksburg Kalamazoo B 5 9-12 M Cassopolis Cass C 5 9-12 NM Colon St. Joseph D 5 7-12 NM Constantine St. 
Joseph C 5 9-12 NM Decatur Van Buren C 5 7-12 NM Eau Claire Berrien C 5 9-12 NM Gobles Van Buren D 5 7-12 NM Hartford Van Buren C 5 7-12 NM Watervliet Berrien C 5 9-12 NM Burr Oak St. Joseph D 5 9-12 NM Climax, ClimaxScott Kalamazoo D 5 7-12 M Covert Van Buren D 5 9-12 NM Galien Berrien D 5 7-12 NM Schoolcraft Kalamazoo D 5 9-12 M 166 County Name of School AREA 4 (MEA Regions Region Athletic Classifi­ cation MEA Region Grades Metro/ NonMetro and 7) 6 #6 Center Line Macomb A 6 10-12 M Mt. Clemens Macomb A 6 9-12 M Roseville Macomb A 6 10-12 M St. Clair, South Lake Macomb A 6 9-12 M Warren Macomb A 6 10-12 M Warren, Cousino Macomb A 6 10-12 M Warren, Fitz­ gerald Macomb A 6 7-12 M Warren, Lincoln Macomb A 6 10-12 M Warren, Warren Wbods Macomb A 6 9-12 M Algonac St. Clair B 6 9-12 NM Mt. Clemens, Chippewa Macomb B 9-12 M Mt. Clemens, Clintondale Macomb A 6 9-12 M Richmond Macomb C 6 7-12 M St. Clair, St. Clair St. Clair B 6 9-12 NM Armada Macomb C 6 8-12 M Memphis St. Clair C 6 8-12 NM 10-12 M- Region #7 Berkley Oakland 167 Athletic Classification MEA *®g on Grades Metro/ NonMetro Name of School County Birmingham, Ernest W. Seaholm Oakland 10-12 M Bloomfield Hills, Andover Oakland 10-12 M Bloomfield Hills, Lahser Oakland A 10-12 M Clarkston Oakland A 10-12 M Clawson Oakland A 10-12 M Farmington, North Farm­ ington Oakland A 10-12 M Hazel Park Oakland A 10-12 M Madison Heights, Lamphere Oakland A 10-12 M Oak Park Oakland A 10-12 M Rochester Oakland A 9-12 M Royal Oak, Dondero Oakland 9-12 M Royal Oak, Kim­ ball Oakland 9-12 M Walled Lake, Walled Lake Central Oakland 9-12 M Walled Lake, Walled Lake Western Oakland 9-12 M Auburn Heights, Avondale Oakland B 10-12 M Holly Oakland B 9-12 M Madison Heights, Madison Oakland B 9-12 M 168 Name of School County Athletic Classifi­ cation Ortonville, Brandon Oakland C 7 9-12 M M AREA 5 (MEA Region Region MEA Region Grades Metro/ NonMetro 8) #8 Grand Ledge Eaton A 8 9-12 Howell Livingston A 8 10-12 NM Lansing, Waverly Ingham A 8 10-12 M Owosoo Shiawassee A 8 9-12 NM Brighton Livingston B 8 9-12 NM Corunna Shiawassee B 8 9-12 NM Durand Shiawassee B 8 9-12 NM Pickney Livingston B 8 9-12 NM St. 
Johns Clinton B 8 10-12 Byron Shiawassee C 8 7-12 NM Dewitt Clinton c 8 9-12 M Haslett Ingham c 8 9-12 M Perry Shiawassee c 8 7-12 NM Pewamo, PewamoWestphalia Clinton c 8 9-12 M Stockbridge Ingham c 8 7-12 M Ashley Gratiot D 8 7-12 NM Dansville Ingham D 8 7-12 M Fowler Clinton D 8 7-12 M Morrice Shiawassee D 8 9-12 NM M 169 County Athletic Classifi­ cation Grand Haven Ottawa A 9 10-12 M Grand Rapids, East Grand Rapids Kent A 9 9-12 M Grand Rapids, Forest Hills Kent B 9 9-12 M Grand Rapids, Union Kent A 9 10-12 M Ionia Ionia B 9 9-12 NM Caledonia Kent C 9 9-12 M Cedar Springs Kent B 9 9-12 M Coopersville Ottawa B 9 9-12 M Greenville Montcalm B 9 9-12 NM Hudsonville Ottawa B 9 9-12 M Jenison Ottawa B 9 7-12 M Lake Odessa, Lakewood Ionia B 9 9-12 NM Lowell Allegan B 9 9-12 M Way land, Way land Union Allegan B 9 9-12 NM Wyoming, Godwin Kent B 9 9-12 M Wyoming, Rogers Kent B 9 10-12 M Byron Center Kent C 9 9-12 M Carson City, Carson City Crystal Montcalm C 9 7-12 NM Name of School AREA 6 MEA Region Grades Metro, NonMetro (MEA Region 9) Region #9 170 Athletic Classifi­ cation Grades Metro/ NonMetro 9 9-12 M C 9 9-12 NM Allegan C 9 9-12 NM Hamilton Allegan C 9 7-12 NM Lakeview Montcalm C 9 7-12 NM Martin Allegan D 9 7-12 NM Wyoming, Kent Occupational Kent D 9 10-12 M Name of School County Comstock Park Kent C Edmore Montcalm Fennville AREA 7 (MEA Regions 10 and Mea Region 11) Region #10 Flint, Clio Genesee A 10 10-12 M Flint, Kearsley Genesee A 10 10-12 M Flint, South­ western Genesee A 10 10-12 M Flushing Genesee A 10 9-12 M Grand Blanc Genesee A 10 9-12 M Lapeer Lapeer A 10 9-12 M Swartz Creek Genesee A 10 9-12 M Fenton, Lake Fenton Genesee B 10 7-12 M Flint, Ainsworth Genesee B 10 9-12 M Flint, Hamady Genesee B 10 9-12 M Linden Genesee B 10 9-12 M GeneseeLapeer B 10 9-12 M Otisville, Lakeville Memorial 171 Metro, NonMetro Name of School County Athletic Classifi­ cation MEA Region Grades Almont Lapeer D 10 8-12 M Flint, Bendle Genesee C 10 10-12 M Bridgeport Saginaw A 11 9-12 M Saginaw, Arthur Hill Saginaw A 11 10-12 M Bad Axe Huron C 11 9-12 NM Birch Run Saginaw B 11 9-12 M Caro Tuscola B 11 9-12 NM Ithaca Gratiot C 11 9-12 NM Pigeon, Laker Huron B 11 9-12 NM Vassar Tuscola B 11 9-12 NM Brown City Sanilac C 11 7-12 NM Cass City Tuscola C 11 9-12 NM Deckerville Sanilac c 11 7-12 NM Fairgrove, Akron-Fairgrove Tuscola D 11 9-12 NM Frankenmuth Saginaw B 11 9-12 M Harbor Beach Huron C 11 7-12 NM Reese Tuscola C 11 7-12 NM St. Charles Saginaw c 11 9-12 M Carsonville Sanilac D 11 K—12 NM Caseville Huron D 11 7-12 NM Kingston Tuscola D 11 7-12 NM Region #11 172 MEA Region Grades Huron D 11 K—12 NM TuscolaHuron C 11 7-12 NM County Port Hope Sebewalng AREA 8 Metro/ NonMetro Athletic Classifi­ cation Name of School (MEA Regions 12 and 13) Region #12 Bay City, Handy Bay A 12 9-12 NM Midland Midland A 12 10-12 NM Bay City, John Glenn Bay A 12 9-12 NM Clare Clare C 12 7-12 NM Gladwin Gladwin c 12 9-12 NM Oscoda Iosco B 12 9-12 NM Pinconning Bay B 12 7-12 NM Beaverton Gladwin C 12 9-12 NM Coleman Midland C 12 9-12 NM Farwell Clare C 12 7-12 NM Harrison Clare C 12 9-12 NM Shepherd Isabella C 12 9-12 NM Mt. 
Pleasant, Beal City Isabella D 12 9-12 NM
Whittemore, Whittemore-Prescott Iosco D 12 7-12 NM

Region #13
Muskegon Muskegon A 13 10-12 M
Big Rapids Mecosta B 13 9-12 NM
Fremont Newaygo B 13 10-12 NM
Fruitport Muskegon B 13 9-12 M
Manistee Manistee B 13 7-12 NM
North Muskegon, Reeths-Puffer Muskegon B 13 10-12 M
Whitehall Muskegon B 13 9-12 M
Hart Oceana C 13 7-12 NM
Morley Mecosta C 13 7-12 NM
Newaygo Newaygo C 13 9-12 NM
North Muskegon Muskegon C 13 7-12 M
Ravenna Muskegon C 13 9-12 M
Reed City Osceola C 13 9-12 NM
Shelby Oceana C 13 9-12 NM
Brethren Manistee D 13 7-12 NM
Free Soil Mason D 13 9-12 NM
Marion Osceola D 13 7-12 NM
Pentwater Oceana D 13 7-12 NM
Walkerville Oceana D 13 7-12 NM

AREA 9 (MEA Regions 14 and 15)

Region #14
Alpena Alpena A 14 10-12 NM
Cheboygan Cheboygan B 14 7-12 NM
Rogers City Presque Isle C 14 9-12 NM
Charlevoix Charlevoix C 14 9-12 NM
Grayling Crawford C 14 7-12 NM
Onaway Presque Isle-Cheboygan C 14 9-12 NM
Atlanta Montmorency D 14 9-12 NM
Genesee Otsego D 14 7-12 NM
Hillman Montmorency D 14 7-12 NM
Indian River Cheboygan D 14 7-12 NM
Mackinaw City Cheboygan D 14 7-12 NM
Pellston Emmet-Cheboygan D 14 9-12 NM
Posen Presque Isle D 14 9-12 NM

Region #15
Traverse City Grand Traverse A 15 10-12 NM
Cadillac Wexford B 15 10-12 NM
Kalkaska Kalkaska C 15 9-12 NM
Bellaire Antrim D 15 7-12 NM
Central Lake Antrim D 15 9-12 NM
Ellsworth Antrim D 15 9-12 NM
Kingsley Gd. Traverse D 15 7-12 NM
Lake City Missaukee D 15 9-12 NM
Leland Leelanau D 15 7-12 NM
Mancelona Antrim D 15 7-12 NM
Suttons Bay Leelanau D 15 7-12 NM

AREA 10 (MEA Regions 16, 17, and 18)

Region #16
Sault Ste. Marie Chippewa A 16 9-12 NM
Newberry Luce C 16 7-12 NM
Rudyard Chippewa C 16 9-12 NM
Detour Village Chippewa D 16 9-12 NM
Engadine Mackinac D 16 7-12 NM
Mackinac Island Mackinac D 16 K-12 NM
Pickford Chippewa D 16 7-12 NM

Regions #17 and #18
Escanaba Delta A 17 9-12 NM
Iron Mountain Dickinson B 17 9-12 NM
Ironwood, Luther L. Wright Gogebic B 18 9-12 NM
Kingsford Dickinson B 17 10-12 NM
Menominee Menominee B 17 9-12 NM
Negaunee Marquette B 17 7-12 NM
Bessemer Gogebic D 18 7-12 NM
L'Anse Baraga C 18 7-12 NM
Munising, William G. Mather Alger C 17 7-12 NM
Ontonagon Ontonagon C 18 9-12 NM
Champion Marquette-Baraga B 17 7-12 NM
Chassell Houghton D 18 7-12 NM
Eben Junction, Eben Alger D 17 7-12 NM
Ewen, Ewen-Trout Creek Ontonagon D 18 9-12 NM
Felch Dickinson D 17 7-12 NM
Marenisco, Roosevelt Gogebic D 18 K-12 NM
Cooks, Big Bay de Noc Delta-Schoolcraft D 17 8-12 NM
National Mine Marquette B 17 7-12 NM
Painesdale, Jeffers Houghton D 18 6-12 NM
Perkins Delta D 17 7-12 NM
Wakefield Gogebic D 18 7-12 NM
White Pine Ontonagon D 18 7-12 NM