INNOVATION ADOPTION AND ORGANIZATION CHANGE:
PROGRAM EVALUATION IN GERONTOLOGY

by

Donald D. Davis

A DISSERTATION

Submitted to Michigan State University
in partial fulfillment of the requirements
for the degree of

DOCTOR OF PHILOSOPHY

Department of Psychology

1982

Copyright by Donald D. Davis
1982

ABSTRACT

INNOVATION ADOPTION AND ORGANIZATION CHANGE:
PROGRAM EVALUATION IN GERONTOLOGY

by

Donald D. Davis

The reported research describes a randomized field experiment designed to measure the effectiveness of a participative goal-setting consultation intervention intended to change the program evaluation practices of 43 organizations providing services to older adults in three cities in Michigan. The effectiveness of the experimental manipulation is examined within the context of the structure and environment of the organizations providing the focus for change. Change in program evaluation practices is discussed as a special case of the general process of innovation adoption in organizations.

Mixed support was found for the efficacy of the experimental intervention. Interview measures revealed a strong main effect for the intervention, explaining 24 percent of the outcome variance. Participative goal-setting provided the intervention component most highly correlated with innovation adoption (r = .64, p < .001). Self-report measures of adoption of evaluation methods failed to reveal any significant effects. Measurement differences were discussed as possible reasons for the discrepancy. The experimental intervention did not change the level of cognitive acceptance of evaluation practices or evaluation knowledge.

Stepwise multiple regression analyses were used to estimate the multivariate relationship between psychological, organizational structure, and organizational environment characteristics and the adoption of innovative evaluation practices. The adoption of program evaluation methods was best predicted by knowledge of intervention group membership, education, attitudes toward evaluation practices, and expected tenure on the job (R² = .43). Posttest attitudes toward program evaluation were best predicted by pretest attitude scores, number of organizational staff, degree of participation in decision making, and interorganizational relations (R² = .40).

Several implications of the research and suggestions for future research are provided. It is suggested that future work focus on 1) determination of the correct unit of analysis for theory and intervention; 2) experimental validation of organizational change strategies and the use of organizational theory to predict their success; 3) methods to facilitate innovation adoption and implementation in organizations; 4) use of sequential, longitudinal research designs; and 5) development of data-based planning and change in public policy.

ACKNOWLEDGEMENTS

It is customary at this point to thank those who have contributed to one's dissertation. The list of those who have helped me during my stay at Michigan State is a long one; I have felt little reluctance while here to ask questions of many people.

First, I would like to thank Bill Fairweather for teaching me about experimental social innovation.
He spent much time patiently answering my sometimes impatient questions. I found his experience to be invaluable.

I am very grateful to several people who took the time to teach me about numbers. From Ralph Levine, I learned much about systems theory and systems analysis. Ray Frankmann spent much time teaching me nuances of the analysis of variance and BALANOVA. To Jack Hunter I am grateful for teaching me cybernetics, mathematical models, psychometrics, and how to use PACKAGE. His instruction and ideas significantly shaped my thinking and stretched my imagination. I thank Neal Schmitt for teaching me psychometrics and answering my frequent questions, too often after intruding into his office and disturbing his work. To Terry Allen I owe both an intellectual and personal debt. He was very instrumental in teaching me about statistics and psychometrics. Moreover, he was always available to talk and share, ever patient and instructive, in spite of my almost daily questioning, often very late at night. Most important, he was a friend, always willing to listen to my ideas, no matter how unusual or silly.

I would like to thank Norb Kerr for many interesting and informative late-night discussions when the early work on my dissertation seemed unending. He always provided an extraordinary model for scholarship.

To Lou Tornatzky I owe thanks for opening my eyes regarding the importance of organizations in creating social change. He kindled in me an interest in the empirical study of organizational change. Larry Messe was a good friend who taught me much about group behavior. He was as pleasant to be with as he was instructive.

To the members of my dissertation committee I am indebted for preventing me from designing and conducting a study more flawed than the eventual product. Bill Crano, with whom I had the pleasure to work during most of my graduate student career, taught me much about the conceptual and practical conduct of research. He has also been a friend with whom I have spent many enjoyable hours discussing literature, philosophy, science, rock & roll, and other heady issues. Ben Schneider has been indispensable in providing a thoughtful context for my work and pointing my thinking in more powerful directions. As much as anyone, he was responsible for preventing me from making mistakes, and for helping me when I did. It is almost impossible to include all the reasons to thank Bill Davidson. He was most instrumental in teaching me about community psychology and change. He was always supportive and stimulating. As much as anyone, he has shaped my thinking while at Michigan State. To Charlie Johnson, who acted as chair, I give my thanks for guiding me through graduate school. I learned a great deal from him about working in the community and keeping the proper perspective on all of this business. He was a constant source of astute ideas and suggestions. He is also one of the pleasantest people I know with whom to drink and chat, a rare talent which became most appreciated during the two and a half years it took me to complete this dissertation.

I thank my parents, William and Dena Davis, for instilling and strengthening in me an early interest in ideas and innovation. The influence of all these people may be found in the present work.

Finally, this research was made possible by a grant from the Michigan Office of Services to the Aging. Dr. John Peterson was most helpful as the grant coordinator.
The opinions expressed in no way represent the policies of the Michigan Office of Services to the Aging.

TABLE OF CONTENTS

List of Tables .... vii

Chapter I. Introduction .... 1
  Innovation Diffusion and Implementation .... 3
  Organizational Models .... 8
  Innovation and Organizations .... 12
  Organizational Change .... 21
  Consultation Intervention .... 28
  Research Hypotheses .... 32

Chapter II. Methods and Procedures .... 34
  Sample Selection .... 34
  Experimental Design .... 35
  Experimental Intervention .... 36
  Scaling and Data Reduction .... 42
  Data Collection Instruments .... 45
  Operationalization of Constructs .... 48
    Manipulation Checks .... 48
    Descriptive Process Measures .... 50
    Predictive Process Measures .... 51
    Outcome Measures .... 55
  Psychometric Characteristics .... 55

Chapter III. Results .... 60
  Aggregation and the Unit of Analysis .... 60
    Summary of Aggregation .... 63
  Randomization .... 66
    Summary of Randomization .... 68
  Manipulation Checks .... 69
  Intervention Outcome Results .... 69
    Summary of Intervention Outcome Results .... 77
  Process Variables .... 79
    Summary of Initial Regression Analysis .... 115
    Significant Process Indicators .... 118
    Final Regression Equations .... 129
    Summary of Results .... 136

Chapter IV. Discussion .... 138
  Confirmation of Hypotheses .... 138
    Experimental Hypotheses .... 139
    Correlational Hypotheses .... 147
  Flaws in the Reported Research .... 162
  Implications and Future Directions .... 164

Appendices
  Workshop Outline .... 174
  Consultation Outline .... 176
  Evaluation Self-Report .... 179
  Evaluation Interview .... 182
  Agreement with Current Evaluation Practices .... 184
  Project/Service Information .... 187
  Project Interaction .... 195
  Evaluation Knowledge .... 196
  Workshop Effectiveness .... 200
  Consultation Effectiveness .... 202
  Partial Correlations: Organizational Scales .... 206
  Similarity Coefficients: Organizational Scales .... 208
  Partial Correlations: Agreement with Evaluation Practices .... 210
  Similarity Coefficients: Agreement with Evaluation Practices .... 213
  Partial Correlations: Evaluation Self-Report .... 215
  Similarity Coefficients: Evaluation Self-Report .... 218

References .... 220

LIST OF TABLES

1. Participants by Geographic Workshop
2. Experimental Conditions
3. Experimental and Control Groups
4. Measurement Schedule
5. Comparison of Estimates of Rater Agreement
6. Pretest Agreement with Evaluation Practices, Experimental Groups
7. Repeated Measures ANOVA, Evaluation Interview
8. Repeated Measures ANOVA, Evaluation Self-Report
9. Repeated Measures ANOVA, Agreement with Evaluation Practices
10. Repeated Measures ANOVA, Agreement with Evaluation Practices, Experimental Groups
11. Posttest Differences, Evaluation Knowledge
12. Correlation Matrix of Organizational Structure Predictors: Evaluation Interview and Self-Report
13. Variables in Equation: Post Interview, Organizational Structure Predictors
14. Regression Summary Table: Post Interview, Organizational Structure Predictors
15. Variables in Equation: Follow-Up Interview, Organizational Structure Predictors
16. Regression Summary Table: Follow-Up Interview, Organizational Structure Predictors
17. Variables in Equation: Evaluation Self-Report, Organizational Structure Predictors
18. Regression Summary Table: Evaluation Self-Report, Organizational Structure Predictors
19. Correlation Matrix of Organizational Structure Predictors: Agreement with Evaluation Practices
20. Variables in Equation: Agreement with Evaluation Practices, Organizational Structure Predictors
21. Regression Summary Table: Agreement with Evaluation Practices, Organizational Structure Predictors
22. Correlation Matrix of Organizational Structure Predictors: Knowledge of Program Evaluation
23. Variables in Equation: Program Evaluation Knowledge, Organizational Structure Predictors
24. Regression Summary Table: Program Evaluation Knowledge, Organizational Structure Predictors
25. Correlation Matrix of Organizational Environment Predictors: Evaluation Interview and Self-Report
26. Variables in Equation: Post Interview, Organizational Environment Predictors
27. Regression Summary Table: Post Interview, Organizational Environment Predictors
28. Variables in Equation: Follow-Up Interview, Organizational Environment Predictors
29. Regression Summary Table: Follow-Up Interview, Organizational Environment Predictors
30. Variables in Equation: Evaluation Self-Report, Organizational Environment Predictors
31. Regression Summary Table: Evaluation Self-Report, Organizational Environment Predictors
32. Correlation Matrix of Organizational Environment Predictors: Agreement with Evaluation Practices
33. Variables in Equation: Agreement with Evaluation Practices, Organizational Environment Predictors
34. Regression Summary Table: Agreement with Evaluation, Organizational Environment Predictors
35. Correlation Matrix of Individual Level Predictors
36. Variables in Equation: Post Interview, Individual Level Predictors
37. Regression Summary Table: Post Interview, Individual Level Predictors
38. Variables in Equation: Follow-Up Interview, Individual Level Predictors
39. Regression Summary Table: Follow-Up Interview, Individual Level Predictors
40. Variables in Equation: Evaluation Self-Report, Individual Level Predictors
41. Regression Summary Table: Evaluation Self-Report, Individual Level Predictors
42. Variables in Equation: Agreement with Evaluation Practices, Individual Level Predictors
43. Regression Summary Table: Agreement with Evaluation Practices, Individual Level Predictors
44. Correlation Matrix of Significant Predictors: Evaluation Interview and Self-Report
45. Variables in Equation: Post Interview, Significant Predictors
46. Regression Summary Table: Post Interview, Significant Predictors
47. Variables in Equation: Follow-Up Interview, Significant Predictors
48. Regression Summary Table: Follow-Up Interview, Significant Predictors
49. Correlation Matrix of Significant Predictors: Evaluation Self-Report
50. Variables in Equation: Evaluation Self-Report, Significant Predictors
51. Regression Summary Table: Evaluation Self-Report, Significant Predictors
52. Correlation Matrix of Significant Organizational Predictors: Agreement with Evaluation Practices
53. Variables in Equation: Agreement with Evaluation Practices, Significant Organizational Predictors
54. Regression Summary Table: Agreement with Evaluation Practices, Significant Organizational Predictors
55. Correlation Matrix of Final Predictors: Agreement with Evaluation Practices
56. Variables in Equation: Agreement with Evaluation Practices, Final Predictors
57. Regression Summary Table: Agreement with Evaluation Practices, Final Predictors
58. Correlation Matrix of Final Predictors and Intervention: Evaluation Interview
59. Variables in Equation: Post Interview, Final Predictors and Intervention
60. Regression Summary Table: Post Interview, Final Predictors and Intervention
61. Variables in Equation: Follow-Up Interview, Final Predictors and Intervention
62. Regression Summary Table: Follow-Up Interview, Final Predictors and Intervention
63. Correlation Matrix of Intervention Components and Evaluation Interview Scores
64. Final Significant Multiple Regression Equations
A1. Partial Correlations: Organizational Scales
A2. Similarity Coefficients: Organizational Scales
A3. Partial Correlations: Agreement with Evaluation Practices
A4. Similarity Coefficients: Agreement with Evaluation Practices
A5. Partial Correlations: Evaluation Self-Report
A6. Similarity Coefficients: Evaluation Self-Report

CHAPTER I

Introduction

The present report describes a study designed to measure the effectiveness of a participative, goal-setting consultation intervention intended to change the program evaluation practices of organizations providing services to older adults in three cities in Michigan. The effectiveness of the experimental manipulation is examined in the context of the structure and environment of the organizations providing the focus for change. Change in program evaluation practices is discussed as a special case of the general process of innovation adoption in organizations.

Decreasing social resources and concern arising from equivocal results have pressed social policy makers to question frequently the merit of social programs. This increased attention has contributed to the impetus for the development of a rigorous and scientific program evaluation methodology, making possible for the first time a scientific theory of social and organizational change.

Program evaluation is conceived here to be an innovative management decision tool capable of contributing to the reduction of uncertainty associated with making programmatic decisions. Moreover, the practice of program evaluation is believed to assist in the design and management of more efficient and effective organizational practices and services. Finally, the application of rigorous and scientific program evaluation methods is believed necessary to develop a useful and meaningful theory of social problem causation and resolution, a required step in the establishment of the "experimenting society" (Campbell, 1971).

Human service organizations provide a tool for social improvement and change in American society. It is their responsibility to address and mitigate pressing social problems. Social problems may be prolonged and exacerbated in direct proportion to the inability or reluctance of human service programs to measure their own success. The amelioration and solution of social problems requires in part that human service programs increase their ability to measure and demonstrate their effectiveness. Stephen (1935) early urged the use of evaluation methods to measure the efficacy of New Deal programs during the 1930s. However, extensive use of evaluation has only recently been widely advocated by social scientists (Campbell, 1969, 1971; Caro, 1971; Fairweather, 1967; Fairweather & Tornatzky, 1977; Rossi & Williams, 1970; Rossi, Freeman, & Wright, 1979; Suchman, 1967; Weiss, 1972).

The acceptance and implementation of evaluation techniques have not kept pace with their rapid development. Where program evaluation methods have been used, the results have frequently been ignored by policy makers (Bernstein & Freeman, 1975; Wholey, Scanlon, Duffy, Fukumoto, & Vogt, 1970) or have not been implemented systematically (Caplan, Morrison, & Stambaugh, 1975; Weiss, 1980). Moreover, program evaluation methods have been perceived at times by human service professionals to be insensitive to the complexities characterizing their programs and to be a manipulative device used by governmental decision makers to camouflage predetermined decisions to terminate programs (Attkisson & Broskowski, 1978), decisions which may originate in caprice or may be motivated by pursuit of political advantage.
The lag witnessed in the adoption of program evaluation methodology is not unlike the lag evidenced in the adoption of other types of new knowledge. Glaser (1976) has shown that some innovations may take as long as 100 years to diffuse fully throughout a particular social system.

Several factors may affect the rate of diffusion of innovations. An examination of these factors may provide insight into the diffusion of program evaluation methodology among human service agencies. Moreover, an examination of this literature may indicate how this diffusion can be facilitated, i.e., show how the adoption of program evaluation methods by human service organizations might be fostered. The purpose of the present research is to examine experimentally a method for influencing this adoption process.

Innovation Diffusion and Implementation

The empirical study of diffusion began during the 1930s when rural sociologists studied the spread of agricultural information from scientists in state universities to farmers (Ryan, 1948; Ryan & Gross, 1943), although theoretical work probably originated with Tarde (1903). The study of the diffusion of innovations has since expanded to include the examination of literally thousands of different social and technological products and processes. The multidisciplinary growth in diffusion research has contributed to an almost overwhelming number of publications; Rogers, Williams, and West (1977), for example, cited 2,750 publications. Some believe this growth has been due to the ability of social scientists to conduct research having potentially significant social consequences (Downs & Mohr, 1976). An unfortunate cost of this growth has been great theoretical fragmentation.

Rogers (1962; Rogers & Shoemaker, 1971) first suggested that a nomothetic theory might explain diffusion phenomena. He believed this process was independent of discipline, type of innovation, or research method (Rogers & Eveland, 1975). From his analysis, Rogers constructed what has come to be called the classical model of diffusion. This model consists of four stages (Rogers & Shoemaker, 1971, p. 103):

1. Knowledge. The individual is exposed to the innovation's existence and gains some understanding of how it functions.
2. Persuasion. The individual forms a favorable or unfavorable attitude toward the innovation.
3. Decision. The individual engages in activities which lead to a choice to adopt or reject the innovation.
4. Confirmation. The individual seeks reinforcement for the innovation decision he has made, but he may reverse his previous decision if exposed to conflicting messages about the innovation.
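As a minimal sketch, the four stages can be represented as an ordered sequence when coding where an adopter stands in the process. The stage names follow Rogers and Shoemaker's list above; the Python representation, including the class and function names, is an illustrative assumption added here and is not part of the original study.

```python
from enum import Enum
from typing import Optional

class AdoptionStage(Enum):
    """Stages of the classical diffusion model (Rogers & Shoemaker, 1971)."""
    KNOWLEDGE = 1      # exposure to the innovation and understanding of how it works
    PERSUASION = 2     # formation of a favorable or unfavorable attitude
    DECISION = 3       # activities leading to a choice to adopt or reject
    CONFIRMATION = 4   # seeking reinforcement; the decision may still be reversed

def next_stage(current: AdoptionStage) -> Optional[AdoptionStage]:
    """Return the stage that follows, or None once confirmation is reached."""
    if current is AdoptionStage.CONFIRMATION:
        return None
    return AdoptionStage(current.value + 1)
```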
Although Rogers' model provided a conceptual breakthrough for diffusion researchers, critics have pointed to several weaknesses and have suggested alternative conceptions of change (Berman & McLaughlin, 1975; Havelock, 1973a, 1973b; Yin, 1978; Yin, Heald, & Vogel, 1977; Yin, Quick, Bateman, & Marks, 1978; Zaltman & Duncan, 1977; Zaltman, Duncan, & Holbek, 1973). Rogers has modified the classical model in recognition of these criticisms (Eveland, Rogers, & Klepper, 1977; Rice & Rogers, 1980; Rogers & Eveland, 1975).

The major weakness of the classical diffusion model is its predominant focus upon the individual, perhaps arising in part from its origin in the study of change among individual farmers. The classical diffusion model does not attempt to account for the different processes in organizations that influence the adoption and implementation of innovations (Havelock, 1973b; Rogers & Eveland, 1975; Zaltman et al., 1973). Many variables shown to influence the adoption of innovations by individuals make little or no sense when considering organizations; organizations, for example, do not have attitudes toward the innovation. Likewise, many organizational variables related to innovation adoption result in nonsense when generalized to individuals, e.g., formalization of rules governing behavior. Thus, generalizability of the classical model may be limited to innovation adoption among individuals (Zaltman et al., 1973; Rogers & Eveland, 1975).

A second major weakness of the classical diffusion model is its conceptualization of the innovation. The classical model generally views the innovation as a fixed quantity, arising again, perhaps, from the original study of adoption of agricultural products. It is not at all clear that the process of innovation adoption works similarly for more amorphous and ephemeral innovations like educational curricula, social intervention programs, or social science knowledge (Berman & McLaughlin, 1975; Downs, 1978; Downs & Mohr, 1976; Hall & Loucks, 1978; Larsen, 1980; Mohr, 1978; Weiss, 1980).

Related to the view that innovations are unitary phenomena is the conception of the adoption decision. Traditionally, diffusion researchers viewed adoption as a binary response in which one either adopted the innovation or one did not. Post-adoption processes often were not examined, although Rogers and Shoemaker (1971) discussed the possibility of functional and dysfunctional consequences of adoption. Innovation researchers have recently developed a more comprehensive view of innovation adoption, including the examination of changes in the innovation subsequent to adoption (Hall & Loucks, 1978; Rice & Rogers, 1980; Yin et al., 1977, 1978). Disagreement exists, however, regarding the merit of deliberate adaptation of the innovation to local conditions and needs by adopting organizations (Calsyn, Tornatzky, & Dittmar, 1977; Glaser & Backer, 1977). Although innovation adoption is frequently seen now as a process of continuous and gradual specification, a linear, stage model is generally accepted (Eveland, Rogers, & Klepper, 1975; Yin et al., 1978).

The final conceptual weakness of traditional notions of innovation adoption rests in the implicit assumption that innovation and change are intrinsically good (Rogers & Eveland, 1975; Zaltman, 1979). This view is problematic because the diffusion of innovations perceived positively by potential adopters may not occur in the same fashion as innovations perceived negatively (Zaltman, 1979). This failure to examine the innovation adoption decision in greater detail may also partly explain the pervasive existence of contradictory research results (Downs, 1978; Downs & Mohr, 1976; Mohr, 1978).

The weaknesses in the traditional diffusion model are addressed in the present research. First, organizations provide the unit of analysis for examining innovation adoption, allowing greater generalizability of diffusion research results. Second, the innovation is not viewed here to be a fixed quantity. In the present study, it is possible for organizations to adopt portions of the innovation. Finally, adoption of the innovation is not assumed to be beneficial. One of the instruments used in the present research (Agreement with Evaluation Practices) measures whether potential adopters think current evaluation practices should be used in their agency.

A more complete determination of the generalizability of innovation theory requires the inclusion of varied units of analysis and diverse samples. The study of innovation adoption in gerontological organizations provides the focus for the present research because innovation adoption in this type of organization has not been examined by researchers. Because innovation adoption by organizations is one of the primary focuses of the present research, major models of organizational functioning will be briefly discussed below.

Organizational Models

In some respects, innovation and change in organizations may be more resisted than change among individuals. The relationship of innovation in organizations will vary, however, across type and structure of organization and stage of the innovation process (Burns & Stalker, 1961; Hage, 1965, 1980; Hage & Aiken, 1970; Hall, 1977; Lawrence & Lorsch, 1967; Perrow, 1979; J. Thompson, 1967; V. Thompson, 1965; Wilson, 1966; Zaltman, Duncan, & Holbek, 1973; Zaltman & Duncan, 1977). This variation is true both for the creation of innovations within organizations and the adoption of innovations created outside of organizations. Historical models of organizational functioning have shaped current conceptions of innovation creation and adoption in organizations and, therefore, will be briefly discussed.

The bureaucratic model is probably the oldest rational theory of organizations (Weber, 1947). Bureaucratic structure emerges as a consequence of the attempt by organizations to impart some degree of rationality to an uncertain environment through the use of division of labor, structured roles, and formal rules of behavior (Weber, 1947). Hall (1963) provided early empirical support for the existence of these dimensions, although they were demonstrated by organizations in varying degree. Innovation may become problematic for organizations that demonstrate bureaucratic characteristics to a greater extent. Typically, innovation is resisted because it causes a disruption of routine and threatens maintenance of rational control, especially rule observance and superordinate-subordinate role relationships (V. Thompson, 1965). Organizations demonstrating bureaucratic dimensions to any great extent may be less likely to adopt innovations produced outside the organization but may be more likely to implement innovations faithfully once adopted (Zaltman et al., 1973).

Many scholars suggest the bureaucratic model is too restrictive and neglects the role of human relationships in organizational functioning. The human relations model (Barnard, 1938; Likert, 1967; Roethlisberger & Dickson, 1947), stressing the importance of norms and other forms of informal behavioral control, emerged to address this weakness in the bureaucratic model.

The human relations model focuses primarily on morale, leadership, productivity, and the structuring of groups (Perrow, 1979, p. 98). This increased stress on human relationships directs study to the importance of communication and cooperation rather than more formal organizational characteristics. Human relations proponents advocate looser control and increased tolerance for diversity, which is believed to be positively related to innovation (Burns & Stalker, 1961).
While some empirical evidence exists to suggest a positive relationship between looser control and innovation, only equivocal support can be provided to demonstrate that superior organizational performance results from adherence to human relations tenets like participation in decision making (Locke & Schweiger, 1979; Perrow, 1979).

Another major approach to the study of organizations focuses on the interaction of the organization with its environment. Both structural and interpersonal characteristics are studied. In this "adaptive systems" view, the major goal of the organization is survival, and the organization adapts in any way necessary to insure it (Tosi, 1975, p. 93). This approach includes the "Environmental Model" (Perrow, 1979) and the "Contingency-Choice Perspective" (Hall, 1977). Typically, organizational forms are seen as a function of tasks, goals, or technology, with organizational functioning varying as a result of the fit of organizational characteristics with environmental demands (Galbraith, 1973; Lawrence & Lorsch, 1967; Litwak, 1961; Perrow, 1967; J. Thompson, 1967; Woodward, 1965). The response of organizations to innovation depends on this "organization-environment fit."

The two-stage innovation model developed by Zaltman and his colleagues (Zaltman et al., 1973; Zaltman & Duncan, 1977) is an environment matching model. Similar to Wilson (1966), the stages of the Zaltman model include an initiation stage (the organization becomes aware of the innovation, decision makers form attitudes toward the innovation, and the decision is made to adopt the innovation) and an implementation stage (both initial and sustained implementation). Given the different tasks associated with each stage, different organizational structural characteristics become important. Organizations should differentiate their structure at each stage of innovation adoption and implementation. In the initiation stage, adopting organizational units should have higher complexity, lower formalization, and lower centralization. During the implementation stage, organizational units should have lower complexity, higher formalization, and higher centralization. This contribution from Zaltman and his colleagues offers the first contingency perspective on the adoption of innovations by organizations.

Tornatzky, Roitman, Boylan, Carpenter, Eveland, Hetzner, Lucas, and Schneider (1979, pp. 8-9) have also contributed to the contingency perspective of organizational innovation. These authors suggest that innovations requiring uniform tasks (Litwak, 1961) might be more likely to be adopted by organizations stressing rules, job specialization, and hierarchical decision making; innovations requiring non-uniform tasks might be more attractive to organizations stressing participation, limited hierarchy, and open communication. Little innovation research using this organization-environment focus has been reported.

A review of the results of research examining innovation within, and innovation adoption by, organizations will clarify these relationships. While limited longitudinal, experimental research examining the interaction between organization type and innovation adoption and change has been reported (Tornatzky, Fergus, Avellar, & Fairweather, 1980), several investigators have used cross-sectional survey data to document the relationship between organizational characteristics, innovation adoption, and change.

Innovation and Organizations

Burns and Stalker (1961), examining case studies of innovation among twenty electronics firms in England and Scotland, first attempted to establish empirically a relationship between innovation and organization structure. Organizations are interpreted as mechanistic or organic. Mechanistic systems are believed to be appropriate in stable environmental conditions and are characterized by (1) specialized differentiation of functional tasks, (2) precise definition of organizational roles, rights, and obligations, and (3) a tendency toward superordinate-subordinate structured interaction (p. 120). In many respects the mechanistic model parallels the classical Weberian conception of bureaucracy. The organic form is represented by (1) adjustment and continual redefinition of individual tasks and roles, (2) a network structure of control, authority, and communication, and (3) communication based on the exchange of information and advice rather than instructions and decisions (p. 121). The organic form of structure has several components in common with human relations perspectives of organizational functioning. Burns and Stalker (1961) conclude that organic forms of organizations are likely to be more innovative and receptive to innovation adoption and change, although no mechanism is suggested whereby organizations might deliberately change to address new environmental demands (Fleischer, 1978, p. 10).

Jerald Hage and his colleagues have reported several studies that substantiate and extend many of the observations first made by Burns and Stalker (Aiken & Hage, 1968, 1971; Dewar & Hage, 1978; Hage & Aiken, 1967a, 1967b, 1970; Hage & Dewar, 1973). The highlight of this program of research was the discovery that the organizational characteristics most related to innovation adoption and change in 16 human service organizations were complexity, centralization, formalization, and interorganizational relations. A summary of these findings is presented below. A more complete discussion may be found in Hage and Aiken (1970), Zaltman et al. (1973), and Zaltman and Duncan (1977).

Complexity typically refers to the level of knowledge and expertise in an organization. Indicators frequently used to represent complexity include the number of occupational specialties, their level of professionalization, and the existence of a differentiated task structure (Hage & Aiken, 1970; Heydebrand & Noell, 1973; Wilson, 1966).

Complexity has been shown to be related positively to change and innovation adoption. Hage and Aiken (1967b, p. 509) report a moderately strong, positive relationship between complexity (r = .48 for number of occupational specialties; r = .37 for extra-organizational professional activity) and innovation adoption. Additional evidence for the link between level of professionalization and innovation adoption in organizations has been provided by Corwin (1972), Counte and Kimberly (1974), Heydebrand and Noell (1973), Kimberly (1978), and Kimberly and Evanisko (1981).

The relationship between complexity and innovation adoption may not be so straightforward. Zaltman et al. (1973, pp. 137-138) have suggested that complexity may have a positive relationship with change only during the early initiation stage; a negative relationship may exist during the later implementation stage. No data have been reported to document this interaction.

The causal relationship between organizational complexity and innovation adoption is not precisely understood. Hage and Aiken (1970, pp. 33-35) suggest that the training and norms of experts and professionals prepare them to value new knowledge and motivate them to incorporate this new knowledge into their work. The frequency of inclusion of professionals in the innovation process has been shown to vary with the type of experts and their position in the organization (Tushman, 1977).

Complexity may be related to innovation adoption in the following way. The search by professionals for new knowledge may resemble the behavior of cosmopolite individuals (Gouldner, 1958a, 1958b), who have been shown to be early adopters of innovations (Rogers & Shoemaker, 1971). Highly complex organizations have large numbers of different types of professionals. The large amount and diversity of information that is consequently brought into the organization increases the awareness and knowledge of innovations existing outside the organization. This constant influx of new knowledge through professionals and other experts may create performance gaps--perceived discrepancies between what the organization is doing and what its professionals feel it ought to do (A. Downs, 1966). Efforts to resolve these discrepancies may lead either to the adoption by the organization of outside innovations or the production of its own innovations (March & Simon, 1958). Although it might appear at first glance that larger organizations with more professionals would be more likely to adopt innovations, the relationship between complexity, organizational size, and innovation remains unclear (Child, 1972; Dewar & Hage, 1978; Hage & Dewar, 1973; Kimberly, 1976; Moch & Morse, 1977; Pugh, Hickson, Hinings, & Turner, 1968).

Centralization of decision making has also been linked with the adoption of innovations by organizations. Centralization refers to the structure of decision making in organizations. Hage and Aiken (1970) also include the distribution and exercise of power and control, although this may be a separate dimension (Ouchi, 1977; Tannenbaum, 1968; Tannenbaum, Kavcic, Rosner, Vianello, & Wiesner, 1974). Generally, the fewer the number of organizational staff involved in decision making, and the higher they are located in the administrative hierarchy, the more centralized the organization is said to be.

Hage and Aiken (1970) suggest the concentration of power arising from centralization leads to the preservation of the status quo, thus reducing tolerance for the change that is often required for innovation adoption. Moreover, the concentration of decision making tends to isolate decision makers and hinder feedback from staff members lower in the organization. This concentration may especially impede innovation adoption if the organization is staffed primarily by professionals, who, as we saw above, are likely to be sources of new knowledge. Thus the concentration of power and decision making reduces the flow of innovation-related information into the organization and to those members that may influence innovation-related decisions. Centralization may further reduce the flow of information into the organization if the number of boundary spanning positions is reduced (J. Thompson, 1967; Tushman, 1977).

A moderately strong, positive relationship between an indicator of centralization (r = .48 for participation in decision making) and the adoption of new programs was reported by Hage and Aiken (1967b, p. 509). Further evidence for the importance of participation in decision making has been provided elsewhere (Fairweather et al., 1974; Moch & Morse, 1977; Stevens & Tornatzky, 1980; Tornatzky, Avellar, Fergus, & Fairweather, 1980). Greater participation in decision making reflects the existence of organic characteristics (Burns & Stalker, 1961). There is no agreement, however, concerning the influence of participative decision making on the total performance of the organization (Locke & Schweiger, 1979).

Innovation adoption in organizations may depend on the stage of the innovation process (Zaltman & Duncan, 1977; Zaltman et al., 1973). Centralization might limit innovation awareness and reduce the probability of innovation adoption, the first stage of the adoption process; more centralized decision making might facilitate implementation of the innovation, the second stage of the innovation process, once the decision to adopt has been made.

A third organizational characteristic reported to be associated with the adoption of innovations is formalization. Formalization refers to the degree to which rules and procedures are written. Formalization also usually refers to the extent to which deviation from these written rules and procedures is permitted. High formalization places restraints on individual behavior in that most work-related behavior is prescribed and little latitude for deviance is allowed.

Hage and Aiken (1967b, p. 511) found a moderate negative relationship (r = -.47) between an indicator of formalization (job codification) and innovation adoption. This finding supports the case study results of Burns and Stalker (1961). There is little agreement, however, regarding the form of this relationship. Although Hage and Aiken (1970) state that high formalization might impede implementation of the adoption, Zaltman et al. (1973) disagree. Zaltman et al. (1973) suggest that low formalization at the initiation stage increases the ability of organization members to gather and process information, which increases awareness of the innovation. However, with increased formalization during implementation, users of the new innovation can more easily be made aware of new role changes that inevitably accompany implementation, thus improving utilization. Parenthetically, high formalization at the implementation stage might limit the activities of inside advocates of the innovation, an organizational activity empirically linked to implementation success (Fairweather et al., 1974; Glaser, 1976; Havelock, 1973b; Rogers & Shoemaker, 1971).

The previous analysis has shown that several structural characteristics internal to organizations, viz., complexity, centralization, and formalization, influence the creation and adoption of innovations. These aspects of organizations are critical to the study of innovation because they have an enduring and pervasive effect on all organizational behavior. Also important is the relationship of the organization to other organizations in its environment, i.e., the interorganizational network.

The interorganizational network includes varying numbers of organizations linked through communication and exchange of resources. Interorganizational networks are not much different from television networks, transportation networks, or any other type of social network (Politser, 1980). Each point or node in the network is linked through communication and maintained through the exchange of resources, with stronger ties representing greater interdependency in exchange (Cook, 1977). Access to resources can be improved through the addition and strengthening of links (Sarason, Carroll, Maton, Cohen, & Lorentz, 1977). Social and material exchange is the adhesive that binds together networks; the behavior of network participants is shaped by this exchange (Blau, 1964; Homans, 1950; Thibaut & Kelley, 1959).

Exchange among organizations is characterized by several factors which contribute to the existence and success of the relationships (Levine & White, 1961). York (1979) has summarized these factors: (1) interagency awareness, (2) resource interdependence, (3) domain consensus, (4) goal and task similarity, and (5) conflict. The factors most germane to the present research are interagency awareness and resource interdependence, and these will be discussed below. This discussion draws greatly from the summary by York (1979).

Although at first glance this observation may seem inane, interorganizational relations are impossible without the awareness of other organizations and their activities. Levine, White, and Paul (1963) report that over half of the services provided by 34 agencies providing medical and social services were unknown to other community agencies. York (1979) suggests that this lack of awareness might be due to the absence of boundary spanning personnel, but he provides no empirical evidence for this conclusion. In addition to awareness, the physical opportunity for interaction must exist (Schermerhorn, 1975). Interorganizational awareness is a prerequisite for the development of interorganizational relations.

The scarcity or uneven distribution of resources is frequently cited as one of the major factors spurring the development of interorganizational relationships. Among human service agencies, resources may include (1) clients, (2) consultation services, and (3) information (Levine, White, & Paul, 1963). Resources must be important to goal realization, and the exchange must be viewed by the participants as reciprocal and equitable (Lehman, 1975).

Interorganizational relations may be related to innovation adoption through the following sequence of events. The first stage might include the awareness of relevant other organizations in the local environment. Next, as a consequence of interaction, organizations establish consensus regarding their respective domains, recognize the similarity between their respective goals and tasks, and increase interdependence through the exchange of resources. Finally, if conflict between organizations is not excessive, organizations may establish stronger collaborative relationships. As a result of interaction, members of organizations become aware of innovative programs and practices. Moreover, subtle pressures to adopt these innovations manifest themselves in attempts by professionals and others in organizations to demonstrate their "professionalism" through knowledge and implementation of new programs and practices in their respective organizations. While it is unknown whether interorganizational relations are linked in this fashion to innovation adoption, there is moderate empirical support for the linkage itself.

Aiken and Hage (1968), in their study of 16 human service organizations, found rather strong support (r = .74) for a positive relationship between their measure of innovation adoption (number of new programs) and the number of programs conducted jointly with other organizations. Network centrality has also been shown by Becker (1970a, 1970b) to be significantly related to innovation adoption and diffusion among organizations. At the individual level of analysis, network position has often been associated with innovation adoption (Rogers & Shoemaker, 1971).

The research that has been discussed thus far has documented the relationships between adoption and implementation of innovations and characteristics of organizations and their environment. Not presented yet are the results of research examining attempts to change organizations and, more specifically, to change organizations such that the likelihood of innovation adoption is increased.

Organizational Change

Bennis (1966, p. 251), in the first McGregor Memorial Lecture, spoke rosily of the advent of "organizational revitalization...the deliberate and self-conscious examination of organizational behavior and the collaborative relationship between managers and scientists to improve performance." Based on humanistic-democratic ideals, organizational change emerging from the collaborative efforts of scientists and managers was to usher in a new era of "temporary systems" devoted to the legitimate expression of imagination and creativity. Scientists were to act as midwives for this new era; working as active change agents, they were to use social science knowledge to manipulate strategic leverage points in organizations and consequently improve interpersonal relations and organizational effectiveness (Bennis, 1965).

It appears unlikely that change agents have successfully persuaded organizations to adopt conclusively this new value system (Tichy, 1974). Nevertheless, change agents continue to be a strategy frequently used for organizational change (Tichy & Hornstein, 1976). Change agents have often attempted to facilitate innovation adoption by individuals and organizations (Rogers & Shoemaker, 1971; Zaltman et al., 1973; Zaltman & Duncan, 1977).

Change agents may be located inside or outside the target organization, although organizational location may contribute to the radicalness of the possible change (Tichy, 1974). Moreover, the techniques exercised by change agents vary greatly (Hornstein, Bunker, Burke, Gindes, & Lewicki, 1971).

Change agents typically act as "linking agents" between the source of knowledge and the adoption/utilization system (Havelock, 1973a). This role shares many of the characteristics of J. Thompson's (1967) boundary spanning unit, its function being to act as a buffer between units within the organization, or between the organization and the larger environment. Much of the activity of this linking role consists of encoding and decoding information so that the interacting systems may better communicate with each other. The existence of boundary spanning units is especially critical in highly complex environments or when the activities of the respective systems are very incongruent (J. Thompson, 1967).

Rogers and Shoemaker (1971, p. 248), in an exhaustive review of innovation adoption research, discovered that the simple provision of knowledge is not sufficient and suggested that the following characteristics should be possessed by the successful change agent attempting to influence innovation adoption. Change agent success was found to be positively related to:

1. The extent of change agent effort.
2. The demonstration of a client orientation rather than a change-agency orientation.
3. The degree to which the program is compatible with clients' needs.
4. The change agent's empathy with clients.
5. The degree of homophily with clients.
6. The extent to which the change agent works through opinion leaders.
7. Credibility in the eyes of clients.
8. The effort used to increase the client's ability to evaluate innovations.

As can be seen in the suggestions for change agent success made by Rogers and Shoemaker (1971), interpersonal interaction is a critical component of the change process. This mode of communication is partly responsible for the ability of the change agent to confront successfully the resistance frequently demonstrated by potential adopters of innovations (Havelock, 1973b). Empirical support for this comes from Fairweather et al. (1974) and Stevens and Tornatzky (1980), which are discussed in detail below.

Fairweather and his colleagues (Fairweather et al., 1974), in a national field experiment examining the adoption of an innovative mental health program by Veterans Administration hospitals, compared the relative effectiveness of different degrees of interpersonal interaction. Conditions in the first "approach" phase consisted of (1) brochures, (2) workshops, or (3) use of a demonstration version of the innovation. Participants in the demonstration conditions were more likely to adopt the innovation (Fairweather et al., 1974, p. 77), suggesting the relative superiority of a more active, interpersonal approach. The second stage of the experiment compared the effectiveness of an "action consultant" to a written manual. Participants in this second stage included only those organizations which had decided to adopt the innovation during the first stage. Thus, all participants in the second stage of the study had made some commitment to adoption. Organizations receiving active consultation were significantly more likely to adopt and implement the innovation, suggesting the efficacy of an active change agent.

Supporting evidence has been reported by Stevens (1977; Stevens & Tornatzky, 1980). These investigators examined the comparative effectiveness of private and group, telephonic and face-to-face consultation in fostering the adoption of evaluation methodology by substance abuse agencies. These interventions provided increasing amounts of interpersonal interaction. Program evaluation knowledge was disseminated to all participants in a three-day workshop. An analysis of variance revealed that group consultation (type of consultation) was more effective than private consultation, and on-site consultation (type of site) was more effective than telephonic consultation. A significant interaction between type of consultation and type of site was also found.
248), in an exhaustive review of innovation adoption research, discovered that the simple provision of knowledge is not sufficient and suggested that the following characteristics should be possessed by the successful change agent attempting ‘Ua influence innovation adoption. Change agent success was found in) be positively related to: A 1. The extent of change agent effort. 2. The demonstration of' a client-orientation rather than change-agency-orientation. 3. The degree to which the program is compatible with clients' needs. 4. The change agent's empathy with clients. 5. The degree of homophily with clients. 6. The extent to which the change agent works though opinion leaders. 7. Credibility in the eyes of clients. 8. The effort. used to increase the' client's ability to evaluate innovations. As can be seen in the suggestions for change agent suc- cess made by Rogers and Shoemaker (1971), interpersonal interaction is a critical component (Hi the change process. This mode of communication is partly responsible for the ability oi: the change agent to confront successfully the resistance frequently demonstrated by potential adopters of innovations (Havelock, 1973b). Empirical support for this 24 comes from Fairweather' et al. (1974) and Stevens and Tornatzky (1980), which are discussed in detail below. Fairweather and his colleagues (Fairweather et al., 1974), in a national field experiment examining the adoption of an innovative mental health program by Veteran's Adminis- tration Hospitals, compared the relative effectiveness of different degrees of interpersonal interaction. Conditions in the first "approach“ phase consisted of (1) brochures, (2) workshops, or (3) use of a demonstration version of the inno- vation. Participants irI the demonstration conditions were more likely to adopt the innovation (Fairweather et al., 1974, p. 77), suggesting the relative superiority of a nmre active, interpersonal approach. The second stage (H: the experiment compared the effectiveness (H: an "action consul- tant“ to a written manual. Participants in this second stage included only those organizations which had decided to adopt the innovation during the first stage. Thus, all partici- pants in the second stage of the study had made some commit- ment to adoption. Organizations receiving active consulta- tion were significantly more likely ix) adopt and implement the innovation, suggesting the efficacy of an active change agent. Supporting evidence has been reported by Stevens (1977; Stevens & Tornatzky, 1980). These investigators examined the comparative effectiveness of private and group telephonic and 25 face-to-face consultation in fostering the adoption of eval- uation methodology by substance abuse agencies. These inter- ventions provided increasing amounts of interpersonal inter- action. Program evaluation knowledge was disseminated to all participants in a three-day workshop. An analysis of vari- ance revealed that the group consultation (type of consulta- tion) was more effective than private consultation, and on- site consultation (type (H: site) was more effective than telephonic consultation. A significant interaction between type of consultation and type of site was also found. These experimental comparisons of different degrees of interpersonal interaction offer an exceptional view of the innovation adoption process in organizations as they repre- sent the few, randomized, longitudinal, field experiments in the area. 
The results from these studies show that active consultation can be effective in influencing the adoption of complex social innovations. Active consultation is. more effective than the simple provision of information (Fairweather et al., 1974), and face-to-face active consul- tation within a group context is more effective than tele- phonic active consultation to private individuals (Stevens & Tornatzky, 1980) in facilitating the adoption of a social innovation by human service organizations. Unclear, however, is the generalizability of these findings. The Fairweather study' included only' Veterans Administration hospitals. Stevens and Tornatzky examined only substance abuse agencies. While both of these types (H: organizations provide human 26 services, they are predominantly staffed by professionals. Fairweather et al., (1974) included hospital superintendents, psychiatrists, psychologists, social workers, and nurses. The participants in Stevens' (1977) study were slightly less professional, in that TH) persons holding a doctoral degree were allowed to participate. Also left unanswered by these studies 'H; the type of group consultation necessary to move the organization toward adoption of the innovation. In both studies, groups were composed of' individuals from iflua same organization. These staff members then acted as internal advocates for adoption of the innovation by the organization. The change agent pri- marily offered technical and motivational support to group members. Thus, while group consultation was shown to be effective in promoting innovation adoption, the participation of other organizational staff may have contributed to the significant main effect. The effectiveness of the consultant may have been confounded with other group and organizational processes like superordinate and subordinate role relation- ships. The present study controls for this confound by employing consultation groups composed of members from dif— ferent organizations. Another limitation of these results is the failure to ground the experimental design in an extensive theoretical context. The Fairweather study made an attempt by examining the influence of organizational decision making, specifi- cally, participative decision making. Stevens also examined 27 participative decision making. Both studies additionally included some organizational variables like size and per- ceived resources. Little attempt was made to examine the environmental context within which the organizations were located. Also, the body of research examining innovation and organizational structure and interorganizational relations was ignored. This limited integrathwi of organizational change and organization theory is not unusual. In fact, the absence of any theoretical foundation among organizational change prac- titioners in endemic (French & Bell, 1975; Huse, 1980), al- though there are some exceptions (Beer, 1980). The present research attempts to extend and strengthen the results found earlier by these investigators. First, a sample of organ- izations very different from those already used is studied. This group of organizations includes human service agencies that provide services to older adults. Typically, the staff of these agencies are not professional; many of them work only part time (Davis, 1981a). Second, the present research will attempt to integrate modestly the theoretical work examining intraorganizational structure/processes, interorganizational relations, and the adoption of innovations by organizations. 
The relationship between these organizational variables and action consulta- tion will also be examined. Finally, the effectiveness of a type of action consultation never tested before in promoting innovation adoption is examined. This consultation interven- tion is discussed below. 28 Consultation Intervention The intervention used ““1 the present research attempts to facilitate the adoption and implementation of program evaluation methods with a consultation intervention stressing three components: 1) expansion (If interorganizational re- lations and social support, 2) use of structured-goal set- ting, and 3) provision of program evaluation knowledge. The relationship between interorganizational relations and innovation adoption was (discussed above. The» consul- tation intervention takes advantage of, and tries to foster, the interorganizational relations of participants. Partic- ipants are told to seek innovation related resources from other organizations in the consultation group and their en- vironment. The actual procedures used to increase this sharing are discussed in the next chapter. The provision of social support among participants is also stressed in the consultation intervention. The use and effects of social support have recently received considerable attention from community psychologists. Emshoff, Davis, and Davidson (1981) argue that social support possesses the fol- lowing characteristics. Social support: 1. satisfies the need an individual has for af- fection and esteem; 2. implies a mutual obligation among individuals to exchange material resources; 3. implicitly and/or explicitly includes the so- cietal integration (Hi the individual through the acquisition of rewarding roles; and 4. implicitly and/or explicitly assists the in- dividual in validating expectations about others, contributing to the individual's con- struction of reality. 29 Typically, social support is conceived as aui interper- sonal process that assists in the satisfaction of the social and psychological needs of the individual (Caplan, 1974; Caplan & Killilea, 1976). Emshoff et al. (1981) argue, how- ever, that networks providing social support might be devel- oped across groups and organizations. Such networks could be built among organizations sharing similar goals inn] needs, but individually lacking resources to accomplish satisfac- torily all of these goals. Sarason et al. (1977) provide case study evidence of a natural support network comprised of a variety of human ser- vice organizations serving a common geographic area. The major outcomes they report include increased productivity and sense of support, and a decreased sense of alienation. Al- though they do not discuss whether this type of network could be deliberately' created and beneficially unanipulated, some evidence exists to suggest that this is possible (Bogat & Jason, 1980; Caplan & Killilea, 1976; Davis & Jason, 1982). The deliberate creation of such networks has not been tested among organizations, although Schermerhorn (1981) run; sug- gested how this might be done. Some suggest further that task-oriented support groups may enhance task completion (Davis, 1979). Empirical evi- dence for this is sketchy. Some job placement counseling programs having a: significant network component. have been shown to be rfighly successful with youth (Azrin, Flores, & Kaplan, 1975), handicappers (Davis, Johnson, & Overton, 1979) 30 and the elderly (Gray, 1980), although this effect is con- founded with other structured activities. 
The success of task-oriented support groups among organizations is unknown.

The social support component is included in the consultation intervention to address the affective barriers to innovation adoption cited by Frohman and Havelock (1973), i.e., perceived fears regarding threats to social relationships, outside malevolence, personal position, and status differences with the consultant.

Another focus of the consultation intervention includes structured goal-setting. This refers to precise delineation of what is to be accomplished as a result of the consultation intervention. Goals represent the aim of action (Locke, Shaw, Saari, & Latham, 1980). Locke et al. (1981), in a review of goal-setting in laboratory and field studies, report almost overwhelming support for the finding that specific and challenging goals increase task performance. Locke (1968) first suggested a theory of goal-setting for organizations, showing in eight laboratory experiments that the use of specific goals increased performance and that harder goals, if accepted, led to greater performance than easier goals. Latham and Yukl (1975) conclude in their review of 27 correlational and experimental studies that goal-setting is effective over an extended period of time in a variety of organizations.

Participative goal-setting is used in the present consultation intervention to increase effort and persistence in adopting program evaluation methods through the structuring of adoption-related behavior. Moreover, the setting of goals allows a more gradual estimation of the "benefit/risk ratio" of the innovation. Participants may choose to adopt only a portion of the innovation (only some evaluation methods), thus increasing the "trialability" of the innovation. This trialability has been shown to be an important characteristic of successfully adopted innovations (Rogers & Shoemaker, 1971).

A final component of the consultation intervention includes instruction in program evaluation methods. Participants learn how to use evaluation methods and how to improve management of their organizations using the collected data. Knowledge is presumed necessary for adoption. This is explained in greater detail in the next chapter.

So far, research and theory in innovation have been discussed because they reveal several factors that might influence the adoption of program evaluation methods. Theory and research examining organizational structure and processes were reviewed because the focus of the present research is organizations that provide services to older adults. Strategies shown to be successful in promoting organizational change were discussed because it is the intention of the present research to test an attempt to change the innovation adoption behavior of organizations. Finally, the present research attempts to provide a modest extension and integration of the previously disparate bodies of work on organizational theory and change. An explicit statement of the hypotheses tested in the reported research is provided below.

Research Hypotheses

Experimental Hypotheses

Hypothesis 1: Participants of organizations receiving the consultation intervention will report greater adoption and implementation of program evaluation practices than participants of organizations not receiving the intervention.
Hypothesis 2: Participants (H’ organizations receiving the consultation intervention will demonstrate greater knowl- edge (H: program evaluation practices than participants of organizations not receiving the intervention. Hypothesis 3: Participants (If organizations receiving the consultation intervention will demonstrate more favorable agreement with program evaluation practices than participants of organizations not receiving the intervention. Correlational Hypotheses Hypothesis 4: Indicators of centralization will be neg- atively related to the adoption and implementation of program evaluation practices. Hypothesis 5: Indicators of formalization will be nega- tively related to the adoption and implementation of program evaluation practices. 33 Hypothesis 6: Indicators (Hi complexity vfill be posi- tively related to the adoption and implementation of program evaluation practices. Hypothesis 7: Indicators of agreement with evaluation practices will be positively related to the adoption and implementation of program evaluation practices. Hypothesis 8: Knowledge of evaluation methods will be positively related to the adoption and implementation of pro- gram evaluation practices. Hypothesis 9: Interorganizational relations will be positively related to the adoption and implementation of pro- gram evaluation practices. Hypothesis 10: Interorganizational relations will be positively related to agreement with program evaluation prac- tices. CHAPTER II Methods and Procedures Sample Selection Three communities in Michigan--Lansing, Grand Rapids, and Southfield-—provided research sites. In all cases, Area Agencies on Aging (regional planning and funding agencies for aging related services throughout the state) were contacted and included in the planning of the study and recruitment of participants. Area Agencies on Aging (AAAs) provided lists of all public and private agencies delivering human services to the elderly! within their: geographical jurisdiction. Listed agencies provided a range of services to older adults, e.g., nutrition, legal-aid, housing, home-care amd recrea- tion. No restrictions regarding source of funding were placed (Ni participants, resulting in representation of pub- licly and privately funded services. The only restriction placed (Hi participation was that services be directed pri- marily to the elderly. (This restriction was stipulated by the funding source.) The director of all organizations providing services to the elderly was contacted by letter and invited to partici- pate in a: free, two-day program evaluation workshop. Each organization choosing to participate in the evaluation work- Shop was asked to select one staff member to attend. It was suggested that the agency director, if unable to attend, 34 35 should send the staff member usually responsible for evalu- ation and planning related tasks. Five program evaluation workshops were presented--three in Lansing, and one each in Grand Rapids and Southfield. Of approximately 150 organizations in each community invited to participate, a total of 56 organizations chose to do so: 28 in Lansing and 14 each in Grand Rapids and Southfield (Table 1). 
Of these participants, three chose not to complete the full two-day workshop (due to inappropriateness of material), seven chose not to continue after participating in the work- shop (generally due to time constraints or lack of interest), and three were deemed unacceptable for the sample (one parti- cipant exclusively provided evaluation services to other agencies; one participant was from the same agency as another participant; the participant from the third agency 'repre- sented a regional administrative unit for other service pro- viders and did not provide services). Experimental Design The dependent variables included the adoption and imple- mentation of program evaluation methods, knowledge of program evaluation methods, and agreement with evaluation practices. The experimental intervention consisted (H: the manipulation of interpersonal interaction with a: six-week, face-to-face consultation designed to promote the adoption of program evaluation methods. Participants not receiving the consul- tation intervention received only the workshop. Participants were randomly assigned to either a consultation group or 36 Table 1 Participants by Geographic Workshop Lansing Grand Rapids Southfield Total W1 W2 W3 Participants 17 7 4 14 14 56 Drop out during 2 1 3 workshop Drop out after 2 2 1 2 7 workshop Excluded after 1 1 1 3 workshop Total in Sample 13 4 4 12 10 43 for Pretest workshop-only group after participation in the workshop (Table 2). The assignment of participants and composition of groups will be further discussed below. Experimental Intervention The first phase of the intervention included the dissem- ination of the innovation. The innovation was disseminated through a two-day workshop in evaluation planning and meth- ods. Participants were also given a specially edited 200 page program evaluation manual in: assist iri the innovation dissemination. The purpose of the workshop was twofold: (1) to equalize across participants, as much as possible, knowledge of evaluation methods, and (2) 1m) provide a com- parison with the method most commonly used by policy makers to change the practices of human service organizations, i.e., workshops. 37 Table 2 Experimental Conditions Consultation with Workshop workshop only Lansing n = 15 n = 10 Grand Rapids n = 8 n = 5 Southfield n = 6 n = 6 Drop out n = 5 n = 2 Sample n = 24 n = 19 Program evaluation was conceived to include several diverse components, ranging from establishment of program objectives and goal-setting to use of randomized experiments. A review (Hi evaluation taxonomies (Fairweather & Tornatzky, 1977; Rossi, Freeman, & Wright, 1979; Suchman, 1967; Weiss, 1972) was conducted to determine the most common components of program evaluation. The primary features of program evaluation comprised the content of the evaluation workshops. These features include: 1. Determining service goals based on partici— pation of staff and clients. 2. Establishing measureable objectives, including the use and interpretation of standardized measurement instruments. 3. Accurate and reliable record-keeping. 4. Measurement of client satisfaction. 5. Measurement of cost-service ratios, including cost/benefit and cost-effectiveness ratios. 6. Measurement of program implementation. 7. Measurement of program effectiveness, includ- ing pre-experimental, quasi-experimental, and experimental designs. 8. Measurement of program impact on the surround- ing community, including needs assessments. 
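These eight components can be treated as a simple checklist when thinking about what adoption of the evaluation innovation means for a single organization. The sketch below is a hypothetical illustration of scoring partial adoption across the components; the keys, the 0-2 scale, and the example values are invented and are not the scoring rules used by the study's instruments.

```python
# A hypothetical sketch of scoring one organization's adoption across the
# eight evaluation components listed above. Keys and scale are invented.
components = [
    "service_goals", "measurable_objectives", "record_keeping",
    "client_satisfaction", "cost_service_ratios",
    "implementation_measurement", "effectiveness_measurement",
    "community_impact",
]

# 0 = not adopted, 1 = partially adopted, 2 = fully implemented (illustrative)
adoption_report = {
    "service_goals": 2,
    "measurable_objectives": 1,
    "record_keeping": 2,
    "client_satisfaction": 1,
    "cost_service_ratios": 0,
    "implementation_measurement": 0,
    "effectiveness_measurement": 0,
    "community_impact": 1,
}

# Proportion of full adoption across all eight components.
adoption_score = sum(adoption_report[c] for c in components) / (2 * len(components))
print(f"proportion of full adoption: {adoption_score:.2f}")
```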
38 Workshops consisted primarily of didactic modules struc- tured around each of the above topics. Small group exercises were interspersed between didactic sessions to allow partici- pants the opportunity to apply the information to their own service. Pretest measures were also administered during the workshop. An outline of the workshops is provided in Appen- dix A. Workshops were conducted during March (Lansing), April (Grand Rapids), and June (Southfield) of' 1981. The consultation intervention began within two weeks after the end of the workshops in each respective site. The consultation intervention was designed to facilitate the adoption (H’ the evaluation innovation presented during the workshops. Participants were randomly assigned either to one of five consultation groups or to a workshop—only control group (see Table 3 for the assignment to conditions). Each consultation group received the same treatment. The creation of small groups was necessary because 'H: was felt that the consultation may more easily be provided to a smaller number of people and the exchange of resources was one of the inter- vention components. The size of each group was determined by the best compromise between having groups large enough to provide a: basis for exchange, but small enough to allow as many experimental replications as possible. Because four members of the first Lansing consultation group withdrew from participation at the time of the ffirst consultation session, groups 1 and 2 were combined to provide a single group (H: 7 members. (These participants withdrew 39 from the consultation groups to which they were assigned for different reasons. One person was not allowed by her parent agency to participate because the parent agency provided evaluation services; they felt these should tna used rather than an outside consultant. The other three people had to withdraw because of other time commitments. They each stated that if the consultation were to be provided in the future they would choose to participate.) Table 3 Experimental and Control Groups Consultation with workshop Workshop only Total Lansing Group 1 n = 6 n = 10 n = 25 Group 2 n = 5 Group 3 n = 4 Grand Rapids n = 8 n = 5 n = 13 Southfield n = 6 n = 6 n = 12 Drop out after assign. to group n = 5 n = 2 n = 7 Total Sample at pretest n 24 n 19 n 43 The consultation was designed to increase knowledge of evaluation methods, assist in the development of positive attitudes toward the use of evaluation, and foster adoption of evaluation methods through the use of participative goal- setting. 4O Consultation sessions were held weekly for six consec- utive weeks in each of the three cities. In two of the research sites (Lansing and Southfield), sessions were con- ducted in an office provided by the local AAA. In the third city, sessions were conducted in an office provided by one of the participants. Each consultation session lasted from two to three hours. Approximately time same structure obtained for each meeting (see Appendix B for an outline of each con- sultation session). The first half-hour was spent reviewing material presented in the workshop. The second activity required participants to report to the group the accomplish- ment of program evaluation objectives selected the previous week. The final activity consisted of each participant setting new objectives to be accomplished by the following week. Objectives were agreed upon through negotiation between the group-leader and participant. 
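One way to picture the weekly goal-setting records just described is as a simple log from which the process measures reported later (sessions attended, objectives set, objectives achieved) can be tallied. The sketch below is hypothetical; the field names and example objectives are invented and are not taken from the actual goal-setting sheets.

```python
# A hypothetical sketch of tallying weekly goal-setting records into
# per-participant process measures. Field names are invented.
from dataclasses import dataclass

@dataclass
class WeeklyObjective:
    participant: str
    week: int          # consultation session number (1-6)
    description: str   # written specifically enough for someone else to carry out
    achieved: bool = False

log = [
    WeeklyObjective("P01", 1, "Hold one staff meeting on evaluation; invite the director and two case workers", True),
    WeeklyObjective("P01", 2, "Select a standardized client-satisfaction instrument", True),
    WeeklyObjective("P01", 3, "Pilot-test the instrument with five clients", False),
]

def process_measures(objectives, participant):
    mine = [o for o in objectives if o.participant == participant]
    return {
        "sessions_attended": len({o.week for o in mine}),
        "objectives_set": len(mine),
        "objectives_achieved": sum(o.achieved for o in mine),
    }

print(process_measures(log, "P01"))
```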
All objective setting and accomplishment was conducted publicly and recorded with multiple copies, one for the group-leader and one for the participant. Participants determined at the first session the eval- uation goal they ultimately wished to achieve. For some, this included only the development and administration of a needs assessment instrument; for others, this included an evaluation of effectiveness using a pretest-posttest, quasi- experimental design. Weekly objectives consisted of small steps in the direc- tion of the larger goal. For example, if the ultimate goal 41 was an evaluation of program effectiveness, the participant might first set an objective to have a staff meeting to discuss this idea with relevant staff members. (This was encouraged.) When recording this objective, the participant was instructed to be sufficiently specific that someone else could accomplish the objective in the absence of the parti- cipant. In the case (N: a staff meeting, for example, the number of meetings would be determined, persons to be invited would be named, and if possible, a date and location would be selected. Each week objectives were chosen to require greater and greater behavioral commitment ix) the accomplishment of 'the ultimate goal. Continuing the example above, the second session would require the participant to report accomplish- ment of the previously set objective. If the entire objec- tive was not accomplished successfully, the participant again set the unaccomplished portion of the objective for the sub- sequent week. Additionally, new objectives requiring greater commitment were set. For example, the participant might next determine possible outcomes to examine, select standardized measurement instruments, select la sample for' a pilot-test, conduct a pilot-test, and refine the measurement instrument. The objectives set in the last session (6th) of the consul- tation intervention typically included dates for pretesting, administering the intervention, and posttesting. In some cases, implementation of the innovative evaluation practice began before the (uni of the consultation intervention. In 42 this fashion, participants progressively adopted and imple- mented more and more of the innovative practice, thus insur- ing trialability (Rogers & Shoemaker, 1971). In addition to the effectiveness of the intervention in fostering the adoption and implementation of program evalu- ation methods, characteristics of organizations and their environment, suggested above to be related to adoption, were measured. These characteristics were believed to moderate the success of the intervention and were interpreted as pre- dictive independent variables. The scales used ix) measure adoption and implementation of evaluation practices, as well as all other variables, are presented below. First, however, the method used for scaling will be discussed. Scaling and Data Reduction Several general comments are necessary' regarding mea- surement 'Hi the present study. First, scales tested pre- viously in innovation research were used where possible. This is important because the present research is partly an attempt to synthesize previously disparate empirical domains. Use of these scales is critical if the obtained results are to provide a nmaningful synthesis. Second, several of the scales used in the present study have been previously admin- istered to organizations providing services to older adults throughout Michigan (Davis, 1981a). These scales were also used in the present research. 
This larger, state-wide sample (n = 108 organizations) was used to determine the psycho- metric characteristics 01: most scales used iri the present 43 study. The workshop sample (n = 56) in the present research was used to confirm scale dimensions and to determine the psychometric characteristics of those scales used for the first time (Workshop Effectiveness, Consultation Effective- ness, and Evaluation Interview). Finally, scales and items used to measure variables interpreted an; the organizational level of' analysis, e.g., size, complexity, centralization, formalization, interorganizational relations, and organi- zational stability, are based on individual responses aggre- gated within their respective organizations. Scales created to measure variables interpreted at the individual level of analysis, e.g., agreement with evaluation practices, are not aggregated. Psychometric analyses for aggregated responses were conducted at the aggregate level because this is the level at which responses are interpreted (cf. Sirotnik, 1980). The psychometric procedures used adhere to the format developed by .hflui Hunter (Hunter, 1977; Hunter 8. Gerbing, 1979; 1980). Measurement models for each scale were created, includ- ing those scales used in previous research. The measurement model specified presumed relationships between underlying traits and manifest indicators. Inter-item correlation matrices for each scale were computed. An oblique, multiple groups factor analysis (Gorsuch, 1974; Harmon, 1978) using PACKAGE (Hunter 8 Cohen, 1969) was used to estimate the parameters of the measurement model. Finally, the fit of the data to the measurement model was determined by examining 44 three criteria for unidimensionality suggested by Hunter (Hunter, 1977; Hunter 8 Gerbing, 1979; 1980): (1) homoge- neity of item meaning, (2) internal consistency, and (3) parallelism, or external consistency. Items were deleted and rearranged to improve the fit of the observed data to the measurement model until a sun: of unidimensional scales was obtained. 1. Homogeneity of Item Meaning. Previous usage and a priori estimates of item content were used to define mean- ingful clusters of items--forming scales and subscales. These scales and subscales represented the a priori measure- ment model. The parameters of the measurement model were estimated with oblique, multiple groups confirmatory factor analysis (Gorsuch, 1974; Harmon, 1976). Communalities were used in the diagonal so that correlations between items and cluster true scores, and correlations between cluster true scores, could be computed (Hunter 8 Gerbing, 1979, p. 16). 2. Internal Consistency. The assumption underlying the measurement of internal consistency is the existence of a linear relationship between cluster true scores and the items used in) measure them. If this linear relationship holds, measurement errors associated with items are uncorrelated with each other or with item true scores. The lack of corre- lation between item errors of measurement is the definition of internal consistency, i.e., measurement error arises from error associated with sampling items from the content domain 45 . and is random (Hunter 8 Gerbing, 1979, p. 20; Nunnally, 1967, p. 206ff). The extent of this measurement error was measured with coefficient alpha (Cronbach, 1951). Cluster items were rearranged to conform to the product rule for internal con- sistency (Hunter, 1973; Hunter 8! 
Gerbing, 1979): (a) all items were examined for equal quality i.e., similarity of inter-item correlations with item-cluster true score corre- lations; (b) the matrix of all items was examined for a strong-weak gradient. Finally, to provide a more rigorous measure (H: internal consistency, cluster scores were par- tialed out of their respective scales. Perfectly unidimen- sional scales should produce partial correlations equal to zero. 3. Parallelism. Parallelism refers to the correlation of items with other items outside of the cfluster in which they-are a member (Hunter 8 Gerbing, 1979). The degree of parallelism was determined first by examination of these correlations. Also, similarity coefficients were computed to provide a summary score of the parallelism of items within clusters (Hunter, 1973; Hunter 8 Gerbing, 1979). Only predictive independent variables and dependent variables were scaled. Descriptive independent variables were represented by single indicators and, consequently, did not require the application of data reduction procedures. Data Collection Instruments The data were gathered with eight measurement instru- ments. These instruments, and the variables they were intended to measure, are discussed below. .A discussion of 46 the psychometric characteristics of the scales contained in these instruments follows. 1. Evaluation Self-Report. This is a 22 itmn scale which asks respondents to report the frequency' of common evaluation practices in their organization. This scale was administered during the workshop (pretest) and at the end of the consultation intervention (posttest). (See Appendix C). 2. Evaluation Interview. This is. another measure of evaluation practices, consisting of 32, semi-structured interview items asking respondents to report the onset and frequency of evaluation practices in their organization. Respondents were interviewed at the end of the consultation intervention and one month later during a follow-up measure- ment period. (See Appendix D). 3. Agreement With Evaluation Practices. This scale consists of 22 items and is intended to measure how strongly individual staff in each (H: the participating organizations agree with currently accepted program evaluation practices. This scale was administered during the workshop and at the end of the consultation intervention. (See Appendix E). 4. Project/Service Information. This questionnaire collected all descriptive information, like age and educa- tion, and all predictive variable information, like central- ization and complexity. It consists of 35 items and was administered during the workshop and at the end of the con- sultation intervention. (See Appendix F). 47 5. Project Interaction. This is a sociometric type rating scale consisting (H: a random selection of organiza- tions providing services to the elderly in each geographical research site. Additionally, all organizations participating in the research were included in the list of rated organiza- tions. The number (M: rated organizations ranged froni 54 (Lansing) to 64 (Southfield). This scale was administered during the workshop and at the end of the consultation inter- vention. (See Appendix G). 6. Evaluation Knowledge. This fifteen item scale uses a multiple choice format to test workshop participants' knowledge (H: program evaluation concepts taught during the workshop and explained in the provided evaluation manual. This instrument was given at the 4 week follow-up measurement period. (See Appendix H). 7. Workshop Effectiveness. This short. 
scale consists of six items written to tap workshop participants' perception of the evaluation workshop. This scale was administered at the end of the evaluation workshop. (See Appendix I). 8. Consultation Effectiveness. This questionnaire col- lects information representing consultation participants' perception of~ the effectiveness (H: the consultation inter- vention and their perceived ability to conduct program eval- uation as a function of their participation. This scale was administered at the end (Hi the consultation intervention. (See Appendix J). 48 All instruments, with the exception of the Consultation Effectiveness scale, were administered to all participants in the workshop. All instruments, with the exception of the Workshop Effectiveness, Consultation Effectiveness, and Evaluation Knowledge scales, were also administered to another person in each organization nominated by the workshop participant as knowing the most about that organization's practices. Participants nominated the staff member they believed ix) be most aware of organizational functioning. Nominated others were used to increase the reliability of responses (Seidler, 1974). Operationalization of ‘the con- structs measured with each (H: the above instruments, and their respective psychometric characteristics, are discussed below. Table 4 presents the schedule for administration of all measurement instruments. Operationalization of Constructs This section describes lunv each construct of’ interest discussed in the previous chapter was operationalized. The variables representing- these constructs are organized into four categories: manipulation checks, descriptive and pre- dictive process measures, and outcome measures. Manipulation Checks. The experimental manipulation was measured with two manipulation checks. The first check examined the effectiveness of the workshops in conveying evaluation information. Participants completed the Workshop Effectiveness Questionnaire lat the end (Hi the workshop (Appendix I). This was used to estimate the comparability of 49 Table 4 Measurement Schedule After During Inter- 4 Week Who Com- Scale Workshop vention Follow-up pleted* Evaluation Self-Report X X P, NO Evaluation Interview X X P, NO Agreement with Evaluation Practices X X P, NO Project/Service Information X X P, NO Project Interaction X X P Evaluation Knowledge X P Workshop Effectiveness X P Consultation Effectiveness (consultation group only) X P *P Participant in workshop, usually the program director. NO Nominated other 50 experimental and control groups after the workshop, but prior to the intervention. This questionnaire also provided a measure of participants' expected implementation of program evaluation practices as a result of workshop participation. The second manipulation check measured the perceived effectiveness of the consultation sessions. Participants completed the Consultation Effectiveness Scale at the end of the six-week consultation intervention (Appendix J). This was used to estimate the comparability of experimental groups and participants' rating of consultation related experiences and expectancies regarding post-consultation evaluation prac- tices. Descriptive Process Measures. All descriptive variables were included irI the Project/Service Information question- naire (Appendix F). These variables describe aspects of par- ticipants and their organizations that were believed to distinguish them from each other. 
While they were not the primary focus of the study, they may have acted to confound any effect found for the intervention. These items included the number of full-time and part-time paid and volunteer staff, number of years the agency had existed, the percent chance the project was expected to exist in the next fiscal year, number of years the respondent had worked in the agency, number of years the respondent expected to continue working in the agency, and respondents' age and sex. Additional process measures for participants in the consultation condition included the number of sessions 51 attended, the number of objectives set, and the number of objectives achieved. These data were used to estimate the degree of effort, that is, whether participants in the con- sultation intervention demonstrated individual differences in their implementation (H: the experimental condition. These data were collected from goal-setting sheets completed weekly by participants in the consultation intervention. Predictive Process Measures. At the individual level of analysis, agreement with current practices in program evalu- ation was measured. This was measured with the Agreement with Evaluation Practices instrument (Appendix E). This measure was designed to estimate the degree to which respon- dents believed evaluation and planning practices, commonly accepted as important, should be conducted in their organi- zation. The items were written such that the organization was the referent. Respondents were asked to rate on a five- point, Likert type scale the evaluation activities included iri the dependent measure. For example, respondents were asked to designate how strongly they agreed with, “My project/service should not record each client contact.“ Thus, the data provided a measure of how strongly respondents felt evaluation activities should be conducted in their organization and how frequently the respondents were actually performing these same activities. The behavioral anchor and organizational referent for these attitude items were meant to increase scale reliability and the ability of attitudinal change scores to predict behavioral change. 52 Other individual characteristics included education and training, job tenure, gender, and age. It should be recog- nized that some of these variables, like education, may also be interpreted as indicators (Hi organizational constructs. In fact, some variables were analyzed both at the individual and the organizational conceptual level. Measures of organ- izational characteristics used to represent predictive pro- cess variables included organization size, as measured by the number of staff, amount of budget, amount of budget committed to program evaluation, and measures of organization struc- ture, viz., complexity, centralization, and formalization. This information was included in the Project/Service Informa- tion questionnaire (Appendix F). Complexity, centralization, formalization, and interorganizational relations were the predictive variables of primary interest at the organizational level of analysis. Typically, their measurement, and the measurement of organi- zational structural variables in general, has been performed one of two ways. The first method is based on aggregated individual perceptions of organizational structure and has been called perceptual (Sathe, 1978), phenomenological (Tannenbaum 8 Smith, 1964), or questionnaire (Pennings, 1973; Ford, 1979) measurement. 
The second method of measurement attempts to rely on more direct measures of organizational structure and has been referred to as institutional (Ford, 1979; Pennings, 1973; Sathe, 1978) or, simply, structural measurement (Tannenbamn 8 Smith, 1964). Both methods have 53 been used frequently by organization researchers. The aggregated perception method was used in the pioneering work of Hall (1963) and 'Hi all of the previously cited work of Hage and Aiken. The institutional approach is well represented in the work of the University of Aston group (Child, 1972; Inkson, Pugh, 8 Hickson, 1970; Pugh, Hickson, 8 Hinings, 1969; Pugh, Hickson, Hinings, MacDonald, Turner, 8 Lupton, 1963; Pugh, Hickson, Hinings, 8 Turner, 1968, 1969). Each empirical approach has its weakness. As pointed out by Sathe (1978), the perceptual method, using aggregated questionnaire responses, has been criticized for generating "subjective" information, possibly biased by individual dif- ferences in attitudes and other characteristics. Moreover, the proper unit of aggregation may not be universally agreed upon. Sathe (1978) has also discussed the weaknesses in the more “objective“ institutional approach. The measurement of the presence of written manuals, charts, and other documents may be highly unreliable. Formal documents may be obsolete or organizations may only loosely adhere to them. Moreover, the reliance on only a small number of respondents, typically key informants, may be problematic, particularly when respon- dents may be less capable of providing veridical judgments. There have been few attempts to include both types of measurements in a single study. When such attempts have been made, the results have been equivocal. Empirical comparisons (Ford, 1979; Pennings, 1973; Samuel 8 Mannheim, 1970; Sathe, 54 1978) and methodological critiques (Walton, 1981) have revealed weak enui inconsistent convergent validity' between these different methods used to measure organizational structure. Sathe (1978) suggested these different methods of measurement may be tapping different aspects of organiza- tional structure, i.e., institutional methods may be measur- ing designed structure, the formal structure of the organi- zation; questionnaire methods may be measuring emergent structure, the degree of formal structure experienced by organizational members in work-related activities (Hi a day- to-day basis (p. 234). The questionnaire approach was used in the present study because (1) conclusive evidence does not exist to demonstrate the clear superiority of either measurement method, (2) there is no unambiguous agreement regarding construct validity, (3) the degree of formal structure experienced by organization members on a day-to-day basis may be most important (Sutton 8 Rousseau, 1979), and 04) the present research attempts to build on and extend the findings of Hage and Aiken, who used the questionnaire approach. Although scales previously used in Hage and Aiken's research were used in the presently reported research, item stems were changed to maintain a single referent, a weakness present in the original scales (Dtawar, Whetten, 8 Boje, 1980). Items in the scales used to meiasure all organizational characteristics were included in the: Project/Service Information and Project Interaction ques- tio nnaires (Appendix F). 55 Outcome measures. The outcome (H: primary interest is adoption and implementation of program evaluation practices. 
This is measured with the Evaluation Self-Report (Appendix C) and the Evaluation Interview (Appendix D). Degree of implementation is represented by the relative scores on each of these instruments. There is also interest in change in levels of agreement with evaluation practices and knowledge of evaluation methods. Scores on these instruments are used as both independent and outcome measures. As independent measures, they are used for their ability to predict scores on the two adoption scales. The efficacy of other characteristics in predicting scores on Agreement with Evaluation Practices and Evaluation Knowledge, and the impact of the intervention, is also examined.

Psychometric Characteristics

Size: Indicators for organization size included the number of full-time and part-time paid and volunteer staff (items 3-6 on the Project/Service Information instrument) and program budget (item 5). The base 10 logarithm was taken of program budget due to skewness of the distribution of this measure. The intercorrelation of these items was .46 (p < .001). The other indicator of size, amount of budget spent on program evaluation, did not significantly correlate with these two items, so it was analyzed separately.

Complexity: Indicators used to represent complexity included education (item 10), number of services provided (item 12), and the degree of extra-professional activity (items 13-17). The internal consistency of this scale was moderate (alpha = .65). (Unless otherwise noted, all alphas reported are standardized alphas.) The degree of internal consistency was due mostly to the professionalization subscale (alpha = .71). Correlations between this indicator, education, and number of services were not significant. These scales were analyzed separately. The matrix of partial correlations remaining after cluster scores were partialed from the observed interitem correlation matrix of all organization scales, providing further evidence of internal consistency, is provided in Appendix K. This scale also demonstrated acceptable parallelism. The matrix of similarity coefficients of all organizational scales is provided in Appendix L. (All displayed alphas, partial correlations, and similarity coefficients represent the fit of the scales derived from Davis (1981a) to the present sample.)

Centralization: Two scales were used as indicators of centralization. These included participation in decision making (items 24-27) and hierarchy of authority (items 26-32). The internal consistency of these scales was alpha = .91 and alpha = .84, respectively. See Appendix K for the partial correlations and Appendix L for the similarity coefficients.

Formalization: Two subscales were used to measure formalization. These were job codification (items 33-37) and rule observation (items 38-39). Their internal consistency was relatively high: alpha = .80 (job codification) and alpha = .72 (rule observation). See Appendix K for the partial correlations and Appendix L for the similarity coefficients.

Interorganizational Relations: Indicators for this variable came from the Project Interaction Scale. Items represent two conceptually related dimensions, i.e., frequency and importance of communication among organizations. It was not possible to contact all other organizations in each community and determine the percent of reciprocal choice. Scores for frequency and importance of interaction for each selecting organization were multiplied to yield a single measure of interaction strength.
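Two of the transformations described in this section are simple enough to show directly: the base-10 logarithm applied to program budget and the interaction-strength index formed by multiplying frequency by importance for each rated organization. The sketch below assumes, for illustration only, that the per-organization products are summed into a single index for the selecting organization; the organization names and rating values are invented.

```python
# A minimal sketch of the budget transformation and the interaction-strength
# index described above. All values are invented for illustration.
import math

budget = 250_000
log_budget = math.log10(budget)   # base-10 log reduces skew in the budget distribution

# Hypothetical Project Interaction ratings made by one participant:
# (frequency of contact, importance of contact) for each rated organization.
ratings = {
    "Org A": (4, 5),
    "Org B": (2, 3),
    "Org C": (0, 1),
}

# Interaction strength per rated organization = frequency x importance.
interaction_strength = {org: freq * imp for org, (freq, imp) in ratings.items()}

# Summing into one index per selecting organization is an assumption here.
total_strength = sum(interaction_strength.values())

print(log_budget, interaction_strength, total_strength)
```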
Agreement with Evaluation Practices: Indicators for this variable came from the Agreement with Evaluation Practices scale. While this scale was not rigorously unidimensional, the subscales were so highly intercorrelated they could not be used as separate multivariate predictors. Consequently, all items for this scale were treated as indicators of a single, unidimensional construct. When treated as items for one scale, the alpha was .89, providing an acceptable degree of internal consistency. Partial correlations and similarity coefficients are included in Appendix M and Appendix N, respectively.

Evaluation Knowledge: Indicators for this variable came from the fifteen items on the Evaluation Knowledge Scale. These items asked respondents to select the correct response from

[Table 12. Correlation Matrix of Organizational Structure Predictors: Evaluation Interview and Self-Report — rotated in the original; values not recoverable.]

Table 13
Variables in Equation: Post Interview, Organizational Structure Predictors

Variable                            b        SE      Overall F   Prob.
Degree                            -.296     .762       .151      .707
Professionalization                .242     .637       .144      .713
Rule observation                   .979    3.060       .102      .756
Percent spent on prog. eval.      -.223     .242       .848      .381
Job codification                  1.648    2.907       .321      .585
No. of total staff                -.044     .052       .703      .423
Budget                            1.506    2.541       .351      .568
Hierarchy of authority           -1.187    3.754       .100      .759
No. of services                   -.041     .187       .049      .829
Constant                         -5.083   18.605       .075      .791

Participation in decision making did not enter the equation; final F = .003.

Table 14
Regression Summary Table: Post Interview, Organizational Structure Predictors

Variable                        F to enter/remove   Prob.    R2    Overall F   Prob.
Degree                               3.793           .068   .182     3.793     .068
Professionalization                   .409           .532   .203     2.034     .163
Rule observation                      .185           .673   .212     1.349     .296
Percent spent on prog. eval.          .288           .600   .228     1.036     .423
Job codification                      .475           .503   .256      .893     .514
No. of total staff                    .373           .553   .278      .770     .608
Budget                                .330           .577   .547      .671     .694
Hierarchy of authority                .117           .739   .307      .554     .793
No. services                          .049           .829   .311      .451     .874

Adjusted R2 = 0; adjusted R2 at step 1 [value cut off in source].

Table 15
Variables in Equation: Follow-Up Interview, Organizational Structure Predictors

Variable                            b        SE      Overall F   Prob.
Budget                            3.123    2.014      2.404      .152
Hierarchy of authority           -2.379    3.093       .592      .459
Percent spent on prog. eval.      -.269     .156      2.974      .115
No. total staff                   -.052     .042      1.492      .250
No. services                      -.101     .159       .408      .537
Professionalization                .119     .519       .053      .822
Job codification                  1.074    2.451       .192      .671
Rule observation                   .936    2.525       .137      .719
Constant                         -7.963   14.839       .289      .603

Participation in decision making and degree were removed from the equation. Final F for participation = .0003; final F for degree = .0001.

Table 16
Regression Summary Table: Follow-Up Interview, Organizational Structure Predictors

Variable                        F to enter/remove   Prob.    R2    Overall F   Prob.
Degree 3.024 .100 .151 3.035 .100 Budget 1.391 .255 .219 2.243 .138 Hierarchy of authority .748 .401 .256 1.721 .205 Percent spent on prog. eval .713 .413 .292 1.444 .271 No. total staff 1.829 .199 .379 1.589 .231 No. services .418 .529 .398 1.724 .198 Professional- ization .083 .778 .403 1.349 .310 Job codification .091 .769 .408 1.082 .435 Rule observation .137 .719 .416 .889 .557 Degree was removed from the equation in step 6. Adjusted R2 = 0; adjusted R2 at step 1 = .10. 88 2. Evaluation Self-Report. The second measure of evaluation adoption was iflue Evaluation Self-Report. Organ- izational structural characteristics were also entered as a group to discover relationships with posttest scores on this outcome measure. As can be seen in Tables 17 and 18, these variables failed 11) demonstrate any' reliable relationships with this measure of outcome. The final adjusted R squared for this regression equation also equaled zero. Examination of the ratio of regression weights to their standard errors reveals considerable noise iri the equation. These results are contrary to a priori hypotheses and are probably a result of the homogeneity of the sample. This reduction in range, and possible remedies for it, will be further discussed below. 3. Agreement with Evaluation Practices. The same group of organizational structural variables was examined for any multivariate relationship with this measure of whether re- spondents believed common evaluation practices should be conducted within their organization. Because this scale measures a cognitive construct, it is more meaningfully interpreted at the individual level of analysis. The cri- terion, therefore, represents individual responses to this scale. Organizational characteristics were represented by mean responses within the respondents' organizations. Organ- ization means were assigned to each respondent within the organization to provide the organizational predictors. The regression equation inay be: interpreted as representing the 89 Table 17 Variables in Equation: Evaluation Self-Report, Organizational Structure Predictors Over— Variable b SE all F prob. Rule observation - .238 .518 .212 .656 Staff - .008 .008 .870 .375 Budget - .083 .421 .039 .848 Professional- ization .044 .105 .176 .685 Percent spent on prog. eval. - .014 .032 .182 .679 No. services - .013 .033 .180 .681 Job codification .203 .499 .165 .694 Participation in decision making .077 .241 .103 .756 Hierarchy of authority .112 .628 .032 .863 Constant 2.301 3.103 .549 .477 Degree did not enter the equation; final F = .0007. Regression 90 Table Summary Table: 18 Evaluation Self-Report, Organizational Structure Predictors Variable Rule observation No. total staff Budget Professional- ization Percent spent on prog. eval. No. services Job codification Participation in decision making Hierarchy of authority F to enter 2 Over- or remove Prob. R all F 1.000 .330 .056 1.004 .865 .366 .104 .930 .372 .551 .126 .720 .191 .669 .138 .559 .146 .708 .147 .448 .100 .757 .154 .365 .144 .712 .165 .311 .100 .758 .173 .262 .032 .863 .176 .214 Prob. .330 .415 .555 .696 .807 .887 .934 .965 .984 Adjusted R2 = O; adjusted R2 at step 1 = O. 91 influence of organizational characteristics on the level of agreement with evaluation practices demonstrated by individ- uals working within that organization. Tables 19, 20, euni 21 display the correlation matrix, variables in the equation and the regression summary table. 
Examination of these tables reveals that agreement with eval- uation practices was marginally related to participation in decision making and the number of staff in the organization. The causal priority of these variables remains to be determined. It is unknown whether organizations with decen- tralized decision making and large numbers of volunteer staff hire people with greater acceptance of evaluation practices, or whether workers with more positive feelings toward evalu- ation are attracted to, and continwe to work in, organiza- tions that possess more decentralized decision making and employ a large staff. Some investigators have reported re- sults which describe the effect. of organizational charac- teristics on attitudes of staff members within the organiza- tion (Rousseau, 1978; Sutton 8 Rousseau, 1979). Quite possi- bly both of these processes are at work. This "interaction- ist" perspective toward organization environments and the hiring and Inaintenance (H: staff has been suggested by Schneider (in press), and will be further discussed in the next chapter. NN-HN ”emcee z .Hoo.v a tee mHod a .4 mmo; a e .uwppwso cmmn m>mc mpcwoa Fmewomo 92 mo NH No- .NN 8H mo .mN 4mm .umea .He>e eHH; .eaem< .HH eNm HH NH- 8H ON 4H NH- No .He>m .moea :o “swam “smegma .oH No- NH- emN emN .NN No no mo outome .N no: *rkwqu HH ¥MN «H 00 ma .mao mpzm .m 41mm- No HH HH mo mo- .eHeou now .N eON mo- NH- mo No AHHeoepae to zguememw: .0 MN eeeem 8H CH meHeee eon -Huue eH .Hema .m NH ma tum cowume -Hmcowmmmeoea .q No No mmuw>ewm .02 .m «#rfim mwmpm .HOp $0 .02 .N Homesm .H w m o m w m N H wpnmwgw> mmuwpumea cowpmapm>m cur: acmemmem< "meouuwumea mezpuzeum Hmcowpmecmmgo mo xwepmz :owpmpmeeou ma mpnmh Variables Evaluation Practices, Organizational Structure Predictors 93 Table 20 in Equation: Agreement with Variable Participation in decision making No. of staff Rule observation Percent budget spent on prog. eval. Budget Job codification Degree Hierarchy of authority Professionaliza- tion No. of services Constant .2192 .0014 .1365 .0331 .1796 .1207 .0411 .1599 .0155 .0016 .9610 SE .116 .004 .249 .023 .198 .239 .062 .301 .050 .016 1.569 Over- all F 3.585 .065 .299 2.007 .826 .254 .432 .282 .097 .010 1.562 Prob. .069 .800 .589 .168 .372 .618 .517 .600 .758 .921 .222 94 Table 21 Regression Summary Table: Agreement with Evaluation Practices, Organizational Structure Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Participation in 5.319 .027 .132 5.319 .027 decision making No. of total 3.286 .079 .208 4.476 .019 staff Rule observation 1.229 .275 .237 3.415 .029 Percent spent on 1.404 .245 .269 2.943 .035 prog. eval. Budget .468 .499 .279 2.049 .059 Job codification .535 .470 .292 2.067 .087 Degree .466 .500 .304 1.806 .124 Hierarchy of .271 .607 .310 1.575 .177 authority Professionaliza- .091 .765 .313 1.365 .252 tion No. of services .010 .921 .313 1.184 .345 Adjusted R2 for entire equation = .048; adjusted R2 for first two steps = .162 95 4. Program Evaluation Knowledge. The final outcome measure to be discussed in this section is knowledge of evaluation practices. It will be recalled that this variable was measured wjth £1 fifteen question multiple choice test administered at the follow-up measurement period. This test was administered only in) participants 'hi the workshop. Because only 32 participants completed this questionnaire, regression coefficients must be interpreted cautiously. 
Tables 22, 23, and 24 display the correlation matrix, regression equation, and summary statistics, respectively. It can be seen that no measure of organizational structure was related to scores obtained on this scale. This result was not unexpected. It will be recalled from Chapter I that no a priori hypotheses were offered. Knowledge of program evaluation was analyzed to discover new and unexpected relationships.

We have seen that indicators of organizational structure demonstrated infrequent multivariate relationships with the four outcome measures. Participation in decision making, one of the indicators of centralization, was moderately related to scores on the Agreement with Evaluation Practices, accounting for about 13% of the variance in this measure. An additional 8% of the criterion variance was explained by including the number of total staff in the regression equation. Interpretation of these weights leads to the tentative conclusion that staff working in organizations with decentralized decision making and a larger number of employees are more likely to agree with currently accepted program evaluation practices.

[Table 22. Correlation Matrix of Organizational Structure Predictors: Knowledge of Program Evaluation — rotated in the original; values not recoverable.]

Table 23
Variables in Equation: Program Evaluation Knowledge, Organizational Structure Predictors

Variable                              b        SE     Overall F   Prob.
Participation in decision making    .1862    1.183      1.005     .362
No. of total staff                  .0265     .043       .374     .567
Degree                              .5624     .622       .817     .407
Percent spent on prog. eval.        .9993     .188       .283     .618
No. of services                     .1386     .183       .574     .483
Professionalization                 .3944     .752       .275     .622
Rule observation                    .8235    2.593       .101     .764
Job codification                    .8873    2.256       .155     .710
Budget                              .3353    2.145       .244     .882
Hierarchy of authority              .3798    2.932       .017     .902
Constant                            .9321   16.630       .088     .779

Table 24
Regression Summary Table: Program Evaluation Knowledge, Organizational Structure Predictors

Variable                         F to enter/remove   Prob.    R2    Overall F   Prob.
Participation in decision making      4.137           .061   .228     4.137     .061
No. total staff                        .791           .390   .272     2.433     .127
Degree                                 .438           .520   .298     1.6298    .220
Percent spent on prog. eval.           .356           .563   .320     1.295     .331
Services                               .403           .540   .346     1.060     .436
Professionalization                    .287           .605   .367      .868     .552
Rule observation                       .328           .582   .392      .736     .651
Job codification                       .138           .722   .403      .591     .761
Budget                                 .060           .814   .409      .462     .857
Hierarchy of authority                 .017           .902   .411      .349     .926

Adjusted R2 = 0; adjusted R2 at step 1 = .173.

Indicators of organizational structure were not successful in predicting responses on any other outcome measures, although the regression weight for participation in decision making approached significance (p = .061) for predicting scores on the evaluation knowledge scale. The failure of indicators of organizational structure to predict outcome responses was unexpected. Possible explanations for these discrepancies will be discussed below.

As mentioned earlier, another conceptual group of variables analyzed for their multivariate relationship with the outcome measures used in this study included indicators of the participating organizations' environment. These indicators included (1) an index score representing the frequency and importance of interaction and communication with other organizations in each of the research sites (Inter-org. relations), (2) the age of the organization (Org. age), and (3) the percent chance the organization would continue to exist in the next fiscal year (Continue to exist). These final two measures were intended to represent organizational stability. It was believed a priori that organizations coming from less stable environments and engaged in greater interaction with other community organizations would be more likely to adopt and implement the innovation.

Multivariate analyses of organizational environment characteristics were conducted at the aggregate level of
These indi- cators included (1) an index score representing the frequency and importance of interaction and communication with other organizations in each of the research sites (Inter-org. relations), (2) the age of the organization (Org. age) and (3) the percent chance the organization would continue to exist in the next fiscal year (Continue to exist). These final two measures were intended to represent organizational stability. It was believed a priori that organizations coming from less stable environments and engaged in greater interaction with other community organizations would be more likely to adopt and implement the innovation. Multivariate analyses of organizational environment characteristics were conducted at the aggregate level of 100 analysis, except for measurement of Agreement with Evaluation Practices and Knowledge of Program Evaluation. These last two criterion measures were measured at the individual level of analysis. Correlations between the predictors, regression equations, and summary tables of regression statistics are included in the tables below. Relationships with the outcome measures are presented in the following order: (1) post interview of evaluation practices, (2) follow-up interview of evaluation practices, (3) self-reported evaluation practices, (4) agreement with evaluation practices, and (5) knowledge of evaluation practices. 1. Evaluation Interview. Analysis of the multivariate relationship between organizational environment variables and the post and follow-up interviews measuring evaluation prac- tices revealed these variables to be unimportant in predict- ing outcome responses. No zero order correlations were significant (Table 25). The regression coefficients and the multiple correlation also were not significantly different from zero (Tables 26 to 29). The failure of interorgani- zational relations to predict innovation adoption was con- trary to a priori hypotheses and to the findings of other investigators (Becker, 1970a, 1970b; Hage 8 Aiken, 1968). The failure of the indicators of organizational stability was also contrary to a priori hypotheses and predictions of organizational theorists (Lawrence 81 Lorsch, 1967; March 8 Simon, 1958). Possible reasons for these discrepencies will be discussed below. 101 Table 25 Correlation Matrix of Organizational Environment Predictors: Evaluation Interview and Self-Report Variable 1 2 3 1. Org. age 2. Continue to exist 24 3. Inter-org. relations -15 10 4. Post interview 11 O3 O9 5. Follow-up interview 15 12 -O6 6. Self-report -05 17 11 Decimal points have been omitted. No zero order correlations were significant N range: 29 - 30 Table 26 Variables in Equation: Post Interview, Organizational Environment Predictors Variable b SE F Prob. Org. age .0179 .028 .394 .535 Inter-org. .0251 .046 .301 .588 relations Constant 1.747 1.447 1.459 .238 Percent chance continue to exist did not enter the equation; final F = .0005. 102 Table 27 Regression Summary Table: Post Interview, Organizational Environment Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Org. age .310 .582 .011 .310 .582 Inter org. .301 .588 .023 .302 .742 Adjusted R2 = O; adjusted R2 at step 1 = 0. Table 28 Variables in Equation: Follow-up Interview, Organizational Environment Predictors Variable b SE F Prob. Org. age .0167 .029 .330 .571 Continue to .0150 .031 .229 .636 exist .0134 .046 .084 .775 Inter-org. 
relations Constant 1.4353 2.856 .252 .620 103 Table 29 Regression Summary Table: Follow-up Interview, Organizational Environment Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Org. age .630 .434 .023 .631 .434 Continue to .201 .658 .030 .406 .670 exist Inter-org. .084 .775 .034 .289 .833 relations Adjusted R2 = 0; adjusted R2 at step 1 = O. 2. Evaluation Self-Report. Multivariate analysis of the posttest scores (H: the self-report scale of evaluation practices disclosed results that were parallel with responses to the evaluation interview, discussed above. None of the organizational environment variables were significant pre- dictors of responses to this scale. The regression coeffi- cients and multiple correlation were not significantly dif- ferent from zero (Tables I“) and 31). These predictors ex- plained no variance in this outcome measure. In sum, these indicators of the organization environment were unimportant in predicting adoption (H: the evaluation innovation. This result was contrary to what was expected. Other investigators have reported a positive relationship between interorganizational interaction and innovation adop- tion. Organizational theorists have also suggested a posi- tive relationship between turbulant environments and propen- sity to change, including general levels of innovativeness. 104 Table 30 Variables in Equation: Evaluation Self-Report, Organizational Environment Predictors Variable b SE F Prob. Continue to .0045 .005 .803 .379 exist Org. age -.0012 .005 .161 .691 Inter-org. .0027 .007 .135 .716 relations Constant 2.9142 .457 40.549 .000 Table 31 Regression Summary Table: Evaluation Self-Report, Organizational Environment Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Continue to .810 .376 .029 .810 .376 exist Org. age .237 .630 .038 .512 .605 Inter-org. .135 .716 .043 .375 .771 relations Adjusted R2 = O; adjusted R2 at step 1 = O. 105 3. Agreement with Evaluation Practices. The overall multivariate equation including organizational environment variables and responses to the Agreement with Evaluation Practices scale also failed to reach conventional levels of significance (Table 32 to 34). The F test for the regression coefficient for interorganiational relations revealed that its difference from zero was not very likely (p = .126). Moreover, the amount of variation in the agreement scale that could be explained by the entire group of predictors was almost entirely accounted for by interorganizational rela- tions (adjusted R squared for this coeffecient equaled .066; adjusted R squared for the whole equation equaled .075). That is, agreement with conducting an innovative practice in one's organization covaries with the frequency and importance of contact with other organizations. This finding confirms one of the a priori hypotheses. Table 32 Correlation Matrix of Organizational Environment Predictors: Agreement with Evaluation Practices Variable 1 2 3 1. Org. age 2. Continue to 24 exist 3. Inter-org. - 15 10 relations 4. Agree with 22* 22* -29* eval. prac. Decimal points have been omitted. * p < .05; N range: 59-81 106 Table 33 Variables in Equation: Agreement with Evaluation Practices, Organizational Environment Predictors Variable b SE F Prob. Inter-org. - .0057 .004 2.423 .126 relations Continue to .0036 .003 1.288 .262 exist Org. 
age .0024 .002 1.120 .295 Constant 3.723 .329 127.329 .000 Table 34 Regression Summary Table: Agreement with Evaluation, Organizational Environment Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Inter-org. 4.747 .034 .084 4.747 .034 relations Continue to 1.403 .242 .108 3.094 .054 exist Org. age 1.120 .295 .128 2.441 .175 Adjusted R2 = .075; adjusted R2 at step 1 = .066. 107 The relationship between interorganizational relations and agreement with evaluation practices might operate in the following way. Contact with other organizations might make one more aware of the innovation and could expose the poten- tial ad0pter to other professionals who are willing to hold in high esteem adopters of the innovation which they are already using. Unfortunately, the sign obtained on this regression coefficient was negative, suggesting the opposite conclusion--organizations not interacting frequently have staff members more likely to believe program evaluation should be implemented in their organization. Because the standard error of prediction for this coefficient was almost as large as the coefficient itself, this relationship should be interpreted very cautiously. The obtained marginal re- lationship may be spurious. 4. Program Evaluation Knowledge. The final criterion to be discussed in this section examining the influence of organizational environment variables is knowledge of program evaluation activities. Like the analysis of organizational structure discussed above, this analysis included only work- shop participants. It will be recalled that nominated others were not tested for their knowledge of evaluation. No zero order correlations were significant. Neither the regression coefficients nor the multiple correlation was significantly different from zero. Because there were TH) hypothesized relationships, and in the interest of conserving space, tables of these results are not provided. 108 The final group of predictor variables entered into multiple regression equations included indicators of' indi- vidual level constructs. These included education (highest degree), program tenure, expected tenure (number (H: years expected to stay in the program), age, sex, and agreement with evaluation practices. All of these variables were analyzed as predictors of each of the four outcome measures. The order of discussion will conform to that above, i.e., 1) post and follow-up interview of evaluation practices, 2) self-report of evaluation practices, 3) agreement with evaluation practices, and 4) knowledge (3f evaluation prac- tices. 1. Evaluation Interview. The degree of multicollinear- ity among all individual level predictors may 1%? seen in Table 35. While four correlations attained significance at the .05 level or lower, the highest correlation was only .40 (between tenure and age). (The partial regression coeffi- cient for tenure was almost zero when age was entered into the equation, suggesting tenure did not contribute much var- iation beyond that accounted for by age.) This level of multicollinearity was not considered strong enough to bias estimates (H: regression coefficients. Consequently, indi- cators were entered independently into the regression equa— tion. Three individual level variables successfully predicted responses on the evaluation post interview. 
These successful predictors included education, expected tenure in the program, and agreement with evaluation practices.

Table 35
Correlation Matrix of Individual Level Predictors
[This table is printed sideways in the source copy and its entries are not legible; as noted above, the highest correlation among the individual level predictors was .40, between tenure and age.]

Each of these regression coefficients exceeded chance probabilities (Tables 36 and 38). The F value for the multiple correlation resulting from this three-predictor equation was also highly significant (Tables 37 and 39). Evidence provided by the F value and the adjusted R squared (.233) leads to the conclusion that slightly over one-fifth of the variance in this measure of innovation adoption and implementation may be predicted by knowledge of individual staff members' education, expected tenure, and level of agreement with commonly accepted program evaluation practices. Examination of the sign of the regression coefficients suggests that organizations with staff who do not have advanced college degrees, who do not expect to remain long in the program, and who agree strongly with the practice of program evaluation will be likely to report higher rates of adoption. This profile of individuals is somewhat contrary to the results reported by Hage and Aiken (1970). This difference may have been related to the nature of program evaluation. Possibly, less educated staff who were less invested in the program were more idealistic regarding program evaluation. These staff may not have had negative experiences with the use of program evaluation results and, hence, had no reason to feel negatively toward its practice.

The ability of these variables to predict scores on this scale diminished at the follow-up measurement period. While the predictors entered the equation in the same order, only

Table 36
Variables in Equation: Post Interview, Individual Level Predictors

Variable                            b        SE        F      Prob.
Education                        -.8082     .240    11.329    .002
Expected tenure                  -.1222     .056     4.757    .036
Agree with eval. prac.-pretest   2.7312    1.244     4.821    .035
Age                              -.0376     .038      .982    .328
Tenure                           -.0387     .105      .136    .714
Sex                              -.1502    1.334      .013    .911
Constant                        -1.4740    5.408      .074    .787

Table 37
Regression Summary Table: Post Interview, Individual Level Predictors

Variable                          F to enter or remove    Prob.     R2    Overall F    Prob.
Education                               6.462             .015    .139      6.462      .015
Expected tenure                         3.525             .068    .210      5.197      .010
Agree with eval. prac.-pretest          4.961             .032    .302      5.471      .003
Age                                     1.814             .186    .334      4.644      .004
Tenure                                   .131             .719    .337      3.655      .009
Sex                                      .013             .911    .337      2.964      .019

Adjusted R2 = .223; adjusted R2 at step 1 = .246.

Table 38
Variables in Equation: Follow-up Interview, Individual Level Predictors

Variable                            b        SE        F      Prob.
Education                         .6492     .250     6.761    .013
Expected tenure                   .0965     .058     2.726    .107
Agree with eval. prac.-pretest    .1781    1.295     2.827    .101
Age                               .0300     .036      .715    .403
Sex                               .3002    1.379      .047    .829
Constant                          .345     5.612      .004    .951

Job tenure did not enter the equation; final F = .0007.

Table 39
Regression Summary Table: Follow-up Interview, Individual Level Predictors

Variable    F to enter or remove    Prob.    R2    Overall F    Prob.
Education 4.150 .048 .094 4.150 .048 Expected tenure 2.149 .151 .141 3.209 .051 Agree with eval. prac.-pretest 2.847 .100 .201 3.190 .034 Age .872 .356 .219 2.603 .052 Sex .047 .829 .221 2.038 .097 Adjusted R2 = -112; adjusted R2 at step 1 = .071. 113 the coefficient for education remained different from zero at the .05 level. The F test 1%”: the multiple correlation of these three variables continued to be significant, although the adjusted R squared was reduced in half. 2. Evaluation Self-Report. Predictor variables enter- ing significantly into the multiple regression equation for this outcome measure included age, agreement with evaluation practices, and sex. The regression coefficient for age was significant only when entered in the first step. No other regression coefficient deviated significantly from zero. The overall pattern of results suggests only a weak relationship between position on these predictors and the number of evalu- ation practices reported with this instrument. Unit in- creases in age, agreement with evaluation practices, and being female (70 percent of the sample was female) predicted slight increases irI reported, global evaluation practices. The resulting regression equation and regression summary statistics are given below in Tables 40 and 41. 3. Agreement with evaluation practices. Not surpris- ingly, scores from the pretest administration of this instru- ment were the strongest predictor of scores on this scale at the post measurement period. In fact, no other predictor was significantly different from zero. The F test for the multi- ple correlation was highly significant, as would be expected, given that most of the variance in the multiple correlation was explained by pretest scores. The adjusted R squared was .33, barely higher than the adjusted ll squared (.31) Variables in 114 Table 40 Equation: Evaluation Self-Report, Individual Level Predictors Variable b SE F Prob. Age .0093 .006 2.087 .157 Agree with eval. prac.-pretest .3271 .212 2.384 .132 Sex .2103 .227 .856 .361 Tenure .0121 .018 .457 .503 Education .0147 .041 .130 .720 Expected tenure -.0014 .009 .023 .880 Constant 1.229 .921 1.781 .191 Table 41 Regression Summary Table: Evaluation Self- Report, Individual Level Predictors F to enter 2 Over- Prob. Variable or remove Prob. R all F Age 5.075 .030 .113 5.076 .030 Agree with eval. prac.-pretest 2.689 .109 .170 3.989 .027 Sex .640 .429 .184 2.849 .050 Tenure .433 .515 .193 2.212 .086 Education .149 .702 .196 1.759 .146 Expected tenure .023 .880 .197 1.430 .231 authority Adjusted R2 = .059; .127 at step 2. 115 accounted 1%”: by only pretest scores (”1 the agreement with evaluation questionnaire. The regression equation employing all predictors, and appropriate summary statistics, are included in Tables 42 and 43. 4. Program Evaluation Knowledge. This is the final outcome measure to be discussed in this section. None of the regression coefficients were reliably' different from zero, nor was the F test for the multiple correlation significant. Finally, ther adjusted £2 squared equaled zero. Values of individual level predictors were unsuccessful in predicting knowledge of evaluation practices. Because 'none of the indicators were successful, no hypotheses were offered con- cerning possible multivariate relationships between indi— vidual level Tcharacteristics and scores on the evaluation knowledge scale, and to conserve space, no summary statistics are presented. 
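The adjusted R squared values reported throughout these tables shrink the sample R squared to account for the number of predictors in the equation. A minimal sketch of the standard correction is given below; it assumes the familiar Wherry/Ezekiel formula and uses a hypothetical sample size and predictor count rather than figures from the tables above.

def adjusted_r_squared(r2, n_cases, n_predictors):
    # Shrinkage-corrected R^2: 1 - (1 - R^2)(n - 1) / (n - k - 1),
    # floored at zero, matching the convention of reporting negative adjusted values as 0.
    value = 1.0 - (1.0 - r2) * (n_cases - 1) / (n_cases - n_predictors - 1)
    return max(value, 0.0)

# Hypothetical illustration: R^2 = .40 obtained from 50 cases and 5 predictors.
print(round(adjusted_r_squared(0.40, 50, 5), 3))   # 0.332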
Summary of Initial Regression Analyses Initial regression analyses included those variables believed to moderate scores on each of the outcome measures. Variables were clustered into three conceptual groups, i.e., organizational structure, organizational environment, and individual staff. The ability of these three groups to predict outcome scores successfully was weak and inconsis- tent. Aggregated organizational structure characteristics were successful only in predicting post scores on the agree- rmyn: with evaluation practices questionnaire. While these multivariate associations were not unexpected, the failure of 116 Table 42 Variables in Equation: Agreement with Evaluation Practices, Individual Level Predictors Variable b SE F Prob. Agree with eval. prac.-pretest .6568 .153 18.512 .000 Expected tenure .0096 .007 1.976 .168 Age - .0072 .004 2.570 .118 Tenure .0122 .013 .927 .342 Education - .0182 .028 .404 .529 Constant 1.640 .591 7.686 .009 Sex did not enter in the equation; final F = .0001. Table 43 Regression Summary Table: Agreement with Evaluation Practices, Individual Level Predictors . F to enter 2 Over- Variable or remove Prob. R all F Prob. Agree with eval. prac.-pretest 19.836 .000 .331 19.836 .000 Expected tenure 1.974 .168 .363 11.146 .000 Age 1.700 .200 .391 8.131 .000 Tenure 1.016 .320 .407 6.355 .001 Education .404 .529 .414 5.083 .001 Adjusted R2 = .332; adjusted 82 at step 1 = .315. 117 these variables to predict scores on each of the measures of adoption and implementation of evaluation practices was con- trary to earlier hypotheses. Indicators of organizational environment were also related to agreement with evaluation practices. There was no relationship between scores on these predictors and adoption and implementation of evaluation practices, or evaluation knowledge. Finally, individual differences were examined. These characteristics (education, agreement with evaluation practices, and expected tenure) were successful in predicting adoption and implementation of evaluation practices as measured with either the interview or the self-report questionnaire. Only pretest scores on the agreement with evaluation practices questionnaire signifi- cantly predicted scores on the posttest measure of agreement with evaluation practices. Most surprising among these results was the failure of organizational structural characteristics to 1”? related to adoption and implementation of evaluation practices, contrary to a priori hypotheses. The failure of these variables to provide significant predictors was most likely the result of the homogeneity (H’ the organizations 'hi the sample, and a consequent reduction in range of scale scores. Explanations for this failure will be discussed in the next chapter. As discussed in the introduction 1x1 this section, the next step of the regression analyses entered only variables providing significant predictors from -all three conceptual groups. Following Przeworski and Teune (1970), analyses 118 involving variables .at more ‘than one level of aggregation will be referred to as "comparative analysis“, as contrasted with analyses that are restricted to only the individual or organizational level. It should 1%? noted 'that comparative analysis presumes comparative theory. To borrow from Roberts et al. (1978), the unit of theory is comparative, rather than being restricted to either level of aggregation. 
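The comparative analyses just described combine predictors measured at different levels of aggregation in a single equation. The sketch below illustrates the data handling this implies, with organizational items averaged within each organization and the resulting means attached back to individual staff records; the field names and values are hypothetical, not drawn from the study data.

from collections import defaultdict

# Hypothetical individual-level records carrying an organization identifier.
staff = [
    {"org": "A", "participation": 4, "agree_pretest": 3.8},
    {"org": "A", "participation": 3, "agree_pretest": 4.1},
    {"org": "B", "participation": 2, "agree_pretest": 3.5},
]

# Aggregate the organizational structure item within each organization.
totals, counts = defaultdict(float), defaultdict(int)
for person in staff:
    totals[person["org"]] += person["participation"]
    counts[person["org"]] += 1
org_means = {org: totals[org] / counts[org] for org in totals}

# Attach the organization-level mean to each individual record so that
# predictors from both levels of aggregation can enter one regression.
for person in staff:
    person["org_participation_mean"] = org_means[person["org"]]

print(staff[0]["org_participation_mean"])   # 3.5 for organization A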
Significant Process Predictors Multivariate relationships for outcome measures will be presented in the same order as above, viz., 1) post and follow-up interviews of adoption and implementation of evalu- ation practices, 2) self-reported evaluation practices, 3) agreement with evaluation practices and 4) knowledge of evaluation practices. 1. Evaluation Interview. Posttest interview scores were significantly predicted by staff education, expected tenure and agreement with evaluation practices. All three predictors are psychological characteristics. Regression coefficients were significantly different from zero. The F test for the multiple correlation for all three variables exceeded chance probabilities (p < .003). The adjusted R squared reveals that 25 percent of the variance in the cri- terion was explained by these predictors. These relationships were also obtained at the follow-up administration (Hi this interview. At this point iri time, however, only the effect of the regression coefficient for education was not due to chance, although the coefficient for 119 expected tenure approached significance (p == .077). The F ratio for the multiple correlation continued to be signifi- cant but diminished (F == 3.19, df = 3, 38, p = .034). The adjusted R squared was reduced in half (.138). The correla- tion inatrices, regression equations and regression summary statistics are reported in Tables 44 to 48. 2. Evaluation Self-Report. Like the interview measure of evaluation adoption and implementation, only psychological level variables produced significant regression coefficients for this outcome measure. Staff members' age and level of agreement with evaluation practices were successful predic- tors. Regression coefficients were only about 2.5 times the size of their standard errors, providing a loose-fitting regression equation. The F test for the multiple correlation departed considerably from chance levels (F = 7.058; df = 2, 69; p = .002). Almost 15 percent of the adjusted variance in the criterion was explained by these two variables. The cor- relation matrix, regression equation and summary statistics are provided below in Tables 49 to 51. 3. Agreement with Evaluation Practices. Posttest scores (Hi the Agreement with Evaluation Practices question- naire were regressed first on aggregate level organizational level variables found previously to be significant. Struc- tural and environmental variables were entered simultaneously to provide a comprehensive interpretation at the organiza- tional level of analysis. These results are presented in Tables 52, 53, and 54. It may be seen that participation in 120 Table 44 Correlation Matrix of Significant Predictors: Evaluation Interview and Self-Report Variable 1 2 3 4 1. Education 2. Expected - 11 tenure 3. Agree with 22* 18 eval. prac. 4. Post inter- - 37** - 22 15 view 5. Follow-up - 31* - 18 12 85*** interview Decimal points have been omitted. *p< .05; ** N range: 42 - p < .01; *** p < .001 81 Table 45 Variables in Equation: Post Interview, Significant Predictors Variable Education Expected tenure Agree with eval. prac. Constant b SE F Prob. - .7808 .229 11.612 .002 - .1300 .054 5.713 .022 2.7211 1.216 4.961 .032 -3.4510 4.583 .567 .456 121 Table 46 Regression Summary Table: Significant Predictors Post Interview, F to enter 2 Over- Variable or remove Prob. R all F Prob. Education 6.462 .015 .139 6.462 .015 Expected tenure 3.525 .068 .210 5.198 .010 Agree with 4.961 .032 .302 5.471 .003 eval. prac. 
Adjusted R2 = .246; adjusted R2 at step 1 = .117. Table 47 Variables in Equation: Follow-up Interview, Significant Predictors Variable b SE F Prob. Education - .6249 .239 6.822 .013 Expected - .1033 .057 3.306 .077 tenure Agree with 2.1527 1.276 2.847 .100 eval. prac. Constant -2.0729 4.786 .188 .667 122 Table 48 Regression Summary Table: Follow-up Interview, Significant Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Education 4.150 .048 .094 4.150 .048 Expected 2.149 .151 .141 3.209 .051 tenure Agree with eval. prac. 2.847 .100 .201 3.190 .034 Adjusted R2 = .138; adjusted R2 at step 1 = .071. Table 49 Correlation Matrix of Significant Predictors: Evaluation Self-Report Variable 1 2 3 1. Age 2. Agree with eval. prac. O3 3. Self-report 33** 25** Decimal points have been omitted. ** p < .01 N range: 72 - 74 123 Table 50 Variables in Equation: Evaluation Self-Report, Significant Predictors Variable b SE F Prob. Age .0111 .004 6.874 .011 Agree with .3261 .146 4.977 .029 eval. prac. Constant 1.3520 .648 4.353 .041 Table 51 Regression Summary Table: Evaluation Self-Report, Significant Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Age 8.883 .004 .113 8.883 .004 Agree with 4.758 .033 .170 7.059 .002 eval. prac. Adjusted R2 = .148; adjusted R2 at step 1 = .100. 124 Table 52 Correlation Matrix of Significant Organizational Predictors: Agreement with Evaluation Practices Variable 1 2 1. Participation in decision making 2. No. of total 08 staff 3. Inter-org - 23* - 21 relations 4. Agree with eval. prac.-- posttest 36*** 31** -291! Decimal points have been omitted. * p < .05; ** p < .01; *** p < .001 N range: 54 - 81 Table 53 Variables in Equation: Agreement with Evaluation Practices, Significant Organizational Predictors Variable b SE Participation in .1825 .077 decision making N0. of total staff .0041 .002 Inter-org. -.OO44 .003 relations Constant 3.2766 .339 5.392 3.825 1.721 92.933 Prob. .024 .056 .195 .000 125 Table 54 Regression Summary Table: Agreement with Evaluation Practices, Significant Organizational Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Participation in 7.553 .008 .127 7.553 .008 decision making No. of total 4.986 .030 .205 6.559 .003 staff Inter-org 1.722 .195 .231 5.009 .004 relations Adjusted R2 = .185; adjusted R2 at step 1 = .110. decision making, number of full- and part-time paid and vol- unteer staff, and extent of interorganizational relations contributed to 11 regression equation providing a Inultiple correlation divergent from chance levels (F = 5.009; df = 3, 50; p = .004). Only the regression coefficients for par- ticipation and number of staff were different from zero. Slightly more than 18 percent of the variance in the cri- terion was explained by these three variables (adjusted R Squared = .185). The overall pattern of results suggests that these three organizational characteristics adequately Predicted the level at which staff members in organizations agree with evaluation practices. Of the three, participation in decision making clearly provided a superior predictor (adjusted R squared = .110), although its standard error was Slightly higher than preferred. 126 These three organizational variables were next combined with psychological variables shown in previous analyses to be successful predictors, i.e., pretest scores on the agreement with evaluation practices questionnaire. The correlation matrix of these predictors is included in Table 55. 
As would be expected, pretest scores provided the best estimate of posttest scores. In addition, the number of total staff con- tinued to be a relatively strong predictor (see Table 56 and 57). Other organizational characteristics diminished in their predictive strengh. The greater decline in predictive efficacy demonstrated by participation in decision making was due to its moderate correlation with pretest scores on the agreement with evaluation practices. Its partial beta weight, after entering pretest scores in step IL, was .154, although its zero order correlation vntfii the criterion was .36. The predictive superiority of pretest scores was not surprising. Noteworthy, instead, was the continued ability for indicators of organizational structure and environment in this comparative analysis to predict how strongly individuals demonstrated psychological characteristics like agreement with evaluation practices. While no causal ordering is sug- gested here, such possibilities will be discussed below. 4. Program Evaluation Knowledge. No variables signifi- cantly' predicted scores (Hi the scale measuring evaluation knowledge. Consequently, no tables of final predictors are provided. 127 Table 55 Correlation Matrix of Final Predictors: Agreement with Evaluation Practices Variable 1 2 3 4 1. Participation in decision making 2. No. of total 08 staff 3. Inter-org -23* -21 relations 4. Agree with eval. prac.-- pretest 42*** 15 - 27* 5. Agree with eval. prac.-- posttest 36*** 31** -29* 58*** Decimal points have been omitted. * p < .05; ** p < .01; *** p < .001; N range: 54 - 81 Table 56 Variables in Equation: Agreement with Evaluation Practices, Final Predictors Variable b SE F Prob. Agree with eval. prac.--pretest .5350 .142 14.251 .000 No. of total staff .0034 .002 3.412 .071 Participation in decision making .0737 .075 .952 .334 Inter-org. relations - .0023 .003 .633 .430 Constant 1.5714 .543 8.359 .006 128 Table 57 Regression Summary Table: Agreement with Evaluation Practices, Final Predictors F to enter 2 Over- Variable or remove Prob. R all F Prob. Agree with eval. prac.-- pretest 25.786 .000 .331 25.786 .000 No. of total staff 4.183 .046 .382 15.774 .000 Participation in decision making 1.197 .279 .397 10.956 .000 Inter-org relations .633 .430 .404 8.315 .000 Adjusted R2 = .356; adjusted R2 at Step 1 = .318. Omitted from the discussion of the multivariate analyses thus far has been a presentation of the predictive impact of the intervention condition. That is, does participation in the experimental condition predict outcome scores better than organizational and individual level constructs? To answer this question, intervention group membership was dummy coded (Cohen 8- Cohen, 1973); organizations in the control groups were coded () and members of the experimental groups were coded 1 on this dummy variable. This dummy variable was then entered into each of the regression equations including final predictors. Thus, the proportion of variance due in) the intervention may be compared to the amount of variance due to other significant predictors. Inclusion of both significant 129 predictors and the dummy variable representing intervention group membership maximizes the amount of information in the obtained data that can 1H3 used to produce the best fitting regression equation. 
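The dummy coding procedure described above can be made concrete with a short sketch. The data, coefficients, and sample size below are invented for illustration only; the point is simply the mechanics of coding control organizations 0 and experimental organizations 1 and comparing the fit of the equation with and without that term.

import numpy as np

def r_squared(predictors, criterion):
    # Ordinary least squares with an intercept column; returns the squared
    # multiple correlation for the fitted equation.
    design = np.column_stack([np.ones(len(criterion)), predictors])
    coefs, *_ = np.linalg.lstsq(design, criterion, rcond=None)
    residuals = criterion - design @ coefs
    return 1.0 - residuals.var() / criterion.var()

rng = np.random.default_rng(0)
education = rng.integers(1, 6, size=40).astype(float)     # hypothetical degree code
intervention = rng.integers(0, 2, size=40).astype(float)  # 0 = control, 1 = experimental
adoption = 2.0 + 1.5 * intervention - 0.4 * education + rng.normal(0.0, 1.0, 40)

r2_without = r_squared(education.reshape(-1, 1), adoption)
r2_with = r_squared(np.column_stack([education, intervention]), adoption)
print(round(r2_with - r2_without, 3))  # increment in R^2 due to intervention membership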
Discussion of 1) post and follow-up interviews of evalu- ation practices is followed by presentation of 2) self— reported evaluation practices, 3) agreement with evaluation practices, and 4) knowledge of evaluation practices. Finally, the chapter will conclude with a pmesentation of the rela- tionship between components of the intervention and ‘those outcome measures demonstrating differences as a result of the intervention. Final Regression Equations 1. Evaluation Interview. Posttest scores on the inter- view measuring adoption and implementation of evaluation practices were more reliably predicted by addition of group membership in the regression equation. The adjusted R squared for the multiple correlation was increased approxi- mately 50% (from .246 to .361). As would be expected, the regression coefficient for education continued to be signi- ficant, while the predictive power of agreement with evalu- ation practices diminished 1x1 slightly greater 'than chance levels of significance (p == .078). The regression coeffi- cient for expected tenure was no longer signicant due to its significant negative correlation (I: = -.29; p == .019) with intervention group membership, resulting in a negative par- tial correlation not "unfit greater than zero (partial I: = - .09) when group membership entered the equation. 130 The influence of intervention group membership increased at the~ follow-up administration of ‘this instrument. This additional influence was due in) the diminished predictive capability of the other variables. This waning trend was discussed above. The adjusted R squared for the regression equation remained approximately equal to the adjusted R squared for the post administration of the evaluation inter- view. The relative ranking (Hi the predictors remained un- changed. The correlation matrix of predictors is provided in Table 58. Tables 59 and 60 provide the regression equation and summary statistics for the post interview; Tables 61 and 62 provide the same data for the follow-up interview. 2. Evaluation Self-Report. The addition (H: the dummy coded intervention group variable explained no :significant additional variance in this outcome measure. The individual level characteristics of age and agreement with evaluation practices continued to be sufficient to explain approximately 15% of the variation in this criterion. The failure of intervention group membership to contribute to the regression equation was not surprising given the repeated measures analysis of variance results involving this measure reported above. Because no additional information was provided by the addition of dummy coded intervention group membership, re- gression coefficients and summary statistics remained the same. This information was presented above in Tables 50 and 51. Correlation Matrix of Final Predictors and Intervention: 131 Table 58 Evaluation Interview Variable 1 2 3 4 5 1. Education 2. Expected - 11 tenure 3. Agree with 28* 17 eval. prac. 4. Intervention - 14 - 29* O9 5. Post inter- - 37** - 22 15 51*** view 6. Follow-up interview - 31** - 18 12 58*** 85*** Decimal points have been omitted. * p < .05; ** p < .01; *** p < .001 N range: 42 - 81 Table 59 Variables in Equation: Post Interview, Final Predictors and Intervention Variable b SE F Prob. Intervention 2.4941 .891 7.832 .008 Education - .6464 .216 8.928 .005 Agree with eval. 2.0832 1.148 3.295 .078 prac. 
Expected tenure - .0792 .053 2.208 .146 Constant - 3.3168 4.219 .619 .437 132 Table 60 Regression Summary Table: Post Interview, Final Predictors and Intervention F to enter 2 Over- Variable or remove Prob. R all F Prob. Intervention 14.165 .001 .261 14.165 .001 Education 5.658 .022 .355 10.736 .000 Agree with 2.125 .153 .389 8.072 .000 prog. eval. Expected tenure 2.208 .146 .425 6.798 .000 Adjusted R2 = .361; adjusted R2 at step 1 = .243. Table 61 Variables in Equation: Follow-up Interview, Final Predictors and Intervention Variable 6 SE F Prob. Intervention 3.2210 .877 13.491 .001 Education - .4514 .213 4.496 .041 Agree with eval. prac. 1.3288 1.129 1.385 .247 Expected tenure - .0376 .052 .515 .478 Constant - 1.8987 4.152 .209 .650 133 Table 62 Regression Summary Table: Follow-up Interview, Final Predictors and Intervention . F to enter 2 Over- Variable or remove Prob. R all F Prob. Intervention 20.353 .000 .337 20.353 .000 Education 3.362 .074 .390 12.458 .000 Agree with 1.066 .308 .406 8.675 .000 prog. eval. Expected tenure .514 .478 .416 6.552 .000 Adjusted R2 = .351; adjusted R2 at step 1 = .323. 3. Agreement with Evaluation Practices. Variables previously demonstrating significant relations with this outcome measure included participation in decision making and number of full-time and part-time paid and volunteer staff. The addition of group membership did not increase the pre- dictive power of the regression equation. In fact, the adjusted R squared was somewhat reduced when group membership was included in the equation. The F test for the multiple correlation continued to be significant. The overall pattern of results suggests only knowledge of pretest scores on this scale and number of staff were necessary to explain approxi- mately 36 percent (H: the variation 'hi this scale. Because knowledge of intervention group membership did not increase the amount of explained variance, regression summary infor- mation is not provided. The significant regression equation 134 presented above in Table 56 continued to provide the best estimate of scores on this outcome measure. 4. Program Evaluation Knowledge. It will be recalled that no variables examined in the multiple regression analysis successfully predicted scores on this criterion. Moreover, as reported above, :1 one-way ANOVA revealed that intervention group membership was unrelated to scores on this instrument. It can 1H3 seen after examining the tables above that membership irI the intervention condition contributed signi- ficantly only to the regression equations for the post and follow-up interview measure of adoption and implementation of evaluation practices. This result will be interpreted fur- ther in the next chapter. The small number (H: respondents participating in the intervention precluded the use of multivariate techniques to determine the contribution of components of the intervention to innovation adoption. Consequently, only zero order cor- relations were used to estimate the relationship between intervention components and outcome measures. The correla- tion matrix of these variables is displayed in Table 63. Intervention components included the number of goals achieved during the intervention, satisfaction with the intervention, and evaluation knowledge. Post measurement of evaluation knowledge was not treated as an outcome measure in 135 this analysis. It can easily be seen that goal-setting con- tributed the major explanation for the success of the inter- vention. 
Participants rated equally high their satisfaction with the workshop and consultation intervention. Table 63 Correlation Matrix of Intervention Components and Evaluation Interview Scores Variable 1 2 3 4 5 6 1. Number of goals achieved 2. Consultation satisfaction 20 3. Workshop satisfaction 08 62*** 4. Evaluation knowledge 12 -08 -11 5. Post Interview 62*** -1O 20 O9 6. Follow-up interview 66*** -30 03 16 85*** Decimal points have been omitted. * p<.05; ** p<.01; *** p<.001 N range: 12-81 136 Summary of Results Directors and nominated others in organizations partici- pating in the evaluation workshops were randomly assigned to either a”) experimental condition, where they received six weeks of consultation using written goal-setting, or to a control group, where they received no further treatment. A one-way ANOVA, intraclass correlation coefficients, and per- cent exact agreement. were used to estimate the extent of intraorganizational agreement. Those variables that did not show agreement may have failed to do so because of a restric- tion in range of scale responses. It was decided to aggre- gate organizational characteristics because they were more meaningfully interpreted at the aggregate level, and because the evidence for disagreement was not especially strong. Analysis of pretest scores revealed differences between members of the control and experimental groups only in res- pondents' expected tenure 'Hi the organization. Differences between control groups existed for the number of full-time paid and volunteer staff and number of part—time volunteer staff. Experimental group organizations demonstrated dif- ferences in the number of services provided and level of agreement with the practice of evaluation activities in their respective organizations. Evidence documentjru) the effec- tiveness of the experimental intervention was equivocal. Interview measures of adoption revealed a highly significant main effect for the experimental treatment, explaining about 21 percent of the outcome variance. This effect was not 137 found in the self-report measure of evaluation adoption. The intervention also failed to produce differences between ex- perimental and control groups in the level of agreement with evaluation practices or knowledge of program evaluation methods. These results were contrary to a priori hypotheses. Intervention condition membership explained additional vari- ation only in the interview instrument, contributing to twice the amount of explained variation in the criterion measure. Final significant multiple regression equations are displayed in Table 64. Table 64 Final Significant Multiple Regression Equations Criterion Predictors Post Interview = (-3.32) + (2.49) intervention group + (-.65) education + (2.08) agree- ment with evaluation practices + (-.08) expected tenure R2 = .43; Adjusted R2 = .24 Follow-up (-1.90) + (3.22) intervention group + Interview (-.45) education + (1.33) agree- ment with evaluation practices + (-.O4) expected tenure R2 = .42; Adjusted R2 = .35 ( 1.57) + (.53) agreement with evaluation Agreement with Evaluation practices--pretest + (.003) no. Practices-- of staff + (.07) participation in Posttest decision making + (-.002) inter- organizational relations R2 = .40; Adjusted R2 = .36 CHAPTER IV Discussion The presentation and discussion (#1 a study having the magnitude of the present one is always difficult. 
One always risks becoming detailed to the point of tedium or maintaining a level of explanation that forsakes important detail and suffers from superficiality. Every attempt has been made to maintain a balance between these two extremes. A restatement of the a priori hypotheses, and their confirmation or disconfirmation, will begin the chapter. Following this presentation, major flaws in the reported research will be laid bare. Finally, implications of the findings and suggestions for future research will be discussed.

Confirmation of Hypotheses

It should be recalled that both experimental and correlational hypotheses were suggested in chapter one. The experimental hypotheses referred to changes likely to occur as a consequence of participation in the consultation intervention. Specifically, it was argued that participants in the experimental intervention would demonstrate: 1) greater adoption and implementation of evaluation practices, 2) more favorable agreement with evaluation practices, and 3) greater knowledge of evaluation practices. Discussion of the confirmation of these hypotheses will proceed in the same order.

Experimental Hypotheses

1. Greater adoption and implementation. The results concerning this hypothesis were equivocal. With one measure of evaluation practices (Evaluation Self-Report) no change was found. Examining pretest and posttest scores on this measure, members of experimental and control groups did equally well; posttest means were identical for the two groups. With the other measure of evaluation practices (Evaluation Interview), a significant main effect for participation in the consultation intervention was revealed, with a substantial amount of the variance in this measure accounted for by group membership (ω² = .21). With neither measure was a significant time-by-intervention-condition interaction discovered. Several possible explanations for these contradictory results immediately suggest themselves. First, one might argue that the innovation adoption reported in the interview really represented expectancy or experimenter demand effects. Second, one might argue that these measures were not measuring the same construct, hence disagreement between them should not necessarily be surprising.

Experimenter expectancy and demand characteristics refer to the shaping of results by the transmission of the experimenter's expectations to the participants in the study. Rosenthal (1966; Rosenthal and Jacobson, 1968) has documented the effect of expectancies on performance. This effect has consistently found support in different settings and among varied age groups (Crano & Mellon, 1978; Eden & Shani, 1982; Rubovitz & Maehr, 1971). Participants in a study become aware of the researcher's hypothesis regarding the outcome and behave in a manner consistent with this hypothesis; expectancy is confounded with treatment. It might be argued that the significant effects demonstrated with the interview instrument may have been due to expectancies rather than the treatment itself. Moreover, the use of the interview format might have exacerbated this effect (Crano & Brewer, 1973, pp. 168-169).

It should be recalled that participants in the experimental consultation were instructed to document all goals set and achieved. Thus, written records existed to document all goals. These goals, in fact, represented increments of adoption.
For example, if a participant decided to create and administer a needs assessment questionnaire, he or she might set as weekly goals item writing and questionnaire construction, questionnaire pretesting, and, possibly, actual administration. In all cases, participants were asked to provide a copy of the questionnaire and its administration schedule, as well as to document the achievement of any other goals. Copies of all questionnaires, written plans, and other accomplishments were provided by 16 of the 24 members of the experimental group. Thus, reported outcomes were validly documented for two thirds of all experimental members, the participants most susceptible to expectancy effects.

The level of agreement between participants and nominated others also argues against the influence of expectancy effects. The average zero order correlation between participants and nominated others for both the post and follow-up interviews was .59 (p < .001). Thus, nominated others would also have to have been affected by the researcher's expectancies, an unlikely event.

A second plausible explanation for the treatment effect, as measured, is that the effect was real, but the self-report and interview measured different aspects of its success. This explanation seems to be in greatest agreement with the obtained data. Support for this suggestion comes from the correlation between the responses to the two scales. The zero order correlation between aggregated means of both measures was .10 (p = .26), suggesting these instruments were measuring different constructs. The self-report may have elicited responses representing a global level of intermittent evaluation practice, while the interview drew forth responses representing specific practices accomplished since attending the workshop. Responses scored as successful adoption with the interview included only those evaluation practices adopted since participation in the workshop. The self-report, however, asked respondents to report how frequently their organization engaged in the same evaluation activities listed in the interview. The mean response on the self-report measure (mean = 3.35 at posttest) fell almost midway between the response categories "sometimes" and "often". This pattern of responding implies that respondents felt they had performed these practices at least once, although perhaps not recently. This temporal specificity may have distinguished responses on the two instruments, partially accounting for their limited convergence.

Another very real possibility concerning the observed data was the existence of a treatment-by-testing interaction (Campbell & Stanley, 1966, p. 18). The significant main effect for the intervention was revealed only with the interview instrument. Generalizability of the observed effect may be limited as a consequence. This limited generalizability also may explain the failure to discover any effect using the self-report instrument. The observed effect may be limited to measurement conditions similar in format to the administered interview instrument.

In sum, the intervention seems to have had an immediate, but limited, effect on participants in the consultation group. Only the very specific activities set as goals during these sessions were adopted. There was no generalized, expanded implementation at the follow-up measurement period. This point will be discussed again below.

2. More favorable agreement with evaluation practices.
Participation in the experimental consultation intervention had no impact on participants' level of agreement with evaluation practices. A repeated measures ANOVA revealed that neither the experimental nor the control group demonstrated a change in their level of agreement over time. The time-by-intervention-condition interaction also failed to reach significance.

An explanation for these results may rest in the variance in the Agreement with Evaluation Practices Scale. Average responses on this five-point Likert-type scale for both groups suffered from a ceiling effect. Examination of the sample distribution of responses on this scale shows very high levels of agreement with evaluation practices at both measurement periods, resulting in very small standard deviations (pretest sample mean = 3.89, SD = .38; posttest sample mean = 3.94, SD = .44). The remaining amount of variance capable of being explained as a consequence of participation in the intervention was negligible. Participants could increase their level of agreement very little. Therefore, even if the intervention was sufficiently powerful to induce change in this dimension, the restriction in range of the instrument prevented detection of such an effect.

A corollary issue is the potential existence of a social desirability effect. Because participants were involved in a project that clearly placed a high value on program evaluation methods, and nominated others were most likely also aware of this value, some portion of this agreement might have stemmed from an attempt to present a socially desirable set of responses. Although item responses were reverse worded to limit such a response set, the underlying attitude valences were probably transparent to respondents, particularly given the nature and content of the workshops and intervention. If this response tendency was pervasive, it may have contributed to the observed restriction in range, confounding interpretation of the results.

3. Greater knowledge of evaluation practices. Measurement of knowledge of evaluation practices, using a 15-item multiple-choice test administered at the follow-up measurement period, revealed no differences between experimental and control group members. Although the mean response for control group participants was lower, and the degree of variation was larger than among experimental group members, these differences were not significant. The failure to find differences on this dimension was contrary to the hypothesized effect of the intervention.

The most plausible explanation for the failure of the consultation intervention to produce greater knowledge of evaluation practices among members of the experimental group is that an insufficient amount of time was spent on didactic activities. During each weekly consultation session approximately 30 to 45 minutes was devoted to review of evaluation-related material presented during the workshop (see Appendix B). In addition, all participants had a fairly comprehensive written manual given to them during the workshop. While members of the experimental group were told to review the appropriate section in the manual before each consultation session, and questioning by each member was encouraged,
Although all members of the control group also possessed written manuals, thus equalizing the availability of evaluation information to participants in both conditions, it is unlikely they read the material any more frequently than experimental group members. Thus, the absence of any main effect for participation in the consultation interven- tion was probably due to the weakness of the treatment. Formal instruction in evaluation methods was unrelated to successful adoption and implementation of evaluation practices (average 1 = .085, n.s.). The most potent com- ponent of the intervention was the use of written goal-set- ting and public review of accomplishment (average L = .64, p < .001). It is believed further that exclusive use of goal-setting in the intervention groups may have been suffi- cient to elicit adoption of the innovation. That is, the didactic based evaluation workshop may have been irrelevant beyond sensitizing participants to evaluation issues and instilling in them a belief in the importance and usefulness of evaluation methods. Any evaluation knowledge necessary for implementation was provided by the consultant during the intervention. Quite possibly this may have been all that was necessary for adoption and implementation of evaluation meth- ods. This possibility will be discussed again below. The intervention demonstrated limited effectiveness in moving organizations toward the adoption and implementation 146 of program evaluation methods. The most successful component of the intervention was the number of written goals achieved. The success of this component provides a unique example of the efficacy (Hi goal-setting. Prior to this study, Inost research testing goal-setting effectiveness employed depend- ent measures representing concrete task performance like logging (Lathmn 8 Kinne, 1974), card sorting (White, Mitchell, 8 Bell, 1977), or dieting (Bandura 8 Simon, 1977), although some exceptions exist (Kolb 8 Boyatzis, 1970). The results of this study extend the goal-setting literature by showing that this type of structured motivation can also be effective in changing performance on more sophisticated tasks like the adoption and implementation of innovations, specifi- cally, program evaluation methods. The effectiveness of the goal-setting intervention also extends previous research examining the success of change agents in fostering the adoption of innovations in organi- zations. Previous empirical work using outside change agents relied (”1 small groups internal to the target organization (Fairweather et al., 1974; Stevens 8 Tornatzky, 1980). As a consequence, previously measured small group characteristics, especially superior--subordinate relations” were confounded with the effectivenss of the change agent. In the present study, small groups were composed of participants from dif- ferent organizations, eliminating this confound. Moreover, the extended period of time required to induce change in these previous studies was not necessary in the present 147 study. The present intervention accomplished irI six weeks what previous researchers took months to achieve. Yet to be determined is the impact of group process variables like cohesion and leadership. Isolating these effects will require future reasearch. The next topic (H: discussion focuses (Hi the correla- tional hypotheses presented in chapter one. It was proposed above that adoption of the innovation would be moderated by several variables. 
These included 1) organizational structure, 2) organizational environment, and 3) individual attitudes and characteristics. The results from the multivariate analyses involving these variables were also mixed.

Correlational Hypotheses

1. Organizational structure. Variables in this domain included size, centralization, formalization, and complexity. Indicators for these variables included budget, percent budget spent on program evaluation, and number of staff (size); participation in decision making and hierarchy of authority (centralization); job codification and rule observation (formalization); and professionalization, number of services provided, and professional training (complexity). Multicollinearity among these indicators was sufficiently small to allow their independent entry into multiple regression equations.

Although organization size is considered by some (Pugh, Hickson, Hinings, MacDonald, Turner, & Lupton, 1963) to be a contextual variable, like organization history, it may also be considered a structural characteristic (Kimberly, 1976). Two of the three indicators used for size (budget, number of staff) were significantly correlated (r = .46, p = .003), while the third (percent budget spent on program evaluation) was not.

The relationship between size and innovation adoption is unclear (Hage & Aiken, 1970, pp. 130-132). Stevens (1977) found a positive, but nonsignificant, zero order correlation (r = .15, N = 37) between number of staff and adoption of evaluation methods, using his open-ended, self-report questionnaire. Fairweather et al. (1974) reported mixed results: a negative relationship (r = -.12) in their brochure condition (p. 86) but a positive relationship (r = .12) in their workshop condition (p. 93), although neither correlation was significant. Heydebrand & Noell (1973) reported a moderate positive correlation (r = .32). The inconclusiveness of the relationship between size and innovation adoption is most likely associated with the fact that size represents several different dimensions, each of which may have a different relationship with the outcome of interest (cf. Kimberly, 1976).

Since it was reasoned that increased size could increase complexity or the availability of slack resources, conditions associated with innovation (March & Simon, 1958, pp. 186-187; Hage, 1980, pp. 165-184) and innovation adoption (Hage & Aiken, 1970, pp. 130-131), size should have demonstrated a positive relationship with innovation adoption. The obtained results failed to confirm this relationship. Not only were the regression coefficients nonsignificant, but the sign of the obtained relationships was negative in several cases. There were negative zero order correlations and regression coefficients between percent budget spent on evaluation and total number of staff and both measures of adoption of evaluation practices. Only annual budget demonstrated the predicted positive relationship. An explanation for these results is not obvious. The size of the coefficients in relation to their standard errors suggests they may have been reflecting sampling error. This explanation is more likely given the number of observations in relation to the number of variables in the regression equation, a ratio ranging from 2:1 to 4:1. In all likelihood, size bore no real relationship to adoption of evaluation practices.
That is, small and large organiza- tions providing services to the elderly were equally likely to implement program evaluation practices. Whether this is true in the population of gerontological programs, or in other public or private sector organizations, is unknown. The second measure of organizational structure examined is centralization. The relationship of this veriable with innovation adoption has received more empirical support than any other characteristic of organizational structure. Cen- tralization has consistently been shown to covary negatively with innovation and innovation adoption. Starting with the case study observations of Burns and Stalker (1961), 'this 150 relationship has been documented by Hage and Aiken (1967b), Hage and Dewar (1973), Fairweather et al. (1974) and Tornatzky et al. (1980). Tornatzky et al. (1980) provided experimental evidence for the success of participative deci- sion making in facilitating innovation adoption. Formalization provides the third structural variable measured. The two indicators of this variable included job codification and rule observation. Burns and Stalker (1961) also argue for the importance of this variable in organi- zation change. They provide case study evidence for a nega- tive relationship between formalization, innovation, and innovation adoption. Correlational evidence for this is pro- vided by Hage 8 Aiken (1967b). Complexity is the final measure of organization struc- ture examined. Indicators for this variable included pro- fessional training of staff, degree of involvement in pro- fessional activities like conventions and workshops, and the number of different services provided by the organization. Hage and Aiken (1967b), and Heydebrand and Noell (1973), are among those who have provided empirical support for a posi- tive relationship between organizational complexity and innovation adoption. Results from the Inultivariate data analyses presented above showed the two indicators of centralization to be unrelated to the adoption and implementation of program evaluation methods, although the signs of the obtained rela- tionships were 'Hl the right direction. (An exception was 151 hierarchy of authority and self-reported adoption. This difference could easily have been due to sampling error as the standard error was 6 times the size of the regression coefficient.) The results including formalization also failed to confirm the hypothesized negative relationship. In fact, only for the self-report measure of evaluation adoption did the obtained relationship have the predicted sign. Because the standard errors were larger than each of the regression coefficients, these relationships must. be interpreted very cautiously. The data reflecting the relationship between complexity and adoption of evaluation practices were equally nonsupport- ive. Onlyr the regression coefficient for degree and the post-interview measure of evaluation adoption approached significance (t1== .068). No other indicator of complexity confirmed the predicted positive relationship with adoption of program evaluation methods. The failure to discover the predicted relationships between centralization, formalization, complexity and adop- tion and implementation of program evaluation methods is believed to have been due to restriction in range in the scales used to measure these factors. This restriction was most probably due to the homogeneity of the organizations included in the sample. 
Increasing the variation and reliability in the measurement of these variables could possibly lead to a confirmation of previous findings. It should be recalled, however, that these measures of organizational structure included the instruments used in the earlier research of Hage and Aiken (1967b), who reported significant findings. Consequently, the restriction in range associated with the homogeneity of the sample is believed to be the most serious problem. This problem could be eliminated with the study of a more diverse group of organizations. Such a sample might include private sector organizations and other human service agencies, in addition to organizations providing services to the elderly. With the added variation on these structural dimensions, a more valid test of these correlational hypotheses should become possible.

Another possible reason for the failure to discover significant relationships between measures of organizational structure and innovation adoption might lie in their different degrees of specificity. The measures of organizational structure used in the present study may have represented a "macro" level of abstraction, while the measures of innovation adoption may have represented a more "micro" level of abstraction. In discussing organizational climate as a variable, Schneider (1975) has suggested that as one moves closer to molar, or "macro", levels of perception, each person's perception may be more affectively colored, resulting in greater individual differences. He suggests elsewhere (Schneider, 1981) that the more comprehensive a measure attempts to be in measuring organizational features, the less useful it will be in understanding a specific issue or criterion. The lack of congruence between levels of abstraction may limit the obtained correlations. Differences in the degree of abstraction measured with the administered instruments may also explain the failure to discover the predicted correlations.

In the present study, the measure of innovation adoption, particularly as measured with the Evaluation Interview, was very specific. Respondents reported whether their organization adopted any of over two dozen program evaluation activities. While this level of specificity allowed a more comprehensive portrayal of innovation adoption and implementation, it may have reduced correlations with the more molar measures of organizational structure. Future research, to rectify this problem, must attempt to equate the levels of measurement abstraction. A measure of "program evaluation adoption climate" might beneficially address this weakness.

Also examined in the present study was the relationship between organizational structural characteristics, agreement with evaluation practices, and knowledge of evaluation practices. No hypotheses were proposed for these relationships. Instead, multivariate analyses were conducted in an exploratory fashion. Several interesting findings emerged. Most revealing was the significant regression equation for predicting agreement with evaluation practices.
Four structural characteristics, while not providing individual regression coefficients significantly different from zero, contributed to a regression equation whose multiple correlation was significant. These predictors included participation in decision making, number of total staff, rule observation, and percent of budget spent on program evaluation. Interpretation of this significant regression equation leads to the conclusion that organizations with greater participation in decision making, larger staff size, greater rule observation, and a smaller percent of their budget spent on program evaluation were likely to demonstrate a higher level of agreement with program evaluation practices. The size of the standard errors, however, indicated considerable imprecision in the equation. And given the small sample size upon which the equation was based, conclusions should be cautious.

The regression coefficient for participation in decision making was the most stable, being twice the size of its standard error. Moreover, this variable accounted for about 13% of the variance in this measure of agreement with evaluation practices. Organizations with more decentralized decision making were staffed with individuals more likely to agree with the use of evaluation practices in their organizations. While only conjecture, it is possible that in these organizations program evaluation is perceived in a less threatening manner. If staff members participate in the decisions in their program, they may be subject to fewer negative sanctions as a consequence of evaluation results. They participate in the decision to adopt program evaluation methods, as well as other policies and practices, and feel less threatened as a consequence of exercising this control. All sanctions associated with adoption of the innovation are partially under their control.

Whether the relationship obtained between this individual level characteristic and organizational structural variables is spurious can only be determined with future research. Longitudinal research will be required to determine causality. This line of inquiry will contribute to organizational theory examining the role of organizational structure in shaping individual attitudes and behavior (James & Jones, 1976; Schneider, 1982; Sutton & Rousseau, 1979).

2. Organization environment. March and Simon (1958) first suggested that organizations located in turbulent and unstable environments should demonstrate greater innovation and be more susceptible to change. Moreover, information about innovations should be associated with communication with other organizations in the focal organization's environment. Indicators used in the present study to represent these aspects of the environment included an index formed by multiplying the frequency of interactions times their importance, the age of the organization, and the percent chance the program was predicted to exist in the coming fiscal year. A stable environment should be characterized by older age and a greater chance of continued existence. The relationship between interorganizational relations (IOR) and this stability was not predicted.

Interaction with other organizations may introduce new information into the organization, as well as allow the innovative organization to demonstrate its innovativeness and professionalism to peers in other organizations in the community. In some respects this process may resemble the role opinion leaders play in innovation diffusion (Rogers & Shoemaker, 1971).
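A minimal sketch of an interorganizational-relations index of the kind described above follows. The partner names, scale ranges, and the decision to sum the weighted contacts are assumptions made for illustration; they are not taken from the study's instrument.

    # Frequency of contact with each outside organization, weighted by its
    # rated importance, then summed into a single IOR score. Entries and
    # scales below are hypothetical.
    contacts = [
        {"partner": "Area Agency on Aging", "frequency": 4, "importance": 5},
        {"partner": "County Health Dept.",  "frequency": 2, "importance": 3},
        {"partner": "Senior Center",        "frequency": 8, "importance": 4},
    ]

    ior_index = sum(c["frequency"] * c["importance"] for c in contacts)
    print(ior_index)   # 4*5 + 2*3 + 8*4 = 58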
Previous empirical support for a positive relationship between IOR and innovation adoption has been provided by Aiken and Hage (1968; r = .74, p < .001). This relationship continued to be significant after these investigators controlled for complexity, size, organizational age, and technology.

Stability is another feature of the environment that should affect innovation adoption. Organizations in more turbulent environments would be expected to be more prone to change; those in more stable environments, less prone to change (Burns & Stalker, 1961; Lawrence & Lorsch, 1967). While not specifically predicted, one would expect these same relationships to obtain in the presently reported research. Unfortunately, this did not happen.

None of the regression coefficients for indicators of organizational environment significantly predicted scores on any of the outcome measures. The multiple correlation for IOR was marginally significant, explaining about 8% of the variance in predicting scores on the Agreement with Evaluation Practices scale. Although not predicted, the regression coefficient was negative, suggesting that high IOR was associated with greater levels of agreement with evaluation practices. Because the standard error was as large as the regression coefficient, the stability of this finding is questionable. It is probably most accurate to say that environmental characteristics were unimportant in predicting adoption and implementation of evaluation practices, resulting in no confirmation of the hypothesis that change would be related to IOR.

Most organizations (70%) in the sample felt 100% sure their program would continue to exist in the next fiscal year. Another 10% felt 90% sure. The average perceived probability was 92.25%, suggesting most of the organizations in the sample were not concerned about their immediate future. This stability was complemented by the average age of the organizations, i.e., 16.7 years (median = 7.1 years). Most organizations were rather stable because of their average age and because only 5% rated the chances for their continued existence to be 50% or less. This relative stability for most organizations in the sample may have prevented a real test of the hypothesis.

The absence of the predicted relationship between IOR and adoption and implementation of evaluation methods is not readily explicable. This failure might have been due to the nature of the study itself. Adoption of the innovation was best predicted by knowledge of intervention condition membership. Because of the contrived and specific nature of the adoption process, IOR may have been irrelevant. Communication with other organizations may be important only in naturally occurring diffusion, that is, circumstances in which diffusion of the innovation is allowed to run its normal course over several months or years, thus emphasizing centrality in communication networks. In this way, sociometric stars can more readily benefit from their location, demonstrating the predicted relationship between innovation adoption and frequency and importance of interorganizational interaction.
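The "sociometric star" argument can be illustrated with a simple centrality calculation. The network below is invented, and degree centrality is only one of several indices that could operationalize location in a communication network; the text does not commit to a particular measure.

    # Organizations that sit at the center of an interagency communication
    # network score higher on degree centrality and, by the argument above,
    # gain more exposure to innovations diffusing naturally. Edges are
    # hypothetical.
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("Senior Center", "Area Agency on Aging"),
        ("Senior Center", "Home Care Program"),
        ("Senior Center", "Nutrition Program"),
        ("Home Care Program", "County Health Dept."),
    ])

    # Degree centrality: share of other organizations each one contacts directly.
    for org, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
        print(f"{org}: {score:.2f}")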
3. Individual characteristics. The final group of hypothesized relationships to be discussed includes individual difference variables. These variables were agreement with evaluation practices, knowledge of evaluation, education, job tenure, expected job tenure, age, and sex. These variables were entered as predictors in multiple regression equations to discover the importance of staff characteristics in the innovation adoption process. It will be recalled that agreement with evaluation practices and knowledge of evaluation methods were also used as criterion variables. In the presently discussed analyses, however, pretest scores of agreement with evaluation practices were used as predictors. Because evaluation knowledge was measured only once, the same scores were used as a predictor.

Scores on the evaluation interview were successfully predicted by staff education, expected tenure, and level of agreement with evaluation practices. Organizations with staff members having greater levels of agreement with evaluation practices, having less professional education, and expecting to remain in the organization less time were more likely to adopt the use of program evaluation methods. While the first relationship was predicted, the last two were surprising. Previous researchers have reported a positive relationship between professional education and innovation adoption (Counte & Kimberly, 1974; Hage & Aiken, 1967b; Heydebrand & Noell, 1973; Kimberly, 1978). This finding may be idiosyncratic to the present sample of organizations, which was characterized by a large number of part-time volunteer staff. Reliance on part-time volunteers is very common among gerontological organizations. The obtained relationship between professionalism and innovation adoption may not hold true in organizations providing services to other client groups. The obtained negative relationship may be true only in organizations staffed primarily with volunteers.

Agreement with evaluation practices also provided a significant regression coefficient for predicting scores on the evaluation self-report. The age of individual staff members provided a slightly stronger predictor. The combination of these two variables explained about 17 percent of the variance in this measure. Organizations with older staff members reporting greater levels of agreement with evaluation practices were more likely to implement evaluation practices. This might be related to the above finding regarding education, because many volunteers working in gerontological agencies are themselves seniors.

Both measures of the adoption and implementation of evaluation practices were successfully predicted with individual level variables. The superiority of individual level predictors in predicting organizational level responses contradicts the findings of Baldridge and Burnham (1975). These investigators found individual characteristics to be unimportant in predicting organizational adoption of innovations. The findings in the present study find some support from Hage and Dewar (1973), who found that positive values toward change held by elite organizational members were better predictors of organizational innovation adoption than complexity, centralization, or formalization. The respondents in the present study could easily be considered elite organization members given that they were most often the director and his or her nominated staff member. The importance of psychological characteristics in predicting innovation adoption may be amplified in smaller organizations, like those in the present sample. Thus, the present findings introduce further complexity into the innovation adoption literature by demonstrating that psychological characteristics of organizational staff may be better predictors of innovation adoption in organizations than structural characteristics of these organizations.
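The kind of comparison discussed here and in the following paragraphs, in which predictor sets are contrasted and a dummy variable for intervention-condition membership is then added, can be sketched as follows. The data are simulated, the variable names are placeholders, and the study itself used stepwise procedures rather than the fixed two-step comparison shown.

    # Fit a regression with staff-level predictors only, then add a dummy for
    # intervention-condition membership and compare R^2. Simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 43  # organizations

    df = pd.DataFrame({
        "agreement_pre":   rng.normal(size=n),
        "education":       rng.normal(size=n),
        "expected_tenure": rng.normal(size=n),
        "intervention":    rng.integers(0, 2, size=n),  # 1 = consultation condition
    })
    # Outcome constructed so the condition dummy carries real signal.
    df["adoption"] = (0.4 * df["agreement_pre"] + 0.8 * df["intervention"]
                      + rng.normal(scale=1.0, size=n))

    base = smf.ols("adoption ~ agreement_pre + education + expected_tenure", df).fit()
    full = smf.ols("adoption ~ agreement_pre + education + expected_tenure"
                   " + intervention", df).fit()

    print(f"R^2 without condition: {base.rsquared:.2f}")
    print(f"R^2 with condition:    {full.rsquared:.2f}")

The increment in R^2 from the first to the second equation is the quantity of interest in the comparison described below.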
The combination of pretest scores and organizational structural characteristics provided a highly significant multiple regression equation for predicting responses on the Agreement with Evaluation Practices scale. As might be expected, pretest scores provided the most powerful predictor. Number of total staff provided a regression coefficient that was significantly different from zero. Participation in decision making no longer provided a significant coefficient because of its strong correlation with pretest scores on this scale. Taken as a whole, this combination of psychological and organizational characteristics explained about one-third of the variation in posttest scores on the Agreement with Evaluation Practices scale.

This finding is important because it documents the combined impact of both individual and organizational characteristics on the behavior (in this case, cognitive behavior) of individuals working in organizations. This finding provides empirical support for interactionist approaches to organizational study which attribute equal importance to the influence of situational and personological determinants of individual and organizational behavior (Schneider, 1982, in press), although it provides no evidence regarding causal priority for these variables.

Organizational and individual level variables varied in their ability to predict adoption and implementation of evaluation methods and agreement with evaluation practices. None of these variables were related to knowledge of evaluation. The power to predict innovation adoption increased when membership in the intervention condition was added to the above regression equation. The ability to predict interview measures of adoption and implementation of evaluation methods doubled when experimental group membership was added to the equation. Knowledge of experimental group membership did not improve the predictive power for any of the other outcome scores.

Flaws in the Reported Research

The major flaws in the reported research can be divided into the categories of measurement, sampling, and design. The foremost measurement problem was the restriction of range in the measures of centralization, complexity, and formalization, although this drawback is also related to the sampling problem discussed below. As already noted, the scales used to measure these variables were based on scales used in the original series of studies reported by Hage and Aiken. These scales may have been suitable for these investigators because their sample was composed of diverse human service agencies. The ceiling effects and reduction in range obtained in the present study may not have been problematic for these other researchers. If these scales are to be used again, some attempt must be made to increase the variation in their scores. This might be done by changing the response format to include more categories. Another method to increase variation in the measure might be to employ instead some type of paired comparison method. Forcing the choice between pairs of different statements shown to represent the dimension of interest should "spread" the variation existing in the sample.
A method that should be used concomitantly with revision of the measurement instruments is diversification of the sample of organizations. Most preferable would be the inclusion of similarly sized private sector and other public sector organizations. While the major focus might still be on gerontological organizations, this diversification would contribute to increasing the variation in the organizational structure measures. Moreover, the efficacy of the intervention could also be compared across different classes of organizations.

Finally, to increase variation and improve the validity of the organizational measures, the number of respondents within each organization should be increased. While no optimal number probably exists, James (1982b) suggests that his measure of agreement would be stable only with at least 10 respondents per organization. This value provides a convenient lower limit for all but the smallest organizations. In the event that more than 10 staff work in an organization, some method of systematic sampling (Cochran, 1977) could be used.

Another beneficial change in the design of the reported study would include the administration of the interview measure of evaluation adoption at the pretest measurement period. This addition would allow a more definitive interpretation of the longitudinal impact of the intervention. This impact could be examined even more fruitfully with the use of a longer-term, follow-up measurement period. The ideal measurement sequence would be sufficiently spaced to allow also the measurement of real change in the organizational attributes. This period of time would have to be quite long since structural characteristics, by definition, are the most enduring aspects of organizations. Such a set of longitudinal sequences would also be necessary to determine the causal ordering of the organizational and psychological characteristics. Organizational and community researchers evaluating their attempts to change organizations should direct their efforts to long-term, follow-up measurement of their intervention outcomes.

Implications and Future Directions

The results of the reported research suggest at least five different areas for future inquiry. These include 1) determination of the correct unit of analysis for theory and intervention; 2) experimental validation of organizational change strategies and the use of organizational theory to predict their success; 3) facilitation of innovation adoption and implementation in organizations; 4) the need for sequential, longitudinal designs to discover the causal ordering among organizational and psychological variables, organization change, and innovation adoption; and 5) the need for systematic, data-based planning and change in public policy, especially in gerontology.

Robinson (1950) first alerted social scientists to the possible errors associated with any attempt to predict individual level characteristics from aggregated data. Labeling this phenomenon the "ecological fallacy", he demonstrated the erroneous conclusions possible when the behavior of individuals is predicted from data aggregated by areal unit. It was Roberts et al. (1978), however, who first sensitized organizational researchers to the implications of theorizing and conducting research at multiple levels of aggregation.
While analytic and interpretive pitfalls exist for the unwary when aggregating and disaggregating social data (Hannan, 1971), focus on multiple levels of analysis is critical to the success of organizational and community theory and change. Research and theory encompassing several, and ideally all, levels of pertinent aggregation are necessary to understand to the fullest extent the processes responsible for organizational and community functioning and change. Multiple levels of analysis are important because interventions at different levels of aggregation may result in differential change success (Davis, 1981b; Rappaport, 1975, 1977). Moreover, the ratio of change-impact to effort expended may depend on the level of aggregation at which the intervention occurs (Davis, 1981b). This success may also be a function of the type of intervention method chosen (Davis & Markman, 1980). The present study offers a primitive example of how a multiple-level approach to intervention and change analysis might be accomplished. The comparative effectiveness of intervention at different levels remains to be determined empirically.

Related to aggregation is the necessity for determining accurate levels of agreement among multiple respondents in organizations. Implicit in the decision to aggregate is the assumption of the existence of agreement between respondents in organizations. While the correct conceptual unit might be the small group, department, organization, or city, one would not desire to remove the natural variation existing in individual differences unless something is gained by computing average responses. Current methods used to provide an empirical rationale for agreement are clearly inadequate. Analysis of variance and intraclass coefficients are too conservative (James, 1982a, 1982b). The sampling distributions of new measures created to address these shortcomings are unknown (James, 1982b). Preliminary application of these new measures of agreement in the current study has shown them to be very unstable with only two raters per organization. Clearly, more work is needed in this area.
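One variance-based way to quantify within-organization agreement on a single rating item is sketched below. It compares the observed variance of respondents' ratings to the variance expected if responses were spread uniformly across the scale. The sketch is illustrative only; it is not claimed to be the specific agreement measure discussed by James (1982a, 1982b), and the uniform null is only one possible reference distribution.

    def agreement_index(ratings, scale_points=5):
        """Return 1 - (observed variance / uniform-null variance), floored at 0."""
        n = len(ratings)
        mean = sum(ratings) / n
        observed_var = sum((x - mean) ** 2 for x in ratings) / (n - 1)
        null_var = (scale_points ** 2 - 1) / 12.0   # variance of a discrete uniform null
        return max(0.0, 1.0 - observed_var / null_var)

    # With only two raters per organization, as in the present study, the index
    # swings between its extremes:
    print(agreement_index([4, 4]))   # perfect agreement -> 1.0
    print(agreement_index([2, 5]))   # a single disagreement drives it to 0.0

The instability of the two-rater case shown here is the same problem noted in the preceding paragraph.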
The second implication of the reported research is the demonstration that it is possible to validate organizational change strategies experimentally, and that these change strategies can be rooted in organizational theory. Ideally, organizational theory should provide the rationale for the experimental conditions used to examine the effectiveness of organizational change techniques. The national experiment reported by Tornatzky et al. (1980) is exemplary for this reason. In this study, participation in decision making was experimentally manipulated to examine its relationship with adoption of an innovative mental health program. A significant main effect for the participation manipulation was found. Furthermore, these investigators provided support for their ability to induce participation in organizations providing the focus for change. Thus, a variable occupying a prominent place in the innovation literature, and shown previously to be correlated with innovation adoption, has received tentative support as a causal influence.

The scientific quality and rigor of the organization development (OD) and change literature demands the empirical sophistication that is so possible, and yet, so lacking. Porras (1979; Porras & Berg, 1978a, 1978b), after a comprehensive review of the OD literature, underscored the methodological weakness of most attempts to evaluate the impact of OD interventions. In a review of 35 OD interventions stressing human-process aspects (Friedlander & Brown, 1974) and reported between 1959 and 1975, he failed to discover a single experimental evaluation of effects. This result is more dramatic given that he carefully screened the reports for their methods; he selected only those studies using quantitative techniques. Finally, only six of these same studies used the organization as the unit of analysis; most interventions used the Laboratory Training (T-Group) approach to change individuals or small groups.

A chasm exists between current practice in OD and organizational change and the methodological rigor required to produce a viable theory of organizational intervention and change. This breach is only slightly narrowed in the area of community research, as reviews of recent literature reveal (Lounsbury, Cook, Leader, Rubeiz, & Meares, 1979; Lounsbury, Leader, Meares, & Cook, 1980; Novaco & Monahan, 1980). Less than 10 percent of the research cited in these reviews employed experimental evaluation, or even the most rudimentary psychometric analysis. The chagrin brought on by a review of the current state of organizational and community change research can only partly be allayed by the results of the attempts of some to create an experimental basis for the study of this change (e.g., Tornatzky et al., 1980; York, 1979).

The inadequacies in the just cited literature mirror those found in innovation research. The original case study findings of Burns and Stalker (1961), demonstrating a relationship between innovation adoption and organic-structured organizations, have unjustly almost acquired the status of truisms. The empirical support often provided to document this relationship comes from the work of Hage and Aiken (1970), results based on a sample size of 16 organizations. Other empirical findings are equivocal. Authors reviewing the literature examining innovation in organizations cite few other empirical studies for the justification of this relationship (cf. Tornatzky et al., 1979; Zaltman et al., 1973). It is quite possible, for example, that participation in decision making is related to innovation adoption in only some organizations, at only certain periods of time, in only some eras, or in only certain countries or cultures. In any case, if we assume the positive relationship between innovation adoption and participation in decision making in organizations to be true, an assumption supported by only one experimental study, this relationship may be very limited.

An example may clarify this. W. J. Reddin, a professional change agent with considerable international experience, suggests English organizations require an authoritarian role for outside change agents because of their rigid status systems (Pfeiffer, 1977). He cites this as evidence for the popularity in Britain or Brazil of the sociotechnical approaches to change (like the Tavistock model), which provide such a role for the change agent. Innovation adoption in organizations may occur differently in such an environment, changing the role of participation.
The scarcity of sociotechnical approaches to organizational change reported in the literature in the United States, where the autonomy of the individual is paramount, offers indirect support for this conclusion. Delineation of the organizational characteristics related to innovation adoption in different settings is required.

Related to the implications above is the need for longitudinal research designs. The temporal dimension must be incorporated into organizational and community research if the causal ordering among the variables of interest is to be discovered. While some organizational investigators have begun to move in this direction (Kimberly & Miles, 1980), cross-sectional, recursive designs continue to dominate reported empirical work. Future investigation should use sequential-longitudinal, experimental designs (Schaie, 1965; Friedrich & Van Horn, 1976; Baltes, Reese, & Nesselroade, 1977) to separate the confounded effects of organizational or system age, time of measurement, and organizational cohort effects. Such designs will allow more unequivocal conclusions regarding developmental change and the causal priority of organizational characteristics. Development of models of change patterned after the work of Buss (1973, 1974) would contribute substantially to the accurate delineation of causal relationships. Nonrecursive dynamic models (Duncan, 1975; Heise, 1975; Kenny, 1979) should be used to attempt to describe and predict interorganizational differences, intraorganizational differences, and intraorganizational changes. In other types of community research, such models might examine intercourt differences, intracourt differences, and intracourt changes; interfamily differences, intrafamily differences, and intrafamily changes; or internetwork differences, intranetwork differences, and intranetwork changes. Structural equations should be created to test the ability of these nonrecursive, dynamic models to reproduce the obtained covariance structures (Kessler & Greenberg, 1981). The effect of experimental manipulations could also be easily entered in these structural models (Bagozzi, 1977; Costner, 1971). Pursuance of this line of research would finally provide organizational and community psychologists with empirical results that would be suitable for the theory and practice necessary to accomplish their stated goals.
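A minimal two-wave cross-lagged specification is one simple instance of the kind of longitudinal model described above. The variable assignments are hypothetical (X standing for an organizational characteristic such as centralization, Y for adoption of evaluation practices), and fully nonrecursive versions would add reciprocal paths between X and Y within a measurement wave.

    X_{2} = \gamma_{1} X_{1} + \gamma_{2} Y_{1} + u_{2}
    Y_{2} = \beta_{1} Y_{1} + \beta_{2} X_{1} + e_{2}

Here the subscripts index measurement waves, \gamma_{1} and \beta_{1} are stability coefficients, and the cross-lagged coefficients \gamma_{2} and \beta_{2} bear on the question of causal priority raised in the text; estimating them requires at least two waves of measurement on both sets of variables.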
Finally, the reported research bears substantially on public policy, most particularly in gerontology. The obtained results provide preliminary documentation for the effectiveness of a systematic, and easily applied, change technique. More importantly, however, they demonstrate that a rather extensive workshop is not as effective as a more structured, but equally simple, consultation technique. This comparison becomes more meaningful when it is realized that workshops and manuals provide the method most commonly used to change human service organizations, and the workshop and manual provided to participants in the present research were more extensive than most others. Contrary to the normal method policy makers use to create change in human service organizations, i.e., pass legislation and provide workshops demonstrating how to implement the new legislation, the reported research shows that simple change techniques can be systematically and experimentally tested and practiced. Organizational differences in change can also be measured. Early organizational adopters could be used as change agents to facilitate policy implementation among nonadopters. A systematic process of innovation adoption and organization change such as this should be implemented by policy makers, especially in gerontology (Davis, 1981b, in press).

The power and validity of this approach in policy implementation would be amplified if those affected by the policy participated in its design. This is especially true for specific interventions where individuals and organizations experiencing the problem of interest and providing the target for change can contribute to the design of the intervention. Davis, O'Quin, Sivacek, Messé, and James (1981) used an iterative survey procedure to include the population of directors of home-care programs in Michigan in the design of several medication-monitoring interventions for the elderly. Participants contributed to the design and rated the effectiveness of seven interventions. This type of participatory practice may result in more powerful community and organizational interventions leading to more appropriate change (Davis, 1982). Participation in intervention design may increase in importance to the extent the elderly suffer multiple problems (Davis & James, in press) or demonstrate greater variation in cognitive ability (Davis & Friedrich, in press). In this fashion, the cultural diversity and values of those affected by change will be maintained and, possibly, enhanced.

Policy makers in gerontology must foster the systematic practice of rigorous program evaluation methods. Increasing scarcity of resources and burgeoning needs among the elderly demand that publicly funded programs document their effectiveness and efficiency. The present research demonstrates that service providers can be taught to evaluate their programs, and, given short-term, inexpensive consultation, are likely to do so. Policy makers in gerontology must make a concerted effort to deliver this technology to those who may benefit from it.

APPENDICES

APPENDIX A

WORKSHOP OUTLINE

First Day

8:15 - 8:45     Registration
8:45 - 9:30     Overview of the project and questionnaire administration
9:30 - 10:00    Planning and the use of objectives and goals
10:00 - 10:30   Measurement and data-gathering
10:30 - 10:45   Break
10:45 - 11:00   Measurement, goals and decision-rules
11:00 - 11:45   Small group exercise
11:45 - 12:45   Lunch
12:45 - 1:30    Overview of different types of evaluation
1:30 - 2:00     Efficiency evaluation and client satisfaction
2:00 - 2:30     Effort evaluation and data management
2:30 - 2:45     Break
2:45 - 3:30     Small group exercise
3:30 - 4:15     Questionnaire administration
4:15 - 5:00     Question and answer

Second Day

8:15 - 8:45     Question and answer, discussion
8:45 - 9:30     Basic evaluation designs
9:30 - 10:00    Impact evaluation and needs assessment
10:00 - 10:45   Small group exercise
10:45 - 11:00   Break
11:00 - 11:45   Integration of previous evaluation methods and introduction to process and effectiveness evaluation
11:45 - 12:45   Lunch
12:45 - 1:30    Process and effectiveness evaluation
1:30 - 2:00     Introduction to experimentation
2:00 - 2:30     Small group exercise
2:30 - 2:45     Break
2:45 - 3:30     Evaluation planning and management
3:30 - 4:00     Integration and summary
4:00 - 4:30     Questionnaire administration
4:30 - 5:00     Question, answer and discussion

APPENDIX B

CONSULTATION OUTLINE

Week    Topic

Introduction of group members and explanation of the purpose of the consulting group
1. Provide technical support
2. Provide mutual support
3.
Exchange resources 4. Develop evaluation plan for their service using their funding proposal as a tool Explanation of goal-setting and measureable object- ives Role of evaluation in administration and planning Each person sets goals to be achieved before the next meeting Each person brings an outline of their service Review previous material: 1. Evaluation planning and administration 2. Goal-setting and measurable objectives 3. Goal Attainment Scaling Establish individual evaluation objectives to be achieved by the end of the consultation 177 CONSULTATION OUTLINE (continued) 1. The development of an evaluation plan for their organization and incorporation of this plan into their funding proposal Review accomplishment of previously set goals and discuss problems encountered Each person sets new goals to be achieved before the next meeting Review previous material: 1. Goal-setting and measureable objectives 2. Measurement and standardized instruments, reliability and validity 3. Accurate data collection Review accomplishment of previously set goals and discuss problems encountered Each person sets new goals to be achieved before the next meeting Review previous material: 1. Instruments and data collection 2. Cost/unit of service and measuring efficiency 3. Accurate data collection Review accomplishment of previously set goals and discuss problems encountered Each person sets new goals to be achieved before the next meeting 178 CONSULTATION OUTLINE (continued) Review previous material: 1. Measures of efficiency and client satisfaction 2. Integration of previous material to demonstrate the rudiments of a comprehensive evaluation system Discuss how they might each develop a comprehensive evaluation plan Review accomplishment of previously set goals and discuss problems encountered Each person sets new goals to be achieved before the next meeting Discuss end of intervention, posttest and follow-up Review accomplishments of previously set goals and discuss problems encountered Each person sets new goals to be achieved before the follow-up measurement Administer questionnaires 179 APPENDIX C EVALUATION SELF-REPORT We are going to ask you some questions about the program evaluation activities that service providers often conduct. Please circle the word that best represents the extent to which these activities are ACTUALLY PERFORMED in your project/service. 1. My project/service currently uses client data in its planning. Never Seldom Sometimes Often Always 2. My project/service records each time it delivers a ser- vice. Never Seldom Sometimes Often Always 3. My project/service compares client information collected before and after services are provided in order to meas- ure program effectiveness. Never Seldom Sometimes Often Always 4. The satisfaction of each client with the services he or she receives is recorded by my project/service. Never Seldom Sometimes Often Always 5. Information is collected from each client after services are provided to measure service effectiveness. Never Seldom Sometimes Often Always 6. Assessments of client needs are made regularly by my project/service. Never Seldom Sometimes Often Always 7. My project/service records each client contact. Never Seldom Sometimes Often Always 8. My project/service measures the extent to which each of its programs is reaching its intended group of clients. Never Seldom Sometimes Often Always 10. 11. 12. 13. 14. 15. 16. 17. 18. 
180 EVALUATION SELF-REPORT (continued) My project/service gathers follow-up information on all clients after they have stopped receiving services. Never Seldom Sometimes Often Always My agency uses experimental designs (with clients ran- domly chosen not to receive services) to test program effectiveness. Never Seldom Sometimes Often Always My project/service records the program cost for each unit of service. Never Seldom Sometimes Often Always My project/service computes a benefit-to-cost ratio for each unit of service. Never Seldom Sometimes Often Always Specific objectives are established for every program by my project/service. Never Seldom Sometimes Often Always My project/service records each client referral made to other agencies. Never Seldom Sometimes Often Always My project/service currently monitors the implementation of all its programs. Never Seldom Sometimes Often Always My project/service records the action taken on each client referral. Never Seldom Sometimes Often Always My project/service compares clients who receive services with clients who do not receive services in order to measure service effectiveness. Never Seldom Sometimes Often Always The source of each client referral is recorded by my project/service (i.e., how the client heard about the project). Never Seldom Sometimes Often Always 19. 20. 21. 22. 181 EVALUATION SELF-REPORT (continued) My project/service uses systematic case studies to measure program effectiveness. Never Seldom Sometimes Often Always My project/service measures the extent to which each program achieves its objectives. Never Seldom Sometimes Often Always The impact of its programs on the surrounding community is measured by my project/service. Never Seldom Sometimes Often Always My project/service constructs its own measurement tools to measure client change. Never Seldom Sometimes Often Always 182 APPENDIX D Date Name Interviewer Agency Evaluation Interview I'd like to ask you some questions about the evaluation practices you've done in your agency since the evaluation workshop. What I'll do is give you a list of evaluation activities and you tell me if you have done any of them in your project/service. 1. Planning--Have you developed a written plan for your project/service? Written goals and objectives? Specific objectives set for each service? Staff participated in objective-setting? Clients participated in objective-setting? 2. Have you developed a written evaluation plan? Have you completed the Planning for Evaluation Checklist in the back of the manual? (If no evaluation plan, have you held staff meetings to create an evaluation plan?) Formal approval of agency obtained if necessary? Consultants selected if necessary? 3. Created/selected questionnaires? Measured Reliabilities? Measured Validities? Pilotstested questionnaires on seniors? 4. Are you measuring the implementation of services? (E.G. number of staff/client contacts; staff giving to clients the services as planned; how they spend time with clients?) Recording client referrals? 5. Are you measuring the cost/effectiveness or cost/benefits for delivering services? Cost per-unit-of-service? 6. Are you systematically measuring how clients feel about your services? (E.G. questionnaires, satisfaction ratings) 10. 183 Have you conducted any needs assessments in your pro- ject/service? (If in the process of conducting a needs assessment) Have you sampled staff and/or clients regarding potential needs? Have you selected a sample to receive questionnaire? 
Have you created or selected a questionnaire? Have you pilot-tested questionnaire? Have you hired/trained interviewers/callers? Have you actually implemented needs assessment? Have you implemented goal attainment scaling (GAS)? Have you measured the effectiveness of services in your project/service? Any kind of follow-up of clients, excluding satisfaction? Any pretest-posttest comparisons? Comparison of any groups (e.g., service vs. no-service; one type of service with another type of service)? Experimental design with random assignment? Have you measured if your service is effective with different kinds of clients (e.g., service affects people differently depending on race, sex, age, education) 184 APPENDIX E AGREEMENT WITH CURRENT EVALUATION PRACTICES Several statements describing current evaluation prac- tices are presented. Please circle the response which best represents how much you agree with each statement. Please answer each one. If you have any comments about any of the items simply write them in the margin. (R)* 1. A benefit-to-cost ratio for each unit of service should not be computed by my project/service? Strongly Strongly Agree Agree Neutral Disagree Disagree 2. My project/service should use client data in its planning. Strongly Strongly Agree Agree Neutral Disagree Disagree (R) 3. My project/service should not record the source of each client referral (i.e., how the client found out about the program). Strongly Strongly Agree Agree Neutral Disagree Disagree 4. Systematic case studies should be used by my project/service to measure program effectiveness. Strongly Strongly Agree Agree Neutral Disagree Disagree 5. Clients should be contacted by my project/service several months after they have stopped receiving a service to see if it still has had a positive or negative effect on them. Strongly Strongly Agree Agree Neutral Disagree Disagree 6. My project/service should attempt to make the most rigorous possible effort to measure whether clients have improved after receiving the service. Strongly Strongly Agree Agree Neutral Disagree Disagree 185 AGREEMENT WITH CURRENT EVALUATION PRACTICES (continued) 7. My project/service should measure the efficiency of each of its programs. Strongly Strongly Agree Agree Neutral Disagree Disagree 8. My project/service should work with community groups in establishing objectives. Strongly Strongly Agree Agree Neutral Disagree Disagree (R) 9. My project/service should not establish specific objectives for every program. Strongly Strongly Agree Agree Neutral Disagree Disagree (R) 10. The staff of my project/service should not be willing to change their work routine to measure the efficiency of each of its programs. Strongly Strongly Agree Agree Neutral Disagree Disagree 11. Clients in my project/service should be asked how satisfied they are with each service they receive. Strongly Strongly Agree Agree Neutral Disagree Disagree (R) 12. I do not believe that program evaluation will allow my project service to compete more successfully for funding. Strongly Strongly Agree Agree Neutral Disagree Disagree 13. My project/service should measure the effectivenss of each of its programs. Strongly Strongly Agree Agree Neutral Disagree Disagree 14. A record of how a service has affected a client should be gotten once the client no longer receives the service. Strongly Strongly Agree Agree Neutral Disagree Disagree 186 AGREEMENT WITH CURRENT EVALUATION PRACTICES (continued) 15. 
My project/service should measure the extent to which each of its programs is reaching its intended group of clients. Strongly Strongly Agree Agree Neutral Disagree Disagree 16. My project/service should measure the impact of its programs on the surrounding community. Strongly Strongly Agree Agree Neutral Disagree Disagree 17. My project/service should not use program evaluation findings to help make budget decisions. Strongly Strongly Agree Agree Neutral Disagree Disagree 18. The staff of my project/service should not be willing to change their work routine to measure the impact of its programs on the surrounding community. Strongly Strongly Agree Agree Neutral Disagree Disagree 19. My project/service should not measure the economic benefit of each unit of service. Strongly Strongly Agree Agree Neutral Disagree Disagree 20. My project/service should not have a specific individualized written evaluation plan. Strongly Strongly Agree Agree Neutral Disagree Disagree ' 21. My project/service should not record the program cost for each unit of service. Strongly Strongly Agree Agree Neutral Disagree Disagree 22. My project/service should not record each client contact. Strongly Strongly Agree Agree Neutral Disagree Disagree *Denotes items reflected before analyzed 187 APPENDIX F PROJECT/SERVICE INFORMATION Name Project/Service Name 1. What is your job title? (Briefly describe your job.) 2. What is the job title and organization of your immediate supervisor? Job Title Organization 3. How many full-time paid (30+ hours/week) staff work in your pgoject/service? (Exclude clerical and maintenance staff. 4. How many part-time paid (less than 30 hours/week) staff work in your project/service? (Exclude clerical and maintenance staff.) 5. How many full-time volunteers work in your project/- service? (Exclude clerical and maintenance staff). 6. How many part-time volunteers work in your project/- service? (Exclude clerical and maintenance staff.) 7. Please estimate your budget for the current fiscal year (FY 1980 - 1981). 8. Please estimate the percentage of your annual budget spent on program evaluation. 9. What was the highest grade you completed in school. (1) Lower than 8th (2) 8th (3) 9th (4) 10th (5) (6) (7) 11th 12th College or advanced degree 188 PROJECT SERVICE INFORMATION (continued) Which of the following degrees do you hold? No degrees BS, BA NP/LPN RN MS, MA MSW MD Ph.D JD ) other (Please specify) ( ( ( ( ( ( ( ( ( ( H80 (1) \IONU‘I boom b—l OVVVVVVVVV Do you have a certificate in Gerontology? Yes No Which of the following services do you provide? Administration of Programs Program Development Referral to Other Agencies or Programs Advocacy/nursing home ombudsman Casework Chore Clerical Service Complaint Resolution Congregate Meals Coordination Counseling Crime Prevention Day-Care Education Employment Energy Escort Financial Management Health Screening Home Delivered Meals Homemaker Services Home Health Services Home Repair Individual Assessment and Monitoring In-home Visits Information and Referral Legal Services Library Services Mental Health Nutritional Education Outreach Physical Fitness Protective Services Recreational Services 13. 14. 15. 16. 17. 18. 19. 20. 21. Age: 22. Sex: 23. 189 PROJECT SERVICE INFORMATION (continued) Senior Discount Substance Abuse Telephone Reassurance Transportation Other I About how many professional conferences (e.g., The Gerontological Society) do you usually attend per year? About how many workshops do you usually attend per year? 
About how many papers do you present each year at pro- fessional conferences? In how many professional associations, e.g., Geron- tological Society, are you a member? How many professional journals do you read regularly? About how many years has your project/service been in existence? What percent chance is there that your project/service will be in existence in FY 1981 - 1982? (e.g., mark 100% if you are sure it will be around next year; mark 0% if you are sure it will not be here next year.) About how many years have you been employed in this organization? About how many years do you expect to stay with this organization? What was your age on your last birthday? Male Female 24. 25. 26. 27. 28. 29. 30. 190 PROJECT SERVICE INFORMATION (continued) For the next series of questions, answer each question by circling the answer which you feel most accurately represents how your project/service operates. How frequently do you participate in the decisions on the adoption of new policies in your project/service? Never Seldom Sometimes Often Always How frequently do you participate in decisions on the adoption of new programs in your project/service? Never Seldom Sometimes Often Always How frequently do you usually participate in the deci- sion to hire new staff in your organization? Never Seldom Sometimes Often Always How frequently do you usually participate in the promo- tion of any of the staff in your project/service? Never Seldom Sometimes Often Always For the next series of questions, please circle the NUMBER which best describes your opinion. As you can see, the numbers "1“ and "4" are "stronger" answers than "2|! and "3". People who want to make their own decisions would be quickly discouraged in this project/service. definitely false definitely true 1 2 3 4 People have to ask the boss before they do almost any- thing in this project/service. definitely false definitely true 1 2 3 4 There can be little action taken in this project/service until a supervisor approves a decision. definitely false definitely true 1 2 3 4 31. 32. 33. 34. 35. 36. 37. 38. 39. 191 PROJECT SERVICE INFORMATION (continued) In this project/service, even small matters have to be referred to someone higher up for a final answer. definitely false definitely true 1 2 3 4 In this project/service, any decision has to have the boss' approval. definitely false definitely true 1 2 3 4 In this project/service, most people feel like they are their own boss in most matters. definitely false definitely true 1 2 3 4 In this project/service, people can pretty much make their own decisions without checking with anyone else. definitely false definitely true 1 2 3 4 Most people in this project/service make up their own rules on the job. definitely false definitely true 1 2 3 4 People in this project/service are allowed to do almost as they please. definitely false definitely true 1 2 3 4 In this project/service, how things are done here is left up to the person doing the work. definitely false definitely true 1 2 3 4 Employees in this project/service are constantly being checked on for rule violations. definitely false definitely true 1 2 3 4 People in this project/service feel as though they are constantly being watched, to see that they obey all the rules. definitely false definitely true 1 2 3 4 Project Service Information (continued) 192 The following questions ask about some characteristics of the staff in your project/service. 
If your project/ser- vice is located within a larger organization, e.g., YMCA, Department of Parks and Recreation, Tri-County Office on Aging, circle only those characteristics that are relevant to the staff in your project/service. For example, 1. If written contracts of employment are used only for the director of your project/service, you would circle the number 4. If a writen contract of employment were used for every staff member, you should circle every number on the same line as that question. The second question refers to who has the authority to make decisions in your project/service. For example, 1. If the director of the project/service has the final say in who gets hired, you would circle the number 4. This would be true even if the Board of Directors had to confirm it later. If the director only makes suggestions and the Board of Directors makes the final decision, then you would circle number 5. If you don't know the answer to any of the questions simply leave it blank. 193 m e m N H uamx weep #0 ugoumx .m m a m N H H83. 3:6; ea 28% .m pence HmcowpmN m a m N H -286 8.5 we: .a meowaawgummu m a N N H 8N Set-.3 .m mmgavmuoea m e m N N we Fences copuwez .N acmEAopasm m a m N H 8.8228 53.2.: .H mcoepwmoa wmmcp to 20mm cw memum op AHaum mucosauou use meeoume mmmch weepomewo Legumewo meomw>emmsm AHNUPLmPu-cocv HNUwemHu mo venom mmmpm emcee .xpaam pmcu.flflm xomzu mmmwpa .eumm ecsoem mpuewu m mourn .mempm mo waxy some move uwpmwemuumemgo m wH .wpmHeaoeaam wee momamwemuumeecu mcwonHow one goes: Low amass: comm ucaoem mHue_u m wumHa muHHmHmmHuemm\pummoea 194 m a m N H xeoz,eoe aaHHHHHaHmcoamae meHeeuaa .N m a m N H mace we now one 3°: .0 m a m N H awesome m=H=HaeH .m m a m N H auH>eem age meemeeeu .a m e m N H msmemogq 3m: to cowuaon< .m m w m N a meV—Loz $0 c0585.:— .N m a m N H meuxeoe meHeHz .H NemumH :owmwumu ecu Eewecou op m>mg memcuo e? cm>m .cmxmu on coo cowuum mums -_HHmaH oedema eaeHeHao ma pmze pamEmwemm omen: comema pmmH ecu we on: meopumewo meopuwewa meomw> AHmowewHU-cocv Hmuwempo to venom -Lma=m emmum Loewe mamz muHHmHmmHuemm\uumwoea ,5 9 1 , $88 «:5 a 3.85 360E 00.585830 H05 5.; m0: 32.833308 80>. «03:00 No A309: 93 0a :9: an...» moxoo 05 5 209.5: 05 303 0.6.8.... .32 05 :0 9:300 2:. 5 30:05:88 23 .8 :03 .8... 8:585:38 .850 035 :5: 350235 on 3:08 motion “02.08 .50» on u m8: 0 .8 a: 0 a u.— 08 0: :0 o $853288 0 pm, o m, a 0 0 u :0 $555.00 0:» 5 20305:: 8 .850 05 0038030038 .50» :0058 :0 30 .85 8.50235 $0 00800 05 855.880 0p 00.: 3.30: w: .8533 25 :o 23: 202.953th Humaoxm o :22“: .N .H .95 .00.; .95 400.5 .95 .005 .9: .oo-c .95 .90.: .95 out .80.; Lomzoonoeq .03 1.30:0 850 .883 8 {8:00 «a :85 m5: :8: £83.88 mew—808 28308 0:82 6:93 .3 35 E: 8:. 8.03:8 8.505895 to: not; .850 55 .8 3:850 :0 .8 3:820 :0 .>8m\.n08 .850 05. .8388 oz 8:083:85 to: :5: 4.84.38 .850 $855.08 .850 $5 :85 to? 850 $5 Wazoo; 858mg mam-Hm .55 :85 $5.4. 55 :85 team 5.; 8.59585 50.; 33:82 oomeooo o -08 :88 oz 5.; 53:00 0: 5.; Home 83 8:208 82 team :05. 03 ”2:23th .5 $23 $583 3 F5098 u o >00 0 35.5 3.883008 n o 83.895 58> u m 833:0 u m 838:5 535.000.: n N 803808 .1. N 85885 32.3.5.6. u a 58.588 u H 5.9:: 83:00 80>. mo 3:35.85 0.3 3.8: 2.3:: 33:00 .50» .8 3:33.; 05. x8: 3.8302 8 196 APPENDIX H NAME: AGENCY/SERVICE: EVALUATION KNOWLEDGE You will find below several questions regarding the program evaluation information given to you during the work- shop conducted by me this past Spring. Feel free to use your workshop manual or notes. Please fill in with pencil the appropriate answer on the enclosed answer sheet. 
For example, if the following question was asked: The major funding source for programs for the elderly is A) U.S. Department of Labor B) U.S. Department of Defense C) U.S. Department of Commerce 0) National Association for the Elderly E) Administration on Aging You would fill in the letter "E" on the answer sheet — ®O@®® Please answer every question. Smile! This is the last questionnaire you will get from me. 1. Which of the following types of program evaluation focuses primarily on the political power that an agency can get to support their programs? *A) Pork Barrel model of evaluation B) Charity model of evaluation C) Scientific model of evaluation D) Influence model of evaluation 2. The A) B) *C) D) 3. The is A) B) C) *0) 197 EVALUATION KNOWLEDGE (continued) nature or content of a planning objective refers to The person receiving a service. How long the effect of a service lasts. Whether a service is trying to change information, attitudes or behavior. Whether the objective is measureable. most foolproof way to know if a service is effective to Give clients a pre-test when they start the service and a posttest when they are done. Conduct a follow-up of clients when they have finished receiving the service. Compare clients currently receiving the service with clients who used to receive the service. Compare clients who were selected with a flip of the coin to receive the service with clients who were selected in the same way not to receive the service. 4. Resistance of staff members to doing program evaluation may be reduced most by *A) 8) Assuring them that they will not lose their job as a result of doing the evaluation. Including them in the planning of the evaluation after all of the details have been worked out by the director. Telling them the evaluation is not really very important. Telling them the information provided by the evalu- ation will not be used anyway. effort evaluation measures How hard clients tried to succeed in a service. How hard staff members tried to improve the client. Whether a service was implemented in the way it was planned to be implemented. The number of units of service provided for a fixed amount of money. 6. Goal Attainment Scaling (GAS) is A) *3) C) D) A measure of the impact of service goals on the surrounding community. An outcome measure used for describing and evalu- ating client goals. A measure of the effectiveness of a program. A self-esteem questionnaire. 10. 11. 12. 198 EVALUATION KNOWLEDGE (continued) Keeping a record of what staff members do when deliver- ing a service to seniors is an example of A) Effectiveness evaluation. B) Impact evaluation. C) Process evaluation. *0) Effort evaluation. Measuring which components of a service are responsible for its success is an example of A) Impact evaluation *8) Process evaluation C) Efficiency evaluation. D) Effort evaluation. Measuring whether a service works better for seniors of different incomes or ages is an example of A) Impact evaluation. B) Effectiveness evaluation. C) Effort evaluation. *0) Process evaluation. Opportunity costs refer to A) How expensive providing services can be. *8) What an individual gives up in order to receive a service. C) Direct costs of providing a service. D) What an individual has to pay to receive a service. Measuring the effectiveness of a service means asking the question. A) Is the service having an impact on the people you want to have an impact on? B) Was the service implemented as planned? 
APPENDIX I

NAME:

Workshop Effectiveness

Please place a check next to the statement that best represents the way you feel about the following aspects of the evaluation workshops. Each statement is rated on the same scale: Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree.

(R)* 1. The information provided in the workshops was not very practical.

2. The information provided was well-organized.

3. The material presented in the workshop did not accurately represent the activities of service providers.

4. My ability to conduct program evaluation has improved as a result of my participation in the evaluation workshops.

5. Program evaluation in my project/service is not likely to improve as a result of my participation in the evaluation workshops.

6. I intend to do more program evaluation in my project/service.

*Denotes items reflected before being analyzed.
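Items marked (R) here and in the questionnaire that follows are worded in the reverse direction and, per the footnote, were reflected before analysis. The fragment below is a minimal sketch of that step, assuming a 5-point coding with Strongly Agree = 5 and Strongly Disagree = 1 and a simple mean across items; the set of reverse-keyed items and all names are illustrative assumptions, not the study's scoring.

    # Reflect reverse-keyed (R) items, then average; the coding (Strongly Agree = 5
    # ... Strongly Disagree = 1), the set of (R) items, and the names are assumptions.
    REVERSE_KEYED = {1, 3, 5}   # hypothetical (R) items for a 6-item form
    HIGH, LOW = 5, 1

    def workshop_effectiveness_score(responses):
        """responses: dict mapping item number (1-6) -> raw rating (1-5)."""
        reflected = {
            item: (HIGH + LOW - value) if item in REVERSE_KEYED else value
            for item, value in responses.items()
        }
        return sum(reflected.values()) / len(reflected)

    print(workshop_effectiveness_score({1: 2, 2: 4, 3: 1, 4: 5, 5: 2, 6: 4}))  # about 4.33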
APPENDIX J

Name:                    Agency:

CONSULTATION EFFECTIVENESS

I'd like you to give your opinion regarding various aspects of the consultation sessions. When asked about implementing evaluation, I mean that to include everything we spoke of in the workshop, e.g., setting goals and objectives, needs assessment, cost/benefit or cost/effectiveness analysis. Even though you have not implemented all of the evaluation methods in your project/service, state how much the consultations have helped you to implement whatever you have tried so far.

Following are several statements regarding the consultation sessions. Please circle the response which best represents how much you agree with each statement. Please answer each one. If you have any comments about any of the items, simply write them in the margin. If you do not know the meaning of an item, circle "DK" (Don't Know). Each statement is rated on the same scale: Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree, DK.

1. Setting goals and objectives in the consultation sessions helped me to implement program evaluation in my project/service.

2. The knowledge provided in the consultation sessions helped me to implement program evaluation in my project/service.

(R)* 3. Participants in the consultation sessions did not share their resources with me in my effort to implement program evaluation in my project/service.

(R) 4. The consultation sessions did not provide a good understanding of program evaluation.

5. I think I could ask other participants in the consultation sessions to help me implement program evaluation methods in the future.

6. I know more about doing program evaluation as a result of my participation in the consultation sessions.

(R) 7. I could have implemented program evaluation in my project/service without the knowledge provided in the consultation sessions.

(R) 8. The consultation sessions were too structured to help me implement program evaluation in my project/service.

(R) 9. The consultation sessions were a waste of time.

(R) 10. It was not very helpful to share my experiences from implementing program evaluation with other participants in the consultation sessions.

(R) 11. The consultation sessions have been no better than other consultations I have received.

12. The consultation sessions helped to provide a structure for implementing program evaluation in my project/service.

13. The other participants in the consultation sessions helped me to implement program evaluation in my project/service.

14. I could not have implemented program evaluation in my project/service without the contributions of other participants in the consultation sessions.

15. It would have been very difficult to implement program evaluation in my project/service without the support provided by other participants in the consultation sessions.

16. The consultation sessions provided enough knowledge for me to implement program evaluation in my project/service.

(R) 17. Other participants in the consultation sessions did not support my efforts to implement program evaluation in my project/service.

(R) 18. The consultation sessions did not provide a sufficient focus for helping me implement program evaluation in my project/service.

19. The time spent in the consultation sessions has been worthwhile.

(R) 20. The resources of other participants in the consultation sessions were not necessary to help me implement program evaluation in my project/service.
21. I gained new contacts with other agencies from other participants in the consultation sessions.

(R) 22. The other participants in the consultation sessions did not offer very many useful suggestions for implementing program evaluation in my project/service.

23. I would recommend the consultation sessions to other service providers.

(R) 24. Measuring the weekly achievement of objectives was not very useful in helping me to implement program evaluation in my project/service.

[Appendix K (pages 206-207) reproduces correlation matrices for the organizational scale items. The tables are printed sideways in the original and are not legible in this copy.]

APPENDIX L

Similarity Coefficients: Organizational Scales*

[Page 208 lists items 1-14 with their similarity coefficients on the five scales, but the coefficient columns of that page cannot be realigned in this copy. The items are: 1. Pro. conf. (13)**; 2. No. workshops (14); 3. No. pro. assoc. (16); 4. No. pro. journ. (17); 5. Part. in dec. to adopt pol. (24); 6. Part. in dec. to adopt prog. (25); 7. Part. in dec. to hire (26); 8. Part. in prom. of staff (27); 9. Own dec. discour. (28); 10. Ask boss before (29); 11. Little action taken (30); 12. Refer to someone higher up (31); 13. Have boss approv. (32); 14. Own boss (33). The Rule Observation value for item 20 below is also not legible.]

Columns, in order: Professionalism, Participation in Decision Making, Hierarchy of Authority, Job Codification, Rule Observation.

15. Make own dec. (34): -12  12  72  90  12
16. Make own rules (35): -33  -12  11  82  -59
17. Do as they please (36): -4  -19  13  88  -49
18. Left up to pers. (37): -11  3  53  89  -13
19. Check for rule violations (38): 29  19  58  -16  98
20. Watched for rules (39): 29  12  57  -26

*Decimal points have been omitted.
**Numbers in parentheses refer to item numbers on the Project/Service Information Questionnaire.

[Pages 210-212 reproduce correlation matrices for the Agreement with Evaluation Practices items (Appendix M). These tables are also printed sideways and are not legible in this copy.]
APPENDIX N

Similarity Coefficients: Agreement with Evaluation Practices*

Agree with Evaluation Practices

1. Use benefit/cost ratio  77
2. Use client data  88
3. Record source of referral  80
4. Use systematic case studies  87
5. Contact clients after service is stopped  90
6. Measure whether client improved as result of getting service  85
7. Measure efficiency of programs  93
8. Work with community groups to establish objectives  70
9. Establish specific objectives for every program  92
10. Change work routine to measure efficiency  89
11. Ask clients how satisfied they are  58
12. Will allow program to compete more successfully for funding  96
13. Measure effectiveness of each program  95
14. Record how service has affected client once they no longer receive service  89
15. Measure extent each program is reaching its intended group of clients  93
16. Measure impact of programs on surrounding community  61
17. Use program evaluation findings to make budget decisions  91
18. Change work routine to measure impact of programs on surrounding community  [value not legible]
19. Should measure economic benefit for each unit of service  87
20. Should have specific individualized evaluation plan  74
21. Should record program cost for each unit of service  90
22. Should record each client contact  71

*Decimal points have been omitted.
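Appendices L, N, and P report similarity coefficients relating each item to a scale, with decimal points omitted. The appendices do not reproduce the formula used, so the sketch below should be read only as one plausible interpretation, assuming the coefficient behaves like the product-moment correlation between an item and the unit-weighted sum of its scale's items; the names and data are invented for the example.

    # Illustrative only: assumes a similarity coefficient is the correlation between
    # an item and the unit-weighted sum of its scale's items, printed with the
    # decimal point dropped (.77 -> 77), as in the appendix tables.
    from statistics import correlation  # available in Python 3.10+

    def similarity_coefficients(item_responses, scale_items):
        """item_responses: dict item name -> list of respondent scores (equal lengths);
        scale_items: the item names that make up one scale."""
        totals = [sum(vals) for vals in zip(*(item_responses[i] for i in scale_items))]
        return {
            item: round(correlation(item_responses[item], totals) * 100)
            for item in scale_items
        }

    data = {
        "uses_client_data":        [1, 3, 4, 2, 5, 4],
        "records_each_contact":    [2, 3, 5, 2, 4, 4],
        "uses_benefit_cost_ratio": [1, 2, 4, 3, 5, 3],
    }
    print(similarity_coefficients(data, list(data)))

Because each item is included in the total it is correlated with, values computed this way run a little high; a corrected version would leave the item out of its own total before correlating.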
[Pages 215-217 reproduce correlation matrices for the Evaluation Self-Report items (Appendix O). The tables are printed sideways in the original and are not legible in this copy.]

APPENDIX P

Similarity Coefficients: Evaluation Self-Report*

Evaluation Self-Report

1. Uses client data in its planning  79
2. Records each time it delivers a service  76
3. Compares client information collected before and after services are provided to measure effectiveness  78
4. Records client satisfaction  93
5. Information is collected from each client after services are provided to measure effectiveness  98
6. Regularly assess client needs  77
7. Records each client contact  80
8. Measures extent to which each program reaches its clients  86
9. Gathers follow-up information  91
10. Use experimental designs to measure effectiveness  26
11. Records program cost for each unit of service  44
12. Computes a benefit/cost ratio for each service  35
13. Specific written objectives are established for each program  77
14. Records each client referral made  80
15. Monitors the implementation of programs  82
16. Records action taken on each referral  64
17. Compare clients who receive services with clients who don't to measure effect  76
18. Records source of each referral  74
19. Uses systematic case studies to measure effectiveness  46
20. Measures extent objectives are achieved  69
21. Measures the impact of programs on surrounding community  65
22. Constructs its own measurement tools to measure client change  73

*Decimal points have been omitted.