AN ASSESSMENT OF THE IMPACT OF STAFF INVOLVEMENT AND FACE-TO-FACE CONSULTATION ON ADOPTION OF INNOVATIVE PROGRAM EVALUATION METHODS

Dissertation for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
WILLIAM FRANCIS STEVENS
1977

This is to certify that the thesis entitled "An Assessment of the Impact of Staff Involvement and Face-to-Face Consultation on Adoption of Innovative Program Evaluation Methods" presented by William Francis Stevens has been accepted towards fulfillment of the requirements for the Ph.D. degree in Psychology.

Major professor
Date: 7-27-77

ABSTRACT

AN ASSESSMENT OF THE IMPACT OF STAFF INVOLVEMENT AND FACE-TO-FACE CONSULTATION ON ADOPTION OF INNOVATIVE PROGRAM EVALUATION METHODS

By William Francis Stevens

Innovation dissemination is the process by which underutilized knowledge is spread through evolutionary and/or planned means. This research reviews the literature to identify possible strategies for disseminating innovative program evaluation methodology to human service programs.

Increased staff involvement and face-to-face interaction with evaluation consultants are identified as factors which should produce increased adoption of the evaluation methods advocated. A 2 x 2 x 3 factorial research design is developed to assess the impact of these factors.

The results indicate significantly greater innovation adoption among human service programs where more than one staff member is consulted, and among programs that are consulted at the program site as opposed to by telephone. Prior interest of the human service program, as well as a pre-consultation assessment of the consultant by the consultee, also were found to significantly predict innovation adoption. Measures of staff attitudes and knowledge and of program resources were not found to correlate significantly with innovation adoption. Various recommendations were made for future research.

AN ASSESSMENT OF THE IMPACT OF STAFF INVOLVEMENT AND FACE-TO-FACE CONSULTATION ON ADOPTION OF INNOVATIVE PROGRAM EVALUATION METHODS

By William Francis Stevens

A DISSERTATION
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
Department of Psychology
1977

ACKNOWLEDGMENTS

In the process of completing this task I would like to take the time to thank some of the people who have helped me along the way.

I would like to thank Lou for helping me be more objective in my views and for all of the assistance he has been in this project. I would like to thank Pat, Mary, Coby and John for typing it.

A special thanks to Charley Johnson and especially Bill Ives who helped me keep this thing in perspective.

Finally I would like to thank all my other friends who took the time to listen over and over again when this experience was most painful.

TABLE OF CONTENTS

LIST OF TABLES
INTRODUCTION
  Program Evaluation and Knowledge Utilization
  A Conceptual Overview
  Participative Decision-Making
    Laboratory Research
    Field Research
  Face-to-Face Interaction
  Consultant Effectiveness Research
  The Experimental Plan
  Experimental Hypotheses
    Intervening Variable Hypotheses
    Outcome Hypotheses
METHODS AND PROCEDURES
  Design
  Sample
    Initial Recruiting
    Subject Assignment and Attrition
  The Innovation
  The Consultations
  Data Collection Schedule
  Data Reduction Procedures: Descriptive, Process and Outcome Measures
    Descriptive Measures
    Process Measures
    The Outcome Measure
RESULTS
  Analysis of Variance: Descriptive Measures
  Analysis of Variance: Process Measures
  Analysis of Variance: Outcome Measure
DISCUSSION
  Group Consultation versus Private Consultation
  Process Measures and Staff Involvement
  Outcome Measure and Staff Involvement
  On-Site versus Telephone Consultation
  Process Measures and Site
  Outcome Measure and Site
  Cost-Effectiveness
  Consultant Effect
  Descriptive Measures
  Summary and Future Research
    Group Consultation
    On-Site Consultation
    Consultant Credibility
    Prior Interest
REFERENCES
APPENDIX

LIST OF TABLES

Table 1: Subject Assignments in Final Factorial Design
Table 2: Cronbach's Alpha Analysis for Program Size
Table 3: Cronbach's Alpha Analysis for Attitude-Concepts Scale
Table 4: Cronbach's Alpha Analysis for Attitude-Staff Cooperation Scale
Table 5: Cronbach's Alpha Analysis for Consultant Pre-Rating Scale
Table 6: Cronbach's Alpha Analysis for the Total Staff Involvement Scale
Table 7: Cronbach's Alpha Analysis for Attender-Cooperativeness Scale
Table 8: Cronbach's Alpha Analysis for the Staff Cooperativeness Scale
Table 9: Cronbach's Alpha Analysis for the Attitude-Resources Scale
Table 10A: Cell means for Prior Interest
Table 10B: Analysis of variance of Prior Interest
Table 11A: Cell means for Academic Resources
Table 11B: Analysis of variance of Academic Resources
Table 12A: Cell means for Evaluation Resources
Table 12B: Analysis of variance of Evaluation Resources
Table 13A: Cell means for Program Size
Table 13B: Analysis of variance of Program Size
Table 14A: Cell means for Attitude-Capability
Table 14B: Analysis of variance of Attitude-Capability
Table 15A: Cell means for Attitude-Staff Cooperativeness
Table 15B: Analysis of variance of Attitude-Staff Cooperativeness
Table 16A: Cell means for Attitude-Concepts
Table 16B: Analysis of variance of Attitude-Concepts
Table 17A: Cell means for Consultant Pre-Rating
Table 17B: Analysis of variance of Consultant Pre-Rating
Table 18: Correlations of Descriptive Measures with the Outcome Measure
Table 19A: Cell means for Staff Involvement
Table 19B: Analysis of variance and covariance for Staff Involvement
Table 20A: Cell means for Meetings
Table 20B: Analysis of variance and covariance for Meetings
Table 21A: Cell means for Total Staff Involvement
Table 21B: Analysis of variance and covariance for Total Staff Involvement
Table 22A: Cell means for Attender-Cooperativeness
Table 22B: Analysis of variance and covariance for Attender-Cooperativeness
Table 23A: Cell means for Staff Cooperativeness
Table 23B: Analysis of variance and covariance for Staff Cooperativeness
Table 24A: Cell means for Attitude-Resources for site and staff involvement
Table 24B: Analysis of variance and covariance for Attitude-Resources
Table 25A: Cell means for Time of Consultations (in minutes)
Table 25B: Analysis of variance and covariance for Total Time of Consultations
Table 26A: Cell means for Total Tasks adjusted for the Prior Interest covariate
Table 26B: Analysis of variance and covariance for Total Tasks
Table 27: Correlations between Innovation Outcome and Process measures
Table 28A: Cell means for the Innovation Outcome Measure
Table 28B: Analysis of variance of the Outcome Measure
Table 29: Collapsed table of cell sizes for Site versus Staff conditions
Table 30A: Cell means for Outcome Measure adjusted for the Prior Interest covariate
Table 30B: Analysis of variance for Outcome Measure using Prior Interest as a covariate
Table 31A: Cell means of the Outcome Measure adjusted for the Consultant Pre-Rating covariate
Table 31B: Analysis of variance for Omnibus Outcome Measure using Consultant Pre-Rating as a covariate

INTRODUCTION

Program Evaluation and Knowledge Utilization

In recent years there has been a significant public outcry for more efficient management of human service agencies. As Jimmy Carter (1974) pointed out:

"The hard question to be answered remains: On what basis and toward what end will these programs be directed and at what cost? The question can only be answered through an evaluation system for social services programs." (Evaluation, Spring, 1974, pp. 6-7)

Fortunately, accompanying this demand for a more rational approach to meeting human needs, a robust program evaluation methodology has emerged in numerous academic circles. Led by such pioneers as Campbell and Stanley (1963), Fairweather (1967), and Rossi and Williams (1972), there has been an increase in the development of highly sophisticated approaches to assessing human service program effectiveness. Unfortunately, the growth of this knowledge base has not been accompanied by an equivalently widespread adoption of these new techniques in the field.

Underutilization of knowledge is actually quite common when an innovation emerges. Throughout history, there has always existed a time gap between the discovery of new knowledge and the implementation or use of such knowledge.
In an age when the speed of new knowledge development seems to have increased, the adoption of such innovations often seems to lag just as far behind. With the recognition of this time lag between innovation and adoption of innovation has come the development of a field of study often referred to as knowledge utilization.

As this new field of study has emerged, several approaches to conceptualizing knowledge utilization, or innovation dissemination, have developed. Havelock (1971) has aptly described a number of approaches that seem to have dominated the field. For example, the "Social Interaction" perspective describes a field of endeavor largely within the tradition of communications theory and research. Typically, researchers operating from a Social Interaction perspective are interested in the communication process that occurs as knowledge of an innovation moves from the initial developer of the innovation to the eventual user. Investigators such as Rogers and Shoemaker (1971) have been particularly active in this area.

Another perspective identified by Havelock (1971) is the "Research, Development and Diffusion (R, D & D)" perspective. This view of innovation dissemination approaches the problem from a very rational, descriptive stance. From the R, D & D perspective, innovations arise because of rationally defined needs, and proceed through various discrete stages from initial development through ultimate adoption in the field. To a significant degree, the R, D & D perspective is a post-hoc description of an apparently rational process after much of the associated human interaction has been abstracted from it.

A third perspective identified by Havelock (1971) is the "Problem Solver" perspective, which includes much of the work that has been done under the rubric of organizational development. Here, there is particular concern for facilitating the internal problem-solving behavior of adopting organizations. Problem Solver efforts are devoted to determining what interventions can help organizations become more participative, responsive, and humane. The Problem Solver perspective is primarily concerned with process, and is less concerned with the specific innovation being adopted by a client organization.

Another perspective on the innovation dissemination problem is an integral part of the methodology described by Fairweather and his co-workers (Fairweather, 1967; Fairweather and Tornatzky, 1977). While beyond the scope of the particular issues addressed here, Fairweather describes a fairly sophisticated methodology designed to create new social innovations, refine and develop them through use of a data-based evaluation process, and eventually develop strategies which will lead to their dissemination to the field. A particularly innovative aspect of Fairweather's approach lies in his strong emphasis on empirical research of the innovation dissemination process itself. The strength of the Experimental Social Innovation approach to innovation dissemination advocated by Fairweather (1967, 1977) is an insistence that alternative change strategies should be compared in the context of classical experimental methodology. Through this process an empirical determination may be made of the best strategy for fostering the adoption of an innovation. One research project undertaken within this tradition (Fairweather, Sanders and Tornatzky, 1974) has particular relevance for the research at hand, and will be referred to again later.
The perspective of the present research is that the adoption of evaluation methodology in human service organizations is a significant social problem and is, from a conceptual point of view, a knowledge utilization and organizational change issue. The attempt of this research will be to apply experimental methodology to a comparison of alternative change strategies.

A Conceptual Overview

As implied above, the problem of encouraging human service agencies to adopt program evaluation methodology is a complex one, yet it is an issue that is not dealt with directly by the literature. Much of the research utilization literature as reviewed by Havelock (1971) and Rogers and Shoemaker (1971) is concerned with the adoption of innovations in a non-organizational context. A major portion of this literature falls under Havelock's Social Interaction perspective and has been concerned with the adoption of innovations not particularly similar to evaluation methodology (e.g., new farming practices adopted by farmers in rural settings). This is a far cry from the adoption of highly complex evaluation methodologies by large human service agencies. By the same token, much of the organizational development literature, which is the heart of the Problem Solver perspective referred to by Havelock (1971), has little to say about strategies to foster the adoption of a specific innovation. Much of this organizational change literature can be described as peripheral to research utilization issues. The organizational development (OD) approach to organization change is based on the ability of an external change agent to function effectively in a role focused on facilitating organizational problem-solving and group processes. It is the change agent's mission to assist the client in identifying possible alternatives and to facilitate the organization's internal decision-making processes, not direct them. From the perspective of such OD pioneers as Argyris (1970), Blake and Mouton (1969), and Bennis (1966), the intentional encouragement of agencies to use a specific program evaluation methodology would be inconsistent with the non-directive role of the organizational development practitioner.

However, aside from the rather narrow viewpoint of the organizational development practitioner, considerable understanding of the problem at hand can be gleaned from organizational theory in the broader sense.

Organization theory has been characterized for several decades by a controversy over the importance of informal group processes in the context of bureaucratic organizations. One party to this discussion is epitomized by Weber (1947) and has advocated the classical bureaucratic approach to organizing the world of work. Assuming that the organization is a rationalizable place, an ideal-type organization might be structured by emphasizing hierarchy, specialization, formal modes of communication, and a priori specification of rights and privileges, all designed to maximize focused expertise on task accomplishment. The thrust of this view is to make the organization akin to a social machine, with its individual members being construed as replaceable parts.

In contrast to this point of view has been the theory underlying the organizational development practitioners.
Beginning with the early Hawthorne Western Electric studies (Roethlisberger and Dickson, 1964), the persistent point made by these theorists is that the workplace is a setting in which interpersonal concerns and group dynamics issues are particularly important for successful task accomplishment. Individuals such as Whyte (1961) and McGregor (1960) have argued that organizations should be structured in such a manner as to maximize personal fulfillment, informal interaction, and participative decision-making. According to these authors such changes in organizations will produce more efficient, productive, and humane places to work.

The compromise position in this debate has been struck by organizational contingency theorists such as Litwak (1961), Thompson (1967) and Perrow (1972). The viewpoint taken here is that some organizational tasks are best handled in the context of more informal group processes, and others are best handled bureaucratically. As described by Litwak (1961), some tasks are uniform, and other tasks are non-uniform in nature. The former might best be approached by bureaucratically structured organizations; the latter might best be handled by more informal, less hierarchical, face-to-face types of interaction within the organization.

The point of view taken in this research is that the innovation adoption process is, by definition, a problem that is analogous to a non-uniform task. As such, intervention strategies that rely on informal, "non-bureaucratic" modes of interaction should be related to successful change efforts. In the review of the literature to follow, we will consider two possible change intervention parameters that are seen to be of particular importance. First, it will be argued that the literature seems to indicate that participative decision-making may be related to organizational change and innovation adoption. Additionally, the literature will be reviewed to evaluate the evidence for and against the importance of face-to-face interaction in change interventions.

Participative Decision-Making

Literature supporting the importance of participative decision-making in facilitating change comes from a variety of sources, including both laboratory and field settings.

Laboratory Research. One area of research that seems to have particular applicability to the present discussion is the risky shift phenomenon, which has been studied in social psychological circles for some years. As has been frequently observed, innovation adoption seems closely related to risk-taking behavior (President's Conference on Technical-Distribution Research for the Benefit of Small Business, 1957). Given that this assumption is accurate, there appear to be a great many laboratory studies which support the concept that simple group discussion prior to the decision-making of group members increases risk-taking behavior (Wallach et al., 1962; Kogan and Wallach, 1967b; Levinger and Schneider, 1969).

Cecil, Cummings and Chertkoff (1973), after reviewing a large quantity of risky shift literature, concluded that risky decision-making is significantly increased when subjects are asked to make the decision following group discussion as opposed to private decision-making. Analogously, Cecil et al. conclude that group decision-making in a program management setting should increase program innovativeness.

In spite of these many supportive research findings, there have been a number of studies which have found conflicting results regarding the risky shift phenomenon.
The thrust of these studies indicates that the group decision-making shift represents a polarization toward either a conservative or a risky position, based upon the social norms of the majority of the group members (Stoner, 1968; Kogan and Wallach, 1967a; Nordhoy, 1962). The totality of research in this area, then, primarily supports a shift of some fashion, one which may favor or oppose adoption of the innovation, when participative decision-making occurs. Therefore, in a consultation setting, the cited research would suggest that if more than one staff member were involved in the consultation sessions prior to and/or during the decision to adopt the innovation, the innovation adoption rate would differ (increase or decrease) from that observed when only a single subject was consulted during the consultation process. Based on the theories of Lewin (1947) and Pelz (1958), such "group carried" attitude changes in favor of innovation adoption should maintain themselves longer than single-consultee attitude changes, thereby increasing actual completion of innovation tasks in organizations consulted in a group setting.

In another body of laboratory research, Shaw (1976) reviewed several studies from the group dynamics literature. He was particularly concerned in his review with comparing alternative types of communication networks that have been used in small group laboratory studies. In this review, Shaw found that communication networks that were less hierarchical and more "open" were more effective in solving highly complex group problems. If one can make the conceptual leap from such laboratory problem-solving exercises to implementing complex social innovations, then more "open" interactions might likewise be related to greater innovation.

Field Research. The evidence from the field setting in this regard is more persuasive. Habbe (1952) suggested that regularly convened group meetings of lower-level organization members were an effective means of breaking down intra-organization barriers to the throughput of new ideas and innovations. In a classical study of this notion (Coch and French, 1948), a group of workers in a pajama factory were involved in initial planning for the utilization of new manufacturing techniques. Fortunately, the investigators in this study were able to set up their research in a true experimental design, and to compare different degrees of participative involvement in initial planning and decision-making as they affected acceptance of the changes. It was found that those employees who were more directly involved in the planning and decision-making were much more receptive to the changes in their work setting.

In a classical comparative organization study, Burns and Stalker (1961) investigated a number of manufacturing firms in Great Britain. They attempted to categorize these organizations in terms of the degree to which they followed a bureaucratic model of organizational functioning. The finding of particular importance to the present review is that those organizations which were more open and less hierarchical were those in which industrial innovations were more likely to be found. These authors went on to make the argument that an organizational climate for innovation can be created in such a setting.

In a national survey of school districts, some evidence was found which tends to support a group-participative approach.
Havelock and Havelock (1973) found that the degree to which school districts began new programs was significantly correlated with the degree to which they involved staff in their planning and development. In an analogous study, Tornatzky (1977) found that the degree to which school systems adopted new programs was significantly correlated with the amount of participative involvement by community and staff members. Further, in an experimental study in a prison setting, Lounsbury and Tornatzky (1975) found that the involvement of inmates in planning for changes in the physical environment of the prison was significantly related to a more enthusiastic attitudinal acceptance of such changes.

Most directly relevant for the research at hand is the work of Fairweather, Sanders and Tornatzky (1974). In this national experimental investigation of innovation diffusion to a sample of state and federal hospitals, one of the strongest findings was the highly significant correlation between participative involvement by staff in decisions regarding innovation and the likelihood that such an innovation would be adopted. Specifically, in those hospitals where staff were heavily involved throughout the innovation adoption and decision-making process, there was a strong likelihood that a successful adoption would occur.

In summary, the literature reviewed here would seem to argue for the importance of staff involvement and participative decision-making in the adoption of an innovation such as program evaluation methodology.

Face-to-Face Interaction

In the organizational theoretical literature reviewed above, it was pointed out that one of the characteristics of a non-bureaucratic organization is a greater reliance on interpersonal, informal interaction among organizational members. In turn, it was argued that such interpersonal processes might be quite congruent with change and innovation adoption. Fortunately, in the research utilization literature there have been several comparative studies that bear directly on this issue.

One of the common techniques used in the dissemination of information is mass media materials. As has been pointed out by Schramm (1962), those actually reading written materials are typically higher in education and socio-economic level than those who do not. Thus, in using such a technique in a dissemination of innovation effort, one must be particularly attuned to the characteristics of the intended audience. Rogers (1971) indicates that while the receiving and reading of written materials is one possible step toward later adoption of the proposed innovation, the likelihood of a potential adopter reading and subsequently disregarding the message is very high.

Lounsbury (1976) empirically demonstrated this point in his study of the dissemination of ecologically relevant information. Using a true experimental design on a large sample of metropolitan residents, he compared various dissemination techniques that either used strictly mass media approaches or supplemented these with more interpersonal interaction such as a series of phone calls. One of the findings of the study was that adoption behavior typically occurred only with the intervention of a phone follow-up supplementing the written communication.

Some experimental laboratory studies have indicated that face-to-face communication is rated significantly more effective than telephone communication for complex group discussions (Christie, 1975).
Following from such research, Conrath (1975) experimentally evaluated the effectiveness of telephone versus face-to-face diagnosis of hospital patient medical problems. Conrath found that in this one-way type of consultation (patients usually only providing, rather than receiving and using, information), face-to-face diagnosis was more valuable in diagnosing more subtle secondary medical problems. Antonioni (1973), in a field comparison of counselor communication at an outpatient counseling program, found that face-to-face sessions yielded more positive observer ratings of counselor empathy, verbal concreteness, and client self-exploration than did counseling sessions over the phone.

In an interesting fusing of laboratory and field-based research, Chapanis (1971) compared the relative utility of face-to-face interaction, telephone interaction, and written messages in communicating complex information. In a laboratory study comparing these techniques, the major finding was that face-to-face interaction was essential. In a companion study, Chapanis found that utilization of a scientific information network was significantly enhanced if users of the network had an opportunity to communicate directly by phone with a "resource person" rather than submitting information requests to a tape recorder.

Perhaps the most directly relevant study is that of Fairweather, Sanders and Tornatzky (1974). This national dissemination study experimentally compared various intervention approaches that differed in the degree of interpersonal contact. One condition of the study consisted of the distribution of brochures to hospital staff, a second condition involved a one-day workshop presentation, and a third condition consisted of a relatively intense consultation, leading to the establishment of a temporary demonstration program. Evidence clearly indicated that the more interactive modes of intervention were related to more long-term change and innovation adoption. In further research conducted within the Fairweather et al. (1974) study, a comparison was made between different types of consultation assistance. Some hospitals were offered the possibility of a face-to-face consultation with a member of the research staff; other hospitals were given a do-it-yourself manual to assist them in establishing the new program. The former modality was clearly superior to the latter in producing adoption.

In summary, the data reviewed here, while fragmentary, argue for the importance of face-to-face interaction in fostering the adoption of a complex innovation such as program evaluation techniques.

Consultant Effectiveness Research

Over the years, many authors have theorized extensively about the impact of consultant traits, skills and behaviors on the consultant's effectiveness in promoting innovation adoption. Credibility is one such trait, which Rogers and Shoemaker (1971) define as "... the degree to which a communication source or channel is perceived as trustworthy and competent by the receiver." Rogers and Shoemaker go on to state that "change agent success is positively related to his credibility in the eyes of his clients." Similar concepts have been suggested by a number of other authors (Zagona and Harter, 1966; Caird, 1961; Niehoff and Anderson, 1964), who suggest also that the credibility of a consultant is most directly related to the observation, by the potential innovation adopters, of the consultant in the performance of innovation-related tasks.
Some laboratory research has been conducted which shows that subjects who are given a high pre-consultation assessment of consultant credibility are significantly more likely to be persuaded to move their attitudes toward the positions advocated by the consultant than subjects who are given a mildly credible assessment of the consultant prior to consultation (Aronson, Turner and Carlsmith, 1963; Hovland and Weiss, 1951; Tannenbaum, 1968). The research of Osgood and Tannenbaum (1955) points out, however, that the relationship between communicator credibility and subject attitude change is curvilinearly affected by an incredulity factor as the communicator's (consultant's) advocated position moves further and further away from the subject's prior attitudes. Thus a communicator's credibility can drop sharply, and the subject's attitude change return to zero, if the communicator advocates a position too far afield from the subject's prior attitude.

Several authors have outlined a number of factors which may improve the effectiveness of a consultant. These include, but are not limited to: (1) holding a large quantity of knowledge in the area of consultation; (2) the ability to provide emotional support; and (3) the ability to be relevant and practical (Bowman, 1959; Gallessich, 1974; Caplan, 1970). One of the few empirical studies which compared consultant effectiveness (Fairweather, Sanders and Tornatzky, 1974) showed no significant differences between consultants.

An interesting research document by Larsen (1976) describes a study which empirically addresses the consultation relationship. In this study a sample of 20 community mental health centers was provided with consultation visitations by ten consultants. A utilization score was obtained, which was the dependent measure of organizational/social change. Measures were taken of the nature of the interaction, homophily scores, demographic descriptors of the agency, etc.

The results indicate that agency need was highly correlated with change. This agency need had in fact been articulated before the arrival of the consultant; thus high-change agencies had high awareness of the nature of the problem, agreement on the need for consultation, and clear expectations of what they wanted from consultation, all in advance of the consultation.

In addition, a detailed analysis of the consultation interaction found that the more effective consultants were those that basically dominated the interaction. High-utilization consultants spent §2§ of the meeting time talking and suggesting ideas. The more effective consultants had studied the background information of the agency and had prepared an agenda for the consultation. There was little evidence to support the notion of the non-directive consultant, or some of the other basic notions of the problem-solving, facilitator role for the consultant.

In summary, the above literature review would indicate that, while many theoretical concepts have been linked to consultant effectiveness, the empirical data which exist do not consistently indicate predictable differences in consultant effectiveness.

The Experimental Plan

As indicated above, this research is designed to foster the dissemination of program evaluation technology to a sample of human service agencies, specifically those funded by the Michigan Office of Substance Abuse Services, Michigan Department of Public Health.
The mission of this agency is to improve services rendered to substance abuse clients throughout the State of Michigan. Congruent with this mission, in recent years there has been increasing emphasis on the utilization of program evaluation methodology, and large-scale training efforts have been fielded to give program directors basic knowledge about evaluation. This study, then, has been designed to compare, experimentally, alternative technical assistance options to be offered to program directors of substance abuse agencies.

The operational plan of the study can be outlined in the following manner. A sample of program directors from substance abuse agencies across the state was asked to come to a preliminary training session on basic evaluation skills at a central training site. After this initial experience, these individuals were offered technical assistance options congruent with the hypotheses to be tested in this experiment. The principal dependent measure was the degree to which the substance abuse programs utilized the evaluation methodology advocated in the training experience. An important consideration of this experimental plan was the fact that these different substance abuse agencies included a wide range of evaluation experience, size, resources, prior interest, enthusiasm, etc. One of the efforts in this research will be to determine the degree to which these capacity and attitudinal factors seem to impact on the process of change. Finally, the research will attempt to control for effects brought about by different consultant characteristics.

A major concern of this research will be to determine the adequacy of the conceptual notions that have been advanced. An argument has been made that staff involvement and face-to-face interaction seem to facilitate the innovation adoption process. In the study at hand, we will attempt to manipulate these variables, and hope to observe changes in the degree of innovation adoption by the client organizations.

Therefore, the two principal dimensions to be manipulated experimentally will be:

1. Staff involvement will be developed as a dimension -- with some organizations receiving interventions designed to maximize Group involvement and other organizations receiving interventions designed to minimize staff involvement in the context of a Private consultation;

2. Site will be considered as a dimension -- with some organizations receiving interventions designed to maximize interpersonal interaction through On-Site face-to-face consultation, and others receiving consultation designed to minimize interpersonal interaction via a Telephone consultation.

Experimental Hypotheses

Congruent with the theoretical rationale developed above, the following hypotheses are presented:

Intervening Variable Hypotheses. As will be recalled, the thrust of the previous discussion argued that a maximizing of Group participation and On-Site face-to-face interaction will be associated with changes in a number of process-type intervening variables, which, in turn, are ultimately related to innovation adoption. One subset of hypotheses of the current study relates to these intervening variables. These include:

1. Group consultation will be more effective than Private consultation in enhancing positive attitudes toward the innovation.

2. Group participation will be more effective than Private consultation in fostering discussion and staff planning activity in the target organizations.

3.
On-Site consultation will be more effective than Telephone consultation in fostering positive attitudes toward the innovation.

4. On-Site consultation will be more effective than Telephone consultation in fostering discussion and staff planning activity in the target organizations.

Outcome Hypotheses. As indicated above, the principal dependent variable in the study is the adoption of program evaluation techniques as an innovation in human service agencies. The following hypotheses are advanced:

1. Group consultation will be more effective than Private consultation in fostering the adoption of program evaluation techniques, and

2. On-Site consultation will be more effective than Telephone consultation in fostering the adoption of program evaluation techniques.

METHODS AND PROCEDURES

Design

The study design consisted of a 3 x 2 x 2 factorial analysis of variance format whereby subject organizations were randomly assigned to the various forms of consultation modality, and consultant, as presented pictorially in Table 1 below:

Table 1: Subject Assignments in Final Factorial Design

                      Telephone Consultation       On-Site Consultation
                      Private       Group          Private       Group
Consultant No. 1      n = 3         n = 2*         n = 3         n = 3*
Consultant No. 2      n = 3         n = 4          n = 3         n = 3*
Consultant No. 3      n = 3         n = 3          n = 3         n = 4

* Indicates previous attrition of one subject.

Sample

The final sample of the present study consisted of thirty-seven (N = 37) substance abuse (alcoholism and/or drug abuse) programs in the State of Michigan (as depicted in Table 1) which had just sent a representative to a three-day evaluation skills workshop. Characteristics of these organizations varied. The full-time staff size ranged from two to eighty; program budgets varied from fifteen thousand to a million dollars; the academic background of the director ranged from a G.E.D. to Ph.D. candidacy; and twenty-two of the thirty-seven organizations had evaluation staff prior to the consultations. However, this final sample was obtained after a series of preliminary recruitment efforts were undertaken. These steps are outlined below.

Initial Recruiting. All 420 licensed Michigan substance abuse agencies were contacted to determine their general interest in attending an evaluation skills workshop (See Appendix A). Following this initial contact, all programs were notified of the workshop dates (Appendices B and C) and were informed that all workshop participants must be program directors or administrators with some form of supervisory role. All potential applicants who indicated that they had a Ph.D. were rejected as inappropriately overeducated for the workshop. The first sixty non-Ph.D. applicants were accepted, with the expectation that approximately 25% would drop out prior to the post-workshop consultation. An attempt was made to allow no more than one workshop participant from a particular program. Forty-two individuals, representing forty programs, actually attended the evaluation workshop.

Subject Assignment and Attrition. At the conclusion of the workshop, participants were notified that they would be randomly assigned to different treatment groups as per the design described in Table 1. Because of the limited availability of the consultants,
In order to control for this timing effect, the scheduling of consultations was de- veloped such that each consultation cell would contain one early, one medium early, one medium late, and one late consul- tation. After random assignment of subject organization to treatment cells was completed and initial consultations were scheduled, two subject programs decided that they did not wish to be involved (one in the Telephone-Group condition and one in the On-Site-Group condition). One other subject program in the On-Site-Group condition closed for lack of funding prior to the initial consultation. The above cir- cumstances reduced the total sample of organization to the thirty-seven pictured in Table l on page 22. The Innovation The techniques and concepts which were advocated in the workshop and consultations, constituted a short course in pro- gram evaluation methodology. A number of works were consulted to develop the curriculum ific1uding Fairweather (1967), Rossi and Williams (1972), Wholey, gt gt; (1970), and Weiss (1972). In the workshop, an abridged and simplified version of this methodology was presented in sequential components. A lecture format was generally used, supplemented by small group exercises. Some of the major issues covered in the workshop included: 25 1. Setting program objectives; 2. Comparison of types of evaluation; 3. Measurement; 4. Pre-post designs; 5. Matched group designs; 6. Experimental designs; 7. Chi-square and T-test statistics; 8. Logistics of a program evaluation system. A complete schedule of workshop activities is presented in Appendix D. The thrust of the workshop was to encourage partici— pants to utilize more methodologically sophisticated, evalu- ation designs. There was a strong emphasis on eliminating threats to validity by employing true experimental designs. In addition to the lecture and group activities, par- ticipants were given a 120 page manual which included text, bibliographies, exercise materials, and statistical tables. gtgtprogram evaluation concgpts and practices presented at this workshop, then, rgpresented the innovation to be diffused. The Consultations In each case, the first consultation was performed by a graduate student in psychology. All further consultations were performed by one of three Ph.D. consultants who were randomly assigned according to the previously illustrated design. These same Ph.D. consultants, and the graduate student, acted as instructors in the workshop and therefore all consultee organizations had one staff member who was both 26 involved in the consultations and was previously familiar with the consultant. The consultations for each subject organization were scheduled as follows: 1) 10—15 days after workshop--phone contact was made by the graduate student to schedule first consultation; 2) 2-8 weeks after workshop--first consultation by graduate student; 3) 3 weeks after first consultation—-first con- sultation by Ph.D. consultant; 4) 5 weeks after first consultation-—second consultation by Ph.D.; 5) 7 weeks after first consultation——third con- sultation by Ph.D. Because of scheduling problems of the consultees the above plan was not strictly followed, but was closely ap- proximated. Within the constraints imposed by consultee availability, consultations were generally carried out with- in 3 to 5 days of the scheduled date. The initial phone contact for scheduling consultations consisted of the following: 1) An appointment was made with the workshop attender for the initial consultation. 
2) Depending upon condition, the workshop attender was requested to consult either in Private or in a Group context with other staff.

3) The appointment was scheduled for either 9:00-11:30 A.M. or 1:30-4:00 P.M.

If the program director requested the rationale for Group participation (or lack of it) in the consultation, he/she was told the following:

"Because of the anxiety often produced by evaluation issues, it is not known whether or not it is more effective to involve other staff members in initial broad-ranging discussions with outside consultants. Therefore, this evaluation skills project will attempt to involve program staff in certain consultations, and consult privately with program directors of other programs, and study the relative differences in effectiveness of the two techniques."

Telephonic consultations to those programs which were assigned to the Group Consultation condition consisted of one or more calls until at least three staff members, including the workshop participant, could be reached for each consultation. One organization in the Telephone-Group consultation condition did not fully satisfy these demands in that three persons were not uniformly contacted. For all other programs, however, the conditions of the cell were followed. Whenever possible, participants in the Telephone-Group consultations were encouraged to listen in on extension phone lines so that the group interaction could be enhanced. Telephonic consultations were limited to two and a half hours (the same length as the on-site consultations), but usually did not extend past one hour.

The initial consultation consisted of the following:

1) The consultant reviewed all program services and asked the program staff or director which services they would best like to evaluate;
2) The consultant then typically worked with the consultee(s) to generate alternative research designs to meet the identified evaluation needs;
3) The consultant explored whether computer facilities or other evaluation resources had been investigated;
4) The consultant suggested arrangement of meetings with relevant personnel needed to carry out evaluation planning tasks;
5) Development of questionnaires and other data gathering devices was discussed;
6) The consultant asked what progress (calls, meetings, task completions) had been made toward the other sequential tasks outlined in the workshop.

Following the first consultation, the Ph.D. consultant assigned to the particular program met with the graduate student to discuss his experiences with the consultee. The two discussed possible evaluation projects, along with the problems the graduate student had perceived to exist with each outlined project.

The second, third, and fourth consultations generally consisted of a reiteration and recycling of tasks 2-6 described above. The mandate given consultants was a fairly open-ended agreement: to give whatever assistance was necessary to foster use of evaluation techniques. Throughout the consultation period consultees had the option of making unlimited self-initiated calls to the consultant. In fact, no more than a handful of such calls were made.
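To make the assignment and scheduling procedure described in the preceding sections concrete, the following is a minimal sketch, not the study's actual procedure or code, of how programs could be randomly distributed across the twelve cells of the 3 (consultant) x 2 (site) x 2 (staff involvement) design while staggering start dates within each cell (one early, one medium-early, one medium-late, and one late consultation per cell). All function and variable names, and the slot scheme itself, are illustrative assumptions.

    import random

    # Hypothetical reconstruction of the random assignment and staggered
    # scheduling described above; the cell structure follows Table 1.
    CONSULTANTS = ["Consultant 1", "Consultant 2", "Consultant 3"]
    SITES = ["Telephone", "On-Site"]
    MODES = ["Private", "Group"]
    SLOTS = ["early", "medium-early", "medium-late", "late"]

    def assign_and_schedule(programs):
        """Randomly assign programs to the 12 design cells, then give the
        k-th program landing in a given cell the k-th start slot."""
        cells = [(c, s, m) for c in CONSULTANTS for s in SITES for m in MODES]
        programs = list(programs)
        random.shuffle(programs)              # random assignment to cells
        schedule = []
        for i, program in enumerate(programs):
            cell = cells[i % len(cells)]      # fill all cells evenly
            slot = SLOTS[(i // len(cells)) % len(SLOTS)]
            schedule.append((program, *cell, slot))
        return schedule

    # Example: forty workshop programs, as in the study's initial sample.
    roster = assign_and_schedule(f"program {n}" for n in range(1, 41))

Filling the cells evenly before repeating a start slot is one simple way to guarantee the balance the text describes; the actual study presumably accomplished this by hand.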
Data Collection Schedule

Data were collected according to the following schedule:

1) Data related to interest in the proposed innovation were obtained from a pre-workshop survey instrument (See Appendix A) which was mailed to all possible workshop applicants sixteen weeks prior to the workshop;

2) Descriptive data about the programs, pre-consultation attitudinal data, and workshop attenders' assessment of the consultant's (instructor's) effectiveness were collected from forms distributed at the workshop (See Appendices C, E and F);

3) Innovation adoption data and related staff activity data were collected by phone 150 days after the initial post-workshop consultation (See Appendices G and H);

4) Data on the amount of actual staff involvement in the consultation sessions and the total amount of time spent in consultations were obtained from consultant report forms (See Appendix I);

5) Subjective ratings by the subjects of the effectiveness of the consultation, and of the inhibiting factors associated with lack of adoption of the innovation, were obtained from follow-up questionnaires mailed to all subjects 150 days after the initial consultation (See Appendix J).

Data Reduction Procedures: Descriptive, Process and Outcome Measures

Prior to analysis it was obvious that the number of variables measured in the study involved a considerable degree of redundancy. In order to enable a more coherent use of comparative techniques, such as analysis of variance or analysis of covariance, several prior data reduction steps were taken. The available data were reviewed to identify variables which could be combined into a priori rational scale scores, particularly those major factors which the literature had indicated may be relevant to innovation adoption. Several other variables were not combined into scales, but were considered discretely.

Because of the severe time limitations and the bureaucratic constraints imposed on this state-government-funded project, test-retest reliability of the instruments could not be established prior to initiation of the data collection process. Therefore, single-variable measures will be of uncertain reliability. However, the variables which could be combined to form a priori scale scores were analyzed for their scale reliability using Cronbach's alpha analysis (Mehrens and Ebel, 1967).

Following these procedures the data were organized into Descriptive, Process, and Outcome measures. Descriptive measures included a set of variables that described, in a generic sense, organizational capacity and interest in innovation. These variables were of minor conceptual interest in themselves, but were considered in the analysis for their possible confounding effects. They included interest and attitudinal measures, staff resources, and size. Process measures consist of a series of assessments of intervening variables, assumed to be influenced by the intervention and in turn to be related to eventual innovation adoption. The Outcome measure was an index of a program's adoption of the evaluation techniques. Eight Descriptive measures, eight Process measures, and one Outcome measure were identified. The results of this review, selection, and scale analysis process are described below.

Descriptive measures. There were eight Descriptive measures, identified as follows:

1. Interest (behaviorally expressed by the program) in the innovation prior to initial consultation (Prior Interest).
A measure of this factor was obtained by creating a dichotomous variable of whether or not the pre-workshop survey instrument (Appendix A) was returned;

2. Academic resources available to the program (Academic Resources). Data relevant to this particular factor consisted of the academic background of the workshop attender (Item 3 of Appendix C) and the distance to the nearest graduate school (Item 11 of Appendix C). The correlation between these two variables was found to be .57 (significant at the .001 level); therefore, these two items were converted to Z-scores and summed to form an Academic Resources scale;

3. Staff available for evaluation activities (Evaluation Resources). Data relating to this factor were obtained from the evaluation staff resource items on the workshop application form (See Items 8 and 9 of Appendix C). The correlation between these two items was found to be .59 (p < .001). Therefore, the two items were converted to Z-scores and summed to create an Evaluation Resources scale;

4. General program resources (Program Size). Data relating to this factor included the number of full-time employees of the organization, the number of employees supervised by the workshop attender, and the total budget of the program (Items 4 and 12 of Appendix E and Item 2 of Appendix C). These three variables were converted to Z-scores and tested for their scale reliability. Cronbach's alpha (Mehrens and Ebel, 1967) was computed and found to be .847. Therefore, these three items were summed and used as a Program Size scale.

An analysis of Cronbach's alpha for the Program Size scale is shown in Table 2 below:

Table 2: Cronbach's Alpha Analysis for Program Size

Variables                                       Alpha if item deleted
Staff supervised by workshop attender (SS)      .95362
Total full-time employees (FTE)                 .67573
Annual budget (AB)                              .68737

5. Workshop attender's theoretical agreement with the innovation concepts (Attitude-Concepts). Data related to this attitude were available from Items 2a through 2d of the post-workshop (pre-consultation) questionnaire (See Appendix F). These four items were converted to Z-scores and tested for their scale reliability. Cronbach's alpha was found to be .853, and these four items were combined to form an attitude-toward-innovation-concepts scale.

An analysis of Cronbach's alpha for the Attitude-Concepts scale is shown in Table 3 below:

Table 3: Cronbach's Alpha Analysis for Attitude-Concepts Scale

Attitude Variables           Alpha if item deleted
Measurable criteria (MC)     .76861
Pre-testing (PT)             .77444
Comparison group (CG)        .86le
Random assignment (RA)       .83837

6. Predicted staff cooperation with adoption of the innovation (Attitude-Staff Cooperation). Data related to this attitude were available from Items 3a to 3d of the post-workshop questionnaire (See Appendix F). These four items were converted to Z-scores and tested for their scale reliability. Cronbach's alpha was found to be .879, and these items were combined to form an Attitude-Staff Cooperation scale.

An analysis of Cronbach's alpha for the Attitude-Staff Cooperation scale is shown in Table 4 below:

Table 4: Cronbach's Alpha Analysis for Attitude-Staff Cooperation Scale

Attitude Variables (staff cooperation with ...)    Alpha if item deleted
Measurable criteria (MC)                           .83677
Pre-testing (PT)                                   .78250
Comparison groups (CG)                             .84839
Random assignments (RA)                            .89737

7. Prediction of ability to implement the innovation (Attitude-Capability).
The only measure of this variable available was obtained by using Item 1 of the post-workshop questionnaire (See Appendix F);

8. Pre-consultation rating of consultant effectiveness (Consultant Pre-Rating). All four consultants acted as instructors in the workshop and were rated on seven different scales by the workshop attenders at the conclusion of the workshop (See Items 5 through 11 of Appendix F). Items 7 and 8 were scored differently than the other scales and therefore did not correlate favorably with the other five scales. These two items were discarded from the analysis, and the remaining five ratings for each consultant were converted to Z-scores and tested for their scale reliability. Cronbach's alpha was found to be .688 for these five different ratings.

An analysis of Cronbach's alpha for the Consultant Pre-Rating scale is shown in Table 5 below:

Table 5: Cronbach's Alpha Analysis for Consultant Pre-Rating Scale

Rating Variables                         Alpha if item deleted
Patience (PAT)                           .57519
Practicality (PR)                        .60488
Organization of presentation (ORG)       .65416
Openness to consultee opinions (OCO)     .68283
Understanding of materials (UM)          .66320

In summary, the following Descriptive measures were used in subsequent analyses:

1. Prior Interest
2. Academic Resources
3. Evaluation Resources
4. Program Size
5. Attitude-Concepts
6. Attitude-Staff Cooperation
7. Attitude-Capability
8. Consultant Pre-Rating

Process Measures. There were eight Process measures, identified as follows:

1. Staff involvement in consultation (Staff Involvement). In order to develop a check on the experimental manipulation of staff involvement, the names of the staff present at consultations were recorded on consultant forms (Appendix J). These names were used to create two measures of staff involvement in the consultation:

a. Total number of different persons involved.
b. Total number present at all consultations.

The correlation of these two measures was found to be .923. Therefore, these two items were converted to Z-scores and combined to create the Staff Involvement scale.

2. Staff planning meetings to discuss adoption of the innovation (Meetings). As a measure of the frequency of staff planning meetings, items 1 Dy to 16 Dy of Appendix G were summed both before and after consultation. The pre-post difference of these sums was then used as a measure of staff planning.

3. Total staff involvement (Total Staff Involvement). A measure of the total staff involvement in the innovation planning and implementation process was obtained by combining the total number of different staff directly involved
Process Measures. There were eight Process measures, identified as follows:

1. Staff involvement in consultation (Staff Involvement). In order to develop a check on the experimental manipulation of staff involvement, the names of the staff present at consultations were recorded on consultant forms (Appendix J). These names were used to create two measures of staff involvement in the consultation:

a. Total number of different persons involved.
b. Total number present at all consultations.

The correlation of these two measures was found to be .923. Therefore, these two items were converted to Z-scores and combined to create the Staff Involvement scale.

2. Staff planning meetings to discuss adoption of the innovation (Meetings). As a measure of the frequency of staff planning meetings, items 1 Dy to 16 Dy of Appendix G were summed both before and after consultation. The pre-post difference of these sums was then used as a measure of staff planning.

3. Total staff involvement (Total Staff Involvement). A measure of the total staff involvement in the innovation planning and implementation process was obtained by combining the total number of different staff directly involved (total number of different people listed in column B of Appendix G), the total people involved in any aspect of planning (total of columns B and C of Appendix G), and the dichotomous variable of whether or not a formal research team had been established (Item 6A of Appendix G). Cronbach's alpha for these three items was found to be .547, and therefore these items were combined to form a Total Staff Involvement scale.

An analysis of Cronbach's alpha for the Total Staff Involvement scale is shown in Table 6 below:

Table 6: Cronbach's Alpha Analysis for the Total Staff Involvement Scale

    Item                                    Alpha if item deleted
    Total staff directly involved (TSD)            .1012
    Total people involved (TPI)                    .1622
    Research team established (RTE)                .6752

4. Workshop attender cooperativeness (Attender-Cooperativeness). A measure of the workshop attender's attitude toward the innovation at follow-up was obtained from Items 2a, 2f, 2h, 2j, 2l, and 2o of Appendix I. These items were then tested for their scale reliability and, on that basis, were combined and used as a measure of Attender-Cooperativeness.

An analysis of Cronbach's alpha for the Attender-Cooperativeness scale is shown in Table 7 below:

Table 7: Cronbach's Alpha Analysis for Attender-Cooperativeness Scale

    Attitude Items    Alpha if item deleted
    2a                       .81081
    2f                       .81616
    2h                       .78259
    2j                       .80512
    2l                       .77796
    2o                       .81413

5. Staff Cooperativeness. Data relating to the program staff's cooperation with the innovation was obtained from Items 2b, 2g, 2i, 2k, and 2m of Appendix I. These items were then analyzed for their scale reliability. Cronbach's alpha for these items was found to be .870. Therefore, these five items were combined and used as a measure of Staff Cooperativeness.

An analysis of Cronbach's alpha for the Staff Cooperativeness scale is shown in Table 8 below:

Table 8: Cronbach's Alpha Analysis for the Staff Cooperativeness Scale

    Attitude Items    Alpha if item deleted
    2b                       .84551
    2g                       .86359
    2i                       .83238
    2k                       .85117
    2m                       .82633

6. Value of resources to the adoption of the innovation (Attitude-Resources). Data relevant to the importance the workshop attender placed on resources for innovation adoption was obtained from Items 2c, 2d, and 2e of Appendix I. These three items were then analyzed for their scale reliability. Cronbach's alpha was found to be .711. Therefore, these three items were combined and used as an Attitude-Resources scale.

An analysis of Cronbach's alpha for the Attitude-Resources scale is shown in Table 9 below:

Table 9: Cronbach's Alpha Analysis for the Attitude-Resources Scale

    Attitude Variables          Alpha if item deleted
    Value of funds                     .39472
    Value of computers                 .68740
    Value of trained staff             .73566

7. Total time spent in consultation sessions (Time). After each consultation session the consultant recorded the time elapsed during the consultation. On the consultant reporting form (See Appendix J), the times recorded for each of the four consultation sessions were added together to represent the total consultation time.
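The Meetings measure (and the Total Tasks measure defined next) is a simple pre-post gain score: checklist items are summed within each wave and the sums are differenced. A minimal sketch, assuming the Appendix G items are stored as one numeric array per wave (the array layout is an assumption for illustration):

    import numpy as np

    def pre_post_gain(pre_items, post_items):
        """Sum the checklist items within each wave, then take the
        post-minus-pre difference, one gain score per program."""
        pre = np.asarray(pre_items, dtype=float)
        post = np.asarray(post_items, dtype=float)
        return post.sum(axis=1) - pre.sum(axis=1)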
8. Total innovation-related tasks completed since initiating consultation (Total Tasks). A measure of the total innovation-related tasks completed by the subjects since the first consultation was obtained by computing the gain in the sum of column A of Appendix G from consultation initiation to follow-up. (While these tasks were prerequisites to high outcome scores, they were not considered outcome success in themselves.)

In summary, the following Process measures were used in subsequent analyses:

1. Staff Involvement
2. Meetings
3. Total Staff Involvement
4. Attender-Cooperativeness
5. Staff Cooperativeness
6. Attitude-Resources
7. Time
8. Total Tasks

The Outcome Measure. As has been noted in the Data Collection Schedule, 150 days after the initial consultation the workshop attender was contacted by phone by the experimenter and asked to verbally report on the extent to which the program was using program evaluation techniques. This information constituted the raw data for the measure of innovation adoption. The responses of the subjects were recorded by hand by the interviewer in the narrative form in which the reports were given. Photocopies of these handwritten reports were then rated by two raters. Both raters had master's degrees in Social Work, had completed a series of graduate courses in psychometrics, and were familiar with evaluation methodology. Both raters were asked to carry out a blind scoring of the reports they were given, using the rating scheme shown in Appendix K. An inter-rater reliability coefficient of .972 was obtained from these ratings.

RESULTS

Analysis of Variance: Descriptive Measures

Prior to analysis of the Outcome measure, all eight Descriptive measures were analyzed by three-way analysis of variance to check on the effectiveness of the random assignment procedures (See Tables 10 to 17). As can be seen from these tables, there are no significant main or interaction effects for any of these Descriptive measures, with the exception of Evaluation Resources (See Table 12). To determine the potential confounding impact of this measure on the experiment, and to analyze the relationship of the other Descriptive measures to the Outcome measure, the eight Descriptive measures were correlated with the Outcome measure. These correlations are presented in Table 18 on page 52.
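The randomization check just described amounts to fitting a three-way factorial ANOVA to each Descriptive measure in turn: a significant effect would flag a failure of random assignment. A sketch of this check is shown below, assuming a pandas DataFrame with one row per program and the column names used here, which are the editor's illustrative assumptions, not the dissertation's:

    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    # Hypothetical column names for the eight Descriptive measures.
    DESCRIPTIVE = ['prior_interest', 'academic_resources', 'eval_resources',
                   'program_size', 'att_concepts', 'att_staff_coop',
                   'att_capability', 'consultant_prerating']

    def randomization_check(df: pd.DataFrame):
        """Three-way ANOVA of each Descriptive measure on the design
        factors; returns one ANOVA table per measure (cf. Tables 10-17)."""
        results = {}
        for measure in DESCRIPTIVE:
            model = ols(f'{measure} ~ C(site) * C(staff) * C(consultant)',
                        data=df).fit()
            results[measure] = anova_lm(model)
        return results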
Table 10A: Cell means for Prior Interest

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1     -.5249       1.1946          -.1448       -.1448        .2814
    Consultant #2     -.1448       -.3122           .5249        .5249        .1128
    Consultant #3      .2353       -.1448          -.1448       -.3122       -.3509
    Means             -.1448        .0784           .0784       -.0109        .0000

Table 10B: Analysis of variance of Prior Interest

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                        2.764        4       .691        .626
      Site                               .039        1       .039        .035
      Staff                              .037        1       .037        .033
      Consultant                        2.689        2      1.345       1.219
    2-Way Interactions                  5.197        5      1.039        .942
      Site X Staff                       .426        1       .426        .386
      Site X Consultant                 4.694        2      2.347       2.127
      Staff X Consultant                 .309        2       .154        .140
    3-Way Interaction                    .456        2       .228        .207
      Site X Staff X Consultant          .456        2       .228        .207
    Residual                           27.582       25      1.103
    Total                              36.000       36      1.000

Table 11A: Cell means for Academic Resources

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1      .4858       -.0898          -.3544       -.2777       -.0562
    Consultant #2     -.3879        .2890          1.4736        .6865        .3201
    Consultant #3      .2910       -.6318          -.9959        .1169       -.2725
    Means              .1296       -.3590           .1086        .1694        .0000

Table 11B: Analysis of variance of Academic Resources

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                        3.253        4       .813        .206
      Site                               .461        1       .461        .117
      Staff                              .274        1       .274        .070
      Consultant                        2.518        2      1.259        .319
    2-Way Interactions                  7.858        5      1.572        .399
      Site X Staff                       .809        1       .809        .205
      Site X Consultant                 6.312        2      3.156        .801
      Staff X Consultant                 .363        2       .181        .046
    3-Way Interaction                   3.417        2      1.709        .434
      Site X Staff X Consultant         3.417        2      1.709        .434
    Residual                           98.524       25      3.941
    Total                             113.051       36      3.140

Table 12A: Cell means for Evaluation Resources

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1      -.12         1.85             .26         -.12          .47
    Consultant #2      1.29         -.54            -.79        -2.20         -.56
    Consultant #3      -.12         2.04             .26          .77          .35
    Means               .35         1.15            -.09         -.48          .12

Table 12B: Analysis of variance of Evaluation Resources

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                       20.746        4      5.186       1.857
      Site                             12.699        1     12.699       4.548*
      Staff                              .469        1       .469        .168
      Consultant                        7.578        2      3.789       1.357
    2-Way Interactions                 18.350        5      3.670       1.314
      Site X Staff                      6.229        1      6.229       2.231
      Site X Consultant                 1.578        2       .789        .283
      Staff X Consultant               11.251        2      5.625       2.015
    3-Way Interaction                   5.631        2      2.816       1.008
      Site X Staff X Consultant         5.631        2      2.816       1.008
    Residual                           69.805       25      2.792
    Total                             114.532       36      3.181

    *p < .05

Table 13A: Cell means for Program Size

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1     -.3423        .9738          -.6116       -.0443       -.0952
    Consultant #2     -.4903       -.0436           .0126       -.7151       -.2887
    Consultant #3    -1.1380       3.7017         -1.0596        .0718        .3662
    Means             -.6569       1.4309          -.5529       -.1991        .0000

Table 13B: Analysis of variance of Program Size

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                       21.887        4      5.472        .705
      Site                              5.250        1      5.250        .676
      Staff                            13.269        1     13.269       1.710
      Consultant                        3.367        2      1.684        .217
    2-Way Interactions                 28.217        5      5.643        .727
      Site X Staff                      8.562        1      8.562       1.103
      Site X Consultant                 5.174        2      2.587        .333
      Staff X Consultant               15.599        2      7.800       1.005
    3-Way Interaction                   3.916        2      1.958        .252
      Site X Staff X Consultant         3.916        2      1.958        .252
    Residual                          194.040       25      7.762
    Total                             248.060       36      6.891
Table 14A: Cell means for Attitude-Capability

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1      .1827       -.2871          -.4437      -1.0702       -.4751
    Consultant #2     -.1305        .6525          -.4437        .1827        .1104
    Consultant #3      .1827        .4959           .1827        .1827        .2550

Table 14B: Analysis of variance of Attitude-Capability

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                        4.980        4      1.245       1.098
      Site                              1.824        1      1.824       1.608
      Staff                              .278        1       .278        .245
      Consultant                        2.877        2      1.439       1.268
    2-Way Interactions                  2.783        5       .557        .491
      Site X Staff                       .100        1       .100        .088
      Site X Consultant                  .408        2       .204        .180
      Staff X Consultant                2.156        2      1.078        .950
    3-Way Interaction                    .012        2       .006        .006
      Site X Staff X Consultant          .012        2       .006        .006
    Residual                           27.225       24      1.134
    Total                              35.000       35      1.000

Table 15A: Cell means for Attitude-Staff Cooperativeness

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1     1.9765       -.5815          -.9024      -3.7150      -1.1062
    Consultant #2     1.8063       2.7871         -1.3871        .6160       1.0373
    Consultant #3      .3447       -.5783          1.1271      -1.1030       -.1332
    Means             1.2285        .9168          -.3875      -1.3709        .0099

Table 15B: Analysis of variance of Attitude-Staff Cooperativeness

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                       59.761        4     14.940       1.160
      Site                             33.306        1     33.306       2.586
      Staff                             3.994        1      3.994        .310
      Consultant                       22.461        2     11.231        .872
    2-Way Interactions                 45.081        5      9.016        .700
      Site X Staff                       .108        1       .108        .008
      Site X Consultant                16.641        2      8.321        .646
      Staff X Consultant               26.285        2     13.142       1.020
    3-Way Interaction                   2.035        2      1.018        .079
      Site X Staff X Consultant         2.035        2      1.018        .079
    Residual                          296.218       23     12.879
    Total                             403.096       34     11.856

Table 16A: Cell means for Attitude-Concepts

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1     3.9077        .6153          -.1194      -3.4009       -.4777
    Consultant #2      .0257        .4541         -1.2069       -.7715        .3060
    Consultant #3      .9424        .0238          1.4665       -.0615        .0615
    Means             1.3400        .3465          -.1194      -1.2763        .0000

Table 16B: Analysis of variance of Attitude-Concepts

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                       34.104        4      8.526        .670
      Site                             21.343        1     21.343       1.677
      Staff                            10.460        1     10.460        .822
      Consultant                        2.301        2      1.151        .090
    2-Way Interactions                 41.040        5      8.208        .645
      Site X Staff                       .039        1       .039        .003
      Site X Consultant                31.698        2     15.849       1.245
      Staff X Consultant                9.019        2      4.509        .354
    3-Way Interaction                   8.113        2      4.057        .319
      Site X Staff X Consultant         8.113        2      4.057        .319
    Residual                          305.428       24     12.726
    Total                             388.686       35     11.105

Table 17A: Cell means for Consultant Pre-Rating

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1     1.9874       7.5781           .9288        .0244       2.3310
    Consultant #2    -5.1347      -1.5781          4.3937       -.7611       -.8322
    Consultant #3     1.7709       -.5595         -5.4301        .0799       -.7888
    Means            -1.1268        .7870           .5969       -.1890        .0449

Table 17B: Analysis of variance of Consultant Pre-Rating

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Main Effects                       67.677        4     16.919        .579
      Site                               .282        1       .282        .010
      Staff                             2.295        1      2.295        .079
      Consultant                       65.100        2     32.550       1.114
    2-Way Interactions                163.664        5     32.733       1.120
      Site X Staff                     15.510        1     15.510        .531
      Site X Consultant               132.406        2     66.203       2.266
      Staff X Consultant               13.028        2      6.514        .223
    3-Way Interaction                 106.745        2     53.372       1.827
      Site X Staff X Consultant       106.745        2     53.372       1.827
    Residual                          613.491       21     29.214
    Total                             951.577       32     29.737
Table 18: Correlations of Descriptive Measures with the Outcome Measure

    Descriptive Variables                  Correlation with Outcome Measure
    1. Prior Interest                                 .2734*
    2. Academic Resources                             .1998
    3. Evaluation Resources                          -.0881
    4. Program Size                                   .1493
    5. Attitude-Capability                           -.1248
    6. Attitude-Staff Cooperativeness                -.0258
    7. Attitude-Concepts                             -.0438
    8. Consultant Pre-Rating                          .3535**

    *p < .10, one-tailed
    **p < .05, one-tailed

As can be seen from Table 18, the Evaluation Resources measure does not correlate significantly with the Outcome measure. Therefore, the unequal scores on this variable across conditions should not affect the experiment.

The results above do indicate, however, a strong trend (p < .10) toward a relationship between the Prior Interest measure and the Outcome measure, and a significant correlation (p < .05) between the Consultant Pre-Rating measure and the Outcome measure. In spite of the fact that neither of these measures achieved statistical significance in the analysis of variance described earlier, it was decided that a conservative approach to subsequent analyses should consider their possible confounding effects. It was determined that in all analyses of variance the effects of Prior Interest and Consultant Pre-Rating would be considered as possible covariates.

Analysis of Variance: Process Measures

The Process measures were analyzed in analyses of covariance, with Prior Interest and Consultant Pre-Rating acting as covariates. Since parallel analyses of the Outcome measure (See pp. 64-67) had determined no significant differences across consultants, and since Consultant Pre-Rating was being used as a covariate in the analyses of the Process measures, it was determined that analytical redundancy would be eliminated by collapsing across the consultant cells. The results of these analyses are shown in Tables 19 through 26 on pages 55-62.

Inspection of the tables indicates that the following results were obtained on the Process measures:

1) As would be expected from the experimental manipulation, the amount of Staff Involvement in consultations was significantly greater in the Group Consultation condition (See Table 19).

2) For frequency of staff Meetings, a significant interaction was found between the Site and Staff Involvement conditions, such that the Telephone/Group Consultation and On-Site/Private Consultation cells appeared to bring about significantly more staff planning meetings (See Table 20).

3) Subjects in the On-Site conditions felt less impaired by resource shortages than did subjects in the Telephone conditions (See Table 24).

4) As would be expected from the conditions, total Time of consultations was significantly greater in the On-Site conditions (See Table 25).

5) Total Tasks were significantly greater in the Group Consultation condition, and the covariate Prior Interest significantly correlated with Total Tasks completed since the first consultation (See Table 26).

No other main, interaction, or covariate effects were found to be statistically significant. Having identified the empirical relationships between the Process measures and the experimental conditions, the Process measures were then correlated with the Outcome measure. The results are shown in Table 27 on page 63.
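The correlations in Tables 18 and 27 are reported with one-tailed p-values, reflecting the directional hypotheses of the study. A minimal sketch of such a test is shown below; the `alternative` keyword assumes a reasonably recent SciPy release, and the direction should be set to 'less' for a predicted negative correlation:

    from scipy.stats import pearsonr

    def one_tailed_r(x, y, direction='greater'):
        """Pearson r with a one-tailed p-value, in the style of
        Tables 18 and 27 (direction set by the hypothesis)."""
        return pearsonr(x, y, alternative=direction)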
Table 19A: Cell means for Staff Involvement

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation        -1.7244             .1832              -.7565
    Group Consultation           1.3376            1.6705              1.5128
    Site Means                   -.1934             .9657               .3862

Table 19B: Analysis of variance and covariance for Staff Involvement

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                          2.199        2      1.100        .623
      Prior Interest                    1.056        1      1.056        .598
      Consultant Pre-Rating              .402        1       .402        .288
    Main Effects                       75.226        2     37.613      21.309
      Site                               .876        1       .876        .496
      Staff                            73.810        1     73.810      41.815*
    2-Way Interaction                    .008        1       .008        .004
      Site X Staff                       .008        1       .008        .004
    Residual                           47.659       27      1.765
    Total                             125.092       32      3.909

    *p < .005

Table 20A: Cell means for Meetings

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation         3.2500            17.4444             10.3472
    Group Consultation          17.5000             5.5000             11.1842
    Site Means                  10.3750            11.1579             11.1711

Table 20B: Analysis of variance and covariance for Meetings

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                           .793        2       .396       1.295
      Prior Interest                     .108        1       .108        .354
      Consultant Pre-Rating              .448        1       .448       1.463
    Main Effects                         .065        2       .032        .106
      Site                               .004        1       .004        .012
      Staff                              .062        1       .062        .202
    2-Way Interaction                   4.963        1      4.963      16.221
      Site X Staff                      4.963        1      4.963      16.221*
    Residual                            7.649       25       .306
    Total                              13.469       30       .449

    *p < .001

Table 21A: Cell means for Total Staff Involvement

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation         4.241              6.722               5.482
    Group Consultation          10.916              6.598               8.757
    Site Means                   7.579              6.660               6.971

Table 21B: Analysis of variance and covariance for Total Staff Involvement

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                         11.412        2      5.706       1.070
      Prior Interest                    2.220        1      2.220        .416
      Consultant Pre-Rating            11.262        1     11.262       2.112
    Main Effects                       10.455        2      5.228        .980
      Site                              6.614        1      6.614       1.240
      Staff                             3.568        1      3.568        .699
    2-Way Interaction                   1.765        1      1.765        .331
      Site X Staff                      1.765        1      1.765        .331
    Residual                          133.301       25      5.332
    Total                             156.933       30      5.231

Table 22A: Cell means for Attender-Cooperativeness

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation         -.5297             .8429               .1566
    Group Consultation            .3549            -.7072              -.2041
    Site Means                   -.0874             .0273              -.0286

Table 22B: Analysis of variance and covariance for Attender-Cooperativeness

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                         66.232        2     33.116       1.628
      Prior Interest                   10.629        1     10.629        .523
      Consultant Pre-Rating            33.572        1     33.572       1.651
    Main Effects                        5.790        2      2.895        .142
      Site                              1.007        1      1.007        .050
      Staff                             4.558        1      4.558        .224
    2-Way Interaction                  17.015        1     17.015        .837
      Site X Staff                     17.015        1     17.015        .837
    Residual                          508.446       25     20.338
    Total                             597.483       30     19.916

Table 23A: Cell means for Staff Cooperativeness

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation          .2930            -.3047              -.0059
    Group Consultation           1.6422           -1.6316              -.0808
    Site Means                    .9676           -1.0030              -.0443

Table 23B: Analysis of variance and covariance for Staff Cooperativeness

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                         36.800        2     18.400        .820
      Prior Interest                   17.023        1     17.023        .759
      Consultant Pre-Rating             6.351        1      6.351        .283
    Main Effects                       24.054        2     12.027        .536
      Site                             23.549        1     23.549       1.049
      Staff                             1.091        1      1.091        .049
    2-Way Interaction                  17.180        1     17.180        .766
      Site X Staff                     17.180        1     17.180        .766
    Residual                          516.136       23     22.441
    Total                             594.169       28     21.220
Table 24A: Cell means for Attitude-Resources by Site and Staff Involvement

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation        -1.7940             .3851              -.7045
    Group Consultation            .0855             .9862               .5596
    Site Means                   -.8534             .6857              -.0554

Table 24B: Analysis of variance and covariance for Attitude-Resources

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                          3.411        2      1.706        .308
      Prior Interest                    2.172        1      2.172        .392
      Consultant Pre-Rating              .261        1       .261        .047
    Main Effects                       31.646        2     15.823       2.856
      Site                             28.138        1     28.138       5.078*
      Staff                             4.543        1      4.543        .820
    2-Way Interaction                   8.206        1      8.206       1.481
      Site X Staff                      8.206        1      8.206       1.481
    Residual                          138.522       25      5.541
    Total                             181.785       30      6.059

    *p < .10

Table 25A: Cell means for Time of Consultations (in minutes)

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation         150.3              436.1               293.2
    Group Consultation           196.6              445.5               327.6
    Site Means                   173.5              441.0               310.9

Table 25B: Analysis of variance and covariance for Total Time of Consultations

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                          1.558        2       .779       2.226
      Prior Interest                     .686        1       .686       1.962
      Consultant Pre-Rating              .334        1       .334        .955
    Main Effects                       21.909        2     10.954      31.304
      Site                             21.900        1     21.900      62.584*
      Staff                              .002        1       .002        .006
    2-Way Interaction                    .236        1       .236        .674
      Site X Staff                       .236        1       .236        .674
    Residual                            9.448       27       .350
    Total                              33.151       32      1.036

    *p < .001

Table 26A: Cell means for Total Tasks adjusted for the Prior Interest covariate

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation          .800              1.420               1.110
    Group Consultation           1.555              1.594               1.574
    Site Means                   1.175              1.510               1.348

Table 26B: Analysis of variance and covariance for Total Tasks

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                          3.183        2      1.591       2.337
      Prior Interest                    3.060        1      3.060       4.495*
      Consultant Pre-Rating              .087        1       .087        .128
    Main Effects                        5.050        2      2.525       3.709
      Site                               .063        1       .063        .093
      Staff                             4.949        1      4.949       7.270**
    2-Way Interaction                   1.327        1      1.327       1.950
      Site X Staff                      1.327        1      1.327       1.950
    Residual                           18.381       27       .681
    Total                              27.940       32       .873

    *p < .10
    **p < .05

Table 27: Correlations between innovation Outcome and Process measures

    Measure                        Correlation with the Innovation Outcome Measure
    1. Staff Involvement                         .2141
    2. Meetings                                  .3843**
    3. Total Staff Involvement                   .2094
    4. Attender-Cooperativeness                 -.0094
    5. Staff Cooperativeness                     .0010
    6. Attitude-Resources                        .2789*
    7. Time                                      .4149***
    8. Total Tasks                               .7520****

    *p < .10, one-tailed
    **p < .01, one-tailed
    ***p < .005, one-tailed
    ****p < .001, one-tailed

A review of Table 27, juxtaposed with Tables 19 through 26, sheds considerable light on the relationship between these intervening variables and Outcome. As discussed earlier, meeting frequency (Meetings) and consultation intensity (Time) appear to be related to both the manipulation and the Outcome. Not surprisingly, the accomplishment of instrumental tasks (Total Tasks) appears to be an intermediate step between the intervention and the change in Outcome. Finally, the On-Site Consultations appear to make staff more confident of their internal resources (Attitude-Resources), which in turn covaries with innovation adoption.

Analysis of Variance: Outcome Measure

Because of unequal cell sizes, the Outcome measure was analyzed using a three-way hierarchical analysis of variance procedure whereby the highest-order interaction effects are interpreted first. The results of this analysis of variance are displayed in Table 28 on page 65.

As is clear, there are no significant three-way interaction effects. Further, none of the two-way interaction effects are significant, with the exception of a significant interaction between Site and Staff Involvement.
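The hierarchical procedure just described corresponds to fitting the factorial model with sequential sums of squares and reading the table from the highest-order term down. A minimal sketch, assuming the same hypothetical DataFrame and column names as in the earlier randomization-check example:

    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    def hierarchical_outcome_anova(df):
        """Three-way ANOVA with sequential (Type I) sums of squares for the
        unequal-cell design; interpret the three-way interaction first,
        then the two-way terms, then the main effects (cf. Table 28B)."""
        model = ols('outcome ~ C(site) * C(staff) * C(consultant)',
                    data=df).fit()
        return anova_lm(model, typ=1)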
Normally, this finding would prohibit further analysis of the main effects because of the confounding nature of a significant interaction effect in unequal-cell-size experiments. However, if the cell size matrix of the experimental design is reduced to the matrix relevant to this significant interaction (i.e., Site by Staff, shown in Table 29 on page 65), it can be seen that this matrix is orthogonal and therefore will not confound the main effect analysis.

In reviewing the main effects, then, it can be seen that significant results exist for both the Site and Staff Involvement conditions at the .05 level, with no significant difference across the Consultant conditions. As can be seen from Table 28, the significant differences found indicate that the On-Site consultations show significantly more change than do Telephone consultations, and that Group consultations are more effective than Private consultations.

Table 28A: Cell means for the Innovation Outcome Measure

                     Telephone Consultations      On-Site Consultations
                     Private       Group          Private       Group         Means
    Consultant #1      1.83         4.00            2.83         2.67         2.72
    Consultant #2      1.17         2.50            2.83         3.58         2.52
    Consultant #3      1.00         2.67            3.00         2.38         2.57
    Means              1.33         2.89            2.89         2.81         2.61

Table 28B: Analysis of variance of the Outcome Measure

    Source of Variation                      Sum of Squares   DF   Mean Squares     F
    Main Effects                                  8.870        4      2.218       3.022
      Site (telephone versus on-site)             3.916        1      3.916       5.336*
      Staff                                       3.715        1      3.715       5.063*
      Consultant                                  1.240        2       .620        .845
    2-Way Interactions                            7.605        5      1.521       2.073
      Site X Staff                                5.017        1      5.017       6.837*
      Site X Consultant                           2.686        2      1.343       1.830
      Staff X Consultant                           .376        2       .188        .257
    3-Way Interaction                             1.182        2       .591        .805
      Site X Staff X Consultant                   1.182        2       .591        .805
    Residual                                     18.343       25       .734
    Total                                        36.000       36      1.000

    *p < .05

Table 29: Collapsed table of cell sizes for Site versus Staff conditions

                              Consultation        Consultation
                              by Telephone        On-Site
    Private Consultation         n = 9               n = 9
    Group Consultation           n = 9               n = 10

As described previously, in the analysis of the Descriptive measures, an argument was made to consider Prior Interest and Consultant Pre-Rating as possible covariates. In the analysis of the Process measures, these covariates were used together for an economical treatment of these intervening variables. However, in the analyses of the Outcome measure, which is the dependent variable in the study, it was felt that a more fine-grained analysis would be appropriate. Therefore, separate analyses of covariance were performed using these two covariates.

The analysis of covariance with Prior Interest as a covariate is reported in Table 30 on page 68. As can be seen, the results are essentially equivalent to those of the simple analysis of variance.

Prior to initiating an analysis of covariance with Consultant Pre-Rating as a covariate, it should be noted that in the original analysis no differences had been noted across consultants. Therefore, in order to avoid a possible "over-analysis" of the consultant effect, the consultant conditions were collapsed to yield only Staff and Site cells, as was done in the previous analysis of the Process measures.

Inclusion of Consultant Pre-Rating as a covariate did produce a meaningful impact on the analysis of variance. When considering the data in Table 31, the following changes occur:

1. The significance of the Site condition is slightly reduced from p < .05 to p < .10.
2. The significance of Staff Involvement is increased from p < .05 to p < .005.
3. The significant interaction effect vanishes.

As was previously shown, the Consultant Pre-Ratings were not significantly different across conditions. However, it is clear that the effect of the lower consultant ratings in the Telephone/Private and On-Site/Group Consultation conditions could have brought about the interaction effect demonstrated in the initial analysis of variance.

Clearly, as the analyses of variance and covariance indicate, there are main effects of consultation Site (On-Site more effective than Telephone consultation) and Staff Involvement (Group consultation more effective than Private consultation). There were no significant differences between consultants and no significant interaction effects.

Table 30A: Cell means for the Outcome Measure adjusted for the Prior Interest covariate

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation         -.9832             .3250              -.3291
    Group Consultation            .3250             .2902               .3067
    Site Means                   -.3291             .3067               .0000

Table 30B: Analysis of variance for the Outcome Measure using Prior Interest as a covariate

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                          2.691        1      2.691       3.557
      Prior Interest                    2.691        1      2.691       3.557*
    Main Effects                        7.774        4      1.944       2.569
      Site of Consultation              3.710        1      3.710       4.904**
      Staff Involvement                 3.527        1      3.527       4.662**
      Consultant                         .538        2       .269        .355
    2-Way Interactions                  6.329        5      1.266       1.673
      Site X Staff                      4.639        1      4.639       6.132**
      Site X Consultant                 1.702        2       .851       1.125
      Staff X Consultant                 .411        2       .206        .272
    3-Way Interaction                   1.049        2       .525        .693
      Site X Staff X Consultant         1.049        2       .525        .693
    Residual                           18.156       24       .757
    Total                              36.000       36      1.000

    *p < .10
    **p < .05

Table 31A: Cell means of the Outcome Measure adjusted for the Consultant Pre-Rating covariate (1)

                              Telephone          On-Site          Staff Involvement
                              Consultations      Consultations          Means
    Private Consultation          ...                .305               -.271
    Group Consultation            ...                .312                .271
    Site Means                    ...                .307                .000

Table 31B: Analysis of variance for the Omnibus Outcome Measure using Consultant Pre-Rating as a covariate

    Source of Variation            Sum of Squares   DF   Mean Squares     F
    Covariates                          2.321        1      2.321       4.137
      Consultant Pre-Rating             2.321        1      2.321       4.137**
    Main Effects                        5.484        2      2.742       4.886
      Site of Consultation              2.153        1      2.153       3.837*
      Staff Involvement                 5.416        1      5.416       9.652***
    2-Way Interaction                   1.255        1      1.255       2.237
      Site X Staff                      1.255        1      1.255       2.237
    Residual                           15.712       28       .561
    Total                              26.020       32       .813

    *p < .10
    **p < .05
    ***p < .005

(1) Adjustments were completed using regression analysis, whereby the normalized Outcome measure Z-scores were adjusted by subtracting the product of the Consultant Pre-Rating Z-score and the correlation between Outcome and the Consultant Pre-Rating.
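The adjustment described in the footnote to Table 31 is a simple regression-based residualization. A minimal sketch of that computation, with an illustrative function name:

    import numpy as np

    def adjust_for_covariate(outcome_z, covariate_z):
        """Per the footnote to Table 31: subtract from each normalized
        Outcome score the product of the covariate Z-score and the
        Outcome-covariate correlation."""
        outcome_z = np.asarray(outcome_z, dtype=float)
        covariate_z = np.asarray(covariate_z, dtype=float)
        r = np.corrcoef(outcome_z, covariate_z)[0, 1]
        return outcome_z - r * covariate_z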
DISCUSSION

Group Consultation versus Private Consultation

The impact of the Staff Involvement manipulation can be demonstrated by its effect on the Process measures, on the Outcome measure, or on both. The Process measures will be addressed first.

Process Measures and Staff Involvement. Those Process measures which were significantly affected by the Staff Involvement manipulation include the following:

1) Staff Involvement in consultation
2) Total innovation-related tasks

As per the original design, the number of staff participating in the consultations was intended, by the experimental manipulation, to be higher in the Group Consultation condition. This result therefore acts more as an affirmation of the consistency of the experimental manipulation than as a finding to be further interpreted.

The total innovation-related tasks, however, were found to be both strongly correlated with Outcome (r = .7520, p < .001) and significantly higher in the Group Consultation condition. These findings appear to be a strong indicator of the superior effectiveness of Group Consultation in increasing innovation adoption.

The number of staff meetings was found not to be significantly affected by the Group Consultation condition itself, but rather reflected an interaction effect produced by the combination of On-Site/Private Consultation or Telephone/Group Consultation. An interpretation of these results is not readily apparent. However, one possible explanation may be that the On-Site Group Consultations fulfilled, in themselves, a portion of the need for staff planning meetings. This would account for the fact that the On-Site Group Consultation subject organizations took part in significantly fewer (non-consultation) staff planning meetings than did the On-Site Private or Telephone Group Consultation subject organizations.

Should such meetings be viewed as a causal factor in innovation adoption, future research may wish to study the effects of randomly assigning consultees to consultants who advocate, design, or somehow reinforce varying amounts of non-consultation staff meetings.

Assessing these planning activity results together with the previously cited significant effect of Group Consultation on Outcome, it appears that the Staff Involvement manipulation may have brought about more innovative decisions but did not significantly affect the quantity of group participation in the innovation implementation. Such results seem to lead to the conclusion that group input into the decision-making process is capable of affecting Outcome independently of further group participation during implementation.

The measures that were not significantly affected by the Group Consultation condition include the following:

1) All three attitudinal measures;
2) Time in consultation; and
3) Total Staff Involvement.

Outcome Measure and Staff Involvement. The most important finding of the present research appears to be that the hypothesized effect of Group Consultation on the Outcome measure, previously implied by the research of Fairweather, Sanders and Tornatzky (1974), was supported in a controlled experimental study. The data analyses which appear to confirm this original hypothesis are the following:

1) Significantly higher Outcome measure scores in the Group Consultation condition when the data were analyzed by:
   a) Analysis of variance (p < .05)
   b) Analysis of variance with the covariate Prior Interest (p < .05)
   c) Analysis of variance with the covariate Consultant Pre-Rating (p < .005)

2) The findings previously cited which indicate that total innovation-related tasks were both highly correlated with Outcome and significantly higher in the Group Consultation condition.

The data which tend not to support the superior effectiveness of Group Consultation come from the correlations between the two different measures of staff involvement: while both the total amount of staff involvement in the consultations and the total staff involvement in all activities correlated positively with innovation adoption (r = .2141 and r = .2094, respectively), neither correlation was significant (p < .102 and p < .114).
However, these latter results may indicate that the number of staff involved may not be as important as whether or not any staff are involved. Even one additional staff member may bring about the interpersonal contacts and commitments to action that a lone consultee may avoid. The present findings offer direction for more subtle research on the finer distinctions between different forms of staff involvement in future consultation experiments. Such research may include the manipulation of the amount, nature, job functions, task assignments, or group planning activities of consultee organizations.

On-Site versus Telephone Consultation

Process Measures and Site. Those Process measures which appear to have been significantly affected by the Site manipulation include:

1) Attitude toward the inhibiting effect of resource shortages; and
2) Total Time of consultation.

Both of these measures were also found to be significantly correlated with the Outcome measure and thereby may provide some insight into the effectiveness of On-Site Consultation in producing stronger Outcome.

Again, as cited earlier, staff planning meetings were affected by a potential interaction effect between the Site and Staff Involvement conditions. The low number of staff planning meetings produced by the Telephone/Private Consultation condition would be consistent with the other results cited in this research. However, the infrequent meetings produced by the On-Site Group condition (the strongest condition on most other measures) can only be explained by either a reduced need for such meetings in this condition or a generally poor quality of the data.

Measures not demonstrating significant differences across these conditions include:

1) Cooperativeness (Staff or Attender)
2) Staff involvement (during or after consultation)
3) Total innovation-related Tasks

Outcome Measure and Site. The second major experimental finding of this research is that On-Site consultations showed a strong trend toward being more effective (p < .10) than Telephone consultations in the analysis of variance with Consultant Pre-Rating as a covariate, and were significantly (p < .05) more effective when either Prior Interest or no variable was used as a covariate.

Such findings tend to support the research of Conrath (1975), Antonioni (1973), and Christie (1975), and tend to confirm the second major outcome hypothesis of this experiment.

Additional data which tend to support this finding come from the analysis of variance of the Attitude-Resources measure, which showed that subjects receiving On-Site Consultations felt significantly less impaired by resource shortages than did Telephone Consultation subjects. Since the Attitude-Resources measure also was strongly correlated (r = .2789, p < .10) with the innovation Outcome measure, the On-Site condition effect on Outcome tends to be further supported.

The major non-supportive finding in this area is that total innovation-related tasks, while greater in the On-Site condition, were not significantly so at any level.

There are at least two explanations of the positive On-Site results:

1) The very fact that consultations were on the program site required, in many cases, that the consultant come in contact with other staff, if only to say hello and be recognized. Such recognition could later bring about staff interaction with the workshop attender and thereby initiate some of the staff involvement in the innovation adoption process, which has already been shown to be an effective stimulant to adoption.
2) The analysis of variance of total Time of the consultation sessions found that On-Site consultations were significantly longer than Telephone consultations. Further, the correlation of Time with Outcome was found to be very significant (r = .4149, p < .005). Therefore, it is possible that the On-Site consultation was more effective simply because it allowed more time (comfortable time, without a phone against one's ear) for consultation activities. For this reason, more limited On-Site consultations may be no more effective than Telephone consultation of equal length.

There appears to be no way of determining at this time whether the On-Site consultations were more effective because of the face-to-face nature of the consultations, the contact with other staff, the additional information available to the consultant at the program site, or the additional consultation time available.

Cost-Effectiveness

A further question which must be addressed prior to the implementation of any of the present findings is the cost involved in the various conditions. The fact that Telephone/Group consultations were only slightly less effective than On-Site consultations raises a great many questions as to the cost-effectiveness of On-Site versus Telephone/Group consultation. Future research in this area may include manipulating On-Site and Telephone consultations at different stages in a series of consultations. Such experiments could perhaps establish the usefulness of Telephone versus On-Site consultation at the various stages of the consultation sequence. Further research could be conducted using more comfortable communication techniques, such as a phone system coupled to closed-circuit television.

Consultant Effect

As expected from previous research by Fairweather, Sanders and Tornatzky (1974), there was no significant difference in the innovation Outcome measures across the three consultants. However, the Consultant Pre-Rating by a staff member of the subject program was a significant predictor of innovation outcome. This conclusion was supported by the direct correlation with the innovation Outcome measure (r = .3535, p < .05) and by the analysis of variance of the Outcome measure with Consultant Pre-Rating used as a covariate (p < .05).

These results are particularly interesting in light of the fact that the present study did not produce significant differences between consultants in the analysis of variance of either the Outcome measure or the Consultant Pre-Ratings. Thus, it can be theorized that, while the skill levels of the three consultants were relatively the same, there were certain measurable factors by which the consultee could assess the credibility or "compatibility" of the consultant with his or her organizational or personal needs. This result could be interpreted in at least two ways:

1) Consultees are able to reliably assess the skills of the consultant pertaining to their organizational situation in a relatively short, limited interaction.
This interpretation would support Zagona and Haiter's (1965) theory that consultant credibility, as assessed by consultees, will significantly correlate with Outcome if the consultant is observed in a situation quite similar to the actual consultation format (i.e., the workshop training setting);

2) Initial interpersonal impressions or personality assessments by the trainees determine an autistic relationship (Thibaut and Kelley, 1959), which pre-determines to some extent the future outcome of consultation sessions, regardless of consultant behavior during the sessions.

Unfortunately, since none of the process or intervening variables correlated significantly with the Consultant Pre-Rating, there is little empirical data from the present study which can provide clear direction for further study. Future research could investigate rival hypotheses such as those above by establishing an experimental setting whereby different degrees of consultee-consultant congruity could be compared as they affect innovation adoption.

Other research could focus on defining more cost-effective methods of obtaining Consultant Pre-Ratings. The method of Consultant Pre-Rating used in this study was very expensive and time-consuming. Additional research may wish to delve into the possibility of assessing consultant credibility with less expensive techniques. These may include ratings by consultees after:

1. Shorter periods of face-to-face interaction.
2. Some form of videotape review of many possible consultants who are later randomly assigned to the consultees.
3. Different forms of written, pictorial, and/or audio cassette recordings of the consultant in the act of consultation.

Descriptive Measures

Aside from Consultant Pre-Rating, described above, there was little relationship between the pre-consultation measures of program resources and attitudes and the innovation Outcome measure. One follow-up measure of the value of resources was found to correlate strongly (p < .10) with the innovation Outcome measure. However, because this measure is a subjective assessment of resources, it is very difficult to say whether the results are due to a real resource factor or to the effects of cognitive dissonance (Festinger, 1964). Such a dissonance effect could cause a program to assess its resources more favorably after having accomplished an innovation than if the innovation could not be accomplished, regardless of the actual level of resources. It is obvious that more valid and sensitive measures of program resources are possible and should be utilized in future research.

However, based upon the obtained results, it is concluded that general capacity (i.e., funding, staff size, etc.) does not correlate with innovation adoption as significantly as do the internal decision-making methods of the organization members and the nature of the interpersonal interaction between the organization staff and the consultant. If the capacity of the organization does have an impact on adoption, such impact probably occurs only when gross differences exist between program capacities. (Such differences did not exist in the present experiment.)

The correlational analysis of the relatively simple behavioral test of Prior Interest demonstrated a strong trend (p < .10) toward predicting innovation adoption in both the correlational and the covariate analyses of variance. Further, the Prior Interest measure was found to strongly (p < .10) covary with total innovation-related tasks completed.
However, none of the attitudinal measures of interest in the innovation, taken either before or after consultation, was found to correlate significantly with Outcome. These measures included attitudes of the workshop attender and the workshop attenders' pre-consultation predictions and post-consultation assessments of staff attitudes toward the innovation.

Such findings tend to support the theory that measures of Prior Interest should be based on innovation-related behaviors rather than attitudes.

Summary and Future Research

The major conclusive findings of the present research indicate the significant positive impact of the following on innovation adoption:

1) Group Consultation
2) On-Site Consultation
3) Consultant Credibility
4) Prior Interest in the innovation

Each of these findings invites a number of future research projects.

Group Consultation. In the area of group consultation, further research could be conducted which manipulates the number of staff involved in the group consultation (one versus three versus five, etc.). Future research may also wish to control the status of the staff participating in the group consultation, such that lower-level staff could be encouraged to participate in one condition while only top managers are included in other group consultation conditions. A related issue might be to determine at what stage in the adoption process group interaction is needed. Zaltman and Duncan (1977) have argued that participative involvement is particularly important in the early phases of adoption, and less so during the latter stages. In the current study it can safely be concluded that the group consultation occurred early in the adoption cycle, perhaps accounting for the results.

In situations where large numbers of subject organizations are available, a number of strategies could combine aspects of all of these potential projects while investigating the most productive placement of group participation in a long series of consultation sessions. Such research could identify whether other staff should be involved immediately, or only after the program director has been thoroughly consulted. Obviously here, as in the present research, consultant credibility and prior interest should be measured and included in the development of the research design.

On-Site Consultation. The present research identified the superior effectiveness of this technique in producing innovation adoption. However, the relative cost involved in On-Site consultation may imply that more effective Telephone consultation, or some other electronically mediated intervention, could play a cost-effective part in disseminating innovations.

Here, as with the group consultation research, there is the potential to compare different schedules of combined On-Site and Telephone consultation. In such experiments, subject organizations could be randomly assigned to all On-Site consultations versus a single initial On-Site visit followed by Telephone Group consultation. Other research may wish to assess the relative effectiveness of periodic On-Site consultation augmented by regular telephone consultations.

In order to obtain a less confounded comparison of On-Site versus Telephone consultation, a project similar to the present research may wish to limit both Telephone and On-Site consultations to one hour per contact. Such a limit would eliminate the potential confounding effect of Time on research findings and would more clearly identify the value of the On-Site setting.
In the present research, subjects apparently perceived that On-Site consultation provided additional resources needed to implement the innovation. A further investigation of subject organizations' perceptions of the resources provided by On-Site consultation could more clearly identify such resource factors and determine whether such perceptions were based on actual resource differences or were merely a perceptual phenomenon such as that described by cognitive dissonance.

Consultant Credibility. The present research succeeded in identifying a measure of consultant credibility with the consultee which was correlated with Outcome. However, the specific behaviors demonstrated by the consultants which brought about these consultee impressions were not addressed.

Once the relevant consultant behaviors and/or traits which determine consultee assessments of consultant credibility are identified with greater precision, it may be possible to train a group of consultants to demonstrate the behaviors or traits previously identified by consultees as important. Trained consultants could then be compared with untrained consultants based upon the success of their consultee organizations in adopting the advocated innovation.

Such research may represent a first step toward developing reliable and valid instruments for assessing the relevant factors within the presently vague concept of consultant credibility, and could lead to the identification of behavioral methods by which consultants could improve their effectiveness with any consultee.

Should such research identify credibility as a combination of relatively fixed consultee and consultant traits, the findings could be used to "match" consultees to consultants and increase the cost-effectiveness of any consultation system.

Prior Interest. As was demonstrated in the present research, a behavioral measure of Prior Interest in the innovation was a strong predictor of Outcome.

Should future research be able to develop more subtle behavioral measures of interest in the innovation, it may be possible to examine directly the relationships between consultee interest and consultant method. Future experiments could be conducted which randomly assign different forms of consultation to a population of consultees dichotomized into high- and low-interest groups. Such research could perhaps identify consultation methodologies which are uniquely effective with one or the other of the two subcategories of consultee interest in the proposed innovation.

REFERENCES

Antonioni, D.T. A field study comparison of counselor empathy, concreteness and client self-exploration in face-to-face and telephone counseling during 1st and 2nd interviews. Dissertation Abstracts International, Vol. 34 (2013), No. 866.

Argyris, C. Interpersonal Competence and Organizational Effectiveness. Homewood, Ill.: The Dorsey Press, 1972.

Aronson, E., Turner, J., and Carlsmith, M. Communicator credibility and communicator discrepancy as determinants of opinion change. Journal of Abnormal and Social Psychology, 1963, 67, pp. 31-36.

Bennis, W. Changing Organizations. New York: McGraw-Hill, 1966.

Blake, R.R. and Mouton, Jane S. Building a Dynamic Corporation through Grid Organization Development. Reading, Mass.: Addison-Wesley, 1969.

Bowman, P.H. The role of the consultant as a motivator of action. Mental Hygiene, 1959, Vol. 43, pp. 105-110.

Burns, T., and Stalker, G. The Management of Innovation. London: Tavistock Publications, 1961.

Caird, J.B. and Moisley, H.A.
"Leadership and Innovation in the Crofting Communities of the Outer Hebrides," Sociological Review, 1961, Vol. 9, pp. 85-102. Caplan, G. The Theory and Practice of Mental Health Con- sultation, New York: Basic Books, Inc. 1970. Campbell, D.T. and Stanley, J.C. Experimental and Quasi— experimental Designs for Research. Chicago:l Rand McNally, 1963. Cecil, E.A., Cummings, L.L. and Certkoff, J.M. Group Com— position and Choice Shift: Implications for Adminis- tration, Academy of Management Journal, 1973, Vol. 16 (3) Pp. 412-421. 85 86 Chapanis, A. "Prelude to 2001: Explorations in Human Com— munication." American Psychology, Vol. 26, No. 11, November, 1971. Christie, B. Perceived usefulness of person-person tele- communications media as a function of the intended application, European Journal of Social Psychology, 1975, Vol. 4 (3). pp. 366—368. Coch, L. and French, J. Overcoming Resistance to Change, Human Relations, 1948, Vol. 11, pp. 512-532. Conrath, D.W., Buckingham, P., Dunn E., and Swanson, J.N. An experimental evaluation of alternative communication systems as used for medical diagnosis, Behavior Science, 1975, Vol. 20 (5), Pp. 296-305. Fairweather, G.W. Methods for Experimental Social Innova— tion. New York: Wiley, 1967. Fairweather, G., and Tornatzky, L. Experimental Methods for Social Policy Research. New York: Pergamon Press, 1977. Fairweather, G., Danders, D., and Tornatzky, L. Creating Change in Mental Health Organizations. New York: Pergamon Press, 1974. Festinger, L. Behavioral support for opinion change. Pub- lic Opinion Quarterly, 1964, ff, pp. 404-417. Gallessich, J. Training the School Psychologist for Con- sultation, Journal of School Psychology, 1974, Vol. 12, No. 2. Habbe, S. "Communicating with Employees," Student Person- nel Policy, No. 129, New York: National Industrial Conference Board, 1952. Havelock, R.G. Planning for Innovation Through Dissemina- tion and Utilization of Knowledge. Center for Re- search on Utilization of Scientific Knowledge, 1971. Havelock, R., and Havelock, M. Educational Innovation in the United States. Vol. 1: The Natinoal Survey: The Substance and the Process. Ann Arbor, Mich.: Institute for Social Research, the University of Michigan, 1973. Hovland, C., and Weiss The Influence of Source Credibility on Communication Effectiveness, Public Opinion Quarterly, 1951, ff, 635. 87 Kogan, N. and Wallach, M.A. The risky—shift phenomenon in small decision making groups: A test of the informa- tion exchange hypothesis, Journal of Exparimental Social Psychology, 1967 (b) 3, pp. 75:84. Kogan, N. and Wallach, M. Risk taking as a function of the situation, the person, and the group. In G. Mandler, New Directions in Psychology, Vol, III, New York: Holt, 1967 (a). Larsen, K., Norris, E., Droll, J. Consultation and Its Out- come: Community Mental Health Centers, Palo Alto, California: American Institutes for Research, 1976. Lewin, K. Frontiers in group dynamics. Human Relations, 1947, f, pp. 2-38. Levinger, G. and Schneider, D.J. Test of the "risk is a value" hypothesis, Journal of Personality and Social Psychology, 1969, ll, pp. 165-169. Litwak, E. Models of Bureaucracy that Permit Conflict, American Journal of Sociology, 1961, f1, pp. 173—183. Lounsbury, J.W. "The Diffusion of Environmental Action Prac— tices: A community experiment," International Review of Applied Psycholggy, 1976, vol. 25, No. 1, pp. 15-21. Lounsbury, J.W. and Tornatzky, L.G. 
"Planning and Involve— ment: An experiment in an applied setting," paper presented at Environmental Design Research Association meeting, April 22, 1975, Laurence, Kansas. McGregor, D. The Human Side of Enterprise. New York: McGraw—Hill, 1960. Mehrens, W.A. and Ebel, R.L. Principles of Educational and Psychological Measurement. Chicago: Rand McNally, 1967. Niehoff, Arthur H. and Charnel, Anderson J. "The Process of Cross-Cultural Innovation," International Developments Review, June 1964, Vol. 6, No. 2, pp. 120—129. Nordhoy, F. Group interaction in decision making under risk. Unpublished Master's thesis, Massachusetts Institute of Technology. School of Industrial Management, 1962. Osgood, C.E., and Tannenbaum, P.H. The principle of con- gruity in the predication of attitude change. Psycho— logical Review, 1955, gf, pp. 42-55. Pelz, E.G. Some factors in "group decision." In E.E. Maccoby, T.M. Newcomb, and E.L. Hartley (eds.) Read- ings in Social Psychology. New York: Holt, 1958. 88 Perrow C. Complex organizations: A Critical Essay. Glen- view, 111.: Scott, Foresman and Co., 1972. President's Conference on Technical-Distribution Research for the Benefit of Small Businesses, Washington, D.C.: Office of Technical Services, U.S. Department of Com— merce, September 23-25, 1957, pp. 287. Roethlisberger, F., and Dickson, W. Management and the Worker. Cambridge, Mass.: Harvard University Press, 1947. Rogers, E., and Shoemaker, F. Communication of Innovations. New York: Free Press, 1971. Rossi, P.H. and Williams, W. (ed.), Evaluating Social Pro— gram; theory, practice and politics. New York: Seminar Press, 1972. Schramm, Wilbur. “Science and the Public Mind." Katz, Elihu et al. (eds). Studies of Innovation and of Comminication to the Public, Studies inthe Utilization of Behavioral Sciences. Stanford, California: Insti- tute for Communication Research, 1962, Vol. 2, pp. 261— 286. Shaw, M.E. Group Dynamics, New York: McGraw-Hill, 1976. Stoner, J. "Risky and Cautious Shifts in Group Decisions; The Influence of Widely Held Values," Journal of Exper- imental Social Psychology, 1968, 4, pp. 442-459. Tannenbaum, P.H. Congruity theory. In R.P. Abelson et al. (ed.), Theories of cognitive consistency; a source book. Chicago: Rand-McNally, 1968, pp. 52-72. Thibant, J.W. and Kelley, H.H., The Social Psychology of Small Groups. New York: Wiley, 1959. Thompson, James D. Organizations in Action, McGraw-Hill, 1 Tornatzky, L.G. The relationship between community partic— ipation, student achievement and program innovation in public schools. Unpublished Paper. Department of PsycholOgy, Michigan State University. Walbach, M.A., Kogan, N. and Bem, D.J. Group Influence on Individual Risk-Taking, Journal of Abnormal and Social Psychology, 1962, Vol. 65, pp. 75—86. Weber, M. The Theory of Social and Economic Organization. Tr. and ed. by A.M. Henderson and T. Parsons. New York: Oxford University Press, 1947. _--._._ 89 Weiss, Carol H. (ed.) Evaluating Action Programs. Boston: Allyn and Bacon, 1972. Wholey, Joseph S., Scondon, J.W., Duffy, H.G., Fukumoto, J.S., Vote, L.M. Federal Evaluation Policy. Washing- ton, D.C.: The Urban Institute, 1970. Whyte, William F. Human Relations: a progress report. In Etzioni, Amitzi (Ed.) Complex Organizations. New York: Holt, Rinehart, and Winston, 1961. Zagona, S. and Haiter, M.R. "Credibility of Source and Re- cipeint's Attitude: Factors to the Perception and Re- tention of Information on Smoking Behavior," Percep— tual and Motor Skills, 1966, Vol. 23, No. 1, pp. 155— 168. 
Zaltman, Gerald and Duncan, Robert. Strategies for Planned Change. New York: John Wiley, 1977.

APPENDICES

APPENDIX A

STATE OF MICHIGAN
WILLIAM G. MILLIKEN, Governor
DEPARTMENT OF PUBLIC HEALTH
3500 North Logan Street, Lansing, Michigan
MAURICE S. REIZEN, M.D., Director

Dear Program Director:

The Office of Substance Abuse Services is now in the process of investigating the possibility of providing various workshops and on-site consultations in program evaluation skills.

Therefore, in order to determine the size, format and number of such workshops to offer, we are attempting to identify the number of substance abuse staff members interested in obtaining additional expertise in program evaluation techniques. Would you please complete the enclosed questionnaire and return it to our office at your earliest convenience.

Thank you for your cooperation.

Sincerely,

Bill Stevens
Program Analyst
Education and Training Division

BS/mc

CENTENNIAL ANNIVERSARY: ONE HUNDRED YEARS OF PUBLIC HEALTH IN MICHIGAN, 1873-1973

Program Director's Name ____________________
Phone Number Where You Can Be Most Easily Reached ____________________

If the following evaluation skills workshops were available, at no cost, in Lansing on the indicated dates, how many staff members would you send? (Travel and living expenses would need to be carried by your local program budget.)

Length of Workshop          Number of Employees You Would Send
A one-day workshop          ______
A two-day workshop          ______
A three-day workshop        ______

If the following evaluation skills workshops were offered within 50 miles of your program, at no cost, how many employees would you wish to attend?

Length of Workshop          Number of Employees You Would Send
A one-day workshop          ______
A two-day workshop          ______
A three-day workshop        ______

If a program evaluation consultant, whom you believed competent, was available free of charge, how many hours of your time would you spend with such a consultant? (Fill in any appropriate blanks or otherwise indicate hours.)

______ Hours per day for ______ days
______ Hours per week for ______ weeks
______ Hours per week indefinitely

If you could, through your own decision, now reallocate some percentage of your budget to evaluation efforts, what percentage of your budget would you reallocate? (Check one)

___ 0% - 2%     ___ 11% - 15%     ___ I would shift some monies out of evaluation efforts
___ 3% - 5%     ___ 16% - 20%
___ 6% - 10%    ___ Over 20%

If I and my staff had a more thorough understanding of evaluation techniques, I could arrange for my staff, as a whole, to spend approximately ______ additional man-hours per week on evaluation efforts.

APPENDIX B

Evaluation Skills Workshop
Kellogg Center, Michigan State University
East Lansing, Michigan

The Office of Substance Abuse Services will be conducting a 20-hour workshop designed to improve the evaluation skills of local program staff. The knowledge and skills being offered in this upcoming workshop, while definitely of value in meeting the Office of Substance Abuse Services evaluation guidelines (not yet developed), are offered more importantly to help local programs to learn more about which aspects of their services are working well and which aspects need improvement. The skills being presented, therefore, will probably be of more value to participants interested in internal staff-operated evaluation used to improve services than to those participants interested primarily in meeting minimum evaluation guidelines.

The workshop is designed as a "learning by doing" experience which will involve all participants in the practice of the skills presented. Therefore, if you are planning to attend these sessions,
plan to work.

The skills being introduced in the workshop will include the following:

1) Identifying or creating measurable success indicators
2) Factors which improve or detract from the usefulness of data
3) Goal attainment scaling - strengths and weaknesses
4) Evaluation designs
   a. time series design
   b. comparison design
   c. experimental design
   d. other designs as time permits
5) Locating evaluation resources in your community

Each of the above topics will include a small group exercise which will allow participants to practice the skills presented and receive feedback on their performance.

Because we want to learn as much as possible about the effectiveness of the pilot project, a great deal of information and feedback will be requested from participants. This request for information will begin with the application form you will find enclosed with this workshop description. Please be as thorough as possible in completing the information requested. Thank you.

Application for Evaluation Skills Workshop

1. Applicant's Name ____________________
2. Program Name ____________________
3. Program Address ____________________  Telephone ____________________
4. Position (Check most appropriate box)
   [ ] Director
   [ ] Coordinator of a service within the program
   [ ] Other Administrator
5. Number of Employees You Supervise ______
6. Major Field of Study ______  Highest Degree or Diploma Received to Date ______
7. Degree you are currently studying toward (if any) ______  Field of Study ______
8. Total College Credits in Mathematics and/or Statistics (if any) ______
9. Type of program (Check as many as appropriate)
   [ ] primarily serves alcohol problems     [ ] outpatient
   [ ] primarily serves opiate problems      [ ] crisis intervention
   [ ] primarily long-term treatment         [ ] prevention and education
   [ ] residential                           [ ] highway safety
   [ ] methadone                             [ ] administrative or coordinating agency
10. Do you have a staff member in your program whose responsibilities include evaluation activities?  Yes ___  No ___
    IF YES, how many hours per week, on the average, does she/he spend on evaluation activities?
    ___ 0-2   ___ 6-10   ___ 11-20   ___ 21-40
11. Do you have a separate written evaluation plan for your program which is different from other programs in your coordinating agency?  Yes ___  No ___
    IF YES, please include a copy of your plan with your application.
12. How many miles away is the nearest:
    Community College ______  Four-Year College ______  Graduate School ______
13. Does your program presently use or have access to computer facilities?  ___ Yes  ___ No  ___ Unsure

When you have completed the above, please return your application before December 16, 1974, to:

Bill Stevens, Program Analyst
Education and Training Division
Office of Substance Abuse Services
1019 Trowbridge Road
East Lansing, MI 48823

APPENDIX D

WORKSHOP SCHEDULE

Day 1

8:30 - 9:00    Coffee and Donuts; Trainees Pick Up Materials
9:00 - 10:30   Jarl Nischan Makes Presentation of OSAS Evaluation Policy; Bill Stevens Describes Workshop Goals, Activities and Schedule; Question and Answer Session With Jarl Nischan and Bill Stevens
10:45 - 12:00  Session 1: Definition of Five Types of Evaluation: Effort Evaluation, Impact Evaluation, Effectiveness Evaluation, Process Evaluation, Efficiency Evaluation. The purpose, general procedures, and various facts and information each can provide will be discussed.
Noon - 1:30    Lunch
1:30 - 3:00    Session 2: Defining and utilizing measurable objectives in pre-post and post-only evaluations. Defining methods by which achievement of program objectives can be measured by defining observable and/or measurable outcome measures; a small group exercise will follow a 45-minute presentation.
3:00 - 3:20    Break
3:20 - 5:00    Session 3: Methods of Measurement--will include basic concepts, different types of scales, unobtrusive measures, questionnaires and other data gathering techniques. Small group exercises will follow presentation.
5:00 - 6:30    Dinner

Day 2

8:30 - 9:00    Coffee and Donuts
9:00 - 10:00   Session 4: Effectiveness Evaluation--Introduction to comparison designs and their applicability to drug and alcohol programs.
10:00 - 10:45  Small group exercise on applying the pre-post and comparison designs to program evaluation.
10:45 - 11:00  Break
11:00 - 12:00  Session 5: Effectiveness Evaluation--Continued presentation of comparison designs and the problems involved in identifying "matching" or other groups comparable to drug or alcohol program clients.
Noon - 1:30    Lunch
1:30 - 3:00    Session 6: Effectiveness Evaluation--Introduction to Experimental Design and various advantages and difficulties involved in application of such designs to drug and alcohol programs. Discussion of ethical considerations of the use of experimental designs.
3:00 - 3:15    Break
3:15 - 4:45    Session 7: Exercise Session--Trainees will develop an experimental design for use in a drug or alcohol program.
6:30 - 7:30    Dinner
7:30 - 8:30    Critical review of evaluation reports. Presentation on Goal Attainment Scaling--its strengths and weaknesses.

Day 3

8:30 - 9:00    Coffee and Donuts
9:00 - 10:30   Session 8: Planning for Evaluation--A presentation of all factors needing to be considered prior to implementing an evaluation design. Will include monetary, interpersonal, inter-program and client considerations.
10:30 - 10:45  Break
10:45 - 12:00  Exercise--Trainees will partially create a plan for implementing an evaluation design for a program(s) in their area.
12:00 - 1:30   Lunch
1:30 - 4:00    Individual Consultation with Consultants--All workshop instructors will be available during this time for discussing specific program evaluation problems with individual trainees.

APPENDIX E

WORKSHOP PRE-TEST

NAME: ____________________

1. In what county is your program located? ______
2. How long has your program been in existence?
   ___ 0-6 months   ___ 6 months - 1 year   ___ 1-2 years   ___ 2-3 years   ___ 3-4 years   ___ 4-5 years   ___ over 5 years
3. What is the average length of contact with your clients?
   ___ 0-1 hour   ___ 1-24 hours   ___ 1-7 days   ___ 7-14 days   ___ 14-30 days   ___ 30-90 days   ___ 90-120 days   ___ over 120 days
4. How many full-time paid staff are employed by your program? ______
5. How many part-time paid staff are employed by your program? ______
6. How many hours per week does a part-time employee work? (on the average) ______
7. How many volunteers work at your program? (on the average) ______
8. How many hours per week does the average volunteer work? ______
9. What is the average turnover rate per year for your
   (a) full-time staff? ______
   (b) part-time staff? ______
   (c) volunteers? ______
10. Does your program hold regular staff meetings which involve all staff?  Yes ___  No ___
    IF YES, how frequent are such meetings? ______
11. Does your program hold other staff meetings on a regular basis which do not involve all staff members?  Yes ___  No ___
    IF YES, which staff positions attend? ______
12. What is the total budget for your program for fiscal year 1974-75? ______
13. What is your program's present evaluation budget? ______
14. What is the maximum amount of contractual or other funds in your present budget which could be diverted to evaluation efforts provided you made a maximum personal effort to do so? ______
15. How long have you been employed...
    (a) in your present position? ______
    (b) by your present program? ______
    (c) in the substance abuse field? ______
    (d) in the general human services field? ______
16. If you are not the director of your program, how long has your program director been employed as director? ______
17. What types of information or skills do you hope to gain from attending this workshop? (TRY TO LIST FOUR ITEMS)
    1. ______
    2. ______
    3. ______
    4. ______

APPENDIX F

WORKSHOP EFFECTIVENESS

1. In general I believe that the evaluation concepts presented in this workshop are capable of being incorporated in my program activities.
   ___ Strongly Agree   ___ Agree   ___ Agree Slightly   ___ Uncertain   ___ Disagree Slightly   ___ Disagree   ___ Strongly Disagree

2. How closely do you personally agree with the concepts presented in the workshop?
   (a) Need for establishing measurable evaluation criteria.
       ___ Strongly Agree   ___ Agree   ___ Agree Slightly   ___ Uncertain   ___ Disagree Slightly   ___ Disagree   ___ Strongly Disagree
   (b) Need for pre-testing.
       ___ Strongly Agree   ___ Agree   ___ Agree Slightly   ___ Uncertain   ___ Disagree Slightly   ___ Disagree   ___ Strongly Disagree
   (c) Need for a group to compare with your program clients.
       ___ Strongly Agree   ___ Agree   ___ Agree Slightly   ___ Uncertain   ___ Disagree Slightly   ___ Disagree   ___ Strongly Disagree
   (d) Need for randomized assignment to alternative services.
       ___ Strongly Agree   ___ Agree   ___ Agree Slightly   ___ Uncertain   ___ Disagree Slightly   ___ Disagree   ___ Strongly Disagree

3. How willing do you believe your staff will be to change their work routine or responsibilities to initiate the following aspects of the workshop?
   (a) Establishing measurable evaluation criteria.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project
   (b) Pre-testing.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project
   (c) Identification of a "matched" group to compare with your program clients.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project
   (d) Randomized assignment to alternative services within your program.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project

4. What changes would you like to see made before holding another workshop such as this? (continued on reverse side)

INSTRUCTOR EFFECTIVENESS

Instructor, Presenter or Trainer Name: [one rating column for each instructor: Stevens, Johnson, Tornatzky, and a fourth name illegible in the source]

5. Patience
   1. always patient
   2. usually patient
   3. sometimes patient but sometimes too demanding
   4. usually too demanding
   5. almost always too demanding

6. Practicality or on-the-job usefulness of Instructor's Presentation
   1. far too theoretical
   2. too theoretical
   3. sometimes practical but sometimes too theoretical
   4. usually practical
   5. very practical

7. Vocabulary used by the Instructor
   1. far too complicated
   2. a little too complicated
   3. just about right
   4. a little too simple
   5. far too simple

8. Instructor's Answers to Trainee's Questions Were:
   1. drawn out far too long
   2. drawn out a little too long
   3. usually clear and complete
   4. a little too short and/or incomplete
   5. far too short and/or incomplete

9. Organization of Presentation
   1. very well organized
   2. well organized
   3. fairly well organized
   4. a little disorganized
   5. very disorganized

10. Openness of Instructor to Different Points of View
    1. encouraged trainees to present different points of view
    2. would always accept different points of view
    3. occasionally argued too long with a trainee
    4. usually argued with trainees too long
    5. argued with trainees far too much

11. Your perception of the instructor's understanding of material
    1. Excellent
    2. Adequate
    3. Poor
    4. Very Poor
WRITTEN MATERIALS

12. IN GENERAL, VOCABULARY USED IN MATERIALS WAS:
    1 = FAR TOO COMPLICATED
    2 = A LITTLE TOO COMPLICATED
    3 = JUST ABOUT RIGHT
    4 = A LITTLE TOO SIMPLE
    5 = FAR TOO SIMPLE

13. THE AMOUNT OF MATERIAL PRESENTED WAS:
    1 = WAY TOO MUCH
    2 = TOO MUCH
    3 = ABOUT RIGHT
    4 = TOO LITTLE
    5 = WAY TOO LITTLE

14. IN GENERAL, THE ORGANIZATION OF MATERIALS WAS:
    1 = EXCELLENT
    2 = GOOD
    3 = FAIR
    4 = POOR
    5 = VERY POOR

15. AS AN OVERALL EXPERIENCE, I WOULD RATE THIS TRAINING PROGRAM AS:
    1 = EXTREMELY VALUABLE
    2 = VERY VALUABLE
    3 = VALUABLE
    4 = MEDIOCRE
    5 = A WASTE OF TIME
    6 = SOMEWHAT COUNTERPRODUCTIVE
    7 = VERY COUNTERPRODUCTIVE

RANK EACH OF THE FOLLOWING STAFF POSITIONS ACCORDING TO THE BENEFIT YOU BELIEVE THEY WOULD RECEIVE FROM THIS TRAINING (USE CODES BELOW).

16. ADMINISTRATORS, DIRECTORS, COORDINATORS
17. COUNSELORS, SOCIAL WORKERS
18. CLERICAL WORKERS
19. COMMUNITY OUTREACH OR ORGANIZATION WORKERS
20. PHYSICIANS, NURSES OR OTHER PROFESSIONAL MEDICAL PERSONNEL
21. CRISIS CENTER WORKERS
22. TRAINERS OR TRAINING COORDINATORS
23. PUBLIC INFORMATION OR EDUCATION SPECIALISTS

RANK EACH OF THE FOLLOWING PROGRAM TYPES ACCORDING TO THE BENEFIT YOU BELIEVE THEY WOULD RECEIVE FROM THIS TRAINING (USE CODES BELOW).

24. ADMIN. OR COORDINATING AGENCIES
25. ALCOHOL INPATIENT OR RESIDENTIAL PROGRAMS
26. ALCOHOL OUTPATIENT OR RESIDENTIAL PROGRAMS
27. DRUG INPATIENT OR RESIDENTIAL PROGRAMS
28. DRUG OUTPATIENT PROGRAMS (INCLUDES METHADONE)
29. CRISIS CENTER
30. PREVENTION OR EDUCATION PROGRAMS--OTHER THAN CRISIS CENTER
31. ALCOHOL HIGHWAY SAFETY PROGRAMS

CODES:
1 = WOULD BENEFIT THE MOST
2 = WOULD BENEFIT GREATLY
3 = WOULD BENEFIT SOME
4 = WOULD BENEFIT A LITTLE
5 = WOULD NOT BENEFIT AT ALL
6 = DON'T KNOW

BELOW, PLEASE LIST ANY SUGGESTIONS YOU MAY HAVE FOR IMPROVING FUTURE WORKSHOPS OF THIS KIND.
APPENDIX G

[Appendix G reproduces the consultant contact log forms, printed in landscape orientation. So far as the rotated text can be recovered, each form records, for each program: the name and position of everyone actually involved in developing the evaluation design; whether other consultants were contacted, by phone, in writing, or in meetings (Yes/No); and a review of how the tasks agreed upon at the previous consultation were accomplished.]

APPENDIX H

WOULD YOU PLEASE TELL ME ANYTHING AND EVERYTHING YOU ARE NOW DOING TO EVALUATE YOUR PROGRAM. I WILL BE TRYING TO WRITE DOWN WHAT YOU SAY, SO PLEASE TALK SLOWLY.

APPENDIX I

EVALUATION SKILLS WORKSHOP FOLLOW-UP QUESTIONNAIRE

1. How willing was your staff to change their routines and responsibilities to initiate the following aspects of the workshop?
   (a) Establishing measurable evaluation criteria.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project
   (b) Pre-testing.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project
   (c) Identification of a "matched" group to compare with your program clients.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project
   (d) Randomized assignment to alternative services within your program.
       ___ Extremely Willing   ___ Very Willing   ___ Willing   ___ Complain But Willing   ___ Would Quit or Try To Undermine Project

2. To what extent did the following factors inhibit implementing the concepts presented in the evaluation skills workshop?
(Please check the appropriate boxes: Very Greatly, Greatly, Some, Slightly, None.)

   My personal disagreement with workshop concepts
   Staff disagreement with workshop concepts
   Lack of funds
   Lack of computer facilities
   Lack of available trained staff or consultants
   My confusion about the concepts
   My staff's confusion about the concepts
   My feeling that such evaluation issues are a low priority
   My staff's feeling that such evaluation issues were a low priority
   Other issues were so pressing that I did not have time
   Other issues were so pressing that my staff did not have time
   I did not feel it would provide clients with any benefits or rewards
   My staff did not feel it would provide clients with any benefits or rewards
   Concepts did not fit the goals and values of our program
   I believe in subjective evaluation rather than trying to use numbers to define success
   Difficulty in establishing working and planning meetings
   Other (please specify) ______

3. To what extent were the telephonic consultations helpful in implementing the concepts presented in the workshop?
   ___ Extremely Helpful   ___ Very Helpful   ___ Helpful   ___ Slightly Helpful   ___ Not Helpful

4. Please rank from 1 to 6 the following categories according to which were the most valuable services provided by the telephonic consultations (1 equals most valuable, 6 least valuable).
   ___ provided information on resources
   ___ provided information on techniques for planning of evaluation project(s)
   ___ acted as a reminder to carry out evaluation tasks which had been forgotten
   ___ provided emotional support
   ___ provided referral to needed information
   ___ other (specify): ______

5. To what extent were the site-visit consultations helpful in implementing the concepts presented in the workshop?
   ___ Extremely Helpful   ___ Very Helpful   ___ Helpful   ___ Slightly Helpful   ___ Not Helpful

6. Please rank from 1 to 6 the following categories according to which were the most valuable services provided by the on-site consultations (1 equals most valuable, 6 least valuable).
   ___ provided information on resources
   ___ provided information on techniques for planning of evaluation project(s)
   ___ acted as a reminder to carry out evaluation tasks which had been forgotten
   ___ provided emotional support
   ___ provided referral to needed information
   ___ other (specify): ______

APPENDIX J

CONSULTANT REPORT FORM

Program ______          Date ______
Consultant ______       Sequence ______
Telephone [ ]   On-Site [ ]   Length of Contact ______

1. What aspects of the consultee's situation, intentions or actions tend to support development of a more refined evaluation design?

2. What aspects of the consultee's situation, intentions or actions tend to inhibit development of a more refined evaluation design?

3. Where does the consultee need to place most emphasis (work hardest) prior to the next consultation? Were these areas discussed with the consultee as a formal list of tasks?

APPENDIX K

RATING SCHEME

Review all material written about the subject's evaluation methodology.
Then rate the subject as follows:

5 = any experimental design (uses the word random)
4 = any design which compares two or more groups of subjects, regardless of the inappropriateness of the match
3 = any pre-test - post-test design (includes subjects who indicate that the same data is collected at intake and exit or follow-up but DO NOT use the phrase "pre-test")
2 = any kind of follow-up or post-testing of subjects after some form of treatment (EXCLUDE follow-up which requests ONLY the subjective opinion of the client about the quality of service received)
1 = all others

*Add 1/2 (.5) to any subject who indicates plans for some kind of correlation study.

A. If a design is being planned, give credit to the design only if some task has been completed in preparation for carrying out the design. Examples:
   1. Questionnaires have been completed
   2. Release form completed
   3. Formal approval of a related agency has been obtained
   4. Subjects assigned
   Plans that were started but discontinued for some reason should not be given credit.

B. Any activities which are labeled as:
   1. Goal Attainment Scaling (GAS)
   2. Management by Objectives (MBO)
   3. Milepost Evaluation System (MES)
   should be rated (1) unless included in some other sophisticated design.

C. Each subject should be rated according to the most sophisticated aspect of his/her evaluation activities.
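For readers who find a procedural statement of the scheme helpful, the rules above can be restated as a scoring function. The sketch below is a minimal illustration in Python, used here purely as compact pseudocode; the record layout (design types and flags) is hypothetical and not part of the original instrument, and only the numeric rules come from the scheme itself.

def rate_subject(subject):
    """Rate one subject's evaluation activities on the 1-to-5 scale."""

    def credited(design):
        # Rule A: a merely planned design earns credit only if some
        # preparatory task (questionnaire completed, release form done,
        # agency approval obtained, subjects assigned) was finished;
        # plans started but discontinued earn no credit.
        if design.get("discontinued", False):
            return False
        if design.get("planned", False) and not design.get("preparation_done", False):
            return False
        return True

    # Rule B: bare GAS, MBO, or MES activities rate (1) unless they are
    # part of some other sophisticated design, so they are screened out.
    designs = [d for d in subject.get("designs", [])
               if credited(d) and d.get("type") not in ("GAS", "MBO", "MES")]

    # Rule C: score the single most sophisticated credited aspect.
    score = 1.0  # "all others"
    for d in designs:
        if d["type"] == "experimental":      # uses the word "random"
            score = max(score, 5.0)
        elif d["type"] == "comparison":      # two or more groups, however matched
            score = max(score, 4.0)
        elif d["type"] == "pre_post":        # pre-test - post-test design
            score = max(score, 3.0)
        elif d["type"] == "follow_up" and not d.get("subjective_only", False):
            score = max(score, 2.0)          # opinion-only follow-up excluded

    # Starred rule: add one-half point for a planned correlation study.
    if subject.get("correlation_study_planned", False):
        score += 0.5
    return score

# A subject running a completed pre-post design and planning a
# correlation study would be rated 3 + 0.5 = 3.5.
example = {
    "designs": [{"type": "pre_post"}],
    "correlation_study_planned": True,
}
print(rate_subject(example))  # prints 3.5

Taking the maximum over the credited design aspects mirrors rule C directly: a subject is never penalized for also carrying out less sophisticated activities alongside the most sophisticated one.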