AN EFFECT EVALUATION OF THE PLANNING AND EVALUATION WORKSHOP FOR REGIONAL PLANNING UNIT PERSONNEL IN L.E.A.A.-REGION V

Thesis for the Degree of M.S.
MICHIGAN STATE UNIVERSITY
ROBERT A. SMITH
1977

ABSTRACT

AN EFFECT EVALUATION OF THE PLANNING AND EVALUATION WORKSHOP FOR REGIONAL PLANNING UNIT PERSONNEL IN L.E.A.A.-REGION V

By

Robert A. Smith

In recent years there has been an increased emphasis within the criminal justice system on both evaluation and training programs such as workshops. Unfortunately, however, evaluation has tended to be under-utilized where such training programs are concerned. In an effort to correct this trend, several types of evaluation were conducted for a training workshop developed for Region V R.P.U. personnel, in which various planning and evaluation concepts, techniques, and strategies were stressed.

This study reflects one of those types of evaluation, effect evaluation. It was designed to measure the effectiveness of the workshop with regard to the transfer of technology that would be put to use in the field. All of the R.P.U.'s in Region V were surveyed and assigned to either the experimental or the control group depending on whether or not they had sent a representative to the workshop. The survey itself consisted of a mailed questionnaire made up mostly of Likert scales, along with a few other questions of assorted construction, all dealing with key concepts, techniques and strategies that were presented at the workshop. The intent of the survey was to determine knowledge and various levels of use of these key items by both groups, from which the effectiveness of the workshop could be ascertained.

Fifty-six percent of those surveyed responded to one of the two mailings. From these responses, comparisons were made both within and between groups, for both before and after the workshop, in order to determine its effectiveness. These comparisons were accomplished through the use of t-tests, frequency distributions and contingency tables.

Although some of the control hypotheses could not be accepted, it was determined that the workshop was indeed effective. This conclusion was based on the finding that the agencies that sent representatives to the workshop demonstrated significant increases in the utilization of many of the concepts, techniques and strategies presented at the workshop, both in terms of the number of agencies using them and the degree of that use.

Approved:
Dr. Ralph G. Lewis
Dr. John H. McNamara
Mr. David B. Kalinich

AN EFFECT EVALUATION OF THE PLANNING AND EVALUATION WORKSHOP FOR REGIONAL PLANNING UNIT PERSONNEL IN L.E.A.A.-REGION V

By

Robert A. Smith

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

College of Social Science

1977

DEDICATION

To Debbie, who encouraged me to further my education, stood by me unfailingly during a most crucial period, and who above everyone else gave my life meaning during the short time I knew her.
To my stepfather, who got stuck with a new son "almost" fully grown, but who still loved me just as much as though I had been his very own.

And especially to my Mother, to whom I owe all that I am and all that I ever can be! "Mere words could never fully express . . . ."

ACKNOWLEDGMENTS

In retrospect it seems that I really played only a minor role in the development of this thesis. The various people who were involved in this project could probably have gotten along just as well without me, but I never could have made it without them!

Several "work-study" people helped arrange questionnaires and did some initial typing for me. Unfortunately, I didn't know most of them.

Fellow Graduate Assistants Lynn Miller and Thomas Austin provided that extra element of expertise which was necessary in order to "make sense" of the computer. In addition, Tom was always that "friend in need . . . ."

Secretaries Jan Baggett and Mary-Jane Knoll worked miracles whenever there was "dirty work" to be done. They also added a touch of humanity to otherwise impersonal tasks.

Typist Harriet Wever turned a mass of "indecipherable Sanskrit" into a final draft that anyone could be proud of, and she never once complained.

Committee members Dr. John H. McNamara and David Kalinich offered the benefit of their knowledge, wisdom and experience whenever I had the sense to realize that I needed help.

Finally, but most importantly, the Chairman of my Committee, Dr. Ralph G. Lewis, gave me the idea for this thesis and provided direction for my efforts. He had all the right answers and even some pretty good questions. Without his technical skills, realistic insights and constant encouragement, I would never have finished. However, even more important than the study which he was so instrumental in providing was his contribution towards developing me into a competent researcher.

To all of these people I owe a debt of gratitude, not only for their assistance, but also for their gifts of friendship and teamwork. My thanks to them all!

TABLE OF CONTENTS

LIST OF TABLES                                                    vii
LIST OF APPENDICES                                                 ix

Chapter
  I.   THE PROBLEM
         Statement of Problem and Overview
         Background and Need
         Explanation
         Purpose, Conceptual Framework and Hypotheses
  II.  REVIEW OF THE LITERATURE
         Introduction
         Study
         Discussion and Summary
  III. DESIGN AND METHODOLOGY
         Sample
         Data Collection and Measurement
         Research Design
         Hypotheses
         Data Analysis
         Summary
  IV.  DATA ANALYSIS
         Response Rate
         Representativeness of Experimental Group
         Hypotheses Tests
         Supplemental Analyses
         State of the Art
         Summary
  V.   CONCLUSIONS                                                 92
         Summary                                                   92
         Conclusions                                               95
         Discussion                                               100
         Recommendations                                          102

APPENDICES                                                        107
BIBLIOGRAPHY                                                      127

LIST OF TABLES

Table                                                             Page
  1.  Survey Rate-of-Return Data for Each Workshop Series           28
  2.  Percentage of Attendees Who Reported That Their Community
      Had Adopted or Had Made Plans to Adopt All or Portions of
      a Program Discussed at a Workshop                             29
  3.  Estimated Final Implementation Status for Programs as
      Reported on by Workshop Attendees                             31
  4.  Attendee Ratings of Workshop and of Documents Distributed
      at Workshop                                                   32
  5.  Response Rate                                                 49
  6.  Comparison of Attendees' and Nonattendees' Overall
      Pre-Test Scores                                               53
  7.  Comparison of Attendees' and Nonattendees' Individual
      Pre-Test Scores                                               54
  8.  Send a Representative to a Planning and Evaluation
      Workshop                                                      56
  9.  Workshop Attendance by State                                  57
 10.  Questionnaire Response Rate by State                          58
 11.  Comparison of Attendees' Overall Pre-Test and Post-Test
      Scores                                                        60
 12.  Comparison of Attendees' Individual Pre-Test and
      Post-Test Scores                                              61
 13.  Frequency Distributions for Attendees' Individual
      Pre-Test and Post-Test Scores                                 63
 14.  Comparison of Attendees' and Nonattendees' Overall
      Post-Test Scores                                              65
 15.  Comparison of Attendees' and Nonattendees' Individual
      Post-Test Scores                                              66
 16.  Frequency Distributions for the Attendees' and
      Nonattendees' Individual Post-Test Scores                     67
 17.  Increases in Attendees' Pre-Test/Post-Test Use of Items       69
 18.  Comparison of Nonattendees' Overall Pre-Test and
      Post-Test Scores                                              71
 19.  Comparison of Nonattendees' Individual Pre-Test and
      Post-Test Scores                                              72
 20.  Frequency Distributions for Nonattendees' Individual
      Pre-Test and Post-Test Scores                                 73
 21.  Comparison of Rates of Increase of Overall Mean Scores
      Between the Attendees and Nonattendees                        76
 22.  Comparison of Rates of Increase in Individual Mean Scores
      Between Attendees and Nonattendees                            77
 23.  Correct Planning Steps                                        82
 24.  Number of Books Read Since January 1, 1976                    83
 25.  Frequency Distribution of Books Read                          83
 26.  State of Art (Combined)                                       85
 27.  Significant Differences and Increases for Overall Scores      89
 28.  Significant Differences (Increases) Within Groups for
      Pre-Test/Post-Test Individual Scores                          90
 29.  Significant Differences (Higher Scores) Between Groups
      for Individual Scores                                         90

LIST OF APPENDICES

Appendix                                                          Page
  A.  Letter to State Planning Agencies                            108
  B.  Questionnaires                                               110
  C.  Results of Supplemental Questionnaire                        118
  D.  Cover Letters                                                123

CHAPTER I

THE PROBLEM

Statement of Problem and Overview

In December 1975, a training workshop was held in Chicago, Illinois, for Regional Planning Unit personnel throughout Region V, the Law Enforcement Assistance Administration regional jurisdiction that is composed of the states of Minnesota, Wisconsin, Illinois, Indiana, Ohio and Michigan. It was co-sponsored by LEAA Region V and the Criminal Justice Systems Center at Michigan State University, which developed it to improve the quality (and in some instances, the quantity) of planning and evaluation at the RPU level. Of the 73 RPU's or their equivalent that are distributed throughout the region, 34 sent representatives to the three-day workshop. While there, these attendees were exposed to a format of lectures, a rather involved planning exercise and open discussions, all dealing with usable planning and evaluation concepts, techniques and strategies.

At the conclusion of the workshop, the attendees were requested to provide some feedback by filling out questionnaires pertaining to the workshop's content, presentation and relevance. However, it was not feasible at that time to evaluate the effects of the workshop either in terms of knowledge gained or ultimate utilization of the concepts, techniques, and strategies presented during the workshop. Ultimately, such evaluation procedures are necessary in order to determine if the workshop was successful in attaining its overall goal to improve and increase the use of planning and evaluation technologies.
Providing this needed evaluation component constitutes the problem to be addressed by this study.

In order to deal with this problem and properly evaluate the workshop, numerous procedures were undertaken. For the sake of presentation, these steps have been arranged into the five chapters of which this study consists. The following is a brief overview outlining these procedures.

In the remainder of Chapter I, a Background of the growing emphasis on, and types of, evaluation is discussed along with the Need for the study. A brief Explanation is also offered pertaining to the particular type of evaluation employed. Finally, the Purpose, Conceptual Framework and Hypotheses are presented along with the definitions of key terms.

In Chapter II an Introduction to the relevant literature is made, an actual Study is examined, and a Discussion and Summary of the chapter is presented.

Chapter III discusses the methodology of the study, including the Sample, Data Collection and Measurement, Research Design, formal Hypotheses, and Data Analysis, followed by a Summary.

Chapter IV presents the analysis of the data and includes the Response Rate, Representativeness of the Experimental Group, Hypotheses Tests, Supplemental Analyses, some State of the Art information and a Summary.

In Chapter V a Summary of the whole study is made, Conclusions are drawn, a Discussion is presented and several Recommendations are offered. There are also several Appendices which contain various letters, relevant questionnaires and some information obtained through this study which is useful in regard to another, somewhat unrelated, evaluation of the workshop.

Background and Need

As part of a comprehensive effort to improve the criminal justice system initiated by the "Omnibus Crime Control and Safe Streets Act of 1968,"1 legislation was enacted for the creation of a National Institute of Law Enforcement and Criminal Justice which, among its other duties, would "carry out programs of instructional assistance consisting of . . . special workshops for the presentation and dissemination of information . . . ."2 This provision was repeated in both the "Crime Control Act of 1973"3 and the "Crime Control Act of 1976"4 with the addition that the Institute:

    assist in conducting at the request of a state or local unit of government or a combination thereof, local or regional training programs for the training of state and local law enforcement and criminal justice personnel . . . . Such training activities shall be designed to supplement and improve rather than supplant the training activities of the state and local government.5

    1 P.L. 90-351, 1968.
    2 P.L. 90-351, sec. 402.b.5.
    3 P.L. 93-83, 1973, sec. 402.b.5.
    4 P.L. 94-503, 1976, sec. 402.b.5.

In a similar vein, the National Advisory Commission on Criminal Justice Standards and Goals, in its 1973 report, Criminal Justice System, called for criminal justice agencies and agencies of education to "develop educational curricula and training programs."6 This sentiment has been shared by some state-level advisory commissions such as Michigan's.7

Largely as a result of the impetus created by (1) these and other related laws, standards, and recommendations, (2) a growing nationwide awareness of the need for in-service training within the criminal justice system, and (3) the availability of federal funding for such training,8 there has been a slow but steadily increasing emphasis on the utilization of training programs in general and training workshops in particular.
    5 P.L. 93-83, sec. 402.b.6., and P.L. 94-503, sec. 402.b.6.
    6 National Advisory Commission on Criminal Justice Standards and Goals, Criminal Justice System (Washington, D.C.: Government Printing Office, 1973), p. 168.
    7 Michigan Advisory Commission on Criminal Justice, Criminal Justice Goals and Standards for the State of Michigan (Lansing: State of Michigan, 1974), p. 210.
    8 For example, in the "Crime Control Act of 1976" the National Institute of Law Enforcement and Criminal Justice is authorized to make grants to, or enter into contracts with, agencies, institutions or organizations for the purpose of conducting special projects including training programs such as workshops (P.L. 94-503, sec. 402.b.1). Also, travel expenses and a per diem allowance are provided in this bill for people associated with such projects (P.L. 94-503, sec. 402.b.6).

The workshop is one of the more widely advocated types of in-service training programs.9 It is a short, but intense, training session that is conducted for practitioners from within the various segments of the criminal justice system and is developed for the transfer of technology to these individuals with the goal that it will be utilized by them in the field. In essence, then, its purpose is to upgrade the skills and capabilities of in-service personnel so that they can return to their respective agency settings and subsequently improve the quality and/or quantity of services which they are expected to provide within their jurisdictions.

It should be apparent from this brief description that a workshop is a goal-oriented activity. And, as is the case for most goal-oriented activities, there is a valuable component that should be incorporated into most, if not all, workshops. This component is evaluation. In fact, the "Crime Control Act of 1973" was largely developed "to require increased evaluation of programs,"10 and the "Crime Control Act of 1976" provided authorization for the National Institute of Law Enforcement and Criminal Justice:

    to make evaluations and to receive and review the results of evaluations of the various programs and projects carried out under this title . . . . The Institute shall in consultation with State Planning Agencies develop criteria and procedures for the performance and reporting of the evaluation of programs and projects carried out under this title . . . .11

    9 As evidenced by the fact that the "Omnibus Crime Control and Safe Streets Act of 1968," the "Crime Control Act of 1973," and the "Crime Control Act of 1976" all specifically prescribe for its use.
    10 United States Code Congressional and Administrative News, October 15 to October 20, 1976 (St. Paul, Minn.: West Publishing Co., 1976), p. 5809.

Similarly, the National Advisory Commission on Criminal Justice Standards and Goals has recognized the importance of evaluation and urged that "evaluation plans be designed as an integral part of all projects."12 More specifically, it has called for the appropriate agencies to "develop and implement techniques and plans for evaluating the effectiveness of education and training programs as they relate to on-the-job performances."13

There are several reasons why the evaluation of training programs (especially workshops) has received all of this emphasis and why it is so important. It can:

1. determine whether the training program is accomplishing its assigned objectives.
2. identify strengths and weaknesses of training activities.
3. determine a cost/benefit ratio of the training program.
4. establish a data base which organization leaders can use to demonstrate the productivity and efficiency of their operational procedures.
5. establish a data base which can assist organization managers in making decisions.14

    11 P.L. 94-503, sec. 402.c.
    12 National Advisory Commission on Criminal Justice Standards and Goals, A National Strategy to Reduce Crime (Washington, D.C.: Government Printing Office, 1973), p. 150.
    13 National Advisory Commission on Criminal Justice Standards and Goals, Criminal Justice System, p. 168.
    14 Kent J. Chabotar and Lawrence J. Lad, Evaluation Guidelines for Training Programs (Lansing: Midwest Intergovernmental Training Council, 1974), pp. 19-23.

In order to carry out these functions, an evaluation must deal with at least one, and possibly several or all, of the following questions:

1. What was done?
2. Did it work?
3. Why did it work?
4. How large were the results?
5. What would be the best alternative?15

In any attempt to answer these questions, there are several forms that the evaluation process may take, each focusing on a specific question. These forms, or types, of evaluations are:

1. effort
2. effect
3. process
4. impact
5. efficiency.16

It should be noted that although these types are different in nature, they are not by necessity operationally dissimilar. A researcher may go about conducting them in slightly different ways, but the same basic rules apply to all of them, and often the same raw data is generated from them. Their primary distinction from each other, then, is in the separate issues that they address, not the manner in which they are conducted. This is not to say that they need be mutually exclusive of each other or that they must be conducted separately. For example, the results from a process evaluation might be applicable to an effect evaluation, or vice versa. Also, any combination of these types can be conducted together in evaluating a workshop, naturally depending on which questions are to be answered. In such a case, it is simply a matter of gearing the data collection to obtain all the data relevant to the types involved. And, of course, the analysis techniques may have to differ somewhat, depending on the nature of the data collected and the information desired.

    15 Ralph G. Lewis, The Evaluation Process in Criminal Justice Programs (East Lansing, Mich.: Criminal Justice Systems Center, Michigan State University, 1975), p. 7.
    16 Ibid., p. 10.

Unfortunately, in spite of all the lip-service within the criminal justice system that has been given to evaluation, both as a general concept and as any specific type,17 there has been a serious under-utilization of it in reference to workshops. Although there has not been much done in the way of research to support this claim, discussions with "experts" on the subject and general observations in the field tend to bear it out.

Notwithstanding the arguments that not every workshop may need to be evaluated nor that all types of evaluations should be conducted for any given workshop, there is still a general neglect of evaluation, even when it is needed. In fact, it is not uncommon for an evaluation component to be largely ignored in the actual planning and conducting of a workshop, and often it is introduced only as an afterthought. And, as though this wasn't bad enough, the quality of some evaluations that are attempted may be seriously questioned.
    17 The reader is referred back to an earlier comment by the National Advisory Commission on Criminal Justice Standards and Goals which called for evaluation of the effectiveness of training programs (effect evaluation). N.A.C.C.J.S.G., Criminal Justice System, p. 168.

In trying to understand the reasons for this neglect and poor quality, there are a myriad of explanations which may be offered. Summarized, they fall into the following categories:

1. The people directly involved with conducting a workshop do not have the skills with which to properly conduct an evaluation(s) of whatever type(s) is necessary.

2. The people directly involved with conducting a workshop do not have resources available to them with which to properly conduct appropriate evaluation(s). This often refers to constraints on such resources as manpower and money.

3. The people who are either in a position to authorize or to conduct an evaluation do not realize the importance of doing so or are just too apathetic.

4. The people who are in a position to make use of the results of evaluations or to make policy decisions based on them either refuse or simply fail to do so.

Although these reasons might provide a slightly better understanding of the situation, they in no way justify it. In fact, such lack of proper evaluation may even serve to defeat the purpose of a given workshop. In any event, it certainly leaves an open question as to such a workshop's worth.

This uncertainty can be somewhat exemplified in the case of the Region V Planning and Evaluation Workshop, which was previously discussed in the first section of this chapter, although the preceding arguments do not necessarily hold true in this case. Initial consideration was given to an effect evaluation of the workshop and some preliminary planning was done. However, the actual effect evaluation could not be carried out immediately following the workshop. Therefore the workshop's effectiveness is only now being determined in this study.

Acknowledging the fact that the effectiveness of the workshop is unknown, a very logical and pragmatic question arises: Why bother to find out if it was effective? (I.e., what important need is really served by conducting an effect evaluation?)

Earlier in this section, several reasons for conducting evaluations were cited. The first of these, "determine whether the training program is accomplishing its assigned objectives," implies two things: the use of effect evaluation and a need to know the results of the training. Appropriately, then, it provides a good starting point for discussing the need for an effect evaluation and, ultimately, the rationale for this study.

The concern for wanting to know if the objectives of the workshop were met (i.e., whether the overall goal to improve planning and evaluation was attained) is quite understandable. Assuming that there was either a demonstrated or an attributed need for the workshop in the first place (hopefully, the powers-that-be wouldn't sanction this one without good reason), then people at all levels within the criminal justice system who were in some way involved with the workshop itself, or who might be affected by it, will be interested in the results of an effect evaluation to determine whether the initial need was satisfied and the situation improved upon. In this regard, they will use effect evaluation as a tool to ascertain and measure any progress which, hopefully, will result.
This holds true even though these people, within the context of their own job roles, may be concerned with the potential effects of the workshop for different reasons. For instance, the upper-level administrators may be mostly concerned with the far-reaching effects in the field which the workshop may produce, while RPU representatives who actually attended the workshop may be interested in just improving their own abilities. Also, the "experts" who developed and conducted the workshop are probably very much concerned with turning out a useful product, but the heads of the various RPU's in Region V (not to mention state planning agencies, Region V headquarters itself, or even LEAA) may be primarily interested in resultant performance levels within their jurisdictions.

Whatever their concerns may be, the outcome of the workshop can have some bearing on them, so that all these people can benefit from the information generated from an effect evaluation. And regardless of the effects or implications that the workshop may or may not have for the criminal justice system in general, or for them in particular, they need to know the findings of an effect evaluation in order to realize what these effects or implications are or could be (assuming, of course, that they won't be self-evident).

In essence, then, effect evaluation can satisfy the need of various criminal justice personnel to know the effects of the workshop, for whatever reasons they may have (even including simple curiosity and the ego satisfaction that may be derived from positive findings).

Aside from this, effect evaluation can also meet the need to justify the workshop itself. In a general sense, such justification is reached if the workshop is successful in transferring technology that is subsequently used. However, there are more pragmatic considerations involved. Justification of the workshop as a whole depends on the justification of costs in terms of resources that were committed to the development and execution of the workshop.

Heading the list of resources whose use must be justified is, as one might expect, money. LEAA provided a grant to the Criminal Justice Systems Center at Michigan State University to develop and conduct the workshop and to reimburse all participants for travel and accommodation expenses.18 Not surprisingly, LEAA officials want something to show for the investment, preferably favorable findings, but some findings regardless.

Money was only one of the expenditures, however. A good deal of research, planning, coordination, communication and miscellaneous details and arrangements went into the workshop, which means considerable amounts of time, effort and manpower were invested. These resources, like money, require justification (rationalization?) and, to reiterate, such justification can be facilitated through effect evaluation.

    18 Training workshop, Grant #75 TN 05 004.

The need to justify all this commitment of money, time, effort and manpower is an important and pressing issue because of (1) the limited supply and availability of these resources and (2) a demand for their use on numerous other projects and programs in the field of criminal justice. For example, LEAA has a fixed budget per year with which to disburse funds to worthy projects and programs, but there are literally hundreds of grant applications made to it annually for such funding.
Similarly, both Region V and the Criminal Justice Systems Center have many responsibilities to attend to other than the workshop, but they also have just so many staff members "to go around." As a result there was a limit to the number of personnel who could be "spared" for the workshop. In addition, most of the people who did contribute to the workshop (including guest speakers) had other job-related responsibilities requiring their attention, which in turn affected the amount of time and effort they could devote to the workshop.

Considering the constraints on these resources and a host of potential uses for them, it is not difficult to understand the importance of allocating them wisely and being able to tell via some form of feedback loop (i.e., evaluation) whether they had been used productively.

At first glance the logic behind this need to justify the expense of the workshop in terms of the resources committed to it may appear somewhat unclear. Granted, it is pragmatic to make the best possible use of the resources available and to avoid wasting them on this workshop if its goal is unattainable, especially when they might be better utilized elsewhere. However, it would seem a little late to worry about this "after the fact." Why attempt to justify the workshop after it is already history? What real good can come from the knowledge of whether the expense was worthwhile?

The answer to these questions lies in the possibility that the workshop might be replicated or that it might serve as a model for other workshops to be developed later.

This workshop represents one of the first attempts to increase and improve the use of planning and evaluation at the RPU level, so for all practical purposes it could be considered a pilot program. Therefore, the future of other potential workshops of this type may depend on the success of this one. In fact, the success of this workshop would provide a strong argument in favor of identical workshops for RPU personnel in Region V who didn't attend this one, or for RPU personnel in other regions where there is a need for better planning and evaluation. Perhaps even a workshop for SPA or regional personnel would be in order. Such workshops could use this one as a blueprint to follow. However, if this one is not successful and the expense cannot be justified, then no useful purpose would be served by repeating it and making the same investments in future workshops. After all, why make the same costly mistake twice (or more)?

This line of reasoning can be extended to include potential workshops dealing with different subjects (e.g., information systems or research techniques), since the formats of such workshops and the resources needed to develop them would be very similar to the format and resources associated with this particular workshop. The only major difference would be in the type of information presented.
As a result, there is a good chance that this workshop will be among those used as guidelines for future workshops. In such a case, the preceding arguments would have some relevance; if this workshop cannot be justified, it should not be used as a model for another one, at least in its present form.

However, this is not to automatically say that the Planning and Evaluation Workshop should never be repeated or that others should never be based on it just because it might not be successful and justifiable.19 Perhaps additions, deletions, revisions or modifications in the content, format or presentation--or better timing (yes, timing is important)--would enhance the chances that desired effects could be attained. If such were the case, possible solutions might be inferred from the results of the effect evaluation itself, or it might be necessary to develop another type of evaluation, such as one measuring cost-effectiveness, from the groundwork laid by this one. In any event (and as a fitting conclusion to this section), it should be emphasized that the first step toward either correcting the workshop so that it could serve as a model, or simply determining that it should be "written off" as a noble failure, is to ascertain what effects it did have. Hence, yet another reason for the need to conduct an effect evaluation.

    19 Just for the record, keep in mind that there may be external or situational factors, such as politics, that may have a bearing on the outcome of the workshop. For instance, the head of an RPU may not allow an employee who attended the workshop to apply new skills or implement new techniques. Although this deserves mentioning, it is beyond the focus of this study and will not be further addressed or elaborated on.

Explanation

In the preceding section the need for an effect evaluation of the Region V Planning and Evaluation Workshop was established. The implication was also made that both the quality and quantity of evaluations in general have traditionally failed to meet ideal standards, and the review of literature in Chapter II will further support this contention. This study will attempt to deal with both of these issues by providing an effect evaluation which will be an improvement over past practices.

Towards this end it is appropriate to discuss the delay of several months that occurred in conducting this particular study after the workshop. Such a delay might give the initial impression that the evaluation of this workshop suffered from the same neglect as has been previously mentioned. However, this is not the case, at least in regard to this study, because a time lapse between the conclusion of the workshop and the onset of data collection is necessary whenever conducting an effect evaluation.

There are several steps that must take place before the information presented in the workshop can be put to use. The attendees must carry it back to their respective RPU's, digest it themselves and share it with other members of the staff. The merits of using it must then be contemplated and a decision made whether or not to do so. If it is to be put to some use, plans must be made and eventually implemented. The time required for all these steps would vary among RPU's, but several months would pass before all the RPU's could make use of the information. Therefore, it would be impractical to attempt to actually conduct an effect evaluation before enough time has passed for the potential effects to be realized.
That is why no such attempt was initiated at the conclusion of the Region V Planning and Evaluation Workshop.

Purpose, Conceptual Framework and Hypotheses

The purpose and overall goal of this study is to provide a proper effect evaluation of the December 1975 Planning and Evaluation Workshop for Region V RPU personnel. Within this context, there are two primary objectives of the study.

The first is to determine whether or not any of the information that was presented at the workshop was actually learned by those who attended, and, if so, how much and in what particular areas.

The second objective is to determine if any of the information that may have been learned by the attendees is currently being
As a useful by-product of this study, baseline data will be generated which, in turn, can provide a "state of the art" of planning and evaluation technologies in use at the RPU level, at least for Region V (including the RPU's that were not represented at the workshop, since they will be surveyed, too). Such information has never been available in aggregate form before. Although "state of the art" information may have little or no intrinsic value itself, it could be applied to various activities. For instance, had such information existed before the workshop, it could have been used to objectively support the claim that there was 20 a need f0r the workshop or even to show the need in the first place, and once it has been collected it can serve in a similar capacity for the future. In any event, this study will make such information available for concerned criminal justice officials to use as they see fit. There are a variety of formal theories concerning exposure to information in educational settings and their subsequent utiliza- tion upon which this study could be based. However, the primary concern of this study is not really related to issues of theory testing. All that is necessary for purposes of this study is to establish a conceptual framework within which to operate. Such a framework in its simplest form would be somewhat as follows: in-service training workshops for criminal justice practitioners facilitate improved performance in the field. This conceptual framework would rest on two basic assumptions from which testable hypotheses could be derived. The first of these assumptions is that the transfer of usable technology can be made in a workshop setting. In other words, criminal justice personnel can actually be taught in a workshop to do a better job. The second assumption is contingent on the first and states that the technology, once learned, will be put to use. This means that attendees of a workshop will employ their newly acquired skills on the job since such utilization is the reason for learning them in the first place. Although this simplified description of the conceptual framework for the study could be expounded upon in greater detail, 21 such elaboration is unnecessary and would contribute little to either the study or the reader. However, it would be beneficial to clarify the hypotheses at this point as follows: It is expected that the people who attended the workshop will know more about planning and evaluation afterwards than they did before. Their agencies will, in turn, put this increased knowledge to use. It is also believed that those who attended will know more about planning and evaluation after the workshop than selected representatives of the agencies not chosen to participate in the workshop. As a result, the attendees' agencies will use more planning and evaluation concepts, techniques and strategies than will the nonparticipating agencies. In attempting to test these hypotheses there are several terms used in this study whose definitions it would be beneficial to know. In addition, the independent and dependent variables of the study should be differentiated. The following is a list of these definitions and variables: Definitions Representatives: Employees of agencies selected by those agencies to be respondents for this study. The representatives of the agencies in the experimental group attended the workshop. The representatives of the agencies in the control group did not. 
Knowledge: The accumulation of facts or information on planning and evaluation.

Concepts: Abstract or generic ideas relating to planning and/or evaluation which are generalized from specific instances.

Strategies: Plans or means to achieve planning and/or evaluation related objectives.

Workshop: The brief, intensive training program conducted in December 1975 for selected Regional Planning Unit personnel in Region V, dealing with the transfer of usable planning and evaluation technologies.

Techniques: Specific technical methods for accomplishing planning and/or evaluation related goals or aims.

Agencies: Regional Planning Units or their equivalent within Region V.

Variables

Independent variable: The training provided at the workshop.

Dependent variables: (1) The knowledge gained as a result of the training at the workshop; (2) subsequent utilization of the new knowledge gained.

CHAPTER II

REVIEW OF THE LITERATURE

Introduction

There is a multitude of examples of literature from other disciplines which deals with evaluations of training programs and workshops. However, since this study is not concerned with broad theory or with comparing training programs or methods of evaluating them, it would not serve much purpose to delve into other fields. Therefore, the scope of this review is confined to the field of criminal justice.

Evaluation is a subject that currently proliferates in criminal justice literature. Almost every imaginable facet of law enforcement or criminal justice has been addressed by some form of evaluation-related literature, ranging from general planning and day-to-day activities to specific projects and programs. However, these readings seldom differentiate evaluation by types, so effect evaluation is rarely treated as a separate topic or issue. This does not mean that it is never dealt with, because most descriptions of the evaluation process include an implicit description of effect evaluation. However, as a rule, it is not labeled and discussed as a specific type. Such a distinction becomes the responsibility of the reader.

In addition to a lack of readings pertaining to effect evaluation as such, the overwhelming majority of literature in circulation is not even research oriented (i.e., actual evaluation research studies). Instead, most of the writings are intended to promote the use of evaluation and/or show how and when to conduct it. A couple of noteworthy examples of this kind of literature are Ralph G. Lewis's The Evaluation Process in Criminal Justice Programs and Intensive Evaluation for Criminal Justice Planning Agencies by Donald R. Weidman.20

Likewise, most writings that specifically relate to the evaluation of training programs (including workshops) are not designed to describe actual evaluations or show the results of them, but rather are meant to advocate evaluation and provide guidelines for it, as is well done in Evaluation Guidelines for Training Programs by Kent J. Chabotar and Lawrence J. Lad, and Planning, Conducting, Evaluating Workshops by Larry Davis and Earl McCullon.21 Unfortunately, these readings contain few examples of such evaluation.

In fact, there is a dearth of available accounts of evaluations in this area that have been conducted. Although our search for research-related material was not an exhaustive one, we were hard pressed to find relevant literature. Most of the writings examined were similar to those mentioned above, "cookbooks" for planning, conducting and evaluating workshops. A few studies were alluded to but were impossible to locate or obtain. (Judging from the brief descriptions of those studies in the readings, we are suspicious of the quality of most of them.)

    20 Donald R. Weidman, Intensive Evaluation for Criminal Justice Planning Agencies (Washington, D.C.: U.S. Department of Justice, 1975).
    21 Larry N. Davis and Earl McCullon, Planning, Conducting, Evaluating Workshops (Austin, Texas: Learning Concepts, Inc., 1975).

One real piece of research containing an effect evaluation of a criminal justice training workshop was discovered. The evaluation22 was fairly comprehensive, including aspects of other types of evaluation, but effect evaluation was clearly the primary concern (although implicit). The remainder of this chapter will center around a detailed examination of this study.

Study

An evaluation was conducted of four separate series of workshops that were sponsored by the Office of Technology Transfer of the National Institute of Law Enforcement and Criminal Justice within LEAA23 and conducted for various criminal justice planning and decision-making personnel throughout the country to promote the use of certain programs.

    22 C. Dennis Fink, "An Evaluation of the Effectiveness of Workshops for Facilitating the Transfer of Technology" (Alexandria, Va.: Human Resources Research Organization, March, 1976).
    23 The reader is referred back to the Background and Need section of Chapter I of this study, where it was established that the NILECJ was responsible for promoting training workshops. The Office of Technology Transfer (OTT) is the specific subunit within the NILECJ that generally takes the active role in sponsoring such workshops.

The workshops were similar to each other in many respects. Formats and presentations varied somewhat, but for the most part they all were intensive training sessions of approximately two and one-half days in length and were composed of lectures, discussions and group exercises. The major difference between the series was in their content. For each of the four series of workshops a separate criminal justice exemplary project or concept was discussed. They were:

1. "Des Moines, Iowa Community-Based Corrections (CBC) System," which provides alternatives to penal institutions.

2. "Columbus, Ohio Citizen Dispute Settlement (CDS) Program," which provides out-of-court mediation for neighborhood and family disputes.

3. "Sacramento, California 601 Juvenile Diversion Project (601 Project)," which provides crisis counseling instead of juvenile court processing for status offenders.

4. Police Department Crime Analysis Units (CAU), which provide statistical data for the identification of crime patterns and the allocation of police manpower.

Each series of workshops was conducted in each of the ten LEAA regions throughout the country, totaling nearly 40 separate workshops (some regions declined to host certain workshops). The selection of individuals within a region to attend a workshop was made by the LEAA Regional Office and was based on an individual's interest in the project or concept and his/her authority to initiate it within his/her jurisdiction. Approximately 35-50 such people were chosen for each workshop.

The overall objective of the study was to evaluate the effectiveness of all four series of workshops in regard to:

1. the degree to which the attendees tried to implement the projects or concepts that were presented in the workshop they attended,
2. especially liked or disliked workshop materials and techniques,
3. ways to improve future workshops,
4. identification of potential workshop follow-up activities that might facilitate the transfer of criminal justice technology in regard to programs and concepts.

The evaluation itself began with a mailed survey to all attendees of all workshops. The survey instrument was a standardized questionnaire developed with the help of NILECJ personnel, which was modified for each series of workshops. The questions were of assorted construction, mostly Likert-type scales, checklists and open-ended fill-ins, and they pertained to the four issues listed above. Questionnaires were mailed approximately two and one-half months after the conclusion of the workshops. Enclosed with each questionnaire was a cover letter explaining the survey. The initial return rate was 49.3% overall (see Table 1) and no follow-up was attempted.

Analysis was fairly straightforward and simple. Average scores and percentages were computed for some answers, and tabulation and ordering of the most frequent responses were made for others. There were no formal test hypotheses and no control group to compare with. There was, however, an attempt, mentioned but not elaborated on, to obtain pretest information.

TABLE 1.--Survey Rate-of-Return Data for Each Workshop Series.

Workshop Series                                     Questionnaires   Questionnaires   Rate of
                                                    Distributed      Returned         Return
Community-Based Corrections System (CBC)                  379             197          49.3%
Citizen Dispute Settlement Program (CDS)                  400             153          38.3
California Diversion Program for Juvenile
  Status Offenders (601 Program)                          235             128          54.5
Crime Analysis Unit (CAU)                                 316             188          59.5
All Workshop Series Combined                            1,330             656          49.3

Although specific results varied among the different series, the overall findings were favorable. The combined figures showed that 71 percent of the communities from which there were responses to the questionnaires had already adopted, were planning to adopt, or were still considering adopting all or portions of the project or concept presented in the workshop to which they sent a representative. And 19 percent of the total responses indicated that their communities had already adopted all or portions of specific projects or concepts prior to the workshop dealing with it. Only 10 percent of the respondents indicated no plans in their communities to adopt one of the four programs. (See Table 2 for the breakdown for each series.) Twenty-four percent went so far as to state that implementation was primarily the result of the workshops themselves. Also, interestingly enough, the most successful workshop series (the CBC and 601) were also the most complicated in terms of components that made up the programs discussed.

TABLE 2.--Percentage of Attendees Who Reported That Their Community Had Adopted or Had Made Plans to Adopt All or Portions of a Program Discussed at a Workshop.a

                                                                 Workshop Program
                                                     CBCb   CDSc   CAUc   601b   All Programs
                                                                                    Combined
Number of respondents                                 164    137    154    116        571
No plans to adopt program                              4%    23%    10%     4%        10%
Already had adopted all or portions of program
  prior to workshop                                    24     12     16     26         19
Adoption of all or portions of program still
  under consideration                                  33     42     31     28         34
Decision had been made to adopt all or portions
  of program                                            9      4     31      7         13
Already had adopted or was in the process of
  adopting all or portions of the program,
  apparently as the result of attending the
  workshop                                             30     19     13     35         24

    a A respondent for the CBC or 601 workshop might have reported that his community had adopted component A of a program prior to the workshop, was in the process of considering adoption of component B, had made a decision to adopt component C and was in the process of adopting component D. Such a response would be recorded only once and would be recorded under the most concrete evidence that adoption had occurred as a result of attending the workshop. In this example the response would be recorded as "was in the process of adopting."
    b CBC and 601 programs contained 6 and 5 components, respectively.
    c CDS and CAU programs are essentially one-component programs.

As far as the final implementation status for the programs described in the four series of workshops is concerned, the findings were also encouraging. Overall, 37 percent of those who responded to the questionnaires indicated that their communities had adopted, were in the process of adopting, or had decided to adopt all of the portions of the program discussed in the workshop they attended. A total of 68 percent indicated a commitment to adopt at least some portions of a program. (See Table 3 for the breakdown for each series.)

TABLE 3.--Estimated Final Implementation Status for Programs as Reported on by Workshop Attendees.a

Implementation Status Categories                     CBCb   CDSc   CAUc   601b   All Programs
                                                                                    Combined
1. Already had, in process of adopting, or
   decision made to adopt all program components      23%    37%    60%    26%        37%
2. Already had, in process of adopting, or
   decision made to adopt a majority of program
   components                                          44     NA     NA     33         19
3. Already had, in process of adopting, or
   decision made to adopt one or a few program
   components                                          23     NA     NA     27         12
4. Consideration still being given to the
   adoption of one or more program components,
   or all of program                                    6     40     30     10         22
5. No plans to adopt all or any part of program         4     23     10      4         10

    a A respondent might have reported that his community had adopted one component of the 601 program prior to the workshop, was in the process of adopting two components, had made a decision to adopt a fourth component of the program, and was still considering adoption of a fifth program component. From this information it appears certain that eventually that community will have adopted four or five or a majority of the 601 program components. The response representing this community would be recorded under the second implementation category. A response would be recorded under the fourth implementation category only when one or a few program components were under consideration and there were no plans to adopt any other portions of the program nor were any program components already in existence.
    b CBC and 601 programs contain 6 and 5 components, respectively.
    c CDS and CAU programs are essentially one-component programs.

There were several other findings made (although of less consequence to this study). The workshops were rated fairly well by the attendees, as indicated in Table 4, but 29 percent of the attendees wanted more information than was provided in the workshops. Several barriers to implementation were discovered. Such a list included a lack of money, a lack of manpower, jurisdictional disputes between agencies cooperating on a program, conflicts with local or state laws, and a lack of adequate caseloads.

Finally, several benefits derived from attending the workshops were listed. The most common of these, as indicated by the attendees, were new contacts with people from other agencies, new solutions to problems, the increased availability of desired information and improved techniques.

TABLE 4.--Attendee Ratings of Workshop and of Documents Distributed at Workshop.a

                                                              CBC    CDS    CAU    601
1. Usefulness of workshop for acquiring new ideas
   and information                                            3.77   3.84   3.90   4.02
2. Usefulness of workshop in comparison with other
   recently attended workshops                                3.73   3.52   3.66   3.68
3. Overall reaction to workshop program and style
   of presentation                                            4.06   3.94   4.14   4.22
4. Judged usefulness of training manual distributed
   at workshop                                                4.08   3.88   3.81   3.90
5. Judged usefulness of exemplary program handbook
   or prescriptive package                                    3.75   3.77   3.90   NA

    a Average rating based on a 5-point scale.

As a result of the findings, the conclusion was made that the workshops were successful in regard to facilitating the transfer of technology. Several recommendations to improve workshops were also offered. Summarized, they are:

1. emphasize specially liked training techniques,
2. develop improved pre-workshop materials,
3. provide increased information about related programs,
4. provide detailed information on program implementation,
5. avoid over-use of small group problem-solving exercises,
6. eliminate leaderless discussions.

Some suggestions were also made regarding the improvement of technology transfer in general. They are:

1. on-site, specially tailored workshops,
2. technical assistance,
3. NILECJ-sponsored "program selling" assistance and material,
4. improved information dissemination methods,
5. funding assistance,
6. additional miscellaneous information on programs (e.g., applications, alternatives, etc.).

The study concluded with a brief description of six replication efforts of the CBC program and a Technology Transfer Conference held in Denver, Colorado, in March 1975, which was conducted for the purpose of opening communication channels between the people working on the six separate replication programs. A recommendation was made for the use of similar conferences to coordinate the efforts of identical programs undertaken by various communities that have sent representatives to a workshop. However, this recommendation was further qualified to state that such follow-up conferences would only be necessary if the different communities attempted to implement all components of a particular program (as opposed to just some of them). Otherwise, the initial workshop describing the program would be sufficient.

Discussion and Summary

Effect evaluation was obviously not the only concern of the study (it seldom is), and at the very least elements of effort and efficiency evaluations were incorporated into it. (One could even make an argument that all five types of evaluation were included.)

Given the procedures that were taken, the study appears to have been conducted fairly well. However, there are some shortcomings, at least methodologically. For the most part, the research design was pre-experimental24 in nature. Although it was maintained that a "one-group, pre-test post-test" design was employed,25 it was unclear how pre-test information was obtained or even whether it really existed at all. Therefore, the design more closely resembled a "one-shot case study"26 type. This type of design, in turn, has "such a total absence of control as to be of almost no scientific value."27 Even if the "one-group, pre-test post-test" design was
used, it would methodologically be less than ideal because of its lack of controls against threats to internal validity.28 And although both of these designs are often still employed in the field, they are basically unacceptable from a scientific point of view because they cannot ensure that any change in the dependent variable is solely the result of the introduction of the independent variable.

24. Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally & Co., 1963), p. 6.
25. Ibid., p. 7.
26. Ibid., p. 6.
27. Ibid.
28. Ibid., p. 7.

No follow-up questionnaires were sent out. Although the initial response rate of 49.3% was fairly good, it might have been desirable to have made one more mailing.

Finally, the analysis of the data may have been oversimplified. The study was supposed to be an experimental study, but there was no real hypothesis testing (the sort of omission one would expect in a descriptive study). Also, statistical techniques such as t-testing were lacking. Change was determined by the magnitude of the percentages of certain responses to questions, and this alone is subjective to say the least. We would hesitate to call the study inadequate, but there are several improvements that could have been made.

However, in spite of its shortcomings, there are some aspects of the study, such as the survey questionnaire itself, which could have been very useful to us in the development of the Planning and Evaluation Workshop study. Unfortunately, the study was not published until after the first mailing for this study, so there is little in the former from which the latter could actually benefit. However, as was already mentioned in the Explanation section of the first chapter, it is our intention to build on this previous study in an effort to improve the validity and reliability of this type of evaluation research as it is conducted in the field.

In summation, there is very little evaluation research of criminal justice training workshops in circulation, and almost none specifically addressing effect evaluation. If the one study examined is indicative of the quality of evaluation of programs in the field of criminal justice (as we suspect it too often is), then considerable methodological improvements are vital to the credibility and utility of the evaluation process in criminal justice.

CHAPTER III

DESIGN AND METHODOLOGY

Sample

The population that was relevant to the evaluation included (1) all 73 Regional Planning Units (or comparable agencies, such as Local Planning Units) within Region V, and (2) the individuals within these agencies who were somehow directly involved with the planning and/or evaluation process.

A complete census was taken of the Regional Planning Units. All were included in either the experimental or the control group. The only sampling, as such, involved the choice of the individual within each unit to represent that unit at the workshop and for testing purposes.

Assignment of agencies to respective experimental and control groups was as follows: the 34 agencies that sent representatives to the workshop composed the experimental group, and the 39 that did not comprised the control group.

The actual selection of the agencies to be represented at the workshop was made before the onset of this study and was, therefore, beyond our control.
It was known that no concerted efforts towards randomization in the selection process were made, since the agencies chosen to attend were selected at the discretion of their particular state planning agencies. However, letters (see Appendix A) were sent to each of the state planning agencies involved in an attempt to determine what selection criteria or sampling techniques were employed. In this way, it was possible to discover biases that might have resulted in the experimental group being nonrepresentative of the whole population (or, more appropriately in this case, the two groups being significantly different).

As with the assignment of agencies to respective groups, the selection of individuals within the agencies to represent them was largely beyond our control. However, for purposes of this study, it did not matter how these people were chosen to attend the workshop. It was sufficient (not to mention necessary) to accept them as the members of the experimental group on which this study was focused.

The people who represented the agencies in the control group were also selected by the agencies themselves. However, we had input into these decisions because it was requested that the person in each agency with the most expertise in planning and evaluation be the one to respond to the questionnaire. It was hoped, in turn, that this would roughly match all of the respondents in terms of competence, thereby making both groups somewhat equivalent, at least at the level of the individuals involved. However, there was no assurance that this would indeed take place.

Data Collection and Measurement

Data collection was made through the use of written questionnaires (Appendix B). Two separate ones were developed; one went to the representatives in both the experimental and control groups and the other went only to those in the experimental group.

The first questionnaire (sent to representatives in both groups) was designed to obtain the bulk of the data. It consisted of approximately 40 items or terms, representing key concepts, strategies and techniques that were presented at the workshop. The determination of these "major points" was made by reviewing tape recordings of the sessions, lecture notes, and prescribed readings. In addition, some of the guest speakers and the coordinators of the workshop were consulted. From these efforts the most important concepts, techniques and strategies were chosen as the items to be included in the questionnaire.

Next to each of the items were two Likert scales designed to measure familiarity with and use of the term. The scales were numerical (1-5) and each formed a continuum ranging from "not familiar with" to "much use." The first scale (pre-test) attempted to measure whether the term was understood and/or used before 1976 (before the workshop), and the second (post-test) sought the same information for 1976 (after the workshop).

Some blank space was also provided for respondents to fill in concepts, strategies or techniques that they considered important but that might not have been included as items elsewhere in the questionnaire. There were scales provided for these also.

There were also a couple of questions that requested certain information to be listed (this was necessary because some information could not be obtained through scales).
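As a rough illustration of how a single respondent's scaled answers can be represented and scored, consider the following sketch. The item names, record layout, and scoring function here are hypothetical and offered only for illustration; the actual data were coded and keypunched for processing on the CDC 6500 rather than handled with code of this kind.

# A minimal sketch, assuming each respondent record carries a pre-workshop
# ("before 1976") and a post-workshop ("1976") Likert value from 1 to 5 for
# every scaled item on the first questionnaire.

SCALE = {
    1: "not familiar with term",
    2: "familiar with, but no use",
    3: "planning to use",
    4: "some use",
    5: "much use",
}

def overall_score(ratings):
    """Sum the 1-5 ratings across all scaled items; with 34 scaled items
    the possible range is 34-170, the range reported later for Table 6."""
    return sum(ratings.values())

# Hypothetical respondent, with only two of the ~40 items shown.
respondent = {
    "group": "attendee",
    "pre":  {"monitoring": 3, "intensive evaluation": 2},
    "post": {"monitoring": 5, "intensive evaluation": 4},
}

print(overall_score(respondent["pre"]), overall_score(respondent["post"]))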
Finally, there was a section of the questionnaire devoted to identifying intervening variables. Since the scales were intended to measure knowledge and/or use of a particular term, and different responses between the scales tended to indicate an increase in knowledge and/or use, it was important to know whether that increase was the result of the workshop or of some other factor(s). Therefore, this section, in essence, asked the respondents to explain any changes in the knowledge and/or use of a term by listing all terms for which there was some change and then describing the source of the change. It is only fair to add, however, that this section was an innovation on our part, and it was not known how reliable it would turn out to be.

As is probably already evident, the pre-workshop and the post-workshop data were obtained together. Ideally, pre-workshop information should have been obtained before the workshop, but for reasons to be mentioned in the Design section it was not. Since the pre-workshop information was actually obtained after the workshop, it would be somewhat inappropriate to call it pre-test data; it really constitutes a makeshift "ex post facto" pre-test. However, for simplicity, this information will hereafter be termed pre-test data.

Whenever a pre-test is given there exists the possibility of a sensitizing effect which may adversely affect internal validity. This possibility increases when the pre-test and post-test are presented together, because the respondent has an opportunity to compare pre-test and post-test responses and systematically bias them (even if unconsciously). In addition, when the pre-test is given "after the fact," a question of reliability arises because the respondent's memory may not be accurate. Although it is not known whether these potential problems actually occurred, the possibility did exist and had to be considered. Unfortunately, there was little that could be done to get around them.

The second questionnaire (sent only to those agencies who sent a representative to the workshop) was short and different in nature from the first. It was also basically unrelated to this particular study because it did not involve effect evaluation, as such. Instead it was designed to assist in conducting process, effort, impact and efficiency evaluations of the same workshop and was included in our mailing for budgetary and expediency purposes. (The reader is reminded that this particular study, an effect evaluation, was merely a subset of a more comprehensive evaluation which was conducted for the workshop. Ultimately, the results of this evaluation were to be combined with those of the others to form an Evaluation Report to be presented to LEAA.)

The questions on this supplemental questionnaire were of assorted construction and were developed as discreetly as possible so that they would not bias any of the responses to the first questionnaire, as might otherwise occur. The appearance was that the two questionnaires were unrelated. Although the results of this supplemental questionnaire did not directly relate to this study, they are succinctly presented in Appendix C because they provide additional information about the workshop itself. They are offered in this study solely for the benefit of the interested reader.

After the questionnaires were constructed, they were pre-tested by two people who attended the workshop but who were not members of the target population.
This pre-test was made in an attempt to assure that the instructions were understandable and that the questionnaires themselves were easy and quick to complete. (It was not expected that filling out the questionnaires would take more than 15 minutes or require much work.) Revisions were then made as needed.

Although we took steps to ensure the reliability of the questionnaires, no real test was made to ascertain how reliable they were. This was not considered necessary because the questionnaires were very similar in most respects to various other types of questionnaires which have been used in the past with success and reliability (with the exception of the one section noted earlier). The only foreseeable threats to the reliability of administering this particular questionnaire were the same as one might expect when conducting any mailed survey of this nature and therefore do not require elaboration. It was expected that any systematic bias in responses which might occur would probably tend to be in the direction of more favorable responses (i.e., higher scores on the Likert scales) because of the respondents' desire to be cooperative.

The questionnaires were mailed to all the agencies, complete with cover letters (Appendix D) and self-addressed, stamped return envelopes. There were two mailings (a follow-up was necessary, and a telephone follow-up was considered but ultimately was not required).

Research Design

The traditional research design for evaluations of this sort generally has been pre-experimental, either the "one-shot case study" or the "one-group, pre-test/post-test" (as indicated in the Review of the Literature). However, in keeping with our commitment to upgrade the quality of such evaluations, more sophisticated research designs were desired.

From a methodological point of view, the ideal research design for this study would have been truly experimental (e.g., a randomly assigned pre-test/post-test control group design29). Realistically, however, a post-test only control group design would have been satisfactory if the experimental and control groups had been randomly selected. Unfortunately, this was not the case. Thus, the research design of preference was a quasi-experimental, non-equivalent control group design.30

Although the latter was somewhat less desirable than the former in terms of controlling against threats to validity and ensuring generalizability, it was acceptable, assuming that the two groups were basically similar in regard to criteria of interest to this study. However, if the pre-test data were to show significant differences between the groups, subsequent comparison would probably prove meaningless.

29. Chabotar and Lad, p. 85.
30. Ibid., p. 83.
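For reference, the designs under discussion can be written out in the notation of Campbell and Stanley (O = an observation or measurement, X = exposure to the treatment, here the workshop; the dashed line indicates intact groups that were not randomly assigned). This is simply a shorthand restatement of the designs already cited, not an addition to them:

One-shot case study:                X   O
One-group pre-test/post-test:       O   X   O
Non-equivalent control group:       O   X   O
                                    -----------
                                    O       O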
If they were to occur the validity of the results would be no better than that which could be obtained through the pre-experimental, one-shot case study design that was 31 In used for the study described in the Review of the Literature. such a case, our confidence in the findings would necessarily be of a lesser degree than for the more powerful non-equivalent control group design. Hypotheses There were four formal test hypotheses around which this study was based. They were as follows: 311616., p. 79. 45 Hypothesis 1: The representatives who attended the workshop have knowledge of more planning and evaluation concepts, strategies and techniques (as indicated by medium scores on related questionnaires) now than bef0re they attended the workshop. Hypothesis 2: The agencies of the representatives who attended the workshop put more planning and evaluation concepts, strategies and techniques to use (as indicated by high scores on related questionnaires) now than before the workshop. Hypothesis 3: The representatives who attended the workshop have knowledge of more planning and evaluation concepts, strategies and techniques (as indicated by medium scores on related questionnaires) than the repre- sentatives who did not attend the workshop. Hypothesis 4: The agencies of the representatives who attended the workshop put more planning and evaluation concepts, strategies and techniques to use (as indicated by high scores on related questionnaires) than those agencies whose representatives did not attend the workshop. Data Analysis The collected data was tabulated, coded, keypunched, and stored in the CDC 6500 computer at Michigan State University which aided in making analytical comparisons. The scores for the scaled items were looked at on an overall, individual and grouped32 basis and mean scores and standard devi- ations were computed for them. T-tests were then made as appropriate to compare the scores (1) between the two sample groups for both the pre-test and the post-test and (2) within each sample group for both the pre-test and the post-test. Frequency distributions and contingency tables were 3ZSelected groupings will be made of certain items relating to a common conceptual area, such as types of planning. 46 also employed as necessary to further facilitate analysis and show trends since the worksh0p. For the three questions on the main questionnaire and those on the second (supplemental) questionnaire that were not scaled, analysis did not require the use of the computer or most of the above-mentioned techniques. For the forced-choice questions, per- centages of the various responses were computed and compared between the experimental and control groups as appr0priate. For the fill-in questions, the different responses were listed, simple frequency distributions of those responses were made and the results were also compared between groups as necessary. In order to pr0perly evaluate the workshop, the data col- lected had to be amenable to the techniques employed to analyze it. Some of the statistical techniques to be used in this study were most appropriate for use on interval type data. However, due to the nature and quantity of the desired information, it was necessary to construct Likert scales in the questionnaire in such a way as to obtain ordinal type data (at best). To use statistical techniques geared toward interval data on ordinal data would be methodologically somewhat inappropriate, but we wanted to use the most powerful sta- tistics possible. 
Summary

In this chapter, we have attempted to show what methodological steps were taken in this study. This summary briefly reviews those steps.

All of the Regional Planning Units in Region V were surveyed. Those that were represented at the workshop were assigned to the experimental group, and those that were not formed the control group. (Note again that the initial selection of agencies and individuals to participate in the workshop was made prior to and independent of this study and was not random.)

Data collection was made via written questionnaires which were developed specifically for the workshop. They were mailed with cover letters, and a follow-up was made. There were two questionnaires. The experimental group received both, the control group only one.

Measurement was primarily made through Likert-type scales relating to key items from the workshop. They were set up so as to provide a "makeshift" pre-test. However, there were also some questions requiring forced-choice and fill-in answers.

Efforts were made to control (or at least discover) extraneous variables that might have accounted for measured changes. Efforts were also made to determine how similar the two sample groups were to each other (since their selection was not random).

Ideally, the research design should have taken the form of a truly experimental pre-test/post-test control group design. More practically, a post-test only control group design would have been sufficient. However, due to some methodological problems, a non-equivalent control group design was adopted out of necessity.

The hypotheses simply stated that the people who attended the workshop would know more about planning and evaluation and that their agencies would use more of this information as a result of the workshop. These people and agencies would, in turn, know and use more planning and evaluation related information than would people and agencies who were not involved in the workshop.

Most of the data analysis involved the use of a computer, and statistical techniques such as t-tests and cross-tabulations were employed. However, simpler methods of comparing groups, such as percentages, contingency tables and frequency distributions, were also included as appropriate.

From the findings, conclusions were to be drawn, the hypotheses were to be accepted or rejected, and recommendations were to be made.

CHAPTER IV

DATA ANALYSIS

Response Rate

The initial response rate for the first mailing was only about 33 percent. Therefore, a second mailing was made, from which an overall response rate of 56 percent was attained. (It was decided that a telephone follow-up probably would not be very successful.) The breakdown of response rate by experimental (attendees) and control (nonattendees) groups was nearly identical, as indicated in Table 5.

TABLE 5.--Response Rate.

                              Attendees    Nonattendees    Total

Number sent questionnaires        34            39           73
Number of respondents             19            22           41
Percentage                      55.8%         56.4%        56.1%
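The percentages in Table 5 are simply the number of respondents divided by the number of questionnaires sent (the tabled figures appear to be truncated, rather than rounded, to one decimal place). A quick check, using the counts taken directly from the table:

sent     = {"attendees": 34, "nonattendees": 39}
returned = {"attendees": 19, "nonattendees": 22}

for group in sent:
    rate = 100 * returned[group] / sent[group]
    print(group, f"{rate:.2f}%")            # 55.88% and 56.41%
total = 100 * sum(returned.values()) / sum(sent.values())
print("total", f"{total:.2f}%")             # 56.16%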
There were two additional questionnaires received, one from each group. However, they were grossly incomplete and improperly filled out. One was the result of a copying error that went unnoticed; the other appeared to be the fault of the respondent for failure to follow directions. For all practical purposes they were useless, so they were regrettably discarded.

A brief caveat is in order at this point. Although a response rate of 56 percent is considered acceptable, the absolute number of respondents was smaller than hoped for. The smaller the sample size, the greater the possibility that the results will not be generalizable to the population as a whole (external validity). Therefore, it would ordinarily be wise to interpret the results with caution. However, since the population itself is relatively small, this will probably not be as crucial an issue, but it still merits consideration.

Representativeness of Experimental Group

In addition to adequate sample size, it is important that the experimental group be representative of the general population in order to ensure external validity. Such representativeness is best obtained by random assignment of the experimental group. However, as mentioned before, the selection of the people who make up the experimental group in this study (the workshop attendees) was not made randomly. Nor were any other uniform attempts made to guarantee representativeness.

Even though there is little that can actually be done to control representativeness in this study, it is still both possible and desirable to measure it. Regardless of how the experimental group was obtained, it is important to know just how representative of the whole this sample is. If it turns out that there are significant differences between it and the general population in the first place, then these differences should receive serious consideration before making any definitive statements based on the findings. If the two groups are extremely different, it is even possible that no valid generalizations can be made at all. In essence, then, attempts should be made to determine whether external validity exists even though no controls were implemented to ensure it.

There is another reason why representativeness of the experimental group is important to this particular study (although it is somewhat related to the first). The research design employed (the quasi-experimental, non-equivalent control group design) requires that the experimental and control groups be basically similar to each other with respect to characteristics of relevance. Since the control group in this study consists of the rest of the population, its similarity to the experimental group can be equated with the representativeness of the experimental group to the whole population. Therefore, knowing how representative the experimental group is will also tell how similar it is to the control group. This, in turn, will show whether the right conditions exist for the research design to be used properly.

For these reasons, several efforts were made to determine how representative the two groups were of each other. These efforts are discussed in the rest of this section.

Although there were no overall uniform criteria for the selection of representatives to the workshop, there still had to be some basis for choosing. Since the SPA's actually selected the RPU's in their states to attend the workshop, and each had its own selection
On the contrary, it was more likely that the selection criteria of the different states would neither be consis- tent with each other nor necessarily be conducive to representative- ness. The effort, then, was really an attempt to measure any systematic biases that may have occurred, which, in turn, might adversely affect representativeness. Only two SPA's responded to the inquiry. One stated that since there were only seven RPU's in the state, all of them were selected. The other indicated that those RPU's were selected which showed interest in the workshop and whose designated representatives were competent to benefit from it. Since only two SPA's responded, it would be difficult to say whether there was systematic bias in the selection process or not. However, it is likely that interest, ability and need were strong considerations. Although these criteria would be extremely relevant to the selection of a target group for the workshop, they would not tend to indicate the representativeness of the experimental group. In fact, they would probably imply qualitative differences between the experimental group and the general population. However, this is only speculation due to the inconclusive results obtained from the inquiry. There are other means of determining similarities or dif- ferences between the groups. One possible method is to compare 53 pre-test data from both groups to demonstrate whether the groups prior to the workshop were similar in the extent of their knowledge and use of the concepts, techniques, and strategies presented at it. Such a comparison would in turn, be made from statistical analysis of the information contained on the scales for the various items. (For better understanding, this format is demonstrated by the fol- lowing example. Also, see Appendix B.) Example: Item Before 1976 1976 Monitoring 1 2 3 4 5 l 2 3 4 5 Intensive evaluation 1 2 3 4 5 l 2 3 4 5 KEY; (1) not familiar with term (2) familiar with, but no use (3) planning to use (4) some use (5) much use Such an examination was made of both the overall mean scores and the mean scores of the individual items as is presented in Tables 6 and 7. The results indicated basic similarities between the groups in terms of scores. There was no statistically determined significant difference between the groups on the overall scores. TABLE 6.--Comparison of Attendees' and Nonattendees' Overall Pre- Test Scores.a Mean S.D. T-Value Attendees 87.1579 14.557 1.59 Nonattendees 95.9545 20.790 aRange 34-170. 54 TABLE 7.--Comparison of Attendees' and Nonattendees' Individual Pre-Test Scores. ___.__—__— fl- __:.,_. Attendees Nonattendees Item T-Value Mean 5.0. Mean 5.0. 
TABLE 7.--Comparison of Attendees' and Nonattendees' Individual Pre-Test Scores.

                                                         Attendees           Nonattendees
Item                                                   Mean     S.D.        Mean     S.D.      T-Value

Monitoring                                            3.8947    .937       4.0455   1.214        .45
Intensive evaluation                                  2.3158    .820       2.5455    .912        .85
Process evaluation                                    2.1053   1.243       2.8182   1.220       1.85*
Effort evaluation                                     2.0000   1.202       2.2273    .973        .66
Impact evaluation                                     2.7895   1.134       3.1364   1.167        .96
Efficiency evaluation                                 1.9474   1.026       2.9545    .950       3.24*
Effect evaluation                                     2.0526   1.026       2.7727   1.066       2.20*
Crime trend analysis                                  3.1579   1.463       4.1364    .990       2.47*
Data needs analysis                                   3.0526   1.258       3.8182   1.220       1.96*
Socioeconomic and demographic data analysis           3.6316    .895       4.0455    .844       1.52
Criminal justice system flow data                     2.6316    .955       3.5000   1.058       2.76*
Calls for service data                                2.6842   1.416       3.0455   1.327        .84
Criminal history data                                 2.6316    .895       2.2273    .685       1.60
Criminal justice agency resource data                 3.5263   1.219       3.9091   1.231       1.00
Offender-based transactional statistics               2.1579   1.068       2.4545    .912        .95
Normative planning                                    1.9474   1.026       2.4545   1.371       1.35
Strategic planning                                    2.8421   1.344       2.9545   1.527        .25
Operation planning                                    3.1053   1.524       3.3182   1.524        .45
Uniform crime report data in frequencies              4.2632   1.012       4.0455   1.214        .67
Uniform crime report data in rates                    4.2632    .994       4.4545    .800       1.23
Ratios of offenses to potential targets               2.7368   1.046       2.8182   1.368        .22
Relationship of no. of crimes to no. of criminals     2.5263    .964       2.5455   1.224        .06
Linear extrapolation                                  2.1579   1.015       2.0455   1.090        .34
Controls against threats to external validity         1.8421    .898       1.7273    .703        .45
Controls against threats to internal validity         1.7368    .806       1.6364    .727        .42
Controls against threats to reliability               1.6842    .820       1.8182   1.006        .47
Delphi technique                                      1.5789    .961       1.9545   1.327       1.05
Scenarios                                             1.4211    .507       2.0000   1.000       2.53*
Simulations                                           1.8947    .567       2.0909    .868        .87
Impact models of social interventions                 1.7368    .653       1.6364    .902        .41
Amoeba model of criminal justice system               1.5263    .772       1.5909    .959        .24
Community assessment approach                         2.7368   1.240       2.4091   1.221        .85
Citizen involvement in the planning process           3.5263   1.020       3.1818   1.296        .95
Feedback to local units as to quality of their work   3.2105   1.084       3.6364   1.049       1.27

*Significant difference at the .05 level of significance for one-tailed probability of separate variance.

For 27 of the 34 individual items there were also no significant differences. However, the control group did score significantly higher on seven items (which might tend to support the contention that the selection of representatives to the workshop was based on need). In addition, there was a detectable pattern to these differences in two specific areas--types of evaluation and types of analysis.
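The significance criterion behind the asterisks in Table 7 can be checked against any t table or statistics package; a minimal sketch, assuming roughly 39 degrees of freedom (the value suggested by the two group sizes before the separate-variance adjustment reduces it somewhat for individual items):

from scipy.stats import t

# One-tailed .05 critical value; with ~39 d.f. this is about 1.68,
# consistent with the starred items in Table 7: every starred t-value
# exceeds 1.8, and no unstarred value exceeds 1.60.
print(t.ppf(0.95, 39))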
In spite of the two areas in which the control group surpassed the experimental group, it was apparent that the two groups were roughly equivalent in their knowledge and use of most of the relevant variables prior to the workshop. Therefore (and with some reservation), it was ascertained that the experimental group could be considered fairly representative of the general population, at least in this regard. (Note also that this finding adds credibility to the quasi-experimental control group design which was discussed earlier.)

The attitudes of the two groups toward the idea of the Planning and Evaluation Workshop are also potentially important in determining representativeness, so they were compared as well. One of the three questions at the end of the primary questionnaire asked whether the agencies would send a representative to such a workshop (costs not being a consideration). The question also asked why. The overwhelming majority of both groups indicated an interest in attending such a workshop, as indicated in Table 8.

TABLE 8.--Send a Representative to a Planning and Evaluation Workshop.

              Attendees        Nonattendees
              #       %        #       %

Yes           16      84       19      86
No             0       0        1       5
No answer      3      16        2       9

Similarly, both groups were in strong agreement on the reasons why they would become involved in such a workshop. The consistent answers were (1) make contacts with other agencies, (2) gain relevant knowledge, and (3) improve skills.

The possibility also exists that there were undetected, extraneous variables which might have influenced the response rate of either group. There may have been some characteristic(s) unique to the people in either group who did respond which might have also affected their responses. Either of these possibilities could have an adverse effect on the representativeness of the sample groups to each other; hence the importance of identifying these potential problems, if they exist. However, as already indicated in Table 5, the response rate was fairly good and was comparable between groups, so there were no foreseeable difficulties in this regard.

However, there is one potential factor which merits special scrutiny: geography. In order to determine whether there were any major geographical differences in the response rates of the groups, these rates were broken down by state. The primary concern here was to determine whether agencies in one part of the region were more likely to cooperate in the survey than were agencies in another part, and if so, why. In employing this technique, it was also possible to tell whether the initial selection of the agencies to be included in the workshop was geographically influenced, thereby affecting the representativeness of the experimental group. The results of this examination indicated that both the initial selection of attendee agencies and the response rates were fairly well dispersed throughout Region V, as Tables 9 and 10 show. No geographic patterns were detected, at least at the state level.

For the most part, this investigation into the representativeness of the experimental group supported the claim that the experimental group was basically representative of the population as a whole and that it was similar to the control group. However, there were a couple of differences that may have bearing on the

TABLE 9.--Workshop Attendance by State.

State          Total RPU's    Number Attending    Percentage Attending

Illinois            20               6                     30
Wisconsin           12               6                     50
Michigan            17               5                     29
Ohio                 7               7                    100
Indiana              8               4                     50
Minnesota            9               6                     67

Total               73              34                     47

TABLE 10.--Questionnaire Response Rate by State. (Columns: total, number responded, and percentage responding for attendees, nonattendees, and combined.)