USING SOCIAL SCIENCE RESEARCH INFORMATION IN ORGANIZATIONS: A CASE STUDY IN CORRECTIONS

By

Rickie Dwaine Lovell

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

School of Criminal Justice

1985

Copyright by Rickie Dwaine Lovell, 1985

ACKNOWLEDGMENTS

    ... An explorer who told lies would bring disaster on the books of the geographer. ... Then, when the moral character of the explorer is shown to be good, an inquiry is ordered into his discovery. ... One requires the explorer to furnish proofs. For example, if the discovery in question is that of a large mountain, one requires that large stones be brought back from it.

    -- Antoine de Saint Exupery, The Little Prince

I wish to express my sincere appreciation and deep respect for Dr. John H. McNamara, who guided me in exploration, assisted me in choosing which stones to bring back, and assisted me in developing as a student and as a human being.

I give special thanks to Dr. Kenneth Christian, to Dr. Harry Perlstadt, and to Dr. Gary Miller for sharing knowledge and experience with me and for their efforts in assisting me in completing this undertaking.

Most importantly, I give thanks for Vivienne, Erin, and Bridget. They have shared their love with me -- and, as the Little Prince would say, matters of greatest consequence are seen only with the heart.

Also, I wish to acknowledge the existence of Billy M. Turner. He has pulled me through when money ran short, when my car quit, when my determination began to diminish, and when I was in need of refreshment. He is quite a delightful human being and a true friend.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

INTRODUCTION
    Science to Guide Us
    Conceptual Confusion
    "Utilization"
    Differing Perspectives
    "Bureaucratization": A Key Issue
    The Corrections Policy Arena
    Problem Statement
    Significance of the Study
    Definitions
    END-NOTES: INTRODUCTION

DEVELOPING A FRAMEWORK: LITERATURE REVIEW
    How Does Research Utilization Fit into an Overall Model of Decisionmaking in Public Agencies?
    A Rational Model
    A "Real-Life Decision"
    Optimizing versus "Satisficing"
    "Organizational Learning"
    "Learning" and the Analytic Paradigm
    "Learning" and "Satisficing"
    Ultra-Conservatism: A "Reactive" Posture
    Importance of the "Organizational Learning" Concept
    Information, Search, Uncertainty, and Authoritativeness
    Further Concerns for Understanding Use
    Implications of the Rational, Analytic and "Satisficing" Paradigms in Visualizing the Utilization Process
    The Ideal Image: Program Evaluation
    Basic and Applied Research
    Particular Frameworks of Factors
    Quality as an Issue
    The "Personal" Factor
    The Political Factor
    Bureaucracy and "Learning"
    Adopting a Framework for Inquiry
    Considerations for This Study
    END-NOTES: DEVELOPING A FRAMEWORK: LITERATURE REVIEW

METHODOLOGY
    Study Design: Single Case Study
    [entry illegible in source]
    Within-Case Design
    General Analysis Plan
    Methods of Analysis
    Limits of the Study
    END-NOTES: METHODOLOGY

ANALYSIS
    The Organization
    Internal Structure
    The Study Participants
    RESEARCH NOTE: "Policy" in the Department
    RESEARCH NOTE: "Research" in the Department
    RESEARCH NOTE: Organizational Climate
    A Basic Question: Is Empirical Information Available?
    Research Information from External Sources
    RESEARCH NOTE: Enacted Information Space
    Consultants and Other Invited Researchers
    In-House Research: Internally-Produced Empirical Information
    Technical Knowledge Present in the Form of Individual Knowledge and Expertise
    A Basic Question: Is Research Used?
    Respondents' Perceptions of Usefulness of Research Information
    Is Research Information Used? If So, In What Ways? What Information Is Used?
    Instrumental Use
    Conceptual Use
    Symbolic or Persuasive Use
    Who Uses Research Information? What Is the Scope of Use?
    Focusing on Factors Which Appear to Be Important in Use/Non-Use in the Department
    Search and Research Information
    Dissemination, Flow of Information, and Research Information
    "Selectivity" and "Bureaucratization" in the Dissemination Process
    RESEARCH NOTE: "Selectivity" and "Bureaucratization"
    [entry illegible in source]
    Respondents' Perceptions of Factors Associated with Use/Non-Use of Research Information
    Respondents' Perceptions of Particular Constraints to Utilization in the Department
    Mandated Evaluation
    "Nothing Works"
    END-NOTES: ANALYSIS

CONCLUSIONS
    Behavioral Learning and Interactive Problem-Solving
    Factored Problem-Solving
    "Problemistic Search" and Information Policies
    "Uncertainty", "Risk Avoidance", and Organizational Interest
    Coda
    END-NOTES: CONCLUSIONS

APPENDIX
LIST OF REFERENCES

LIST OF TABLES

1. Overall Perceptions of Usefulness
2. Perceptions of Usefulness by Respondent Group
3. Respondents' Perceptions of Importance of Selected Factors

LIST OF FIGURES

1. Departmental Organization
INTRODUCTION

    Evaluation research in corrections has been called an 'elusive paradise', because, though it has been promoted and initiated by leading criminologists for over a century, it has not been established securely.

    -- Daniel Glaser1

Science to Guide Us

In 1973 Daniel Glaser proposed the institutionalization, or "routinization", of evaluation research in corrections. His effort coincided with standards proposed by the National Advisory Commission on Criminal Justice Standards and Goals in asserting that information produced through applied social science research can "routinely guide policy and practice in people-changing organizations."2 In 1980 Donald and Michael Gottfredson again provided impetus for the notion that research information, not necessarily limited to evaluation research, could, given the appropriate emphasis, provide a significant basis for decisionmaking in criminal justice agencies.3

Prescriptions such as Glaser's and those noted by the Gottfredsons reflect a national ideology for rationalizing decisionmaking in public organizations. The notion has become pervasive.
According to Burnham, "One of the ambitions of most contemporary social support, organizational, or control systems now is to be able to claim that they are rational - or at least describe themselves as such."4 And as he also notes, "Rationality is a quality which applies, in strict logic, to sequences of action with a view to achieving a desired (though not necessarily permanent) state."5

Robert Rich has observed that, "The adaptation of scientific knowledge to meet the needs of society is a recurring theme in Western thought."6 Indeed, it has been a generation since Robert Lynd issued his challenge to social scientists, "Knowledge for What?"7 Since that time a growing belief in the applicability of social science research information for rationalizing public decisionmaking has led to increased demands for the production and application of empirical findings for policy decisions. As Patrick indicates, "The history of evaluation research since the mid-1960's depicts a growing commitment to, and demand for, formal assessments of public programs by both federal and state officials."8

During the past few years an issue of increasing importance has been the utilization of information, produced through social science research methods, in the policymaking process. The literature on the use of empirical findings in policy formulation and organizational decisionmaking is limited, yet speculation abounds concerning why research information has had little impact in policy matters and in decisionmaking in general. There is a common complaint from members of the research community that scientific research is not adequately considered in decisionmaking in public agencies.
Such a comment has been made by Grosskin, who says that, "Professionals engaged in applied research decry the infrequency with which research findings are considered by decisionmakers in the development of public policy alternatives in administrative decisionmaking."9

The demand for empirical findings to provide a basis for decisionmaking in corrections has surged in recent years. Coffey has characterized correctional organization as generally fragmented and administration as decidedly non-rational, proceeding less from a basis of policy founded on systematic provision of information and more from a basis of policy founded on a heuristic analysis of what appears to be politically expedient.10 As in other policy arenas, a growing importance has been attached to the potential of research information and the role for research information in rationalizing decisionmaking in corrections.

Conceptual Confusion

The notion that the products of social science research, or research based on social science methods, can be translated into policy action has become attractive in a knowledge-oriented society.
In fact, "Some practitioners of applied social science appear to believe that, ideally, social problem-solving is a scientific activity, hence in the real world professional social inquiry is the best method of approaching problems, and is so far superior to all others as to warrant disregard of them except as they appear as poor substitutes", according to Lindblom and Cohen.11 Lindblom and Cohen use the term "professional social inquiry" ("P.S.I.") to denote analytic investigative activity undertaken by academics, professional researchers, professional practitioners, or whoever uses the methods of social science inquiry, to provide information upon which to base decisions in public organizations.12

A normative bias has developed concerning the "ought-to-be" of problem-solving in public organizations - decisionmakers should make rational changes in programs: improving program operations; terminating, modifying, or changing program operations; and/or setting rational standards for action.13 Apparently, all this is to be accomplished by undertaking inquiry in a highly rationalized, analytic decisionmaking process. For many, decisionmaking in public organizations "comes largely to be identified with a rational, scientific, intellectual investigatory process".14

Those making extravagant endorsements of analytic problem-solving have tended to conceptually enlarge the role of empirical findings in decisionmaking by simply assuming the categorical superiority of empirical evidence over other forms of information. The very notions inherent to social science research (stemming from experimental perspectives) have tended to contribute to this bias. The notion that valid generalizations will be available on a broad set of social issues, or will be produced as problem-specific information, "demands" the ranking of such information as priority consideration among all alternative sources of information, at least for the advocates.
However, a perceived lack of impact of empirical findings on policy and action in public agencies has led to much discussion and, more recently, to some research on utilization. Much of the inquiry has been dominated by impressionistic logic, and there have actually been relatively few studies. In criminal justice, particularly in the area of corrections, there in fact is scant research on the use of empirical findings and little consideration of the role for research in policymaking and decisionmaking.15 Asking why research findings do not have a greater place in policy development and later action has proved to require complex answers, and there has been much conceptual confusion surrounding the major issues.16

There is some disagreement over expectations that research information may provide the primary basis for policymaking and decisionmaking in public agencies. In addition, it seems that neither policymakers nor researchers are satisfied with the current "state of the art" in translating research results into usable products for problem-solving.17 Patton concurs in this, noting that "Most of the recent literature is unanimous in announcing the general failure of [research] to affect decisionmaking in a significant way."18 However, there are differing explanations concerning the "dilemma of non-use", and, over the past few years, considerable ambiguity has surrounded what "utilization" means and just what constitutes "use".

"Utilization"

Indeed, utilization has become a key issue, particularly in regard to evaluation research and other forms of applied research where the methods of social science are used to provide information specifically intended as an input to decisionmaking. Although the ensuing discussion focuses primarily on information intended to influence policymaking and decisionmaking directly, the terms "social science research", "empirical findings", and "research results", as well as similar terms, are not used in a restrictive manner.
Rather, these terms are meant to encompass research products developed through any impetus (i.e., basic and applied research, explorations and demonstrations, or research specific to one particular setting, which may be disseminated or sought out) which have the potential for entering the decisionmaking process of a public organization. The "dilemma of non-use" refers to the perceived lack of impact of social science research products of whatever sort as inputs into the policymaking/decisionmaking processes of public agencies.

Empirical studies on "utilization" are in fact a relatively recent development, with most of the work following a seminal study conducted by Caplan and associates.19 As previously noted, there have been relatively few studies, and the field has been characterized by the production of an array of eclectic findings. Initially, scholars who did provide a definition operationalized "use" or "utilization" as "observable changes in programs based on empirical studies."20 This conceptualization has since been termed "instrumental" use of empirical results. The expectation regarding "instrumental" use is that research information would enter into an analytic, rational decisionmaking process, and the research findings would be incorporated directly into the outcome. According to Carol Weiss, a "problem-solving model" in which research information drives the decision is the most common conception of research utilization:

    Whatever the nature of the empirical evidence that social science research supplies, the expectation is that it clarifies the situation and reduces uncertainty, and therefore it influences the decision that policy makers make.21

For endorsers of the "ought-to-be" concerning the use of research information in a "rational" decision, there is an implicit assumption that research information can provide the "best" evidence for the decision and should be incorporated directly into the outcome.
For those advocating a high degree of rationality in organizational decisionmaking, the processes of providing empirical information and coming to use this evidence in a decision are inherently interwoven: the strategies of research inquiry represent the epitome of analytic problem-solving; and the correlate, rational decision, is driven by the input of empirical evidence.

Scholars, researchers, and policymakers/decisionmakers have recognized the shortcomings of assumptions that research information, as the "best" evidence, will be accepted and used directly. Decisionmakers are not passive; and information, especially research information, seldom drives a decision. For example, Carol Weiss states:

    It probably takes an extraordinary concatenation of circumstances for research to influence policy directly: ... a set of policy actors who have responsibility and jurisdiction for making the decision, an issue whose resolution depends at least to some extent on information, identification of the requisite informational need, research that provides the information in terms that match the circumstances within which choices will be made, research findings that are clear-cut, unambiguous, firmly supported, and powerful, that reach the decisionmakers at the time when they are wrestling with the issues, that are comprehensible and understood, and do not run counter to strong political interests.22

The awareness that empirical findings (of whatever sort) seldom foster observable changes (which are somehow directly attributable to or isomorphic with specific research findings) in programs or policies has led some students of utilization to broaden their conceptualization of "use". The perspective toward conceptualizing use (and the guidance role of research information) has been broadened, from the restrictive view of the application of research findings in specific decisions to an emphasis on an enlightenment function for research information.
Many now find it acceptable to define "utilization" in terms of "consideration": no program changes, based on the information, need be shown in order for the information to be categorized as "utilized". In terms of investigating the decisionmaking process and the role of research information in that process, the change represents a considerable shift. The perspective shifts from an emphasis on research information "demanding" use to an emphasis on research information as a competing input among other, diverse sources of input. The decision process is then seen as a dynamic process in which individuals with diverse interests and unequal influence use research information as a tool for "enlightenment" - "in sorting out assumptions, clarifying logic, or arriving at a better understanding of the range of activities and constraints involved in a particular decision", or set of decisions.23 Such use is generally termed "conceptual" use.

Several works have centered attention on the conceptual use of research information and on other "political" uses, as well. It is now generally recognized that "use" may include "symbolic" functions, in which empirical findings may be utilized to "substantiate a previously held position, marshal support, or cast doubt on propositions which are at odds with those held by the user, among other such possibilities."24

Differing Perspectives

Even with broad definitions of "use", little has changed regarding the overall impression and pervasive belief that research is underutilized in decisionmaking and policymaking in public agencies. It is possible that the problems of investigating conceptual or symbolic use lead to a conclusion that research is underutilized. It could be that persons interested in the issue of utilization are misreading the situation.
Additionally, it could be possible that the potential role for research information communicated through the normative bias of those making extravagant endorsements is descriptively inaccurate because of a misunderstanding of the realities associated with decisionmaking in complex, bureaucratic organizations.

Several explanations have been advanced to account for the modest levels of use observed in most studies. Robert Rich observes, "The two-cultures hypothesis is at the core of most studies of utilization and research results. It is the central idea used to explain levels of utilization and non-utilization of scientific knowledge."25 Relying on a belief that differences in the culture of science and the culture of government (following C.P. Snow26) lead to less effective utilization, some scholars have focused on the knowledge production and transfer processes to locate the major variables accounting for use/non-use, and "proponents of the hypothesis assume that bridges need to be constructed to link the worlds of policymakers and researchers and analysts."27

Reasons advanced for the alleged lack of impact of research (from the two-cultures perspective) have been summarized in large lists of factors which include such items as poor methodological quality of the research; lack of relevance of research information for decisionmaking; as well as inadequate, ineffective, and untimely communication of results.28 One of the results of this approach has been to overemphasize technical quality of research as a focus. Additionally, the two-cultures perspective has had the effect of directing attention away from issues associated with institutional and bureaucratic characteristics of complex organizations.

"Bureaucratization": A Key Issue

A few scholars of utilization have turned attention to the nature of organizational decisionmaking and the processes by which research information use may occur in complex organizations.
Rich has centered attention on "bureaucratization" as a set of variables referring to issues of internal agency control and ownership of information to locate critical factors concerning the images and processes of use.29 According to Rich, one could assume that the characteristics of knowledge, its quality or perhaps its conclusiveness, are a necessary but not a sufficient condition in accounting for the application and utilization of research products.30 Rich's focus for understanding use views research information as a competing input in a sometimes more-, sometimes less-coherent decisionmaking process, where political, value-based, administrative, and economic considerations may form incursions upon the "internal logic" of a decision.

Again according to Rich, administrative, organizational, and structural factors may better distinguish between utilization and non-utilization on the one hand, and instrumental and conceptual use on the other hand, rather than factors associated with technical characteristics of "useful" information.31 Following Rich, one may question whether utilization and dissemination processes within a complex organization may in fact be considered separately at all. Rich has argued that a focus on "utility" only (the extent to which specific products or sets of information are seen as potentially useful, a primary two-cultures focus) neglects the following points: the portions of specific information used; the form in which research information passes through decisionmaking channels; who receives the information; and places in the decisionmaking hierarchy to which the information is sent. These factors Rich relates to "selective utilization".32

Rich has concluded that the probability of research information being used is less a consequence of the appropriateness of the information to the substantive policy area than it is of the value of the information in enhancing bureaucratic interests.
There has been little systematic investigation of utilization which has incorporated this focus, although a common image is that the use of research information can only be understood as a part of an overall political process.33

Lindblom and Cohen have attempted to come to an understanding of the potential for research information through developing a more accurate conception of the problem-solving process in complex organizations.34 By their definition, "problem-solving" may be thought of as a process which results in an outcome that by some standard will result in an improvement over the existing situation.35 However, in their terms, problem "solution" may be a misnomer, since the outcome may not represent a long-term solution of any problem; instead, it may simply represent a compromise that is acceptable at any given point in time.36 As Rich describes the situation, "'Problem-grappling' seems [better] to reflect the process that individuals and political institutions go through in reaching a particular outcome."37

The modern trend among organizational theorists concerned with decisionmaking in a bureaucratic framework follows an application and further extension of a neo-Weberian model, leading to a paradigm for analysis focusing on the "politics of bureaucracy":

    The decisions and actions of [organizations] are political resultants: resultants in the sense that what happens is not chosen as a solution to a problem but rather results from compromise, conflict and confusion of officials with diverse interests and unequal influence; political in the sense that the activity from which decisions and actions emerge is best characterized as bargaining along regularized channels among individual members of the [organization].38

Thus, one is led to consider alternative conceptions to the normative view held by those making extravagant endorsements for the potential instrumental use of research information: that decisionmaking can be, or is, a highly rational,
analytic process amenable to an intellectual and scientific mode of problem resolution and "organizational learning".

Further, as Rich notes, "Beyond being responsible for coordination and control, managers are held responsible for the success of the organization. Progress toward success [however measured] may be classified as the 'problem-solving process'."39 In attempting to understand the use of research in this process, it is important to understand the orientation of decisionmakers in complex organizations:

    In an environment in which competition for scarce resources is intense, officials do not want to make a mistake. When officials or managers first receive new information they ask: Why am I receiving this information? What does the sender want from me? If these questions are answered to the managers' satisfaction, they are likely to inquire, 'If I use this information, can I be embarrassed?' Embarrassment may consist of: (a) presentation of more up-to-date information by an official from another organization at a meeting; (b) presentation of information which contradicts the program most favored by top management; (c) presentation of information which others are already familiar with; or (d) presentation of information which puts another organization into a more favorable light than the one represented by the manager.
In attempting to move from "here to a better there", policymakers "always have a choice between trying to find 'solutions' by arranging to have a given problem frontally attacked by persons who will think it through to a solution, or by arranging to set in motion interaction that will, with the help of analysis adapted to the interaction, eventuate in a solution or a preferred outcome."41

According to Lindblom and Cohen, decisionmakers more often settle on one or another form of problem-solving through habit, tradition, custom, or routine rather than through explicit analysis of the problem of choice.42 Even in attempting rational analysis, those who seek rational understanding are conditioned by interactive problem-solving - in selecting how to search, what to spend, assessing costs, how to apply the knowledge or get it used. The problem itself may be an ancillary issue to the bureaucratic routine of, "Yes, we need to problem-solve in a proper manner."43

As an input to policy, Lindblom and Cohen suggest that professionally-produced social science information competes with a "mountain of ordinary information which it cannot replace but only reshape here and there."44 They are critical of those who fail to recognize the nature of interactive problem-solving in complex organizations, and they are also critical of those to whom it does not occur that other appropriate sources of information (such as the expertise of the administrator or other staff) may exist and that this information may receive priority attention in decisionmaking. Further, Lindblom and Cohen criticize those who make extravagant endorsements of analytic problem-solving and who simply assume that research can constitute the sole basis on which to make specific decisions.45 Instead, they suggest that research information may have a role that does not turn on its "conclusiveness" or its "professional production".
The issue may not be one of "fact-proof" as much as "evidence-argument"; and, certainly, the "authoritativeness" of information is not solely resultant from its "conclusiveness".46 One must focus on research information from the standpoint that its authoritativeness is dependent on factors other than its professional production and the normative implication that it be used directly. Conclusiveness may well be viewed as a necessary but not sufficient condition for authoritativeness. "Authoritativeness" leads one to consider use within the organizational context.

Indeed, political, organizational, and personal factors within the context of decision may be more important for understanding utilization than conclusiveness. "Use" is clearly shaped by the decisionmaking framework, including the dissemination process, which may be "bureaucratized", as well as by individuals' perceptions of authoritativeness. It seems important to view research information as being in competition with other forms of information in terms of being authoritative.

Clearly, one may argue over what the important determinants of utilization are. There has been no adequate theoretical base from which to proceed in investigating utilization.47 Rich has noted that "in the case of policymaking, one is faced with the problem of trying to conceptualize processes and styles of problem-solving - both of which have diverse roots which have not been traced or operationalized, and more importantly, put in the form of testable propositions."48

At this stage of knowledge concerning utilization, the research must be characterized as exploratory, and there must be much extrapolation from the data.49 Effort must be advanced toward the development of new guidelines for understanding utilization and toward the development of interpretational models.

The Corrections Policy Arena

The corrections policy arena presents a unique challenge to the study of utilization of research information.
It has long been assumed that research information has had little impact on decisionmaking in criminal justice, especially in the area of corrections. Perhaps from the "ought-to-be" perspective expecting instrumental use, such impressions are accurate. Perhaps utilization does occur in ways that are difficult to detect, such as influencing the ideas and notions held by decisionmakers or in serving to enhance bureaucratic interests. Adams has noted that there is no clear basis for asserting that evaluation research (in particular) is not used in policy formulation in corrections.50 That statement is applicable to assertions concerning research use in general in corrections.

Policymakers in the criminal justice field find themselves in the situation of being in a visible and competitive policy arena. To be "successful", policymakers/decisionmakers in corrections must learn to identify, tap into, and control influence networks, and must create and use mechanisms for control of particular decision and action channels. Indeed, as Grosskin notes, "The need to reduce uncertainty and to increase options in the use of discretion in order to minimize the risks in selecting wrong or less effective policy or program alternatives may impact on which sources, types, and amounts of information decisionmakers and policymakers may opt to choose to use."51

Correctional administrators are "expected", if any credibility be given to recent prescriptive "standards", to be committed to the consideration of research information in the formulation of policy and in making program decisions for their organizations.
At the same time, these administrators occupy the unenviable position of being the focal points for the scrutiny and consequent review of decisionmaking and policy implementation by a number of outside sources, notably including legislative bodies, state executives, and the judiciary.

In recent years, corrections agencies have been faced with an increasing amount of litigation in federal courts. Since the mid-1960's, federal judges have called into question the practices of many correctional systems and have established themselves as a significant interest group in the administration of corrections systems. Concurrently, an ideology has developed which questions the ability of corrections to meet societal purposes for "rehabilitation" of inmates. The notion that effectiveness has not been demonstrated has taken hold and the idea that "nothing works" has resulted.52 These are powerful factors in the environment of corrections organizations, and, when added to other normally powerful factors, may have an effect on the use of research information for decisionmaking in corrections.

Decisionmakers in corrections are also faced with operating - as well as developing, reviewing, and critiquing policy - in the context of complex organizational processes. While information is an essential resource for decisionmaking, the processes for converting that information into policy and action may be based as much or more on interpersonal and organizational factors than on the information itself.
Adams has argued that corrections administrators are willing to accept untested but reasonable operating concepts - in effect, that they are receptive to innovation.53 He asks, however, "If administrators will accept untested but reasonable operating concepts, why will they not accept tested and reasonable concepts presented to them by researchers?"54 One must ask whether research is actually being ignored or whether those advocating research as a basis for decisions believe that research deserves more credibility as a basis for decision than it receives, or than they perceive it now receives.

Perhaps correctional administrators do make use of empirical findings, and perhaps this occurs in certain circumscribed ways. It seems apparent that impressionistic logic, simply prescribing the categorical superiority of analytic problem-solving and then making extravagant claims for empirical information and its role in policy development and program decision, does little to further an understanding of "use" and the potential for empirical information in the formulation of policy in corrections.

Also, little is known about what information gets used, by whom, for what purposes, and with what impact in corrections. There is scant research on utilization in criminal justice policymaking and decisionmaking in general and even less on utilization in corrections in particular.55 There is small basis, beyond impressionistic reasoning, upon which to make statements concerning whether or not social science research information has a "pay-off" in terms of decisionmaking in corrections. One reasonable way to begin to investigate the viability of recommendations to routinize research as a basis for decisionmaking in corrections is to examine the use of research information and the routes by which it occurs in corrections.
So little has been done on utilization in this area, and so beset with conceptual confusion is the field of research on use, that one is led to suggest a much-needed exploratory study. It appears that factors associated with the organizational context in which use is expected to occur may be more important in understanding utilization than those associated with the production of research information. It seems most fruitful to undertake an exploratory study in a corrections setting where latitude is made available to investigate use as it occurs or does not occur in the context of ongoing decisionmaking - a setting where organizational and contextual variables may be examined and insights gained may be applied to refining previous findings in the literature on use and to developing guidelines for understanding use in a corrections setting.

Problem Statement

Thus, given the lack of empirical knowledge on the use of research information as a basis for policymaking and decisionmaking in corrections, it appeared appropriate that an exploratory study be conducted which aimed at providing insight into the extent to which utilization occurs and the factors which may most affect use in a corrections setting. Specifically, the issue under study was to ascertain types and levels of use among corrections policy administrators and to identify factors affecting use/non-use of research information which may lead to the development of a better understanding of utilization in a corrections setting.

Significance of the Study

The research conducted was exploratory in nature and was intended to provide a basis for examining prescriptions concerning the possible "routinization" of research information as a guide to policy formulation and decisionmaking in corrections.
The study attempted to examine the viability of "institutionalizing" research as a basis for decisionmaking and was intended to develop insights concerning the processes which most affect use in a corrections setting. Data from the study represent an attempt to provide a set of information concerning utilization in a corrections agency and, hopefully, represent an advance in the cumulative effort to understand use/non-use of research information. Data from the study were also used to address issues essential to the development of a conceptual framework for examining utilization; and, again, it is hoped that these efforts contribute to cumulative advance in the study of use of empirical findings in decisionmaking in complex organizations.

Definitions

The following terms are adopted or adapted from definitions developed by Robert Rich in a recent study of utilization.56 These provide a clear statement of the meaning of some terms used in this study.

Information designates any materials (books, articles, statistics, and the existing literature) and data (surveys, evaluation studies, and other research-produced data) collected by an individual or group.

Expertise refers to knowledge possessed by an individual in a particular area. Expertise also denotes the technical and methodological skills possessed by an individual.

Policy-relevant refers to that information which the decisionmaker decides is crucial to consider or analyze before reaching a decision. No objective criteria strictly differentiate relevant information from irrelevant information.

Utilization refers to the process by which information enters policymaking or decisionmaking. Information is either sought out by a decisionmaker, or in various ways comes to the attention of a decisionmaker, is read and understood, and is used by the decisionmaker conceptually, symbolically, and/or directly incorporated into a decision.
Impact is reserved for information that yields a documentable influence on a particular decision and is directly incorporated into the outcome.

One other definition must be included at this point. This definition results from consideration of the literature on communication and organizational decisionmaking.

Dissemination refers to the processes associated with acquisition, assimilation, and distribution of information for use in particular decisions or for retention and potential, yet unspecified, uses within the organization.

END-NOTES: INTRODUCTION

1. Daniel Glaser, Routinizing Evaluation: Getting Feedback on the Effectiveness of Crime and Delinquency Programs, Rockville, Md.: NIMH, 1973, pp. 2-3.

2. Ibid.; and also see National Advisory Commission on Criminal Justice Standards and Goals, Corrections, Washington, D.C.: U.S. Government Printing Office, 1973, pp. 496-530.

3. Michael Gottfredson and Don Gottfredson, Decisionmaking in Criminal Justice: Toward the Rational Exercise of Discretion, Cambridge, Ma.: Ballinger, 1980.

4. R. William Burnham, "Modern Decision Theory and Corrections," in Don Gottfredson, ed., Decisionmaking in the Criminal Justice System, Rockville, Md.: NIMH, 1972, p. 103.

5. Ibid.

6. Robert F. Rich, Social Science Information and Public Policy Making, San Francisco, Ca.: Jossey-Bass Publishers, 1981, p. 2.

7. Robert S. Lynd, Knowledge for What?, Princeton, N.J.: Princeton University Press, 1939.

8. Mary S. Patrick, "Utilizing Program Evaluation Products: A Rational Choice Approach," paper presented at the annual meeting of the Midwest Political Science Association, Chicago, Il., April 1979, p. 2.

9. Richard B. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy: Constructing Alternative Interpretational Models of the Evaluation Utilization Process," unpublished paper, April 1981, p. 1.

10. Alan Coffey, Correctional Administration: The Management of Probation, Institutions, and Parole, Englewood Cliffs, N.J.: Prentice-Hall, 1975.
11. Charles E. Lindblom and David K. Cohen, Usable Knowledge, New Haven, Ct.: Yale University Press, 1979, p. 10.

12. Ibid.

13. Patrick, "Utilizing Program Evaluation Products," p. 2.

14. Lindblom and Cohen, Usable Knowledge, p. 11.

15. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy".

16. Carol Weiss, "The Many Meanings of Research Utilization," Public Administration Review, Sep/Oct 1979, pp. 426-431.

17. Robert F. Rich, Translating Evaluation into Policy, Beverly Hills, Ca.: SAGE Publications, 1979, p. 7.

18. Michael Q. Patton, Utilization-Focused Evaluation, Beverly Hills, Ca.: SAGE Publications, 1978, p. 2.

19. Nathan Caplan and others, The Use of Social Science Knowledge at the National Level, Ann Arbor, Mi.: Institute for Social Research, 1975.

20. Patrick, "Utilizing Program Evaluation Products," p. 2.

21. Weiss, "The Many Meanings of Research Utilization," p. 427.

22. Ibid., p. 428.

23. Robert F. Rich, cited in Richard B. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy," p. 9.

24. K.D. Knorr, cited in Carol Weiss, Using Social Science Research in Public Policymaking, Lexington, Ma.: D.C. Heath, 1977, p. 172.

25. Rich, Social Science Information and Public Policy Making, p. 12.

26. C.P. Snow, Science and Government, Harvard University Godkin Lectures, New York, N.Y.: New American Library, 1962.

27. Rich, Social Science Information and Public Policy Making, p. 12.

28. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy," p. 1.

29. Kenneth Prewitt, "Foreword," in Robert F. Rich, Social Science Information and Public Policy Making, p. xi.

30. Rich, Social Science Information and Public Policy Making, p. 159.

31. Ibid., pp. 144-150.

32. Ibid.

33. Carol Weiss, "Evaluation Research in the Political Context," in E. Struening and M. Guttentag, eds., Handbook of Evaluation Research, Beverly Hills, Ca.: SAGE Publications, 1975, pp. 13-26.

34. Lindblom and Cohen, Usable Knowledge.

35. Rich, Translating Evaluation into Policy, p. 9.
36. Ibid.

37. Ibid.

38. Graham Allison, Essence of Decision, Boston, Ma.: Little, Brown, and Co., 1971, p. 162.

39. Rich, Translating Evaluation into Policy, p. 10.

40. Ibid.

41. Lindblom and Cohen, Usable Knowledge, p. 25.

42. Ibid., p. 27.

43. Ibid.

44. Ibid.

45. Ibid.

46. Ibid., p. 17.

47. Rich, Social Science Information and Public Policy Making, p. 11.

48. Ibid., p. 46.

49. Ibid., pp. 111-113.

50. Stuart Adams, Evaluative Research in Corrections: A Practical Guide, Washington, D.C.: U.S. Department of Justice, 1975, pp. 34-35.

51. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy," pp. 2-3.

52. Robert Martinson, "What Works?: Questions and Answers About Prison Reform," in D.M. Peterson and C.W. Thomas, eds., Corrections: Problems and Prospects, 2nd Edition, Englewood Cliffs, N.J.: Prentice-Hall, 1980.

53. Adams, Evaluative Research in Corrections, p. 35.

54. Ibid.

55. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy".

56. Rich, Social Science Information and Public Policy Making, pp. 35-37.

DEVELOPING A FRAMEWORK: LITERATURE REVIEW

Knowledge use of any kind does not occur in a vacuum. In policy-related situations even under the most ideal conditions, how knowledge is used and what influence it may have are influenced by the context of the issues under consideration, the values and perspectives of policymakers, and the relevant political and administrative hierarchical networks in which they operate.

-- Nathan Caplan1

The purpose of this study was to investigate the use of research information as an input in policymaking/decisionmaking in corrections. The study was also intended to provide insights aimed at developing guidelines for understanding the role and potential for research information in the formulation and implementation of policy in a corrections setting.

There is an increasing demand for research information to enter the policymaking process in public agencies.
The expectations surrounding the use of empirical information for policy purposes reflect the normative implication that empirical findings should be "authoritative" in the sense of being inputs which receive priority attention, if not direct incorporation into policy decisions. There has been little systematic investigation of the extent to which research information enters the policy process in criminal justice agencies. It is easy to find examples of disenchantment, but it is difficult to determine whether research findings are actually being ignored.2

Findings from recent studies in other policy areas indicate that the major use of social science research information is not the application of specific findings in discrete decisions (instrumental use).3 Rather, public policymakers tend to use research indirectly, as a source of ideas, general information, and orientations to the world.4 There is also an indication that research findings compete in a selective process of utilization that may be more intricately linked to the processes of organizational communication, and the processes and politics of bureaucracy, than to the normative expectation inherent in the prevailing conception of a predominant role for research information in a rational decisionmaking process.5

There has been an array of eclectic findings and thought on use of research results. A number of studies have proceeded from varying perspectives in an attempt to identify factors associated with use/non-use, but there have been few attempts to integrate this work into viable interpretational models. The remainder of this section focuses on an examination of the literature in an attempt to identify an analytic framework for organizing an exploratory study of use of social science research information in a corrections setting.

How Does Research Utilization Fit into an Overall Model of Decisionmaking in Public Agencies?
Perhaps the fundamental question to be addressed in looking at research utilization involves the extent to which one may expect empirical findings to provide authoritative information which is used in an analytic problem-solving process in large bureaucratic organizations. In the preceding section it was pointed out that the predominant expectations for an analytic "problem-solving model" to provide an accurate image of decisionmaking and utilization processes in complex organizations, while normatively desirable, may lead to extravagant claims for the potential use of research products. Much of the logic associated with, and prescriptions for, the incorporation of research information in such a process have been conditioned by this view. Failure to incorporate learning from the organizational decisionmaking literature and the great body of thought on bureaucratic policymaking may have contributed to the sense of frustration many scholars and researchers have experienced in attempting to analyze "use".

Although little attention has been directed to differences in the research process and the policymaking process, and although some attempt has been made to incorporate contextual variables and integrate factors associated with the "organization" in some studies, there has been little effort to understand or explicate the potential for research use by beginning with propositions rooted in organizational theory. A central concern in analyzing "use" should involve an attempt to explicate the relationships and linkages among concepts of "rational decisionmaking", "organizational learning", "uncertainty", and "bureaucratization", among others, and to relate these concepts to expectations concerning the potential for research information to enter the policymaking process in complex organizations.
Rich has discussed problems associated with bureaucratization and use, as well as the active orientation of decisionmakers to "succeed" in bureaucratic organizations, relating these in terms of consequences for "use".6 In general, however, there have been few attempts to analyze the literature on organizations and decisionmaking with the intent of linking basic propositions regarding possible behaviors in organizational decisionmaking and the potential for research information use. It seems essential that one develop at least a basic expectation concerning the potential for utilization (greater or lesser) in bureaucratic organizations before attempting to assess and explain levels and types of use in complex organizations. One must return to the notion of the "dilemma of non-use" and consider this question: "whose dilemma?".

A Rational Model

For many, decisionmaking in complex organizations comes largely to be identified with a rational, scientific, intellectual investigatory process.7 The branch of organizational theory that takes as its focus the decisionmaking process in organizations affords the richest source of insights into the adequacy of such a perspective.8 For two decades, the seminal figure in this area has been Herbert Simon.
Simon's work is motivated by the attempt to understand the basic features of organizational structure and function as they derive from the characteristics of human problem-solving and rational choice.9 As Allison notes, "Most theories of individual and organizational choice employ a concept of 'comprehensive rationality', according to which individuals and organizations choose the best alternative, taking account of their probabilities and utilities."10 He also indicates that, "'Rationality' used in this context refers to consistent, value-maximizing choice within specified constraints."11

The "problem-solving model" as a conceptualization of research information use is intricately bound to a classical model of rational decisionmaking in complex organizations. The classical-rational model involves a set of decisionmakers who prioritize the organization's interests foremost as their own. There is a clear conceptualization of and consensus on a set of organizational goals, and effort is bent to the methodical attainment of a definite and practical end through an increasingly precise calculation of means. In other words, there would be one best alternative for achieving a particular end. In order to find the one "best" alternative, decisionmakers would take a "rational" problem-solving orientation to: 1) identify a problem and its cause, 2) clarify and rank goals, 3) collect all relevant information regarding how the goals are to be realized, 4) predict the consequences of alternatives, 5) choose among alternatives - evaluate according to some criteria - and 6) select the most appropriate means.12 The expectation would be that social science research information would provide "best evidence" in this decision process and would (when available) be incorporated directly into the outcome.
For endorsers of this "ought-to-be", the processes of coming to use this evidence in a policy decision, and the processes of providing the best evidence through research, are inherently interwoven - the strategies of research inquiry represent the epitome of analytic problem-solving; and the correlate, rational decision, is driven by the input of empirical findings.

A "Real-Life Decision"

The classical-rational model has its value as an ideal type. However, experience, research, and developments in organizational theory indicate that this conceptualization is not reflective of operational reality. Simon suggests that consideration of organizational decisionmaking must be centered on what may be termed a "real-life decision": taking into account that 1) information on alternatives is not given but must be gathered; 2) there are limits to information processing capabilities; 3) perceptions must be attuned to the relevant subset of information; and 4) the complexity of issues relevant to a decision creates problems of understanding.13

According to Allison, "Simon's work finds five characteristic deviations from comprehensive rationality that are displayed by the simplifications of human problem solvers."14 These are discussed by Allison:

1. Factored problems - problems are so complex that only a limited number of aspects of each problem can be attended to at a time. Organizations factor complex problems into a number of roughly independent parts which are parceled out to various organizational units. The structure of an organization reflects the problems that its subunits factor. Roles consist of specified subsets of premises that guide actions in a particular subunit.

2. Satisficing - maximization or optimization is replaced by satisficing. In choosing, human beings do not consider all alternatives and pick the action with the best consequences. Instead they find a course of action that is "good enough" - that satisfies.

3. Search - comprehensive rationality requires consideration of all alternatives, thus making the problem of search trivial. Where satisficing is the rule - stopping with the first alternative that is good enough - the order in which alternatives are turned up is critical. Organizations generate alternatives by a relatively stable, sequential search process. As a result the menu is limited.

4. Uncertainty avoidance - comprehensively rational agents deal with alternate consequences of action by estimating probabilities of possible outcomes. People in organizations are quite reluctant to base actions on estimates of an uncertain future.

5. Repertoires - repertoires of action programs are developed by organizations and individuals. These constitute the range of effective choice in recurring situations.15

In terms of developing a perspective for understanding research use, it seems important to distinguish whether one is assuming organizational outcomes to be the result of a rational process or the result of rational individuals grappling with a problem in a process of choice which is ongoing and conditioned by a complex set of constraints. Simon has developed a notion of "bounded rationality" and a description of the process of choice which is characterized by his well-known notion of "satisficing".16 Decisionmakers operate within an organizational context, and organization members behave in some way to facilitate the systematic provision of information on the basis of some explicit or implicit criteria. In a "real-life decision", there may be some attempt by decisionmakers to be "coherent" - that is, to introduce elements of an analytic problem-solving process to some degree - but there may be limits to organizational and individual capacity to process information, and one must keep in mind that information must be timely and cogent, and is not without cost.
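The contrast between comprehensive search and satisficing search can be sketched abstractly. The following is an illustrative sketch only, not part of the dissertation's method; the policy alternatives, payoff scores, and aspiration level are invented for the example.

```python
# Illustrative sketch of the two search strategies discussed above.
# All alternatives, scores, and the aspiration level are hypothetical.

def optimize(alternatives, utility):
    """Comprehensive rationality: evaluate every alternative, choose the best."""
    return max(alternatives, key=utility)

def satisfice(alternatives, utility, aspiration):
    """Bounded rationality: stop at the first alternative that is 'good enough'.
    Because search halts early, the order in which alternatives are
    generated is critical, and the 'menu' actually examined is limited."""
    for option in alternatives:
        if utility(option) >= aspiration:
            return option
    return None  # no alternative met the aspiration level

# Hypothetical policy alternatives with invented payoff scores.
payoffs = {"expand program A": 4, "pilot program B": 7,
           "status quo": 5, "reform C": 9}
options = list(payoffs)

best = optimize(options, lambda o: payoffs[o])                        # examines all four
good_enough = satisfice(options, lambda o: payoffs[o], aspiration=5)  # stops at the second
```

Note that reordering the alternatives can change which option the satisficing search returns, while the optimizing search is order-independent - which is exactly why Allison treats the sequence in which alternatives are "turned up" as critical.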
Allison's image (see page 12) exemplifies a current trend in thought on policymaking/decisionmaking in public agencies.17 Individuals bargain along regularized action channels, bringing to bear advantages assembled through authorized power and the manipulation of various influence networks. These individuals are seen as rationally seeking to maximize their own perceived best interests through bargaining for a preferred outcome. It is essential that one remember that organizational interests may be prioritized foremost by these individuals as perceived self-interest, though such perceptions may not represent complete consensus on "goals" or values among decisionmakers.

The normative bias underlying most prescriptions concerning the entry of research information into policy decisionmaking seems to be founded on the expectation of finding or creating a highly rationalized, analytic decisionmaking process in complex organizations. Whether or not one may expect research information to occupy a prominent role in policymaking seems, at a minimum, to depend on the model one assumes to be the prevalent image of decisionmaking in complex, bureaucratic organizations.

Optimizing versus "Satisficing"

The classical-rational model finds its most recent expression in the depiction of "optimizing" as a strategy in a rigorous, analytic decision process. "Satisficing" describes a set of possible strategies which may or may not contain analytic elements complementing other decisionmaking schema, but it is distinguished from "optimizing" primarily in the extent of reliance on detailed outcome assessment and the absence (perhaps in degree) of a comprehensive search and exhaustive evaluation of possible alternatives.
Lindblom and Cohen have suggested that research information competes with a "mountain of ordinary knowledge"; and they suggest one could expect empirical findings, at best, to do little more than reshape this mountain in some ways.18 They further suggest that one may expect no greater potential for research information utilization unless decisionmaking behaviors in complex organizations change - from the present reality they describe as "interactive problem-solving" to a set of behaviors more closely associated with a highly rationalized, analytic process approaching "optimizing".19

The different images of "optimizing" and "satisficing" are important in that they evoke distinctly different expectations for the potential of research information as an input into the policymaking process. Adding in the political bargaining paradigm serves to further complicate expectations, yet adds validity to an argument that the image of a highly rational, analytic process is "descriptively inaccurate". Consideration of the "optimizing" and "satisficing" models reveals that adherence to one or the other as a primary decisionmaking strategy results in a substantially different degree of attention to information, especially research information. Expectations for research utilization may differ greatly depending on which model represents the predominant method of decisionmaking in a given organization. One must point out that the dominant visualizations of the research utilization process coincide with a paradigmatic preference for "optimizing", or rational, analytic decisionmaking.

One problem in attempting to investigate research utilization arises from a necessity for relating the primary models of decisionmaking to the process of research utilization.
One reasonable way to attempt to show the relationship is to introduce a concept of "organizational learning" as an avenue for visualizing the predominant modes of knowledge acquisition and use in decisionmaking in complex organizations.

"Organizational Learning"

Steinbruner, in a discussion of the "cybernetic theory" of decision, relates models of decisionmaking to abstract concepts of "learning" in complex organizations.20 "Organizational learning" may prove to be a key concept in organizing a discussion of rational policymaking and the potential as well as the probable role for research information in the decisionmaking process.

The notion of "organizational learning" refers to the capacity within a complex organization to produce adjustments (in structure, action repertoires, routines, and processes) based on accumulating knowledge concerning the organization and its relationship to its present and projected environments - a process Katz and Kahn refer to as an "adaptive-coping cycle".21

Some difficulty may arise in specifying the intended use of the term "organizational learning". While one may readily be able to relate "learning" to the individual, it may be more difficult to visualize a collective process of "learning" and its application to decisionmaking in complex organizations. As Steinbruner indicates, "Information is processed and decisions are made ultimately by individuals; the determination of values resides ultimately with the individual."22 However, organizational decisions are made by persons in positions which give them access to decisionmaking. Decisionmakers operate within a collective process where at least some parameters are imposed through the processes of organizing.
In analyzing decisionmaking in complex organizations, "organizational learning" could be considered to be manifest in processes and structural arrangements which facilitate the use of information to allow decisionmakers to make inferences and draw conclusions concerning the state of the organization relative to the environment. Problem recognition, definitions of desired goal/outcome states, assessments of various probabilities, provision of feedback, and the processes for assimilating information and applying knowledge to obtain a decision all figure into the "adaptive-coping cycle". Organizations change, or at least must make some provision to meet changing environmental conditions.

The analytic paradigm and the "satisficing" paradigm present contrasting views of the predominant mechanisms to be employed in processing knowledge essential to both standard and non-standard problems. The potential for research and research utilization in the collective "learning" process varies greatly depending on the predominant decision strategy. In understanding the research utilization process it is important that one explore the characteristic modes of "learning" inherent to each of the decision models.

"Learning" and the Analytic Paradigm

Steinbruner identifies the central characteristic of the analytic paradigm to be the construction, by the analytic decisionmaker, "of careful, explicit, disaggregated calculations of the possible results of his actions."23 The decisionmaker constructs a working model of those forces in the environment of the organization, both internal and external, which act to control the environment and channelize action. Steinbruner further states that,

As new information becomes available over time, it should be integrated into the working model and the critical causal assumptions of the model should be adjusted whenever the weight of evidence requires it.
That is, the assumptions of alternative outcome calculations and sensitivity to pertinent information require a causal learning process in which new information is integrated into explicit causal inferences.24

Steinbruner further states that, "A given process of decision is analytic if upon examination one can find evidence that there was at least limited value integration [among decisionmakers], that alternative outcomes were analyzed and evaluated, and that new information regarding central variables of the problem did produce plausibly appropriate subjective judgments."25 Theoretically, this would take place in a process of causal modeling in which individual decisionmakers collectively seek to revise or update models of the organization and its operations vis-a-vis the environment. Steinbruner states, "In following the process through a series of decision points, it can be found analytic if one can observe a causal learning process: that is, an explicit set of calculations which evolve in such a way that higher, more general conceptions of decision objectives come to be included (...), as well as critical environmental interactions which were previously excluded."26

Analytic modeling forms the basis of an optimizing scheme for decisionmaking. In the resulting paradigm of decision within the organization, "optimality" refers to, as Burnham notes, "obtaining the decision which, on theoretical grounds from the information existing at the time, has the highest probability of producing the desired outcome."27 Burnham further states, "'Optimality' is like 'rationality' in being most appropriately ascribed to a whole series of decisions over time, and an optimal decision system is that which produces the theoretically best decisions overall."28 "Optimizing" entails some form of consistency or agreement across decisionmakers, as well as an information-focused, analytic decision system centered on causal learning.
Key characteristics of such a system involve a continuous search for relevant, high-quality information to be used in constantly updating the causal models. In its ideal form, such a decision system approaches Wildavsky's characterization of the "self-evaluating organization":

For the self-evaluating organization all knowledge must be contingent. Improvement is always possible, change for the better is always in view though not necessarily yet attained. It is the organization par excellence that seeks knowledge. The ways in which it seeks to obtain knowledge, therefore, uniquely define its character.29

"Learning", problem-solving, and policymaking in an "optimizing" organization would come to be identified with an intellectual, scientific investigatory process. The non-use of high-quality, as in empirically-produced, information would present a dilemma of significant proportion. Within such a system, strategies are aimed at 1) analysis which identifies boundary conditions and ranges of possible action, and 2) careful calculation of payoff or outcome probabilities. The search for knowledge is comprehensive, or nearly so; all information on relevant parameters is collected for a given decision or set of decisions; and calculations are made, as precisely as possible, to reveal with a high degree of certainty the best possible alternative(s). The system is focused on outcome calculation - the reduction of uncertainty, or risk, through accurate appraisal. The role ascribed to empirically-produced information would be prominent, especially in view of the notion that the predominant mode of "learning", causal modeling, would at least approximate assumptions rooted in the methods of social science.
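As a purely illustrative aside, the "optimizing" logic described above can be reduced to a brief sketch in present-day code: every alternative is evaluated against an explicit model of outcome probabilities and payoffs, and the alternative with the highest expected payoff is selected. The alternatives, probabilities, and payoffs below are hypothetical inventions, not drawn from the literature reviewed here.

```python
# Illustrative sketch of the analytic, "optimizing" decision logic.
# All names and numbers are hypothetical.

def optimize(alternatives, outcomes, prob, payoff):
    """Pick the alternative with the highest expected payoff,
    computed over an explicit model of all possible outcomes."""
    def expected_payoff(a):
        return sum(prob(a, o) * payoff(o) for o in outcomes)
    return max(alternatives, key=expected_payoff)

# A toy "causal model": two program alternatives, three outcome states.
outcomes = ["recidivism_down", "no_change", "recidivism_up"]
model = {
    ("expand_program", "recidivism_down"): 0.6,
    ("expand_program", "no_change"): 0.3,
    ("expand_program", "recidivism_up"): 0.1,
    ("status_quo", "recidivism_down"): 0.2,
    ("status_quo", "no_change"): 0.6,
    ("status_quo", "recidivism_up"): 0.2,
}
payoffs = {"recidivism_down": 1.0, "no_change": 0.0, "recidivism_up": -1.0}

best = optimize(["expand_program", "status_quo"],
                outcomes,
                prob=lambda a, o: model[(a, o)],
                payoff=lambda o: payoffs[o])
print(best)  # prints "expand_program" (expected payoff 0.5 versus 0.0)
```

Note that the sketch presumes exactly what the analytic paradigm presumes: that all alternatives, all outcome states, and all probabilities can be enumerated and quantified in advance.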
"Learning" and "Satisficing"

In the highly rational, analytic optimizing decision model, one would expect all information encountered by members of the organization, and/or all information produced for organizational use, to be evaluated in terms of its value in validating or adjusting causal inferences regarding the state of the organization and its relationship to the environment. "Risk" is minimized, in general, by checking construct validity and is minimized in particular decisions through precise calculations of the probabilities of various outcomes.

"Satisficing" is based on the assumptions that: 1) organizations are physically unable to possess full information, generate all alternatives, and calculate all consequences in terms of all values; and 2) individual decisionmakers' abilities to attend to all relevant input and all problems simultaneously are limited. In addition, those incorporating the notion of "satisficing" into decision models have added the notion that the behaviors of decisionmakers often do not approximate the objective, value-free, preference-free actions associated with arrival at analytic solutions. Rather, decisions are made through the interaction of organization members with diverse interests and unequal influence.30 Whether one views the "decision" as a "political resultant" or as the outcome of a compromise among members of a coalition seeking to impose aspiration-level constraints on other members of the organization, the implications for the entry of research information into the policy process are similar. In decision models based on "satisficing", it is recognized that precise calculation of outcome probabilities is seldom, if ever, possible. Although outcome estimation is not precluded, "uncertainty" concerning the environment and outcomes becomes a critical factor.
"Learning" within the organization becomes a matter not of adjusting causal models and inferences, but of developing repertoires and routines for action which are likely to be relatively "safe" in terms of overall consequences. "Uncertainty" becomes more critical as the environment becomes more turbulent, or as decisionmakers perceive they can achieve less control over aspects of the environment or are less able to read the consequences of actions in relation to an uncertain future. The learning process which would characterize the satisficing model of decision manifests itself through changes in behavior rather than changes in outcome calculation or model adjustment. Steinbruner notes that, "The cycle of adjustment in [the] learning pattern tends to be slow relative to causal learning, and instead of being a consistent process it occurs only sporadically - when [an] established action sequence is inappropriate enough to result in substantial disruption."31 Steinbruner follows this by noting that, "The major focus is on processes which remove or avoid uncertainty, thus reducing the burdens of processing information, and which divide problems into segments, thus avoiding conflict within the organization."32

An "informational premise" is crucial to understanding the potential role for research information in a "real-life decision". Allison observes that, "March and Simon [have developed] the principle of sequential search and spell out its implications for a theory of choice."33 Cyert and March expand on the notion of search by positing "problemistic search":

Search follows simple-minded rules that direct the searcher first to the neighborhood of problem symptoms, then to the neighborhood of the current alternative. Search is biased by the special training and experience of the various parts of the organization, the interaction of hopes and expectations, and the communication distortions reflecting unresolved conflict.34
Allison observes the following:

Cyert and March view the organization as a coalition of participants with disparate demands, changing focuses of attention, and limited ability to attend to all problems simultaneously. Bargaining among potential coalition members produces a series of de facto agreements that impose constraints upon the organization. The list of these more or less independent constraints, imperfectly rationalized in terms of more general purposes, constitute an organization's goals.35

Cyert and March pose three other concepts which relate to the problem of choice. These are also presented by Allison:

1) quasi-resolution of conflict: independent subunits of the organization handle pieces of the organization's separated problem in relative independence, with the prevailing coalition imposing aspiration-level constraints. Inconsistency resulting from this 'local rationality' is absorbed by 'organizational slack'. Conflicts are resolved by sequential attention to goals.

2) uncertainty avoidance: uncertainty is a critical factor of the environment in which organizations exist. Organizations seek to avoid uncertainty. The first rule is: solve pressing problems rather than developing long-run strategies. The requirement that events in the distant future be anticipated is avoided by using rules that emphasize short-run feedback. The second rule is: negotiate with the environment. The requirement that future reactions of other parts of the environment be anticipated is avoided by imposing plans, standard operating procedures, industry traditions, and uncertainty absorbing contracts.

3) organizational learning: organizational behavior is fairly stable. Organizations are, however, dynamic institutions. They change adaptively as the result of experience.
Over time, organizational learning produces changes in goals, attention rules, and search procedures.36

In the "satisficing"-based model of decision a premium is placed on incremental, "safe" adaptation. "Causal learning" is replaced by behavior-oriented adaptations based on experience. As Wildavsky points out, most organizations do not learn that well through such an orientation, but the process is less threatening.37 Expanding this model, one can see that environmental complexity is handled through internal differentiation and proliferation of hierarchy. An internal complexity arises in which complex problems become fragmented into a number of discrete problems, each addressed by different elements of the organization and at different levels.38 The comprehensive analysis required by the analytic paradigm seldom occurs, in that top management focuses in sequential order on the decision issues raised by separate subunits and seldom integrates across subunits.39

Problemistic search describes the set of actions undertaken in focusing on the problem-at-hand, usually with little attention to long-term outcome assessments. A limited amount of information processing occurs at the policy level, and the order in which the information appears is critical. Sequential attention at the top management level depends on "focused attention" at the subunit levels - each subunit monitors selective feedback channels. One expects uncertainty to be controlled, in the perception of upper-level decisionmakers seeking to minimize risk, through reliance on trusted and familiar information and sources, as well as through efforts to validate information through trusted channels.

Response repertoires play a large part in the incremental adaptation process characteristic of "behavioral learning". The generation of new alternatives is limited by the existence of response repertoires, i.e. routines, standard operating procedures, habit, tradition, or custom.
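By way of contrast with the optimizing sketch, the "satisficing" process just described, a sequential, problemistic search that stops at the first alternative meeting the aspiration level and is biased toward marginal variations on current routines, can also be given a purely illustrative rendering. The repertoire, scores, and aspiration level below are hypothetical.

```python
# Illustrative sketch of "satisficing" with problemistic search,
# as summarized from Simon and Cyert and March above.
# All names and values are hypothetical.

def satisfice(current, neighbors, evaluate, aspiration):
    """Return the first alternative judged 'good enough'.

    neighbors(a) yields marginal variations on alternative a,
    so search is biased toward what the organization already does.
    """
    frontier = [current]
    seen = set()
    while frontier:
        candidate = frontier.pop(0)
        if candidate in seen:
            continue
        seen.add(candidate)
        if evaluate(candidate) >= aspiration:
            return candidate          # stop: acceptable, not optimal
        frontier.extend(neighbors(candidate))
    return current                    # fall back on the routine in place

# Toy repertoire: routines ordered by "distance" from current practice.
scores = {"routine": 0.3, "routine_v2": 0.55, "new_program": 0.9}
adjacency = {"routine": ["routine_v2"], "routine_v2": ["new_program"],
             "new_program": []}

choice = satisfice("routine", lambda a: adjacency[a],
                   evaluate=lambda a: scores[a], aspiration=0.5)
print(choice)  # prints "routine_v2", although "new_program" scores higher
```

The point of the sketch is the stopping rule: the search terminates at a marginal adjustment that satisfies the aspiration level, and the superior but more distant alternative is never examined.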
Marginal adjustment is one of the primary decision foci (either explicit or implicit). When an action seems not to be working appropriately, the first inclination is to apply an alternative already in existence, with such slight modification that risk may be perceived to be minimal. One also expects a premium to be placed on information (as Lindblom and Cohen would point out, "ordinary information"40) developed within the organization and based in large measure on the experience and expertise of individuals within the organization. Again, this would reflect a desire by top management to minimize risk through the use of "safe" information under their control.

Lindblom and Cohen contrast analytic problem-solving with "interactive problem-solving" and a notion of "learning" based on interaction.41 In terms of reaching an adequate perspective from which to view the potential role of research information in organizational problem-solving, they provide a clear conceptualization:

Policymakers always have a choice between trying to find 'solutions' by arranging to have a given problem frontally attacked by persons who will think it through to a solution, or by arranging to set in motion interaction that will, with the help of analysis adapted to the interaction, eventuate in a solution or preferred outcome.

A key point is that interactive problem-solving often produces an outcome and implementation, perhaps not even taking the form of a decision on a particular problem. A second key observation is that settling on whether to pursue an analytic or an interactive solution is often accomplished through habit, custom, or routines, rather than through an explicit analysis of the problem of choice.42 "Satisficing" does not preclude the introduction of elements of analytic problem-solving. However, one may expect analytic processes to be neither predominant, nor oriented to causal learning as in the analytic paradigm.
Instead, analysis is limited and is adapted to the processes of interaction. In such a system the prevalent normative bias viewing empirical information as "best evidence" does not hold. Some issues may be amenable to analysis, and on some issues, not even necessarily those subjected to analysis, empirical findings may prove to be authoritative.

The literature on organizational theory and decisionmaking in complex organizations overwhelmingly indicates that some variant of "satisficing" is the predominant mode of decisionmaking in most organizations. Further, "behavioral learning" is the characteristic mode of "learning" in a "satisficing" decision system. In beginning to come to an understanding of the "dilemma of non-use" and in attempting to arrive at a more appropriate conceptual framework for examining the use process, one must attend to a set of propositions related to "organizational learning". At a minimum the notions of "uncertainty" and "risk avoidance", "problemistic search", and "factored problem-solving" must be taken into account in any effort to understand use/non-use.

Ultra-Conservatism: A "Reactive" Posture

"Satisficing" may be considered the description of the mechanism of choice subtending a number of closely related strategies. The concept does include a notion of outcome-relatedness, at least for particular problems. Uncertainty is handled through marginal deviation from previously tried alternatives without extensive outcome calculation. It is possible to identify decision systems based to a very small degree on information processing and outcome assessment. Organizations may adapt "non-purposively". That is, adaptation may take place based primarily on a reactive process.
Such systems may evolve when the environment is so turbulent and organizational survival (or individual survival within the organization) is so critical as to allow no significant changes to be perceived as acceptable, or when decisionmakers are so unable to estimate consequences as to perceive any significant action to be too "risky". Such systems also develop over time through habit or routine. Downs would term leaders or managers who pursue strategies primarily aimed at insuring job security to be "conservers".43 If management consists of enough "conservers", a "traditional inertia" develops and is reinforced by a reliance on routine and experience within the organization.

In this ultra-conservative system, information processing gradually gives way to "reaction" as a routine for doing business and "learning" in the organization comes to be based almost entirely on experience. "Crisis management" and "disjointed incrementalism" are terms which would appear to appropriately describe such a problem-solving process. Decisions are made ad hoc as reactions to threats or pressures perceived to be significant, with little attention to long-term strategies or planning. The organization adapts "non-purposively" in that the reactions amount to a servo-mechanistic approach based on monitoring a few "critical variables". "Crisis management" as described above represents a distinction in degree within an overall concept of "satisficing". However, it represents the extreme position - the antithesis of the comprehensively rational organization.

Importance of the "Organizational Learning" Concept

The importance of differentiating decision models in terms of "learning" lies in the potential for explicating differing expectations for the role of research information (and other forms of information) in the decisionmaking process, and in identifying a set of related concepts which may assist in understanding the use or non-use of information.
It is apparent that in the idealistic or rational decisionmaking models there should be no "dilemma of non-use", since empirical information would be sought and all information would be considered according to its value in advancing causal explanation. This orientation is substantially different from the "problemistic", behavioral orientation described by "satisficing". Yet, although the organizational and decisionmaking literature clearly announces the descriptive inaccuracy of the rational model, most investigations of utilization are organized to understand use/non-use based on the assumptions underlying the rational, analytic orientation.

What are the important differences in investigating use proceeding with an understanding of the satisficing assumptions? The immediate difference emerges in the necessity of focusing on the organization and factors which are associated with a substantially different "learning" process within the organization, including the active orientations of decisionmakers. Information, especially research information, is not necessarily the driving force of decisions in complex organizations. Propositions such as "uncertainty", "risk avoidance", "problemistic search", and "factored problem-solving", when considered within the overall context of bureaucratic decisionmaking, form a largely different basis for expectations of information use in general and use of research information in particular. A conceptual framework which attends to these propositions allows for a more realistic basis for assessment of use and for understanding the role to be played by research information in decisionmaking than does an approach which, based on the assumptions of the rational, analytic paradigm, asks "Why is there so little 'use'?"

Information, Search, Uncertainty, and Authoritativeness

There is no requirement that decisionmakers prefer analytic problem-solving, optimizing, or rational decisionmaking.
It is clear that the normative preference to move toward the ideal image of a rational decision process is promoted by those who develop prescriptions for improvements in making public policy. It is apparent that information of any sort becomes a policy input in competition with inputs from various sources. In attempting to develop a set of expectations concerning the potential role for research information, one must start by attempting to explicate the relationships among "problemistic search", "uncertainty", "organizational learning", and "internal complexity" - the concepts addressed by Cyert and March.44 These must be related to a concept of "information" which is connected in some way to a notion of "authoritativeness" within a bureaucratic decisionmaking framework.

Criminal justice organizations share a common feature with other public bureaucracies - the "satisficing" decision pattern. Michael and Don Gottfredson claim that, "Decisions in criminal justice (...) usually are not guided by explicit decision policies. Often the participants are unable to verbalize the basis for selection of alternatives. Adequate information for the decisions is usually unavailable."45 However, information is a necessity for the rational decision, for any decision. The term "information", to be meaningful, must be distinguished from a notion of "available data". In this discussion, the term "information" does not refer to mere data, no matter how carefully collected or reliable, but instead 1) to those data that reduce uncertainty in the decision under consideration, or 2) in the broader sense, to those data that are relevant or valuable in terms of advancing "learning" in the organization.46 With the term thus defined, having "information" implies some process for perception, assimilation, and selection of that knowledge which has a potential application in the reduction of uncertainty concerning immediate, impending, or possible decisions.
"Information" and "uncertainty" have a direct relationship: the more uncertainty present in a given situation, the more "information value" knowledge or data should have when it appears.47 Information can be viewed, then, as the use of knowledge to reduce uncertainty.48 The "authoritativeness" of information is dependent on its potential for reducing uncertainty. In fact, "authoritativeness" may not be so much a matter of "fact - proof" as of "evidence - argument".49 Many of those advocating or conducting research to provide the basis of decision simply assume that research findings will be "authoritative" because of careful, professional production - because the data is "conclusive". Such a notion may prove to be unrealistic, especially where conflicting "information" is available, or perhaps where probabilities are so uncertain as to induce decisionmakers to base decisions on "safe" or "trusted" information from "safe" or "trusted" sources. In a "real-life decision", determination of "authoritativeness" may be based on experience of the participants, expertise within the organization, and on the validation of the information based on that experience and expertise.

"Organizational learning" may be a key concept in understanding use/non-use of research information. Considering this notion, one is led to direct attention to structures and relationships, both explicit and implicit, which either facilitate or inhibit the dissemination and use of information within an organization. By introducing "uncertainty" into the analysis of the policymaking procedure, "information" becomes a critical factor: there is risk involved in making a policy decision and "information", especially research products, may either reduce or increase this risk in the perceptions of policymakers. One must attempt to link dissemination processes and utilization, incorporating in some way concepts of "information", "uncertainty", and "authoritativeness".
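The definition of "information" as that which reduces uncertainty can be given a quantitative, purely illustrative rendering using Shannon entropy, a formalism not employed in this study; the probabilities below are hypothetical.

```python
# Illustrative sketch: "information value" as uncertainty reduction,
# measured with Shannon entropy. Probabilities are hypothetical.

import math

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A decisionmaker's uncertainty about an outcome before any evidence:
prior = {"program_works": 0.5, "program_fails": 0.5}

# After a (hypothetical) evaluation report shifts the odds:
posterior = {"program_works": 0.9, "program_fails": 0.1}

# "Information value" of the report = uncertainty removed.
value = entropy(prior) - entropy(posterior)
print(round(value, 3))  # prints 0.531 (bits of uncertainty removed)
```

On this rendering, a report that merely confirms what decisionmakers already firmly believe carries little "information value", however carefully it was produced, which is consistent with the text's distinction between "information" and "available data".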
Rich has observed that "selective utilization" may characterize the dissemination-utilization process and relates this notion to such concerns.50

Further Concerns for Understanding Use

This discussion centers on "policymaking", "the decision aspect of that level of leadership which involves the alteration, origination, or elimination of organizational structure".51 An organizational policy is an abstraction or generalization about organizational behavior, at that level, and has structural implications for the organization.52 A policy or program decision may be viewed as a discrete problem at a particular point in time, but may actually involve a series of decisions, a set of many actors, and complex interactions over time. Carol Weiss notes that, "Many policies are not made at a single point in time; they seem to happen as the result of gradual accretions, the build-up of small choices, the choosing of small options, and the gradual narrowing of available alternatives."53 Weiss further adds that, "Much policy doesn't (sic) seem to be made by a set of identifiable decisionmakers or by logical-rational procedures that could even take research into account."54

"Organizational learning" centers attention on the ongoing process of managing and decisionmaking and provides an avenue for differentiating the analytic and "satisficing" paradigms in such a way as to lead to a series of concepts relevant to understanding use/non-use. There is a need to explore utilization as a phenomenon which may not be separated from a selective dissemination, use/non-use process which is dynamic and perhaps is conditioned by concerns related to "real-life decisionmaking".
Implications of the Rational, Analytic and "Satisficing" Paradigms in Visualizing the Utilization Process

One way to begin to place the preceding discussion into perspective is to attempt to develop a basic visualization of the elements characterizing the research utilization process based on the rational, analytic decision model and based on the "satisficing" decision model. One must return to consideration of a fundamental question, "How does research utilization fit into an overall model of decisionmaking in a complex organization?"

Rational, analytic decisionmaking is characterized by a decision logic which requires a "causal learning" process. In policymaking/decisionmaking "causal learning" refers to a decision structure and process which is research-oriented and information-sensitive and requires causal modeling as the basis of rational-comprehensive analysis. The critical aspect of operation in a rational, analytic decision system is the institutionalization of causal analysis as the mode of "learning" and, thus, the determination of adjustments which are required to move efficiently and effectively toward a specified outcome state. "Causal learning" involves the careful creation of a model, or models, utilizing explicit calculation of future outcomes, together with the depiction of the causal forces controlling the environment in which action must occur, as the basis for selecting among alternative courses of action. The model(s) are based on "objective" information and should be updated "whenever the weight of evidence requires it."55

The current social science literature abounds with suggested designs for rational, analytic decision systems. Mowitz provides a rather concrete idea of the characteristics of such a system (this is presented with slight modification).56 One would envision an explicit design to:

-- Develop explicit statements regarding the substantive values of the system.
-- Convert the values into observable and quantifiable physical/behavioral states acceptable as satisfying the values. These states become the performance objectives of the system.

-- Identify work and resources that will produce outputs that have some known capacity to bring about or to maintain the performance objectives.

-- Periodically review the relationship between values and objectives and establish boundaries and priorities for decisionmaking that reflect [these values].

-- Establish [a] decision cycle that forces a review of past experience and projects the likelihood of a future success through the review of alternatives.

-- Base the scanning for alternatives on available knowledge reflecting current science and technology by institutionalizing the requirement of analysis as a condition for entering the decision stream.

-- Establish a reporting system that includes information about the external effects or impacts of the organization; about the outputs of work; for monitoring results; and for providing a historical base for future decisions against which management can be judged.

-- Finally, institutionalize the decision process and assume that any set of organizational arrangements is functional as long as the decision requirements are being met, that is, outputs are being produced that satisfy the performance objectives of the system with an acceptable degree of efficiency. Organizational adjustments should be justified in terms of enhancing efficiency and effectiveness (...).

Prescriptive decision structures in general lay out the requisite aspects of a rational, analytic process requiring some operationalization of "causal learning". These generally assume a management system, structure, and policies or rules which facilitate the realization of prescriptive aims. As pointed out above, any set of organizational arrangements may be functional, as long as the decision requirements are being met.
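Taken together, the design elements above amount to an institutionalized feedback loop: monitor results, adjust the causal assumptions, and re-select among alternatives. A purely illustrative sketch follows; the alternatives, performance figures, and update rule are hypothetical inventions and are not Mowitz's.

```python
# Illustrative sketch: the prescriptive decision cycle as a control loop.
# All names and numbers are hypothetical.

def decision_cycle(estimates, observe, alternatives, periods, rate=0.5):
    """Each period: select analytically on current causal assumptions,
    review reported feedback, then adjust the assumptions."""
    history = []
    for t in range(periods):
        chosen = max(alternatives, key=lambda a: estimates[a])
        outcome = observe(chosen, t)   # reporting system: monitor results
        # revise the causal assumption for the chosen alternative
        estimates[chosen] += rate * (outcome - estimates[chosen])
        history.append(chosen)         # historical base for future review
    return history

# Toy environment: true performance is unknown to the decisionmaker.
true_perf = {"program_a": 0.2, "program_b": 0.8}
log = decision_cycle({"program_a": 0.5, "program_b": 0.4},
                     observe=lambda a, t: true_perf[a],
                     alternatives=["program_a", "program_b"],
                     periods=4)
print(log)  # program_a is tried once, then abandoned for program_b
```

The sketch shows the "self-evaluating" property Mowitz's design calls for: an initially mistaken causal assumption is corrected by the reporting system rather than persisting as routine.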
The neglected aspect of most prescriptions is that decision rules, individual perceptions and behaviors, and structure and process must be oriented to the normative ideal for decision requirements to be met.

What would the research utilization process look like if an organization were arranged to realize the requisites for rational, analytic decisionmaking? One must visualize a set of "ought-to-be" arrangements and can hope only to provide a bare outline of an ideal research utilization process. It must initially be pointed out that a decision structure, as Mowitz states, "provides the framework for examining the results of decisions as well as the consideration of decision alternatives."58 Mowitz further notes, "However, a decision structure is not an aggregation of incumbents' perceptions of what they are doing or ought to do."59 There must be a decision logic which "imposes a single decision discipline" upon all those operating within the system.60 In a rational, analytic decision system the decision logic would express the preference for rational, comprehensive analysis. The decision logic would require a research-oriented, information-sensitive outlook, and the structure of the decision system would be oriented to the acquisition, assimilation, and use of empirical or "objective" information in a causal modeling process. The decision logic would call for attention to "objective information", preferably empirical findings which are used in reducing uncertainty by validating, or suggesting adjustments to, causal models. "Institutionalizing" the decision process means that members of the organization must internalize the decision logic. One would expect a research utilization process in meeting the above conditions that conforms to the following bare outline. (Items from the preceding discussion are incorporated with those of the author to develop the following image.)

I.
Basic Assumptions:

-- rational, comprehensive analysis is preferred.

-- empirical information provides "best evidence" and is the preferred source of information.

-- causal modeling, systemwide, and quantification of decision factors where possible, provide the fundamental basis for incorporating objective information into an analysis scheme which minimizes the effect of individual and/or subunit biases, preferences, etc.

II. Basic Features:

-- explicit decision rules. These require and describe analysis as the mechanism of choice and provide a foundation for causal modeling as the central feature of a continuous process of "self-evaluation".

-- explicit information policies. These emphasize acquisition and/or production of empirical information and its utilization as the primary basis for decision. The information policies provide guidance concerning the perception, acquisition, selection, and dissemination of appropriate information. These direct attention to and provide guidance concerning a continuous scan for information, as well as problem-specific search and relevant information.

-- structural arrangements. Some form of centralized information system would be in evidence. This would address needs for data and information acquisition, assessment, storage, retrieval, dissemination, and appropriate presentation. Specific sub-elements would be concerned with the provision of research information for decision purposes. All subunits would maintain staff who, through some centrally coordinated arrangement, continually develop data appropriate to empirical analysis and relevant decision requirements.

One would expect the research utilization process in the normative or ideal conceptualization to be the central dimension of an institutionalized learning experience. Donald and Michael Gottfredson conceptualize the outcome as, "(...) a policy development, implementation, examination, and revision cycle (...)
in which a process of repeated examination and revision is a central feature."61 The Gottfredsons further state that this would result in "an evolutionary process (...) that is a requisite to more rational policy development."62 The Gottfredsons envision the achievement of this outcome "through a collaboration of research workers and those responsible for and actively engaged in the decisions."63 The rational, analytic decision paradigm and the resultant conceptualization of research utilization are highly idealistic. The Gottfredsons provide the following assessment:

We are well aware that to argue in favor of a central role for facts in a world of values will be seen as short-sighted by some, naive by others - a dangerous reversion to an inglorious and thoroughly discredited earlier era or an embarrassingly optimistic faith in the potential for change.64

They explain their belief in a strategy to approach the ideal as based on an observation that "the alternative position rests on the implicit supposition that progress will be made when presumptions are regarded as facts, when untested hunches are acted upon with vigor, when goals are unspecified, and when trendy alternatives are accepted for their novelty."65 One would not wish to argue with the Gottfredsons' approach. However, the primary issue for students of utilization is not so simple as deciding whether a rational, analytic decision system, based on empirical knowledge and a process of causal learning, is normatively desirable.
As Weiss and Bucuvalas have pointed out, "Many preconceptions about the nature of research use have been unduly simplistic."66 They further note in their own conclusions that an unanticipated insight of their study on utilization was "the disparity between the analytic models of decisionmaking that dominate the academic and professional literature and the perceptions of decisionmaking by participants in the process."67 An approach based on the "satisficing" model of decisionmaking leads to expectations and perceptions of the use process which are substantially different from those developed through a prescriptive approach based on the rational, analytic paradigm. The "satisficing" model focuses more directly on the observed behaviors of those engaged in decisionmaking processes in complex organizations and describes decision orientations which are in large measure different from rational, analytic decisionmaking. Understanding research utilization by first seeking to understand the "satisficing" perspective leads to different expectations concerning utilization and to a different focus for investigating the "dilemma of non-use".

A decision system which follows the "satisficing" model employs interactions to reach "acceptable outcomes" rather than employing conclusive, causal analysis to find "correct solutions". The notion of "organizational learning" is proposed to assist in differentiating the logic, arrangements, and processes which, as institutionalized features, characterize the efforts to make knowledge cumulative and bring knowledge to bear in the adaptive-coping cycle in complex organizations. Whereas a rational, analytic decision system is characterized by "causal learning", a decision system based on some variation of "satisficing" is characterized by "behavioral learning".
This means that in the "satisficing" system, instead of finding a concern for outcome calculation and comprehensive analysis, one can expect quite a different decision logic. In the "satisficing" system the assumption of value integration is replaced by, as Steinbruner notes, "a somewhat vaguely specified conception which posits minimally articulated, preservative values, and which does not yield a coherent preference ordering for alternative states (...)."68 Rather than emphasizing a preference for a research-oriented, information-sensitive outlook, the decision logic would revolve around the development of established response repertoires and a process of interactive problem-solving. As Steinbruner states, "At the level of collective decision, the paradigm posits a process in which decisions are fragmented into small segments and the segments are treated sequentially."69 The decision logic in the "satisficing" system is characterized by adherence to routines or repertoires and a preference for arriving at relatively safe, incremental adjustments to established procedure and patterns of behavior. The decision structure deviates from the highly coherent structure required for comprehensive analysis. The principal deviations provide the core of a necessary perspective for investigating use:

-- problems are factored. Complex problems are factored into a number of roughly independent parts and parceled out through an organizational structure which reflects the factoring of problems among subunits.
-- subunits handle pieces of the organization's separated problem in relative independence. Rather than proceeding through a process of synoptic decisionmaking, systemwide decisions emerge from subunit decisions through a process of selective attention to pieces of the problem.
-- the search for information is "problemistic"; geared toward the development of sufficient information to find the first alternative that is "good enough".
-- factoring creates problems of conflict among subunits which are controlled through an interactive process resulting in the establishment of repertoires and a series of aspiration-level constraints which essentially constitute "goals".
-- uncertainty reinforces attention to repertoires and marginal or incremental adjustment in order to avoid risk.
-- as with rational, analytic decision structures, one could assume that any organizational arrangement is functional which realizes the rather nebulous decision requirements: that is, risk is avoided while problems are disposed of sequentially and incremental adjustments contribute to "successful" behavior, however defined.70

Visualizing the research utilization process in such systems is problematic. Several aspects of the decision process could be quite important for understanding use/non-use. Lindblom and Cohen, as well as Rich, point out that the primary orientation in such decision systems is toward reliance on "ordinary knowledge", particularly the expertise and experience of organization members.71 The absence of requirements for outcome assessment and causal modeling creates the need to understand that attention to research information may be quite limited, yet the opportunity for its use is present, dependent upon the changing requirements of sequential attention to problems and upon the factoring of problems.

Factored problem-solving results in two expectations: 1) utilization of research information may vary across subunits (internalizing the decision logic of "satisficing" results in little normative encouragement to orient decisions to empirical information); and 2) attention to research information may vary among hierarchical levels, since, absent a requirement for causal modeling or rational analysis, decisionmakers are relatively free to tailor information requirements to their perceptions of the current problem.
One would expect a utilization process which reflects, at a minimum, the following:

I. Basic Assumptions:

-- no one decision discipline is adhered to by members of the organization.
-- the preferred source of information is the organization's members - relying on their expertise and experience.
-- uncertainty is a dominant aspect of decision. Attempts to avoid risk encourage "safe", incremental adjustment and reliance upon sources of information that may be trusted and controlled.
-- problems are addressed sequentially. Time and resources for search are limited and efforts are directed to a limited search for information aimed at the current problem and arrival at the first "acceptable" solution.
-- success may be defined in any number of ways. Generally, any outcome which results in a perceived improvement over the immediate situation may be judged "successful".

II. Basic Features:

-- decision rules may or may not be made explicit. In either situation, adherence to routine and regularized action repertoires is emphasized.
-- information policies may be explicit and/or implicit. Factored problem-solving and "problemistic" search result in ill-defined and fragmented attention to information and sequential, ad hoc attention to information needs. Perceptions and requirements for information vary across subunits and among hierarchical levels, with little attention to systemwide requirements. Research information competes with a "mountain of ordinary knowledge".
-- structural arrangements are affected by the absence of a requirement for causal modeling and outcome assessment. The information system conforms to requirements of "local rationality", i.e. subunits maintain responsibility for "local" arrangements, and systemwide arrangements for information provision reflect limited concerns.
Various recognizable elements, such as units for research and information provision, may be present, but their functions are oriented to responding to problem-specific requirements for information or analysis. The degree of coherence of arrangements for information provision systemwide may vary; what remains a common aspect is the "problemistic" nature of employing such elements.

It is important to understand that the "satisficing" notions emerge as something more than a dissent from the ideas of rationality.73 As Steinbruner observes, "They appear to describe a fundamental and ubiquitously used process of decision."74 Contrasting notions concerning research information and its utilization in either decision process (rational, analytic or "satisficing") leads to a clearer foundation for understanding the role and potential for research information and for better understanding issues related to the "dilemma of non-use".

The Ideal Image: Program Evaluation

Evaluation and organization, it turns out, are to some extent contradictory terms. Failing to understand that evaluation is sometimes incompatible with organization, we are tempted to believe in absurdities much in the manner of mindless bureaucrats who never wonder if they are doing useful work. If we asked more intelligent questions instead, we would neither look so foolish nor be so surprised.75

The prevalent images and models of the program evaluation process epitomize an ideal conceptualization of the role of scientific knowledge in a rational decisionmaking process in public agencies. Those authors and scholars taking a broad view ascribe to the process of evaluation a prominent role for providing information during policy planning (needs assessment, past/future studies); policy formulation (design process); implementation (formative/process studies); and summative assessment (impact studies).
Much of the recent interest in utilization seems to center on program evaluation or other similar forms of applied research. This is not surprising in that the primary intent in advocating such research is to produce information ostensibly to be used directly in decisionmaking/policymaking. As Rossi and Freeman point out, "According to Leviton and Hughes (...), evaluation findings can influence program planning and implementation in terms of instrumental, conceptual, or persuasive use."76 They further observe that, "Disappointment about utilization of evaluations is largely related to the evidence that [evaluations] have [achieved only] limited use."77 This again is not surprising, since the notions basic to understanding the initiation of evaluation on a broad scale involve, at least in circumscribed ways (and for the stronger advocates, in a pervasive sense), orienting the organization's membership to an analytic mode of problem-solving. This also involves adjusting the organization's structure to facilitate causal learning. In other words, the effort is based on moving organizations toward rational decisionmaking.

Glaser's notion of "science to guide" corrections is based on orienting correctional agencies to an analytic, self-evaluating perspective.78 Though his prescriptions are centered on what one may conceive to be operational-level learning and decisionmaking, his ideas require the introduction of a form of causal learning and analytic decision logic as systemic features. Glaser identifies a need for corrections agencies to base decisions on empirically-grounded evidence. He suggests the notion of "routinizing" or institutionalizing evaluation as a predominant mode of information provision and a basis of problem-solving. One must think that the assumptions behind this idea are those on which the analytic paradigm is based.

Program evaluation has become the most visible form of applied social science research.
There are many forms of evaluation, many purposes, and many definitions regarding evaluation and its appropriate scope. Ideally, an organizational program is developed as the means to achieve some specified goal-state - to move from an identified problem-state to achievement of a goal-state as measured by some set of outcome-related variables. Summative, or impact, evaluation is generally thought of as assessing effectiveness in attaining the established ends. In doing so, the research hypothesis involves determination of the adequacy of the theory employed in asserting causal relationships.

Ideally, the program has been developed as the most appropriate means for moving from the explicated problem-state to the goal-state. Process evaluation, including implementation study and policy analysis, is intended to assess the "goodness of fit" of the program as it is developed and implemented.79 Theoretically, such study is a continuing venture providing a formative feedback cycle which allows for adjustments and adaptations as the policy development-implementation process progresses.

If evaluation, or any form of applied social science intended to influence policy directly, achieved the aims described above, the result would be an ability to impart a high degree of rationality to the organization's decisionmaking process. The ideal image calls for a prominent role for research to provide empirical information as a basis for most organizational decisions. An immediate implication would be that decisionmakers and researchers begin to think of evaluation as an integral element of the policy cycle, from initial problem definition through outcome assessment.

Basic and Applied Research

In the ideal conceptualization of program evaluation, basic and applied research are complementary.
Basic research would be thought to provide relevant information which is cumulative and awaiting users attempting to formulate and clarify ends, especially in terms of causal relationships, in moving from explication of a problem-state to a desired end. There has been much discussion of the differences between basic and applied research. In the context of this discussion, the term "applied research" refers to research aimed directly at influencing policy, and "basic research" refers to research undertaken in scholarly pursuit intended to make a contribution to an academic discipline, even though it may later achieve some application in an organizational setting. Generally, the distinction between the two "types" of research involves the origin of the research, whose values govern the formulation of the problem, questions concerning conditions governing the execution of the research, and the ways in which the research may be expected to be utilized. Applied research is likely to find an audience of interested parties who perceive themselves to be "stakeholders" in the use or non-use of the results.

It is important to center attention on what may be different images and processes in the use of both basic and applied research. For example, one may find it necessary to attend to the presence or absence of planning for the use of applied research, as well as to aspects of the use process such as timing, research costs, and self-interests of the potential users. However, simply differentiating basic and applied research, while pointing out peculiarities related to use/non-use, does not seem to further an understanding of use beyond those peculiar aspects. One must return to the concept of an ongoing learning process in organizations. One must consider "problem-specific" search versus a continual scan for information, as well as mechanisms for the continuous acquisition, interpretation, assimilation, and dissemination of information within the organization.
Evaluation products must be viewed as information (perhaps more often appropriately termed "data") in competition with diverse other inputs to the decisionmaking process. "Evaluation" carries many different meanings to participants in the process - to researchers, to decisionmakers, and to other involved parties. On the one hand, the ideal image of the self-evaluating organization describes a highly rational, analytic organization in which the assumptions basic to social science research could be expected to influence, if not govern, the consideration of empirical evidence for decisionmaking. On the other hand, as Lindblom and Cohen point out, a notion of "professional social inquiry" may more appropriately encompass the many visualizations of "research" one encounters in organizational settings.80 One may find that "research" subtends possible meanings ranging from the ideal, academic concept to such notions as summarizing daily census totals and so on. One must attempt to understand "information policy" in organizations and gain a broad perspective on "learning" in order to better understand the potential for empirical information in general and for certain forms of research, such as evaluation research, in particular.

Particular Frameworks of Factors

A variety of perspectives have been employed in examining utilization in public organizations. More work has been centered on utilization of evaluation products than on the more encompassing general area of social science research use. Several dominant themes have emerged among the efforts of a number of investigators.
Various perspectives have tended to emphasize 1) the knowledge production process: technical quality of results and availability, timeliness, and relevance of information; 2) the linkage system: the array of institutions and arrangements for connecting "researchers" and "policymakers" in the transmission of expectations and results; 3) the decisionmaking system: from the two-cultures perspective, emphasizing differences between the nature of the decisionmaking process and the nature of the research process and differences between researchers and policymakers; and, more recently, 4) the decisionmaking process: from the standpoint of administrative and structural factors which contribute to the "bureaucratization" of the dissemination process, as well as initial attempts to understand the role of research through a better understanding of the "learning" and "problem-grappling" processes within organizations.81

Quality as an Issue

Quality and objectivity of research information have received much attention, both impressionistically and in studies based primarily on a two-cultures perspective. Rich so indicates by noting that "Caplan found that federal decisionmakers were concerned about the objectivity and accuracy of information."82 He also found that "objectivity becomes an issue when the data base is viewed as weak, the study design is poor, or if there is a general belief that there is so much bad research in the social sciences that valid findings are indiscernible, especially when two studies on the same subjects produce opposite findings."83 Carol Weiss has found that "Policy makers are sophisticated consumers who understand issues of quality control and make judgments concerning them."84 Weiss found (in a study of policy administrators in the mental health field, based on their reactions to sample study report summaries rather than actual use of research reports) that "research was subjected to (...) a 'truth test': i.e.
was the research trustworthy, can it be relied upon, was it produced through scientific procedures?"85 Her conclusion was that research quality is important as a determinant of use. Rothman indicates that while some policymakers see scientific credibility (as technical quality, validity) as a question in its own right, others see credibility in a more political way:

Provide sufficient information to managers about the research program and its results so they are clear about this and can convince others (...).86

The suggestion here is that the primary interest of decisionmakers is not actually the conclusiveness of the research information, although it may be important to some, but the authoritativeness of the information, in the sense that decisionmakers can convince the audience to whom they direct the findings that the information is powerful. Authoritativeness and resulting use/non-use may have referents other than conclusiveness or validity.

Rich notes that "Caplan states in a later study that, 'It does appear (...) that the major problems which hamper utilization are nontechnical'."87 Patton states in his study on use of evaluation results and utilization that, "In no case was methodological quality identified as the most important factor explaining either utilization or non-utilization."88 Patton makes four points concerning research quality and use:

1) Research quality never arises as an issue in the utilization of many evaluations; in the search for information decisionmakers use whatever is available to reduce uncertainty.
2) Research 'quality' means different things to different people.
3) There are no methodologically perfect studies.
4) Given these three preceding points, the best way to make sure that decisionmakers and information users understand the methodological limitations of studies, and take those limitations into consideration in funding evaluations and using evaluation findings, is to actively involve them in deciding which methodological imperfections they are willing to live with in making the eventual interpretive leaps from limited data to incremental action.89

Patton then concludes:

Social scientists may lament this situation and may well feel that the methodology of evaluation research ought to be of high quality for value reasons, i.e. because poor quality studies ought not to be used (Rutman, 1977). But there is little in our data to suggest that improving methodological quality in and of itself will have much effect on increasing the utilization of evaluation research.

The implication of Patton's findings is clear. In explaining utilization, the issue of technical quality is less important than factors related to the organization, its leadership, and organizational interest. Often these factors have been neglected in favor of pursuing foci for investigations which place inordinate emphasis on quality as the important variable in determining use. Rich reaches a similar conclusion and adopts the important notion that "quality" and a number of associated factors, such as "timeliness", "cost", and "relevance" (in the limited sense of being relevant to the internal logic of a policy issue), are "necessary but not sufficient conditions" for use.91 According to Rich, "All of the so-called necessary conditions for use can be met, and utilization will still not follow."92 It seems reasonable that unavailable information, information that is far from the point, information that is of questionable quality (if quality is broadly defined), and information that is developed or encountered after the time for the decision will receive little attention.
It is important to realize that though knowledge production variables are significant and important at a threshold level, they are not likely to be sufficient in themselves (that is, without being understood in the context of organizational interests and "learning") to account for use or non-use of research products.

"Quality" as a dominant theme has generally been pursued without an adequate integrating conceptual framework. Two other dominant themes, a focus on what is termed the "political factor" and a focus on the "personal factor", provide important perspectives which have received much attention. These also lack an adequate overall conceptual framework or theoretical base for organizing the findings in a meaningful way to promote an understanding of the potential of and role for research information in organizational decisionmaking.

The "Personal Factor"

Patton refers to the "personal factor" as "(...) the presence of an identifiable individual or group of people who personally care about the evaluation and the information it generated."93 Patton identified the "personal factor" as the key variable in determining:

-- whether evaluation processes and information are used,
-- the extent to which evaluative findings are translated into action alternatives, and
-- how the information is employed in the translation of policy alternatives into concrete behavioral change strategies.94

Patton explains that decisionmakers who seek out research information, or who receive and use such information, are attempting to reduce uncertainty "so as to increase their ability to predict outcomes and thereby enhance their ability to exercise discretion as decisionmakers."95 This explanation relates individual preference to organizational interest and maximization of self-interest, yet Patton's explanation is not set within an overall conceptual framework relating organizational interest and individual self-interest.
Allison, in his conceptualization of "bureaucratic politics", identifies information as a key "bargaining advantage" in any decision.96 Patton's explanation evokes an image of empirical information entering into an individual's decision calculus as a source of clarification, confirmation, perhaps validation in arriving at a standpoint on issues; and, perhaps, this information may be perceived as a source of leverage in bargaining for decision. Whatever the motivation, Patton emphasizes the commitment of interested leaders as a key in determining the utilization of evaluation research information.

Patton does conclude that decisionmaker uncertainty, accompanied by the need to increase or maximize the use of discretion, is a major factor in understanding use.97 He finds that where there is strong commitment by an individual or group to deriving benefit from the results, evaluations will be used; conversely, where there is a marked absence of commitment and involvement by decisionmakers, there will be a marked absence of impact. Patton formulates recommendations pointed at structuring the evaluation process in such a way as to promote joint participation and collaboration among decisionmakers and researchers in designing the research, conducting the study, and specifying and then carrying out the research.98 The point of these recommendations is to provide a structural inducement to gain commitment from upper-level leadership. Patton seems to ignore the alternate notion that conduct of evaluation research or use of any research may elicit negative reactions from leadership, depending on what they perceive to be in the best interests of the organization and of upper-level leadership. Patton's notion is not simplistic, and his point regarding the personal factor is well taken.
It may be easier to understand a personal factor in operation regarding evaluation research, since an organization and its members must actually "invest" in a variety of ways in any evaluation undertaking. In the broader consideration of use of empirical knowledge in general (thinking about research information which may be available and enter the decision process through dissemination from other agencies, professional channels, empirical-format journals, and other sources), one may as well center attention on personal involvement. If the leadership of the organization is committed to use of research as an important guide to decision, and emphasizes an information policy designed to encourage assimilation of scientific knowledge, then such commitment may, at least logically, have an effect in encouraging continuous attention to empirical information from a variety of sources and would, as in the logic of causal learning, include both basic and applied research. In such a situation, mechanisms for screening and assimilating empirical information would operate to provide a "continual scan" for encountering and digesting scientific information that is relevant or could be relevant to decision. If Patton is correct, one would expect to find a degree of emphasis, or the lack of it, as a characteristic feature in organizations where, over time, use is relatively high or, conversely, relatively low. Such a notion may be related to an overall concept of "organizational learning".

Patton's work focuses attention on the uncertainty aspect in decisionmaking in complex organizations. If one could conceptualize the "reduction of uncertainty" as the object of any "coherence" imparted to a systematic attempt at information gathering, and if one could visualize "uncertainty" as the basis of individual and organizational "learning", there would be a direct focus on information processing at both the individual and structural levels of analysis.
Reflecting on decisionmaker self-interest and the reduction of uncertainty, one may think it could be in the best interests of a decisionmaker to be coherent on any particular decision or set of issues. In other words, the decisionmaker may seek to introduce elements of a rational, analytic process whenever it seems necessary to approach a decision with some assurance of achieving "success", or at least whenever it seems important to give the appearance of "problem-solving" in a proper manner. One must keep in mind Lindblom's and Cohen's admonishment that:

(...) policymakers (...) always have a choice between trying to find 'solutions' by arranging to have a given problem frontally attacked by persons who will think it through to a solution, or by arranging to set in motion interaction that will, with the help of analysis adapted to the interaction, eventuate in a solution or preferred outcome.99

Decisionmakers operate within an organizational arrangement in a position which gives them access to decision and to regularized information and influence channels. Within the overall context, it is important to consider that the "reduction of uncertainty" may depend upon referents only loosely related to analytically "solving" the problem according to the internal logic of the problem at hand. Decisionmakers are not passive participants driven by information which they receive from various sources.

Patrick theorizes that government agents faced with a program or policy choice use research information, in competition with other information, to estimate probable outcomes associated with different program strategies.100 Patrick bases her consideration on rational choice assumptions developed by Bartlett:

1) self-interest: a governmental agent is primarily motivated by his own self-interest [goal].
2) rational strategies: a governmental agent chooses strategies consistent with this self-interest [goal].
3) uncertainty: a governmental agent does not possess perfect knowledge concerning program outcomes; there is always a degree of uncertainty involved concerning program options or strategies and subsequent outcomes.

Patrick focuses not only on decisionmakers but also on others involved in the provision of information, such as information-producing personnel within the organization - giving some indication that the information supplied may have been developed selectively according to some set of explicit or perhaps implicit criteria. Although Patrick's primary concern is the individual decisionmaker, she places some importance on the processes associated with the provision of information to decisionmakers, realizing that information must reach the decisionmaker in order to be considered. According to Patrick, decisionmakers apply a "discounting procedure" to consideration of research information, weighing its usefulness in terms of self-interest and uncertainty: "taking into account its source; interpretive bias; policy relevance; and technical quality characteristics."102 For Patrick, entry into this decision calculus, or "consideration", would be sufficient to constitute use of the information.

Various other elements of the "personal factor" have received attention. Burnham has suggested four main dimensions along which the minds of decisionmakers may vary:

(...) differences due to 1) internally stored data: facts, opinions or impressions which are traceable to identifiable sources and subject to consciously controlled analysis; 2) intuition: past learning and experience; 3) bias: emotions, unconsciously compiled attitudes, etc., not subject to conscious control; and 4) cognitive styles: habits of intellectual manipulation and thinking in different respects.103

It is only common sense to expect that personal styles, ability, and technical proficiency are factors which enter into a high quality decision.
Such factors as previous experiences with research information as a basis for decision, as well as formal preparation involving statistics and research methods (and perhaps papers and research projects completed), have received little attention, but may be quite important in looking at use/non-use. Self-interest, reduction of uncertainty, and the extent of personal commitment to using research (whatever the motivation) are important elements in understanding use/non-use. It is important to remember that people in positions contribute to and/or participate in making decisions, and that, within organizations, there is some degree of procedure or regularity to the decisionmaking process. In thinking about the role of research information and its use/non-use, one must attend to the personal factor within the overall framework of organizational learning and the constraints imposed by the decisionmaking system.

The Political Factor

Carol Weiss and Michael Bucuvalas describe the nature of the policy process as follows:

The decisionmaking system imposes its own constraints on research use. Even when a research study passes muster, officials are often unwilling or unable to use its insights as a basis for action. As individuals they may avoid exposure to research, find it unintelligible, misunderstand it, consider it unreliable, or reject that which is antipathetic to their own interests. But even those who read social science research diligently and intelligently and are impressed with its conclusions may find little opportunity to use it. They do not function as isolated individuals but as members of complex organizations, and the organizations set limits on what they can do.104

Weiss and Bucuvalas further note that, "(...)
when officials serve in governmental agencies they must function not only within the procedural, structural, and ideological framework of the organization, but also within the parameters set by the larger political system."105 Grosskin adds to this, indicating that:

Types of political factors which may impose themselves on utilization (...) may take the form of:
-- intra- and inter-agency rivalries;
-- resource allocation disputes;
-- intergovernmental tensions between the grantor and grantee, personnel, governmental units, and stakeholders in policy disputes within and outside of government at all levels;
-- internal debates regarding the goals and/or achievable accomplishments of programs which have high ego-identification for parties to the conflict; and
-- the overriding need to achieve a broad base of consensus among conflicting parties and positions.106

Weiss and Bucuvalas are wary of taking a position which, as they think, "uncharitably" focuses on self-interested concerns for power, influence, and career advancement among decisionmakers and neglects the possibility that professionals in agencies may believe in the programs they manage.107 Decisionmakers must be concerned with powerful influences affecting the organizations they administer, yet this does not mean that program or organizational interest may not be perceived by decisionmakers as of primary importance. Weiss notes a number of potentially powerful influences with which decisionmakers must be concerned:

Policymakers in government are engaged in ongoing and reciprocal relationships with their counterparts in other agencies, constituents, legislative leaders, executive officials, state and local officials, professional staffs, client groups, civic organizations, and other groups that demand attention and action.108

An important consideration is that decisionmakers function within an ongoing process of "organizing".
Deadlines force attention, and sets of concerns of a continuing nature force them to prioritize and balance their efforts. As Rothman observes, "The effect of politics on research is widely recognized. Weiss points out that operations research or program evaluation often serves as 'political ammunition' to 'neutralize opponents, convince waverers, and bolster supporters.'"109 Uses of research information such as for "eyewash", "whitewash", "posturing", "postponement", "substitution", and "submarining" suggest a wide variation in the nature of symbolic uses for research information. Yet focusing solely on such uses as the major feature of a political factor is inadequate. There must be some more meaningful way of relating "political" factors to the ongoing, iterative process of policymaking/decisionmaking and to organizational interests.

In fact, the nature of the policy process, of bureaucratic politics, and of bureaucratic incentives and rewards, may yield a richer source of insights into the way "politics" affects use. The uncertainty proposition provides an important focus to research on knowledge utilization. Decisions must be reached within the context of satisfying many involved parties, in a timely manner, and with purposeful attention to cost. Rich says that, "Decisions must be reached quickly and should be subject to the least possible risk."110 He continues by noting that, "Given these constraints, Wilensky (...) argues that the influence of established policies and interests will be greater than any other resource available to decision makers."111

Rich suggests that reliance on familiar sources of information, in order to minimize risk, may lead the decisionmaker to emphasize heavily the expertise existing within the organization and rely heavily on information developed within the organization.112 This focus has led to his investigation of issues related to bureaucratic control and "ownership" as key variables in the utilization process.
In his study of use of CNS information, Rich found that use is increased if bureaucratic interests are enhanced.113 The more important elements of Rich's conclusions are indications that intra-agency factors, administrative and structural variables, related to dissemination of information and formal/informal information policies existing within the organization are critical features in the effect of bureaucratic politics on research use.

Patton has related the political factor more directly to the commitment of persons in information and decision channels to the use of particular research products.114 In writing about ways to improve the use of evaluation research products, he has suggested changes in structure, problem-solving process, and personal sensitivity to research as mechanisms for improving the commitment of all involved to use research findings. Similarly, Rothman has suggested structural rearrangement along with other mechanisms which he suggests would facilitate use of research through building greater emphasis on research information into the decisionmaking process.115

It is important to consider differences in the extent of commitment and thinking about political factors affecting research which is commissioned to produce problem-specific results (i.e., program evaluation, operations research, other forms) and research which is conducted, disseminated, and then, in some way, made available for possible use in a general manner. Though both types of information are affected by the extent of commitment and political concerns as well as administrative concerns, there is a sharper focus and more readily identifiable connection in the former case. One must again return to the possibility of finding an integrating focus in the concept of "organizational learning" and suggest that a political factor in itself must admit to referents located in an organization's characteristic ways of doing business.
It is much easier to recognize that information produced at some cost to an organization has already been "bought into" and that some "payoffs" (whatever those payoffs might be) are expected by individuals and/or groups involved with a particular organization. This type of commitment and the resultant analysis of political factors affecting use are far more difficult to realize in the situation where an organization employing some form of "continual scan" encounters, selects, and assimilates information disseminated from diverse sources.

The latter is more Rich's concern. His findings indicate that bureaucratic factors may be in operation in the internal dissemination process and that a variant of bureaucratic politics, selection of information according to some explicit or implicit criteria associated with bureaucratic interests, may be of primary importance in shaping the nature of the use process. Commitment to one or another interest at many levels in the organization may, in a sense, form a key "backstage" variable in examining use/non-use.

Bureaucracy and "Learning"

Bureaucratic behaviors may defeat prescriptive strategies for the use of empirical information. According to Rich,

The organizational and bureaucratic literature teaches us that information - in the form of the expertise of an individual - is the foundation of a power base for any given bureaucracy. Formally, decisionmakers rely on their bureaus to provide all the necessary resources that are needed to solve a particular problem; bureaucratic interests are dependent upon the continued reliance of decisionmakers on these resources.116

The primary source for information in a bureaucratic organization is apparently to be found in the expertise and experience of the individual members.
”Experts" (staff experts) are expected to solve most problems at lower levels, and, because of their familiarity with operational level problems, as well as the likelihood that there will be a large degree of uncertainty connected with systemwide problems, policymakers Imay be expected to turn first to sources of information they can trust and can control. Rich indicates further that, As long as decisionmakers are relying on individuals' knowledge, there is little uncertainty in the transmission of information ' inputs; one would expect these inputs tplbe constrained by the articulated values of the organization. One must keep in mind Lindblom's and Cohen's observation that the main question in a bureaucratic decision process may not be so much "How do we solve this problem?” as "How do we problemrsolve in a.proper manner?".118 One must focus on the learning process taking place within the organization in order to'come to any reasonable understanding of the potential for and the use/non—use of research information. According to Rich, (...) the principle purpose served by knowledge utilization is not to provide objective fact-gathering and analysis of high quality, relevant information bearing on a substantive policy issue, but to reinforce the using agency's information policy and to maintain and strengthen the bureaucratic interests for control associated with the acquisition and processing of information in accord with that policy. Thus, information policy takes precedence over the substantive significance of the information in a determination of the contribu- tion of scientific products to public policy formulation.119 Adopting a Framework for Inquiry The array of findings and thought on use and the differentiation in perspectives on investigating use present a large problem concerning 76 an organization of a preliminary framework for inquiry. 
As Weiss and Bucuvalas state,

To systematically study the gamut of activities from the formulation of research to its application in decisionmaking - and all the failings, distractions, and countervailing influences that can afflict those activities - is beyond the resources of a single project.120

The preceding discussion has been an attempt, to again borrow from Weiss and Bucuvalas, to provide a "map of the territory to be explored."121 It is obvious that use occurs within an organizational context, and that the character of that context may subtly, or not so subtly, affect the application of research findings in decisionmaking. To comprehensively address use in a complex organization and to adequately encompass the various perspectives which have been advanced would require a multi-stage, long-term study. As Grosskin observes,

Larsen has pointed out the need to examine research-based knowledge utilization as a complex process involving political, organizational, socioeconomic, and attitudinal components, as well as the characteristics of the specific research or research product.122

An inclusive investigation would have to take into account stages of information acquisition, including the decision to produce the information, conduct of the research, transmission of results, interactions during the research, and the constellation of interests manifest in the organizational learning system and environment; it would also take into account dissemination and decisionmaking stages, implementation, and impact. One must in addition address issues concerning the scope of the contemplated investigation: whether to 1) address utilization of one specific study versus use of research information on a general, continuous basis; 2) address individual and structural-level variables; 3) confine the study to certain forms of research information (e.g., evaluation); and 4) include some element of sensitivity to time horizons or periods during an organization's development.
All the above, for an inclusive study, would have to be accomplished in a field of study lacking an adequate theoretical base and possessed of conceptual confusion, where widely varying perspectives are discernible, and where even measures of the central concept of "use" have not elicited a consistent degree of agreement. Consequently, one is left to conduct a study which must be limited and oriented to exploration and cumulative advance.

A major focus in this review involves an attempt to settle on an appropriate and reasonable framework for the study. One of the difficulties in deciding how to investigate use is that a number of studies have proceeded from widely varying perspectives with similar aims - to identify sets of factors which have causal relationships to "use". There has been no adequate theoretical or integrative model for coherently bringing these findings together. One major problematic element, ignored by most but addressed by Rich, is that,

In the case of policymaking one is faced with the problem of trying to conceptualize processes and styles of problem-solving - both of which have diverse roots that have not been operationalized, and, more importantly, put in the form of empirically testable propositions.123

This particular dilemma is compounded when one realizes that policymaking is a dynamic, iterative process and may vary across policy areas. Additionally, "use" tends not to occur at one point in time, but may take place as a more diffuse process.

A number of other conceptual problems have not been adequately resolved. As a dependent variable, "use" has varied in its operationalization. One must consider whether a behavioral requirement such as "observable changes" (as in "instrumental use") is the only adequate measure, or whether measures designed to assess "conceptual use" are appropriate, and if they are, what are they? Each of these distinctions affects the choice of appropriate research strategies.
Further, as Grosskin points out, there is a need to identify some means of measuring "degrees" of use.124 Another concern involves attempts to separate use of empirical knowledge from use of other forms of information in the organizational problem-solving process. Lindblom's and Cohen's work suggests that, in some way, one must address the overall "learning" process in particular types of settings and admit to the realization that other forms of information are at least as authoritative, if not generally more authoritative, than empirical information.125 This issue points to the need for examination of the organizational and decisionmaking literature in an attempt (as has been done earlier in this discussion) to identify and explicate differences among models of "organizational learning". It remains to be seen whether such a focus may prove viable in advancing an adequate conceptual framework for examining "use".

At this time, investigations of use have been limited primarily to a few policy areas (e.g., mental health). The efforts must be regarded as exploratory regardless of the methods which have been employed. Conceptual problems have not been resolved, and there is not an adequate base of information for comparing policy areas. Even within a policy area, such as criminal justice, where there has been so little work on use, there is no way to make comparisons because of the lack of baseline evidence. However, a start must be made, and then the issue of comparisons within and across policy areas must receive greater attention. It is currently beyond reasonable limits for one study to include within its scope sufficient focus to address all the posited "important elements" and to integrate the perspectives necessary to resolve even the several basic "dilemmas" confronting utilization as an area of study.

Considerations for This Study

As a beginning, it seems reasonable to initiate an investigation from a circumscribed frame of reference.
Whether implicit or explicit, a dominant theme in research utilization concerns the introduction of rational, analytic procedures into the decisionmaking process. Glaser and the Gottfredsons have addressed this issue in describing models of information provision for corrections, and for criminal justice, relying on empirical research procedures to be incorporated (or institutionalized) into the decisionmaking routine.126 One is led to ask questions concerning the viability of such suggestions by exploring use in an ongoing agency situation.

It is obvious that research-produced information competes with information from a variety of possible sources. The focus taken by Rich seems most promising in admitting a greater role for organizational variables (especially "bureaucratization" of the dissemination processes and the development of information policies based on bureaucratic interests) as a central focus for investigation. A focus on organizational learning, as described earlier, seems to be fruitful in that specific foci may form the basis of a framework for integrating the sometimes seemingly disparate sets of factors proposed as explanatory in understanding use. However, very little work has proceeded in this direction. One is led to suggest that the possibilities be explored through a broadly-based study.

A fundamental need is the establishment of a baseline of information on use in criminal justice. This must be accomplished on a cumulative basis. What is now needed are attempts to understand research use/non-use more directly from the standpoint of the role and potential for use of empirical information within an ongoing learning process in complex organizations. There are many problems associated with both of these efforts. However, the approach seems reasonable.
In 1978, Weiss proposed a classification scheme for locating variables associated with utilization and non-utilization along seven dimensions.127 The types of questions are adaptable to the provision of a basic focus in a broad study of research use. Grosskin presents Weiss's questions with explanation:

1) What information is used? Questions in this area may involve the type of research systematically favored, and, additionally, lead one to ask questions concerning sources of information which may be systematically favored.

2) How is research information used, if and when it is used? What type of use occurs, and in what ways is use accomplished?

3) By whom is research information used when it is used, and why, and who does not use it, and why? To what extent is use part of an individual or group decisionmaking process? In what ways does the complexity and stratification of political and organizational settings influence the information processing and decisionmaking styles of potential users?

4) By how many persons is research information used? What is the effective reach or penetration of research findings throughout the various decisionmaking, policy, and action implementation groups of potential users? Do users participate, when possible, in the formulation of questions for investigation?

5) How directly is use affected by the content of the information received? Good news? Bad news? No news? Study quality? Communication of results? In what form?

6) How much of an effect must be demonstrated before one may conclude that the information has been used?
7) How immediate is the use?128

Grosskin indicates that in each of the above typologies, the levels of analysis toward which data collection efforts must be directed will require attention to variables at both structural and individual levels of analysis.129 This framework of questions is developed specifically for a study of evaluation utilization; however, the questions do provide a general framework adaptable to use of scientific knowledge in a broader sense - exploration of various types of scientific knowledge and its use.

An important element addressed by Grosskin involves a measure of use based on a continuum of successive outcomes.130 This "continuum of use" is crucial to specifying what may be conceptualized as "degree of use". Specific sets of factors may either facilitate or inhibit the achievement of each outcome or may affect movement from one level to another.131 The researcher is, in effect, deciding on one level of outcome or another, settling on an operationalization of the dependent variable "use". Grosskin establishes the following levels of utilization outcome:

-- availability of information to be consulted by policymakers and decisionmakers to enable its potential application.

-- consideration of results of evaluation along with other sources of information prior to the derivation of conclusions about the efficacy of policies or programs under study.

-- application of the content (...) in identifying and selecting policy or decision alternatives.

-- concrete and behavioral changes resulting from the application of (...) conclusions and/or recommendations reflected in the decision to implement policy or decision alternatives.
-- measured impact of implemented policy or decision alternatives.132

Which of the preceding outcomes one selects as important dependent variable measures clearly determines the scope and importance of various factors which one may logically include or exclude as relevant to the identification of facilitating or inhibiting conditions in the utilization of research information.

In a policy area such as corrections, where there is so little information on use, it seems reasonable to approach use with a general framework of questions and allow for the inclusion of emergent foci. As a beginning, it would seem reasonable (following Weiss and Grosskin133) to ask basic questions corresponding to categories of variables. For example:

category -- questions

type of information -- is research information used? if so, what type of information is used?

type of use -- how is the information used? does use correspond to the categories "instrumental", "conceptual", "symbolic"? is use immediate or does use take place over time, perhaps in clusters?

user identity -- what individuals or groups use research information? what comparisons may be drawn among these individuals or groups?

extent or scope of use -- is research information used throughout the organization? is research information systematically preferred for all issues? limited to a few issues? systematically ignored?

In delimiting a basic study, it is also necessary to address the conceptual continuum referred to by Grosskin as "successive outcomes".134 Two issues surface: 1) will one attempt to encompass the entire "policy cycle" in operationalizing "use" on the basis of expected outcomes?; and 2) will one operationalize "use" only as "instrumental use", measurable through documentable impact? It is apparent that a single study cannot be expected to address, with any success, the entire policy cycle.
It is also apparent that operationalizing "use" only on the basis of "documentable impact" excludes many possible uses, and students of utilization in recent years have shown that conceptual or persuasive uses should receive attention. A reasonable beginning strategy would involve a focus on use at the levels of availability and consideration, leaving open the possibility for documenting use at the application level, if and when such use is found. A result of this approach is to begin with an investigatory framework which is centered on answering basic questions.

In addition to a baseline of data on utilization, a study of this type must make some advance in providing explanation concerning the levels and types of use observed. However, a single study cannot hope to satisfactorily encompass the many possible perspectives which would require attention for the study to be termed "definitive". It does appear that little research on utilization has proceeded with the aim of identifying and exploring the utility of a conceptual framework rooted in the teachings of the literature on organizations and decisionmaking. Rich's position - 1) that dissemination and utilization cannot be considered separately, and 2) that factors related to the "bureaucratization" of the use processes in complex organizations may provide the most adequate basis for understanding use/non-use - agrees with the teachings of the organizational/decisionmaking literature. The approach taken by Rich is firmly based on much of the recent literature on decisionmaking in complex organizations. This approach adopts the following understanding:

(...) timeliness, relevancy, form and costs do, of course, have some influence on whether information is used.
However, these factors are subordinate to other barriers that impede utilization:
-- the tendency of bureaucrats to defer to the expertise of their colleagues
-- the tendency to seek a monopoly of control over a particular information source
-- the tendency not to collect information from other agencies and from individuals or organizations outside of government
-- the tendency to place a high value on sources of information that can be controlled, manipulated and trusted.135

The study of utilization is just beginning to turn to critical issues associated with the "fit" and potential for research information in ongoing decision systems. It seems reasonable to opt for a study design which will be limited and yet will admit the possibility of exploring emergent foci. It further seems reasonable to adopt from the literature on qualitative research the notion of conducting a systematic study based on a framework of "local" concepts in moving toward a better understanding of the role and potential for research information in complex decisionmaking processes. Such a study could not be exhaustive, yet could serve to suggest new avenues for relating many of the often disparate perspectives characteristic of the field of research on use.

In this chapter an attempt has been made to point out the predominant features of the major models for decisionmaking in complex organizations. The analytic paradigm presents the ideal rational decision structure. The knowledge acquisition and use process is characterized by "causal learning". Most thought on research utilization is either explicitly or implicitly based on the expectation of having research information enter an analytic decision process.
Most prescriptions for improving research utilization are based on the expectation of moving toward the design of analytic decision systems, yet little has been done to investigate or explore the utilization process from the perspective presented by the "satisficing" model and its correlate, "behavioral learning". One would propose that attention be directed to propositions associated with the "satisficing" model in an attempt to determine whether these present viable alternate avenues for understanding use/non-use of research information and the "dilemma of non-use".

According to Conner, "Before we can develop a model for evaluating the effects of the RU [research utilization] process we need to understand the process well, just as we would need a thorough understanding of a new criminal justice reform project before we could develop an evaluation plan for such a project."136 At this stage of knowledge on research utilization, all investigations must be categorized as exploratory, knowledge must be built cumulatively, and new policy areas must be explored. It is reasonable to conduct a limited exploration into use in a corrections setting as an initial contribution to the process of moving toward a better understanding of the utilization process in criminal justice and perhaps moving toward the eventual objective of preparing adequate interpretational models.

END-NOTES: DEVELOPING A FRAMEWORK: LITERATURE REVIEW

1(...) at the National Level, Ann Arbor, Mi.: Institute for Social Research, 1975, p. vii.

2Martin Rein, and Sheldon White, "Policy Research: Belief and Doubt," Policy Analysis, Fall 1977, p. 242.

3For example, see Robert F. Rich, Social Science Information and Public Policy Making, San Francisco, Ca.: Jossey-Bass Publishers, 1981; and also see Carol Weiss, and Michael Bucuvalas, Social Science Research and Decision-Making, New York, N.Y.: Columbia University Press, 1980.

4Robert F. Rich, Social Science Information and Public Policy Making.

5Ibid.

6Ibid.
7Charles E. Lindblom, and David Cohen, Usable Knowledge, New Haven, Ct.: Yale University Press, 1979, pp. 3-4.

8Graham Allison, Essence of Decision, Boston, Ma.: Little, Brown, and Co., 1971, p. 71.

9Ibid.

10Ibid.

11Ibid., p. 30.

12Irving L. Janis, and Leon Mann, Decisionmaking: A Psychological Analysis of Conflict, Choice, and Commitment, New York, N.Y.: The Free Press, 1977, pp. 11-12.

13Herbert Simon, Administrative Behavior, New York, N.Y.: The Free Press, 1976.

14Allison, Essence of Decision, p. 71.

15Ibid., pp. 71-72.

16Simon, Administrative Behavior.

17Allison, Essence of Decision.

18Lindblom, and Cohen, Usable Knowledge.

19Ibid.

20John D. Steinbruner, The Cybernetic Theory of Decision, Princeton, N.J.: Princeton University Press, 1974.

21Daniel Katz, and Robert Kahn, The Social Psychology of Organizations, New York, N.Y.: John Wiley and Sons, 1966.

22Steinbruner, The Cybernetic Theory of Decision, p. 36.

23Ibid., p. 40.

24Ibid., p. 41.

25Ibid., p. 45.

26Ibid.

27R. William Burnham, "Modern Decision Theory and Corrections," in Don Gottfredson, ed., Decisionmaking in the Criminal Justice System, Rockville, Md.: NIMH, 1975, p. 106.

28Ibid.

29Aaron Wildavsky, "The Self-Evaluating Organization," in J.M. Shafritz, and A.C. Hyde, eds., Classics of Public Administration, Oak Park, Il.: Moore Publishing Co., 1978, p. 424.

30Allison, Essence of Decision.

31Steinbruner, The Cybernetic Theory of Decision, p. 79.

32Ibid., p. 78.

33Allison, Essence of Decision, p. 74.

34Ibid., p. 77.

35Ibid., p. 76.

36Ibid., pp. 76-77.

37Wildavsky, "The Self-Evaluating Organization," p. 425.

38Steinbruner, The Cybernetic Theory of Decision, pp. 67-72.

39Ibid., p. 72.

40Lindblom, and Cohen, Usable Knowledge.

41Ibid.

42Ibid., p. 25.

43Anthony Downs, Inside Bureaucracy, Boston, Ma.: Little, Brown, and Co., 1967.

44Richard M. Cyert, and James G. March, A Behavioral Theory of the Firm, Englewood Cliffs, N.J.: Prentice-Hall, 1963.
45Michael Gottfredson, and Don Gottfredson, Decisionmaking in Criminal Justice: Toward the Rational Exercise of Discretion, Cambridge, Ma.: Ballinger, 1980, p. xvii.

46Richard V. Farace, P.R. Monge, and H.M. Russell, Communicating and Organizing, Reading, Ma.: Addison-Wesley Publishing Co., 1977, pp. 21-24.

47Ibid., p. 24.

48Ibid.

49Lindblom, and Cohen, Usable Knowledge, p. 42.

50Rich, Social Science Information and Public Policy Making.

51Katz, and Kahn, The Social Psychology of Organizations, p. 259.

52Ibid.

53Carol Weiss, "Policy Research in the University: Practical Aid or Academic Exercise?", Policy Studies Journal 4:3 (1976), p. 226.

54Ibid.

55Steinbruner, The Cybernetic Theory of Decision, p. 41.

56Robert J. Mowitz, The Design of Public Decision Systems, Baltimore, Md.: University Park Press, 1980, pp. 59-60. Note some modification of Mowitz's statements.

57Ibid.

58Ibid., p. 23.

59Ibid.

60Ibid.

61Gottfredson, and Gottfredson, Decisionmaking in Criminal Justice, p. 353.

62Ibid., p. 354.

63Ibid., p. 353.

64Ibid., p. xxvi.

65Ibid.

66Weiss, and Bucuvalas, Social Science Research and Decision-Making, p. 274.

67Ibid.

68Steinbruner, The Cybernetic Theory of Decision, p. 86.

69Ibid., p. 87.

70Items from the preceding discussion have been incorporated with the author's own ideas to develop this image.

71Lindblom, and Cohen, Usable Knowledge.

72Items from the preceding discussion have been incorporated with the author's own ideas to develop this image.

73Steinbruner, The Cybernetic Theory of Decision, p. 87.

74Ibid.

75Wildavsky, "The Self-Evaluating Organization," p. 412.

76Peter H. Rossi, and Howard E. Freeman, Evaluation: A Systematic Approach, 2nd ed., Beverly Hills, Ca.: SAGE Publications, 1982, p. 323.

77Ibid.

78Daniel Glaser, Routinizing Evaluation: Getting Feedback on the Effectiveness of Crime and Delinquency Programs, Rockville, Md.: NIMH, 1973.

79Christopher Alexander, Notes on the Synthesis of Form, Cambridge, Ma.: Harvard University Press, 1964.
80Lindblom, and Cohen, Usable Knowledge. 81Weiss, and Bucuvalas, Social Science Research and Decision- mg, p. 16. 82 Rich, Social Science Information and Public Policy Making, p. 8. 83Nathan Caplan, cited in Rich, Social Science Information and Public Poligy Making, p. 9. 84Carol weiss, cited in Rich, Social Science Information and Public Policy Making, p. 9. 90 85Carol weiss, and Michael Bucuvalas, "Truth Tests and Utility Tests: Decisionmakers' Frames of Reference for Social Science Research," American Sociological Review, 45 (April 1980), p. 311. 86Jack Rothman, Using Research in Organizations: A Guide to Successful Application, Beverly Hills, Ca.: SAGE Publications, —1980, p. 140. 87Rich, Socia1,Science Information and Public Policy Making, p. 15. 88Michael Q. Patton, Utilization-Focused Evaluation, Beverly Hills, Ca.: SAGE Publications, 1978, p. 252. 89Ibid. 9oIbid. 91Rich, Social Science Information and Public Policy Making, p. 159. 92Ibid. 93Richard B. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy: Constructing Alternative Interpretational Models of the Evaluation Utilization Process," unpublished paper, April, 1981, p. 27. 94Patton, Utilization-Focused Evaluation, p. 64. 95Grosskin, “Toward the Integration of Evaluation in Criminal Justice Policy," p. 28. 96Allison, Essence gg Decision, p. 169. 97Patton, Utilization-Focused Evaluation. 98Ibid. 99 Lindblom, and Cohen, Usable Knowledge, p. 25. 100Mary S. Patrick, "Utilizing Program Evaluation Products: A Rational Choice Approach," paper presented at the annual meeting of the Midwest Political Science Association, Chicago, I1., April, 1979, p. 7. 101Ibid. 102Ibid., p. 11. 1oaaurnham, ”Modern Decision Theory and Corrections:" PP- 101‘102- 91 1°4weiss, and Bucuvalas, Social Science Research and Decision- Making, 9. 19. 105 wide] Pp. 19-20. 106Grosskin, WToward the Integration of Evaluation in Criminal Justice Policy," p. 30. 
107Weiss, and Bucuvalas, Social Science Research and Decision- Making, pp. 21-22. 108W’eis's, "Policy Research in the University," p. 225- 109Rothman, Using Research in_Organizations, p. 59. 110Rich, Social Science Information and Public Policy Making, p. 11. lllIbid. llzlbid. 113Ibid. 114 Patton, Utilization-Focused Evaluation. 1lsaothman, Using Research in Organizations. 116Rich, Social Science Information and Public Policy Making. p. 14. 117Ibid. 11 aLindblom, and Cohen, Usable Knowledge, p. 33. llgnich, Social Science Information and Public Policy Making. pp . 14-15 C 120Weiss, and Bucuvalas, Social Science Research and Decision- Making, p. 27. 1211bid. 122Richard B. Grosskin, ”Turning Knowledge into Action: Improving the Use of Evaluation Research in Crime and Criminal Justice Problems Solving," unpublished paper, Institute of Criminal Justice and Criminology, University of Maryland, 1983, p. 5. 12 p. 111. 3Rich, Social Science Information and Public Policy Making, 92 124Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy," p. 14. 125Lindblom, and Cohen, Usable Knowledge. 126Glaser, Routinizing Evaluation: and also see Gottfredson, and Gottfredson, Decisionmaking in_the Criminal Justice System. 127Carol Weiss, "Factors Affecting the Utilization of Evaluation Findings: An Empirical Test,” paper presented at the annual meeting of the Evaluation Research Society, washington, D.C., November 2-4, 1978. 128Grosskin, “Toward the Integration of Evaluation in Criminal Justice Policy," pp. 9-14. 129mm. , p. 15. 130Ibid., p. 8. lallbidI 1321bid.p PP. 7-8. 133Carol weiss, cited in Grosskin. "Toward-the Integration Of Evaluation in Criminal Justice Policy," with Grosskin's explanation and expansion, pp. 9-14. 134Grosskin, ”Toward the Integration of Evaluation in Criminal Justice Policy." 13SRich, Social Science Information and Public Policy Making, p. 16. 136Ross F. Conner, ”The Evaluation of Research Utilization," in Malcolm W. Klein, and Kathie S. 
Teilmann, eds., Handbook gf Evaluation in Criminal Justice, Beverly Hills, Ca.: SAGE Publications, Tea, 'p_. 63? _" —" METHODOLOGY The purpose of this study was to conduct an exploration aimed at providing insight into the use of social science research information as an input to policymaking and decisionmaking in corrections. The study was intended to develop insights and guidelines for understanding the role and potential for research information in a corrections setting. A major Objective involved identification of issues for consideration in the development of a viable conceptual framework upon which to base further study. As Ellickson observes, "Despite more than a decade of federal efforts to foster improved practice in the criminal justice system, we still know little about what factors enhance or obstruct the knowledge utilization process (...)."1 The literature on corrections and correc- tional administration is largely bereft of evidence on use. While it is possible to identify a loosely-knit conceptual framework from research in other service delivery systems, there is scant support in the criminal justice literature for a study on "use". Generalizing from other areas can be problematic. Supporting evidence in criminal justice is frag- mented - as Ellickson notes, "(...) 
often a brief sentence or two in research originally designed to evaluate broader criminal justice outcomes."2 There has, as yet, been no single theoretical base developed from which to proceed in investigating utilization.3 At this stage of knowledge concerning utilization, the research must still be characterized as exploratory, and much extrapolation from the data is required in analyzing findings.4 As Ellickson has noted, the applicability of findings from other social program areas may be subtly or not so subtly altered by the specific context of the criminal justice system.5

In corrections, administrators are faced with what may be viewed as an intractable problem: the correctional system is highly fragmented; correctional agencies are accountable to and/or scrutinized by a large array of "significant others"; and there has been, for at least the past two decades, a growing notion that effectiveness in meeting societal purposes for "rehabilitation" is minimal at best. As Rich has suggested, we know very little about organizational access to information in general, and relevant issues at least involve "sharing of information among subbureaus within an agency, and policymakers' access to information collected by the research personnel within their own agency."6

The complexity of the issues addressed, the possibility of factors being influenced by or stemming directly from the context, and the lack of evidence on use in corrections argue for a study approach that is exploratory and qualitative - one that

(...) seeks to develop hypotheses rooted in an empirical understanding of the knowledge utilization process (...) rather than proceeding immediately to a quantitative test of hypotheses borrowed from other service delivery systems.7

Knowledge utilization is both a complex concept and a complex process.
Ellickson explains,

Understanding what it is and how it takes place is still in its infancy; definitions vary with the discipline of different researchers; research endeavors have not produced cumulative and consistent findings. Efforts to further our understanding of this complex and subtle phenomenon should recognize the need to build a conceptual and empirical base that rests on a thorough exploration of the knowledge utilization process before proceeding to quantitative cross-sectional studies of determinants and outcomes.8

Ellickson observes that, "The case study approach is eminently suited to the qualitative data requirements of studying knowledge utilization."9 Rich has observed that,

Case studies allow one to begin to explore the complexities of the interrelationships between policy outcomes and organizational interests, personality conflict, externally and internally produced information, and the value given to expertise.10

According to Yin, "Case studies are relevant for studying knowledge utilization, because the topic covers a phenomenon that seems inseparable from its context."11 He further notes that knowledge utilization shares the following characteristics with inquiries on decisionmaking:

-- a series of decisions that occur over a long period of time, with no clear beginning or ending points (i.e., not sharply delineated from their temporal context)

-- outcomes whose direct and indirect implications are too complex for single-factor theories

-- a large number of relevant participants

-- situations that are special in terms of agency context, historical moment in time, and other key elements.12

These all point to the applicability of a case study design.

Study Design: Single Case Study

This study examined the use of research information at the top policy level in a state department of corrections.
The case study approach was selected because of its suitability for exploring process and its suitability to the qualitative data requirements for studying utilization. A single case study was selected because of practical constraints. The aim of this study was to initiate new research, provide a basis for further study, and allow for refinement of propositions developed in previous findings from other areas - perhaps stimulating an interest in multiple comparisons of utilization in corrections.

Setting

Access for the study was granted by the chief executive of a state department of corrections. Permission was given to conduct a broad investigation into the use of research information among top-level policy administrators, major unit administrators, and research/information-producing staff at the headquarters level of the department.

The department is a medium-sized department of corrections located in a Southern state. The administrative arrangements and formal relationships to other governmental and criminal justice agencies are similar to those found in most states. In one respect this department is rather atypical: recent state law mandates that the department conduct program evaluation as a basis for assessing the effectiveness of department operations. The information resulting from evaluations is, at least ideally, to be made available to the state governor and the state legislature for consideration in determining the efficacy of the department. This law had been in effect for approximately one year prior to the initiation of this study.

Gaining access for the study involved submission of a concept paper for consideration by the department. A research understanding was ultimately reached which involved a promise that the author control the data developed in this study and also required that the author assure confidentiality for all respondents, as well as confidentiality regarding agency documents and data.
Within-Case Design

Ellickson has explained that,

Case studies typically involve structured or semi-structured interviews with several participants in the change process, allow the researcher to follow up new insights gained from prior interviews, and include the incorporation of data collected from agency documents, news reports, and other sources that shed light on the context within which knowledge utilization occurs.13

This study sampled top-level policy administrators, top unit managers, and research/information-producing personnel within the central headquarters of the organization. The respondents selected were unit managers and above, as well as research/information-producing personnel, having programmatic and policy responsibilities within the department for systemwide planning and operation. The working universe represented the population defined above. Preliminary study revealed a total working universe of 31 potential respondents.

The author conducted the study employing a structured interview schedule that contained mostly open-ended items (see Appendix). The procedure used was flexible, permitting respondents to develop their thoughts and allowing the researcher to pursue leads that emerged. Interviews averaged approximately ninety minutes in length and were tape-recorded and later transcribed.

In addition to the interview data, a variety of other sources provided evidence on utilization or on the processes and context within which use might occur. These sources included agency records, project memoranda and documents, and illustrative materials such as study reports, journals, and other information offered by respondents. These materials were used to examine the flow of information within the department, to document use where possible, and to provide information on structure within the organization.
General Analysis Plan

On the basis of the interviews and collateral evidence, the study sought to assess the general incorporation of social science research information as an input to policy and program decision in the department. Retrospective as well as prospective responses were elicited on the role and potential for the use of scientific knowledge. Rich has observed that, "Clearly, decisionmaking is not a strictly causal process in which resources and decisions are directly related."14 The approach taken was intended to provide depth and variety within a limited context; however, the study was not oriented to a complete assessment of the decisionmaking process.

The questions central to the study are explicated on pages 82 and 83. The general plan for analysis involved an attempt to address these basic questions in order to establish a relative notion concerning types and levels of use among the respondents, and it involved an attempt to derive inferences concerning the propositions related to "organizational learning".

Methods of Analysis

The interviews provided information on a respondent's position, access to decision, level of educational attainment, pattern of research information use on the job, experiences with research information use, sources of research information used, general attitudes toward research information, and perceptions of factors that may inhibit or facilitate the use of research information in the department and in corrections in general. Coding of open-ended questions corresponded to categories of variables established on the basis of a subjective analysis of the content of responses, taken together with the basic questions and basic framework established.
Collateral data, in the form of agency documents, project memoranda, and other illustrative material, were used to verify some responses, provide evidence of use where possible, and develop data on structure and dissemination networks. The data were submitted to a qualitative analysis.

Limits of the Study

This study was limited by its nature as a one-shot case study. There was a limited data base to draw upon in reaching conclusions, there was a need to extrapolate from the data, and the number of respondents was not sufficient to provide statistically significant results. The approach limits the credibility that can be given to generalizations concerning the results. This study was limited to one organization during a particular period in its development.

It is important to recognize that, as Rich states, "The factors contributing to a decision to collect or utilize information are complex and do not lend themselves easily to causal analysis."15 Rich further explains,

One of the major methodological problems facing research on utilization is that in the case of policymaking, one is faced with the problem of trying to conceptualize processes and styles of problem-solving - both of which have diverse roots that have not been traced or operationalized, and, more importantly, put in the form of empirically testable propositions.16

Thus, effort must be advanced toward both developing these propositions and relating utilization to policymaking and program decision. Though limited, the approach taken in this study allowed for in-depth examination of emergent foci, and it provided the opportunity to apply and refine previous findings from other areas. The objective of the study was cumulative advance, and the attempt to develop and provide some assessment of the viability of a conceptual framework rooted in the literature on organizations and decisionmaking was facilitated by the approach taken.
END-NOTES: METHODOLOGY

1. Phyllis Ellickson, "Knowledge Utilization in Local Criminal Justice Agencies: A Conceptual Framework," a concept paper, Santa Monica, Ca.: Rand Corporation, 1981, p. 52.
2. Ibid.
3. Robert F. Rich, Social Science Information and Public Policy Making, San Francisco, Ca.: Jossey-Bass Publishers, 1981, p. 15.
4. Ibid.
5. Ellickson, "Knowledge Utilization," p. 52.
6. Rich, Social Science Information and Public Policy Making, p. 21.
7. Ellickson, "Knowledge Utilization," pp. 52-53.
8. Ibid., p. 54.
9. Ibid., p. 53.
10. Rich, Social Science Information and Public Policy Making, p. 21.
11. Robert K. Yin, "The Case Study as a Serious Research Strategy," Knowledge, 3:1 (September 1981), p. 99.
12. Ibid.
13. Ellickson, "Knowledge Utilization," p. 53.
14. Rich, Social Science Information and Public Policy Making, p. 22.
15. Ibid., p. 21.
16. Ibid., p. 33.

ANALYSIS

The purpose of this study was to investigate the use of social science research information as an input to policymaking/decisionmaking at the headquarters level in a state department of corrections. Study objectives included developing insights and guidelines for understanding levels and types of use and for understanding the role and potential for social science research utilization in the department.

The organization selected for this study is a state department of corrections in a Southern state. Gaining access for the study required reaching a research understanding involving a promise of confidentiality for the respondents and the organization in general, and a promise of confidentiality regarding sensitive information encountered. In order to comply with this request, descriptions of the organization and of the respondents are generalized. An effort is made to avoid using nomenclature, titles, and other designations which may reveal identities.
In describing or alluding to the organization or various positions within it, and in otherwise discussing results, a set of alternate designations is employed. It must also be noted that, while most respondent comments presented in this analysis are direct quotes, some have been paraphrased to avoid identifying the respondent. Analysis of the data is presented in the following section.

Twenty-seven respondents participated in the study, representing the executive level of management, the top rank of mid-management, and the primary research/information-producing personnel in the department. The study represents an initial effort in an area in which empirical research has been limited.

The analysis is qualitative in nature. There is an attempt to focus on diversity, rather than quantity, in examining use in the department. The categories used in this analysis result from the framework established in the previous literature review and from subjective judgment based on a content analysis of the data collected. The questions drawn from Weiss and from Grosskin form the foundation of the attempt to establish baseline evidence on patterns of use/non-use in the department.1 The basic questions addressed are, by category:

1. Type of information: Is social science research information available? If so, what information is available?

2. Type of use: If social science research information is used, how is the information used? What types of use are made? Is the use more "immediate", or is use a more diffuse process, occurring over time?

3. User identity: Who uses social science research information? Is this information used by individuals? By groups? Are there particular individuals or groups who are more likely to use research information?

4. Scope of use: What is the scope of use in the department? Is research information systematically preferred in the department? Is it put to greater/equal/less use than other forms of information?
Are there particular issues or areas where research information is more likely to find use? Where is research information likely to be discounted or ignored?

The study also included an attempt to check the viability of notions associating use/non-use with the propositions derived from the models of decisionmaking and organizational learning identified, and with variables related to the "bureaucratization" of the information use process. A portion of the data collection was devoted to investigating the dissemination process in the department and to obtaining information from participants regarding behaviors which may be associated with the possible "bureaucratization" of the utilization process. The work completed by Rich in his assessment of the use of CNS information in federal policymaking was drawn upon to suggest several possible categories for analysis.2 In addition, leeway was left to explore emergent foci and to allow the creation of categories based on content analysis of the data collected.

This analysis initially presents information describing the organization and the study participants, as well as several necessary research notes. The bulk of the analysis addresses the basic questions set forth. This is complemented by an inferential analysis primarily aimed at addressing questions concerning organizational learning and "bureaucratization".

The Organization

This organization is a state department of corrections headed by a chief executive who is appointed by, and is primarily accountable to, the governor of the state. Upper-level management positions are unclassified, appointed positions (i.e., they are not civil service positions). The formal relationships of the department to the office of the governor and to other state agencies are typical of those one would expect to find in most states. The department has a complex mission to perform and is confronted by a turbulent environment.
Leaders must compete for resources in a state which is economically more stable than some but which, at the time of the study, faced a struggle to avoid and to reconcile large budget deficits. In recent years, especially within the year prior to the study, the department has come under an increasing amount of scrutiny by an array of "significant others":

-- The department has been and continues to be operating under a far-reaching federal court order affecting major policy decisions and most aspects of day-to-day administration and operation.

-- The state legislature, primarily through its corrections committees, has increased its scrutiny of the department's operations. During the year prior to this study, legislation was enacted which resulted in reorganization of one major element of the department, and increased legislative interest has demanded greater attention from upper-level executives in the department.

-- The department is experiencing a major problem with overcrowding. Local jails are forced to hold prisoners awaiting transfer to state institutions - some for long periods of time. News media and several interest groups are pressing the department's leadership, the governor's office, and the state legislature to alleviate overcrowded conditions. The "ripple effect" of holding large numbers of state prisoners in local facilities has created problems for sheriffs' departments and for local government. Department leaders are under pressure to provide an answer to this situation.

Paradoxically, there is a great amount of public pressure for judges in this state to sentence more offenders to prison for increasingly lengthy sentences, and there has been public pressure placed on the state parole authority to be increasingly selective in granting parole. These conditions combine to produce a climate of uncertainty for the leadership of the department.
The overall situation has been reflected in the development of a reactive posture most department members refer to as a "crisis management" situation.

The department provides correctional services including incarceration and institutional programs for both adults and juveniles, probation and parole supervision for adults, and administration and leadership for a system operating on a budget of approximately 125 million dollars per year. The department operates 14 institutions. There are approximately 10,000 adults and approximately 1,400 juveniles incarcerated in the department's institutions and facilities. Probation and parole services are provided by the department for approximately 20,000 persons under the state's jurisdiction. The department employs approximately 5,000 persons. More than 60 per cent of the personnel are security/custody personnel working in the department's institutions.

Internal Structure

The department is organized along functional lines. A headquarters element carries systemwide responsibility for coordination and overall administration of the system. The administrators within this element constitute an executive policy level within the organization (see Figure 1).

FIGURE 1: Departmental Organization
[Organization chart: Chief Executive; Personal Executive Officer; Under Chief Executive; Deputy Chief Executive; staff units including Legal, Evaluation, Mental Health, Statistics, Special Investigations, Information/Public Relations, Budget, Education, Personnel/Training, Accounting, and Prison Enterprises; Assistant Chief Executives for Adult, Juvenile, and Probation/Parole functions, further subdivided into program sections.]

Major functional subunits and what may best be termed "independent" units are headed by an "assistant chief executive" or a "unit administrator".
Each of these administrators functions as the person responsible for that subunit and has the responsibility for implementing policy and making operational decisions for the unit. These administrators may recommend systemwide policy, in most cases can enact temporary policy changes, and at times participate collectively with the policy administrators in making decisions to address issues confronting the department. The "unit administrators" enjoy a high degree of autonomy in managing the activities of their elements; yet, in a sense, these administrators function as the top echelon of mid-management. Each of these managers has a staff which may include personal executive assistants. Each major unit is further broken down along functional lines into program sections. The management personnel and research/information-producing personnel from this "unit management" level up to the chief executive form the working universe for this study.

The Study Participants

The study sample was confined to top policy administrators, top unit administrators, and research/information-producing personnel located in the headquarters element of the department. A working universe was identified which included 31 potential participants. Ultimately, 27 of the personnel identified participated in the study: six policy administrators, five research/information-producers, and sixteen unit administrators. All participants had direct access to the decisionmaking/policymaking process in the department. The four missing participants resulted from a change in staffing which occurred during the course of the study and left four positions unfilled during data collection.

Study respondents were grouped as "policy administrators", "unit management personnel", or "research/information-producing personnel". "Policy administrators" were those persons in positions which carried authority and responsibility extending across all subunits in the department.
These persons were primarily responsible for the development and supervision of systemwide policy and action. "Unit management personnel" were distinguished as the top management personnel having authority and responsibility for respective major subunits of the department. These persons functioned as the administrators of major and "independent" elements of the department and were accountable directly to policy administrators. Although their responsibilities were often far-reaching, and the unit management personnel often engaged in collective decisionmaking with the policy administrators, their formal supervisory authority and responsibilities were confined to their respective sub-elements.

"Research/information-producing personnel" were executive staff persons responsible for the production and/or acquisition, assimilation, and presentation of information regarding the system and its operations. This category included those personnel formally designated as research/information-producing personnel in the headquarters element of the department. Their responsibilities were systemwide, and these persons were directly accountable to policy administrators in the department.

The credentials of the respondents were impressive. Two participants possessed doctoral degrees; two possessed master's degrees plus work beyond the master's level; four possessed master's degrees with no work beyond the master's; four possessed a bachelor's degree plus work beyond the bachelor's degree; twelve possessed a bachelor's degree with no work beyond it; and three had attained an educational level below the bachelor's degree. The respondents' academic credentials represented backgrounds in a variety of disciplines. Social science degrees, such as social work, criminal justice, sociology, psychology, and anthropology, were predominant.
Other disciplines were also represented, including economics, engineering, accounting, and business, among others. Participants' levels of experience in the department and in corrections ranged from three months to twenty years. The median length of service within the department was 6.0 years; the average length of service in corrections was 7.2 years. Twelve of the participants reported that they had taken college- or university-level coursework in research methods and/or statistics. Six of the participants reported that they had attended some form of in-service education related to research and/or statistics; these six were also among the twelve who had taken formal coursework in these areas.

Overall, the study participants appeared to be well-qualified professionals. Given their collective credentials, their access to policymaking and decisionmaking, and their acquaintance with the organization and, in some cases, with research, one must conclude that the study participants were well-qualified to address the issues present in this research. The respondents were an impressive group, and all were quite candid and most cooperative in providing information for this study.

RESEARCH NOTE: "Policy" in the Department

The term "policy" may be broadly defined as "a pattern of action that resolves conflicting claims or establishes incentives for cooperation among those who share goals but find it irrational to cooperate."3 The crucial element of this definition is an emphasis on a patterned attempt to resolve or manage disputes or to provide a rational means-end relationship for action involving members of an organization. In order to avoid a problem in the use of the terms "policy" and "program decision" in this study, it was necessary to come to an understanding of the meaning attached to these terms by members of the department. The department has definite "levels" of decision. "Policy" is established at a systemwide level.
Within the department, the term "policy" is associated with standard operating procedures or parameters. One unit administrator commented:

We have two levels of policy: a 'memorandum policy', which may be initiated at the highest level or often can be initiated by a [unit administrator]; and we have a policy book or manual of SOP's. Memorandum policy is of short duration; SOP's require a formal change from the top level. Very few SOP's get rewritten.

Members of the department most closely associated the term "program decision" with adjustments to, or planning concerning, an anticipated or ongoing operational-level activity. The term "policy", as broadly defined earlier, may encompass both "policy" and "program decision". It is important to realize that the members of the department associate "policy" with formal statements of procedure, and they associate "program decision" more with the ongoing process of managing. This distinction was important in exploring the authority relationships associated with "policy" and "program decision" in the department. Unit administrators have a high degree of autonomy in making program adjustments and may issue transitory "memorandum policy". Permanent "policy" changes occur at the highest administrative level, as would most program decisions of a major nature, such as decisions to terminate or significantly alter a program.

During the study it was necessary to attend to the distinctions made by participants regarding the terms "policy" and "program decision". In analyzing the data it was necessary to understand the comments of a respondent in relation to the images evoked by various questions. The distinctions did not present a large problem in the conduct of the study or in the analysis. However, one should be aware of the various meanings associated with these terms in considering this study and in thinking about the possible design and conduct of future studies of utilization.
RESEARCH NOTE: "Research" in the Department

Each of the study respondents was asked questions concerning the use/non-use of social science research information. The author provided respondents with the following definition:

(...) empirical information developed using the methods of social science, whether developed in-house or by some outside source. For example, the results of program evaluation studies, the results of basic research which may have been disseminated, the results of studies conducted in other organizations, but not limited to these particular forms.

There was a great degree of variability in the images and meanings that the study participants associated with the term "social science research". In most instances, the definition provided sufficiently narrowed the notion, or at least led to further questions and eventual clarification of the meaning in the context of the study. It is important to realize that members of the department conceive of "research", and therefore conceive of the processes of doing and using research, in a variety of ways. As Lindblom and Cohen point out, the term "research" may subtend a very broad range of actual and potential activities.4 They consider the perceptions of practitioners, as well as the academic definitions generally associated with "research", to arrive at the suggestion that attention must be paid to variations in conceptions of "research". The participants in this study did evidence wide variation in their conceptions of research and in their notions of the processes and outcomes associated with "research".
Activities ranging from tightly-designed and rather rigorously-conducted empirical investigations, through investigations conducted with lessening degrees of rigor and soundness of design, all the way to activities such as compiling basic information (as in daily census totals, figures concerning expenditures, counts on bedspace available, and so on) that decisionmakers expected to heuristically evaluate and put to some use, received the designation "research". Further, it must be noted that most participants did not necessarily conceive of "researchers" as academicians or consultants with impressive credentials. For most, "researchers" could be anyone to whom the task of doing "research" might fall. Some study participants (notably those who had taken formal coursework in research methods and statistics) tended to visualize "research" and the conduct of "research" from what one might term an "academic orientation". On the other hand, some respondents conceived of "research" as any type of data compilation. In between, there were respondents who maintained other variations in their visualizations of "research". For example, some respondents tended to think of "research" information as any information produced in-house. Other respondents tended to differentiate "research" from other activities more or less according to the scale of the investigation (many persons involved; a large data collection effort) or according to whether the activity was or was not a "special" effort. The variability among respondents in their thinking about "research" must be taken into account in attempts to capture "use" through an interview technique. One must be concerned with whether the quality of the data is affected by this variability and must attend to this issue in any effort to analyze the data. In the literature on utilization, there seems to have been little emphasis on the problems associated with varying visualizations of "research".
Particularly in thinking about future studies and in attempts to develop interpretational models in future works, it may be important to address issues associated with variability in visualizing "research" among all those involved in a study.

RESEARCH NOTE: Organizational Climate

Organizations have patterns to their ways of doing things that characterize their climates. If one considers time and resources, emphasis centered on both long-range and short-range planning, and commitment to activities associated with a greater degree of rationality in choosing courses of action, one may begin to understand the character of management in a given organization. Every respondent in this study provided a characterization of the management climate in the department in terms one could appropriately reduce to "crisis-oriented". Each respondent described his/her own activities, and the activities of other members of the department, as "reactive" and based on a very short time horizon. One policy administrator stated:

We do very little planning in the department. We do absolutely no planning regarding the use of information.

Another policy administrator provided the following assessment:

I hate to confess this, but the department is probably one of the finest examples of crisis management. We'd like to get away from that, but we are constantly under the gun - caught between federal government people, courts, the administration, and cost considerations.

One unit administrator, a person with approximately fifteen years' experience in the organization, suggested that a reactive posture and the absence of planning are not recently acquired characteristics in the department, or in corrections in general:

Corrections people have the attitude that they operate more 'by the seat of their pants'. This is the way it's been done for fifty years, so why change? Corrections people are not receptive to new ideas. We resist change.
We prefer experience to learning.

One researcher discussed the effects of a crisis orientation on the organization in terms of research utilization:

There is a potential for research information [to assist us]. We are not realizing it because of budget, crises, and our need to react. We have turned our attention to efficiency, economy. There are too many demands to allow us to conduct or even consider research.

These responses, and other similar responses from participants, point out the reactive nature of ongoing activities in the department. Although the effect of this management climate on research use will be discussed more fully in following sections, it is important at this point to consider the overall organizational outlook. These comments point to the reactive climate in the department as a critical overlay influencing all activities and constricting the perceived range of possible actions. The overall necessity to react to pressures, to "put out fires", as a primary mode of management was, in general, discussed by respondents as a deficiency in the organization. However, the time pressures and the necessity to operate in a reactive mode were viewed by respondents as "real" and not contrived. In addition, it appeared that some respondents viewed a reactive posture as a "way of life" in corrections, particularly in the department. Consequently, incremental advances based on experience and on forced change, rather than on causal modeling and/or long-range planning, seemed to present a reasonable and realistic mode of adaptation to members of the department - viewed as a necessary or real "fact" of past and present existence in the department. It is important to turn again to the basic concepts and notions related to "organizational learning".
It seems apparent that one must begin to consider the character of "learning" in the department and restate two central questions: 1) are expectations for a prominent role for scientific knowledge in the policymaking process in the department reasonable?; and 2) are prescriptions for institutionalizing, or routinizing, research-based learning, as a central means of providing information for decisionmaking/policymaking, viable, given the basic assumptions underlying rational, analytic decisionmaking and "satisficing"? One must hold these questions in mind as the answers to more basic questions are attempted.

A Basic Question: Is Empirical Information Available?

Some studies of use are initiated to assess utilization of particular information known to be available in certain organizations. Some studies follow an initial assumption that research information, because it exists and is related to a particular policy area, will find use, although there may not be a great deal of use. This study was oriented to discovering the respondents' experience of research utilization by attempting to examine the processes and behaviors in the department which systematically affect the potential for utilization of research information. To begin with, as noted by Farace, Monge, and Russell, "What is known in an organization, and who knows it, are obviously very important in determining the overall functioning of an organization."5 Grosskin indicates that the basic starting point in a broadly-based investigation of use/non-use involves determining what research information, if any, is available to members of the organization.6 One could attempt to obtain a quantitative description of the relative amounts of various types of data available in an organization, but this would be a virtual impossibility. A quantitative approach to the question of information availability did not seem appropriate to this study.
In addressing the question of whether research information is available in the department, an attempt was made to capture, qualitatively, the diversity in forms of social science research information available, as well as to identify the sources of such information. In general, the question, "Is social science research information available to members of the department?", must be answered with a qualified "Yes". Some empirical information is produced in-house (a result of various initiatives), and research information is encountered on a rather regular basis from a number of outside sources. An examination of the information encountered in the study, and the sources of this information, provides a foundation for a more particularized analysis of patterns of utilization in the department.

Research Information from External Sources

A substantial amount of research-produced information is physically present in various places in the department and is, therefore, available to various members of the department. In many instances this information has not been acquired for specific purposes, and in many instances there has been no discernible attempt to categorize the information in terms of potential identifiable use by members of the organization. In other words, there is a great deal of research-produced data which is merely present and may find some use on an ad hoc and rather haphazard basis by various individuals in the department. If one characterizes "information" in terms of its potential for reducing uncertainty, implying application of perceptions for potential use, most of these research products must be said to constitute "data", or pieces of knowledge, which happen to be in the possession of certain members of the department. Much of the research-based information present in the department could be categorized as "routinely encountered materials".
Respondents were asked whether they received research-based information on a regular basis from outside sources. They were asked, where possible, to provide examples of the information they receive, or to identify the types and sources of the information they receive. All respondents were able to identify research-based knowledge received by individuals in the department on a rather regular basis, although several respondents indicated that they did not personally receive such information. Three primary categories of research-based information are received on a rather regular basis by various individuals in the department. These include:

1) professional journals and publications: e.g., The ACA Journal; Corrections Magazine; Federal Probation; psychological journals; juvenile justice journals; professional business journals; and others.

2) materials disseminated from national clearinghouses and from federal agencies, such as NIC and NCJRS.

3) materials disseminated from other agencies concerning research or studies conducted, such as the study of the New Mexico prison riots and risk prediction studies carried out in other states.

An important distinction relevant to consideration of these "routinely encountered materials" and their availability in the department can be drawn from the work of Leon Brillouin.7 According to Farace, Monge, and Russell, Brillouin differentiated between absolute and distributed 'information'.

If one surveyed the number of different pieces of knowledge that are present among the organization's members and contained in documents and other forms of storage, then [one would know] how much absolute information the organization possessed. Distributed information, on the other hand, refers to the relative dispersal through a given system of a particular piece of knowledge.8

It has already been noted that literally counting the "pieces of knowledge" present in the department would be virtually impossible.
However, the notions of "absolute" and "distributed" information are quite relevant to consideration of the "routinely encountered" research information in the department. In attempting to gain a perspective on the types of empirical information available to members of the department, it became immediately clear that, at the levels of information acquisition and dissemination, the department has a "distributed-information" problem. Leaving aside for the present time the direct question of use, there are several points that deserve attention. Even though all respondents could identify empirically-produced knowledge routinely collected from outside sources, each respondent also indicated that the department had no policy and no central or systematic mechanism for the assimilation and distribution of empirical information from external sources. One researcher commented:

I do not personally subscribe to journals in this area. I am personally on no dissemination lists. To my knowledge the department, as an organization, is on no dissemination lists. We use some clearinghouse information on an as-needed basis. If I need something I go to [a nearby university library] or contact NIC or somebody.

A policy administrator provided the following response:

A lot of this material is looked on as a frivolous encumbrance. No, we do not centrally acquire this material on a day-to-day basis and distribute it. There is actually little forethought about this. [We do not] acknowledge the utility of acquiring this kind of information in doing business. [We know] there is a large body of knowledge out there, but a lot of it is not valuable. There is little utility in it in addressing day-to-day problems.

Yet another researcher more directly assessed this issue:

Information [of this type] comes in at different points. It may be sent on to so and so, but this is [done] purely on personal initiative.
Information to be sent on to an individual is done on the basis of who you think might be interested. If information comes in to [the chief executive's office] it might be disseminated. A lot of information comes in. We do crisis-oriented search.

Whatever the nature of the research-based information that is routinely encountered by members of the organization, there is no method or coordination for its assessment or dissemination in terms of possible application or potential use. One must realize that: 1) a large amount of empirically-based knowledge is encountered, and in some way retained, even if not assimilated, by members of the department; 2) there is no scheme for coordinating the acquisition of such knowledge or for evaluating the potential usefulness or value of such knowledge; 3) there are no criteria and no established policies for determining whether to review, discard, or store this knowledge, and no formal priorities are established regarding this material; and 4) there is no policy or central scheme for disseminating this knowledge, and there is no attention paid to the development of guidelines concerning a continual search for new knowledge encountered in this way. As one respondent noted, the members of the department conduct "crisis-oriented search". In terms of information distribution throughout the department, one is led to question the lack of coherence and absence of policies regarding information. One is also led to consider search and information needs and to wonder whether, at times, information which is needed might not already be present but not in the possession of, and unknown to, those who might use it.

RESEARCH NOTE: Enacted Information Space

Within an organization, individuals, through a process of perception, evaluation, and assimilation, determine what data is to be perceived as "information".
Farace, Monge, and Russell note that, "Information is basically dependent on the recognition of distinctive, repetitive patterns that occur in the flow of [data] into and through an organization."9 Each member of an organization perceives some regularity to the flow of knowledge or data. When this recognition, related to referents in the organization, occurs, "information" may be said to exist. According to Farace, Monge, and Russell,

Each member [of a complex organization] has a set of information and communication perceptions (...) which represent a particular view of the surrounding environment. [The concept of 'enacted information space'] puts an active perspective on the role played by the organization's members in arriving at a definition of what constitutes the limits of what is perceived [and what is communicated] as information concerning the organization and its internal and external environments.

Individuals enact an "information space", or a perceptual framework, concerning what is to be viewed as "information". A "composite enactment", based on the various viewpoints of the organization's members, can be said to exist. Farace, Monge, and Russell further explain,

The term 'enactment' puts a definitive emphasis on the fact that an information space is something that is literally constructed by the members of the organization, and hence is subject to change from time to time, and liable to differ from member to member.11

An "information space" may not be said to be an "objective" phenomenon, in the sense that it does not exist beyond the construction enacted by the various members of the organization. The concept of "enacted information space" may be useful in understanding the use/non-use of research information.
Individually and collectively, the organization's members attune themselves to various data that, on a regular basis, are perceived to have value as "information". When one considers the notion of factored problem-solving and selective attention to problems, the concept's usefulness becomes apparent. A collective emphasis on types and sources of information may be manifest as explicit or implicit policy regarding what types and/or sources of "information" are appropriate by problem area, by issue, by organizational interest area, and so on. One may expect that individuals, groups, perhaps the entire collective membership of an organization, enact an "information space" in that certain types and/or sources of knowledge come to be seen as appropriate for decisionmaking or for other purposes. Individually, and collectively within subunits, members may tend to develop a common perception or similar perceptions regarding the use of research information. Though a consciously arrived-at, explicit policy may not be in evidence, implicit expectations may be communicated concerning the appropriateness of research information (or any other information) as an input for decisionmaking or other purposes. This may occur within subunits, across subunits, within hierarchical levels, or transcending hierarchical levels. In the department, there is a notable absence of formal, explicit information policy. However, the notion that implicit policies develop and may be connected to the "local rationality" associated with factored problem-solving and to variables associated with "bureaucratic interest" is an important one. This notion deserves further thought on its application to understanding the potential for research information as an input to policy in complex organizations where "learning" is based on referents other than the normative prescriptions associated with an analytic, causal learning process.
Consultants and Other Invited Researchers

A limited amount of empirically-produced information is available, in the sense of being present for consideration by various individuals in the department, in the form of reports produced by outside consultants and by researchers allowed to conduct empirical investigations for various reasons (primarily theses, dissertations, and a limited amount of basic research) with the cooperation of the department. Only two consultant reports were specifically referred to by the respondents: one was a technical assistance project on training employees, which included surveys and curriculum development; and the other was an agricultural project involving university researchers, which may not be conceived of strictly as social science research, though the project involved development of scientific knowledge. No specific references were made to other external researchers, although two respondents mentioned that master's and doctoral students had been allowed to conduct research in cooperation with the department. Respondents' perceptions of the usefulness of this type of information are addressed in a subsequent section of this analysis.

In-house Research: Internally-Produced Empirical Information

At this point, it is again important to realize that members of the organization visualize "research" with a great deal of variability. One must also be reminded of Lindblom and Cohen's notion of professional social inquiry - involving investigations of many sorts and levels of rigor conducted by professional practitioners using the various tools of social science research methodology.12 The categorization of "in-house research" reflects a subjective judgment based on the author's assessment of respondent evidence and personal review of documents, project memoranda, and other supporting evidence made available during data collection. As with other observations, the enumeration is not exhaustive.
The focus is on the diversity of evidence observed and not the absolute amount of research information present in the department. Members of the department conduct "empirical" studies ranging from rigorous to "soft" designs and from basic research to various forms of applied research, some of which may normally be designated "program evaluation", "program audit", or perhaps "operations research". The department's structure includes a program evaluation research section, located within a division of management and budget. This section operates to fulfill the following explicit purpose (as stated in an official publication of the department):

This (...) section (...) is responsible for management and program analysis within the [department]. This staff works closely with the Internal Auditors, within the [office of the chief executive], in providing programmatic audit capabilities.

In-house research activities are not limited to the program evaluation section. There is a research and statistics section, as well as a program audit section, and various members of the department conduct research activities utilizing the tools of social science methodology. The only instance of basic research observed during data collection involved members of the mental health services unit within the department. The unit administrator, a clinical psychologist with a Ph.D., provided evidence of several basic and applied projects completed or underway. The unit administrator also provided evidence of recent publication in a scholarly journal and made available a copy of another paper submitted for publication -- tangible evidence that at least some members of the department were engaged in basic research and were exporting knowledge for consumption and dissemination. Several respondents pointed to investigative activities underway or recently completed by members of their units.
In some instances, these projects fit the notion of applied research, and in some instances, one would have had to stretch even the notion of "P.S.I." to refer loosely to the activities as "research". In some cases the extent of adherence to and reliance on methods of social science research was so slight that a number of projects would be more correctly categorized as "staff investigations" rather than "research". The program evaluation section was involved in two major studies during the course of data collection in this study. The section had been in existence for less than two years prior to the beginning of this study. During its relatively short existence, the program evaluation section had produced three major studies. Each of these studies involved survey techniques combined with collection of collateral evidence and qualitative as well as quantitative analysis of the data obtained. Each of these studies would fall within the broad categorical designation "process evaluation", each being concerned with assessments of operational efficiency of major ongoing programs within the department, and each presenting recommendations aimed at revising administrative policy, program staffing and objectives, and structural arrangements regarding the operation of the programs. It must be noted that each of the three studies mentioned above was initiated from the policy administrator level of the department. The studies were conducted to answer direct (though sometimes quite broad) questions concerning the operation of specific units in the department, and each study was conducted by a team of personnel not limited to members of the program evaluation section. Reports were prepared under the direction of the program evaluation unit administrator and were delivered initially to members of the chief executive's staff. The reports were subsequently forwarded to persons in the department on a "need-to-know" basis.
Again, there was research-based information available within the department. The information produced in-house was present in various forms in various places. Only in the instance of the evaluation reports did it seem that any formal distribution of research information had occurred.

Technical Knowledge Present in the Form of Individual Knowledge and Expertise

There was no way to ascertain the extent of knowledge possessed by members of the organization which had resulted from individual education and from individual and collective experience. One must, however, emphasize that the credentials of the individuals participating in the study were impressive and that many of the respondents possessed considerable experience in the organization and in the field of corrections. It is important to emphasize a comment made by one unit administrator that "many of these people are 'experts in their own minds'" and to re-emphasize Rich's suggestion that information in the form of the knowledge and expertise residing in the organization's members forms the primary base of information for upper-level leadership in a bureaucratic organization.13 Concepts such as "organizational learning", and those associated with different models of the learning process in complex organizations, begin to provide the avenue for relating this base of knowledge to expectations for the potential role and use/non-use of research information.

A Basic Question: Is Research Used?

A major methodological problem in examining "use" results from the dilemma presented by attempting to capture a rather ill-defined phenomenon that occurs in ways sometimes difficult to detect. Rich points out:

Single reports are typically not used or applied in themselves. Staff members accumulate evidence concerning a policy problem, summarize it, and send a report based on compiled evidence to a policymaker.
Very seldom can a policymaker point to a particular study that had a definite influence upon a decision.14

The term "impact" is generally reserved to denote documentable incorporation of empirical findings in a particular decision - the situation in this study termed "instrumental use". As Rich explains, "This kind of impact represents the only truly objective criterion for the measurement of [utilization]."15 In this study there were only three instances of documentable use. These instances met the objective criterion of observable incorporation of findings and/or recommendations of a study into a policy decision. However, the conditions surrounding the use of this information require some qualification. These instances are discussed in subsequent sections.

The question, "Is research information used in the department?", is addressed in this section of the analysis. "Use" is a concept that is neither sharply defined nor easily captured. In this study, three general categories of use were adopted in order to allow a broad frame of reference and, hopefully, to incorporate within the framework as many "types" of use as could be identified in the department. The study framework established the following categories of use:

1) instrumental use: documentable instances of observable incorporation of findings and/or recommendations of a study in a policy decision. In this study only direct evidence of incorporation, such as a policy statement, directive, decision memorandum, and so on, directly referring to and showing reliance on a study would be categorized as instrumental use.

2) conceptual use: research information used by decisionmakers or research/information-producing personnel as a tool for enlightenment in sorting out assumptions, clarifying logic, or arriving at a better understanding of the range of possible options and constraints for a particular decision or set of decisions would be categorized as conceptual use.
3) symbolic use: utilization of empirical findings and/or recommendations to substantiate a previously held position, marshal support, or cast doubt on propositions at odds with those held by the user, or other such persuasive uses would be categorized as symbolic use.16

As one immediately realizes, any attempt to operationalize "use" in any way other than documented instances of "impact" leads to the development of expansive, rather "soft" categories and results in the need for a great deal of subjective judgment and extrapolation from the data. This study takes a qualitative approach, seeking to portray the richness in diversity, or conversely the lack of diversity, in utilization while developing a general notion concerning whether utilization in the department is substantial, non-existent, in between, or close to either of the poles of this continuum. In delimiting the scope of inquiry, two decisions were made. First, the study would be confined to examining use at the top level of management and information production in the department. Second, "use" would refer to the entrance of research information into the policy process. Use was sought at the following levels of outcome in the policy process: "availability" and "consideration", with "consideration" to include the decision phase of the process. This meant that the study was concerned with the "acquisition-to-decision" phases of the policy cycle and did not address the phases of implementation and beyond. One must again point out that the study 1) was of relatively short duration; 2) involved a relatively small, but important, group of participants; and 3) was relatively open in its focus and structure in order to allow the researcher to pursue emergent foci. The primary sources of data relevant to answering the question, "Is research used in the department?", were the study participants.
Although some evidence of use was present in project memoranda, the most enlightening evidence resulted from responses to questions concerning utilization asked of study participants. In analyzing the responses it was necessary to collapse some questions (the open-ended nature of most questions tended to elicit redundant comments at times); and it was also necessary both to extrapolate from the data and subjectively to categorize responses. Five questions from the interview schedule elicited responses relevant to answering the basic question regarding use/non-use. These were (see Appendix):

-- Can you cite any instances in which you personally have used research findings in reaching a policy or program decision? In changing a policy or program? How was the research used?

-- Can you cite any instances of the use of research findings by others in reaching a policy decision or changing a policy or program? How was the research used?

-- Do you regularly review research findings in your area? From what sources? For example, from empirical format journals? Information disseminated from other agencies? Other?

-- Are other forms of information put to greater/equal/less use than research findings in the development of policy or in making program decisions in the department?

-- What are your general feelings about the use of research information for policy development and program decision in the organization? In corrections in general? In your activity/unit?

Taken together, the responses to these items present an overall view of "use" and perceptions of "use" at the top level of management in the department. It is perhaps best to first discuss participants' "general feelings" or perceptions about the usefulness of empirical information.
One may recall that a principal finding of Caplan and associates was that, although utilization in public organizations is relatively low, top-level administrators evidence a high degree of interest in research information in terms of its potential contribution to policy formulation.17

Respondents' Perceptions of Usefulness of Research Information

Respondents' comments were varied concerning the potential usefulness of research information. Most seemed to accept the general notion that research information could, maybe should, be used. Most respondents seemed to take a pragmatic approach to this question in that most provided answers on two levels: 1) first, on an abstract, perhaps normatively-based level - is research information inherently useful?; and 2) secondly (but overlapping), is research information useful in the context in which we (the respondents) live and work?

Respondent comments concerning the potential usefulness of research information can be grouped along a continuum described by three positions: 1) potentially useful, but ...; 2) perhaps useful, but ...; and 3) probably not useful. These positions cannot be sharply defined, and the nature of the question elicited responses sometimes difficult to interpret. Overall, the responses were categorized as in Table 1 below.

TABLE 1
Overall Perceptions of Usefulness

Position                       Number    Percent
potentially useful, but...       12       44.4
perhaps useful, but...           10       37.0
probably not useful               5       18.5

There was a definite difference in the pattern of responses between researchers/information-producers and all others. Researchers/information-producers all saw research as "potentially useful, but..." while the other two respondent groups tended to exhibit more dispersed patterns in viewing the potential usefulness of research information. These patterns are indicated in Table 2 below.

TABLE 2
Perceptions of Usefulness by Respondent Group

Group                               Potentially    Perhaps    Probably Not
                                    Useful         Useful     Useful
top policy administrators                1             3            2
unit management                          6             7            3
researchers/information producers        5             0            0

The difference noted above is not particularly surprising, in that one might expect those who have the greatest commitment to research, in terms of specific preparation and work orientation, to feel more positive about, and perhaps advocate, the use of research information. Several comments are presented below to illustrate positions taken by respondents regarding the potential usefulness of research information in the department.

Some respondents were quite positive in ascribing an inherent utility to research information in general. As may be seen, even these respondents tended to express doubts when reflecting on the potential usefulness of research information in the department. One unit administrator comments:

The potential is a '10', but we have budget problems, manpower problems. Nothing we do in planning is unaffected by these problems. We tie up three or four people [in this unit] just to keep the paper flowing.

A researcher states:

Research information has a lot of utility. However, research and statistics scare people. People can relate more often to frequencies and descriptive analysis, but when you go beyond that, you lose 90 to 95 percent of the people, of those you have to communicate with. Many are skeptical of research because they do not understand. The audience the policymaker is talking to may not understand. In my experience, people seldom request anything beyond descriptive 'stats'. People are more sophisticated and want to consider anything that has value, but they pick up the technical journals they need to use and get lost.

A second researcher notes, as well:

There is a large potential for research information. We are not realizing it because of budget, crises, and our need to react. We have attuned attention to efficiency, economy. There are too many demands.
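The counts in the two tables above can be cross-checked arithmetically: the column totals of Table 2 should reproduce Table 1, and the percentages are the column totals over the 27 respondents, rounded to one decimal. A minimal sketch (the counts are taken from the tables; the variable names are illustrative, not part of the study):

```python
# Respondent counts by group and position, as reported in Table 2.
counts = {
    "top policy administrators":         {"potentially": 1, "perhaps": 3, "probably_not": 2},
    "unit management":                   {"potentially": 6, "perhaps": 7, "probably_not": 3},
    "researchers/information producers": {"potentially": 5, "perhaps": 0, "probably_not": 0},
}

# Column totals should reproduce Table 1.
totals = {pos: sum(group[pos] for group in counts.values())
          for pos in ("potentially", "perhaps", "probably_not")}
n = sum(totals.values())  # total respondents

# Percentages of the 27 respondents, rounded to one decimal place.
percents = {pos: round(100 * c / n, 1) for pos, c in totals.items()}

print(n)         # 27
print(totals)    # {'potentially': 12, 'perhaps': 10, 'probably_not': 5}
print(percents)  # {'potentially': 44.4, 'perhaps': 37.0, 'probably_not': 18.5}
```

Note that 10/27 computes to 37.0 percent, a useful check on the hand-tallied figures.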
A unit administrator took a slightly different perspective:

Research is useful to me. To me, it can give a broader base to make a decision - to make a biased instead of a prejudiced decision. Many of my decisions also have a moral side to them, and I must keep in mind political impact. It's silly not to consider all the factors.

The above responses reflect a generally positive attitude concerning an inherent usefulness of research information. However, each of these and other responses indicates that, although there may be some normative support among respondents regarding the potential utility of research information, on a more practical level contextual variables and complex concerns seem to lead respondents to less positive perceptions of potential usefulness in the department. The question asked did not refer to specific information and did not call upon respondents to judge the potential utility of specific findings, but asked for a general judgment concerning the potential utility of research information.

Several respondents had mixed feelings regarding research information and its potential utility. A unit administrator pointed out that research information must be viewed as one among many sources of information and concern in a decision:

Research information can be a good source. It's fine, but it can't be the sole source of information for a decision. You can't overlook it, or anything else.

Another unit administrator supports the above statement:

Research has potential. It depends on whether you want to be proactive. We have to consider the legislature - budget battles. We don't know how things will go. We are faced with declining revenues - we know how to manage with plenty and in times of less crisis, but we are not fat. We have many pressures. It is a struggle to make do. We have to respond quickly. Time for research is lacking. We must rely on instincts. That's not to say we are not striving.
[The chief executive] attempts to base decisions on information. Research information is tough to come by and tough to interpret.

A policy administrator provides a slightly different perspective:

Research is useful in particular areas. Most people are busy. We cannot take advantage [of it] in all areas. If something has been tried in another state, it may be something we need to take a look at.

These respondents seemed to adhere to an underlying normative assumption that research information may be inherently of value. Their comments are generally less positive and more strongly reflect practical and individual concerns. One respondent quoted above brings out an important point in noting that research information is perceived as appropriate for some areas in the department but is not perceived as an appropriate input in some other areas. A unit administrator had a similar comment, yet felt that research information could be appropriate for certain uses, but not for others:

Research information has a potential if that's what's wanted to begin with. I mean, if you like the particular results. I want good reasons. Do I want the negatives? No!

This latter participant indicates a personal belief that research may be best used in a persuasive manner.

Several respondents provided comments judged subjectively to be essentially negative concerning the usefulness of research information in the department. In making distinctions based on the transcripts and tapes, it became apparent that the question could have been structured better in an attempt to separate feelings about research usefulness in general from the application of those feelings to the respondents' current work situations.
While the following responses are distinguished as essentially negative, one must consider that these respondents probably incorporate a more pragmatic perspective into their perceptions of the value of research information for their activities and the activities of other members of the department. The characterization of these comments as essentially negative may be arguable, but the distinction was made in part on the basis of each comment taken within the context of an entire discussion. One unit administrator commented:

We have no opportunity to consider research. There is a large body of research information. A lot is not valuable. For most professionals there is not much value in it in answering most questions.

Another unit administrator focuses on constraints:

Usefulness. This is going to sound more negative than intended, but research is a luxury. Research is not just someone wanting to do a project. Need and resource-oriented constraints and personal time are all involved.

A unit administrator centers attention on constraints of a different nature:

Usefulness is constricted to certain areas - say, environment and violence. Research on hostage situations would be extremely useful. In this department, in this state, research on violence and overcrowding would not be used, because it would not be politically palatable.

Two policy administrators look at usefulness in the face of yet another set of constraints. The first set of constraints is reflected in the following:

We need computers and computer time. We have no set format for developing data. We are not set up to use research even if we have the inclination.

This particular comment appears to be more centered on physical resources than on other concerns.
However, the comment parallels concerns expressed by Glaser and later by the Gottfredsons regarding the level of sophistication, or level of rationality, imparted to the decisionmaking process in corrections.18 This notion receives greater attention later in this analysis. At this point, however, it is important to note that this policy-level administrator points out that the usefulness of research information on an ongoing basis may be connected to the effort to orient the information acquisition and decision processes to its use; and, without doing so, its usefulness is diminished.

The second set of constraints is indicated in the comment of the other policy administrator mentioned above. That person observes:

Most of our people are not prepared to take real research and make conclusions. We are making strides, but we meet resistance. People may not like new conclusions. Most of our people are not educated in social science. Research has potential, but what happens when you have people who have a degree but are not insightful? We have a lot of people who are not prepared to talk about these problems; many people who come to work because they need a job. They may not be concerned or able.

Again, a policy administrator seems to connect the usefulness of research information to a lack of technical sophistication (this time referring to the department's human resources) and what might be termed a bureaucratic inertia - a reliance on conservative or comfortable modes of operation. Can members of the department actually understand and use research information? Are members of the department willing to introduce a greater degree of rationality into the decisionmaking process?

This set of responses is essentially negative regarding usefulness and reflects another mixture of beliefs. Two respondents appear to place little value in research information in terms of practical benefit.
The other three participants' concerns are connected not so much with the inherent value of research information as with the contextual constraints they perceive will influence its application. One possible difficulty in analyzing a limited set of data would appear to arise in attempting to "do analysis" (as in "doing social science") without going beyond what the data can bear. Respondents' perceptions concerning the potential utility of research information seem to indicate a generally positive attitude which is tempered, or in some cases has been altered, by their perceptions of constraints arising from the context in which they must operate. Only two respondents expressed direct doubt about the potential utility of research findings in an abstract sense. The perceptions expressed suggest some support of the notions advanced by Lindblom and Cohen - research information must compete with a "mountain of ordinary knowledge" and may only reshape that mountain here and there; and a greater understanding of the concerns and means for addressing the information needs of decisionmakers may lead to consideration of the interactive nature of problem-solving in complex organizations.19 The responses to this question are also suggestive of many findings and conclusions in other studies indicating that the nature of the policy process and the context of decisionmaking in complex organizations may tend to condition decisionmakers' views of the usefulness of research findings.

Is Research Information Used? If So, In What Ways? What Information Is Used?

Respondents were asked to cite instances of use of research findings in reaching a policy decision or in arriving at or making a decision to change a program or policy. Two similar questions were asked. One question asked participants to cite instances of personal use, and the other question asked respondents to cite instances of use by other members of the department of which they had personal knowledge.
Both questions involved attempts by the interviewer to probe for additional information concerning what information was used and how it was used. The interviews elicited retrospective and prospective responses from study participants. Project memoranda also provided some evidence on use and assisted in looking at patterns of use and in examining the dissemination process in the department.

One must emphasize that the total universe of knowledge available for potential use within the department was not known; and, therefore, it is impossible to make absolute statements regarding levels of utilization or to make absolute statements concerning the use of social science research information relative to information of other types. There is an attempt to characterize use on the basis of the diversity of uses encountered in the data, and there is an attempt to provide a judgment of use of research information relative to other information in a general sense.

Consideration of the literature on "use" and the literature on organizations and decisionmaking would lead one to the initial expectation that "use" in a large, bureaucratic organization facing a turbulent environment would be quite low. As this analysis will indicate, the level of "use" in the department can only be judged to be quite low. However, some research information is used in circumscribed ways, and the range of diversity of "use" is at least somewhat surprising.

Initial responses to the requests to cite instances and examples of use were varied, including responses such as "I don't think we use research information at all"; "We do not plan the use of any type of information"; "To my knowledge, we do not use research information"; and the citation of examples of use occurring as indicated in the following comments:

-- Almost every treatment program is in some way based on research someone has done.

-- In our [unit] we do a lot of research.
For example, we are attempting to replicate the study done by [another state] on the MMPI to assist us in classification. We are looking at things like violent incident rates and correlations with various categories of inmates in our population; we're conducting a discriminant analysis using scales developed in other states to try to develop a risk prediction scale. We make this information available to upper-level management. We use a lot of research in what we are doing. I don't know what other people in the department do.

-- We completed a study on [a specific unit]. A copy of this study was sent to various people in the department. Each had to respond concerning the findings. Let me show you ... I don't know whether you can find any direct reference, on paper, to any changes based on this, but at least these people had to respond to it.

Many respondents had difficulty recalling or pointing out examples of personal use or use by others. However, each respondent, even those who stated rather flatly that no use occurred, was able to provide, upon further questioning, at least one example of use. Some respondents provided several examples. One problem with the interview technique seemed to be that the interviewer must probe to elicit what sometimes turn out to be vague recollections or "felt" responses from respondents. There may be difficulty in determining whether respondents are remembering, perhaps reconstructing, instances from somewhere in the distant past, although appropriate questioning and focus seem to assist in negating that problem. Many respondents would provide an example of use in direct response to a question, or perhaps would cite no example in a direct response, and at some point later in the interview would return to the issue to qualify or perhaps provide additional details concerning the initial response.
Instrumental Use

In this study, "instrumental use" is equated with the concept "impact" in that it is reserved to denote documentable incorporation of research findings and/or recommendations in a particular decision or set of decisions. The paucity of evidence indicating the occurrence of instrumental use in the department has already been noted. There were only three clear instances of instrumental use observed.

One instance of instrumental use involved changes to the staffing pattern and changes in the structure and allocation of resources in a major subunit of the department based on findings from an in-house study conducted through the evaluation section. The result was directly observable; however, there was not a simple relationship between the production of the findings and the initiation of the program adjustments. One researcher comments:

We were assigned to do this study. But you have to understand that the decision had already been made. The outcome was obvious. We knew what we were going to find, and that is why we were asked to do what we did. We didn't have time for an adequate study, but it didn't matter anyway.

The program evaluation section had been assigned to complete a "cost-benefit" analysis of the operations of a major subunit and had been given one week to accomplish this task. Instrumental use of the report was documentable. The study findings and recommendations were implemented. However, the context within which the study was completed and within which the decision was made to utilize the results would lead one to categorize the use of the information as "symbolic" use. This study had been initiated as a result of external initiative. The legislature, through its corrections committee, had begun to scrutinize the operations and financial viability of the continued operation of the major subunit involved.
Members of the legislature had, in fact, already begun their own informal study by traveling to nearby states to view similar units and operations. There is no direct evidence as to how this was communicated to the upper-level leadership of the department. The chief executive of the department ordered the study to be completed on a short time horizon. Without more direct evidence regarding the motivation behind the study, it would be inappropriate to speculate about stakeholder interest. At the least, it does appear that organizational interest was involved in the initiation and subsequent use of the study.

It was difficult to get respondents to comment on the study or the changes resulting from the study in question. One administrator did state:

Recommendations, findings from [this study] were made use of in making the change. They [the legislature] already knew what they wanted, so the study didn't decide that. It helped to justify the changes.

A researcher commented:

The (...) change? We did a basic assessment of 'where we are'. It gave credibility to a political decision that had already been made.

Shortly after the completion of this in-house study, changes were made, and a law was enacted formalizing the structural rearrangement and staffing changes required. This was an instance of the instrumental use of in-house research-based information. This instance of use also provided what one would have to judge to be a clear instance of symbolic or persuasive use of research findings. The study itself was not rigorous, and the time available for the study was short. The circumstances surrounding its use necessitated qualification of the categorization of the instance of use as instrumental use.
This instance of use also clearly points out some of the difficulty in categorizing "use" and the need for a technique for investigating utilization which allows one to probe for collateral evidence to assist in understanding and appropriately categorizing instances of utilization encountered.

A second instance of instrumental use involved the direct use of another study conducted in-house by the program evaluation section. The results of this study were incorporated in a decision to initiate a task force within the department to further study and draft a formal inmate grievance procedure. The use of these results appeared to represent an attempt to take a proactive step in the department. The study had indicated that the absence of a formal grievance mechanism was related to problems being experienced at various institutions. This in-house study was also undertaken and used within the context of external pressures. The federal judge scrutinizing the department's operations (the department was under a far-reaching order imposed by the federal court) was thought by some respondents to be turning attention to the area of inmate grievance procedures. It would require more evidence than was available in the present study to determine whether this instance of use was another instance more appropriately termed persuasive or symbolic use. By the evidence that was available, the findings and recommendations of this study were directly used, as shown in a memorandum, to initiate efforts to correct a perceived deficiency in the department. One researcher commented on this study and its results:

This study, I think, was particularly positive in my experience. It is going to lead to some positive changes.

The third instance of instrumental use involved both in-house information and the collection and use of information produced in other states regarding inmates and vocational education.
It was difficult to determine whether the information used fit this study's definition of social science research information, since the information collected was not made available for review. However, discussions with persons connected with the conduct of the study gave sufficient indication that the information utilized could, at least loosely, fit within the framework of social science research information.

Studies on utilization seldom can be expected to find large levels of instrumental use. The decision process in complex organizations is a dynamic process, and, as has been noted, decisions may actually result from many smaller decisions and/or actions and the gradual accretion of a number of earlier decisions and actions involving both the decision processes and information acquisition-to-use processes. The requirement that impact be documentable or observable behavioral change related directly to specific research findings or recommendations for the use to be categorized "instrumental use" almost certainly ensures that levels of instrumental use will be quite low in organizations which are not characterized by a highly rational decisionmaking process.

Conceptual Use

"Conceptual use" in this study included the use of empirical findings and/or recommendations from research studies as a tool for enlightenment in sorting out assumptions, clarifying logic, or arriving at a better understanding of the range of activities and constraints for a particular decision or set of decisions.20 Clearly, most of the instances of use reported and identified in this study may be categorized as conceptual use.

Respondents' reports of conceptual use were often rather cloudy or non-specific, in the sense that respondents often would indicate vaguely that "research" was considered in coming to an understanding of issues confronting them. However, respondents were able to relate relatively few specific instances of conceptual use. Research/information-producing personnel were more often able to point out specific instances of conceptual use by referring to studies completed or underway which involved a preliminary review of literature or collection of research-based information as a foundation for study design. Unit administrators and policy administrators provided more general responses; in some cases they conveyed a notion that the level of even conceptual use in the department would be quite low, and in other cases they indicated a belief that perhaps a greater amount of conceptual use occurs. One policy administrator shared the following belief:

For the most part, I think there is strong consideration of research results. I can name many areas where we look at research of a general nature. The process of using this general-type information is more informal. We rely more heavily on other kinds of information - 'practical information'.

Another policy administrator had a less favorable outlook concerning research information, yet describes a mode of conceptual use:

Sometimes there are so many recommendations in the literature. They tell you too much. You get tired. Research can become word-of-mouth. Something gets translated, becomes part of what someone believes, gets used in this way.

A third policy administrator typifies the response of several participants:

I can't point to anything right now.

Several images were present in participants' descriptions of use which illuminate the ways in which conceptual use may occur. A unit administrator provided the following example:

I can't cite any instances at the moment. We are working on something now - a treatment project that may have some direct effect. The project has been going on for over a year and [the program evaluation section] has been evaluating it. The idea was formed over a period of time; it developed from field supervisors. They were looking at a lot of research and coming to this.
The real research was used in developing the ideas. The program was actually designed by [my superior] based on his own expertise, and on some things that were going on in other states. They were doing some things that we wanted to take a look at, not that we wanted to adopt it entirely, but we didn't want to reinvent the wheel.

This description points out the use of research in developing ideas, perhaps in clarifying logic and arriving at a course of action based on integrating the notions suggested by research with experience and the validation provided by reviewing the actions of others.

Another unit administrator gives a similar description, yet provides a different motivation for conceptual use of research information:

I use the literature for ideas - things I pick up at workshops and at conferences. I use [a nearby university library] and usually NIC or ACA literature. We look at information from other states. It's all geared toward planning for budget.

This administrator seems to be suggesting that conceptual use is linked to organizational interest in the form of providing support for the unit budget request. In the sense that the budget itself may be viewed as a major organizational policy, there is a direct outcome to this conceptual use in the form of a budgetary request. Additionally, this comment suggests that conceptual use, as a category, may subtend a set of persuasive or symbolic outcomes in constructing arguments for budget requests.

In-house research is also used conceptually. A unit administrator notes:

Like this one study that program evaluation did. If nothing else, it enlightened me. I used it to critique some of the things we are doing. I also look at information from other states, get basic information - ideas. Who is to say whether it's valid in our environment. I put these things together, and if it's reasonable, we might try some tests.

A researcher comments, as well:

There was consideration on the study.
Everybody involved had to respond. They all had to consider it. Maybe that will eventually lead to something.

A unit administrator provides still another indication of conceptual use tied to a slightly different perceived outcome:

I can give an example. We look at research on parole. There is a lot of information on this. I can't point to a particular policy we've developed because of one particular piece of it. Much of what we look at is hard to apply. Sometimes we use some recommendations to do 'housecleaning' - to make things easier.

Finally, a unit administrator echoes a previously-noted motivation for conceptual use:

If you get an RBI [reply by indorsement] on a study done [in-house] you are going to consider it. You have to respond!

Many of the responses indicating the occurrence of conceptual use involve general answers rather than specific accounts of conceptual use. There were a number of specific instances mentioned, such as consideration of research information concerning work on a hostage policy; reviews of crowding studies and risk of violence; reviews of risk prediction studies from other states; design work for a pilot study using the NFC inkblot to detect those faking psychosis; review of recent findings on the Rorschach method; design work on education evaluations using results of evaluations done in other states; review of criterion-referenced testing, as opposed to standardized testing, for education evaluation of inmates; review of study reports on the New Mexico prison riots in looking for possible policy needs; and a number of other specific instances.

The focus of this study is not quantitative; therefore, there is a greater concern with the prevalence of conceptual use rather than the incidence of conceptual use. All respondents made some conceptual use of some research product at some time. Regular use, or reliance on research information as a prominent source for information, enlightenment, or ideas, would have to be termed limited.
If one considers the specific examples cited, the pattern of regular use begins to emerge. Certain subunit sections were more likely to be found to consider research information on a regular basis, and the persons at or near operational program levels were more likely to be the persons making the greatest use of research information as a source of enlightenment, ideas, or clarification of logic. Particularly in sections or subunits concerned with classification, mental health, design of juvenile treatment programs, education programs, or in sections devoted to research and production of information for decisionmaking, there appeared a likelihood that conceptual use would occur on a rather regular basis.

At the unit administrator level, except in the subunits mentioned above, and particularly at the policy administrator level, although instances of conceptual use were noted, it appeared that the likelihood of conceptual use occurring on a regular basis or research information assuming a prominent role as a source of ideas or enlightenment was greatly diminished. At these levels, where conceptual use was cited and discussed, it appeared that this use was undertaken primarily for symbolic or persuasive purposes, such as supporting recommendations or budget requests.

Symbolic or Persuasive Use

"Symbolic use" is defined in this study as the "utilization of research findings to substantiate a previously held position, marshal support, or cast doubts on propositions which are at odds with those held by the user, among other such possibilities."21 Many, though not all, respondents provided descriptions of use of research information which could best be described as symbolic or persuasive use. Several respondents reported that their primary use of research findings involved supporting budget requests, use as justification in grant or funding requests, or to strengthen policy recommendations already formulated.
During the study, the major reorganization of the subunit discussed in the section on instrumental use provided perhaps the clearest example of the use of research information in a persuasive or symbolic manner.

A number of symbolic uses of differing natures were described by respondents. A unit administrator described the following use:

I am working now on a policy 'blurb' which I hope to see happen as a policy change. I would only submit it if it were backed up by research data. It's that kind of area. Depending on the policy issue I am addressing, I try to back up a change with research data. You don't just charge off like a 'bull in a china shop'.

This comment may be interpreted in different ways. For example, one may say that this administrator attempts to use research information as a substantive base for policy decision. However, further discussion with the administrator clarified his position in making the comment. It was clear that the research data being used was used to legitimize the request rather than as a primary source of information in formulating the idea for the policy change. The idea for the change was based on experience, and the research information was collected afterward as support.

Another unit administrator describes a similar instance:

Here's an example: We [this unit] wanted to go for funding to initiate a program using TM [transcendental meditation] as a primary method [in a treatment program]. We got together maybe forty to fifty studies to back up this request. It was defeated, though. One person [in the decisionmaking group] who didn't understand TM, thought it was something else, stopped the entire thing. We use research mostly in our budget requests; to get funding - as a justification.

Yet another unit administrator summarized the statements made by many participants:

We'll recommend what we feel - but that won't be without as much back-up as we can get.
Finally, a policy administrator provided the following comment concerning the requirement to conduct program evaluation in the department:

The program evaluation requirement is used as a management tool - to get budget.

It appeared that symbolic or persuasive use constituted the primary type of use made of research information at the unit administrator and policy administrator levels within the department. Although conceptual use occurs at these levels, descriptions of use by participants at these levels tended to reflect an ultimate outcome to be associated with symbolic or persuasive use of some sort.

One is led to consider the nature of the decision process at these levels and to return to the thinking of Allison's "bureaucratic politics paradigm" for analyzing decision and action in complex organizations.22 Administrators at these levels act as "buffers" for subunits and/or the organization as a whole. They must compete for scarce resources and must enter into bargaining games in which self-interest and organizational interest are at stake. It appears that, on some issues, research information may be seen as a bargaining advantage or, perhaps more often, as a source of basic support for requests and recommendations. It does not seem surprising that personnel at these levels primarily use research information symbolically or persuasively.

One must note again that reported use by participants tended to be general rather than specific in most cases. One must suggest that the extent of symbolic use may be rather pervasive, yet the incidence of even this type of use is rather low. In other words, even considering symbolic use, it does not appear that research information finds much use, except in circumscribed ways - on particular issues or in certain situations. Research information is not the primary source of information used for symbolic or persuasive purposes in the department.

Who Uses Research Information? What Is the Scope of Use?
The data indicate that research information, of whatever sort, occupies a relatively minor place as an input to decisionmaking/policymaking in the department. Very little evidence of use at the policy administrator level was encountered during the study. Even respondents' accounts of conceptual use or symbolic use at this level indicated relatively sparse use. Research information becomes somewhat more important, and evidence of use is rather more plentiful (especially conceptual use), at the unit administrator level. Especially in particular subunits or sections, such as those concerned with mental health programs and classification and diagnostic testing, there is abundant evidence of use.

However, use varies greatly across units, with some elements of the department evidencing only minimal consideration for research information, according it little importance as a policy or program decision input. Certain individuals within the organization tended to report use to a much greater degree than others, primarily, it seemed, based on the type of work position (in terms of programmatic responsibility) occupied. Research/information-producing personnel, as might be expected, provided the greatest amount of evidence concerning use. These persons tended to describe a large amount of conceptual use, particularly in using research information/research products to clarify issues in designing their own research or as support in or as background information in formulating replies to requests for information from decisionmakers.

Research information receives limited attention in the department. Certain individuals and groups are apparently more disposed to its use. Even information produced at some cost to the department, through in-house investigations, results in little instrumental use; though, if certain mechanisms (such as mandated responses) are employed, the findings will result in what could be considered conceptual use.
One question asked participants to point out areas in which research findings had proved to be particularly useful in formulating policy or in making program decisions. Respondents were also asked to discuss areas where research findings could be particularly useful and to point out any areas where research information would be ignored.

There was some agreement among respondents that the areas where the greatest amount of use had probably taken place, and certainly where the greatest amount of use could be expected, were at the program level in the department. Small program sections or independent units, such as those involved with mental health, classification, and treatment intervention, were seen as probably the most consistent users of research information and as the areas where research information could have the greatest application. One unit administrator describes:

Well, our major responsibility is the inmate, so information on inmates, for classification, is probably most used. We need to develop a large base [of information] and be able to computerize it. Findings on risk prediction and violence proneness would be the most helpful, probably. Cost-benefit analyses would be helpful. Job and task analyses would be useful.

A policy administrator enumerated the following:

Research on categorizing offenders would be the most useful. Probably we use more in classification and risk prediction. I'm sure we could put research on violence and risk prediction to use.

A researcher echoed the above statement:

We need good data on risk prediction. Risk factors. We need data for release decisions. Also program development: for example, serious and violent offenders - how to describe them and how to define 'violent offender'. Administrators need to know what kind of program intervention has the best chance.

A unit administrator provides a useful summary of most comments:

Research is going to be most useful in program areas - almost any treatment program.
From my level on up, almost everything is geared for planning for budget. If we had more time, maybe it would be different. Here we live from day-to-day, and there are value choices.

Respondents were inclined to see less likelihood of use occurring at the unit administrator or policy administrator level than at the operational program level in the department, especially outside the specific decision areas mentioned in this section. There was an eclectic array of possibilities mentioned for use of research information. Again, one must keep in mind that individual perceptions of "research" varied and some respondents included as "research" some activities and knowledge which did not fit the intent of this study.

A policy administrator addressed the issue of "scope of use" in the following way:

It depends on the issue. At the operational level, the decision is going to be based about sixty percent on experience. The other forty percent depends on reviewing policies and regulations. You have to understand the necessity of following laws and rules. We conduct crisis planning here. We need a body of knowledge - a statistical base to draw on. If we had that we might use it. Managers are not acclimated to research. I don't know many researchers who are top-line managers. Empirical data is boring to legitimate power brokers.

A unit administrator made this statement:

Ninety percent of my decisions are based on the financial statement. The biggest help could come from feasibility studies, assessments of costs and benefits. To my knowledge, we do not do inmate follow-up studies, or that sort of thing. I'm not sure that we'd want to. Cost is always a major factor. I have no problem with research doing what it does, but security comes first, and always, cost.

The previous two comments reflect concerns expressed by high-level administrators all through this study: research information is "O.K."
in some ways; but, unless it addresses practical issues of immediate concern to the individual in terms of his/her perception of organizational interest, it will be discounted. At the upper levels of the department, administrators see potential for research information in a categorical sense. They see the "real" concerns of cost or budget and crisis, as well as the need to exert power to control those concerns, as preeminent. That is not surprising, and the statements offered give explicit clues as to the potential role for research information in a "world" where analytic problem-solving is merely another ideology or perhaps an "ivory-tower" assumption.

In addressing the questions examined in this section, some respondents evidenced a decidedly pessimistic perspective regarding the potential for research information, given their perceptions of the appropriate practices to be undertaken in the department and the perceived crisis situation. Some respondents were rather straightforward in describing attitudes or operational realities they perceived would preclude the use of research information, even though they believed research information could be helpful in certain areas. A unit administrator offered the following:

We need to ask about success with offenders. I've never been asked to initiate any research on success with this program. We address issues as asked. It is a luxury to do things properly. We could use this kind of information to look at the process, but 'product' is just an exercise. Improvement takes money. Some areas are probably using research: small sections like medical, mental health, classification.

A researcher further discusses priorities:

We need more evaluation. There was little research [done in-house] before the evaluation unit - about two years ago. The primary emphasis has been on building additional facilities. It has been a matter of doing what the federal court has told us to do. We need better computer arrangements.
We need to do more in the area of impact [summative assessment] studies.

A unit administrator asks the following:

If there's no implementation, research information is for naught, right? That's the way I perceive things to be. We have good, educated people, with keen minds throughout the department. In most areas, why shouldn't we rely on them? That is what we do.

It is obvious that the extent of utilization of research information as a basis for decisionmaking in the department is quite small. It is also obvious that respondents perceive research information to be valuable and applicable in circumscribed ways in a few issue areas and that the potential for research information is seen primarily as being limited to these areas. The use of research information, furthermore, appears to be a matter of individual determination rather than collective choice in the department, and most respondents were pessimistic about its potential for affecting policy or program decision in the department.

One unit administrator responded to the questions examined in this section with a comment that must be included in this narrative. The comment seemed to characterize the feeling given the author by many respondents in their thinking about research use and the areas where research information might be helpful:

That question [in what areas might research be particularly useful?] would be a more appropriate question to ask a legislator, or the federal judge. You have to remember that statistics can prove anything but the truth. 'Well, what do you want to show?', these folks will ask you. I ask you, 'What does it take for credibility?'.

Focusing on Factors Which Appear To Be Important in Use/Non-use in the Department

Reviewing the literature on "use", one would not have expected to find that research information occupies a prominent role in the decisionmaking process in the department, and that was the situation encountered.
One is left to present the remainder of the data from the study and to explore the overall questions: "Why this relatively small level of use?"; "What factors appear to be important in use/non-use in the department?"; and "Would a general prescription to routinize use of empirical findings for policymaking/decisionmaking be viable for this department?".

The department is faced with a turbulent environment. As has been noted, the department has been operating under a far-reaching federal court order for several years. Scrutiny by the federal judge has extended to most aspects of department operation. The state legislature has extended its efforts to scrutinize department operations and has mandated [although it has not enforced the requirement] that all state departments conduct program evaluation as an assessment mechanism. The department was, at the time of this study, also confronted by an anticipated change in the office of the governor and corresponding position changes at the upper levels of management in the department.

Members of the department expressed feelings of pressure stemming from environmental influences and seemed to feel great pressure to provide pragmatic, viable solutions to immediate difficulties regarding overcrowding and department operations which had been criticized by those one would call "significant others". There appeared to be only limited attention in the state to address the problems facing the department as systemic problems involving all components of the criminal justice system, and the result was to make it appear to members of the department that the department was the focal point, perhaps unduly, for excessive external scrutiny. As has already been pointed out, members of the department perceived the organization to be operating on a short time horizon, with little opportunity to conduct comprehensive planning efforts.
All respondents characterized the management climate in ways which may best be described by using the term "crisis management".

Search and Research Information

Participants were asked, "When you have a policy problem, or are faced with a policy issue or program decision, where do you look for information?" Not surprisingly, policy administrators and unit administrators evidenced a strong preference to rely on the expertise and experience of the organization's members as the primary source of information for problem-solving. Research/information-producing personnel were more likely to turn to other sources.

The sources of information for problem-solving identified by the respondents fall into six categories: 1) staff (expertise/experience residing within the abilities and range of knowledge of individuals within the department); 2) standards and laws (state and federal laws governing the operation of the system; standards promoted by national bodies such as the American Correctional Association and others); 3) other systems (direct observation of practices/procedures in use in other states and the federal corrections system); 4) research information produced in-house; 5) research information produced outside the system and disseminated or in some way made available to members of the organization; and 6) consultant review or invited research conducted by researchers from outside the department.

Respondents were not asked to rank responses in terms of preference in the above categories. Rather, the categories were identified from responses. In future studies, it may be advisable to start with a categorization and ask for a ranking, since this would probably give a clearer indication of respondents' perceptions of the importance of various information sources. It is possible, however, to draw a narrative picture of respondents' preferences regarding sources of information for problem-solving.
Among policy administrators, interviewees indicated that they typically turn first to staff members for information on a policy or program problem. Only one respondent indicated another priority, stating that standards and laws were first reviewed and staff were then consulted. As one policy administrator noted:

We hire people for their expertise. We expect them to be able to assist us in problem-solving.

A second policy administrator explained:

I always staff the problem. I expect the staff to use whatever resources are available - whether that means journals, the library, civil service regulations, other states, get in touch with universities - whatever. After that they give me a summary. I write a decision and distribute it for consideration. That usually generates a certain amount of response. Then I act on it.

One would expect this to be a realistic perspective for top management. None of the policy administrators indicated a preference to turn to empirical information as the primary step in searching for information on a policy or program problem. Another policy administrator discusses:

I prefer statistics to back up a decision. My people give me a capsule version and their recommendations. I look at what is necessary. You have to have a feel for the politics of it. I seldom look at an entire study.

In general, the policy administrators turn to staff to provide information. As a group, policy administrators could not be said to have relegated empirical findings to a position of unimportance, but in terms of their search activities, one must note that these administrators prefer to receive information which has already been assimilated and evaluated in terms of its importance by persons whom the administrators trust to bring their experience and expertise to bear in determining its relevance and importance.
The administrators seemed to prefer recommendations based on the experience and expertise of staff; and where possible, they seemed to be primarily interested in the logical substantiation of these recommendations, based on whatever information would be relevant. One policy administrator explained:

Because of the situation we're now in, we depend a lot on the legal section. The potential for research to be influential is limited.

Another policy administrator addressed the issue of inviting consultants or outside researchers to provide information:

There are sometimes problems with outsiders. It depends on their credibility, and you know that consultants cost. Usually we are going to invite this kind of review if we need validation. Generally, we already know what needs to be done, but sometimes we need the credibility that an outside source can provide - especially if someone else, like legislators, is going to be involved.

Unit administrators did not differ greatly in their reported preferences regarding search and sources of information. Fifteen of sixteen unit administrators indicated that they look first to information already existing in the form of expertise and experience of staff members. One unit administrator indicated a preference to review empirical information before proceeding with further search. The place of standards and laws appeared to assume importance among unit administrators. In what seemed a pragmatic approach, most unit administrators noted that standards and laws, in particular, were equally as important as expertise or experience as a source of information for decision. A unit administrator offered the following statement:

There are constraints of law in practically everything we do. No matter what someone wants to do, we have to know whether or not we are legally able to do it. I turn first to staff to draw on their knowledge, then to the law, and after that to whatever seems necessary.
Another unit administrator's comments summarize the responses of this group regarding search:

Expertise and experience is [sic] number one. It depends on the nature of the problem. Whatever is available, I'm not going to make a decision based on bad information. Sometimes I'm backed up against the wall. Usually I have time to make a decision. Usually I don't have to make an immediate response. You need good information, because somebody above you is going to make a decision based on what you say, and you're going to have to live with it. Research does not often address the issue.

Research/information-producing personnel evidenced a search pattern different from the administrators. Again, one is not surprised. Due to the nature of the tasks required of this group, one would expect to find a greater reliance on information sources other than staff members' experience and expertise. Although one might expect a preference for initiating search with reviews of empirical information, one must observe that this preference is at times shaped by the availability of empirical information relevant to particular areas or issues. One researcher comments:

I look for information anywhere I can find it. That might mean publications in criminal justice, statistics from the Bureau of Justice, NIC, NCJRS, technical journals, the library, or going to the files to dig out basic information we need. You have to take into account what kind of information you need to provide.

The issue of reliance on research information and patterns of search is perhaps best addressed by respondents' comments in answer to the following question: "Are other forms of information put to greater/equal/less use than research findings in the development of policy in the department?" As by now would be obvious, all respondents indicated that other forms of information were more widely used than research information.
The following comments are particularly illuminating in considering patterns of search and any reliance on research information. Researchers comment:

-- I don't think we use research. I may be biased. We use historical data. We use aggregate data: looking at today, last week, last year - strictly numbers. I think these numbers have a stronghold on the formulation of policy and on making decisions in general.

-- The need for policy takes priority over the need to review research. The kinds of information the decisionmakers look to are not necessarily what you would call 'research'. Definitely the need for policy takes priority over the need for evaluation.

Unit administrators comment:

-- I don't think we use research at all. If there is any used, it's probably something we developed ourselves.

-- We have no staff to really do anything with research. There is a wealth of data available - no one has anything to do with it. We need time and someone with the expertise to tell us how to use research.

-- Research, even reviewing research, is a luxury - whatever the source. Needs and resource-oriented constraints and the personal time factor make it so.

-- Decisions are political. To have the greatest impact for research information, you have to learn the secret: learn how to market it in a political environment - to influence decisionmakers to change their behaviors.

-- Expertise is used more than any other source of information for policy and other decisions.

-- Whatever information supports the budget, we use that and go that way.

-- We use the chain of command and 'faith'.

-- We use observation and experience. Experience is the common denominator - tap those staff members who are knowledgeable and extract as much information as possible. There are the people who are opinion leaders - who are looked to because they know what is going on. Suppose we have information [as in a current case] - research done by the federal government.
We get an idea, test it against the experience we already have. In the end logic prevails. We do use an interactive process. Often the real decisions come down to a political issue - internally or externally.

Finally, one policy administrator stated definitively:

We use the resources at hand: experience, expertise, and the legislature.

Some respondents, given the orientation of different questions, tended to evidence an attitude which could lead one to draw an inference that they would be amenable to a greater role for research information as an input to decision. However, it seemed that when presented questions oriented to eliciting a response based on experience with research information or perceptions of utilization in the department, respondents, perhaps pragmatically, seemed inclined to recognize only a minor role for research information. One is reminded of the difference in attempting to determine levels of citizen satisfaction with law enforcement by asking general questions as opposed to asking specific questions regarding personal encounters with law enforcement officers. General questions tend to elicit more favorable responses which may be founded on referents rooted in perceptions of the value of law enforcement, whereas specific questions tend to elicit more negative responses based on the respondent's ability to analyze discrete actions related to a particular context.23 Asking questions concerning the use of research information in the department appeared to be analogous to asking specific questions regarding citizen satisfaction with law enforcement. The study participants seemed to be far less positive when their answers referred to the specific situations in which they worked in the department.
Dissemination, Flow of Information, and Research Information

One of the primary objectives in this study was to attempt to determine whether dissemination processes in the department were in any way related to the use/non-use of research information. In his study of CNS information, Rich found "selectivity" in the internal dissemination of research information to be a major factor in the use/non-use of research information.24 Realistically, who wants to receive what information in what form, and the institutionalization of processes to provide information, are central to an understanding of the role to be played by certain types and sources of information available to members of the organization. These dissemination processes are subject to a number of factors related to the development of information policies promoted within an organization.

Respondents were asked to describe the flow of information within the department in general and were then asked to describe the flow of research information within the department. In the department, there was no formal plan regarding the flow of information. Decisions regarding the dissemination of information of any sort are made on an ad hoc or informal basis. Responses from study participants ranged from "I know of no dissemination plan" to more particularistic accounts of the distribution of certain information on the basis of "need to know". One policy administrator commented:

Dissemination is informal. Unfortunately, we have no dissemination plan. Now, I personally c/c [send carbon copies to] relevant people. That's the way I handle it.

A unit administrator stated:

There is no information dissemination plan that I'm privy to. I see information if it comes to my unit. Usually, it's a question. I disseminate information [to my unit]. I don't think anyone in headquarters is interested unless information addresses a 'critical' issue. Maybe it's the same for all of us.
Another unit administrator issues a stronger reply:

There is no dissemination plan. No forethought. We have a problem with time and leadership. Our leadership does not acknowledge the utility of a formal flow of information in doing business. Much of the information that we may need to see is viewed as [unnecessary].

A researcher described the dissemination of information in the department:

Information comes to me in so many ways. You have to know everyone; their assignments. The [chief executive] knows how he wants things done. You have to use 'area of expertise'. Court orders, regulations - it all goes somewhere. There is no central point. It is within the individual's discretion to disseminate information.

Research information is received by members of the organization at many points. There is no central point at which research information (or any other type of information) is collected, evaluated, assimilated on the basis of its relevance, and either stored or distributed. Research information produced within the organization, as in reports developed through the program evaluation projects, is disseminated on the basis of "need-to-know", and may have the greatest chance of widespread distribution. Its consideration is at least demanded through a process of requiring designated persons to reply in writing to these reports. Some of the replies were made available for review in this study. The replies point to at least some interest (whatever the motivation or anticipated outcome) from top-level leadership in distributing research information and in receiving a reaction to that information. These reports must be viewed as special cases, however; and one must keep in mind that evaluation information is developed at some cost to the department and is generated in response to questions initiated by upper-level leadership in the department.
Regarding dissemination of research information within the department, respondents in general pointed out the lack of any formal plan for channeling information to those who might benefit from it. One must recall the previous research note on enacted information space. In the department, members of subunits and sections, and persons at the upper levels of administration, clearly attend selectively to information on the basis of particular unit or individual interest. This tendency results in a haphazard distribution of information affecting both availability and consideration of research information. Some respondents did describe informal mechanisms for distributing research information on a selective basis. One policy administrator describes:

    I get lots of mail, journals. I forward most of that to [the research/information producing personnel]. They work closely together; swap the things I can send them. I attach a memo and ask, 'Is this something you can use? Do I need to know about it? Is it worth anything?' They look into it, and frequently it is; and sometimes it's just 'baloney' someone is trying to pass off.

The above comment indicates the tendency among upper-level administrators to give limited attention to information which reaches them on a routine basis. Administrators at the policy level have the opportunity to channel such information to staff, as described above, for comment on possibilities. This comment also indicates, obliquely, the reliance on trusted subordinates to digest information and render a judgment on it, thus affecting the "authoritativeness" of a given piece of information and leading to issues concerning information policy and selectivity in the vertical transmission of information. These issues are addressed in subsequent sections of this discussion. Problem-solving is factored in the department. The situation (as in most complex organizations) reinforces the selective attention to information.
In the department there is no central coordination of dissemination, and the decision to disseminate most forms of research information is a matter of individual discretion. Major subunits and independent sections follow informal processes for distributing information within the units, including the dissemination of research information which may be available to various members of the particular units. Even at these levels, the dissemination of research information is haphazard. A policy administrator discusses the potential problem:

    Flow? There are some real problems. Each [unit administrator] maintains his own flow pattern; decides who needs to know. Maybe not today, maybe tomorrow, someone gets certain information. We have no well-established policies on dissemination. People sometimes 'find out' by accident. This is a big difficulty.

A unit administrator points out:

    Research information is not regularly disseminated within our unit. We receive a lot, but unless it has some special meaning at the time, we just collect it - put it away. For a while we had things ordered; now I really don't know where most of it is. If there's anything interesting I see, I send it on to those who need it.

Another unit administrator describes the dissemination process:

    I send information to people I feel need it. It's a gut-level decision. I do have staff meetings and sometimes I will refer to research information at that time.

Finally, a unit administrator summarizes:

    There is no formal plan. Informal is understood. This takes place on an individual, selective basis. It is left to the discretion of each individual to disseminate information. We have a limited number of personnel - nobody to coordinate this.

This organization is structured along functional lines, and to a large extent problem recognition and many aspects of problem-solving are factored out along these lines. Major unit administrators influence dissemination within their units to differing degrees.
Dissemination across units is restricted by the informal process. In the case of evaluation information, and other in-house produced information, dissemination is based on a decision by the top level of management. Dissemination of routinely-encountered research information is entirely informal. Information acquisition is based on individual initiative, except in the case of directed problemistic search. Individuals within units may be on clearinghouse mailing lists, and many acquire research information through journal subscriptions, attendance at conferences and seminars, or in other ways. They may acquire information pursuant to specific requests, usually initiated to address a specific problem (problemistic search). No evidence was found to indicate an attempt in the department to coordinate information acquisition, assimilation, storage, or use on a systematic basis.

Farace, Monge, and Russell have termed a situation analogous to the one encountered in the department the "woodwork" theory of distribution - "That which is needed by organization members to function adequately resides in the walls, machines, and atmosphere surrounding them; and by some natural [though mysterious] process, it will be made known to the relevant employees at the relevant times."25 The department appears to follow the "woodwork" theory. To this one must add that the reliance on expertise and experience residing in individuals, the primary source of knowledge, reinforces the tendency to forgo any attempt at rationalizing the information dissemination processes.

Attention to research information as a potential input for problem-solving in the department is minimal. It is apparent that respondents operate on a predominant assumption that the experience and expertise of organization members is sufficient in most instances to address program and policy problems.
Beyond that, search takes a "problemistic" orientation, and, where the information known to individuals is not sufficient, a problem-oriented search may occur. The extent of search is limited by time and resource constraints. One must think that the department has a "distributed-information problem" and must suggest that this problem may even affect problem-specific search: in some cases, sufficient additional information may already be in the possession of department staff, yet may be unknown to those undertaking the task of acquiring problem-specific information.

The dissemination processes in the department in no way resemble the formal structures one would expect to find to support a causal learning process in a complex organization. The processes do resemble the characteristics one would associate with a model of "behavioral learning" in a complex organization. According to Rich, "An analysis of the dissemination of information is, of course, not an analysis of utilization."26 An understanding of the acquisition of research information in a particular organization is, however, quite important in arriving at an understanding of the potential for research information as a component of the overall problem-solving strategy and the basic learning mechanisms characteristic of the organization.

In order to gain a slightly different perspective on where research information might find application, and in what ways its acquisition and dissemination might be promoted, the following question was asked: "Are research findings more likely to be used in developing general systemwide policy? for administrative engineering or developing internal procedures? for program review and change? Or, are there other, more likely uses?" Respondents' perceptions varied, but there was general agreement on one point.
Research information is not likely to be used unless it can be put to specific application in specialized areas such as classification, mental health, or other operational areas - a dominant theme in the characterization of "use" by respondents in addressing all the questions posed. One must remember, though, that circumscribed uses, such as support for budget or other requests and other symbolic uses, have been noted in responses to some questions as receiving attention. As a policy administrator characterizes "use":

    The little we do use is for general policy management decision.

Unit administrators comment:

    -- It's hard to find time to plug it [research information] in anywhere. We have day-to-day decisions. Some special areas base a lot of what they do on research. But, those are special areas.

    -- Perhaps for general policy. It depends on the information and its implications. There is little use anywhere.

    -- The most use you would find anywhere is informal use. It's up to the people in charge to take it or leave it. I doubt seriously whether research has forced any issue in this department. Let me say this, too; there are a lot of people here who wouldn't look at a research report or information they know was connected with a research project, but they would listen to certain people. If the right people told them the same thing without saying where it came from, they might decide to use that information. We have a lot of people like that.

Additionally, researchers/information producers stated:

    -- You may find people who are working in programs use research because of the things they are doing. At the department level, you may find things are a little more political. For example, [the chief executive] reads what is passed along to him by [the executive assistant].

    -- This is a difficult question to answer. Every unit uses research information. They use it every year for budget. Information of this sort [is used] to show need.
    -- There is always a 'felt need' for changes first. It is obvious, ahead of time, when there is going to be some change. Then you get assigned to do a study or review some other research. You understand?

These responses further support the finding of minimal use in the department while supporting the concurrent finding that, among relatively small groups of users in "special areas", the incidence of use is relatively high and use of research information of various types occurs on a rather regular basis.

One also finds support for the notion that utilization within the department is not simply conditioned by a general feeling of indifference concerning the potential utility of research information. Rather, there appear to be implicit information policies within the department regarding research information, which have definite implications in terms of perceptions of its value; dissemination (concerning both form and content); and types of issues, as well as policy/program areas, where the application of research information is deemed relevant. Research information is apparently viewed as more appropriate for operational-level decisions in what are termed "treatment" programs; in circumscribed ways, as support for budget and for some major recommendations; and in its recognized role as symbolic support. Furthermore, there is some indication that the value of certain research information may be dependent on the credibility not merely of the original producers, but of the particular department members passing it along, as well.

"Selectivity" and "Bureaucratization" in the Dissemination Process

Leaders at the upper levels of the decisionmaking hierarchy in complex organizations cannot be expected to assimilate all items of information which are acquired and distributed throughout an organization.
Upper-level managers must in some way indicate the nature and amounts of information they either must or prefer to see, regarding both ongoing matters and matters of immediate interest. Informational needs or interests may be expressed through formal or informal means. It is reasonable to expect that unique contextual demands, as well as individual preferences, will condition leaders' preferences for information of various sorts. The presence or absence of explicit or implicit information policies, either formal or informal, would be indicative of leadership commitment to utilization of certain types of information and the means of its provision.

In this study "selectivity" refers to the dissemination of research information regarding what is sent to whom, in what form, and whether there is any pattern of discriminate perception concerning the transmission of information from one level of the hierarchy to the next. "Bureaucratization" refers to the extent to which perceived rewards and incentives, or other organizational interests, affect the dissemination process and therefore directly affect the use process. Issues of control of information and control of channels of information or influence networks are also relevant to the consideration of bureaucratization of the dissemination processes. These concepts are important in that the overall learning process in an organization is reflected in and characterized by both formal and informal rules governing the transmission or communication of information. To be considered, research information must reach decisionmakers.

The department has no formal policy governing the communication or dissemination of information other than that described by the formal structure of authority and the inherent relationships which flow from the depiction of authority relationships.
Even the formal organizational charts made available for review failed to depict several important positions, and, certainly, these charts gave little indication of the extent of reliance on formal channels in the communication of information. Naturally, these charts gave no insight regarding informal networks or implicit information policies.

This discussion of "selectivity" and "bureaucratization" may arguably be termed impressionistic. The discussion is, therefore, presented as a research note to emphasize its suggestive nature.

RESEARCH NOTE: "Selectivity" and "Bureaucratization"

One must differentiate the dissemination of in-house produced research information from that of routinely-encountered social science research information. Regarding routinely-encountered research information, it is apparent that there is little systematic interest or commitment from upper-level leadership in influencing or coordinating the acquisition or dissemination processes. The "woodwork" theory appears to apply; no specific rules govern the acquisition or transmission of such information. Rather, it is left to the discretion of individuals to acquire, interpret, and communicate this information. This pattern of operation is indicative of the incremental, behavioral model of organizational learning and the satisficing model of decisionmaking. Attention to problems or specific interest areas is factored among subunits. It is left to the members of each subunit to "enact" an information "space" and to direct selective attention to information that will be deemed valuable.

It is clear that respondents selectively disseminate routinely-encountered research information based on individual evaluation of its relevance and on the basis of perceptions of its value to various members of the department of which an individual has knowledge. It is apparent that respondents perceive a set of circumscribed uses for this information.
One must also infer that there exists an informal, yet pervasive, notion among respondents that only research information which is perceived to be relevant to an immediate interest should receive any degree of attention in terms of its dissemination. The process is haphazard, and there is a general notion among respondents that taking time to review research information of a general nature is an unwarranted luxury. In this process little of the research information encountered finds its way to higher levels of decisionmaking, except as it may be communicated verbally by respected staff members.

The data collected in this study are insufficient to show that personal, vested interest or perceptions of organizational interest are directly related to selective dissemination of routinely-encountered research information. Flows of this type of information were difficult to track, other than in a few instances where individuals related specifics regarding their efforts to direct research articles to persons known to them to be interested. One is led to ask whether the overall view of routinely-encountered social science information (described by one policy administrator as a "frivolous encumbrance") does not in itself constitute an implicit policy regarding the dissemination and consideration of this information. Impressionistically, one would answer that it does, especially when considering the "crisis" orientation of the department and pressures to attend to matters of "immediate" interest. The lack of leadership commitment to acquisition, dissemination, and consideration of routinely-encountered social science research information, except for certain circumscribed purposes, seems to be widely recognized by department members and may be interpreted by them as an informal signal to attend to other matters.

The dissemination of in-house evaluation reports is not haphazard.
As one researcher described, "If we do a report, it goes on to [the chief executive] and then there's a determination as to who sees it. There is no regular basis." The phrase "need-to-know" was used by respondents to describe the basis of dissemination of in-house research products. The determination of who needs to know is made at the upper levels of the department.

There appeared to be little formal, a priori thought concerning the dissemination of in-house research products - little planning for utilization - although it may be obvious, because of the focus of various projects, who would ultimately receive copies of the reports. The process points to an informal, yet explicit, policy of control of the information resulting from in-house projects.

Evaluation reports are disseminated as a full report of findings and recommendations with an executive summary attached. The procedure involves disseminating the reports to designated individuals with a request for a reply to the chief executive. This procedure appears to ensure that at least the designated individuals must consider and react to the reports. Replies to one evaluation report were made available for review in this study. The responses indicated that consideration of the reports had taken place. They also indicated some dissatisfaction with the evaluation study, in that replies directed criticism at 1) questions asked in the study and their formulation; 2) conduct of the data collection; 3) measures used in the study; and, consequently, 4) study results and recommendations. Concerning the set of responses made available, one researcher commented:

    You have to consider that there are adversary relationships within the department.

A unit administrator described an informal communication of evaluation results, and reactions to the information:

    There is word of mouth transmission of research that gets used. There is some defensiveness about evaluation research.
    It loses credibility when you have the feeling that it is promoted to do somebody in. The research in this department has not been formative. Many [unit administrators] are concerned with boundary maintenance. If there's nothing positive across sections you begin to question utility/credibility. Hidden agenda discourage the circulation of some information.

Another researcher stated the following:

    People react to evaluation with some apprehension. It's like sending out 'eyes and ears'. Their reaction may be antagonistic. Any utilization may be 'antagonistic utilization'.

These comments indicate support for Rich's findings that organizational interests are important in understanding utilization.27 Evaluation products and use of this information are considered separately in a subsequent section. At this point, one is concerned with whether the dissemination of in-house reports is affected by bureaucratic interests.

Three major evaluation projects had been completed in the two-year period prior to this study. These were the major studies produced in-house at that time. Additional studies were conducted in-house; however, those studies either served narrow purposes, did not fall within the scope of this study, or were basic research actually conducted for export rather than for general internal use. None of the data developed in this study can support a definitive finding that the dissemination and subsequent use of the evaluation products was directly a result of organizational interests. There is, however, evidence to indicate that organizational interests play a part in the dissemination of such information and in the perceptions of utility of in-house research, as well as in the use/non-use of these products. One cannot extend the data beyond their suggestive nature.
However, one is led to suggest that in future or similar studies an intensive attempt to investigate the importance of organizational interests in the dissemination, and in the control of the dissemination, of in-house research would provide a fruitful avenue of inquiry in understanding use/non-use. In this study one cannot substantiate, yet cannot ignore, the implications of respondent perceptions that in-house findings meet a situation where, as one administrator put it, "There are 'numerous hands' involved, not always in agreement."

"Research" refers to a wide range of investigatory activities in the department. When one looks at the dissemination of information and considers the possibility of informal information policies existing, greater clarity is obtained through looking at in-house evaluation information than at routinely-encountered information. Empirical findings receive little attention and usually occupy a minor role in decisionmaking. Research/information producing personnel tend to provide the clearest image of the existence of implicit information policies in the department. As one researcher states:

    The decisionmakers want information that is descriptive - graphs with a narrative that is limited to non-technical language. They want to know: how many? what do they look like? probabilities of releases next month; very basic stuff. To gain acceptance, research information would have to 'slap you in the face'.

When in-house products are disseminated formally, the anticipated outcome is symbolic. "We knew what we were going to find"; "You have to understand that there are adversary relationships in the department"; "Hidden agenda discourage use"; and other similar comments suggest that organizational interests play a large part in the formulation of in-house research, in the dissemination of in-house research information, and in the encouragement to use or ignore in-house research.
The case for "selectivity" and "bureaucratization" of dissemination processes is incomplete, but there is sufficient indication that these areas deserve further study - more rigorously designed, more thoughtfully conducted, and more amply measured. This author is, at least impressionistically, certain that bureaucratic interests in the department play a large role in the dissemination and utilization of the small amount of research information that is used, as well as in the non-use of research information.

Uncertainty and Risk Avoidance

A basic assumption concerning information in a causal learning process is that the value of "information" corresponds to the "reduction of uncertainty". In the context of use/non-use in a highly rational, analytic decisionmaking process, one would expect information, especially research information, to find use, since all information encountered would be at least evaluated and considered in terms of its value in validating or adjusting causal models concerning the organization and its operations with respect to the environment.

In a complex, bureaucratic organization where incremental behavioral adaptation is the mode of learning, "uncertainty" takes on a different meaning. The more abstract notion of "uncertainty", as gaps in knowledge related to a causal model, is in the latter case expressed in terms of the level of risk associated with an uncertain appraisal of the environment and an uncertain future. Assessments of the value of "information", in this instance, turn not on its utility in expanding a causal model, but on a heuristic assessment of the level of risk associated with its use or with its non-use. "Information" may be seen as either increasing or decreasing the risk in decisions where the environment is turbulent and there is difficulty in estimating the consequences of pursuing various options.
One of the primary assumptions in the behavioral learning model is that a strategy of "risk avoidance" will be pursued in an organization. This strategy relies on an experiential basis for learning and on the notion of pursuing options which promise the least possible deviation from previously-tried courses of action judged to be acceptable in the organization - a strategy Lindblom aptly termed "muddling through".28

It is not only apparent, but almost painfully patent, that the department is confronted with a turbulent environment. Members of the department express their concerns with pressures perceived to be externally induced and internally manifest as "crisis". "Crisis" seems to have become a "covering-law explanation" for a reactive management posture in the department.

Some of the effects of this situation on the use of research information have already been discussed. One of the effects described by respondents involves the control of information in an effort to avoid risk. A policy administrator most vividly described the effect of risk and control of information in the department:

    Our biggest problem is, 'Do we want to develop certain information?' If we are not prepared to handle it - deal with the results - do we even want to develop it? If we can't handle what they want to tell us, I don't even want to generate it right now (...), because if you do there is a tremendous push from the outside, from legislators, from do-gooders, from the media, saying, 'You mean you were aware of this all the time and you didn't do anything about it?' We'll back-burner things. That's a terrible thing to admit, but it's true. We have limited resources. If a federal judge says, 'You will do this', then we'll do that. We've got state laws and regulations we've got to respond to. Then we can get to these other things.

The issue of information control is further addressed by another policy administrator:

    All you have to do is have one disgruntled employee.
    The information all becomes a matter of public record. If I don't like you or you don't like me, all you have to do is call this reporter and say, 'Hey, go check on this.' This has happened. Frankly, I don't want some of our employees to have the information they request. They are not responsible enough to know how to utilize the information, and you know as well as I do the attention you get in the media. People do not know some of the information that does exist, and we can't make it available to them (...).

The same policy administrator continues:

    We have people who just don't have the background. They need to come to points where they can understand the implications and use research information. That stuff, if not worded properly, and used properly, can really be misconstrued. You see that a lot more in corrections than anywhere else.

Some respondents also provided comments which were related to a perception of risk avoidance, or, perhaps more accurately, to a description of a pattern of incremental, behavioral adaptation in the department. These comments must be considered within the overall context of uncertainty and crisis - the predominant view respondents held of the department and its relationships to significant others in the environment. One unit administrator states:

    I would not say that politics at the upper level inhibits the use of research. But, we are made painfully aware that we live in a real world and that there are some real obstacles out there - painfully aware at this level. At the upper level, we are a buffer, and we are the first contact for other [outside] inputs.

Another unit administrator elaborates:

    Many [of the department's personnel] are survivors. Don't even change just one thing. [If you do] where will it end? You may not know the impact, but you know it will have some impact. Don't do it.

A third unit administrator summarizes:

    Sometimes you just can't take a chance. Public attitudes prevail over empirical data.
Again, given the level and nature of the data collected during this study, the inferences drawn concerning the control of information and "risk avoidance" can only be termed "impressionistic" or highly suggestive. It does appear that the perceptions of respondents related to operating in an uncertain environment, taken together with the specific comments by some respondents regarding the avoidance of risk, support the notion that these could be fruitful areas of inquiry for further study. One cannot substantiate the effect of these particular issues on use/non-use of research information in this study. One does not wish to create an "imaginary sentiment" regarding organizational interests and use.

Respondents' Perceptions of Factors Associated With Use/Non-use of Research Information

The interview schedule for this study was designed by adapting items from instruments used in previous studies by Rich and by McNamara, and by creating several items unique to this study.29 One set of questions was drawn directly from the instrument used by Rich. In one question, respondents were asked to reply as to whether each of ten possible factors presented to them was of no importance, some importance, or great importance in understanding whether or not research information is used for policy purposes or program decisions in the department. Each of the factors presented was a direct quotation from the literature on use, representing a statement concerning use.30 Table 3 presents the ten selected factors along with aggregated responses.

Eliciting responses to this item proved to be rather cumbersome. Respondents seemed to prefer not to rate the factors along the dimension suggested; rather, they often preferred to answer through a qualifying statement. Some respondents added a fourth category ("no opinion") in replying to the factors. Several respondents preferred not to answer a number of items in any way and were not included in the data presented.
Three interviews were terminated with the preceding item due to time constraints of the respondents and are not included. The total number of respondents to this item was 21.

[TABLE 3: RESPONDENTS' PERCEPTIONS OF IMPORTANCE OF SUGGESTED FACTORS (N = 21); table body not recoverable from the scanned original.]

Three factors clearly emerge as being of "great importance" to respondents in understanding use/non-use of research information in the department:

-- item 5: the information is on a timely topic of interest and need;

-- item 8: the information is presented in a manner that is understandable;

-- item 10: the information does not challenge the budget or staff allocations of the agency.
Responses to two other items reflect a preponderance of opinion toward being viewed as "of great importance" in understanding use/non-use:

-- item 2: the information comes directly to a decisionmaker through a trusted staff aide;
-- item 4: the information was produced in-house, as opposed to being provided by an outside source.

Only one factor was judged to be of "no importance" in understanding utilization in the department:

-- item 6: the information is "unique"; it is available through no other sources.

Several items were judged to be of "some importance" in understanding use/non-use in the department:

-- item 1: the information supports the policy position decisionmakers are predisposed toward;
-- item 3: the information has not moved up through the decisionmaking hierarchy, but has come in laterally;
-- item 7: the objectivity of the producer of the information is unquestionable;
-- item 9: the information does not challenge or contradict a position already taken by a decisionmaker.

That research information, to be used, must be on a timely topic of interest and need was of great importance in the judgment of respondents. If one considered this to be merely a logical conclusion concerning research information and use in general, one would miss the explanation given by respondents themselves and apparent throughout this study. Respondents again stated that most decisions in the department are crisis-oriented - to be made within short deadlines, with search limited to the problem at hand. One respondent summarized the general consensus:

You have priorities. Interest and everything else is secondary to need.

The implication is clear. If research information is available at the time a decision is to be made, it may be used. However, little attention will be directed to a specific effort to acquire research information for most decisions.
The judgment of participants provides support for findings common to most studies on use - information must be timely and relevant. It must be emphasized, though, that being "timely" and being "relevant" have referents rooted deeply in the organizational context, the management climate, and the perception, selection, and dissemination processes within the organization. One is led to think that there is support in this study for Rich's contention that certain necessary preconditions, such as timeliness, may be met, and that whether or not use follows is more dependent on administrative or organizational variables related to organizational interests.31

"Understandability" was judged to be of great importance as a factor in understanding use/non-use in the department. Participants emphasized "simplicity;" "down-to-earth" language and recommendations; and "non-technical, condensed and readable" versions of study reports. If one returns to the data presented on other questions, this response pattern is reinforced. As has been noted, one is led to suggest that an implicit policy regarding communication of research information exists within the department, in that members perceive that the more descriptive and non-technical the format and the information, the more well-received it will be by the decisionmakers. Such an implicit policy affects the dissemination and use of research information.

Study participants judged that whether research information challenged budget or staff allocations would be of great importance in understanding its use/non-use. It is by now clear that contextual variables contributing to the perceived crisis situation greatly affect all aspects of department operation. One respondent stated:

We have no flexibility; no latitude. Dollars and cents equal program.

Said another way, "budget defines policy." However, there are other implications present. Another respondent explained:

Talk dollars, take precedence.
But it also depends on the mood and personalities of administrators. They are conscious of manpower and budget, and they may also be protecting the thin edge we hold. If research information would result in lowering either manpower or dollars, I would say that would be extremely important in whether or not it gets used. It probably would not be used.

One is led again to consider the discussion on "risk avoidance" and control of information. While this analysis cannot in any way suggest significant support, one must consider that uncertainty and organizational interests, in the form of a perceived need to preserve budget and manpower resources against losses, must receive attention in attempting to understand the use/non-use of research information.

The two other items indicative of organizational, more particularly "bureaucratic," interests as perceived important factors are items 2 and 4. Respondents more often thought that communication of research information through a trusted staff aide was of great importance, and they thought this factor was at least of some importance, in understanding use/non-use. One policy administrator stated, "I prefer recommendations from reliable sources." A unit administrator explained:

Reliability is the key. It depends on the subject matter, but if you find the 'right ear' [to transmit information], then you get this 'halo effect.'

It is natural to expect that decisionmakers place faith in reliable staff members. Indeed, delegation of authority demands such an attitude. As Rich states,

Policy makers consciously try to minimize risk in relation to their own positions and the positions of the organization.
They seek to rely on an aide who understands their position and who will measure considerations of risk before the information is submitted.32

The response to the above-mentioned item provides an indication that respondents perceive that communication of information through the "right," "trusted" aide is important in getting the attention of a decisionmaker. Considering also the data on selectivity and risk avoidance, one is led to suggest that enough support is present in this study to indicate that dissemination or communication of research information through trusted aides or reliable channels deserves attention in future work on utilization.

Respondents judged that whether or not information is produced in-house is at least of some importance, and more often of great importance, in understanding use/non-use. In qualifying their answers, participants focused on the crisis situation; the cost associated with wasting internal research; and the notion that internally produced information is more likely to be applicable; as well as on "bad experiences" with outside information. One unit administrator commented on the importance of outside information:

It depends on your objectives. Outside information on some issues may be more authoritative. If you are not making headway, you may have to get someone to help sell an idea.

It is interesting that respondents generally accorded only some importance to information that comes in laterally, as opposed to information that proceeds up through the hierarchy, in understanding use/non-use. Most respondents noted that the "specific issue" would be of more importance than whether the information came in laterally or proceeded up through the hierarchy. It is also interesting that respondents accorded only some importance to the objectivity of the producer as a factor in understanding use/non-use.
Qualifying statements ranged from "the objectivity of the producer is understood" to statements such as "objectivity does not assure correctness;" "it is always possible to have false documentation;" and "validation of the information by more than one source is more important than the objectivity of the producer." As one might expect, research/information-producing personnel placed greater emphasis on objectivity of the producer than did the other respondents. One researcher explained:

There must be objectivity. Subjectivity does creep in. It depends on who is doing the study, and this might become an important question [in assessing the information].

Whether or not research information supports a predisposed policy position, and whether research information does or does not challenge a position already taken by a decisionmaker, were judged to be of some importance by respondents. Support for a predisposed policy position received more emphasis from respondents in their qualifying statements than did contradiction of a stand already taken. Comments by respondents generally indicated that whether or not either of these factors would be important depends on 1) the individual decisionmaker and his/her attitude toward the issue at stake and 2) the issue. One policy administrator offered the following explanation:

You have to look at what is being used by each party [in a decision], no matter what stand you take. You can't allow yourself to be embarrassed, maybe in front of the legislature, by somebody else showing some information that makes you look like you didn't do your homework.

There is again some indication that organizational interests can be important in understanding use. The issue and the attitudes of decisionmakers on certain issues are viewed as being important.
Embarrassment may be a personal difficulty for the decisionmaker; however, one is led to think more of the risk involved in being embarrassed on an issue in the sense of the position security of the individual and the risk posed to the organization when an administrator "loses" in a decision. One must keep in mind that, especially where external scrutiny is involved, powerful interests may clash, and decisionmakers must review the information to be presented by all sides and at least consider possible uses of any information available to interested parties, in addition to the possibility of questions being raised on information the decisionmakers may themselves utilize. Such a situation may influence levels of conceptual use and the use, perhaps non-use, of information.

The responses to the item on factors were not analyzed quantitatively. The number of respondents was small, and this item was included in the interview schedule primarily to add to the attempt to identify important issues for similar future studies - to attempt to determine whether certain areas of inquiry might prove to be fruitful. The factors were presented as directional statements in order to elicit qualifying statements, as well as to obtain an indication of the importance perceived by participants. The judgments of the respondents lend support to the notion that variables associated with organizational interest and "bureaucratization" deserve further consideration and research attention in attempting to understand use/non-use. These are judgments by respondents in one organization and reflect their thoughts about their own organization. Even with a large sample across organizations, causal referents would be rather difficult to identify beyond a suggestive level, given the current state of knowledge on use. The complexity of variables associated with decisionmaking, policymaking, and utilization creates problems which must at first be addressed by cumulative evidence.
Even though only suggestive, these findings provide support for a contention that to understand use, more research must be oriented to identifying and checking propositions associated with organizational interests and "bureaucratization" within an overall framework of "organizational learning."

Respondents' Perceptions of Particular Constraints to Utilization in the Department

Respondents were asked to identify any particular constraints to the use of research information for decisionmaking in the department. Perceptions varied widely, ranging from one unit administrator's statement, "There are no constraints; the [chief executive] really supports it," to another unit administrator's statement,

Hidden agendas discourage it. I don't think our leadership is interested. If they were, they'd be embarking on a new way of doing things. We use research information only when it's absolutely necessary.

Overwhelmingly, and by now not unexpectedly, respondents indicated that the primary constraints to use of research information in the department relate to budget and crisis. In addition to this explanation, the attitude of upper-level leadership was most often mentioned as a constraint to utilization in the department. One unit administrator noted:

Attitude is a constraint. If you don't have managers' understanding, that is a constraint. The top management needs to sanction the use of research. We have little support.

Personal capacity or ability was another constraint mentioned. A policy administrator discussed this issue:

A lot of people achieve 'education' who are not insightful. They are not prepared to think about these problems or use research. We have many who come to work because they need a job. They may not be concerned or able.

A unit administrator stated further:

Personal capacity is a constraint: identifying, absorbing, and using information. The same information to five managers has five different meanings.
The [chief executive's] approach is to encourage everyone to look at how business is done - this means empirical information in some cases. But we spend so much time keeping up, putting out fires, that we can't do it.

A variety of other constraints were noted by respondents. One particularly relevant assessment was provided by a researcher:

We are not set up to use research information, even if we have the inclination. We have no set format for developing data, and we need computer time.

This assessment was echoed by another researcher:

We have no centralized statistical, information, or research procedures. Program evaluation is conducted on an ad hoc basis, and the data is collected based on the study. All sections do not keep statistics or data based on a notion of program evaluation. We are not developing data on a daily basis which lends itself to answering evaluative questions.

The latter two comments are particularly important. The learning process within the department is oriented to the incremental, behavioral model of adaptation and not to the systematic, rational, integrated processes associated with causal learning. This orientation exemplifies Lindblom's and Cohen's explication of "interactive learning" - i.e.,
policymakers or decisionmakers may choose to attack problems by some combination of interactive decisionmaking supplemented by analysis adapted to the problem at hand.33

Glaser and the Gottfredsons directly address the issue of designing a rational, ongoing process of developing empirical information and institutionalizing this process for at least operational-level decisionmaking.34 Wildavsky depicts the ideal type of "self-evaluating" organization in a similar vein, but extended to all levels of decisionmaking.35 Without a well-integrated, continuing, and comprehensive design for developing, assimilating, disseminating, and utilizing empirical information in an analytic decision process, expectations for utilization and a prominent role for research information must be minimal.

In addition to asking respondents to identify particular constraints to the use of research information, the author, in a follow-up question, asked for comments on what might be done to get decisionmakers more interested in research information as an input for policy and program decisions. Participants' comments clustered around three elements they perceived to be key in increasing interest in utilization: 1) greater commitment by upper-level leadership; 2) greater availability and accessibility of research information; and 3) building confidence in research information as a reliable source of information for decisionmaking.

Regarding leadership commitment, one researcher stated simply, "We need more emphasis from the top." A policy administrator echoed that statement: "Leadership from the top on down must want it." Another policy administrator reiterated the notion: "Key people have to strongly want to use research information."
One unit administrator expressed a perceived need to institutionalize the use of research in the following way:

[Adopt] a policy to require statistics and research to support recommendations and decisions - something like a budget narrative.

Such a suggestion would of course require the prior commitment of upper-level leadership. Yet suggestions for gaining commitment from leaders were limited; few of the respondents seemed to connect leadership commitment with a different style of operation for the department. The suggestions were rather tautological: gain greater commitment by leadership and gain a higher level of utilization. How? One must show greater benefits from using research, and then leadership will be more interested. In actuality, respondents seemed to place gaining greater leadership commitment in the realm of taking a "leap of faith." However, respondents did not seem to be assured that many of the department's leaders would be willing to take the leap.

Several respondents pointed to a need to increase the availability and accessibility of research information, assuming, correctly, that without availability and accessibility, little use would occur. The suggestions were again based on a normative notion that the presence of research information would lead to use - another leap of faith. As one researcher explained,

Increase availability. We need accessibility to research information. People know it is there and would use it. We need a clearinghouse - a reference library, even a newsletter [in the department]. Something to get us thinking about it. We need to bring in more researchers. Let the policymakers interact with them; see what they are like, what they can do; get some interest in allowing research and then thinking about using the information.

A unit administrator concurred:

We need availability. We need search time - as much as it takes to do a thorough job. We need to establish a track record in using research.
Basing decisions on research is a truly radical departure. We need it, but it is an unusual style [for decisionmaking] to try to objectify, to be rational.

Another unit administrator indicated:

There needs to be recognition. We need to develop a resource pool. We need to look at the assets we have as a state system; bring the people with the abilities together on a statewide basis to share resources; bring the universities and state agencies together. We need this concept: demands, crisis, empirical data/body of knowledge.

Clearly, respondents were being asked on short notice to address a set of complex issues. Several had no comment. One unit administrator stated rather flatly, "I don't think it can be done."

The comments presented above indicate that respondents note the need for structuring a rational, analytic process for decisionmaking. The elements they collectively mention would lead in that direction, but the problem is clear: leadership in the organization, those with the power and capacity to change a traditional mode of operation, must have the energy and inclination to do so; and even if they have the desire, internal and external constraints to such a change are monolithic.

One policy administrator discussed building confidence in research information as an input to decisionmaking:

First of all, you have to have the capacity to generate reliable information. I'm not sure that the people [we have] believe that research is that reliable. We need to do some confidence-building. We have people who just don't have the background. We need in-service education to bring them to the point where they can use research information. You see, [this problem] is widespread in corrections. We have a lot of management personnel who came up in the security-oriented system. They are not used to using these sorts of tools. We would have to develop them and then expand it to where we are more use oriented.
Again, although most respondents seemed to adhere to the normative notion that research information could be a substantially useful input for decisionmaking, most seemed to think that practical problems would be significant obstacles to a move to a more analytic mode of decisionmaking. Many respondents did appear to think that the role for research information in the department could be broadened from the very minimal place it occupied; however, one must gather that respondents pragmatically held out little hope for a substantial change in "ways of doing business."

Mandated Evaluation

A state law requires that program evaluation be conducted by every executive department in the state. As previously noted, the law had been in effect for approximately one year prior to the study, although this department had maintained a program evaluation section for about two years. All respondents were aware that the department had a program evaluation section, yet roughly sixty percent of the respondents did not know of the state law and its provisions.

Those who did comment on the effect of the law were ambivalent about its benefit to the department. As one unit administrator commented:

There has been no recognizable effect for the department, but there is tremendous potential.

A policy administrator noted:

I see no real changes because of [the law]. It has kept a section [in the department] working.

A unit administrator referred to a perceived negative effect of the law, and a negative perception of evaluation in general:

Many perceive the main effect to be 'management by evaluation.' This causes some negative reaction, but there is also a lot of scrutiny in other ways. Maybe we need to give it a chance.

Yet another unit administrator reinforced this perception:

It [program evaluation] is looked at as a monitoring device. People are concerned with their areas and particularly their budgets. They see this as tied to the purse strings.
These comments echo comments previously presented which describe program evaluation in the department as a "management tool," used primarily to "get budget," and the description of program evaluation as perceived to be "sending out eyes and ears." There appeared to be little strong support for evaluation as a mechanism for continual learning and adaptation in the department. The only instrumental use of evaluation findings, as previously noted, could better be understood in terms of symbolic use, to legitimate a decision already made.

The program evaluation requirement is not clarified as to the objectives or the role to be played by evaluation in decisionmaking in the affected state agencies. None of the evaluation studies conducted in the department prior to this study could be considered "summative" or "impact" assessments. The studies were "process" studies initiated to answer particular questions of interest to top-level management. The greatest use of these studies was conceptual use fostered through the internal requirement that designated persons reply in writing to the chief executive concerning their reactions to the studies. Prior planning for utilization of the evaluation products was in large measure absent, except that it appeared the studies were initiated with symbolic purposes in mind, and there was attention to directing the studies to designated persons.

The prevalent image among respondents concerning in-house program evaluation seemed to be that its usefulness was confined to purposes such as "administrative engineering" and symbolic use to garner resources. The notion previously discussed, that in-house evaluation elicits negative reactions from persons engaged in "adversary relationships" within the department, seemed to summarize the overall view taken by respondents, other than researchers.
One would have to think that evaluation research is initiated in the department when it seems necessary to provide some form of analysis which is adapted to an interactive problem-solving process, and the primary use of results may be a symbolic use in assisting to legitimize a decision.

The state requirement appeared to be achieving one desired result, namely that of directing attention to in-house research for assessment purposes. However, in fostering interest in an effort aimed at routinizing evaluation as a basis for decisionmaking, the requirement appeared to have little effect. The requirement in itself has apparently done little to orient problem-solving in the department to a more rational, analytic approach.

Perhaps further study regarding the effect of requiring evaluation in the department would be a fruitful focus for future research. At the time this study was conducted, there was little way to determine whether the requirement had a substantial effect, positive or negative, or no effect at all, on the problem-solving process in the department.

"Nothing Works"

In recent years, an ideology has developed concerning research and practice in corrections which has been manifest in a predominant view that "nothing works" in attempting to achieve societal purposes for rehabilitation of offenders. Respondents were asked whether such a notion has any effect on utilization in the department. One researcher provided an interesting explanation:

No [there is no effect]. Corrections may or may not be working fine. Records and data are not working fine. That's what Martinson meant anyway - that we can't show anything because we don't develop the right kind of information.

A second researcher took a different view:

People look at that type attitude and they perceive research information to be less useful, less credible. If there are disagreements on issues within the social sciences, people will back away from using the information, from taking a stand one way or the other.
Evaluation research has a sound basis depending on whether you have a goal that's measurable. If there are sound bases for evaluation research, people will say, 'Well, you're going to evaluate me; who's going to evaluate you?'

A unit administrator stated further:

We can ask about rehabilitation, but we don't know what we mean. Recidivism? We can make ourselves look good or really bad depending on how we approach it. I feel recidivism is particularly frustrating. I see some good studies, but how can we compare our state with other states? I think research information is particularly [discounted] on recidivism - and on any other controversial areas.

One policy administrator provided a very negative view:

There is an effect from 'nothing works', which is that most social science research is seen as 'bullshit.'

The previous responses are representative of the range of responses elicited by this question. There is little basis for asserting a strong effect on utilization in the department. One must think that there is some thought given to the inadequacy of research in answering questions confronting the department and corrections in general. It would seem that this issue deserves further attention in research on utilization in corrections and criminal justice; however, it would appear that factors related to organizational learning and organizational interests would provide more fruitful areas of inquiry and more adequate bases for examining utilization than would be provided by solely focusing on this issue.

END-NOTES: ANALYSIS

1. Carol Weiss, cited in Richard B. Grosskin, "Toward the Integration of Evaluation in Criminal Justice Policy: Constructing Alternative Interpretational Models of the Evaluation Utilization Process," unpublished paper, April 1981, pp. 9-14.

2. Robert F. Rich, Social Science Information and Public Policy Making, San Francisco, Ca.: Jossey-Bass Publishers, 1981.
3. Charles Ostrem, notes in a seminar, "Program Evaluation," presented at Michigan State University, Spring 1980.

4. Charles E. Lindblom, and David K. Cohen, Usable Knowledge, New Haven, Ct.: Yale University Press, 1979.

5. Richard V. Farace, P.R. Monge, and H.M. Russell, Communicating and Organizing, Reading, Ma.: Addison-Wesley Publishing Co., 1977, p. 27.

6. Grosskin, "Toward the Integration of Evaluation in Criminal Justice."

7. Farace, Monge, and Russell, Communicating and Organizing, p. 26.

8. Ibid.

9. Ibid., p. 35.

10. Ibid., pp. 35-36.

11. Ibid., p. 36.

12. Lindblom and Cohen, Usable Knowledge.

13. Rich, Social Science Information and Public Policy Making.

14. Ibid., p. 112.

15. Ibid.

16. See pp. 7-8 above.

17. Nathan Caplan, and others, The Use of Social Science Knowledge at the National Level, Ann Arbor, Mi.: Institute for Social Research, 1975.

18. Daniel Glaser, Routinizing Evaluation: Getting Feedback on the Effectiveness of Crime and Delinquency Programs, Rockville, Md.: NIMH, 1973; for a more recent discussion on similar topics one should also see Michael Gottfredson, and Don Gottfredson, Decisionmaking in Criminal Justice: Toward the Rational Exercise of Discretion, Cambridge, Ma.: Ballinger, 1980.

19. Lindblom, and Cohen, Usable Knowledge.

20. See p. 8 above.

21. See p. 8 above.

22. Graham Allison, Essence of Decision, Boston, Ma.: Little, Brown, and Co., 1971.

23. Richard R. Bennett, and Robert S. Corrigan, "Police Occupational Solidarity: Probing a Determinant in Deterioration of Police/Citizen Reactions," Journal of Criminal Justice, 8 (1980): 111-122.

24. Rich, Social Science Information and Public Policy Making.

25. Farace, Monge, and Russell, Communicating and Organizing, p. 27.

26. Rich, Social Science Information and Public Policy Making, p. 139.

27. Ibid.

28. Charles E. Lindblom, "The Science of Muddling Through," Public Administration Review, 19 (Spring 1959): 79-88.

29. Rich, Social Science Information and Public Policy Making.
Items were also adapted from an instrument developed in a study currently being conducted by John R. McNamara, Michigan State University. This study involves similar research on use of social science research information by administrators in law enforcement agencies.

30. Rich, Social Science Information and Public Policy Making, p. 128.

31. Ibid.

32. Ibid., p. 130.

33. Lindblom, and Cohen, Usable Knowledge.

34. Glaser, Routinizing Evaluation; and also see Gottfredson, and Gottfredson, Decisionmaking in Criminal Justice.

35. Aaron Wildavsky, "The Self-Evaluating Organization," in J.M. Shafritz, and A.C. Hyde, eds., Classics of Public Administration, Oak Park, Il.: Moore Publishing Co., 1978, pp. 412-427.

CONCLUSIONS

The purposes of this final section are to place the analysis in perspective and draw meaningful conclusions. This study developed answers to a set of basic questions regarding levels and types of use of social science research information in a corrections setting. The study differs from many on utilization in attempting to address concerns related to the role and potential for research information by deriving relevant foci from the literature on organizations and decisionmaking in complex organizations. The notions concerning rational, analytic decisionmaking and research use, and "satisficing" and research use, served to provide a conceptual framework for this study.

One must now draw conclusions from the findings presented in the study and seek some determination of their relevance to consideration of the role and potential for research information in the department. Several basic findings regarding research utilization in the department are readily apparent. The role ascribed to social science research information in the department is quite limited. Utilization does occur in certain circumscribed ways. Instrumental use is very infrequent.
Conceptual and symbolic uses occur rather more often; yet, in comparing use of research information to reliance on and use of other types of information, one must conclude that use of research information is minimal. The primary source of information for policy purposes in the department is the substantive knowledge provided by individual members of the department. Expertise and experience form the predominant basis for decisionmaking in the department.

As Weiss and Bucuvalas point out, "The circumstances under which social science research enters the decisionmaking domain are more complex, ambiguous, and elaborated than most previous observers have perceived."1 Evidence in this study indicates that simplistic explanations regarding production and transfer processes incorporating the two-cultures perspective, and explanations which are based on simple assumptions regarding the normative belief that availability of research information would lead to use, provide insufficient focus for addressing use/non-use in the department. For most respondents in this study, although practically all espouse the normative belief that research information could be inherently useful, there is only limited practical value in its use as an input to decisionmaking. For the respondents, there is generally no "dilemma" associated with non-use, or with reliance on other forms of information as the primary inputs for decisionmaking.

The main issues with respect to utilization in the department concern the orientation of the knowledge-inquiry system and the orientations of those engaged in the policymaking/decisionmaking process. The study developed a good deal of evidence pointing to the necessity of gaining fundamental insights into the orientation of the "learning" and decisionmaking processes and resultant patterns of use/non-use. Several important aspects are evident.
Behavioral Learning and Interactive Problem-Solving

The department provided a prime example of a complex organization where "learning" is behaviorally oriented and the decisionmaking process may be categorized as a "satisficing" process. Respondents made no pretense of following the rational, analytic paradigm or even of attempting to institute mechanisms to approach such a decisionmaking scheme. The study results plainly indicate that expertise is the primary source of information for decisionmaking, and the decisionmaking process is one developed over a long period of time, in which habit, custom, or routine relegate rational analysis to a position of relatively minor importance. Top policymakers in the department gave little indication that rational analysis would be important for decisionmaking unless the appearance of rationality (in the limited sense of buttressing a position or a request with "objective" information) would assist in persuading a particular audience. There was no indication that comprehensive analysis contributes to outcome assessment in the department. Concern with established routine and "safe", incremental adaptation was evident in the pattern of responses. "Success" for the department apparently would depend on preservation in the face of what members perceived to be an uncertain environment.

This study focused on utilization and was not intended to judge the department against a yardstick of "rationality." One must conclude, however, that understanding research utilization (or the lack of it) in the department requires an understanding that the role and potential for research information and its utilization are limited by the orientation of the decision system and the corresponding logic and mechanisms involving the acquisition, dissemination, and use of research information.

Factored Problem-Solving

The department is structured along functional lines and problem-solving is factored among subunits.
The main significance of this study may lie in the indication that use of research is affected by this arrangement. Subunit members in the department are free to enact an "information space" based on selective perception of information deemed, on whatever basis, to be relevant to task accomplishment and resolution of immediate or anticipated problems. The greatest use of research information occurs in subunit sections at operational levels where a professional "treatment" orientation appears to validate the appropriateness or relevance of research information for certain purposes to department members. In sections devoted to information production and to research, utilization was also relatively high. Again, the specific tasks and orientations of the subunits seem to validate the relevancy of research information (to members of the department) for certain purposes. In most subunits, and at higher decisionmaking levels in the department, the role for research information appeared to be quite limited.

Study results suggest that symbolic use of research information in major program and policy decisions receives the consensual validation of respondents. In other words, use of research information to support budget requests and other funding requests, and to "sell an idea," among other potential persuasive uses, appeared to be viewed by respondents as the most appropriate roles for research information. Some conceptual use was noted at the "unit management" level in the department; however, little use of research information for strictly conceptual purposes was noted at the policy administrator level. Policy administrators all described their main concerns in arriving at decisions in ways one would categorize as "interactive problem-solving." Use of research information as a basis for decision in the process appeared to be minimal at best.
Indeed, little attention is directed by persons at the policy administrator level to utilization of research information for policy or major program decision. Rather, most problems are to be addressed by the "experts" in each subunit ("that's what we hired them to do"). Where a problem appears to originally concern those at the policy administrator level, or when a problem is sufficiently pressing to make its way from the unit management level to the policy administrator level, "interaction," together with consideration of whatever informational input "seems necessary," constitutes the primary mode of problem resolution. It is apparent that problems are addressed sequentially at the upper levels in the department and that the rather limited role for research information which is evidenced at the unit management level diminishes further at the policy administrator level. One is led to suggest that the parceling out of problems and a hierarchical ordering of problems affect utilization in the department.

The focus of this study concerned the upper levels of decision in the department, and the sample did not include personnel at the lower, operational levels. Still, there is sufficient evidence to suggest that members of the department, when they do accord research information any practical potential, find that research information is more appropriate for rather limited, "scientific" problems, and these are most likely to be encountered at lower, operational levels in certain subunits. At higher levels, where intra-organizational interests, external interests, and questions of value become more prominent, it is apparent that the role for research information diminishes (from little to even less) and changes from possible instrumental use to a limited place for conceptual use and a rather more important role (for the respondents) in various persuasive purposes.
"Problemistic Search" and Information Policies The notion "problemistic search" aptly describes the search activites in the department. The crisis orientation (described earlier as constituting a "covering-law explanation" for a reactive posture in the department) serves to reinforce the perceptions of the depart- - ment's members of the necessity for addressing problems of an immediate nature on a sequential basis. Respondents perceived that resource and time constraints related to the crisis situation in most instances precluded attempts to use research information as a basis for decision. Review of research was perceived to be, in most instances, a "luxury" or, as some stated, a "frivolous waste" of time better applied to other pursuits. Conducting in-house research was perceived to be a way to complement problem-specific interactions aimed at resolution of immediate problem situations. There was little indication in the data from this study to suggest that even in-house research was perceived as being oriented to the institutionalization of an evolutionary policy cycle. Except in special subunit sections (such as those devoted to research/information production), the search pattern in the department involved turning to staff as the primary source of information for decision, next to laws and standards, then to reviews of practices in 207 other states, and finally, to whatever other information "seems neces- sary.” Thus, research information is sought when "it seems necessary" and when perceptions of time and resource constraints allow. The refer- ents for "seeming necessary" are varied. Study findings indicate that these referents depend on the issue, decisionmakers' attitudes toward the current issue, and apparently on decisionmakers' perceptions of the need to support an interactive problemrsolving process~with ”objective” data. However, one must keep in mind that "research" connotes a wide range of activities and products to members of the department. 
Implicit information policies appeared to reinforce the problem-specific orientation of search in the department. The department maintained no explicit information policy; some respondents indicated that, actually, there was little forethought about information needs and no planning on a systemwide basis devoted to the department's overall information-gathering needs. As a result, the organization could best be described as having a "distributed-information" problem; there was no central coordination to the flow of information in the department and no coordinated policy or guidelines regarding the acquisition, assessment, dissemination, or utilization of information of whatever sort.

The relative freedom of subunit administrators to both attune themselves selectively to various forms of information, and to either organize or not organize the dissemination of information, patently affected the patterns of attention to and use or non-use of research information. There appeared to be an informal perception among respondents regarding research information and its appropriateness for policy and program decision. This is reflected especially in the low levels of instrumental and conceptual use and the relatively higher levels of persuasive use noted. Consequently, the "woodwork theory" of information appears to appropriately describe the lack of coherence within the department concerning the acquisition and distribution of information. Staff members informally decide, on an individual basis, whether or not to acquire and disseminate research information, except in the few instances where in-house research is disseminated on the basis of "need-to-know." Dissemination of in-house research information is controlled from the chief executive's office, and this, in fact, constitutes the only formal mechanism regarding the flow of research information.
The absence of formal information policies and mechanisms for coordinating the acquisition and flow of information underscores the fragmented, incremental approach to problem-solving in the department. Research utilization is directly affected by the ad hoc, informal approach to information needs and by the absence of central coordination of the processes of the knowledge-inquiry system in the department. One must keep in mind, however, that the decisionmakers in the department consistently prefer to rely on the expertise of staff members, and that there appeared to be no established decision requirements dependent on a research-oriented, information-sensitive approach.

"Uncertainty", "Risk Avoidance", and Organizational Interest

A link between uncertainty, risk avoidance, and use/non-use of research information was suggested in this study. One policy administrator stated directly that certain information is either not developed or is not disseminated within the department in order to avoid undue scrutiny of departmental operations and decisions. Overall, one is led to suggest that the "riskiness" associated with the perceived "crisis situation" is related to at least some attempt to control information and may reinforce reliance on "safe" or "proven" sources of information. Yet, the findings in the study do not provide strong substantiation of such a claim; however, it appears rather certain that the overall reactive management posture is intricately bound to perceptions of an uncertain future, and this climate affects the orientation concerning empirical data.

Study results do appear to provide support for the contention that organizational interests affect use in the department. Clearly, the finding that, at the upper levels of decision, use tends to be primarily symbolic or persuasive supports this notion.
Respondent perceptions that: 1) communication of information through a trusted aide (finding the "right ear") is important in understanding use/non-use; 2) the issue and decisionmakers' attitudes toward the issue are more important than the objectivity of the producer of research information in understanding use; and 3) challenges to budget or staff allocations are very important in understanding use/non-use all support a contention that "bureaucratization" of the knowledge-inquiry system is important in understanding use/non-use in the department.

Adversary relationships within the department were pointed out as a problem in achieving use of in-house research products. Such relationships were mentioned as important in understanding that use of evaluation products might be termed "antagonistic" use. Requiring a reply to the chief executive ensured at least limited consideration of evaluation products by designated members of the organization. However, there were indications that program evaluation, in particular, is viewed by many respondents as a monitoring device, selectively applied, and that at least some degree of threat is associated with evaluation activities. Respondents related program evaluation to a control mechanism for upper-level management ("sending out eyes and ears"). It appeared that the credibility of efforts to conduct in-house research was diminished by the perception that evaluation products were connected with "political" judgments of subunit success. Whether or not these perceptions resulted from utilization of research products by upper-level management or were perhaps "created" as an image to discredit evaluation and protect subunits from the effects of "undue scrutiny" could not be determined.
The situation regarding the perception of evaluation does serve to highlight the extent to which the use of research information may be tied to issues associated with "bureaucratization" and "organizational interest" in the department. This study did not result in the documentation or observation of any instance where research information was deliberately misused to serve purely "political" ends. Results of the study are sufficient to underscore the need to address issues associated with "bureaucratization" and "organizational interest" in understanding use/non-use in the department. However, the data are not sufficient to determine with assurance the degree to which these issues affect use/non-use in the department.

Coda

The department is a large, complex organization, and it is apparent that there is no simple explanation for the patterns of utilization observed in this study. The study did not directly address elements of the policy cycle from implementation on, but was instead limited to a concern with utilization at the upper levels of the decisionmaking hierarchy and the phases of the policy cycle up to and including the decision phase. It is clear that many questions remain to be addressed concerning utilization in the department.

The results of this study do provide support for the notion that research information must compete with a "mountain of ordinary knowledge" and that use may only result in "reshaping" that mountain "here and there."2 The study findings underscore the importance of understanding the realities of the decisionmaking process and the overall orientation of the knowledge-inquiry process, the complex effects of organizational routines and procedures, and the relationships of these powerful aspects of organizational reality to the role and potential for research information as an input to decisionmaking.
Unmistakably, one must conclude that normatively-based prescriptions for institutionalizing research and resultant expectations for utilization are not entirely viable for the department. There appears to be neither the structure nor the inclination to base the decision system of the department on a research-oriented, information-sensitive approach. One must therefore think that, at this point, normative prescriptions can do little more than reshape the thinking of some of the department's members here and there. There appears to be little support for the re-orientation and restructuring required to elevate research information to a role of prominence in the decisionmaking processes in the department.

Lindblom and Cohen stress the importance of understanding that policymakers/decisionmakers may choose to tailor their approaches to problem-solving to suit their perceptions of a given situation.3 The results of this study underscore the importance of understanding this insight and applying it to understanding use/non-use. By moving to a better understanding of use/non-use within the ubiquitous "real-life" decision, it may be possible to better identify the kinds of changes which can be made and which lead to what might be termed the "optimum use" of research information. The findings of this study suggest that there is much to be learned and that there is a need to investigate utilization from a perspective which emphasizes the effects of the decisionmaking context.
Investigations of utilization are yet confronted by 1) the conceptual confusion surrounding "use"; 2) the need to study a phenomenon intricately bound to the processes of the "real-life" decision in complex organizations; and 3) the need to address an iterative cycle of policy development/decision, including the need to address questions related to information production, acquisition, assimilation, dissemination, entry into the decision itself, implementation and its effects, and the return of feedback for policy adjustment. There is a need to address variables in all these areas at both micro- and macro-levels.

There has as yet been no adequate, integrating conceptual framework for addressing utilization. It appears important to turn attention to investigations rooted in the rich literature on organizations and decisionmaking in complex organizations. What would appear necessary is the continuation of attempts at cumulative advance in understanding utilization. It seems important to conduct a number of studies, each limited and well-defined, aimed at further investigation of the effects of factored problem-solving, problemistic search, organizational routines and procedures, uncertainty, risk avoidance, and the bureaucratization of the knowledge-inquiry system, especially the effects of information policies and organizational interests. It could be that the notions surrounding "satisficing" and "behavioral learning" provide the perspective necessary to understand utilization and integrate some of the seemingly disparate findings in the field, and perhaps further research refining the notions and organized to address these concerns would prove useful in the study of use/non-use in corrections and in advancing an understanding of utilization in public agencies in general.

END-NOTES: CONCLUSIONS

1. Carol Weiss and Michael Bucuvalas, Social Science Research and Decision-Making, New York, N.Y.: Columbia University Press, 1980, p. 248.
2. Charles Lindblom and David K. Cohen, Usable Knowledge, New Haven, Ct.: Yale University Press, 1979.

3. Ibid.

APPENDIX

INTERVIEW SCHEDULE

1. Describe your position in the organization. How long have you held this position? How long have you been with the organization?

2. What is your highest level of educational attainment? Have you completed any coursework related to statistics, methods, or other research-related topics? Have you participated in seminars or in-service education programs concerning research or research use?

3. Do you participate in policy or program decisions in the organization? In what way? In what particular policy areas?

4. Can you cite any instances in which you have personally used research findings in reaching a policy or program decision? In changing a policy or program? How was the research used?

5. Have you had any experiences with the use of research findings which were particularly positive? particularly negative? How do you account for this?

6. Can you cite any instances of the use of research findings by others in reaching a policy decision or changing a policy or program? How was the research used?

7. Are there any areas in which research findings have been particularly useful in formulating policy or making program decisions in the organization? Any areas where research findings may be potentially useful? Any areas where research findings would be particularly discounted or ignored?

8. When you have a policy problem or are faced with a policy issue or program decision, where do you look for information? Can you cite examples?

9. How are policies, procedures, and programs reviewed, critiqued, and up-dated in the organization? In your unit/area?

10. Describe the flow of information in the organization. Describe the flow of research information in the organization.

11. If you send research information forward, how do you go about deciding what information is to be sent forward and who is to receive the information?
In what form do you usually send this information?

12. Do you regularly review research findings in your area? From what sources? For example, from empirical format journals? information disseminated from other agencies? other?

13. Are you personally on dissemination lists to receive research information? For example, from NCJRS, NIC, or others? Is the organization on such dissemination lists?

14. Are other forms of information put to greater/equal/less use than research findings in the development of policy or in making program decisions in the organization?

15. Are research findings more likely to be used in the organization for general systemwide policy development? administrative engineering? internal procedures development? program review or change? Are there other uses for which research findings are more likely to be used?

16. What are your general feelings about the use of research information for policy development and program decision in the organization? In corrections in general? In your activity/unit?

17. Are there any particular constraints to the use of research findings for policy development or program decision in the organization?

18. When research information is used for policy purposes within the organization, what factors are critical in understanding whether or not it is used?

19. Let me mention several factors, and I would ask you to tell me whether each is of great importance, some importance, or no importance at all concerning the use of research information in the organization.

   a. The information supports the policy position decisionmakers are predisposed toward.

   b. The information comes directly to the decisionmaker through a trusted staff aide.

   c. The information has not moved up through the decisionmaking hierarchy, but has come in laterally.

   d. The information was produced in-house, as opposed to being provided by an outside source.

   e. The information is on a timely topic of interest and need.

   f.
The objectivity of the producer of the information is unquestionable.

   g. The information is presented in a manner that is understandable.

   h. The information does not challenge or contradict a position already taken by a decisionmaker.

   i. The information does not challenge the budget or staff allocations of the organization.

If you had to pick one or two of the above factors that are most important for understanding the use of research information in the organization, what would they be? Are there other factors affecting the use or potential use of research findings that you would care to mention?

20. What might be done to get policymakers in the organization more interested in the use of research findings as a basis for policy and program decisions? Are you familiar with Public Law #____? Has that law had any effect on the use of research information as a basis for policy or program decision in this organization?

21. Do you think the recent notion that "nothing works" has had any effect on the use of research information for policy or program decision in corrections?

22. Are there any other comments you wish to make?

** Many of the items included in this interview schedule are borrowed or adapted from those utilized by Robert F. Rich in his study of the use of CNS information by federal officials. The report of his work is published in his book, Social Science Information and Public Policy Making, published by Jossey-Bass Publishers in 1981. Several of the items included in this interview schedule are borrowed or adapted from those utilized by John McNamara in his study of research utilization among officials in law enforcement agencies. At this time that work is still in progress.

LIST OF REFERENCES

Adams, Stuart. Evaluative Research in Corrections: A Practical Guide. Washington, D.C.: U.S. Department of Justice, 1975.

Alexander, Christopher. Notes on the Synthesis of Form. Cambridge, Ma.: Harvard University Press, 1964.
Allison, Graham. Essence of Decision. Boston, Ma.: Little, Brown, and Co., 1971.

Bennett, Richard R., and Corrigan, Robert S. "Police Occupational Solidarity: Probing a Determinant of Police/Citizen Reactions." Journal of Criminal Justice 8 (1980): 111-122.

Burnham, R. William. "Modern Decision Theory in Corrections." In Decisionmaking in the Criminal Justice System. Edited by Don Gottfredson. Rockville, Md.: NIMH, 1975.

Caplan, Nathan, and others. The Use of Social Science Knowledge at the National Level. Ann Arbor, Mi.: Institute for Social Research, 1975.

Coffey, Alan. Correctional Administration: The Management of Probation, Institutions, and Parole. Englewood Cliffs, N.J.: Prentice-Hall, 1975.

Conner, Ross F. "The Evaluation of Research Utilization." In Handbook of Evaluation in Criminal Justice. Edited by Malcolm W. Klein and Kathie S. Teilmann. Beverly Hills, Ca.: SAGE Publications, 1980.

Cyert, Richard M., and March, James G. A Behavioral Theory of the Firm. Englewood Cliffs, N.J.: Prentice-Hall, 1963.

Downs, Anthony. Inside Bureaucracy. Boston, Ma.: Little, Brown, and Co., 1967.

Ellickson, Phyllis. "Knowledge Utilization in Local Criminal Justice Agencies: A Conceptual Framework." Santa Monica, Ca.: Rand Corporation, 1981. (Typewritten.)

Farace, Richard V.; Monge, P.R.; and Russell, H.M. Communicating and Organizing. Reading, Ma.: Addison-Wesley Publishing Co., 1977.

Glaser, Daniel. Routinizing Evaluation: Getting Feedback on the Effectiveness of Crime and Delinquency Programs. Rockville, Md.: NIMH, 1973.

Gottfredson, Michael, and Gottfredson, Don. Decisionmaking in Criminal Justice: Toward the Rational Exercise of Discretion. Cambridge, Ma.: Ballinger, 1980.

Grosskin, Richard B. "Turning Knowledge into Action: Improving the Use of Evaluation Research in Crime and Criminal Justice Problem-Solving." Institute of Crime and Criminology, University of Maryland, 1983. (Typewritten.)
"Toward the Integration of Evaluation in Criminal Justice Policy: Constructing Alternative Interpretational Models of the Evaluation Utilization Process." April, 1981. (Typewritten.) Janis, Irving L., and Mann, Levi. Decisionmaking: §_Psychological Analysis 2; Conflict, Choice, and Commitment. New York, N.Y.: The Free Press, 1977. Katz Daniel, and Kahn, Robert. The Social Psychology Lf Organizations. New York, N. Y.: John Wiley and Sons, 1966. Lindblom, Charles D. ”The Science of Muddling Through." Public Administration Review 19 (Spring 1959): 79-88. , and Cohen, Daniel. Usable Knowledge. New Haven, Ct.: Yale University Press, 1979. Lynd, Robert S. Knowledge for What?. Princeton, N.J.: Princeton University Press, 1939. Martinson, Robert. ”What works?: Questions and Answers About Prison Reform.” In Corrections: Problems and Prggpects. Edited by D.M. Peterson, and C.W. Thomas. Englewood Cliffs, N.J.: Prentice-Hall, 1980. Mowitz, Robert J. The Design 25 Public Decision Systems. Baltimore, Md.: University Park Press, 1980. ' National Advisory Commission on Criminal Justice Standards and Goals. Corrections. washington, D.C.: U.S. Government Printing Office, 1973. Patrick, Mary S. ”Utilizing Program.Evaluation Products: A Rational Choice Approach.” Paper presented at the annual meeting of the Midwest Political Science Association, Chicago, 11., April 1979. Patton, Michael Q. Utilization-Focused Evaluation. Beverly Hills, Ca.: SAGE Publications, 1978. 220 Prewett, Kenneth. "Foreward" to Social Science Information and Public Policy Making, by Robert F. Rich. San Francisco, Ca.: Jossey-Bass Publishers, 1981. Rein, Martin, and White, Sheldon. "Policy Research: Belief and Doubt." Policy Analysis (Fall 1977): 239-271. Rich, Robert F. Social Science Information and Public Policy Making. San Francisco, Ca.: Jossey-Bass Publishers, 1981. . Translating Evaluation into Policy. Beverly Hills, Ca.: SAGE Publications, 1979. Rossi, Peter H., and Freeman, Howard. 
Evaluation: A Systematic Approach. 2nd edition. Beverly Hills, Ca.: SAGE Publications, 1982.

Rothman, Jack. Using Research in Organizations: A Guide to Successful Application. Beverly Hills, Ca.: SAGE Publications, 1980.

Simon, Herbert. Administrative Behavior. New York, N.Y.: The Free Press, 1976.

Snow, C.P. Science and Government. Harvard University Godkin Lectures. New York, N.Y.: New American Library, 1962.

Steinbruner, John D. The Cybernetic Theory of Decision. Princeton, N.J.: Princeton University Press, 1974.

Weiss, Carol, and Bucuvalas, Michael. Social Science Research and Decision-Making. New York, N.Y.: Columbia University Press, 1980.

Weiss, Carol, and Bucuvalas, Michael. "Truth Tests and Utility Tests: Decisionmakers' Frames of Reference for Social Science Research." American Sociological Review 45 (April 1980): 302-313.

Weiss, Carol. "The Many Meanings of Research Utilization." Public Administration Review (Sep/Oct 1979): 426-431.

________. "Factors Affecting the Utilization of Evaluation Findings: An Empirical Test." Paper presented at the annual meeting of the Evaluation Research Society, Washington, D.C., 2-4 November 1978.

________. Using Social Science Research in Public Policymaking. Lexington, Ma.: D.C. Heath, 1977.

________. "Policy Research in the University: Practical Aid or Academic Exercise?" Policy Studies Journal 4:3 (1976): 224-228.

________. "Evaluation Research in the Political Context." In Handbook of Evaluation Research. Edited by E. Streuning and M. Guttentag. Beverly Hills, Ca.: SAGE Publications, 1975.

Wildavsky, Aaron. "The Self-Evaluating Organization." In Classics of Public Administration. Edited by J.M. Shafritz and A.C. Hyde. Oak Park, Il.: Moore Publishing Co., 1978.

Yin, Robert K. "The Case Study as a Serious Research Strategy." Knowledge 3:1 (September 1981): 95-106.