This is to certify that the dissertation entitled PERCEIVED FAIRNESS IN NATURAL RESOURCE DECISION MAKING: INFLUENCES AND CONSEQUENCES presented by Patrick Delinde Smith has been accepted towards fulfillment of the requirements for the Ph.D. degree in Forestry.

PERCEIVED FAIRNESS IN NATURAL RESOURCE DECISION MAKING: INFLUENCES AND CONSEQUENCES

By

Patrick Delinde Smith

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Forestry

2000

ABSTRACT

PERCEIVED FAIRNESS IN NATURAL RESOURCE DECISION MAKING: INFLUENCES AND CONSEQUENCES

By Patrick Delinde Smith

How can trust between citizens and decision makers be increased? Over the last century, decision makers have tried a wide variety of participation techniques in an attempt to increase citizen trust in their agencies and achieve better resource management. In this study, several cases of natural resource decision making in Michigan were compared in order to determine what aspects of the decision making context affect citizen evaluations of the fairness of the process and outcome of decision making. Psychological theories of procedural and distributive justice were used to create measures of fairness. Contextual factors tested in the study included the agency conducting the participation, the characteristics of the citizens involved, and the nature of the decision making situation in terms of power distribution, intensity of conflict, and participation technique used. Predictions were made as to how citizen evaluations would influence the important consequence for the agency of trust in decision makers. Data to test the hypotheses were collected through 1550 mailed surveys to citizens and 455 surveys to agency decision makers. Descriptive, correlational and regression results largely confirmed the predicted relationships, showing that perceived fairness is affected by the decision making context. These results provide clear directions for decision makers and natural resource agencies that wish to build effective collaborative partnerships with citizens.

ACKNOWLEDGEMENTS

Many individuals were involved in the successful completion of this dissertation.
Staff in the Michigan Department of Natural Resources Forest Management Division and the Huron-Manistee National Forest helped me compile the mailing lists and critique the questionnaire. Fellow students at MSU, particularly Marcello Wiechetek and Christina Kakoyannis, helped me think through my research and make sure the survey was actually sent on time. My committee, Larry Leefers, Dennis Propst, and Bernard Finifter, were generous with their time and advice. In particular, my advisor, Maureen McDonough, was always a source of wisdom, guiding me through the shark-infested dissertation depths to make sure I emerged with only minor cuts and bruises. Of course, all the guidance in the world does not suffice if there are no data. I am indebted to the many citizens of Michigan who willingly (or with a little prodding) shared their experiences and opinions about natural resource decision making. Finally, a dissertation depends on social support. Stephanie Fozzi not only assisted at crucial times with the actual survey, but was also an enchanting companion who made sure I took time off to play. Thanks, too, to my parents, Richard and Marcia Smith, and my sister, Dana, who made me what I am today... a Ph.D.!

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1: INTRODUCTION
CHAPTER II: A BRIEF HISTORY OF PUBLIC PARTICIPATION IN NATURAL RESOURCE DECISION MAKING
  American culture and public participation: Justice for all
  The age of professionals: 1900-1960
  The age of participation legislation: 1960-1990
  The age of citizen-professional partnerships: 1990-2000
CHAPTER III: JUSTICE THEORIES AND PUBLIC PARTICIPATION
  Theories of justice
  Justice and public participation
CHAPTER IV: A CONTEXTUAL THEORY OF PERCEIVED FAIRNESS IN NATURAL RESOURCE DECISION MAKING
  A theoretical framework for perceived fairness
  Fairness and the decision making context
    Agency context factors
    Situation context factors
    Citizen context factors
  Elaboration of the framework
  Research questions
CHAPTER V: RESEARCH METHODS 1: CASES, SAMPLING, INSTRUMENTS, SURVEY ADMINISTRATION, NON-RESPONSE BIAS AND ANALYSIS
  The cases
  Sampling frame
  The instruments
    Context variables
    Evaluation of experience variables
    Evaluation consequence variable
  Survey administration and response rates
  Non-response bias
  Coding and data entry
  Analysis
CHAPTER VI: RESEARCH METHODS 2: COMPUTING THE VARIABLES AND CHECKING RELIABILITY AND VALIDITY
  Reliability and validity of researcher assigned context variables
  Factor analysis and reliability of agency survey context variables
  Validity of agency survey context variables
  Factor analysis and reliability of citizen survey context variables
  Validity of citizen survey context variables
  Factor analysis and reliability of citizen evaluation variables
  Validity of citizen fairness evaluation variables
  Factor analysis, reliability, and validity of the evaluation consequence variable
  Summary
CHAPTER VII: RESULTS AND DISCUSSION: COMPARING AGENCIES
  Cross-agency variation on agency variables
  Cross-agency variation on situation variables
  Cross-agency variation on citizen characteristics variables
  Cross-agency variation on fairness evaluation and consequence variables
  Testing the theoretical framework of perceived fairness
  Summary
CHAPTER VIII: RESULTS AND DISCUSSION: CITIZEN AND SITUATION INFLUENCES ON FAIRNESS AND TRUST
  Citizen evaluations of fairness and trust in decision makers
  Participation technique and evaluation variables
  Conflict and evaluation variables
  Equal power and evaluation variables
  Prior relationships and evaluation variables
  Citizen characteristics and evaluation variables
  Summary
CHAPTER IX: RESULTS AND DISCUSSION: THE COMBINED INFLUENCE OF AGENCY, SITUATION, AND CITIZEN FACTORS ON FAIRNESS AND TRUST
  Analysis overview
  Testing the mediation of fair process
  The relative influences of context variables on fair process
  Interactions in the influences of context variables on fair process
  Differences between agency means on fair process
  Testing the mediation of fair outcome
  The relative influences of context variables on fair outcome
  Interactions in the influences of context variables on fair outcome
  Differences between agency means on fair outcome
  Summary
CHAPTER X: SUMMARY AND CONCLUSION
  Summary
  Study limitations
  Future research
  Management implications
APPENDIX A: CITIZEN QUESTIONNAIRE
APPENDIX B: AGENCY QUESTIONNAIRE
APPENDIX C: SHORT FORM (PHONE SURVEY) QUESTIONNAIRES
REFERENCES

LIST OF TABLES

Table 1. Variables and how they were measured.
Table 2. Response rates to the mailed and phone follow-up surveys of citizens.
Table 3. Response rates to the mailed and phone follow-up surveys of decision makers.
Table 4. Citizen reasons for not returning survey.
Table 5. Comparison of variable means between respondents (resp) and non-respondents (n-resp) to assess citizen non-response bias.
Table 6. Agency reasons for not returning survey.
Table 7. Comparison of variable means between respondents (resp) and non-respondents (n-resp) to assess agency non-response bias.
Table 8. Items used to measure citizen knowledge and citizen self-interest.
Table 9. Factor structure for perceptions of the beliefs of most other employees on bureaucracy and expertise variables.
Table 10. Factor structure for fairness importance variables.
Table 11. Factor structure for fairness performance variables.
Table 12. Citizen level correlations among agency variables with sample sizes from 282 to 325.
Table 13. Agency level correlations among agency variables with a sample size of 5.
Table 14. Factor structure of fair process variables.
Table 15. Citizen level correlations among citizen evaluation variables.
Table 16. A summary of variables and their validity.
Table 17. Means, standard deviations and significant differences between agencies on citizen knowledge and citizen self-interest variables.
Table 18. Means, standard deviations and significant differences between agencies on fairness importance variables.
Table 19. Means, standard deviations and significant differences between agencies on fairness performance variables.
Table 20. Participation techniques used by the agencies.
Table 21. Means, standard deviations and significant differences between agencies on decision making situation variables.
Table 22. Means, standard deviations and significant differences between agencies on citizen characteristics variables.
Table 23. Means, standard deviations and significant differences between agencies on citizen evaluation and trust variables.
Table 24. Anova analysis of differences between government levels in fairness evaluations.
Table 25. Agency level Pearson correlations and p-value significance levels of citizen fairness evaluations with agency culture and resources (N = 5 agencies).
Table 26. Agency level Pearson correlations and p-value significance levels of citizen fairness evaluations with situation and citizen variables (N = 5 agencies).
Table 27. Correlations between citizen evaluations, situation context variables, and citizen context variables.
Table 28. Model summary of stepwise ANCOVA for dependent variable trust in decision makers with total fair process evaluation as one of the independent variables.
Table 29. Standardized beta coefficients of stepwise ANCOVA for dependent variable trust in decision makers with total fair process evaluation as one of the independent variables.
Table 30. Model summary of stepwise ANCOVA for dependent variable total fair process evaluation.
Table 31. Standardized beta coefficients of stepwise ANCOVA for dependent variable total fair process evaluation.
Table 32. Model summary of stepwise ANCOVA for dependent variable trust in decision makers with fair outcome evaluation as one of the independent variables.
Table 33. Standardized beta coefficients of stepwise ANCOVA for dependent variable trust in decision makers with fair outcome evaluation as one of the independent variables.
Table 34. Model summary of stepwise ANCOVA for dependent variable fair outcome evaluation.
Table 35. Standardized beta coefficients of stepwise ANCOVA for dependent variable fair outcome evaluation.

LIST OF FIGURES

Figure 1. Overview of the theoretical framework of influences and consequences of perceived fairness.
Figure 2. Predicted directions of influence between variables in the theoretical framework of perceived fairness.
Figure 3. The interaction of power equality with education on total fair process evaluation.
Figure 4. The interaction of age with participation technique on fair outcome evaluation.
Figure 5. The interaction of gender with participation technique on fair outcome evaluation.

CHAPTER 1: INTRODUCTION

I felt much better about it then, because I felt that someone cared enough to give me the information. Even after the fact it still made [me] feel good... If these are the kinds of people who are watching the store, then I have a greater sense of trust in them running the operation. It's almost as if I have a personal relationship with that faceless thing called the DNR.

The above quote, from a focus group participant recalling an interaction with a forester from the Michigan Department of Natural Resources (Smith, McDonough, & Mang, 1999), highlights how public participation can lead to greater trust in natural resource managers. Relationships of trust and respect between citizens and agency employees are crucial to the relationship-based collaborative management regimes currently being promoted in the United States. The use of collaboration reflects recent efforts to make government agencies more responsive and accountable to citizens. This is being done to counterbalance the previous emphasis on neutrality in government.

Over the course of U.S. history, governments have emphasized either neutrality or accountability in search of a balance between the two that maximizes justice (Knott & Miller, 1987). Neutrality and accountability in decision making can be interpreted as contrasting aspects of justice. Neutrality focuses on giving all citizens equal, unbiased treatment. In the U.S., the movement for making government more professional assumed neutrality would be achieved through the use of accurate information and expert judgement (Box, 1998). Accountability emphasizes making sure that government respects the rights of all citizens to participate in decision making and to have an opportunity to influence the outcomes. Accountability is embodied in the practice of democratic governance and citizen participation.

Principles of neutrality and accountability have been emphasized to varying degrees over the 20th century. At the beginning of the century, governments in the U.S. focused on neutrality. People largely trusted the professionals who cared for their lands. As a result, public participation in resource management was rarely practiced (Kaufman, 1960). Management was oriented towards commodity production and economic development. Direct involvement of citizens was kept to a minimum. Decision makers were experts who decided the goals and presumed to know how to reach them.

In the 1960's and 1970's, citizens became increasingly dissatisfied with governmental policies and demanded a voice in decision making. Numerous mandates for involving citizens and giving them opportunities to voice their concerns were given to agencies (Taylor, 1984). In addition to using participation forums such as public hearings, citizens and environmental organizations sued agencies in court for not fulfilling their mandates to consider citizen concerns. As a result, resource managers began to give greater consideration to preservation and environmental concerns (Jones & Mohai, 1995). In the 1970s and 1980s, in response to the growing power of environmental groups, traditional resource constituencies like timber companies, oil producers and land developers fought back. Natural resource decision making became a process dominated by conflict (Wondolleck, 1988).
In addition, many resource managers resisted the move towards accountability. Public participation was generally implemented in a formally correct way that missed the spirit of the new laws, and simply led to ratification of agency decisions (Fortman & Fairfax, 1991; Lawrence et al., 1997). The greater involvement of the courts led to a focus on avoiding lawsuits. This meant that participation processes emphasized formal procedures instead of considering constituent concerns (Taylor, 1984). Participation as tokenism also existed in antipoverty, urban renewal, and model cities projects (Arnstein, 1977).

Although this situation continued into the 1990's in many areas, the real need for collaboration between agencies and citizens continued to grow. Natural resource managers began attempting ecosystem management, an approach which is holistic and focuses on the connections between resources (COS, 1999). Ecosystem management also focuses on natural resources at the landscape level, necessitating coordination across ownerships (Salwasser, 1994). Cross-boundary management often meant voluntary commitments from citizen landowners to manage their lands consistently with adjacent public lands. Achieving cooperative efforts between citizens and agencies required that they trust each other. As agencies facilitated the way forward, they needed to have positive relationships with citizens.

In order to continue the movement towards collaborative management, resource managers need to understand why some decision making causes citizens to have increased trust in the agency, support for its decisions, and improved relationships with its employees, while other decision making does not. The brief review of the history of government in the U.S. suggests that justice issues of neutrality and accountability are very important. In fact, research on procedural and distributive justice shows that fairness of the decision making process and fairness of the decision outcome often increase trust in authorities and support for their decisions. For example, Tyler and Degoey (1995) found that residents supported authorities and complied with their rules during a water shortage in San Francisco. This trust and willingness to undergo personal deprivation for the good of the larger community was largely based on perceptions that the decision making process was fair. Lauber and Knuth (1997, 1998) documented how perceptions of fair decision making processes in a moose reintroduction decision were related to satisfaction with the process as well as with the state agency making the decisions. The consequence of increased support for authorities based on perceptions of justice has also been demonstrated in many contexts including law, business, and politics (Lind & Tyler, 1988).

A noteworthy feature of justice theory is the extent to which good participation processes can fulfill many of the important principles of procedural justice. This was confirmed in a recent study of the Northern Lower Michigan Ecosystem Management Project (Smith & McDonough, 2001). When asked how they judged the fairness of natural resource decision making, citizens often focused on characteristics of the process like adequate participation opportunities, early notification, broad representation of affected groups, and serious consideration of citizen concerns. This is not surprising because these characteristics all relate to accountability and neutrality.
Broad representation is necessary to make sure that the information collected represents all views equally, allowing decision makers to try reaching decisions which are unbiased. Adequate opportunities to participate and early notification also help to make sure that there is equal access to decision makers, increasing neutrality. However, effective participation is also related to accountability because it makes sure that citizens can express their thoughts to decision makers and potentially influence the decision outcomes. If citizens are able to influence outcomes, this shows the decision makers are being held accountable to meet the needs of the citizenry. By following effective participation practices, there is a greater chance that citizens will evaluate the decision making process and outcome as fair. Greater trust in the agency should be the result.

If research suggests that perceived fairness can increase trust in decision makers, what are the factors which, in turn, may increase perceived fairness? Answering this question is important because the circumstances surrounding decision making may influence how citizens perceive the decision making. Reviews of social psychological theory and the history of natural resource management in the U.S. suggest a number of potential influences on fairness. The first group of possible influences concerns characteristics of the agency making the decisions. For example, if the employees in an agency place importance on achieving fairness, it is more likely that citizens will have fair experiences. The second group of influences on fairness consists of aspects of the decision making situation. For example, when the issues being decided are highly controversial and citizens have intense conflict with each other, it is likely that perceived fairness of the process and outcome will be reduced. The final group of influences on fairness consists of the personal characteristics of citizens. For example, someone who has a great deal of prior participatory experience may be less critical of the agency and perceive greater fairness. In summary, the agency making the decisions, aspects of the situation in which the decisions are being made, and characteristics of the citizens judging the decision are all potential influences on perceived fairness, and thus on trust in decision makers.

The purpose of this study is to understand the role of fairness in influencing citizen trust in a decision making context. The first objective is to confirm the influence of perceived fairness on the consequence of citizen trust in decision makers in natural resource contexts. In other words, how big an impact does perceived fairness have on trust and support for decisions? The second objective is to better understand how fairness is achieved. In other words, what aspects of the decision making context influence perceived fairness?
These objectives will help address the problem of how to improve the relationships of citizens with the employees and agencies managing natural resources.

The study begins by examining the history of natural resource management and public participation in the U.S. A case is made for the importance of justice in natural resource decision making. Chapter 3 then gives a more detailed accounting of psychological theories of justice and how these relate to public participation. Chapter 4 uses the history in Chapter 2 and the theory in Chapter 3 to construct a theoretical framework relating the decision making context to perceived fairness and perceived fairness to the consequence of trust in decision makers. Testing of the framework is outlined through a set of research questions. Chapters 5 and 6 present the methods used to administer and analyze the surveys, compute the variables, and assess reliability and validity of the variables. Chapter 7 examines the data at the level of the agency, Chapter 8 presents an individual citizen level analysis, and Chapter 9 combines them in a stepwise analysis of covariance. Chapter 10 concludes with the summary, study limitations, future research, and management implications.

CHAPTER II: A BRIEF HISTORY OF PUBLIC PARTICIPATION IN NATURAL RESOURCE DECISION MAKING

Why is public participation in natural resource management so important in the U.S., yet so problematic and difficult to achieve? The answer to this question lies partly in the history of the country and its search for justice. Over the years, citizens of the nation struggled to achieve justice through the goals of governmental neutrality and accountability. This has resulted in legacies of agency culture, citizen conflict, and legislative mandate which both facilitate and hinder neutrality and accountability. In this chapter, natural resource decision making in the 20th century is briefly reviewed through the lens of justice. Starting with an overview of the cultural foundations of fairness in the U.S., a case is built for the argument that justice in decision making holds the key to better relationships between citizens and their governments.

American culture and public participation: Justice for all

The importance of public participation is deeply rooted in the history of the U.S. because it taps into the cultural value of justice. The U.S. is a nation of immigrants, characterized by a frontier spirit of independence. A defining moment of cultural identity was the War of Independence against England, in which the settlers fought for the right to govern themselves in a democracy. They fought to create a society in which "all men are created equal" and established an elaborate system of courts to ensure that justice was given to all. To this day, the neutrality of the court system in the U.S. remains a powerful belief (Ewick & Silbey, 1998).

The desire for justice has created a philosophical tension within which U.S. government operates.
On the one hand, justice requires a neutral government which makes decisions without bias. Overtly political decisions in which people with one set of interests get what they want violate the sense of fairness. In practice, governments have often maintained a somewhat distant relationship with citizens to avoid the appearance of being "captured" by special interests (Knott & Miller, 1987). On the other hand, justice is concerned with the protection of basic human rights such as freedom of speech. Governments which become authoritarian and disconnected from their citizens can violate their basic rights and treat citizens unjustly. The democratic form of government is designed to counteract the tendency towards authoritarianism by providing mechanisms for citizens to control and regulate their own government. In addition to elections, public participation has become a way for ordinary citizens to influence government decisions which affect their lives. Thus governments concerned with justice have to walk a fine line between maintaining neutrality and being accountable to their citizens.

The history of natural resource management in the U.S. shows pendulum swings as the government alternately emphasized neutrality or accountability. In the beginning of the century, widespread perceptions of political party patronage led to calls for a neutral bureaucracy in which employees were hired based on their expertise, not their party connections. In the 1960s and 1970s, citizens felt the government was not respecting their basic rights and had gotten out of control. In response to widespread protest, Congress mandated citizen participation in many government agencies. The end of the century has found agencies struggling with how to integrate accountability and neutrality in their decision making.

The age of professionals: 1900-1960

The first half of the twentieth century was characterized by an emphasis on making government more professional. This was in response to a party and political patronage system which had led to widespread corruption and inefficiencies in government towards the end of the nineteenth century (Box, 1998). Large numbers of appointed boards led to fragmentation of power and lack of coordination. In response to this, local governments moved to a strong mayoral model and then ultimately to the council-professional manager structure. At the federal level, especially during the Depression era, government grew larger and more centralized. New agencies were created and staffed with a professional civil service dedicated to efficiency and neutrality. The philosophy advocated attaining a community-wide vision, the greater good for society.

During this period the Forest Service was created, as were the first professional university-based schools for training the foresters needed to staff the new agency (Robinson, 1975).
Gifford Pinchot, the first chief of the Forest Service, proclaimed the foresters' creed to be "the greatest good for the greatest number." Emphasizing the utilitarian use of the national forest lands for water production and consumptive products such as timber and grazing, he brought scientific forest management principles from Europe designed to achieve a sustained yield of products over the long term. This federal model of forest management was also adopted by many states. Foresters at the state level had similar training and similar objectives of sustained production of timber and watershed protection.

The role of citizens in this new forestry was small. Public participation was limited to public relations, and federal foresters were frequently transferred to prevent the development of potentially collusive relationships with local people (Kaufman, 1960). Foresters were portrayed as professionals working for the broader good of society, in contrast to publics who were motivated by narrow self-interest. Thus, agencies did not see much point in involving citizens in natural resource decision making.

These beliefs were also present at the local level, although in a more complex way. Zoning and planning became widespread practices for governing the use of urban land resources in the face of industrialization and the resulting rapid development of urban areas. Although zoning efforts were sometimes used to keep ethnic and racial groups separate, they still revolved around the use of the land resource (Molotch, 1976). Once the profession of urban planning was created, planners worked with city government to contain development in an orderly fashion. This was often done in the service of what was termed the "growth machine," a group of business elites who stood to profit from land development. Citizens could and did become involved when the desires of the growth machine conflicted with their own needs for neighborhood amenity maintenance (Steele, 1987). In contrast to federal and state forest management, which often occurred in remote locations, urban land management was done in citizens' back yards. From the beginning, citizens had motivation to demand voice in local planning and zoning. However, at this early period, the balance of power often lay in the professionals' hands.

At all three levels of government, the professional management philosophy profoundly affected the relationship between citizens and managers. As Box explains:

Many practitioners have been trained to maintain a separation between themselves and the public, because they are the experts and citizens are the "customers," people who know little about public services except the end products they receive and the fees or taxes they pay. (1998, p. 145)

The concept of citizen as customer, someone who lacks knowledge about how to manage the resource, represented a profound change from earlier models of Jeffersonian democracy in which citizens were assumed to be able to participate directly in governance (Pateman, 1970). Jeffersonian democracy had emphasized direct involvement partly in order to achieve the aspect of justice related to the protection of citizen rights and the assurance of government accountability. The newer government philosophies stressed instead the aspect of justice concerned with neutrality. At the end of the 1950s and beginning of the 1960s, citizens demanded a return to a more Jeffersonian approach.
People began to question the ability of government to protect their rights and demanded a more accountable, less authoritarian government. They wanted a greater role than that of passive "customer."

The age of participation legislation: 1960-1990

The 1960's and 1970's were a period of social upheaval in the United States, marked by the civil rights movement, the environmental movement, and anti-Vietnam War protests. Publics demanded a greater voice in their government, and Congress responded to these concerns with sweeping new legislation that mandated citizen participation at local and federal levels. During the late 1970s and 1980s, agencies and natural resource managers struggled to learn how to implement these laws.

At the federal forest management level, both the National Environmental Policy Act of 1970 (NEPA) and the National Forest Management Act of 1976 (NFMA) mandated a planning process which included public involvement (Frome, 1984). These Acts had to be followed by national forest planners, but did not apply to state level managers, setting up a potential difference in the amount of citizen involvement. During this period, national forest management goals were dramatically reshaped by the new influence of environmental groups. Through lawsuits based on NEPA and NFMA as well as other new laws like the Endangered Species Act of 1973, production of traditional products like timber declined, while other uses such as recreation and wildland preservation increased (Jones & Mohai, 1995). This introduced a new intensity and level of conflict between different user groups that often played itself out in public participation arenas (Culhane, 1981).

The new requirements for participation were implemented by the Forest Service through standardized techniques based on the requirements of NEPA (USFS, 1977). The first step was scoping, in which all the affected constituents, particularly people living near to the management area, were told of the impending decision and given an opportunity to share their opinions. Then a set of management alternatives was internally developed and made available for comment during a defined public comment period. Public comments were collected through written comments as well as in formal public hearings. Collecting and using this new influx of information was a challenge for professionals accustomed to making decisions autonomously.

Local level decision making was also affected. In 1966, the Demonstration Cities and Metropolitan Act provided funding for community programs with "widespread citizen participation". The Housing and Development Act of 1974 required that cities "provide citizens with an adequate opportunity to participate in the development of an application" (Pollack, 1984). Because much local level urban planning centered on funding from these Acts, local governments were forced to implement citizen participation strategies. The new federal funding requirements necessitated that municipalities conduct comprehensive citizen participation when doing planning (Pollack, 1984). This created an environment in which citizen concerns had to be taken more seriously. Another change was that planners were pressured to preserve open space and to consider issues like water quality when making decisions (Steele, 1987).

The new-found importance of citizen participation had an effect on public-manager relationships at all levels of government.
Many managers saw the newly empowered publics as troublemakers who upset the status-quo system and challenged their professional authority. Other managers identified with the new democratic ideals and saw citizen involvement as a way to do things better. A regional forester looked back on that era and summarized the varying attitudes towards the new role of citizens:

Some administrators in the past have felt public involvement was cumbersome and a stumbling block in the way of efficient management. That I can't accept. It has proven a great benefit to the national forests of the northern region. Participating individuals and groups help Forest Service personnel make better decisions and confirm the democratic approach to public land management. National forests don't belong to just those working in the Forest Service - they are part of our national heritage (Frome, 1984, p. 96).

As a result of the varying views of professionals, it is not surprising that implementation, as well as citizen satisfaction, was mixed (Frome, 1984). In a review of participation in National Forest planning processes during the 1980s, the Forest Service found that 95 percent of respondents indicated there was compliance with the public participation section of the regulations (Russell et al., 1990). However, only 3 percent of citizen respondents felt their comments had influenced planning decisions. This suggests that citizen involvement may have been viewed as input to be considered by decision makers, but was not important or valid enough to affect actual planning decisions. Fortman and Fairfax (1991) stated that public involvement in forest planning was often used to ratify agency decisions. Ample evidence of participation as tokenism also existed at local levels in antipoverty, urban renewal and model cities projects (Arnstein, 1977). In addition, public participation was primarily one-way communication (Blahna & Yonts-Shephard, 1989) and often culturally inappropriate (McDonough, 1991, 1994). For example, most opportunities for public input into forest planning were structured as large public meetings advertised through standard media. The use of these techniques, in addition to excluding non-traditional users, tended to intensify rather than resolve the underlying conflicts between user groups (Wondolleck, 1988).

In summary, from the beginning of the century through the 1980s, natural resource decision making went through a dramatic transformation that left its mark on agencies and citizens. The early desire for a more neutral government led to agency policies and the promotion of a culture of professionalism which served to prevent relationships between citizens and resource managers. As the resource use preferences of citizens changed, agencies did not adjust, leading to citizen demands for greater accountability. Although legislation followed which required citizen participation,
The age of citizen-professional partnerships: 1 990-2000 The present era has seen the wider adoption, at least in principle, of a more active role for citizens. The Forest Service has experimented with varying techniques of conflict resolution and agency-citizen partnering. Some states have also tried this, although Michigan has focused on traditional techniques like advisory boards, open houses, and public meetings (McDonough & Thorbum, 1997). Local level planning also still largely relies on traditional public hearings and written comments, although particular localities have experimented with community Visioning processes. Within the profession of local managers, there is a growing movement towards what Kemmis (1990) has termed “barn-raising.” According to this philosophy, citizens are encouraged to take responsibility for their communities. One piece of evidence for the changing role of citizens in local planning is the commitment of Public Management, a leading practitioner journal aimed at local government, to publishing more on empowering citizens (Benest, 1996). The Forest Service recently adopted this philosophy of cooperative management with the second commissioning of the Committee of Scientists. The committee prepared 15 l99‘h lt’US rm‘ inf» CSCE C031 a set of recommendations, many of which may be codified into new regulations (COS, 1999). The report suggested that planning needs to be adaptive, collaborative, and focused on building stewardship capacities in communities. One of the key organizing principles of this new approach was ecosystem management. At both the federal and state level, the older ideas of multiple use are being augmented with new goals that emphasize maintaining ecosystem processes and functions. Because humans are also seen as part of the ecosystem, their participation in management decision making and implementation on private lands remains essential (Smith & McDonough, 1999). These proposals for planning are hardly new ideas. For the last decade some forest managers have tried new methods of planning and public participation. The Hoosier National Forest has been involved in conflict for many years about recreational use of the Deam Wilderness (Slover, 1996). Horse riders and non-riders were involved in escalating conflict over trail use and managers’ traditional approach of gathering comments and proposing solutions was not working. A new approach of collaborative planning was tried in which these groups and many others were brought together to focus, not on their differences, but rather on their shared desire for good management of an area special to them. Sinnon, Shands and Ligget (1993) have described this type of process as creating a community of interest. Numerous workshops, field trips and discussions resulted in a comprehensive plan to which all agreed. Volunteers from the groups are now helping make the plans a reality (Slover, 1996). Shannon (1987) reported that a new responsive management style is emerging within the Forest Service. This approach requires a balance between participatory planning and technical competence. Public participation involves bringing in parties and 16 creati r with he : relation“: 0; . conlllkl g. fifi9.‘~ . ckiuAILt: creating new relationships between them. The old model can be described as a wheel with the agency as a hub and each party communicating directly to it. The new set of relationships resembles a wheel with no hub and all the parties communicating directly with each other in a web of spokes. 
The agency takes the role of strategic thinker, facilitator, and knowledge resource (Shannon, 1992). The ability of citizens to understand and deliberate on technical issues is respected (Doble & Richardson, 1992) and the decision making process is no longer a black box.

Another trend utilizing more collaborative approaches is that of alternative conflict resolution. Codified in the Alternative Dispute Resolution Act of 1990 as an appropriate response to management conflicts, alternative conflict resolution focuses on using a variety of creative approaches outside of the courtroom to find solutions to conflicts between agencies and citizens as well as among citizens. Success often hinges on the ability of conflicting parties to understand opposing interests and so be willing to find compromises and creative solutions that address the underlying cause of the conflict. Although not practiced much by the Forest Service (Schumaker, O'Laughlin, & Freemuth, 1997), it has been used in a few cases (Wondolleck, 1988) and shows promise. These practices have also been extensively developed by the Army Corps of Engineers (Priscoli, 1990) and include negotiation, facilitated collaborative problem solving, mediation, conciliation, mini-trial, and arbitration.

In summary, the history of public participation in natural resource management shows an evolution in the roles and relationships of managers and citizens driven by the conflicting demands for neutrality and accountability. The first half of the century saw the rise of professional resource managers who were generally trusted by citizens to provide for the greater good of the citizenry. The agencies emphasized neutrality in decision making and kept an image of being distant from citizens. During the turbulent period of the 1960's and 1970's, citizens became disillusioned with professional management, began to mistrust government, and demanded and received greater direct participation in governmental decision making. The 1980s were a time when agencies grappled with and institutionalized citizen participation. The twin justice goals of neutrality and accountability were difficult to achieve simultaneously. For this reason participation often did not have effects on management decisions, frustrating citizen participants and failing to restore citizen trust in government. In the 1990s, new approaches have been tried which attempt to reconcile professionalism and citizen involvement by emphasizing collaboration. The hope is that by increasing agency responsiveness, the mistrust between managers and publics which arose in the 1960's and 1970's will be reduced.

One of the keys to understanding this history of citizen involvement in natural resource management is the concept of justice. Justice has been described in rather simple terms as pertaining to neutrality and accountability, an approach consistent with the political science literature. However, justice has also been explored from a psychological perspective. Social psychologists have conducted detailed investigations of why justice is important to people and how they assess it. Theories about the fairness of the process used to reach decisions as well as the decision outcomes have been developed and extensively tested. The next chapter outlines the psychological perspective on justice.

CHAPTER III: JUSTICE THEORIES AND PUBLIC PARTICIPATION

How do citizens evaluate decision making processes and outcomes?
The historical review in the previous chapter suggests that citizens in the United States expect their governments to be neutral and accountable when they make decisions. Neutrality and accountability are aspects of justice. It is no surprise that researchers and theorists have developed psychological theories explaining how and why people often search for fairness in decision outcomes as well as processes (Lind & Tyler, 1988). The theories propose extensive lists of fairness principles which derive from deep-seated needs for dignity and social status. In this chapter, psychological theories of justice are reviewed in order to understand more clearly how people form fairness perceptions, and why these perceptions are so important. These theories are then related to existing public participation conceptualizations. The claim is made that justice theory is a comprehensive approach which subsumes many of the key decision making prescriptions found in writings about public participation.

Theories of justice

How do people evaluate the fairness of a decision? Social psychological theories and research suggest that both the decision outcome and the process by which that decision is reached affect perceptions of fairness (Hegtvedt & Markovsky, 1995). Concerns related to decision outcomes have been termed distributive justice because they relate to how benefits and costs are distributed among people. An equivalent term would be outcome fairness. In this discussion, justice and fairness are used interchangeably. Concerns related to decision making processes have historically been termed procedural justice, interactional justice, and procedural fairness. The literature is mixed as to whether or not there is a need for a distinction between procedural and interactional justice (Folger, 1993; Lind & Tyler, 1988). For the purpose of providing an overview of the field, all three terms are taken as equivalent and used interchangeably.

Another key aspect of justice theories is that fairness is conceptualized as being perceptual and subjective. In other words, fairness is in the eye of the beholder. However, this does not mean that a person will always see something different from another person when they experience the same event. They may come to largely similar conclusions about the fairness of a situation because they share a common perspective, background and expectations. In fact, justice research suggests that there is a shared set of principles people use when judging the fairness of the process and outcome of decisions.

Early research on justice focused on decision outcomes. Three principles of distributive justice emerged: equity, equality and need (Hegtvedt, 1992). Equity, which received most of the research attention, is based on the concept that everyone should get rewards in proportion to their efforts or costs (Homans, 1961; Walster et al., 1973). In contrast, equality requires that everyone benefit equally regardless of costs or efforts. Finally, need requires that people receive benefits according to their needs, either because they are deficient relative to others or because they have a need for greater resources than others.
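The equity principle is often formalized as a comparison of outcome-to-input ratios across people. A minimal illustration, using notation that follows standard equity-theory treatments rather than the specific studies cited above, is:

$$\frac{O_A}{I_A} = \frac{O_B}{I_B}$$

where $O_A$ and $I_A$ are the outcomes (rewards) and inputs (efforts or costs) of person A, and $O_B$ and $I_B$ are those of a comparison person B. Distributions that satisfy this ratio are judged equitable. By contrast, the equality principle compares outcomes directly ($O_A = O_B$, ignoring inputs), and the need principle allocates outcomes in proportion to assessed need rather than to inputs.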
Thibaut and Walker’s theory and research emphasized direct participation in the decision and the opportunity to voice one’s opinion as key principles by which people judge the fairness of a process. Soon after, Leventhal (1976; Leventhal et. al., 1980) added to this by proposing six principles for procedural fairness: l) consistency over persons and across time, 2) suppression of personal self- interest (bias), 3) use of accurate information, 4) modifiability of decisions, 5) representativeness of the concerns of all recipients, and 6) adherence to prevailing ethical and moral standards (ethicality). Lind and Tyler (1988) developed a model which seeks to explain the underlying processes implied by these procedural justice principles. Their group value model suggests that people are concerned about their social standing in society and these principles give them information about how others evaluate them. Their model posits three basic components of procedural justice judgments: 1) neutrality, 2) trust in the benevolent intentions of decision makers, and 3) status recognition. Neutrality is the belief that decisions are based on a complete and accurate assessment of the facts and implies that over time everyone will receive fair outcomes, even though this may not be so in the short term. Intentions are important because, to the extent that authorities are benevolent, they can be trusted to make decisions that are fair. Status recognition involves treatment that enhances the person’s sense of dignity and reflects that he/she has good standing in the group. The group value model can incorporate many of Thibaut and Walker’s and Leventhal’s principles. Lind et al. (1997) found that much of the variance in procedural fairness attributed to voice was explained by group value variables, lending support to the 21 hypothesis that being able to express oneself gives one a sense of dignity. Leventhal’s principles of consistency across persons and time, bias suppression, accuracy, correctability, and representativeness all may contribute to the neutrality of the decision process because they help to insure that accurate and complete information is used. These principles also communicate status information. For example, a strong bias against a person suggests that the person has no status and that the authority does not have benevolent intentions. Folger (1993) developed an even more basic explanation for justice judgments. He integrated distributive and procedural judgments by positing that people are primarily concerned about being treated with dignity. Outcomes which meet distributive fairness principles or processes which meet procedural principles suggest that the people are being treated as ends in themselves, rather than as a means to something else. F olger suggests that seeing oneself as an end is central to a feeling of dignity. F olger’s attempt to integrate procedural and distributive concerns reflects a long- standing observation that the two interact. It has been shown repeatedly that when outcomes are favorable or fair, procedural justice has small or no effect on participant reactions. However, if outcomes are unfair, procedural justice works to make participant reactions more positive (Brockner & Siegel, 1996). An opposite phenomenon has also been demonstrated. When procedures are unfair, fair outcomes can make participant reactions more positive (Van den Bos et al., 1997). 
Fairness heuristic theory (Lind et al., 1993) can be used to explain the interactions between procedural and distributive justice. Initial fairness impressions set the stage for interpreting and evaluating subsequent experiences. They do so by leading to judgments of the benevolent intentions, neutrality, and respect for a person's status by the authority. These concerns could be summarized as the "trustworthiness of the authority." Procedural justice works by providing assurance that long-term outcomes will be just. Participants can trust that the process will eventually benefit them. High procedural fairness and high distributive fairness also show that authorities can be trusted to continue treating participants with dignity. This interpretation is supported by numerous studies showing how trust leads to greater procedural justice (e.g., Tyler, 1994) and how trust in authorities is a consequence of procedural justice (e.g., Tyler, 1990; Tyler & Degoey, 1995).

In summary, justice theories have proposed a wide set of fairness principles. Outcome fairness principles include equity, equality and need. Procedural fairness includes the importance of voice, broad representation, lack of bias, use of accurate information, and direct control over decision processes and outcomes. These principles are deeply important to people because they communicate the level of respect and social standing they are accorded by decision makers. This in turn affects their sense of self-worth and dignity. The deep-seated nature of these principles explains why fairness judgments can have large impacts on support for decisions and trust in decision makers.

Justice and public participation

As the preceding section explains, justice theories suggest a set of principles, grounded in basic human motivations, that can be used to evaluate a decision making process and outcome. However, justice theories have only recently been applied to natural resource decision making. In contrast, an extensive body of writing related to citizen involvement in natural resource decision making has developed over several decades. These writings have also proposed various criteria for evaluating how citizens are involved in decision making. The influence of these writings on natural resource decision making practice suggests that they be compared with justice theories to determine commonalities and differences.

Classically, citizen participation in decision making has been thought of in terms of power and in terms of democratic philosophy. Arnstein's (1977) "ladder of citizen participation" outlined eight rungs of citizen power, ranging from manipulation to consultation to citizen control. Another way to analyze these rungs is that, as power increases, citizens have more voice and more participation in decisions - the two procedural justice concerns documented by Thibaut and Walker (1975). In contrast to the power approach, Laird (1993) derives evaluative principles from democratic philosophies of direct participation and pluralism. He suggests that participants need to learn and become informed, participation should be as broad as possible, and decision making power should be shared. These last two are essentially identical with Leventhal et al.'s (1980) fairness principles of representativeness and participation in decisions. Another approach that is more explicitly congruent with justice theory is based on Jurgen Habermas' linguistic/philosophical theories for a fair and competent ideal speech situation (Renn et al., 1995).
A fair situation requires that anyone interested must be able to attend, initiate discourse, participate in discussion, and influence the collective consensus decision. A competent situation requires access to information and its interpretations, and the use of the best possible procedures for selecting knowledge.

As an exploration of Habermas' theory, Tuler and Webler (1999) and Webler and Tuler (forthcoming) conducted a qualitative study of participants' evaluations of the decision making process in a forest advisory council. Their results support Habermas' theory and the social psychological theory of justice presented here, suggesting that the theories are very similar. For example, ability to attend is related to the representativeness of participants involved. Ability to initiate discourse and to participate in discussion is another way to express the concept of voice. Influencing the collective decision relates to participation in the decision by having control over it. They also found some evaluative criteria not present in Habermas' theory, but which are suggestive of justice theories. For example, respect, open-mindedness, honesty, understanding, listening, and trust are suggestive of Lind and Tyler's (1988) group value theory because they are signs of good group standing and benevolent intentions.

In summary, it appears that many justice principles are implied by recent writing about public participation. An advantage of justice theory is that it has a strong empirical base. It already includes some of the principles which Webler and Tuler (forthcoming) demonstrated empirically, but could not fit neatly into Habermas' theory. Justice theory also adds new principles like ethicality and neutrality, making it a comprehensive framework within which public participation fits as a tool to help achieve perceived fairness. Lawrence et al.'s (1997) call for a focus on procedural justice when doing natural resource public participation supports this approach.

The purpose of this study is to examine justice theory in a natural resource decision making context to determine how the context shapes fairness judgments and how those judgments influence citizen trust in decision makers. In the next chapter a theoretical framework is proposed which argues that fairness increases citizen trust in the decision makers. The framework also identifies how fairness judgments may be affected by the decision making context. The history of public participation reviewed in Chapter 2 is used to create a set of factors which together define the decision making context. Social psychological literature in areas like conflict resolution and organizational culture is used to more clearly specify the nature of the bivariate relationships between each context factor and fairness. Finally, a set of research questions designed to test the theoretical framework is presented.

CHAPTER IV: A CONTEXTUAL THEORY OF PERCEIVED FAIRNESS IN NATURAL RESOURCE DECISION MAKING

The importance of fairness and the well-developed nature of justice theory suggest that fairness principles can be used to measure citizen evaluations of natural resource decisions and decision making. Given that citizens will perceive different levels of fairness, what are the reasons?
Many decision makers would like to know why certain decisions and decision making processes lead to positive citizen responses and others do not. One way to identify possible reasons is by examining the historical decision making record. Chapter 2 laid out a history of public participation in natural resource decision making. Close examination of this history allows the development of a set of factors which combine to describe the variety of decision making contexts in which citizens can be involved. Fairness principles can then be used to compare the various contexts to determine which are associated with higher levels of perceived fairness. Conclusions can be reached about which contextual factors have the greatest impact on perceived fairness and trust in decision makers.

In this chapter a theoretical framework of the influences on and consequences of perceived fairness in natural resource decision making is proposed. The influences on fairness are a set of context factors derived from the brief historical review in Chapter 2. The consequence of fairness is trust in decision makers. Once the theory has been explained, a set of specific research questions guiding the analysis is presented. The questions are designed to guide the testing of the contextual theory of fairness proposed in this chapter.

A theoretical framework for perceived fairness

The centerpiece of this study is a theoretical framework which specifies how context factors affect fairness and how fairness affects trust in decision makers. The theoretical framework has three main sections (Figure 1). The first part of the framework consists of the contextual variables derived from the review of the history of public participation in natural resource management. These factors, grouped into those related to the agency, the situation, and the citizens, all hold the potential to cause citizens to evaluate their participation experience as being more or less fair.

The second part of the framework consists of the ways citizens evaluate a decision making process. Justice theory suggests that fair outcomes and fair process are very important to citizens. Although process and outcome judgments are different, they may influence each other, particularly if the person only knows the outcome or only knows the process (Van den Bos et al., 1997). As a result, process and outcome fairness are usually positively related. The framework proposes that the decision making context will have a large influence on citizen evaluations of the fairness of their decision making experiences.

The third component of the framework is the consequence of the citizen evaluations. Decision makers would like to be trusted by citizens. Citizen trust is manifested in citizen support for decisions, citizen satisfaction with the agency making the decisions, and good relationships between citizens and decision makers. The literature review suggests that trust of decision makers increases when citizens evaluate decision making processes and decision outcomes as being fair.
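Stated compactly, the framework posits a chain running from the decision making context, through perceived fairness, to trust in decision makers. The sketch below is purely illustrative of how such a mediated chain can be expressed in regression terms; the file name and the variable names (prior_relationships, fair_process, trust) are hypothetical placeholders, not the measures or analyses reported later in this study.

    # Illustrative sketch only; file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    citizens = pd.read_csv("citizen_survey.csv")  # hypothetical respondent-level file

    # Total effect: a context factor predicting trust directly.
    total = smf.ols("trust ~ prior_relationships", data=citizens).fit()

    # Path to the proposed mediator: the same factor predicting perceived fairness.
    to_fairness = smf.ols("fair_process ~ prior_relationships", data=citizens).fit()

    # Direct effect: with fairness controlled, the context factor's coefficient
    # should shrink toward zero if fairness carries (mediates) its influence on trust.
    direct = smf.ols("trust ~ prior_relationships + fair_process", data=citizens).fit()

    print(total.params["prior_relationships"], direct.params["prior_relationships"])

A substantial drop from the first coefficient to the second, together with a reliable path from the context factor to fairness, is the classic regression signature of mediation.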
Figure 1. Overview of the theoretical framework of influences and consequences of perceived fairness. [The figure shows three linked boxes: (1) Decision Making Context: Influences on Citizen Evaluations, comprising agency factors (agency culture, level of government, agency resources), situation factors (participation technique, amount of conflict, equal power distribution, prior respectful relationships), and citizen factors (prior level of involvement in decision making, age, gender, education); (2) Citizen Evaluation of Experience (fair process, fair outcome); and (3) Consequence of Citizen Evaluations (trust in decision makers). Arrows run from the context to the evaluations and from the evaluations to trust.]

The framework assigns a central role to perceived fairness by hypothesizing that the influences of context factors on trust are mediated by fairness. Mediation means that the influence of one variable on another variable is transmitted through an intervening variable. The mediating role of fairness in this model is based on the idea that the context factors are related to a specific experience. Citizens then evaluate that specific experience in terms of fairness. The fairness of that particular experience may then change their beliefs about the decision makers.

Fairness and the decision making context

The framework described above and depicted in Figure 1 includes a broad array of context factors which are predicted to affect fairness. In this section the inclusion of each context variable is justified in terms of the history of natural resource management presented in Chapter 2 and social psychological literature. The context factors can be divided into three main types: the agency making the decisions, the situation in which decisions are made, and the citizens evaluating the decisions.

Agency context factors

The agency making the decisions has the primary control over the process used to involve citizens. The agency also hires certain types of employees and rewards certain types of employee behaviors and attitudes. Agencies create their own policies, have to operate within legislative boundaries, are given authority only over certain types of issues, and deal with a particular set of constituencies. Different natural resource agencies would vary on these and other dimensions. In this study, the three particular agency dimensions explored are the level of government, the agency culture, and the resources available for public participation.

Level of government: Natural resource agencies can exist at local, state, or federal levels. Level of government may be important because the geographic scope of affected constituencies differs dramatically among various levels. Federal agencies are accountable to the nation as a whole and have to balance local, state, and federal constituencies. State agencies are accountable to the state and its population. Local government is only directly accountable to local people. This suggests that notification and adequate involvement should be easier at the local level. Decisions may also be more closely tailored to citizen desires because there may be less diversity of opinion. It may also be more convenient and less intimidating for citizens to be involved at a local level.
Because notification and involvement are related to the fairness principle of representation, and because tailoring decisions to citizen needs is related to principles of influence and fair outcomes, perceived fairness may be higher at local levels. However, local planning commission decisions affect neighborhood characteristics which may invoke deeper commitments from citizens than would issues of forest management in a distant forest. The decision of whether or not to build a Walmart next door has a huge impact on the day-to-day lives of nearby residents. In contrast to the previous arguments, this suggest that citizens at the state and federal level may be less critical and have higher fairness perceptions. The final aspect of level of government is the legislative environment. The history outlines that federal regulations required both local and federal agencies to involve citizens. However, these regulations did not affect states. Thus one would expect less effort to be responsive to citizens at the state level and therefore more negative citizen evaluations. In conclusion, although it is difficult to predict which level of government will been seen as the most fair, there are likely to be some effects of level of government. Agency culture: The second agency factor, agency culture, is the set of beliefs, attitudes, and behaviors of the people in the agency making the decisions (I-Iofstede, 1998). In preliminary exploratory conversations for this study, agency personnel suggested that the comprehensiveness of public participation varied widely based on the 31 personal beliefs of the person in charge. This could potentially be extended to the agency as a whole. Agencies may differ in terms of how decisions are made, and therefore in terms of citizen-perceived fairness, because their membership collectively has different beliefs than the membership of other organizations. Kweit and Kweit (1981) specify the aspects of bureaucratic culture that may have an impact on public participation and thus on the fairness of decision making. These include an emphasis on expertise, regularity (use of rules, hierarchical authority, consistency), efficiency, and organizational self—maintenance. Authors writing on public participation have also long suggested that beliefs about the importance of expertise may interfere with performance of public participation by downgrading the value of citizen comments and enhancing the importance “of the preferences of “neutral agency professionals”. These considerations suggest that agencies with cultures emphasizing neutrality at the expense of accountability to citizens will be judged by citizens to be less fair. In addition, agencies may have cultural beliefs about the importance of fairness in decision making and this may also affect citizen evaluations. Specifically, when an agency’s culture emphasizes expertise, downgrades citizen knowledge and motivations, and places less importance on fairness, citizens will evaluate their experience as having an unfair process and outcome. Amount of participation resources: The final agency factor is the presence of resources for participation. The amount of personnel, money, and training available for public participation could limit the quality and quantity of citizen involvement. Thus, greater amounts of staff, money, and training resources for public participation should cause citizens to evaluate their experience as having fair process and outcome. 
Situation context factors

The second set of contextual factors emerging from the historical review describes the situation in which the decision is made. How did people interact at the particular meeting evaluated by the citizen? Was there a great deal of conflict? Were there large differences in financial resources or expertise among the citizens? What kind of participation technique was used? Within any particular agency, there may be a wide variety of decision making episodes, and differences among them would impact citizen experiences.

Participation technique: The first situational factor emerging from the historical review is the type of participation technique used. In the historical review, a picture is painted of participation in which more formal, less interactive forums, such as letters and public hearings, were replaced more recently by discussion oriented, collaborative efforts. While this general trend does exist, the reality is that discussion-based advisory groups have been used for many years and less interactive forms like letters are still used effectively. Because a given agency often uses several techniques, citizens may be exposed to a variety of involvement methods. It is important to evaluate participation technique because many practitioners want to know if certain types of techniques work better. It is critical to find out if there is any kind of relationship between technique and perceived fairness. One of the key findings from previous focus group research (Smith & McDonough, 2001) was that principles of fairness were equally important across all techniques. However, citizens and agency employees preferred methods which involved repeated personal interactions.

Techniques commonly used in natural resource decision making include written letters of comment, one-on-one interactions, formal public hearings, discussion oriented meetings, and on-going advisory boards. These can be arranged in a hierarchy from no or low levels of discussion and personal interaction to high levels of repeated interactions over many years. The finding that citizens prefer discussion based methods suggests that participation techniques having more personal interaction and discussion will lead to citizens evaluating their experience as having a fairer process and outcome.

Amount of conflict: The situation may also vary in terms of the amount of conflict. During the 20th century, conflict over resource use was a driving force in the institutionalization of systematic public participation. The need to successfully resolve that conflict was one reason new collaborative decision making techniques were developed (Syme & Eaton, 1989). Because of differences in the participants and the issues, the intensity of conflict may vary. A conflict in which people hold onto their positions deeply, express strong emotion, and hold positions which are highly incompatible with those of other people is harder to resolve (Floyd, Germain, & Horst, 1996). It should also be harder to achieve an outcome all participants see as being fair because their positions are more divergent and are held onto more strongly. Identifying compromise settlements which everyone can accept as fair would be harder. People intensely involved in an issue may also have more trouble seriously considering the views of others, threatening the sense of procedural justice.
This suggests that decision contexts characterized by a high level of conflict will lead to citizens evaluating their experience as having an unfair process and outcome.

Prior respectful relationships: It seems reasonable to suggest that factors which have been shown to affect the resolution of conflict may also affect perceptions of fairness by participants. A factor which makes conflict resolution easier is the presence of prior respectful relationships among participants. Pruitt (1998) reports the results of many studies on social dilemmas which demonstrate that parties who have prior positive relationships are more likely to successfully resolve the dilemma with cooperative strategies. The use of cooperative strategies should lead to careful consideration of each party's interests and thus procedural justice. It might also lead to fairer outcomes as participants cooperate in their search for solutions that affect everyone equally. In addition, as justice research has shown, fairness is symbolic of respect and social standing. Thus if people have a good relationship, they are more likely to feel respect towards others and treat them in ways which demonstrate that respect, in other words, fairly. This suggests that a situation in which many citizens know each other and know the decision makers may be evaluated as having fair process and outcome.

Prior equal power among citizens: The fourth aspect of the situation which may affect fairness evaluations is the extent of power equality among citizens. Power inequalities among citizens are a direct threat to the fairness principle of neutrality because they could lead to suspicions that people are not treated equally. They may also lead to the impression that one's comments are not being seriously considered, violating the principle of influence. Finally, power inequalities may lead to the judgment that decision outcomes favor one person over another simply because of the person's greater resources, threatening the sense of outcome fairness. These predictions are consistent with an experimental study which found that procedural justice was most important to people when they were in an unequal power situation (Barrett-Howard & Tyler, 1986). The authors inferred that the experimental participants were confident that equal power leads to fair outcomes, but in an unequal power situation they needed additional assurances that the decision making process was a fair one and did not reflect any bias towards those with more power. It can be predicted that decision contexts in which power is more equally shared among participants will lead to citizens evaluating their experience as having a fairer process and outcome.

Citizen context factors

The final set of contextual factors describes the characteristics of the citizens making the judgments. It is important to remember that fairness judgments are ultimately individual perceptions. It may very well be that individual characteristics like gender, education, and prior experiences with citizen involvement will impact the experiences the citizen has and the way those experiences are interpreted.

Age, gender, education: Previous research found that women and persons with less formal education felt less control over the political arena and participated less in governmental decision making (Smith & Propst, 2001). If one feels less control, this may translate to mean that one feels decision makers are not responsive.
Because responsiveness is an aspect of the justice principle of influence, women and less educated people may perceive a more unfair process. A lack of responsiveness may also lead to decision outcomes which do not treat the citizen equally, violating outcome fairness. This suggests that demographic attributes like gender, age, and education may explain variation in perceptions of outcome and process fairness.

Prior involvement in decision making: Another important participant characteristic is the general level of involvement in the political process. General level of involvement in politics is important because it is correlated with one's general attitudes towards government and therefore towards judgments of specific experiences. People who are more involved are more likely to believe government is trustworthy (Finkel, 1985). A lack of trust can also lead to disengagement from the political process. A citizen having greater trust in the government is less likely to interpret an unsatisfactory outcome as being caused by lack of fairness. It can be predicted that people with limited prior participation experience will evaluate their experience as having an unfair process and outcome.

Elaboration of the framework

All of the predicted relationships among variables in the preceding sections are part of the theoretical framework. In Figure 2, the predicted directions of relationships between variables in the framework are specified. As explained above, it was possible to classify most characteristics of the context as either increasing or decreasing perceptions of fairness. One exception was level of government, for which some aspects predicted greater fairness at local levels and others predicted greater fairness at state or federal levels. The other exception was age, because the literature did not suggest age-related trends in fairness perceptions.

Figure 2. Predicted directions of influence between variables in the theoretical framework of perceived fairness. [The figure repeats the three boxes of Figure 1 and marks each context factor as a positive, negative, or unknown influence on the fairness evaluations: agency factors (an agency culture which believes in fairness and citizen participation, level of government, greater perceived agency resources for participation), situation factors (a participation technique which emphasizes discussion, greater level of conflict, equal power distribution, prior respectful relationships between citizens and between citizens and decision makers), and citizen factors (high level of involvement in decision making, older, female, higher education). Fair process and fair outcome are in turn shown as positive influences on trust in decision makers.]

Although not predicted by the framework, and therefore not specified in Figure 2, context factors may also affect each other. For example, it may be that when power equality is low, conflict strongly reduces fairness. However, when power equality is high, conflict may reduce fairness to a lesser degree. This might occur because when there are strong differences of opinion and power is very unequal, the decisions may be in favor of the powerful. But when power is equal, conflict leads to more equitable outcomes and people perceive greater fairness. While this framework does not try to predict interactions, it acknowledges that they may exist.
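Interactions of this kind can be examined directly once respondent-level data are available. The sketch below is only an illustration of the general approach, not the analysis reported in the results chapters; the file name and the columns conflict, power_equality, and fair_process are hypothetical placeholders for the measures described later.

    # Illustrative sketch; file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    citizens = pd.read_csv("citizen_survey.csv")

    # The '*' term expands to both main effects plus their product. A significant
    # product term would mean the effect of conflict on perceived process fairness
    # depends on how equally power is distributed among participants.
    model = smf.ols("fair_process ~ conflict * power_equality", data=citizens).fit()
    print(model.summary())

Under the pattern described above, the conflict coefficient would be negative and the product term positive, indicating that equal power softens the damage conflict does to perceived fairness.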
In the results chapters of this study, once the simpler bivariate relationships presented above are tested, multivariate approaches are used to explore complex patterns of influence between context variables.

Research questions

Up to this point, the theoretical framework has been discussed without explicit reference to whether the relationships among variables are psychologically based and therefore can only be observed at the individual level, or if they also have a social component and exist at the agency level. For example, do the data support the prediction that agencies with a higher percentage of female citizen participants receive lower fairness evaluations? If connections among variables can be drawn at the agency level, there may be agency level phenomena driving the relationships. For example, the agency may have policies which affect how women are treated, causing an observable relationship between gender and perceived fairness at the agency level. In addition to, or instead of, agency patterns there may be observable relationships between gender and perceived fairness at the individual level. In other words, women may have experiences or perspectives which lead them to be more critical no matter which agency is making the decision. Finally, levels can be combined to see if there are patterns at the agency level after controlling for any individual level relationships among variables.

The set of research questions is organized to examine the framework at different possible levels. The first questions examine the agency level.

1. How and why do agencies differ in terms of context, fairness, and trust variables?

2. At the agency level, are the directions of influence among context, fairness, and consequence variables consistent with the theoretical framework?

Answering question 1 by looking at differences among agencies allows assessment of the impact of agency histories, legislative environments, and particular circumstances on context variables. It is also helpful to first identify if agencies do in fact differ before examining if those differences can be explained by the theoretical framework. Question 2 examines the patterns of agency differences to see if they conform to the predictions of the theoretical framework.

In addition to the agency level, the framework may also apply at the individual level.

3. At the citizen level, do context factors related to the situation and the citizens influence perceived fairness in the predicted directions?

4. At the citizen level, does perceived fairness increase trust in the decision makers?

Research questions 3 and 4 focus on the situation and citizen variables because these were measured at the individual level. Question 3 examines the relationships between context and fairness variables to see if they match the predictions in Figure 2. Question 4 concerns the relationship between fairness and trust in decision makers that is also specified in Figure 2.

The final research questions address the combination of individual and agency levels.

5. Does fairness mediate the effect of context factors on trust?

6. Which context factors explain the most variation in perceived fairness?

7. Does the level of one context factor affect how other context factors influence fairness?

8. Can the differences between agencies in terms of fairness perceptions be explained by individual and situation factors, or is there agency level variation in fairness that can be explained by agency level factors? (One general way to approach this question is sketched below.)
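A common way to separate individual and agency level variation, offered here only as a rough sketch and not as the analysis used in later chapters, is a random intercept model in which individual-level predictors enter as fixed effects and agencies define the groups. The file and column names below, including the agency grouping column, are hypothetical placeholders.

    # Illustrative sketch; file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    citizens = pd.read_csv("citizen_survey.csv")

    # Fixed effects capture individual and situation factors; the random intercept
    # ("Group Var" in the output) captures whatever agency-to-agency differences
    # in perceived fairness remain after those factors are controlled.
    model = smf.mixedlm(
        "fair_process ~ conflict + power_equality + education + age",
        data=citizens,
        groups="agency",
    ).fit()
    print(model.summary())

If substantial group variance remains, agency level explanations such as employee culture become plausible; if it is near zero, individual and situation factors account for the agency differences.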
40 Questions 5 to 8 address the theoretical framework in a manner most closely resembling the complex real world by combining variables across both the agency and individual level. Question 5 concerns the central idea of the framework, which is the mediation role of fairness. Mediation means that the effect of one variable on another is transmitted through an intervening, or mediating, variable. In the proposed framework this means that context affects fairness and fairness affects consequences like trust in decision makers. Question 6 elaborates the framework by identifying the context factors which cause the largest changes in fairness judgements. Knowing this may help managers develop strategies to achieve greater fairness. Question 7 further elaborates the framework by looking at whether the value of one variable causes a change in the effect of another variable on fairness. This means that although a variable may not have a significant influence by itself, it does have influence in combination with another factor. Finally, question 8 seeks to separate out the individual and agency level effects on fairness. Can differences among agencies be explained by the situation and citizens involved, or do agencies differ in ways that can only be explained by agency level variables like employee culture? Answering these research questions is very important because they address the problem of how to build relationships of trust between citizens and natural resource decision makers. Better relationships are necessary for the citizen-agency partnerships needed to address increasingly complex management issues. The first step to answering the questions is to collect the needed data. The next two chapters explain the methods used to measure the variables and test their relationships. 41 CHAPTER V: RESEARCH METHODS 1: CASES, SAMPLING, INSTRUMENTS, SURVEY ADMINISTRATION, NON-RESPONSE BIAS AND ANALYSIS The previous chapter outlined a theoretical framework specifying how contextual factors may affect citizen perceptions of procedural and distributive justice, and ultimately, citizen trust in decision makers. In this study, the relationships in the framework were tested with data collected through questionnaires mailed to citizens and decision makers involved with a variety of agencies. This chapter first describes how the cases were chosen, indicating how many respondents were in each case. Then the creation of the mailing lists and the resulting sampling frame is described. The instruments and variable operationalizations are given, followed by survey administration procedures, response rates, estimation of non-response bias, data entry and analysis. 77w cases The basic methodology involved comparing cases. A comparative approach was used because variation in context factors was needed to test the theory. An experimental design would have required manipulation of the context, which is impractical when one is studying real decision making. A random population survey also would not have worked because only a small proportion of citizens have actually participated in natural resource decision making. By contrast, choosing specific cases because they differ on variables important to the study allowed a more targeted, smaller sample. The comparative case approach also facilitated consideration of qualitative differences among agencies, such as their historical circumstances, that might explain differences in citizen evaluations. In this study, each case was a different agency. 
Specific agencies were chosen for a variety of reasons. The first was that they were related to natural resource management 42 and preferably to forest management. The second was that they had ongoing citizen participation programs with enough citizens involved in the past two years (at least 100) for statistical testing. Sufficient documentation had to be available to allow the creation of a mailing list of citizen participants that included close to 100% of the citizens involved with the agency. Because the funding was from a Michigan Agricultural Experiment Station grant, agencies had to be in Michigan. Finally, because one of the hypotheses was that level of government would affect citizen experiences, the set of agencies had to include federal, state, and local level agencies. According to these criteria, the Huron—Manistee National Forest of the USDA Forest Service was chosen to represent the federal level, the Forest Management Division of the Michigan Department of Natural Resources was chosen for the state level, and Delta, Delhi, and Monitor Township planning commissions were chosen for the local level. Although the planning commissions did not deal directly with forest management, they were chosen because they were partly rural townships experiencing rapid development. These commissions were often grappling with the decision of converting farmlands and forests into new developments — a natural resource management issue. Sampling frame For each of the agencies above, written questionnaires were mailed to all citizens recently involved for whom an address could be found. In addition, a written questionnaire was also mailed to all employees of the Huron-Manistee National Forest and the MDNR Forest Management Division, as well as the township planning commissioners. All citizens and employees received questionnaires to make sure that there were enough respondents from each agency (at least 50) for accurate statistics. 43 The initial mailing to citizens involved with the Huron-Manistee National Forest was sent to everyone on a Huron-Manistee Forest Plan Revision mailing list. Although the list was being used for mailings related to revising the Forest’s plan, it had been built up over several years and included anyone who had contact with the agency for any reason. It included individuals who had sent in letters, gone to hearings or meetings, or requested information. It also included members of a long-running advisory group, the Friends of the Forest. There were also many citizens, such as local government officials, who had been put on the list by the Forest Service as a matter of policy, not because those individuals had ever had contact with the Huron-Manistee agency. There were 780 people on the Huron-Manistee National Forest mailing list, which was reduced to 715 after correcting for envelopes returned because of a bad address or because the person was deceased. The initial questionnaire about the Forest Plan Revision asked if the respondent had ever been to a Friends of the Forest meeting and if he/she would be willing to return a follow-up about the Friends of the Forest. Out of 94 people indicating attendance, 56 agreed to do a follow-up and were also sent the Friends of the Forest questionnaire. The mailing to citizens involved with the Michigan DNR was done from a list compiled by the researcher. 
MDNR management units throughout the state were contacted to obtain names and addresses of individuals who had participated in any way with the Forest Management Division in the past 5 years. Most records came from meeting sign-up sheets that were sometimes illegible and often had minimal addresses. Using Internet searches, complete addresses were obtained for most of the names, leading to a mailing of 245 questionnaires. Correcting for bad addresses and deceased 44 respondents led to 215 deliverable questionnaires. In addition, a list from the MDNR of people who had attended the Pigeon River Advisory Council was obtained. The list had 50 people and this was adjusted to 47 deliverable questionnaires. The mailing to citizens involved with planning commissions was done from lists compiled by the researcher. Planning offices for Delhi, Monitor, and Delta townships were visited. Names and addresses were gathered from meeting minutes and sign-up sheets going back a period of two years. The Internet was used to find complete addresses where necessary, leading to a mailing of 253 questionnaires to Delhi, 142 to Monitor, and 184 to Delta. Correcting for bad addresses and deceased respondents led to 220 deliverable questionnaires for Delhi, 122 for Monitor, and 172 for Delta. Mailings to decision makers were based on organizational directories provided to the researcher. All 314 MDNR Forest Management Division employees and 139 Huron- Manistee National Forest employees received the questionnaire. A filter question at the beginning allowed them to return the survey unanswered if they had no interactions with citizens during the course of their job. Adjusting for this filtering led to 297 MDNR and 126 Huron-Manistee employees who were potential respondents. Surveys were also mailed to all current and recent (within the past two years) planning commissioners. Recent commissioners were included because many of the citizen respondents were involved up to two years earlier so they would have interacted with those commissioners. A total of 14 were sent to Delhi, 7 to Monitor, and 11 to Delta. All of these addresses were correct. 45 The instruments There were two instruments: one for citizens and one for decision makers. Instruments were pre-tested by administering them to people similar to the intended respondents of the final version. The citizen pre-test instrument was mailed to a list of 100 citizens taken from sign-up sheets of the Meridian Township Planning Commission. After a reminder postcard, half were returned. The agency instrument was distributed to 20 headquarters employees of the Michigan Department of Environmental Quality (MDEQ). The WEQ was chosen because it was recently separated from the MDNR and so would have similar culture and types of employees. Twelve were returned. Patterns of missing data, correlations among items, and respondent comments in the margin were used to rewrite several questions. On the citizen questionnaire pre-test, a long list of items about different topics was combined together under the general heading , “Your opinions about the Planning Commission process.” All items had a five point disagree-agree response scale. In the revision, this list was broken up into different topics (e. g. decision making process, conflict, power) sometimes with different response scales. This was done to make it easier for the respondents to deal with the long length. On the employee survey, the needed revisions were fewer. 
The main comment from many pretest respondents was that it was too long. For this reason, some items were deleted from the section on bureaucratic culture. Deleted items were ones which respondents thought were unclear or redundant. Once pre-testing was completed, five versions of the citizen instrument were created with slight modifications to make them agency specific. In addition, a special 46 version was created for the Friends of the Forest of the Huron-Manistee. This was a follow-up so it did not need to contain the demographics and background information already asked on the first Huron-Manistee questionnaire. A special version was also created for the Pigeon River Council Advisory Board of the Michigan DNR. This was done because many of the council members may have been involved with the DNR in other ways and so would have received the general DNR questionnaire. A special Pigeon River questionnaire made sure that they also commented about the Pigeon River Council separately. Both of these special versions were created because they pertained to unique, long running participation efforts conducted by the agencies which deserved special attention. However, all seven versions were kept as similar as possible so answers could be directly compared. Because there were several versions mailed out, and because some citizens were involved with more than one agency, some people received multiple questionnaires. In the analysis these were treated as coming from separate respondents. Five versions of the decision maker instrument were also made, one for each agency. An example of a citizen and a decision maker instrument are in Appendices A and B. On each questionnaire, citizen respondents were asked to focus their answers to the questions on their most recent participation experience with the agency. Because the questions were quite detailed, they required a clear memory of the experience and the most recent experience was likely to be remembered most accurately. In addition, by choosing an experience based on recency, possible biases resulting from remembering particularly bad or good experiences would be avoided. The intention was that across the many respondents who had participated at different times and in different ways, the 47 proportion of good to bad experiences obtained would be representative of the agency’s overall performance. Context variables The variables measured in the questionnaires are listed in Table 1. Most variables were calculated by adding up each respondent’s scores on a number of items and then dividing by the number of items. The resulting average score contained less random variation and was thus a more reliable measure of the underlying construct. The first set of context variables measured agency culture and were measured on the agency questionnaires. Five scales were created for agency culture. The first three scales were based on theoretical writing by Kweit and Kweit (1981). The first scale assessed employee perceptions of the beliefs of most others in their unit about the level of citizen knowledge of decision making topics. The second measured respondent perceptions of the beliefs of most others in their unit about whether citizens were motivated by self-interest and had short-term views. The third scale sought to measure the relative importance accorded to expertise versus citizen participation. All these items had to be newly created because there was no previous quantitative research on the tOpic. 
Respondents were asked about the beliefs of most others in their unit because this was an attempt to measure agency culture, which is shared beliefs. The fourth and fifth agency culture scales dealt with justice. Process and outcome fairness items were copied from the citizen instrument for use on the agency survey. Employees were first asked to rate how important each item was to themselves and then were asked to rate "how often you and other employees in your unit actually achieve them." These two measures reflect a distinction between cultural values and practices. Hofstede (1998) found that these two ways of measuring culture achieved very different results. Values tended to be individually held and so did not predict organizational variables, but practices were very culturally consistent across organizational members.

Table 1. Variables and how they were measured.

Context variables - agency factors
  Agency culture (agency survey)
    Beliefs about citizens - citizen knowledge: 2 items
    Beliefs about citizens - citizen self-interest/short-term view: 2 items
    Importance of expertise versus participation: 11 items
    Importance of fairness principles: 21 items
    Performance of fairness principles: 21 items
  Level of government (assigned): federal, state, local
  Amount of agency participation resources (agency survey): 3 items

Context variables - situation factors
  Participation technique (citizen survey): 1 item
  Amount of conflict (citizen survey): 4 items
  Prior equal power distribution (citizen survey): 5 items
  Prior respectful relationships (citizen survey): 6 items

Context variables - citizen factors
  Level of respondent involvement in decision making (citizen survey)
    Total involvement across all agencies: 11 items*
    Involvement in the agency being evaluated: 6 items*
  Respondent age, sex, education (citizen survey): 1 item each

Evaluation of experience variables
  Fair process (citizen survey): 16 items
  Fair outcome (citizen survey): 5 items

Consequence of evaluation variables
  Trust in decision makers (citizen survey): 6 items

Chapter VI lists the questionnaire items and measurement scales used for each variable. Variable values are from item score averages except *, which were assessed as the percentage of total items checked.

The next context variables were level of government of the agency and agency resources for participation. Level of government was assigned by the researcher. Agency resources were assessed on the agency survey by asking employees how much money, staff, and training were available for public participation. Assessing resources in this perceptual way was consistent with the other perceptual measures on the surveys, but it may not be entirely consistent with the actual resources available.

Another context variable was the participation technique used. On the citizen survey, participants were asked to focus their answers on a particular participation experience. This was done because the theoretical framework specified that evaluations of a particular experience affected global levels of trust in decision makers. Testing the theory required that questions be answered in terms of a specific experience. For the specific experience, citizens were asked how they had participated. Possible ways to participate included receiving mail, sending written comment, having a one-on-one interaction, attending a hearing, attending a meeting, and participating in a number of agency specific opportunities.
These possibilities were classified into six techniques ordered from least to most interpersonal interaction: mail/non-specified, written, one-on- one, hearing, meeting, and advisory board. Respondents had the option of saying they had participated in a particular planning project and these projects had to be coded into techniques. The specific projects 50 for the MDNR included the Presque Isle Management Plan, the Menominee River Management Plan, and the Lake Superior Forest Pilot Project. These projects consisted of three to six meetings and were classified as “meeting”. The MDNR also allowed people to participate in open houses and compartment reviews. This opportunity was classified as a hearing because it was a one-time event and concerned discussion of very specific decisions, not general policy. The MDNR also had two advisory boards (Pere Marquette Friends of the Forest, Pigeon River Country Advisory Council) and the Huron- Manistee had one advisory board (the Friends of the Forest). Finally, in addition to their regular hearings, Monitor township held a Visioning workshop which was classified as a meeting because it was very discussion-based and focused on more general, policy-level issues. The next variable was the amount of conflict. Although the importance of this variable is well supported in the literature, there were no operationalizations that could be applied to the wide variety of cases included in this study. Therefore a new set of items was created that measured both the intensity of emotion and the incompatibility of interests. Other context variables were the existence of prior equal power among citizens and prior relationships of respect among citizens and between citizens and decision makers. New scales were created for both variables. Relationships items asked how many people were present in the participation experience and with how many of them they had prior respectful relationships. Power equality was measured with an item about overall power distribution supplemented by items based on theories of capital (Coleman, 1988) which assessed access to knowledge (human capital) and financial resources 51 (financial capital). Distribution of these types of capital is important because they are sources of power. The final context variables concerned the characteristics of the citizens participating. There were two measures of citizen involvement in governmental decision making. Previous research (Smith & Propst, 2001) developed a general scale for measuring total participation in political and natural resource decisions. This was replicated here to allow comparison across cases. Another scale measured total participation within each agency compared to the possible involvement in the agency. This gave a percentage score of agency involvement which could be compared across cases. Finally, standard demographic items on age, gender, race/ethnicity, and education were asked (Smith & Propst, 2001). Evaluation of experience variables Most of the fair process items were modified versions of those in Lauber and Knuth (1998). Other standard fair process items modified fi'om Tyler (1994) were politeness and honesty of authorities, consideration of citizen comments, and overall fairness of the process. New items on influence over the outcomes as well as over the process were created for the fair process scale because they reflected principles of control (Thibaut & Walker, 1975). 
Additional procedural justice items, based on findings from previous focus group research (Smith & McDonough, 2001), included convenience of attendance, involvement of local people, notification, and question answering. Fair outcome items were modified from Tyler (1994). New items about fair outcome concerns of equity, equality and need were also created.

Evaluation consequence variable

A scale measuring citizen trust in decision makers was created by combining four types of items. Citizen trust in the agency's decision making and citizen satisfaction with the agency's job performance were assessed with questions modified from Tyler and Degoey (1995). Citizen support for the decisions made in the particular experience being evaluated was measured with items modified from Tyler (1994). An item assessing the improvement in citizen-decision maker relationships was created for this survey. These four types of items were used in combination to measure citizen trust in decision makers.

Survey administration and response rates

Both the citizen and decision maker instruments were administered in the same way. All surveys were stamped on the back with a number and then sent with a cover letter. The stamped number was used to track who had and who had not responded in order to facilitate reminders. Only the researcher had access to this information. Respondents were told that following the completion of data analysis, the connection between the list of names and survey numbers would be deleted. Questionnaires were designed so they could be folded in half and mailed back postage prepaid without an envelope. After two weeks, reminder postcards were sent to people who had not returned the survey. After another two weeks, a second copy of the survey was sent to those who had still not responded. Starting two weeks after that, approximately 20% of the nonrespondents for each mailing were randomly chosen and an attempt was made to contact them by phone. If a phone number could be obtained for either home or work, phone contact was attempted repeatedly (sometimes up to 10 times) for a month. The number of successful contacts is given in Tables 2 and 3.

[Table 2 reports responses to the mailed and phone surveys for each agency and involvement type, by mailing round, together with the number deliverable, percent deliverable, and percent response, and notes explaining how the percentages were calculated. The table is not legible in the source scan.]
Table 3. Responses to the mailed and phone follow-up surveys for the decision maker (agency) sample, by case. [Table not legible in this copy.]

Table 5. Comparison of variable means between respondents and non-respondents to the citizen survey. [Table not legible in this copy.]

Non-respondent involvement in politics and natural resource decision making in general was not significantly different from that of respondents. The percentage of people who were female was significantly lower for non-respondents. This particular result could be confirmed for the entire non-respondent population because the surveys were numbered and could be matched to names to see who did not respond. This analysis showed that the respondent population was 21% female and the total non-respondent population was 17% female, which is higher than the 9% estimate obtained from the round 4 and phone survey respondents. Thus gender bias is likely not a concern. Finally, non-respondent evaluations of the fairness of the outcome of their decision making experience were mixed (Table 5).
Of the two items measuring outcome fairness, one showed significantly more positive evaluations for non-respondents, but the other showed no significant difference between respondents and non-respondents.

These results suggest that non-respondents are basically the same as respondents except that they are less involved. Less involvement means the survey will be less salient, and salience is usually one of the strongest indicators of whether a survey is returned. This survey was also very detailed, requiring a certain level of involvement to answer all the questions. In fact, the issue of how much detail to ask, and whether or not those who had little involvement should be filtered out, was a difficult design decision. Ultimately, even those with little involvement were included so as to find out what they did know. This undoubtedly lowered the response rate.

However, for the present study, the differences cited above may not be relevant. The heart of the analysis presented in Chapters 8 and 9 rests on correlations among variables, not their absolute values. Are there any differences between respondents and non-respondents in terms of relationships among variables? The non-respondents can be represented by those people who either answered questions over the phone or sent in a survey after being called. Bivariate correlations among the variables using the non-respondent data suggest there were minimal differences between respondents and non-respondents. Twenty significant correlations were found in the non-respondent data, all of which were also found in the larger respondent data set and all of which were in the same direction as those in the respondent data set.

In conclusion, although non-respondents tended to be less involved in the agency they evaluated, the patterns of how they answered questions were the same as those of respondents. For the present study there is no evidence of meaningful non-response bias, and the conclusions reached apply to everyone who was on the original mailing lists.

Agency survey: Non-respondents to the agency survey were also contacted by phone and asked why they had not returned the survey. The most commonly cited reasons were that they were too busy or not interested in filling out the survey. Some also stated that they did not have contact with citizens as part of their jobs (Table 6). These results suggest that non-respondents were not systematically different from respondents on the variables measured. Non-respondents were also asked if they would answer a short version of the survey over the phone or send in the complete questionnaire. The short version (Appendix D) had questions asking about their experiences with citizen participation, their beliefs about bureaucratic and expertise-based values, and their age, gender, race, education, professional association memberships, and tenure in the organization.

Table 6. Agency reasons for not returning the survey (number of mentions for each reason is not legible in this copy).

Reason
Not interested / survey not a priority
Too busy
Intended to, but haven't yet
Never received survey
Questions too vague
Do not have contact with citizens

Respondents often mentioned more than one reason.

When respondents and non-respondents were compared, there were only two significant differences, both showing that non-respondents had worked fewer years in their agency (Table 7). Another, non-significant trend confirmed the qualitative result that non-respondents had fewer experiences with citizen participation.

Table 7. Comparison of variable means between respondents and non-respondents to the agency survey. [Table not legible in this copy.]
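To make the kind of respondent/non-respondent comparison reported in Tables 5 and 7 concrete, a minimal sketch is given below. It is illustrative only: the study's analyses were run in SPSS, and the file name, column names, and respondent flag are assumptions rather than the actual data set.

```python
# Illustrative sketch: compare mailed-survey respondents with follow-up (phone
# or late) respondents on several variables using Welch t-tests, which do not
# assume equal variances in the two groups.  All names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("citizen_survey.csv")        # hypothetical data file
is_respondent = df["respondent"] == 1         # 1 = returned the mailed survey

for var in ["agency_involvement", "general_involvement", "fair_outcome"]:
    resp = df.loc[is_respondent, var].dropna()
    nonresp = df.loc[~is_respondent, var].dropna()
    t, p = stats.ttest_ind(resp, nonresp, equal_var=False)
    print(f"{var}: respondents {resp.mean():.2f} vs "
          f"non-respondents {nonresp.mean():.2f}  (t = {t:.2f}, p = {p:.3f})")
```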
How might these differences between respondents and non-respondents affect the present study? In this study, only the agency culture variables are used. Examination of correlations from the respondent data set on cultural variables suggested that employees with less participatory experience place less importance on fair process (r = 0.166, p = .005). However, correlations also showed that employees who had shorter work tenures in the agency tended to believe citizens possessed the knowledge and abilities to participate (r = .119, p = .041). They also believed that citizens are not self-interested and are not focused on the short term (r = -.157, p = .007). These results contradict each other. The participatory experience bias suggests non-respondents have a culture which downgrades participation, whereas the tenure bias suggests non-respondents feel citizens can and should participate. If the data were adjusted for both tenure and participation experience, the net effect on agency culture would probably be close to zero, suggesting that adjustment is not needed and that non-response bias in terms of agency culture is minimal. Given the high overall response rate to the agency surveys of 72%, non-response bias is not a problem.

In summary, non-respondents to the citizen survey tended to be less involved than respondents. Non-respondents to the agency survey tended to have less tenure in the agency and less participation experience. However, when examining bivariate relationships among variables for both agency and citizen data, there were no meaningful differences between respondents and non-respondents.

This lack of non-respondent bias on the key research issues, while comforting, does not address the issue of biases resulting from systematically excluding people from the sample frame. Frame errors arise from the fact that the mailing lists only contained people who had participated with the agency. People who were strongly disenchanted with government in general, or with the agency in particular, would not even bother trying to participate. In addition, people with no strong interest in natural resource issues, those generally apathetic about participation in government, and those simply too busy also would not be on the list. Evidence for this bias comes from comparison of the sample in this study with that of another study. Smith and Propst (2001) had citizens who were members of community groups fill out a questionnaire which included the same items about participation in politics and natural resource decision making that were used in the present study to measure general involvement in natural resource decision making.
Averaging across the eleven items, the Smith and Propst study found an involvement level of 27.6%, compared to 39% in this study. This supports the conclusion that there is frame bias in terms of excluding those who are less involved in politics and government.

In terms of demographics, the sampling frame does not represent the state of Michigan. The sample was 80% male, 61% over the age of 50, 55% possessing a four-year college degree or higher, and 93% white. There were only 1% Asian American, 0.2% African American, 0.5% Hispanic, and 3% Native American respondents to the citizen survey. It is also clear that minority groups, particularly African Americans and Hispanics, were not represented on the lists. How might the white, older, highly educated, male sampling frame lead to bias in the research results? The problem is that the questions on the survey asked respondents to evaluate and describe their participation experience. If they never participated, they simply would not be able to answer. Mailing the survey to a random sample of Michigan residents, and thus using a less biased frame, would not have worked.

Coding and data entry

Data from the surveys were entered into MS Excel and then imported to SPSS for data analysis. Responses to questions asking for frequency of participation were entered in terms of the number of times a person had done something. If they wrote "10+" it was entered as 10, which introduces only minimal inaccuracy. On the citizen survey, when citizens were asked to choose the technique they used to participate in their most recent participation experience, they sometimes indicated more than one technique. Since only one value could be entered, the rest of the answers were checked for clues as to which to enter. For example, a written description of when, where and why the experience occurred sometimes gave a clue, as did a later question asking how many people were involved. If nothing could be deduced, then the technique with the most agency-citizen contact was chosen. For example, if they checked "submitted written comment", "talked one-on-one", and "attended an open house," the open house was entered as their participation experience. Some people did not specify a technique and so were put in the "non-specified" categories. On the surveys related to the Michigan Department of Natural Resources (MDNR) and Huron-Manistee National Forest (HMNF), respondents were also allowed to indicate that they would base their answers on mailings they received. However, many of them wrote comments in the margin suggesting their answers related to experiences other than mailings. For this reason the "mail" and the "non-specified" cases were combined.

Answers to questions with an agree-disagree scale were entered as the following values: Strongly Disagree = -2, Disagree = -1, Neither Agree Nor Disagree = 0, Agree = 1, Strongly Agree = 2. Answers to questions on a true-false scale were entered as: Definitely False = -2, More False Than True = -1, More True Than False = 1, Definitely True = 2. Answers to the importance scale questions were entered as: Not at all Important = 0, Somewhat Important = 1, Important = 2, Essential = 3. For frequency scales, answers were: Never = 0, Seldom = 1, Sometimes = 2, Often = 3, Always = 4.
Extent of impact scales were: Zero = 0, Small = 1, Medium = 2, Large = 3, Dominating = 4. Amount available scale was: Almost None = 0, A little = 1, Some =2, A Lot =3. On all questions with a Don’t Know option, Don’t Know was entered as a missing value for that question. The presence of this option meant that respondents could distinguish between being neutral on the issue (i.e. Neither Agree Nor Disagree) and not 67 knowing the answer. The presence of Don’t Know was crucial because some questions were very specific and a respondent may not have remembered the details or their experience was too limited for them to find out. Sometimes it was clear that they had used the Neither Agree Nor Disagree category as equivalent to Don’t Know. This was apparent when they marked all questions on a page this way. In this case, the answers were entered as missing values. In the demographic section, people sometimes checked multiple racial/ethnic categories. This was almost always a combination of Caucasian and another group and the person was assigned to the minority group. If two or more minority groups were chosen, the “Other” category was used. A few people, obviously bothered by the question, wrote in “American” or “Born in the U.S.A.”. These were entered as “Other,” although they were probably Caucasian. Respondents often wrote comments in the margin or in the open section at the end. Some also called with comments or questions. A list of categories of comments was created. When a respondent’s comment fit a particular category, the respondent’s questionnaire ID was added to the list under that category. If the comment related to a particular question, the question number was also entered. This allowed easy retrieval of qualitative data on topics related to the study. Questions that may have been confusing to respondents could also be identified. Analysis There were four basic components of the analysis. The first involved factor analysis and the validity and reliability checks needed for creation of the variables from 68 the items in the questionnaires. The second part used one-way analysis of variance and post-hoe tests to identify differences among agencies in mean values of the variables in the study. The third part involved examining the bivariate relationships among variables proposed in the theoretical framework. Finally, the fourth part used stepwise analysis of covariance to combine the individual and agency levels of analysis with multiple variables. Reliability checks involved factor analysis of items which had been designed to measure the same construct. For example, the 16 fair process items on the citizen survey were factor analyzed to see if they could all be combined into one scale or needed to be broken up into several subscales. Items which did not load on factors as expected were discarded. Afier factor analysis, alpha reliabilities were calculated to confirm that items in a scale were consistent enough to be combined. Then validity of each scale was tested by examining its correlation with other scales that should have been similar. If correlations were opposite of those expected, the scales were examined more closely for problems. Often comments respondents had given over the phone or written in the margins gave clues to why certain items were problematic. If justified, a scale was discarded for this reason. The second component of the analysis explored the agency level differences among variables. Mean scores on all the variables were calculated for each agency. 
Then one-way analysis of variance and post-hoc paired comparisons were used to identify agencies which were significantly different from each other. Post-hoc comparisons were t-tests to determine whether variable means were significantly different. The significance level was adjusted to take into account the number of comparisons made, because when many comparisons are made some may be significantly different by chance.

The third analysis examined the individual level relationships among variables with simple correlations. This allowed testing of the bivariate relationships present in the framework.

In the final analysis, stepwise analysis of covariance was done in order to combine the individual and agency levels of analysis. The analysis was conducted with block stepwise linear regression. In block stepwise linear regression, groups of variables are entered sequentially to see if adding an additional group of variables significantly increases the amount of variation explained in the dependent variable. For example, in the first regression four demographic variables might predict 8% of the variation in fairness evaluations. In the second regression, the three situation variables are added to the four demographic variables, and the resulting equation with seven independent variables may explain 10% of the variation in fairness evaluations. An F-statistic can then be calculated to see if the change in explained variance is significantly different from zero. If the increase in explained variation is significant, then the category of situation variables is clearly important and should be kept in the theory. If there is no increase in explained variation, then only demographic variables are needed and the theory simplifies to individual differences.

The stepwise regression described above is only at the individual level of analysis. One way to include an agency level of analysis is with agency dummy variables. The analysis then tests for significant differences between the means of the agencies on the dependent variable after adjusting for the individual level variables, which are equivalent to covariates in an analysis of covariance (McClendon, 1994). A stepwise analysis, for example, might first enter just the agency dummy variables as independent variables predicting fairness. Then in the second regression, citizen demographics could be added as covariates to see if they explain additional variation. Finally, situation variables might be added as covariates to give the full equation.
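A minimal sketch of this block-entry logic follows. It is illustrative only (the study's analyses were run in SPSS); the data file, variable names, and choice of blocks are assumptions used to show how the F-test for the change in explained variance can be computed.

```python
# Illustrative block stepwise regression: agency dummy variables first, then
# citizen demographics, then situation variables.  All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

cols = ["fairness", "agency", "age", "gender", "education",
        "conflict", "equal_power"]
df = pd.read_csv("citizen_survey.csv").dropna(subset=cols)

blocks = [
    "fairness ~ C(agency)",                                      # block 1
    "fairness ~ C(agency) + age + gender + education",           # block 2
    "fairness ~ C(agency) + age + gender + education"
    " + conflict + equal_power",                                 # block 3
]
models = [smf.ols(formula, data=df).fit() for formula in blocks]

for reduced, full in zip(models, models[1:]):
    q = full.df_model - reduced.df_model      # predictors added in this block
    f_change = ((full.rsquared - reduced.rsquared) / q) / \
               ((1 - full.rsquared) / full.df_resid)
    p = stats.f.sf(f_change, q, full.df_resid)
    print(f"R2 {reduced.rsquared:.3f} -> {full.rsquared:.3f}: "
          f"F({q:.0f}, {full.df_resid:.0f}) = {f_change:.2f}, p = {p:.4f}")
```

If the F for a block is significant, that block of variables adds explanatory power beyond the blocks entered before it.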
Using a stepwise analysis of covariance approach has several strengths. First, multiple regression of any type divides explained variation in the dependent variable among all the independent variables in the equation (Blalock, 1979). The analyst can make statements like, "Controlling for the age of the respondent, conflict decreases fairness." Second, the direction and relative influence of each independent variable on the dependent variable can be compared using standardized regression coefficients. This allows statements like, "The influence of equal power on fairness is twice as large as that of conflict" (Alwin & Hauser, 1981). Third, agency level factors can be included as dummy variables along with the continuous individual level variables. This helps identify agency level differences that cannot be explained by individual level phenomena. Conclusions can be reached like, "Delta township has a lower level of perceived fairness than the MDNR even after controlling for citizen perceptions of power equality." Finally, interaction terms can be introduced which test whether the value of one independent variable affects the relationship another independent variable has with the dependent variable. For example, a significant interaction might show that, "when conflict is high, power equality increases fairness, but when conflict is low, power equality has no effect on fairness."

There are also some weaknesses of the approach. For example, stepwise regression needs a substantive theory to guide the decisions about which variables to enter first. If variables share a lot of variation (multicollinearity), then the ones entered second may not increase the explained variation significantly and will be excluded (George & Mallery, 1995). In other words, the order in which variables are entered may determine which ones are significant. Addressing this concern requires testing for multicollinearity and trying different orders of variable entry. Entering variables in blocks can be used to enter collinear variables simultaneously and avoid some of these issues. Collinear variables can also be combined into single indexes. This study uses both approaches.

In summary, this chapter has outlined the methods by which the framework proposed in Chapter 4 is tested and elaborated. A variety of cases were chosen which varied on the contextual factors. Citizens and agency employees were mailed questionnaires that measured context, evaluation and consequence variables. Follow-up phone calls were used to assess non-response bias. In the next chapter, the items measured on the questionnaires are factor analyzed in order to see how they can best be combined into computed variables. Then the reliability and validity of the computed variables are assessed to determine how much each can be trusted as an accurate representation of the underlying phenomenon.

CHAPTER VI: RESEARCH METHODS 2: COMPUTING THE VARIABLES AND CHECKING RELIABILITY AND VALIDITY

Data for this study were collected through many items on several long questionnaires. One reason for the length was that each theoretical concept was measured with several items. This is a common practice because it allows estimation of the random measurement error associated with a variable, that is, its reliability (Agresti & Finlay, 1997). By using several items and then computing an average or a sum of scores on those items, a more precise picture can be gained of the underlying (or latent) theoretical construct. However, accuracy of measurement is not just an issue of random error. Sometimes the items may all be highly consistent with each other (reliable) but systematically measure the wrong thing. This kind of systematic error relates to the question of validity, which is the extent to which the items are actually measuring the desired underlying construct rather than a different construct.

Prior to calculating reliability and validity, the items have to be examined to see how they should be combined into computed variables. This is important because theory predicts that some constructs may have sub-constructs. For example, the construct of fair process was measured with 16 items.
The items were chosen to represent the many fair process principles proposed in the literature. It may be useful to compute both a general fair process variable and several more variables measuring specific aspects of fair process. Factor analysis is used to determine the natural sub-groupings of items for those constructs measured with many items. Factor analysis is also useful for identifying particular items which do not match the other items and should be excluded from the computed variable. They may not match because they are actually measuring a different construct, or because they were poorly written, so that respondents cannot give meaningful answers.

In this chapter, once factor analysis is used to determine how items are to be grouped into variables, reliability analysis measures whether respondents gave similar answers to all items grouped into a variable. The more similar the answers, the less random variation in the measure and the greater the reliability. Reliability also increases if there are more items. This is analogous to sampling theory, in that up to a point a more accurate estimate of the population mean is obtained with a larger sample of individuals. Reliability is calculated with coefficient alpha: Alpha = kr / (1 + (k - 1)r), where k is the number of items and r is the average correlation between pairs of items (George & Mallery, 1995).
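As a minimal illustration of this formula (and not a reproduction of the SPSS reliability procedure actually used), standardized alpha can be computed directly from the inter-item correlations; the DataFrame and column names below are hypothetical.

```python
# Standardized coefficient alpha: alpha = k*r / (1 + (k - 1)*r), where k is
# the number of items and r is the mean inter-item correlation.
import numpy as np
import pandas as pd

def standardized_alpha(items: pd.DataFrame) -> float:
    """items: one column per questionnaire item, one row per respondent."""
    corr = items.corr().values
    k = items.shape[1]
    r_bar = corr[np.triu_indices(k, k=1)].mean()   # mean off-diagonal correlation
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# e.g. alpha for a hypothetical set of fair process item columns:
# standardized_alpha(df[["fair_a", "fair_b", "fair_d", "fair_e"]])
```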
The validity of the computed variables is also assessed. Validity can be tested in several ways; in this study, construct validity is measured (Babbie, 1998). A variable which has construct validity is related to other variables in ways predicted by theory. For example, theory predicts that people who have a high level of involvement with an agency will also have a higher than average number of relationships with employees in that agency. If this relationship holds, both variables are likely valid. If it does not hold, then one or both may not measure the intended construct, or the theory linking the two variables may be wrong. Often researchers will deliberately include items in the survey which already have well-established validity and can serve as comparison variables. In this study this strategy was used only to a limited extent because the length of the survey precluded inclusion of many extra items.

The exploration of the dimensional structure of constructs, as well as the reliability and validity of the computed variables, is organized according to the theoretical framework in Chapter 4, which divides variables into three groups. Variables in the first group together define aspects of the decision making context which may influence the way citizens evaluate their decision making participation experiences. Variables in the second group describe how citizens use fairness to evaluate their experience. The third type of variable concerns a possible consequence for the agency of citizen evaluations. In this chapter, the factor, reliability and validity analyses are presented according to these groups, starting with the decision making context variables. Context variables were measured in three ways. Some were assigned by the researcher, others were derived from the survey sent to agency personnel, and others were assessed on the survey sent to citizens. In the following analysis, context variables from the three sources of data are considered separately.

Reliability and validity of researcher assigned context variables

Government level and participation technique were both assigned by the researcher. When agencies were chosen, they were chosen partly according to the level of government they represented: federal, state or local. Because these are self-evident categories there is no need for reliability or validity checks of government level. Participation technique, however, does involve generalizing from very specific efforts (e.g. the Menominee River Management Plan) to general techniques (e.g. meeting). These techniques were ordered according to the amount of discussion and personal interaction. A check on reliability for this variable would involve finding out whether someone else would assign the same specific participation efforts to the same techniques. Validity would involve finding out whether the techniques were indeed ordered from least to most interpersonal interaction. It was difficult to find one person who was familiar with all the decision making efforts covered by the survey; however, informal conversations with several agency employees suggested participation technique was reliable and valid because they agreed with the way the participation efforts and techniques were classified.

Factor analysis and reliability of agency survey context variables

Some variables measured on the agency survey had many items, so factor analyses were conducted to find out whether there were important sub-groupings or whether the items could be combined into a single dimension. The first set of four items measured agency employee beliefs about citizen knowledge and reasons for participating (Table 8). The items sought to measure the agency cultural beliefs that citizens have expertise (citizen knowledge) and do not focus on societal needs (citizen self-interest).

Table 8. Items used to measure citizen knowledge and citizen self-interest.

Q#   Item wording ("Citizens who participate in decision making are...")            Factor loadings
2c   ...mainly concerned about their self-interests.                                  .87   -.08
2d   ...usually focused on the short-term.                                            .84   -.16
2a   ...usually able to understand and use technical information.                    -.05    .86
2b   ...usually in possession of the knowledge needed to make good decisions.        -.20    .80

To get the scores used in the analysis, respondents were asked, "What are the characteristics of citizens? Please indicate your perceptions of the beliefs of most other employees in your unit." Response scale: Definitely False, More False Than True, More True Than False, Definitely True.

Factor analysis of the four items which together comprised the citizen knowledge and citizen self-interest variables showed that there were two dimensions. The citizen knowledge items clustered on one factor and the citizen self-interest/short-term items on the other factor. Alpha reliabilities for citizen knowledge (0.673) and citizen self-interest (0.716) were barely acceptable according to a common rule of thumb which holds that alpha values greater than 0.6 are questionable, greater than 0.7 are acceptable, greater than 0.8 are good, and greater than 0.9 are excellent (George & Mallery, 1995).

The next set of eleven items concerned the perceptions of the beliefs of most employees in the unit about the importance of bureaucratic principles and expertise (Table 9). Factor analysis led to three dimensions. The six items aligned on the first factor generally concerned the importance of bureaucratic principles like authority, consistency and efficiency. The second factor contained four items which were clearly about the importance of expertise.
The third factor only had one high loading item. This item captured the concept of how well agency employees understood citizen needs, a belief somewhat different fiom bureaucracy and expertise. Therefore it was not included in the variables. The first six items were averaged to make a bureaucracy variable with an acceptable Alpha value of 0.77. However, the four expertise items received an alpha of only 0.61. When the two items which had the lowest loadings on the expertise factor (3c, 3e) were removed from the expertise scale, the reliability increased to 0.67. Therefore the expertise variable was computed from only two items (3a, 3b). In addition to measuring beliefs related to citizens and the importance of expertise and bureaucracy, employees were asked about fairness. This was because it was hypothesized that a particular agency’s cultural attitudes towards the importance of 77 Table 9. Factor structure for perceptions of the beliefs of most other employees on bureaucracy and expertise variables. Factor loadings Q# Item wording 1 2 3 3i Well-established plans should not be changed in response to citizen .75 .08 .05 demands. 3h Public participation usually helps to build public support for the -.75 .06 .12 agency. 3g Consistent application of rules is more important than incorporating .69 .23 .05 public comments. 3j The benefits of involving citizens outweigh the costs. -.66 -.08 -.14 3d Decisions made by a person in a position of authority should not be .58 .25 .23 challenged by citizens. 3k Involving citizens slows down decision making processes too much. .54 .41 -.17 3a Decisions should be made according to standard professional practices. .04 .81 .04 3b Experts should have the power to make decisions. .20 .75 .17 3e [Agency employees] understand the long term consequences of .34 .57 .05 decisions better than citizens. 3c When making decisions, correctness is more important than popularity. -.06 .48 -.41 3f [Agency employees] have a clear idea of public needs and desires. .05 .14 .87 Bold loadings indicate the items which loaded on the factor and combined into a variable To get the scores used in the analysis, respondents were asked, “How much do you think most other employees in your unit agree or disagree with the following statements?” Response scale: Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree fairness would be related to how citizens judged the fairness of their experience with that agency. For the same reason, employees were also asked how often they thought they and other employees in their unit actually achieved fairness. Based on a theoretical review of the fairness literature (discussed in Chapter 5) a set of 16 fair process items and 5 fair outcome items were generated. These were then used to measure the importance of fairness to employees (fairness importance) and how well their unit achieved fairness (fairness performance). The same items were also used on the citizen survey to assess how fair the decisions were from the citizen perspective. 78 The fairness items were meant to be combined into overall process and outcome fairness but, because they were created to measure many specific principles, they might also combine into sub-groupings of fairness items. Factor analysis was used to check for sub-groupings. Factor analyses of the fairness importance and fairness performance items related to the decision making process were done separately and led to slightly different groupings (Table 10, Table 11). 
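The grouping of these items, described next, relied on exploratory factor analysis run in SPSS. As a purely illustrative sketch of that kind of check, assuming the item responses sit in a pandas DataFrame with one column per item, one might examine eigenvalues and rotated loadings as follows:

```python
# Illustrative factor check: eigenvalues of the item correlation matrix
# suggest how many factors to retain; rotated loadings show which items
# cluster together.  Column names and the factor count are assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def factor_check(items: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    items = items.dropna()
    eigenvalues = np.linalg.eigvalsh(items.corr().values)[::-1]
    print("eigenvalues of the correlation matrix:", np.round(eigenvalues, 2))

    fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
    fa.fit(items.values)
    loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                            columns=[f"factor_{i + 1}" for i in range(n_factors)])
    return loadings.round(2)

# e.g. loadings = factor_check(df[fair_process_item_columns], n_factors=4)
```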
In order to be consistent in the analysis and allow direct comparison, the same items were combined into the same subgroups for fairness importance, fairness performance, and citizen judgements of fairness. This was possible because the factor structures were reasonably similar and were consistent with fairness principles mentioned in the literature. Factor analyses using four factors led to the creation of fair process variables about representation, neutrality, influence, courtesy, general fair process, and total fair process. The items composing each and the resulting alpha reliability levels follow. Two variables called representation importance (Alpha = 0.82) and representation performance (A= 0.74) were created from items related to access to participation opportunities and representation of everyone affected (d, e, f, g). Two variables called neutrality importance (A = 0.73) and neutrality performance (A = 0.75) were created from items related to the use of accurate information, reasoning, lack of bias, and honesty (m, n ,o, p). Two variables called influence importance (A = 0.70) and influence performance (A= 0.74) combined items related to citizen influence over the decision making (h, i, j, k). Two variables called courtesy importance (A = 0.61) and courtesy performance (A = 0.59) combined items relating to answering questions and polite treatment (1, q). Two variables called fair process importance (A = 0.62) and fair process performance (A = 79 Table 10. Factor structure for fairness importance variables Factor loadings Q# Item wording l 2 3 4 5e The participation experience is convenient to attend. ._7_4 .22 .16 -.03 5h Citizens are able to participate directly in making decisions. .74 -.05 .14 .03 5f Everyone affected by the decisions has an opportunity to participate. £1 .27 .19 .08 5 g Local people are adequately involved. ,Q .39 .15 .08 5j Citizens have an influence on the choice of decision making process. .62 -.O7 -.09 .23 Si Citizens are able to have an influence on the decision outcomes. .60 .20 -.02 .35 5d Citizens are given sufficient advance notification of the opportunity to ._g .52 .14 .18 participate. Sq Citizens are treated politely. -.03 .81 .20 .02 51 Citizen’s questions are answered. .31 .70 .19 .09 5p Agency employees are honest to citizens. .13 .58 .44 .25 5k Citizens' comments are seriously considered in the decision making. .43 .50 .03 .32 511 The decisions are well reasoned and logical. .04 .09 .84 .03 5m Information used to reach the decisions is accurate. .05 .28 .75 .08 50 There is a lack of bias toward particular interests, groups, and persons .21 . 12 .67 .11 5a The procedures used to make decisions are fair. .09 .01 .23 .85 5b Citizens are treated fairly. . .24 .32 .03 .69 All the bold numbers in a given column indicate the items which were combined to form a variable. Underlined numbers indicate items which were combined into a separate variable. To get the scores used in the analysis, respondents were asked, “How important are each of the following statements to you? ” Response scale: Not at all Important, Somewhat Important, Important, Essential 80 >4 W '2: (D r '0 4t (J! U! £1 0 :3 :‘l :1 B H :3 a! :lOODODeOd £10 D. O J! ”‘1 C” Table 11. Factor structure for fairness performance variables Factor loadings Q# Item wording 1 2 3 4 511 The decisions are well reasoned and logical. .75 .12 .17 .15 50 There is a lack of bias toward particular interests, groups, and persons. .70 .02 .25 -.O6 5m Information used to reach the decisions is accurate. 
.65 .01 .20 .27
5a   The procedures used to make decisions are fair.                                    .63 .26 .09 .31
5b   Citizens are treated fairly.                                                       .61 .07 .17 .44
5p   Agency employees are honest to citizens.                                           .60 .13 .13 .44
5i   Citizens are able to have an influence on the decision outcomes.                   .02 .80 .01 .13
5j   Citizens have an influence on the choice of decision making process.              -.09 .73 .22 .03
5h   Citizens are able to participate directly in making decisions.                     .34 .65 .16 .12
5k   Citizens' comments are seriously considered in the decision making.                .44 .59 .24 .23
5e   The participation experience is convenient to attend.                              .15 .09 .78 .27
5d   Citizens are given sufficient advance notification of the opportunity to
     participate.                                                                       .16 .21 .71 .30
5f   Everyone affected by the decisions has an opportunity to participate.              .34 .20 .70 .02
5g   Local people are adequately involved.                                              .29 .38 .42 -.07
5q   Citizens are treated politely.                                                     .26 .08 .09 .76
5l   Citizens' questions are answered.                                                  .17 .18 .28 .71

All the bold numbers in a given column indicate the items which were combined to form a variable. Underlined numbers indicate items which were combined into a separate variable. To get the scores used in the analysis, respondents were asked, "In the section above you indicated the importance to yourself of some statements. Now please indicate how often you and other employees in your unit actually accomplish them." Response scale: Never, Seldom, Sometimes, Often, Always.

0.73) were created using items asking about overall fairness of the decision making procedures (a, b). Finally, total fair process importance (A = 0.87) and total fair process performance (A = 0.88) were assessed with a combination of all the fair process items (a, b, d, e, f, g, h, i, j, k, l, m, n, o, p, q).

The outcome items from the fairness importance and fairness performance scales were also factor analyzed separately. The data easily combined into just one factor for each set of importance and performance items. The items were: "Benefits and costs of outcomes are distributed fairly among citizens," "The outcomes of decisions are fair," "The decisions reached are equally favorable to all citizens," "The decisions benefit the citizens who are most deserving," and "The decisions reached are consistent with citizens' personal values." Of the two variables computed from these items, fair outcome importance (A = 0.71) had an acceptable reliability, but fair outcome performance (A = 0.60) was questionable.

Finally, agency employees were asked to indicate the level of resources available for public participation in their unit. They rated three items, "Staff", "Training", and "Money", with response options of Almost None, A Little, Some, and A Lot. Factor analysis of the responses yielded one factor. The resulting variable of participation resources had an alpha reliability of 0.76.

In conclusion, most of the constructed agency culture and resources variables had alpha reliabilities above 0.70, indicating they could be used in further analyses, although cautiously.

Validity of agency survey context variables

Validity was assessed by conducting bivariate correlations among the variables. Correlations in the direction predicted by theory indicate the variables' validity. Unexpected correlations suggest the measures may be invalid. Correlations of agency variables were checked at both the individual and agency levels. This was done because in Chapter 7 variables are used as aggregated means at the agency level. A variable which performs differently when aggregated to the agency level may be invalid.
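A brief sketch of this two-level check follows. It is illustrative only, with hypothetical file and column names, but it shows how the same correlations can be computed on individual employees and again on agency means.

```python
# Correlations among agency survey variables at two levels of analysis.
import pandas as pd

agency_df = pd.read_csv("agency_survey.csv")      # hypothetical data file
culture_vars = ["citizen_knowledge", "citizen_self_interest",
                "influence_importance", "fairness_performance"]

individual_corr = agency_df[culture_vars].corr()            # employee level

agency_means = agency_df.groupby("agency")[culture_vars].mean()
agency_corr = agency_means.corr()                           # agency level

print(individual_corr.round(2))
print(agency_corr.round(2))
```

A variable whose correlations change sign or disappear after aggregation would be treated with the same caution described here.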
Theory predicts that there is a general set of beliefs about the ability and motivations of citizens vis-a-vis agency employees. Historically, the civil service was instituted to create a corps of employees who were not beholden to political parties and so could work for the broader good of society. They were supposed to be experts in their field of work (Knott & Miller, 1987). In the area of land management, expertise included long term planning. By deduction, one can hypothesize that employees centered in this type of culture would feel that they are able to be more expert, less self-focused, and more long-term focused than citizens outside the agency. Employees who feel this most strongly may place less importance on principles of fairness like citizen influence because they feel citizens are not well equipped to participate in decision making. These arguments suggest that there should be negative correlations between the belief that citizens are knowledgeable (citizen knowledge) and that their influence is important (influence importance) on the one hand, and a set of beliefs on the other hand that citizens are self-interested and short sighted (citizen self-interest), that bureaucratic principles are important (bureaucracy), and that expertise is important (expertise).

These expectations were supported at the individual level because correlations were in those directions, although not all were significant (Table 12). At the agency level, results were less clear because expertise and bureaucracy failed to have any significant correlations (Table 13). The questionable validity of the bureaucracy and expertise variables is supported by qualitative data collected during the non-response bias phone interviews. Respondents expressed confusion and gave contradictory answers when attempting to answer many of the bureaucracy and expertise items. The items involved making tradeoffs between citizen participation and other values, and many respondents felt both were important in different ways and could not answer. Some terms in the questions, e.g. "standard professional practices" and "expert", were open to multiple interpretations. For these reasons, it was decided not to use the bureaucracy and expertise variables in further analyses.

In contrast, the fairness importance and performance variables appeared to be valid because they consistently correlated positively with each other. At the individual level almost all the correlations were significant. At the agency level, each variable correlated significantly with at least one other fairness variable. Thus the fairness importance and performance variables are valid and reliable and can be used in further analyses.

Table 12. Individual level correlations among the agency survey variables. [Correlation matrix not legible in this copy.]

Table 13. Agency level correlations among the agency survey variables. [Correlation matrix not legible in this copy.]
Finally, in addition to reliability and validity, it is useful to assess whether the data for the variables are normally distributed. Normality is an important assumption of statistics like t-tests and regression, which are used in this study. In order to assess normality, statistics were calculated to see whether the distribution for each variable was skewed to one side (skewness) or had a larger or smaller than normal peak in the center of the distribution (kurtosis). For both the skewness and kurtosis statistics, values under 1 are considered normal, values between 1 and 2 are acceptable, and values over 2 indicate significant departure from normality. All the agency variables had values for both skewness and kurtosis under 1, except the citizen self-interest, expertise, and fair outcome performance variables, which were between 1 and 2. Thus the agency variables do not violate the important statistical assumption of normality.
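The sketch below illustrates this kind of normality screen using the skewness and excess kurtosis statistics from scipy; the thresholds follow the rule of thumb above, and the DataFrame and variable names are hypothetical.

```python
# Illustrative normality screen: skewness and excess kurtosis for each
# computed variable (both are approximately 0 for a normal distribution).
import pandas as pd
from scipy import stats

def normality_screen(df: pd.DataFrame, variables: list) -> pd.DataFrame:
    rows = []
    for var in variables:
        values = df[var].dropna()
        rows.append({"variable": var,
                     "skewness": stats.skew(values),
                     "kurtosis": stats.kurtosis(values)})
    out = pd.DataFrame(rows).round(2)
    out["questionable"] = (out["skewness"].abs() > 2) | (out["kurtosis"].abs() > 2)
    return out

# e.g. normality_screen(agency_df, ["citizen_self_interest", "expertise"])
```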
Factor analysis and reliability of citizen survey context variables

The citizen survey measured several context variables, including conflict, equal power, and prior relationships. It also assessed personal characteristics like agency involvement, general involvement, age, gender, and education.

There were four items measuring the amount of conflict. Citizens were asked how much they agreed with the statements on a scale of Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, and Strongly Agree. Factor analysis showed that the items could equally well be combined into one dimension or into two. When divided into two dimensions, two of the items were related to emotional intensity ("Participants began the process with strong, deeply held positions." "Participants expressed strong emotions in response to disagreements.") and two were related to incompatibility of positions ("Participants took positions that were very different from other participants." "Positions held by participants were highly incompatible with those of others."). Since the need of this study was for fewer, more comprehensive variables, the four items were combined into a single conflict variable with an alpha reliability of 0.77.

There were four items asking about the equality of power distribution among citizens, with the same agreement response categories used for the conflict items. Factor analysis gave two dimensions. The three items grouped on the first dimension were, "Power was distributed equally among citizens," "Financial resources were equally distributed among citizens," and "The level of knowledge about how to get what they want was equal among citizens." The second dimension had only one item, "Access to [agency employees] was equally available to all citizens." Conceptually, it is reasonable that the fourth item would receive a different pattern of ratings because it depended on agency actions, not on the characteristics of the citizens participating. In the interest of a conceptually clear variable, the item was excluded from the variable equal power. The remaining three items had an acceptable alpha reliability of 0.79.

A set of six items assessed the number of prior relationships among citizens and between citizens and decision makers. These asked the total number of citizens present, the total number of agency employees present, and the number of citizens present who opposed the respondent's interests. For each of these three groups, respondents were also asked with how many of them they had a respectful relationship before the participation began. Because there was never an intention of creating just one variable, factor analysis was not done. Instead, variables were computed by taking the ratio of the number of each type of person with whom the respondent had a relationship to the total number present of that type. Thus the first variable, percent decision maker relationships, was the number of agency decision makers with whom they had a prior positive relationship divided by the total number of agency decision makers present. The second variable, percent citizen relationships, was the ratio of the number of citizens with whom they had a prior positive relationship to the total number of citizens present. The third variable, percent opposed relationships, was the ratio of the number of opposing citizens with whom they had a prior positive relationship to the total number of opposing citizens present. All these variables involved proportions of one item over another, so it was not appropriate to calculate alpha reliabilities.

The final set of context variables on the citizen survey related to citizen characteristics. The level of involvement in the agency was calculated by asking people how many times they had participated with that agency through each of several techniques during the last five years. For example, they were asked to indicate, on a scale from 0 to 10+, how many times they had submitted written comment. The number of times they had participated was added up across all the techniques and then divided by the maximum score they could have received (i.e. circling 10+ for every technique). This gave the percent involvement with the agency.
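A minimal sketch of this calculation is shown below; the technique column names are hypothetical stand-ins for the survey's frequency items, each coded 0 through 10.

```python
# Percent involvement with the agency: the sum of a respondent's frequency
# answers across techniques divided by the maximum possible score.
import pandas as pd

technique_items = ["written_comment", "one_on_one", "hearing",
                   "meeting", "advisory_board"]      # hypothetical 0-10 items
MAX_PER_ITEM = 10

def percent_agency_involvement(df: pd.DataFrame) -> pd.Series:
    total = df[technique_items].sum(axis=1)
    return 100 * total / (MAX_PER_ITEM * len(technique_items))

# df["agency_involvement"] = percent_agency_involvement(df)
```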
The level of general participation was calculated in a similar way, only with different items. Respondents were asked whether they had done each of eleven activities in the past five years. The number they had checked yes was divided by the total number of items to give a percent. The eleven items were taken from a previous study (Smith & Propst, 2001) and included general politics (e.g. "organized a group of people around some political issue"), local natural resource decision making (e.g. "signed a natural resource/environment/land use petition"), and regional natural resource decision making (e.g. "attended a hearing or meeting of the Michigan Department of Natural Resources or USDA Forest Service"). The full list of general involvement items can be found in Appendix A, question 11 on the sample questionnaire.

Items within each of the general involvement and agency involvement scales were not meant to measure the same thing, but rather to measure activity in a broad number of areas which could then be added up. It was expected that respondents would have done the activities in some items and not in others, so the items would not be "reliable" in the sense of being consistent. In fact, this was the case, as some items were answered affirmatively by only 8% of the people and others by 40%. For this reason, factor analysis and alpha reliabilities were not calculated. Other context factors related to respondent characteristics, like gender, age, and education, were measured with single items, so reliabilities could not be calculated.

Validity of citizen survey context variables

Validity was hard to assess with these variables because they were meant to be independent of each other and so should not show much correlation. There was also not room on the survey for including other items just for the purpose of assessing validity. However, certain patterns provide hints of validity. Two of the prior relationships items suggest how much conflict was present. Specifically, dividing the number of citizens who opposed most of the respondent's interests by the total number of citizens present gives a measure of the percent of the people present who were perceived as opponents. Respondents who indicated a high percentage of opponents should have also rated conflict as being high. The correlation between this ratio and conflict was significantly positive (r = 0.30, p < .001), confirming the validity of the conflict variable.

The validity of the prior relationships variables was suggested by their correlations with agency involvement. Presumably someone who is more involved in a particular agency will know more of the staff and the other citizens present. As expected, % decision maker relationships, % citizen relationships, and % opposed relationships were all positively correlated with agency involvement (r = .349, p < .001; r = .213, p < .001; r = .180, p = .01).
Agency involvement and general involvement were also significantly positively correlated (r = .418, p < .001). This supports their validity because one would expect a person who is generally highly involved in politics and natural resource decision making to also be highly involved with a specific natural resource agency.

These validity conclusions can only be considered tentative because the prior relationships variables were themselves subject to questionable validity based on respondent comments and patterns of response. The prior relationships items were the variables most often left blank. Only 25% of the 852 people returning the questionnaire filled out all the prior relationships items. Apparently the section was confusing or difficult, so many left it blank. Many wrote in question marks, some indicating that they did not remember how many people were there. Written comments also suggested the term "respectful relationship" was unclear. Others did not see how they could have a "respectful relationship" with someone who "opposed most of my interests." Although they were told to say how many people were present, maximum numbers like 300 citizens or 50 agency staff suggest they were thinking not in terms of who was actually present at the meeting, but in terms of all the people involved in the decision issue. This confusion was understandable for people whose participation consisted of letter writing or one-on-one phone conversation. For these reasons, the results have to be viewed with extreme caution.

Although validity was difficult to determine for the context variables from the citizen survey, it was possible to check the data distributions for normality. For all the variables except one, skewness and kurtosis values were within plus or minus 1, which indicates an essentially normal distribution. Agency involvement had slightly larger skewness and kurtosis values of around 1.3, but this still indicates acceptable normality.

In conclusion, the researcher assigned, agency, and citizen context variables were almost all normally distributed, valid and reliable. The only exceptions were the agency variables of bureaucracy and expertise and the citizen variables of prior relationships, all of which had validity problems. When these variables are included in any analysis, they should be interpreted with caution. However, the other variables are all good candidates for explaining why citizens gave either positive or negative evaluations of the fairness of their experiences. In the next section, variables related to citizen evaluations of fairness are described and explored.

Factor analysis and reliability of citizen evaluation variables

Citizen evaluations of their experiences were measured by items designed to capture both fairness of the decision making process and fairness of the outcomes. Because there were so many fair process items, factor analysis was done to see if they could all be grouped together. Factor analysis showed a pattern similar to that found for the identical items on the agency survey, indicating that the constructs underlying fair process judgements are robust to different situations and types of persons (Table 14). Because of this similarity, items were grouped into the same sub-variables created for the

Table 14. Factor structure of fair process variables.
agency variables. These were general fair process, representation, influence, neutrality, courtesy, and total fair process.

Table 14. Factor structure of fair process variables.
Q#  Item wording                                                           Factor 1   Factor 2   Factor 3   Factor 4
4n  The decisions were well reasoned and logical.                            .70        .40        .22        .24
4o  There was a bias toward a particular interest, group, or person.        -.69       -.22       -.03       -.39
4m  It appears that information used to reach the decisions was accurate.    .68        .33        .30        .26
4c  Citizens were treated unfairly.                                        [illegible]  -.02       -.18       -.02
4a  The procedures used to make decisions were fair.                       [illegible]   .37        .30        .21
4p  [Agency] employees were dishonest.                                      -.62       -.10       -.51        .02
4h  Citizens were unable to have an influence on the decision outcomes.     -.53       -.48        .06       -.15
4j  Citizens had an influence on the choice of decision making process.      .19        .79        .26        .17
4i  Citizens were able to participate directly in making decisions.          .18        .79        .19        .22
4k  Citizens' comments were seriously considered.                            .44        .62        .37        .21
4q  Citizens were treated politely.                                          .28        .19        .78        .07
4l  Citizens' questions were answered.                                       .27        .30        .60        .33
4d  Citizens were given sufficient advance notification of the               .06        .18        .59        .48
    opportunity to participate.
4e  The participation experience was convenient to attend.                   .12        .11        .15        .79
4g  Local people were adequately involved.                                   .20        .21        .08        .76
4f  Everyone affected by the decisions had an opportunity to participate.    .22        .25        .49        .59
Note: In the original table, numbers in bold indicated items which loaded on the factor and were combined as a variable, and underlining indicated the items (4a and 4c) used separately to make a different variable; the underlined factor 1 loadings for items 4a and 4c are not legible in this copy.

The first factor (Table 14) was divided into two variables. Items 4a and 4c were combined to create fair process evaluation, a general measure of process fairness. Unfortunately, this variable had a low alpha reliability of 0.59, probably because one of the items was stated in the negative. The other variable consisted of items about logic and neutrality (4m, 4n, 4o, 4p) and was termed neutrality evaluation (A = 0.85). The second factor contained items related to influence over the decision and process (4h, 4i, 4j, 4k) and was termed influence evaluation (A = 0.84). The third factor contained items which measured polite treatment and answering questions (4q, 4l), which were combined into courtesy evaluation (A = 0.68). The fourth factor included items about access and representation (4d, 4e, 4f, 4g) and was termed representation evaluation (A = 0.76). In addition to these sub-groupings of fairness items, a total fair process evaluation variable was computed by averaging all the items (A = 0.92).

The analysis of fair outcome items was much simpler. The five items were, "Benefits and costs were distributed fairly among citizens," "The outcomes of decisions were unfair," "The decisions reached were equally favorable to all citizens," "The decisions benefited the citizens who were most deserving," and "The decisions reached were consistent with my personal values." A factor analysis found only one dimension, and this had an alpha reliability of 0.86, indicating good reliability.

Validity of citizen fairness evaluation variables

Because no other variables had been included specifically for checking validity, validity was assessed by comparing the evaluation variables among themselves. For example, justice theory suggests that someone who evaluates a process as procedurally fair will also tend to find the outcomes fair.
A process judged as generally fair should also be judged as fulfilling specific principles of fairness like influence. Examination of bivariate correlations should therefore find positive relationships among the fairness variables. Validity of these variables is indeed supported by simple correlations (Table 15). Some of the correlations are also fairly large, suggesting that evaluation variables should not be combined in the same regression equation because of multicollinearity.

Table 15. Citizen level correlations among citizen evaluation variables.
Variable                        TFPE   FPE    RE     IE     NE     CE
Total fair process evaluation
Fair process evaluation         .77*
Representation evaluation       .76*   .41*
Influence evaluation            .85*   .56*   .53*
Neutrality evaluation           .88*   .68*   .51*   .68*
Courtesy evaluation             .74*   .51*   .54*   .58*   .60*
Fair outcome evaluation         .69*   .55*   .42*   .61*   .73*   .42*
* p < .01

In conclusion, the citizen evaluation variables appear to be reliable and valid. They are also normally distributed, as shown by skewness and kurtosis statistics which were between plus and minus 1 for all variables. In the next section the final type of variable is assessed.

Factor analysis, reliability, and validity of the evaluation consequence variable

Trust in the decision makers was measured with several types of items. There were two items related to trust in the agency: "The [agency] can be trusted to make good decisions," and "I trust the [agency] to make good decisions without my input." There were also two items related to satisfaction with the job the agency does: "The [agency] does its job well," and "I am satisfied with [agency] decision making." There were also items related to support of agency decisions: "I plan to actively oppose, appeal or sue a decision reached in this decision," and "I plan to support the decisions reached." Finally, there was one concerning the relationship between the citizen and the decision makers: "As a result of this experience the relationship between [agency employees] and myself has improved." Factor analysis of these seven items yielded one dimension, suggesting they could be combined. The seven items had a very good alpha reliability of 0.91.

Validity of the trust in decision maker variable was not determined because there were no other related variables. However, the data were normally distributed. Skewness and kurtosis values were within plus and minus 1 for all three consequence variables.

Summary

This chapter presented factor analyses, alpha reliabilities for calculated variables, and correlations among variables. Factor analysis helped determine how to combine items and led to the creation of variables which measured more specific aspects of the general construct. This was particularly helpful in organizing the 16 fair process items which were used to measure agency cultural attitudes towards fairness and citizen evaluations of the fairness of their experiences. Calculations of inter-item alpha reliability coefficients helped assure that variables were measured with sufficient reliability by the items. Correlations showed that variables which were theoretically related were also empirically related in the data, confirming the validity of most of the variables. Table 16 lists all the variables, indicating what construct each variable measures, the items used to calculate each variable, and variables with questionable validity.
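Because inter-item alpha reliability is used throughout this chapter, a minimal sketch of the computation is shown below. It assumes the item data are arranged as a respondents-by-items array and that any negatively worded items have already been reverse-coded; the example responses are invented, and this is not the study's own analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a four-item scale on the -2..+2 agreement format.
responses = np.array([
    [ 1,  1,  2,  1],
    [ 0, -1,  0,  0],
    [ 2,  2,  1,  2],
    [-1,  0, -1, -1],
    [ 1,  2,  1,  1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```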
In the next three chapters, these variables are used to test the contextual framework of perceived fairness proposed in Chapter 4. Knowing that the variables used in the analyses were reliable and valid increases confidence in the accuracy of the results.

Table 16. A summary of variables and their validity.
Construct                                Variable                            Items used (see surveys in appendix)               Low validity
Context variables: Researcher assigned
                                         Government level
                                         Participation technique             Q3
Context variables: Agency survey
Beliefs about citizens                   Citizen knowledge                   Q2: a, b
                                         Citizen self-interest               Q2: c, d
Importance of expertise                  Bureaucracy                         Q2: d, g, h, i, j, k                               X
                                         Expertise                           Q2: a, b                                           X
Process fairness importance              Fair process importance             Q5: a, b
                                         Representation importance           Q5: d, e, f, g
                                         Neutrality importance               Q5: m, n, o, p
                                         Influence importance                Q5: h, i, j, k
                                         Courtesy importance                 Q5: l, q
                                         Total fair process importance       Q5: a, b, d, e, f, h, i, j, k, l, m, n, o, p, q
Process fairness performance             Fair process performance            Q6: a, b
                                         Representation performance          Q6: d, e, f, g
                                         Neutrality performance              Q6: m, n, o, p
                                         Influence performance               Q6: h, i, j, k
                                         Courtesy performance                Q6: l, q
                                         Total fair process performance      Q6: a, b, d, e, f, h, i, j, k, l, m, n, o, p, q
Outcome fairness                         Fair outcome importance             Q5: t, u, v, w, x
                                         Fair outcome performance            Q6: t, u, v, w, x
Participation resources                  Participation resources             Q8: a, b, c
Context variables: Citizen survey
Prior conflict                           Conflict                            Q10: a, b, c, d
Prior equal power distr.                 Equal power                         Q8: b, c, e
Prior respectful relationships           % Decision maker relationship       Q9: a, b                                           X
                                         % Citizen relationship              Q9: c, d                                           X
                                         % Opposed relationship              Q9: e, f                                           X
Respondent involvement in                Agency involvement                  Q2: a, b, c, d, e, f, g
decision making                          General involvement                 Q11: a to k
Demographics                             Age                                 Q12: a
                                         Gender                              Q12: b
                                         Education                           Q12: d
Citizen evaluation variables: Citizen survey
Process fairness                         Fair process evaluation             Q4: a, c
                                         Representation evaluation           Q4: d, e, f, g
                                         Neutrality evaluation               Q4: m, n, o, p
                                         Influence evaluation                Q4: h, i, j, k
                                         Courtesy evaluation                 Q4: l, q
                                         Total fair process evaluation       Q4: a, c, d, e, f, h, i, j, k, l, m, n, o, p, q
Outcome fairness                         Fair outcome evaluation             Q5: a, b, c, d, e
Consequence of citizen evaluation: Citizen survey
Trust in decision maker                  Trust                               Q1: a, c, d, e; Q7: a, b, e

CHAPTER VII: RESULTS AND DISCUSSION: COMPARING AGENCIES

How and why do agencies differ in terms of context, fairness, and consequence variables? Are the agency level patterns among these variables consistent with the theoretical framework proposed in Chapter 4? In this chapter, these research questions are answered in order to understand how the particular agency making decisions affects other aspects of the decision making context, citizen evaluations of fairness, and citizen trust in decision makers. The agencies are compared in terms of mean values on the variables measured on the survey. Then patterns of agency differences in context, evaluation, and consequence variables are compared to see if they correspond to the predictions made in Chapter 4's contextual framework. This is done to see if the framework applies at the agency level of analysis. The first aspect of the context which might influence fairness was the agency making the decisions.
Cross-agency variation on agency variables

There were several aspects of the agency which might be important. The first was the level of government. As explained in Chapter 5, five agencies were chosen for this study. They represented federal, state, and local level natural resource management agencies in Michigan. At the local level, three township planning commissions were examined. Monitor, Delhi, and Delta townships were chosen because they were fast-growing, partly rural areas. The planning commissions were often faced with deciding how to manage development on previously undeveloped land. At the state level, the Forest Management Division of the Michigan Department of Natural Resources (MDNR) was chosen because it had to make decisions about management of the state's forest resources. At the federal level, the Huron-Manistee National Forest (HMNF), located in northern lower Michigan, was chosen because it managed federally owned forest lands in Michigan.

By design, the agencies differed on level of government. In Chapter 4, contradictory predictions were made for how level of government might affect fairness. Local levels should be evaluated as more fair because they would have an easier time notifying and involving citizens. However, local levels also make decisions which affect citizens' day-to-day lives and so would arouse stronger reactions, making it harder to achieve fairness. In a later section, when context-fairness patterns are examined, support for these rival hypotheses is assessed.

Another aspect of the agency was the cultural attitudes of its employees. This can be very noticeable to citizens. As part of a lengthy letter accompanying a returned questionnaire about the Pigeon River Advisory Board, a citizen commented:

As a result of those meetings and my personal contact with the DNR, I have come away with the impression that the mentality of the majority of the DNR personnel is that an adversarial relationship exists between all "civilians" and those in the department, and deep down inside they would rather we just left them alone instead of complicating their lives with the decisions we make at these meetings.

The perceived attitude that citizens are an annoyance or even an adversary to good management is an expression of broader cultural issues. The historical review in Chapter 2 suggested that agencies have struggled for many years with balancing neutrality and accountability (Knott & Miller, 1987). Kweit and Kweit (1981) theorize that agency cultures emphasizing neutrality will downplay citizen involvement. The quote above suggests this may be occurring in the Michigan DNR. Looking at all the agencies in this study in terms of the cultural variables measured, what is the balance between neutrality and accountability?

The first culture variables measured were the beliefs that citizens have and can use technical knowledge (citizen knowledge) and that they are focused on short-term self-interests rather than societal needs (citizen self-interest). They were measured on a scale of Definitely False (-2), More False Than True (-1), More True Than False (1), and Definitely True (2). Beliefs about citizen knowledge ranged from -0.1 for the Michigan DNR to 0.9 for Monitor Township, and beliefs about citizen self-interest ranged from -0.5 for Monitor Township to 1.2 for MDNR and Delhi Township (Table 17). However, for most agencies, citizen knowledge was positive and citizen self-interest was also positive.
This suggests that decision makers see citizens as having some expertise, but also as being focused on the short term and on their own self-interests. If citizens are seen as taking the short-term, personally advantageous position, they may not be trusted by the agency to have influence over decisions. Decision makers may see themselves as the neutral arbiters among the many conflicting citizen desires. Does this interpretation of the primacy of neutrality over accountability find support in measures of agency cultural beliefs about fairness?

The agency questionnaires measured personal beliefs about the importance of fairness principles and the extent to which the employee's unit achieved fairness principles. A set of 16 items, combined on the basis of factor analysis into 6 variables, assessed employee personal beliefs about the importance of process fairness principles. The items were rated on a scale of Not at all Important (0), Somewhat Important (1), Important (2), and Essential (3). Courtesy importance and neutrality importance were rated close to "Essential" by all the agencies, while influence importance was between Somewhat Important and Important (Table 18). The other fair process variables hovered around Important. The outcome variable, fair outcome importance, was rated as Somewhat Important for all agencies except Monitor Township, which rated it as Important. Assigning a lower importance to influence, and the highest importance to neutrality, is consistent with the interpretation that many agency cultures still value neutrality over accountability.

Table 17. Means, standard deviations and significant differences between agencies on citizen knowledge and citizen self-interest variables.
Agency      Citizen knowledge     Citizen self-interest
HMNF         0.0 (1.0)             1.1 a (0.9)
MDNR        -0.1 (1.0)             1.2 a (0.8)
Delhi        0.1 (1.2)             1.2 c (0.6)
Monitor      0.9 (0.8)            -0.5 b,d (1.8)
Delta        0.6 (1.0)             0.0 b (1.5)
Entries are means with standard deviations in parentheses. Respondents indicated if they believed citizens had the knowledge to participate and that citizens were self-interested. Response scale: Definitely False (-2), More False Than True (-1), More True Than False (1), and Definitely True (2).
Means marked with subscripts a and b are significantly different, and c and d are significantly different, at p < .05 according to Bonferroni t-tests which correct for multiple comparisons.

Table 18. Means, standard deviations and significant differences between agencies on fairness importance variables.
Agency      Total fair process   Fair process     Representation   Influence       Neutrality      Courtesy        Fair outcome
            importance           importance       importance       importance      importance      importance      importance
HMNF         2.2 (0.4)            2.4 (0.5)        2.1 (0.5)        1.7 (0.5)       2.5 (0.4)       2.6 (0.5)       1.2 a (0.6)
MDNR         2.2 (0.4)            2.4 a (0.5)      2.1 (0.5)        1.6 (0.5)       2.6 (0.4)       2.6 (0.5)       1.3 (0.5)
Delhi        2.2 (0.4)            2.6 (0.5)        2.3 (0.5)        1.7 (0.7)       2.6 (0.4)       2.3 (0.6)       1.1 (0.7)
Monitor      2.4 (0.2)            2.8 (0.3)        2.4 (0.3)        1.8 (0.2)       2.6 (0.3)       2.6 (0.5)       1.9 b (0.7)
Delta        2.5 (0.2)            2.9 b (0.2)      2.5 (0.4)        2.0 (0.6)       2.8 (0.3)       2.8 (0.4)       1.3 (0.4)
Entries are means with standard deviations in parentheses. Respondents were asked how important each fairness item was. Response scale: Not at all Important (0), Somewhat Important (1), Important (2), and Essential (3).
Means marked with subscripts a and b are significantly different at p < .05 according to Bonferroni t-tests which correct for multiple comparisons.
The same patterns occurred with cultural variables asking if employees feel their unit accomplishes fairness in its decision making (Table 19). Fairness performance items were rated on a frequency scale of Never (0), Seldom (1), Sometimes (2), Often (3), and Always (4). When variables were compared with each other, neutrality was one of the most frequently achieved principles (Often) and influence was the least achieved (Sometimes). The similarity in rankings of fairness importance and performance principles is no surprise, as one would expect agencies to put the most effort towards those things they value. In addition, cognitive dissonance theory (Festinger, 1957) would suggest that the greatest value is placed on those things most consistently and easily achieved, setting up a reinforcing feedback loop. These results again support the primacy of neutrality over accountability in these agencies. But do agencies differ in terms of how they balance neutrality and accountability?

Table 19. Means, standard deviations and significant differences between agencies on fairness performance variables.
Agency      Total fair process   Fair process     Representation   Influence       Neutrality      Courtesy        Fair outcome
            performance          performance      performance      performance     performance     performance     performance
HMNF         2.8 (0.5)            2.9 (0.6)        2.9 (0.5)        2.3 (0.7)       2.8 (0.5)       3.3 (0.5)       2.0 (0.4)
MDNR         2.8 (0.4)            3.0 (0.6)        2.7 (0.6)        2.2 (0.6)       2.9 (0.5)       3.4 (0.5)       2.1 (0.5)
Delhi        3.0 (0.3)            3.4 (0.4)        3.3 (0.4)        2.2 (0.9)       3.2 (0.2)       3.3 (0.4)       2.4 (0.2)
Monitor      3.4 (0.2)            3.8 (0.4)        3.5 (0.3)        2.8 (0.3)       3.6 (0.4)       3.8 (0.4)       3.1 (0.6)
Delta        3.1 (0.2)            3.5 (0.4)        3.3 (0.4)        2.2 (0.6)       3.4 (0.4)       3.7 (0.4)       2.3 (0.4)
Entries are means with standard deviations in parentheses. Respondents were asked how often their unit achieved the items. Response scale: Never (0), Seldom (1), Sometimes (2), Often (3), and Always (4).
In the original table, subscripts marked means that are significantly different at p < .05 according to Bonferroni t-tests which correct for multiple comparisons; the subscript markers are not legible in this copy.

The extent to which agencies balance neutrality and accountability can be seen in the ratio of the agency score on influence to the score on neutrality. Agencies with greater ratios give more importance to accountability compared to others. For the ratios of influence importance to neutrality importance, differences among agencies were very small (HMNF = .7, MDNR = .6, Delhi = .7, Monitor = .7, Delta = .7). This same lack of agency difference was present in the ratios of influence performance to neutrality performance (HMNF = .8, MDNR = .8, Delhi = .7, Monitor = .8, Delta = .6). It appears that the emphasis on neutrality occurs in many different natural resource agencies, consistent with studies documenting cultures of bureaucracy and expertise among natural resource managers in agencies like the Forest Service (Kaufman, 1960). The reforms of the 1960s and 1970s may not have gone far enough, because neutrality is still a dominant value in comparison to citizen influence.
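The reported ratios follow directly from the agency means in Tables 18 and 19; the following back-of-the-envelope sketch simply recomputes them from the published means and is not the original analysis code.

```python
# Agency means taken from Table 18 (importance) and Table 19 (performance):
# (influence, neutrality) pairs for each agency.
importance = {"HMNF": (1.7, 2.5), "MDNR": (1.6, 2.6), "Delhi": (1.7, 2.6),
              "Monitor": (1.8, 2.6), "Delta": (2.0, 2.8)}
performance = {"HMNF": (2.3, 2.8), "MDNR": (2.2, 2.9), "Delhi": (2.2, 3.2),
               "Monitor": (2.8, 3.6), "Delta": (2.2, 3.4)}

for label, scores in [("importance", importance), ("performance", performance)]:
    for agency, (influence, neutrality) in scores.items():
        # e.g. HMNF importance: 1.7 / 2.5 = 0.68, which rounds to the reported .7
        print(f"{agency} influence/neutrality {label} ratio: {influence / neutrality:.1f}")
```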
Even though there appeared to be an overall cultural emphasis on neutrality, were there any agency differences in absolute values of variables? The patterns of variation among agencies on cultural beliefs about citizen knowledge and citizen self-interest were similar to each other (Table 17). Monitor and Delta townships had the highest citizen knowledge scores (0.9, 0.6) and the lowest citizen self-interest scores (-0.5, 0.0). There were no differences among Huron-Manistee, MDNR, and Delhi Township on citizen knowledge (0.0, -0.1, 0.1) and citizen self-interest (1.1, 1.2, 1.2). These results suggest that Monitor and Delta Township supervisors had higher regard for citizens as participants in decision making. Is this conclusion supported by patterns in the fairness variables?

Among fairness importance variables, there were only two significant (p < .05) differences between agency means (Table 18). Delta Township commissioners rated fair process importance significantly higher than did MDNR employees (2.9 vs. 2.4). In addition, Monitor Township commissioners rated fair outcome importance higher than did HMNF employees (1.9 vs. 1.2). Although not significant, the same relative agency rankings held for total fair process importance, influence importance, neutrality importance, and representation importance.

In contrast to the fairness importance variables, there were many significant differences between agencies on fairness performance variables (Table 19). However, the pattern of agency differences in fairness performance matched the patterns in fairness importance. On most fairness performance variables, HMNF, MDNR, and Delhi were not different from each other, and Monitor and Delta were also similar to each other. Monitor had higher fairness performance scores than HMNF and MDNR on all variables, and the differences were significant for total fair process performance, fair process performance, representation performance, neutrality performance, and fair outcome performance. Delta had higher scores than MDNR and HMNF on all variables except influence performance, and was significantly higher on neutrality performance and fair outcome performance.

These patterns in the fairness variables, which show Monitor and Delta with higher fairness importance and performance rankings than the MDNR and HMNF, match the patterns which show that Monitor and Delta commissioners have higher rankings of citizens as knowledgeable and lower rankings of citizens as self-interested when compared to the HMNF and MDNR. This suggests that there are consistent cultural differences among agencies. What might explain these differences?

Agency culture could be affected by any of a number of factors. One explanation would be based on level of government. The HMNF and MDNR were generally lower than Monitor and Delta in regard for citizens and fairness. Perhaps the state and federal government agencies had similar cultures because they are staffed by permanent civil servants trained in natural resource management. As a result, they rate fairness and involving citizens as being less important than other decision making criteria such as the perceived good of the resource. By comparison, planning commissions are composed of appointed or elected local citizens. Their point of view may be more similar to citizens' in terms of emphasis on fairness. Since they are also locals, they may believe other citizens possess the knowledge and motivation to participate. The only exception to this level of government pattern would be Delhi Township, which had cultural ratings very similar to HMNF and MDNR. The difference between Delhi and the other planning commissions may simply be a result of the specific makeup of the Delhi planning commission. A larger sample of townships would help determine if Delhi is different from the usual commission.

The third agency-related context variable was perceived participation resources.
Employees were asked how many resources (money, staff, training) the agency possessed for public participation. Resources were measured with a response scale of Almost None (0), A Little (1), Some (2), and A Lot (3). In Chapter 4 it was predicted that greater resources should lead to citizen evaluations that were more fair. Patterns in resource availability showed that Delhi reported the most resources (Mean = 1.9, St.Dev. = .6); Monitor (Mean = 1.8, St.Dev. = .5) and Delta (Mean = 1.7, St.Dev. = .6) were next, then HMNF (Mean = 1.5, St.Dev. = .7), and finally MDNR (Mean = 1.2, St.Dev. = .7). Delhi was significantly greater than HMNF and MDNR (p < .05). This pattern of perceived resources, in which Delhi reports the most, is in stark contrast to the pattern of cultural variables, on which Delhi has the lowest scores. A possible explanation is that when respondents were asked about resources, they responded in terms of how much they thought they should have. Thus Delhi commissioners, who put low importance on involving citizens, would feel they have plenty of resources, whereas employees of other agencies, who want to conduct extensive participation, would feel their resources were inadequate.

Cross-agency variation on situation variables

Having shown that agencies do vary on agency factors, do they also vary in the characteristics of the decision making situation? There were several aspects of the decision making situation which potentially influenced citizen evaluations of fairness. These were the participation technique used, the amount of conflict present, the extent of power equality among citizens, and the prior relationships among citizens and between citizens and decision makers. How did these variables vary across agencies?

The first aspect of the situation was participation technique. Participation technique may strongly influence a citizen's experience. One long-time participant described the Friends of the Forest, a Huron-Manistee National Forest advisory board:

It gives the opportunity for various interest groups to debate issues and perhaps find common ground. Gives an opportunity to get to know forest managers and to have an input in management decisions. Normally I feel very comfortable attending "Friends" meetings.

This Friends of the Forest participant liked the advisory board technique and felt it met important needs of building relationships, debating issues, finding common ground, and influencing forest management. This respondent was clearly satisfied with this technique. However, there are many other ways to involve citizens. The possible ways citizens could participate with each agency were uncovered during the compilation of the mailing lists used in this study (Table 20). All agencies had records of some citizens who had written in letters, as well as others who had called a decision maker or visited the office in order to have a one-on-one conversation. However, there were also agency-specific techniques. The planning commissions held their regular meetings in public, and citizens were welcome to attend and speak if they wished to comment on the decisions being discussed. These public meetings were essentially formal public hearings. The planning commission in Monitor Township also held an all-day visioning session designed to help update the township master plan.
Table 20. Participation techniques used by the agencies.
Agency and name of participation effort                                      Participation technique
Huron-Manistee National Forest (HMNF)
  Letters                                                                    written
  Phone call, office visit                                                   one-on-one
  Forest Plan Revision                                                       meeting
  Friends of the Forest                                                      advisory board
Michigan Department of Natural Resources, Forest Management Division (MDNR)
  Letters                                                                    written
  Phone call, office visit                                                   one-on-one
  Open house / Compartment reviews                                           hearing
  Presque Isle Management Plan                                               meeting
  Menominee River Management Plan                                            meeting
  Lake Superior Forest pilot project                                         meeting
  Pere Marquette Friends of the Forest                                       advisory board
  Pigeon River Country Advisory Council                                      advisory board
Delhi Township planning commission (Delhi)
  Letters                                                                    written
  Phone call                                                                 one-on-one
  Planning commission hearing                                                hearing
Monitor Township planning commission (Monitor)
  Letters                                                                    written
  Phone call                                                                 one-on-one
  Planning commission hearing                                                hearing
  Master plan visioning workshop                                             meeting
Delta Township planning commission (Delta)
  Letters                                                                    written
  Phone call                                                                 one-on-one
  Planning commission hearing                                                hearing

The Huron-Manistee National Forest in Michigan had several ways citizens could be involved in addition to written letters and one-on-one conversations. The Friends of the Forest was established as a response to challenges of the Huron-Manistee's forest plan. As part of the settlement, the HMNF promised to give biyearly updates on progress towards achieving the plan. The group of appellants came to function as an advisory board, although it was an open meeting and many other individuals and groups chose to participate regularly. In addition to the Friends of the Forest, four years ago the Huron-Manistee began its forest plan revision process by holding 30 public meetings. This was followed by a series of in-depth workshops in which all participants were given opportunities to share their concerns about the plan.

Forest planning by the Michigan Department of Natural Resources' Forest Management Division (MDNR) offered a variety of participation techniques. Citizens sometimes sent in written comments or met MDNR personnel one-on-one in their offices or in the field. Open houses were often conducted as a prelude to a compartment review. Compartment reviews were hearings in which management decisions were made for the next 10 years for a small area. An initial evaluation conducted for the MDNR found mixed support for this participation technique (McDonough & Thorburn, 1997), suggesting a need to study it in more detail. The MDNR also conducted several local planning efforts. These included the Presque Isle Management Plan, the Menominee River Management Plan, and the Lake Superior State Forest pilot project. These projects usually consisted of a series of meetings in which the same people met to negotiate management decisions for a specific forest area. Thus, except for a smaller geographic scale, they were similar in design to the Huron-Manistee forest plan revision workshops.

The MDNR also had two long-running participation efforts. One was the Pere Marquette Friends of the Forest, which was modeled after the Huron-Manistee Friends of the Forest. The other was a very long-term advisory board: the Pigeon River Country Advisory Council. It was unique in that it was an appointed board that met regularly for 25 years for the purpose of making policy recommendations to the MDNR about the Pigeon River area.

Differences between agencies in the types of participation techniques used can be partly explained by the nature of the resource decisions they make.
Planning commissions make decisions that affect small numbers of people in very specific areas. Decisions are often made very quickly. It may have been for these reasons that they chose to use formal public hearings. The MDNR also chose the formal hearing type when it did compartment reviews because it, too, was approaching the management task as an issue involving a small location and a small set of stakeholders. When issues were of broader geographic or temporal scope, such as revising the township master plan or the forest plan, one workshop or a series of workshops was the approach often chosen. Thus the HMNF, the MDNR, and Monitor Township all conducted discussion-oriented meetings. Finally, agencies doing larger-scope plans needed to have the plans monitored and updated. The advisory board, which allows the same group of people to discuss the plan repeatedly, was the choice of the HMNF and MDNR. In conclusion, partly because of the nature of the decisions they make, the HMNF and MDNR used a higher proportion of discussion-based, relationship-building techniques than did the planning commissions. Of the planning commissions, Monitor Township was the only one which experimented with a workshop-type meeting. The rationale behind these choices of technique, while logical, does not necessarily mean that the technique was the best or only possible choice. Some participation techniques may be used simply out of tradition even though other, more effective possibilities exist.

In addition to participation technique, the decision making situation could vary in terms of the amount of conflict between citizens. Conflict was measured with four items and a response scale of Strongly Disagree (-2), Disagree (-1), Neither Agree Nor Disagree (0), Agree (1), and Strongly Agree (2). For all agencies, conflict existed at least some of the time because the means were higher than zero (Table 21). This is reasonable because citizens will be more motivated to participate when important decisions are at stake and an opponent with a different interest exists. Thus some level of conflict can often be expected in natural resource decision making that attracts citizen involvement. However, the amount of conflict definitely varied among agencies. The Huron-Manistee had the highest level of conflict (0.7), and this was significantly higher than the amount for MDNR (0.4), Delhi (0.4), and Monitor (0.2). The Huron-Manistee also had more conflict than Delta (0.5), although the difference was not significant. One possible reason for the higher conflict rating of the HMNF was that it had fewer routine types of interactions. For example, many citizens attending a planning commission meeting are there to get a building approved or a site plan changed. For those types of meetings, the conflict might be relatively low. The MDNR also had relatively routine open houses in which some citizens came and just collected information or shared some opinions. There may not have been highly charged discussions. In contrast, HMNF participation was often organized to specifically address a controversial management decision. The regular advisory board meetings were attended by interest
group representatives who tend to be very expressive of strongly held views. While these types of meetings and participants also existed for the planning commissions and the MDNR, the HMNF may have had a higher ratio of them.

Table 21. Means, standard deviations and significant differences between agencies on decision making situation variables.
Agency      Conflict*        Equal power*     % Decision maker     % Citizen          % Opposed
                                              relationship**       relationship**     relationship**
HMNF         0.7 a (0.6)     -0.5 a (0.9)      69.3 a (39.4)        56.0 (37.9)        40.4 (45.4)
MDNR         0.4 b (0.7)     -0.4 (0.8)        71.6 a (36.3)        63.1 a (36.8)      52.7 (44.0)
Delhi        0.4 b (0.8)     -0.4 (0.8)        41.8 b (43.8)        41.9 b (41.7)      27.9 (44.7)
Monitor      0.2 b (0.8)     -0.1 b (1.0)      45.5 b (38.3)        36.3 b (38.9)      44.4 (46.4)
Delta        0.5 (0.7)       -0.3 (0.9)        47.3 b (44.1)        44.7 b (42.4)      32.6 (41.6)
Entries are means with standard deviations in parentheses.
*Respondents were asked their agreement with statements indicating the existence of conflict or power equality. Response scale: Strongly Disagree (-2), Disagree (-1), Neither Agree Nor Disagree (0), Agree (1), and Strongly Agree (2).
**Respondents were asked how many people (decision makers, citizens, opposed citizens) were present and with how many of those they had prior relationships. Dividing prior relationships by total present gave percent values.
Means marked with subscripts a and b are significantly different at p < .05 according to Bonferroni t-tests which correct for multiple comparisons.

The extent of power equality among citizens was the third aspect of the decision making situation. Citizens often perceived differences in power and attributed differences in process fairness to these power differences. One wrote:

I am concerned about preserving aesthetic values... specifically providing peace and quiet through control of disruptive activities, i.e. snowmobiling through the forest. Apparently the snowmobile lobby is just too strong. Why bother making recommendations if they are ignored?

For this citizen, as for many others, political power meant that citizen input was not considered seriously, a violation of the fair process principle of influence. In this study, power distribution among citizens was measured with four items on the same agreement response scale used for the conflict items. All agencies had negative means, suggesting that power tended towards being unequal (Table 21). The only significant difference between agencies was that HMNF had less power equality (-0.5) compared to Monitor Township (-0.1). Looking at the non-significant trends, the pattern is similar to that of conflict. HMNF has the highest power inequality (-0.5); Delta (-0.3), MDNR (-0.4), and Delhi (-0.4) are in the middle; and Monitor is the lowest (-0.1). The fact that agencies high on conflict were also low on power equality suggests that the variables are related. Theoretically, power equality can influence the ease with which conflict is resolved, and inequality can exacerbate existing conflict (Pruitt, 1998). This suggests equal power may influence the level of conflict.

The final set of situation variables concerns relationships among participants in decision making. The theoretical framework predicted that the more relationships of respect exist among parties to a conflict, the greater the likelihood they will resolve their differences. This would lead to perceptions of greater fairness. The first relationships variable, % decision maker relationship, was calculated by dividing the number of agency employees with whom the respondent had a prior relationship of respect by the total number of employees present. All agencies had means over 40%, which showed significant prior relationships and suggested that many of the citizens had participated more than once (Table 21).
The HMNF and MDNR were not significantly different from each other, and the planning commissions were not different from each other. However, the HMNF (69.3%) and MDNR (71.6%) had significantly more relationships between citizens and decision makers than did the planning commissions (41.8%, 45.5%, 47.3%). A similar pattern held for the percent of citizens with whom the respondent had a prior relationship. The HMNF and MDNR had more prior citizen-citizen relationships than did the planning commissions, although the difference was only significant for the MDNR. Finally, the percent of relationships with citizens who were opposed to the respondent's interests did not show any agency differences. Taken together, these results suggest that the HMNF and MDNR do a better job of helping citizens and decision makers get to know each other and feel respect for each other. This is no surprise, as the HMNF and MDNR use participation techniques such as discussion meetings and advisory boards to a much greater extent than do the planning commissions. These participation techniques facilitate the building of relationships among citizens and between citizens and decision makers.

Cross-agency variation on citizen characteristics variables

Not only can the decision making situation differ, the citizens judging the decision can also differ in important ways. In this study, four types of citizen characteristics were measured: prior involvement in decision making, age, gender, and formal education. Citizen experiences with decision making were assessed in two ways. The first measured involvement with the agency, and the second measured involvement in politics and natural resource decision making in general. Agency involvement was calculated by adding up the total number of times the person was involved with the agency and dividing it by the total potential amount of involvement. This gave a percentage that could be compared among agencies (Table 22). The results showed that MDNR had significantly higher agency involvement levels (35%) than the planning commissions and HMNF. Delhi (18.8%), Delta (19.2%), Monitor (15.2%), and HMNF (17.5%) all had essentially the same agency involvement levels. These results suggest that the MDNR tends to keep the same people involved repeatedly, while the other agencies bring in new participants.

The level of general involvement was calculated by adding up how many of a set of eleven general political and natural resource decision making activities the citizen had done and dividing by the total possible. General involvement results showed that MDNR (49.7%) was significantly higher than the HMNF (44.2%), Delhi (22.5%), Monitor (21.7%), and Delta (21.5%). The HMNF was also significantly higher than the planning commissions, but the commissions were not different from each other. This result demonstrates that the MDNR and the HMNF attract citizens who show a higher level of civic involvement and natural resource involvement than do the planning commissions. The higher level of natural resource involvement is not surprising, since citizens attending planning commissions may be motivated by the desire to save their neighborhood, not by general environmental concerns. The barriers to local participation are also not as great, making it easier for less experienced people to participate. For example, planning commission hearings occur in the evening near citizens' homes.
HMNF and MDNR meetings often occur during the day at the agency office, which may be several hours' drive from where citizens live. Thus it is easier for a citizen who is not highly experienced in general political participation to attend a planning commission meeting.

Table 22. Means, standard deviations and significant differences between agencies on citizen characteristics variables.
Agency      Agency involvement   General involvement   Age           Gender        Education
HMNF         17.5 (21.3)          44.2 (23.2)           5.0 (1.3)     0.1 (0.4)     3.5 (1.2)
MDNR         35.0 (23.9)          49.7 (20.7)           4.9 (1.4)     0.1 (0.3)     3.5 (1.2)
Delhi        18.8 (16.2)          22.5 (15.5)           4.7 (1.3)     0.4 (0.5)     3.3 (1.2)
Monitor      15.2 (14.8)          21.7 (15.3)           5.0 (1.4)     0.3 (0.4)     3.4 (1.3)
Delta        19.2 (17.2)          21.5 (14.6)           5.0 (1.3)     0.3 (0.5)     3.7 (1.2)
Entries are means with standard deviations in parentheses. Agency involvement and general involvement are percentages of the total possible participation in the agency and in general politics and natural resource decision making. Age is measured in 10-year increments, so 5 equals an age from 50 to 59. Gender is coded 0 for male and 1 for female. The education response scale is Less than high school (1), High school graduate (2), 2-year degree (3), 4-year college degree (4), and Graduate or professional degree (5).
In the original table, subscripts a, b, and c marked means that are significantly different at p < .05 according to Bonferroni t-tests which correct for multiple comparisons; the subscript markers are not legible in this copy.

These results for involvement are also consistent with the results for gender. A higher proportion of participants in the planning commissions were women (30 to 40%) when compared to the HMNF and MDNR (10%). The gender differences might be explained by differences in subject matter. Historically, women have been excluded from forest management decision making, but have been very involved in decision making related to home and neighborhood.

The final citizen characteristics were age and education. Across all agencies, the average age of participants was between 40 and 60 years old. The average level of education was between a two-year and a four-year college degree. This suggests that participants in natural resource decision making tended to be older and more educated. However, there were no differences in age or education of citizens between agencies.

Cross-agency variation on fairness evaluation and consequence variables

The final set of variables which might express cross-agency variation were citizen fairness evaluations and trust in decision makers. Several variables measuring the fairness of the decision making process were constructed from 16 items. Total fair process evaluation averaged the scores on all these items. Fair process evaluation averaged the scores on two of the 16 items which asked about general fair procedures. Representation evaluation averaged the scores on four items about the convenience of participation and the involvement of diverse stakeholders. Influence evaluation used items measuring the sense of control over decision making processes and outcomes and the feeling that citizen input was seriously considered. Neutrality evaluation combined items about lack of bias and use of accurate information. Finally, courtesy evaluation was calculated from two items about polite treatment and answering of questions. There was one fair outcome variable, which used five items that asked about the distribution of benefits and costs and whether the outcome was consistent with the citizen's values.
All of the fairness variables had agreement response scales of Strongly Disagree (-2), Disagree (-1), Neither Agree Nor Disagree (0), Agree (1), and Strongly Agree (2). The patterns in fair process evaluations across agencies showed that HMNF, MDNR, Monitor, and Delta were all evaluated about the same (Table 23). In particular, the HMNF and MDNR were always very similar. However, Delhi often received significantly less fair evaluations. The biggest difference was for influence evaluation: Delhi received -0.3, while HMNF got 0.2, MDNR got 0.0, and Monitor and Delta received 0.1.

Evaluations of outcome fairness showed a different pattern. HMNF received less fair outcome evaluations (-0.2) than did Delhi (-0.1), but this difference was not statistically significant. Monitor got a statistically higher fair outcome evaluation (0.2) than did HMNF (-0.2), and also showed a non-significant trend towards being higher than the other agencies. The patterns in the fairness variables also carried over to the consequence of trust. Delhi had significantly lower trust scores (-0.1) than did HMNF (0.2), Monitor (0.3), and Delta (0.1). Monitor had the highest trust score, although the difference was not statistically significant.

Table 23. Means, standard deviations and significant differences between agencies on citizen evaluation and trust variables.
Agency      Total fair process   Fair process     Representation   Influence        Neutrality       Courtesy         Fair outcome     Trust in
            evaluation           evaluation       evaluation       evaluation       evaluation       evaluation       evaluation       decision makers
HMNF         0.3 a (0.7)          0.4 (0.9)        0.4 (0.8)        0.2 a (0.9)      0.2 (0.9)        0.8 a (0.6)     -0.2 a (0.8)      0.2 a (0.9)
MDNR         0.3 (0.7)            0.5 a (0.9)      0.5 (0.9)        0.0 (0.9)        0.3 (0.9)        0.8 a (0.7)     -0.1 (0.8)        0.1 (0.8)
Delhi        0.1 b (0.8)          0.1 b (1.0)      0.3 (0.8)       -0.3 b (0.9)      0.0 (1.1)        0.4 b (1.0)     -0.1 (1.0)       -0.1 b (1.0)
Monitor      0.4 (0.8)            0.5 (0.9)        0.7 (0.8)        0.1 (1.0)        0.3 (0.9)        0.7 (0.9)        0.2 b (0.8)      0.3 a (0.9)
Delta        0.4 (0.8)            0.4 (1.0)        0.7 (0.8)        0.1 (1.0)        0.2 (1.0)        0.7 (0.8)       -0.1 (0.9)        0.1 a (0.9)
Entries are means with standard deviations in parentheses.
Means marked with different subscripts are significantly different at p < .05 according to Bonferroni t-tests which correct for multiple comparisons.

What might explain these differences and similarities? The theoretical framework proposes that differences in decision making context should explain citizen evaluations. This theoretical prediction is tested in the next section by comparing patterns in context, evaluation, and consequence variables.

Testing the theoretical framework of perceived fairness

The theoretical framework proposes three types of context factors which potentially affect citizen fairness judgements. The context factors include those related to the agency making the decisions, those related to the decision making situation, and those describing the characteristics of the citizen participants. To what extent do these factors predict perceived fairness at the agency level?

The first set of context factors concerns the agency making the decisions. One difference among agencies was the level of government. HMNF operated at the federal level, MDNR at the state level, and the three planning commissions at the local level. In Chapter 4 it was predicted that level of government would be important, but the direction of influence was hard to hypothesize. Considerations related to notification and representation suggested the local levels would be judged as more fair.
Considerations of legislative mandates for citizen participation suggested the local and the federal levels would be perceived as most fair. Finally, the notion that local level issues would be more involving to citizens suggested that state and federal levels would be seen as most fair. Which of these theories did the data support?

Consistent with these competing predictions, the results of this study do not provide a clear indication of the effect of level of government on fairness. When means of evaluation variables calculated for each level of government were compared, there were two significant differences (Table 24). In both, the federal level had the highest fairness and the local level the lowest fairness.

Table 24. ANOVA analysis of differences between government levels in fairness evaluations.
                                   Mean score                        F-test
                                   Federal    State     Local        DF          p-value
Total fair process evaluation       .34        .33       .26         (2, 766)    .349
Fair process evaluation             .42        .47       .30         (2, 733)    .134
Representation evaluation           .42        .46       .52         (2, 738)    .367
Influence evaluation                .20 a     -.03 b    -.09 b       (2, 707)    .001
Neutrality evaluation               .23        .30       .14         (2, 731)    .219
Courtesy evaluation                 .84 a      .81 a     .55 b       (2, 705)    .000
Fair outcome evaluation            -.19       -.15      -.07         (2, 697)    .212
Different subscripts in the same row show significantly different means at the 0.05 level according to Bonferroni posthoc tests.

The first fairness variable showing a significant difference was influence evaluation. The lower influence evaluation at the local level was consistent with the prediction that local levels are perceived as less fair because the issues affect citizens more personally. With personally involving issues, influence on the process and outcome would be more important, so citizens would react more negatively to any perceived lack of fairness. However, the low score on influence evaluation for the MDNR does not fit that explanation, because MDNR decisions are as distant from citizens as are HMNF decisions. The second significant difference was for courtesy evaluation. Perhaps courtesy is lower at the local level because the more intense issues inspire the decision makers to be tougher. Another possible reason is that HMNF and MDNR decision makers are employees, and courtesy to citizens is easily noticeable by their supervisors. However, planning commissioners are unpaid volunteers, so they can more easily get away with being rude. In conclusion, while level of government may be a very important variable, this analysis does not provide straightforward conclusions.

Agency culture is another context factor. An agency where employees hold citizen abilities and motivations in low regard should also be evaluated more negatively by citizens. In addition, if agency employees feel fairness has low importance, citizens may also perceive less fairness. Finally, agencies which evaluate themselves as achieving less fairness may also be evaluated as being less fair by citizens. These predicted relationships were tested with bivariate correlations between citizen evaluation variables and agency culture and resources variables (Table 25). Because the correlations were at the agency level, the sample size was 5, making it very difficult to achieve statistical significance. The discussion therefore focuses on correlations with p-values less than .3 in order to identify non-significant trends which would need to be confirmed with a larger sample of agencies or management units. The overall pattern largely confirmed the predicted relationships.
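To illustrate how agency-level correlations and their p-values behave with only five cases, the sketch below pairs two of the agency means reported earlier in this chapter; the pairing is chosen purely for illustration and is not a reproduction of any specific entry in Table 25.

```python
import pandas as pd
from scipy.stats import pearsonr

# Agency-level means (N = 5): citizen knowledge from Table 17 and
# representation evaluation from Table 23.
agency_means = pd.DataFrame({
    "agency": ["HMNF", "MDNR", "Delhi", "Monitor", "Delta"],
    "citizen_knowledge": [0.0, -0.1, 0.1, 0.9, 0.6],
    "representation_evaluation": [0.4, 0.5, 0.3, 0.7, 0.7],
})

r, p = pearsonr(agency_means["citizen_knowledge"],
                agency_means["representation_evaluation"])
# With only five agencies, even a sizeable r tends to carry a large p-value,
# which is why the discussion screens on p < .3 rather than p < .05.
print(f"r = {r:.2f}, p = {p:.3f}")
```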
Of a total of 112 correlations between agency culture and fairness evaluations, 41 had p-values less than .3, and of these 41, only 2 were in directions opposite of predicted. An interesting pattern in the correlations was that representation evaluation and fair outcome evaluation were the aspects of fairness most consistently related to agency culture. Representation may be related most strongly because it can be readily judged by citizens, and different citizens are likely to reach the same evaluation. Judging representation involves looking around the room and noting who is, and who is not, present. Because citizen judgements are based on easily observed objective fact, they may be more accurate and thus be more highly correlated with the amount of effort an agency actually expends on citizen involvement. The same argument would hold for fair outcome evaluation. The decision outcome may be publicly announced, facilitating citizen comparison of the relative outcomes of others. However, neutrality, influence, and courtesy are more subjective. Citizen judgements will be more influenced by personal and situation characteristics in addition to what an agency actually does.

[Table 25. Agency level correlations and p-value significance levels of citizen fairness evaluations with agency culture and resources variables (N = 5 agencies). In the original, correlations marked * had p-values < .05, correlations in bold italics showed non-significant trends in the predicted direction, and underlined correlations were opposite of predicted. The body of this landscape-format table is not legible in the scanned copy.]

The third agency factor was participation resources. Agencies with more resources for conducting citizen participation should be able to do a better job, leading to greater perceived fairness. However, this variable predicts fairness poorly. In fact, agencies with more perceived available resources show a trend of being evaluated as less fair by citizens (Table 25). These results suggest that either the amount of resources for participation is not related to fairness, or there is a difference between actual resource availability and employee perception of resource availability.
Because participation resources was measured in a subjective way by asking employees how many resources were available, their answers may have been affected by how much they thought there should have been. Thus employees in agencies which value participation may have been critical of the resources available because they wanted more. In actual fact, their agencies may have had more resources than other agencies which did not value participation. If this interpretation is correct, then more participation resources may actually increase perceived fairness, but this study found an opposite result because it assessed employee perceptions. In sum, the role of participation resources is uncertain. The next set of context factors related to the decision making situation. One aspect of the situation was the participation technique used. Techniques emphasizing discussion and repeated interaction were predicted to lead to greater fairness. Correlations between the average score on participation technique for each agency and average agency fairness evaluations confirmed the prediction (Table 26). In particular, 124 agencies with more discussion-based participation were seen as being more neutral (r = .80, p = .105) and more courteous (r = .77, p = .128), as well as having a generally fair process (r = .73, p = .163). There was a weaker trend towards citizens feeling they had more influence (r = .45, p = .444). There was no relationship between participation technique and fair outcome evaluation. This is logical because the technique used is an aspect of the process, not the outcome. Another situational factor was the amount of conflict. Conflict showed mixed patterns among the fair process variables because there were an equal number of positive and negative relationships, all of which were weak (Table 26). The only reasonably strong correlation was with fair outcome evaluation (r = -.79, p = .113), suggesting that as conflict increases, fair outcomes decrease. The fact that conflict is most strongly associated with outcome is consistent with the idea that conflict often arises out of differences in underlying interests. Usually a person’s interests are based in achieving a particular outcome. Thus when conflict is more intense, differences in interests are probably also more extreme, and it will be harder to find a fair outcome. Equal power also had small fairness correlations at the agency level, although they were more consistent than those of conflict. With the exception of courtesy evaluation, equal power was associated with greater fairness. It was most strongly related to representation evaluation (r = .78, p = .122) and fair outcome evaluation (r = .95, p = .012). The correlation with fair outcome may be statistically significant because the powerful can often get the outcome they want at the expense of those with less power. The final situation characteristics were the prior relationships among participants. It was predicted that if citizens had more prior relationships with decision makers, with 125 .AVJUWQ-U~IW~W W) .1.” Z» V.rd\n\~w.--wtfi Iv \V 4 . 4 v r 1.. HR a. H :. F—Av—:.~:~—m za—B V.:fi::.=——u>rv f..f..pv——L:wm :avN_——ch rfl—fvxiov— avovz-wrvw‘mp—Lmv ov=~uw>la —v:~w .I:Av.-~w~rvtnv“u -Aumh~wnvl ND) VN Ava» v-m< Vm .v\h- rNt F—I demt *JF i . . - . . » ~ . r . . 60862: :0 0:50:50 2: macaw—2:00 005—2095 .:2626 02082: 0:: E £0.22: Euoafiwmm-co: 26am Eon 5 28:22.50 mo. v 029d ... :3: 9... V::. :m. :3. mm. :3. E.- :::. :N. 3:. :N. 2:. 4:. E08285 3:09: 3:. .mlml- m2. 2.. :::. :m. 2:. mm. 
Although the correlations were small, fair process was consistently associated with a greater percentage of relationships. This pattern was strongest for relationships with opponents, where correlations ranged from .60 to .85 and approached significance at the p < .05 level. When people have a relationship of respect with their opponent, they are more likely to cooperate, leading to a process that is more fair.

The final set of context factors describes the individual characteristics of citizen participants. The theory predicted that greater levels of involvement with the agency and in natural resource decision making in general would lead to greater fairness perceptions because the citizen would feel a greater sense of control. However, the correlations for agency involvement and general involvement showed no clear trends, suggesting that participation experience does not operate at the agency level. Age, education, and gender did have clear and strong agency level patterns. Age was highly correlated in a positive direction with all the fair process variables. Education was also positively correlated with fair process. Gender was negatively correlated with all the fair process variables except representation evaluation. This means that older people, more highly educated people, and men perceived more fair process. These are also the categories of people who have higher perceptions of control over government (Smith & Propst, 2001). A sense of greater control may translate into a feeling that the process is fair.

In conclusion, at the agency level, there is support for the impact of context on fairness. However, at the agency level, does fairness predict the consequence of trust in decision makers? Correlations support the proposition that increased fairness is associated with trust in decision makers. Specifically, total fair process evaluation (r = .93, p = .021), fair process evaluation (r = .87, p = .056), representation evaluation (r = .78, p = .124), influence evaluation (r = .89, p = .044), neutrality evaluation (r = .86, p = .064), courtesy evaluation (r = .67, p = .213), and fair outcome evaluation (r = .56, p = .330) were all positively correlated with trust in decision makers.

Summary

At the agency level, do agencies differ on important context variables? Do these differences affect citizen evaluations, and do those evaluations impact trust in decision makers?
The results reviewed in this chapter show that the agencies did differ on context variables. In addition, these differences in agency, situation, and citizen characteristics did generally correlate with citizen evaluations of fairness as predicted by the theoretical framework of perceived fairness. When an agency values fairness, reports that it achieves fairness, and believes that citizens have the knowledge to participate and are not self-interested, citizens also are more likely to report that the agency treats them fairly. A decision situation characterized by a discussion-based participation technique, low conflict, equal power, and prior relationships between citizens and their opponents was perceived as leading to more fair decision processes and sometimes more fair outcomes. Agencies with older, male, and highly educated citizen participants tended to be evaluated more fairly. When agencies were evaluated as being more fair, their decision makers were trusted.

In the next chapter, the causes of individual citizen evaluations are explored. The analysis focuses on situation and citizen characteristics, the context variables which were measured at the individual level. This is done to further understand why some citizens perceive greater fairness than others.

CHAPTER VIII: RESULTS AND DISCUSSION: CITIZEN AND SITUATION INFLUENCES ON FAIRNESS AND TRUST

The previous chapter examined the differences between agencies on context, evaluation, and consequence variables in order to determine if the relationships among variables at the agency level were consistent with the theoretical framework of Chapter 4. The results indicated that context factors related to the agency making the decisions explained variation in evaluation variables at the agency level. However, situation and citizen characteristics did a relatively poor job of explaining agency differences in citizen evaluations. This is not surprising, as situation and citizen characteristics were measured at the citizen level. At the citizen level, do context factors related to the situation and the citizens influence perceived fairness in the directions predicted by the theory? Do fairness evaluations increase trust in the decision makers? In this chapter, these questions are answered.

Citizen evaluations of fairness and trust in decision makers

Citizen evaluations of fairness are at the core of the theoretical framework proposed in Chapter 4. The framework proposes that the decision making context affects fairness evaluations and that these in turn affect the level of citizen trust in decision makers. It specifically proposes that fairness of the decision making process and the decision outcome increases trust in decision makers. Much prior research has shown that positive evaluations lead to positive outcomes for the agency (e.g., Tyler & Degoey, 1995).

In Chapter 6, factor analysis and reliability analysis led to the creation of five process fairness variables. Total fair process evaluation measured overall fairness by averaging all 16 fair process items. Fair process evaluation measured general fairness, but did so with only two fairness items ("The procedures used to make decisions were fair," and "Citizens were treated unfairly").
The access of all affected citizens to decision making was assessed with representation evaluation, the ability of citizens to influence process and outcomes was assessed with influence evaluation, the neutrality of the process was measured with neutrality evaluation, and the politeness of the agency was assessed with courtesy evaluation. Fair outcome evaluation was the one measure of the perceived fairness of the outcomes.

Bivariate correlations unambiguously supported the theoretical proposition that fairness is positively related to trust in the agency. The correlations with trust were 0.78 for total fair process evaluation, 0.65 for fair process evaluation, 0.49 for representation evaluation, 0.69 for influence evaluation, 0.78 for neutrality evaluation, 0.54 for courtesy evaluation, and 0.69 for fair outcome evaluation. All of these were significant at the p < .01 level. These correlations show that influence, neutrality, and fair outcomes have a larger impact on trust than do representation and courtesy.

Participation technique and evaluation variables

Now that the relationship of fairness with trust has been established, how do context variables impact fairness? There were four context variables related to the situation in which decision making took place. These were the participation technique used, the amount of conflict among citizens, the equality of power among citizens, and the amount of prior relationships among citizens and between citizens and decision makers.

The first variable, participation technique, measured the amount of interpersonal interaction. Ranked from lowest to highest, the techniques were written, one-on-one, hearing, meeting, and advisory board. In Chapter 4, the theoretical prediction was made that when citizens participated in decision making using a technique with more interaction, they would rate the decision making process and outcome as more fair. The correlations suggest a positive relationship between participation techniques and citizen evaluations of fair process, as predicted (Table 27). Participation technique had significant positive correlations with fair process evaluation (r = .09) and courtesy evaluation (r = .08), and all except one of the other citizen evaluation variables had positive, but not significant, correlations with participation technique. The connection between courtesy and technique is not surprising because one of the courtesy items was "citizens' questions were answered." A back-and-forth discussion would allow greater opportunity to answer questions, and the dynamics of friendly conversation would encourage politeness. A conversation also allows feedback, which can increase the sense of influence because citizens can see their comments being considered. As techniques become more group-based, citizens are also able to see the diversity of citizens involved and have a sense of the representativeness of the process. In addition, humans are fundamentally social creatures, and so enjoy personal interactions. It is not surprising that they would have more positive impressions of techniques which allow relaxed interactions. The preference for group-based techniques has also been found in prior research (Smith & McDonough, 2001).

Table 27: Correlations between citizen and situation context variables and the citizen evaluation and consequence variables.
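The bivariate statistics summarized in Table 27 are straightforward to reproduce. The sketch below is illustrative rather than the study's actual code: it computes Pearson correlations and p-values between an ordinal participation technique code (1 = written through 5 = advisory board) and a set of evaluation scales, and the DataFrame layout and column names are assumptions.

```python
# Illustrative sketch only; column names ("technique" and the evaluation scales) are hypothetical.
import pandas as pd
from scipy import stats

def technique_correlations(df: pd.DataFrame, evaluation_cols: list[str]) -> pd.DataFrame:
    """Pearson r and p-value of each evaluation scale with participation technique."""
    rows = []
    for col in evaluation_cols:
        pair = df[["technique", col]].dropna()  # pairwise deletion of missing responses
        r, p = stats.pearsonr(pair["technique"], pair[col])
        rows.append({"evaluation": col, "r": round(r, 2), "p": round(p, 3)})
    return pd.DataFrame(rows)

# Example with made-up data in the same layout as a citizen-level survey file:
df = pd.DataFrame({
    "technique":    [1, 2, 3, 4, 5, 3, 2, 4],
    "fair_process": [2.1, 2.4, 3.0, 3.2, 3.8, 3.1, 2.6, 3.5],
    "courtesy":     [2.0, 2.2, 2.9, 3.4, 3.9, 3.0, 2.4, 3.6],
})
print(technique_correlations(df, ["fair_process", "courtesy"]))
```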
Conflict and evaluation variables

The second aspect of the decision making situation is the amount of conflict. Conflict is predicted to reduce fairness evaluations because in a highly conflicted situation people will hold stronger and more divergent positions, making it harder to find outcomes that all perceive as being fair. If outcomes do not turn out as desired, the neutrality of the decision makers may be questioned. The correlation results strongly support the hypothesis because all seven evaluation variables are negatively correlated with conflict, and five of these correlations are significant (Table 27). In particular, conflict is significantly correlated (p < .05) with total fair process evaluation (r = -.11), fair process evaluation (r = -.08), neutrality evaluation (r = -.21), and fair outcome evaluation (r = -.22). These results indicate that as conflict in decision making increases, citizen satisfaction decreases. It does so both in terms of outcomes, because citizens do not get what they want, and in terms of the general fairness of the process and whether or not the process is seen as neutral and unbiased. The relative sizes of the correlations suggest that the outcome function is more important, particularly when one considers that one of the items measuring neutrality evaluation can be interpreted in terms of outcomes, i.e., "There is a lack of bias toward particular interests, groups, and persons." This is not surprising because conflict often arises out of competing interests, i.e., differing desired outcomes (Floyd, Germain & Horst, 1996).

Equal power and evaluation variables

The third aspect of the decision making situation is the extent of power equality among citizens. Participation theories are often presented in terms of power and power sharing, so it is logical to propose that decision making would be seen as less fair if power is less equal. As predicted, all seven citizen evaluation variables were significantly positively associated with equal power (Table 27). Specifically, equal power was positively correlated with total fair process evaluation (r = .43), fair process evaluation (r = .24), representation evaluation (r = .38), influence evaluation (r = .37), neutrality evaluation (r = .38), courtesy evaluation (r = .22), and fair outcome evaluation (r = .41). These correlations were the strongest of any context variable, suggesting that equal power among citizens is a key factor in achieving perceived fairness.
This is no surprise because power inequalities can lead to biased, unfair outcomes as well as processes which are not neutral because they give greater influence or access to certain people at the expense of others. Research on procedural justice found that unequal power situations enhanced the importance of fair procedures (Barrett-Howard & Tyler, 1986), implying that equal power situations were inherently more fair to begin with, so additional assurances of fairness were unnecessary. In conclusion, the literature as well as these results makes it clear that the distribution of power is a central element of the decision making context.

Prior relationships and evaluation variables

The final situation variables measure the prior relationships of respect among the participants. Because respect is an underlying principle of procedural justice (Lind & Tyler, 1988), if people already know each other in a relationship of respect, they should treat each other more fairly. This means that persons who claimed to have prior relationships with the decision makers and other citizens should view the process and outcome as more fair.

Although one would expect prior relationships to be an important variable, the results were mixed (Table 27). The percent of decision makers with whom the respondent had a relationship was significantly positively related to fair process evaluation (r = .10) and courtesy evaluation (r = .13), as well as positively, but non-significantly, related to four of the other evaluation variables. However, the percent of citizen relationships was significantly negatively related to fair outcome evaluation (r = -.13), as well as non-significantly negatively related to all but one of the other evaluation variables. The percent of opposed citizens with whom the respondent had a relationship had no significant correlations with evaluation variables. Perhaps the reason for the positive significance of relationships with decision makers lies in the fact that decision makers are the ones with control of the process and the outcome, so relationships with them are the most important to fairness judgements. It is unclear why the percent of citizen relationships was negatively related to outcome fairness. These results are counter to findings that relationships lead to more cooperative strategies (Pruitt, 1998), which should lead to more satisfactory outcomes. The results should be taken with caution in view of validity concerns with the prior relationships variables as outlined in Chapter 6.

Citizen characteristics and evaluation variables

The final type of context variable was the characteristics of the citizens evaluating the decisions. Because fairness evaluations are based on individual perceptions, it was expected that individual differences among people would lead to differences in their evaluations. It was predicted that males, people with more education, and people with more participation experience would perceive greater fairness. The results partially confirm the predictions (Table 27). As predicted, men and people with more education perceive greater fairness. Older people also gave more fair evaluations, particularly in terms of fair outcomes (r = .13). Women were more negative on total fair process evaluation (r = -.11), representation evaluation (r = -.10), influence evaluation (r = -.08), neutrality evaluation (r = -.08), courtesy evaluation (r = -.13), and fair outcome evaluation (r = -.10).
More educated persons were more positive on total fair process evaluation (r = .10), fair process evaluation (r = .10), influence evaluation (r = .14), neutrality evaluation (r = .08), and courtesy evaluation (r = .14). These results are consistent with previous findings from a more general sample of Michigan residents in which men and more highly educated people felt a greater sense of control over government (Smith & Propst, 2001), implying greater satisfaction with government decisions.

The results about the effect of prior participation experience were more mixed. A higher level of involvement with a particular agency was positively correlated with courtesy evaluation (r = .08), but negatively correlated with fair outcome evaluation (r = -.08). General level of involvement was negatively associated with representation evaluation (r = -.12) and fair outcome evaluation (r = -.12). General involvement also showed a non-significant trend towards negative correlation with the other evaluation variables. The negative evaluations of people with more experience are contrary to expectation. They suggest either that the more involved one becomes, the more critical one gets, or that the people who tend to get more involved do so because they are more critical in general. These results suggest that even though participatory experience increases the sense of control over government (Finkel, 1985), this does not necessarily mean citizens become more satisfied with the decisions or decision making processes.

Summary

The theoretical framework made a number of predictions, many of which were supported at the citizen level. There was strong support for the association of fairness evaluations with the consequence of trust in decision makers. In addition, a number of context factors were found to affect fairness evaluations. There was some support for the prediction that decisions made with participation techniques having more interpersonal interaction would be evaluated as more fair. Increased conflict was clearly related to less fair citizen evaluations, while equal power was associated with more fair citizen evaluations. As predicted, prior relationships between citizens and decision makers led to increased fairness. However, contrary to prediction, increases in citizen-citizen relationships were associated with less fairness. Finally, women, younger people, and people with less education evaluated their experience as more unfair. Contrary to prediction, people with more prior participation experience perceived less fairness.

In conclusion, these findings generally support the proposed theoretical framework. Context factors do have an impact on citizen evaluations of fairness. Fairness, in turn, is associated with greater trust in decision makers. In the next chapter, a multivariate test of the framework is conducted. Using stepwise analysis of covariance, the context variables are examined together in a set of multivariate regressions. The multivariate analysis allows the testing of all context variables together to see if they independently explain variation in fairness. The regressions also include agencies as dummy variables in order to examine the agency and citizen levels of variation in fairness at the same time. Finally, the equations can also directly test interactions among variables. As a result, a more realistic picture of the influences and consequence of perceived fairness is constructed.
CHAPTER IX: RESULTS AND DISCUSSION: THE COMBINED INFLUENCE OF AGENCY, SITUATION, AND CITIZEN FACTORS ON FAIRNESS AND TRUST

The previous two chapters sought to determine if context factors affected fairness and if fairness influenced trust in decision makers. This was done to test the theoretical framework proposed in Chapter 4. The framework was developed for the purpose of understanding why citizens perceive some decision making processes and outcomes to be more fair than others. Once these factors are better understood, agencies will be better able to adapt their decision making processes to the characteristics of the situation and citizens. Achieving processes and outcomes that are more fair should help to build the relationships of trust between citizens and decision makers that facilitate cooperative resource management.

In the previous chapters, relationships among variables were examined at the agency and citizen levels. Chapter 7 examined the framework at the agency level and showed that the agency level factors of government level and agency culture were influential in explaining cross-agency variation in fairness. Situation and citizen characteristics also explained variation in fairness at the agency level. Variation in fairness was also predictive of variation in the consequence of trust in decision makers. In Chapter 8, situation and citizen characteristics were correlated with fairness judgements at the citizen level. Situation factors like power equality, amount of conflict, participation technique, and prior relationships explained some variation in fairness. Citizen characteristics, like prior participation experience, gender, age, and education, also helped explain fairness. Furthermore, fairness was highly correlated with the consequence of trust. In conclusion, bivariate examinations of the data largely supported the theoretical framework at both the individual and the agency levels of analysis.

In this chapter, a multilevel, multivariate analysis technique is used. Multivariate approaches can determine which variables are most influential because all the variables are included in the same calculation. Multivariate analysis also allows a more direct test of the structure of the theoretical framework because context, evaluation, and consequence variables can be analyzed together. The proposition that context factors influence trust by first affecting fairness evaluations can be directly tested. Finally, multivariate approaches enable the modeling of interactions among independent variables. A multilevel approach is incorporated because the factors themselves operate at multiple levels. Chapter 7 showed that agencies do differ in fairness and that these differences can be partly explained by differences between the agencies. Chapter 8 showed that individuals also differ in fairness judgements, and that these differences can be explained by citizen level factors like the situation and the respondent characteristics. A complete picture of the relative importance of the different levels, and the interactions between the levels, can only be obtained through a multilevel analysis.

The multilevel, multivariate approach is used to answer the following research questions:
1) Does fairness mediate the effect of context factors on the consequence of trust?
2) Which context factors have the greatest influence on fairness?
3) How do context factors affect each other?
4) Can the differences between agencies in terms of fairness be explained by individual and situation factors, or are there true agency level factors that affect fairness?

Analysis overview

The mode of analysis used here was stepwise analysis of covariance. A multiple regression equation was estimated with a certain set of independent variables. Then another equation with some additional independent variables was calculated. For each equation, the amount of variation explained in the dependent variable was estimated by R², the coefficient of multiple determination. Subtracting the R² values gave the additional amount of variation explained by the variables which were added to the second equation. An F-test was performed to show if the change in R² was significantly different from zero. By adding sets of variables and observing changes in R², the relative influences of the different types of context variables were determined.

A multilevel dimension was added with dummy variables that represent the agencies (McClendon, 1994). A dummy variable is coded with zeros and ones. Since there were five agencies, four dummy variables were created. In this way each agency was uniquely determined. For example, respondents who participated in Delta Township received a one in the Delta variable and zeros in the Monitor, Delhi, and MDNR variables. Respondents who participated with the HMNF received a zero on all the agency dummy variables. When a dummy variable was included in a regression, the regression coefficient it received was equal to the difference between the mean value of the dependent variable for that agency and for the HMNF. If the coefficient was significant, the difference between that agency and the HMNF was significantly different from zero. Including dummy variables was therefore equivalent to conducting an analysis of variance with agency as the factor, because it tested for differences among agency means. Adding dummy variables was a simple way to examine agency level variation in fairness, because it tested to see if agencies had significantly different means for fairness.

Citizen level variables could also be used as regular independent regression variables. If a coefficient was positive and significant, then higher levels of the independent variable predicted greater fairness, after holding all other variables constant. Including citizen level variables changed the interpretation of the dummy variables. The citizen level variables acted as covariates, which meant that the agency averages were adjusted for the presence of any relationship between the covariate and the dependent variable. For example, if conflict was included in the equation with the agency dummy variables, then any differences between agencies existed after holding conflict constant. Thus differences between agencies must have been caused by something else besides differences in the amount of conflict.
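The sketch below illustrates this modeling strategy under stated assumptions: it is not the study's original code, the data are synthetic, and the variable names are hypothetical. It fits a dummy-variable-only model and a model with citizen level covariates added, then tests the R² change with an F-test for nested models.

```python
# Hedged sketch of the stepwise analysis of covariance; synthetic data, hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "agency": rng.choice(["HMNF", "Delta", "Delhi", "Monitor", "MDNR"], size=n),
    "conflict": rng.normal(size=n),
    "equal_power": rng.normal(size=n),
    "technique": rng.integers(1, 6, size=n),
    "fair_process": rng.normal(size=n),
})

# Model 1: agency dummy variables only.  Treatment coding with HMNF as the reference
# category gives each dummy coefficient the interpretation used in the text: the
# difference between that agency's mean and the HMNF mean.
m1 = smf.ols("fair_process ~ C(agency, Treatment(reference='HMNF'))", data=df).fit()

# Model 2: add citizen level covariates; the agency coefficients become differences
# in covariate-adjusted means.
m2 = smf.ols(
    "fair_process ~ C(agency, Treatment(reference='HMNF'))"
    " + conflict + equal_power + technique",
    data=df,
).fit()

print(f"R-squared change: {m2.rsquared - m1.rsquared:.3f}")
print(anova_lm(m1, m2))  # F-test: is the added explained variation different from zero?
```

On the actual survey data, the R² change and the F-test printed here would correspond to the model comparisons reported in the summary tables that follow.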
The stepwise analysis of covariance did not use all the variables from the previous chapters. Because the prior relationships variables had questionable validity and because many people left them blank, they were not included. Including them would have reduced the sample size by half. In addition, not all of the fair process variables could be included, because they were so closely related that they explained the same variation in the dependent variable of trust. This multicollinearity would pose problems for the interpretation of the variable coefficients because the coefficients would vary widely depending on whether the other related variables were included (Netter, Wasserman, & Whitmore, 1993). The solution was to use two fairness variables in separate analyses. The total fair process evaluation variable combined all the fair process items in a single index, thus capturing the shared influence of the different aspects of fair process. The fair outcome evaluation variable was also an index of outcome fairness items. Separate stepwise analyses of covariance were conducted with the two types of fairness to see how their relationships with the context variables and with trust differed.

The other variables not included were related to the agency. The agency level was represented by dummy variables, and if any other agency factors, such as level of government or agency culture, were included in the same equation, they became perfectly collinear with the dummy variables. The equation could not be solved. Thus the analysis statistically tested for differences in agency means after adjusting for all the citizen level variables. The pattern of differences in agency means could then be qualitatively compared with patterns of agency culture, participation resources, and level of government to see if they might explain the differences.

In conclusion, the following variables were included in the regression equations: agency dummy variables (Delta, Delhi, Monitor, MDNR), characteristics of the citizens (age, education, gender, general involvement), characteristics of the situation (participation technique, equal power, conflict), measures of fairness (total fair process evaluation, fair outcome evaluation), and a measure of consequence (trust). In the next section, the stepwise analyses of covariance which used these variables to answer the research questions are presented.

Testing the mediation of fair process

A central aspect of the theoretical framework of perceived fairness is that the influence of the context on trust in decision makers is indirectly channeled through perceptions of fairness in decision making. Context factors operate on a particular event which the citizen perceives and evaluates. Based on the evaluations, the citizen's level of trust in decision makers is modified. These considerations can be rephrased as the question, does fair process mediate the effect of context factors on the consequence of trust? Stepwise regression analysis can be used to answer this question.

The first set of regression models demonstrates that context variables have very little direct influence on trust after controlling for fair process. In the first model, total fair process evaluation is used as the only independent variable predicting trust. The model is significant and explains 64% of the variation in trust (Table 28). The second model adds the agency dummy variables, the four citizen characteristic variables, and three situation variables. Although the change in R² is significant (p = .017), adding these variables increases explained variation in trust by only 2%. The small direct influence of context on trust can also be seen in the independent variable standardized coefficients, or Beta values (Table 29). Only equal power and conflict have significant coefficients, and these coefficients are small (0.10, -0.06). These results confirm half of the mediation effect: fairness does influence trust, and context has very little direct effect on trust.

Table 28: Model summaries for stepwise ANCOVA with trust in decision makers as the dependent variable and total fair process evaluation as one of the independent variables.

Table 29: Standardized beta coefficients for stepwise ANCOVA with trust in decision makers as the dependent variable and total fair process evaluation as one of the independent variables.
The second part of the mediation is to test whether context has an influence on fairness. The influence of context on fairness can be seen in the next set of stepwise regressions in which fairness is the dependent variable. Four models were compared which contained different sets of variables (Tables 30 and 31). The third model is the most parsimonious because it explains almost the same amount of variation (Adj R² = 0.24) with fewer variables than the fourth model (Adj R² = 0.25). The third model contains seven independent context variables which have significant effects on fairness (Table 31). Thus context factors clearly have an effect on fair process.

Table 30: Model summaries for stepwise ANCOVA with total fair process evaluation as the dependent variable.

Table 31: Standardized beta coefficients for stepwise ANCOVA with total fair process evaluation as the dependent variable.

To summarize, to what extent does fair process mediate the impact of context on trust? This can be calculated using a technique analogous to the calculation of direct and indirect effects in path analysis (Alwin & Hauser, 1981; Pigozzi, pers. comm.). In path analysis, the indirect effect of a variable is calculated by multiplying the Beta coefficients which lie along the path.
In this situation, for example, the indirect effect of equal power on trust is the product of the Beta coefficient for equal power when fairness is the dependent variable (Table 31, Model 3) and the Beta coefficient of fairness when trust is the dependent variable (Table 29, Model 1). The product of the two Betas gives the amount of influence which is mediated by fairness. However, in this situation the combined influence of all the context variables on fairness needs to be calculated. This combined influence is captured in the R statistic, which is the multiple correlation. The R statistic is identical to the Beta coefficient when there is only one independent variable. By multiplying R statistics, the direct and indirect effects of a group of variables can be compared.

The regression with context variables as the independent variables and fair process as the dependent variable had an R of 0.506 (Table 30, Model 3). The regression with fair process as the independent variable and trust as the dependent variable had an R of 0.803. Thus the mediated influence of context on trust was R = .406 (.506 * .803). In contrast, the direct effect of the context variables on trust is equal to the change in R when the context variables are added as independent variables to an equation already containing fairness as an independent variable and trust as the dependent variable (Table 29). Without the context variables, the model with fair process as the independent variable and trust as the dependent variable had an R of .803. When the context variables were added, the model's R was .813, an increase of .010. These results show that the indirect (mediated) effect of context on trust (R = .406) was much greater than the direct effect (R = .010). As the framework proposes, fair process does appear to mediate the influence of context on trust.
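The bookkeeping in the preceding paragraph can be written out explicitly. The short sketch below only illustrates the arithmetic, using the R values reported in the text; it is not a formal mediation test.

```python
# Indirect (mediated) versus direct effect of context on trust, using the reported R values.
r_context_to_fairness = 0.506  # context variables -> total fair process evaluation (Table 30, Model 3)
r_fairness_to_trust = 0.803    # total fair process evaluation -> trust (Table 29, Model 1)
r_trust_full_model = 0.813     # trust regressed on fair process plus the context variables

indirect_effect = r_context_to_fairness * r_fairness_to_trust  # influence routed through fairness
direct_effect = r_trust_full_model - r_fairness_to_trust       # influence added by context directly

print(f"indirect (mediated) effect R = {indirect_effect:.3f}")  # about .406
print(f"direct effect R = {direct_effect:.3f}")                 # about .010
```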
The relative influences of context variables on fair process

Having found support for the mediating role of fair process, the next question is to determine which aspects of the context cause the largest changes in citizen judgements of fair process. A stepwise analysis of covariance was performed to answer this question. As groups of variables were entered, the change in explained variation (R²) was examined. The agency dummy variables were entered in the first model, followed by citizen characteristics, and finally situation variables. Agency dummy variables were entered first to see if there was any agency level variance in fair process. Then citizen background characteristics were included as a group. They were entered before the situation variables because they were expected to explain some variation but were not of primary importance. Finally, situation characteristics were entered.

When the agency dummy variables were entered by themselves, they explained only 2% of the variation in total fair process evaluation (Table 30, Model 1). All of the agencies had identical means except Delhi, which was significantly lower than HMNF (Table 31, Model 1). This was exactly the same pattern found in Chapter 7 when agency means were compared for total fair process evaluation. It suggested that for some reason citizens in Delhi Township were particularly upset with their planning commission. However, the lack of differences among agency means and the small amount of explained variation in process fairness suggest that fairness perceptions are largely expressed at the individual, not the agency, level.

The next block of variables was the citizen characteristics. Adding them to the equation increased explained variation in fair process by only 5% (Table 30, Model 2). Education was positively related to perceptions of fair process (B = .13, p < .05), while higher levels of general political involvement were negatively related (B = -.20, p < .05). Although not significant, age was positively related to fairness (B = .08) and female gender was negatively related (B = -.06). The directions of influence were the same as found in Chapter 8 with zero-order correlations. However, the correlation between general political involvement and total fair process evaluation was not significant, while the correlation for gender was. This inconsistency in results suggests the citizen characteristics may interact enough to affect each other's significance levels when included in the same regression. The only variable which was significant both in zero-order correlations and in multiple regression was education. Thus the effect of education on fair process was quite reliable. In summary, citizen characteristics explain a small, but significant, amount of variation in fair process.

The final block of variables added was the situation characteristics. This set of variables was clearly the most important, as it added 17% of additional explained variation in fair process (Table 30, Model 3). Equal power (B = .39), participation technique (B = .08), and conflict (B = -.10) were all significantly related to fair process at the p < .05 level. The significance and directions of influence mirrored the zero-order correlations in Chapter 8. These results suggest that the immediate environment of the decision making is the aspect of the context which has the largest impact on citizen evaluations of the decision making process.

Thus the final model without interactions explained 26% of the variation in total fair process evaluations. Examination of the regression coefficients showed that the relative influence on process fairness was greatest for equal power (B = .39), then education (B = .18), then general involvement (B = -.16), then conflict (B = -.10), and finally participation technique (B = .08). Controlling for these covariates, there were now significant differences among the agencies. This suggests that differences among agencies in terms of citizens and situations had been masking true agency level variation in process fairness.

In conclusion, the stepwise analysis of covariance largely confirmed the bivariate relationships identified in Chapter 8 at the citizen level. Examination of changes in explained variation among models suggested that agency, citizen, and situation characteristics all explain variation in fair process. However, situation factors, particularly the equality of power distribution, were the most influential. This result is not surprising given that inequalities in power distribution could easily cause biased outcomes, lack of neutrality, and lack of respect for some individuals, all of which are integral to perceptions of fairness.

Having shown that straightforward context effects exist, it is also possible that they may interact. The level of one context factor could affect how another context factor influences fairness. The next section tests for statistical interactions.

Interactions in the influences of context variables on fair process

How do context factors affect each other? Does the level of one context factor affect how other context factors influence fair process?
The effect of one factor on another was tested by including statistical interaction terms in the regression model. The first step was to add all possible two-way interactions among context variables to the full basic regression equation. When this was done, only one interaction was significant. This interaction, between equal power and education, was then added by itself to the final regression equation to produce Model 4 (Table 30). Adding the interaction increased explained variation in fair process by only 1%, but this increase was significantly different from zero. The interaction term's coefficient (B = -.38) also remained highly significant and large (Table 31). The probable reason for the small increase in explained variation is that the term was calculated from two variables already entered in the equation, so it did not add much overall explanatory power. However, the interaction did highlight an important way the context variables operate and so was kept.

When an interaction term is added, it often affects the coefficients of its constituent variables. Thus in Model 4, the coefficient for equal power changed to .75, an increase from .39, and education slightly decreased from .18 to .13. This is not a concern because when a significant interaction term is included, the coefficients of its constituent variables can no longer be interpreted on their own (Agresti & Finlay, 1997). Because of the interaction, there is no longer a single, linear relationship between equal power and process fairness, so the coefficient for equal power cannot be interpreted by itself. However, the interaction term, the equal power term, and the education term can all be combined in a simple linear formula which can be used to plot a graph of the interaction. Because the equation does not include the other coefficients or the intercept, the y-intercepts of the lines are not accurate. However, the slopes of the lines, and thus the relationships between equal power, education, and fair process, are accurately depicted. The equation, using unstandardized beta coefficients from Model 4, is:

Total fair process evaluation = 0.087*education + 0.643*equal power - 0.085*equal power*education

The graph of the relationship between equal power and fair process at three levels of education shows that for people with lower education, power equality has a greater impact on process fairness (Figure 3). People with lower education are also often the people who have less power in society. Thus they may be more sensitive to power differences.

Figure 3: The interaction of education with equal power on total fair process evaluation
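A figure like Figure 3 can be redrawn directly from the Model 4 coefficients. The sketch below is an assumed reconstruction, not the original plotting code; the 1 to 5 range for equal power and the education codes of 2, 4, and 6 are illustrative choices rather than values taken from the study.

```python
# Plot the partial relationship implied by Model 4 (intercept and other terms omitted,
# as in the text, so only the slopes of the lines are meaningful).
import numpy as np
import matplotlib.pyplot as plt

def partial_fair_process(equal_power: np.ndarray, education: float) -> np.ndarray:
    return 0.087 * education + 0.643 * equal_power - 0.085 * equal_power * education

equal_power = np.linspace(1, 5, 50)   # assumed 1-5 response scale for equal power
for education in (2, 4, 6):           # illustrative low, medium, high education codes
    plt.plot(equal_power, partial_fair_process(equal_power, education),
             label=f"education = {education}")

plt.xlabel("Equal power")
plt.ylabel("Total fair process evaluation (partial)")
plt.legend()
plt.show()
```

The slope with respect to equal power is 0.643 - 0.085*education, so lower education codes produce the steeper lines described in the interpretation of Figure 3.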
Testing the mediation of fair outcome

Table 32: Model summaries for stepwise ANCOVA with trust in decision makers as the dependent variable and fair outcome evaluation as one of the independent variables.

Table 33: Standardized beta coefficients for stepwise ANCOVA with trust in decision makers as the dependent variable and fair outcome evaluation as one of the independent variables.

Table 34: Model summaries for stepwise ANCOVA with fair outcome evaluation as the dependent variable.

Table 35: Standardized beta coefficients for stepwise ANCOVA with fair outcome evaluation as the dependent variable.

Several context variables had significant effects on fair outcome (Table 35, Model 4). Given that context factors explained variation in fair outcome, to what extent did fair outcome mediate the influence of context on trust? This can be estimated with R (the multiple correlation statistic) for the models. The model with context factors as the independent variables predicting fair outcome had an R of .462 (Table 34, Model 3). The model with fair outcome predicting trust had an R of .710. Thus the indirect effect of context on trust was R = .328 (.462 * .710). The direct effect of context on trust equaled the change in R when context variables were added as independent variables to an equation with fair outcome as an independent variable predicting trust (Table 32). This change was R = .050 (.760 - .710). Thus most of the impact of context factors was mediated through fair outcome (R = .328) instead of being direct (R = .050). These results support the mediation role of fair outcome proposed in the theoretical framework.

The relative influences of context variables on fair outcome

Having found support for the mediating role of fair outcome, the next question is to determine which aspects of the context have the greatest influence on fair outcome. A stepwise analysis of covariance was performed in which agency dummy variables were added first, followed by citizen characteristics, and finally situation variables. Agency differences explained only 3% of the variation in fair outcome (Table 34, Model 1). However, two of the agency coefficients were significant. Monitor (B = .17) and MDNR (B = .11) had significantly higher fair outcome means than HMNF. This suggests that agency factors might be important in determining outcome fairness.

The next regression model included citizen variables and explained an additional 3% of variation in fair outcome (Table 34, Model 2). MDNR was no longer significantly different from HMNF, and the difference between Monitor and HMNF was reduced (.12).
Thus some of the agency differences in fair outcome arose because the agencies differed in the types of citizens they involved. This will be discussed in detail in a following section. Of the citizen characteristics, age (B = .10), gender (B = -.09), and general involvement (B = -.15) had significant coefficients (Table 35, Model 2). This suggests that older people, men, and people with less political participation experience perceived greater outcome fairness. These directions of influence mirror the findings of the bivariate relationships in Chapter 8, suggesting they may be relatively robust. However, it should still be noted that adding citizen characteristics increased explained variation by only 3%.

Adding situation variables, however, boosted explained variation by 15% (Table 34, Model 3). Of the situation variables, equal power (B = .35) and conflict (B = -.17) were significant (Table 35, Model 3). Again, the directions of these relationships were consistent with the bivariate correlations in Chapter 8. Adding the situation factors eliminated any significant differences among agency means and also made the coefficients for age and general involvement no longer significant. These changes suggested that agency differences in fair outcome had been largely based in agency differences in citizen and situation characteristics. This finding will be explained further below. The fact that the citizen variables of age and general involvement lost their significance suggests possible multicollinearity between citizen characteristics and the situation. This multicollinearity could take the form of statistical interactions, tested in the next section.

According to the model, situations with more equal power and less conflict led to greater fair outcome evaluations. Equal power was by far the most influential context variable. Highly unequal power would increase the likelihood of outcomes that benefited one person over another and so violated the principle of outcome fairness. The importance of conflict is understandable because in a high conflict situation people may be more attached to their interests and perceive greater injustice if the outcomes were not as they thought they should be. Finally, women perceived less fair outcomes than men. Women, who are generally less powerful members of society, would be more likely to have their needs overlooked and so experience an unfair outcome. This last possibility suggests an interaction between gender and power. In the next section, the model is fully developed by testing for context-context interactions.

Interactions in the influences of context variables on fair outcome

The initial search for significant interactions used the complete model (Table 34, Model 3) as a base and added all possible two-way interactions; there were no significant three-way interactions. This search identified two significant interactions: participation technique with age, and participation technique with gender. The model with the two interactions explained an additional 2% of variation in fair outcome (Table 34, Model 4).
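A screen of this kind can be run by adding one product term at a time to the base model and checking its p-value. The sketch below is a hedged illustration of that procedure rather than the study's code; the data frame, the variable names, and the .05 cutoff are assumptions, and the context columns are assumed to be numeric.

```python
# Illustrative interaction screen: add each two-way product of context variables, one at a
# time, to the base model and keep those whose coefficient is significant.
import itertools
import pandas as pd
import statsmodels.formula.api as smf

def screen_two_way_interactions(df: pd.DataFrame, outcome: str,
                                predictors: list[str], alpha: float = 0.05):
    base = " + ".join(predictors)
    significant = []
    for a, b in itertools.combinations(predictors, 2):
        fit = smf.ols(f"{outcome} ~ {base} + {a}:{b}", data=df).fit()
        p = fit.pvalues[f"{a}:{b}"]  # p-value of the added product term
        if p < alpha:
            significant.append((a, b, round(float(p), 3)))
    return significant

# Hypothetical call, assuming numeric context columns in the citizen-level data:
# screen_two_way_interactions(df, "fair_outcome",
#                             ["technique", "age", "gender", "education",
#                              "equal_power", "conflict", "general_involvement"])
```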
In order to understand the nature of the interactions, they were graphed using equations based on the estimated unstandardized regression coefficients. Because both interactions involved the same variable of participation technique, both interaction terms had to be included in the equations. For example, the plot of the interaction of participation technique with age was calculated with gender set equal to 0.5 to produce a plot that was gender neutral. The plot of the interaction of participation technique with age had the following equation:

Fair outcome evaluation = 0.235*age + 0.258*technique - 0.060*technique*age + 0.208*technique*0.5

The graph of the interaction shows that for younger people, participation techniques with more discussion led to greater fair outcome (Figure 4). For middle-aged persons, discussion and interaction still increased fairness, but not as dramatically. For the elderly, the participation technique was relatively unimportant.

Figure 4: The interaction of age with participation technique on fair outcome evaluation

The second interaction was between gender and participation technique. The graph was created with the following equation, in which age is held constant at a value of 4, the age group of 40 to 49 years old. Thus the plot applies to a respondent of average age.

Fair outcome evaluation = -0.835*gender + 0.258*technique - 0.060*technique*4 + 0.208*technique*gender

The graph shows that participation technique is much more important for women than for men (Figure 5). For women, as the technique increases in the amount of discussion and interaction, their perceptions of fair outcome increase. Men also show an increase, but it is very small. Women may prefer discussion-based methods because of a general preference for building and maintaining relationships. In an environment where they can interact more freely, they may be able to get outcomes closer to what they desire.

Figure 5: The interaction of gender with participation technique on fair outcome evaluation

In conclusion, the interaction terms show that participation technique has a large positive influence on outcome fairness for people who are younger or female. Older people and men appear relatively indifferent to how they are involved. These interactions explain why participation technique and age were not significant by themselves. For some respondents, participation technique was positively related to fair outcome, and for others it had no relation. On average these effects would tend to cancel each other out, leading to a small or non-significant positive impact of participation technique on fair outcome. The same attenuation of effect would occur for age and gender.

Differences between agency means on fair outcome

The final item to explore is the multilevel aspect of the model. The major finding was that there was no significant agency level variation in fair outcome after adjusting for citizen and situation characteristics. The first model, which contained just the agency dummy variables, showed a significant difference between Monitor and HMNF (B = .17) and between MDNR and HMNF (B = .11) (Table 34, Model 1). When citizen variables were added, the differences between the HMNF and the planning commissions were reduced by 0.05. This can be explained by the level of general involvement. People who were more involved perceived less fairness. Because the HMNF had many highly involved people, when the agency means were corrected for general involvement, the HMNF mean became more positive, closing the gap between itself and the planning commissions. When situation characteristics were added, the gap was closed even more. The HMNF had more conflict than the other agencies, and since conflict reduced perceived fairness, correcting for conflict would increase the mean fairness for HMNF. The HMNF also had less equal power. Since power equality was positively related to fairness, correcting for equal power would also tend to increase the mean fairness for HMNF.
Because the HMNF had many highly involved people, when the agency means were corrected for general involvement, the HMNF mean became more positive, closing the gap between itself and the planning commissions. When situation characteristics were added, the gap was closed even more. The HMNF had more conflict than the other agencies, and since conflict reduced perceived fairness, correcting for conflict would increase the mean fairness for HMNF. The HMNF also had less equal power. Since power equality was positively related to fairness, correcting for equal power would also tend to increase the 164 11138. othe Sum exar frarr deci mos mec‘ Situ. Whe pf0< fain P3111 mean fairness for HMNF. The net result was to close the gap between HMNF and the other agencies enough to make the differences in fair outcome no longer significant. However, there was one non-significant difference worth considering. Monitor had much higher fair outcome evaluation than Delhi. This difference mirrored the difference in agency cultures. Monitor had higher fair outcome importance and performance than Delhi, although these differences were also not significant. These patterns suggest the possible existence of agency level variation in fair outcome which can be explained by the agency factor of culture. Summary To what extent does the theoretical framework proposed in Chapter 4 hold when examined using multivariate, multilevel statistics? This chapter explored the theoretical fiamework to answer a number of research questions. The first was, do the fairness variables of fair process and fair outcome mediate the influence of context on trust in decision makers? Stepwise regressions showed that the influence of context on trust was mostly mediated by fair outcome and fair process. Fair process did have a slightly larger mediation role (R = .406) than fair outcome (R = .328). This may have been because situation factors of technique, equal power, and conflict describe the time and place where the decision making process occurs, so they would have a greater effect on fair process than on fair outcome. This would lead to more mediation by fair process. The next questions asked which context factors had the largest influence on fairness and did they interact. Stepwise regressions showed that situation factors, particularly equal power, had the largest impacts on both fair process and fair outcome. 165 Situations with more equal power and less conflict had higher perceived process and outcome fairness. Participation techniques were associated with increased fair process and sometimes with increased fair outcome. This was because participation technique interacted with both gender and age in its effects on fair outcome. Younger people perceived greater fair outcome with discussion-based techniques than older people who were relatively unaffected by technique. Men were also largely unaffected by the technique in their judgement of fair outcome whereas women perceived much greater fairness if discussion-based techniques were used. These interactions show that participation technique was an influential factor for both fair process and fair outcome, but its effects were specific to the type of citizen. Power equality was also citizen- specific in its effect on fair process. Persons with less education showed a stronger positive relationship between the extent of power equality and perceived fair process. 
Taking into account these subtleties, situation factors had largely similar impacts on fair process and fair outcome evaluations.

Citizen characteristics were less influential on process and outcome fairness, but their inclusion as a group in the regressions still significantly increased explained variation in fairness. The directions of influence of each citizen characteristic were the same for both outcome and process fairness. However, between the two types of fairness, different citizen variables were significant. Education significantly increased, and general political and natural resource involvement lowered, process fairness. In comparison, being female significantly decreased and being older significantly increased outcome fairness. These differences, however, are simply a matter of degree and somewhat unstable. The stepwise analysis showed that the significance of the citizen characteristics could vary depending on what other variables were also included in the equation. With a larger sample, all the citizen characteristics might be significant for both process and outcome fairness.

The last aspect of the context was the agency making the decisions. After correcting for agency differences caused by the citizens or the situation, was there residual agency level variation that could be explained by agency factors? For fair process, correcting for citizen and situation actually uncovered agency differences that had been hidden. The differences could be explained by a combination of level of government and agency culture. The differences between the planning commissions and the Huron-Manistee and MDNR probably existed because the planning commissions dealt with issues which hit closer to home for citizens and so were more likely to lead to negative evaluations if anything about the process went wrong. Within each level of government, citizen evaluations followed agency culture. Agencies which placed higher importance on fairness, which evaluated themselves as achieving greater fairness, and which had positive beliefs about citizens tended to be evaluated as more fair by citizens. Perhaps because the HMNF and MDNR had largely identical agency cultures, they also showed no significant differences in citizen evaluations of fair process.

In contrast, without including citizen or situation covariates there were significant differences among agency means on fair outcome. However, once the covariates had been added, the agency differences were much smaller and were not significant. Still, non-significant differences did exist and they matched cultural differences. Thus agency factors are probably important for both process and outcome fairness.

In conclusion, the differences between fair process and fair outcome are only a matter of degree. Both mediate the influence of context on trust, although fair process mediates more. Both are most strongly influenced by situation factors, but the subtleties of how the situation interacts with citizens differ. Citizen characteristics also have small but significant impacts on both, although which characteristics are significant differs. Finally, both show agency level variation which can be explained with agency level factors, although the agency differences are not significant for fair outcome. Taken together, these results provide support for the contextual model of perceived fairness proposed in this study.
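One conventional way to express the mediation summarized in this chapter is to decompose a context variable's effect on trust into an indirect path through fairness and a remaining direct path. The sketch below shows that decomposition for a single context variable; it is a hedged illustration, not the study's computation, which derived its indirect and direct effects from stepwise regression output, and the data file and column names are hypothetical.

```python
# Hedged sketch of an indirect (mediated) effect: context -> fairness -> trust.
# File and column names are hypothetical stand-ins for the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("citizen_survey.csv")

# Path a: context variable (here, equal power) -> fair process.
a = smf.ols("fair_process ~ equal_power", data=df).fit().params["equal_power"]

# Path b: fair process -> trust, with the context variable held constant;
# the coefficient left on equal_power is the direct effect.
m = smf.ols("trust ~ fair_process + equal_power", data=df).fit()
b, direct = m.params["fair_process"], m.params["equal_power"]

indirect = a * b  # effect on trust transmitted through perceived fairness
print(f"indirect effect = {indirect:.3f}, direct effect = {direct:.3f}")
```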
CHAPTER X: SUMMARY AND CONCLUSION

Summary

How can trust in natural resource decision makers be increased? In the modern world of resource management, collaborative management is increasingly promoted as a way to accomplish ecosystem management (COS, 1999). Collaboration requires relationships of trust. However, mistrust between citizens and decision makers often exists (Fortman & Fairfax, 1991). A possible source of this mistrust is the historical emphasis on neutrality in natural resource decision making. Ironically, the rise of professionalism and neutrality in government agencies at the beginning of the century was intended to increase trust in government. A system of political patronage had introduced corruption and inefficiency in government agencies, and establishment of a civil service based on hiring the best professionals for the job was seen as a solution (Box, 1998). Decision makers were expected to remain detached from relationships with citizens, who were viewed as customers who lacked knowledge about the resource and were focused on their personal interests. Thus public participation was largely public relations. Decisions were left in the hands of professionals who tried to achieve the greatest good for the greatest number (Kaufman, 1960).

In the 1960s and 1970s, citizens became dissatisfied with government and demanded greater accountability from governmental decision makers. Public participation was seen as one way of increasing accountability. New legislation, such as the National Forest Management Act of 1976, prescribed the use of public participation in a general way. Agency reactions were mixed as employees struggled with the contradictions between the mandates for citizen participation and their cultural beliefs, which devalued citizen expertise (Frome, 1984). From the 1980s on, public participation underwent a gradual transformation towards increasingly emphasizing direct citizen participation in conflict resolution and collaboration. These strategies required new attitudes in which professionals saw themselves as facilitators of processes that included citizens in decision making (Shannon, 1987).

Over the course of the 20th century, agencies went from an emphasis on neutrality towards a greater emphasis on accountability (Knott & Miller, 1987). This shift represents a deep-seated tradeoff in the conception of justice. Justice involves treating everyone equally (neutrality) and respecting their rights to self-determination (accountability). In the search for trust from citizens, agency decision makers have tried to balance neutrality and accountability in order to reach the sometimes elusive goal of fairness. They have done so in the hope of increasing citizen trust in government decisions.

Evidence for the importance of fairness in decision making does not come only from a review of the political history of the U.S. Fairness has also been identified in the social-psychological field as a pervasive human need. The psychological study of fairness has shown that both the process used to make decisions and the decisions that result are evaluated according to principles of fairness. Process and outcome fairness are important to people because they invoke instrumental needs like survival and symbolic needs like social standing in society and basic dignity (Thibaut & Walker, 1975; Lind & Tyler, 1988; Folger, 1993). Empirically, the wide-spread use of fairness to evaluate corporate, political, and judicial decision making has been demonstrated (Lind & Tyler, 1988).
Fairness is also important to citizens in judgements of natural resource decision making (Tyler & Degoey, 1995; Lauber & Knuth, 1998; Smith & McDonough, 2001). Theorists have proposed a wide range of principles for fair procedures, including neutrality, accuracy of information, and ethicality. Other process principles like representation of all involved, voice, and direct participation in decisions are identical to common prescriptions for conducting public participation. Outcome fairness has been measured in terms of equity, equality, and need (Hegtvedt, 1992). Fairness has been shown to have the consequences of increased citizen trust in decision makers and support for their decisions (Tyler & Degoey, 1995).

The purpose of this study was to understand how perceived fairness influences citizen trust in a decision making context. A major goal was to determine if aspects of the context in which decisions were made influenced citizen judgements of fairness. A better understanding of what influences perceptions of fairness would then lead to suggestions for decision maker strategies. The result would be greater citizen trust and so more successful natural resource management, particularly management utilizing collaboration.

The central role of fairness was conceptualized in a theoretical framework. The framework hypothesized that decision making context factors, such as the amount of conflict among citizens, would affect perceptions of fair process and fair outcome. Perceptions of fairness would then influence trust in decision makers. The framework proposed three main types of context factors: aspects related to the agency making the decisions, the situation in which citizens participated, and the characteristics of the citizens judging the decisions. Directions of the bivariate relationships between context variables and evaluation variables, and between evaluation variables and the consequence variable, were specified.

The context factors were partly identified from the history of natural resource decision making in the United States. There were three factors related to the decision making agency that might affect fairness. The historical review showed that employee attitudes towards citizens changed over the years. However, because different agencies might have changed at different rates, agencies might differ in terms of how their culture valued citizen involvement and fairness. Agencies might also differ in the amount of resources they devoted to participation. Finally, this study included agencies at the local, federal, and state level to see if there were level of government effects.

The situation in which participation and decision making occurred might also affect fairness judgements. During the last century, participation became important partly because of citizen conflicts over how resources should be managed, suggesting that the amount of conflict, as well as factors related to the resolution of conflict, like power distribution and prior relationships among participants (Pruitt, 1998), should affect the way citizens evaluate their participatory experiences.

Finally, the citizens making the judgements of decision making fairness may differ on important personal characteristics. The study examined typical demographic characteristics like age, education, and gender. It also measured prior involvement levels in political and natural resource decision making.
Because fairness evaluations are ultimately subjective perceptions, these personal characteristics might be very important.

The context, fairness, and trust variables were measured with written surveys sent to decision makers and citizens. Five agencies at federal, state, and local levels were studied. Reliability and validity of the variable operationalizations were assessed. Most of the variables had adequate psychometric properties, with the exception of those measuring prior relationships and some of the agency culture variables.

The framework was tested in three ways. First, differences among agencies on the variables were measured and their patterns interpreted. This was done to see if the theoretical framework functioned at the agency level of analysis. Then, the situation and citizen context variables were correlated with citizen evaluations. This identified whether citizen level bivariate relationships were consistent with the framework. Finally, agency and citizen levels were combined in a stepwise analysis of covariance. This helped to determine which context factors had the greatest influence on fairness. The analysis was guided by a set of research questions. For this reason, the study results are most clearly presented in terms of answering these questions.

How and why do agencies differ in terms of context, fairness, and trust variables?

Agency differences were examined for all the context variables, the fairness evaluation variables, and the consequence variable of trust in decision makers. The first set of context factors concerned aspects of the agency making the decisions. Agency culture was similar across all agencies in that they all gave more importance to neutrality than to accountability/influence. However, there were agency differences. Compared to the Huron-Manistee National Forest (HMNF) and the Michigan Department of Natural Resources Forest Management Division (MDNR), the Monitor and Delta planning commissions generally felt fairness principles were more important. Monitor and Delta also gave themselves a higher level of achieving fairness principles than did HMNF and MDNR. These results suggested a level of government effect whereby planning commissioners, who are generally volunteer local citizens, feel fairness is more important. Agency employees might discount involving citizens and achieving fairness because of a professional allegiance to making decisions that are consistent with their disciplinary training.

For the category of situation context factors, there were also agency differences. The HMNF and MDNR generally used more discussion techniques than did the planning commissions. This may have been because they often made decisions with broader geographic and temporal scope, requiring in-depth consideration of alternatives and the involvement of many constituencies. The HMNF also had the highest conflict and the lowest level of power equality. This may have been because, compared to the other agencies, they had a higher proportion of controversial decision making. The planning commissions and the MDNR, while they had highly controversial decisions, also had a large number of routine decisions where there may not have been any conflict. This would lower their average levels of conflict. The final situation variables were about prior relationships among participants. HMNF and MDNR had more prior citizen-decision maker and citizen-citizen relationships than did the planning commissions.
Both of these agencies also used more discussion techniques, which tend to encourage relationship building.

The final set of context variables concerned citizen characteristics. The amount of involvement of citizens in a particular agency was highest for the MDNR compared to the other agencies. This suggests the MDNR tends to encourage the same people to participate repeatedly. The involvement of citizens in general politics and natural resource decision making was highest for both the MDNR and the HMNF. Since participation in the HMNF and MDNR is generally less convenient than in local planning commissions, and because it is more focused on environmental issues, it would tend to attract those who already are very involved in natural resource topics. Finally, although the agencies did not differ on age and education of involved citizens, the HMNF and MDNR involved very few women compared to planning commissions. This may have been because HMNF and MDNR deal with management of resources like wildlife, timber and minerals, topics from which women have historically been excluded.

In addition to context variables, there were also differences in citizen evaluations and the consequence of trust in decision makers. On fair process evaluations, the general pattern was that Delhi received the lowest ratings by citizens. For fair outcome evaluations Delhi was at the lower end, but essentially the same as MDNR and Delta. However, HMNF received the lowest fair outcome evaluations and Monitor was the highest. Finally, the consequence of trust combined the patterns from process and outcome fairness. Delhi was the lowest, Monitor was the highest, and the others fell in between. So what might explain these patterns in evaluations and trust? The next question addresses this by examining patterns among context, fairness and trust variables.

At the agency level, are the directions of influence among context, fairness, and consequence variables consistent with the theoretical framework?

Patterns of contextual influence on fairness can be summarized in terms of the three types of context: agency, situation, and citizen. Agency level factors seemed to predict fairness evaluations. There appeared to be a level of government effect on citizen evaluations. The planning commissions generally received lower fairness evaluations, consistent with the hypothesis that local level decisions are personally relevant to citizens, so they form more extreme judgements. The state and federal levels often deal with forest management topics that are geographically distant and do not affect most citizens' day-to-day lives. Agency culture was also somewhat consistent with citizen evaluations, although there were not many significant correlations. When looking at trends in the correlations, 44 of the 119 total correlations had p-values less than 0.3, and only five of these were not in the predicted direction. These data tentatively support the conclusion that agencies with cultures that believe citizens are not self-interested and have expertise receive more positive citizen evaluations of fairness. Agencies which believe fairness principles are important, and which believe that they achieve fairness, are also evaluated as being more fair. The fact that agency culture explained agency differences in fairness supports the theoretical framework. The results for situation variables at the agency level were similar.
Situation factors of participation technique and equal power were positively associated with fairness evaluations, and conflict was negatively associated. Participation technique may have been effective because the choice of technique is dependent on the types of decisions made, a level of government issue. Agency differences in prior relationships between citizens and decision makers and between citizens and citizens had small correlations suggesting prior relationships led to greater process fairness, and this was particularly the case for relationships with citizens who opposed one's views.

The final set of context factors was citizen characteristics. Agency differences in citizen level of involvement in the agency and in general politics did not clearly correlate with perceived fairness. However, although agency differences in age and education were small, they did predict fairness evaluations. Older and more educated people perceived decision making processes as being more fair. Women evaluated processes as being less fair.

In conclusion, agency factors like level of government and agency culture explained fairness differences among agencies. Situation and citizen characteristics also helped explain agency differences. These results suggest that the theoretical framework of perceived fairness does operate at the agency level. Does it also operate at the level of individual citizen judgements? The next question examines patterns of relationship at the citizen level of analysis.

At the citizen level, do context factors related to the situation and the citizens influence perceived fairness in the predicted directions?

Bivariate relationships of situation and citizen variables with fairness evaluations show clear support for the theoretical framework. Among the situation variables, participation techniques that emphasize discussion were associated with greater fairness. This is consistent with research showing that citizens prefer small, focus group-styled participation methods (Smith & McDonough, 2001). In addition, conflict decreases perceived fairness and power equality increases fairness. This is consistent with the literature on conflict resolution, which states that it is harder to reach a resolution if conflict is intense and power is unequal (Pruitt, 1998). Another finding consistent with conflict resolution research (Pruitt, 1998) was that more prior relationships with decision makers were associated with positive fairness evaluations.

Citizen characteristics were also largely consistent with predictions. Women, younger people and people with less education perceived lower levels of fairness. It may be that because most planning processes are designed and conducted by people with more education, they are not conducted in a way accessible to people with less formal education. When people with less education participated, they may have been less comfortable and so rated their experience more negatively. Women may have a lower sense of control over political processes (Smith & Propst, 2001) and, because one aspect of fairness is control and influence, may perceive less fairness. Contrary to prediction, people with more participatory experience actually had more negative evaluations. Other research has shown that more participation experiences create greater feelings of control over agency decision making (Smith & Propst, 2001; Finkel, 1985), which should lead to more positive evaluations of decisions.
However, because people are motivated to participate out of concern for the decision topic, those who participate the most probably have the strongest concerns. Thus they may have the most negative evaluations when they perceive any deviation from fairness in process or outcomes.

At the citizen level, does perceived fairness increase trust in the decision makers?

When evaluations of fairness were correlated with trust in decision makers, the relationship was strongly positive. This confirms procedural justice research which finds that trust in authorities increases when decision making processes are fair (Tyler & Degoey, 1995). So how do context factors affect trust? The theoretical framework proposes that the impact of context on trust is mediated by fairness evaluations. The next question tests this proposition.

Does fairness mediate the effect of context factors on trust?

Do context factors directly impact trust, or do they first influence fairness evaluations which then impact trust? This proposed mediation is based on the premise that context factors are mostly relevant to specific decision making experiences. Thus they should be closely related to evaluations of those experiences. By contrast, trust is a more general belief that would be affected by the specific fairness evaluations. The mediation role of fairness was tested through the calculation of direct and indirect effects using values from the stepwise analysis of covariance. An indirect effect is equivalent to mediation because it shows how the influence of an independent variable on the dependent variable is transmitted through an intervening variable. The indirect effect of context on trust as mediated by fair process was .406, which was much larger than the direct effect of context on trust (.010). In a similar way, the indirect effect of context on trust as mediated by fair outcome was .328, which was much larger than the direct effect of context on trust (.050). These results indicate that the framework, which assigns a central mediating role to fairness, is supported.

Which context factors explain the most variation in perceived fairness?

The multivariate regressions found that many context variables had impacts on both fair process and fair outcome. For fair process, the situation factors of conflict, power equality, and participation technique had more influence (R2 increase of .17) than did citizen characteristics (R2 increase of .05) or agencies (R2 increase of .03). Fair outcome showed an almost identical pattern (situation R2 increase of .15, citizen R2 increase of .03, agency R2 increase of .03). Of the situation factors, power equality, with a standardized coefficient of .39 for fair process (.35 for fair outcome), was much more influential than conflict (process B = -.10, outcome B = -.17) or participation technique (process B = .08, outcome B = .0). Power equality may be important because it is so closely related to fairness. Perceptions of unequal power can easily lead to perceptions of bias in decision making processes and outcomes that favor those with power. Although citizen characteristics did not have large impacts on fairness, the increases in explained variation for process and outcome fairness were still statistically significant. General political involvement (B = -.16) and education (B = .18) were the only variables with significant coefficients for fair process. Gender (B = -.08) had the only significant coefficient for fair outcome.
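The R-squared increments reported above come from entering predictors in blocks and recording how much explained variation each block adds. A minimal version of that bookkeeping, under the same caveats as before (plain OLS rather than the study's stepwise analysis of covariance, and hypothetical file and column names), looks like the following sketch.

```python
# Hedged sketch: R-squared added by each block of predictors for fair process.
# File and column names are hypothetical stand-ins for the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("citizen_survey.csv")

blocks = [
    ("situation", "conflict + equal_power + technique"),
    ("citizen", "age + gender + education + general_involvement"),
    ("agency", "C(agency)"),
]

terms, previous_r2 = [], 0.0
for name, rhs in blocks:
    terms.append(rhs)
    model = smf.ols("fair_process ~ " + " + ".join(terms), data=df).fit()
    print(f"{name} block adds R^2 = {model.rsquared - previous_r2:.3f}")
    previous_r2 = model.rsquared
```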
Age showed a non-significant trend towards explaining fair outcome (B = .06, p = .06). Although there were differences in significance between process and outcome, the directions of the relationships between citizen variables and both process and outcome fairness were the same. Thus it is likely that all the citizen variables influence both outcome and process fairness in similar ways.

Does the level of one context factor affect how other context factors influence fairness?

The analysis of covariance also tested the ways in which context variables might statistically interact in their effects on fairness. There was only one significant interaction for fair process, which suggested that for people with less formal education, power equality has a larger influence on fair process. Because less educated people often are the less powerful members of society, they may be more sensitive to power differences. There were two significant interactions for fair outcome: age with participation technique, and gender with participation technique. Older people and men were generally unaffected in their fair outcome evaluations by the amount of discussion in a participation technique. However, women and younger people gave much higher fairness evaluations if the technique emphasized discussion. Women may prefer a discussion approach because it facilitates relationship building. Since commonly used participation methods like formal hearings and written comments do not allow much discussion or relationship building, this may help explain why women feel less satisfied with their experiences.

Can the differences between agencies in terms of fairness be explained by individual and situation factors, or is there residual agency level variation in fairness that can be explained by agency level factors?

This question examines the data from a multilevel perspective because it considers the impact of citizen level and agency level factors simultaneously. In the analysis of covariance, agency was the factor and the situation and citizen variables were the covariates. The analysis tested whether significant differences among agencies in fairness remained after controlling for the situation and citizens. For fair process evaluation, controlling for situation and citizen uncovered significant agency differences. A combination of level of government and agency culture seemed to explain the differences. The planning commissions may have received lower evaluations than the Huron-Manistee and MDNR because planning commissions make decisions about issues which affect citizens' daily lives and thus engender more critical evaluations if anything goes wrong. However, state and federal agencies make decisions about distant forests, so citizens are not motivated to be as critical. Within a given level of government, agency culture correlated with citizen evaluations. HMNF and MDNR had very similar agency cultures and very similar evaluations. Delhi had lower citizen evaluations, and its planning commissioners reported an agency culture which gave relatively low fairness importance ratings. Compared to other planning commissions, Delhi commissioners also believed that citizens had little knowledge and were very self-interested.

For fair outcome evaluation, after controlling for citizen and situation, differences among agencies were no longer significant. However, there was one fairly large, but non-significant, difference. Monitor received a more positive fair outcome evaluation than did Delhi.
This difference is best explained by differences among agency cultures, in which Monitor had the highest fair outcome importance and Delhi had the lowest. In conclusion, for both outcome and process fairness, residual variation in fairness does occur at the agency level and it can be explained by patterns in agency factors.

Study limitations

Although the results of this study are strong and quite clear, this study, like all others, has its limitations. Perhaps the largest limitation is that it was conducted with the population of people who are already actively involved in natural resource decision making. It may not apply equally well to those who are not involved. However, the fact that the importance of fairness was originally noticed in non-participants (Smith & McDonough, 2001) suggests similar patterns may apply to others. Another limitation is the cross-sectional nature of the data. The model proposed and analyzed makes suggestions about causality which can really only be confirmed in a longitudinal study.

One of the strengths of the research design was also a limitation. In an attempt to compare across a variety of cases and develop general theory, the questionnaires were written as generically as possible. Some respondents complained that questions were too vague or did not fit their experience. For example, several people involved in the Huron-Manistee Forest Plan Revision commented that the process was not finished (it had already been ongoing many years), so they could not judge the decision outcomes. This was in contrast to local planning, where the decision was sometimes made in one meeting. It appears that increasing generalizability decreased validity, an often acknowledged dilemma (McGrath, 1982).

Future research

This study generated a plethora of research needs. One clear need was to develop better measures of many of the constructs. Participation technique was an objective but very rough measure of the amount of discussion and interaction. New measures which assess these more accurately need to be devised. For example, the amount of discussion and interaction could be assessed more directly by asking citizens and facilitators rather than relying on the researcher-assigned values used in this study. One set of items used in this study actually did measure prior relationships among citizens, an aspect affected by participation technique. However, these items clearly did not work because they were left blank so often. New ones should be tried and pretested extensively. In hindsight, a much more detailed, longer set of questions would have been necessary to assess the relationships people have and the nature of those relationships. The items measuring agency culture of bureaucracy and expertise also suffered from validity problems, so they need to be revised. Participation resources were measured on a very subjective general scale which may have confused the actual resources available with the respondent's desired level of resources. These two aspects need to be clearly separated. A more objective measure of resources, like comparison of agency budgets, might be more useful. Finally, the items used to calculate trust that asked about support for the decision did not distinguish between support based on active acceptance of the decision and support based on passive resignation to a decision that cannot be changed. Perhaps the most interesting variable suffered from design limitations.
An entire survey was created to measure agency culture, but this was then summarized to average values for just five agencies. Analysis of the correlations between agency culture and fairness uncovered some patterns. However, a larger number of agencies and units within those agencies would have led to clearer statistical results. In order to minimize the number of people responding to multiple surveys, the units should, as much as possible, not share the same citizens and employees. This will reduce respondent burden. At least 20 units should be sampled, although that would be an expensive, complex undertaking.

In addition to the measurement and design needs, the study also raised substantive questions for further exploration. For example, why does education correlate positively with fairness? Why were regional governments evaluated more positively than local ones? What is the role of conflict in fairness evaluations? These research questions are similar in that they try to understand and explain patterns found in the quantitative data. They are often best answered through in-depth qualitative studies. The study presented here established the role of fairness in increasing citizen trust in decision makers and identified a large number of variables which affect fairness. The next steps are two-fold. First, the quantitative measures should be refined and re-administered to new and more diverse samples. Secondly, the observed patterns among variables need to be examined qualitatively to understand the relationship mechanisms more clearly.

Management implications

So what are the management implications of the answers to these research questions? How does support for the theoretical framework translate into everyday decision making? How do managers get higher fairness evaluations from citizens? This study has several implications for natural resource managers and agencies.

The first implication is that citizen evaluations of fairness in decision making have substantial impacts on trust in the agency and support for decisions. Bivariate correlations showed that total fair process evaluation and trust had a correlation of 0.78, and fair outcome evaluation and trust were correlated at 0.69. This result confirms an earlier qualitative study (Smith & McDonough, 2001) which found that citizens naturally used fairness when reaching judgements about an agency. The current study identified many aspects of process fairness, such as representation, neutrality, influence, and courtesy, which were highly correlated with trust. Decision makers would be wise to focus on fairness principles like ensuring adequate representation, conveying their own neutrality, giving citizens influence over the decision, and treating citizens respectfully. Many decision makers are probably already trying to achieve these principles. These results suggest that it is important to communicate to citizens about the efforts being made to achieve these principles. It is also important to conduct frequent evaluations of decision making fairness, to see where improvements are needed.

Given that fairness is so important, how do context factors affect fairness and what are the implications for decision makers? The first set of factors was related to the agency. There was evidence that agencies in which employees believe citizens are knowledgeable and not self-interested receive higher fairness evaluations. Leadership within agencies should consider ways to change agency culture.
Shannon (1992) found that leaders in Forest Service forests were able to increase the responsiveness of their staff to citizens by modeling participatory behaviors in internal decision making. Other efforts like training existing staff, instituting appropriate incentives, and hiring new employees can all contribute to cultural change. Achievement of many of the fairness principles could be greater with training. For example, staff could learn creative and effective ways to use the media and local informal networks in spreading the word about decision making, thus increasing representation. The high percentage of women participating at the planning commission level may be because of grassroots organizing. One woman, after receiving the survey, called with a question and indicated that she was working with a neighborhood organization that had been going door-to-door generating opposition to a proposed development. If employees could learn how to tap into this kind of local network in a positive way, they might get much higher turnout at public meetings.

The effects of situation factors on fairness also have important implications. The significant interaction between gender and participation technique showed that women's outcome fairness evaluations increase noticeably in more discussion-oriented sessions, but men's increase only slightly. A similar interaction showed that younger people also gave more fair outcome evaluations when discussion was used, although older people did not show any differences. Examination of the demographics of survey respondents showed that women and younger people were not well represented in agency decision making. There was only one woman for each five male participants, and at the state and federal level this reduced to a 1 to 10 ratio. The average age of respondents was 50 years old. The strong implication is that one way to increase the representativeness of decision making, a key fairness principle, would be to use more discussion techniques. This would provide a more conducive environment for younger people and women and might encourage them to participate.

In addition to participation technique, the amount of power equality had a very large impact on fairness evaluations for all respondents. The interaction between power equality and education showed that people with less education were more sensitive to power differences. This may be because, as the saying goes, knowledge is power. In natural resource decision making, citizens with expertise may be more successful in persuasively communicating their preferences to decision makers. Citizens who lack education, and therefore the power that comes from expertise, are more aware of how their lack of relative power may influence the fairness of the process. The connection between power and expertise was actually part of the measure used to assess power. In this study, power equality among citizens was measured in terms of equality of financial resources, equality of knowledge about how to get what one wants, and general power equality.

Given the importance of power to fairness judgements, how can decision makers influence the distribution of power? While they cannot hand out money to make financial resources more equal, they could provide other types of assistance that equalize power. Decision makers could hold educational meetings for citizen activists to show them how decisions are made, and so how they can best influence them.
Some decision makers have deliberately found money to hire scientists who can collect and interpret data for use by all citizens. This has been particularly effective in disputes over hazardous pollution between citizens and industry, but has also been used in city planning (Renn, Webler, & Wiedemann, 1995). It has been demonstrated that when citizens are given accurate, easy to understand information, they are able to make reasonable science-based decisions (Doble & Richardson, 1992). In addition, when access to knowledge was similar among all participants, differences in opinion could be recognized as representing different values, not just different levels of expertise. Differences in values are harder to discount and ignore, leading to the possibility of serious consideration of citizen interests and influence on decisions. Thus fairness of the process should be increased. In addition to trying to reduce power inequalities, the connection between power inequality and fairness could be minimized. Keeping all decision making very transparent and in the open could reduce concerns that power inequalities affect decision making processes.

Conflict was the third aspect of the decision making situation that could be managed to increase fairness. The results showed that conflict had a moderately negative impact on fairness evaluations. This, however, does not mean that conflict is bad. The presence of conflict often draws attention to genuine differences of interests and opinions among people. Conflict may be necessary to make sure that the full range of options is considered before a decision is made. However, once conflict has served its role of ensuring all perspectives are considered, successful resolution of that conflict will probably reduce the negative impacts on fairness. Resolution would mean that the parties in opposition voluntarily agree to a settlement that meets the interests, but not necessarily the positions, of all. If underlying interests are met, then outcome fairness judgements should be high. A resolution process which meets everyone's interests would also necessarily allow people to share their concerns and have them seriously considered in a neutral way. Thus a conflict resolution process should meet the fair process criteria of neutrality, influence, courtesy, and representation. There is a broad array of conflict resolution techniques, ranging from very informal discussions to very formal mock trials (Priscoli, 1990). Decision makers faced with conflict may wish to receive training in some of these techniques or secure the services of an outside facilitator. When using someone from outside, however, very careful attention must be paid to how the decision will be implemented. A review of environmental mediation cases found that mediation led to a large number of settlements, but, after the outside facilitator left, implementation was attempted in a traditional, unsuccessful manner (Sipe, 1998).

In conclusion, the theoretical framework confirmed by this study points the way towards a broad array of management strategies that may improve the relationships between citizens and decision makers. The strong correlation between fairness and trust in decision makers suggests managers should use fairness principles as a way to guide and evaluate their decision making. The relationships between context factors and fairness evaluations suggest a variety of ways to increase fairness.
Agency cultures can be developed which value fairness and which hold citizens in high regard. Participation techniques which emphasize discussion can be used to encourage women and younger people to participate, increasing the representativeness and therefore the fairness of the process. Power inequalities, particularly those arising from education and expertise differences, can be directly addressed through the provision of easy-to-understand technical information for all citizens. Finally, conflict can be reduced and fairness increased through careful use of conflict resolution methods. A focus on achieving fairness through practices like these will help natural resource managers and agencies conduct participation that leads to greater citizen satisfaction with decisions and perhaps even greater trust in government.

APPENDICES

APPENDIX A
CITIZEN QUESTIONNAIRE

Public Participation in Huron-Manistee National Forest Decision Making

This survey is being sent by the Forestry Department at Michigan State University to all the people who are on the mailing list for the Huron-Manistee National Forest Plan Revision. We would like to know about your experiences with USDA Forest Service decision making and public participation. Even if your knowledge about the Forest Plan Revision is limited, we are very interested in your opinions. Your completion of the survey is entirely voluntary, and your specific responses are confidential and available only to the researchers at Michigan State University working on this project. Forest Service personnel will not see individual responses. You indicate your willingness to participate by filling out and returning this survey. Thank you very much for your time!

Question 1: How do you feel about USDA Forest Service decision making in the Huron-Manistee National Forest during the past 2 years? Please check a box to show how you feel about each of the following statements. (Response scale: Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree, Don't know)
a. The Forest Service can be trusted to make good decisions.
b. My experiences with the Forest Service have been mainly negative.
c. The Forest Service does its job well.
d. I trust the Forest Service to make good decisions without my input.
e. I am satisfied with Forest Service decision making.

Definition of terms used in the remaining questions: "participant" = both citizens and Forest Service employees. "citizen" = people who participated but were not Forest Service employees.

Question 2: Over the past 5 years, how many times were you involved in the following Huron-Manistee Forest Plan Revision (FPR) opportunities? Circle the number of times involved in the past 5 years (or write a number if more than 10).
a. Received mailings related to the Forest Plan Revision (FPR).
b. Submitted written comment related to the FPR.
c. Talked one-on-one (phone or in-person) with a Forest Service employee about the FPR.
d. Attended a FPR meeting that primarily consisted of speeches made by participants (e.g. informational meeting, public hearing, listening session).
e. Attended a FPR meeting that primarily consisted of citizens discussing issues among themselves (e.g. Working Group Session).

Question 3: What was your one most recent Forest Plan Revision-related participation experience?
Please check only ONE item from the following list.
1. Submitted a written comment related to the Forest Plan Revision (FPR).
2. Talked one-on-one (phone or in-person) with a Forest Service employee about the FPR.
3. Attended a FPR meeting that primarily consisted of speeches made by participants (e.g. informational meeting, public hearing, listening session).
4. Attended a FPR meeting that primarily consisted of citizens discussing issues among themselves (e.g. Working Group Session).
5. None of the above.

If you checked one of these, please fill in the box below. When did this experience occur? Please indicate as exact a date as possible (Year, Month, Day). What was the location (city, building)? What was the topic, subject or purpose of the participation?

Please base your answers to questions 4-10 on written material and mailings you received about the Forest Plan Revision; answer the questions in terms of the overall FPR process. Please base your answers to questions 4-10 on this specific experience.

Question 4: Decision Making Process. Please describe and evaluate the process the Forest Service used to make decisions related to your most recent participation experience (identified in Q3) in the Forest Plan Revision. Please check a box for each statement. (Response scale: Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree, Don't know)
- The procedures used to make decisions were fair.
- I am satisfied with the process used to reach decisions.
- Citizens were treated unfairly.
- Citizens were given sufficient advance notification of the opportunity to participate.
- The participation experience was convenient to attend.
- Everyone affected by the decisions had an opportunity to participate.
- Local people were adequately involved.
- Citizens were unable to have an influence on the decision outcomes.
- Citizens were able to participate directly in making decisions.
- Citizens had an influence on the choice of decision making process.
- Citizens' comments were seriously considered.
- Citizens' questions were answered.
- It appears that information used to reach the decisions was accurate.
- The decisions were well reasoned and logical.
- There was a bias toward a particular interest, group, or person.
- USFS employees were dishonest.
- Citizens were treated politely.
- Citizens listened to and understood each other.
- Citizens successfully worked together to reach agreement.
- I do not understand the process that was used to reach decisions.

Question 5: Decision Outcomes. Please describe and evaluate the outcomes of decisions related to your most recent participation experience in the Forest Plan Revision. (Same response scale as Question 4)
- Benefits and costs were distributed fairly among citizens.
- The outcomes of decisions were unfair.
- The decisions reached were equally favorable to all citizens.
- The decisions benefited the citizens who were most deserving.
- The decisions reached were consistent with my personal values.
- The decisions reached did not meet my personal interests.
- I do not know what decisions were ultimately reached.

Question 6: Other topics related to your most recent participation experience. (Response scale: Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree, Don't know)
- I had very little background knowledge about the decision topics.
- The decisions involved were of minor importance to me.
- I was too busy to be involved with the Forest Plan Revision.

Question 7: How has your most recent participation experience affected you and others? (Same response scale)
- I plan to actively oppose, appeal or sue a decision reached in this experience.
- I plan to support the decisions reached.
- This experience made me upset.
- This experience made me happy.
- As a result of this experience, the relationship between Forest Service employees and myself has improved.
- As a result of this experience, there is now more respect between other citizens and myself.

Questions 8, 9, and 10 ask about the situation prior to your most recent participation.

Question 8: Please describe your perceptions of the relative amounts of influence participants had before your most recent participation experience began. (Same response scale)
- Power was distributed equally between the agency and citizens.
- Power was distributed equally among citizens.
- Financial resources were equally distributed among citizens.
- Access to Forest Service employees was equally available to all citizens.
- The level of knowledge about how to get what they want was equal among citizens.

Question 9: For your most recent participation experience, please give the total number of people who were present and the number with whom you had a respectful relationship before the participation began, for each of the following groups:
- Forest Service staff.
- All citizens.
- Citizens who opposed most of my interests.

Question 10: Please describe the level of conflict and disagreement between participants that existed at the beginning of your most recent participation experience. (Response scale: Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree, Strongly Agree, Don't know)
- Participants began the process with strong, deeply held opinions.
- Participants took positions that were very different from other participants'.
- Participants expressed strong emotions in response to disagreements.
- Positions held by participants were highly incompatible with those of others.

The remaining questions are not about the Forest Plan Revision, but give us important background information we need to interpret your answers.

Question 11: Excluding your experiences with the Forest Plan Revision, during the past 5 years in which of the following ways have you participated in general politics as well as natural resource decision making? (No/Yes for each)
- Organized a group of people around some political issue.
- Worked on a local political campaign.
- Ran for public office.
- Regularly attended meetings of a natural resource, environmental or outdoor recreation related organization.
- Signed a natural resource/environment/land use petition.
- Attended a local zoning/land use hearing or meeting.
- Attended a hearing or meeting of the Michigan Department of Natural Resources or USDA Forest Service.
- Communicated (phone call, letter, or visit) with the Michigan Department of Natural Resources or USDA Forest Service to give a suggestion or complaint.
- Wrote a letter to an editor of a newspaper about a natural resource or land use issue.
- Organized a group of people around a natural resource or environmental issue.
- Served on a natural resource/land-use commission, advisory board, or planning team.

Question 12: Please tell us about yourself.
What is your age? (Please check a category) 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70-79, 80-89, 90-99
Are you: Male, Female
What is your racial/ethnic background? (Please check one category) Asian American, Native American, African American, Latino/Hispanic, Caucasian, Mixed ethnicity, Other
What is the highest level of formal education you have completed? (Please check one) Less than high school; High school graduate (or equivalency); Associates or other 2 year degree; Four-year college degree; Graduate or professional degree

Question 13: Friends of the Forest follow-up survey. We are also comparing the Forest Plan Revision process with the Forest Service Friends of the Forest meetings for the Huron-Manistee Forest. Have you attended a USDA Forest Service Friends of the Forest meeting? (No/Yes) If yes, would you be willing to fill out a follow-up 3 1/2 page survey about the Friends of the Forest? (No/Yes)

Question 14: Do you have any further comments about the Forest Plan Revision?

Thank you very much for your time and thoughts! This survey does not require an envelope and is postage prepaid. Please just fold it in half, staple or tape it, and mail it.

APPENDIX B
AGENCY QUESTIONNAIRE

Public Participation in the Huron-Manistee National Forest: A Survey of Employees

This survey is being sent by the Forestry Department at Michigan State University to employees of the Huron-Manistee National Forest. This is an independent study initiated by the researchers at MSU in order to better understand public participation and provide suggestions to resource managers. Your specific responses are confidential and available only to the researchers at Michigan State University working on this project. USFS personnel will not see individual responses. The questionnaires are numbered so that we can avoid sending unnecessary reminders to those who have already responded. Once data collection is completed, the numbered lists will be destroyed. Your completion of the survey is entirely voluntary and you indicate your willingness to participate by filling out and returning this survey. It should take about 20 minutes to complete. If you have questions please call Patrick Smith (517) 353-5103 or Maureen McDonough (517) 432-2293. Thank you very much for your time and thoughts!

Definitions of terms used in this survey: "Decision" = a natural resource management or regulatory decision. "Citizen" = anyone who is not a state or federal agency employee and who could potentially participate in USFS decision making.

Preliminary Part: While working do you ever have contact (even informally) with citizens?
Yes: Please fill out the survey.
No: You do not need to fill out this survey; please mail it back to us blank.
Part 1: Over the past 12 months, how many times have you done the following public participation activities? Please circle the number of times done in the past 12 months.
- Responded to a citizen's question or comment about a decision by talking one-on-one (phone or in-person). (0, 1-3, 4-6, 7-9, 10-12, 13-15, 16-18, 20+)
- Initiated a phone call or in-person visit to a citizen in order to gather input about a decision. (0, 1-3, 4-6, 7-9, 10-12, 13-15, 16-18, 20+)
- Read a citizen's written comment to the USFS. (0, 1-3, 4-6, 7-9, 10-12, 13-15, 16-18, 20+)
- Attended a USFS open house. (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11+)
- Attended a USFS public meeting. (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11+)
- Attended a meeting of a citizen's organization as a USFS representative. (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11+)
- Incorporated a citizen suggestion that was new to you into a plan, document, or report. (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11+)

Part 2: What are the characteristics of citizens? On the LEFT SIDE, please circle the number that best represents your personal beliefs about each statement. On the RIGHT SIDE, please circle a number to indicate your perceptions of the beliefs of most other USFS employees in your unit. (Response scale: -2 = Definitely False, -1 = More False Than True, 1 = More True Than False, 2 = Definitely True)
Citizens who participate in USFS decision making are...
- usually able to understand and use technical information.
- usually in possession of the knowledge needed to make good decisions.
- mainly concerned about their self-interests.
- usually focused on the short-term.
Citizens fail to participate because...
- they don't care much about the important decisions we make.
- they feel they are too busy to participate.
- they are satisfied with the decisions.
- they do not trust the USFS.

Part 3: On the left side please indicate how much you personally agree or disagree with the following statements. On the right, rate how much you think most other USFS employees in your unit agree or disagree with the statements. The statements measure detailed differences, so please rate them all even if they seem similar. (Response scale: SD = Strongly Disagree, D = Disagree, N = Neither Agree Nor Disagree, A = Agree, SA = Strongly Agree)
- Decisions should be made according to standard professional practices.
- Experts should have the power to make decisions.
- When making decisions, correctness is more important than popularity.
- Decisions made by a person in a position of authority should not be challenged by citizens.
- USFS employees understand the long term consequences of decisions better than citizens.
- USFS employees have a clear idea of public needs and desires.
- Consistent application of rules is more important than incorporating public comments.
- Public participation usually helps to build public support for the agency.
- Well-established plans should not be changed in response to citizen demands.
- The benefits of involving citizens outweigh the costs.
- Involving citizens slows down decision making processes too much.
Part 4: How many times have you ever done the following? Please circle the number for each.

Attended a Forest Plan Revision working group session.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Attended a public Forest Plan Revision listening session or informational meeting.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Participated in internal discussions about citizen input to the Forest Plan Revision process.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Attended a USFS Friends of the Forest meeting.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+

Part 5: How important are each of the following statements to you? Please check a box for each statement.

Response options (checked for each statement): Not at all important   Somewhat important   Important   Essential

a. The procedures used to make decisions are fair.
b. Citizens are treated fairly.
c. Citizens are satisfied with the process used to reach decisions.
d. Citizens are given sufficient advance notification of the opportunity to participate.
e. The participation experience is convenient to attend.
f. Everyone affected by the decisions has an opportunity to participate.
g. Local people are adequately involved.
h. Citizens are able to participate directly in making decisions.
i. Citizens are able to have an influence on the ________ of ________.
j. Citizens have an influence on the choice of ________.
k. Citizens' comments are seriously considered in the decision making.
l. Citizens' questions are answered.
m. Information used to reach the decisions is accurate.
n. The decisions are well reasoned and logical.
o. There is a lack of bias toward particular interests, groups, and persons.
p. Agency employees are honest to citizens.
q. Citizens are treated politely.
r. Citizens listen to and understand each other.
s. Citizens successfully work together to reach agreement.
t. Benefits and costs of outcomes are distributed fairly among citizens.
u. The outcomes of decisions are fair.
v. The decisions reached are equally favorable to all citizens.
w. The decisions benefit the citizens who are most deserving.
x. The decisions reached are consistent with citizens' personal values.
y. The decisions reached meet citizens' personal interests.

Part 6: In the section above you indicated the importance to yourself of some statements. Now please indicate how often you and other USFS employees in your unit actually accomplish them. Please circle a word on the right for each statement.

Frequency of accomplishment in my unit (circled for each statement): Never   Seldom   Sometimes   Often   Always

a. The procedures used to make decisions are fair.
b. Citizens are treated fairly.
c. Citizens are satisfied with the process used to reach decisions.
d. Citizens are given sufficient advance notification of the opportunity to participate.
e. The participation experience is convenient to attend.
f. Everyone affected by the decisions has an opportunity to participate.
g. Local people are adequately involved.
h. Citizens are able to have an influence on the ________ of ________.
i. Citizens are able to participate directly in making decisions.
j. Citizens have an influence on the choice of ________.
k. Citizens' comments are seriously considered in the decision making.
l. Citizens' questions are answered.
m. Information used to reach the decisions is accurate.
n. The decisions are well reasoned and logical.
o. There is a lack of bias toward particular interests, groups, and persons.
p. Agency employees are honest to publics.
q. Citizens are treated politely.
r. Citizens listen to and understand each other.
s. Citizens successfully work together to reach agreement.
t. Benefits and costs of outcomes are distributed fairly among citizens.
u. The outcomes of decisions are fair.
v. The decisions reached are equally favorable to all citizens.
w. The decisions benefit the citizens who are most deserving.
x. The decisions reached are consistent with citizens' personal values.
y. The decisions reached meet citizens' personal interests.

Part 7: How big an impact does each of the following factors have on the seriousness with which citizen input is considered in your unit? Please circle a word for each factor.

Extent of impact (circled for each factor): Zero   Small   Medium   Large   Dominating

a. Rules and policies.
b. Leadership of superiors.
c. Personal convictions of the decision maker.
d. Resources (staff, money).
e. Pressure/expectations of equal-level colleagues.

Part 8: How much of the following resources are available to your unit for public participation? Please circle an amount for each resource.

Amount available (circled for each resource): Almost None   A Little   Some   A lot

a. Staff
b. Training
c. Money

Part 9: In your unit how frequently is there...

(circled for each item): Never   Seldom   Sometimes   Often   Always

a. ...discussion among employees about how to improve citizen involvement in decision making?
b. ...discussion among employees about citizen comments in an attempt to identify the underlying concerns?
c. ...a climate that makes subordinates feel it is OK to openly disagree with superiors?
d. ...debate between subordinates and superiors about technical issues?

Part 10: Please tell us about yourself:

What is your age? (Please check a category)
□ 10-19  □ 20-29  □ 30-39  □ 40-49  □ 50-59  □ 60-69  □ 70+

Are you: □ Male  □ Female

Part 10 continued:

What is your racial/ethnic background? (Please check one category)
□ Asian American  □ Native American  □ Other (please specify)
□ African American  □ Latino/Hispanic
□ Caucasian  □ Mixed ethnicity (please specify)

What is the highest level of formal education you have completed?
(please check one)
□ Less than high school  □ Four-year college degree
□ High school graduate (or equivalency)  □ Graduate or professional degree
□ Associates or other 2-year degree

If you received professional training or a degree, what was the subject(s) or field(s)?

Do you belong to the Society of American Foresters? □ Yes  □ No

Do you belong to any other professional societies? (please list)

All told, how long have you worked for the USFS? ______ years.

How long have you worked on the Huron-Manistee National Forest? ______ years.

Part 11: Do you have any further comments about public participation in the USFS or about this survey?

Thank you very much for your time and thoughts! This survey does not require an envelope, and postage is prepaid. Please just fold it in half, staple or tape it, and mail it.

APPENDIX C
SHORT FORM (PHONE SURVEY) QUESTIONNAIRES

USFS Citizen Nonresponse phone interview:

Name ______________   Q1 # ______   Date ______   Phone # ______________
Reason you did not return survey:
Would you be willing to send in completed Questionnaire?  Yes  No
Should I send new Questionnaire?  Yes  No
Mention confidentiality.

Question 1: How do you feel about USDA Forest Service decision making in the Huron-Manistee National Forest during the past 2 years?

Response options (for each statement): Strongly Disagree   Disagree   Neither Agree Nor Disagree   Agree   Strongly Agree   Don't know

The Forest Service can be trusted to make good decisions.
My experiences with the Forest Service have been mainly negative.
The Forest Service does its job well.
I trust the Forest Service to make good decisions without my input.
I am satisfied with Forest Service decision making.

Question 2: Over the past 5 years, how many times were you involved in the following Huron-Manistee Forest Plan Revision (FPR) opportunities?

Received mailings related to the Forest Plan Revision (FPR). ___ times
Submitted written comment related to the FPR. ___ times
Talked one-on-one (phone or in-person) with a Forest Service employee about the FPR. ___ times
Attended a FPR meeting that primarily consisted of ________ made by participants (e.g. informational meeting, public hearing, listening session). ___ times
Attended a FPR meeting that primarily consisted of participants discussing issues among themselves (e.g. Working Group Session). ___ times

If the person has participated, ask about the experience:

Response options (for each statement): Strongly Disagree   Disagree   Neither Agree Nor Disagree   Agree   Strongly Agree   Don't know

The procedures used to make decisions were fair.
The outcomes of decisions were unfair.
The decisions reached were consistent with my personal values.
I plan to actively oppose, appeal or sue a decision reached in this experience.
Power was distributed equally among citizens.
Participants began the process with strong, deeply held opinions.

Question 11: Besides your experiences with the Forest Plan Revision, during the past 5 years in which of the following ways have you participated in general politics as well as natural resource decision making?

Organized a group of people around some political issue. No □ Yes □
Worked on a local political campaign. No □ Yes □
Ran for public office. No □ Yes □
Regularly attended meetings of a natural resource, environmental or outdoor recreation related organization. No □ Yes □
Signed a natural resource/environment/land use petition. No □ Yes □
Attended a local zoning/land use hearing or meeting. No □ Yes □
Attended a hearing or meeting of the Michigan Department of Natural Resources or USDA Forest Service. No □ Yes □
Communicated (phone call, letter, or visit) with the Michigan Department of Natural Resources or USDA Forest Service to give a suggestion or complaint. No □ Yes □
Wrote a letter to an editor of a newspaper about a natural resource or land use issue. No □ Yes □
Organized a group of people around a natural resource or environmental issue. No □ Yes □
Served on a natural resource/land-use commission, advisory board, or planning team. No □ Yes □

What is your age? (Please check a category)
□ 10-19  □ 20-29  □ 30-39  □ 40-49  □ 50-59  □ 60-69  □ 70-79  □ 80-89  □ 90-99

Are you: □ Male  □ Female

What is your racial/ethnic background?
□ Asian American  □ Native American  □ Other
□ African American  □ Latino/Hispanic
□ Caucasian  □ Mixed ethnicity

What is the highest level of formal education you have completed?
□ Less than high school  □ Four-year college degree
□ High school graduate (or equivalency)  □ Graduate or professional degree
□ Associates or other 2-year degree

Other comments:

USFS Agency Nonresponse phone interview:

Name ______________   Q1 # ______   Date ______
Reason you did not return survey:
Would you be willing to send in completed Questionnaire?  Yes  No
Should I send new Questionnaire?  Yes  No

Part 1: Over the past 12 months, how many times have you done the following public participation activities? Please circle the number of times done in the past 12 months.

Responded to a citizen's question or comment about a decision by talking one-on-one (phone or in-person).  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Initiated a phone call or in-person visit to a citizen in order to gather input about a decision.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Read a citizen's written comment to the USFS.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Attended a USFS open house.  0  1  2  3  4  5  6  7  8  9  10  11+
Attended a USFS public meeting.  0  1  2  3  4  5  6  7  8  9  10  11+
Attended a meeting of a citizen's organization as a USFS representative.  0  1  2  3  4  5  6  7  8  9  10  11+
Incorporated a citizen suggestion that was new to you into a plan, document, or report.  0  1  2  3  4  5  6  7  8  9  10  11+

Part 4: How many times have you ever done the following? Please circle the number for each.

Attended a Forest Plan Revision working group session.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Attended a public Forest Plan Revision listening session or informational meeting.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Participated in internal discussions about citizen input to the Forest Plan Revision process.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+
Attended a USFS Friends of the Forest meeting.  0  1-3  4-6  7-9  10-12  13-15  16-18  20+

Part 3: How much do you personally agree or disagree with the following statements?

Response scale (circled for each statement): Strongly Disagree (SD)   Disagree (D)   Neither Agree Nor Disagree (N)   Agree (A)   Strongly Agree (SA)

Decisions should be made according to standard professional practices.
Experts should have the power to make decisions.
When making decisions, correctness is more important than popularity.
Decisions made by a person in a position of authority should not be challenged by citizens.
USFS employees understand the long term consequences of decisions better than citizens.
USFS employees have a clear idea of public needs and desires.
Consistent application of rules is more important than incorporating public comments.
Public participation usually helps to build public support for the agency.
Well-established plans should not be changed in response to citizen demands.
The benefits of involving citizens outweigh the costs.
Involving citizens slows down decision making processes too much.

What is your age? (Please check a category)
□ 10-19  □ 20-29  □ 30-39  □ 40-49  □ 50-59  □ 60-69  □ 70+

Are you: □ Male  □ Female

What is your racial/ethnic background? (Please check one category)
□ Asian American  □ Native American  □ Other (please specify)
□ African American  □ Latino/Hispanic
□ Caucasian  □ Mixed ethnicity (please specify)

What is the highest level of formal education you have completed? (please check one)
□ Less than high school  □ Four-year college degree
□ High school graduate (or equivalency)  □ Graduate or professional degree
□ Associates or other 2-year degree

If you received professional training or a degree, what was the subject(s) or field(s)?

Do you belong to the Society of American Foresters? □ Yes  □ No

Do you belong to any other professional societies? (please list)

All told, how long have you worked for the USFS? ______ years.

How long have you worked on the Huron-Manistee National Forest? ______ years.