This is to certify that the thesis entitled SURVEYS OF ATTITUDES AND OPINIONS AS AN INPUT INTO PUBLIC POLICY DECISIONS presented by REBECCA L. JOHNSON has been accepted towards fulfillment of the requirements for Master's degree in Agricultural Economics. Major professor. Date: Nov. 8, 1979

SURVEYS OF ATTITUDES AND OPINIONS AS AN INPUT INTO PUBLIC POLICY DECISIONS

By

Rebecca L. Johnson

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Agricultural Economics

1979

ABSTRACT

SURVEYS OF ATTITUDES AND OPINIONS AS AN INPUT INTO PUBLIC POLICY DECISIONS

By Rebecca L. Johnson

Public input is increasingly being seen as a necessary component of the public policy-making process. For various reasons, some representation of what the public wants is sought by bureaucratic and legislative decision-makers. However, when opinion polls and attitude surveys are conducted, the rules of representation are unclear and inconsistent. To whatever extent the polls and surveys represent the various publics in our society, they do so at the discretion of the survey designer. This thesis looks at various ways that a survey design necessarily selects a particular public to represent.

In the arena of public policy making, budgets are finite and trade-offs must be made between competing programs. This awareness of competition between programs for scarce dollars is often lacking in the setting of a poll or survey. It is questionable whether surveys which do not force respondents to consider trade-offs can be useful guides for policy makers.

A State Forest planning effort which is currently using a survey as part of its planning process is analyzed as a case study in the final section of the thesis.
The various points in the survey process which involve judgments determining whose preferences are to count are identified and analyzed.

ACKNOWLEDGEMENTS

I would like to thank the following people who contributed in many ways to the completion of this thesis. Special thanks go to Dr. Al Schmid, who served as thesis supervisor and inspired the research on this topic. His insights and thoughtful comments have been instrumental and invaluable. I am also grateful to Dr. Larry Libby and Dr. Don Holecek, who served on my committee and provided meaningful comments on the draft copy. I wish to thank Jerry Thiede of the Department of Natural Resources, who was continually willing to provide information on the management of the Pigeon River Country. I am also grateful to my friend, Cindy Cordes, for all of her editorial and typing assistance. A final word of appreciation goes to my colleague and friend, Steve Davies, for providing insightful dialogue and moral support throughout the research process.

TABLE OF CONTENTS

List of Tables and Figures .................... v
PART I .............................. 1
  Chapter I ........................... 1
    Statement of the Problem ................
    Purpose of the Paper ................... 9
PART II ............................. 11
  Chapter II .......................... 12
    Choosing Who to Survey ................. 12
  Chapter III ......................... 16
    Choosing a Survey Method ................ 16
      Telephone Surveys .................. 16
      Mailed Questionnaire ................ 17
      Personal Interview ................. 21
      Timing of Surveys .................. 22
      Construction of the Questionnaire ......... 23
      Question Wording .................. 23
      The Order of Questions ............... 26
      The Type of Question Used ............. 29
      Open-Ended Questions ................ 29
      Forced Choice Questions .............. 31
      Ordinal Ranking Surveys .............. 33
      Ratio Scale Surveys ................ 36
      Explicit and Implicit Trade-Off Questions ..... 38
  Chapter IV ..........................
    Aggregation and Reporting of Survey Results ...... 50
PART III ............................ 55
  Chapter V .......................... 56
    Goal Programming .................... 58
    The Model ........................ 60
    GP For Use In Land Management Plans ......... 62
    The Pigeon River Country Programming Model ...... 64
  Chapter VI ......................... 80
    Conclusions and Recommendations ........... 80
APPENDICES
  A. Questionnaire Used For the Pigeon River Advisory Council ... 83
  B. Follow-Up Correspondence Used .............. 85
  C. Individual Responses to the Ranking Question ...... 86
  D. Individual Responses to the Ratio Question ....... 87
BIBLIOGRAPHY

LIST OF TABLES AND FIGURES

Location of the Pigeon River Country Planning Unit in Michigan ... 57
Goal Priorities By Interest Groups and Land Use for the PRCSF ... 68
Aggregate Ordinal Rankings Obtained From the Ordinal Ranking Scale and Ratio Scale Techniques ... 74

PART I

Chapter I

Statement of the Problem

Increasing numbers of attitude surveys and public opinion polls have been conducted in recent years. These polls and surveys have become important guides for public policy makers. In some cases, a measure of public input is mandated by law for an agency. For example, the National Environmental Policy Act of 1969 (NEPA) requires public involvement on all major federal actions, and Executive Order 11514 requires agencies to develop procedures to assure understanding of proposed actions and to solicit public views (Erickson and Davis, 1975). Such input might be accomplished through public hearings, workshops, referendums or public opinion polls. The choice of technique is usually left up to the agency. Those agencies choosing to use a poll or survey are also free to choose the type of survey to be administered. In many cases the agency contracts with a polling organization to do the survey for them.
It is then up to the agency to clearly state the purpose and need for the survey. Since the polling organization will design the survey to meet the needs of its customer (the agency), the communication link between the two is important.

Even in cases where public input is not mandated, there is an ideological argument which says that in a democratic society, the public decision-makers should be responsive to expressions of demand by members of society. When making decisions which involve resource allocation, these public representatives must attempt to reflect the "public's preferences." Many of the polls and surveys purport to be measures of "what the public wants" and are therefore useful to those people making public choices.

Political survival is another motivation in the use of surveys. This involves finding out what the preferences of the politically powerful people are, not necessarily what the majority's preferences are. As Bartlett (1973) points out, politicians are vote-maximizers, but bureaucrats are security-maximizers. Their security does not depend on satisfying the majority of voters, since they are not directly accountable to the voters. Rather, they must satisfy those interest groups which have the power to terminate the bureaucrat's position. For example, an agent of a natural resources department may be very interested in the opinions of hunters in a particular area where the agent is proposing a land management plan. If the agent fails to satisfy the hunters, it is possible that the powerful conservation groups could force a transfer of that agent. At the same time, the agent may fail to satisfy a majority of the residents in that area, and yet it is unlikely that the majority will be well organized or will have intense enough interest in the issue to cause any trouble for the agent. In such a case, the agent may take a survey of hunters only, to determine their preferences for alternative land management plans.
The public agent is choosing which public is relevant to the purposes and interests of that agent.

Examples of public surveys with relevance to public decision-making are numerous. Carlson (1976) surveyed residents of Idaho to obtain "public preferences toward natural resources use." Since decisions have to be made by the State which will allow and exclude various land uses, an expression of "public preferences" would be useful for making politically favorable decisions. The Congressional Record (12/13/69) includes results of environmental surveys which are supposed to represent "public attitudes regarding environmental improvement." A Congressman from Michigan used surveys in his newsletters to his constituents. He told them, "As Your Man in Washington, you may be sure that this 'grass roots' expression of opinion from home will be of much assistance in my effort to represent our District in the Congress" (Chamberlain, 1972).

Representatives for five major polling organizations were invited to a hearing before the Subcommittee on Economic Growth and Stabilization of the Joint Economic Committee (U.S. Congress, 1977). Senator Humphrey stated that "a better grasp of public attitudes, opinions and expectations is crucial to the work of this and all the other committees of Congress." The value of these pollsters' testimony to Congress was summed up in the following way by Senator Humphrey:

(This is) a rare opportunity to obtain a comprehensive assessment from leading experts about the views and expectations of the American people regarding some of the Nation's major economic problems and what the Government is doing about them. Specifically, I expect that you gentlemen will be able to give us a better understanding of how the public views the energy crisis and how it is responding to the proposals of the administration and Congress to deal with it in terms of conservation and taxing measures.
By the same token, I hope that you will be able to bring into sharp focus the attitude of the public regarding the current state of the economy with its still intolerable level of unemployment and what effect this is having on consumer spending and saving plans. Moreover, I hope that we will get a solid reading on what the public thinks the Government is doing right and what it is doing wrong concerning these important economic issues.

A survey of the Dartmouth community (Community Resource Development, A Massachusetts Heritage, 1973) claimed it was "an aid to selectmen in making wise decisions and to assist them in setting priorities for spending." The Indiana Survey (Gordon, Brooks and Ryan, 1973) was undertaken to assess preferences for community living. The results were to be "of use to public and private decision-makers who are trying to improve Indiana's communities." The survey is supposed to show what characteristics and services are preferred by residents. A survey by Marans and Wellman (1978) looked at the thoughts, expectations and activities of permanent and seasonal residents of the two northernmost counties in Michigan's lower peninsula. The survey was intended to "aid in the planning and environmental management necessary to protect regional natural attractions."

Massay (1978) did a study entitled "Attitudes of Nearby Residents Toward Establishing Sanitary Landfills." By investigating factors which may be influencing attitudes, he hoped to offer suggestions that may be helpful in reducing citizen opposition toward selecting sites for sanitary landfills. Thus, the author was using the survey to find reasons why people either favored or opposed landfills; these reasons could then be thought of as targets for changing attitudes.
For example, if people responded that they opposed sanitary landfill sites because they were concerned with possible odor, then the author proposes to politicians that they should convince constituents that the landfill will not produce odors, and then opposition to the site will be eliminated.

The Michigan Public Opinion Survey (1977) was conducted to determine how Michigan residents prioritize community issues and spending of tax funds. The major purpose was to "provide county, regional and state leaders with information that could help them make decisions about community services." The authors concluded that this survey was useful because "as public officials decide upon alternative uses of scarce tax funds for public services, they are interested in the needs felt by the people." The Michigan State University Experiment Station did a longitudinal study in the Upper Peninsula to measure "Satisfaction with Rural Communities" (1978). It stated that "county planners, legislators, Cooperative Extension Service personnel, educators and others concerned with developing policies and programs related to rural areas like Ontonagon County must not only be aware of the value systems, goals and attitudes of the residents, but also base planning on them."

The Christian Science Monitor (1972) reported on a poll by 22 Representatives who surveyed their constituents on the question, "Should the Federal government expand efforts to control air and water pollution, even if this costs you more in taxes and prices?" They also reported that the "Congressmen receiving a loud-and-clear message from the voters include some lawmakers well positioned to shape federal policy accordingly." These included Gerald Ford, who was the House Republican leader; John J. Rhodes, who was chairman of the House Republican Policy Committee and a high-ranking member of the Appropriations Committee; Wendell Wyatt, also on the Appropriations Committee; and M.C.
Snyder on the public works subcommittee which writes much of the environmental legislation.

Referring to the National Wildlife Federation's environmental survey (1969), the Conservation News (1969) has said, "It's difficult to fathom why officials in both Executive and Legislative branches of the Government do not recognize this demand on the part of the people." Thus, it is expected that public officials will use the results of these surveys in their decision-making.

The National Waterways Conference, Inc., felt this same way. (They are a group for toll-free navigation.) They propose that a massive rural renewal program be started, utilizing water resource development and other economic tools to revitalize rural areas (Newsletter, 1969). They cite support for this position from the results of two polls: one by International Research Associates of New York which shows 91 percent of Americans support water resource development programs. This poll was sponsored by the National Rural Electric Cooperative Association and used "methods proven to produce valid results as to public attitudes." The other survey, conducted by Gallup, shows that most Americans think rural areas and small cities are the most pleasant places to live. To say that these two results provide justification for spending on a massive water resource development program requires a long step in logic. Nevertheless, this is what the group was advocating.

The Randall, et al. (1974), Sinden (1973), Walsh (1978), and Brookshire, et al. studies are all examples of "bidding games" which are used to estimate values of non-market goods. These estimates could then be used in benefit-cost analyses when making decisions on public spending priorities. The Tri-County Regional Planning Commission (TCRPC) used a survey along with a public hearing to get public reaction to proposed alternatives.
The TCRPC was trying to avoid making politically unfavorable decisions, since they stated, "Without public commitment, the clean water plan will be difficult to implement."

Finally, there are polls taken literally every day by Gallup, Harris and other major polling organizations which seek to get the public's assessment of present government decisions, as well as public preferences regarding what decisions the government should make in the future. These polls include questions regarding new and proposed laws (e.g., "Do you favor a proposed law requiring drivers and passengers in cars to use both shoulder and seat belts?", Detroit Free Press, 1972); questions regarding public spending (e.g., "How serious a loss would it be if federal programs in certain areas were cut by one-third?", Harris poll, 1977); and questions regarding tax issues (e.g., "Do you favor cutting or limiting property taxes in the State of California?", Gallup poll, 1978).

All of these surveys claim to be useful for public officials in discerning what the public's opinion actually is. Unfortunately, there is no single "public" for which all policies are relevant. The public may mean all U.S. citizens, or it may mean all U.S. citizens over 18. It may mean only those registered to vote. It is also true that the boundaries of "the public" will change as the level of government involved changes (i.e., the meaning of "the public" to the federal government is not the same as the meaning of "the public" to a state government). To be useful, "the public" which any survey instrument actually selects must be the same public which is of interest to a particular policy maker.

In addition to this concern of who "the public" is for any policy maker, there is the problem of how these people should be represented in any policy making process. In our system of voting in the U.S., each qualified person (registered voter) is allowed one vote.
In a survey this would be analogous to presenting the response choices "Yes or No," "Agree or Disagree," etc. However, a bureaucratic policy maker is not restricted to using this type of representation when doing a survey. Instead, the policy maker may be interested in allowing those people with intense preferences or opinions the opportunity to express them. The survey might then have the response choices "Very Strongly Agree, Strongly Agree, Agree, Disagree, Strongly Disagree, Very Strongly Disagree." This example shows how a policy maker can decide issues of representation while seemingly undertaking the "technical" task of designing a survey.

Two difficulties have been distinguished that the policy maker using surveys can encounter. One is the problem of doing a technically valid survey. This involves such things as choosing a truly random sample, using unambiguous questions, making proper statistical computations, etc. (See Birch and Schmid, 1978, for a discussion of internal and construct validity of survey designs.) However, even if a given survey is technically valid, it may be invalid or ambiguous for the purposes of a given policy maker. There may be two different surveys, each done technically correctly, which measure different aspects of a public's opinion, and the policy maker would then have to make a normative judgment as to which one (if either) is best for his/her purposes. For example, a survey might accurately discover that a majority of a city's population favors a clean air program, while another equally valid survey finds that air pollution is tenth on a list of the city's "most pressing problems," and crime prevention is number one. If only the first survey were done (perhaps funded by the "Citizens for Clean Air"), the mayor may use the results as justification for more spending on air pollution controls. In reality, the majority of the residents of the city might rather have that money spent on crime prevention.
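The clean-air example can be made concrete. The following is a minimal sketch with invented numbers (the 64 percent approval figure and the problem rankings are hypothetical), showing how two internally valid surveys of the same city can point policy in opposite directions:

```python
# Hypothetical numbers only: two technically valid surveys of the same
# city that support opposite spending conclusions.

# Survey A (perhaps funded by "Citizens for Clean Air"): a yes/no
# approval question.
approve_clean_air = 0.64   # share answering "favor a clean air program"

# Survey B: respondents rank the city's "most pressing problems".
priority_rank = {
    "crime prevention": 1,
    "housing": 2,
    "air pollution": 10,
}

# Both results can be reported accurately, yet they point different ways.
print(f"Survey A: {approve_clean_air:.0%} favor a clean air program")
print(f"Survey B: air pollution ranks #{priority_rank['air pollution']}, "
      f"while crime prevention ranks #{priority_rank['crime prevention']}")
```

Neither survey is "wrong"; the normative judgment of which one guides spending remains with the policy maker.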
Therefore, by choosing who to survey and what type of survey design to use, an analyst is making a political choice. As a result, some people's preferences will count and others' will be neglected. Too often this choice of survey design is approached as a technical question, rather than a process involving political choice. Unless it is recognized that political choices are being made, there will not be public awareness and debate on the choices and uses of survey techniques. If people realize that there are winners and losers in the polling process, they might become more interested and involved in it.

Purpose of the Paper

This paper will analyze the different aspects of survey design which can have an effect on the results of any survey. An attempt will be made to look at some of the ways a survey technique can be altered which may lead to different measures of "public opinion." Again, there is no search for a "correct" survey technique, since there is no unambiguous "public opinion" which exists and merely needs to be accurately measured.* Instead, an analysis can show which public has its preferences promoted by a particular survey design. Thus, the paper will look at a number of different polls and surveys and classify them into major types of survey designs. The policy validity (Schmid and Birch, 1978) of different types of designs will be explored. This refers to the application of survey results to policy and the implicit choices that are being made of whose preferences count. Finally, a case study will be analyzed where a citizen advisory council was given two different types of surveys covering the same topic. This experiment was done with the cooperation of the Pigeon River Advisory Council (PRAC), which provides public input into decision-making regarding the Pigeon River Country State Forest (PRCSF).
The members of this Council had been the respondents of a previous survey done by the Department of Natural Resources (DNR). This was an actual case where surveys were being used as a way of measuring "what the public wants" before land use decisions were made. The results of these surveys and their different implications for policy will be discussed.

*There are, however, many technically incorrect survey techniques, and some of these will be mentioned along with possible remedies for them.

PART II

There are three major areas within survey design which can have an effect on the final results of any survey or poll. The first is the choice of who to survey. The analyst has a particular group in mind as the target of the survey. It may be the "general public" or it may be a political, socio-economic or other subgroup of the general population. The second area of survey design is the choice of how to survey. This involves different methods of asking questions and different techniques for measuring responses. The third area is the choice of aggregation technique. Individual responses must be compiled and summed into a reasonable number of categories for meaningful analysis. The number and types of categories and their weights must be decided upon by the analyst. Each of these three areas will be discussed in more detail below.

Chapter II

Choosing Who to Survey

This problem is analogous to choosing political boundaries for voting. Those within the boundaries will have representation, while those outside the boundaries are not represented, even though they may be affected by political decisions made by the group within. However, survey boundaries differ in many ways from political boundaries used for voting. The survey boundary can be changed as the group of interest to the analyst changes. For example, a congressman may survey his constituents for their attitudes on issues which are before Congress (Congressman Chamberlain Reports, 1972).
In this case the survey boundary coincides with the political boundary of the congressman. But the Governor of the same state (Michigan) may turn to the Michigan Public Opinion Survey (1977) to find out what "his public" wants. The major polling organizations (Gallup and Harris) conduct numerous national surveys which are intended to reflect what "the American public wants" (Washington Post, April 28, 1969; Louis Harris, September 20, 1978; State of the Nation, 1974; Wyoming Eagle, June 20, 1978).

Many surveys don't conform to political boundaries at all, but rather attempt to address a particular geographic or interest group. Thus, an agency proposing a clean-up effort on a particular river may survey residents of the river basin (Tri-County Regional Planning Commission, 1978). An agency which is attempting to measure recreational benefits from a particular river may survey the people engaged in recreation on that river (Sinden, 1973). If a recreation planner does a survey to try to determine recreational "needs" for a given area, the planner must consider the particular client groups that recreation has. Hatry, et al. (1976, p. 46) include in these groups:

-individuals in different neighborhoods or regions,
-male vs. female, since recreational interests often differ between the two,
-age groups--the very young and very old have special needs in terms of recreational facilities,
-individuals with handicaps,
-individuals without access to an automobile,
-low income families,
-users of specific types of recreation (e.g., golf, tennis, hiking).

If the planner does not make an effort to include and identify these groups when deciding who to survey, then the results may lead to inappropriate recreational facilities being planned. It can be seen that the boundaries of the survey can be changed in an infinite number of ways.
Therefore, it is important that the analyst realize the political choices that are being made through this process, and whether these implicit choices coincide with the explicit statements of who the analyst represents.

Another way in which survey rules differ from voting rules is that surveys don't attempt to get a response from everyone within the boundaries. Voting rules state that a person must be over 18, must be a citizen, and must be registered, and then anyone within these limits is allowed to express their preferences in an election. With a survey, limits are also set. Usually only people over 18 are considered, and they often must reside within a relevant geographic area. But within these limits, the analyst must use some selection technique to reduce the number of respondents to a workable number. Randomization is almost always used in these cases. Thus a check is usually made on the characteristics of the sample population to see if they coincide with the characteristics of the latest census taken in the area. If there are discrepancies, these are usually reported, but the survey is seldom redone to try to correct for a "non-representative" population (see, for example, Walsh, et al., 1978).

Regardless of the randomization technique that is used, there are ways in which the selection of respondents will result in a selection of whose preferences are to count. In some cases, the voter register is used, which means that people who are most likely to register to vote (higher income, higher education) will also be the people represented in the poll. There are certain areas where a voter register may not be at all appropriate for getting a representative sample. Such would be the case in a small college town where students make up 80 percent of the population, but where most of the students vote in their home towns. Another example might be a seasonal tourist area where the population doubles during the peak season with second home owners.
Whether the analyst wishes to include these temporary residents should depend on the purposes of the survey. But these temporary residents should not be excluded merely because the analyst found the voter register to be the most available information. In other cases a city or other type of official directory is used. Depending on when the survey is done, however, these directories may be out of date, since they are only compiled periodically. The periodic nature of this process might result in permanent residents being represented more than transient ones. Another method that is used is the random selection of names from phone books. While it is true that most households have telephones (Walsh, et al. found that approximately 93 percent of the Ft. Collins and Denver area households had telephones), the people who are not listed in the phone book are most likely either very rich or very poor. Therefore, it is not just a random group of diverse people that is being left out by this method.

A seemingly impartial rule which is used in some surveys is to take only one response from a household. However, at least one survey has found that this leads to under-representation of females in the survey. Walsh, et al., found that a number of female family members requested that their spouse provide information for the survey. "In most of these cases the husband was the traditional family spokesman and the wife requested that he provide the necessary data" (p. 22). Thus, a rule which appears to exclude people at random actually excludes a particular subgroup of the population.

In any of these general population types of surveys it may be useful to check the characteristics of the sample against those from a previous census, as was mentioned earlier. Care must be taken to ensure that the relevant geographic areas of the two studies are the same and that the census is not too far out of date.
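The census check just described can be sketched as follows. All figures here are invented for illustration, as is the 5-point discrepancy threshold; the idea is simply to compare each group's share of the completed sample with its share in the latest census:

```python
# Illustrative check (all figures invented): compare the age composition
# of a completed sample against the latest census before reporting results.

census_share = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}
sample_counts = {"18-34": 190, "35-54": 220, "55+": 90}   # n = 500

n = sum(sample_counts.values())
for group, expected in census_share.items():
    observed = sample_counts[group] / n
    gap = observed - expected
    # Flag any group whose sample share strays more than 5 percentage
    # points from the census figure (the threshold is arbitrary).
    flag = "  <-- discrepancy worth reporting" if abs(gap) > 0.05 else ""
    print(f"{group}: sample {observed:.1%} vs. census {expected:.1%}{flag}")
```

In this invented example the oldest group falls well short of its census share, which is exactly the kind of discrepancy that, as the text notes, is usually reported but seldom corrected by redoing the survey.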
However, since the characteristics of the respondents are not known until after the survey is completed, it is difficult to go back and repeat the survey using a different group of respondents.

Chapter III

Choosing a Survey Method

There are many elements of the actual administration of a survey which can have an effect on the results. The most important is probably the type of question which is used. But also a factor is the manner in which the questions are asked. The three techniques most often used are mail, telephone and personal surveys. Often there is a combination of these, where an advance contact is made by mail or telephone and then the actual survey is done in person.

Telephone Surveys

The telephone survey has the previously mentioned characteristic of only reaching those segments of the population which have telephones. If a phone book is used to obtain the numbers to be called, then there will also be the problem of only surveying those people with listed numbers.* Furthermore, a telephone survey will tend to represent more heavily those people who spend time at home than those with irregular home schedules. It is also possible that a particular member of the household (i.e., the housewife) will be the more frequent respondent to telephone surveys, since they are more often at home.

*However, this can be avoided by simply finding out what telephone exchanges are used in any area and then dialing the last four digits randomly. Then no phone book need be used.

Mailed Questionnaire

The mailed questionnaire can also result in unanticipated problems for the analyst. As Moser and Kalton (1972) point out, the responses on the returned questionnaire have to be accepted as final. It can't be discerned if more than one person actually filled in the answers or if the respondent discussed the questions with someone else before answering.
It can't be known whether the respondent was unclear as to the meaning of certain questions and therefore answered randomly just to fill in the blanks. Any additional reactions to questions, outside of what is written down, will not be known (Moser and Kalton, pp. 260-261). These limitations would be especially relevant for respondents with low levels of education or when a survey is unusually complicated.

Possibly the most important problem with mailed surveys is not getting an adequate return rate. But of more interest here is not just the return rate, but whether certain groups within the population are more likely to return mailed surveys than other groups. Heberlein and Baumgartner (1978) have done a comprehensive study on the factors which affect response rates to mailed questionnaires. The number of contacts that the analyst made with the respondents was the overwhelmingly important factor. Contacts include introductory or lead letters, the actual questionnaire, and any follow-up letters. The second important factor was issue saliency, i.e., whether the respondents were interested in or concerned about the issues in the questionnaire. It is not surprising that people who feel they have the most to gain or lose on a particular issue will be the most willing to express their opinion on that issue. Heberlein and Baumgartner also point out that "attitude questions often involve a response choice in which the individual may be ambivalent or undecided about the alternatives. Such cognitive exertion may be sufficient cost to the respondents to deter some from completing the questionnaire" (p. 460). This means that the analyst must be careful when interpreting the results from a survey. To take a hypothetical example, suppose a questionnaire asks, "How concerned are you about water pollution? -Very concerned; Somewhat concerned; Not very concerned; Not concerned at all."
If 80 percent of the questionnaires are returned, the results might be that 30 percent said "very concerned," 30 percent said "somewhat concerned," 20 percent said "not very concerned" and 20 percent said "not concerned at all." These results could be reported as "a majority of the public is concerned about water pollution." However, suppose that the 20 percent of those surveyed who did not return the questionnaire were people who were not concerned at all and therefore did not bother to fill out the survey. Then what "the public" actually feels will have been misrepresented. Of course, there is no way of knowing what the non-respondents actually feel on an issue, but Heberlein and Baumgartner's findings on issue saliency should be considered if a survey has a very low return rate. In particular, gross statements about "what the public feels" should be avoided.

Other factors were also found to be significant in affecting response rates of mailed surveys. Government-sponsored research which was labeled as such got higher response rates. This apparently increased the perceived importance of the survey and made respondents feel more "obligated" to return it. Techniques such as using special delivery or registered mail had the same effect. In general, if the survey can make the respondent feel that her/his opinion matters, then there is a greater chance that the survey will be returned. Walsh, et al., took advantage of this technique in their survey on recreational benefits of improved water quality. An introductory letter was sent which said that there was no obligation to participate in the survey, but that those who did may influence future water quality decisions (p. 22). While this may be effective in getting responses from those who are interested in future water quality decisions, it does not make the issue any more salient to those who are unconcerned.
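The distortion in the hypothetical water-pollution example above can be worked out directly. The sketch below only restates that example's own assumptions (an 80 percent return rate and nonrespondents who all belong in the "not concerned at all" group); it is illustrative arithmetic, not an estimate from any real survey.

```python
# Nonresponse-bias arithmetic for the hypothetical water-pollution question.
# Assumes, as in the example, an 80% return rate and that every
# nonrespondent would have answered "not concerned at all".

return_rate = 0.80

# Distribution among those who returned the questionnaire.
respondents = {
    "very concerned": 0.30,
    "somewhat concerned": 0.30,
    "not very concerned": 0.20,
    "not concerned at all": 0.20,
}

# Reported result: a "majority" appears concerned.
reported_concerned = (respondents["very concerned"]
                      + respondents["somewhat concerned"])

# Adjusted result: rescale respondent shares to the whole population
# and assign the 20% nonrespondents to "not concerned at all".
population = {k: v * return_rate for k, v in respondents.items()}
population["not concerned at all"] += 1 - return_rate

adjusted_concerned = (population["very concerned"]
                      + population["somewhat concerned"])

print(f"reported: {reported_concerned:.0%}, adjusted: {adjusted_concerned:.0%}")
# Under these assumptions the reported 60% "majority" shrinks to 48%.
```

The point of the sketch is that even a fairly high return rate can flip a "majority" finding when nonresponse is correlated with the attitude being measured.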
Perhaps saying that the respondents may influence future government spending on water quality would make the issue salient to more people. Heberlein and Baumgartner also found that students, employees and military personnel are more likely to return a questionnaire. ("Employees" refers to questionnaires sent out by a company to its employees.) Again, the factors of issue saliency and feelings of obligation probably contribute to this finding.

In general, the study found that to increase returns, the analyst could either lower the costs involved in completing and returning a questionnaire (e.g., postpaid return envelopes, forms which are easy to fill out), or increase the motivation of the respondent to overcome the cost barrier. It was found that a monetary incentive was significant in increasing the initial response rate (as opposed to increasing the response rate after follow-ups). This incentive may be effective for getting returns from low-income respondents, especially if the incentive is high enough. If only a small amount of money is offered, it may simply make the survey appear more important that someone is willing to pay for responses.

The important point from these findings is that certain subgroups of the population may be less likely to return questionnaires, which could lead to under-representation of these groups in the sample. Depending on what the results are used for, this lack of representation can lead to poor political choices. A survey of Dartmouth, a fast-growing college town, was done to get an expression of "community opinion" (Community Resource Development, A Massachusetts Heritage, 1973). The report stated, "As an aid to selectmen in making wise decisions and to assist them in setting priorities for spending, here are some of the indications as to how residents of Dartmouth responded to the questionnaire." The survey was to find out what the most adequate and inadequate community services and facilities were.
However, only 15 percent of those surveyed returned the questionnaire. Male responses almost doubled female responses and about half of the respondents were 40-64 years of age. A great majority had 12 or more years of education and almost all owned their own homes (which seems rare for a college town). Yet the report called this "a fine sampling basis for obtaining local opinions." Based on the characteristics of the respondents, it is doubtful that college students are represented at all. If these results are used to guide public spending on new community facilities and services, then a political choice has surely been made as to who will have influence on those public decisions. It is very possible that the "selectmen" feel that property owners should have more weight in deciding where public money should be spent, but then such a political value judgment should be stated rather than implying that decisions will be made based on the "community's opinion." While a survey which includes questions on demographic characteristics can provide a check on the representativeness of the sample, it is still a normative decision that must be made by the analyst as to what constitutes "representativeness" for his/her purposes.

Personal Interview

The personal interview is the preferred method of most analysts for doing a survey. Of course, there are trade-offs in the convenience and lower costs of telephone and mailed surveys which have to be considered before deciding to use personal interviews. In a one-to-one situation the interviewer is able to interact with the respondent and help make the questions clearer and more understandable. (This can also be the case with telephone interviews.) Additional comments that are made by respondents can also be noted.
In Mitchell's (1978) environmental survey, he used a lengthy "debriefing" of the interviewers afterward to get additional information about how the respondents answered questions, which ones they had trouble with, and any additional responses that weren't written down. This additional information can give the analyst good clues about the construct validity of the questionnaire (i.e., whether the questions are asking what they are intended to ask).

The personal element of the direct interview has a different impact than the impersonal nature of a mailed questionnaire. When standing face-to-face with someone there is a subtle pressure not to appear uneducated or naive. Thus, many people will answer questions about subjects they either know nothing about or have no opinion on, just to avoid saying "I don't know." The respondent may also feel a pressure to be polite to the interviewer and therefore try to answer questions that really haven't been thought about. The opposite may also be true. A person may feel that someone who comes to their door asking questions is nosy and rude and therefore doesn't deserve to get any straight answers. The respondent may answer in whatever way she/he thinks will get rid of the interviewer the fastest. Certainly the appearance of the interviewer will have an effect on how the respondent feels about being questioned. It has been suggested that middle-aged, slightly overweight women are most likely to get true responses when doing a survey. Apparently people do not want to feel inferior to or threatened by the person who is asking them personal questions. It would follow that people in lower socio-economic groups would more often feel threatened by an interviewer and would therefore be less likely to give their true responses. Systematic biases of this type should be watched for.

Timing of Surveys

Another aspect of survey methodology which can have an effect on the results is the timing of the survey.
There are two timing effects that should be considered. The first is the timing of the survey in relation to the entire decision-making process. It will make a difference in the final policy whether the public is included at the beginning, when alternatives are first being suggested, or at the end, when a final alternative is being approved (Erickson and Davis, 1975). The second effect involves the timing of the survey in relation to current events. The numerous Gallup and Harris polls are usually done in response to some controversial issue which is currently in the news. Examples are the polls on Proposition 13 and related tax issues in mid-1978 (Newsweek, June 19, 1978; Wyoming Eagle, June 20, 1978) and the environmental attitude polls of the late sixties and early seventies (National Wildlife Federation, 1969; Christian Science Monitor, 1972). It is not so much a question of whether these polls are measuring the public's attitudes correctly as it is a question of whether government representatives should be basing policy decisions on the results of such surveys. Earl Shorris (1978), in his short article entitled "Market Democracy, The World According to Gallup," has pointed out that constant reactions by politicians to opinion polls will lead to instability in government. The continuity that was provided for in the Constitution is being undermined by this new wave of single-issue politics. While this is one person's opinion, it does point out the important impacts that polls and surveys are having, and especially the importance of the timing of the survey. Schmid and Birch (1978) have also asked "whether the survey question can ever approach the political reality where choices are grouped, compromised and traded off. The usual survey question presents choices as if each were to be decided on its own merits" (p. 5). This is one aspect of the policy validity of surveys.
The factor of timing has to be considered as crucial when surveys are used in the political arena.

Construction of the Questionnaire

It was stated previously that the most important factor influencing survey results may be the actual construction of the questionnaire. This includes question wording, the order of the questions, additional information which is included with the questionnaire, and the type of question being used. These variables will affect the internal and construct validity of a survey, but here the relationship to policy validity will be explored, i.e., how different survey constructions result in different expressions of the "public's opinion."

Question Wording

Question wording here means the type of words that are used in a question. The most obvious problem occurs when words are unfamiliar to the respondent. If a question is asked which uses large, uncommon words, the respondents with lower education levels will have difficulty understanding what is being asked and accurately expressing their opinions. Such questions may lead to a large number of "don't know" responses, which would leave only the more highly educated group being represented.

Problems can also arise with ambiguous, misleading or slang words. Words have different meanings and connotations to different people. The analyst must be sure that the intended meaning is conveyed to the respondents or the results won't be meaningful. The National Rural Electric Cooperative Association did a survey to find "the public's attitudes toward rural electric cooperatives" (1968). One question asked whether people preferred to have: (a) electric cooperatives owned by the consumers, (b) private electric companies, or (c) city-owned electric companies. The results were 31 percent, 29 percent, and 25 percent respectively. However, the authors recognize the importance of using the word "cooperative." They state, "the term 'cooperative' in itself is a major positive element.
(People feel it) is a more 'human' supplier, more concerned about the consumer, more accessible to him." The question that must be asked is whether people have these feelings about electric cooperatives because of their past experiences with them, or because of past associations with the word "cooperative." Do the results of such a survey tell a city whether it should turn its public utility over to the consumers?

Many of the environmental surveys use words whose meanings are insufficiently clear. "Pollution" will have a different meaning for someone living in a rural environment than for someone living in a city. There will also be differences between people's ideas of what it means to "fight pollution." A Harris poll in 1971 (Christian Science Monitor, 1972) asked if Americans would pay $15 per year in added taxes to fight pollution. Although 59 percent said yes, this doesn't really say how people want their taxes spent. One person may envision the extra taxes being spent to clean up a local river while another might see the money being spent on auto emissions control. The results of such a survey don't help public officials decide where to spend public money. The issue of trade-offs in public spending priorities will be discussed further in a later section.

Another environmental survey, sponsored by the National Wildlife Federation (1969), asked questions regarding "our natural surroundings." Again, an urban resident will have a different concept of our natural surroundings than a rural resident. An environmentalist will have a different concept than an industrialist. Therefore, when asking "how much would you be willing to pay each year in additional taxes earmarked to improve our natural surroundings?", the answers will be ambiguous. What are people really saying they are willing to pay for? It isn't clear from the question being asked. It is also true that words like "natural surroundings" carry some connotation of intrinsic worth.
People feel that they should be willing to pay to improve their natural surroundings regardless of what those surroundings are made up of. Whether people are really willing to pay for a water pollution program or for an urban renewal project has not been addressed.

Another example of ambiguous question wording was found in the Congressional Record (1969). Mr. Mondale had entered into the record a series of articles from the Minneapolis Tribune by Richard P. Kleeman. It had been stated that 82 percent of the public was interested in conservation, based on the responses to the question, "Conservation refers to conserving our natural resources. How much interest do you have in conservation?" This "definition" of conservation which is included in the question really doesn't help to clarify the meaning of the term. Conserving our natural resources may mean absolute preservation to some people and prolonging the life of natural resources to others. It is difficult to see what directions for public policy such a survey result could give to the members of Congress.

The Order of Questions

It has been documented that the order in which questions are presented is a factor in determining the responses. This is especially true for telephone or personal interviews, since the respondent cannot see all the questions before answering any one of them. In a mailed survey, Moser and Kalton (1972) have pointed out, information provided in a later question may be used in answering an earlier one (p. 260). This may or may not be a problem depending on the purpose of the survey. It is certainly true that information from earlier questions will be used in answering later ones as well. More important than the additional information from other questions is the influence that this information has on the respondent. If the additional information just adds more "facts" so that the respondent can make a more informed judgment, this probably wouldn't interfere with the purpose of the survey.
However, if the additional information persuades the respondent into thinking that a "correct" answer exists which is different from his/her own, then the analyst would not be getting a true measure of the respondent's preference or opinion. An example is the National Wildlife Federation's survey (1969), where the first question was, "You may have heard or read claims that our natural surroundings are being spoiled by air pollution, water pollution, soil erosion, destruction of wildlife and so forth. How concerned are you about this? -- Deeply concerned, somewhat concerned, or not very concerned." The respondent is immediately alerted that this is an environmental survey and has been informed that "our natural surroundings are being spoiled." Rather than trying to honestly answer each following question, the respondent may identify her/himself as either an environmentalist or an anti-environmentalist and then answer the remaining questions on that basis.

Other ways that question order can have an effect have been demonstrated by Carpenter and Blackwood (1977). They did an analysis of variance on the results from varied question ordering on each of four different types of surveys. The ANOVA results "showed persuasive position effects for three of the four scaling metrics" (p. ii). The most variation resulted from criterion effects, which are the effects of rating any given item on the scores for subsequent items (i.e., the criterion for evaluation of an item would be influenced by the foregoing item or items, either by the specific content of the item or merely its presence or absence). The study that the authors used was a nationwide survey of attitudes of adults toward wild and domestic animals and their treatment by man. On a "scale 0-10 certain items" type of survey, they found that when an item is first in the list, the lack of evaluative reference points results in the assignment of extreme values (either high or low).
As the item's position was varied down the list, the scores progressed to the alternate extreme. With a "modified magnitude estimation" technique, the respondents were asked to rate 16 animals on a scale from 0 to 100 points, according to how much they liked them. They were to assume that a deer was worth 50 points. The authors found that animals received their lowest scores when in the first four positions and their highest scores in the last eight positions. This suggests that it took at least three to six animals before a criterion for evaluation was established. Perhaps the first few animals were evaluated with reference to the deer, but then these first items became the references for later items. Overall, the order effects resulted in a great deal of variation in the ordinal rankings.

Carpenter and Blackwood say that the criterion effect could probably be overcome by acquainting respondents with full or partial lists of the items before evaluations are to be made. The surveyor could also provide three or four "throw away" items at the beginning of the list. Another suggestion is to randomize the order of presentation among surveys so that the position effects are also randomized.

The findings of Carpenter and Blackwood clearly show that two different surveys dealing with the same issue can result in two different measures of "public preferences." It is not possible to say that a particular question ordering is the "correct" one. As with the other factors which influence survey results, the analyst must be aware that these problems exist and that by choosing a particular survey design, the analyst is choosing to weigh certain people's preferences more than others (e.g., choosing to give greater weight to the first four items in a ranking survey). If the analyst is making these types of political choices, then those choices should be open to review and debate by the public, just as any political choice should be.
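The two remedies just described, "throw away" warm-up items and a randomized order of presentation, can be sketched in a few lines. The item names and warm-up list below are illustrative assumptions, not the items from the Carpenter and Blackwood study.

```python
# A minimal sketch of the order-effect remedies discussed above:
# each respondent rates a few warm-up ("throw away") items first, so a
# rating criterion can form, and then sees the real items in an
# independently shuffled order, so position effects average out
# across the sample. All item names here are illustrative.
import random

items = ["deer", "dog", "wolf", "trout", "coyote", "eagle"]
warmups = ["housefly", "turtle", "robin"]  # rated but excluded from analysis


def questionnaire_order(rng: random.Random) -> list:
    """Return one respondent's presentation order: warm-ups first,
    then the real items in a freshly shuffled order."""
    order = items[:]  # copy so the master list is untouched
    rng.shuffle(order)
    return warmups + order


rng = random.Random(42)  # seeded only so the sketch is reproducible
for respondent in range(3):
    print(questionnaire_order(rng))
```

As the text notes, printing and administering a different form for each respondent is what makes this remedy add to the cost of the survey.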
The Type of Question Used

The type of question refers to the form of the question and what responses are available for the respondent to choose from. Moser and Kalton have said, "for virtually every conceivable question, there are several possible, and theoretically acceptable forms; in choosing between them, knowledge of the survey population and subject matter, common sense, past experience and pilot work are at present the surveyor's main tools" (p. 308). Using these tools should lead the analyst to a choice of question form which is most appropriate for the analyst's purpose. But these tools will not lead to a choice of a "correct" measure of the "public's opinion." Rather, they will lead to different aspects of the opinions of different publics. Preferences and opinions are multi-dimensional and any particular question will serve to bring out just one dimension of those preferences. The different question forms can be analyzed as to which dimensions each form serves to emphasize.

Open-Ended Questions

If the respondent is free to answer a question in his/her own words, then the question is open-ended. Allowing a respondent to choose his/her own method of expression is felt to lead to a truer representation of opinion or preference. Countering this argument is the one which says that people are not good at expressing their preferences unless they are allowed to choose among various responses. Polls of the type which ask "What do you feel is the most pressing problem facing our society?" and allow the respondent to answer freely often get different results than a survey which asks, "Which of the following problems facing our society do you feel is the most pressing? Inflation, Crime, Unemployment, Pollution, etc." (e.g., Harris & Assoc., 1971). There may be a problem listed which the respondent didn't think of when answering freely, yet is very concerned about.
It might be hypothesized that people with lower education levels would have more difficulty answering open-ended questions. Schuman and Presser (1977) have found that question form makes the least difference in responses for the most educated groups. The authors were testing the assumption of "form-resistant correlations," which says that even if marginals cannot be trusted due to question-form uncertainties, associations between variables are not subject to this same instability. They found that the assumption of form-resistant correlations must be rejected when open and closed versions of the same basic item are considered. Since they found that form affects the less-educated groups more, the form becomes a self-selection procedure -- i.e., it is not a random experiment anymore.

It is also likely that issues which receive the most media attention will most often be cited in open-ended questions. Thus, the timing of the survey would be extremely important in these cases. Also, special interest groups with the resources to make the public aware of their issues will have those issues cited more often in these types of polls. Therefore, the groups with the most money and influence on the media may receive more weight in a political decision which uses open-ended polls as a basis for "what the public wants."

Even if open-ended questions were better ways of getting people to state their true opinions, there are trade-offs in convenience which the analyst must consider between open-ended and forced choice questions. It is very difficult to aggregate diverse responses to a question into a reasonable number of categories. A set of rules must be developed which will determine what "type" of response goes into what category. For example, problems dealing with air and water pollution, nuclear wastes, congestion, land use and overpopulation might all be categorized as environmental problems, as opposed to other categories such as crime, drug abuse, inflation, etc.
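A rule set of the kind just described can be sketched as a simple keyword classifier. The categories and keywords below are illustrative assumptions only; an actual coding scheme would be developed and documented by the analyst, since the choice of categories itself shapes the reported results.

```python
# A minimal sketch of a categorization rule set for open-ended answers.
# The categories and keywords are illustrative, not from any actual survey;
# responses matching no rule are flagged for hand coding.

CATEGORIES = {
    "environment": ["pollution", "nuclear waste", "congestion",
                    "land use", "overpopulation"],
    "crime": ["crime", "drug abuse"],
    "economy": ["inflation", "unemployment"],
}


def categorize(response: str) -> str:
    """Assign a free-text response to the first category whose
    keyword appears in it; otherwise leave it uncategorized."""
    text = response.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "uncategorized"  # left for manual review


print(categorize("I worry most about water pollution"))  # environment
print(categorize("Inflation is eating my paycheck"))     # economy
```

Even so mechanical a scheme makes normative choices: putting "congestion" under environment rather than, say, transportation is exactly the kind of judgment the text warns about.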
Such a gross categorization scheme could be misleading with respect to where public spending should be directed. Members of Congress could use such results as "justification" for spending on whatever types of environmental problems they were interested in. If people want to be represented in public decision-making, they should be concerned about the survey techniques which are used to measure their opinions.

Forced Choice Questions

As mentioned earlier, forced choice questions have the advantage of convenience over open-ended questions. They are more convenient for respondents, which should lead to higher return rates, and they are also more convenient for the analyst in terms of aggregating results. Obvious problems with forced choice questions include not offering a wide enough array of responses and "leading" people to respond in certain ways by the choices which are available. While the list of responses should not be so long as to deter the respondent from reading all of them or to confuse the respondent, it must be long enough to cover most choices that are actually available. For example, a hypothetical question might ask how much people would be willing to pay in additional taxes each year for improvement of the environment. If the responses to choose from were "a small amount such as $10 or less, a moderate amount such as $50, or a large amount such as $100 or more," the only way for a respondent to answer "none at all" would be to choose "a small amount such as $10 or less." If half of the "small amount" responses were actually "none at all" responses, then the government might raise taxes by much more than people were actually willing to have them raised. Surveys should also include the possible responses of "No Opinion," "Don't Know," or "Not Relevant." This would keep people from answering questions that really do not measure their true opinions.
People may still be reluctant to say "I don't know" or "I have no opinion on that," but these choices should at least be available.

The Rural Electric Cooperative Association did a survey (1968) in which the available responses may have "led" people into responding in a particular way. The survey listed "virtues" of different communities and asked people to indicate whether "Big City" or "Rural" communities were more likely to have those characteristics. "No Difference" and "No Opinion" were also offered. However, their "virtues," along with the labels "Big City" and "Rural," were very stereotypical in terms of what we have all been led to believe big cities and rural communities are like. Their "virtues" included "To be in good health," "To be very honest in their business dealings," and "To have a lot of tension and pressure in their daily lives." This type of survey tends to confirm whether certain stereotypes exist with respect to big city and rural living. For example, most people felt that more poverty is found in the cities than in rural areas and that housing conditions are worse in the cities. In fact, the poorest of the poor live in rural areas. While it may be useful to know that people have misinformation about different communities, this shows how responses can be influenced by factors other than people's attitudes. Attitudes are certainly a function of the knowledge that people have, but attitudes based on misinformation may not be relevant as guides to public decision-making.

A decision is always made by an analyst as to whether people are informed enough to offer useful opinions. If the analyst uses an attitude survey which does not supply preliminary information, then it has been implicitly decided that people's attitudes (based on whatever information they already have) are relevant in the policy process.
Alternatively, the analyst may wish to educate the respondents in some manner by supplying preliminary information with the survey. The amount and content of this information will have an effect on survey results, but the decision to include or not to include information must be made.

A problem can occur when a policy maker uses an independent attitude survey as input to the decision-making process. If a decision is to be made on whether additional health care facilities should be constructed, then a question such as, "Do you feel rural health care facilities are adequate?" would be more appropriate than using the results of someone else's survey regarding attitudes about rural health. For example, a survey in which people identify "rural" with "being in good health" does not mean that these same people feel that rural health facilities are adequate or any better than those in big cities.

Ordinal Ranking Surveys

The ordinal system involves presenting the respondent with a list of items and then asking for a ranking of the items according to some specified criterion. The criterion may be how much the respondent likes each item, how important each item is (to the respondent, to the nation, to the region, etc.), or perhaps how deserving each item is of additional public spending. By definition, the ordinal ranking can only reveal the order of preference; it can say nothing about the interval between successively ranked items. As mentioned before (Carpenter and Blackwood), the results of a ranking survey can vary depending on the order in which the items are presented. Randomization of the presentation order among the respondents should serve to randomize the bias. This will add to the costs of doing the survey, however.

Carpenter and Blackwood also found that if a ranking survey is combined with a "distribute 100 points" among alternatives survey, the ranking of the results will no longer change with different orders of presentation.
The respondents were first asked to rank three alternatives and then were asked to distribute 100 points between the three alternatives. It was felt that the ranking which was done first might give the respondents a chance to crystallize their ideas about the subject.

The results of most ranking surveys will show how important the respondents feel different items are. But as with the forced-choice questions which ask "how concerned" people are with various items, these surveys are not necessarily useful for directing public spending. While people may feel a public program is very important, they may not feel that any more money needs to be directed to it. There are few people who would say that national defense is not important, but there are many who feel we should not spend any more money on it (Chamberlain, 1975; State of the Nation, 1974). Therefore, it may well be that the fifth or tenth most "important" program is where people would like to see more government spending (e.g., Michigan Public Opinion Survey, 1977). To try to overcome this problem, the analyst can include a second type of question which asks the respondent to indicate whether "more, less or the same" amount of money should be spent.

APPENDICES

[Appendix tables not reproduced: interest group priorities for the Pigeon River Country, and individual responses to the ranking question.]

BIBLIOGRAPHY

Bare, B. and B. Anholt. 1976. Selecting forest residue treatment alternatives using goal programming. USDA For. Serv. Gen. Tech. Rep. PNW-43. Pac. Northwest For. and Range Exp. Stn., Portland, Oregon. 26 p.

Bell, Enoch F. 1975.
Problems with goal programming on a national forest planning unit. P. 119-126 in Systems Analysis and Forest Resource Management, proceedings of a workshop sponsored by Systems Analysis Working Group, S.A.F., USDA For. Serv., Southeast For. Exp. Stn., and Sch. For. Resour., University of Georgia, Athens. August 11-13.

-----. 1976. Goal programming for land use planning. USDA For. Serv. Gen. Tech. Rep. PNW-53. Pac. Northwest For. and Range Exp. Stn., Portland, Oregon. 12 p.

Birch, A. and A. Schmid. 1978. Public opinion surveys as guides to public policy and spending. Draft copy. Department of Agricultural Economics, Michigan State University, East Lansing, Michigan. 16 p.

Bishop, Richard C. 1978. Endangered species and uncertainty: The economics of a safe minimum standard. American Journal of Agricultural Economics. Vol. 60, No. 1. February 1978. pp. 10-18.

Brookshire, David, Berry Ives and William Schulze. Undated. The valuation of aesthetic preferences. Southwest Regional Project Working Paper. Department of Economics, University of New Mexico, Albuquerque, New Mexico.

Buttel, Frederick H. and Denton E. Morrison. Undated. The environmental movement: A research bibliography with some state-of-the-art comments. Michigan Agricultural Experiment Station Journal Article No. 0000. Department of Sociology, Michigan State University, East Lansing, Michigan.

Carlson, John E. 1976. Public preferences toward natural resources use in Idaho. Research Bulletin No. 94. Agricultural Experiment Station, University of Idaho.

Carpenter, Edwin H. and Larry G. Blackwood. 1977. The effects of question position on responses to attitudinal questions: A look at four different scaling metrics. Experiment Station Journal Paper No. 2800. Department of Agricultural Economics, University of Arizona.

Center for Community Economic Development. 1979. Performance measures for community development. p. 8-18 in Newsletter.

Chamberlain, Congressman Charles E. April 3, 1972.
Your man in Washington reports.

Christian Science Monitor. 1972. Pollution control tax wins public. Polls show U.S. willing to pay.

Clapper, Louis S. Item in Conservation News (released by the National Wildlife Federation), May 1, 1969. p. 3.

Dartmouth Community Opinion Survey. May 1973. In Community Resource Development, A Massachusetts Heritage, Vol. X, No. 2.

Detroit Free Press. 1972. Sound Off. April 12, 1972.

Dunlap, Riley E. and Kent D. Van Liere. Undated. Environmental concern: A bibliography of empirical studies and brief appraisal of the literature. Vance Bibliographies, Public Administration Series: Bibliography p. 44.

Erickson, David L. and Adam Clark Davis. 1975. Public involvement in recreation resources decision-making. p. 191-215 in Proceedings of the Southern States Recreation Research Applications Workshop, September 15-18, Asheville, North Carolina.

Gallup Organization. 1978. Enough! Newsweek, June 19, 1978. p. 22.

-----. 1974. Overwhelming majority of Americans favor government spending for pollution control. In State of the Nation. Potomac Associates.

Gibson, Stephen and Ronald Hodgson. 1971. Ecological attitudes tested. In Michigan State News, February 1971.

Gordon, John, Ralph Brooks and Vern Ryan. 1973. Preferences for community living: A 1973 statewide opinion of Indiana residents. Cooperative Extension Service, Purdue University, West Lafayette, Indiana.

Greenhalgh, Richard. October 1976. Improving public involvement in USDA natural resource planning. Working Paper No. 15. Natural Resource Economics Division, Economic Research Service, USDA.

Hansen, Bruce G. 1977. Goal programming: A new tool for the Christmas tree industry. Forest Service Research Paper NE-378. For. Serv., USDA, Northeastern For. Exp. Stn., Upper Darby, Pennsylvania. 4 p.

Harris, Louis and Associates, Inc. May 1971. The public's view of environmental problems in the state of New York. Study No. 2119.

Harris, Louis and Associates, Inc. 1969. Public backs ABM, but many have doubts.
The Washington Post, April 28, 1969.

-----. 1977. Would cut U.S. spending. The State Journal, Sept. 19, 1977. Lansing, Michigan.

-----. 1978. Service cut would be opposed. The Wyoming Eagle, June 20, 1978. Cheyenne, Wyoming.

Hatry, Harry, Louis Blair, Donald Fisk and Wayne Kimmel. 1976. Program analysis for state and local governments. The Urban Institute, Washington, D.C.

Heberlein, Thomas A. and Robert Baumgartner. 1978. Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. p. 447-462 in American Sociological Review, Vol. 43, No. 4.

Kimball, William J., Manfred Thullen, Alan R. Kirk, and Christopher Doozan. 1977. Community needs and priorities as revealed by the Michigan public opinion survey. Dept. of Resource Development, Cooperative Extension Service and Agricultural Experiment Station, Michigan State University, East Lansing, Michigan.

Lee, S.M. 1972. Goal programming for decision analysis. Auerbach Publishers, Inc., Philadelphia, Pennsylvania. 387 p.

Libby, Lawrence W. Current rules affecting natural resource use. Paper delivered at NCRS-111 Special Seminar, Fargo, North Dakota, May 30, 1979.

McIver, John P. and Elinor Ostrom. 1976. Using budget pies to reveal preferences: Validation of a survey instrument. p. 87-110 in Policy and Politics.

Marans, Robert W. and John D. Wellman. 1978. The quality of nonmetropolitan living: Evaluations, behaviors and expectations of northern Michigan residents. Institute for Social Research, University of Michigan.

Mason, Robert G., David Faulkenberry and Alexander Seidly. 1975. The quality of life as Oregonians see it. Oregon State University, Corvallis, Oregon.

Massey, Dean T. 1978. Attitudes of nearby residents toward establishing sanitary landfills. Economics, Statistics and Cooperative Service, USDA, ESCS-03.

Michigan State University. 1978. Satisfaction with rural community: A longitudinal study in the Upper Peninsula. Experiment Station Report 348. East Lansing, Michigan.

Mitchell, Robert Cameron. 1978.
The public speaks again: A new environmental survey. In Resources, No. 60.

Mondale, Walter. 1969. Our world--fit for life? In Congressional Record, S16830, December 16, 1969.

Moser, C.A. and G. Kalton. 1972. Survey methods in social investigation. New York: Basic Books.

National Wildlife Federation. 1969. The U.S. public considers its environment. A National Opinion Trends Report, The Gallup Organization, Princeton, New Jersey.

National Rural Electric Cooperative Association. 1968. The nation's view of rural America and rural electrification--A summary. Conducted by International Research Associates, Inc., New York.

National Waterways Conference, Inc. 1969. Approval of such a plan indicated in polls showing support of water programs and rural living. In Newsletter, May 9, 1969.

North Woods Call. Pigeon River Country. Reprint. Charlevoix, Michigan.

Ottinger, Richard L. 1969. Public attitudes regarding environmental improvement. In Congressional Record--Extension of Remarks, E10936. U.S. House of Representatives (New York), December 20, 1969.

Paul, M.E. 1971. Can aircraft nuisance be measured in money? Oxford Economic Papers, Vol. 23, No. 3. pp. 314-321.

Randall, Alan, Berry C. Ives and Clyde Eastman. 1974. Benefits of abating aesthetic environmental damages. Agricultural Experiment Station Bulletin 618. New Mexico State University.

Schuman, Howard and Stanley Presser. 1978. Attitude measurement and the gun control paradox. The Public Opinion Quarterly, Vol. 41, Winter 1977-78. Columbia University Press. pp. 427-438.

-----. 1977. Question wording as an independent variable in survey analysis. Sociological Methods and Research, Vol. 6, No. 2, November 1977. pp. 27-46.

Shorris, Earl. 1978. Market democracy. Harper's, November 1978. pp. 93-96.

Sinden, J.A. 1973. Utility analysis in the valuation of extra-market benefits with particular reference to water-based recreation. WWRI-17.
Water Resources Research Institute, Oregon State University and Water Resources Research Center, University of Massachusetts.

Soap and Detergent Association. 1971. Poll shows pollution seen as top problem. Water in the News, November 1971.

Thiede, Gerald J. A comprehensive plan for the Pigeon River Country. Midwest Forest Economists' Meeting, Sept. 6-8, 1978. Dept. of Natural Resources, Forest Management Division.

Tri-County Regional Planning Commission. Undated. Survey distributed at the Sycamore Creek and Red Cedar River meeting.

U.S. 95th Congress. Assessment of public opinion and public expectations concerning the government and the economy. Hearing before the Subcommittee on Economic Growth and Stabilization of the Joint Economic Committee, 95th Congress, First Session. U.S. Government Printing Office, Washington, D.C. June 22, 1977.

Walsh, Richard G., Douglas A. Greenley, Robert A. Young, John R. McKean and Anthony A. Prato. 1978. Option values, preservation values and recreational benefits of improved water quality: A case study of the South Platte River Basin, Colorado. EPA-600/5-78-001.

Westman, Walter E. 1977. How much are nature's services worth? Science, Vol. 197, September 2, 1977. pp. 960-964.

Westwater Research Center. 1973. Notes on water research in western Canada. Westwater, No. 5. University of British Columbia.

Woelfel, Joseph. 1976. Galileo: A non-technical introduction. Mimeo, Department of Communication, Michigan State University.