MEASURING PREFERENCES FOR CHANGES IN WATER QUALITY AT GREAT LAKES BEACHES USING A CHOICE EXPERIMENT

By

Scott Arndt Weicksel

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Agricultural, Food and Resource Economics

2012

ABSTRACT

MEASURING PREFERENCES FOR CHANGES IN WATER QUALITY AT GREAT LAKES BEACHES USING A CHOICE EXPERIMENT

By

Scott Arndt Weicksel

In two essays, this thesis reports the findings of a choice experiment (CE) conducted as part of a state-wide web survey of Great Lakes beach goers in Michigan. Despite the popularity of visiting Great Lakes beaches, little is known about visitors' preferences for environmental attributes at beaches. Existing literature focuses on marine beach valuation and on the valuation of Great Lakes ecosystem services other than beaches. In the first essay, to address this gap, we gather preferences for environmental quality attributes at Great Lakes beaches. We find respondents prefer beaches closer to home, beaches with less algae on the shore and in the water, and beaches tested for bacteria.

In the second essay, we examine the effect of labeling (i.e. assigning names to) the alternatives within a CE. Although there is a growing literature focused on the effects of CE design elements at the discretion of the researcher, little attention has been given to the effects of different labeling schemes (i.e. whether alternatives within the choice sets are labeled with a name or are left as generic alternatives). We employ a split-sample CE where respondents see either labeled, same-labeled (i.e. labels present but held constant), or unlabeled alternatives (with Great Lake names used as labels). Although we find some differences in parameters and marginal rates of substitution across the three labeling schemes, results are highly similar in rank and magnitude, suggesting a large degree of preference consistency and a high degree of transferability of the values for use in benefit transfer.

Copyright by SCOTT ARNDT WEICKSEL 2012

ACKNOWLEDGEMENTS

Funding for this research was provided by the NOAA Coastal Oceans, Multiple Stressors project and the MSU AgBioResearch and multistate project W2133. This research benefited from assistance provided by Shannon Briggs (Michigan Department of Environmental Quality), Charles Kovatch (U.S. EPA), Sonia Joseph Joshi (Michigan Sea Grant), Erin Dreelin and the Center for Water Sciences at MSU, and numerous county health officials and beach managers who shared their time with us.

I would like to thank my committee members, Drs. John Hoehn, Michael Kaplowitz and Frank Lupi, for their guidance in the preparation of this thesis. To the Co-PIs of the project, Dr. Kaplowitz and Dr. Lupi, I would like to express the utmost gratitude for the opportunity to work on this project; all of the lessons learned and experiences gained over the past two years are ones I will draw upon regularly as I continue on to new pursuits. Also, I wish to thank my teammates Min Chen and Kwame Yeboah for their help in keeping the project on course, and for helping me learn along the way: best wishes to you both as you work to complete your dissertations. Thanks are of course due to Dustin Kubas, the project assistant who laughs at paper cuts, lunch breaks, and all of Jody Knol's jokes; and to Kevin Adams, whose energy and teamwork were greatly appreciated.
And lastly, to my friends and family, especially my parents and Lindsay: I'm incredibly lucky to continually benefit from all of your encouragement and support; thank you all for your thoughtfulness, your energy, and the home-cooked meals you provided me along the way.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
INTRODUCTION

CHAPTER 1
Introduction
Threats to the resource: Human Development
Threats to the resource: Harmful Bacteria
Threats to the resource: Nuisance Algae
Purpose
Previous Research: Valuing Changes in the Great Lakes Environment
Previous Research: Beach Recreation
Previous Research: Great Lakes Beach Recreation
Method: Choice Experiment
Random Utility Theory
Survey Development: Pretests
Survey Development: Input from experts
Survey Sample
Web Survey Implementation
Attributes and Attribute Levels
Results
Conclusions

CHAPTER 2
Introduction
Purpose
Research Questions
Method: Choice Experiment
Random Utility Theory
Previous Research: Sensitivity to Design Factors
Previous Research: Effect of Labeling
Application: Great Lakes Beaches
Previous Research: Great Lakes and Beach Valuation
Survey Development: Pretests
Survey Development: Input from experts
Survey Sample
Web Survey and Choice Experiment
Attributes and Attribute Levels
Results
Conclusions

APPENDICES
APPENDIX A: Image of screen display from Great Lakes Beaches Web-survey, choice experiment and follow-up portion, original size monitor-dependent
APPENDIX B: Screener Survey (Michigan activities survey) materials, contact schedule, robodial scripts, and disposition
APPENDIX C: Web Survey (Great Lakes beaches web survey) materials, contact schedule, and robodial scripts
APPENDIX D: Details of Michigan Activities Survey, Robocalls, and Great Lakes Beaches web survey methods

REFERENCES

LIST OF TABLES

Table 1.1 Attributes and Attribute Levels Included in the Great Lakes Beaches Choice Experiment
Table 1.2 Number of Respondents and Observations for Great Lakes beach web survey Choice Experiment
Table 1.3 Results of random-effects logit model estimating determinants of Great Lakes beach choice
Table 1.4 Mean estimate of Marginal Rates of Substitution for Great Lakes beach characteristics, including lower-bound and upper-bound estimates of 95% Confidence Intervals from Krinsky-Robb method, 10,000 draws
Table 2.1 Attributes and Attribute Levels Included in the Great Lakes Beaches Choice Experiment
Table 2.2 Results of Random-Effects Logit model estimating determinants of Great Lakes beach choice for "Labeled," "Same-Labeled" and "Unlabeled" choice experiments
Table 2.3 Mean and Krinsky-Robb 95% confidence intervals of Marginal Rates of Substitution for marginal changes in Great Lakes beach characteristics across "Labeled," "Same-Labeled," and "Unlabeled" choice experiments
Table 2.4 Results of approximate one-sided significance of differences between mean MRSs: "Labeled and Unlabeled," "Labeled and Same-Labeled," and "Same and Unlabeled" choice experiments
Table B.1 Schedule of Michigan Activity Survey Contacts
Table B.2 Summary of Michigan Activity Survey response and disposition
Table C.1 Schedule of Great Lakes Beaches Survey Contacts
Table D.1 Description and number of different record matches used to assign phone numbers to survey sample members for robodial implementation
Table D.2 Date, time, cost, and number of calls for robodials during wave 2 of the Michigan Activities Survey
Table D.3 Date, time, cost, and number of calls for robodials during wave 3 of the Michigan Activities Survey
Table D.4 Date of Great Lakes Beaches Survey Contacts
Table D.5 Final mailing, distribution and response sorted by post-paid incentive
Table D.6 3-way Chi-squared test comparing response rates to web survey based on final communication featuring $20, $10 or no post-paid incentive
Table D.7 2-way Chi-squared test comparing response rates to web survey based on final communication offering $20 or $10 post-paid incentive
Table D.8 2-way Chi-squared test comparing response rates to web survey based on final communication offering $10 or no post-paid incentive
Table D.9 2-way Chi-squared test comparing response rates to web survey based on final communication offering $20 or no post-paid incentive
Table D.10 Response and disposition for the Great Lakes beaches web survey
Table D.11 Great Lakes Web Survey Item Non-Response table 1 of 6, section: Front Matter
Table D.12 Great Lakes Web Survey Item Non-Response table 2 of 6, section: Choice Experiment section
Table D.13 Great Lakes Web Survey Item Non-Response table 3 of 6, section: Great Lakes Opinion section
Table D.14 Great Lakes Web Survey Item Non-Response table 4 of 6, section: Exposure section
Table D.15 Great Lakes Web Survey Item Non-Response table 5 of 6, section: Background information section
Table D.16 Great Lakes Web Survey Item Non-Response table 6 of 6, section: income question
Table D.17 Summary of distances seen in Great Lakes beach web survey choice experiment (in miles)
Table D.18 Summary of item response among choice experiment respondents, listed by labeling scheme viewed

LIST OF FIGURES

Figure 1.1 Image of example choice set from the Great Lakes beaches choice experiment, original size monitor-dependent
Figure 1.2 Image of diagram of Great Lakes beach attribute: Algae in the water, original size monitor-dependent
Figure 1.3 Image of diagram of Great Lakes beach attribute: Algae on the shore, original size monitor-dependent
Figure 2.1 Image of example choice set from "Labeled" choice experiment, original size monitor-dependent
Figure 2.2 Image of example choice set from "Same-Labeled" choice experiment, original size monitor-dependent
Figure 2.3 Image of example choice set from "Unlabeled" choice experiment, original size monitor-dependent
Figure 2.4 Image of diagram of Great Lakes beach attribute: Algae in the water, original size monitor-dependent
Figure 2.5 Image of diagram of Great Lakes beach attribute: Algae on the shore, original size monitor-dependent
Figure 2.6 Mean and Krinsky-Robb 95% Confidence Intervals of MRS in terms of miles for "amount of algae on the shore," relative to the baseline level "high"
Figure 2.7 Mean and Krinsky-Robb 95% Confidence Intervals of MRS in terms of miles for "amount of algae in the water," relative to the baseline level "high"
Figure 2.8 Mean and Krinsky-Robb 95% Confidence Intervals of MRS in terms of miles for "frequency of testing for bacteria," relative to the baseline level "tested daily"
Figure A.1 Image of screen display from Great Lakes Beaches Web-survey, choice experiment and follow-up portion, original size monitor-dependent
Figure B.1 Image of Michigan Activities Survey Wave 1 introduction letter, original size 8.5" x 11"
Figure B.2 Image of Michigan Activities Survey Wave 1 survey instrument, original size of each page 8.5" x 7"
Figure B.3 Image of Michigan Activities Survey Wave 2 introduction letter, original size 8.5" x 11"
Figure B.4 Image of Michigan Activities Survey Wave 2 survey instrument, original size of each page 8.5" x 7"
Figure B.5 Image of Michigan Activities Survey Wave 3 introduction letter, original size 8.5" x 11"
Figure B.6 Image of Michigan Activities Survey Wave 3 survey instrument, original size of each page 8.5" x 7"
Figure C.1 Image of Great Lakes Beach Web survey Wave 1 introduction letter, original size 8.5" x 11"
Figure C.2 Image of Great Lakes Beach Web survey Wave 2 reminder, half sheet black and white postcard, front, original size 8.5" x 5.5"
Figure C.3 Image of Great Lakes Beach Web survey Wave 2 reminder, half sheet black and white postcard, back, original size 8.5" x 5.5"
Figure C.4 Image of Great Lakes Beach Web survey Wave 3 reminder, quarter sheet color postcard, front, original size 4.25" x 5.5"
Figure C.5 Image of Great Lakes Beach Web survey Wave 3 reminder, quarter sheet color postcard, back, original size 4.25" x 5.5"
Figure C.6 Image of Great Lakes Beaches Web Survey Wave 4 reminder letter, $20 post-paid incentive, original size 8.5" x 11"
Figure C.7 Image of Great Lakes Beaches Web Survey Wave 4 reminder letter, $10 post-paid incentive, original size 8.5" x 11"
Figure C.8 Image of Great Lakes Beaches Web Survey Wave 4 reminder letter, no post-paid incentive, original size 8.5" x 11"
Figure C.9 Image of Great Lakes Beaches Web Survey Wave 4 reminder Business Reply Mail postcard stating respondent does not have the Internet, front, original size 4.25" x 5.5"
Figure C.10 Image of Great Lakes Beaches Web Survey Wave 4 reminder Business Reply Mail postcard stating respondent does not have the Internet, back, original size 4.25" x 5.5"

INTRODUCTION

The Laurentian Great Lakes are among the nation's most vast and important natural resources. In Michigan, the Great Lakes support a broad range of public uses, including fishing, boating, and sightseeing, along with diverse private industry. Even though visiting Great Lakes public beaches is one of the most popular outdoor activities among Michigan residents, the quality of Great Lakes beaches comes under constant threat from natural (e.g. nuisance algae in the water and on the shore) and man-made (e.g. elevated levels of unsafe bacteria, developed shoreline) sources. When considering how the resource should be managed to maximize public wellbeing, there is a clear gap in required information. While much is known about the effects of stressors on the health and function of natural systems, little is known about the public's preferences and values for different levels of environmental quality at Great Lakes beaches.

Programs that preserve and protect the quality of Great Lakes beaches can be costly: since 2010, the Great Lakes Restoration Initiative has invested over $1 billion in projects throughout the Great Lakes, with an additional $300 million authorized for 2013 (U.S. EPA 2009, 2010, 2011, 2012a). Without information on Great Lakes beach goers' preferences for environmental quality attributes, decision makers are unable to find an efficient balance between the costs and benefits of such programs. With each passing year, government spending on programs to improve Great Lakes beach quality comes under greater scrutiny, highlighting the importance of gathering scientifically sound information on the public's preferences for different conditions at Great Lakes beaches. Funding decisions affecting the quality of Great Lakes beaches can occur at the local, state, or federal level. For example, the proposed 2013 budget for the U.S. EPA would eliminate federal support of the BEACH Grant Program, which funds monitoring of beaches for elevated levels of harmful bacteria (U.S. EPA 2012b).

Previous research has focused on valuing a variety of ecosystem goods and services provided by the Great Lakes. Lupi et al. (1998) estimate a statewide model of recreational fishing demand to measure the benefits to anglers that result from changes in fishing site characteristics such as catch rates. Lupi, Hoehn and Christie (2003) estimate the benefits to anglers resulting from a program to control invasive sea lamprey in the St. Mary's River that would result in a predicted rebound for the trout population in Lake Huron. Kotchen et al. (2007) use a similar recreational fishing model to show that the benefits of improved habitat that result from hydropower dam relicensing exceed the cost of the operational changes. Other studies use the hedonic method, relating observed changes in environmental quality (such as water quality or air quality) to the prices of nearby property to estimate economic values of changes in environmental quality. Ara et al. (2006) use a hedonic pricing analysis of the value of houses near Lake Erie and find that changes in water quality in Lake Erie have significant impacts on the value of nearby houses. Braden et al.
(2004) use a hedonic model as well as a choice experiment to estimate the benefits from cleaning up contamination in Waukegan Harbor in Waukegan, IL. Chattopadhyay (1999) uses hedonic pricing to estimate residents' willingness to pay for air quality improvements in the largest Great Lakes coastal population center, Chicago, finding that residents have a higher willingness to pay for reductions in particulate matter (PM-10) than in sulfur dioxide (SO2). As a means of valuing proximity to and aesthetics of Lake Erie, Seiler et al. (2001) use a hedonic pricing model of homes in Ohio and estimate that, all else equal, houses that have a view of Lake Erie are an average of 56% more valuable than houses that do not. Whitehead et al. (2009) combine estimates from revealed preference (travel cost recreation site choice) and stated preference (contingent valuation of hypothetical restoration scenarios) methods to estimate the value of restoring Saginaw Bay wetlands. Hoehn et al. (2010) use a choice experiment to examine the value of wetlands in Michigan in terms of economic equivalency of wetland services that are part of the replacement of impaired or eliminated wetlands required under the Clean Water Act. The authors' findings describe the public's willingness to accept wetlands of varying characteristics as compensation for impaired or eliminated wetlands. In general, respondents required increased amounts of wetlands when the restored wetland was of lower quality in terms of habitat than the impaired wetland.

There also exists a literature studying the value of characteristics and environmental quality at beaches, but that research tends to focus on marine beaches. Bockstael, McConnell and Strand (1989) use the results of a contingent valuation telephone survey of residents of the Baltimore-Washington DC metro area to estimate individuals' willingness to pay to improve water quality in the Chesapeake Bay to levels acceptable for swimming. Smith, Zhang and Palmquist (1997) employ a contingent valuation survey to measure the economic value of controlling marine debris (natural or man-made) at recreational beaches in New Jersey and North Carolina. Parsons, Massey and Tomasi (1999) use a random utility model to estimate Delaware residents' demand for marine beaches within a day's drive and relate that demand to the benefits of beach nourishment programs that keep beaches from eroding to less preferred widths. Landry, Keeler and Kriesel (2003) use hedonic pricing and a choice experiment to value additional beach width along the beaches of Tybee Island east of Savannah, GA. Shivlani, Letson and Theis (2003) surveyed visitors to Southern Florida coastal beaches, including a contingent valuation question aimed at valuing beach nourishment programs. Lew and Larson (2005) surveyed residents of San Diego County, estimate a travel cost model, and calculate implicit prices for different beach characteristics including the presence of lifeguards and water quality measures. Hilger and Hanemann (2006) use information on beach visits among residents of Southern California and the travel cost method to estimate individuals' willingness to pay for improvements in water quality. Huang, Poor and Zhao (2007) sampled randomly selected households in New Hampshire and Maine about their preferences for different coastal erosion control programs. In contrast, there is only one published peer-reviewed journal article valuing changes in characteristics at Great Lakes beaches.
Murray, Sohngen, and Pendleton (2001) valued decreasing incidences of water contamination (beach advisories) at beaches on Lake Erie using the results of a survey of visitors to Lake Erie beaches in 1998. A separate technical report stemming from a similar study, Sohngen, Lichtkoppler and Bielen (1999), uses a travel cost model to estimate the value of a trip to beaches at Maumee Bay State Park and Headlands State Park on Lake Erie. In addition, unpublished research by Egan and Dwyer (2008) uses a small sample of visitors to Maumee Bay State Park in northwest Ohio to investigate the value of wetland restoration that would result in the removal of all swim advisories at the site. In unpublished findings presented at several conferences, Shaikh (2005 and 2012) estimates a travel cost model of Lake Michigan beaches in Chicago.

This overview provides examples of environmental and resource valuation work on the Great Lakes, as well as numerous studies valuing beach recreation and beach characteristics. Despite the vastness of the Great Lakes and the popularity of beach visitation, the literature remains thin in terms of studies valuing Great Lakes beach characteristics. The few available studies focus on a limited number of sites, often have small sample sizes, and only one has been peer-reviewed. To address these gaps in the literature, this thesis provides the first insights into Michigan residents' preferences and values for environmental conditions at Great Lakes beaches. It reports the results of a state-wide web survey of Michigan residents conducted in the spring of 2012. As part of the web survey, respondents took part in a choice experiment pertaining to Great Lakes beach characteristics. The results of the choice experiment provide valuable insights into the public's preferences for environmental conditions at Great Lakes beaches. The implementation of the choice experiment also included an experimental element to examine the effect that researcher-chosen design elements have on the outcomes of choice experiments.

This thesis is divided into two chapters. The first chapter discusses the factors that threaten the health of the natural systems and human enjoyment of Great Lakes beaches, then describes the implementation of a choice experiment as a way to provide insight about Michigan residents' values and preferences for different conditions at Great Lakes beaches in Michigan. This study builds on previous work within the field of environmental and natural resource economics that values aspects of the Great Lakes, but is unique in that only one previously published study focuses on Great Lakes beaches. The preference information will prove useful to resource managers and decision makers in need of scientifically sound, fact-based research about residents' economic values and preferences for environmental quality characteristics that may be protected by costly, budget-limited programs. The results reported in this thesis stand as a first step toward closing a gap left by the existing literature for information that could be used to prioritize further studies or determine demand for programs to protect and enhance environmental conditions at Great Lakes beaches. Results of the choice experiment show that nearly all levels of attributes included in the experiment (distance from home, measures of algae on the shore and in the water, beach length, and frequency of bacteria testing) are statistically significant and have the expected sign.
The results show that, all else equal, Michigan beach users prefer beaches on the Great Lakes that are closer to home, have lower levels of algae in the water, and have less algae on the shore. Respondents also prefer beaches that are tested for bacteria at least monthly over those that are not tested at all, signaling a value for testing information that is separate from any change in the risk of negative health impacts from exposure to bacteria. These results can help decision makers prioritize areas of restoration while seeking to balance the benefits of increasing environmental quality at Great Lakes beaches with the costs of enacting such programs.

The second chapter of this thesis reports on a split-sample experiment within the Great Lakes beaches web survey where respondents were randomly assigned to view choice sets that featured different labeling schemes (i.e., whether alternatives have names or are unnamed collections of attributes). Within a choice experiment, alternatives can be either unlabeled or labeled. Unlabeled choice alternatives are commonly denoted by a generic title or a letter (e.g. "Choice A, Choice B" or "Program C, Program D"). Labels on choice alternatives can indicate a brand name, location name, or any other unique identifier for that alternative (e.g. "New York, Los Angeles", "Coke, Pepsi", or "Train, Car, Bus"). In the second chapter of this thesis, within a choice experiment pertaining to Great Lakes beach choice, we use labels to indicate which Great Lake the beach alternative lies on.

When deciding whether or not to include labels in a choice experiment, researchers should consider the benefit of added familiarity or realism the labels provide (Adamowicz and Boxall 2001). However, when labels are present in a choice set, respondents may seek to simplify the decision making process and base their choice of the preferred alternative solely on the label, ignoring other attributes (Blamey et al. 2000). Respondents faced with choice sets containing labels may also consider alternatives outside of those appearing in the choice task (Adamowicz and Boxall 2001). Concerns exist among practitioners of attribute-based choice or ranking methods that labeled alternatives may contain levels of attributes that, in combination with the presence of a label, appear unrealistic or infeasible to respondents, depending on the respondent's perceptions of the label (Huybers 2005, Carson et al. 1994). Despite the concerns over how the presence or absence of labels may affect the results of choice experiments, little empirical evidence exists to characterize those effects.

Chapter 2 presents the results of a split-sample choice experiment in which respondents viewed one of three schemes: "labeled" alternatives, where each beach was labeled with a Great Lake name; "same-labeled" alternatives, where Great Lake labels were present but remained constant within choice sets; or "unlabeled" alternatives, where no Great Lake labels were present. The investigation into the effect of different labeling regimes builds on previous research testing the sensitivity of choice experiment results to different design components at the discretion of the researcher.
Researchers across disciplines have used split-sample studies to vary design elements such as the number of choice sets faced by respondents (Hensher 2001), the number of alternatives (Rolfe and Bennett 2009, DeShazo and Fermo 2002, Racevskis and Lupi 2008), choice set presentation formats (Hoehn, Lupi and Kaplowitz 2010), and answer formats (Fenichel et al. 2009). A group of studies has used a "Design of Designs" format to generate an experimental design where the design elements are varied within a split-sample choice experiment, so as to isolate the effects of variation in design elements on the resulting data. Caussade et al. (2005), Hensher (2006b), and Rose et al. (2009) employ this strategy, varying each of five design elements (number of attributes, number of attribute levels, range of attribute levels, number of alternatives within choice sets, number of choice sets faced by respondents) across the sample such that they can isolate the impact each element has on the choice data.

While the effects of design elements such as the number of choice sets, alternatives, attributes, and attribute levels faced by respondents have been studied, only a few studies test the effects of labels. Much of the research investigating the effects of labeling on choice experiment results uses labels to represent a means of program provision or program funding. Rolfe and Windle (2011) test the effects of labels indicating the method with which a proposed coral reef protection program would be achieved. Similarly, Czajkowski and Hanley (2009) conduct a study of Polish households' willingness to pay to protect local biodiversity and test for differences between the results of a generic choice experiment and a choice experiment where alternatives are labeled as being provided through the expansion of an existing national park, or by "other" means. Blamey et al. (2000) implemented a choice experiment among households in Brisbane, Australia, regarding preferences for forest preservation, where a portion of the respondents viewed a choice experiment with generic policy alternatives, while the remaining respondents viewed a choice experiment with labeled policy alternatives that named the minimum percentage of scarce forest to be left preserved. A recent study by Brannlund and Persson (2012) conducted a choice experiment eliciting information on Swedish citizens' preferences for CO2 reduction programs. The authors use labels to indicate different payment mechanisms, with half of the respondents shown unlabeled alternatives and the other half shown alternatives labeled as being funded through private (i.e. individual) taxes or by "other" means.

While these studies are able to compare choice experiments with labeled and unlabeled alternatives, the use of labels to represent different program funding mechanisms or modes of program provision can be viewed largely as a means of gathering preferences for separate program attributes, which is different from the use of labels as a brand, site name, or other unique identifier capturing preferences for information not explicitly listed in the choice set. Huybers (2005), however, compares the results of generic and labeled choice experiments pertaining to vacation destination choices where the labels are site names. Huybers's study is the only published study we are aware of that uses site names as the labels of a choice experiment and compares those results to a generic choice experiment.
While certain applications of choice experiments may mandate the use of either generic or labeled alternatives, in other cases the decision may be left to the discretion of the researcher. In these cases, the literature offers little guidance on the expected effects of such a design decision. The impact of including labels in a choice experiment regarding preferences for environmental quality also has implications for possible benefit transfer. Research supports the notion that similarity in site-context is a significant factor in the transferability of values from one site to the next (Johnston 2007). The transferability of the values and preferences elicited in this study would be considered higher (lower) should the results depend less (more) on the presence of labels. If preferences remain consistent in the presence or absence of Great Lakes names, we hypothesize that the transferability of the preferences from this study site to other policy sites that do not share the same label would be higher than if preferences depended on whether alternatives were labeled or not. The author is not aware of other choice experiments that test for transferability of results based on the degree of similarity between labeled (alternatives named by site) designs and generic (unnamed) designs.

Results show that the preferences elicited under the different labeling schemes are similar in rank and magnitude. Log-Likelihood Ratio tests reveal significant differences in parameters between the labeled and unlabeled experiments, as well as between the same-labeled and labeled experiments, while no significant difference was found in the parameters estimated from the unlabeled and same-labeled experiments. However, these LLR tests do not take into account differences in the scale factor (i.e., the unidentified variance) specific to each labeling scheme's data set. To control for differences in scale, we compare the MRSs calculated from the different labeling schemes and find that, in general, the average MRSs are highest in the labeled scheme. We use a Krinsky-Robb procedure with 10,000 iterations to simulate the 95% confidence interval around the MRSs. Comparison of the 95% CIs of the MRSs shows that there is no statistically significant difference in welfare measures (economic tradeoff information) calculated across labeling schemes. This result is encouraging for the practice of benefit transfer: similarity in the MRSs from labeled and unlabeled choice experiments helps build confidence in the transferability of values from study sites where labeled choice experiments have been used to policy sites that may be different from the sites in the original study.

CHAPTER 1: Preferences and Values for Changes in Water Quality at Great Lakes Beaches in Michigan

Introduction

The Laurentian Great Lakes provide Michigan with environmental services that support a broad range of public uses including fishing, boating, and sightseeing, along with diverse private industry. With over 3,000 miles of Great Lakes coast along Lake Superior, Lake Michigan, Lake Huron, Lake St. Clair and Lake Erie, Michigan enjoys the most Great Lakes coastline of any state (MDEQ 2012b). The Great Lakes coast of Michigan features approximately 600 public beaches along with countless private beaches (MDEQ 2012a).
Visiting Great Lakes public beaches is one of the most popular outdoor activities among Michigan residents, with more than 50% of respondents to a statewide mail survey reporting having visited a public beach on the Great Lakes during the two prior swimming seasons (Lupi et al. 2012).

Threats to the resource: Human Development

Despite their popularity and importance, the quality of the Great Lakes ecosystem and the associated human uses constantly come under threat (IJC 2011a). The health of natural systems within the Great Lakes, as well as the associated human uses, is greatly affected by human development of the Great Lakes shoreline. Shoreline development eliminates not only public access for swimming, fishing, boating, and other outdoor recreation, but also aquatic habitat for biota. Point source and non-point source pollution from human development near the coast of the Great Lakes also degrades water quality. Human development in coastal areas increases pollutants from residential uses (phosphorous, lawn fertilizers, leaky septic systems, and improper hazardous material disposal), agricultural practices (chemical fertilizers, spread manure), and proximate commercial and industrial activity (Environment Canada and U.S. EPA 2009). A summary of the complexity of issues in dealing with the impairment of the Great Lakes caused by nonpoint source pollution can be found in IJC (2011a).

Threats to the Resource: Harmful Bacteria

The presence of pathogens and harmful bacteria such as E. coli in the waters of the Great Lakes threatens the public's enjoyment of Great Lakes beaches. E. coli enters the Great Lakes through natural processes and manmade sources that are each difficult to avoid or mitigate (U.S. Policy Committee 2002). Manmade sources such as municipal sewers or septic systems can leak feces containing E. coli into waterways connected to the Great Lakes, while waterways receiving runoff from agricultural areas or other land where manure is spread can also carry E. coli into the Great Lakes. Population centers with sewer systems that combine waste and storm water can overflow following wet weather events, leading to Combined Sewer Overflows (CSOs) which can carry untreated wastewater and effluent containing E. coli into the Great Lakes (MDEQ 2011). Projects updating sewer systems to avoid CSOs are underway across the country and can be very costly (ibid; U.S. EPA 1999).

When visitors to the Great Lakes are exposed to high levels of harmful bacteria they can suffer a range of negative health impacts including skin irritation, cold-like symptoms, or diarrhea (Rose et al. 1999). Due to the health risks involved with exposure to E. coli at beaches, federal, state, and county health officials have developed state-wide monitoring plans that test for harmful bacteria and pathogens and inform the public about contamination at beaches both on the Great Lakes and on inland lakes in Michigan (MDEQ 2012a). Without testing, beach goers are unable to make informed decisions about the health risks associated with swimming. Information about testing and closures is available online and through local media outlets (Michigan's Great Lakes beach testing and beach action information is available through the Michigan Department of Environmental Quality's BeachGuard website: http://www.deq.state.mi.us/beach/). As of 2011, 262 of Michigan's approximately 600 Great Lakes public beaches are tested at least monthly for E. coli, leaving nearly 60% of public beaches on the Great Lakes untested (ibid). In 2011, 101 of the 262 Great Lakes beaches monitored for bacteria, or nearly 39%, reported an exceedance of safe bacteria levels at least once during the peak swimming season.
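As a quick arithmetic check of the monitoring shares cited above (a minimal sketch; the counts are those reported from MDEQ 2012a):

```python
# Check the beach-monitoring shares cited in the text (counts from MDEQ 2012a).
tested, total_beaches, exceeded = 262, 600, 101

untested_share = 1 - tested / total_beaches   # share of ~600 public beaches never tested
exceedance_share = exceeded / tested          # share of monitored beaches with an exceedance

print(f"Untested: {untested_share:.1%}")                  # 56.3%, i.e. "nearly 60%"
print(f"Exceeded at least once: {exceedance_share:.1%}")  # 38.5%, i.e. "nearly 39%"
```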
Beach advisories or beach closures caused by high levels of bacteria are not only an inconvenience to recreators, but can also prove costly to local communities whose businesses and economies depend on tourists and visitors to the Great Lakes (NRDC 2012).

Threats to the Resource: Nuisance Algae

The presence of nuisance algae in the Great Lakes has been a long-running challenge for resource managers (IJC 2011b; Higgins et al. 2005). The management of nuisance algae in the Great Lakes raises a set of difficult questions linked to concerns about public health, invasive species, and climate change. Large blooms of naturally occurring species of algae, such as cladophora, can grow throughout the Great Lakes and eventually wash onto shore. The greenish-brown blooms form decaying mats on the shore, sometimes several inches thick (MI Sea Grant n.d., Harris 2005). These mats of decaying algae, commonly called muck, can greatly diminish beach aesthetics, obstruct passage along the shore, and produce powerful and unpleasant odors, thus fouling stretches of beaches for residents and visitors (Verhougstraete et al. 2010, SBSCPWG 2007, Higgins et al. 2005). Recent studies have shown an increased incidence of muck along Great Lakes shorelines across the basin (Auer et al. 2010, Higgins et al. 2008). Research has also shown that muck can harbor pathogens and harmful bacteria (Vanden Heuvel et al. 2010; Verhougstraete et al. 2010; Olapade et al. 2006). The possible negative health impacts associated with exposure to pathogens and bacteria in muck have caused county health officials to issue contact advisories recommending avoidance of muck (Bay County Health Department 2007). Muck also benefits from the presence of invasive zebra mussels that increase water clarity, which encourages further growth of algae such as cladophora (Auer et al. 2010, Higgins et al. 2008, Wilson et al. 2006). Algae in the Great Lakes grows at the highest rates when water temperatures are higher during the summer months (Verhougstraete et al. 2010, Higgins et al. 2005), leading some to hypothesize that levels of muck in the Great Lakes could increase with warming trends resulting from climate change, though current research predicts only marginal increases (Malkin et al. 2008).

Previous Research: Valuing Changes in the Great Lakes Environment

As one would expect, programs designed to protect the environmental health and public uses of the Great Lakes are costly. While a wealth of information exists characterizing the stresses placed on the health of the Great Lakes and the associated human uses, there is still the question of how to manage those threats in a cost-effective way that still maximizes the public's enjoyment of the resource. Only when decision makers can view the costs alongside the benefits of programs to protect or restore the environment can an economically sound course of action be taken. Within the field of environmental and natural resource economics, a number of studies have sought to value changes in environmental conditions on the Great Lakes to better inform decision making (NMI and NOAA 2001). These studies elicit the values and preferences of the public for changes in the Great Lakes' environment.
These values can then be applied in policy settings where the costs of programs that protect or improve the environment can be balanced against the public benefits of such programs. Lupi et al. (1998) estimate a statewide model of recreational fishing demand to measure the benefits to anglers that result from changes in fishing site characteristics such as catch rates. Lupi, Hoehn and Christie (2003) estimate the benefits to anglers resulting from a program to control invasive sea lamprey in the St. Mary's River that would result in a predicted rebound for the trout population in Lake Huron. The net present values of several lamprey suppression programs, each with specific costs and trout recovery scenarios, are all found to be positive, indicating that there is a net economic benefit to lamprey control in the St. Mary's River. Similarly, Kotchen et al. (2007) use a recreational fishing model to show that the benefits of improved Great Lakes fish habitat resulting from hydropower dam relicensing exceed the cost of the operational changes.

Some studies measure the economic benefits of changes in environmental quality by relating observed changes in environmental quality (such as water quality or air quality) to the prices of nearby property, which is called the hedonic method. Ara et al. (2006) use a hedonic pricing analysis of the value of houses near Lake Erie and find that changes in water quality in Lake Erie have significant impacts on the value of nearby houses. Their model predicts that an increase in water quality at Lake Erie beaches comparable to a 1 meter increase in secchi disk depth could increase housing values in that beach's county by $221 to $2379 per house, depending on the beach nearest to the home in question (1996 USD). Ara et al. also estimate that homes in the area of beaches experiencing unsafe levels of fecal coliform counts would benefit from a reduction in fecal coliform to safe counts (i.e. 200 counts per 100 mL) in the amount of $88 to $2692 per house (1996 USD). Braden et al. (2004) use a hedonic model as well as a choice experiment to estimate the benefits from cleaning up contamination in Waukegan Harbor in Waukegan, IL. The results of each method are largely convergent, and the authors estimate the value of removing contamination among residents of Waukegan Harbor to be 16% to 19% of the total value of residents' homes. Chattopadhyay (1999) uses hedonic pricing to estimate residents' willingness to pay for air quality improvements in the largest Great Lakes coastal population center, Chicago, finding that residents have a higher willingness to pay for reductions in particulate matter (PM-10) than in sulfur dioxide (SO2). As a means of valuing proximity to and aesthetics of Lake Erie, Seiler et al. (2001) use a hedonic pricing model of homes in Ohio and estimate that, all else equal, houses that have a view of Lake Erie are an average of 56% more valuable than houses that do not.

Whitehead et al. (2009) combine estimates from revealed preference (travel cost recreation site choice) and stated preference (contingent valuation of hypothetical restoration scenarios) methods to estimate the value of restoring Saginaw Bay wetlands. They estimate the present value of benefits accrued to residents of the area from one acre of Saginaw Bay marsh to be $2421 per acre (2005 USD), with the majority of that value ($1870) derived from benefits to recreational users and the remainder from passive (non-use) values. Lupi et al.
(2002) and Hoehn et al. (2010) use a choice experiment to examine the value of wetlands in Michigan in terms of economic equivalency of wetland services that are part of the replacement of impaired or eliminated wetlands required under the Clean Water Act. The authors' findings describe the public's willingness to accept wetlands of varying attributes as compensation for impaired or eliminated wetlands. In general, respondents required increased amounts of wetlands when the restored wetland was of lower quality in terms of habitat than the impaired wetland.

Previous Research: Beach Recreation

Public beach recreation has received a sizable amount of attention from resource managers, policy makers and researchers. This review focuses on providing examples from the literature of studies that examine not only the value of access to (or the demand for) beach recreation, but also the value of changes in beach characteristics or environmental quality present at public beaches, as that is the focus of our study. The value of changes in beach characteristics has been investigated at least since McConnell (1977) studied the effect of congestion on visitation to beaches in Rhode Island. Bockstael, McConnell and Strand (1989) use the results of a contingent valuation telephone survey of residents of the Baltimore-Washington DC metro area to estimate individuals' willingness to pay to improve water quality in the Chesapeake Bay to levels acceptable for swimming. On average, those who reported having used the Chesapeake Bay were willing to pay $121 annually and non-users $38 annually (1984 USD) for the improvement in water quality. Smith, Zhang and Palmquist (1997) employ a contingent valuation survey to measure the economic value of controlling marine debris (natural or man-made) at recreational beaches in New Jersey and North Carolina. While the authors note that the study's sample size is too small for the results to be used to estimate population parameters, the results do show that respondents consistently displayed higher willingness to pay and higher choice probabilities for programs that resulted in lower amounts of debris versus programs that resulted in higher amounts of debris.

Parsons, Massey and Tomasi (1999) use a random utility model to estimate Delaware residents' demand for marine (coastal) beaches within a day's drive. Should beach re-nourishment programs in the region be discontinued, and all beaches within the choice set be reduced to widths of less than 75 feet, the authors estimate the loss to each individual to be $7.25 per trip (assumed to be 1997 dollars, the year the study was conducted), though the losses are smaller when considering only losses experienced at sites reported as "favorites" by respondents. This result seems to indicate that beach width is less important to a person visiting his or her favorite beach. Landry, Keeler and Kriesel (2003) use several methods to value additional beach width along the beaches of Tybee Island east of Savannah, GA. First, they construct a hedonic price model and estimate that, all else equal, a one meter increase in beach width increases property value by an average of $233 (1996 USD). Next, the authors implement a choice experiment where respondents make tradeoffs between beach widths in the study area and program costs in terms of parking fees. The authors estimate that on average, households who visit beaches on Tybee Island were willing to pay between about $4 and $11 per day at the beach for a one meter increase in beach width.
Significant determinants of willingness to pay for an increase in beach width included whether or not the respondent owned an annual parking pass and the mode of beach management used to achieve additional width: for example, respondents exhibited higher WTP for increases in beach width achieved through nourishment programs than through shoreline retreat. (As described by Landry et al. (2003), shoreline retreat allows for natural shoreline erosion without nourishment. As the beach erodes, structures nearing the receding shoreline are demolished or relocated, and beaches are allowed to migrate inland.)

Shivlani, Letson and Theis (2003) surveyed visitors to Southern Florida coastal beaches, including a contingent valuation question regarding whether or not the respondent would pay additional parking fees for a beach nourishment program. Depending on the version of the survey, respondents were either told the beach nourishment program would increase recreational opportunities, or that the program would enhance habitat available for turtle nesting. The authors found that respondents had a higher willingness to pay to increase turtle habitat ($2.12 per trip) than for improved recreational opportunities ($1.69 per trip).

Lew and Larson (2005) surveyed residents of San Diego County, gathering information on their most recent trip to a coastal beach. Using this trip information, the authors estimate a travel cost model and calculate implicit prices for different beach characteristics. The authors find that on average, residents of San Diego County are willing to pay about $9 per trip for the presence of on-beach lifeguards, and $4.25 per trip to avoid a beach that has a cobblestoned, or sand-denuded, shore (2004 USD). Interestingly, their model found that water quality (whether or not there was a beach water quality posting the day of the respondent's most recent trip or during the week before the trip) did not have a significant impact on beach choice. (The authors refer to beach water quality postings both as postings of water quality violations and as postings of beach closures; a posting could therefore signify either a type of contact advisory or a beach closure.) The authors hypothesize that visitors' beach choice may not be influenced by water quality since some individuals may not plan on coming into contact with the water, or that those who do plan on entering the water will enter regardless of conditions during their trip. Additionally, while the authors do not report how aware respondents were of beach water quality postings, they purport the finding of water quality insignificance to "support the idea that beach users may not have much knowledge about current beach water quality postings," perhaps indicating that postings did not affect beach choice because respondents were not aware of them. The authors also comment that water quality postings are only one aspect of how individuals may form opinions of water quality, adding "Other indicators, such as the amount of trash on a beach, or objective water quality measures provided through the media … may provide better insights into how water quality may affect choices between beaches" (Lew and Larson 2005).

Hilger and Hanemann (2006) use a finite mixed logit model to account for heterogeneity in water quality preferences across groups of individuals and across different seasons. The authors gathered beach visitation data from 595 households in Southern California in 2001 and modeled beach choices, using travel costs for each individual as the price for each beach. Willingness to pay for a one grade improvement in water quality (grades are based on the results of recent and past bacteria tests) at Southern California beaches varied from negative values to over $17 per trip, with an average of $5.71 per trip.

Huang, Poor and Zhao (2007) sampled randomly selected households in New Hampshire and Maine about their preferences for different coastal erosion control programs. The researchers designed the study to include respondents' values both for the benefits of erosion control programs, such as maintained beach width, and for potentially harmful effects of erosion control programs, such as erosion on neighboring beaches, decreases in water quality, or disturbances to wildlife habitat. Results show the least desirable coastal erosion program would be one that caused a disturbance to wildlife habitat, while respondents preferred programs that saved beaches over programs that protected beach front houses. Not all respondents visited coastal beaches, though the number of trips a respondent took to the beach did not significantly influence preferences for erosion control programs.

Previous Research: Great Lakes Beach Recreation

While there is some existing literature on the valuation of Great Lakes environmental resources and more literature on the valuation of marine beach recreation, examples of which are described above, we know of only one peer-reviewed journal article that studies values and preferences for different characteristics of Great Lakes beaches. Murray, Sohngen, and Pendleton (2001) valued decreasing incidences of water contamination (beach advisories) at beaches on Lake Erie. That study gathered information on Ohio residents' trips to beaches on Lake Erie during 1998 and found that reducing the number of beach advisories at a given beach by one in a given recreation season would result in a seasonal benefit of $28 for each Great Lakes beach goer. In a separate technical report stemming from a similar study, Sohngen, Lichtkoppler and Bielen (1999) use a travel cost model to estimate the mean value of a trip to beaches at Maumee Bay State Park and Headlands State Park to be approximately $25 and $15, respectively (1998 USD).

Unpublished research by Egan and Dwyer (2008) uses a small sample (n = 178) of visitors to Maumee Bay State Park in northwest Ohio to investigate the value of wetland restoration that would result in the removal of all swim advisories at the site. Using a contingent behavior approach, the authors estimate that on average, respondents would increase their trips to Maumee Bay State Park by 37% (from 3.8 trips per year to 5.2 trips per year), with an inferred economic value of $147. The survey also included a contingent valuation referendum question where respondents were asked whether or not they would vote for a program that would restore local wetlands, improving water quality on site and thus eliminating swimming advisories. Analysis of the responses to the referendum question yields a median willingness to pay for wetland restoration to eliminate swim advisories of $375 per year. (The formal statistical models used to develop the willingness to pay estimates from the contingent behavior and contingent valuation responses were not discussed in the available manuscript.)
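A standard way such a median is recovered from referendum responses is to fit a binary logit of the yes/no votes on the bid amount and evaluate the bid at which the probability of a yes vote equals 0.5. The sketch below is a hypothetical illustration on simulated data, not the Egan and Dwyer model (which, as noted, is not described in the available manuscript); the bid design and coefficients are assumptions chosen so that the implied median is $375.

```python
# Hypothetical illustration: median WTP from a dichotomous-choice referendum.
# With P(yes) = logistic(a + b * bid), median WTP is the bid where P(yes) = 0.5,
# i.e. -a / b. All numbers below are simulated assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 800
bid = rng.choice([100.0, 250.0, 400.0, 550.0, 700.0], size=n)  # assumed bid design

a_true, b_true = 3.0, -0.008            # chosen so that -a/b = 375
p_yes = 1.0 / (1.0 + np.exp(-(a_true + b_true * bid)))
vote_yes = rng.binomial(1, p_yes)       # simulated referendum votes

fit = sm.Logit(vote_yes, sm.add_constant(bid)).fit(disp=0)
a_hat, b_hat = fit.params
print(f"estimated median WTP: ${-a_hat / b_hat:,.0f} per year")  # approximately 375
```

The polychotomous responses in Shaikh's study, discussed next, would require an additional coding step to convert graded answers into yes/no votes before the same calculation applies.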
In findings presented to several conferences, Shaikh (2005 and 2012) estimates a travel cost model of visitors to Lake Michigan beaches in Chicago and finds that visitors have an average value per trip of $35. Included in the study was a contingent valuation question where survey participants were asked whether or not they would vote for an increase in income tax that would provide better sewage treatment and decrease the total swimming bans by 50% each year. Depending on the survey version, the increase in income tax ranged from $10 to $100 per year, and respondents either faced a dichotomous choice (vote yes or vote no) or a polychotomous choice (five choices: definitely vote no, probably vote no, not sure, probably vote yes, definitely vote yes). Different coding schemes were used to attribute polychotomous choices as votes for or against the program as a means of dealing with varying degrees of respondent certainty. As reported in Shaikh (2012), an analysis of the responses yielded a willingness to pay for a 50% decrease in beach closures of approximately $38 to $65 per person per year, depending on assumptions made. Purpose Surprisingly, even with millions of visitors each year and nearly 600 public beaches on the Great Lakes in Michigan, resource managers lack information on residents’ preferences for different levels of environmental quality at Great Lakes beaches. This paper 5 The formal statistical models used to develop the willingness to pay for wetland restoration from both the contingent behavior and the contingent valuation were not discussed in the available manuscript. 23 uses a choice experiment to elicit the values and preferences that Michigan residents have for different environmental quality characteristics at Great Lakes beaches. Information on these values and preferences helps fill the gap that exists between knowledge of the stresses to environmental quality of Great Lakes beaches and how those stresses affect the enjoyment of that resource by the public. Information on Michigan residents’ values and preferences for changes in Great Lakes water quality conditions will provide a starting place for resource managers and policy makers: such information could help prioritize further work to study the implementation of Great Lakes environmental protection programs such as the Great Lakes Restoration Initiative, a Federal program that has invested $1.075 billion from 2010 to 2012, with an additional $300 million authorized for 2013 (U.S. EPA 2009, 2010, 2011, 2012a,). Scientifically sound preference information could be used as a basis to justify revisiting recommended best management practices or current industry standards that are known to still cause negative impacts to environmental quality in the Great Lakes. Resource managers could eventually use information on the public’s preferences for different levels of environmental quality when quantifying the damages caused by environmental disasters or benefits of projects that restore injured natural resources. The next section of this paper explains the methods of choice experiments and the underlying statistical model. It then describes the choice experiment we implemented as part of a Great Lakes beaches web-based survey. The results are then presented, followed by a discussion. 
Method: Choice Experiment The choice experiment format is a well-established tool that has been used by marketing researchers to measure consumer preferences for different levels of attributes in 24 a given product (Louviere, Hensher, Swait 2000). The practice was then popularized within the field of resource and environmental economics where the method is applied to goods for which no markets exist (Hanley, Wright and Adamowicz 1998). In a choice experiment, participants are shown two or more alternatives that are comprised of different levels of attributes and the respondents are asked to pick their preferred alternative. Figure 1.1 shows an example of a choice used in our study. The underlying assumption of a choice experiment is that when faced with two or more alternatives containing varying levels of attributes, the respondent will choose the alternative that leaves them the best off. The choices made by respondents reveal tradeoffs between the levels of attributes presented in each of the alternatives. When researchers include cost as one of the attributes of the alternatives, respondent choices are subject to income constraints, as well as the restriction that they can only choose one of the alternatives. In this way, the choice experiment mimics choices made by consumers in a market: respondents in a choice experiment are forced to tradeoff a finite resource (income) while maximizing their enjoyment (utility) of a product composed of several attributes. As a result of modeling this constrained utility maximization problem, researchers can estimate the demand for the attribute levels included in the choice experiment, as well as measure the tradeoffs individuals make between levels of different attributes. The attributes and the levels of those attributes that comprise alternatives presented in a choice experiment are determined by the researcher. Deciding which attributes to include is a complicated task. Researchers must be sure to include enough attributes to describe the good in question so as to present a choice set that is representative of realworld conditions. However, researchers must keep from making the choice task too 25 Figure 1.1: Image of example choice set from the Great Lakes choice experiment, original size monitor-dependent (For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this thesis.) Visiting a Great Lakes Beach Suppose you are taking a trip to the beach and there are only two beaches to choose from. The beaches have different characteristics as shown in the table, but otherwise, they are the same. For example, they would have the same amount of litter, the same amount of crowding, and the same scenery. Please compare Beach A and Beach B in the table and answer the question below: Beach A Beach B Great Lake Lake Huron Lake Michigan Bathrooms Flushing toilets cleaned hourly Flushing toilets cleaned hourly Algae in the water Moderate (occasionally come in contact with algae) Low (rarely come in contact with algae) Algae on the shore Low (1-20% of the shore has algae.) None Length of beach 5280 yards (3 miles) 880 yards (1/2 mile) Testing water for bacteria Daily Weekly Distance from your home 48 169 Which of the above beaches would you visit? Beach A 26 Beach B burdensome for the respondent by including too many attributes. 
Since there may be many determinants of an individual’s enjoyment of a good included in a choice experiment, researchers typically conduct focus groups or cognitive interviews with members of the sample population to be sure the attributes and attribute levels included in the choice experiment are meaningful to respondents and that information within the survey is not overly technical or burdensome. Survey forms are also developed carefully and tested to be sure that the information provided is complete, and minimizes any tendencies for respondents to make inferences about missing information which could obscure results. Once the attributes and their levels have been defined, the researcher then generates a list of all the combinations of attribute levels that will appear within the choice sets shown to respondents, known as the experimental design. The levels of attributes included in choice sets is varied in such a way that minimizes unwanted correlation between attribute levels and allows the researcher to properly identify the effect that each attribute level has on the likelihood that a respondent chooses an alternative. Random Utility Theory Underlying the choice between alternatives is the utility the respondent would experience from choosing each alternative. The statistical model used to represent the choices made by choice experiment participants is based on random utility theory (McFadden 1974). Respondents review the attributes and attribute levels listed for each alternative in the choice experiment and choose the alternative that maximizes their utility. The indirect utility function is comprised of two main components: a deterministic (observable) component and a stochastic (unobservable) component. The deterministic component is a function of the attributes that comprise each alternative, the individual’s 27 characteristics, and unknown parameters. The stochastic component is an error term. Per Louviere et al. (2000) and Alberini et al. (2006), this relationship is stated formally as: ̅( ) (1) for every i individual and every j alternative; x is a vector of attributes that take on various levels for each alternative within a choice set, β is a vector of unknown parameters associated with those attributes, and is an error term capturing factors specific to the alternative and to the individual that affect the respondent’s utility but that the researcher cannot observe. The deterministic portion of the indirect utility function is commonly specified as a linear function of the attributes of the alternative and the respondent’s income (y) less the cost of the alternative (C): ( where the coefficient ) (2) represents the marginal utility of income because yi –Cij represents money person i has left over to spend elsewhere after alternative j was chosen at cost Cij. In our application, we ask respondents to review the attributes of two Great Lakes beaches and then choose which beach they would visit. For this study, we do not use a strictly monetary cost component, but rather use distance from home as the cost to the respondent to enjoy the beach alternative of their choosing. By using distance instead of monetary value, the choice situation presented in this choice experiment is consistent with real trip taking behavior, where each respondent would face a trip from home with an associated travel distance to the beach rather than a dollar amount they must pay in order to enjoy the beach. 
The marginal utility of additional miles away from home could be converted to a monetary measure by defining a relationship between miles driven and money. 28 However, this paper leaves the “cost” of the alternatives, and therefore the costs of changes in attribute levels, in terms of miles driven. Future analysis of web survey data will focus on translating information on actual trips taken into economic values for beach attributes in terms of dollars. The appropriate model for the outcome of the choice experiment with two alternatives is a discrete choice model describing the probability that an alternative, A, is chosen over alternative B by an individual, i. (3) Which can be expanded as: (4) And it follows: (5) We see in equation 5 that the probability of selecting an alternative does not depend on attributes that remain constant across the alternatives, such as income (y), the intercept term ( 0), or other characteristics of the individual. Rather, the probability that a respondent selects an alternative depends on the differences in the levels of attributes for a given alternative relative to the levels of attributes that appear for other available alternatives. When it is assumed that the errors in the model are independently and identically distributed (IID) and follow a type 1 extreme value distribution, a conditional logit model can be used to estimate the probability of a respondent’s choice (Louviere, Hensher and Swait 29 2000, Alberini et al. 2006). McFadden (1974) shows that the choice probabilities within the logit model are equal to: (6) Where µ is scale parameter that is inversely proportional to the model’s overall variance (Ben-Akiva and Lerman 1985). The scale parameter cannot be separated from the estimated parameters, β, and so it is commonly normalized to 1 in order to allow researchers to identify the preference parameters. In the case of choice experiment where respondents are faced with more than one choice set, researchers can employ a random-effects logit to control for correlation in error terms across responses from the same respondent (Wooldridge 2010). Our choice experiment shows respondents three choice sets, thus the estimation of a random-effects logit is appropriate. Using the results of the logit model, researchers can estimate the tradeoffs that respondents make between different levels of the attributes. In economics, these tradeoffs are referred to as the Marginal Rate Substitution (MRS) and are commonly put in terms of a cost parameter, or in our case, distance. In general terms, holding all other attributes constant, the MRS is equal to the change in one attribute, X1, required to compensate the individual for a one unit change in another attribute, X2 (i.e. the amount of X1 required to keep the individual at the same level of utility as before the one unit change in X2). In our case, we calculate the MRS in terms of miles from home. The MRS assumes there is a 30 change in environmental quality at a beach and estimates the additional miles further from home (if the MRS is positive) or the number of miles closer to home (if the MRS is negative) that the beach would have to be in order for an individual to be indifferent between the beach before and after the change in environmental quality. 
Equation (7) illustrates how to calculate MRS as the opposite of the ratio of the partial derivative of the indirect utility function with respect to the first attribute of interest and the partial derivative of the indirect utility function with respect to the second attribute of interest Equation (8) shows an example MRS calculation for moderate algae in the water in terms of driving distance specific to our application. ( ) (7) ( ) (8) For example, an MRS of 75 for a change in algae in the water from high to moderate is interpreted to mean that all else equal, a respondent would be indifferent between travelling some distance to a beach with high amounts of algae in the water, or travelling that same distance plus an additional 75 miles to a beach with identical characteristics except for moderate amounts of algae in the water instead of high amounts. Survey Development: Pretests This study reports on the findings of a choice experiment implemented as part of the Great Lakes beach web-based survey in the spring of 2012. The web survey was developed and questions were tested in the spring of 2012 using an iterative process that was guided by the results of 57 one-on-one cognitive interviews (Kaplowitz, Lupi and Hoehn 2004). The 31 57 interviews were composed of eight interviews conducted among a convenience sample of Michigan State University students at a campus food court, 19 interviews conducted in person among randomly selected adults at local shopping malls, and 30 interviews conducted remotely with respondents recruited from a web survey panel (Survey Sampling International) of adults in the lower peninsula of Michigan. The remote interviews were conducted over the phone using an innovative approach not previously documented in the literature on the pretesting of web-based valuation surveys. At the start of each remote interview, the lead researcher called the participant at a scheduled time into a shared conference call line, and then emailed the participant a link to 6 web-based screen sharing application called “GatherPlace.” The screen sharing application allowed the participant to view and control the screen of the host computer belonging to the lead researcher which was configured to show the web survey. The use of the shared conference call line and screen sharing application allowed other members of the research team to simultaneously listen to the interview as well as observe the participant navigating the web survey without creating any disturbance. After calibrating the screen sharing application to suit the display settings on the participant’s computer, the lead researcher would grant control of the host computer showing the web survey to the pretest participant. Once the participant had become accustomed to controlling the host computer, the lead researcher instructed the participant to work through the survey to the best of their ability. Following an outline of interview questions while probing as necessary on given issues as they arose, the lead researcher would gather information related to the 6 The application ran using Java, a program that is frequently used by websites. Since Java is popular among a wide range of websites, most internet users already have the program installed. We found that 25 of the 30 remote interview participants had a working version of Java already installed on their computer. 32 respondent’s perceptions, understandings, and opinions of survey tasks and information presented in the survey. 
Pretest participants were asked detailed questions about their understanding of the definitions of attributes, attribute levels, and their interpretation of questions pertaining to the attributes to be sure that the survey clearly communicated the intended themes. Pretests also focused on ensuring the respondents were able to answer questions according the information provided and, where applicable, their personal experience, attitudes, or opinions. After each interview was complete, research team members debriefed as necessary to address issues raised by a participant’s questions or difficulties, and programming changes were made to the survey instrument as soon as possible to then test those edits in subsequent pretests. This remote pretesting approach proved advantageous since it allowed several research team members to observe survey pretests without “hovering” or “crowding” around a single participant at a research facility, perhaps making it easier for the participant to focus on the tasks and honestly answer questions. The method was also inexpensive, with the only costs being a small monthly subscription to the screen sharing service and low-cost conference calling. Another key advantage this remote pretesting had over onsite pretests was that it allowed pretest participants to be drawn from across the Lower Peninsula of Michigan without requiring research team members to travel. This geographic representativeness helped capture opinions on the survey from individuals living adjacent to the Great Lakes who visit Great Lakes beaches nearly every day as well as those who live a few hours from the nearest Great Lake beach and may visit only a few times each year. Capturing opinions about the survey from individuals with a range of experiences with 33 visiting Great Lakes beaches was a key to our understanding of how well the survey instrument functioned. Pretests were halted once it was clear that respondents comprehended survey tasks and that survey information was clearly communicated to respondents. Once pretests and initial web-survey debugging were complete, a pilot group of 85 individuals were mailed invitations to the web survey. In the two weeks following the initial invitations, pilot study members were mailed two reminders: a half-sheet postcard and a quarter-sheet postcard, each listing the survey’s web-address, the individual’s password, and information on who to contact with questions. The pilot study received a total of 22 logins for a response rate of 25.9%. The pilot survey allowed the research team to simulate the process of actually implementing the survey (e.g. timing of mailing, receiving data) and to see if any questions arrived via phone or email from pilot survey respondents. Survey Development: Input from experts The text, diagrams, and graphics used to describe the attributes to respondents were developed with the help of state and county health officials, water policy experts, as well as resource managers from State and Federal agencies including NOAA Great Lakes Environmental Research Laboratory, Michigan Sea Grant, EPA Great Lakes Program Office, Michigan Department of Natural Resources, and county officials that perform water quality tests and conduct beach algal assessments. Parts of the choice experiment needed to convey scientific information in a way that was understandable to members of the general population who are not assumed to have advanced knowledge of the topics in the survey. 
The resource experts we consulted were able to ensure that the survey communicated accurate information. 34 Certain attribute descriptions shown to choice experiment participants also contained diagrams and drawings. Figure 1.2 and figure 1.3 show diagrams that accompanied definitions of attribute levels for amount of algae on the shore and amount of algae in the water, respectively. The use of photos to describe attributes and their levels was considered but research shows that unless closely controlled, photos can communicate unintended information (Hoehn, Lupi and Kaplowitz 2003). Even though Figure 1.2: Image of diagram of Great Lakes beach attribute: Algae in the water, original size monitor dependent. Amount of algae in the water Definition View of the Water None visitors never come in contact with algae while swimming or wading Low visitors rarely come in contact with algae while swimming or wading Moderate visitors occasionally come in contact with algae while swimming or wading High visitors constantly come in contact with algae while swimming or wading 35 Figure 1.3 Image of diagram of Great Lakes beach attribute: Algae on the shore, original size monitor-dependent Amount of algae on the shore Definition View of swimming area shore None None of the shore of the swimming area has algae. Low 1 to 20% of the shore of the swimming area has algae. Moderate 21 to 50% of the shore of the swimming area has algae. High More than 50% of the shore of the swimming area has algae. 36 the diagrams are simplifications of real-world conditions, the experts we consulted in the development of survey materials and pretest participants (i.e. perspective survey respondents) found the diagrams to represent conditions experienced at actual Great Lakes beaches. Survey Sample Because there is no list of beachgoers to facilitate sampling, a two-step sampling process was used. First, in the summer of 2011, a general population mail survey on participation in leisure and recreation activities was conducted using a sample of 32,230 residents of Michigan’s Lower Peninsula, 18 years and older, drawn from the drivers' license list. The mail survey contacts followed a modified version of Dillman's tailored design method (Dillman 2009), and the survey achieved a response rate of 38%. Among the activities asked about in the mail survey was whether or not the respondent had visited a Great Lakes beach in the last year. Second, in the spring of 2012, invitations to a Great Lakes beaches web-based survey were sent to 5,434 qualifying respondents from the mail survey (i.e. those who reported having visited a beach on the Great Lakes in Michigan,) again following a modified version of Dillman's tailored design method and achieving a response rate of 59.6%. Web Survey Implementation Web survey sample members received a maximum of five contacts over a month and a half-long field period (from April 14 to June 1, 2012): an invitation letter, a 1/2 sheet reminder postcard (in black and white), a quarter-sheet postcard (in color), an automated phone call reminder (robodial) to respondents with publically available land-line phone numbers (2,469 sample members) were called during the evening on April 27, and a final 37 letter offered a post-paid incentive of $10 or $20 to those who responded to the survey by 7 the end of the field period (May 29) . A small control group was mailed a final letter that did not offer a post-paid incentive. 
Each final contact letter included a business reply mail postcard instructing respondents to send the postcard back only if the individual did not have access to the Internet. Each mailing listed the survey web-site and provided the respondent with a unique password to use when logging onto the website, as well as a phone number and email address to contact if there were any questions. The web survey included three main sections: a section gathering detailed information about trips to Great Lakes beaches in the last year for use in revealed preference modeling, a choice experiment section, and a section of background and demographic questions. This paper deals with the results of the second section, the choice experiment. The choice experiment section began with separate pages explaining each of the attributes. Attribute descriptions were developed with the help of the aforementioned resource experts. Thus, each attribute description was followed by questions engaging the 8 respondent with the information presented about the attributes. Follow-up questions asked about the respondent’s personal experiences and attitudes towards the attributes. Each respondent’s path through the choice experiment presented information on the attributes in question in a controlled format, and the follow-up questions allowed respondents to interact with information on the attributes, a technique that survey research shows increases 7 8 See appendix C for copies of all materials sent to members of the web survey sample. See appendix A for set of screen captures from the choice experiment portion of the web survey, complete with informational treatments, warm-up tasks, choice sets and follow-up questions. 38 respondent’s comprehension of survey tasks (Kaplowitz et al. 2004, Hoehn and Randall 2002, Schwarz and Sudman 1996). Following descriptions of the attributes, each respondent was shown three different choice sets. Figure 1.1 shows an example of a choice set that was used in our study. The choice experiment asked respondents to assume they were taking a trip to a Great Lakes beach, and the only two alternatives available were the beaches shown in the choice sets. Respondents were told the alternatives had the different characteristics described in the table, but otherwise were same. Respondents were then instructed to review the attributes in the table and then select which beach they would visit. The attribute names in the choice set were hyperlinked so that respondents could click on the attribute name to open a new window displaying the same attribute definition shown earlier in the survey for the respondent to review. Beaches in the choice sets were described using a set of seven attributes. Attributes were selected for our study based on their salience and relevance to Great Lakes beach visitors, with a focus on beach attributes that could be tied to environmental management schemes, not physical characteristics or facilities. Attributes and Attribute Levels The attributes used in the choice experiment were as follows: Great Lake (what Great Lake the beach lies on), bathrooms (the type of bathrooms available at the beach), amount of algae in the water, amount of algae on the shore, length of beach, the frequency of bacteria testing, and the distance away from home. Table 1.1 shows the attributes and their levels. 
39 Table 1.1: Attributes and Attribute Levels included in the Great Lakes Choice Experiment Attribute Name Great Lake Bathrooms* Algae in the water Algae on the shore Length of beach Testing water for bacteria Attribute level Michigan Huron St. Clair Erie Flushing toilets, cleaned daily Flushing toilets, cleaned hourly None Low (rarely come in contact with algae) Moderate (sometimes come in contact with algae) High (constantly come in contact with algae) None Low (1-20% of the shore has algae) Moderate (21-50% of the shore has algae) High (more than 50% of the shore has algae) 50 yards 220 yards (1/8 mile) 880 yards (1/2 mile) 1760 yards (1 mile) 5280 yards (3 miles) Never Monthly Weekly Daily Distance from your Individual’s minimum distance to selected Great Lake home + 0 miles + 15 miles + 40 miles + 100 miles + 150 miles * Attribute was held constant within choice sets to control for preferences for different bathrooms, but no preference or value information was gathered. For the levels of the Great Lakes attribute, we focused on the Great Lakes that are adjacent to Michigan’s Lower Peninsula, where we drew our sample from. Bathrooms were included in the choice sets but were held constant within choice sets. In pretests, we found 40 bathrooms were a salient attribute when choosing the preferred beach, but facilities at the beaches (including playgrounds, picnic tables, and bathrooms) did not fit into our study’s focus on environmental quality. Instead of including bathrooms as a variable in our choice experiment, we included it as an attribute that would remain constant within choice sets. This kept respondents from making choices based on inferred levels for a missing bathroom attribute based on existing attributes (e.g. “Lake Michigan has cleaner bathrooms”). With the bathrooms attribute being held constant, bathrooms would not impact the individual’s decision. The attribute levels for algae in the water and algae on the shore were chosen to correspond directly with data currently being gathered by the EPA Great Lake Beach Sanitary Survey (U.S. EPA 2008). By using the same scale as the EPA sanitary survey, we can combine information on which Great Lakes beaches people visited and how often along with what the levels of algae were on the shore and in the water at those beaches (revealed preference data) with the marginal rates of substitution for changes in algae levels to estimate the economic costs or benefits associated with changes in the levels of algae. Within the choice experiment, we define beach length as the distance along the Great Lake 9 within the boundaries of the park or managed area where the beach is located. We chose the attribute describing the frequency of testing to have the levels of daily, weekly, monthly, or none to represent examples of how frequently actual beaches are tested. As mentioned previously, not every Great Lakes public beach is tested for E. coli. The final attribute, driving distance contained two components. The first component was the minimum distance from each individual’s home to the Great Lake that was specifically named in the choice alternative (i.e. the minimum distance from an individual’s home to Lake Michigan when 9 Note, findings from Parsons et al (1999) refer to preferences for changes in beach width, measured in terms of the distance from dune toe to berm. 41 Lake Michigan was selected for the choice alternative, or the minimum distance to Lake St. Clair when Lake St. Clair was selected for the choice alternative). 
Distances were calculated as the minimum distance from the center of the web survey sample member’s zip code to one of several beaches along each Great Lake using PC Miler. The second component of distance was one of five levels of “additional miles” that were added as described in the experimental design to the minimum distance to create the total distance. This total distance was the only distance presented to respondents. The levels of the additional distance variable were designed to allow distance to be estimated as a continuous variable in our analysis. The distance attribute needed to control for the minimum distance from a respondent’s home to a specific Great Lake since there is such wide variation in distances across our sample: certain sample members live adjacent to some Great Lakes (i.e. minimum distance of zero to the nearest Great Lake) while the same individual may live 150 or more miles from a different Great Lake. By controlling for the minimum distance from a respondent’s home to the Great Lake within the specific choice alternative, we increase the plausibility and realism of the choice alternative, and avoid showing the respondent counterfactual attribute levels, a concern raised previously in literature regarding labeled choice experiments (Huybers 2005, Carson et al. 1994). The experimental design, which is the list of each combination of attributes that would appear within the choice sets featured in the choice experiment, was generated using Ngene software (Choice Metrics 2011). The experimental design was derived in such a way that allows researchers to identify the impact each attribute level has on the probability of a respondent selecting a given alternative (ibid, Johnson et al. 2006.) The study utilized an Fefficient design which minimized the variances of MRS calculations from the parameters 42 estimated from the results (Choice Metrics 2011). To derive the design, a researcher inputted the number of alternatives in the choice experiment, as well as the attributes and attribute levels. As it was possible, attributes were also assigned an expected sign, allowing NGene to minimize the instances of dominated alternatives and further increase design efficiency. NGene used a swapping algorithm that searched for new choice pairs to swap into the existing design in order to increase overall efficiency. For the choice experiment task reported here, 126 choice pairs were generated and randomly sorted into 42 groups of three, with each respondent viewing one of the 42 groups. 43 Following the attribute levels used in the choice experiment (and described above) equation 9 shows the logit model we are estimating: (9) 44 Where each independent variable named with a specific attribute level is a dummy equal to 1 when that attribute level is present in the alternative being evaluated and equal to zero when that attribute takes on any other level. All coefficients in the model are estimated relative to the base case for the following beach attributes: algae on the shore equal to high, algae in the water equal to high, beach length equal to 50 yards, testing for bacteria equal to daily, and Great Lake equal to Lake Erie. Results The web survey obtained a response rate of 59.6% (3,211 individuals out of an effective sample of 5390). A random subset of respondents to the web survey was assigned 10 the choice experiment task as described above, which included a total of 1040 individuals. 
Of the 1040 assigned to the choice experiment task reported here, 988 filled out all three choice sets presented to them, 43 filled out two, and 9 filled out one, for a total of 3059 observations. Table 1.2 shows the number of respondents and observations for the Great Lakes beach web survey choice experiment reported here. Table 1.2: Number of respondents and observations for Great Lakes beach web survey Choice experiment 1209 individuals logged on to survey 1040 individuals answered at least one choice set within choice experiment 988 individuals with three choice sets answered 43 individuals with two choice sets answered 9 individuals with one choice set answered 3059 total number of choice experiment observations 10 One respondent to the choice experiment task reported here was inadvertently excluded from the analysis due to a write-error performed by the web survey. The error made it impossible to link the choice set viewed by the respondent to the respondent’s answers to the choice experiment until further steps were taken after the analysis was conducted. 45 For our statistical analysis, we first estimate a random-effects logit model, controlling for the fact that individuals answered up to three different choice sets. The model presented here includes the main effects for all of the attributes and attribute levels included in the choice experiment, and results are reported in Table 1.3. Note the baseline attribute level: all preferences are measured as changes from the baseline level. For example, the estimated parameter on “algae in the water: low” is for a change in algae in the water from the baseline level, none, to low. The estimated model had a Wald chi-squared statistic of 705 (p<0.001), and a McFadden’s pseudo R-squared of 0.302 (meaning that on a scale of 0 to 1, the model we estimated has a higher likelihood or fits the data better than a model that only contains a constant). The model fit, as judged by the predictive ability, is well-balanced as it correctly predicted 77% of Beach A choices and 76% of Beach B choices (thus, the model predicted the probability of a person choosing Beach A as being greater than 50% in 77% of the cases a respondent actually chose Beach A). Nearly all model parameters are estimated to be statistically significant at the p<0.001 level. As expected, the estimated parameter on driving distance is statistically significant and negative. The model reveals that Great Lakes beach goers prefer less algae on the shore and lower amounts of algae in the water, with algae in the water causing more of a nuisance than algae on the shore. We find that Michigan residents also prefer longer beaches to shorter beaches, with no significant difference in preferences between three mile long beaches (baseline) and one mile long beaches. The model also shows that respondents prefer a beach that is tested more frequently for bacteria versus a beach that, all else equal, is not tested for bacteria at all. We also see that, all else equal, Great Lakes beach goers prefer beaches on Lake Michigan 46 and Lake Huron to beaches on Lake Erie (the baseline) and that beach goers are indifferent between beaches on Lake St. Clair and Lake Erie. 
Table 1.3: Results of random-effects logit model estimating determinants of Great Lakes beach choice Variable coefficient p-value One-way distance -0.008 <0.001 Algae on the shore: None 1.474 <0.001 Algae on the shore: Low 1.225 <0.001 Algae on the shore: 0.763 <0.001 Moderate Algae in the water: None 1.768 <0.001 Algae in the water: Low 1.473 <0.001 Algae in the water: Moderate 1.111 <0.001 Length: 50 yards -0.78 <0.001 Length: 220 yards -0.421 <0.001 Length: 880 yards -0.246 0.012 Length: 1760 yards 0.069 0.489 Testing: None -1.496 <0.001 Testing: Monthly -0.332 <0.001 Testing: Weekly -0.381 <0.001 Great Lake: Michigan 1.194 <0.001 Great Lake: Huron 0.498 <0.001 Great Lake: St. Clair 0.004 0.966 *= significant at the 5% level **= significant at the 1% or less level Algae on the shore parameters are relative to "High" Algae in the water parameters are relative to "High" Length parameters are relative to "3 miles/5280 yards" Testing parameters are relative to "Daily" Great Lakes parameters are relative to "Lake Erie" ** ** ** ** ** ** ** ** ** * ** ** ** ** ** While we can judge relative preferences for beach characteristics from the estimated logit parameters (i.e. the β’s), it is also informative to examine the trade-offs between attributes implied by those parameter estimates. We examine the trade-offs by calculating the MRSs for each attribute level. Table 1.4 shows the economic tradeoffs (MRSs) 47 estimated for the marginal changes in Great Lakes beach attributes relative to the baseline attribute level. All trade-offs are in terms of additional miles from home. The table includes 95% confidence intervals around the mean MRS, which were calculated using the KrinskyRobb method with 10,000 draws (Krinsky and Robb 1986). To interpret the marginal rates of substitution these would need to be translated into dollar values and placed into a realistic choice set for beaches, that is, a choice set that includes a full range of substitute sites. Instead, these welfare measures assume that each respondent would take a trip to one of the two beaches they saw in the choice experiment, By not accounting for the real set of substitute alternatives, these measures are an upper-bound for the actual economic value of these marginal changes in beach attributes since respondents were not shown a full range of substitutes, nor were they given a proxy for this such as a "do not go," or "go to some other beach" option. From the trade-offs calculated using our estimated model, we can see that while higher amounts of algae on the shore and higher amounts of algae in the water are both undesirable to respondents, the presence of algae in the water is even more of a nuisance than algae on the shore. While the mean MRSs for testing for bacteria appear to show a preference for monthly testing over weekly testing, the significant overlap between confidence intervals indicate that for qualitative purposes, respondents are indifferent between the two frequencies of testing. Both monthly and weekly testing for bacteria are preferred to no testing at all. The MRSs reinforce the intuition that respondents would prefer longer beaches to shorter ones, with respondents being indifferent between beaches that are one mile versus three miles long, all other beach attributes being equal. 
48 Table 1.4: Mean estimate of Marginal Rates of Substitution for Great Lakes beach characteristics, including lower-bound and upper-bound estimates of 95% Confidence Intervals from Krinsky-Robb method, 10,000 draws LowerMean Bound Algae on the shore: None 154 180 Algae on the shore: Low 125 150 Algae on the shore: Moderate 70 93 Algae in the water: None 188 216 Algae in the water: Low 154 180 Algae in the water: Moderate 111 136 Length: 50 yards -121 -95 Length: 220 yards -76 -51 Length: 880 yards -54 -30 Length: 1760 yards -15 8 Testing: None -213 -183 Testing: Monthly -63 -41 Testing: Weekly -70 -47 Great Lake: Michigan 121 146 Great Lake: Huron 38 61 Great Lake: St. Clair -21 0 Algae on the shore estimates are relative to "High" Algae in the water estimates are relative to "High" Length estimates are relative to "3 miles/5280 yards" Testing estimates are relative to "Daily" Great Lakes estimates are relative to "Lake Erie" Attribute UpperBound 210 178 119 248 209 162 -71 -29 -7 32 -157 -18 -25 173 85 23 The MRSs estimated from the model are consistent with what we would expect of preferences for desirable or undesirable traits: respondents have higher MRS for “better” levels of the attributes included in the choice experiment. These tradeoffs show that all else equal, respondents are willing to travel further to beaches that have lower amounts of algae in the water, less algae on the shore, and are tested more frequently for bacteria. 49 The estimated MRSs for beach length 11 are consistent with the law of diminishing marginal utility which states that the marginal utility gained by additional units of consumption is greatest for the first additional unit and decreases for each subsequent unit consumed. The MRS for length of beach is largest for the initial change in beach length from 50 yards to 100 yards. The marginal utility of additional beach length diminishes on a per yard basis until we estimate that respondents are indifferent between a beach that is 1 mile long (1760 yards) and 3 miles long (5280 yards) all else equal. Note that this finding is consistent with revealed preference studies that show a significant and non-linear relationship between beach length and site choice (e.g. the natural log of length in Parsons et al. 2009, or length plus length squared in Lew and Larson 2005) where the marginal benefits of length decrease as beaches are longer. The model also indicates respondents’ preferences for specific Great Lakes. The finding that the parameters for Lake Michigan and Lake Huron are statistically significant (p<0.001) shows that respondents infer certain characteristics about a beach based on which Great Lake it lies on, independent of beach attributes and environmental quality measures such as frequency of testing for bacteria and levels of algae. Compared to Lake Michigan, the most preferred Great Lake, respondents would next prefer Lake Huron. Preferences are the lowest for visiting beaches on Lake St. Clair and Lake Erie. The model predicts that all else equal, a beach on Lake Erie or Lake St. Clair would have to be nearly 150 miles closer to an individual’s home than the same beach on Lake Michigan in order for 11 Note, our findings pertain to beach length, while previously mentioned marine beach studies such as Parsons et al. (1999) refers to preferences for changes in beach width, measured in terms of the distance from dune toe to berm. 
Our study measures preferences for beach length defined as the distance along the Great Lake within the boundaries of the park or managed area where the beach is located. 50 the respondent to have an equal probability of visiting the Lake Erie or Lake St. Clair beach and the Lake Michigan Beach. Conclusions This paper provides the first insights into the values and preferences that Michigan residents have for a popular and important resource, Great Lakes beaches, which has yet to be fully studied. By capturing tradeoffs between Great Lake beach attributes through a choice experiment, we are able to estimate the upper-bound of the trade-offs residents who visit Great Lakes beaches would make for marginal changes in Great Lakes beach attributes. The study shows that visitors to Great Lakes beaches first and foremost prefer beaches that are closer to home versus beaches that are further away. Great Lakes beach goers also prefer lower amounts of algae on the shore and in the water, and all else equal they prefer longer beaches to shorter beaches. While the findings themselves may seem intuitive, it is important to consider the implications. This study represents the first wide-spread collection of preference information from Michigan residents regarding changes in environmental quality at Great Lakes beaches. The tradeoff information gathered in this study is consistent with accepted economic theory, and stands as a first step in studying the values that residents hold for changes in beach quality. One important outcome of this study is the ability to compare the MRSs for different attribute levels. Comparisons of the Marginal Rates of Substitution give us great insight into the importance of different beach characteristics relative to one another. Interestingly, this study suggests that Great Lakes beach goers have a greater preference for beaches with lower amounts of algae in the water than for beaches with lower amounts of algae on the 51 shore. The combinations of stresses to the natural system that lead to increases in the amounts of algae at Great Lakes beaches are widespread and complicated. The findings of this study indicate that there is value to seeking controls to the various factors influencing nuisance algae growth, or mitigating the presence of nuisance algae at Great Lakes beaches. This study’s results highlight the importance of setting best management practices and other industry standards to minimize the promotion of nuisance algae in the Great Lakes. Based on this study’s results, we gain a clearer understanding of how such programs could have widespread impacts not only on the Great Lake ecosystem but also on the millions of Michigan residents who enjoy the use of public beaches each year. This study also shows respondents have a strict preference for beaches that are tested for bacteria over beaches that are not tested at all. This result shows an example of the value of information: while testing for bacteria does nothing to affect the frequency of beach closures due to elevated bacteria, this study shows there is a benefit to having tests in place. While addressing the large-scale causes of elevated bacteria such as combined sewer overflows or runoff may seem daunting, it is important to consider that there is still value to having beaches tested. 
This result is similar to Krieger and Hoehn (1998) who found Michigan anglers valued a full disclosure fish monitoring program that would inform anglers of sites that were tested and found to be safe (as opposed to advisory information that only listed exceedances). This finding verifying the value that visitors to Great Lake beaches have for bacteria testing is timely: as this manuscript is in preparation, the draft budget for U.S. EPA’s fiscal year 2013 (U.S. EPA 2012b, p. 28) calls for the elimination of Beach Grant Funding. Beach Grant funding helps support testing of public beaches across 52 the country. As evidenced by this study, tests for bacteria in the waters of public beaches provide vital information and a valuable service to millions of beach goers in Michigan. In general, this study helps describe not only the values Michigan residents have for different Great Lakes beach attributes, but by placing the preferences for attributes relative to one another, gives a sense of the relative importance of each attribute. The values and relative importance of certain environmental quality measures, such as the importance of low amounts of algae on the shore versus algae in the water can help decision makers and resource managers prioritize future studies or the planning of Great Lakes beach improvements. These results also show justification for further beach testing, even at low frequencies, as there is a value for having a beach tested at least monthly, even though the act of testing has no impact on water quality. By gathering information on these preferences, this study has begun to provide the proper economic context for resource managers and decision makers to answer questions facing the Great Lakes. 53 CHAPTER 2: Labeled Versus Unlabeled Choice Experiments for Valuing Great Lakes Beach Characteristics. Introduction There is a rich literature that uses choice models to estimate the values and demand that consumers have for the attributes that comprise goods as well as goods on a whole (Louviere, Hensher and Swait 2000). Choice experiments have been used in marketing (Green and Srinivasan 1978), transportation (Hensher 2001), development economics (Rubey and Lupi 1997), and agricultural economics (Pozo, Tonsor and Schroeder 2012). Within a choice experiment, a participant is shown two or more alternatives comprised of varying levels of attributes and asked to select the most preferred alternative. Alternatives can either be unlabeled or labeled. Unlabeled choice alternatives are commonly denoted by a generic title or a letter (e.g. “Choice A, Choice B” or “Program C, Program D”). Labels on choice alternatives can indicate a brand name, location name, or any other unique identifier for that alternative (e.g. “New York, Los Angeles”, “Coke, Pepsi”, or “Train, Car, Bus”). In this paper, within a choice experiment pertaining to Great Lakes beach choice, we use labels to indicate which Great Lake the beach alternative lies on. In a labeled choice experiment, researchers elicit respondents’ preferences for the labels as well as the other attributes of the good, with the label serving to capture characteristics of the alternative that consumers associate with the name apart from the other attributes in the choice experiment (Louviere, Hensher and Swait 2000). 
Discrete choice studies in marketing have examined differences in consumer preferences for goods in labeled choice experiments as a means of investigating brand equity and market share at least as far back as the studies of Swait et al. (1993). 54 For many attribute based choice studies, whether alternatives are labeled versus unlabeled is central to the project’s research goal, such as studies of brand equity, brands’ market shares, or different modes of transportation. Other times the inclusion of labeled alternatives is at the researcher’s discretion. When deciding whether or not to include labels in a choice experiment, researchers should consider the benefit of added familiarity or realism the labels provide (Adamowicz and Boxall 2001). However, when labels are present in a choice set, respondents may seek to simplify the decision making process and base their choice of the preferred alternative solely on the label, ignoring other attributes (Blamey et al. 2000). Respondents faced with choice sets containing labels may also consider alternatives outside of those appearing in the choice task (Adamowicz and Boxall 2001). Concerns exist among practitioners of attribute-based choice or ranking methods that labeled alternatives may contain levels of attributes that, in combination with the presence of a label, appear unrealistic or infeasible to respondents, depending on the respondent’s perceptions of the label (Huybers 2005, Carson et al. 1994). Despite the concerns over how the presence or absence of labels may affect the results of choice experiments, little empirical evidence exists to characterize those effects. Purpose The purpose of this study is to provide further evidence of the effects of different labeling schemes on the results of choice experiments. In this paper, we use a split sample web survey to test for differences in the preference information elicited from choice experiments with different labeling schemes. The study is based on a choice experiment pertaining to Great Lakes beach recreation site choices wherein the names of Great Lakes were used as labels to identify which Great Lake that a beach alternative lied on. These 55 labels indicating the Great Lake stand as “brands,” or attributes within the choice alternatives that capture preferences for inferred characteristics outside of the attributes listed in the choice alternatives. Respondents were randomly assigned to one of three choice experiments each with a unique labeling scheme: a scheme using “labeled” alternatives, where the beaches were labeled with a Great Lake; a scheme with “samelabeled” alternatives, where Great Lake labels were present but remained constant within choice sets; and a scheme with “unlabeled” alternatives, where no Great Lake labels were present. Figure 2.1, 2.2 and 2.3 show examples of choice sets from each of the three different labeling schemes used in this study. This paper will add to the existing research on the sensitivity of choice experiments to researchers’ design choices (in the spirit of Rose et al. 2009, among others).The labeling experiment in this paper will also provide insight about the usability of preference information from choice experiments in a benefit transfer setting. Benefit transfer is the practice of applying welfare information gathered from an original study site to a different site (commonly referred to as the policy site) where no such information exists. 
For overviews of benefit transfer methods, see Johnston and Rosenberger (2010), Rosenberger and Loomis (2003), Boyle et al. (2010), Wilson and Hoehn (2006). A key advantage of benefit transfer is that it can provide decision makers a convenient estimate of relevant information without the considerable time and resources required to conduct an original study. A reasonable application of benefit transfer requires a degree of similarity across the study and policy sites, as well as a similarity in contexts between the original study and the focus of the transfer (Johnston 2006). Within the application of benefit transfer, one must also consider the inclusion of all relevant explanatory variables (Piper and Martin 2001). 56 Figure 2.1: Image of example choice set from “Labeled” choice experiment, original size monitor-dependent Beach A Beach B Great Lake Lake Huron Lake Michigan Bathrooms Flushing toilets cleaned hourly Flushing toilets cleaned hourly Algae in the water Moderate (occasionally come in contact with algae) Low (rarely come in contact with algae) Algae on the shore Low (1-20% of the shore has algae.) None Length of beach 5280 yards (3 miles) 880 yards (1/2 mile) Testing water for bacteria Daily Weekly Distance from your home 48 169 Which of the above beaches would you visit? Beach A 57 Beach B Figure 2.2: Image of example choice set from “Same- Labeled” choice experiment, original size monitor-dependent Beach A Beach B Great Lake Lake Erie Lake Erie Bathrooms Flushing toilets cleaned hourly Flushing toilets cleaned hourly Algae in the water Moderate (occasionally come in contact with algae) None Algae on the shore None High (more than 50% of the shore has algae.) Length of beach 1760 yards (1 mile) 220 yards (1/8mile) Testing water for bacteria Weekly Daily Distance from your home 18 33 Which of the above beaches would you visit? Beach A 58 Beach B Figure 2.3: Image of example choice set from “Unlabeled” choice experiment, original size monitor-dependent. Beach A Beach B Bathrooms Flushing toilets cleaned hourly Flushing toilets cleaned hourly Algae in the water None Moderate (occasionally come in contact with algae) Algae on the shore Low (1-20% of the shore has algae.) High (more than 50% of the shore has algae.) Length of beach 220 yards (1/8mile) 880 yards (1/2mile) Testing water for bacteria Weekly Daily Distance from your home 118 78 Which of the above beaches would you visit? Beach A 59 Beach B Recent published research has focused on advancing practitioners’ understanding of the optimal conditions for conducting benefit transfer. Several key factors for a successful benefit transfer have been identified including similarity between the study and policy sites. Site similarity can refer to several factors including socio-demographic characteristics of the affected populations, and physical characteristics of the resources being valued. Rosenberger and Phipps (2007) review several transfers and conclude that expected errors in transfers decrease as the similarity in sites and populations increase. Johnston (2007) examines how similarity in contexts between study and policy sites can improve benefit transfer. Johnston uses results from an identical choice experiment valuing land conservation practices conducted in four different Rhode Island communities to test the transfer of values between similar populations and sites. 
Results show that transfer validity is increased for transfers between communities more similar in terms of attributes relevant to land use policy, such as housing density and population density. This finding highlights the importance of context similarity between study and policy sites within the practice of benefit transfer. Additional contributors to similarity in context across study and policy sites could include a number of site-specific features, including the presence of name-specific or site-specific attributes that are brought to bear when measuring preferences. Through the comparison of the three labeling regimes in the choice experiment, our study allows a view into the context similarity between our study site and possible policy sites. Our study's split-sample design allows a unique perspective as to whether labels such as site names matter when eliciting stated preference information. Transferring preference information from a labeled study site to a policy site outside of the study region is likely to be more feasible if differences between the preference information elicited in labeled and unlabeled choice experiments are minimal (i.e. preference information is not affected by labels, and therefore is more readily transferable).

This paper will also help fill the need for information on the public's values and preferences for different environmental conditions at Great Lakes beaches. The Great Lakes coast of Michigan features approximately 600 public beaches along with countless private beaches (MDEQ 2012a). Visiting Great Lakes public beaches is one of the most popular activities among Michigan residents, with more than 50% of residents reporting having visited a public beach on the Great Lakes during the prior two swimming seasons (Lupi et al. 2012). Despite the popularity and uniqueness of the resource, it comes under constant threat from natural and manmade stresses, and little is known about the values and preferences Michigan residents have for different environmental conditions at Great Lakes beaches that would allow for economically efficient management of the resource.

Research Questions

In addition to interpreting the outcome of the basic choice models, this paper seeks to examine the following research questions:

Question 1: How will the parameters estimated from the "Labeled," "Same-Labeled," and "Unlabeled" choice experiments compare to one another? We expect to see similar parameters in the "Same-Labeled" and "Unlabeled" models, and to find different parameters in the "Labeled" model. Under this question, we test the null hypothesis that the parameters of the three models (each estimated from the results of a different labeling scheme) are identical (equation 10):

\( H_0: \beta_{Labeled} = \beta_{SameLabeled} = \beta_{Unlabeled} \)   (10)

where \( \beta \) is a vector of parameters estimated for the Great Lakes beach characteristics included in a choice experiment employing a given labeling scheme.

Question 2: How will the marginal rates of substitution (MRS) estimated from the different choice experiments compare to one another? The expectation is that despite possible differences in parameter estimates, the marginal rates of substitution from the "Labeled," "Same-Labeled," and "Unlabeled" models will all be highly similar and exhibit similar rankings and relative magnitudes. To compare the MRSs, we use the approach of Poe et al. (1997) to test the null hypothesis that the MRSs estimated from the results of each of the three labeling schemes are identical (equation 11):
\( H_0: \mathrm{MRS}_{Labeled} = \mathrm{MRS}_{SameLabeled} = \mathrm{MRS}_{Unlabeled} \)   (11)

where MRS is a vector of marginal rates of substitution calculated from the estimated parameters (the parameter on each beach characteristic taken relative to the parameter on distance) in a choice experiment employing a given labeling scheme.

This paper will give a brief background on the theory underlying the choice experiment, including random utility theory and logit model estimation, as well as useful outputs from choice experiments (relative preferences and marginal rates of substitution). Next, the paper describes past work investigating the sensitivity of choice experiment data to different study design dimensions. Then, we review other studies that have investigated the effects of labeled versus unlabeled alternatives. Following that discussion, we outline the design, implementation, and results of the choice experiment conducted about Great Lakes beach attributes. The choice experiment uses the names of Great Lakes as the labels in different labeling schemes to test the effect of different labeling regimes on the preferences elicited by choice experiments. The paper closes with a discussion of results and conclusions.

Method: Choice Experiment

Choice experiments have long been used in marketing research as a means of measuring consumers' values for entire goods or for specific attributes of goods being developed for sale in markets. Green, Krieger and Wind (2001) summarize the evolution of conjoint analysis (rating or ranking exercises) into discrete choice based methods with the introduction of McFadden's random utility theory and the application of logit models of choice (McFadden 1974). Choice experiments have since become popular within the field of resource and environmental economics, where the method is applied to goods for which no markets exist (Hanley, Wright and Adamowicz 1998). A choice experiment presents a respondent with two or more alternatives comprised of varying levels of attributes and asks the respondent to choose the most preferred alternative. The underlying economic interpretation of the choice experiment is that the choice set represents a constrained optimization problem in which the respondent is assumed to be maximizing his or her utility subject to his or her budget constraint. By observing many choices made among alternatives with varying attribute levels, researchers can model the tradeoffs respondents make between different levels of the attributes, and these tradeoffs can be used to calculate the demand for attributes as well as economic welfare measures.

Measuring brand equity has long been a focus of marketing research (Keller 1993). The labeling of alternatives within a choice experiment has served as a means of assessing brand equity since at least Swait et al. (1993) and Park and Srinivasan (1994): both studies used a discrete choice experiment, labeling alternatives with different brand names, as a means for uncovering brand equity. Kamakura and Russell (1993) analyze actual market transactions (consumers' laundry detergent purchases) to measure brand equity in a framework similar to Swait et al. (1993). Since labels capture inferred attributes associated with the label and not attributes explicitly listed in the choice set, it can be difficult to identify exactly what is being inferred by the label (Louviere, Hensher, and Swait 2000).
However, as Kamakura and Russell (1993) point out, brand equity or other measures of utility related to alternative-specific labels can best be calculated when the measurable or physical attributes of the product are well accounted for within the choice sets. Practitioners can use focus groups and pretests to ensure that, in addition to labels, they have included the pertinent attributes. Within the context of the choice experiment, researchers can vary alternatives' attributes (and, if applicable, labels) to allow the identification of the influence each attribute (and label) has on respondents' choices apart from the other attributes (Louviere, Hensher, and Swait 2000). Including labels in a choice experiment can increase the realism of the choice alternatives, an overall desirable trait, since choice sets should mimic actual decisions made by respondents as closely as possible to ensure the accuracy of results (Harrison 2006).

Random Utility Theory

Underlying the choice between alternatives is the utility the respondent would experience from choosing each alternative. The statistical model used to represent the choices made by choice experiment participants is based on random utility theory (McFadden 1974). The choices observed are assumed to be consistent with Lancaster (1966) in that respondents review the attributes and attribute levels listed for each alternative in the choice experiment and choose the alternative that maximizes their utility. The indirect utility function is comprised of two main components: a deterministic (observable) component and a stochastic (unobservable) component. The deterministic component is a function of the attributes that comprise each alternative, the individual's characteristics, and unknown parameters. The stochastic component is an error term. Per Louviere, Hensher and Swait (2000) and Alberini et al. (2006), this relationship is stated formally as:

\( U_{ij} = v(x_{ij}; \beta) + \varepsilon_{ij} \)   (12)

for every individual i and every alternative j, where \( x_{ij} \) is a vector of attributes that take on various levels for each alternative within a choice set, \( \beta \) is a vector of unknown parameters associated with those attributes, and \( \varepsilon_{ij} \) is an error term capturing factors specific to the alternative and to the individual that affect the respondent's utility but that the researcher cannot observe. The deterministic portion of the indirect utility function is commonly specified as a linear function of the attributes of the alternative and the respondent's income (y) less the cost of the alternative (C):

\( v_{ij} = \beta x_{ij} + \lambda (y_i - C_{ij}) \)   (13)

where the coefficient \( \lambda \) represents the marginal utility of income, because \( y_i - C_{ij} \) represents the money person i has left over to spend elsewhere after alternative j is chosen at cost \( C_{ij} \). For this study, we do not use a strictly monetary cost component, but rather use distance from home as the cost to the respondent of enjoying the beach alternative of their choosing. By using distance instead of a monetary value, the choice situation presented in this choice experiment is consistent with real trip-taking behavior, where each respondent would face a trip from home with an associated travel distance to the beach rather than a dollar amount they must pay in order to enjoy the beach, since not every beach has parking fees or costs for entry. The marginal utility of additional miles away from home could be converted to a monetary measure by defining a relationship between miles driven and money.
However, this paper leaves the "cost" of the alternatives, and therefore the costs of changes in attribute levels, in terms of miles driven. Future analysis of the web survey data will focus on translating information on actual trips taken into economic values for beach attributes in terms of dollars.

Within the choice experiment, the respondent chooses the option from the available alternatives that maximizes his or her utility. The appropriate model for the outcome of the choice experiment with two alternatives is a discrete choice model describing the probability that an alternative, A, is chosen over alternative B by an individual, i:

\( P_i(A) = \Pr(U_{iA} > U_{iB}) \)   (14)

which can be expanded as:

\( P_i(A) = \Pr(v_{iA} + \varepsilon_{iA} > v_{iB} + \varepsilon_{iB}) \)   (15)

and it follows that:

\( P_i(A) = \Pr(\varepsilon_{iB} - \varepsilon_{iA} < v_{iA} - v_{iB}) \)   (16)

We see in equation (16) that the probability of selecting an alternative does not depend on anything that remains constant across the alternatives, such as income (y), the intercept term, or other characteristics of the individual. Rather, the probability that a respondent selects an alternative depends on the differences in the levels of attributes for a given alternative relative to the levels of attributes that appear for the other available alternatives. When it is assumed that the errors in the model are independently and identically distributed (IID) and follow a type 1 extreme value distribution, or Gumbel distribution, a conditional logit model can be used to estimate the probability of a respondent's choice (Louviere, Hensher and Swait 2000, Alberini et al. 2006). McFadden (1974) shows that the choice probabilities within the logit model are equal to:

\( P_i(A) = \frac{e^{\mu v_{iA}}}{e^{\mu v_{iA}} + e^{\mu v_{iB}}} \)   (17)

where \( \mu \) is a scale parameter that is inversely proportional to the model's overall variance (Ben-Akiva and Lerman 1985). The scale parameter cannot be separated from the estimated parameters, \( \beta \), and so it is commonly normalized to 1 in order to allow researchers to identify the preference parameters. However, logit models estimated from different data sets will have different model variances, and hence different scale factors. When comparing the parameter estimates from logit models with different underlying data sets, without controlling for differences in scale factors, it is unclear whether differences in parameters are due to scale factors or to actual differences in preferences (Swait and Louviere 1993). In the case of a choice experiment where respondents face more than one choice set, researchers can employ a random-effects logit to control for correlation in the error terms across responses from the same respondent (Wooldridge 2010).

Using the results of the logit model, researchers can estimate the tradeoffs that respondents make between different levels of the attributes. These tradeoffs are referred to as the marginal rate of substitution (MRS) and are commonly put in terms of a cost parameter, or in our case, distance from the respondent's home. In general terms, holding all other attributes constant, the MRS is equal to the change in one attribute, X1, required to compensate the individual for a one-unit change in another attribute, X2 (i.e. the amount of X1 required to keep the individual at the same level of utility as before the one-unit change in X2). In our case, we calculate the MRS in terms of miles from home relative to the baseline attribute level.
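To make these mechanics concrete, the following is a minimal Python sketch, not the estimation code used in this study (which employed a random-effects logit to account for repeated choices by the same respondent). It simulates binary beach choices, estimates a simple conditional logit by maximum likelihood, and recovers an MRS as the ratio of coefficients. All data, attribute names, and parameter values are hypothetical.

```python
# A minimal sketch (hypothetical data): binary conditional logit and MRS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000                                     # hypothetical number of observed choices

# Hypothetical attributes for two beach alternatives per choice set:
# column 0 = one-way distance in miles, column 1 = "no algae in water" dummy.
X_A = np.column_stack([rng.uniform(0, 200, n), rng.integers(0, 2, n)])
X_B = np.column_stack([rng.uniform(0, 200, n), rng.integers(0, 2, n)])

beta_true = np.array([-0.013, 2.0])          # illustrative parameters (scale normalized to 1)
dv = (X_A - X_B) @ beta_true                 # only attribute differences matter (equation 16)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-dv))).astype(float)  # 1 if Beach A chosen

def neg_loglik(beta):
    # Equation (17) with two alternatives reduces to a logistic in v_A - v_B.
    p_a = 1 / (1 + np.exp(-(X_A - X_B) @ beta))
    p_a = np.clip(p_a, 1e-12, 1 - 1e-12)     # guard against log(0) during optimization
    return -(y * np.log(p_a) + (1 - y) * np.log(1 - p_a)).sum()

beta_hat = minimize(neg_loglik, x0=np.zeros(2), method="BFGS").x
mrs = -beta_hat[1] / beta_hat[0]             # miles a respondent would travel for the change
print(beta_hat, mrs)                         # mrs should be near -2.0 / -0.013, about 154 miles
```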
The MRS assumes there is a change in an attribute at a beach and estimates the additional miles further from home (if the MRS is positive) or the number of miles closer to home (if the MRS is negative) that the beach would have to be in order for an individual to be indifferent between the beach before and after the change in environmental quality. Equation (18) illustrates how to calculate the MRS, while equation (19) shows an example MRS calculation, specific to our application, for moderate algae in the water in terms of driving distance:

\( \mathrm{MRS}_{k} = -\frac{\beta_{k}}{\beta_{distance}} \)   (18)

\( \mathrm{MRS}_{moderate\ algae\ in\ water} = -\frac{\beta_{moderate\ algae\ in\ water}}{\beta_{distance}} \)   (19)

For example, an MRS of 75 for a change in algae in the water from high to moderate (high being the baseline level) is interpreted to mean that, all else equal, a respondent would be indifferent between travelling some distance to a beach with high amounts of algae in the water, or travelling that same distance plus an additional 75 miles to a beach with identical characteristics except for moderate amounts of algae in the water instead of high amounts. The above MRS equations implicitly include the scale factor, \( \mu \), which is inseparable from the parameter estimates and specific to the logit model from which the parameters were estimated. The scale factor, present in both the numerator and denominator of the MRS, is cancelled out through this calculation (see equation 6). This result means that MRSs estimated from different data sets can be compared without being confounded by differences in scale factors (Swait and Louviere 1993).

Previous Research: Sensitivity to Design Factors

Data gathered in choice experiments are sensitive to many factors, including the respondent's informational capacities and abilities (Ford et al. 1989), but also to aspects at the discretion of the researcher, such as task and informational complexity (Heiner 1983; Swait and Adamowicz 2001). Recently there has been a growing literature on the sensitivity of choice experiment data to the elements of choice experiments that are left to the discretion of the researcher (e.g. Hensher 2006b, Rose et al. 2009). Apart from their own original contributions, Caussade et al. (2005) provide a detailed summary of studies that use split-sample experiments to test the effects of varying different design elements within a choice experiment. Hensher (2001) varied the number of choice sets each respondent faced in a choice experiment pertaining to travel choices in New Zealand, finding no relationship between the number of choice sets and the valuation information elicited. Rolfe and Bennett (2009) conducted a split-sample choice experiment on the development of water resources in Australia using treatments where respondents viewed choice sets containing either two or three alternatives, finding differences in parameter sign and significance between the models. Serial non-response in the case of the two-alternative treatment led the researchers to question whether the referendum approach prevents respondents from making tradeoffs at all, leading them instead to avoid answering such a format. DeShazo and Fermo (2002) find that increasing the number of alternatives posed to respondents in each choice set first decreases model variance; then, as the number of alternatives continues to rise, the model variance begins to increase. Racevskis and Lupi (2008) found statistically significant differences between Michigan residents' preferences for forest management programs gathered from individuals facing only one choice set versus those who faced four choice sets.
Other studies, such as Hoehn, Lupi, and Kaplowitz (2010), have investigated how different information presentation formats affect preferences measured in choice experiments. They found that presenting choice alternative information in a text format elicited different preferences than presenting identical information in a table, with preference parameters from text formats having higher variances than those estimated from table formats. A group of studies has used a "Design of Designs" format to generate an experimental design in which the design elements themselves are varied within a split-sample choice experiment, so as to isolate the effects of variation in those elements on the resulting data. Caussade et al. (2005), Hensher (2006b), and Rose et al. (2009) employ this strategy, varying each of five design elements (number of attributes, number of attribute levels, range of attribute levels, number of alternatives within choice sets, and number of choice sets faced by respondents) across the sample such that they can isolate the impact each element has on the choice data. Recent work by Hensher (2006a) studies respondents' attribute processing strategies and how those interact with varying design dimensions to influence perceived task complexity (and therefore affect possible biases). That research is unique in that it considers not only how the "amount" of information in a choice experiment can affect the outcome, but also the perceived relevance of the information. While our research does not extend into the impact of attribute processing, it does fill a gap left in the previous literature on the influence that including (or excluding) information communicated by a label has on the tradeoffs observed within a choice experiment.

Previous Research: Effects of Labeling

Much of the research investigating the effect of labeling on the results of choice experiments uses labels to represent a means of program provision. Rolfe and Windle (2011) test the effects of labels indicating the method by which a proposed coral reef protection program would be achieved. The authors conducted a split-sample survey of households in Brisbane, Australia, eliciting households' willingness to pay to protect the Great Barrier Reef. One version of the choice experiment featured generic alternatives describing protection policies and outcomes, and the other featured policy labels describing how the policy would be provided (e.g. improve water quality, reduce greenhouse gas emissions, or increase conservation zones). They found WTP to be higher among those responding to the labeled version, perhaps indicating that respondents were willing to pay more for Great Barrier Reef protection when they knew the method of protection. However, a Poe test (Poe et al. 1997) revealed no statistically significant difference in the WTP from the labeled and unlabeled experiments. Similarly, Czajkowski and Hanley (2009) conduct a study of Polish households' willingness to pay to protect local biodiversity and test for differences between the results of a generic choice experiment and a choice experiment where alternatives are labeled as being provided through the expansion of an existing national park or by "other" means. The authors find that by controlling for the effects of service provision through labeling choice experiment alternatives, the resulting WTP measures show greater sensitivity to scope, a desirable trait.
Blamey et al. (2000) implemented a choice experiment regarding preferences for forest preservation, sampling households in Brisbane, Australia. A portion of the respondents viewed a choice experiment with generic policy alternatives, while the remaining respondents viewed a choice experiment with labeled policy alternatives, where the labels named the minimum percentage of scarce forest that was to be left preserved. The authors found that, on average, the WTP for forest preservation was four times higher in the generic case than in the labeled case. The authors also found that a significant influencer of respondents' policy choices was the percentage of the forest that was to be preserved, as named in the label. While the results of Blamey et al. do suggest anchoring around the label, the effect of introducing the label to the choice experiment may be confounded by the quantitative nature of the label itself. Recently, Brannlund and Persson (2012) conducted a choice experiment eliciting information on Swedish citizens' preferences for CO2 reduction programs. Respondents were shown two alternatives, each resulting in the same amount of total CO2 reduction. Half of the respondents saw unlabeled alternatives, while the other half were shown alternatives labeled as being funded through private (i.e. individual) taxes or by "other" means. Models estimated from the unlabeled and labeled choice experiments revealed similar results in terms of statistical significance and sign. The model estimated from the labeled choice experiment included an alternative-specific constant which indicated respondents had a significantly lower willingness to pay for CO2 reduction programs funded by taxes than for those funded through other means.

While these studies are able to compare choice experiments with labeled and unlabeled alternatives, the use of labels to represent different program funding mechanisms or modes of program provision can be viewed largely as gathering preferences for separate program attributes, which is different from the use of labels as a brand, site name, or other unique identifier capturing preferences for information not explicitly listed in the choice set. Huybers (2005) compares the results of generic and labeled choice experiments pertaining to vacation destination choices where the labels are site names. To the best of this author's knowledge, Huybers's study is the only published study that uses site names as the labels of a choice experiment and compares those results to a generic choice experiment. The implicit prices calculated from models estimated from responses to the two versions of the survey reveal that for seven of the ten attribute levels, the implicit prices were higher in the labeled case. Again using the Poe et al. (1997) test, the author found that fewer than half (four) of the differences in implicit prices were statistically significant at the 10% level or lower.

Application: Great Lakes Beaches

We apply this experiment investigating labeling effects in choice experiments to a 2012 survey of Michigan residents' use of Great Lakes beaches. Michigan's 3,200 miles of Great Lakes coastline provide nearly 600 public beaches, with countless more private beaches, each with varied characteristics (MDEQ 2012a, MDEQ 2012b). Beach characteristics range from short beaches adjacent to large population centers to expansive sand dunes in rural areas. Nearly half of Michigan's residents visit Great Lakes beaches each year (Lupi et al. 2012).
Despite their popularity and importance, the Great Lakes constantly come under threat (IJC 2011a). The health of natural systems within the Great Lakes, as well as the associated human uses, is greatly affected by human development of the Great Lakes shoreline, yet pressure persists to build new homes and to locate new industry within Great Lakes coastal regions. Harmful bacteria and pathogens in the waters of Great Lakes public beaches pose public health risks (Rose et al. 1999) and threaten the public's enjoyment of Great Lakes beaches. E. coli enters the Great Lakes through natural processes and manmade sources that are each difficult to avoid or mitigate (U.S. Policy Committee 2002), resulting in persistent health risks at Great Lakes beaches. However, state budgets currently allow only 262 of Michigan's approximately 600 Great Lakes public beaches to be tested at least monthly for E. coli (MDEQ 2012a), leaving nearly 60% of public beaches on the Great Lakes untested. In 2011, 101 of the 260 Great Lakes beaches monitored for bacteria, or nearly 39%, reported an exceedance of safe levels of bacteria at least once during the peak swimming season (ibid). Recently, large blooms of nuisance algae such as Cladophora have been occurring with greater frequency than in years past (Auer et al. 2010, Higgins et al. 2008). The blooms can eventually be swept onto beaches and begin to rot, spoiling the aesthetics of coastal areas and generating a powerful, unpleasant odor (Verhougstraete et al. 2010). Studies have linked the rise in nuisance algae to the success of the invasive zebra mussel (Dreissena polymorpha) (Auer et al. 2010, Higgins et al. 2008, Wilson et al. 2006) and have also shown that mats of decaying algae can harbor bacteria and pathogens (Vanden Heuvel et al. 2010; Verhougstraete et al. 2010; Olapade et al. 2006), causing public health concerns among local officials (Bay County Health Department 2007).

Previous Research: Great Lakes and Beach Valuation

Within the field of environmental and natural resource economics, a number of studies have sought to value changes in environmental conditions on the Great Lakes to better inform decision making (NMI and NOAA 2001). Several studies use models of recreational fishing demand to examine the economic costs and benefits of programs on the Great Lakes (e.g. Lupi et al. 1998, Lupi et al. 2003, Kotchen et al. 2007). Research has also described how changes in air quality (Chattopadhyay 1999) and water quality (Ara et al. 2006, Braden et al. 2004) on the Great Lakes can impact property values. Others, including Whitehead et al. (2009) and Hoehn et al. (2010), examine the economic value of changes in the quality of wetlands near the Great Lakes. The existing literature also provides many examples of studies valuing changes in environmental attributes related to beach recreation on marine beaches. Researchers have examined the value of changes in visitor congestion at beaches (McConnell 1977), water quality (Bockstael, McConnell and Strand 1989, Lew and Larson 2005, Hilger and Hanemann 2006), debris (Smith, Zhang and Palmquist 1997), and beach width and beach re-nourishment (Parsons, Massey and Tomasi 1999, Landry, Keeler and Kriesel 2003, Shivlani, Letson and Theis 2003, Huang, Poor and Zhao 2007). In contrast, the existing literature on Great Lakes ecosystem service valuation includes only one peer-reviewed published article (Murray et al. 2001) that examines the value of Great Lakes beaches, in addition to a few unpublished studies
(e.g. Sohngen, Lichtkoppler and Bielen 1999, Egan and Dwyer 2008, Shaikh 2006, Shaikh 2012). Although there is a growing literature on valuing changes in environmental quality across various Great Lakes resources and an established literature on valuing marine beach recreation, very little is known about the public's values and preferences for different levels of environmental quality at Great Lakes beaches. With this gap in mind, we designed a choice experiment to help provide this information.

Survey Development: Pretests

This study reports on the findings of a choice experiment implemented as part of the Great Lakes beach web-based survey in the spring of 2012. The web survey was developed and its questions were tested in the spring of 2012 using an iterative process guided by the results of 57 one-on-one cognitive interviews (Kaplowitz, Lupi and Hoehn 2004). The 57 interviews were composed of eight interviews conducted among a convenience sample of Michigan State University students at a campus food court, 19 interviews conducted in person among randomly selected adults at local shopping malls, and 30 interviews conducted remotely with respondents recruited from a web survey panel (Survey Sampling International) of adults in the Lower Peninsula of Michigan. The remote interviews were conducted over the phone using an innovative approach not previously documented in the literature on the pretesting of web-based valuation surveys. Once the pretests and initial web-survey debugging were complete, a pilot group of 85 individuals was mailed invitations to the web survey. In the two weeks following the initial invitations, pilot study members were mailed two reminders: a half-sheet postcard and a quarter-sheet postcard, each listing the survey's web address, the individual's password, and information on whom to contact with questions. The pilot study received a total of 22 logins, for a response rate of 25.9%. The pilot survey allowed the research team to simulate the process of actually implementing the survey (e.g. the timing of mailings and receiving data) and to see whether any questions arrived via phone or email from pilot survey respondents.

Survey Development: Input from experts

The text, diagrams, and graphics used to describe the attributes to respondents were developed with the help of state and county health officials, water policy experts, and resource managers from state and federal agencies, including the NOAA Great Lakes Environmental Research Laboratory, Michigan Sea Grant, the EPA Great Lakes Program Office, the Michigan Department of Natural Resources, and county officials who perform water quality tests and conduct beach algal assessments. Parts of the choice experiment needed to convey scientific information in a way that was understandable to members of the general population, who are not assumed to have advanced knowledge of the topics in the survey. The resource experts we consulted were able to ensure that the survey communicated accurate information. Certain attribute descriptions shown to choice experiment participants also contained diagrams and drawings. Figure 2.4 and figure 2.5 show the diagrams that accompanied the definitions of attribute levels for the amount of algae in the water and the amount of algae on the shore, respectively. The use of photos to describe attributes and their levels was considered, but research shows that unless closely controlled, photos can communicate unintended information (Hoehn, Lupi and Kaplowitz 2003).
Even though the diagrams are simplifications of real-world conditions, the experts we consulted in the development of the survey materials, along with pretest participants (i.e. prospective survey respondents), found the diagrams to represent conditions experienced at actual Great Lakes beaches.

Figure 2.4: Diagram of the Great Lakes beach attribute "Amount of algae in the water," original size monitor-dependent. Each level was defined and paired with an illustrative view of the water. None: visitors never come in contact with algae while swimming or wading. Low: visitors rarely come in contact with algae while swimming or wading. Moderate: visitors occasionally come in contact with algae while swimming or wading. High: visitors constantly come in contact with algae while swimming or wading.

Figure 2.5: Diagram of the Great Lakes beach attribute "Amount of algae on the shore," original size monitor-dependent. Each level was defined and paired with an illustrative view of the swimming area shore. None: none of the shore of the swimming area has algae. Low: 1 to 20% of the shore of the swimming area has algae. Moderate: 21 to 50% of the shore of the swimming area has algae. High: more than 50% of the shore of the swimming area has algae.

Survey Sample

Because there is no list of beachgoers to facilitate sampling, a two-step sampling process was used. First, in the summer of 2011, a general population mail survey on participation in leisure and recreation activities was conducted using a sample of 32,230 residents of Michigan's Lower Peninsula, 18 years and older, drawn from the drivers' license list. The mail survey contacts followed a modified version of Dillman's tailored design method (Dillman 2009), and the survey achieved a response rate of 38%. Among the activities asked about in the mail survey was whether or not the respondent had visited a Great Lakes beach in the last year. Second, in the spring of 2012, invitations to the Great Lakes beaches web-based survey were sent to 5,434 qualifying respondents from the mail survey (i.e. those who reported having visited a beach on the Great Lakes in Michigan), again following a modified version of Dillman's tailored design method (Dillman 2009) and achieving a response rate of 59.6%.

Web Survey and Choice Experiment

The choice experiment section of the web survey began with separate pages explaining each of the attributes (see Appendix A for a set of screen captures from the choice experiment portion of the web survey, complete with informational treatments, warm-up tasks, choice sets and follow-up questions). Attribute descriptions were developed with the help of the aforementioned resource experts. Each attribute description was followed by questions engaging the respondent with the information presented about the attributes. Follow-up questions asked about the respondent's personal experiences with and attitudes towards the attributes. Thus, each respondent's path through the choice experiment presented information on the attributes in question in a controlled format, and the follow-up questions allowed respondents to interact with the information on the attributes, a technique that survey research shows increases respondents' comprehension of survey tasks (Kaplowitz et al. 2004, Hoehn and Randall 2002, Schwarz and Sudman 1996). Following the descriptions of the attributes, each respondent was shown three different choice sets. Figures 2.1, 2.2 and 2.3 show example choice sets from each of the three labeling schemes as they were faced by respondents. The choice experiment asked respondents to assume they were taking a trip to a Great Lakes beach, and that the only two alternatives available were the beaches shown in the choice sets.
Respondents were told the alternatives had the different characteristics described in the table but were otherwise the same. Beaches were described using a set of six or seven attributes, depending on whether or not the alternatives were labeled. Respondents were then instructed to review the attributes in the table and select which beach they would visit. When respondents logged onto the website, they were randomly assigned to one of the three labeling groups for the choice experiment. Members of each group saw identical versions of the survey until programming dependent upon the respondent's labeling group showed different versions of the choice sets: labeled (alternatives having various Great Lakes names present), same-labeled (Great Lakes names present but not varied within choice sets), or unlabeled (no Great Lake name present). In total, 1,040 respondents were in the labeled group, 665 were in the same-labeled group, and 1,085 were in the unlabeled group. (One respondent to the "labeled" choice experiment was inadvertently excluded from the analysis due to a write error by the web survey software; the error made it impossible to link the choice set viewed by the respondent to the respondent's answers to the choice experiment until further steps were taken after the analysis was conducted. The unbalanced design occurred due to a programming error: although assignments to treatment groups were random, the error resulted in fewer people being assigned to the "same-labeled" design group. However, the sample sizes are robust, and all statistical tests account for the different sample sizes across treatment groups.)

Attributes and Attribute Levels

Attributes were selected for our study based on their salience and relevance to Great Lakes beach visitors, with a focus on beach attributes that could be tied to environmental management schemes rather than physical characteristics or facilities. The attributes used in the choice experiment were as follows: Great Lake (which Great Lake the beach lies on), bathrooms (the type of bathrooms available at the beach), amount of algae in the water, amount of algae on the shore, length of beach, frequency of bacteria testing, and distance away from home. Table 2.1 shows the attributes and the attribute levels used in this choice experiment.

Table 2.1: Attributes and attribute levels included in the Great Lakes choice experiment
Great Lake: Michigan; Huron; St. Clair; Erie
Bathrooms*: Flushing toilets, cleaned daily; Flushing toilets, cleaned hourly
Algae in the water: None; Low (rarely come in contact with algae); Moderate (sometimes come in contact with algae); High (constantly come in contact with algae)
Algae on the shore: None; Low (1-20% of the shore has algae); Moderate (21-50% of the shore has algae); High (more than 50% of the shore has algae)
Length of beach: 50 yards; 220 yards (1/8 mile); 880 yards (1/2 mile); 1760 yards (1 mile); 5280 yards (3 miles)
Testing water for bacteria: Never; Monthly; Weekly; Daily
Distance from your home: the individual's minimum distance to the selected Great Lake plus 0, 15, 40, 100, or 150 miles
* Attribute was held constant within choice sets to control for preferences for different bathrooms, but no preference or value information was gathered.
For the levels of the Great Lake attribute, we focused on the Great Lakes that are adjacent to the Lower Peninsula, from which we drew our sample. Bathrooms were included in the choice sets but were held constant within choice sets. In pretests, we found bathrooms were a salient attribute when choosing a preferred beach, but facilities at beaches (including playgrounds, picnic tables, and bathrooms) did not fit our study's focus on environmental quality. Instead of including bathrooms as a variable in our choice experiment, we included it as an attribute that would remain constant within choice sets. This kept respondents from making choices based on levels inferred for a missing bathroom attribute from the attributes that were shown (e.g. "Lake Michigan has cleaner bathrooms"). Instead, with the bathrooms attribute held constant, bathrooms would not impact the individual's decision. The attribute levels for algae in the water and algae on the shore were chosen to correspond with data currently being gathered by the EPA's Great Lakes Beach Sanitary Survey (U.S. EPA 2008). By using the same scale as the sanitary survey, we can combine information on which Great Lakes beaches people visited, how often, and what the levels of algae were on the shore and in the water at those beaches (revealed preference data) with the marginal rates of substitution for changes in environmental quality characteristics, including the level of algae on the shore and in the water, to estimate the economic costs or benefits of changes in the levels of algae. Five different values for beach length were used in the design to represent the approximate range of Great Lakes public beaches in Michigan (MDEQ 2009): 50 yards, 220 yards (1/8 mile), 880 yards (1/2 mile), 1760 yards (1 mile), and 5280 yards (3 miles). For modeling purposes, these lengths were handled as categorical variables, allowing for a non-linear relationship between beach length and the respondent's utility. This mimics other beach valuation work that models the natural log of beach length (e.g. Hilger and Hanemann 2006) or a linear length term along with a length-squared term (Lew and Larson 2005) as a means of allowing a non-linear relationship between utility and beach length. We chose the attribute describing the frequency of testing to have the levels daily, weekly, monthly, or never to represent examples of how frequently actual beaches are tested (MDEQ 2012a). As mentioned previously, not every Great Lakes public beach is tested for E. coli. The final attribute, driving distance, contained two components. To make the choice sets more realistic and avoid showing respondents counterfactual attribute levels, the first component was the minimum distance for each individual to reach the Great Lake that was specifically named in the choice alternative (i.e. the minimum distance for a specific respondent to Lake Michigan when Lake Michigan was selected for the choice alternative, or the minimum distance to Lake St. Clair when Lake St. Clair was selected for the choice alternative). Distances were calculated as the minimum distance from the center of the web survey sample member's zip code to one of several beaches along each Great Lake using PC Miler. The second component of distance was one of five levels of "additional miles" that were added, as prescribed by the experimental design, to the minimum distance to create the total distance, as sketched below. This total distance was the only distance presented to respondents.
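The construction of the two-part distance attribute can be illustrated with a short sketch. The zip code, lookup values, and function below are hypothetical stand-ins; the actual minimum distances were computed with PC Miler.

```python
# Hypothetical sketch of the two-part distance attribute: each respondent's
# minimum distance to the named Great Lake plus the design's "additional miles."
ADDITIONAL_MILES = [0, 15, 40, 100, 150]   # design levels from Table 2.1

# Hypothetical lookup of minimum road miles from a zip-code centroid to each
# lake (the study computed these with PC Miler; the values here are made up).
min_dist_by_zip = {
    ("48823", "Lake Michigan"): 95,
    ("48823", "Lake Huron"): 80,
    ("48823", "Lake St. Clair"): 85,
    ("48823", "Lake Erie"): 75,
}

def total_distance(zip_code: str, lake: str, level_index: int) -> int:
    """Distance shown to the respondent: minimum distance to the named
    Great Lake plus the experimentally assigned additional miles."""
    return min_dist_by_zip[(zip_code, lake)] + ADDITIONAL_MILES[level_index]

print(total_distance("48823", "Lake Michigan", 2))  # 95 + 40 = 135 miles
```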
The levels of the additional distance variable were designed to allow distance to be estimated as a continuous variable in our analysis. The distance attribute needed to control for the minimum distance from a respondent's home to a specific Great Lake because there is wide variation in such distances across our sample: certain sample members live adjacent to some Great Lakes (i.e. a minimum distance of zero to the nearest Great Lake), while the same individual may live 150 or more miles from a different Great Lake. By controlling for the minimum distance from a respondent's home to the Great Lake within the specific choice alternative, we increase the plausibility and realism of the choice alternative and avoid showing the respondent counterfactual attribute levels, a concern raised previously in the literature regarding labeled choice experiments (Huybers 2005, Carson et al. 1994). The experimental design for each of the three labeling treatments of the choice experiment was generated using NGene software (Choice Metrics 2011). The experimental design was derived in such a way as to allow researchers to identify the impact each attribute level has on the probability of a respondent selecting a given alternative (ibid, Johnson et al. 2006). Designs were optimized for "WTP" or F-efficiency, whereby designs are produced such that the variance of the ratio of user-specified model coefficients is minimized. Designs were also generated taking into account the expected signs of the attribute parameters, so as to lessen the presence of clearly dominated alternatives and further increase overall design efficiency. The number of choice pairs (rows) generated for each of the three design types was as follows: 126 for "labeled," 102 for "same-labeled," and 96 for "unlabeled." Since each respondent viewed three choice sets, this translated into the following numbers of unique versions of the choice experiment (i.e. unique groups of three choice sets): 42 for "labeled," 34 for "same-labeled," and 32 for "unlabeled."

Results

We begin by estimating a random-effects logit model for each choice experiment type. Results are shown in Table 2.2. The "labeled" model has a Wald chi-squared statistic of 705 with a corresponding p-value less than 0.001 and a McFadden's pseudo R-squared of 0.302. The fit of the model, as evidenced by its predictive capability, appears well balanced, as it correctly predicted 77% of Beach A choices and 76% of Beach B choices.

Table 2.2: Results of random-effects logit models estimating the determinants of Great Lakes beach choice for the "Labeled," "Same-Labeled," and "Unlabeled" choice experiments (coefficients; p-values in parentheses where p >= 0.001)

Variable                          Labeled            Same-Labeled       Unlabeled
One-way distance                  -0.008 **          -0.013 **          -0.013 **
Algae on the shore: None           1.474 **           1.805 **           1.742 **
Algae on the shore: Low            1.225 **           1.616 **           1.662 **
Algae on the shore: Moderate       0.763 **           0.891 **           0.998 **
Algae in the water: None           1.768 **           2.095 **           2.040 **
Algae in the water: Low            1.473 **           1.899 **           1.907 **
Algae in the water: Moderate       1.111 **           1.323 **           1.351 **
Length: 50 yards                  -0.780 **          -0.816 **          -1.060 **
Length: 220 yards                 -0.421 **          -0.491 **          -0.537 **
Length: 880 yards                 -0.246 ^ (0.012)   -0.179 (0.152)     -0.316 * (0.002)
Length: 1760 yards                 0.069 (0.489)     -0.081 (0.530)     -0.142 (0.156)
Testing: None                     -1.496 **          -1.669 **          -1.667 **
Testing: Monthly                  -0.332 **          -0.534 **          -0.624 **
Testing: Weekly                   -0.381 **          -0.072 (0.557)     -0.142 (0.118)
Great Lake: Michigan               1.194 **
Great Lake: Huron                  0.498 **
Great Lake: St. Clair              0.004 (0.966)

McFadden's pseudo R-squared        0.302              0.269              0.317
Wald chi-squared (df)              705 (17)           314 (14)           546 (14)
Prob > chi-squared                 <0.001             <0.001             <0.001
Beach A predicted correctly (%)    77%                78%                77%
Beach B predicted correctly (%)    76%                74%                78%

** significant at p<0.001; * significant at p<0.01; ^ significant at p<0.05
As expected, the parameter on driving distance is statistically significant and negative. The model results for the "labeled" choice experiment reveal that Great Lakes beach goers prefer less algae on the shore and lower amounts of algae in the water, with algae in the water causing more of a nuisance than algae on the shore. We find that they also prefer longer beaches to shorter beaches, with no significant difference in preferences between three-mile-long beaches (the baseline) and one-mile-long beaches. Note that this finding is consistent with revealed preference studies that show a significant and nonlinear relationship between beach length and site choice (e.g. the natural log of length in Parsons et al. (2009), or length plus length squared in Lew and Larson (2005), where the marginal benefits of length decrease as beaches become longer). In the labeled model, we also see that respondents prefer beaches that are tested for bacteria over beaches that are not tested at all. The results also show that, all else equal, Great Lakes beach goers prefer beaches on Lake Michigan and Lake Huron to beaches on Lake Erie (the baseline), and that beach goers are indifferent between beaches on Lake St. Clair and Lake Erie.

The model for the "same-labeled" choice experiment was estimated with a Wald chi-squared statistic of 314, corresponding to a p-value less than 0.001, and a McFadden's pseudo R-squared of 0.269, while correctly predicting 78% of Beach A choices and 74% of Beach B choices. The "same-labeled" choice experiment results are largely similar to the "labeled" model in magnitude and significance, with a significant negative parameter on distance. In the case of same-labeled alternatives, we find that, all else equal, respondents are indifferent between beaches that are a half mile, one mile, or three miles long. Respondents shown "same-labeled" choice sets were also indifferent, all other factors equal, between beaches tested daily or weekly for bacteria.

The model estimated using observations from the "unlabeled" choice experiment has a Wald chi-squared statistic of 546 with a p-value less than 0.001 and a McFadden's pseudo R-squared of 0.317, and it correctly predicted 77% of Beach A choices and 78% of Beach B choices.
The significance and magnitudes of the estimated parameters for Great Lakes beach characteristics in the "unlabeled" choice experiment are consistent with the "labeled" and "same-labeled" models. As with the other models, driving distance is estimated to have a significant and negative coefficient. Unlike the "same-labeled" model, the "unlabeled" model shows a preference for one-mile and three-mile-long beaches over half-mile-long beaches. Similar to the "same-labeled" model (but in contrast to the "labeled" model), the unlabeled model shows that Great Lakes beach goers, all else equal, do not have a preference for beaches tested for bacteria daily versus weekly.

Returning to our first research question regarding the similarity of model parameters estimated from each of the three labeling schemes, we estimate models pooling different combinations of the labeling schemes in order to calculate log-likelihood ratio test statistics. The log-likelihood ratio tests the null hypothesis that the parameters estimated in the individual models are equal to one another (equation 10). A likelihood ratio test comparing the "labeled" and "same-labeled" models with 14 degrees of freedom yielded a likelihood ratio statistic of 35.457 (p = 0.0013), leading us to reject the null hypothesis that the parameters estimated for Great Lakes beach attributes in the models from the "labeled" and "same-labeled" experiments are the same. A likelihood ratio test comparing the "labeled" and "unlabeled" models with 14 degrees of freedom yielded a statistic of 50.21 (p < 0.001), again leading us to reject the null hypothesis that the "labeled" and "unlabeled" choice experiments yielded identical parameters. A likelihood ratio test comparing the "same-labeled" and "unlabeled" models with 14 degrees of freedom yielded a statistic of 6.034 (p = 0.96); thus we fail to reject the null hypothesis that the "same-labeled" and "unlabeled" choice experiments in the Great Lakes web survey have models with identical parameters.

One shortcoming of these likelihood ratio tests, however, is that they fail to account for the differences in scale factors that exist between logit models estimated from different data sets (Swait and Louviere 1993). As mentioned previously, the presence of different scale factors confounds comparisons of logit models estimated from different data sets. To control for differences in scale, we can compare the MRSs calculated from the models estimated for each labeling scheme. Table 2.3 shows the welfare measures estimated for marginal changes in Great Lakes beach attributes relative to the baseline attribute level. All MRS estimates are in terms of the number of additional miles a respondent would be willing to travel to reach a beach with the new attribute level versus an identical beach with the former (baseline) level and still be no better or worse off than before the change in attribute. For a desirable change in an attribute, the MRS is positive and represents a willingness to travel further to reach a beach under the new conditions. For an undesirable change in an attribute, the MRS is negative and represents a decrease in willingness to travel from home to experience the beach under the new conditions.
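As a concrete illustration of the likelihood ratio construction used above, the following sketch computes the test statistic and p-value from the log-likelihoods of two separately estimated models and a pooled (restricted) model. The log-likelihood values shown are hypothetical, chosen only so that the statistic is of the same order as the "same-labeled" versus "unlabeled" comparison.

```python
# Sketch of the likelihood ratio test of equation (10), with hypothetical
# log-likelihood values (the study's actual values are not reproduced here).
from scipy.stats import chi2

ll_same, ll_unlabeled = -1050.0, -1700.0   # hypothetical log-likelihoods, separate models
ll_pooled = -2753.0                        # hypothetical log-likelihood, parameters constrained equal
lr = 2 * ((ll_same + ll_unlabeled) - ll_pooled)   # likelihood ratio statistic (here 6.0)
df = 14                                    # number of jointly restricted parameters
p_value = chi2.sf(lr, df)
print(lr, p_value)                         # a large p-value means we fail to reject equality
```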
Table 2.3 includes the 95% confidence intervals around each mean MRS, calculated using the Krinsky-Robb method with 10,000 random draws (Krinsky and Robb 1986).

Table 2.3: Mean and Krinsky-Robb 95% confidence intervals of marginal rates of substitution (in miles) for marginal changes in Great Lakes beach characteristics across the "Labeled," "Same-Labeled," and "Unlabeled" choice experiments (lower bound, mean, upper bound)

Attribute Level                 Labeled              Same-Labeled         Unlabeled
Algae on the shore: None        (154, 180, 210)      (122, 144, 169)      (122, 139, 159)
Algae on the shore: Low         (125, 150, 178)      (108, 129, 154)      (115, 133, 152)
Algae on the shore: Moderate    (70, 93, 119)        (52, 71, 92)         (65, 80, 96)
Algae in the water: None        (188, 216, 248)      (143, 167, 196)      (144, 163, 184)
Algae in the water: Low         (154, 180, 209)      (128, 151, 179)      (134, 152, 174)
Algae in the water: Moderate    (111, 136, 162)      (84, 105, 130)       (91, 108, 126)
Length: 50 yards                (-121, -95, -71)     (-87, -65, -45)      (-103, -85, -68)
Length: 220 yards               (-76, -51, -29)      (-59, -39, -19)      (-60, -43, -27)
Length: 880 yards               (-54, -30, -7)       (-35, -14, 6)        (-41, -25, -9)
Length: 1760 yards              (-15, 8, 32)         (-27, -6, 14)        (-27, -11, 4)
Testing: None                   (-213, -183, -157)   (-160, -133, -110)   (-152, -133, -116)
Testing: Monthly                (-63, -41, -18)      (-62, -43, -24)      (-65, -50, -34)
Testing: Weekly                 (-70, -47, -25)      (-25, -6, 13)        (-25, -11, 3)
Great Lake: Michigan            (121, 146, 173)
Great Lake: Huron               (38, 61, 85)
Great Lake: St. Clair           (-21, 0, 23)

To interpret the marginal rates of substitution, these would need to be translated into dollar values and placed into a realistic choice set for beaches, that is, a choice set that includes a full range of substitute sites. Instead, these welfare measures assume that each respondent would actually take a trip to the beach specified in the choice set, and not to some other beach. As such, these measures are an upper bound on the actual economic value of these marginal changes in beach attributes, since respondents were not shown a full range of substitutes, nor were they given a proxy for this such as a "do not go" or "go to some other beach" option. In future work we will merge the choice experiment preference estimates with a statewide recreation demand model of beach use in Michigan that fully accounts for all substitution possibilities.

The results show some variation among the mean MRSs across models, with the "labeled" choice experiment tending to have larger absolute values of MRS than the "same-labeled" or "unlabeled" choice experiments. Despite this variation in mean MRSs, we see that the welfare measures are consistently ordered, with intuitively "better" attribute levels (e.g. lower algae, more frequent testing) having higher MRSs than "worse" levels (e.g. decreases in beach length, or beaches with no testing for bacteria at all) across all of the labeling schemes, except in several cases where respondents were shown to be indifferent between attribute levels (e.g. the results for "weekly testing" versus "daily testing" in the "unlabeled" and "same-labeled" models). Figure 2.6, Figure 2.7, and Figure 2.8 each graph the lower and upper bounds of the Krinsky-Robb 95% confidence intervals as well as the mean MRSs for each of the different labeling schemes.
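The Krinsky-Robb procedure just described can be sketched as follows. The point estimates and covariance matrix below are hypothetical placeholders rather than the study's estimates.

```python
# Sketch of the Krinsky-Robb method: draw parameter vectors from the estimated
# asymptotic distribution, compute the MRS for each draw, take percentiles.
import numpy as np

rng = np.random.default_rng(1)
beta_hat = np.array([-0.013, 1.742])       # [distance, algae on shore: none] (illustrative)
vcov = np.array([[1.0e-6, 0.0],            # hypothetical covariance of the estimates
                 [0.0,    1.0e-2]])
draws = rng.multivariate_normal(beta_hat, vcov, size=10_000)
mrs_draws = -draws[:, 1] / draws[:, 0]     # MRS in miles for each parameter draw
lo, hi = np.percentile(mrs_draws, [2.5, 97.5])
print(mrs_draws.mean(), (lo, hi))          # mean MRS and its 95% confidence interval
```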
Figure 2.6: Mean and Krinsky-Robb 95% confidence intervals of MRS in terms of miles for "amount of algae on the shore," relative to the baseline level "high."
Figure 2.7: Mean and Krinsky-Robb 95% confidence intervals of MRS in terms of miles for "amount of algae in the water," relative to the baseline level "high."
Figure 2.8: Mean and Krinsky-Robb 95% confidence intervals of MRS in terms of miles for "frequency of testing for bacteria," relative to the baseline level "tested daily."

The figures reinforce the notion that, across the three labeling schemes, the relative scale and ordering of the MRSs are highly similar. The figures also illustrate the significant overlap between confidence intervals, and show that in each case the confidence intervals around the unlabeled MRSs are the smallest. Using the estimated MRSs from the Krinsky-Robb iterations, we return to our second research question (equation 11) regarding the similarity of MRSs estimated across different labeling schemes, and we test for the presence of statistically significant differences between the MRSs estimated from the different choice experiments. Following Poe et al. (1997), and as applied by Blamey et al. (2000) and Huybers (2005), we pair the MRS estimates from the Krinsky-Robb iterations and take the difference of the paired estimates. We then calculate the proportion of the differences that take a hypothesized sign; following Efron and Tibshirani (1993), this proportion is an approximation for a one-sided significance test. The results of these one-sided significance approximations are shown in Table 2.4. For simplicity, all differences are tested against the hypothesis that the choice model listed first in the pairing has a higher MRS for any given change in attribute level than the choice model listed second. For example, in the first column, one-sided significance is calculated for the hypothesis that the MRSs calculated from the labeled choice experiment are higher than the MRSs calculated from the unlabeled choice experiment. In the second column, we test the one-sided significance of the hypothesis that the MRSs calculated from the labeled choice experiment are higher than the MRSs calculated from the same-labeled choice experiment. It follows that the third column estimates the one-sided significance of the hypothesis that the MRSs elicited by the same-labeled choice experiment are higher than the MRSs elicited by the unlabeled choice experiment. Given these hypotheses, a value of 0.95 or higher indicates an approximate one-sided significance of 0.05 (or lower) in support of the hypothesis that the MRSs in the first choice experiment are higher than in the other. Similarly, a value of 0.05 or lower indicates an approximate one-sided significance of 0.05 (or lower) in support of the converse hypothesis that the MRSs estimated by the choice experiment in question are actually lower than in the specified model of comparison. The results of these approximate one-sided significance estimates show no statistically significant differences between the MRSs calculated from the same-labeled and unlabeled choice experiments.
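A minimal sketch of this one-sided significance approximation, using hypothetical Krinsky-Robb draws in place of the study's actual draws:

```python
# Sketch of the Poe et al. (1997) approximation: pair Krinsky-Robb MRS draws
# from two models, difference them, and report the proportion of positive
# differences. The draws below are hypothetical (normal) placeholders.
import numpy as np

rng = np.random.default_rng(2)
mrs_labeled = rng.normal(180, 15, size=10_000)       # hypothetical draws, labeled model
mrs_unlabeled = rng.normal(139, 10, size=10_000)     # hypothetical draws, unlabeled model
diff = mrs_labeled - rng.permutation(mrs_unlabeled)  # random pairing of independent draws
prop_positive = np.mean(diff > 0)
print(prop_positive)   # values near 1 (or 0) indicate a significant difference
```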
The results of these one-sided significance approximations are shown in Table 2.4. For simplicity, all differences are tested against the hypothesis that the choice model listed first in the pairing has a higher MRS for any given change in attribute level than the choice model listed second. For example, in the first column, one-sided significance is calculated for the hypothesis that the MRSs from the labeled choice experiment are higher than the MRSs from the unlabeled choice experiment. In the second column, we test the hypothesis that the MRSs from the labeled choice experiment are higher than the MRSs from the same-labeled choice experiment. It follows that the third column tests the hypothesis that the MRSs elicited by the same-labeled choice experiment are higher than the MRSs elicited by the unlabeled choice experiment. Given these hypotheses, a value of 0.95 or higher indicates an approximate one-sided significance of 0.05 (or lower) in support of the hypothesis that the MRSs in the first-listed choice experiment are higher than those of the other. Similarly, a value of 0.05 or lower indicates an approximate one-sided significance of 0.05 (or lower) in support of the converse hypothesis that the MRSs from the first-listed choice experiment are actually lower than those of the specified comparison model.

The results of these approximate one-sided significance estimates show no statistically significant differences between MRSs calculated from the same-labeled and unlabeled choice experiments. When comparing MRSs from the labeled and same-labeled experiments, as well as from the labeled and unlabeled experiments, we find that fewer than half of the MRSs have statistically significant differences (5 of 13 MRSs from the labeled choice experiment differ from the unlabeled choice experiment; 6 of 13 MRSs from the labeled choice experiment differ from the same-labeled choice experiment; 0 of 13 MRSs from the unlabeled choice experiment differ from the same-labeled choice experiment).

Table 2.4: Results of approximate one-sided significance of differences between mean MRSs: "Labeled and Unlabeled," "Labeled and Same-Labeled," and "Same and Unlabeled" choice experiments. Entries are the proportion of paired differences greater than zero.

Attribute Level                Labeled vs     Labeled vs       Same vs
                               Unlabeled      Same-Labeled     Unlabeled
Algae on the shore: None       0.9924 **      0.9755 **        0.6231
Algae on the shore: Low        0.8530         0.8821           0.3959
Algae on the shore: Moderate   0.8234         0.9242           0.2506
Algae in the water: None       0.9982 **      0.9895 *         0.5947
Algae in the water: Low        0.9493         0.9365           0.4733
Algae in the water: Moderate   0.9622 *       0.9592 *         0.4238
Length: 50 yards               0.2405         0.0322 ^         0.9227
Length: 220 yards              0.2739         0.2135           0.6128
Length: 880 yards              0.3614         0.1534           0.7987
Length: 1760 yards             0.9188         0.8297           0.6505
Testing: None                  0.0009 ^       0.0044 ^         0.5059
Testing: Monthly               0.7451         0.5549           0.7144
Testing: Weekly                0.0036 ^       0.0030 ^         0.6739

* Approximate one-sided significance between p = 0.01 and p = 0.05 that MRS is greater in the base case than in the case the model is compared to.
** Approximate one-sided significance at p < 0.01 that MRS is greater in the base case than in the case the model is compared to.
^ Approximate one-sided significance at p < 0.01 that MRS is less in the base case than in the case the model is compared to.

Only for certain levels of certain attributes are differences in MRSs found to be statistically significant. We find that respondents in the labeled choice experiment exhibited higher MRSs for beaches with no algae on the shore and no algae in the water than respondents did in the unlabeled or same-labeled experiments. This could indicate a higher willingness to travel to enjoy a beach with no algae on the shore (or in the water) when the respondent knows which lake the beach lies on. We also see that the MRSs for no bacteria testing and for weekly bacteria testing are lower for respondents to the labeled choice experiment than for those in the unlabeled or same-labeled experiments. This may indicate that respondents' preferences for levels of bacteria testing at Great Lakes beaches depend on knowing which Great Lake the beach lies on: respondents may rely on preconceptions of water quality at the lakes when determining their willingness to travel to enjoy beaches tested for bacteria at different frequencies. Nevertheless, the results all show that, no matter the labeling scheme, some testing is preferred to no testing at all.

Conclusions

This paper employs a split-sample Web survey to compare the preference information gathered from three different labeling regimes. Reviewing the random-effects logit model outputs, we see that the estimated model parameters from the three labeling schemes are all similar in sign, statistical significance, and magnitude. To address our first research question regarding how model parameters would compare across labeling schemes, we performed Log-Likelihood Ratio tests.
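For reference, the mechanics of such a test, which compares a pooled model against separate models fit to each treatment's sample, can be sketched as follows; the log-likelihood values and the number of restricted parameters are hypothetical rather than our estimates, and this simple form does not adjust for scale differences across samples.

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods: a model pooling two treatments and
# separate models fit to each treatment's sample.
ll_pooled = -2100.0
ll_a, ll_b = -1020.0, -1065.0

# Pooling restricts the two parameter vectors to be equal; the number
# of restrictions equals the parameters per model (hypothetical here).
k_restrictions = 13

lr_stat = -2.0 * (ll_pooled - (ll_a + ll_b))
p_value = chi2.sf(lr_stat, df=k_restrictions)
print(f"LR statistic: {lr_stat:.1f}, p-value: {p_value:.4f}")
# A small p-value rejects the null hypothesis that the parameters are
# identical across the two treatments.
```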
The results from the Log-Likelihood Ratio tests show there is no statistically significant difference in parameters estimated from the "same-labeled" and "unlabeled" choice experiments. Further, the Log-Likelihood Ratio tests suggest that there are statistically significant differences between the preference information gathered from the "labeled" and the "same-labeled" or "unlabeled" choice experiments (i.e., in those cases we reject the null hypothesis that the parameters estimated in the models are identical). While useful, simply comparing the parameters does not account for scale factors unique to each model. To account for differences in scale factors, and to help answer our second research question regarding the MRSs estimated across models, we calculated the mean and 95% confidence intervals around the MRSs using the Krinsky-Robb method. The mean MRSs elicited from each choice experiment reinforce the notion that the preference information is similar in order and magnitude across the different labeling schemes. However, Poe tests of the pairwise comparisons of individual attribute MRSs show significant differences for some of the MRSs from the labeled model. Even though some MRSs are significantly different in the labeled model, the general pattern and signs of the data are similar. In each labeling scheme, respondents preferred beaches closer to home. The signs and significance of estimated parameters across all three models are identical in all but a few cases. Regardless of labeling scheme, respondents preferred less algae on the shore and less algae in the water compared to higher amounts. In each labeling scheme and at each attribute level, algae in the water was more of a nuisance than algae on the shore. Respondents across the three labeling schemes also preferred longer beaches to shorter beaches, as well as beaches tested for bacteria at least monthly to beaches that are not tested at all.

This study's results show that despite some statistical differences in model parameters (evidenced by the LLR test results and by some of the MRSs based on the Poe tests), the preference information gathered across different labeling schemes is qualitatively similar: the ranking and relative magnitudes of respondents' preferences are consistent across the different labeling treatments. The similarities in preferences observed in this study run counter to concerns raised by previous research about respondents relying solely on labels when answering labeled choice experiment tasks (Blamey et al. 2000). The preference information gathered in our study could show promise for benefit transfer because, in a general sense, the values do not depend on the lake names. That the preferences do not substantially depend on the lake name suggests these results may exhibit a degree of consistency across study and policy site contexts. Had the preferences gathered from the different labeling regimes shown significant differences in rank or magnitude, the values would appear to be highly site-specific, which would have raised concerns about the transferability of the values. Instead, the results show that allowing site names to influence preferences for other characteristics did not produce results exhibiting any economically consequential differences from those obtained when site names were not present.
Although our results are favorable regarding the similarity and generalizability of the MRSs from our study, before we can confidently conclude that the names (i.e., labels) do not matter for benefit transfer, we would recommend additional original research comparing valuation information gathered under named (labeled) and unnamed regimes. In particular, gaining further insight into cases where labels do and do not affect values would be especially desirable. Nevertheless, the finding that there are no economically relevant differences between the tradeoffs calculated using data from the three different labeling schemes used here may have ramifications for the practice of benefit transfer: that the values elicited in the labeled setting are consistent with those elicited in the unlabeled setting shows that these results are not Great Lake dependent. In general, the similarities across labeling schemes should provide a firm basis for the application of such economic information to a policy setting. The preference information gathered here is a much needed first step toward understanding how the public's values and preferences can be factored into the management of the Great Lakes.

APPENDICES

APPENDIX A

Image of screen display from Great Lakes Beaches Web-survey, choice experiment and follow-up portion, original size monitor-dependent

This appendix replicates the information that was displayed in the Great Lakes Beaches Web survey's choice experiment and follow-up portions. The content was modified to meet the requirements of the Michigan State University formatting guide for submission of Master's Theses and Doctoral Dissertations. The figure begins with the border that appeared on the left margin of each webpage viewed throughout the survey and then continues with the content of the survey.

Figure A.1: Image of screen display from Great Lakes Beaches Web-survey, choice experiment and follow-up portion, original size monitor-dependent

Figure A.1 (Cont'd)

Great Lakes Beach Characteristics

This section of the survey is about characteristics of Great Lakes public beaches. There are more than 600 public beaches on the Great Lakes in Michigan and many more private beaches. For this survey, the Great Lakes in Michigan include:
• Lake Huron
• Lake Superior
• Lake Michigan
• Lake Erie
• Lake St. Clair and connecting waters (like the Detroit River, and the St. Clair River)

The characteristics of Great Lakes public beaches vary widely. We would like to know if some characteristics are more important than others when you choose to visit Great Lakes public beaches.

Next / Save and Return Later

Figure A.1 (Cont'd)

Length of Great Lakes public beaches

The length of a beach:
• is the distance of shoreline along the Great Lake within the boundaries of the park or managed area where the beach is located
• includes the sandy areas, plus any rocky, muddy, grassy, wetland, and forest areas along the shore
• does not include beach or shoreline outside the boundaries of the park or managed area

Great Lakes public beaches in Michigan vary in length from 50 yards to over 14,000 yards (over 8 miles). The average Great Lakes public beach length is 2,600 yards (1.5 miles).

[Diagram: a beach's length runs along the shoreline between the park boundaries; labeled areas include the land, the water, and the areas outside the park boundary]

Figure A.1 (Cont'd)

Q1. What is the length of the Great Lakes beach you most recently visited?
Less than 100 yards / 100 to 300 yards / 300 yards to 880 yards (half mile) / 880 yards (half mile) to 1760 yards (one mile) / Over 1760 yards (over one mile) / Don't know

Q2. How much do you agree or disagree with the following statement? "The length of a beach is important to my enjoyment of the beach."
Disagree / Somewhat Disagree / Neither Agree or Disagree / Somewhat Agree / Agree

Next / Save and Return Later

Figure A.1 (Cont'd)

Bacteria testing in the water at Great Lakes public beaches

Unsafe bacteria may come from sources such as sewers, septic systems, farms, and manure. Rivers and streams near these sources can carry unsafe bacteria into the Great Lakes. Public agencies can test the water at beaches for unsafe bacteria. When beachgoers come in contact with high levels of unsafe bacteria they are at risk of becoming ill. Unsafe bacteria can cause skin, eye, or ear irritation, vomiting, diarrhea, cramps, or other stomach discomfort. If the levels of unsafe bacteria are too high, health agencies can post signs at the beach and publish messages recommending that beachgoers limit or avoid contact with the water at that beach.

Testing Great Lakes public beaches for unsafe bacteria costs money. Not all public beaches are tested for unsafe bacteria because of the large number of beaches and the limited budgets of public health agencies.

Great Lakes public beaches can be tested for unsafe bacteria:
• Daily
• Weekly
• Monthly
• Never

Figure A.1 (Cont'd)

Q1. Have you heard about a Great Lakes public beach being closed because of high levels of unsafe bacteria?
Yes / No

Q2. How much do you agree or disagree with the following statement? "I do not care if the Great Lakes public beach I visit is tested for unsafe bacteria because…"
… I do not go in the water.*
… the beach has been tested in the past and no unsafe bacteria were found.*
… the beach is not close to any streams or rivers.*
… the beach is not close to any shoreline farms or development.*

Next / Save and Return Later

* The column headings to the right of each sub-question were labeled with slanted text which read (from left to right): "Disagree, Somewhat Disagree, Neither Agree or Disagree, Somewhat Agree, Agree."

Figure A.1 (Cont'd)

Algae and Great Lakes public beaches

Algae are organisms that grow naturally throughout the Great Lakes. It can be common for algae to be present at Great Lakes beaches. Some types of algae:
• are invisible to the naked eye
• look like a layer of film on the surface of the water
• appear plant-like with stems and leafy branches
• look like muck or mud

Algae in the water

Some algae grow at the Great Lakes beach where visitors swim or walk. Other algae may float into areas where visitors swim or walk. At beaches with…

Amount of algae in the water   Definition
None                           visitors never come in contact with algae while swimming or wading
Low                            visitors rarely come in contact with algae while swimming or wading
Moderate                       visitors occasionally come in contact with algae while swimming or wading
High                           visitors constantly come in contact with algae while swimming or wading
(The table also included a "View of the Water" column.)

Figure A.1 (Cont'd)

Q1. Using the definitions above, how much algae was present in the water at the Great Lakes beach you most recently visited?
None / Low amounts / Moderate amounts / High amounts / Don't know

Q2. How much do you agree or disagree with the following statement?
"The amount of algae in the water affects my enjoyment of a beach" Disagree Somewhat Disagree Neither Agree or Disagree Somewhat Agree Agree Next Save and Return Later 109 Figure A.1 (Cont’d) Algae on the Shore Algae may die or be knocked loose from the place where it grows. Dead algae can be carried by winds and waves onto the shore of Great Lakes beaches. When algae washes onto shore, it can accumulate in bunches. At beaches with: Amount of algae on the shore Definition View of swimming area shore None None of the shore of the swimming area has algae. Low 1 to 20% of the shore of the swimming area has algae. Moderate 21 to 50% of the shore of the swimming area has algae. High More than 50% of the shore of the swimming area has algae. . 110 Figure A.1 (Cont’d) Q1. Using the definitions above, how much algae was present on the shore at the Great Lakes beach you most recently visited? None Low amounts Moderate Amounts High amounts Don’t know Q2. How much do you agree or disagree with the following statement? "The amount of algae on the shore affects my enjoyment of a beach" Disagree Somewhat Disagree Neither Agree or Disagree Somewhat Agree Agree Next Save and Return Later 111 Figure A.1 (Cont’d) Bathrooms at Great Lakes public beaches Great Lakes public beaches can have different types of bathrooms:     Outhouses (toilets do not flush), cleaned daily Outhouses (toilets do not flush), cleaned hourly Flushing toilets, cleaned daily Flushing toilets, cleaned hourly Q1. What type of bathrooms are available at the Great Lakes beach you most recently visited? It does not have any bathrooms Outhouses/portable toilets Flushing toilets Don’t know Q2. Do you know how often the bathrooms at the Great Lakes beach you most recently visited get cleaned? Hourly Daily Weekly Don’t Know Next Save and Return Later 112 Figure A.1 (Cont’d) Distance from your home to Great Lakes public beach The distance from your home to a Great Lakes public beach is the number of miles from your permanent residence to get to the Great Lakes beach. This table shows the typical miles of driving distance from the following cities to different Great Lakes: Lake Michigan Lake Huron Lake St Clair Lake Erie Lake Superior Detroit 186 346 61 6 25 Lansing 95 287 100 96 110 Grand Rapids 30 285 130 160 175 Gaylord 37 115 49 235 255 Q1. About how many miles is it from your permanent residence to the Great Lakes beach you most recently visited? Number of miles Q2. How much do you agree or disagree with the following statement? "The distance from my home to a beach affects how often I visit a beach.” Disagree Somewhat Disagree Neither Agree or Disagree Somewhat Agree Agree Next Save and Return Later 113 Figure A.1 (Cont’d) Next, we are going to compare beaches that have some of the characteristics that we have asked about. You are 75% of the way through the survey. Next Save and Return Later 114 Figure A.1 (Cont’d) Visiting a Great Lakes Beach Suppose you are taking a trip to the beach and there are only two beaches to choose from. The beaches have different characteristics as shown in the table, but otherwise, they are the same. For example, they would have the same amount of litter, the same amount of crowding, and the same scenery. 
Please compare Beach A and Beach B in the table and answer the question below:

                             Beach A                            Beach B
Great Lake                   Lake Huron                         Lake Michigan
Bathrooms                    Flushing toilets cleaned hourly    Flushing toilets cleaned hourly
Algae in the water           Moderate (occasionally come        Low (rarely come in contact
                             in contact with algae)             with algae)
Algae on the shore           Low (1-20% of the shore            None
                             has algae.)
Length of beach              5280 yards (3 miles)               880 yards (1/2 mile)
Testing water for bacteria   Daily                              Weekly
Distance from your home      48 miles                           169 miles

Which of the above beaches would you visit? Beach A / Beach B

Next / Save and Return Later

Figure A.1 (Cont'd)

Visiting a Great Lakes Beach

Now suppose you are taking a trip to the beach and Beach C and Beach D are the only beaches to choose from. The beaches have different characteristics as shown in the table, but otherwise, they are the same. For example, they would have the same amount of litter, the same amount of crowding, and the same scenery. Please compare Beach C and Beach D in the table and answer the question below:

                             Beach C                            Beach D
Great Lake                   Lake Huron                         Lake Erie
Bathrooms                    Flushing toilets cleaned hourly    Flushing toilets cleaned hourly
Algae in the water           Low (rarely come in contact        None
                             with algae)
Algae on the shore           Low (1-20% of the shore            High (more than 50% of the
                             has algae.)                        shore has algae.)
Length of beach              220 yards (1/8 mile)               5280 yards (3 miles)
Testing water for bacteria   Daily                              Weekly
Distance from your home      25 miles                           188 miles

Which of the above beaches would you visit? Beach C / Beach D

Next / Save and Return Later

Figure A.1 (Cont'd)

Visiting a Great Lakes Beach

Finally, suppose you are taking a trip to the beach and Beach E and Beach F are the only beaches to choose from. The beaches have different characteristics as shown in the table, but otherwise, they are the same. For example, they would have the same amount of litter, the same amount of crowding, and the same scenery. Please compare Beach E and Beach F in the table and answer the question below:

                             Beach E                            Beach F
Great Lake                   Lake Michigan                      Lake St. Clair
Bathrooms                    Flushing toilets cleaned daily     Flushing toilets cleaned daily
Algae in the water           Low (rarely come in contact        None
                             with algae)
Algae on the shore           High (more than 50% of the         None
                             shore has algae.)
Length of beach              220 yards (1/8 mile)               50 yards
Testing water for bacteria   Daily                              Never
Distance from your home      365 miles                          178 miles

Which of the above beaches would you visit? Beach E / Beach F

Next / Save and Return Later

Figure A.1 (Cont'd)

Thank you for your input. You are almost done with the survey. You are 85% of the way through the survey.

Next / Save and Return Later

Figure A.1 (Cont'd)

Your opinions about Great Lakes beaches

Please rate the following beach characteristics of each Great Lake.

Q1. Water quality (clarity, color, and odor)*
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

Q2. Level of algae*
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

Q3. Debris in the water and on the shore*
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

* Column headings to the right of each question were listed in slanting text and read (from left to right): "Satisfactory, Somewhat satisfactory, Neither satisfactory nor unsatisfactory, Somewhat unsatisfactory, Unsatisfactory, Don't know."

Figure A.1 (Cont'd)
Q4. Natural Beauty*
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

Q5. Quality and texture of the sand or beach areas*
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

Q6. Crowding at the beach and surrounding area*
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

* Column headings to the right of each question were listed in slanting text and read (from left to right): "Satisfactory, Somewhat satisfactory, Neither satisfactory nor unsatisfactory, Somewhat unsatisfactory, Unsatisfactory, Don't know."

Figure A.1 (Cont'd)

Q7. When was the last time you went to this Great Lake?^
Lake Erie / Lake Huron / Lake Michigan / Lake Superior / Lake St. Clair

Next / Save and Return Later

^ Column headings to the right of each question were listed in slanting text and read (from left to right): "Within the last year, 1 to 10 years ago, More than 10 years ago, Never, Don't know."

Figure A.1 (Cont'd)

Things you typically do at a Great Lakes beach

Q1. How often do you do the following things on a typical visit to a Great Lakes beach? (Never / Sometimes / Most of the time)
I walk on the sand
I play or dig in the sand
I get my feet wet
I go in the water, but not past my knees
I go in the water, but not past my shoulders
I go in the water and my entire body gets wet

Next / Save and Return Later

Figure A.1 (Cont'd)

Concluding Section: Background Information

Summaries of these background questions help us accurately represent all Michigan residents. Individual answers are strictly confidential.

Q1. Who is filling out this survey?
The person the invitation was addressed to / Another household member / Someone else

Q2. What is your sex?
Male / Female

Q3. In what year were you born?
Year

Q4. What is your race and/or ethnicity? Mark all that apply.
White / Black / African American / Hispanic, Latino, or Spanish / American Indian / Asian / Other

Figure A.1 (Cont'd)

Q5. What is the highest degree or level of school you have completed?
Some schooling / High school or equivalent / Some college, no degree / Associate's degree / Bachelor's degree / Graduate or professional degree

Q6. What is the zip code of the place you live?
zip code

Q7. How long have you lived in Michigan?
Less than 1 year / 1 to 5 years / More than 5 years

Q8. What is your current employment status?
Employed Full Time / Employed Part Time / Unemployed / Stay at home parent / Retired / Student

Figure A.1 (Cont'd)

(If Q8 = "Employed Full Time" or "Employed Part Time") How many hours do you work in a typical week?

Q9. Do any of the following live in your household?
Spouse or significant other / Children age 5 and under / Children age 6 to 17 / Other immediate family / Extended family or other adults / None of these

Q10. In 2011, what was your total household income, from all sources, before taxes?
Less than $24,999 / $25,000 to $34,999 / $35,000 to $49,999 / $50,000 to $74,999 / $75,000 to $99,999 / $100,000 to $149,999 / $150,000 to $199,999 / $200,000 or more

Next / Save and Return Later

Figure A.1 (Cont'd)

If Q10 was left blank: One last follow up

Q1. Was your total household income in 2011 less than $50,000?
Yes / No

If Q1 = Yes: Q1b. Was it less than $25,000?
Yes / No

If Q1 = No: Q1b. Was it less than $100,000?
Yes / No

Next / Save and Return Later

APPENDIX B

Screener Survey (Michigan Activities survey) materials, contact schedule, robodial scripts, and disposition

PLEASE NOTE: The figures in this appendix include images of survey materials that do not meet the font size and/or margin requirements listed in the Michigan State University formatting guide for submission of master's theses and doctoral dissertations. As such, the text of the materials featured in these figures is entered plainly above each figure to preserve all information contained within the figures and to meet the standards in the formatting guide. The images are presented in the figures to preserve a record of the overall formatting of the actual materials.

The following is the text for the Michigan Activities Survey Wave 1 Introduction letter, original size 8.5" x 11"; for an image of this letter see Figure B.1.

Text for Michigan Activities Survey Wave 1 Introduction letter, original size 8.5" x 11":

DATE ,
Dear ,

We need your help with a study of recreation and leisure activities in Michigan. By answering the enclosed survey, you will help shape future policies and programs influencing the quality of life of people in Michigan.

You are part of a small scientific sample. We need your answers to ensure that our results accurately represent Michigan residents. Whether or not you take part in the activities listed in the questionnaire, your input is important. Please complete the questionnaire and return it to us in the enclosed prepaid envelope. The survey should take less than five minutes to complete.

Your answers will remain completely confidential. Your privacy will be protected to the maximum extent allowable by law. Your participation is voluntary and you may choose not to participate at all, or not to answer certain questions.

If you have any concerns or questions about this research study, such as scientific issues, how to do any part of it, or if you believe you have been harmed because of the research, please contact Frank Lupi, Department of Agricultural, Food, and Resource Economics, Michigan State University, 301B Agriculture Hall, East Lansing, MI 48824-1039; MIstudy@msu.edu, 517-355-1692. If you have any questions or concerns about your role and rights as a research participant, or would like to register a complaint about this research study, you may contact, anonymously if you wish, the Michigan State University Human Research Protection Program at 517-355-2180, FAX 517-432-4503, e-mail irb@msu.edu, or regular mail at: 207 Olds Hall, Michigan State University, East Lansing, MI 48824.

Thank you very much for helping with this important study.

Sincerely,
Frank Lupi, Professor

Figure B.1: Image of Michigan Activities Survey Wave 1 Introduction letter, original size 8.5" x 11"

The following is the text for the Michigan Activities Survey Wave 1 survey instrument, original size of each page 8.5" x 7"; for an image of this survey, see Figure B.2.

Text for Michigan Activities Survey Wave 1 survey instrument, original size of each page 8.5" x 7":

Michigan Activities Survey

We need your help!

"By taking five minutes to tell us about your activities, you will help us manage our state's cultural and natural resources to better meet peoples' needs." --Rodney Stokes, Director, Michigan Department of Natural Resources
“Your input will help us better connect people with Pure Michigan’s unique attractions, renowned destinations, festive celebrations, and pristine outdoors.” --George Zimmermann, Vice President, Travel Michigan INSTRUCTIONS Please answer each question for your own activities that were: • in the State of Michigan • since Memorial Day weekend 2010 (since June 1, 2010) • not work related Even if you have not done these activities, we need to hear from you. 1. Have you done any of the following in the State of Michigan in the past year, since June 1, 2010? (YES/ NO); (If YES, how often since June 1, 2010? 1-5 days/ 6-19 days/ 20 or more Eat dinner at a restaurant Go for a walk or a hike Attend or participate in outdoor sports Swim at a pool, lake, or river Go to a movie in a theater Attend a music concert Attend a cultural or arts festival/fair (page 2) Park Visits 2. Have you visited any of the following public parks in the State of Michigan since June 1, 2010? (YES/ NO); (If YES, how often since June 1, 2010? 1-5 days/ 6-19 days/ 20 or more) County, City, or Township Park State Park or State Campground State Forest or State Game Area National Park or National Forest 131 Outdoor Recreation 3. Have you participated in any of the following activities in the State of Michigan since June 1, 2010? (YES/ NO); (If YES, how often since June 1, 2010? 1-5 days/ 6-19 days/ 20 or more) Camping Hunting Fishing Boating Picnicking at public parks Visiting a beach Driving an all-terrain vehicle (ATV) Snowmobiling Skiing or snowboarding Great Lakes Recreation 4. Have you participated in any of the following activities on the Great Lakes in the State of Michigan since June 1, 2010? (YES/ NO); (If YES, how often since June 1, 2010? 1-5 days/ 6-19 days/ 20 or more) Visiting a beach on the Great Lakes Fishing on the Great Lakes Boating on the Great Lakes (page 3) Typical Weekly Activity 5. During a typical week, do you do any of the following activities? (YES/NO); (If YES, how often during a typical week? 1-9 hours per week/ 10-19 per week/ 20-29 hours per week/ 30+ hours per week) Read books Indoor / outdoor exercise Watch television Use the internet Play video games Play a musical instrument Volunteer Obstacles 6a. Were you able to participate in outdoor recreational activities as often as you would have liked since June 1, 2010? (YES/NO) If NO Skip to question 7. 6b. How much do you agree or disagree? “Since June 1, 2010, I would have participated in outdoor recreation more often, except…” (Disagree/ Somewhat Disagree/ Neither Agree or Disagree/ Somewhat Agree/ Agree) 132 … I did not have enough time … I had too many family or work obligations … I did not have enough money / cost too much … I had a personal health issue or disability … I had no one to go with … I did not know what activities were available … I did not know what places to go to ... the places I knew of were poorly maintained ... the places I knew of were too far away ... the places I knew of were too crowded (page 4) Please provide the following background information 7. Who is filling out this survey? (The person it is addressed to / Another household member/ Someone else) 8. What is your sex? (Male / Female) 9. In what year were you born? (19 _ _ ) 10. What is your race and/or ethnicity? Mark all that apply. White Black / African American Hispanic, Latino, or Spanish American Indian Asian Other _____________ 11. What is the highest degree or level of school you have completed? 
No schooling / Some schooling / High school or equivalent / Associate's or technical degree / Bachelor's degree / Advanced degree

12. Where do you live?
Lower Peninsula of Michigan / Upper Peninsula of Michigan / Some other state, not Michigan

13. How long have you lived in Michigan?
Less than 1 year / 1 to 5 years / More than 5 years

14. What is your current employment status?
Employed Full Time / Employed Part Time / Unemployed / Stay at home parent / Retired / Student

15. If employed, how many hours do you work in a typical week? Hours per week _ _

16. Do any of the following live in your household? Mark all that apply.
Spouse or significant other / Children age 5 and under / Children age 6 to 17 years old / Other immediate family / Extended family or other adults / None of these

17. In 2010, what was your total household income, from all sources, before taxes?
Less than $25,000 / $25,000 to $49,999 / $50,000 to $99,999 / $100,000 and higher

THANK YOU FOR YOUR HELP. If you have misplaced the return envelope, please return your survey to: Dr. Frank Lupi, 301B Agriculture Hall, East Lansing, MI 48824-1039

Figure B.2: Image of Michigan Activities Survey Wave 1 survey instrument, original size of each page 8.5" x 7"

The following is the text for the Michigan Activities Survey Wave 2 Introduction letter, original size 8.5" x 11"; for an image of this letter see Figure B.3.

Text for Michigan Activities Survey Wave 2 Introduction letter, original size 8.5" x 11":

DATE ,
Dear ,

Three weeks ago, I sent you a questionnaire for an important research study of recreation and leisure activities in Michigan. If you have already completed and returned your survey, thank you very much! If not, please do so today.

Whether or not you take part in the activities listed in the questionnaire, your input is important. Your answers will help Michigan ensure the state's resources are managed for the betterment of Michigan residents. Because you are part of a small scientific sample, we need your answers to be sure that our results accurately represent Michigan residents.

The survey should only take a few minutes to complete and your answers will remain completely confidential. Participation in the study is voluntary, and you may choose not to participate at all, or choose not to answer certain questions.

If you have any concerns or questions about this research study, please contact Dr. Frank Lupi, Department of Agricultural, Food and Resource Economics, Michigan State University, 301B Ag Hall, East Lansing, MI 48824; MIstudy@msu.edu, 517-355-1692.

Thank you very much for helping with this important study.

Sincerely,
Frank Lupi, Professor

Figure B.3: Image of Michigan Activities Survey Wave 2 Introduction letter, original size 8.5" x 11"

Note: the Michigan Activities Survey Wave 1 survey instrument and the Michigan Activities Survey Wave 2 survey instrument are identical. For the full text of the Michigan Activities Survey Wave 2 survey instrument, see the text appearing in this appendix before Figure B.2. For an image of the Michigan Activities Survey Wave 2 survey instrument, see Figure B.4.

Figure B.4: Image of Michigan Activities Survey Wave 2 survey instrument, original size of each page 8.5" x 7"

The following is the text for the Michigan Activities Survey Wave 3 Introduction letter, original size 8.5" x 11"; for an image of this letter see Figure B.5.
DATE ,
Dear ,

This summer, I have tried to contact you about an important study of recreation and leisure activities in Michigan. If you have returned your questionnaire, thank you very much! I know you are busy, but we need your answers to ensure our results accurately represent Michigan residents. Whether or not you take part in the activities listed in the questionnaire, your input matters.

This is the last questionnaire I plan to mail you. Please respond today. The questions take only a few minutes to complete, and your answers will be kept completely confidential. Participation in the study is voluntary, and you may choose not to participate at all, or choose not to answer certain questions.

If you have concerns or questions about this study, contact me at the Department of Agricultural, Food and Resource Economics, Michigan State University, East Lansing, MI 48824; 517-355-1692, MIstudy@msu.edu.

Thank you,
Frank Lupi, Professor

Answers to Frequently Asked Questions

Why does this survey matter? Surprisingly, there is little scientifically sound information on the recreational and leisure activities of Michigan residents. Yet, decisions must get made about how to manage the State's natural and cultural resources. The information this survey gathers on people's recreational and leisure activities will facilitate fact-based decision making about resource management.

Why do you want me to do the survey? We need your help because you are part of a small, scientifically selected sample, designed to be representative of all Michigan residents. Some people do the activities in the survey, and others do not. Either way, we need to hear from everyone selected to ensure the accuracy of our results.

How was I selected? A computer program randomly selected your name and address from drivers' license and State ID holders in Michigan over the age of 18.

Who sees my answers? Your responses are scanned directly into a computer by a small team of researchers. The scanned data does not contain your name or address. Personal information is only used to manage the mailing and collection of surveys.

How is my privacy protected? Your answers are kept separately from our mailing list with your name and address. Our mailing list is stored on password-protected computers in locked offices. Everyone who works on the survey has completed training and signed an oath saying that they will not share any private information they see as part of their work on the survey.

How can I see the results? Contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692) or visit our website: www.msu.edu/~mistudy

Figure B.5: Image of Michigan Activities Survey Wave 3 Introduction letter, original size 8.5" x 11"

Note: the Michigan Activities Survey Wave 3 survey instrument differed from the Michigan Activities Survey Wave 1 and Wave 2 survey instruments on pages 1 and 4. The following contains the full text ONLY for those pages of the Michigan Activities Survey Wave 3 survey instrument that are different from the Wave 1 and Wave 2 instruments (the full text of both the Wave 1 and Wave 2 surveys is available above Figure B.2). For an image of the Michigan Activities Survey Wave 3 survey instrument, see Figure B.6.

(page 1)

We need your help!

Your answers matter whether or not you do the things in the survey.
You are part of a small, scientific sample chosen to participate. We need your input to ensure the results are accurate.

INSTRUCTIONS

Please answer each question for your own activities that were:
• in the State of Michigan
• since Memorial Day weekend 2010 (since June 1, 2010)
• not work related

Even if you have not done these activities, we need to hear from you.

1. Have you done any of the following in the State of Michigan in the past year, since June 1, 2010? (YES/NO); (If YES, how often since June 1, 2010? 1-5 days/ 6-19 days/ 20 or more)
Eat dinner at a restaurant
Go for a walk or a hike
Attend or participate in outdoor sports
Swim at a pool, lake, or river
Go to a movie in a theater
Attend a music concert
Attend a cultural or arts festival/fair

(page 4)

Summaries of the following questions help us represent all MI residents. Individual answers are strictly CONFIDENTIAL.

7. Who is filling out this survey? (The person it is addressed to / Another household member / Someone else)

8. What is your sex? (Male / Female)

9. In what year were you born? (19 _ _)

10. What is your race and/or ethnicity? Mark all that apply.
White / Black / African American / Hispanic, Latino, or Spanish / American Indian / Asian / Other _____________

11. What is the highest degree or level of school you have completed?
No schooling / Some schooling / High school or equivalent / Associate's or technical degree / Bachelor's degree / Advanced degree

12. Where do you live?
Lower Peninsula of Michigan / Upper Peninsula of Michigan / Some other state, not Michigan

13. How long have you lived in Michigan?
Less than 1 year / 1 to 5 years / More than 5 years

14. What is your current employment status?
Employed Full Time / Employed Part Time / Unemployed / Stay at home parent / Retired / Student

15. If employed, how many hours do you work in a typical week? Hours per week _ _

16. Do any of the following live in your household? Mark all that apply.
Spouse or significant other / Children age 5 and under / Children age 6 to 17 years old / Other immediate family / Extended family or other adults / None of these

17. In 2010, what was your total household income, from all sources, before taxes?
Less than $25,000 / $25,000 to $49,999 / $50,000 to $99,999 / $100,000 and higher

THANK YOU FOR YOUR HELP. If you have misplaced the return envelope, please return your survey to: Dr. Frank Lupi, 301B Agriculture Hall, East Lansing, MI 48824-1039

Figure B.6: Image of Michigan Activities Survey Wave 3 survey instrument, original size of each page 8.5" x 7" (Note: pages 1 and 4 of the Wave 3 survey instrument are different from Wave 1 and Wave 2)

Table B.1: Schedule of Michigan Activity Survey contacts

Wave number and contact type                           Date
Wave 1 intro letter and survey instrument mailing      7/12/2011 *
Wave 2 intro letter and survey instrument mailing      8/5/2011 *
Wave 2 "pre-notice" call                               8/5/2011
Wave 2 "reminder" call                                 8/10/2011
Wave 3 intro letter and survey instrument mailing      9/23/2011 *
Wave 3 "pre-notice" call                               9/23/2011
Wave 3 "concurrent" call                               9/25/2011
Wave 3 "reminder" call                                 9/28/2011
* the date mailings were post-marked

Wave 2 Screener Survey robodial script: pre-notice

I'm Professor Frank Lupi from Michigan State University. Your household was recently sent a short survey about leisure activities. If it was returned, thank you. In case it was misplaced, I mailed you another copy which should arrive soon. I know you're busy, but we need your help.
Whether or not you do the activities in the survey, we need to hear from you to ensure the accuracy of our results. For survey help, e-mail mistudy@msu.edu or call 517-355-1692. Thanks.

Wave 2 Screener Survey robodial script: reminder

I'm Professor Frank Lupi from Michigan State University. Your household was recently sent a short survey about leisure activities. If it was returned, thank you. In case it was misplaced, I mailed you another copy which should have arrived in the last day or two. I know you're busy, but we need your help. Whether or not you do the activities in the survey, we need to hear from you to ensure the accuracy of our results. For survey help, e-mail mistudy@msu.edu or call 517-355-1692. Thanks.

Wave 3 Screener Survey robodial script: pre-notice

I'm Professor Frank Lupi from Michigan State University. Your household was recently sent a short survey about leisure activities. If it was returned, thank you. In case it was misplaced, I mailed you another copy which should arrive soon. That is the last copy of the survey we plan to mail you. I know you're busy, but we need your help. Whether or not you do the activities in the survey, we need to hear from you to ensure the accuracy of our results. For survey help, or for another copy of the survey, call 517-355-1692. Thanks.

Wave 3 Screener Survey robodial script: concurrent

I'm Professor Frank Lupi from Michigan State University. Your household was recently sent a short survey about leisure activities. If it was returned, thank you. In case it was misplaced, I mailed you another copy. That is the last copy of the survey we plan to mail you. I know you're busy, but we need your help. Whether or not you do the activities in the survey, we need to hear from you to ensure the accuracy of our results. For survey help, or for another copy of the survey, call 517-355-1692. Thanks.

Wave 3 Screener Survey robodial script: reminder

I'm Professor Frank Lupi from Michigan State University. Your household was recently sent a short survey about leisure activities. If it was returned, thank you. In case it was misplaced, I mailed you another copy which should have arrived in the last day or two. That is the last copy of the survey we plan to mail you. I know you're busy, but we need your help. Whether or not you do the activities in the survey, we need to hear from you to ensure the accuracy of our results. For survey help, or for another copy of the survey, call 517-355-1692. Thanks.

Table B.2: Summary of Michigan Activity Survey response and disposition

Initial sample size    32,230
Deceased                   78
Undeliverable           2,434
Moved                     106
Response               11,028
  Wave 1 responses      5,522
  Wave 2 responses      3,061
  Wave 3 responses      2,445
Refusals                   61 (51 blank responses counted as refusals)

APPENDIX C

Web Survey (Great Lakes beach web survey) materials, contact schedule, and robodial scripts

PLEASE NOTE: The figures in this appendix include images of survey materials that do not meet the font size and/or margin requirements listed in the Michigan State University formatting guide for submission of master's theses and doctoral dissertations. As such, the text of the materials featured in these figures is entered plainly above each figure to preserve all information contained within the figures and to meet the standards in the formatting guide. The images are presented in the figures to preserve a record of the overall formatting of the actual materials.
The following is the text for the Great Lakes Beach Web survey Wave 1 Introduction letter, original size 8.5" x 11". For an image of this letter, see Figure C.1.

DATE ,
Dear ,

We need your help with a survey about Great Lakes beaches. By taking a few minutes to share your thoughts and opinions you will be helping us out a great deal. The results will be used by local and state governments to make management decisions that affect Michigan residents. No matter how often you go to Great Lakes beaches, your input is essential to ensure the accuracy of our results. Your answers are strictly confidential.

The survey is being conducted on the internet. To access the survey, please visit the web address below, and log on using your unique password:

Web Address: beaches.anr.msu.edu
Password:

We have included a small token of appreciation as a way of saying thank you for your help. If you have questions about this study, contact me at 301B Ag Hall, Michigan State University, East Lansing, MI 48824; 517-355-1692, MIstudy@msu.edu.

Sincerely,
Frank Lupi, Professor

(page 2)

Answers to Frequently Asked Questions

How was I selected? You helped in an earlier survey where a computer program randomly selected names and addresses from licensed drivers in Michigan. Now, you are part of a small group selected to participate in a final follow-up survey about Great Lakes beaches.

Will I be contacted about other surveys from you? No, this is the last survey that we will ask you to take. We know that you are busy and greatly appreciate your help with this important research project.

Why does this survey matter? Surprisingly, there is little scientifically sound information on the Great Lakes beach activities of Michigan residents. Yet, decisions must get made about how to manage this vast natural resource. The information this survey gathers on people's Great Lakes beach activities will facilitate fact-based decision making about resource management.

Why do you want me to do the survey? We need your help because you are part of a small, scientifically selected sample, designed to be representative of all Michigan residents. Some people go to Great Lakes beaches frequently, and others do not. We need to hear from everyone selected to ensure the accuracy of our results.

Who sees my answers? Your responses are saved directly into a database that does not contain your name or address. Personal information is only used to manage the mailing of survey invitations.

How is my privacy protected? Your answers are kept separately from our mailing list. Our mailing list and data are stored on password-protected computers in locked offices. Everyone who works on the survey has completed training and signed an oath saying that they will not share any private information they see working on the survey.

How do I get help with web survey access or other problems? If you have trouble accessing the web survey or if you have other technical issues you should contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692). One of our research assistants will help you.

How can I see the results?
Contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692) or visit our website in a few months: www.msu.edu/~mistudy

Figure C.1: Image of Great Lakes Beach Web survey Wave 1 Introduction letter, original size 8.5" x 11" (Note: all Wave 1 survey invitations were mailed with a $1 bill)

Figure C.1 (Cont'd)

Figure C.2: Great Lakes Beach Web survey Wave 2 reminder, half sheet black and white postcard, front, original size 8.5" x 5.5"

We need your help! Please complete the Great Lakes beach survey.
Website: beaches.anr.msu.edu
Password:
Figure C.3: Image of Great Lakes Beach Web survey Wave 2 reminder, half sheet black and white postcard, back, original size 8.5" x 5.5"

Recently, I contacted you about an internet survey on Great Lakes beaches. If you have already answered the survey, thank you very much! If you have not filled out the survey, we still need your help. You are part of a small scientific sample, and your answers help us represent all people in Michigan. To access the survey, please use the web address and password printed on the other side of this card. For questions about the survey, email: mistudy@msu.edu or call: 517-355-1692. Thank you very much for your help with this important research study!

Sincerely,
Frank Lupi, Professor

Figure C.4: Image of Great Lakes Beach Web survey Wave 3 reminder, quarter sheet color postcard, front, original size 5.5" x 4.25"

Figure C.5: Image of Great Lakes Beach Web survey Wave 3 reminder, quarter sheet color postcard, back, original size 5.5" x 4.25"

The following is the complete text for the Great Lakes Beaches Web Survey Wave 4 reminder letter, $20 post-paid incentive, original size 8.5" x 11". For an image of this letter, see Figure C.6.

DATE ,
Dear ,

We still need your help! This spring we contacted you several times about the Great Lakes beaches survey. The survey ends May 25, 2012, so please respond soon. You are part of a small, scientific sample and we need your input to help us improve Michigan's Great Lakes beaches. If you complete the survey by May 25, 2012, we will send you a $20 check.

To access the survey, please visit the web address below, and log on using your unique password:

Web Address: beaches.anr.msu.edu
Password:

If you do not have access to the Internet, it is critical that you let us know by sending the enclosed prepaid postcard so that our records are correct. If you have questions about this study, contact me at 301B Ag Hall, Michigan State University, East Lansing, MI 48824; 517-355-1692, MIstudy@msu.edu.

Sincerely,
Frank Lupi, Professor

P.S. This is the last time we will contact you; please respond today.

(page 2)

Answers to Frequently Asked Questions

How was I selected? You helped in an earlier survey where a computer program randomly selected names and addresses from licensed drivers in Michigan. Now, you are part of a small group selected to participate in a final follow-up survey about Great Lakes beaches.

Will I be contacted about other surveys from you? No, this is the last survey that we will ask you to take. We know that you are busy and greatly appreciate your help with this important research project.

Why does this survey matter? Surprisingly, there is little scientifically sound information on the Great Lakes beach activities of Michigan residents. Yet, decisions must get made about how to manage this vast natural resource. The information this survey gathers on people's Great Lakes beach activities will facilitate fact-based decision making about resource management.

Why do you want me to do the survey? We need your help because you are part of a small, scientifically selected sample, designed to be representative of all Michigan residents. Some people go to Great Lakes beaches frequently, and others do not. We need to hear from everyone selected to ensure the accuracy of our results.

Who sees my answers? Your responses are saved directly into a database that does not contain your name or address. Personal information is only used to manage the mailing of survey invitations.

How is my privacy protected?
Your answers are kept separately from our mailing list. Our mailing list and data are stored on password-protected computers in locked offices. Everyone who works on the survey has completed training and signed an oath saying that they will not share any private information they see working on the survey.

How do I get help with web survey access or other problems? If you have trouble accessing the web survey or if you have other technical issues you should contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692). One of our research assistants will help you.

How can I see the results? Contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692) or visit our website in a few months: www.msu.edu/~mistudy

How do I get paid? Participants who complete the survey by May 25, 2012, will be mailed a $20 check from Michigan State University about six weeks later. The check will be made payable to the person whose name appears at the top of this letter.

Figure C.6: Image of Great Lakes Beaches Web Survey Wave 4 reminder letter, $20 post-paid incentive, original size 8.5" x 11"

The following is the complete text for the Great Lakes Beaches Web Survey Wave 4 reminder letter, $10 post-paid incentive, original size 8.5" x 11". For an image of this letter, see Figure C.7.

DATE ,
Dear ,

We still need your help! This spring we contacted you several times about the Great Lakes beaches survey. The survey ends May 25, 2012, so please respond soon. You are part of a small, scientific sample and we need your input to help us improve Michigan's Great Lakes beaches. If you complete the survey by May 25, 2012, we will send you a $10 check.

To access the survey, please visit the web address below, and log on using your unique password:

Web Address: beaches.anr.msu.edu
Password:

If you do not have access to the Internet, it is critical that you let us know by sending the enclosed prepaid postcard so that our records are correct. If you have questions about this study, contact me at 301B Ag Hall, Michigan State University, East Lansing, MI 48824; 517-355-1692, MIstudy@msu.edu.

Sincerely,
Frank Lupi, Professor

P.S. This is the last time we will contact you; please respond today.

(page 2)

Answers to Frequently Asked Questions

How was I selected? You helped in an earlier survey where a computer program randomly selected names and addresses from licensed drivers in Michigan. Now, you are part of a small group selected to participate in a final follow-up survey about Great Lakes beaches.

Will I be contacted about other surveys from you? No, this is the last survey that we will ask you to take. We know that you are busy and greatly appreciate your help with this important research project.

Why does this survey matter? Surprisingly, there is little scientifically sound information on the Great Lakes beach activities of Michigan residents. Yet, decisions must get made about how to manage this vast natural resource. The information this survey gathers on people's Great Lakes beach activities will facilitate fact-based decision making about resource management.

Why do you want me to do the survey? We need your help because you are part of a small, scientifically selected sample, designed to be representative of all Michigan residents. Some people go to Great Lakes beaches frequently, and others do not. We need to hear from everyone selected to ensure the accuracy of our results.

Who sees my answers?
Your responses are saved directly into a database that does not contain your name or address. Personal information is only used to manage the mailing of survey invitations.

How is my privacy protected? Your answers are kept separately from our mailing list. Our mailing list and data are stored on password-protected computers in locked offices. Everyone who works on the survey has completed training and signed an oath saying that they will not share any private information they see working on the survey.

How do I get help with web survey access or other problems? If you have trouble accessing the web survey or if you have other technical issues you should contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692). One of our research assistants will help you.

How can I see the results? Contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692) or visit our website in a few months: www.msu.edu/~mistudy

How do I get paid? Participants who complete the survey by May 25, 2012, will be mailed a $10 check from Michigan State University about six weeks later. The check will be made payable to the person whose name appears at the top of this letter.

Figure C.7: Image of Great Lakes Beaches Web Survey Wave 4 reminder letter, $10 post-paid incentive, original size 8.5" x 11"

The following is the complete text for the Great Lakes Beaches Web Survey Wave 4 reminder letter, no post-paid incentive, original size 8.5" x 11". For an image of this letter, see Figure C.8.

DATE ,
Dear ,

We still need your help! This spring we contacted you several times about the Great Lakes beaches survey. The survey ends May 25, 2012, so please respond soon. You are part of a small, scientific sample and we need your input to help us improve Michigan's Great Lakes beaches.

To access the survey, please visit the web address below, and log on using your unique password:

Web Address: beaches.anr.msu.edu
Password:

If you do not have access to the Internet, it is critical that you let us know by sending the enclosed prepaid postcard so that our records are correct. If you have questions about this study, contact me at 301B Ag Hall, Michigan State University, East Lansing, MI 48824; 517-355-1692, MIstudy@msu.edu.

Sincerely,
Frank Lupi, Professor

P.S. This is the last time we will contact you; please respond today.

(page 2)

Answers to Frequently Asked Questions

How was I selected? You helped in an earlier survey where a computer program randomly selected names and addresses from licensed drivers in Michigan. Now, you are part of a small group selected to participate in a final follow-up survey about Great Lakes beaches.

Will I be contacted about other surveys from you? No, this is the last survey that we will ask you to take. We know that you are busy and greatly appreciate your help with this important research project.

Why does this survey matter? Surprisingly, there is little scientifically sound information on the Great Lakes beach activities of Michigan residents. Yet, decisions must get made about how to manage this vast natural resource. The information this survey gathers on people's Great Lakes beach activities will facilitate fact-based decision making about resource management.

Why do you want me to do the survey? We need your help because you are part of a small, scientifically selected sample, designed to be representative of all Michigan residents. Some people go to Great Lakes beaches frequently, and others do not.
Who sees my answers? Your responses are saved directly into a database that does not contain your name or address. Personal information is only used to manage the mailing of survey invitations.

How is my privacy protected? Your answers are kept separately from our mailing list. Our mailing list and data are stored on password protected computers in locked offices. Everyone who works on the survey has completed training and signed an oath saying that they will not share any private information they see working on the survey.

How do I get help with web survey access or other problems? If you have trouble accessing the web survey or if you have other technical issues, you should contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692). One of our research assistants will help you.

How can I see the results? Contact our research team by email (MIstudy@msu.edu) or by phone (517-355-1692) or visit our website in a few months: www.msu.edu/~mistudy

Figure C.8: Image of Great Lakes Beaches Web Survey Wave 4 reminder letter, no post-paid incentive, original size 8.5” x 11”

Figure C.8 (Cont’d)

Figure C.9: Image of Great Lakes Beaches Web Survey Wave 4 reminder Business Reply Mail postcard stating respondent does not have the Internet, front, original size 4.25” x 5.5”

Figure C.10: Image of Great Lakes Beaches Web Survey Wave 4 reminder Business Reply Mail postcard stating respondent does not have the Internet, back, original size 4.25” x 5.5”

Web Survey robodial script: I’m Professor Frank Lupi from Michigan State University. Someone in your household was invited to an internet survey about Great Lakes beaches. If you’ve completed it, thank you. If not, I’ve mailed another postcard which should arrive soon. The survey internet address and your password are on the postcard. I know you’re busy, but we need your help. Whether you go to Great Lakes beaches a lot or rarely, we need to hear from you to ensure the accuracy of our results. If you need help with the survey, call 517-355-1692. Again, that’s 517-355-1692. Thank you so much.

Table C.1: Schedule of Great Lakes Beaches survey contacts
Web Survey contact 1 (invitation letter; plain paper, $1 token included): April 12
Web Survey contact 2 (reminder postcard; half sheet postcard, black and white): April 20
Web Survey contact 3 (reminder postcard; quarter sheet postcard, color): April 27
Web Survey reminder robodial (only for sample members with publicly available land line telephone numbers): April 27
Web Survey contact 4 (final survey reminder letter; printed on MSU bond paper offering a $20, $10, or no post-paid incentive; a Business Reply Mail postcard saying “I do not have the Internet” was included): May 16

APPENDIX D

Details of Michigan Activities Survey, Robocalls, and Great Lakes Beaches web survey methods

Note on names of surveys: herein, when referring to the screener survey or the mail survey, I am referring to the Michigan Activities Survey, the first stage of the two-stage survey project. When I refer to robocalls, I am referring to a series of automated calls made to survey respondents as a reminder to respond to either the mail or the web survey. When I refer to the web survey or the internet survey, I am referring to the Great Lakes Beaches survey.
Adherence to IRB Guidelines: All survey responses, sample lists, pretests, and information from survey respondents and pretest participants were collected and handled according to IRB guidelines, as described in our IRB application. All project team members handling sample members’ data or survey materials signed an oath stating that they would not disclose any private information encountered as a result of working on surveys at Michigan State University. The oaths were signed by each team member as well as a witness to the signing.

Screener Survey Details

Timing of Screener Survey and Implementation

The mailing schedule for the survey was determined by factors including the time needed to design and test survey materials, materials production, avoidance of major holidays (mainly the 4th of July), the timing of the subsequent follow-up web survey, and elements of the overall project timeline (e.g. length of assistantships for project team members). Materials were mailed using a modified Dillman tailored design to allow for processing of survey forms, review of data, and possible edits to survey materials. These steps were taken to maximize response and increase the representativeness of results.

Screener Survey Sample

Our sample population included all individuals on the Michigan Drivers’ License List (MDLL). We obtained the MDLL from the Michigan Department of State’s list sales department. The list contained 8,917,678 records as of June 2011, when we obtained it. Data were received in the form of a text file and imported into Access, with fields set according to a character-per-column key obtained from the Department of State.

Determining Sample Size

The sample size was determined based on factors such as forecasted response to the screener and follow-up web survey, the expected participation rate in Great Lakes beach recreation, survey production and postage costs including outgoing and business reply mail return postage (for both the screener and the web survey), data needs for robust analysis in the travel cost and stated preference applications of the follow-up survey, and the project budget. These factors were informed by production quotes from FedEx, Michigan Wholesale Printing and IPC Print Services (Karen Scovie); previous recreation participation studies (the National Survey of Fishing, Hunting, and Wildlife Associated Recreation, http://www.census.gov/prod/www/abs/fishing.html, and the National Survey on Recreation and the Environment, http://www.srs.fs.usda.gov/trends/Nsre/nsre2.html); studies of survey response rates; and the past experiences of project team members. A sample of 32,000 was deemed appropriate.

Geographic Extent of Sample

The screener survey sample was to be drawn from counties within the more populous Lower Peninsula of Michigan due to expected response rates and budget constraints.

Drawing the Screener Survey Sample

A random sample of 200,000 records was drawn from the over 8 million records in the Michigan Drivers’ License List. We used information on each county’s portion of the Lower Peninsula’s population in the 2010 Census and applied a desired weighting of 60% coastal counties and 40% non-coastal to determine the number of records that should be drawn from each county. Random draws within each county were made by generating a random sequence with a unique value for each member of the 200,000-record subset of the MDLL from a specific county (e.g. for Alcona County, with 208 members in the 200,000-record subset, each member was randomly assigned a number from 1 to 208). Records within a county were then sorted and the predetermined number of records was selected for the screener survey sample. Random draws were generated using R.
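For concreteness, a minimal R sketch of this two-step draw follows. The object names (subset200k for the 200,000-record subset, alloc for the table of predetermined per-county draw counts) and the seed are hypothetical placeholders, not the project’s actual code.

    set.seed(2011)  # hypothetical seed, for illustration only

    # one county at a time: assign each record a unique random number,
    # sort on it, and keep the predetermined number of records
    draw_county <- function(records, n_draws) {
      records$rand <- sample(seq_len(nrow(records)))
      records <- records[order(records$rand), ]
      head(records, n_draws)
    }

    # apply the draw to every county and stack the results
    screener_sample <- do.call(rbind, lapply(
      split(subset200k, subset200k$county),
      function(cty) draw_county(cty, alloc$n_draws[alloc$county == cty$county[1]])
    ))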
Screener Survey Materials: Format

The form of the survey materials was designed given the expected volume of response (>10,000), postage and production costs, and the project budget. The survey instrument, a one-page legal-sized (8.5” x 14”) document folded in half to create a small survey booklet, was designed in part to minimize postage and production costs; it also maximized the efficiency of data processing. The size of the survey instrument was chosen so that the booklet fit in a standard #10 outgoing envelope and a #9 business reply mail return envelope, keeping the survey weight to 1 ounce or less.

The survey instrument was designed to be compatible with an Optical Mark Recognition (OMR) program (Remark) to expedite data processing and to increase its accuracy. As part of designing the scannable form, the project purchased a high speed scanner to convert hard copy documents into images (jpg files). Survey images were read by the Remark software, which converted them into data exported to Access database files. Before the design of the survey instrument was finalized, various scanner and OMR software settings (i.e. darkness of scan, sensitivity of scoring) were tested, as were different formats of survey instruments, to ensure the instrument was properly designed and that the scanner and software were properly calibrated. Different colors, font sizes and layouts were tested to ensure the software correctly scored test surveys (i.e. avoiding false positives and ensuring accurate scoring of test survey sheets). Scanning sensitivities and settings were tested for accuracy against a variety of marks (crosses, scribbles, lines across circles, words/letters, light marks and overly large dark marks). The OMR software vendor was consulted to judge the score-ability of the final survey instrument and to ensure that formatting or lines on the instrument itself would not cause false positives (blank circles being scored as marked answers).

Each survey sample member was assigned a unique five-digit ID number followed by a hyphen and the wave number by the FedEx production facility as part of the materials production process (e.g. 12345-1 for wave 1, 23456-2 for wave 2 and 34567-3 for wave 3). This identifier had to be included for the production facility’s matching process. The ID numbers were assigned by the production facility; we were able to specify the location and size of the identifier, and that it would be five digits followed by a hyphen and then a 1, 2 or 3 depending on the wave of contact. ID numbers were placed inconspicuously in the upper right-hand corner of the survey instrument. This ID was used by the production facility to match the outgoing envelope, introductory letter and survey instrument unique to each respondent. For each wave, these IDs were emailed to the project team by the FedEx production facility. The list of sample respondents and five-digit IDs was maintained throughout the processing of responses and data analysis as a means of tracking which respondents had responded. As described below, research team members used the IDs from returned surveys to track which sample members had responded to the survey.
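As an illustration of that tracking step, the following R sketch (with hypothetical object names) flags sample members whose five-digit IDs appear among the scanned returns and drops them from the next mailing file.

    # 'frame' stands in for the master list of sample members and their IDs;
    # 'returned_ids' for the IDs read off returned surveys (wave suffix stripped)
    frame$responded <- frame$id %in% returned_ids

    # respondents are excluded from the next wave's mailing file
    # (undeliverable addresses, described below, were removed similarly)
    next_wave_mailing <- subset(frame, !responded)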
Screener Survey Content

The screener survey was developed with two main purposes: first, to identify Great Lakes beach goers who would take part in the follow-up web survey on Great Lakes beach recreation; and second, to gather information on the general activities of Michigan residents. For the screener survey, we attempted to “generalize” the contents to appeal to as many respondents as possible by including questions about general leisure activities (e.g. going to a restaurant, going to a movie in a movie theater, going for a walk) and daily activities (e.g. watching TV, using the Internet). By having the survey content appeal to a broad audience, we hoped to minimize possible response bias toward those who frequently participate in many activities. Any bias toward more active respondents would skew the study’s estimated participation rates and decrease the representativeness of our results. Questions about general activities, weekly activities, “barriers to participation” and background/demographic questions were included so that possible relationships between respondent characteristics and participation, or participation across various activities, could later be summarized. Barrier questions were chosen to allow comparison of our results to previous studies in other states.

The project team obtained permission from the Director of the Michigan Department of Natural Resources, Rodney Stokes, and the Vice President of Travel Michigan (i.e. Pure Michigan), George Zimmerman, to place on the survey each organization’s logo and a brief appeal to respondents attributed to each official. These appeals were drafted by our project staff and approved by G. Zimmerman and R. Stokes. The appeals, logos, and names were included to legitimize the purpose of the survey.

Screener survey instrument pretest:

A total of 26 pretests of the screener survey were conducted from June 1 through June 18, 2011. Pretesting of the screener survey occurred in two phases. The first used a convenience sample of 9 Michigan State University students and was conducted at the International Center; respondents were paid $5 for taking the survey and, once they completed it, were asked general questions about how they answered its questions. The second phase of pretesting was a random sample of 17 adults at a local shopping mall; participants were paid $10 or $20 for their participation. Following each phase of screener survey pretests, edits were made to the instrument based on respondent feedback. These edits were discussed among the project team members before being implemented and were then tested on the next pretest respondents. Once pretest respondents were able to consistently take the screener survey without incidents of confusion or uncertainty, the instrument was finalized and put into production.

An additional subset of pretests was conducted at beaches between Port Huron and Detroit on June 13. This special pretest was conducted to test whether visitors to beaches along Lake St. Clair and the Detroit River considered those beaches to be Great Lakes beaches, since our research intended to include beaches along those bodies of water as Great Lakes beaches. A total of 14 random visitors were asked whether or not they had been to a Great Lakes beach in the last year, and whether or not they considered the beach they were currently visiting a Great Lakes beach. Eight of the 14 considered the beach a Great Lakes beach.
Screener Survey Materials throughout the Field Period:

Along with the survey instrument, screener survey sample members were also mailed an introductory letter inviting them to participate in the survey and a business reply mail envelope. Waves 1 and 2 were mailed in standard #10 outgoing envelopes with either a green block “S” or a green Beaumont Tower icon (depending on the survey wave) and marked with the return address “Michigan Activities Survey” in the upper left-hand corner.

In an effort to increase response, the introductory letter for the third wave of the screener survey was edited to include a set of answers to frequently asked questions, addressing common concerns among callers to the survey help line. FAQs centered upon the purpose of the study, assurances of the confidentiality of survey responses, and why and how respondents were selected to take part in the survey. The third wave of the survey was also mailed in a flat, 9 inch by 13 inch envelope plainly marked with the study’s return address as a means of changing the feel and appearance of the survey package (which the respondent had already received twice before). The return address on the third wave listed only Dr. Lupi’s name and did not say “Michigan Activities Survey,” as the prior two waves had. The return address had previously listed “Michigan Activities Survey” as a means of legitimizing the mailer as being for some specific purpose, as well as to signal that it was not a solicitation for fund raising or another purpose. For the third wave, the survey instrument was slightly modified: the appeals from MDNR and Pure Michigan were removed and replaced with a bright yellow box stating: “We need your help!” This edit was made as a last attempt to convert sample members based on the urgency and importance of the survey. Additionally, the page containing demographic and background information was edited to include a header assuring respondents that their answers would remain strictly confidential. The screener survey instrument was also reviewed by four MSU graduate students with experience working on survey research projects; their comments were taken into account, though they were not offered an incentive for their participation.

Sample address cleansing:

Before each wave, the project team submitted survey material files and an address list to FedEx. FedEx’s direct mail production facility would cleanse the address list against the current USPS Saturation File, which accounted for known changes of address.

Undeliverable mail

In addition to completed surveys, undeliverable mail was returned to the project office or picked up at Mail Processing throughout the survey field period. As undeliverable mail was received, the five-digit ID for the respondent was added to a list of undeliverable addresses. The list was double entered (as a means of QA/QC) and IDs on the list were removed from future mailings. Out of a total of 32,230 sample members, 2,464 (7.6%) had their surveys returned as undeliverable.
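The undeliverable rate follows directly from the two counts reported above:

    # undeliverable share of the mailed screener sample
    2464 / 32230  # = 0.0765, i.e. about 7.6%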
Receiving and Tracking Responses:

Returned surveys were delivered via campus Mail Processing to Agriculture Hall or picked up directly from Mail Processing, and were then opened. Any materials other than surveys that were sent back (notes, comments, etc.) were kept locked in the project office for later review. Returned surveys were then scanned by an undergraduate assistant. To be sure that no surveys were skipped in the scanning process, the assistant counted the number of surveys in each batch put through the scanner and compared that number to the number of image files the scanner produced. If there were any discrepancies, the image files for that batch were deleted, the batch of surveys was recounted, and the batch was scanned again. This ensured that each survey was scanned. Hard copies of surveys were then sorted numerically (by wave, since each wave had a different numbering scheme) and filed in locked cabinets in the project office. All survey image files were saved on a department laptop, backed up on an external hard drive, and backed up on the MSU cloud backup system. The department laptop and external hard drive were stored in a locked cabinet in the project office.

This process of receiving and scanning surveys was conducted daily until response decreased; when fewer surveys arrived, the process was repeated as necessary. Scanned surveys were then scored using the Remark OMR software. Project team members tracked responses by scoring surveys using Remark and then comparing the software-generated value for the ID against the scanned image of the survey to ensure the ID was scored correctly. As surveys marked with IDs were received, the corresponding individuals were marked as having responded to the survey. Individuals who responded were removed from future mailings.

QA/QC of Survey Data Scoring Using Remark

Once survey images were scored by Remark, the software flagged certain responses or areas within surveys as “exceptions,” or places where the software could not definitively code the response. The undergraduate assistant reviewed all exceptions identified by Remark and hand coded those responses. Before the scanning and scoring of returned surveys began, to be sure that both the scanner and the scoring software were configured to correctly score surveys, 100 random returned surveys were hand coded in addition to being scanned and read by the OMR software. The results of the two data entry methods were compared (after “exceptions” produced by the Remark software were addressed in the scanned and scored data) and no differences were found between the data sets.

Robocall Methods

A list of publicly available phone numbers and addresses was obtained from Select Phone Data (selectphonedata.com) for a total cost of $200. The list covered the entire state of Michigan (all zip codes). Members of the screener survey sample were matched with phone numbers using 5 different levels of matches. Table D.1 below shows the level number, a description of the match, and the number of matches of each type. Note: the matches may have inadvertently included cell phone numbers; any such numbers were removed from further calls and analysis.

Table D.1: Description and number of different record matches used to assign phone numbers to survey sample members for robodial implementation
Match level 1 (first name; last name; middle initial; 9-digit zip code): 7,257 records
Match level 2 (first name; last name; 9-digit zip code): 1,261 records
Match level 3 (last name; 9-digit zip code): 5,856 records
Match level 4 (first name; last name; middle initial; zip code): 508 records
Match level 5 (last name; zip code): 606 records
Total: 15,488 records
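This tiered matching can be sketched in R as a sequence of merges on progressively looser key sets, removing already-matched members at each step. The object and field names here (frame, phones, zip9, and so on) are hypothetical, and deduplication of multiple phone hits per member is omitted.

    # key sets for the five match levels, from strictest to loosest
    match_keys <- list(
      c("first_name", "last_name", "middle_initial", "zip9"),  # level 1
      c("first_name", "last_name", "zip9"),                    # level 2
      c("last_name", "zip9"),                                  # level 3
      c("first_name", "last_name", "middle_initial", "zip5"),  # level 4
      c("last_name", "zip5")                                   # level 5
    )

    matched <- NULL
    remaining <- frame
    for (lvl in seq_along(match_keys)) {
      keys <- match_keys[[lvl]]
      hits <- merge(remaining, phones[, c(keys, "phone")], by = keys)
      if (nrow(hits) > 0) {
        hits$match_level <- lvl                                 # record which level matched
        matched <- rbind(matched, hits)
        remaining <- remaining[!(remaining$id %in% hits$id), ]  # looser keys only for the rest
      }
    }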
The matches in table D.1 were used for robocalls that occurred along with the second wave of the screener survey. Robocalls during the second wave included a pre-notice call and a reminder call. Each call type was made at 2:00 PM on the date indicated. Any phone numbers that were not answered or did not reach an answering machine or voicemail were called again that same day at 5:30 PM. Table D.2 shows the timing, number, and cost of calls during wave 2 of the Michigan Activities Survey. Note: the calls may have inadvertently included cell phone numbers; any such numbers were removed from further calls and analysis.

Table D.2: Date, time, cost, and number of calls for robodials during wave 2 of the Michigan Activities Survey
August 5, 2:00 PM, wave 2 pre-notice (initial): $136.40, 10,492 phone numbers
August 5, 5:30 PM, wave 2 pre-notice (recall): $37.04, 2,849 phone numbers
August 10, 2:00 PM, wave 2 reminder (initial): $48.00, 3,000 phone numbers
August 10, 5:30 PM, wave 2 reminder (recall): $15.01, 938 phone numbers

We also conducted robocalls during the third wave of the screener survey. Before the third-wave robocalls occurred, we acquired additional phone numbers through a phone number appending service, Accurate Append, which appended our list of sample member names and addresses with phone numbers. From Accurate Append, we received an additional 4,281 phone number matches at a cost of $306. Robocalls throughout the third wave of the screener survey included a pre-notice call, a call timed to be concurrent with the respondent’s receipt of the wave 3 mailer, and a reminder call. Each call type was made at 5:30 PM on the date indicated. Any phone numbers that were not answered or did not reach an answering machine or voicemail were called again that same day at 7:30 PM. Table D.3 shows call dates, times, costs, and totals; these may also have inadvertently included cell phone numbers, which were removed from further calls and analysis.

Table D.3: Date, time, cost, and number of calls for robodials during wave 3 of the Michigan Activities Survey
September 23, 5:30 PM, wave 3 pre-notice (initial): $76.38, 4,774 phone numbers
September 23, 7:30 PM, wave 3 pre-notice (recall): $26.11, 1,632 phone numbers
September 25, 5:30 PM, wave 3 concurrent call (initial): $53.94, 3,371 phone numbers
September 25, 7:30 PM, wave 3 concurrent call (recall): $17.63, 1,102 phone numbers
September 28, 5:30 PM, wave 3 reminder (initial): $39.50, 2,469 phone numbers
September 28, 7:30 PM, wave 3 reminder (recall): $10.50, 656 phone numbers

Web Survey Details and Methods

Web Survey Development

The Great Lakes beaches web survey was developed to suit the purposes of two broad studies: first, a revealed preference travel cost modeling study, including estimates of demand for trips to different Great Lakes beaches; and second, a stated preference study of Michigan residents’ preferences for Great Lakes beach attributes. With those two studies in mind, the content of the survey was crafted based on a review of past literature and the study goals. Questions were posed to respondents in a way that allowed for robust recreational demand modeling (in the case of questions related to the revealed preference portion of the survey) and for site characteristic preferences to be estimated (in the case of questions related to the stated preference portion). Care was also taken not to overburden respondents with lengthy or arduous lines of questioning that could fatigue respondents and induce early terminations/break-offs. The text, diagrams, and graphics used to describe the attributes to respondents were developed with the help of state and county health officials, water policy experts, and resource managers from state and federal agencies.
Parts of the choice experiment needed to convey scientific information in a way that was understandable to members of the general population, who cannot be assumed to have advanced knowledge of the topics in the survey. The resource experts we consulted helped ensure that the survey clearly communicated accurate information. Certain attribute descriptions shown to choice experiment participants also contained diagrams and drawings. Even though the diagrams are simplifications of real-world conditions, the experts we consulted in the development of survey materials and the pretest participants (i.e. prospective survey respondents) found the diagrams to accurately represent conditions experienced at actual Great Lakes beaches. Experts consulted included Jan Stevenson (MSU), Erin Dreelin (MSU), Shannon Briggs (MDEQ), Charles Kovatch (USEPA) and various county health department officials.

The colors used in figures and diagrams were tested across a number of available screen displays to simulate the assorted display types and settings under which sample members would ultimately view the survey. These tests ensured that contrast and representativeness were preserved as intended across different settings.

Various aspects of the web survey instrument were programmed to be customized to the respondent. The web survey was programmed to include skip patterns following a logic decided by research team members, so that survey respondents would follow a path through the survey based on their answers to questions. This logic and these skip patterns were determined by the study’s specific data needs. For example, only respondents who reported that they had taken an overnight trip to a Great Lakes beach would be asked to provide details about an overnight trip. The web survey also featured programming that filled text into questions based on information provided by respondents in earlier questions. To ensure that the tasks included in the web survey met the goals of the research study and that all programming features (skip patterns, text fills, etc.) functioned properly, extensive debugging and survey pretesting were performed.

The web survey’s address was submitted to several search engines (such as Google) to ensure that if the address was entered into a search engine instead of an address bar, the respondent could still reach the webpage. When the web survey’s address was entered into a search engine, the first result was the web survey’s page.

Web Survey Debugging

To ensure that the survey properly followed the determined paths based on possible answers to key questions, research team members systematically worked through the survey following all possible paths, using all combinations of answers. Research team members also tested each question, data field, and answer area to ensure that information entered by respondents was properly stored for later analysis or recall in later questions. Because the text of questions is filled in different ways depending on users’ answers to previous questions, research team members tested entering all combinations of character types and answers into fields to ensure fills and data storage were functioning properly. All errors were addressed, corrected, and then tested again.
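The survey software’s own code is not reproduced in this thesis, but the skip-pattern and text-fill logic described above can be sketched in R with hypothetical answer fields:

    # skip pattern: only respondents reporting an overnight trip are routed
    # to the overnight-trip questions ('resp' is a hypothetical list of answers)
    next_module <- function(resp) {
      if (isTRUE(resp$took_overnight_trip)) {
        "overnight_trip_details"
      } else {
        "next_topic"   # everyone else skips ahead
      }
    }

    # text fill: a later question reuses a beach the respondent named earlier
    fill_question <- function(resp) {
      sprintf("How many times did you visit %s in the past year?", resp$beach_name)
    }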
Care was also taken to ensure that the general appearance of all aspects of the survey, including formatting, font size, tables, page borders, and diagrams, was clear, readable, and understandable under different display settings, browsers, and operating systems. This was done to account for differences between the computers used by research team members to design the survey and those of the respondents who would ultimately take it.

Web Survey Pretesting:

Cognitive interviews were conducted to pretest the web survey for several reasons. The complexity and length of the survey task depend largely on a user’s specific responses: in general, respondents who visit Great Lakes beaches more frequently faced a more complex path through the web survey than those who visit less frequently. We needed to test whether users who took varying numbers and types of trips could correctly navigate the tasks without feeling excess recall burden or fatigue, and to make sure that the web survey followed a coherent path regardless of how often the respondent visited Great Lakes beaches. Because the flow, ordering, and transitions between different tasks can vary depending on the complexity of the path taken by the user, we needed to test multiple people’s understanding of the tasks as well as the transitions between them. Finally, given the broad range of Great Lakes beach going behaviors and the statewide scope of the project (i.e. a statewide survey sample), we required a pretest mode that allowed us to pretest the survey with a wide range of participants.

Web Survey Pretest Method:

For the Great Lakes beaches web survey, we chose to conduct pretests using a remote, internet-based screen sharing application accompanied by a phone interview. Using this method, pretest participants were called on the phone by a lead researcher who conducted the interview. The lead researcher would then direct the participant to join a web-based screen sharing session via a direct link emailed to the participant. The participant would view the web survey as it appeared on the lead researcher’s computer. The lead researcher would calibrate the display settings for the pretest participant and then grant control of the cursor (i.e. of the web survey) to the participant. Throughout the pretest, the lead researcher would observe the participant and note how well the participant seemed to navigate the survey. At designated points in the survey, the researcher would stop the participant and ask questions to test how survey features were functioning and how well the respondent understood the information and tasks presented. The goal of the interview was to gather information on which portions of the survey were functioning as intended and which needed revisions. When the researcher posed questions about the survey to the participant, the researcher was able to freely navigate to specific pages in the survey and refer to the specific item in question as it appeared on the screen. A pre-arranged outline of questions and follow-up probes was developed to seek out how respondents interpreted survey directions and what they interpreted certain questions to be asking. The interviewer also walked through tasks with the respondent, asking questions along the way such as:
“How did you answer this question? What were you thinking about when you first answered this question? How confident are you in your answer to this question?” These probes gave the interviewer a feel for the respondent’s overall grasp of the task, as well as for the cognitive burden of specific questions.

Advantages of the method:

Screen sharing technology is becoming more commonplace and is frequently used in professional settings so that many parties (often remote) can view the same computer screen, electronic document, or presentation at the same time. Based on our contact with over thirty recruits to the survey pretest, screen sharing is understood by many people and is not seen as risky or out of the ordinary.

The Great Lakes web survey itself draws from a statewide sample of Michigan residents. We preferred this pretest mode because it made the pretest representative of the survey population, allowing pretest participants to be recruited from a broader geographic range than other modes (such as on-campus convenience sampling, or mailing random residents within a certain radius of Michigan State University to come to campus for onsite pretesting). Additionally, though many East Lansing area residents may frequently take trips to the Great Lakes, their opinions of the survey may differ from those of residents of areas and counties adjacent to the Great Lakes. To seek the opinions of a key group of web survey respondents, coastal county residents, we required a pretesting mode that allowed us to reach beyond the East Lansing area. Coastal county residents likely have different Great Lakes visitation patterns (due primarily to proximity) and could take a different approach to a survey about Great Lakes beach recreation. For example, research questions we kept in mind were whether coastal residents would consider Great Lakes recreation a routine activity and end up under-reporting or mis-reporting details of their recreation, and whether non-coastal county residents would have more difficulty recalling details of Great Lakes beach trips that occur less routinely. By drawing from a statewide pretest participant pool, we were able to investigate these questions.

This method of pretesting also allowed many members of the research team to observe pretests. While one researcher led the meeting and interviewed the participant, other researchers could, without disturbing the participant or the ongoing pretest, join the screen sharing session and the call between the researcher and participant (which was hosted on a conference call line). Additional researchers called in to the conference call before the interview began and muted their phone lines. Researchers also had the option of recording the calls that ran along with the pretests for review at a later time. This method allowed several research team members to observe a single participant without needing to be physically in the same room as the respondent.

Mechanics of the web survey pretesting method

We used the screen sharing application GatherPlace (gatherplace.net). A monthly subscription, at a cost of $65, allowed us to set up unlimited screen share sessions, and each session allowed up to 40 guests to observe the host’s screen at one time. The screen sharing application runs on Java, a nearly ubiquitous platform commonly used to run many other applications on a variety of websites. Outside of Java, GatherPlace does not require guests of screen sharing sessions to download any third party programs. Screen sharing sessions first require a host to begin a session.
For pretesting, this meant that a pretest participant could see the web survey only once the lead researcher had set up and begun the session. This provided a large degree of control for the researcher and kept participants from viewing unintended content. Following the start of a session, guests could click a direct link to the specific session (emailed directly by the host) or visit GatherPlace.net and navigate to the session by entering a screen share session ID.

We recruited participants through an electronic sample purchased from Survey Sampling International (SSI). We provided SSI with basic information about our target population: adults in the Lower Peninsula of Michigan. Next, SSI emailed a generic invitation to their sample members matching those criteria. The invitation simply stated: “Dear , Your Opinion Matters! Click here to participate in a new SurveySpot survey opportunity or click the button below.” Users were then brought to a webpage explaining the criteria to be eligible to participate in the pretests: participants must have visited a Great Lakes beach in the last year and must be able to speak on the phone while taking the survey on the internet. The interviews were said to last approximately 45 minutes. We offered recruits a $40 incentive for completing the interview. The content of the web site appears below:

We attempted to recruit pretest participants with different participation levels who, over the course of answering survey questions to the best of their ability, would follow different paths through the survey, testing different functions of the web survey instrument in the process. In particular, within each batch of pretest completes, we tried to recruit roughly equal numbers of respondents who had taken each of the following types of trips: trips lasting a day or less, overnight trips of fewer than 4 nights, and overnight trips of 4 nights or more. Each of these trip types has a separate line of questioning. Additionally, pretesting across participants who visited Great Lakes beaches with different frequency let researchers check respondents’ comprehension of each of those lines of questioning, as well as respondents’ awareness and understanding of the transitions between them; obtaining a pretest sample of respondents who had taken a variety of trips was therefore important to the success of the pretest.

SSI sent out an unknown number of emails based on our need for 15 completes. The research team received approximately 25 emails and 20 phone calls in the week following the initial invitation. In many emails and phone messages, pretest recruits listed the Great Lakes beach(es) they had visited, and when they had visited in the last year, as a means of proving they met the criteria listed in the invitation. In cases where a pretest recruit did not list where they visited (and therefore was not certain to be eligible for the pretest), a researcher followed up with an email or a call inquiring whether they had visited a Great Lakes beach in the last year (as was required to participate) as well as the length of the trip (to help ensure we spoke with respondents with a range of trip types and frequencies). In general, it was easier to recruit pretest participants using email than using the phone. Email offered an easily traceable line of communication and was checked more frequently than voicemail by pretest recruits. Email contact was established with every participant regardless of the initial form of contact (phone or email).
Once a pretest recruit was verified as eligible, an appointment was scheduled. Most appointments were made fewer than three days in advance to keep pretest recruits from forgetting about their appointments. We scheduled a total of 34 appointments to reach our 30 completes; four pretest recruits made appointments and did not keep them, and none of the four responded to requests to reschedule. Appointments with pretest recruits were confirmed via an email similar to this: Hello , This is an email to confirm that we have an appointment scheduled for