UNIVERSITY BIOSCIENTISTS’ RISK EPISTEMOLOGIES AND RESEARCH PROBLEM CHOICES

By

Walakada Appuhamilage Dilshani Eranga Sarathchandra

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Sociology – Doctor of Philosophy

2013

ABSTRACT

UNIVERSITY BIOSCIENTISTS’ RISK EPISTEMOLOGIES AND RESEARCH PROBLEM CHOICES

By

Walakada Appuhamilage Dilshani Eranga Sarathchandra

Scientific discoveries take place within scientific communities that are established in legitimating organizations such as universities and research institutes. Oftentimes, scientists experience tensions and paradoxes as they evaluate the risks they are willing to accept in their work. The types of risk/benefit decisions scientists make to determine which research projects to engage in, and how they engage in them, are more important than ever due to current restrictions on funding for scientific research. The main objective of this dissertation is to analyze the ways in which university bioscientists define, evaluate, and manage risks in science, i.e., their risk epistemologies. In the process, I examine bioscientists’ risk perceptions and the demographic and contextual factors that influence those perceptions. Additionally, I investigate the associations between risk perceptions and research problem choices.

This dissertation followed a mixed-methods approach. The data collection included twenty semi-structured in-depth interviews and a large-scale online survey of university bioscientists. Based on three theoretically driven research questions that surfaced through examining the current literature, I organized the dissertation into three essays. The first essay explores the risk epistemologies of university bioscientists as they determine the best trajectories for their scientific careers. This essay analyzes data gathered through in-depth interviews meant to elicit university bioscientists’ different understandings of the notion of risk.
The second essay quantifies bioscientists’ risk perceptions using data gathered from the online survey. In this essay, I investigate the influence of life-course, gender, sources of funding, research orientation, network interactions, and perceived significance of research on risk perception. In the third essay, I use data gathered from the online survey to investigate the associations between university bioscientists’ risk perceptions and their research problem choices.

The results suggest that risk is a useful paradigm for studying decision-making in science. In making scientific risk decisions, scientists at times conform to existing institutional structures; at other times they challenge them, persist through them, or compromise their actions. Bioscientists’ risk epistemologies matter to the extent that they allow for more creative ways in which individual scientists can navigate the institutional environments in which they are embedded. Risk perceptions of university bioscientists differ based on the specific dimension of risk under investigation. Several significant relationships between perceived risks and problem choice orientations emerged from the data analysis. Overall, university bioscientists’ risk epistemologies seem to be related to the unique reward structure of science, compelling them to use various risk management techniques while navigating their research environments.

Copyright by
WALAKADA APPUHAMILAGE DILSHANI ERANGA SARATHCHANDRA
2013

ACKNOWLEDGEMENTS

I express my deepest gratitude to my dissertation committee, Professors Toby Ten Eyck (Chair), Aaron McCright, Tom Dietz, and Karim Maredia, for investing their valuable time in me and for their guidance and support. I owe my heartfelt gratitude to Professors Chris DiFonzo, Steve Pueppke, and Ernest Delfosse for taking an interest in my research and providing valuable feedback during the data collection and writing process.
I thank the Department of Sociology, CANR World Technology Access Program, Julian Samora Research Institute, and Lyman Briggs College for providing financial assistance throughout my graduate studies at Michigan State University. I thank my mother, father, brother, aunt, and uncle for their unwavering love and support. I thank my beautiful cousins Deelaka and Chathura for keeping me true to the small things that matter in life. Last but not least, I thank my dearest friends and colleagues Drs. Cedric Taylor, Callista Rakhmatov, Nate Colon, Eric Beasley, Meaghan Beasley, and Naomi Glogower for believing in me and supporting me through and through.

TABLE OF CONTENTS

LIST OF TABLES ........ viii

KEY TO ABBREVIATIONS ........ ix

CHAPTER 1
INTRODUCTION ........ 1

CHAPTER 2
THE RISK EPISTEMOLOGIES OF UNIVERSITY BIOSCIENTISTS ........ 9
    Introduction ........ 9
    Background ........ 12
        Dynamics of Risk in Scientific Decision-Making ........ 12
        Decision-Making in Science ........ 13
        Risk in Scientific Decisions ........ 15
    Methods of Data Collection and Analysis ........ 17
    Results and Discussion ........ 19
        Understandings and Definitions of Risk ........ 19
        Evaluating and Managing Risks ........ 28
    Conclusion ........ 34

CHAPTER 3
RISK PERCEPTIONS OF BIOSCIENTISTS AT A US LAND-GRANT UNIVERSITY ........ 38
    Introduction ........ 38
    Background ........ 40
        Risk and Risk Perception ........ 40
        Risk in Scientific Decisions ........ 41
    Methods of Data Collection and Analysis ........ 47
        Dependent Variables ........ 48
        Independent Variables ........ 49
    Results and Discussion ........ 52
        Effects of Life-course on Risk Perception ........ 52
        Gender and Risk Perception ........ 53
        Research Orientation, Sources of Funding, and Risk Perception ........ 55
        Network Interactions and Risk Perception ........ 56
        Perceived Significance of Research and Risk Perception ........ 57
    Conclusion ........ 58

CHAPTER 4
RESEARCH PROBLEM CHOICES AND RISK PERCEPTIONS OF BIOSCIENTISTS AT A US LAND-GRANT UNIVERSITY ........ 62
    Introduction ........ 62
    Background ........ 64
        Scientists’ Criteria for Research Problem Choice ........ 64
        Risk, Risk Perception, and Problem Choice ........ 66
    Methods of Data Collection and Analysis ........ 67
        Dependent Variables ........ 69
        Independent Variables ........ 70
    Results and Discussion ........ 73
        University Bioscientists’ Criteria for Research Problem Choice ........ 73
        Risk Perception and Research Problem Choice ........ 75
        Other Determinants of Research Problem Choice ........ 78
    Conclusion ........ 79

CHAPTER 5
SUMMARY AND CONCLUSION ........ 82

APPENDICES ........ 87
    Appendix A: Tables ........ 88
    Appendix B: Semi-structured Interview Guide ........ 99
    Appendix C: Survey Questionnaire ........ 101

REFERENCES ........ 109

LIST OF TABLES

Table 1: Mean Risk Ratings for Items Measuring Expressed Risk Preference ........ 89

Table 2: Survey Items for Four Latent Factors Reflecting Four Dimensions of Perceived Risk ........ 90

Table 3: Coding, Means, and Standard Deviations for Variables in the Study ........ 91

Table 4: Scientists’ Network Interactions ........ 92

Table 5: Scientists’ Perceived Significance of Research ........ 92

Table 6: Multivariate OLS Regression Models for Variables Predicting Four Dimensions of Perceived Risk (Standardized Regression Coefficients) ........ 93

Table 7: Multivariate OLS Regression Models (With Interaction Effects for Selected Variables) Predicting Four Dimensions of Perceived Risk (Standardized Regression Coefficients) ........ 94

Table 8: Scientists’ Criteria for Research Problem Choice ........ 95

Table 9: Survey Items for Scales Measuring Scientists’ Generalized Problem Choice Orientations ........ 96

Table 10: Coding, Means, and Standard Deviations for Variables in the Study ........ 97

Table 11: Multivariate OLS Regression Models for Generalized Problem Choice Orientations (Standardized Regression Coefficients) ........ 98

KEY TO ABBREVIATIONS

American Association for Public Opinion Research ........ AAPOR
Bachelor of Arts ........ BA
Bachelor of Science ........ BS
Doctor of Philosophy ........ PhD
Intellectual Property ........ IP
International Risk Governance Council ........ IRGC
National Institutes of Health ........ NIH
National Research Council ........ NRC
National Science Foundation ........ NSF
Ordinary Least Squares ........ OLS
Principal Investigator ........ PI
United States ........ US
Variance Inflation Factor ........ VIF

CHAPTER 1

INTRODUCTION

Science is valuable not only as knowledge of the natural world (Xie and Killewald 2012:1), but also because of its contributions to enhancing the overall wellbeing of society through advancements in medicine, agriculture, and other fields of research.
In the developed world, variously defined as “postindustrial” (Bell 1973), “late modern” (Beck 1992), and “contemporary Western societies” (Knorr-Cetina 2005), science is considered a driving force and an economic engine that directs societal transformations. Social scientists in the US and elsewhere have extensively investigated the social, political, cultural, and economic implications of science, as well as the impacts of socioeconomic changes on its practice (e.g., Kleinman 2003; Buccola, Steven, and Yang 2009; Stephan 2012; Xie and Killewald 2012).

Merton (1942; 1973) defines science as a social institution governed by a set of norms that presumably influence scientists’ behavior. For him, the core of science is built on four central norms: (1) universalism (claim evaluation using pre-established impersonal criteria), (2) communism (common ownership of knowledge), (3) organized skepticism (suspended judgment in evaluation), and (4) disinterestedness (action unbiased by personal interests). In addition to these presumed norms that govern its practice, science is also said to be unique among social institutions because of its reward structure. The reward structure in science is closely linked to the peer-review process. Scientists gain recognition by sharing their inventions and discoveries with the larger scientific community through publications, presentations, and other means. Although some scientific discoveries are protected under intellectual property laws, most scientists have long-standing traditions of sharing their ideas freely with one another. Through processes of idea-sharing and peer-reviewed publication, scientific knowledge gains legitimacy and becomes the foundation on which further scientific discoveries are made.
While acknowledging the above-mentioned unique characteristics that define science as an institution, social scientists recognize that university science is often influenced by the social and economic contexts in which it is embedded (Knorr-Cetina 1999; Kleinman 2003). For instance, research that targets the public good is no more inherent to public universities than is proprietary research to private research institutions (Buccola et al. 2009). Some scholars argue that the increasing commercialization of academic science harms the distinct research cultures of both universities and private research entities by blurring the distinctions between these different scientific cultures (e.g., Glenna et al. 2011). Others reaffirm that to understand academic science today, we need to take into consideration the relationship between the social and the scientific, and how the larger social world shapes the practice of academic science (Kleinman 2003:159).

Speaking specifically to the context of the United States, science today involves large infrastructure investments, large research budgets, a sizeable professional workforce, and unique processes of institutionalization and politicization. In 2009 alone, the US spent over $380 billion on research and development. Based on the Occupational Employment Statistics Survey, the estimated annual wages of individuals in science and engineering occupations are higher than those of the average worker. Median annual wages in 2010 in science and engineering occupations (regardless of education level or field) were $75,820, more than double the median ($33,840) for all other US workers (National Science Board 2012).

However, these defining characteristics of US science today are inaccurate in describing science in the US 150 years ago or earlier (Xie and Killewald 2012). A majority of scientists living before the nineteenth century were not “professional” in the general sense and did not earn their living by scientific pursuits.
The professionalization of science in the US was only achieved in the twentieth century, with systematic training of qualified individuals and payment for services (Xie and Killewald 2012:15). Public support for US science increased rapidly during and after World War II, as did federal funding through large governmental agencies such as the National Institutes of Health (established in 1930) and the National Science Foundation (established in 1950). In this sense, scholars argue that the science efforts of World War II were a key turning point for support and funding of science in the US. However, the establishment of the National Research Council (NRC) in 1916 shows that the implications of science for policy had been recognized long before World War II.

One important consequence of the professionalization of science involves the changed perception of scientists. Though still recognized for their “intelligence and industry,” scientists may now be thought of “as ordinary professionals rather than as upper-class amateurs pursuing noble causes” (Xie and Killewald 2012:23). Over the years, much social science research has gone into understanding scientists and their work. Social scientists have extensively studied the characteristics of individuals who become scientists, their work conditions, and their decision-making processes, including: (1) methods of scientific knowledge production (e.g., Collins 1974; Latour 1987; Zenzen and Restivo 1982; Delamont and Atkinson 2001; Campbell 2003), (2) aspects of scientific cultures (e.g., Collins 1998; Knorr-Cetina 1999; Knorr-Cetina 2005), (3) research problem choices (e.g., Gieryn 1978; Busch and Lacy 1983; Cooper 2009), and (4) individual and structural determinants of scientific decisions (e.g., Hull et al. 1978; Cole 1979; Keller 1985; Widnall 1988; Alper 1993; Kuhn 1996; Zuckerman 1996; Messeri 2003; Rier 2003; Wray 2003).
However, many of these studies have not paid sufficient attention to the risks incurred in research decisions and the ways in which scientists perceive and manage those risks.

The awareness that scientists face numerous risks in their work, some involving physical harm (i.e., exposure to chemicals, spills, equipment malfunctions) and others related to research decisions (i.e., failures, inability to generate publishable findings), has led some scholars to investigate risk in science. Hackett (2005:805) argues that “risk is a central theme in the ideology and practice of science and policy for science.” According to Hackett (2005), scientific communities, funding agencies, and society at large tend to celebrate scientists who take risks by pursuing ideas that others think are unlikely to succeed. Every sort of research is risky in some way, and scientists may face these risks at different points in their career trajectories. Hackett (2005) develops a framework consisting of four main types of risks in science, related to how scientists categorize research problems as doable/not doable and important/unimportant. These risk types are (1) risk of anticipation by other scientists (important research that is doable), (2) risk of failure (important research that is not doable), (3) risk of trivial nature (unimportant research that is doable), and (4) risk of ritual nature (unimportant research that is not doable).

A well-known risk scholar, Luhmann (1993:203), argues that scientific research runs risks in the sense that decisions have to be taken without knowing what the results will be. These risks are different from the unintended consequences of technology and development. Scientists expect that in the long run scientific research will generate “truths about nature” that will not be easily refuted by the larger scientific community. The risk lies in the possibility that research may fail to generate such truths (Luhmann 1993:203).
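Hackett’s four-way classification amounts to a simple 2×2 mapping from a research problem’s perceived importance and doability to a type of risk. As an illustrative aside only (the function name and boolean encoding below are my own, not Hackett’s or this dissertation’s), the typology can be sketched as:

```python
# Illustrative sketch of Hackett's (2005) 2x2 typology of risk in science.
# The function name and boolean inputs are assumptions for illustration;
# the four labels follow the discussion above.
def hackett_risk_type(important: bool, doable: bool) -> str:
    """Map a problem's perceived importance and doability to a risk type."""
    if important and doable:
        return "risk of anticipation"    # others may publish the result first
    if important and not doable:
        return "risk of failure"         # the project may never succeed
    if not important and doable:
        return "risk of trivial nature"  # success brings little recognition
    return "risk of ritual nature"       # neither important nor doable

print(hackett_risk_type(True, True))  # -> risk of anticipation
```

Note that only one quadrant (important and doable) carries a competitive risk; the other three quadrants describe risks internal to the project itself.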
Viewed in this way, scientific knowledge production is a decision-making process in which scientists continuously struggle with risks. Additionally, scholars view scientific knowledge as the outcome of a collective, for example, of experts, methods, equipment, and experimental sites. The configuration of any functional collective involves decisions and exclusions (Valve and McNally 2013). When generating knowledge, scientists must make use of what is already available (theories, methods, practices) or prepare to put themselves at risk by venturing into new research areas or by challenging existing practices. Research decisions that determine eventual scientific knowledge production, such as problem choices, publication decisions, and laboratory management, involve numerous risks due to the uncertain nature of their outcomes.

“Scientists’ risk epistemologies,” which I define as the ways in which scientists define, evaluate, and manage risks in their research decisions, determine the overall direction of scientific knowledge production. In this sense, the risks that are the focus of this dissertation are different from physical risks such as laboratory contamination or the unintended consequences of scientific and technological developments. My focus instead is on the uncertain outcomes of research decisions, which put at stake attributes that scientists value, such as their reputations, publication prospects, and likelihood of obtaining tenure and promotion.

As mentioned before, although social scientists have previously studied various aspects of scientific work, a gap remains in investigating the risks incurred in research decisions and scientists’ risk perceptions, which are shaped by numerous individual and structural factors. In this dissertation I attempt to bring relevant insights from the study of risk to the study of science to strengthen our understanding of scientists’ risk epistemologies.
Furthermore, building on the argument that scientists’ risk epistemologies drive their research choices, I seek to develop an understanding of the association between risk perception and research problem choice. Research problem choice provides a unique niche in which the effects of various social and structural factors (e.g., institutional environments, trends in commercialization, market influences, career pressures, and professional demands) on the practice of science can be investigated. By incorporating insights from the risk literature at every point, we can expand our understanding of the interrelationships among science, risk, and research problem choice. The findings of the dissertation will provide insights into scientists’ risk definitions and understandings and indicate, to some extent, the ways in which these understandings influence their risk evaluation and management. I expect that by investigating scientists’ risk epistemologies and research problem choices, this dissertation will explicate the nature and practice of a unique social institution in contemporary US society: academic science.

Overall, my research examines three areas related to risk and science:

1. How do scientists define, evaluate, and manage risks in their research decisions?
2. What factors influence scientists’ risk perceptions?
3. What are the associations between scientists’ risk perceptions and their research problem choices?

To guide the data collection, I began with an education-based definition of a scientist (e.g., Xie and Killewald 2012:9). Individuals working with or working towards science degrees were considered scientists or potential scientists. My primary research was conducted at one large land-grant university in the US.
I limited the dissertation’s focus to researchers in the biological sciences, excluding those in the physical sciences, mathematics, engineering, and social sciences, due to considerable epistemic disunities between these fields (Knorr-Cetina 1999). I used a mixed-methods approach for the data collection, combining data gathered from a set of twenty in-depth interviews with data collected through a large-scale online survey of bioscientists. The sampling frame (n=1241) included bioscientists at various stages of their career trajectories, such as professors, associate professors, assistant professors, postdoctoral fellows, research associates, and graduate students, and both male and female scientists. I followed Dillman’s (2000) Tailored Design Method to develop and administer the survey, which garnered a response rate of 62%. I analyzed the data using the Stata statistical software package.

Guided by the theoretically driven research questions mentioned above, I organized the dissertation into three essays. The first essay (Chapter 2) explores the risk epistemologies of university bioscientists as they determine the best trajectories for their careers. This essay analyzes data gathered through in-depth interviews meant to elicit university bioscientists’ different understandings of risk. Two major issues that emerged from the interview data are examined: the ways in which participants understood and defined risk in research decisions, and the ways in which participants evaluated and managed those risks. Results show that scientists view risk as a recurrent and inherent theme in their work and recognize a large number of risks in research, such as failures, anticipation, competition, issues of freedom and control, and the controversial nature of research topics.
Overall, university bioscientists’ risk epistemologies seemed to be related to the unique reward structure of science, compelling them to use various risk management techniques while navigating their work environments.

In the second essay (Chapter 3) I attempt to quantify university bioscientists’ risk perceptions using data gathered from the online survey. In this process, I investigate the influence of a set of demographic and contextual factors (e.g., life-course, gender, sources of funding, the basic-applied orientation of research, network interactions, and perceived significance of research) on risk perception. Prior research on scientific decisions is limited in scope because of its reliance on participant observation, interviews, and other ethnographic techniques, which limit sample sizes and prevent extrapolation to a larger population of scientists. I make an effort to overcome this limitation by providing an early critical step towards quantifying scientists’ decision-making processes, including their perceptions of the risks of scientific work. The results indicate that risk perceptions of public university bioscientists differ based on the specific dimension of risk under investigation. While gender was not associated with risk perception in this population, life-course, research orientation, sources of funding, network interactions, and perceived significance of research all emerged as significant predictors of the various dimensions of perceived risk under investigation.

In the third essay (Chapter 4), I use data gathered from the online survey to investigate the associations between university bioscientists’ risk perceptions and their research problem choices. In keeping with prior research, I also investigate how research problem choices are affected by a selected set of demographic and structural predictors.
By relating risk perception to research problem choice, I demonstrate the value of using a risk perspective to understand the factors that impact problem choice. The overall results demonstrate the importance of exploring the effects of unconventional and under-studied factors such as risk perception in determining problem choice among scientists. The implications of the findings of the dissertation for US science policy are discussed.

CHAPTER 2

THE RISK EPISTEMOLOGIES OF UNIVERSITY BIOSCIENTISTS

Introduction

Most scientific discoveries take place within scientific communities that are established in legitimating organizations such as universities and research institutes. Oftentimes, scientists face tensions and paradoxes as they evaluate the risks they are willing to accept in their work. The types of risk/benefit decisions scientists make to determine which research projects to engage in, and how they engage in them, are more important than ever due to current restrictions on funding for scientific research (Xie and Killewald 2012) and greater public knowledge about how funding is acquired and used (Stephan 2012). Although much has been written in the social studies of science about how scientists make decisions, little empirical investigation has been conducted on the interplay between risk and decision-making in science.

The concept of risk encompasses uncertainty of outcomes. If our futures were predetermined, the concept of risk would make no sense (Rohrmann and Renn 2000:13). Risk exists because unexpected future consequences may occur due to natural events or human actions.
Encapsulating the potential for both desirable and undesirable outcomes, Rosa (2010:240) defines risk as “a situation or event where something of human value (including humans themselves) is at stake and where the outcome is uncertain.” The IRGC (2005) defines risk as “an uncertain consequence of an event or an activity with respect to something that humans value.” In these definitions, risk is considered an ontological state of the world, independent of our perception of it. Having recognized the conceptual difficulties in applying these ontological definitions to risk management, Aven and Renn (2009) propose rephrasing the definition of risk as “uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value.” Alternatively, some scholars argue that because risks are risks in knowledge, risks and perceptions of risks are the same thing (Slovic 1987; Beck 1992).

Compared to risk, ‘risk perception’ is a psychological construct, “a subjective judgment about the felt likelihood of encountering hazards” (Gierlach, Belsher, and Beutler 2010:1539). Individual actors (including scientists) receive and process information about possible future outcomes to form their risk perceptions of risk sources and events. Broadly, risk perception can be viewed as a subjective judgment of the likelihood of an undesirable outcome resulting from a decision at any given point in time (Slovic 1987). Risk perception is generally influenced by a large number of factors, such as the availability of information, media coverage, personal control, voluntariness, dread, and novelty of the risk, as well as demographic factors such as age, gender, race, and ethnicity (e.g., Vaughan and Nordenstam 1991; Flynn, Slovic, and Mertz 1994; Cohn, Macfarlane, and Yanez 1995; Byrnes, Miller, and Schafer 1999; Finucane et al. 2000). Scientists’ risk perceptions play critical roles in determining the direction of scientific research.
For instance, perceiving certain research problems as “risky” may deter junior scientists or scientists in smaller laboratories from pursuing such problems (Wray 2003; Hackett 2005). In comparison, a “risky” research project may attract more senior scientists or scientists in larger laboratories, as they are better equipped with resource relationships that enable them to manage the risks. When completed successfully, “risky” research may bring higher rewards such as publications in prestigious journals, enhanced scientific reputations, grant opportunities, and other rewards. Categorizing a research problem as risky or not involves scientists’ various understandings and evaluations of the notion of risk, i.e., their risk epistemologies. In this study I define scientists’ risk epistemologies as a combination of three facets: the definition, the evaluation, and the management of risk in research decisions. In this sense, risk epistemologies encompass not only how scientists perceive risks but also the risk-seeking and risk-averse behaviors that result from those perceptions. When generating knowledge, scientists must decide whether to use what is already available (theories, methods, practices) or to put themselves at risk by venturing into new research areas or by challenging existing practices. As a result, research decisions that determine scientific knowledge production involve numerous risks, such as failure. In this sense, the risks that are the focus of this dissertation are different from physical risks (e.g., laboratory contaminations, unintended environmental or health effects of products). Instead, I focus on the numerous uncertainties and value stakes in the research decisions that scientists make on the path to knowledge production. I gathered data for the study through twenty in-depth interviews meant to elicit research participants’ various understandings of the notion of risk.
In particular, I examine two major issues from the interviews: the ways in which participants defined and understood risk in research decisions and the ways in which they evaluated and managed those risks. In the process I examine the extent to which general understandings of the notion of risk, particularly those developed through risk research, can be applied to study university bioscientists. I explore what the concept of ‘risk’ means to bioscientists and how they see it as affecting their scientific careers. What risks do scientists consider most threatening or important to themselves? What risks do they consider most important to members of the scientific community? What are the mechanisms through which scientists manage risks? What roles do demographic and contextual factors such as age, rank, gender, sources of funding, and institutional policies play in the way scientists understand and deal with risk? The results provide insights into university bioscientists’ various risk definitions and understandings and indicate, to some extent, the ways in which these understandings influence their risk behaviors.

Background

Dynamics of Risk in Scientific Decision-Making

Risk has been a topic of interest to social scientists for several decades, as seen in the many different approaches they have used to study it. These approaches include rational choice (Jaeger et al. 2001), reflexive modernization (Giddens 1990; Beck 1999), systems theory (Luhmann 1993), critical theory (Habermas 1984), and cultural theory (Douglas and Wildavsky 1982; Douglas 1992). Among these different approaches, the insights gained from the study of psychological and social psychological factors in risk decision-making are particularly useful for understanding scientists’ risk epistemologies.
Psychological and social psychological approaches to risk suggest that responses to risks are driven by context variables such as the degree of personal control, the availability of information, or familiarity with a risk situation (Slovic 1987; Vlek 1996; Renn and Rohrmann 2000; Renn 2008). Scientific assessments influence individual risk decisions only to the degree that they are part of individual perceptions. Scientific assessments of risk are often substituted with beliefs that people have about the likelihood of undesirable effects (Fischhoff et al. 1981; Covello 1983; Fischhoff 2012). Individuals and social/cultural groups respond to risk primarily based on their perception of risk and not according to an objective scientific assessment. Even if personal outcome optimization were the preferred strategy, decision-making is governed not only by the probability and consequence of a risk event but also by other contextual factors such as familiarity with the risk situation or the experience of equitable risk-benefit distribution (Boholm 1998). Most people also rely on information from third parties when they are faced with unknown risks. Overall, social science approaches to risk suggest that risk decision-making among individuals is a functional relationship that represents perceived violations of what humans value, perceived patterns of occurrence, and social context variables (Renn 2008:62).

Decision-Making in Science

Risk and decision-making are integral components of the practice of modern science. Although much has been written about decision-making in science, little empirical investigation has been done so far on aspects of risk in scientific decision-making. In investigating scientific decisions, some scholars argue that we need to focus on the micro-level and understand scientists as strategic actors within changing research environments (Bercovitz and Feldman 2008; Lam 2010).
Others integrate both macro and micro level approaches (Ingram and Clay 2000; Glenna et al. 2011). The challenge is to determine how individual scientists pursue their research interests by making decisions while managing existing institutional constraints. For instance, how do university scientists select their research topics while navigating institutional priorities, criteria for tenure and promotion, and funding restrictions? Risk epistemologies provide an avenue to investigate these issues by examining how scientists make research decisions. Prior research in social studies of science has made significant contributions to explaining various aspects of decision-making in science. Some relate differences in decision-making to differences in scientific cultures (e.g., Collins 1998; Knorr-Cetina 1999; Knorr-Cetina 2005). Some explicate various methods of scientific knowledge production that involve norms, tacit knowledge, and expectations for acceptable behavior (e.g., Collins 1974; Latour 1987; Zenzen and Restivo 1982; Delamont and Atkinson 2001; Roth and Bowen 2001; Campbell 2003). Some investigate scientists’ research problem choices (e.g., Gieryn 1978; Busch and Lacy 1983; Cooper 2009). Many others analyze demographic predictors of scientific decisions such as age, life-course, and gender. Investigating the effects of “life-course” (generally defined as a composite of the related dimensions of age, professional rank, and professional experience; e.g., Rier 2003) on scientific decision-making, some scholars claim that senior scientists are resistant to change and innovations (Hull et al. 1978; Kuhn 1996) whereas junior scientists are more productive and make more significant contributions (Cole 1979; Kuhn 1996; Zuckerman 1996). Others argue that senior scientists contribute to scientific innovations more than junior scientists (e.g., Wray 2003; Messeri 2003).
Hackett (2005:805) argues that “what matters most is not the age of individual scientists but the risk profiles of the groups, which are shaped by interactions between scientists at different phases of their career.” Senior scientists’ increased contributions to innovation may also be explained by further exploring their scientific networks and the resulting social capital. For instance, scientists who are embedded in larger networks may have more social capital that allows them to collaborate in research projects where their personal expertise or resources are lacking. They achieve this by substituting for the lack of human capital with the social capital accumulated through their various network interactions (Chen et al. 2012). Studies that examine gender differences in decision-making among scientists suggest that the differences stem, at least in part, from gender differences in risk-taking (e.g., Rier 2003). Max (1982) suggests that females are less likely to make risky decisions with their work at the beginning of their careers. Schiebinger (1999) reports that female scientists are more risk-averse and hesitate to publish their work until they are extremely certain of the results, which puts them at a competitive disadvantage. Byrnes, Miller, and Schafer (1999) find men to be bigger risk-takers than women in 14 out of 16 categories of risk-taking, intellectual risk-taking being one of the most highly correlated and significant among the findings. Etzkowitz, Kemelgor, and Uzzi (2000) investigate risk behaviors of students and faculty across different scientific and engineering fields in a variety of US institutions and report that females are more risk-averse in their work than their male counterparts. In general, the literature suggests that risks tend to be judged lower by men than by women and by white people than by people of color.
Many scholars also suggest that gender differences in risk seeking occur predominantly due to a “white male effect,” whereby white men in the US have unusually low levels of risk perception (Finucane et al. 2000; McCright and Dunlap 2013).

Risk in Scientific Decisions

Among the numerous studies that examine how scientists make decisions, few focus on risk. Hackett (2005:805) argues that “risk is a central theme in the ideology and practice of science and policy for science.” According to Hackett (2005), scientific communities, funding agencies, as well as society tend to celebrate scientists who take risks by pursuing ideas that others think are unlikely to succeed. Every sort of research is risky in some way. Scientists may face these risks at different points in their career trajectories. Hackett (2005) develops a framework consisting of four main types of risks in science, related to how scientists categorize research problems as doable/not doable and important/unimportant. These risk types include (1) risk of anticipation by other scientists (important research that is doable), (2) risk of failure (important research that is not doable), (3) risk of trivial nature (unimportant research that is doable), and (4) risk of ritual nature (unimportant research that is not doable). While this framework summarizes some of the risks that arise from scientific decisions, it narrowly focuses on problem choice and does not accommodate many other scientific decisions that involve risks, such as ethical concerns, social and political sensitivity of research topics, funding restrictions, and other constraints. Luhmann (1993:203) argues that scientific research runs risks in the sense that decisions have to be taken without knowing what the results will be. These risks are different from the unintended consequences of technology and development.
Scientists expect that in the long run scientific research will generate “truths about nature” that will not be easily refuted by the larger scientific community. The risk is related to the notion that research may not be able to generate such truths (Luhmann 1993:203). In this sense, scientific knowledge production becomes a decision process in which scientists continuously struggle with the future uncertainty of not being able to generate the truths about nature that they seek. While Luhmann acknowledged the role of risk in scientific decisions, he did not conduct any empirical investigations to further explicate this line of thinking. Examining differences in scientific cultures, Collins (1998) argues that these cultures may be more or less risk-averse depending on the degrees to which evidential cultures (defined as a combination of evidential collectivism/individualism, high/low evidential significance, and high/low evidential threshold) affect decision-making. Other studies that do empirically investigate risk and decision-making in science typically focus on only one or two aspects of scientific decisions. For example, investigating gender differences in publication decisions, Rier (2003) finds that male scientists are more risk seeking at the beginning of their careers and grow more cautious with age, whereas female scientists report a reverse pattern of risk-seeking as they move through their careers. Gordon (1984) reports that scientists routinely use a risk-benefit calculus when choosing outlets for their research. Despite the advantages of top-tier journals (e.g., increased visibility and recognition), these outlets entail many risks such as high competition, high rejection rates, and amplified negative impacts on one’s reputation should research conclusions be subsequently refuted.
Studies that examine risks to credibility arising from distorted media coverage of scientific results report that media coverage is riskier and less attractive to younger scientists than to more senior scientists with well-established reputations (e.g., Dunwoody and Scott 1982; Boffey, Rodgers and Schneider 1999). My research builds on these existing studies and incorporates insights from the study of risk and risk perception to develop a deeper understanding of university bioscientists’ risk epistemologies. In particular, I examine two issues related to risk epistemologies: the ways in which research participants define and understand the notion of risk, and the ways in which they evaluate and manage risks. The primary objective is to gain a better understanding of how a range of risks are defined and dealt with by bioscientists in a US land-grant university.

Methods of Data Collection and Analysis

I conducted in-depth interviews with twenty scientists working in various biological science fields in a large land-grant university in the US. Following Xie and Killewald (2012:9) I used an education-based definition to identify individuals for the study and considered individuals working with or toward science degrees as scientists or potential scientists. I limited the study to scientists in biological science and excluded those in physical science, mathematics, and engineering in order to hold constant the effects of epistemic cultures (Knorr-Cetina 1999) and other organizational factors. I recruited subjects using a purposive sampling technique to cover a broad range of subfields, ages, genders, and career trajectories. The study sample included four full professors, four associate professors, four assistant professors, four postdoctoral and research fellows, and four PhD students. Ten of the participants were female and ten were male.
The participants’ ages ranged from 27 to 58, and they represented several different sub-fields: biochemistry, molecular biology, entomology, forensic science, microbiology, molecular genetics, plant biology, zoology, plant pathology, horticulture, crop and soil science, cell biology, and biosystems and agricultural engineering. I interviewed each participant individually using a semi-structured interview guide (Appendix B). Each interview lasted approximately one hour (ranging from forty minutes to 1.5 hours). The questions were open-ended and were used to elicit participants’ views of and experiences with risks related to scientific practices. I began the interviews with questions about participants’ selection of research projects, their relationships with other scientists, and the daily workings of their research laboratories. I used these questions to elicit attitudes and perceptions related to the risk/benefit decisions scientists make in their day-to-day research. I asked the participants to describe various risks they encounter during a research project and explain how they evaluate and manage those risks. I also asked participants to describe the types of research that they and their peers generally consider “risky” in their fields, as well as their willingness to engage in such research. Participants elaborated on the circumstances under which they would abandon research projects without completion. I audio recorded all the interviews with participants’ consent and subsequently transcribed them. The data analysis emphasizes identification of definitions, key themes, narratives, and other expressive devices. The focus of the results includes how participants define, evaluate, and manage the risks they are willing to accept in their work. Because of the relative homogeneity of the interview sample (bioscientists from one university), I do not intend to make claims for the generalizability of the findings.
However, I argue that in-depth data such as those generated through my research are valuable for developing a deeper understanding of risk epistemologies among university bioscientists.

Results and Discussion

Risk epistemologies of university bioscientists impact how they make decisions throughout their careers, encapsulating a range of decisions from the selection of research topics to the eventual publication of results. Results suggest that scientists’ risk epistemologies affect their research program creation, laboratory management, interactions with other scientists, financial decisions, time allocation, publications, perceptions of freedom and control, as well as the management of controversial topics. In the sections below I elaborate on risks encountered in these different aspects of scientific decisions, using themes and narratives identified by analyzing the interview data.

Understandings and Definitions of Risk

Early careers of university bioscientists depend on establishing unique research programs (i.e., niches) that pose exciting research problems for the individual scientists and research groups to explore. The process of searching for and establishing a unique research program can be a precarious one. It often involves navigating failed experiments or hours of lab/field work with little to show for the efforts. Arriving at a solid research program involves critical risk decisions, as discussed by an assistant professor in Plant Pathology.

“The ideal research project is one that is very doable, easy, and interesting to a broader group. The jargon that we use to describe those projects is “low hanging fruits.” Those are the projects that we should prioritize our efforts on, because there is a big payoff in terms of the interest such research will generate and the influence it might have on the development of the field in general. They are easy to do and have high certainty of success.
There is low likelihood of failure… You can contrast those with projects that are hard to do and have high payoffs. Those are good projects as well. But we want to avoid research projects and programs that are too hard to do. [For those programs] low levels of interest will be generated among peers. Investing a lot of time and money into a program that has little likelihood of working is too risky. We want to stay away from that kind of research endeavors.” (Male assistant professor, Plant Pathology)

Bioscientists’ research choices are driven by their perceptions of the risk incurred in research endeavors. Some scientists in my study pick research programs that they perceive as doable and as having a high likelihood of success. Such programs carry “low risks”: they explore questions that are considered important and exciting by the scientific community while generating a steady flow of publications. They are more “fundable” because they often align with the interests of funding agencies or produce enough preliminary data to warrant further investigation. Generally, these programs also meet organizational requirements for tenure and promotion, as well as generate rewards and incentives. In other words, these programs have low uncertainty in terms of generating expected results. However, according to some of my research participants, in any given field of research only a limited number of research programs will fall under the category of “low hanging fruits” or “low-risk research.” The alternative to low-risk research is to engage in research programs perceived as carrying greater risks (i.e., high-risk research). High-risk research is categorized as such due to various attributes that increase the uncertainty of outcomes and raise the value stakes for the scientists.
These attributes include novelty of research topics, lack of prior supporting theory or methods, lack of support from the intellectual community, extensive time and resource commitments, and inability to generate publishable results. When asked about their willingness to explore a novel research topic, a graduate student commented:

“When studying new diseases, the literature is lacking. I have to develop my own protocols for research. There is a certain inherent risk in that, as well as the controversy that surrounds new problems, new issues. It is risky. But I think that with great risk comes great reward. The reward is that it gives you the opportunity to stand out in a crowd. When you go to industry or scientific meetings, your research is looked upon as kind of a hot topic or a hot issue.” (Male graduate student, Plant Pathology)

New research topics are risky because they lack supporting scientific inscriptions such as protocols and literature, which leads to higher probabilities of failure. However, when these risks do pay off, they not only produce publications but also elevate the reputation of the individual scientists and research groups (Hackett 2005). As a result, most scientists engage in high-risk research with the expectation that successful completion of such research will lead to better incentives such as increased publications and enhanced scientific reputations. Speaking of laboratory management and hiring decisions, an assistant professor in Plant Biology stated that laboratory heads, who are usually senior scientists (assistant, associate, and full professors), generally guide the group’s research trajectory by encouraging preferred directions for research and establishing larger frameworks for research.

“I set the research agenda for my lab as the PI. My philosophy is that, I have my research agenda that I’m personally interested in. It is set by what I can acquire funding for, obviously. But I’m deciding the direction of it.
For graduate students and to some degree for postdocs in the lab, they can decide their research programs. I won’t tell them what to do. My goal as the PI is to attract people who are bright, motivated, and also interested in some of the generally same ideas that the rest of us in the lab are interested in. So by doing that we can have co-themes that run through the lab. It’s my responsibility to bring in people to the lab who work well together. We are not going to have research agendas that are going in every direction. That will be chaotic. Beyond that I mentor people in the lab and help them decide what they are going to work on, but ultimately those are their decisions. I would also say that my approach to research is to set up large research frameworks or research projects. People in the lab are welcome to work on them and a lot of them do so, because a lot of the legwork has been done.” (Male assistant professor, Plant Biology)

Although this professor is supportive of the research interests of graduate students and postdocs, he manages to exert some control by establishing an overall research direction for his laboratory. Hiring people for the lab who work on the same general thematic area is one way of ensuring a controlled research agenda for one’s laboratory. The risk in such a situation for senior scientists is in failing to establish new phenomena to study, while the risk for graduate students and postdocs is their limited ability to exercise intellectual freedom and choice. On the topic of intellectual freedom, several graduate students I spoke to indicated a different point of view than those in more senior and/or tenure-stream positions. The general view was that graduate students, postdocs, and research fellows have less autonomy to determine their own research directions, especially when they are dependent on the laboratory for financial support. This was made clear while interviewing a graduate student.
“Whenever you join a lab, you basically do what the professor wants you to do. You may choose pieces to add to the ideas, but the basic ideas come from the professor. Because they are the ones that have gotten the grant, the money that supports you. Sometimes you could get really lucky and write a grant with the professor, or if you have a specific project in mind and it gets funded, you can do it. But normally what happens is, you don’t just have one project. You have several projects that you are working on, because not all of them work. So you have to have backup plans. While you are working on what your professor wants, if you have a pet project, you kind of do that on the side. So you can pursue your own agenda, if there’s money to do it. If you are not taking too many chemicals that are supposed to go to the main project. But usually it’s your major professor that directs your research.” (Female graduate student, Crop and Soil Science)

This reaffirms that seniority and tenured positions bring autonomy and control. Graduate students and other researchers depend on senior scientists (i.e., professors who are the laboratory heads) for financial and intellectual support and generally fit into research agendas set by their senior colleagues. Mutual interests are not uncommon, and they lead to better and more fruitful collaborations. It seems that situations where junior scientists write grants and bring in funding to pursue their own research agendas are the exception rather than the rule in my study sample (only two of the eight graduate students and postdocs I spoke to indicated that they had written successful grants along with their professors). Those situations provide more opportunities for junior scientists to pursue their own research choices.
An interesting observation is that, even within research agendas set by senior scientists, junior scientists sometimes find creative ways to pursue their own interests, such as by directing resources to pet/side projects, in turn exercising their intellectual freedom to some (albeit limited) degree. In further defining risk, a graduate student stated the following.

“I think my research is risky. It takes a long time to see an outcome. You don’t know if it’s good or bad for years. During that time if someone else does the same kind of work in a different place and if they publish the work, they will get credit and your work will be ignored after that. Until you publish your work, nobody is going to give recognition to your work.” (Female graduate student, Horticulture)

Similar to this graduate student (who was nearing completion of a PhD), most research participants were conversant with the language of risk. For many participants of my study, risks involved the possibility of experimental failure and failure to win the “priority race” (i.e., not being the first to announce or publish research results). Until a research project comes to its end, individual scientists are uncertain of the research outcomes, which leads to many other uncertainties such as competition and anticipation of the research topics by other scientists, financial impacts, and failures in the peer review system. Not all participants associated risk with negative ascriptions. Some considered risk an inevitable part of the practice of science. The following is from a professor in Biochemistry who highlighted the desirable outcomes of risk-taking.

“If you don’t want to talk, you wager nothing and gain nothing. By talking you gain insights on what you could do. Many of us are not doing exactly the same thing but similar projects. So others can understand what you are doing and make suggestions for improvement. They might evaluate your papers and grants.
If they understand your ideas then you are more likely to get through to them what’s important… For the most part we have to find time to advertise ourselves. Especially with colleagues. I often hear them pointing out the significance of their research.” (Male professor, Biochemistry)

Scientists talk about their ongoing research with peers not only to gain insights on what has worked and what has not, but also to “advertise” their efforts. By advertising their work, they gain a reputation for working on a particular research area and establish the value of their research among their peers. This is also seen when presenting ongoing research at academic conferences. By giving talks before publishing, scientists put their ideas into circulation and stake “informal priority claims” (Hackett 2005:809). This is not to say that scientists do not worry about the risks of anticipation and competition that may result when ideas are freely shared with others. Most of my research participants, such as the graduate student below, were cognizant of the underlying risk of “scooping” when ideas are shared freely.

“Assume that I was working on initial stages of a project and I told someone at X university about it… and they took that idea and ran with it and did more experiments faster and got a publication out before me. Therefore my data and observations are now invalid. So yes, I think the risk of scooping is real. I think that’s why there is a culture of, publish as fast as you can. Just so that [scooping] doesn’t happen.” (Male graduate student, Plant Pathology)

One way scientists manage the risk of scooping is by not presenting their research at academic forums until the research is close to getting published. As one professor in Zoology told me, “usually by the time we present research we are close to the finish line, so we are getting the manuscript ready.
Other people won’t have time to catch up.” This suggests that some scientists in my study behave in ways that indicate they are rationally motivated, self-interested individuals who make risk decisions to maximize their personal gains and gains for their research groups. Most research participants were cautiously optimistic about talking about their research with peers. They realized the positive aspects of sharing research ideas before formal publication. However, they also acknowledged the potential risks (i.e., competition and scooping) and the need to employ one or more risk management techniques. In order to manage the risk, they may selectively provide information to certain scientists, discuss only pieces of their research, or provide only an overview of the research without discussing specifics. Sometimes they were open to talking about their research with peers when they had fairly unique research projects that could not be easily adopted by other scientists. These various risk management techniques allowed scientists to balance the risks of “scooping” with the benefits of “advertising” research. For instance,

“It’s not as easy to jump into someone else’s project as you would think. Maybe it was earlier, but now it’s not. And also, imagine you decide I’m going to work on this project because I think it’s hot. But where are you going to get the money from? You have funding for one thing. You can’t all of a sudden decide to work on this other guy’s stuff because you like that.... It’s not that simple. However, there are some people I know who can do it. They have funding and the methods. So I wouldn’t mention that [research] to them.
I’m careful about who I talk to and how much information I give out.” (Female assistant professor, Biochemistry and Molecular Biology)

Prior research has reported that peer review encourages scientists to remain productive throughout their careers while maintaining a degree of intellectual freedom (Stephan 2012). Peer review also acts as an internal check that ensures the quality of published work and information shared. On the downside, Stephan (2012) argues that the peer review system discourages risk taking because it is a system heavily based on successful results and does not reward failure, even when failure sheds light on an important scientific concept or body of literature. My participants held somewhat similar viewpoints. Participants thought of the peer review process as a necessary, albeit inconvenient, aspect of conducting research. They also talked about peer review as a check and balance system that increases the quality of published work and reduces future risks to one’s reputation. For example, one assistant professor in Biochemistry and Molecular Biology explained:

“[Peer review] is annoying. But it’s also helpful. Often times you don’t see mistakes in your paper. So if you send it out and three people come up with the same answer, you realize that you have to fix something. And I think you can take it as a lesson. It’s not always fun. You want to publish your paper today, not in a year. Sometimes it’s a bit unfair. Sometimes it’s over the top. But in general it’s useful.” (Male assistant professor, Biochemistry and Molecular Biology)

Lastly, research incurs financial risks, as all research costs money for salaries, supplies, and research facilities. Even research not funded by a government agency or foundation requires salaries and overhead.
As Stephan (2012:14) puts it, “an off-the-shelf mouse can cost between $17 and $60; a postdoc can cost $40,000 or more, when fringe benefits are included; a sequencer can cost $470,000; and a telescope can have a price tag in excess of a billion dollars.” As a result, a large part of a university bioscientist’s daily work involves writing grants to support the financial needs of his/her research laboratory. This is especially true of senior scientists, who are responsible not only for producing publishable results but also for financially supporting their research groups. Speaking of financial risks, a professor in Entomology said: “It all comes down to money. The resources you need now are much larger. The base support that used to be there when I was a grad student is not there anymore. In my mind there’s far more risk than when I started, mostly financial risk. Also you have employees. They are either technicians that are very valuable professionals, but they are on short term appointments and you risk losing them [if you can’t support them]....Students have a defined risk in that you need to get them through a degree. If you feel like you can’t do that, you have to let that person go or tell them upfront that you can’t hire them. For a student you need at least 3 years of support. I think it’s wrong to bring in a person for a short amount of funding and then promise you are going to get more. You have to be cognizant of the risk you are putting on other people. You don’t want to go into a budget hole.” (Female professor, Entomology) Overall, the discussion above suggests that bioscientists’ understandings and definitions of risk in science are closely linked to their institutional environment as well as their seniority and career trajectory.
Scientists operate within specific institutional environments that have structures in place to obtain funding, develop collaborations, limit time and resource allocation, set expectations for tenure and promotion, and set standards for publications and peer review. Scientists respond to these institutional characteristics at least to the extent that they are aware of the expectations and actively seek ways to navigate them while exercising their individual choices. For example, scientists have devised ways to maintain open communication while avoiding the risk of scooping, balance quality and quantity of publications with time and resource allocation, and exercise freedom and control in the hiring process and laboratory management. Evaluating and Managing Risks Many of the techniques used by scientists to evaluate and manage risks are linked to the reward structure in science. Rewards in science typically come in the form of recognition through publication, which is key to sustaining a scientific career. Publications bring further funding and resources to continue working in one’s research area. What is unique about the reward structure in science is that, to gain recognition, one has to be the first to communicate a research outcome. According to Merton (1957), the interest in priority and the awarding of recognition to the scientist who is first to communicate results are not new phenomena, but have been integral characteristics of science for at least three hundred years. Scientists recognize that being second in a priority race is a risk. It reduces the likelihood of publishing their research and gaining recognition, which are the main driving forces of science. Consequently, scientists constantly strive to publish research as fast as possible. This compels scientists to keep track of what other scientists are doing and what kind of research is coming out in the major journals of their fields.
When one’s research is anticipated, investigated, and published by a different scientist, one has lost the priority race. For instance, an assistant professor in Zoology explained, “When somebody else has already figured it out, I will drop the research project. If they publish it, it makes my work irrelevant. Usually we just take it down a different path. You don’t lose everything that you have already done…But you never lock yourself so much [into one project] that there isn’t another avenue. If you’ve locked yourself to the point where everything is proven wrong, there is no significance, then you didn’t formulate your hypothesis correctly. The way you manage the risk is by always having different avenues to take off on.” (Male assistant professor, Zoology) Scientists manage “anticipation” by other scientists by keeping their research projects and hypotheses open enough that, in situations where someone else publishes the same work first, they are able to redirect their research agenda onto a different path. This suggests that, contrary to purely functionalist assumptions about science and scientific norms (e.g., Merton 1942/1973), the practice of science is not just an intellectual exercise done out of curiosity and the need to generate knowledge. Rather, it is a vocation and a livelihood that require careful navigation of contingencies. Contrary to the popular scientific norm of validation through replication, in risk situations some scientists in my study see no benefit in replication. Another commonly used technique of risk management among research participants is conducting multiple research projects at the same time, some that are certain to yield publications and others that are more speculative. Explaining this further, an assistant professor in Plant Pathology stated: “I think the way to deal with that [potential failure] is, particularly early in the career, to be working on a series of projects simultaneously.
Some may have high likelihood of success and hopefully high likelihood of being interesting and important. Some others may have a high likelihood of success but only moderate importance. Some may be more risky and tend to fail. But the trick is that you don’t put all your investments into one risky project. You are never going to do highly important research if you never take a risk. But you don’t only do that. Because the reality is that they will fail time to time, or a lot of the time.” (Male assistant professor, Plant Pathology) This argument suggests that scientists’ risk calculations involve not only their subjective estimates of the likelihood of success (i.e., probability) but also their subjective evaluations of the payoff (i.e., consequence). A scientist’s risk profile may consist of a number of different research programs, each with a different estimated probability of successful completion and a different payoff. Scientists sometimes use collaborations to manage failure. Collaborations allow them to be involved in several projects at the same time, while only being responsible for a part of each project. They also allow optimization of time, expertise, research materials, and funding. Successful collaborations lead to better publications at a faster pace and allow scientists to co-author a higher number of papers, which may improve their career prospects or the leverage of their grant applications. Collaboration, however, means sharing authorship and recognition. The leverage of a co-authored publication, or a publication in which one is not the first author, is generally lower than that of a publication with few authors or one author. Scientists are cognizant of this fact and use a risk/benefit calculus in making publication decisions. On the topic of research collaborations and authorship decisions, a postdoctoral fellow told me that the research group carefully evaluates the contributions of each scientist to determine authorship.
For instance, “If [collaborating lab or scientist] just provided the plasmid you put them in the acknowledgment rather than give them authorship.” Once again, this shows that, contrary to a purely functionalist perspective of science that views knowledge as collectively shared and owned, scientists view science as knowledge produced “by someone or few people” who then should be adequately rewarded for their contributions. Scientists’ publication decisions are sometimes made using a rational choice approach. Each scientist evaluates the costs and benefits of publication in different venues and makes decisions that optimize their personal gains and gains to the research group. For example, scientists in my study evaluate their rank, career trajectories, and institutional expectations to determine whether to aim for fewer publications in more prestigious journals or more publications in lower-tier journals. A new assistant professor put this in perspective: “I think the goal is to publish enough so you can satisfy the tenure reviewers. You don’t necessarily need a publication in Science or Nature. In fact, our department prefers when you publish anything rather than nothing with the ambition to publish in Science… I wouldn’t mind publishing in Science but I think you have to be realistic whether you can publish there or not.” (Male assistant professor, Biochemistry and Molecular Biology) This perspective was different from that of a tenured full professor in Biochemistry, who argued that he tries to publish in the most prestigious journals. This individual stated that he would not publish below a third tier journal “even if someone said we will take your paper.” Differences in publication decisions among scientists with different levels of seniority result from their perceptions and evaluations of the risks incurred in the process.
As Hackett (2005:805) points out, “routine publications have little incremental value for established reputations, but greater incremental value for nascent reputations.” Hence, according to Hackett (2005), junior scientists generally tend to manage risks related to tenure and promotion by publishing more articles in lower-impact journals, whereas senior scientists take larger risks by pursuing publications in more prestigious journals. Contrary to this rational choice approach to publication decisions, a few participants indicated a preference for “satisficing” as an approach to decision-making (Simon 1955; Agosto 2002). As opposed to finding the best option available (optimal decision-making), some scientists in my sample show a preference for subjectively defined thresholds of acceptability that determine their publication decisions. For example, the reflections below by an assistant professor in Plant Biology show that publication decisions are a tradeoff not only between quality and quantity, but also the amount of time and resources one is willing to invest in collecting data. Although journals set criteria for the peer review process, individual scientists use their subjective risk evaluations to determine how much work (i.e., data collection and experimentation) is good enough to warrant a publication. “My research is based generally on testing questions. I feel ready to submit a paper when I know we have enough data to answer the question. You can always throw more data and investigate from multiple angles. Once you have answered the question, I feel that there are diminishing returns. You can answer it well enough, or you can answer it definitively… In general I ask myself where I think I can submit this paper. If the answer doesn’t change by me collecting more data, then I’m not inclined to collect more data.
It’s definitely the tradeoff between investment and payoff.” (Male assistant professor, Plant Biology) Scientists are conscious of the social and political sensitivity of their research topics. However, opinions voiced by my participants indicate that being in a public university compels scientists to conduct research that does not generate a large amount of controversy outside of academia. As one graduate student put it, “final commercialization is beyond our control. As growers and breeders we just try to do the research.” A similar sentiment was also expressed by a professor in Forensic Science: “I have some students who work on pigmentation genes that determine hair color. So if you have a sample you might get an idea of what the individual look like. It’s controversial because it may lead to racial profiling. But it’s not controversial in our field. If someone shot someone and we can say what the person looks like, that can be very helpful. But people are concerned about racial profiling.” (Male professor, Forensic Science) Notice how this individual argued that the research is not controversial in his field although some people may be concerned. This was a common theme among scientists in my study who discussed risks of conducting research on socially and/or politically sensitive subjects. As long as peers in their scientific field did not consider their research topics intellectually “wacko” or “crazy”, most scientists were not deterred from pursuing a topic out of concern for social or political sensitivity. As before, the overall results suggest that scientists’ risk evaluation and management vary by seniority and context. At times junior scientists tend to take more risks. Other times, senior scientists may be better placed to take greater risks as they have already built up their reputations, have better funding and resources, and have a better understanding of how their fields operate.
Most of my research participants, such as the associate professor below, alluded to the fact that a scientist’s overall risk preference changes over time. “At the beginning you are always more enthusiastic, that’s why you get hired in the first place. And you are willing to go out and throw the big net. But some people are more cautious at that point. I know I was. As you go on, you get a better feel of what is more likely to be fundable. And you have a lot more things [projects] that you are doing, so you are probably submitting fewer grants and having a higher percentage of them getting funded, because you figured out how it works. So I would say that probably, if you are very successful, you can afford to be a bit riskier.” (Female associate professor, Microbiology and Molecular Genetics) Conclusion Investigating scientists’ risk epistemologies allows us to develop a deeper understanding of their research decisions and the ways in which they arrive at those decisions. In making scientific risk decisions, at times scientists conform to the existing institutional structures. Other times they challenge these structures, persist through them, or compromise their actions. Through these processes, scientists find ways to exercise their agency within their work environments. The results suggest that the ways scientists define, evaluate, and manage risks (i.e., risk epistemologies) matter to the extent that they allow for more creative ways for scientists to navigate the institutional environments in which they are embedded. My research shows that scientists view risk as a recurrent and inherent theme in their day-to-day work. Some even go as far as to classify risk as part of the “nature of science” (Female professor, Biochemistry).
As Hackett (2005) states, “every sort of research is risky in some ways, so scientists cannot choose between risky and safe research problems, only between problems that are risky in one way and those risky in another.” Scientists are cognizant of the many uncertainties and value stakes in their research decisions, such as failure, anticipation, competition, restrictions on funding and time, issues of freedom and control, and the controversial nature of research topics. Scientists’ risk epistemologies are tightly linked to the unique reward structure of science, which compels scientists to engage in a race to claim priority for their discoveries. The typical way to establish a reputation and gain recognition in science is to publish faster and be the first person to communicate a particular research result to the larger scientific community. In this process, scientists face the risk of competition from other scientists or research groups pursuing similar research programs who have the potential to publish results before them and gain priority. Small laboratories and small research groups with limited funding and resources may be at a competitive disadvantage. Competition from large research groups may even prevent smaller groups from venturing into similar research where their chances of winning a priority race are lower (Hackett 2005). Scientists are also cognizant of the possibility of their research programs being anticipated by other scientists. This leads to a culture of “publish fast; publish first” within academia and non-academic research institutions. Additionally, financial risk is very real and ever present when conducting research. No laboratory can run without sufficient funding. Senior scientists and scientists who are PIs on research projects have the responsibility to support not only their own projects but also staff and graduate students.
Maintaining a steady flow of funding is an essential requirement for senior scientists, and one most of them carefully navigate through various means of risk management. As discussed before, these risk management techniques include writing and submitting grants with research associates and graduate students, advertising one’s research in various academic forums, redirecting resources to smaller projects that may yield preliminary data that can then form the basis for future grant proposals, and dividing time between benchwork and grant writing. In addition to managing financial risks, scientists use risk management techniques to navigate the system of peer review and publications, as well as to handle controversial and sensitive research topics. Publication decisions are usually made using subjective risk estimates. Not all scientists strive to publish their research in the most prestigious journals. They tend to compare the quality of their own research with research generally published in different journals and make estimates about their likelihood of publishing in those journals. Scientists’ publication decisions may change and evolve as their careers progress, reflecting changes in risk perception and the ability to manage risk. For example, senior scientists with well-established reputations may strive to publish their work in high-prestige journals such as Science or Nature, whereas graduate students or new assistant professors may settle for lower-tier journals. As far as managing the sensitivity of research topics, scientists in my study are more concerned about risks related to how their research topics are viewed by their peers and the larger scientific community than by the wider society. This observation may have arisen due to the fact that I conducted my research in a public university where most participants engaged in research that did not carry immediate societal impacts.
Overall, it appears that managing risks is an integral part of the decision processes that lead up to the development of a unique research niche for university bioscientists. Once developed, such a research niche would ideally a) be interesting and exciting to the scientist; b) be unique enough to allow the scientist to manage anticipation and competition from other scientists; c) be relevant and generate institutional interest and funding; d) build on existing literature; e) be considered reasonable and worthwhile by peers and the larger scientific community; f) produce publishable results; and, g) have direct or indirect (largely positive) effects on society and the environment. In the process of creating and continuing a unique research agenda, risks become a critical and unavoidable part of the equation. The ways scientists perceive and manage risks may mean the difference between successfully completing a research project and communicating results or failing to establish a reputable scientific career. Research along this line is important not only because of its descriptive and analytical value, but also because it sheds light on the workings of a unique set of individuals – bioscientists in a US land-grant university. Through this research I have taken a critical early step towards using a risk perspective to develop a deeper understanding of the workings of university bioscientists in the US. CHAPTER 3 RISK PERCEPTIONS OF BIOSCIENTISTS AT A US LAND-GRANT UNIVERSITY Introduction For the past several decades, the United States has been at the forefront of scientific research. In spite of competition from other countries such as China, scholars argue that US science continues to hold its place as a world leader of innovation (Xie and Killewald 2012). In 2009 alone, the US spent over $380 billion on research and development (National Science Board 2012).
Part of the justification for large amounts of funding for science in the US comes from its innovative roles in areas such as agricultural development and medicine. In spite of holding a prominent place in the world, scientists in the US are currently facing many challenges such as funding cuts and increased costs of materials and labor. These challenges make the processes of scientific decision-making critical in the sense that they intensify the negative consequences of (failed) research decisions. As a result, Stephan (2012:3) argues that the desire to minimize risks plays a major role in the contemporary decision-making processes of [US] scientists. Furthermore, the underlying incentive system in science is largely based on obtaining successful outcomes, which also discourages risk-taking (Stephan 2012:139). Under these circumstances, one can argue that “playing it safe” (i.e., risk avoidance) is a more rational strategy for US scientists than undertaking research that involves large risks. Although risk avoidance may generate research that provides easily identifiable findings, according to Hackett (2005:805), sometimes “it is risky not to take risks.” Safe research eventually fails to capture the interest of reviewers and funding agencies, thereby necessitating breakthrough research that may incur greater risks. Risk in scientific research entails, among other things, pursuing research projects that have a low likelihood of success. However, when completed successfully, high-risk research may bring high rewards such as publications in highly prestigious journals and enhanced scientific reputations. It should be noted, however, that no research program, regardless of its theoretical and methodological soundness, is completely free of risk. At any given point, individual scientists or scientific organizations face several options for choosing research programs, each of which may be associated with potential positive or negative outcomes.
Scientists may select a research program that promises more benefits than losses compared to other available options, based on their subjective risk perceptions. Prior studies have made significant contributions to understanding scientists’ decision-making processes including: (1) methods of scientific knowledge production (e.g., Collins 1974; Latour 1987; Zenzen and Restivo 1982; Delamont and Atkinson 2001; Campbell 2003), (2) aspects of scientific cultures (e.g., Collins 1998; Knorr-Cetina 1999; Knorr-Cetina 2005), (3) scientists’ research problem choices (e.g., Gieryn 1978; Busch and Lacy 1983; Cooper 2009) and (4) individual and structural determinants of scientific decisions (e.g., Hull et al. 1978; Cole 1979; Keller 1985; Widnall 1988; Alper 1993; Kuhn 1996; Zuckerman 1996; Messeri 2003; Rier 2003; Wray 2003). However, most of these studies have not paid sufficient attention to risk in scientific endeavors and how scientists perceive risk in their research decisions. The few studies that do take risk into consideration (e.g., Rier 2003; Hackett 2005) are limited in scope because of their reliance on participant observation, interviews, and other ethnographic techniques, which limit sample sizes and prevent generalizations to a larger population of scientists. Lacking are empirical studies that attempt to quantify scientists’ risk perceptions. In my research I begin to fill this gap by adding breadth to the existing literature on scientific decisions. Specifically, I use data gathered from an online survey of bioscientists in a US land-grant university to investigate their risk perceptions. Scientists’ risk perceptions are important in guiding their research decisions. These perceptions affect not only the day-to-day practices within research laboratories, but also overall problem choices. Risk perceptions, in turn, are influenced by a variety of individual and structural factors.
Although prior research has investigated how some of these factors (e.g., age, gender) influence scientific decisions, none has directly examined their relationship to risk perception. In my research I attempt to quantify the associations between scientists’ risk perceptions and a selected set of demographic and structural factors. These include life-course, gender, sources of funding, basic-applied orientation of research, network interactions, and perceived significance of research. The resulting analysis is more comprehensive in the sense that it not only uses a relatively new paradigm (risk and risk perception) to examine scientific decisions, but also attempts to simultaneously investigate the effects of a range of demographic and structural factors on risk perception, giving rise to some generalizations about scientific risk decisions. Background Risk and Risk Perception Any discussion of scientists’ risk perceptions should begin with a clear understanding of the concepts of risk and risk perception. In recent years, risk has become a salient theme intrinsically linked to scientific and technological development. Risk is a subject of debate among academics in fields such as the biophysical sciences, economics, psychology, and the social sciences. Social scientists, who have engaged in risk research for over four decades, have investigated a wide range of relevant topics including the ways in which people conceptualize risk, cope with risk, make decisions on levels of acceptability, and determine which sources to trust in a risk debate. The concept of risk encompasses uncertainty of outcomes. If our futures were predetermined, the concept of risk would make no sense (Rohrmann and Renn 2000:13). Risk exists because unexpected future consequences may occur due to natural events or human actions.
Encapsulating the potential for both desirable and undesirable outcomes, Rosa (2010:240) defines risk as “a situation or event where something of human value (including humans themselves) is at stake and where the outcome is uncertain.” Compared to risk, ‘risk perception’ is a psychological construct, “a subjective judgment about the felt likelihood of encountering hazards” (Gierlach, Belsher, and Beutler 2010:1539). Individual actors (including scientists) receive and process information about possible future outcomes to form their risk perceptions towards risk sources and events. Risk perception can be viewed as a subjective judgment of the likelihood of an undesirable outcome resulting from a decision at any given point in time (Slovic 1987). Risk perception is generally influenced by a large number of factors such as the availability of information, media, personal control, voluntariness, dread, and novelty of the risk as well as demographic factors such as age, gender, race, and ethnicity (e.g., Vaughan and Nordenstam 1991; Flynn, Slovic and Mertz 1994; Cohn, Macfarlane and Yanez 1995; Byrnes, Miller and Schafer 1999; Finucane et al. 2000). Risk in Scientific Decisions Although the study of risk has been well established as an area generating considerable academic interest, relatively few scholars have investigated risk and risk perception related to science. Hackett (2005:805) argues that “risk is a central theme in the ideology and practice of science and policy for science.” According to Hackett (2005), scientific communities, funding agencies, as well as society tend to celebrate scientists who take risks by pursuing ideas that others think are unlikely to succeed. Every sort of research is risky in some way. Scientists may face these risks at different points in their career trajectories.
Hackett (2005) develops a framework consisting of four main types of risks in science related to how scientists categorize research problems as doable/not doable and important/unimportant. These risk types include (1) risk of anticipation by other scientists (important research that is doable), (2) risk of failure (important research that is not doable), (3) risk of trivial nature (unimportant research that is doable) and (4) risk of ritual nature (unimportant research that is not doable). While this framework summarizes some of the risks that arise from scientific decisions, it narrowly focuses on research problem choice and does not accommodate many other areas of scientific work that involve risks such as issues of ethics, social and political sensitivity of research topics, funding restrictions, and other constraints. Luhmann (1993:203) argues that scientific research runs risks in the sense that decisions have to be taken without knowing what the results will be. These risks are different from unintended consequences of technology and development. Scientists expect that in the long run scientific research will generate “truths about nature” that will not be easily refuted by the larger scientific community. The risk is related to the notion that research may not be able to generate such truths (Luhmann 1993:203). In this sense, scientific knowledge production is a process in which scientists continuously struggle with the future uncertainty of not being able to generate truths about nature that they seek. Additionally, Collins (1998) argues that scientific cultures may be more or less risk aversive depending on the degrees to which evidential cultures (defined as a combination of evidential collectivism/individualism, high/low evidential significance, and high/low evidential threshold) affect decision-making in science.
While both Luhmann (1993) and Collins (1998) acknowledge the role of risk in scientific decisions, little empirical investigation has been conducted to further explicate this line of thinking. Although only a few studies have directly explored scientific decisions from a risk perspective, common in the social studies of science are examinations of demographic and structural predictors of scientific decisions. Below I engage in an analysis of this body of literature in order to develop some testable hypotheses for my research on scientists’ risk perceptions. Much prior research has associated age/life-course and gender with risk perception and decision-making among scientists. On the question of life-course, generally defined as a composite of the related dimensions of age, professional rank, and/or professional experience within one’s professional domain (e.g., Rier 2003), some scholars claim that more senior scientists are resistant to change and innovation because they tend to protect prevailing theories (Hull et al. 1978; Kuhn 1996). Others suggest that junior scientists are more productive and make more significant contributions to science (Cole 1979; Kuhn 1996; Zuckerman 1996). More recently, however, scholars have claimed that senior scientists contribute to scientific innovations more than junior scientists. For example, Wray (2003) reports that middle-aged scientists are responsible for initiating more scientific revolutions than young scientists. Messeri (2003) affirms that senior scientists are better positioned to support new and controversial research due to their resource relationships and interactions with the larger scientific community. Hackett (2005:805) argues that senior scientists may take on riskier problems than their junior colleagues because of the overall characteristics of their risk profiles. This discussion on life-course leads to my first hypothesis.
Hypothesis 1: Senior scientists are more risk seeking in their research decisions than their junior colleagues.

Social scientists find that gender plays a vital role in science. Some studies find that gender differences in scientific decisions result from the hierarchical nature of science, in which women are more likely to face discrimination (e.g., Keller 1985; Widnall 1988; Alper 1993). Other studies that investigate gender differences in decision-making among scientists suggest that the differences stem, at least in part, from gender differences in risk-taking (e.g., Schiebinger 1999; Rier 2003). For instance, Max (1982) suggests that females are less likely to make risky decisions with their work at the beginning of their careers. Schiebinger (1999) argues that female scientists are more risk aversive and hesitant to publish their work until they are extremely certain of the results. Several other scholars have also found that females are more risk aversive in their work than their male counterparts (Byrnes, Miller, and Schafer 1999; Etzkowitz, Kemelgor, and Uzzi 2000). This leads to my second hypothesis.

Hypothesis 2: Female scientists are more risk aversive in their research decisions than male scientists.

Some prior studies associate sources of funding with scientists’ research decisions and demonstrate that public financial support encourages more basic investigation while private financial support encourages more applied investigation (e.g., Buccola, Ervin, and Yang 2009; Cooper 2009; Glenna et al. 2011). In general, the objective of basic research is to gain more comprehensive knowledge of the subject under study without specific applications in mind, while applied research attempts to meet specific, recognized needs (National Science Board 2008). Basic research sets the foundation for future applied discoveries, while applied research leads to more immediate economic gains. Buccola et al.
(2009:1245) argue that, in general, basic research tends to be less patentable and excludable, discouraging private sector investments and funding. Applied research programs and projects that are outcome oriented and patentable are more frequently funded by private sector organizations. However, private funding usually exerts more pressure on scientists to obtain successful outcomes. Continued funding from the private sector for applied research is contingent upon obtaining targeted outcomes in current research. As a result, I expect that scientists who conduct more applied research have higher stakes in their research decisions, which causes them to minimize risks in order to secure expected results. Based on this discussion, I hypothesize:

Hypothesis 3: Scientists who receive private funding are more risk aversive than scientists who do not receive private funding.

Hypothesis 4: Scientists who conduct more “applied” research are more risk aversive than scientists who conduct more “basic” research.

Additionally, social scientists argue that universities are influenced by the social and economic contexts in which they are embedded (Knorr-Cetina 1999; Kleinman 2003). Scientists do not conduct their research in isolation. Their laboratory operations are embedded in “transscientific fields of interaction” that involve not only themselves and other scientists but also other stakeholders such as administrators, grant agencies, and publishers (Knorr-Cetina 2005:191). As a result, in the modern practice of science a larger circle of individuals affects research decisions such as selection of research topics, funding, and communication. Scientists develop relationships with other individuals in their scientific networks. Scientists who interact with individuals in their networks more frequently may be better positioned to take on “risky” research projects due to enhanced resource-relationships.
They may have more avenues to share ideas and expertise, funds, research materials, research facilities, and technological platforms, which would allow them to better manage the risks incurred in research. Additionally, scientists who are embedded in larger networks may have more social capital that would allow them to collaborate in research projects where their personal expertise or resources are lacking. In this sense, they substitute for the lack of human capital with the social capital accumulated through their various network interactions (Chen et al. 2012). This leads to my next hypothesis.

Hypothesis 5: Scientists who have a higher degree of interaction with other individuals in their networks are more risk-seeking.

Risk scholars investigate optimistic bias as a concept related to risk perception and behavior (Cohn et al. 1995; Weinstein and Klein 1996). Optimistic bias refers to people’s tendency to think their risk is less than that of their peers. Several studies report a positive relationship between perceptions of personal control over the outcomes of an event and greater optimistic bias for the event (Weinstein 1980; Harris 1996; Klein and Helweg-Larsen 2002). Perceptions of control are generally associated with personal risk estimates (Klein and Helweg-Larsen 2002:438), so that the greater one’s perceptions of control, the lower one’s risk estimate. Stemming from the studies of optimistic bias and perception of control, I argue that similar biases are applicable to scientific decision-making. In particular, scientists who perceive higher significance in their own research agendas tend to be more optimistically biased, which lowers their risk estimates for future research decisions. This leads to my final hypothesis.

Hypothesis 6: Scientists who perceive higher significance in their research are more risk-seeking.
Methods of Data Collection and Analysis

My data come from an online survey of bioscientists in a large land-grant research university in the US. Following Xie and Killewald (2012:9), I used an education-based definition to identify individuals for the study and considered individuals working with or toward science degrees as scientists or potential scientists. I limited the study to scientists in biological science and excluded those in physical science, mathematics, engineering, and the social sciences in order to hold constant effects of epistemic cultures (Knorr-Cetina 1999) and other organizational factors. I conducted the online survey following Dillman’s (2000) Tailored Design Method. Initially I interviewed twenty scientists from bioscience departments through purposive sampling. The sample consisted of four professors, four associate professors, four assistant professors, four postdoctoral and research fellows, and four PhD students. Ten of the participants were female and ten male. I developed the survey questionnaire using information gathered during the interviews. Subsequently, I conducted five cognitive interviews to refine the survey instrument’s structure and content and to improve its validity (see Appendix C for the full survey questionnaire). Initially I sent out the survey to 40 individuals for pretesting. Once it was deemed acceptable, I administered the survey to all individuals in the sampling frame. A comprehensive list of bioscientists in the target population did not exist, primarily because of the fluid nature of each unit (department) selected for the research. Individuals were regularly moving in and out of their research environments for various reasons, such as internships, promotions, completion of PhDs, and pursuing other careers. Therefore I developed the sampling frame by obtaining and combining the most current (spring 2013) email lists of faculty, researchers, and graduate students in bioscience departments.
After redundant email addresses were removed, this process resulted in a sampling frame of 1241 unique e-mail addresses. I calculated the final response rate (62%) using the AAPOR (American Association for Public Opinion Research) outcome rate calculator. This response rate is at the higher end of response rates previously reported by Sheehan (2001) for online surveys. However, the 38% non-response rate warranted further investigation. I compared respondents and non-respondents using their ranks (rank was the only information available to the researcher for all individuals). This comparison revealed that while the sampling frame consisted of 39.5% professors, 14.7% postdocs and research associates, and 45.7% graduate students, the final sample consisted of 32.9% professors, 17.3% postdocs and research associates, and 49.7% graduate students. As far as rank is concerned, then, the sample closely resembles the population, reducing concerns about non-response bias.

Dependent variables:
Following Sjoberg (1999), I used a “mean risk ratings” method (i.e., rating whether a particular technology, process, action, or event is of high or low risk, based on a Likert-type scale) to develop measures for “risk perception.” Risk perception was operationally defined as the degree of expressed preference for risk-seeking and risk-aversive behavior that scientists reported on 15 different items in the survey. See Table 1 for the survey questions, wording of the items, means, and standard deviations. A high score on an item indicates a preference for risk-seeking behavior (less perceived risk), whereas a low score indicates a preference for risk-aversive behavior (high perceived risk). The 15 items measuring risk perception were developed using the results of the twenty in-depth interviews that I conducted to elicit university bioscientists’ various understandings of the notion of risk.
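The non-response check reported earlier in this section reduces to simple arithmetic, sketched below. The respondent count is back-calculated from the reported 62% response rate and 1241-address frame rather than taken from the raw data, so it is an approximation.

```python
# Sketch of the non-response check: compare the rank composition of the
# sampling frame with that of the respondents. Percentages come from the
# text; the respondent count is derived, not observed.

frame_size = 1241
response_rate = 0.62
respondents = round(frame_size * response_rate)  # approximately 769 completes

frame_pct = {"professors": 39.5, "postdocs": 14.7, "grad_students": 45.7}
sample_pct = {"professors": 32.9, "postdocs": 17.3, "grad_students": 49.7}

# Percentage-point gap by rank; small gaps suggest limited non-response bias.
gaps = {rank: round(sample_pct[rank] - frame_pct[rank], 1) for rank in frame_pct}
print(respondents)  # 769
print(gaps)         # {'professors': -6.6, 'postdocs': 2.6, 'grad_students': 4.0}
```

The largest gap (professors underrepresented by about 6.6 percentage points) is modest, which is the basis for the claim that the sample resembles the population on rank.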
From analyzing the interview data I identified four concepts that I believe are captured by the 15 single items. I carried out an exploratory factor analysis to test for the existence of latent factors. The factor analysis identified four factors with eigenvalues greater than 1. I used the clusters of variables in each of the four factors to develop four indices measuring risk perception, calculated as the means of their constituent items. I applied Cronbach’s alpha coefficient to measure homogeneity of the items within indices and examined the inter-item correlations (as measured by Pearson’s r) to test for multicollinearity. All indices reported sufficient reliability as measured by Cronbach’s alpha values. Table 2 summarizes factor loadings for individual factors and results of the reliability analysis as well as inter-item correlations. The four indices form the four main dependent variables of my study and are labeled as risk perception of: (1) new topics, (2) controversial topics, (3) competition, and (4) visibility.

Independent variables:
Table 3 provides coding, means, and standard deviations for the independent variables, the control variable, and the four dependent variables used in the analyses. I operationalized life-course using two variables: “number of years in the degree field” and “seniority.” Information on number of years in the degree field was gathered by asking respondents to indicate how long they had been working in their particular research areas. This is kept as a continuous variable in the analysis. To measure seniority I developed an ordinal variable in which respondents were rank ordered from “only BA/BS degrees” to “full professor” based on a combination of their levels of education and professional ranks. Gender was coded “1” for male. In the survey I inquired about respondents’ sources of funding by asking whether they had received funding from any governmental or private organizations. I dummy coded sources of funding as “1” for having received private funding.
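Returning to the index-construction step described above (averaging the items that load on a factor, then checking internal consistency with Cronbach’s alpha), a minimal sketch follows. The item responses are hypothetical Likert-type values invented for illustration, not survey data.

```python
# Minimal sketch of index construction and reliability checking.
# Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / var(total score)).

from statistics import mean, variance

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]      # per-respondent total score
    item_var = sum(variance(vals) for vals in items)  # sum of item variances
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def index_scores(items):
    """Per-respondent index score = mean of the item responses."""
    return [mean(vals) for vals in zip(*items)]

# Three hypothetical items from one factor, five hypothetical respondents.
items = [
    [4, 2, 5, 3, 4],
    [5, 1, 4, 3, 4],
    [4, 2, 5, 2, 5],
]
print(round(cronbach_alpha(items), 2))  # 0.92
print(index_scores(items))
```

With these invented responses alpha comes out around .92, in the range the text describes as sufficient reliability; each respondent’s index is simply the mean of his or her three item scores.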
I measured the basic-applied orientation of scientists’ research agendas by asking respondents to locate their research programs on a scale from “purely basic” 1 to “purely applied” 7. To measure network interactions I asked scientists to rate their frequency of communication with other scientists and other individuals in their scientific networks on scales of 1 to 5 (1-rarely to 5-daily). I developed a composite measure based on the average of the 13 items to indicate the frequency of scientists’ communications with other individuals in their networks. Index reliability was measured using Cronbach’s alpha (.859) and inter-item correlations (all correlations were less than .70); the scale showed good internal reliability on both measures. See Table 4 for means and standard deviations of the individual items used to develop the composite measure for network interactions. To measure scientists’ perceived significance of their research I asked respondents to rate how they believe their research and publishing has benefitted (or will benefit) the scientific community and the larger society on a scale from “not at all” 1 to “a great deal” 7. I developed a composite measure based on the average of the 6 items selected to indicate how scientists perceive the significance of the outcomes of their research. Scale reliability was measured using Cronbach’s alpha (.852) and inter-item correlations (all correlations were less than .70). See Table 5 for means and standard deviations of the individual items used to develop this composite measure. I used race (sometimes found to correlate with decision-making under risk; e.g., Flynn et al. 1994; Finucane et al. 2004) as a control variable in the analysis and dummy coded race as “white” 1 and “nonwhite” 0. Initially, I conducted regression diagnostics through influence analysis using DFBETA statistics and tests for multicollinearity.
Correlation analysis revealed a high correlation between number of years in degree field and seniority (0.76). Since both of these variables measure related aspects of the professional domain of a scientist’s life-course, I decided to test them in separate regression models. To test my hypotheses about the effects of selected demographic and contextual factors on scientists’ risk perceptions I employ eight multivariate OLS regression models: two each (one with years in degree field and one with seniority to measure life-course) for the four different dimensions of risk perception, namely, (1) risk perception of new topics, (2) risk perception of controversial topics, (3) risk perception of competition, and (4) risk perception of visibility. All regression models have the same independent variables and control variables for ease of comparison (Table 6). To further test for multicollinearity, I investigated the VIF values for the eight regression models. Mean VIF was 1.12, with individual VIF values ranging from 1.07 to 1.15, reducing concerns about multicollinearity. Because of the ordinal nature of the dependent variables, I conducted a separate set of analyses using ordered logistic regression. The results of the ordered logistic regression models were consistent with the OLS regression models in terms of direction and significance of the effects. For ease of reporting and interpretation, only OLS regression results are reported below. In general, OLS regression techniques are more commonly used in risk analysis than ordered logistic regression methods (e.g., Slovic, Fischhoff, and Lichtenstein 1979; Sjoberg 1999; Moen and Rundmo 2006; Sjoberg 2008). Additionally, I also tested a set of eight OLS regression models that included potential interactions between life-course and gender (i.e., years in degree field x gender, seniority x gender), for which results are reported in Table 7. The implications are discussed below.
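The collinearity reasoning above can be checked with a small worked example. With only two correlated predictors, each variable’s R-squared against the other equals the squared pairwise correlation, so the variance inflation factor is VIF = 1 / (1 - r^2). Applying this to the reported 0.76 correlation between years in degree field and seniority:

```python
# VIF implied by a pairwise correlation r (exact for the two-predictor
# case, where R_j^2 = r^2). The 0.76 value is the correlation between
# years in degree field and seniority reported above.

def vif_from_r(r):
    """Variance inflation factor implied by a pairwise correlation r."""
    return 1 / (1 - r ** 2)

print(round(vif_from_r(0.76), 2))  # 2.37
print(round(vif_from_r(0.0), 2))   # 1.0 (uncorrelated predictors inflate nothing)
```

The implied VIF of roughly 2.37 sits below common rule-of-thumb cutoffs of 5 or 10, so the decision to run the two measures in separate models rests on their conceptual overlap as life-course indicators rather than on a formal VIF violation; the mean VIF of 1.12 in the fitted models confirms that the remaining predictors are nearly uncorrelated.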
Results and Discussion

I test each of the six hypotheses using results from the multivariate analyses shown in Table 6. I also compare the effects of the demographic and contextual factors used in the analyses across the models and discuss their implications for scientific decision-making.

Effects of Life-course on Risk Perception

Results show that the effects of the selected demographic and contextual factors on risk perceptions of university bioscientists differ based on the specific dimension of risk under investigation. As far as effects of life-course are concerned, I hypothesized that senior scientists would be more risk seeking. When life-course was operationalized as “seniority” based on respondents’ levels of education and rank, no significant effects were observed for the four dimensions of risk under investigation. Similarly, when life-course was operationalized as scientists’ years of experience in their degree fields, I did not observe any significant effects for two of the dimensions of risk under investigation (new topics and competition). However, I observed significant effects on pursuing controversial research topics and gaining more visibility for research. According to Table 6 Model C, with every one-year increase of experience in degree field, scientists’ mean risk score for perceived risks of controversial research decreases by .077 standard deviations (controlling for all other covariates in the model), indicating that as professional experience increases, scientists become more risk aversive in terms of pursuing controversial research. This result contradicts my initial hypothesis about life-course and risk seeking. It appears that as scientists’ years of experience increase they become more cautious about engaging in controversial research. This may be a result of the differential effects of failure in controversial research areas on the reputations of scientists at various points in their careers.
The finding aligns with Hackett’s (2005:805) argument that “senior scientists face greater downside risks from certain sorts of failure whereas junior scientists can expect to have more time to make up for failure or reap the benefits of a gamble that pays off.” Failure in a controversial research project may have more negative impacts on the well-established reputations of scientists who have been in their research fields for a longer time, prompting them to avoid such risks, whereas the magnitude of the negative impact may be lower for the nascent reputations of scientists with little experience. At the beginning of their careers, scientists lacking experience have more leeway to take on controversial research projects, particularly when they are trying to establish unique research programs. As seen in Table 6 Model G, years in degree field also has a significant positive effect on the perceived risks of gaining visibility for one’s research (p<.05). This indicates that as professional experience increases, scientists look for more visible outlets to advertise their research. More experienced scientists can be risk seeking in this area because they have a better understanding of how their fields operate and what kinds of research generate more interest among reviewers, peers, media, and the public. Overall, the results indicate that as far as risk perceptions are concerned, the number of years a scientist has worked in his/her research area matters more than professional rank and level of education. In contrast to prior research that uses one or a combination of factors such as age, professional experience, and professional rank in determining scientific decision processes (e.g., Rier 2003; Wray 2003), my research suggests that a focus on professional experience (as measured by years in the degree field) may be crucial in capturing life-course changes in this population.
Gender and Risk Perception

Based on the existing literature on gender differences in risk-taking, I hypothesized that female scientists would be more risk aversive than male scientists. Contrary to this hypothesis, no significant gender differences were found in any of the analytical models. Additionally, I also tested the interaction effects between gender and years in degree field, and gender and seniority (Table 7). Once again, no significant gender differentials were observed as scientists move through their life-course. This finding contradicts Max (1982), who reports that females are less likely to make risky decisions with their work at the beginning of their careers but become more risk seeking as they move through their career trajectories. The lack of gender effects in this population of bioscientists may have a number of explanations. The reward structure of public university bioscience, based mainly on claiming priority through fast publication and pursuing further funding, demands attributes such as competitiveness and risk-seeking in all scientists regardless of gender. In this way it is plausible that female and male scientists who pursue scientific careers form a unique personality type that reduces gender differences in risk-taking. Additionally, new and/or junior scientists such as graduate students and research associates are generally shielded within research laboratories controlled by a senior scientist whose responsibility it is to set up research agendas and fund the laboratory. Such an insulated environment may mask gender differences that might have been visible if every scientist were making his/her own research decisions. Another reason gender differences in risk taking among bioscientists may be reduced is the nature of “professional socialization” in bioscience.
Lahsen (2008) argues that while the professional socialization of physicists and chemists creates a “high-proof” (higher faith in science and technology) attitude that results in more risk-seeking, biologists are more risk aversive due to the inherently unpredictable nature of the biological material they work with. Additionally, most public university bioscientists conduct basic research that does not create immediate media interest or direct societal impacts. Therefore gender differences in risk taking in controversial research areas may also be masked within the overall research environment of a public university. Another way of looking at this issue is by taking into consideration the characteristics of the organizational setting of the land-grant university within which this population of scientists operates. Units implemented to ease some of the tensions arising from gender, race, and class differences, such as the “Office for Inclusion and Intercultural Initiatives,” may be reasonably effective in supporting faculty and other researchers in overcoming bias and discrimination in their workplace.

Research Orientation, Sources of Funding, and Risk Perception

Based on existing research that investigates the impacts of sources of funding (public and private sector financing) and basic-applied research orientation on research decisions (e.g., Buccola et al. 2009; Cooper 2009; Glenna et al. 2011), I hypothesized that scientists who receive private funding, as well as scientists who conduct “applied” research, are more risk aversive. The hypothesis that scientists who conduct applied research are more risk aversive was supported for all four dimensions of risk under investigation. The largest effect was seen on the perceived risk of new research topics (-.158, p<.001).
With every one-degree increase in the “applied” orientation of one’s research agenda, scientists’ mean risk score decreases by .158 standard deviations, indicating that they become more risk aversive and less likely to invest in completely new research topics. Because applied research projects have targeted outcomes, scientists who conduct such research attempt to achieve those specific outcomes rather than venturing into completely new research areas. This finding, as discussed before, may reflect sources of funding for research in public universities. Public universities lean toward more basic research, particularly for research projects that are funded by federal agencies such as the NIH and NSF. Inherent to basic research is the need to be more creative and pursue new scientific ideas. No new knowledge will be created if new ideas (i.e., risky ideas) are not pursued. As discussed before, future funding for privately funded research agendas depends on producing successful outcomes in current research, which leads to risk avoidance among university scientists. This effect was seen in all eight analytical models. Compared to those who did not receive any private funding, those who received private funding had significantly lower mean risk scores for all four dimensions of perceived risk tested in the models, indicating that private funding leads to more risk aversion.

Network Interactions and Risk Perception

In terms of scientists’ interactions with other individuals in their networks, I hypothesized that higher network interactions lead to more risk seeking. Results show that as scientists’ degrees of network interaction increase, they become more risk seeking in pursuing new and controversial research topics as well as in gaining more visibility for their research. As scientists’ network involvements grow, they develop better resource-relationships and social capital that can help them pursue difficult or “risky” research projects.
Some scientists may venture into research collaborations that help them pursue new research topics while managing failures by distributing risks among two or more research groups. Scientists who reported higher network interactions generally belong to larger scientific communities, which also means that they are more open to gaining visibility for their own research through talking with peers, funders, media, and the general public. However, as shown in Models E and F, no significant effects of network interactions were observed on the risk perception of competition. Although in general it is possible to argue that as scientists develop better network interactions they become better positioned to take on riskier problems, in interpreting this finding one should also take into consideration the opposite effect: as network interactions grow, scientists become more aware of their competition and therefore may steer away from risky research, particularly when those scientists belong to resource-poor or smaller laboratories.

Perceived Significance of Research and Risk Perception

Based on prior research on optimistic bias and risk perception, I hypothesized that scientists who perceive higher significance in their research express preferences for risk-seeking behaviors. This positive effect was observed for all four dimensions of perceived risk tested in the analytical models. When scientists perceive their research as having already contributed, or as likely to contribute, significantly to science and society, they tend to be more risk-seeking. They are more likely to pursue new as well as controversial research topics. They are also more competitive and are open to talking about their research in various scientific and public venues. The largest effect was seen on the perceived risk of gaining visibility for one’s own research (.225, p<.001).
With every one-degree increase in scientists’ perceptions of the significance of their own research, their mean risk score for gaining visibility increases by .225 standard deviations, holding constant all other covariates in the model. In other words, when scientists perceive more value in their own research, they tend to pursue more visible outlets to advertise their research, both within and outside of academia.

Conclusion

Prior research that explores various aspects of scientific decisions has identified risk as a central theme in the ideology and practice of science (e.g., Hackett 2005). However, there is a lack of empirical studies that quantify scientists’ decision-making processes, including risk perceptions in scientific work. To fill this void, my study examined the influence of a set of demographic and contextual factors (e.g., life-course, gender, source of funding, basic-applied orientation of research, network interactions, and perceived significance of research) on the risk perceptions of university bioscientists in a land-grant university in the US, using a survey-based, quantitative methodology. Results suggest that, in general, risk perceptions of university bioscientists differ based on the specific dimension of risk under investigation. While gender was not found to be associated with risk perception in this population, the professional domain of one’s life-course (as measured by years of experience in degree field) had significant impacts on two dimensions of perceived risk: pursuing controversial research topics and gaining more visibility for research. Results also show that the basic-applied orientation of research and sources of funding have significant impacts on risk perception. In comparison to other sources of funding, private sector funding discourages risk seeking. Similarly, an applied orientation in research agendas also discourages risk seeking.
Network interactions and perceived significance of research emerged as significant predictors of risk perception. Interacting with other individuals in their scientific networks more frequently and perceiving their research as making significant contributions to science and society lead university bioscientists to be more risk seeking in their research choices. Although my research has attempted to expand the existing literature on risk and science by quantifying risk perceptions, there are some limitations that need to be taken into consideration. As Xie and Killewald (2012) argue, any statistical analysis is based on an implicit assumption of homogeneity among categories as defined by some measurable characteristics. In reality, this assumption is simplistic: every scientist comes with unique characteristics, risk perception being only one of those defining characteristics. Nonetheless, quantitative analyses such as mine are useful for identifying some defining characteristics of members of large organizations such as the US land-grant research university that I focused on. Since my research was conducted in one university with a unique institutional structure, it is not possible to draw direct generalizations about the larger population of bioscientists in the US. In operationalizing the basic-applied orientation of research agendas, I have followed Vannevar Bush’s linear model, which conceptualizes the relationship between basic and applied research in terms of a spectrum with basic research at one end and applied research at the other. Such a model implies that basic research leads to applied research, development, and production. However, some scholars such as Stokes (1997) have pointed out that this linear model is limiting in the sense that there is a class of research that is at once both applied and basic (i.e., research that falls under the “Pasteur’s Quadrant” or “use-inspired basic research”).
This particular class of research seeks fundamental understandings of scientific problems and, at the same time, seeks to be beneficial to society. Louis Pasteur’s research is thought to exemplify this type of research, which bridges the gap between “basic” and “applied” research. This argument calls for further empirical investigations of other ways of conceptualizing basic-applied research, such as Stokes’s two-dimensional model. Based on the results of this study, several avenues for future research can be suggested. Further investigations using more representative and larger samples are required to verify how the association between the basic-applied orientation of research agendas and risk perceptions may be moderated by sources of funding (public versus private sector financing). Future research should oversample racial-ethnic minorities among scientists so that the effects of race and ethnicity on risk decisions can be investigated. The lack of gender effects on risk perception should be retested with larger samples of bioscientists that are representative of the national population. Finally, methods should be developed to compare scientists in biological sciences to those in physical sciences, mathematics, engineering, and social sciences that take into consideration epistemic disunities among these different branches of modern science. Along similar lines, investigating cross-cultural differences in risk perception among scientists in developing and developed countries can shed further light on how science is practiced around the world. Such an exercise will be particularly useful as sociologists of science continue to discuss the place of US science in a rapidly changing and globalizing world. On a subject as complicated as science, any attempt to make recommendations based on one study may seem like a pointless exercise (see also Xie and Killewald 2012:137).
Quantitative studies such as mine suffer from data limitations unavoidable in most social science research. However, a few points can be made regarding US science policy. My research shows that both an applied orientation in research agendas and private sector funding reduce risk seeking among university bioscientists. In order for individual scientists to be more risk seeking and pursue new (and/or controversial) research areas, they should continually be supported through public sector financing while maintaining a steady stream of private funding and industry relations. A recently released National Research Council (2010) report also states that private research organizations lack the incentives to conduct basic and non-proprietary research and recommends increases in public sector support for universities and other public research organizations. Other scholars point out that private funding and commercialization of academic science, including bioscience, harm the distinct research cultures of both universities and private research entities by blurring the distinctions between these different scientific cultures. They argue that commercialization compromises the unique structural positions of research universities that have enabled them to be hubs of innovation and providers of public goods and services (Glenna et al. 2011). One interesting question that arises through this discussion, and that requires further investigation, is whether universities can participate in commercial science without compromising creativity, innovation, and public good science. My research has also shown that network interactions increase risk seeking among bioscientists.
In order to produce scientists who are more innovative, junior scientists such as graduate students should be given enhanced opportunities to establish better networks not only among other scientists in their own fields but also among publishers, editors, funding organizations, and clients through participation in conferences, internships, training programs, and other professional activities both within and outside of academia.

CHAPTER 4

RESEARCH PROBLEM CHOICES AND RISK PERCEPTIONS OF BIOSCIENTISTS AT A US LAND-GRANT UNIVERSITY

Introduction

Research problem choice (also known as “problem choice”) is defined as “the decision by an individual scientist to carry out a program of research on a related set of problems or, more simply, in a problem area” (Gieryn 1978:97). It encompasses the processes of identification, selection, and pursuit of research problems. Problem choice is a critical component of the practice of modern science, and it provides a unique niche where scholars can investigate the effects of scientists’ agency (e.g., preferences, beliefs) and structural factors (e.g., institutional environments, policies, trends in commercialization, professional and career demands, and funding structures). Scientists’ problem choices are important to society because they determine the content and value of the accumulated bodies of scientific knowledge at any given time. Science is often pursued, at least in academia, for nonmonetary rewards such as publications and recognition (Dasgupta and Maskin 1987). As a result, problem choice is generally structured around the ability to produce peer-reviewed scientific publications. Some scholars, such as Cooper (2009), argue that any reorientation of problem choice may represent a shift in the forms of knowledge that scientists produce.
For example, scholars report that the increased commercialization of university science has shifted scientists’ focus from producing knowledge about the natural world to developing proprietary outputs such as patents and start-up companies (Kleinman 2003; Buccola, Ervin, and Yang 2009; Cooper 2009; Glenna et al. 2011). Others report synergies between problem choices guided by universities’ traditional missions of producing peer-reviewed publications and those reoriented towards commercialization (Azoulay, Ding, and Stuart 2006). A few contend that “patenting faculty publish in academic journals at equal or greater rates than non-patenting faculty” (e.g., Azoulay, Ding, and Stuart 2006; Foltz, Barham, and Kim 2007). Others, such as Noble (1977), argue that the increased commercialization and patenting of science has become yet another piece of corporate capitalism that, in many cases, renders patent rights to employers instead of protecting individual scientists. The potential impacts of commercialization and other institutional and structural factors on problem choice reorientation and knowledge production call for a better understanding of scientists’ research problem choices, including the factors that determine those choices. Prior research highlights three main approaches for investigating scientists’ problem choices: (1) exploring how problems become defined as interesting and legitimate or uninteresting and illegitimate (less scientific) (Zuckerman 1978), (2) focusing on the emergence of “scientific specialties” and treating problem choices as a byproduct (Edge and Mulkay 1976), and (3) investigating criteria that determine scientists’ problem choices (Lacy, Busch, and Sachs 1980). However, no comprehensive study that I am aware of has so far related scientists’ risk perceptions to their research problem choices.
Risk in research decisions entails, among other things, pursuing “high-risk” research projects that have a low likelihood of success. When completed successfully, however, high-risk research brings high rewards, such as publications in high-prestige journals. No research program, regardless of its theoretical and methodological soundness, is completely free of risks. At any given point, individual scientists or scientific organizations face several options for choosing research programs, each of which may be associated with potential positive or negative outcomes. Scientists may select a research program that promises more benefits than losses compared to other available options, based on their subjective risk perceptions. In this sense, scientists’ risk perceptions are important because they shape not only the day-to-day practices within research laboratories but also overall problem choices. To date, scientists’ risk perception has remained an under-studied determinant of research problem choice. In this paper I take critical early steps to link scientists’ risk perceptions to their research problem choices. I use data gathered from an online survey of bioscientists at a large land-grant university in the US and investigate the associations between bioscientists’ risk perceptions (as measured by their expressed preferences for “risk seeking” and “risk aversive” behavior) and their research problem choices. My research contributes to the social studies of science by expanding our understanding of the interrelationships among science, risk, and problem choice of university bioscientists in the US.

Background

Scientists’ Criteria for Research Problem Choice

Scientists are motivated by a variety of factors such as curiosity, reputation, credibility, financing for themselves and their research laboratories, and professional and ethical norms (Merton 1973).
As a result, the investigation of the determinants of problem choice has long been a topic of interest (Cooper 2009:634). Problem choice achieved its status as a foundational concept in the social studies of science through research by several sociologists of science, such as Merton (1938), Edge and Mulkay (1976), Gieryn (1978), and Zuckerman (1978). Their works led to three main approaches through which problem choice can be investigated. First, scholars such as Zuckerman (1978) investigate problem choice by exploring how problems become defined as interesting and legitimate or uninteresting and illegitimate (less scientific). Summarizing research in this tradition, Zuckerman (1978:74) concludes that “scientists define some problems as pertinent and others as uninteresting or even illegitimate, primarily on the basis of theoretical commitments and other assumption structures.” A second approach focuses on the emergence of “scientific specialties” and treats problem choice as a byproduct of the processes leading up to the definition of scientific specialties (Busch and Lacy 1983:42). In this approach, scientists’ backgrounds and research experiences are examined to determine patterns among scientists who enter different specialties (Edge and Mulkay 1976). A third and more frequently used approach, pioneered by Zuckerman (1978) and later employed by Lacy, Busch, and Sachs (1980) and Busch and Lacy (1983), investigates the criteria that determine scientists’ problem choices. Several recent studies employ this third method and use expressed-preference survey techniques to ask scientists to rate the importance of various criteria in their problem choice (e.g., Busch and Lacy 1983; Buccola, Ervin, and Yang 2009; Cooper 2009). Employing this third approach, Busch and Lacy (1983) developed a list of 21 criteria that may shape scientists’ problem choices. The list includes aspects of scientific norms and rewards as well as commercial priorities.
Busch and Lacy (1983:44) conclude that scientists’ problem choices “reflect the varying influences of each group within the system and is the result of a rather complex and continuing process of negotiation.” Because of the methodological parallels between this third approach and my study of risk perception (which also uses survey-based expressed preferences), I employ this approach in my study. In examining determinants of problem choice, scholars often focus on scientific and professional norms, institutional environments, commercial influences, and sources of funding. Analyzing the impacts of scientific norms and institutional environments on problem choice, Buccola et al. (2009) report that professional norms have substantial impacts on the research pursued by university bioscientists. Buccola et al. (2009) also demonstrate that public financial support encourages more basic investigation while private financial support encourages more applied investigation. Walsh, Cho, and Cohen (2005) argue that patent laws and material transfer agreements (which are components of the commercialization of academic science) reduce basic research within research organizations. Similarly, Atkinson et al. (2003:174) argue that fragmented ownership of intellectual property (IP) rights restricts research inputs and problem choices, particularly for public university scientists. Other researchers investigate generic differences in the way scientists make research decisions. Azoulay, Ding, and Stuart (2007:23) suggest that academic incentive systems are evolving in ways that deviate from traditional scientific norms of openness towards situations where the ability to patent and generate excludable results is gaining importance as a criterion for research problem choice.
Aghion, Dewatripont, and Stein (2005) suggest that by allowing scientists to freely pursue their problem choices, academia promotes more basic research, whereas the private sector’s ability to direct scientists towards higher-payoff activities makes it more attractive for applied research.

Risk, Risk Perception, and Problem Choice

Scientific knowledge can be viewed as the outcome of a collective, for example, of experts, methods, equipment, and experimental sites. The configuration of any functional collective involves decisions and exclusions (Valve and McNally 2013). When generating scientific knowledge, scientists must make use of what is already available (theories, methods, and practices) or prepare to put themselves at risk by venturing into new research areas or by challenging existing practices. Research decisions often involve the potential for negative outcomes on attributes that are generally valued by scientists (i.e., risks), such as experimental failure, damage to scientific reputations, inability to generate peer-reviewed publications, and inability to meet criteria for tenure and promotion. As a result, the ways in which scientists estimate risks in their research decisions, i.e., their risk perceptions, become a valuable component that drives problem choice. Several previous studies have demonstrated that risk perception is an important component of decision-making in science (e.g., Rier 2003; Gordon 1984; Dunwoody and Scott 1982; Boffey, Rodgers, and Schneider 1999). However, sufficient attention has not been paid to risk perception’s association with problem choice. As problem choice is a product of decision-making in science, I expect scientists’ risk perceptions to be closely linked to their problem choices. Scientists select research problems based on their individual and subjective risk preferences.
In this process, some scientists may pursue problems that are generally considered “difficult” or “unlikely to be successful” but that may bring higher rewards upon successful completion, whereas other scientists may use risk avoidance strategies; scientists’ subjective risk perceptions thus lead to differences in the criteria used to determine problem choice. While my research is not guided by any specific assumptions about the direction of the causal relationships, I argue that investigating the relationship between risk perception and research problem choice is useful for identifying significant associations, which may shed light on an under-studied potential determinant of problem choice: risk perception.

Methods of Data Collection and Analysis

My data come from an online survey of bioscientists at a large land-grant university in the US. Following Xie and Killewald (2012:9), I used an education-based definition to identify individuals for the study and considered individuals working with or toward science degrees as scientists or potential scientists. I limited the study to scientists in the biological sciences and excluded those in the physical sciences, mathematics, engineering, and the social sciences in order to hold constant the effects of epistemic cultures (Knorr-Cetina 1999) and other organizational factors. I conducted the online survey following Dillman’s (2000) Tailored Design Method. Initially, I interviewed twenty scientists from bioscience departments selected through purposive sampling. The interview sample consisted of four professors, four associate professors, four assistant professors, four postdoctoral and research fellows, and four PhD students. Ten of the participants were female and ten male. I developed the survey questionnaire using information gathered during the interviews.
Subsequently, I conducted five cognitive interviews to refine the survey instrument’s structure and content and to improve its validity (see Appendix C for the full survey questionnaire). Initially, I sent out the survey to 40 individuals for pretesting. Once it was deemed acceptable, I administered the survey to all individuals in the sampling frame. A comprehensive list of bioscientists in the target population did not exist, primarily because of the fluid nature of each unit (department) selected for my study. Individuals were regularly moving in and out of their research environments for various reasons such as internships, promotions, completion of PhDs, and the pursuit of other careers. Therefore, I developed the sampling frame for my study by obtaining and combining the most current (spring 2013) email lists of faculty, researchers, and graduate students in bioscience departments. After redundant email addresses were removed, this process resulted in a sampling frame of 1241 unique email addresses. The final response rate, calculated using the AAPOR (American Association for Public Opinion Research) outcome rate calculator, was 62%. This response rate is at the higher end of the response rates previously reported by Sheehan (2001) for online surveys. However, the 38% non-response rate warranted further investigation. I conducted a comparison between respondents and non-respondents using their ranks (rank was the only information available to the researcher for all individuals). This comparison revealed that while the sampling frame consisted of 39.5% professors, 14.7% postdocs and research associates, and 45.7% graduate students, the final sample consisted of 32.9% professors, 17.3% postdocs and research associates, and 49.7% graduate students. This shows that, as far as rank is concerned, the sample closely resembles the population, thereby reducing concerns about non-response bias.
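The non-response comparison above involves only simple arithmetic. A minimal sketch, using the figures reported in the text (the respondent count is implied by the frame size and response rate, not reported directly), reproduces the quantities examined:

```python
# Implied number of completed surveys from the reported frame size and rate.
frame_size = 1241
response_rate = 0.62
respondents = round(frame_size * response_rate)

# Rank composition (in percent) of the sampling frame versus the final sample,
# as reported in the text; the differences are what a non-response bias check
# inspects.
frame = {"professors": 39.5, "postdocs": 14.7, "grad_students": 45.7}
sample = {"professors": 32.9, "postdocs": 17.3, "grad_students": 49.7}
diffs = {rank: round(sample[rank] - frame[rank], 1) for rank in frame}
# Professors are slightly under-represented; postdocs and graduate students
# slightly over-represented.
```

Small composition differences of this kind (a few percentage points per rank) support the text’s conclusion that the sample resembles the frame on the one observable characteristic available.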
Dependent variables: This study consists of three main dependent variables measuring (1) commercial priorities orientation, (2) professional norms orientation, and (3) institutional priorities orientation in university bioscientists’ problem choices. In the survey, criteria for problem choice were measured using a list of 19 items I developed by combining several of the criteria used in Busch and Lacy (1983:45) with some criteria that emerged through my in-depth interviews. I eliminated some of the criteria for problem choice developed by Busch and Lacy (1983), as they were specific to agricultural science. For a full list of the criteria used in my study, their means, and standard deviations, see Table 8. Following the methods employed in Cooper (2009) and Buccola et al. (2009), responses to five of the given problem choice criteria indicate the existence of commercial priorities in scientists’ problem choice (potential marketability of final products, demands raised by clientele, clients’ needs as assessed by the scientist, potential to patent and license research findings, and feedback from extension personnel). To assess the existence of a broad “commercial priorities orientation” in problem choice, I created a new variable comprised of the average of these five criteria. Cronbach’s alpha was used to measure the internal consistency of this construct; alpha > 0.7 is generally considered robust. For the measure of generalized commercial priorities orientation in problem choice, alpha = .837, confirming the reliability of this construct. Similarly, responses to six of the given problem choice criteria indicate the existence of a “professional norms orientation” in problem choice, and eight of the given problem choice criteria indicate the existence of an “institutional priorities orientation” in problem choice. I created two new variables comprised of the averages of these two sets of criteria.
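The index construction and reliability check just described can be illustrated with a minimal sketch using Cronbach’s standard formula; the ratings below are hypothetical, not actual survey responses:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items rating matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(row totals))
    """
    k = len(items[0])                                  # number of items in the index
    item_vars = [variance(col) for col in zip(*items)] # variance of each item
    total_var = variance([sum(row) for row in items])  # variance of summed scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert ratings: rows = respondents, columns = 5 index items.
ratings = [
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [1, 2, 1, 2, 1],
]
alpha = cronbach_alpha(ratings)
# The orientation index itself is then each respondent's mean across the items:
index_scores = [sum(row) / len(row) for row in ratings]
```

Items that move together across respondents, as in this toy matrix, yield an alpha near 1; alpha above the conventional 0.7 threshold supports averaging the items into a single index.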
Cronbach’s alpha and inter-item correlations were used to measure the internal consistency of, and test for issues of multicollinearity in, the two constructs. For a description of the three generalized problem choice orientations, the criteria comprised in them, and their internal consistency measures, see Table 9.

Independent variables: This study consists of four main independent variables measuring university bioscientists’ risk perceptions. To measure risk perception, I used a “mean risk ratings” method (i.e., rating whether a particular technology, process, action, or event is of high or low risk on a Likert-type scale) based on Sjoberg’s (1999) work. Perceived risk was operationally defined as the degree of expressed preference for risk-seeking and risk-aversive behavior that scientists reported on 15 different items in the survey. The 15 items measuring risk perception were developed using the twenty in-depth interviews that I conducted initially to elicit university bioscientists’ various understandings of the notion of risk. From analyzing the interview data, I identified four concepts that I believe are captured by the 15 single items. I carried out an exploratory factor analysis to test for the existence of latent factors. The factor analysis identified four factors with eigenvalues greater than 1. I used the clusters of variables in each of the four factors to develop four indices to measure risk perception by calculating their means. I applied the Cronbach’s alpha coefficient to measure the homogeneity of the items within indices. Inter-item correlations (as measured by Pearson’s r) were also investigated to test for multicollinearity. All indices reported sufficient reliability as measured by Cronbach’s alpha values. Table 2 summarizes the factor loadings for individual factors and the results of the reliability analysis as well as inter-item correlations.
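The eigenvalue-greater-than-one retention rule used above (the Kaiser criterion) can be illustrated with a toy two-item case, where the eigenvalues of the correlation matrix have a simple closed form; this is a didactic sketch with a hypothetical correlation, not the actual 15-item analysis:

```python
def eigenvalues_2x2_corr(r):
    """Eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]] are 1 + r and 1 - r."""
    return (1 + r, 1 - r)

# Kaiser criterion: retain factors whose eigenvalue exceeds 1, i.e., factors
# that account for more variance than a single standardized item would.
r = 0.6                                       # hypothetical inter-item correlation
evals = eigenvalues_2x2_corr(r)               # (1.6, 0.4)
retained = [ev for ev in evals if ev > 1]     # one factor retained
```

In the two-item case, any positive inter-item correlation concentrates variance in the first factor (eigenvalue 1 + r) at the expense of the second (1 - r), which is why correlated item clusters surface as retained factors.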
The four risk indices form the four main independent variables of my study and are labeled as risk perception of (1) new topics, (2) controversial topics, (3) competition, and (4) visibility. Table 10 provides the coding, means, and standard deviations for the four main independent variables, other selected general predictors, the control variables, and the three dependent variables used in the multivariate analyses. General predictors used include two life-course measures (number of years in degree field and seniority), gender, source of funding, research orientation (basic-applied orientation), network interactions, and perceived significance of research. Information on the number of years in the degree field was gathered by asking respondents to indicate how long they had been working in their particular research area. This is kept as a continuous variable in the analysis. To measure seniority, I developed an ordinal variable in which respondents were rank ordered from “only BA/BS degrees” to “full professor” using a combination of their level of education and professional rank. Gender was coded “1” for male. In the survey I inquired about respondents’ sources of funding by asking whether they had received funding from any governmental or private organizations, and I dummy coded source of funding as “1” for having received private funding. I measured scientists’ research orientations by asking respondents to locate their research programs on a scale from “purely basic” (1) to “purely applied” (7). To measure network interactions, I asked scientists to rate their frequency of communication with other scientists and other individuals in their scientific networks on scales of 1 to 5 (1 = rarely to 5 = daily). I developed a composite measure based on the average of 13 selected items to indicate the frequency of scientists’ communications with other individuals in their networks.
Index reliability was measured using Cronbach’s alpha (.859) and inter-item correlations (all reported correlations were less than .70). The scale showed good internal reliability as demonstrated by these two measures. I measured scientists’ perceived significance of their research by asking them to rate how much they believe their research and publishing has benefitted (or will benefit) the scientific community and the larger society on a scale from “not at all” (1) to “a great deal” (7). I developed a composite measure based on the average of the 6 items selected to indicate how scientists perceive the significance of the outcomes of their research. Scale reliability was measured using Cronbach’s alpha (.852) and inter-item correlations (all reported correlations were less than .70). I used race (sometimes found to correlate with decision-making under risk; e.g., Flynn et al. 1994; Finucane et al. 2004) as a control variable in the analysis and dummy coded race as “1” for white and “0” for nonwhite. Initially, I conducted regression diagnostics through influence analysis using DFBETA influence statistics and tests for multicollinearity. Correlation analysis revealed a high correlation between number of years in degree field and seniority (0.76). Since both of these variables measure related aspects of the professional domain of a scientist’s life-course, I decided to test them in separate regression models. To examine how scientists’ risk perceptions are associated with their problem choices, I employ six multivariate OLS regression models, two each (one with years in degree field and one with seniority to measure life-course) measuring the associations of all predictors under investigation with (1) commercial priorities orientation, (2) professional norms orientation, and (3) institutional priorities orientation in the problem choices of university bioscientists (Table 11). I tested VIF values for all six analytical models.
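The collinearity screen just described can be sketched for the simplest case. With only two predictors, the variance inflation factor follows directly from their Pearson correlation, VIF = 1 / (1 - r²); plugging in the correlation reported above (r = .76) shows why the two life-course measures were modeled separately:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictor(r):
    """With exactly two predictors, VIF = 1 / (1 - r^2) for both of them."""
    return 1 / (1 - r ** 2)

# The correlation reported in the text between years in degree field and
# seniority implies a VIF of about 2.37 had both entered the same model.
vif = vif_two_predictor(0.76)
```

A VIF of roughly 2.4 is below common cut-offs (5 or 10), but because the two variables measure the same life-course domain, separating them avoids splitting one substantive effect across two collinear coefficients.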
Mean VIF was 1.23, with individual VIF values ranging from 1.10 to 1.37, reducing concerns about multicollinearity. Because of the ordinal nature of the dependent variables, I conducted a separate set of analyses using ordered logistic regression and developed six analytical models for the three problem choice orientations under investigation. The results of the ordered logistic regression models were consistent with the OLS models in terms of the direction and significance of the effects of the main independent variables and several of the general predictors. For ease of reporting and interpretation, only the OLS regression results are reported and discussed below.

Results and Discussion

University Bioscientists’ Criteria for Research Problem Choice

Before interpreting the findings of the multivariate analyses, following Busch and Lacy (1983), I examine how the bioscientists in my study sample rated the various criteria for research problem choice in order to garner a general overview of their research choices. According to Table 8, “enjoyment of doing this kind of research” emerged as the single most important criterion for research problem choice. This finding is in line with Busch and Lacy (1983). The second most important criterion is “scientific curiosity.” Availability of funding is also ranked high among the criteria that affect research problem choice. Scientists tend to rate potential contributions to theory and intellectual freedom as important criteria that direct their problem choice. “Publication probability in professional journals” is also ranked high. This comes as no surprise when the practices of modern academic bioscience are taken into consideration. Scientists’ tenure, funding, recognition, and other reward structures are tightly linked to rates of successful publication in professional journals. Bioscientists generally tend to select research problems that provide them with more avenues for publication.
The eighth item on the list, “availability of research facilities and personnel,” underscores the importance of the “paraphernalia of research” (Busch and Lacy 1983:44). No research project can be completed without space, equipment, libraries, and the other paraphernalia that support the research. Similar to Busch and Lacy (1983), clients’ needs are ranked low, at positions 14 and 15. It is particularly interesting to note that scientists’ self-assessments of clients’ needs are considered more important than demands raised by clients themselves, and more important than feedback from extension personnel. However, this finding may have been confounded by the amount of time scientists in my study sample allocated to teaching, research, and extension. Clients’ needs and feedback from extension are likely more influential in determining the problem choices of scientists who devote a higher portion of their time to research and extension as opposed to teaching. Findings also show that scientists in the sample categorize the potential to patent and license research findings as considerably less important than most other criteria used to determine their problem choices. Overall, it seems that criteria representing commercial priorities and factors external to academia, such as clients’ needs and demands, feedback from extension, and the potential to patent and license research findings, were ranked as less important in determining research problem choice than criteria driven by scientific and professional norms, such as intellectual curiosity and enjoyment. These findings indicate that, as far as my study sample is concerned, amidst general institutional pressures for University-Industry Relations and excludable/patentable research, bioscientists maintain a preference for “doing science” as a “purely intellectual exercise.” In this sense, my findings justify Dasgupta and David’s (1994) emphasis on scientists’ professional norms and values in determining their research trajectories.
At least to the extent that can be interpreted through a study of expressed preference, the results also point to some validity in the functionalist analysis of scientific norms (e.g., Merton 1942/1973) that characterizes the behavior of university scientists. Furthermore, the results suggest that although most bioscientists in the sample no longer express an “ivory tower orientation to academic research” or “untenable claims to the objectivity of their inquiry” (Cooper 2009:648), they still continue to be guided by long-held scientific and professional norms, even while acknowledging and managing market-oriented solutions.

Risk Perception and Research Problem Choice

The results of the multivariate analyses are shown in Table 12. I compare the effects of the risk perception measures and other selected demographic and institutional factors across the analytical models and discuss their effects on research problem choice. As far as scientists’ risk perceptions of new research topics are concerned, significant relationships are observed for all three problem choice orientations. Results show that as scientists’ expressed preference for risk seeking in venturing into new areas of research increases, the commercial priorities orientation (-.119 and -.126, p<.01) and institutional priorities orientation (-.113 and -.139, p<.01) in their problem choice decrease (models A, B, E, and F, respectively). However, as scientists’ expressed preference for risk seeking in venturing into new areas of research increases, the professional norms orientation in their problem choice increases (.223 and .232, p<.001). When comparing the six models, risk perception of new research topics has the strongest significant relationship with the professional norms orientation in problem choice.
In other words, scientists who are more risk seeking in investigating new research areas are likely to rate professional norms such as scientific curiosity and intellectual freedom as more valuable in determining their problem choice than criteria influenced by commercial interests and various structural factors such as availability of funding and priorities of the university. This finding aligns with previous research showing that commercial priorities in university science are changing the distinct research culture within universities by shifting the research focus from public goods research to proprietary research (e.g., Campbell and Blumenthal 2000; Kleinman and Vallas 2001). My research shows that changes in scientists’ risk perceptions have significant relationships to these shifting research orientations. As far as scientists’ risk perceptions of competition are concerned, the results once again show significant relationships for all three problem choice orientations: commercial priorities (.154 and .164, p<.001), professional norms (.055 and .046, p<.05), and institutional priorities (.013 and .001, p<.05). As scientists become more competitive (i.e., as their expressed preference for risk seeking in competitive areas increases), they tend to estimate all three problem choice orientations as more important than scientists who are risk aversive in competitive research areas. This finding suggests that when university bioscientists venture into competitive research areas they display a heightened concern for the criteria that determine their research problem choices. The largest significant effect of risk perception of competition is seen on the commercial priorities orientation, indicating that as bioscientists become more competitive, their estimates of the importance of commercial priorities in their research problem choices grow at a rate faster than those of professional norms or institutional priorities.
In other words, competitive scientists pay more attention to commercial priorities in their problem choices. This finding is reflective of the current funding structure of US university bioscience. Private funding gives a competitive advantage to scientists and leads to more commercially oriented research. As one interviewee in my research argued, “they [private funding organizations] give you what you need to do your work. You are not money grubbing around to find duct tape to put on a broken piece of a tractor. They will give you top of the line equipment and technicians who know how to use them.” No significant relationships were observed between scientists’ risk perceptions of gaining visibility for their research and the commercial priorities orientation in their problem choice. However, significant effects of risk perceptions of gaining visibility for one’s research were observed on both the professional norms orientation (-.153 and -.149, p<.001) and the institutional priorities orientation (.028 and .004, p<.05). As scientists become more risk seeking in pursuing visibility for their research through publications, talking to peers and the media, and other means, the institutional priorities orientation in their problem choice increases. On the contrary, as scientists become more risk seeking in pursuing visibility, the professional norms orientation in their problem choice decreases. This suggests that university bioscientists who seek more visibility are mostly those who orient their research agendas more on institutional requirements than on professional norms or commercial interests, once again alluding to the changing nature of institutional environments in US research universities. No significant relationships were observed between scientists’ risk perceptions of venturing into controversial research areas and their research problem orientations.
This indicates that how university bioscientists perceive the social and political sensitivity of their research topics, within and outside of academia, is unrelated to the criteria that guide their research problem choices. Bioscientists in my interview sample were conscious of the social and political sensitivity of their research topics. However, the opinions they voiced indicated that being in a public university compels scientists to conduct research that does not generate much controversy outside of academia, thereby reducing the importance of the sensitivity of research topics in guiding problem choices. As one graduate student put it, "final commercialization is beyond our control. As growers and breeders we just try to do the research."

Other Determinants of Research Problem Choice

In addition to scientists' risk perceptions, several other general predictors in the analytical models show significant relationships with problem choice orientations, some displaying results similar to previous studies and some contradicting them. Private sector funding emerged as having a significant relationship with the commercial priorities orientation in problem choice (.185 and .204, p<.001). Compared to those who do not receive private funding, those who receive private funding are significantly more likely to show concern for commercial priorities. This comes as no surprise when one takes into consideration trends in the commercialization of university bioscience in the US. More patentable and excludable research is now privately funded. Before the 1980 Bayh-Dole Act, which enabled universities to license their inventions to the private sector, university science operated under the general expectation that universities should produce more basic and non-proprietary research outputs than applied and excludable research (Glenna et al. 2011:957). However, after the passage of the Bayh-Dole Act, these expectations changed dramatically.
University-industry relations and commercialization are now not only possibilities but also requirements in many US research universities. As a result, scientists continue to pursue more private funding and take aspects of commercialization into consideration when determining their problem choices. Contrary to popular understandings of gender differences in scientific decisions (e.g., Max 1982; Schiebinger 1999; Rier 2003), no significant gender effects were observed for the commercial priorities or professional norms orientations in problem choice. However, a significant gender effect was observed for the institutional priorities orientation in problem choice. Males were significantly less likely than females to rate institutional priorities as determining factors in their problem choices. This may be an outcome of the gendered and hierarchical nature of science (e.g., Keller 1985; Widnall 1988; Alper 1993), in which females face unique and heightened institutional barriers that limit the exercise of their intellectual freedom. Applied orientation of research is significantly and positively associated with the commercial priorities, professional norms, and institutional priorities orientations in problem choice. This indicates a tendency among more applied-oriented scientists to rate all criteria for research problem choices as more important than scientists who conduct more basic research. In other words, applied scientists among university bioscientists tend to display more concern when evaluating their criteria for research problem choices. This finding may be related to the outcome-oriented nature of applied research, in which scientists pursue targeted applications and products, thereby raising the stakes for achieving expected end-results.
Conclusion

According to the analyses above, academic bioscientists in my study utilize a diverse range of criteria to determine their research problem choices, even while working in an organizational setting predominantly structured to promote public service and public-interest science, i.e., the US land-grant university (e.g., Teague 1981; McDowell 2001). In general, most university bioscientists are driven by criteria that reflect professional norms such as enjoyment of doing research, scientific curiosity, and intellectual freedom. The investigation of the impacts of bioscientists' risk perceptions on their research problem choices indicates that there are several significant relationships between risk perceptions of venturing into new research topics, handling competition, and gaining visibility for one's own research and the problem choice orientations of university bioscientists. As such, risk perception has emerged as an important determinant of problem choice for university bioscientists, one that can supplement determinants previously investigated by other scholars, such as sources of funding, institutional environments, and commercial interests. In summary, scientists' risk perceptions of new research topics and competition show significant relationships with all three problem choice orientations (i.e., commercial priorities, professional norms, and institutional priorities) under investigation. Scientists' risk perceptions of gaining visibility for their research show significant relationships with both the professional norms orientation and the institutional priorities orientation in research problem choice. No significant relationships were observed between scientists' risk perceptions of controversial research topics and their problem choice orientations. The associations between scientists' risk perceptions and their criteria for problem choices require further investigation.
In my study, risk perception was measured using expressed risk preferences. However, expressed risk preference may differ from scientists' actual risk behaviors. Therefore, further empirical investigations are required to determine the relationship between bioscientists' risk behaviors and the criteria that influence their research choices. Evidence left after a scientist has chosen his or her research programs (such as the succession of scientific papers and publications throughout one's career trajectory) may give some indication as to how risk preferences affect actual risk behavior and problem choices among university bioscientists. In operationalizing actual risk behavior, future research can take into consideration measurements such as the "h-index" and its more recent modifications (e.g., Bornmann and Daniel 2007) to quantify the research outputs of individual scientists and relate them to their problem choices. Although my research has contributed some useful findings to the existing literature on research problem choice, there are some limitations that need to be taken into consideration. As Xie and Killewald (2012) argue, any statistical analysis is based on an implicit assumption of homogeneity among categories (of scientists) as defined by some measurable characteristics. In reality, this assumption is too simplistic in the sense that every scientist is unique, with research problem choice being only one of many defining characteristics. However, quantitative analysis such as mine is useful for identifying some of the defining characteristics of members of a large social institution (the US land-grant university). The cross-sectional data used in this study cannot reveal how scientists' research choices change during their career trajectories.
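The h-index mentioned above has a simple definition: a scientist has index h if h of his or her papers have at least h citations each. A minimal sketch of that calculation follows; the citation counts are hypothetical, for illustration only.

```python
def h_index(citations):
    """Return the largest h such that the scientist has h papers
    with at least h citations each (Hirsch's h-index)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one scientist's six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers with at least 3 citations each)
```

Such a measure, computed over a career's worth of publication records, could serve as one rough behavioral counterpart to the expressed risk preferences used here.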
The data in this study can only be used to develop a "temporally constrained description" (Cooper 2009:636) of how the occupation of particular positions in academic bioscience is tied to different conceptions of its practice. Furthermore, since this research was conducted in one large public university with a unique institutional structure, I do not intend to use findings from this research to draw generalizations about the larger population of bioscientists in the US. However, this research suggests some potential avenues for future research on bioscientists' problem choices and hints at new determinants that may contribute to their problem selection. The research highlights the importance of exploring unconventional and under-studied factors such as risk perception in determining problem choices among public university bioscientists in the US.

CHAPTER 5

SUMMARY AND CONCLUSION

Given the unique nature of science as an institution and its value to society, much social science research has gone into understanding scientists and their work. In particular, scholars have examined the characteristics of individuals who become scientists, their work conditions, their methodologies, and the ways in which they compete in their various academic fields. Few existing studies, however, explore the interplay between risk and science, including risk perception and its effects on the practice of modern science. Having recognized this gap in the literature, my dissertation employed a mixed-methods approach to explore three aspects of risk among public university bioscientists in the US: (1) their risk epistemologies, (2) factors that influence their risk perceptions, and (3) associations between their risk perceptions and research problem choices. The results of the first essay (Chapter 2) indicate that risk is a useful paradigm to study research decisions in science.
Results show that scientists view risk as a recurrent and inherent theme in their work and recognize a large number of risks in research, such as the failure of research projects, anticipation and competition by other scientists, issues of freedom and control, and the controversial nature of research topics. In managing risks, at times scientists conform to the existing institutional structures and determine the levels of risk they are willing to take. Other times they challenge these structures, persist through them, or find that the structures compromise their actions. Scientists' risk epistemologies matter to the extent that they allow for more creative ways in which individual scientists can navigate the institutional environments in which they are embedded. Overall, university bioscientists' risk epistemologies seemed to be related to the unique reward structure of science, compelling them to use various risk management techniques while navigating their work environments. The results of the second essay (Chapter 3) indicate that risk perceptions of public university bioscientists differ based on the specific dimension of risk under investigation. While gender was not associated with risk perception in this population, life-course (as measured by years of experience in one's degree field) had significant impacts on two dimensions of perceived risk: the pursuit of controversial research topics and gaining more visibility for one's research. Results also show that the basic-applied orientation of research agendas and sources of funding have significant impacts on risk perception. In comparison to public funding, private funding discourages risk seeking. Similarly, applied orientations in research agendas also discourage risk seeking. Network interactions and perceived significance of research emerged as significant predictors of risk perception.
Interacting more frequently with other individuals in their scientific networks and perceiving their research as making significant contributions to science and society lead university bioscientists to be more risk seeking when making research decisions. Finally, the results of the third essay (Chapter 4) demonstrate that academic bioscientists utilize a diverse range of criteria for the selection of research problems, even when working in an organizational setting predominantly structured to promote public service and public-interest science (i.e., the US land-grant university). In general, most university bioscientists are driven by criteria that reflect individual motivational factors and aspects of scientific and professional norms such as "enjoyment of doing research," "scientific curiosity," and "intellectual freedom." The investigation of the impacts of university bioscientists' risk perceptions on their research problem choices indicates that there are significant relationships between the perceived risks of venturing into new research areas, handling competition, and gaining visibility for one's research and the different problem choice orientations of university bioscientists. Overall, the results highlight the importance of exploring the effects of unconventional and under-studied factors such as risk perception in determining problem choices among university bioscientists. Although my research increases our understanding of the interplay among science, risk, and research problem choice, there are some limitations that need to be taken into consideration when evaluating the results. Any statistical analysis such as mine is based on an implicit assumption of homogeneity among categories as defined by some measurable characteristics. In reality, I recognize that this assumption is too simplistic. Every scientist comes with a unique set of characteristics, risk perception and problem choice being only two of those defining characteristics.
Additionally, the cross-sectional data used in this study cannot reveal how scientists' risk perceptions and research problem choices change during their career trajectories. The data in this study can only be used to develop a temporally constrained description of how the occupation of particular positions in academic bioscience is tied to different conceptions of its practice. Since my research was conducted in one university with a unique institutional structure (i.e., a US land-grant university), it is not possible to draw direct generalizations about the larger population of bioscientists in the US. However, based on the results, several avenues for future research can be suggested. Future research should oversample racial-ethnic minorities among scientists so that the effects of race and ethnicity on risk decisions can be investigated. The lack of gender effects on risk perception should be retested with larger samples of bioscientists that are representative of the national population. Methods that take into consideration the epistemic disunities among the different branches of modern science should also be developed to compare scientists in the biological sciences to those in the physical sciences, mathematics, engineering, and the social sciences. Along similar lines, investigating cross-cultural differences in risk epistemologies and research choices among scientists in developing and developed countries can shed further light on how science is practiced around the world. My research also highlights the importance of exploring unconventional and under-studied factors such as risk perception in determining problem choices among bioscientists. Future research would benefit from employing advanced statistical techniques such as structural equation modeling to investigate causal relationships and associations between demographic and structural determinants, risk perception, and research problem choices among scientists.
On a subject as complicated as science, any attempt to make recommendations based on one study may seem like a pointless exercise. Quantitative studies such as mine suffer from data limitations unavoidable in most social science research. However, a few points can be made regarding US science policy debates. My research shows that both an applied orientation in research agendas and private sector funding reduce risk seeking among university bioscientists. In order for individual scientists to be more risk seeking and pursue new (and/or controversial) research areas, they should continually be supported through public sector financing while maintaining a steady stream of private funding and industry relations. A recently released National Research Council (2010) report likewise states that private research organizations lack the incentives to conduct basic and non-proprietary research and recommends increases in public sector support for universities and other public research organizations. As discussed throughout the dissertation, several other scholars point out that private funding and the commercialization of academic science, including bioscience, harm the distinct research cultures of both universities and private research entities by blurring the distinctions between these different scientific cultures. They argue that commercialization compromises the unique structural positions of research universities that have enabled them to be hubs of innovation and to provide public goods and services. One interesting question that arises from this discussion, and requires further investigation, is whether universities can participate in commercial science without compromising creativity, innovation, and public good science. My research has also shown that network interactions increase risk seeking among bioscientists.
In order to produce scientists who are more innovative, junior scientists such as graduate students should be given enhanced opportunities to establish better networks, not only among other scientists in their own fields but also among publishers, editors, funding organizations, and clients, through participation in professional conferences, internships, training, and other professional activities both within and outside of the university. Overall, this dissertation has contributed to the social studies of science literature by incorporating insights from the study of risk and expanding our understanding of the relationship between risk epistemologies and research problem choices among public university bioscientists in the US. The findings provide insights into bioscientists' various understandings of the notion of risk and show how these understandings influence their risk evaluation, management, and research choices. Knowledge gained through this dissertation on the risks that bioscientists encounter in research and the factors that affect their risk negotiations can be used to create research environments that are more supportive, particularly for junior scientists.

APPENDICES

APPENDIX A

TABLES

Table 1: Mean Risk Ratings for Items Measuring Expressed Risk Preference (a)

Question stem for items 1 to 11: Please rate how likely you are to choose research problems that contain the following characteristics. Rate each characteristic by clicking one number from "Not likely" (1) to "Very Likely" (7).
Question stem for items 12 to 15: On a scale of 1 "Not likely" to 7 "Very likely", please rate how likely you are to engage in the following activities.
Item  Survey item                                                                  Mean  SD
12    Talk openly about your research with your peers                              6.20  1.19
1     Research requires you to use new theories or concepts                        5.33  1.30
13    Submit your manuscripts to high impact journals                              5.21  1.52
5     Research problem is relatively new and unconventional in your field          5.21  1.37
2     Research requires you to master new research skills/techniques               5.10  1.25
6     Less prior literature is available on the research topic                     4.93  1.39
7     Research topic is competitive                                                4.41  1.44
8     Research topic is likely to be anticipated by other scientists in your field 4.31  1.44
15    Conduct research with potentially high payoffs but low rates of success      4.07  1.60
10    Research topic is controversial among your peers                             3.99  1.51
14    Talk about your research to media                                            3.93  1.98
3     Research requires a long time to complete                                    3.85  1.42
4     Research requires buying services from other units                           3.83  1.70
11    Research topic is controversial in the public eye                            3.70  1.58
9     Research has a low likelihood of success                                     2.72  1.51

(a) All items in this table have the same coding scheme. Items are measured on scales of not likely=1 to very likely=7.

Table 2: Survey Items for Four Latent Factors Reflecting Four Dimensions of Perceived Risk

Latent factors (survey item #s): New topics (1, 2, 5, 6); Controversial topics (10, 11); Competition (3, 4, 7, 8, 9); Visibility (12, 13, 14, 15). For each item the table reports factor loadings, reliability (alpha), and inter-item correlations.

Question stem for items 1 to 11: Please rate how likely you are to choose research problems that contain the following characteristics.
Question stem for items 12 to 15: On a scale of 1 "Not likely" to 7 "Very likely", please rate how likely you are to engage in the following activities.
Research requires you to use new theories or concepts Research requires you to master new research skills/techniques Research problem is relatively new and unconventional in your field Less prior literature is available on the research topic Research topic is controversial among your peers Research topic is controversial in the public eye Research requires a long time to complete Research requires buying services from other units Research topic is competitive Research topic is likely to be anticipated by other scientists in your field Research has a low likelihood of success Talk openly about your research with your peers Submit your manuscripts to high impact journals Talk about your research to media Conduct research with potentially high payoffs but low rates of success Inter-item corr. 0.62 0.52 0.66 0.84 <.66 0.61 0.71 0.54 0.45 0.48 0.67 0.51 0.79 <.62 0.75 <.54 0.50 0.57 0.51 0.57 0.81 <.53 Note: All items in this table have the same coding scheme. Items are measured on scales of not likely=1 to very likely=7. Factor loadings reported are the maximum loadings for each item for each reported latent factor. Inter-item correlations reported are the maximum correlations between any two items in each set of items within latent factors.
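The reliability (alpha) column in Table 2 reports Cronbach's alpha, which compares the sum of the individual item variances to the variance of the summed scale. A minimal sketch of that computation follows; the response data below are illustrative only, not survey data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response columns
    (each column is a list of scores, one per respondent)."""
    k = len(items)          # number of items in the scale
    n = len(items[0])       # number of respondents

    def variance(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total scale score for each respondent
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Illustrative 7-point responses from five respondents to a three-item scale:
items = [[5, 6, 4, 7, 5], [4, 6, 5, 7, 4], [5, 7, 4, 6, 5]]
print(round(cronbach_alpha(items), 2))  # -> 0.89
```

Values near 1 indicate that the items move together across respondents, as with the 0.84 reported for the new-topics factor.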
Table 3: Coding, Means, and Standard Deviations for Variables in the Study

Variable                              Coding                                             Mean   SD
Number of years in degree field       0 to 59 (number in actual years)                   11.06  10.82
Seniority                             0 (only BA/BS), 2 (only MA/MS), 3 (only PhD),       3.58   2.19
                                      4 (PhD and currently a post-doc),
                                      5 (assistant professor), 6 (associate professor),
                                      7 (full professor)
Gender                                0 (female) to 1 (male)                              0.53   0.49
Private funding                       0 (no) to 1 (yes)                                   0.19   0.39
Applied orientation of research       1 (purely basic) to 7 (purely applied)              3.93   1.73
Network interactions                  1 (rarely) to 5 (daily)                             1.63   0.76
Perceived significance of research    1 (not at all) to 7 (a great deal)                  3.40   1.75
Race                                  0 (nonwhite) to 1 (white)                           0.75   0.43
Risk perception scales                1 (high perceived risk, risk averse) to
                                      7 (low perceived risk, risk seeking):
  New topic                                                                               5.29   1.05
  Controversial topic                                                                     4.21   1.18
  Competition                                                                             4.09   1.05
  Visibility                                                                              4.85   1.08

Table 4: Scientists' Network Interactions (a)

Question stem: How frequently do you communicate with the following parties regarding your research? By "communicate" we mean verbal and written communications about your research and sharing of expertise, materials, and resources.

Party                                       Mean   SD
Scientists in my lab                        4.19   1.36
Students                                    3.35   1.63
Scientists in my department                 3.23   1.39
Scientists outside my department            2.59   1.32
General public                              1.53   1.25
University administrators                   1.40   1.08
Journal editors and/or publishers           1.27   0.81
Funding agencies                            1.13   0.75
Clients                                     1.05   1.21
Extension staff                             1.03   1.11
Media                                       0.88   0.71
University Office of Intellectual Property  0.72   0.57
Lawyers                                     0.57   0.61

(a) All values in the Mean column are measured on a scale of rarely=1 to daily=5.

Table 5: Scientists' Perceived Significance of Research (a)

Question stem: Do you believe that your research and publishing over the past 5 years has already benefited or will benefit (directly or indirectly) any of the following?
Benefited party                                   Mean   SD
To my scientific discipline                       5.00   1.40
To the general public                             3.95   1.70
To other scientific disciplines                   3.83   1.54
To federal agencies                               3.69   1.83
To local and state governmental agencies          3.55   1.85
To foreign groups, institutions, or governments   3.28   1.77

(a) All values in this column are measured on a scale of not at all=1 to a great deal=7.

Table 6: Multivariate OLS Regression Models for Variables Predicting Four Dimensions of Perceived Risk (Standardized Regression Coefficients)

Variables: Number of years in degree field; Seniority; Gender; Private funding; Applied orientation of research; Network interactions; Perceived significance; Race; N; R². *p<.05 **p<.01 ***p<.001

New topics Model A Model B -.102 Controversial topics Model C Model D -.077* Competition Model E Model F -.043 Visibility Model G Model H .057* -.052 -.102** -.158*** -.082 -.056 -.100** -.157*** .002 -.030* -.101* -.044 -.004 -.041* -.098* .001 -.021* -.124** .026 -.011 -.026* -.117** .069 -.118** -.120** .073 .066 -.124** -.109** .110** .151*** -.079 584 .094 .112** .151*** -.100 599 .094 .055* .182*** .026 582 .074 .060* .182*** .013 598 .073 .050 .142*** -.138* 576 .053 .041 .142*** -.164* 592 .061 .174*** .225*** -.116** 582 .152 .171*** .221*** -.104** 598 .157

Table 7: Multivariate OLS Regression Models (With Interaction Effects for Selected Variables) Predicting Four Dimensions of Perceived Risk (Standardized Regression Coefficients)

Variables: Years in degree field; Seniority; Gender; Private funding; Applied orientation of research; Network interactions; Perceived significance; Years in degree field x Gender; Seniority x Gender; Race; N; R². *p<.05 **p<.01 ***p<.001

New topics Model A Model B -.099 -.096 -.049 -.072 -.047 -.024 -.150*** -.147*** .111** .151*** -.005 .111** .149*** Controversial topics Model C Model D -.080 -.017 -.001 .028 -.074 -.076 -.105* -.103* .054 .183*** .005 .026 -.078 584 .092 -.099* 599 .090 .062 .183*** Competition Model E Model F .039 .071 .051 .042 -.025*
-.015* -.123** -.114** .054 .143*** -.118 -.052 .0225 582 .071 .013 598 .070 .045 .143*** Visibility Model G Model H .017 .047 .048 .042 .158 .156 -.085* -.064* .173*** .221*** .054 -.085 -.137** 576 .060 -.164** 592 .059 .172*** .216*** .042 -.113** 582 .132 -.099** 598 .133

Table 8: Scientists' Criteria for Research Problem Choice (a)

Question stem: How important were the following criteria in setting up your research program/agenda? Please rate each criterion by clicking one number from 'Not Important' (1), to 'Very Important' (7).

Rank  Criterion                                           Mean   SD
1     Enjoyment of doing this kind of research            5.75   1.39
2     Scientific curiosity                                5.67   1.36
3     Availability of funding                             5.50   1.49
4     Potential contribution to scientific theory         5.46   1.57
5     Importance to society                               5.45   1.44
6     Intellectual freedom                                5.34   1.57
7     Publication probability in professional journals    5.32   1.50
8     Availability of research facilities and personnel   5.23   1.52
9     Potential contribution to new methods or devices    4.70   1.80
10    Ability to collaborate with colleagues              4.68   1.55
11    How peer scientists evaluate your research          4.43   1.70
12    Length of time required to complete research        4.40   1.59
13    Currently a "hot" topic                             4.13   1.68
14    Clients needs as assessed by you                    3.20   2.11
15    Demands raised by clientele                         3.16   2.10
16    Potential marketability of final products           3.10   1.99
17    Priorities of the university                        3.00   1.64
18    Feedback from extension personnel                   2.86   1.89
19    Potential to patent and license research findings   2.50   1.76

(a) Mean scores based on a seven-point scale of not important=1 to very important=7.

Table 9: Survey Items for Scales Measuring Scientists' Generalized Problem Choice Orientations

Question stem: How important were the following criteria in setting up your research program/agenda? Please rate each criterion by clicking one number from 'Not Important' (1), to 'Very Important' (7).
Commercial priorities (reliability alpha = 0.84; maximum inter-item correlation < .67):
  Potential marketability of final products
  Demands raised by clientele
  Client needs as assessed by you
  Potential to patent and license research findings
  Feedback from extension personnel

Professional norms (reliability alpha = 0.77; maximum inter-item correlation < .58):
  Potential contribution to scientific theory
  Potential contribution to new methods or devices
  Enjoyment of doing this kind of research
  Importance to society
  Scientific curiosity
  Your intellectual freedom

Institutional priorities (reliability alpha = 0.72; maximum inter-item correlation < .51):
  Availability of funding
  Availability of research facilities and personnel
  Publication probability in professional journals
  Length of time required to complete the research
  Priorities of the university
  How peer scientists evaluate your research
  Currently a 'hot' topic
  Ability to collaborate with colleagues

Note: All items in this table have the same coding scheme. Items are measured on scales of not important=1 to very important=7. Inter-item correlations reported are the maximum correlations between two items in each set of items within scales.
Table 10: Coding, Means, and Standard Deviations for Variables in the Study

Variable                              Coding                                             Mean   SD
Dimensions of perceived risk          1 (highest perceived risk, risk averse) to
                                      7 (lowest perceived risk, risk seeking):
  New topics                                                                              5.30   1.05
  Controversial topics                                                                    4.21   1.18
  Competition                                                                             4.10   1.05
  Visibility                                                                              4.85   1.09
Number of years in degree field       0 to 59 (number in actual years)                   11.06  10.82
Seniority                             0 (only BA/BS), 2 (only MA/MS), 3 (only PhD),       3.58   2.19
                                      4 (PhD and currently a post-doc),
                                      5 (assistant professor), 6 (associate professor),
                                      7 (full professor)
Gender                                0 (female) to 1 (male)                              0.53   0.49
Private funding                       0 (no) to 1 (yes)                                   0.19   0.39
Applied orientation of research       1 (purely basic) to 7 (purely applied)              3.93   1.73
Network interactions                  1 (rarely) to 5 (daily)                             1.63   0.76
Perceived significance of research    1 (not at all) to 7 (a great deal)                  3.40   1.75
Race                                  0 (nonwhite) to 1 (white)                           0.75   0.43
Commercial priorities orientation     1 (not important) to 7 (very important)             2.61   1.23
Professional norms orientation        1 (not important) to 7 (very important)             3.55   1.25
Institutional priorities orientation  1 (not important) to 7 (very important)             3.68   0.93

Table 11: Multivariate OLS Regression Models for Generalized Problem Choice Orientations (Standardized Regression Coefficients)

Variables: New topics; Controversial topics; Competition; Visibility; Years in degree field; Seniority; Gender; Private funding; Applied orientation of research; Network interactions; Perceived significance; Race; N; R². *p<.05 **p<.01 ***p<.001

Commercial priorities orientation Model A Model B -.119** -.126** .058 .052 .154*** .164*** .016 .021 -.018 -.085* -.038 -.019 .185*** .204*** .351*** .329*** .055 .107*** -.294*** 533 .326 .072 .118** -.307*** 566 .339 Professional norms orientation Model C Model D .223*** .232*** .024 .028 .055* .046* -.153*** -.149*** .029 .026 -.051 -.049 .005 .014 .144*** .129** .095* -.042 -.003 561 .122 .110* -.043 -.011 575 .122 Institutional priorities orientation Model E Model F
-.113** -.139** .061 .065 .013* .001* .028* .004* .004 -.038 -.089* -.072* .052 .064 .035* .026* .127** .031 .097* 557 .055 .131** .027 -.090* 571 .056

APPENDIX B

SEMI-STRUCTURED INTERVIEW GUIDE

1. What is your primary department? Do you work with other departments, graduate programs, colleges or universities?
2. What is your primary research area?
3. How long have you been conducting research in this area?
4. How did this research area/program originate?
5. Who sets the research agenda for your research?
6. As a researcher what are your goals?
7. What are the different parties you interact with during a research project? What are the extents of these interactions?
8. How many scientists work in your laboratory? What is the laboratory composition?
9. Who determines the responsibilities of individual scientists in the laboratory?
10. What kind of risk decisions do you encounter in your research? By risk I do not mean physical dangers, but rather, risk/benefit decisions that go into selecting particular research projects and conducting them.
11. How do you manage the risks (you mentioned above)?
12. What are groundbreaking research/hot topics in your field? What determines whether a research project is groundbreaking or not?
13. Is groundbreaking research more risky? Are they more rewarding? If so, why?
14. When do you decide to drop a research project without completion?
15. Who is responsible for the validity of scientific results produced in your laboratory?
16. What level of certainty do you look for before announcing/publishing results?

APPENDIX C

SURVEY QUESTIONNAIRE

About Your Research Environment

1) What is your department?
( ) Biochemistry and Molecular Biology
( ) Entomology
( ) Microbiology and Molecular Genetics
( ) Physiology
( ) Plant Biology
( ) Zoology
( ) Animal Science
( ) Biosystems and Agricultural Engineering
( ) Crop and Soil Sciences
( ) Fisheries and Wildlife
( ) Food Science and Human Nutrition
( ) Plant Pathology
( ) Horticulture
( ) Forestry
( ) Other [please specify in comment below]

Comments: ____________________________________________

2) What is your primary field of research? (Examples are plant reproduction, wheat breeding, stress tolerance, or microbial genomics)
____________________________________________

3) Are you a principal investigator or co-principal investigator for at least one externally funded project in your research program?
( ) Yes
( ) No

4) On average, how do you allocate your professional work time across the following activities? (% of professional work time)
Teaching ___
Research ___
Administration ___
Extension ___
Outreach ___
Other [please specify below] ___

Comments: ____________________________________________

5) Please indicate the degree of "basicness" or "appliedness" of your research/research program, using a scale in which 1 means "purely basic" and 7 means "purely applied." By "purely basic," we mean experimental or theoretical discoveries that add to fundamental scientific knowledge. By "purely applied," we mean research that draws from basic or other applied research to create new products.
( ) Purely basic 1  ( ) 2  ( ) 3  ( ) 4  ( ) 5  ( ) 6  ( ) Purely applied 7

6) During the past 5 years, have you received funding from any of the following source(s)?
(Click all that apply)
( ) Federal government
( ) State government
( ) Foundations/non-profit organizations
( ) Trade or commodity associations
( ) Individual firms/private companies
( ) Other [please specify below]

Comments: ____________________________________________

7) How frequently do you communicate with the following parties regarding your research? By "communicate" we mean verbal and written communications about your research and sharing of expertise, materials, and resources. (Response options for each party: Rarely, Monthly, Bi-weekly, Weekly, Daily)
Scientists in your lab
Scientists in your department
Scientists outside your department
University administrators
Clients (for example, growers)
Funding agencies
MSU Office of Intellectual Property
Extension staff
Lawyers
Journal editors and/or publishers
Media
Students
General public

8) Who plays a significant role in deciding your research agenda/research program? (Click all that apply)
( ) Myself
( ) My research group
( ) My immediate supervisor
( ) My institution (department/college/university)
( ) Funding agencies
( ) Other [please specify below]

Comments: ____________________________________________

9) Who should take responsibility for the validity of scientific results that a researcher publishes?
(Click all that apply)
( ) The individual researcher herself/himself
( ) PI of the research project
( ) The research group
( ) The institution (department/college/university)
( ) The scientific community as a whole
( ) Other [please specify below]

Comments: ____________________________________________

About Your Decision-Making

10) How important were the following criteria in setting up your research program/agenda? Please rate each criterion by clicking one number from "Not important" (1) to "Very important" (7).
Potential contribution to scientific theory
Potential contribution to new methods or devices
Potential marketability of final products
Availability of funding
Availability of research facilities and personnel
Publication probability in professional journals
Length of time required to complete the research
Enjoyment of doing this kind of research
Importance to society
Scientific curiosity
Demands raised by clientele
Client needs as assessed by you
Priorities of the university
How peer scientists evaluate your research
Currently a 'hot' topic
Potential to patent and license research findings
Ability to collaborate with colleagues
Feedback from extension personnel
Your intellectual freedom

11) Please rate how likely you are to choose research problems that contain the following characteristics. Rate each characteristic by clicking one number from "Not likely" (1) to "Very likely" (7).
Requires you to use new theories or concepts
Requires you to master new research skills/techniques
Requires a long time to complete
Requires buying services from other units (ex: mass spectroscopy facility)
Research problem is relatively new and unconventional in your field
Less prior literature is available on the research topic
Research topic is competitive
Research topic is likely to be anticipated by other scientists in your field
Research has a low likelihood of success
Research topic is controversial among your peers
Research topic is controversial in the public eye

12) On a scale of 1 "Not likely" to 7 "Very likely," please rate how likely you are to engage in the following activities.
Talk openly about your research with your peers
Submit your manuscripts to high-impact journals
Talk about your research to media
Conduct research with potentially high payoffs (ex: lead to significant findings and more publications) but low rates of success

About Your Research Outcomes, Goals, and Expectations

13) Below is a list of professional goals cited by research scientists. Please rate how important you believe each of these goals is to you by clicking one number from "Of no importance" (1) to "Of highest importance" (7).
(A "Not applicable" option is also provided for each goal.)
Developing unique research projects
Obtaining grants/funding
Increasing total number of publications
Increasing number of publications with first authorship
Increasing number of publications in high-impact journals
Attending more conferences
Pursuing better career prospects
Making significant contributions to your scientific field
Making significant contributions to society
Training students

14) In the last five years, how many of each of the following types of publications have you authored or co-authored?
Journal articles ___
Posters ___
Conference presentations ___
Invited talks ___
Abstracts ___
Books ___
Book chapters ___
Bulletins (Extension and outreach) ___
Newsletters ___
Other [please specify below] ___

Comments: ____________________________________________

15) Do you believe that your research and publishing over the past 5 years has already benefited or will benefit (directly or indirectly) any of the following? Please click one number from "Not at all" (1) to "A great deal" (7).
Your scientific discipline
Other scientific disciplines
General public
Local or state governmental agencies
Federal agencies
Foreign groups, institutions, or governments

16) Under what circumstances would you decide to drop a research project before completion?
(Click all that apply)
( ) The funding runs out
( ) The research is not producing publishable results
( ) A different scientist publishes the results before you
( ) The research topic becomes controversial within the scientific community
( ) The research topic becomes controversial among the general public
( ) The environmental conditions become unsuitable for field work
( ) Other [please specify below]

Comments: ____________________________________________

17) Are there any research interests that you have not been able to pursue until now?
( ) Yes
( ) No

18) If "Yes," what would make it possible for you to pursue these research interests?
____________________________________________

About Yourself

19) What is your gender?
( ) Male
( ) Female

20) What is your age?
( ) Under 24
( ) 25-34
( ) 35-44
( ) 45-54
( ) 55-64
( ) 65+

21) What is your ethnic background?
( ) Asian/Pacific Islander
( ) Caucasian
( ) Native American/Alaska Native
( ) Black/African-American
( ) Hispanic
( ) Other/Multi-Racial

22) What is your highest degree obtained?
( ) Baccalaureate
( ) Masters
( ) Doctorate
( ) Other [please specify below]

Comments: ____________________________________________

23) Think about when you first started your research program/research agenda. How many years have you been working in your particular research area?
____________________________________________

24) What is your current position?
( ) Assistant professor
( ) Associate professor
( ) Full professor
( ) Professor emeritus
( ) Lecturer/Instructor
( ) Post-doctoral fellow
( ) Graduate student
( ) Researcher
( ) Other [please specify in comment below]

Comments: ____________________________________________

25) If you have a faculty appointment, please choose the option most applicable to you.
( ) Tenured
( ) On tenure track but not yet tenured
( ) Non-tenure position
( ) Retired
( ) Not applicable

26) Conducting scientific research involves continuously making decisions and evaluating their impacts. If you have any additional comments, questions, or suggestions about scientists' decision-making, please share them here.
____________________________________________

REFERENCES

Aghion, Philippe, Mathias Dewatripont, and Jeremy C. Stein. 2005. “Academic Freedom, Private-Sector Focus, and the Process of Innovation.” Working Paper, Harvard University.

Agosto, Denise E. 2002. “Bounded Rationality and Satisficing in Young People’s Web-based Decision Making.” Journal of the American Society for Information Science and Technology 53:16-27.

Alper, Joe. 1993. “The Pipeline is Leaking Women all the Way Along.” Science 260:409-411.

Atkinson, Richard C., Roger N. Beachy, Gordon Conway, France A. Cordova, Mary Anne Fox, Karen A. Holbrook, Daniel F. Klessig, Richard L. McCormick, Peter M. McPherson, Hunter R. Rawlings III, Rip Rapson, Larry N. Vanderhoef, John D. Wiley, and Charles E. Young. 2003. “Public Sector Collaboration for Agricultural IP Management.” Science 301:174-175.

Aven, Terje and Ortwin Renn. 2009. “On Risk Defined as an Event Where the Outcome is Uncertain.” Journal of Risk Research 12:1-11.

Azoulay, Peter, Waverly Ding, and Toby Stuart. 2007. “The Determinants of Faculty Patenting Behavior: Demographics or Opportunities?” Journal of Economic Behavior and Organization 63:573-6.

Beck, Ulrich. 1992. Risk Society: Towards a New Modernity. London: Sage.

Bell, Daniel. 1973. The Coming of Post-Industrial Society: A Venture in Social Forecasting. New York: Basic Books.

Bercovitz, Janet, and Maryann Feldman. 2008. “Academic Entrepreneurs: Organizational Change at the Individual Level.” Organization Science 19(1):69-89.

Boffey, Philip M., Joann E. Rodgers, and Stephen H. Schneider. 1999.
“Interpreting Uncertainty: A Panel Discussion.” Pp. 81-91 in Communicating Uncertainty: Media Coverage of New and Controversial Science, edited by Sharon M. Friedman, Sharon Dunwoody, and Carol Rogers. NJ: Lawrence Erlbaum Associates.

Boholm, Asa. 1998. “Comparative Studies of Risk Perception: A Review of Twenty Years of Research.” Journal of Risk Research 1(2):135-163.

Bornmann, Lutz, and Hans-Dieter Daniel. 2007. “What Do We Know About the h Index?” Journal of the American Society for Information Science and Technology 58(9):1381-1385.

Buccola, Steven, David Ervin, and Hui Yang. 2009. “Research Choice and Finance in University Bioscience.” Southern Economic Journal 75(4):1238-1255.

Busch, Lawrence, and William B. Lacy. 1983. Science, Agriculture, and the Politics of Research. Boulder, CO: Westview Press.

Byrnes, James P., David C. Miller, and William D. Schafer. 1999. “Gender Differences in Risk-Taking: A Meta-Analysis.” Psychological Bulletin 125(3):367-383.

Campbell, Eric G., and David Blumenthal. 2000. “Academic Industry Relationships in Biotechnology: A Primer on Policy and Practice.” Cloning 2(3):129-136.

Chen, Xiaodong, Kenneth A. Frank, Thomas Dietz, and Jianguo Liu. 2012. “Weak Ties, Labor Migration, and Environmental Impacts.” Organization and Environment 25:3-24.

Cohn, Lawrence D., Susan Macfarlane, and Claudia Yanez. 1995. “Risk Perception: Differences Between Adolescents and Adults.” Health Psychology 14:217-222.

Cole, Stephen. 1979. “Age and Scientific Performance.” American Journal of Sociology 84(4):958-977.

Collins, Harry M. 1974. “The TEA Set: Tacit Knowledge and Scientific Networks.” Science Studies 4:165-186.

Collins, Harry M. 1998. “The Meaning of Data: Open and Closed Evidential Cultures in the Search for Gravitational Waves.” American Journal of Sociology 104:293-338.

Cooper, Mark H. 2009.
“Commercialization of the University and Problem Choice by Academic Biological Scientists.” Science, Technology, and Human Values 34(5):629-653.

Covello, Vincent T. 1983. “The Perception of Technological Risks: A Literature Review.” Technological Forecasting and Social Change 23:285-297.

Dasgupta, Partha, and Eric Maskin. 1987. “The Simple Economics of Research Portfolios.” Economic Journal 97:581-595.

Delamont, Sara, and Paul Atkinson. 2001. “Doctoring Uncertainty: Mastering Craft Knowledge.” Social Studies of Science 31:87-107.

Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley & Sons, Inc.

Douglas, Mary, and A. Wildavsky. 1982. Risk and Culture. Berkeley, CA: University of California Press.

Dunwoody, Sharon, and Byron T. Scott. 1982. “Scientists as Mass Media Sources.” Journalism Quarterly 59:52-59.

Edge, David O., and Michael J. Mulkay. 1976. Astronomy Transformed: The Emergence of Radio Astronomy in Britain. New York: Wiley.

Etzkowitz, Henry, Carol Kemelgor, and Brian Uzzi. 2000. Athena Unbound: The Advancement of Women in Science and Technology. Cambridge: Cambridge University Press.

Finucane, Melissa L., Paul Slovic, C.K. Mertz, and James Flynn. 2000. “Gender, Race, and Perceived Risk: The ‘White Male’ Effect.” Health, Risk & Society 2:159-172.

Fischhoff, Baruch, Sarah Lichtenstein, Paul Slovic, Stephen L. Derby, and Ralph L. Keeney. 1981. Acceptable Risk. Cambridge: Cambridge University Press.

Fischhoff, Baruch. 2012. Risk Analysis and Human Behavior. London: Earthscan.

Flynn, James, Paul Slovic, and C.K. Mertz. 1994. “Gender, Race, and Perception of Environmental Health Risks.” Risk Analysis 14(6):1101-1108.

Foltz, Jeremy D., Bradford L. Barham, and Kwansoo Kim. 2007. “Synergies and Tradeoffs in University Life Sciences Research.” American Journal of Agricultural Economics 89(2):353-367.

Fox, Mary Frank. 2001.
“Women, Science, and Academia: Graduate Education and Careers.” Gender & Society 15:654-666.

Giddens, Anthony. 1990. The Consequences of Modernity. Stanford, CA: Stanford University Press.

Gierlach, Elaine, Bradley E. Belsher, and Larry E. Beutler. 2010. “Cross-Cultural Differences in Risk Perceptions of Disasters.” Risk Analysis 30:1539-1549.

Gieryn, Thomas F. 1978. “Problem Retention and Problem Change in Science.” Pp. 96-115 in Sociology of Science, edited by Jerry Gaston. San Francisco: Jossey-Bass.

Glenna, Leland L., Rick Welsh, David Ervin, William B. Lacy, and Dina Biscotti. 2011. “Commercial Science, Scientists’ Values, and University Biotechnology Research Agendas.” Research Policy 40:957-968.

Gordon, Michael. 1984. “How Authors Select Journals: A Test of the Reward Maximization Model of Submission Behavior.” Social Studies of Science 14(1):27-43.

Groves, Robert M., Floyd J. Fowler, Mick P. Couper, James M. Lepkowski, Eleanor Singer, and Roger Tourangeau. 2009. Survey Methodology. New Jersey: John Wiley & Sons.

Habermas, Jurgen. 1984. Theory of Communicative Action. Volume 1: Reason and the Rationalization of Society. Translated by T. McCarthy. Boston, MA: Beacon.

Hackett, Edward J. 2005. “Essential Tensions: Identity, Control, and Risk in Research.” Social Studies of Science 35:787-826.

Harris, Paul. 1996. “Scientific Grounds for Optimism? The Relationship between Perceived Controllability and Optimistic Bias.” Journal of Social and Clinical Psychology 15:9-52.

Hull, David L., Peter D. Tessner, and Arthur M. Diamond. 1978. “Planck’s Principle.” Science 202:717-723.

Ingram, Paul, and Karen Clay. 2000. “The Choice-Within-Constraints: New Institutionalism and Implications for Sociology.” Annual Review of Sociology 26:525-546.

IRGC. 2005. White Paper on Risk Governance: Towards an Integrative Approach. Geneva: IRGC.

Jaeger, Carlo, Ortwin Renn, Eugene A. Rosa, and Tom Webler. 2001. Risk, Uncertainty, and Rational Action. London: Earthscan.
Keller, Evelyn Fox. 1985. Reflections on Gender and Science. New Haven: Yale University Press.

Klein, Cynthia, and Marie Helweg-Larsen. 2002. “Perceived Control and the Optimistic Bias: A Meta-Analytic Review.” Psychology and Health 17(4):437-446.

Kleinman, Daniel Lee, and Steven Vallas. 2001. “Science, Capitalism, and the Rise of the ‘Knowledge Worker’: The Changing Structure of Knowledge Production in the United States.” Theory and Society 30:451-492.

Kleinman, Daniel Lee. 2003. Impure Cultures: University Biology and the World of Commerce. Madison, WI: The University of Wisconsin Press.

Knorr-Cetina, Karin D. 1999. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.

Knorr-Cetina, Karin D. 2005. “Science, Technology, and Their Implication.” Pp. 546-560 in The Sage Handbook of Sociology, edited by Craig Calhoun, Chris Rojek, and Bryan Turner. Thousand Oaks: Sage.

Knorr-Cetina, Karin D. 2005. “The Fabrication of Facts: Toward a Microsociology of Scientific Knowledge.” Pp. 175-195 in Society and Knowledge, edited by Nico Stehr and Volker Meja. New Brunswick, NJ: Transaction.

Kuhn, Thomas S. 1996. The Structure of Scientific Revolutions. Chicago, IL: The University of Chicago Press.

Lacy, William B., Lawrence Busch, and Carolyn Sachs. 1980. “Perceived Criteria for Research Problem Choice in Agricultural Sciences.” Fifth World Congress for Rural Sociology. Mexico City.

Lahsen, Myanna. 2008. “Experiences of Modernity in the Greenhouse: A Cultural Analysis of a Physicist ‘Trio’ Supporting the Backlash Against Global Warming.” Global Environmental Change 18:204-219.

Lam, Alice. 2010. “From ‘Ivory Tower Traditionalists’ to ‘Entrepreneurial Scientists’? Academic Scientists in Fuzzy University-Industry Boundaries.” Social Studies of Science 40(2):307-340.

Latour, Bruno. 1987. Science in Action. Cambridge: Harvard University Press.

Lincoln, Anne, Stephanie Pincus, Janet Koster, and Phoebe Leboy. 2012.
“The Matilda Effect in Science: Awards and Prizes in the United States, 1990s and 2000s.” Social Studies of Science 0(0):1-14.

Luhmann, Niklas. 1993. Risk: A Sociological Theory. New York: Aldine De Gruyter.

Max, Claire. 1982. “Career Paths for Women in Physics.” Pp. 99-118 in Women and Minorities in Science: Strategies for Increasing Participation, edited by Sheila M. Humphreys. Boulder, CO: Westview Press.

McComas, Katherine A., John C. Besley, and Zheng Yang. 2008. “Risky Business: Perceived Behavior of Local Scientists and Community Support for Their Research.” Risk Analysis 28:1539-1552.

McCright, Aaron M. and Riley E. Dunlap. 2013. “Bringing Ideology In: The Conservative White Male Effect on Worry About Environmental Problems in the USA.” Journal of Risk Research 16:211-226.

McDowell, George R. 2001. Land-Grant Universities and Extension into the 21st Century: Renegotiating or Abandoning a Social Contract. Ames, IA: Iowa State University Press.

Merton, Robert K. 1938. “Science, Technology and Society in Seventeenth Century England.” Osiris 4(1):360-632.

Merton, Robert K. 1942/1973. “The Normative Structure of Science.” Pp. 267-273 in The Sociology of Science: Theoretical and Empirical Investigations, edited by Robert K. Merton. Chicago, IL: University of Chicago Press.

Merton, Robert K. 1957. “Priorities in Scientific Discovery: A Chapter in the Sociology of Science.” American Sociological Review 22:635-659.

Merton, Robert K. 1973. The Sociology of Science. Chicago: University of Chicago Press.

Messeri, Peter. 1988. “Age Differences in the Reception of New Scientific Theories: The Case of Plate Tectonics Theory.” Social Studies of Science 18:91-112.

Moen, Bjorg-Elin and Torbjorn Rundmo. 2006. “Perception of the Transport Risk in the Norwegian Public.” Risk Management 8:43-60.
National Academy of Sciences. 2012. About NAS. Retrieved July 11, 2013 (http://www.nasonline.org/about-nas/organization/).

National Science Board. 2012. Science and Engineering Indicators 2012. Arlington, VA: National Science Foundation. Retrieved February 24, 2013 (http://www.nsf.gov/statistics/seind12/pdf/seind12.pdf).

Noble, David F. 1977. America By Design: Science, Technology, and the Rise of Corporate Capitalism. Oxford: Oxford University Press.

Renn, Ortwin. 2008. “Concepts of Risk: An Interdisciplinary Review, Part 1.” GAIA 17(1):50-66.

Renn, Ortwin. 2008. Risk Governance: Coping With Uncertainty in a Complex World. London: Earthscan.

Rier, David A. 2003. “Gender, Lifecourse and Publication Decisions in Toxic Exposure Epidemiology: Now! Versus Wait a Minute!” Social Studies of Science 33:269-300.

Rohrmann, Bernd, and Ortwin Renn. 2000. Cross-Cultural Risk Perception: A Survey of Empirical Studies. London: Kluwer Academic Publishers.

Rosa, Eugene A. 2010. “The Logical Status of Risk: To Burnish or to Dull.” Journal of Risk Research 13:239-253.

Roth, Wolff-Michael, and G. Michael Bowen. 2001. “‘Creative Solutions’ and ‘Fibbing Results’: Enculturation in Field Ecology.” Social Studies of Science 31:533-556.

Schiebinger, Londa. 1999. Has Feminism Changed Science? Cambridge, MA: Harvard University Press.

Sheehan, Kim Bartel. 2001. “E-mail Survey Response Rates: A Review.” Journal of Computer-Mediated Communication 6(2).

Simon, Herbert A. 1955. “A Behavioral Model of Rational Choice.” Quarterly Journal of Economics 69:99-118.

Sjoberg, Lennart. 1999. “Risk Perception in Western Europe.” Ambio 28(6):543-549.

Sjoberg, Lennart. 2008. “Genetically Modified Food in the Eye of the Public and Experts.” Risk Management 10(3):168-193.

Slovic, Paul, Baruch Fischhoff, and Sarah Lichtenstein. 1979. “Rating the Risks: The Structure of Expert and Lay Perceptions.” Environment 21:14-20.
Slovic, Paul, Baruch Fischhoff, and Sarah Lichtenstein. 1984. “Behavioral Decision Theory Perspectives on Risk and Safety.” Acta Psychologica 56:183-203.

Slovic, Paul, Baruch Fischhoff, and Sarah Lichtenstein. 1985. “Characterizing Perceived Risk.” Pp. 92-125 in Perilous Progress: Managing the Hazards of Technology, edited by R.W. Kates, C. Hohenemser, and J.X. Kasperson. Boulder, CO: Westview Press.

Slovic, Paul. 1987. “Perception of Risk.” Science 236(4799):280-285.

Stephan, Paula. 2012. How Economics Shapes Science. Cambridge, MA: Harvard University Press.

Stokes, Donald E. 1997. Pasteur’s Quadrant: Basic Science and Technological Innovation. Washington, D.C.: The Brookings Institution.

Teague, Gerald V. 1981. “Compatibility of Teaching and Research at a Land-Grant University.” Improving College and University Teaching 29(1):33-35.

Thursby, Jerry G., and Mary C. Thursby. 2002. “Who is Selling the Ivory Tower? Sources of Growth in University Licensing.” Management Science 48(1):90-104.

Vallas, Steven, Daniel Kleinman, Abby Kinchy, and R. Necochea. 2004. “The Culture of Science in Industry and Academia: How Biotechnologists View Science and the Public Good.” Pp. 217-238 in Biotechnology: Between Commerce and Civil Society, edited by N. Stehr. New Brunswick: Transaction Publishers.

Valve, Helene and Ruth McNally. 2013. “Articulating Scientific Practice with PROTEE: STS, Loyalties, and the Limits of Reflexivity.” Science, Technology, and Human Values 38(4):470-491.

Vaughan, Elaine, and Brenda Nordenstam. 1991. “The Perception of Environmental Risks Among Ethnically Diverse Groups.” Journal of Cross-Cultural Psychology 22(1):29-60.

Vlek, Charles A. 1996. “A Multi-level, Multi-stage and Multi-attribute Perspective on Risk Assessment, Decision-making, and Risk Control.” Risk, Decision, and Policy 1(1):9-31.

Walsh, John P., Charlene Cho and Wesley M. Cohen. 2005. “The View from the Bench: Patents, Material Transfers, and Biomedical Research.” Science 309:2002-2003.
Warsh, David. 2006. Knowledge and the Wealth of Nations: A Story of Economic Discovery. New York: W. W. Norton.

Weinstein, Neil D. 1980. “Unrealistic Optimism About Future Life Events.” Journal of Personality and Social Psychology 39:306-320.

Widnall, Sheila E. 1988. “AAAS Presidential Lecture: Voices From the Pipeline.” Science 241:1740-1745.

Wray, Brad. 2003. “Is Science Really a Young Man’s Game?” Social Studies of Science 33:137-149.

Xie, Yu and Alexandra A. Killewald. 2012. Is American Science in Decline? Cambridge, MA: Harvard University Press.

Zenzen, Michael, and Sal Restivo. 1982. “The Mysterious Morphology of Immiscible Liquids: A Study of Scientific Practice.” Social Science Information 21:447-473.

Zuckerman, Harriet. 1978. “Theory Choice and Problem Choice in Science.” Pp. 65-95 in Sociology of Science, edited by Jerry Gaston. San Francisco: Jossey-Bass.