This is to certify that the dissertation entitled "What Happens When Computers Break the Rules? A Case Study of Technological Innovation and Organizational Change" presented by Valerie L. Worthington has been accepted towards fulfillment of the requirements for the Ph.D. degree in Counseling, Educational Psychology, and Special Education.

WHAT HAPPENS WHEN COMPUTERS BREAK THE RULES? A CASE STUDY OF TECHNOLOGICAL INNOVATION AND ORGANIZATIONAL CHANGE

By Valerie L. Worthington

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Counseling, Educational Psychology, and Special Education

1999

ABSTRACT

WHAT HAPPENS WHEN COMPUTERS BREAK THE RULES? A CASE STUDY OF TECHNOLOGICAL INNOVATION AND ORGANIZATIONAL CHANGE

By Valerie L. Worthington

Like fish in water, scholars and educators generally regard the rules that govern their activity as being obvious and natural; for all intents and purposes, they are invisible despite their strong influence. Scholarly and educational practices are embodied in media and artifacts; the nature of different media and artifacts can influence issues such as appropriate methods of inquiry, what constitutes work of significance, and even who can participate.
The introduction of new media, such as computer technology, disrupts activity in scholarly settings by making the familiar rules strange and revealing the deeper meanings that have been ascribed to them, as well as highlighting the interpenetration of media and practice. In this dissertation, I explore the influence of the electronic medium, in the form of a Web-based manuscript management system known as Tiger, on scholarly activity within the American Educational Research Association (AERA). Specifically, I undertake a primarily qualitative investigation of the ways Tiger interacts with established AERA practices for submitting, reviewing, and accepting or rejecting proposals of original research for AERA's annual meeting. Using a theoretical framework that draws upon literature in neoinstitutionalism, the rhetoric of inquiry, and technology and society, I argue that Tiger's perceived violation of these rules provides an opportunity for developing a deeper understanding of their implications for our beliefs about scholarship. It also facilitates the investigation of the broader implications of technology integration and its impact on organizations.

Copyright by VALERIE LYNN WORTHINGTON 1999

Dedicated to my parents, Roland and Lois Worthington, and to my sister Marjorie. Words cannot express my love and gratitude.

ACKNOWLEDGMENTS

Everybody who writes a dissertation claims that their committee is the best that ever existed. Unfortunately, everybody who makes this claim is wrong, because that distinction belongs to my committee.
I owe so much to each of my committee members: to Yong Zhao, for taking a risk on a bizarre idea (and student, for that matter), for always encouraging me and stimulating my thinking, and, most importantly, for friendship; to David Plank, for somehow always knowing what I was trying to say long before I had any clue, for calling me on every line of BS I tried to slip past him, and for many much-needed reality checks; to Dick Prawat, for unconditional confidence in me, for big ideas, and for priceless perspective on the whole academic scene; and to David Wong, for creativity above and beyond the call of duty, for healthy and intelligent skepticism, and for memorable conversations about everything under the sun. And finally, to all of them together, for truly having my best interests at heart, and for being so darn likable.

Needless to say, countless other people have been and continue to be incredible influences on my academic life, as well as wonderful friends and confidantes. It is not easy to make a distinction between the two, because some of my favorite people to laugh with are also some of the smartest. And if I said as much about each person as I'd like to say, the acknowledgments would be longer than the dissertation.
So I'd simply like to express my heartfelt thanks to the following people, whom I've listed in alphabetical order (just to be different from those people who write "in no particular order," and also to make sure I haven't missed anyone), for having a positive influence on me in a variety of ways (they know how, and that's what's important): Avra Jordana-Alter, Ken Amaditz & Mary Hanlon, Sharon Anderson, Naomi Aoki, Jennifer Beams, JaNice Benjamin, Chip Bruce, Gina Cervetti, Ron & Meg Derrer, Patrick Dickson, Cherilyn Elsey, Rick Ferdig, Karen Fox, Tracy Gath, Debbie Gillespie, Mark Gillingham & Ruth Garner, Mark & Tanna Girod, Karen Glickman, Julie Gordon & Patrick Willis, Molly & Robert Hardie, Andrew Henry, Tom Henry, Joan Hughes, Su Jones, Maura McManimon, Punya Mishra, Samantha Neukom, Sharman & Chris Oliver, Becky Wai-Ling Packard & Seamus Gahan, Michael & Stacey Pardales, Jenny Patrick, Lisa Payne, Arun Ramanathan, Henry Ratliff, Cheryl Rau, Woody Richman, Darlene Roebel, Gary Rovner & Suzanne Karbarz, Lisa Roy, Allison Salkeld, Noah Schenendorf, Lisa Sensale, Steve Sheldon & Brenda Neuman, Maria Sosa, Elizabeth Spring, Rachel & Ryan Stark-Lilienthal, Misha Strauss & Scott Moore, Jennifer Thomas, Michelle Veersma, Sapna Vyas, Chris Walker, Joyce Wasserstein, Jennifer Williams, Chris Wixson, Leigh Zarelli, and everyone at Loredo's Cross Trainers of Martial Arts, especially Sifu Julian Loredo and Colleen Lillie.

TABLE OF CONTENTS

INTRODUCTION
  Bushmen, Coke Bottles, and Gods: The Interaction of People, Things, and Beliefs
  AERA: The Interaction of People, Things, and Beliefs in Educational Research
Chapter 2: Theoretical and Physical Contexts for the Study
  The Concept of Organization
  The Ramifications of Organized Activity
  The Annual Meeting: A Microcosm of an Organization
  The Annual Meeting: What Makes the Immovable Object Move?
  The Interrelationship of People, Things, and Beliefs: When One Changes, the Others Are Affected
  Neoinstitutionalism and the Rhetoric of Inquiry
  Institutions at Work in AERA's Annual Meeting
  The Institutions of Inquiry
  The Significance of Institutions, or, Why Study the Annual Meeting?
  Unexpected Byproducts of Institutions
  Artifacts and Institutions: Rhetoric and Realities about How Change Occurs
  The Influence of Paper on Scholarship
  Conceptualizing Technology, Computer and Otherwise
  Conceptualizing Organizational and Social Change Realistically
  Revealing the Institutions of Inquiry: The Introduction of a Technological Innovation
  Annual Meeting Activities: Tiger as a Breaching Experiment
  Research Questions
  Summary
Chapter 3: Interpretation
  Data Sources and Justification for Using Them
  Justifying Data Source #1
  Justifying Data Source #2
  Justifying Data Source #3
  Data Collection: Contexts and Limitations
    Contexts
      Documentation
      E-mails
      Survey responses
      Interviews
    Limitations
      Self-Selection
      Limitations of Survey
  Data Analysis Procedures
  Summary
Chapter 4: Institutions of Inquiry within AERA
  Context for Interpretation of Findings
  Research Questions #1 and #2: What are the institutions of inquiry within this organization related to submitting, reviewing, and coordinating proposals for the annual meeting? How does the introduction of Tiger into the practices of AERA reveal the institutions of inquiry?
    Institution of Inquiry #1: Scholarship Must Appear Scholarly
      Categories of Formatting
        Category 1: Fundamental Meaning
        Category 2: Functional Meaning
        Category 3: Stylistic Meaning
        Category 4: Symbolic Meaning
      Articulating the Significance of Formatting
    Institution of Inquiry #2: There Are Levels of Scholarship
      The Significance of Session Format
      Scarcity as an Explanation for the Hierarchy of Session Formats
      The Influence of Tiger on Institution of Inquiry #2
  Research Question #3: How is this perception of the institutions of inquiry adaptive and maladaptive within the organization?
    Institutions of Inquiry Are Adaptive Because They Lend Credibility to the Enterprise of Educational Research
      The Reputation of Educational Research
    Institutions of Inquiry Are Maladaptive Because They Are Constraining and Generate Unexpected/Unwanted Byproducts
  Research Question #4: What changes might we expect to see in these institutional perceptions in the face of a change agent such as Tiger?
Chapter 5: Conclusions and Implications
  Implications for the Conduct and Practices of Scholarship
  Implications for Technological Innovation
  Implications for Education
References
Appendices
  Appendix A: Interview Protocol
  Appendix B: Table of Themes

LIST OF TABLES

Table 1: User Response Rate for Survey
Table 2: Comparison of Tiger with Traditional Proposal Process
Table 3: Users' Overall Impressions
Table 4: Demographic Information on Users
Table 5: Breakdown of Proposer Complaints about Formatting
Table 6: Breakdown of User Complaints about Formatting

INTRODUCTION

In the United States, "the academy" is a term that refers to the disciplined, rigorous quest for knowledge in areas such as the sciences and the humanities. People within the academy conduct and contribute to "scholarship," which is both the body of knowledge that helps to explain phenomena in the world and the set of systematic, rigorous procedures and guidelines (Soltis, 1990) that scholars use to contribute to this knowledge.
For example, the American Educational Research Association (AERA), "the most prominent international professional organization with the primary goal of advancing educational research and its practical application" (http://aera.net), is concerned with the generation of scholarship specifically about education, and the scholarship that is disseminated under its auspices is required to follow systematic procedures and guidelines that demonstrate scholarly rigor. It is these systematic procedures and guidelines that distinguish scholarship from other forms of knowledge (Shulman, 1988) such as folklore or storytelling. However, these procedures and guidelines are not somehow self-evident, but rather are the result of the tacit and widespread agreement of a community of scholars. As Henneberg (1997) writes,

Science as a human endeavor is also open to social pressures expressed as moral and ethical norms. All human activities need to be organized and regulated by norms of behavior that constrain actions of individuals and attach value to decisions. In other words, individual opinions and actions must be censored and decisions justified.

Science is associated with sets of protocols that tacitly suggest proper behavior, but these protocols develop over time and are perpetuated within communities of scholars that support certain procedural and ideological decisions on the basis of certain beliefs and values. One of the protocols of scholarship has to do with rigor; indeed, as Shulman (1988) writes, "What is important about disciplined inquiry is that its data, arguments, and reasoning be capable of withstanding careful scrutiny by another member of the scientific community" (p. 5). These protocols also have a strong influence over what we believe as a community of scholars. In other words, the procedures we use for carrying out our scholarly activity have an impact on what it is that we can actually do, and subsequently, on what it is that we as a community believe.
This is due in part to the fact that the beliefs underlying our procedural decisions about scholarship operate at a subconscious, taken-for-granted level. While we may explicitly understand and evaluate the "proper" procedures for conducting scholarship, we often have a less conscious perception of the beliefs and values that legitimate these procedures.

According to the language of neoinstitutionalism, a theoretical perspective I describe in more detail later, the beliefs and values underlying our practices are institutionalized, which means that they perpetuate themselves within a social context without deliberate intervention by actors within the context; they are deeply rooted in the social context because they are taken for granted. The rule systems that operate in a social context index the more tacit ethos that gives that context its distinctive character. This more tacit ethos often influences our activity in ways of which we are not necessarily aware; thus, while we may adhere to certain practices because they seem "the thing to do" in a given context, the fact remains that these practices are not the only ones that would work within that context. They are to some extent arbitrary and contrived.

For instance, whether they realize it or not, individuals within a social context bring tacit perceptions derived from the social context to bear on their interpretation of and interaction with the physical artifacts they discover or create in this context. Activities within any social system reflect the lived environment, such that individuals interact with and create artifacts in their physical surroundings as they carry out the activities that characterize their social system. The tacit perceptions that pervade a social context and distinguish it from other social contexts mediate an individual's understanding of the physical artifacts in this context; these institutionalized perceptions shape individuals' interpretations of different artifacts.
Bushmen, Coke Bottles, and Gods: The Interaction of People, Things, and Beliefs

A good example of this interaction among actors, institutionalized perceptions, and artifacts in a social context can be found in the film The Gods Must Be Crazy, in which an airplane pilot from the West drops a Coke bottle into the midst of a group of Bushmen of the Kalahari Desert in Africa. Having never seen a Coke bottle before, the Bushmen decide it is a gift from the gods and incorporate it into their daily activities, using it for grinding meal, making music, and curing skins, among other things.

To those of us who have seen Coke bottles before and know what they were originally designed for, the scenes of the Bushmen interacting with the bottle are amusing and perplexing: Why do they use the bottle in such strange ways? Why do they treat it with such reverence? Don't they see that it is simply a Coke bottle? This discrepancy between the perception of an American, for instance, and that of a Kalahari Bushman, as to the "true" purpose of a Coke bottle illuminates the influence the tacit beliefs and values of a social context have on the interpretation and use of physical artifacts. The Bushmen interpreted the Coke bottle in terms of the ways it could help them carry out the activities that were central to their way of life—belief in a system of deities, activities related to the generation of food, clothing, and shelter—while an American or other Westerner watching the film might interpret the Coke bottle as a self-evident artifact designed for the sole purpose of holding and transporting Coca-Cola, the drinking of which is an activity associated in advertising with leisure and fun. In each social context, actors—Bushmen or Westerners—interpreted the physical artifacts they found in their social contexts—in this case, a Coke bottle—in terms of their tacit belief systems. The subsequent use of the artifacts differed depending on the parameters of the belief system.
Thus, as this example shows, the meaning that is made within any social context depends upon the interpenetration of the actors, the institutionalized belief systems, and the physical artifacts that comprise that context. Within this structure, any change in any of these elements could result in some sort of change to the overall context, a point the film also illustrates. For example, at the start of the film, the Bushmen are characterized as a people that knows no violence; however, by the middle of the film, at which time the Coke bottle has assumed a prominent place in the Bushmen's lives, tensions are running high. Since the Bushmen have discovered so many uses for the bottle, it has become a valuable, but scarce, commodity. Subsequently, since all the group members want to use the bottle at the same time, the bottle also causes squabbling and the first appearance of violence in the group; a memorable scene in the film depicts different members of the group repeatedly snatching the bottle out of one another's hands and hitting one another on the head with it as they take it away for their own use. Eventually, the Bushmen decide that they must return this gift from the gods, as it is tearing at the very fabric of their existence, turning them into people who are greedy for material things and distrustful of their fellow group members.

AERA: The Interaction of People, Things, and Beliefs in Educational Research

A similar interpenetration of actors, physical artifacts, and institutionalized belief systems characterizes the social context within which educational research takes place. In addition, a similar structure exists here such that a change in one element of the social system could result in change to the entire system, a point I argue later.
Within educational research contexts, the actors are mostly scholars and academicians; the artifacts, including things such as scholarly journals and books, reflect the significance of the role of paper in educational research; and the institutionalized belief systems have to do with the rationale for what constitutes scholarly activity. That our interaction with the physical artifacts of educational research is mediated by our institutionalized belief systems about educational research has particular significance for scholarship in the information age. As we learn how to contend with new computer technologies, we may experience changes in our beliefs and expectations about scholarship that are influenced by the presence of these technologies, much as the Kalahari Bushmen were forced to interact and come to terms with the Coke bottle.

This dissertation seeks to portray the procedures and protocols that characterize the academy as computer technology plays an ever-increasing role in the conduct of scholarship. I will use AERA to provide tangible evidence for this argument. I will provide evidence to develop a picture of the factors associated with a particular set of knowledge-generation activities—management of annual meeting research proposals—within the organization that is directly related to the structure and implicit value system of the annual meeting. The introduction of a technological innovation, known as Tiger, into the protocols related to this activity provides a good opportunity for understanding how this knowledge generation proceeds. The introduction of this innovation also provides a good opportunity for understanding how change takes place within this particular organization, and what the implications are for the functioning of the organization.
In other words, I can examine the interpenetration of the actors (proposers, reviewers, and chairs), the artifacts (annual meeting proposals that had heretofore been created on paper but are now in computer-compatible form), and the belief systems (related to the conduct of scholarship) within AERA.

It is also important to situate this close examination of the mechanisms that characterize the social context of AERA within a broader theoretical arena. Strengthening the connection between this specific example of organizational and interpersonal dynamics and an overarching theoretical framework increases the likelihood that this investigation will contribute to a better general understanding of the mechanisms that structure the activities of members of other social organizations. Thus, the purpose of the following sections is to develop a theoretical understanding of the mechanisms that drive the social activity—in this case, knowledge-generating activity—within AERA as it relates to the annual meeting. I will draw upon three relevant bodies of literature: neoinstitutionalism, the rhetoric of inquiry, and, finally, technology and society. I argue that it is in the intersection of these three bodies of literature that I can develop a clearer picture of the theoretical underpinnings of AERA's annual meeting.

In this dissertation, I will argue that an examination of the interaction of physical artifacts, implicit but deeply institutionalized "institutions of inquiry," and human actors within the social system that is AERA can help to illuminate the mechanisms at work that give it its distinctive character. Using the annual meeting as a microcosm of AERA, I will draw on theories of neoinstitutionalism and the rhetoric of inquiry, supported by my own empirical data, to develop this characterization.
Further, I will explore the implications of introducing a new artifact, a piece of computer technology known as Tiger, into the practices surrounding the AERA annual meeting. Similarly, I will draw upon theories of technology and society, in addition to empirical data, to pursue an understanding of these implications, particularly given AERA's current concerns with maintaining its status as a viable, authoritative governing body for educational research.

I will show that questions of authority, tradition, and acceptability all play a role in reducing the uncertainty attendant upon educational research by positioning it within the broader social and political context of the academy. I will describe stakeholders within the organization whose actions play more and less significant roles in preserving what they perceive to be the integrity and character of AERA and the annual meeting. I will develop an understanding of stakeholders' experiences engaging with other stakeholders and a characterization of what it means to be a member of AERA and, by extension, an educational researcher. I will elaborate upon the ways in which factors of authority, tradition, and acceptability interact with one another to give the annual meeting its distinctive character. I will investigate the nature of the relationships among actors, institutionalized values and beliefs, and artifacts within AERA, and examine how these relationships combine to give the annual meeting its distinctive character. I will investigate how stakeholders respond to the introduction into institutionalized AERA practices of artifacts, specifically an instantiation of computer technology known as Tiger, that challenge these practices, as well as the ramifications of change for the conduct of educational research.
Chapter 2

THEORETICAL AND PHYSICAL CONTEXTS FOR THE STUDY

The Concept of Organization

Much social activity takes place within organizations, which, according to Jepperson & Meyer (1991), are

social ideologies with social (usually legal) licenses...Formal organization is not only interdependent with, but interpenetrated with, the various elements of rationalized society: modern actors and their 'interests,' legitimated functions and their functionaries, and agents of the modern collectivity such as state elites, and legal and professional theorists and practitioners (p. 205).

Organizations are thus characterized by groups of people, beliefs, and objects that are interconnected, ostensibly in the service of sets of explicit socio-political goals, and that have the social, and often legal, authority to govern these people, beliefs, and objects via sets of policies and expectations. It is the unique configurations of these people, beliefs, and objects and their interactions with the policies and expectations that help to distinguish one organization—representing one category of social activity—from another. In the case of the organization known as AERA, the interaction of these organizational elements, policies, and expectations is characterized by a concern with the generation and communication of educational scholarship: with the rules for how to write, speak, think, and conduct research in a manner befitting a "serious" educational scholar.

The Ramifications of Organized Activity

In addition to facilitating an organization's attempts to achieve its explicit purposes, these organizational elements may also engender unanticipated but robust side effects that are actually at odds with the organization's stated goals. By authorizing certain types of behaviors—which they must, if they are to meet their objectives—organizations also necessarily prohibit other types of behaviors, in the name of appropriateness and efficiency.
In the case of AERA, researchers are expected to adhere to certain protocols for carrying out research that other scholars can identify and replicate. Work that is not done according to sets of stringent and explicit "scientific" standards is not considered to be scholarly.

What constitutes scientific standards is not self-evident, however. The rhetorical turn in scholarship (Bazerman, 1989; Nelson, Megill, & McCloskey, 19xx) suggests the academy's acknowledgment of the fact that the notion of objectivity breaks down upon close inspection. In other words, while methodological objectivity has traditionally been a treasured concept within scholarship, it does not truly exist. Instead, what passes for objectivity is actually the set of assumptions and values that give a discipline or organization its distinctive character. Scholarly work must accord with these assumptions and values, which are often simply perceived as "the way things are done," but these assumptions and values actually only comprise the reality of the discipline or organization within which they operate.

Members of a scholarly organization such as AERA, whether they realize it or not, are influenced by the norms and institutionalized behaviors that act as arbiters of scholarship for that organization. Membership in the organization brings with it tacit acceptance of the taken-for-granted norms and beliefs within that organization that mediate between social actors and the artifacts, such as research papers or grant proposals, with which they interact.

This issue is salient for three reasons. First, the value systems that influence organization members' perceptions about appropriate versus inappropriate behaviors or beliefs have social, sociological, and political implications. For instance, if there are appropriate and inappropriate behaviors, there must be individuals or collectives who police those behaviors, which introduces a power and authority differential among organization members.
Second, the introduction of a power and authority differential increases the likelihood that people with the power and the authority are the ones whose agendas will be furthered (Zhao & Worthington, 1999), perhaps to the detriment of people with less power and different agendas. Third, the policies that support the goals of an organization often do not adequately keep pace with social change. Organizations that are slow to change may adhere to policies that do not account for new and influential cultural developments; this in turn may compromise the organization’s ability to carry out its stated purposes. A good example of a “new and influential cultural development” is the increasing use of computer technology in the academy.

What this means for the AERA membership is that the power differential has implications for the types of educational research agendas that are considered relevant, the types of people who ascend to positions of power within AERA, and the types of social interactions that influence normal, “appropriate” activity. Further, this may also mean that the policies of AERA may not be keeping pace with a changing world, which may compromise their perceived usefulness among the membership. Depending upon one’s perspective, then, while some set of policies and standardized behavior is necessary for serving the best interests of an organization, these policies may inadvertently generate negative social, sociological, and political byproducts. They may also create the perception that the organization is “out of touch” if they do not account for changing social and cultural conditions. The policies of an organization like AERA represent a double-edged sword—they provide structure but simultaneously impose constraint. Once again, the objectivity issue is salient here, because those people who enforce the policies are themselves not objective but instead filter their understanding of the policies through their own experiences.
Thus, their perceptions of the policies may not coincide with the perceptions of other, less powerful members. This raises a host of questions about the true nature of the influence the values and beliefs about appropriate behavior have on activity within AERA. In this dissertation, I explore some of the ways the beliefs and values of AERA influence behavior with respect to the annual meeting, and what the real ramifications are for the enterprise of scholarship. In the following section, I explain how the beliefs and practices associated with the annual meeting can be considered to be representative of the beliefs and practices associated with AERA as a whole.

The Significance of the Annual Meeting for the Organization

A research organization’s annual meeting helps define the organization to outsiders as it simultaneously helps its members articulate the meaning and significance of the organization for themselves. Within AERA, each yearly conference represents far more than members convening to share empirical or professional discoveries made in the previous year. First, the conference has become an integral part of the culture of the organization, helping in a significant way to “put a face” on the organization for members and the general public. Minutes (e.g., Russell, 1995), photos, statistics, and research papers from the annual meeting combine to provide a tangible indication of what the organization is and what it does. Second, with respect to the membership, not only does the annual meeting provide educational researchers with an opportunity to interact face to face and in real time with dozens of colleagues, but it also, by its very nature, implicitly legitimates the overarching research and political agendas of members and of the entire field. As Söderqvist and Silverstein (1994) state,

Today meetings not only provide arenas where researchers can exchange information about new theories, data and techniques.
By analogy with scientific disciplines, they can also be seen as political-rhetorical units—arenas for negotiation of what constitutes interesting research topics, for delineation of cognitive territories, and for distribution of scientific status and roles within the disciplinary hierarchy (p. 514).

Third, the annual meeting of AERA has been cited as a resource for increasing the involvement of minority scholars in the field of educational research (Russell, 1994), serving as a pedagogical tool for novice scholars and “breathing life” into the texts they read (Parker et al., 1995), and encouraging interaction with international educational researchers (White, 1995). The annual meeting provides a context that exemplifies and ratifies members’ status positions and popularizes certain types of research agendas; that puts power in the hands of some researchers by providing them with a forum for voicing their ideas while depriving others of similar power; and that supports researchers’ joint, face-to-face exploration of many issues relevant to the life of the organization and to educational research more generally. Thus, Miles’ (1994) contention that “the [annual meeting] is the central event” (p. 21) of AERA rings true on many levels. Since the annual meeting assumes such a prominent place in AERA’s activities, it stands to reason that there would be much discussion about those aspects of the meeting that seem useful and about those aspects that could be improved, and indeed, this is the case (e.g., Zuckerman, 1992; Miles, 1994). There is a recurring tendency among AERA members to use the annual meeting as a site for discussion of issues relevant to the membership (e.g., Jackson, 1990), such as future directions for the organization, the changing needs of the membership, and the development of more explicit links between educational research and teaching practice. These discussions help to generate the ideals and standards by which AERA defines itself.
However, discussion of these ideals and standards and the realization of an annual meeting and an organization that actually live up to them are two different things. Those researchers taking on the challenge of understanding and improving on the annual meeting face an uphill battle; inertia, tradition, and the sheer size of the annual meeting all represent obstacles to meaningful change. As a result, the resolution of these discussions is often reflected in only the most superficial ways in the structure, implicit value system, and protocols of subsequent annual meetings. This poses a problem not only in terms of the stated goals of the organization, but also in terms of public relations. Educational research is often castigated for being disconnected from the actual problems of classroom practice (Lieberman, 1992). The apparent difficulty AERA faces in effecting meaningful, lasting change to its own annual meeting does little to enhance the credibility of the enterprise of educational research, to say nothing of actually meeting the needs of the constituency and, ultimately, the practitioners and learners who are ostensibly the beneficiaries of the educational research endeavor. Uncertainty about what AERA is or is supposed to be, then, appears to influence, sometimes in negative ways, what AERA does and does not do.

The Annual Meeting: What Makes It Immovable?

Miles’ (1994) report “Recasting the Annual Meeting: Reflections on a Change Process” supports this notion that the annual meeting provides an environment for the discussion of issues that are widely believed to be significant for AERA, but that the outcomes of these discussions seldom result in any meaningful change to the annual meeting itself. His close examination of AERA’s annual meeting revealed that on the one hand, the annual meeting was doing a “reasonably” good job of helping members “reaffirm...
existing friendships; build...new networks; pursu[e]...personal objectives such as funding or employment; and conduct... [organization] business” (p. 22). On the other hand, however, Miles reported that three major goals—interaction around work of substantive quality; influence on educational policy and practice; and development of the profession—were not being well met. He agreed with the contention of the ad hoc committee convened to examine the annual meeting that “the [annual meeting’s] professional, substantive aspects are in jeopardy and deserve serious redesign and improvement” (p. 23). Miles’ effort to understand and improve upon the shortcomings of the annual meeting was the fourth such effort initiated by AERA in recent years; it follows the reports of three previous AERA committees that reviewed the annual meeting in 1971, 1975, and 1985 (Miles, 1994). These issues attest to the profound significance of the annual meeting in the life of AERA. However, the relative lack of success of these committees in changing the annual meeting also raises uncertainty about the extent to which the expectations of different groups within the membership for the annual meeting are realistic, or even consistent with one another. What might account for the fact that the annual meeting tends to be impervious to the best intentions of very capable scholars to change it? Miles’ (1994) arguments that “[l]arge annual meetings often seem to be self-defeating social systems that discourage the collegial learning they were set up to provide” and that “changing the dynamics is not an easy task” (p. 21) are obviously accurate, as evidenced by the relative lack of progress on this issue in recent years. However, what is also obvious is that even deep awareness of the fact that annual meeting dynamics are difficult to change is not sufficient knowledge for effecting change.
Miles’ (1994) description of the annual meeting as a social system provides one way of thinking about this problem. In any social system, there are actors, including and especially human ones, that influence the “finished product,” in this case, the annual meeting itself. And it is human activity that gives the annual meeting its distinctive nature, both in terms of scholarly content and in terms of structural imperviousness to change. This is so because it is these human actors who decide what the content and structure of the annual meeting should be. Any understanding of the characteristics of the annual meeting and the reasons it is constituted in a certain way, then, necessitates an examination of the characteristics and assumptions of the people who plan, execute, and attend it.

An important second point about social systems is the fact that human actors within a social system interact with physical artifacts in the pursuit of their goals and are influenced in these interactions by the tacit, institutionalized belief systems that give structure to the social system. As I indicated previously, it is the interpretation of these physical artifacts, such as handouts from research presentations, the configuration of conference rooms, and annual meeting registration materials, that helps bring the abstract concepts of the annual meeting and AERA into concrete focus. The physical artifacts within a social system, then, represent far more than simple tools for helping human actors meet their needs within the social system. Rather, these physical artifacts, mediated by the norms and values of the system, become touchstones of meaning within a social system, such that human actors unconsciously rely upon them to act as a yardstick for understanding all stimuli within the system.
In other words, human actors combine their tacit understandings of the belief systems of the social context to interpret physical artifacts, which then anchor their understandings of activity within the social system.

The Introduction of a New Artifact into a Social System

An example from an anthropologist named Sharp (1952) about the introduction of the steel axe into the social practices of the Yir Yoront Aboriginal group in Australia helps to illustrate both the extent to which our interpretation of physical artifacts through the belief systems of our social contexts helps us understand our worlds, and also the potential implications of a change in a physical artifact for the belief systems of a social group. In the essay “Steel Axes for Stone Age Australians,” Sharp (1952) describes the introduction of the steel axe into the practices of the Yir Yoront, arguing that it had a revolutionary impact on their world order. He describes how European missionaries distributed steel axes among the Yir Yoront, an Aboriginal group whose social and cultural practices had originally centered on the use of stone axes. Sharp (1952) describes the long-standing social power of the stone axe, writing that

not only was it adult men alone who obtained axe heads and produced finished axes, but it was adult males who retained the axes, keeping them with other parts of their equipment in camp, or carrying them at the back slipped through a human hair belt when traveling. Thus, every woman or child who wanted to use an axe—and this might be frequently during the day—must get one from some man, use it promptly, and return it to the man in good condition. While a man might speak of ‘my axe,’ a woman or child could not; for them it was always ‘your axe,’ addressing a male or ‘his axe’ (p. 76).

From this description, we can derive a fairly clear sense of what the nature of the social interactions of the Yir Yoront must have been like when the stone axe was the tool of choice.
Men were clearly in charge, creating and owning outright the main artifact that conferred social power, with women and children necessarily subordinating themselves to the men to gain access to this artifact. Indeed, as Sharp (1952) writes, “repeated and widespread conduct centered on the axe helped to generalize and standardize throughout the society these sex, age, and kinship roles...and helped to build up expectancies regarding the conduct of others defined as having a particular status” (p. 77). The stone axe, then, was more than a mere tool; more significantly, it provided the basis for the social belief and status systems of the Yir Yoront. The stone axe acted as a touchstone of meaning for this group, as a reference point by which the Yir Yoront were able to define and evaluate their social activity. Further, the “expectancies regarding the conduct of others,” or what the neoinstitutionalism literature would call the institutions of the Yir Yoront, mediated individuals’ interactions with the stone axe and suggested appropriate patterns of behavior for different groups within the social system. Sharp (1952) writes that European missionaries introduced the steel axe into Yir Yoront culture some time in the nineteenth century. The fact that the steel axe was significantly more easily created than the stone axe turned out to have a strong influence on the Yir Yoront culture. The missionaries saw that steel axes could perform some tasks more quickly and efficiently than stone axes, and distributed them to the Yir Yoront in the hopes that the Yir Yoront would subsequently spend their increased leisure time on more “civilized” activities. While this did not occur, Sharp argues that the steel axe did have a significant, though unexpected, impact on the social environment of the Yir Yoront.
As it turns out, the distribution of steel axes to the Yir Yoront was a matter of happenstance; the missionaries gave them to any group members who happened to be around, whether they were men, women, or children. Thus, women and children, who had never before possessed their own stone axes but who had received steel axes from the missionaries, developed a sense of ownership over their newly acquired steel axes that they had never before experienced. Sharp (1952) writes that

young men or even boys might also obtain steel axes directly from the mission. A result was that older men no longer had a complete monopoly of all the axes in the bush community. Indeed, an old man might have only a stone axe, while his wives and sons had steel axes which they considered their own and which he might even desire to borrow. All this led to a revolutionary confusion of sex, age, and kinship roles, with a major gain in independence and loss of subordination on the part of those able now to acquire steel axes when they had been unable to possess stone axes before (p. 84).

Sharp’s (1952) example supports the argument that physical artifacts can have tremendous meaning within a social system. Further, this example supports the argument that the extent of this meaning is not necessarily consciously known until the use of the artifacts is challenged in some way; it underscores the tacit nature of the institutions that mediate individuals’ interactions with these artifacts. It is not my purpose in supplying this example to suggest that the introduction of computer technology into the practices of AERA will have a similarly transformative effect on that social context.
But what I would like to suggest with Sharp’s (1952) example is the following: first, artifacts are deeply interconnected with our meaning-making activities, and second, this reliance on artifacts suggests the possibility for some change within a social system following the introduction of a novel artifact. In keeping with these arguments, in the following section I explore the explanatory possibilities of the theoretical perspectives of neoinstitutionalism, the rhetoric of inquiry, and technology and society for understanding the mechanisms that influence the nature of AERA’s annual meeting structures, activities, and values. Specifically, I will investigate the connection between these theoretical perspectives and the particular set of institutions related to annual meeting preparation. I explain more clearly what the literature has to say about the ways people interact in organized social settings, the purposes and procedures for communicating scholarship (known in the literature as the rhetoric of inquiry), the implications of these procedures for the nature of the scholarship that is produced, and the changes that occur as part of the activity within a social system. I use these theoretical perspectives to frame my study and increase my understanding of why AERA annual meeting activities are constituted the way they are, why they are often impervious to change, and how they may finally change after all. Specifically, I elaborate more fully on the role a computer technology may play in revealing the annual meeting practices—by revealing the fact that social practices are enacted via the artifacts that support them—and in facilitating or encouraging change in these practices.

Neoinstitutionalism and the Rhetoric of Inquiry

Neoinstitutionalism takes the stance that in order to understand human activity, one must understand this activity in the social and historical context in which it takes place (e.g., Powell, 1988).
Indeed, central to neoinstitutionalism is the idea that “all institutional theorists see action as socially embedded and constrained by regulations, normative obligations, and/or cognitive schemata” (Rowan & Miskel, in press, p. 8). These “regulations, normative obligations, and/or cognitive schemata” are known as institutions. In the terms of neoinstitutionalism, an institution “represents a social order or pattern that has attained a certain state or property...institutions are those social patterns that, when chronically reproduced, owe their survival to relatively self-activating social processes” (Jepperson, 1991, p. 145). In other words, an institution is a stable, predictable, socially legitimated pattern of activity that involves the interaction of at least one member of a subset of society with at least one other. Jepperson (1991) provides many examples of institutions, including the handshake, the vacation, sexism, voting, marriage, attending college, and the corporation. At first glance, these concepts appear to have very little in common with one another; some of them represent abstractions while others involve concrete behaviors; some of them are examples of organizations while others are not; there is no immediately notable context within which all of these concepts would have an appropriate place. However, these concepts do share an important commonality, according to Jepperson (1991). He writes that, “Each of these metaphors connotes stable designs for chronically repeated activity sequences” (p. 145). Thus, an institution has a quality about it such that people who interact by means of the institution use—and expect from others—certain types of behavior that are implicitly associated with the institution. For instance, the American institution of the handshake necessitates that two people face one another, extend their right hands, clasp each other’s hands, and move their clasped hands up and down several times.
Most people understand that this is how a handshake works, and when a handshake is or is not appropriate, without necessarily being actively aware that they understand this. Each time an individual finds herself in a situation that warrants a handshake, for instance, when being introduced to someone for the first time, she simply performs the handshake, probably without thinking about it very much, if at all. DiMaggio and Powell (1991) write that people within organizational contexts

develop beliefs about appropriate behavior that are absorbed through socialization, education, on-the-job learning, or acquiescence to convention. Individuals face choices all the time, but in doing so they seek guidance from the experiences of others in comparable situations and by reference to standards of obligation (p. 10).

Thus, institutions act as shorthand that helps people make sense of their world in an efficient way. Armed with institutions, people can come to social situations with some sense of what to expect and how to interact with others. DiMaggio (1988) sums this up nicely when he writes,

Central to institutional theory is the assumption that humans have a preference for certainty and predictability in organizational life. Individuals’ preferences for relatively routinized and predictable environments generate much behavior that tends to create and sustain institutions (p. 7).

Further, institutions are basically tacit (e.g., Rowan & Miskel, in press). In other words, the fact that institutions make for “routinized and predictable environments” enables people largely to forget about them; for the most part, individuals are able to take for granted the notion that the people they interact with will honor the same institutions in the same ways. The taken-for-grantedness of institutions makes them difficult to examine, and difficult to change, unless they are challenged or questioned in some way.
Institutions in AERA’s Annual Meeting

In this context, conceptualizing the annual meeting as an institution—that is itself constituted by other institutions—is reasonable and helps us understand its relatively enduring nature. Individuals who have been socialized into the practices and expectations of AERA have come to expect to engage in certain routine, predictable activities at the annual meeting. For instance, all conference goers rely on name tags to help them interact with some of the 12,000+ other attendees. They may not be consciously aware of their reliance on name tags, but they would certainly notice the absence of them. Indeed, these expectations have become so ingrained that these individuals likely do not even realize that they harbor them. Thus, at the same time that organization members clamor for change to the annual meeting, their reliance on institutions perpetuates stasis, perhaps without even their conscious knowledge. This is in keeping with DiMaggio and Powell’s (1991) contention that neoinstitutionalism “tends to stress the stability of institutionalized components” (p. 14). In other words, institutions have a tendency to endure (see also Meyer & Rowan, 1991; Zucker, 1991). A complex institution such as an organization’s annual meeting is supported and characterized by countless other institutions. For instance, with respect to the annual meeting, repetitive, unconsciously anticipated behavior patterns proliferate that are associated with activities such as attendance at paper presentations, informal interactions with peers (including, of course, the handshake), and any of a number of social activities. Indeed, there are even institutions that help to structure the annual meeting before it ever takes place; there would not be an annual meeting were there not many predictable facilitative activities prior to it.
Since one of the main purposes of AERA is to support and influence the generation of knowledge, and since one of the main purposes of the annual meeting is to provide a forum for discussion of this knowledge, one would expect to encounter many institutionalized practices for writing, assessing, and selecting original research to be presented at the annual meeting. Much has already been written on institutions related to the communication of original research, which have been characterized collectively as the rhetoric of inquiry. As its name implies, the rhetoric of inquiry focuses on “the quality of speaking and writing, the interplay of media and messages, the judgment of evidence and arguments” (Nelson, Megill, & McCloskey, 1987, p. ix). The conduct of scholarship has as its implicit intent the communication to peers and society of meaningful discoveries; indeed, this sense that scholarship is something to be shared serves as the basis for the rhetoric of a scholarly discipline. In other words, the need to communicate ideas and discoveries to one another necessitates that scholars speak “the same language”—use the same rhetoric of inquiry—so that different people within the discipline can interpret their ideas and discoveries according to similar sets of rules and expectations. In short, scholars harbor expectations about the way research can be conducted; they rely upon repetitive and reliable understandings—institutions—of scholarship and research to help them make sense of and evaluate the process of knowledge generation. Importantly, the “rhetorical turn” (Nelson et al., 1987; Bazerman, 1987, 1988) has caused scholars to expand their focus beyond understanding the phenomena they investigate to include consideration of the methods by which they understand these phenomena. This is so because these scholars have come to believe that these methods constrain that which we can know.
Even within the physical sciences, which are often touted for the “objectivity” of their methodologies, scholars are increasingly willing to accept the notion that there is no such thing as a completely objective research finding (Jansen & Peshkin, 1992; Hammersley & Atkinson, 1983), because all research findings are constructed so that they conform to the social rules—the institutions—of a disciplinary community (Guba & Lincoln, 1994; Krieger, 1991).

The Institutions of Inquiry

The rhetoric of inquiry, as its name implies, tends to focus primarily on the communication of scholarship, on those rhetorical conventions scholars employ when speaking about their work before a group of peers or drafting a written document for possible publication in a scholarly journal. However, the notion of accepted protocols and subtle expectations about the “proper” way to do things is by no means limited to the communication of scholarship. Rather, scholars are also constrained by proper social protocols when they carry out research studies, interact with peers, apply for jobs or funding, or attend the annual meeting of AERA. The enterprise of scholarship is governed by countless social protocols, and it is these protocols, or institutions, that can help us understand the reasons AERA’s annual meeting and AERA itself operate the way they do. Since the rhetoric of inquiry focuses only on the communication of scholarship, and since this dissertation is concerned with more of the social protocols that govern scholarship than just those that govern its communication, it becomes necessary to devise a term for thinking about this larger group of social protocols. The term I have chosen is the institutions of inquiry. The fact that these institutions influence activities that contribute to the shaping of the annual meeting is in keeping with the idea that many institutions come into play before the annual meeting even takes place.
It is one set of these “prior” institutions of inquiry that is the subject of this dissertation.

Neoinstitutionalism’s focus on the social environment as an influence on organizational behavior (Scott & Meyer, 1991) implies that the context of the organization imposes its own set of behavioral requirements on organizational actors that may or may not be consistent with the stated goals of the organization. In addition to providing a set of useful terms for characterizing the social protocols within AERA, then, neoinstitutionalism provides a way to explain the reasons for these social protocols. As they have been characterized here, the goals of the organization and the realities of the annual meeting do not dovetail in an expected way, despite repeated attempts to align them with one another. Within the framework of neoinstitutionalism, however, they would not necessarily be expected to, because the power of established institutions to prevent substantive change is given prominence. Thus, we can move past this apparent inconsistency, using our deeper understanding of it to develop a clearer picture of the factors at work and their implications for the behavior and goals of AERA members. Further, as we will see in a later section, other institutions that may come to play a role in the life of the organization also impose their own behavioral requirements on organizational actors. This, too, has implications for the functioning, goals, and expectations of the members of the organization.

The Significance of Institutions: Why Is the Annual Meeting the Way It Is?

Neoinstitutionalism and an extension of the rhetoric of inquiry can help explain the interplay of social actors within an organization such as AERA.
The next step is to understand the implications of the institutions of inquiry for the day-to-day activities and beliefs of AERA members; indeed, the notion that institutionally-influenced activities in organizations have significant implications for the nature of educational research is one of the central assertions of this dissertation. An additional definition of an institution by Jepperson (1991) helps us begin to understand why this would be important to examine. Jepperson writes that

An institution is then a social pattern that reveals a particular reproduction process. When departures from the pattern are counteracted in a regulated fashion, by repetitively activated, socially constructed, controls—that is, by some set of rewards and sanctions—we refer to a pattern as institutionalized (p. 145).

The key concepts here are rewards and sanctions. What this definition of institution makes clear is that there are consequences associated with the existence of institutions: positive ones for people who comply with them, and negative ones for people who defy them. Those who adhere—whether consciously or not—to institutional constraints may be rewarded within the organization with increased legitimacy (Meyer & Rowan, 1991) or power (Jepperson, 1991), while those who disregard them or are less well versed in them may be less able to exert their influence or enjoy success within the organization. This pattern of rewards and sanctions stems from the notion of the institution as prescribing particular forms of behavior that are considered appropriate and adaptive within a social group or organization. In turn, the notions of appropriateness and adaptiveness stem from a belief that the integrity of the institutions must be protected if people within organizations are to understand each other.
For instance, with respect to the rhetoric of inquiry, there is a line of research suggesting that would-be researchers must demonstrate that they can “walk the walk” of scholarship, that they understand and follow the rules of conducting research, if they and their work are to be taken seriously by their peers (e.g., Madigan, Johnson, & Linton, 1995; Gaskins et al., 1998). If researchers do not demonstrate this ability, they and their work run the risk of being summarily dismissed by the research community on the basis of sloppiness, arrogance, or just plain ignorance.

The Integrity of the Institutions

On one hand, retaining the integrity of the institutions makes perfect sense, because it helps knowledge-generating groups carry out their work and gives them some sense that the work they do is valuable and rigorous. On the other hand, however, if we extend the rhetoric of inquiry example, we can begin to identify problems related to the constraining and enabling power of institutions. The use of rhetoric in scholarship is the result of many decisions on the part of the scholar, or, more accurately, on the part of the collective of scholars within a discipline who judge each other’s work. Far from being an objective or neutral entity, the rhetoric with which members of a community of scholars interact and communicate with one another reflects the assumptions and beliefs of that community (Maxwell, 1992). A statement by Charles Bazerman (1987), a student of the rhetoric of inquiry, alludes to the potential implications of the non-objectivity of scholarly investigation. He writes, “The forging of a scientific language is a remarkable achievement; but since it is a human accomplishment, it must be constantly reevaluated and remade as the human world changes” (p. 125). Bazerman’s (1987) comment underscores the notion that there is much to be learned from an examination of the ways in which the academy attempts to make discoveries and justify these to peers and a wider audience.
More generally, it provides support for another of the overarching ideas of this dissertation, which is that our methods for doing things have an impact on that which we can actually do. Indeed, in large part, the significance of the current study stems from the contention that how we do the work of educational research influences the nature and substance of this work. To the extent that our methods represent institutions that help to define the educational research community, they shape and constrain the interactions of educational researchers, research participants, graduate students, faculty, and all the physical objects that combine to create our perceptions of “school,” “educational research,” and “education.” The problem has to do with the fact that our methods necessarily advantage some viewpoints and disadvantage others; indeed, this issue is at the heart of AERA officials’ determination to make changes to the annual meeting. Those whose viewpoints are valued have nothing to complain about; indeed, they may not even realize the influence of their methods on their values. However, those whose viewpoints are not valued may be silenced, leading not only to a situation of rhetorical haves and have-nots, but also to a situation in which our perceptions of what is “real,” and, subsequently, what is adaptive for the purposes of educational research, are skewed. A good example of this situation within the educational research community has to do with the fact that in the recent past, the traditional, quantitative approach to research limited the types of inquiry that were considered viable. Specifically, more qualitative research strategies such as those employed by Heath (1983) in her groundbreaking ethnography Ways with Words were originally devalued as too subjective, despite the fact that her book is now widely viewed as an important contribution to both educational research and practice.
It was only over time that practices and beliefs within the educational research community changed sufficiently to support and value work such as Heath’s. Thus, at the same time that our practices give us structure and predictability, they can also give us tunnel vision. But on the flip side, all researchers, including educational researchers, must justify their findings by adhering to accepted methodological and ideological practices. Indeed, the furor caused by the introduction of work such as Heath’s (1983) stemmed from the fact that it represented such an obvious departure from the traditional scientific method and the experimental, positivistic research paradigm. The scientific method and the experimental model had gained credence and legitimacy in the physical sciences and had then been co-opted to demonstrate “scientificness” within the social sciences. Due to the influence of this paradigm, large-scale, quantitative studies of educational phenomena were common within education at this historical moment; thus, work such as Heath’s (1983), which was far more qualitative in nature, fell outside the realm of expectations among the educational research community. Studies of educational phenomena patterned after protocols that were widely sanctioned by scholars in the hard sciences were—and still are—able to help us learn something about education. However, while these studies were considered to be methodologically sound, the methodologies left much to be desired in terms of capturing some phenomena of interest. For example, these more quantitatively oriented studies and their attendant methodologies were unable to capture much of the “messiness” and complexity of human behavior in educational settings, despite the fact that this messiness and complexity are recognized as two important hallmarks of educational phenomena.
The strength of the ethnographic methodology Heath (1983) employed was its ability to capture just those elements of the educational experience, but this methodology had not yet been deemed rigorous within the educational research community. Subsequently, Heath’s findings were considered suspect, despite the fact that they rounded out our picture of the educational landscape in ways that quantitative methods had not been able to reveal. There is thus the perception of a tension at work in educational research between the need for intellectual curiosity and free access to various components of the educational experience, and the need for intellectual rigor and standards (Tierney, 1993). Indeed, this tension is the subject of an ongoing and heated debate within the scholarly community in general. The question then becomes: How do educational researchers maintain an open mind about scholarship and intellectual investigation at the same time that they maintain acceptable standards of quality? This very issue is at the heart of the debate in educational research as AERA officials make decisions about how to structure the annual meeting: How is it possible to retain credibility while also promoting inclusivity? Since there is a wide variety of opinions on the subject of intellectual rigor, intellectual inclusivity, and what constitutes scholarly activity (e.g., D’Souza, 1991; Bloom, 1988; Hirsch, 1988; hooks, 1989; Harding, 1991), perhaps it does not make sense to think of this debate as ever coming to a definitive conclusion. However, it is still very important to consider these issues, as they do constrain what we can know and consider to be scholarship. In the following section, I describe in greater detail some of the particulars of how the institutions of inquiry are propagated through a social context such as AERA.
I will argue that these institutions are closely entwined with the media—namely, instantiations of paper—that support them, such that alterations in the media may facilitate alterations in the institutions themselves. This argument also precipitates a deeper discussion of the introduction of a piece of computer technology into AERA’s institutions of inquiry and the implications of that introduction for members’ perceptions of the nature of those institutions of inquiry.

A Complication: Rhetoric about Technology and Education

At its heart, this dissertation is about the methods an organization uses to carry out its business and the influence these methods have on the ethos, values, and overall character of the organization and its members, and vice versa (as this is a recursive process). However, this fundamental aspect of the dissertation runs the risk of being overshadowed by the fact that the dissertation also focuses on the introduction of a piece of computer technology into the social practices of this organization. In the current educational climate, any research that focuses partially or totally on computer technology is vulnerable to sweeping rhetoric about technology and education. This is so because some scholars are touting technology as the catalyst for the next great revolution in education while others express skepticism that it will make much difference in educational contexts. Indeed, much research on technology and education is cast as a debate in the theoretical and empirical literature between those who believe that computer technology will transform education and those who believe that the stability of the social processes that support education will mitigate the influence of computer technology on educational practice. By logical extension, the domain of educational research is also implicated in this either-or debate.
Bruce (1993) describes these two types of literature as the social system-focused discourse and the innovation-focused discourse. According to Bruce, the social system-focused discourse downplays the impact of a new technology or artifact on a social system and evinces pessimism about the real capacity of a new technology or artifact to make meaningful and lasting change in a real social setting, arguing that the established practices of a social situation are resistant to change. Educators and researchers on this side of the debate cite the example of instructional television, which made “no substantial difference” (Cuban, 1986) in student learning as compared to traditional teaching methods, to support their contention that computer technology will have no meaningful effect on education. As Bruce (1993) states, this discourse “takes on the role of the critic” (p. 13). The innovation-focused discourse, on the other hand, is more optimistic or utopian in nature, painting a picture of a social world in which technological innovation holds the key to social improvement. This discourse presupposes “not only that change is possible and that it does occur, but that the goal of discussion is to articulate the path to that change” (Bruce, 1993, p. 12). Participants in this discourse, then, heartily endorse technological innovation and are predisposed to observe and identify positive changes in the social system as a result of the introduction of a technological innovation. On this side of the debate, educators often tout technology as the deus ex machina* that will single-handedly revolutionize education (Papert, 1980; Negroponte, 1995; Rheingold, 1985; Miller & Olson, 1994).
While the either-or debate makes for some interesting rhetoric, it oversimplifies the realities of educational contexts such as classrooms and research settings, which are informed by and based on the complex interactions and characteristics of cultural artifacts and social practices related to education. Further, this rhetoric provides no insight into the ways change actually takes place in a social setting. As a result, the rhetoric that characterizes this debate does not help educators make realistic decisions that reflect a true understanding of the ways computer technology and established social practices in education interact, nor of the types of educational experiences and expectations that are likely to stem from this interaction. In short, as Bruce (1993) argues,

neither discourse alone accounts for important aspects of technological and social change; rather, an integrated model is needed...The maintenance of these separate spheres makes it difficult to see how changes to a social system occur through other than simple, one-directional causation. This impedes both the development of successful innovations and the understanding of social change. (Bruce, 1993, pp. 11, 14)

*The deus ex machina, or “god from the machine,” is a dramaturgical device that originated in ancient Greek theater. It is a person or thing that is introduced into a dramatic scene to solve an apparently insoluble problem or to help a story reach a conclusion when none seems readily apparent.

In other words, Bruce would argue that rather than continuing to argue in “either-or” terms, we must develop a coherent language for discussing what actually happens when an innovation, such as computer technology, interacts with a set of existing social practices, such as the protocols associated with processing AERA annual meeting proposals.
A way to start this conversation is to take a closer look at what cultural artifacts such as computer technology actually do to support or impede our activity and beliefs in social contexts. The physical artifacts we use to interact with one another in social contexts serve as touchstones of meaning around which entire belief systems and habits of mind can be constructed. It is these artifacts that give concrete meaning and shape to the abstract institutions that act as shorthand for helping us understand our social worlds. Further, the constraints and affordances that are associated with any artifact lend a distinctive character to the activities they support, and, by extension, to the institutions they index. The abstract institution of scholarship, then, is actually realized in terms of various social activities that are enabled via a set of physical artifacts. I argue that within the educational research community—and within the academy in general, for that matter—the main artifact that serves as a touchstone of meaning for scholars is paper. The institution of educational research is infused at every turn with meaning derived from and based on the medium of paper. Indeed, the implicit rules and beliefs of the institution of educational research are codified in the very language we use to describe it, and this language is the language of paper. For instance, within every scholarly discipline there is a canon of authoritative “texts,” which almost always appear in book or manuscript form. Doctoral candidates “write” their dissertations, circulating “chapters” to their committee members. An individual scholar gauging her progress on a given day may measure her productivity in terms of the number of “pages” she has written.
Scholars present “papers” at research conferences, generate “articles” or “chapters” for publication in printed journals or books, study “print-outs” of data, and send cover letters (which both abstractly “cover” qualifications and physically “cover” the rest of the materials) and “manuscripts” when applying for employment or funding. Thus, paper, in its many instantiations, serves as a tangible artifact that indexes the abstract institution of scholarship, with all the significance and implications that pertain to it; the language of paper pervades not only our conscious thoughts, but also our unconscious thoughts—and our subsequent actions—related to scholarship. In addition, inherent in the artifact of paper are certain constraints and affordances that lend particular characteristics to the actual activities that take place in the name of the institution of scholarship. As McLuhan (1994) argues,

The railway did not introduce movement or transportation or wheel or road into human society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure. (p. 8)

Similarly, the medium of paper did not create the notion of scholarship, as the venerable oral tradition of Greek scholars attests, but it has placed its own distinctive brand on the conduct and products of scholarship. It subtly encourages certain types of interactions and discourages others. Thus, in terms of scholarship within AERA, we must more closely examine our interaction with artifacts, especially paper, and the institutions that help us understand them, to determine the implications of this interaction for our scholarly belief systems and habits of mind. I argue that this understanding will fuel the conversation Bruce (1993) believes we must have.
Further, we must also try to understand that when these touchstones of meaning are altered, the possibility arises that our understanding of our social contexts will also be altered. This point is particularly relevant with respect to computer technology (Burris, 1989), given the speed with which it is entering our educational and educational research contexts. To the extent that computer technology differs from paper, and to the extent that our beliefs about scholarship are entwined in large part with our interactions with the medium of paper, the introduction of computer technology into educational contexts may alter our perceptions of what scholarship should be. In the next section, I provide my definition of computer technology, which will facilitate a comparison of its constraints and affordances with those of paper.

Defining Computer Technology

Any attempt to define “computer technology” first necessitates an attempt to define “technology” more generally. The discussion can quickly become complicated, because countless things, both physical and mental, that we take for granted in our everyday lives are classifiable as technologies. For instance, an example that is commonly cited as a taken-for-granted technology is the pencil. It is a simple physical instrument, but it enables us to carry out incredibly complex mental tasks, such as separating ourselves in space and time from our words or the words of others. Institutions as I have been defining them are arguably technologies as well, in ways that will become more evident as I further describe the concept of technology. Below, I describe the distinguishing characteristics of technologies. One thing that distinguishes a technology as a technology is the notion that it is used to modify the environment in a systematic manner. In the words of Street (1992), “What determines its status as ‘technology’ is the deliberate and conscious use of it by human agents” (p. 8).
The notion of “deliberate and conscious use” may sound paradoxical in light of the previous statement that many things we take for granted in our day-to-day lives are actually technologies. In this context, it is important to remember the argument that it is in the disruption of the use of these technologies that we realize their significance. Similarly, it is in the disruption of these technologies that we can realize the systematic ways in which we use them. The book is a good example of this notion that the disruption of a technology reveals the systematic ways humans use it. Indeed, as Landow (1996) writes, “We find ourselves, for the first time in centuries, able to see the book as unnatural, as a near-miraculous technological innovation and not as something intrinsically and inevitably human” (p. 25). In comparison with the computer, which serves as a breaching experiment for the book, the ways we create, maintain, and interact with books are thrown into relief. For instance, one of the systematic purposes of the book that we have come to take for granted, but that is illuminated in comparison with the computer, is the notion of having permanent records of ideas. The book and the written/printed word provided people with a way to record information in perpetuity, and, importantly, there had been no way to create or maintain such a record prior to the book. As the centuries wore on, the concept of permanent records became more and more part of the fabric of the culture in the form of libraries, bookstores, and even microfilm, as well as systems such as the Dewey decimal system that help people organize and maintain these permanent records. Thus, the concept of permanence did slowly become more taken for granted, but upon closer inspection of books and their maintenance, we can see more clearly the ways in which books have been—and are still—used in systematic ways to meet particular goals, such as maintaining permanent records.
Again, the notion of permanence in books is illuminated in comparison with computers, which distinctly differ from books in that the text on computer screens can be altered repeatedly and indefinitely. The notion of systematic activity used to meet particular goals is another important aspect of the concept of technology. From the idea that a technology has to do with the systematic modification of the environment, it logically follows that there is some purpose behind this systematic modification. Hodas (1993) describes technology as “a way of knowing applied to a specific goal.” This description implies that a belief system or a web of assumptions is associated with the use of any technology; indeed, this web of assumptions is part and parcel of the technology. This characterization also brings to mind the Kalahari Bushmen’s interpretation of the Coke bottle. It is the Bushmen’s way of knowing—their web of assumptions about their way of life—that influences how they use and understand the Coke bottle, just as it is the Westerners’ way of knowing that influences how they use and understand the Coke bottle. Thus, it is not just the bottle itself, but also the belief system surrounding the bottle, that helps in the definition and attainment of goals. Subsequently, it is both the bottle and the belief system supporting and influencing the use of the bottle that constitute the technology. The notion of the belief system as part of the technology also helps to explain how and why some technologies are eventually used for purposes for which they were not originally designed; to the extent that different people are shaped by different experiences, a given technology takes on different meanings for different people and in different contexts (Bruce, 1993). This subsequently allows them to read different possibilities into the context.
The final distinctive aspect of a technology is also a paradoxical one, in light of the notion that different people can read different things into a technology. Specifically, a technology has certain constraints and affordances that make it more supportive of some behaviors and beliefs and more discouraging of other behaviors and beliefs. In other words, a technology is never value-neutral (Postman, 1992), but instead serves particular political agendas. This is so because technologies are created by people, who themselves are never value-neutral, and because the values associated with a technology are an integral part of the technology. As Street (1992) writes,

Because technology is not simply a matter of hardware, the emergence of, or change in, technology can be expressed through new relationships. The liberation provided by a technology is not cost-free...While technology may reduce freedom as well as increase it, we need to recognize that one implication of this general account of technology is that it is constituted by a set of choices. (pp. 10-11)

In other words, different technologies facilitate different types of relationships among individuals and groups. The reason the introduction of a new technology into a social setting can be so threatening to the people in that setting is that it has the potential to change the relationships among the people in that setting. For instance, those who have traditionally found themselves in positions of power may find that their expertise and beliefs are irrelevant in the context of the new technology; thus, they may find that they no longer wield the power they once did. There is then a tension between the beliefs and values upon which the old technology was based and the beliefs and values that the new technology introduces. It is when these sets of beliefs and values do not coincide with one another that confusion and frustration manifest themselves.
Conceptualizing Organizational and Social Change--Realistically

Thus, in examining changes in our own practice, we should be focusing on the push and pull of the old and the new ways, rather than on the either-or arguments that characterize much rhetoric about the introduction of technology into existing practices (Bruce, 1993). It remains to be seen whether and how Tiger and the practices of AERA will interact and shape each other in unique ways, but it is here, rather than on the extreme “yes it does—no it doesn’t” rhetoric, that we should be focusing our attention. If we examine our existing practices and the introduction of new technologies in terms of reliance on artifacts and the belief systems enacted through them, we can begin to get closer to the actual interplay, rather than speaking in sweeping generalizations that provide little insight into the kinds of changes—both in the setting and in the innovation—that we can actually expect in a classroom or educational research setting that has witnessed the introduction of technological innovation into its practices. The point, then, is not to argue that an artifact will have a revolutionary effect on a social system, or to argue that it has no effect at all, although either outcome might be anticipated and desired depending on whom you ask. The point is that the introduction of such an artifact can help us learn something valuable about the significance we place on particular activities within social systems, as well as about how these activities may interact with innovations and actually change over time. This is a noteworthy thing to keep in mind given the current social and sociological climate: It is difficult work to document and understand the complex experiences of actual teachers, learners, and researchers as they struggle to teach and learn while contending with the introduction of technology, because the experiences of these people are where “real life” resides, and real life is exceedingly complicated.
But it is “real life” that should be the focus of our attention. As Kling (1991) notes,

To help advance social studies of computerization, the following conceptual issues need serious work: how to characterize the social-technological systems that are the objects of our inquiry and subjects of our theories, how to characterize the social organization that supports these social-technological systems, and how to draw boundaries around studies so that they are manageable by the tiny groups of researchers who investigate them. (p. 355)

This dissertation, the product of a tiny group of researchers, attempts to address some of these needs within the literature. In order to make useful observations, I need a more discriminating and sophisticated model than the either-or model described above. One of the tools I use in this attempt is a taxonomy of change developed by Bruce (1993) that is more useful in thinking about these issues than the either-or rhetoric, and that I describe below. In the context of Bruce’s (1993) taxonomy, one of the potential contributions of this dissertation is that it provides an opportunity to examine the changes that occur within the social system that is AERA as they actually occur. This also addresses the needs described by Kling related to understanding both innovations and social systems, in that I must provide a clear and comprehensive characterization of AERA and Tiger in order to document this change. Bruce’s (1993) model describes several different types of change implicated in what he calls the “realization process,” whereby an abstract technology comes to have concrete meaning and purpose in a specific social setting. During the realization process, Bruce argues, there are five types of change that can occur within a social system. These types are: consonant change, dissonant change, resistance to change, cascades of changes, and redesign of the innovation.
Consonant change, as its name implies, refers to changes in social practice that are consonant with the values of the social system; for instance, if a technology enables members of a group to carry out their work more efficiently and more quickly, then this change is viewed as consonant. Conversely, dissonant change refers to changes in social practice that are in conflict with the values or expectations of the organization. Sherry Turkle’s (1995) work illuminates the notion of dissonant change. She argues that the introduction of the computer—specifically the Internet—into society has upended our conception of social constructs such as gender and the nature of romantic relationships, which are tacitly dependent on face-to-face interaction. For instance, the Internet supports on-line communities in which users can portray themselves as a different gender or species and develop relationships with other users whose identity they know only in the context of these communities. Resistance to change refers to the fact that there are times when an innovation is introduced into a social setting but precipitates no change whatsoever in the social practices; the introduction of the television into school settings provides an example of resistance to change. Indeed, as Bruce (1993) writes, “The nonuse of many patented inventions and the failure of technologically innovative products and companies attests to this fact” (p. 25). The term cascades of changes refers to the fact that, as Bruce states, “Changes beget other changes” (p. 26). Finally, the term redesign of the innovation raises the important issue that it is not only the social practices that change when a technological innovation is introduced.
Rather, in a manner consistent with the “realization process” Bruce describes, the technology is also constantly in a state of change, acting as it does as a “full member” of the social system, and created by developers who may well be members of this social system as well. Bruce argues that “in practice it may be difficult to say exactly which type of change is occurring, and any real example is likely to involve a mixture of these types” (p. 20). This argument further supports the notion that the actual interaction of technological innovations with the established practices of a social system is a complex phenomenon that must be examined in greater detail than has previously been done. My data analysis will help operationalize Bruce’s model in a “real” social context and with a “real” technological innovation.

Revealing Hidden Institutions: The Breaching Experiment

The next challenge, having articulated the significance of studying the institutions of inquiry, the media that support them and that may facilitate change in them, and their influence on our knowledge-generating practices and beliefs, is to bring these issues to light so that they may be examined. As I have mentioned previously, according to neoinstitutionalism, one of the reasons for the staying power of institutions is the fact that they influence our behavior at a tacit, subconscious level. More accurately, they operate at a tacit, subconscious level as long as they are honored. This is a useful fact for those interested in bringing institutions to the surface for examination, because the literature suggests that one way to bring institutions to a more conscious level is to challenge or violate them, using what Garfinkel (1967) named a breaching experiment. As Rowan and Miskel (in press) explain, in a breaching experiment, the tacit understandings and expectations structuring a social situation are violated.
As this occurs, tacit understandings are sometimes more easily brought to the surface for study than would be the case in situations where these remain in taken-for-granted form (p. 40). Considering again our handshake example, what would happen if our handshaker extended her right hand, as is customary, but her partner in handshaking extended his left hand or, more oddly, his foot, instead of the right hand? She would be at a loss as to what to do next, because nowhere in her script for handshaking are there any provisions for dealing with feet. This is an example of a breaching experiment. Our handshaker would expect a right hand to be extended to her, and if something different happened, she would be startled by the awareness. She would likely say something to the effect of, “I just tried to shake that guy’s hand, and he waved his foot in my face! What is he thinking?” This is how a breaching experiment works: by confronting people with behaviors that are not part of their tacit, shorthand understanding of the world, behaviors that they were not expecting, we can gain better access to the kinds of behaviors they were expecting, which are the types of behaviors that structure that social system. The introduction of a novel cultural artifact into the practices of an established social system is likely to engender the breaching experiment effect. This is so because artifacts by their very design are suggestive of certain types of behaviors or uses (Bruce & Hogan, 1998). And as often as not, the behaviors or uses of which they are suggestive are at odds with some practices that are taken for granted within the social system. For example, automated teller machines (ATMs), which were introduced in the 1980s, enabled people to carry out bank transactions without having to interact with an actual, human bank teller.
Users could withdraw cash from their accounts, make deposits, or transfer funds from one account to another by inserting a credit-type card into the machine and entering a 4-digit access code. At first, however, many users balked at using ATMs, especially for making deposits, because they weren’t entirely sure where their deposits went once they entered the machine. While people interacting with a human bank teller could actually watch the teller cancel a check or count cash and place it in a drawer, people interacting with an ATM had only a receipt to tell them that their money was “in the bank.” Eventually, as more and more users were able to carry out their banking transactions on the ATM with few problems, this issue became less and less important. The fact that it was ever a problem at all, however, underscores the idea that the ATM introduced certain expectations and behaviors that were sufficiently different from the traditional method of banking that users noticed and became nervous. This is a clear example of the breaching experiment effect.

Annual Meeting Activities: Tiger as a Breaching Experiment

A complex series of social activities influences the process by which original research from AERA members is selected for presentation at the annual meeting. Briefly, a proposer submits a proposal according to guidelines enumerated in the call for proposals that is sent out every year several months prior to the submission deadline. Proposers submit their proposals to a specific division or special interest group (SIG) within AERA, each of which is headed by a division or SIG chair. Once they have received the proposals, the division or SIG chairs send the proposals out for blind review to at least three peer reviewers, who are usually AERA members themselves. Reviewers critique the proposals and make recommendations to the chairs as to whether or not they should be accepted for presentation at the annual meeting.
The chairs use the reviews to make final decisions about each proposal’s acceptance or rejection. And finally, the chairs notify proposers as to the final decisions on their proposals. In 1997, a Web- and electronic database-based document processing system, known as Tiger, was introduced into this complex series of social activities related to proposal submission, review, and selection or rejection. As computer technology increasingly finds its way into all aspects of academia, the developer of Tiger was curious about the potential of such a system to help AERA carry out its duties. Indeed, Tiger was originally introduced to help the organization streamline the processing of research proposals submitted by organization members who want to present their original work at the organization’s annual meeting; to use the terminology of Bruce’s taxonomy, Tiger was designed to foment consonant change. However, it soon became apparent that Tiger had greater implications for the organization’s institutions of inquiry than a simple increase in efficiency. The developers of Tiger consulted elected organization officials, as well as official organizational documentation, to determine the best way to structure the system to meet the needs of the three types of users (proposers, reviewers, and division chairs). Despite these attempts to anticipate the needs of the constituency and simplify the transition from a paper-based to a Web-based method, however, the developers found that some users became confused or upset about some aspects of the system, and subsequently about the entire process of submitting, reviewing, and/or coordinating proposals electronically. In other words, Tiger introduced change that was dissonant for some users.
During the summer and fall of 1997, when Tiger collected proposals and reviews as part of the first pilot test, proposers and reviewers sent numerous e-mail messages and evaluative comments that prompted the developers to make several mid-course changes to the system. The division chairs, although fewer in number, were called upon to use the system far more than either proposers or reviewers due to the nature of their responsibilities. Thus, they also had many pressing questions and suggestions for improving the system. Many of the comments from users went beyond simple suggestions for making Tiger easier to use, although the developers did receive those types of suggestions as well. The more interesting comments, however, came from users who seemed truly confused or upset by certain features of Tiger. These tended to be features that contradicted or eliminated the need for features of the paper-based proposal process, or features that introduced new considerations into the proposal management process. Many proposers and reviewers discussed Tiger in terms of paper-based proposal processing, asking questions about processes that were relevant in the paper-based system but that had no bearing on the Web-based system. For instance, some users asked whether they needed to include self-addressed, stamped envelopes and 3x5 cards containing their names and addresses when they submitted proposals on Tiger. There is no need for users to do this, but they understood Tiger in terms of the traditional, paper-based system, which did require these things. Based on evidence such as this, I argue that the introduction of Tiger into the existing knowledge-generating practices of the organization—the existing institutions of inquiry—has the potential to challenge the expectations of group members about the ways knowledge is and should be generated.
Thus, examining the introduction of this instantiation of computer technology into the practices of this organization represents a breaching experiment that provides an opportunity to surface these practices and perhaps develop a better understanding of them. In the next section, I outline the research questions I address in this dissertation. These questions frame my investigation of AERA. Answers to them help me paint a picture of the unique characteristics of AERA and understand the ways that unexpected practices and change agents interact with established practices in adaptive and maladaptive ways. The first two questions will be answered with empirical data from my study, and the last two questions, which are more speculative in nature, will be answered by extrapolation from these empirical data.

Research Questions

1. How does the introduction of Tiger into the practices of AERA reveal the institutions of inquiry related to submitting, reviewing, and coordinating proposals for the annual meeting? By examining the institutional practices that Tiger disrupts, I can develop a better sense of how they structure the activities of organization members under normal circumstances. Further, following Madigan et al. (1995), by examining these institutional practices, I can develop a sense of the more fundamental beliefs and tacit assumptions that influence the organization.

2. What are the institutions of inquiry within this organization? In part, I answer this question by consulting “official” documentation of the organization regarding protocols for submitting, reviewing, and processing proposals for the annual meeting, as well as interview data from division chairs, reviewers, and proposers, both those who used and those who did not use Tiger. I answer this question in part by the process of elimination.
This is so because it is likely that users do not even realize the extent to which they rely on the institutions of “doing proposals” to carry out this task until they cannot rely on them anymore (DiMaggio, 1988; Perrow, 1979). Thus, by isolating some of the specific features or assumptions related to Tiger that the users did not expect, I will develop a better sense of what they did expect but did not find in using Tiger.

3. How is this perception of the institutions of inquiry adaptive and maladaptive within the organization? The reasoning behind having all members submit and review proposals in the same way is sound. A standardized submission and review process facilitates the efficient, consistent, and fair evaluation and selection of proposals. However, the question arises as to what, if any, undesirable restrictions the use of the standard method imposes on organization members. Following on the tenets of the rhetoric of inquiry, for example, do the social practices that comprise the standard method privilege certain types of people, research methodologies, or ideas over others? If so, what, if anything, can and should be done to diversify the representation of people and ideas at the organization’s annual meeting? If not, what can we learn from this organization that might help other organizations address these types of issues?

4. What changes might we expect to see in these institutional perceptions in the face of a change agent such as Tiger? To the extent that the individuals within this organization carry out activities related to the construction of knowledge according to assumptions that are socially sanctioned within the organization, an understanding of individual perceptions of standard operating procedures in this organization could shed some light on the organization as a whole. Further, perhaps the once-standard method for coordinating annual meeting proposals will change significantly in the new context Tiger has introduced.
If this is the case, what will the institution of “submitting a proposal” look like in this context?

Summary

This dissertation examines the complex workings of an elaborate social organization at a pivotal moment in its history. Many attempts have been made to improve the activities associated with AERA, particularly the annual meeting, which is the flagship event of the organization. These efforts often fall short of their intended effects, however, leaving AERA and the annual meeting relatively stable and unchanged, despite the claim on the part of organization members that the annual meeting must be changed. In this chapter, I have attempted to demonstrate how AERA, which is the premier governing body of educational research in the world, might be difficult to change in fundamental ways. Neoinstitutionalism and literature on the rhetoric of inquiry help explain the inertia of AERA and the staying power of annual meeting practices by revealing that AERA is constituted by people who are influenced by certain tacit but very robust expectations, people who act according to the dictates of institutions of inquiry. These expectations, or institutions of inquiry, shape the types of activities and beliefs that characterize AERA, specifically within the annual meeting. Another important way to illuminate the nature of this inertia involves the notion that institutions, both within AERA and within other social systems, are carried out via physical artifacts. It is these artifacts that act as touchstones of meaning for human actors within a social system. In other words, a human actor perceives a social system and the institutions that guide it by filtering her activities and beliefs through its artifacts or technologies. In the previous sections, I have argued that a very significant artifact that scholars have at their disposal to make sense of the notion of scholarship is the medium of paper.
The close connection between the institutions of an organization—in this case the institutions of inquiry—and the physical artifacts that perpetuate them—in this case paper, and perhaps, in the near future, computer technology—suggests that an alteration in the types of artifacts that are used in the organization would tend to disrupt the social activity in the organization. When these institutions are disrupted, or breached (Garfinkel, 1967), people become confused and upset. Importantly, however, this confusion and upset helps to surface institutionalized norms and values by tapping into the unmet expectations of the actors in the social system. Subsequently, these institutions may be examined more easily than they would be if they were still tacit and taken-for-granted.

Chapter 3

Interpretation

In this chapter, I describe and justify the data sources, the contexts within which data were collected, and the protocols I followed to analyze the data to answer the four research questions described in the previous chapter. This study is inductive rather than deductive in nature, such that my understanding of the theoretical themes and issues of the organization has arisen out of my prior understanding of the empirical evidence. Thus, despite the fact that the theoretical perspectives are provided first in this dissertation, I suggest that these perspectives helped me to explain what I previously saw taking place within AERA, rather than providing a framework within which I could subsequently ask questions of the organization. In other words, the theories detailed in the literature review helped me develop a post hoc explanation for the activity I observed, activity that is typical of AERA members. Further, these theories helped me develop a post hoc understanding of the ways in which the introduction of Tiger surfaced and influenced these typical activities within AERA.

Data Sources and Justification for Their Use

I examined three types of data in this dissertation: 1.
representative texts of the organization, including “official” documentation such as the call for proposals, guidelines for submitting and reviewing proposals, relevant articles and boilerplate text from AERA-supported publications, and reports from committees within AERA.

2. information collected as a result of the normal, intended use of the proposal process, for example, proposals, reviews of proposals, e-mail archives of communications (including questions, suggestions, complaints, comments, mid-course changes, and requests) between users and developers, and survey responses from users of Tiger.

3. interview data collected from users, developers, and proponents/detractors of Tiger who indicated a willingness to be interviewed. These interviews were semi-structured in that they focused on the collection of more in-depth information from subjects about their experiences in supporting implementation of the system, use or non-use of the system, or development of the system. The exact wording of the questions varied according to each particular interview, but the same basic issues, related to the four research questions, were covered in each interview (refer to Appendix A for the interview protocol).

Justification of Data Source #1

It is a relatively straightforward matter to justify use of the representative texts of the organization (data source #1) as data for this dissertation. As I have been arguing throughout the previous chapter, it is the physical artifacts of a social system that act as touchstones of meaning for human actors within the social system. And while it is true that the artifacts that comprise “official” documentation of AERA are themselves invented by members of the organization, the fact that they are put forth as representative texts of AERA gives them authority that extends beyond that of the individual members who created them.
The scholarly journal American Educational Research Journal (AERJ), widely considered to be the flagship research journal of AERA, provides a clear example of this phenomenon. As with any scholarly journal, AERJ is edited by actual people, and manuscripts are submitted and reviewed by actual people, some of whom I may even know personally. It is highly likely that those decisions made by members of the editorial board during one year about whom and what to publish in AERJ would not map exactly, or even closely, onto those decisions made by members of a different editorial board. In other words, depending on those individuals who are editing and reviewing during any given year, we might see a very different AERJ in tone and focus from editorial board to editorial board. That being said, the fact remains that AERJ itself is an institution whose significance and longevity within AERA originate much farther back than the tenure of any one editorial board. Regardless of who is producing it, AERJ is one of the physical artifacts that has come fundamentally to represent AERA. When one reads AERJ, one taps into the decades of tradition and protocol that have come to distinguish AERA from other organizations; these decades of tradition and protocol are realized and perpetuated in the artifact that is AERJ. Thus, those artifacts that are actually created during the process of publishing AERJ—the actual bound issues, complete with representation of particular articles, authors, and topics that have been sanctioned by the editor—become taken for granted as part of the party line regarding what is meaningful and what is not within the organization. All of the documentation that is sanctioned by AERA, then, provides a common ground for understanding; it will have a similar meaning to anyone within the organization.
From official documentation that contributes to this common ground, then, I can begin to characterize the organization as a whole, at least in terms of its official stance. On the other hand, despite the fact that the official documentation of the organization serves as a touchstone of meaning for members, it is also true that there are individual differences in interpretation of these artifacts. Indeed, we can expect a different tone to emerge in AERJ depending on those serving on the editorial board, ostensibly because members of different boards interpret their duties or the mission of AERJ in different ways. With respect to this dissertation, I observed a similar range of individual differences in users’ interpretations of Tiger and its purpose and meaning to the organization. While some users unilaterally praised its efficiency and ease of use, for instance, others expressed concerns about what they perceived to be various technical or philosophical glitches.

Justification of Data Source #2

The participants in this study cover a range of occupations, interests, and activities, including university faculty, think-tank research associates, graduate students in education-related fields, and K-12 teachers with an interest in educational research. Many of these people are members of AERA, representing varying levels of participation in the organization (e.g., member, office holder, editor of one of the organization’s publications), demographic characteristics (e.g., gender, race, geographic location), and research agendas. For the purposes of this study, potential participants were grouped according to their use of Tiger: division/SIG chair, proposer, and reviewer. The e-mails and other communications that were collected as a result of the use of Tiger constitute data source #2. In these communications I observe how individual people struggled to make meaning of the new artifact that is Tiger.
Further, I can develop a sense of the range of meanings that users made of Tiger (Bruce, 1993); notably, this diversity of meanings among various users and groups of users becomes relevant in considering issues of authority, validity, and the official “party line” within AERA. The data that demonstrate this range of meaning-making activities were not generated in response to any prompts on my part; rather, these data were generated in response to the introduction of Tiger and the subsequent influence Tiger had on the AERA proposal submission and review process. They represent user reactions to Tiger that were motivated purely by users’ interactions with it; they were not influenced by my own research agenda.

Justification of Data Source #3

Sometimes it is difficult for a researcher to determine the “true” meaning of a comment without first having a sense of the proper context within which the comment was made. Since many of these e-mail communications and survey comments were generated in response to immediate user problems or questions, there is less evidence than might be useful of the thought processes and belief systems that gave rise to these problems or questions. In other words, while users generated e-mail communications and survey comments in the context of authentic use, they weren’t always specific as to the reasons they raised particular issues or concerns. Without access to the user’s thought processes, then, there is arguably a range of interpretations for any given piece of data. For this reason, I conducted follow-up interviews (data source #3) to explore in greater detail some of the more notable comments we received from people who used the system. I conducted these follow-up interviews with six proposers, six reviewers, and five division chairs in the hopes of narrowing the range of possible interpretations of the data that represent data source #2.
While I was not able to interview every survey respondent or user who e-mailed us with questions, I was able to press on some issues that may form the basis for further attempts to understand AERA, its membership, and its protocols. With these three data sources, I am able to see how well individuals’ perceptions of AERA, its purpose, and its protocols map onto the official party line. Further, I am able to check some of the e-mail and survey responses to ensure that my interpretations of them actually map onto those of the people who gave the responses. As Maykut & Morehouse (1994) state,

The combination of interviews and observations from the field, along with reviews of relevant documents increases the likelihood that the phenomenon of interest is being understood from various points of view and ways of knowing. Convergence of a major theme or pattern in the data from interviews, observations, and documents lends strong credibility to the findings (p. 146).

The interrelatedness of the three data types used here also begins to attest to the social and political complexity of AERA, which gives rise to an important question about the unit of analysis in this study. While I am examining data that come from individual users within the organization, I argue that my unit of analysis is actually AERA as a whole. This is so because my interests and research questions have to do with the nature of the entire organization and the ways the entire organization changes or does not change in interaction with Tiger. To the extent that my interests are in change at the organizational level, I have an N of one, albeit a very complex N of one. One of my challenges will be to explain the changes I see taking place within this N of one in a way that illuminates the experiences of different groups of members within it.
Data Collection: Contexts and Limitations

Contexts

Official documentation. The official documentation of AERA was relatively simple to gather; due to my involvement in setting up and maintaining the electronic submissions process, I had access to relevant documents from the AERA Standing Committee. The most notable of these was a version of the AERA Annual Meeting handbook for division/SIG chairs, which is a three-ring binder containing information that division/SIG chairs need to know in order to carry out their duties as chairs. Information contained in the handbook included, among other things, statistics on proposal submission and acceptance rates in recent years; boilerplate letters of notification to proposers and reviewers about receipt of their proposals or their assignments to review proposals, respectively; and instructions to chairs about various elements of proposal processing. I had access to other documentation, such as Educational Researcher and the call for proposals, because I am a member of AERA, and as such, I receive some of the AERA journals in the mail as part of my membership. I also consulted the AERA web site (http://aera.net).

E-mails. As I also mentioned previously, the e-mails we received from users were almost always in response to a particular problem or to find the answer to a specific question. Generally, a user would access the Web site or send for an e-mail template; attempt to submit a proposal or complete a review, depending on the type of user; and then hit a roadblock or a confusing point. The user would then send a question or comment to the e-mail hotlines we had set up to field such questions. In answering users’ questions, we were often able to have almost synchronous conversations with them, because we were constantly checking the messages to make sure we were helping meet the users’ needs sufficiently. The time frame for the collection of these data spanned from the beginning of July 1998 through November 1998.
Proposers asked the most questions during the time when they were submitting proposals, the deadline for which was 11:59 p.m., EDT, August 3, 1998. Reviewers asked many of their questions directly thereafter, as they were assigned proposals shortly after the deadline and had the month of August and part of September to complete their reviews. Chairs asked questions every step of the way, from the beginning of July all the way through November, as they familiarized themselves with Tiger and its functions, which ranged from reading received proposals, to assigning reviewers to evaluate proposals, to creating annual meeting session sheets to be turned in to AERA’s Central Office. The number of e-mails received from each type of user is difficult to determine, primarily because some users contacted us more than once. In addition, our responses to users’ e-mails sometimes generated follow-up questions. Further, some messages that raised problems or issues were forwarded to us by a program chair on behalf of a user in her/his division/SIG. I considered an “e-mail” to be a relevant chunk of data when the text described a problem or issue a user was having or posed a question to us. This criterion excluded e-mails that simply thanked us for our help, or that were automatically generated messages notifying users of upcoming deadlines or duties. Using this criterion, we received 286 e-mail messages from proposers, 113 e-mail messages from reviewers, and 102 e-mail messages from chairs.

Survey responses. Once a user had finished submitting a proposal or reviewing a group of proposals, that user was invited by means of a link on the finishing screen to fill out an on-line evaluation form. A separate, more elaborate and detailed form was created for chairs. Generally, proposers submitted responses on or about August 3, 1998, reviewers submitted responses on or about September 15, 1998, and chairs submitted responses in November 1998.
We received some 541 evaluation forms from proposers and some 235 evaluation forms from reviewers. From chairs, we received 27 out of 53 possible responses to the evaluation form. Table 1 provides information on the survey return rate and percentages for each user type.

Table 1
Survey Return Rates for Each User Type

User Type    # of Survey Responses    % of Total # of Users
proposer     541                      43
reviewer     235                      27
chair        27                       50

The evaluation form for proposers and reviewers asked multiple-choice questions that were designed to gauge users’ experiences using Tiger to submit or review proposals, especially compared to their experiences submitting or reviewing proposals in the traditional way. Similarly, chairs were also invited to fill out a survey form; their form was far more elaborate than that created for the proposers and reviewers, because their interaction with Tiger was far more involved than that of proposers and reviewers. The evaluation form for proposers and reviewers also provided a text box in which users could elaborate on their experiences, commenting on particular features of the system or unanticipated issues or problems. Users commented on a wide variety of features and issues in the text boxes, but eventually it became clear that there were several recurring themes among users. Based on these themes, as well as the indication of the user that he or she was amenable to this, I selected a subset of users—six proposers, six reviewers, and five division chairs—to interview in more depth.

Interviews. I conducted a total of 17 interviews, during the time period from the end of October 1998 until mid-December 1998. I conducted two of these interviews face-to-face with the interviewee; the other 15 I conducted over the phone. All interviews were tape recorded with either the written or the spoken permission of the interviewee. Interviews were anywhere from 30 minutes to 60 minutes in duration, with one interview lasting, with the permission of the interviewee, for roughly 90 minutes.
One of the interviews with a proposer was unintelligible upon revisitation. Thus, I have useable data from interviews with five proposers, five division/SIG chairs, and six reviewers. Three proposers also reviewed proposals, so they put on the appropriate “hat” when answering the interview questions.

Limitations

Self-selection. The main limitation of the data that were collected from participants, whether in e-mail, survey, or interview form, is that these participants were self-selected; they do not represent a random sample of AERA members. This is so because electronic submission/review has been voluntary, which means that we might expect certain types of people to have been interested in the electronic submission option. Indeed, this turned out to be the case, with just over 55% and 41% of proposers and 52% and 46% of reviewers identifying themselves as intermediate or expert computer users, respectively. This statistic alone strongly suggests that those who used Tiger are not representative of the whole of AERA. I nevertheless argue that we can learn something meaningful from the people who used Tiger despite the non-randomness of the sample. My rationale is as follows: This dissertation is a study of technological innovation and organizational change within AERA. As such, it is necessary for me to observe those areas where the technological innovation and organizational change—and subsequently the breaching experiment effects—are most likely to occur, which happens to be where those people are who are using Tiger.
Given the naturalistic tone of this study, I am in agreement with Lincoln and Guba (1985), who indicate in their description of naturalistic inquiry that the naturalist

is likely to eschew random or representative sampling in favor of purposive or theoretical sampling because he or she thereby increases the scope or range of data exposed (random or representative sampling is likely to suppress more deviant cases) as well as the likelihood that the full array of multiple realities will be uncovered; and because purposive sampling can be pursued in ways that will maximize the investigator’s ability to devise grounded theory that takes adequate account of local conditions, local mutual shapings, and local values (for possible transferability) (p. 40).

Thus, in the context of naturalistic inquiry, the non-randomness of the participant sample is not only not a problem, but is in fact preferable to random sampling.

Limitations of the survey. Another limitation of the data involved the multiple-choice segments of the proposer and reviewer evaluation forms. Users evaluated Tiger on Web forms by responding to multiple-choice questions via radio buttons. A number of respondents commented that the radio buttons in the survey had been pre-set to the middle response for each question, and they were concerned that this would bias any results. While I acknowledge that a better course of action would have been to leave all buttons unselected, I do not perceive that much, if any, bias resulted from having them pre-set. This is so because the pre-set radio button for each question was set on the middle response, where the choices were generally in the form of “more,” “same,” or “less” difficulty/trust/time in using electronic submission compared to paper-based submission.
Thus, anyone who did not respond to the multiple-choice segment of the evaluation form would not have biased the results either way, because their choices would have indicated that they perceived no difference between the two systems. And since an overwhelming majority of users reported that they experienced less difficulty, for example, it is clear that they made selections themselves. The fact that the survey presupposed that users had submitted or reviewed proposals in the traditional, paper-based manner before using Tiger posed problems of its own, however. A handful of people commented that they had never submitted or reviewed proposals before, so they were unable to respond to the questions as they were written. Those limitations aside, however, I argue that the responses received on the evaluation forms were generally accurate, primarily because users’ short-answer responses generally corresponded with their overall perceptions of the system as measured by the multiple-choice questions. Further, I was more interested in the potential of the short-answer responses to reveal interesting phenomena, because these enabled respondents to be clearer about the experiences they had had with Tiger that might have demonstrated the disruption of an institution of inquiry. Thus, I ended up focusing on them far more in my data analysis than on the multiple-choice survey responses themselves.

Data Analysis

For the most part, the data in this study take the form of “texts.” They are chunks of narrative that describe a problem or question, provide feedback on an experience with Tiger, or, in the case of the official documentation, delineate some set of rules or authoritative perspective that represents the organization as a whole.
I organized these texts in a manner that enables me to identify consistent themes or issues that are in evidence at multiple levels within the organization (with these levels being represented by the different data sources and the different respondents who supply data). I have used the previously described texts to create a picture of the social practices that govern the knowledge-generation activities of AERA and to search for indications of "cracks in the surface"—evidence of disruption of these social practices that appears to be correlated with the introduction of Tiger. I approached the task of characterizing AERA and identifying disruptions in the social practices that influence its activity using the constant comparative method of analyzing qualitative data (Glaser & Strauss, 1967; Lincoln & Guba, 1985). According to Maykut and Morehouse (1994),

The constant comparative method of analyzing qualitative data combines inductive category coding with a simultaneous comparison of all units of meaning obtained (Glaser & Strauss, 1967). As each new unit of meaning is selected for analysis, it is compared to all other units of meaning and subsequently grouped (categorized and coded) with similar units of meaning. If there are no similar units of meaning, a new category is formed. In this process there is room for continuous refinement; initial categories are changed, merged, or omitted; new categories are generated; and new relationships can be discovered (Goetz & LeCompte, 1981, p. 58) (p. 134).

In other words, the constant comparative method requires the researcher to classify various texts according to themes that suggest themselves as the researcher examines the data. Rather than making preconceived predictions about the story the data will tell, then, the researcher instead "discovers" the existing story as she immerses herself in the data.
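The category-assignment loop that Maykut and Morehouse describe can be sketched in code, purely as an illustration of the method's logic and not as any procedure actually used in this study. The keyword-overlap similarity test below is a toy stand-in for the researcher's tacit, "feels right" judgment, and the sample units are invented for the example:

```python
# Illustrative sketch of the constant comparative method's category-assignment
# loop (Glaser & Strauss, 1967): each new unit of meaning is compared with
# existing categories and grouped with similar units, or a new category forms.

def similar(unit: str, category_units: list[str]) -> bool:
    """Toy similarity test: shares any word with a unit already in the category.
    (A hypothetical proxy for the researcher's intuitive judgment.)"""
    words = set(unit.lower().split())
    return any(words & set(u.lower().split()) for u in category_units)

def constant_comparative(units: list[str]) -> list[list[str]]:
    """Assign each unit of meaning to the first similar category,
    or start a new category if no similar units exist."""
    categories: list[list[str]] = []
    for unit in units:
        for category in categories:
            if similar(unit, category):
                category.append(unit)
                break
        else:
            categories.append([unit])  # no similar units: new category
    return categories

units = [
    "formatting was lost in my proposal",
    "the tables were impossible to read",
    "my formatting got messed up",
]
cats = constant_comparative(units)
print(len(cats))  # prints 2: the two formatting complaints group together
```

Note that, unlike this fixed sketch, the actual method allows continuous refinement: categories are merged, split, or redefined as working definitions replace initial intuitions.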
The themes that emerge from data analysis help the researcher focus on those issues that are particularly salient for the subjects of the study, those issues that are most significant in helping participants make meaning in their social context. The official documentation tells part of the story that has been going on for years within AERA, and the data collected from AERA members help to round out that story by providing evidence of users' lived experiences of AERA, the proposal submission and review processes, and the annual meeting. Thus, my examination of the process of technological innovation and organizational change within AERA will be firmly grounded in the experiences and realities of organization members. Of course, despite the fact that the constant comparative method is touted as a post hoc means of developing a deep and multifaceted understanding of a social situation, it does not follow that I had no prior expectations about the data. Indeed, it was my growing awareness of certain interesting trends in user responses to Tiger that led me to believe that this was a viable topic for a dissertation study in the first place. Again, Lincoln and Guba (1985) have this to say about making meaning of data within the naturalistic inquiry paradigm:

However the categories may be derived, it is clear that as a first step Glaser and Strauss would have us assign "incidents" (units?) to them, initially on a "feels right" or "looks right" basis. The investigator should not fail to draw on his or her tacit knowledge in making these judgments; errors made as a result of using such knowledge are correctable on successive review, but incidents recognized tacitly, once eliminated, are virtually impossible to recapture (pp. 340-341).

In other words, my first steps in making sense of these data in terms of identifiable categories involve the identification of certain patterns or similarities among different chunks of data.
Notably, Lincoln and Guba (1985) sanction the researcher's reliance on her own intuition in the initial development of these categories, arguing that intuition is a valuable tool for meaning making. While this may sound "unscientific" at first, there are ways to check the nature of the categories that the researcher imposes on the data to make sure they are not arbitrary. In this case, following Lincoln and Guba (1985), I eventually established working definitions of the categories based on my initial intuitions. These category definitions came to serve as external, explicit checks on my initial impressions, which had been rather tacit at first. Using this method of intuitive and then increasingly explicit and rule-based category assignment, I discovered some 48 themes within the three data sources; these themes are in the form of key words or phrases. The themes I discovered, the rule or definition associated with each theme, and the source of information about the theme are discussed in Appendix B. Source of information about the theme was broken down into type of user of Tiger (proposer, reviewer, chair) and/or type of data source (official AERA documentation, survey data, e-mail data, interview transcripts). Not every theme was as relevant to this particular study as every other theme, as will become evident in the following chapter, in which I discuss the results of my data analysis. I plan to examine the ancillary themes in a later study. My next step in developing a meaningful picture of the data and, subsequently, of AERA, involved the identification of meta-themes, which assumed the form of rule statements. These meta-themes cut across the initial 48 themes such that more than one theme could be associated with each meta-theme. This step in analyzing the data was important because it raised my level of understanding from key word descriptions of activity within AERA to verifiable or falsifiable statements about the nature of this activity.
These meta-themes help paint a picture of AERA on the level of behavioral and belief trends of the membership. Indeed, the meta-themes I identified actually represent the institutions of inquiry that my research questions targeted. Thus, in keeping with the constant comparative method of data analysis, I was able to create, from the ground up, a picture of AERA that is firmly rooted in empirical data, but that also speaks to the broader issues that the organization faces.

Summary

In this chapter, I have outlined the contexts, limitations, and analysis procedures for my data. The three data sources are official documentation from AERA (including documents such as the call for proposals and the annual meeting handbook for chairs); information collected as a result of the normal use of Tiger, such as e-mail questions and survey responses; and follow-up interview data from a handful of proposers, reviewers, and chairs who agreed to a more in-depth discussion of their experiences with Tiger. Limitations of the data include the non-randomness of the sample and potential researcher bias. I have argued that the non-randomness issue is actually not a problem given that this is a naturalistic inquiry. I have used the constant comparative method of analyzing qualitative data for developing an understanding of the story the data tell. This method yielded 48 themes and several meta-themes that help me understand the nature of AERA on several levels. In the following chapter, I provide the results of the data analysis, focusing specifically on those themes and meta-themes that are relevant to addressing my research questions. I enumerate each research question and discuss the data sources that contributed to the response to that question.

Chapter 4

Institutions of Inquiry within AERA

In this chapter, I restate my research questions, answer these questions, and provide evidence to support my answers.
Further, I discuss my findings in terms of Bruce's (1993) taxonomy for understanding technological innovation and organizational change, using the tangible example of AERA to contextualize this taxonomy. With respect to my research questions, I argue that AERA is influenced by institutions of inquiry that shape the types of scholarly activity that can legitimately take place within the organization. Further, I argue that Tiger performs a breaching experiment function within AERA by revealing the institutions of inquiry for closer examination. In a later chapter, I discuss the implications of these institutions of inquiry and the breaching experiment for our understanding of AERA in particular, and of organizational change and technological innovation in general.

The research questions I considered were divided into two categories: those that I addressed with empirical data, and those for which I extrapolated comments based on the empirical data for the first two questions. The empirical questions were:

1. How does the introduction of Tiger into the practices of AERA reveal the institutions of inquiry?

2. What are the institutions of inquiry within this organization related to submitting, reviewing, and coordinating proposals for the annual meeting?

The more speculative questions were:

3. What types of member activity and beliefs does this perception of the institutions of inquiry afford and constrain? How is it adaptive and maladaptive within the organization?

4. What can our understanding of the institutions of inquiry in this organization tell us about the working assumptions of the organization more generally? What changes might we expect to see in these institutional perceptions in the face of a change agent such as Tiger?

Context for Interpretation of Findings

According to the survey responses, users' overall impressions of Tiger were actually quite favorable.
They felt the Web- and database-based system was an easier and faster process than the traditional paper and postal service process; they were pleased with the level of technical support and customer service we provided; and they indicated that they would use Tiger again and even recommend it to their colleagues. Tables 2, 3, and 4 provide a breakdown of proposers' and reviewers' responses to the general impression questions, as well as some general demographic information about them.

Table 2
Comparisons with Established Practices

                                               Authors         Reviewers
Question                          Response     n      %        n      %
Amount of effort compared with    less         473    87.3     152    62.8
established practices?            same          63    11.6      72    29.3
                                  more           6     1.1      19     7.9
More or less difficult compared   less         461    85.0     168    69.1
with established practices?       same          73    13.5      58    23.9
                                  more           8     1.5      17     7.0
Amount of time taken compared     less         480    88.6     152    62.8
with established practices?       same          57    10.5      68    28.1
                                  more           5     0.9      22     9.1

Table 3
Users' Overall Impressions

                                               Authors         Reviewers
Item                              Response     n      %        n      %
Recommend for other AERA          Yes          540    99.8     234    96.3
divisions?                        No             1     0.2       9     3.7
Recommend for other               Yes          539    99.4     234    96.3
conferences?                      No             3     0.6       9     3.7
What is your overall feeling?     Happy        507    93.5     227    93.4
                                  Neutral       33     6.1       6     2.5
                                  Unhappy        2     0.4      10     4.1

Table 4
Demographic Information on Users

                                               Authors         Reviewers
Item                              Response     n      %        n      %
Gender                            Male         260    48.1     107    45.5
                                  Female       281    51.9     128    54.5
Age                               20-29         73    13.5      18     7.7
                                  30-39        187    34.6      59    25.1
                                  40-49        206    38.1      99    42.1
                                  50-59         72    13.3      53    22.6
                                  60+            3     0.5       6     2.5

Developers of a system such as Tiger who were not also researchers might have been tempted to report the favorable findings from the survey responses and view the more negative findings as "noise," as anomalous data from a few dissatisfied customers that were not representative of the overall potential of the system to help streamline AERA's practices. From a qualitative perspective (Lincoln & Guba, 1985), however, it is these "noisy" data that can reveal some of the more significant mechanisms at work in a social system.
On the other hand, focusing on the data that seem anomalous can be a risky proposition, because it is likely more difficult to interpret the significance of the anomalous data convincingly than it is the composite data that have a fairly straightforward message. Further, there are fewer of these data. These data, which may allow access to the more subconscious, tacit aspects of our social lives in organizations, are difficult to come by because they hinge upon the uncovering of the tacit institutions I have been describing. When we do discover such data, then, they tend to be perceived as "cracks in the surface" of a seemingly well-oiled organizational machine. As such, they could be discarded as outliers, or the conclusions drawn from them may seem overblown, mountains made out of molehills. However, it is these data that are the most interesting to me, and it is precisely because of their elusiveness that they are so interesting; given the theoretical context through which I have understood these data, I contend that they provide glimpses of the powerful, deeply entrenched, and often invisible institutions that structure our activity in AERA and in other contexts. In the following sections, I discuss and interpret the findings that help me answer research questions 1 and 2; these findings are primarily composed of the "noisy" data that are challenging to interpret.

Research Questions 1 and 2

As I have been arguing, AERA is influenced by institutions of inquiry that give the organization and the annual meeting their peculiar characteristics. On the basis of my examination of the data sources in this dissertation, I have identified several of these institutions of inquiry.
In the following sections, I will describe the institutions of inquiry that have emerged from the data, focusing specifically on the types of evidence I used to determine their existence. I will also explain how Tiger served the breaching experiment function, bringing these institutions of inquiry to the surface so that they could be examined more closely.

Institution of Inquiry 1: Scholarship Must Appear Scholarly

The first institution of inquiry that emerged from the data sources and that has a strong influence on the knowledge-generating practices within AERA is the idea that scholarship must have a scholarly appearance. The phrase proposers and reviewers used is that scholarship must have the proper "formatting." By formatting, proposers and reviewers were generally referring to the overall look of the text of a proposal on the printed page; this included the appearance of the headings over each section of the proposal, the use of indentations and underlining or italics in the references section, and the consistency of line breaks and text fonts. Within educational research, the American Psychological Association (APA) style manual (1994) is the definitive document that dictates proper formatting. There is a famous saying that goes, "If it walks like a duck and quacks like a duck, it must be a duck." The transposition of this famous phrase into, "If it's going to be considered a duck, it must walk and quack like a duck," is very illustrative of the significance of formatting for AERA proposers and reviewers. According to the data, proposers and reviewers perceived a connection between the appearance of their proposals and the extent to which reviewers were likely to consider them scholarly, such that the more they adhered to the "proper" formatting protocol, the more seriously they would be taken. In other words, scholarship had to look the part if it was to be considered scholarly; it had to walk and quack like a duck to be considered a duck.
The significance of formatting within AERA was revealed by a breaching experiment caused by the introduction of Tiger into the proposal submission practices. Tiger could only accept ASCII, or plain text, as opposed to adorned text adhering to APA style. Thus, much of the formatting users were accustomed to using to structure their proposals was lost in the electronic transmission of proposals via Tiger, leaving users with documents that were single spaced, contained irregular line breaks (e.g., some lines wrapped in the middle rather than at the end), and did not retain differences in text font or size, as for section headings. According to the survey data, 98 proposers (18%) indicated in their short-answer responses that this loss of formatting was a problem. For instance, one proposer wrote,

I suppose I also wonder how the reviewers will treat this new format. Will they receive it in electronic form? How will that affect evaluation? especially since some formatting I usually do to highlight important points is lost in the HTML base. This is a leap of faith...

This proposer made an explicit connection between the appearance of her proposal on the page (or in this case, on the Web) and the extent to which she anticipated that reviewers would consider it to be a scholarly product. In other words, she worried that her "duck" was not "quacking" properly in the Web environment. Other proposers made similar connections. For instance, another proposer commented,

I miss the ability to format the document with bold, italics, and underlines. I realize that using ASCII, I can't make my proposal as "pretty" as I could on paper. While I recognize that substance is more important than style, I can't help but wonder if the format of my electronic proposal will affect its evaluation in any way.
While not every proposer who complained about the lack of formatting made a connection this explicit between appearance and quality, the fact that almost one fifth of the proposers who responded to the survey did comment in some way about formatting attests to its importance in the submission process. Further, the somewhat vehement reactions of some proposers to the loss of what appeared to us at first glance to be a relatively minor aid to the reader support the notion that formatting is indeed significant. For instance, one proposer wrote, "My only concern is that I am not sure if the formatting of the proposal stayed intact in the process of submitting electronically. I spent a considerable amount of time on the formatting of the paper and would be extremely disappointed if that was ruined in this process."

Categories of Formatting. Upon closer inspection, I determined that there were actually different categories of formatting. Eventually, I realized that what I was observing was the conflict between the old, paper-based sensibilities about scholarship and the constraints of a new medium. Since the presentation conventions that represent the old, paper-based sensibilities were not translating well into the new medium, people were becoming upset. I realized that, as Madigan, Johnson, and Linton (1995) argue, these presentation conventions had been used to convey meaning beyond their seemingly functional purposes. Indeed, I observed four categories of meaning related to the use of APA style and formatting in general. The four categories of meaning I observed are: 1) fundamental meaning; 2) functional meaning; 3) stylistic meaning; and 4) symbolic meaning. In the following sections, I describe both these categories of meaning and evidence from users about the ways Tiger disrupted them.
I juxtapose these four categories of meaning with the constraints of the Web system, revealing the conflict related to trying to use the rules and assumptions of one medium in a different medium whose constraints and affordances render the old rules and assumptions irrelevant, or at least less applicable. Further, I reveal the deeper meaning of APA style within scholarship, helping to explain why APA style seems to hold so much sway over the process of developing and communicating scholarship. Table 5 shows the breakdown of the 98 proposers who reported problems with the different categories of formatting. Table 6 shows the breakdown of the 27 reviewers who reported problems with the different categories of formatting. These categories of meaning that I have defined are accompanied or characterized by a disruption on Tiger. Indeed, it is through the disruption of these categories of meaning, through a breaching experiment, that I was able to identify them in the first place. I will first describe each category of meaning and then explain, by means of empirical data, the ways in which Tiger disrupted each category of meaning. As will become evident, the categories of meaning are not mutually exclusive, such that one formatting issue could be an example of more than one category of meaning. The important point to remember is that what actually distinguishes the categories of meaning from one another is the perception of the user as to the implications of the disruption of this category of meaning.
Table 5
Breakdown of Proposers Claiming Problems with Formatting

                               Fundamental   Functional   Stylistic   Symbolic
                               Meaning       Meaning      Meaning     Meaning
Number reporting it
as a problem                   29            23           58          21

Example complaints, one per category of meaning:

Fundamental: "Although the instructions stated that one could paste from one's word processor program, there was no indication that some characters might be changed. For example, the 'curly' quotation marks in my original text were changed to other characters, so that the word 'decolonise' appears in the submitted text as 'édecolonise.'"

Functional: "It was difficult to work with equations in the electronic format enabled here. In fact, my equations on the entry screen looked correct, but they were incorrect on the ultimate file which was processed. This was due to the automatic elimination of spaces which were used to make the equations."

Stylistic: "Although after the fact this is pretty obvious, writers of electronic submissions should be told not to use word-processing formatting commands (e.g., bold, underlined, etc.) and that the submission will be converted to ASCII format. (I had to re-format my proposal after seeing its appearance in ASCII on the screen.)"

Symbolic: "I still foresee one problem - those proposals submitted electronically are text files without formatting - I am not convinced that readers aren't influenced by a 'good looking' presentation - we don't always see through the words to the meaning. I think the year we all got laser printers for the first time - every grant submission was successful - they just looked so good!"

CATEGORY 1: FUNDAMENTAL MEANING

The first category of meaning is fundamental meaning. By fundamental meaning, I refer to the fact that it is necessary to be able to understand the content that is actually provided in a document. To use a paper-based example, if I were to attempt to read a flyer that had been smudged so that the ink ran together and obscured the actual words on the page, I would have lost the ability to access the fundamental meaning of the document.
Another disruption of fundamental meaning would occur if I found myself unable to read the handwriting in a letter written by a friend. There would presumably be meaning for me to find, but I would simply not be able to access it. Fundamental meaning, then, is disrupted when the reader perceives that content is rendered somehow less accessible or even completely inaccessible. Users of Tiger experienced a disruption of fundamental meaning due to the translation of word processing text into ASCII text. As previously mentioned, the system often introduced or deleted text characters, which made it difficult to process the content. For instance, one reviewer noted that, "In the text which I reviewed...all of the apostrophes came out on my screen as the letter i. I found this to be distracting." Another reviewer commented that, "The difficulty I had was that the formatting was irregular and in more than one case the document was heavily interspersed with extraneous characters that made reading difficult. The formatting was lost in several of the proposals." Of the four categories of meaning, fundamental meaning involves the most basic perception of communications: If I am unable to understand what you are trying to say, not because I have misunderstood, but because I have been unable to process your comments at all, we have no basis for further communications. In much the same way that a person who speaks no Russian would simply not be able to compute any information provided in a novel written in Russian, if I cannot decipher a document's contents, I cannot create a context for having a conversation or an understanding--or even a misunderstanding in the familiar sense--with the author. I cannot even arrive at the point where I could quibble with the author's argument or rhetorical style; if I cannot convey and comprehend a document's fundamental meaning, there is absolutely no basis for communication or subsequent interaction of any kind.
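The character substitutions reviewers reported (apostrophes rendered as the letter i, "curly" quotation marks turned into stray characters) are the familiar consequence of forcing text with richer character sets into plain ASCII. The following sketch, in modern Python rather than anything Tiger actually ran, illustrates the mechanism; the sample sentence is invented:

```python
# Illustration (not Tiger's actual conversion code) of what happens when
# word-processor text with "smart" typography is forced into plain ASCII:
# non-ASCII characters must be rejected or replaced, corrupting the text.

original = "The \u201cproper\u201d format\u2014as APA requires\u2014isn\u2019t preserved."

# Strict ASCII encoding rejects the text outright...
try:
    original.encode("ascii")
except UnicodeEncodeError as exc:
    print("strict encoding fails:", exc.reason)

# ...so converters substitute placeholder characters instead, producing
# the kind of "extraneous characters" Tiger's reviewers described.
mangled = original.encode("ascii", errors="replace").decode("ascii")
print(mangled)  # The ?proper? format?as APA requires?isn?t preserved.
```

In Tiger's case the substitutions apparently varied with the user's word processor and fonts, which is why different users saw different corrupted characters rather than a single consistent placeholder.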
Happily, due to its basic and straightforward nature, disruption of fundamental meaning is also the most easily managed of the four categories. As I have suggested, advances in computer technology are already rendering the fundamental meaning problem obsolete, because it is becoming increasingly possible to guard against the introduction of extraneous text characters into documents that appear in electronic form. However, as I will argue in the following sections, scholars still face many challenges associated with the juxtaposition of a novel medium—networked computers—with traditionally paper-based sensibilities.

CATEGORY 2: FUNCTIONAL MEANING

The second category of meaning is functional meaning. By functional meaning, I refer to the fact that there are different ways within the same medium to represent data or information, and that the different ways highlight different aspects of the data set. For example, a narrative description of a data set would provide a much different understanding of the data set than would the use of a table to describe it. The choices people make about how to represent their data, then, are a function of what they perceive to be the noteworthy and relevant points in a given context. The disruption of functional meaning within Tiger was largely characterized by the inability of users to use statistical symbols to communicate their work or to represent data in tabular form. For example, one reviewer commented, "Only problem I found was that statistical symbols were not displayed as intended (a problem when using different word processing programs and fonts). This could be a major problem in reviewing proposals which include equations." Similarly, another reviewer noted, "I do wonder how proposals with a lot of statistical symbols were dealt with." With respect to tables, a proposer commented,

Something needs to be done to retain the spacing for tabular arrays of text.
Even when I put spaces rather than tabs between the columns, the system deleted them and as a result, the table is mush. Not a pleasing experience and probably enough to drive some people away from using the system. I didn't even "think" about including a graph or figure.

A reviewer had similar questions about the use of tables:

One of the main complaints I have is with regard to the tables/charts that might be included in the proposals. Because the formatting may be different, the tables/charts were extremely difficult, if not impossible to read. I had to skip over them and hope the author(s) discussed them further in their results section. As a result, I may have lost something significant. To this end, I believe that one of two things needs to happen: (1) people who submit electronic proposals can't include tables/charts or (2) a better way to ensure the correct formatting of tables/charts needs to be implemented.

In short, these users' comments about functional meaning and its disruption within Tiger highlight the fact that there are different ways to represent the same body of information. This issue is more complex than the issue of fundamental meaning for two reasons. First of all, the apparently simple question of whether or not it is possible to include a table in a proposal actually indexes the deeper-seated question of what constitutes "legitimate" data and methods. The second reason is closely linked with the first. Simply fixing Tiger so that it can accept tables won't make the first issue disappear. The medium change, as I have argued previously, facilitates a shift in the message that is conveyed; thus, we must look more closely at the issue of functional meaning to help us think about what features of knowledge and information will be, and already are, salient in a networked computer context, and why. What new values will be gained, and what current values will be lost or discarded in the shift?
CATEGORY 3: STYLISTIC MEANING

The third category of meaning is stylistic meaning. By stylistic meaning, I refer to the fact that what a document looks like does appear to provide some indication of its intent or its characteristics. For instance, readers can probably gain a sense of what a magazine or journal will contain just by glancing at its cover. Educational Researcher has a monochromatic cover interspersed with plain white or black text, while People Weekly uses a variety of different fonts and colors, splashy, full-color photographs, and more vernacular language. Even if a reader was not familiar with either of these publications, she could probably tell something about them without even opening them; perhaps Educational Researcher would be perceived to be a more academic, scholarly forum while People Weekly would be perceived to be a more social, human-interest forum. Thus, appearance, or style of presentation, becomes intimately associated with meaning. Again, if it looks like a duck and quacks like a duck, it is probably a duck, or in the case of scholarship, if it looks like a scholarly work, it is probably scholarly in content. This phenomenon, of appearance being equated with meaning and, by extension, validity, definitely manifested itself as users pilot tested Tiger. Indeed, users' repeated and vehement complaints about the loss of stylistic meaning were the first indication I had had as to the deeper implications of the introduction of a new medium into the pre-existing social practice of submitting or reviewing an annual meeting proposal. On paper, as Burbules & Bruce (1995) write,

the care and precision of proofreading, revision, editing, designing, and typesetting manuscripts to create an authoritative (and aesthetically appealing) version of an author or authors' document has traditionally been linked with the finality of creating a printed, bound version that will be archived as such for perpetuity.
Both the producer of the text and its editor and publisher have a common interest in seeing it be as complete, persuasive, and carefully written as possible, since there is a sense in which, once published, there is no taking it back (p. 15).

There is a perception of finality associated with the publication of scholarly work on paper, such that once it has been created, it stands as is forever. This has potentially far-reaching implications. For instance, if I have published a manuscript with one or more typographical or APA style errors, this means that any researcher, many years hence, who locates my manuscript will not only associate me with the ideas put forth in the manuscript, but also with the errors left behind. Any mistakes I make in a printed manuscript will forever brand me as careless and perhaps, by extension, less intellectually rigorous than other scholars. Perhaps these errors will undermine the reader's perception that I know what I am talking about, that my argument can be trusted; if I cannot be trusted to make the document adhere to the appropriate guidelines, what does that say about my deeper scholarly abilities?

Table 6
Breakdown of Reviewers Claiming Problems with Formatting

                               Fundamental   Functional   Stylistic   Symbolic
                               Meaning       Meaning      Meaning     Meaning
Number reporting it
as a problem                   10            12           7           5

Example complaints, one per category of meaning:

Fundamental: "The difficulty I had was that the formatting was irregular and in more than one case the document was heavily interspersed with extraneous characters that made reading difficult. The formatting was lost in several of the proposals."

Functional: "Everything was fine except for the awkward presentation of TABLES - this was a major problem in formatting. Please work on it."

Stylistic: "The *only* drawback was the format in which the text of the proposal appeared."

Symbolic: "I did find that two of the five proposals were not formatted properly. This may indirectly affect the evaluation. I did try to avoid this bias but it does make it more difficult to review and find the information."
While this may seem to be a somewhat extreme train of thought, it appears to be consistent with the reported experiences of a significant minority of proposers and reviewers alike. For instance, one proposer commented, "I was very distressed to see that my careful formatting had gotten messed up when I got the confirmation from the server. I felt it might bias my submission." Similarly, another wrote, I did not like the way the formatting of my text was changed from the way it was submitted. Specifically, I put 2 returns after each paragraph of the proposal to seperate (sic) paragraphs for the reader. This formatting was lost and the text ran together in the confirmation copy I recieved (sic). This is a MAJOR problem in my view, making it much more difficult for a reviewer. The comments of these two proposers are representative of the tendency of proposers to equate, or at least closely associate, the appearance of their documents with the quality and/or legitimacy of their content. These users voiced concerns that reviewers would have a difficult time evaluating their proposals or would harbor biases against them because of their appearance.

CATEGORY 4: SYMBOLIC MEANING

Along these lines, the fourth category of meaning is symbolic meaning. This category of meaning is closely linked to the other three, but it deserves its own categorization because some people were better able to articulate this connection than others. Symbolic meaning has to do with the notion of the significance associated with formatting that went beyond the ability actually to read the proposal, represent data, or make the proposal clearer and more coherent. Specifically, symbolic meaning relates to the fact that the formatting conventions that support these first three categories of meaning have a significance of their own. They represent the proposer’s ability to demonstrate that she is an insider, that she knows how to make a proposal look the part of scholarship.
The meaning is symbolic because the formatting itself becomes a visible proxy for the idea of high-quality scholarship, the assumption being that those people who know how to format their documents must also know how to conduct rigorous research activities. Whether there actually is an empirical correlation between a proposer’s proper use of formatting and the relative rigor of her scholarship is irrelevant, because it is the community of scholars that assesses scholarship. Thus, it is the community of scholars that must be convinced that a proposal can be considered scholarly. For better or worse, the data suggest that proposers’ fears about the implications of the lack of formatting for their subsequent reviews were not completely unfounded. In a pattern consistent with the proposers’ concerns, 28 reviewers (12%) mentioned in their free responses on the survey form that the lack of formatting was a problem for them as they read proposals. One reviewer commented that The difficulty I had was that the formatting was irregular and in more than one case the document was heavily interspersed with extraneous characters that made reading difficult. The formatting was lost in several of the proposals. In addition, another reviewer commented that, "I did find that two of the five proposals were not formatted properly. This may indirectly affect the evaluation. I did try to avoid this bias but it does make it more difficult to review and find the information." Thus, reviewers appeared to perceive a connection between proposal appearance and proposal quality as well. The comment from the second reviewer highlights a tricky and important issue related to the breaching experiment effect in general and to formatting in particular: the fact that the institutions that influence our behavior are tacit, and it is often difficult to recognize and acknowledge their influence.
That this reviewer was able and willing to acknowledge that she might be subconsciously influenced by the lack of formatting is obviously supportive of my hypotheses about the institutions of inquiry within AERA; however, the fact that many reviewers did not explicitly indicate the potential for bias is not necessarily a detriment. The web of assumptions that surrounds the use of APA style and other formatting conventions is so much a part of our activity as scholars that it stands to reason that it would be difficult for us to articulate our concerns about it; it is one of many things we take for granted about scholarship. That being said, despite the fact that on the surface we may think of APA style simply as the standard, our tacit beliefs about APA style ascribe far more profound meaning to it, and this supposition is borne out by the breaching experiment. I explain this stance further in my response to research question #3. The existence of this institution of inquiry, as well as its influence over the scholarly activities of proposers and reviewers within AERA, was surfaced as a result of the introduction of Tiger. The introduction of Tiger served the breaching experiment function with respect to this institution of inquiry, surfacing the institution for closer inspection by systematically violating it. Specifically, Tiger violated this institution of inquiry by accepting only plain, or ASCII, text. As its name implies, plain text looks plain, so that, per the instructions for submission on Tiger, when a proposer copied and pasted her proposal into the Web form, much formatting of the sort described above was lost, leaving in its place the intact but unadorned text of the proposal. Often, this constraint of Tiger completely stripped proposals of their APA style. Thus, the proposals no longer looked the part of scholarship; they no longer walked and quacked like ducks.
In contemporary educational psychology research, the APA style manual is the definitive set of instructions for scholars on how to "walk and quack like a duck." The manual enumerates rules for the written presentation of scholarship that apply to a wide range of issues, from how to organize a manuscript with the proper headings, to how to reduce gender and/or race bias in the reporting of research, to how to generate a correctly formatted bibliography. The APA style manual (1994) has evolved over time from a 1929 seven-page article comprising flexible guidelines and suggestions to its current instantiation as a roughly 350-page, more explicitly prescriptive volume. The most current edition of the manual includes sections explaining the proper use in a publication of many stylistic conventions, including punctuation (e.g., commas, periods, dashes, quotation marks, and parentheses); five different levels of manuscript headings; and tables and figures, to name a few. The placement of punctuation marks depends on other elements of the manuscript. For instance, if only the end of a sentence is in parentheses, the period comes just after the right parenthesis; on the other hand, if a complete sentence is in parentheses, the period comes just before the right parenthesis. The manuscript headings also depend on other elements of the manuscript. Not all headings are necessarily used in every manuscript, but those that are used require, in various cases, centering on the page, underlining, and/or all capital letters. Tables and figures are carefully explained in the style manual as well, and they must be formatted according to specific sets of APA rules. Importantly, it is a given within AERA that APA style is to be used for all manuscripts submitted for consideration for the annual meeting and for the official publications of AERA.
The General Information for Contributors to AERA Journals (http://aera.net/pubs/pubinfo.html) states that, "The preferred style guide for most AERA journals is the Publication Manual of the American Psychological Association, 4th ed., 1994." This information about contributing to AERA journals provides a lengthy and explicit description of how a manuscript should look. For instance, the second paragraph of the general information states, Manuscripts should be typed on 8 1/2 x 11-inch white paper, upper and lower case, double spaced in entirety, with 1-inch margins on all sides. The type size should be at least 10 pitch (CPI) or 12 point. Subheads should be at reasonable intervals to break the monotony of lengthy text. Words to be set in italics (contrary to the rules of the style manual) should be set in italics, not underlined; sentence structure should be used to create emphasis. Abbreviations and acronyms should be spelled out at first mention unless they are found as entries in their abbreviated form in Webster's Tenth Collegiate Dictionary (e.g., IQ needs no explanation). Pages should be numbered consecutively, beginning with the page after the title page. Mathematical symbols and Greek letters should be clearly marked to indicate italics, boldface, superscript, and subscript. As these excerpts indicate, appearances are the alpha and omega of printed or written scholarship. These author or proposer guidelines for manuscript submission provide the first indication to the author of what a journal or conference considers acceptable work; and the elements of formatting, including spell checking and ensuring the proper appearance of the references, tend to be the last points to which the author or proposer attends before sending off a manuscript or proposal for review.
This is not to say that the content of written scholarship is irrelevant, but rather to assert that written scholarship that lacks the "proper" appearances may start out at a disadvantage compared to scholarship that looks the part. In keeping with this assessment of the importance of formatting, my data suggest that AERA annual meeting proposers and reviewers clearly perceived formatting to be a significant issue. The issue of formatting has significance on both the organizational and the individual level; the official documentation attests to the organization’s preoccupation with formatting, and the fact that individual proposers and reviewers raised the issue indicates that this organizational preoccupation is shared by AERA members. The breaching experiment effect of Tiger also makes clear another important point about this institution of inquiry: namely, that it has developed around the medium of paper. All of the formatting issues that proposers and reviewers perceived as problems can be traced directly to the change in medium from paper to networked computer. Some of the most common complaints related to loss of formatting included the scrambling of data in table form; extraneous text characters appearing in the body of proposals; and the disappearance of formatting done to set off sections or notable points in the proposal (e.g., boldface or italics, section headings, different font sizes). The use of these conventions makes sense on the printed page, because these conventions have developed around the constraints and affordances of the printed page. The introduction of the networked computer, a medium that supports Tiger and that differs from paper, into these practices disrupted users’ ability to rely on the outward appearances of a proposal to indicate quality or adherence to the proper protocols for conducting scholarship. In the context of Tiger, APA style cannot be taken for granted.
The APA manual itself indicates that its basic purpose has always been "to aid authors in the preparation of manuscripts" (p. xxiii) because "[w]ithout APA style conventions, the time and effort required to review and edit manuscripts would prohibit timely and cost-effective publication and would make clear communication harder to achieve" (p. xxiv). Thus, the overt purpose of the style manual was to help standardize the written or printed communication of scholarly activity. This makes sense, given that scholarship has traditionally been conducted on paper; thus, it stands to reason that the conventions standardizing scholarly communication would make use of the constraints and affordances of the medium used. The introduction of Tiger disrupted the use of these conventions. The question arises, however, as to why users would have such a reaction to the loss of formatting if it were merely an aid to the reader, as it appears to be at first glance. In response to this question, the data suggest that the influence of APA style conventions extends beyond the mere standardization of scholarly communication. The stylistic conventions for work on paper have taken on a life and meaning of their own, such that their presence in a piece of work is perceived by the scholarly community to be synonymous with quality and rigor. Whether there is truly an empirical connection between appearance and quality is almost irrelevant, because as Eisner (1997) says, the extent to which a research product is considered valuable "is determined by the judgment of a critical community" (p. 8). Thus, those who would be taken seriously within the community must play by its rules. The institution of inquiry that scholarship must look scholarly is intimately entwined with our beliefs and assumptions about the nature of scholarship and the value of scholarly activity, and these are intimately entwined with our experience of scholarship as it appears on paper.
At this point, it is useful to use Bruce’s taxonomy of technological change as a lens through which to examine these phenomena. With respect to this institution of inquiry, it is clear that at first glance, Tiger fomented consonant change, because it expedited the process of submitting proposals. On the other hand, the breaching experiment effect calls to mind the notion of dissonant change, to the extent that there was a price to pay for this expediency. In other words, many users were able to get their work done more quickly, which was a change consonant with the values of the organization, but in so doing, they were stripped of one of the tools they normally have at their disposal—formatting—to demonstrate their scholarly know-how, which was a change at odds with the values of the organization. Further, for some people, resistance to change was evident, because they refused to use the system at all; while I have little data on those who did not use the system, the fact that some 60% of the proposals were submitted via regular postal services attests to some members’ resistance to change.

Institution of Inquiry #2: There Are Levels of Scholarship

The second institution of inquiry that became clear upon close inspection of the data is that there are levels of scholarship, such that certain formats at the annual meeting are considered the domain of more scholarly or higher quality work than other formats. In other words, some types of scholarship are more legitimate than others. Specifically, as part of the process of submitting a proposal, proposers are asked to select their top three choices of session format, which refers to the way the research will be presented. For instance, proposers who choose a paper presentation format know that they will be part of a group of four or five researchers, each of whom has 10 minutes to give a brief, formal presentation.
After the presentations, a discussant will moderate interaction among the group of researchers and the audience; this interaction is facilitated by the fact that the research papers have been clustered together because they deal with similar issues. As another example, proposers who choose a poster session know that this type of session requires them to prepare a visual representation of their work, which may consist of graphs, diagrams, or bulleted text, to set up in a large room filled with similar representations by other researchers. They also know that the atmosphere during a poster session is relaxed and facilitates informal discussion of the research over a 90-minute period with various people who spend the session walking around and "window shopping" at the different posters. The breaching experiment came about because Tiger was not configured to provide reviewers with a piece of information—session format—that they were accustomed to using in the traditional process to orient themselves to the task of reviewing. And their use of this particular piece of information in the review process is directly linked to the notion of scarcity, as I will argue in subsequent sections. Officially, all proposals, regardless of format requested, are supposed to be evaluated according to the same criteria, as one division chair stated: "I know for our division we specify that we’re using the same criteria and standards for reviewing papers and round tables." However, according to the reviewers I interviewed, there are different criteria for reviewing proposals depending on the format the proposer requests. And this issue became clear when Tiger set in motion a breaching experiment whereby it did not provide reviewers with access to the relevant information about session format that they believed would have helped them determine how best to review a particular proposal.
The Significance of Session Formats

According to the annual meeting call for proposals in Educational Researcher (1998), there are 18 different session formats, such as town meetings, interactive symposia, and paper sessions. The call for proposals indicates that the purpose of these different formats is to enable researchers to "consider...alternatives to the standard paper and symposium formats" (Saxe, 1998, p. 31). Further, the call for proposals goes on to specify that, "No attempt has been made to provide an exhaustive listing of the varieties of creativity embodied in annual meeting presentations, still less to indicate how one particular format might be better than others" (Saxe, 1998, p. 31). At first glance, then, we can surmise that the purpose of the different session formats is to provide some variety among research presentations so that attendees can have different types of experiences interacting with their peers around issues of interest. And indeed, this is one of the purposes of the variety of session formats. However, this is not the only purpose, as analysis of the interview data indicates. I had interview data from 16 people (4 proposers, 3 reviewers, 5 chairs, and 3 users who both submitted and reviewed proposals) about their experiences with Tiger. One of the questions I asked them had to do with whether proposers cared which type of session format they were given; how important reviewers felt it was to know the requested session format in order to be able to review proposals; and how chairs dealt with different session formats. Twelve users indicated that it was an important issue.
For instance, one chair commented that "it would be very important to know what the proposer is, what the intention is in terms of format." One person who served as both a proposer and a reviewer commented that "I think it does impinge somewhat in that speaking space is pretty limited, and so from that point of view, as a reviewer, I wanna look carefully at whether something oughta be, take up a whole roomful of people’s time in terms of presentation." While these findings obviously do not enable me to make sweeping generalizations about the significance of session format to AERA reviewers, they do hint at some of the "cracks in the surface" of the organizational procedures of AERA. In other words, these findings do enable me to tentatively identify some of the institutions that appear to influence scholarly behavior within AERA. Subsequently, I was able to use other data sources and answers to other interview questions to follow up on these cracks in the surface. I eventually developed a picture of AERA and its institutions that is based on a wide variety of evidence. In the course of my in-depth interviews with participants, for instance, I noted an interesting pattern that appeared to explain why knowing the requested session format was so important for reviewers. Namely, some participants perceived that different session formats, in addition to being opportunities for different types of interactions around scholarly ideas, were also arranged in a hierarchical order with respect to prestige. A comment from one division chair I interviewed is representative of the sentiments of many of the interviewees: this division chair commented that "there is kind of this pecking order that in some way a paper is better than a poster is better than a roundtable"; indeed, this division chair stated that the mandate to consider formats in this order came directly from the program chair of the division.
While some interviewees suggested that a roundtable is actually better than a poster, there was an unmistakable trend to perceive that a paper or symposium was in some way the crème de la crème of presentation formats, that to request a paper or symposium format was, in the words of one proposer, to "go for the gusto." The implication was that those proposals that were accepted as papers or symposia showcased research that was of better or more polished quality than those proposals that were accepted as posters or roundtables. Of the 16 people I interviewed, the same 12 suggested that in addition to the explicitly stated purpose of the different session formats to provide variety in the annual meeting program, another important purpose of the different session formats was to distinguish between research of different levels of quality. For example, one interviewee, who had submitted a proposal and also reviewed proposals, commented that I know that for myself, I expect a much higher level of scholarship in a paper than I do in a poster presentation...Well, I guess I, I kind of assume that the reason that there are all these different levels of presentation and different formats, that it’s partly because of the diversity of research that’s done just simply what is done, what the topic areas that are covered, and some things are, lend themselves more to a poster, visually and so forth, that they need that visual backup. But I also feel that implied in that whole scheme of things with proposals is that there are levels as well. Similarly, one of the reviewers I interviewed noted that I rate paper presentations as the highest form of acceptance at AERA. In order to get a paper presentation, the submission needs to be very strong and needs to hold a lot of merit, needs to be well thought out, in order to hold a round table, the criteria are a little less stringent, and then I would go down from the round table to an actual poster presentation.
And when people submit for poster presentations, for example, I don’t score them as critically, I guess as I would if they had submitted for a paper presentation. Finally, a division chair commented that We had, one of our most highly rated, in fact, our single most highly rated submission was submitted poster only, and I considered contacting her and saying, listen, you’ve gotten fabulous reviews on this, would you consider doing it as a paper, but I assumed that she had already considered that herself, so there was no point in really bothering her about it. Each of these comments from users of Tiger suggests that on some level, there is an implicit belief among many AERA members that there is a hierarchy of session formats, and that the paper format tops this hierarchy. I also have numerical data to buttress these claims about the preference of proposers for paper and symposium presentations over other types of formats. Specifically, of the 1028 proposals that were designated "individual" submissions (meaning that one paper was submitted), 703, or 68 percent, listed "paper presentation" as the first choice of session format. Further, of the 212 proposals that were designated multiple submissions (meaning that multiple papers were submitted to be part of the same session), 103, or just over 48 percent, listed "symposium" as the first choice of submission format. As further evidence for this trend, another Standing Committee member referred to the roundtable format as the "booby prize," suggesting that those proposals that did not pass muster as paper sessions were relegated to the status of roundtables. In addition, an informal canvassing of five or six tenured faculty at my university revealed that had their proposals been accepted as roundtables rather than as papers or symposia, these faculty would not have bothered to go to the annual meeting at all; they claimed that it would not have been worth their while to attend.
Indeed, in past years several of these tenured faculty had actually had proposals accepted as roundtables, and had subsequently decided not to attend the annual meeting to present their work in this format.

SCARCITY AS AN EXPLANATION FOR THE HIERARCHY OF SESSION FORMATS

These data suggest possible ways in which this perception is perpetuated within the AERA membership. Namely, as new scholars are enculturated into the field of educational research, they are bombarded with subtle—and in the above cases, not so subtle—cues from their more experienced colleagues as to the hierarchy of session formats. Indeed, as one proposer/reviewer commented, Well AERA definitely promotes it by allowing only so many paper presentations and allowing two to three times as many poster presentations or two to three times as many round tables. And so it’s definitely promoted within the organization, but I believe it’s also promoted within the profession as well, in that there’s a certain, I don’t know, I wouldn’t say stigma, but there’s a certain sense of accomplishment with a paper presentation that doesn’t necessarily come with a poster presentation... And also I believe that the hierarchy we were discussing is also promoted within our institutions in that [when] we go up for tenure and promotion...they highly value paper presentations over round tables over posters. And so there is definitely a hierarchy, at least at this institution, that dictates that you receive more points...toward your promotion or tenure for various kinds of presentations. A division chair also expressed concern that alternative session formats such as performances were often considered "fluff" within the membership and the educational research community. Thus, scholars entering the field are tacitly encouraged by their more experienced colleagues and by the expectations of their university environments to perceive the session format hierarchy as I have described it above.
Given that there is no explicit articulation in the call for proposals that different session formats represent different levels of scholarship, the next question becomes: What could account for this perception of session formats among AERA members? One of the official documents of AERA, the annual meeting handbook for division and SIG chairs, helps shed light on this question. In the handbook, there is a section called Issues and Advice from the 1995 Annual Meeting Committee, and in this section, one of the issues raised is called Proliferation of Sessions. This section reads as follows:

• Major issue is the access and responsiveness that expansion permits pitted against quality (or at least the perception thereof) of the program
• May want to think further about creative uses of time (e.g., more posters, more roundtables, more numerous but shorter session (sic) were all cited) to permit access within the constrained (sic) that still militate for quality. Examine ways in which the status of such sessions could be enhanced (e.g., discussant at the tables) to make them attractive options for submitters
• Reexamine limits on number of presentations: consider reducing further and enforce seriously (1998 AERA Annual Meeting Procedures Handbook)

Several interesting points are raised in this segment of text from the handbook. First, there is the issue of status, raised in the second bullet point. This point tacitly acknowledges what I have been arguing, which is that session formats such as posters and roundtables are generally perceived to be of lower status than formats such as papers. Specifically, there is mention in this bullet point of suggestions for ways to enhance the status of these sessions, such as adding discussants to them. Second, all three bullet points allude to a possible explanation for this perception of differences in quality of scholarship based on session format, and this explanation has to do with scarcity of space.
Phrases such as "creative uses of time" and "limits on number of presentations" suggest that there is not enough room in the program to accommodate every researcher who wishes to present original work at the annual meeting. Further, the phrase "expansion...pitted against quality" suggests the perception on the part of the membership and AERA officials of an inverse relationship between quality of proposals and amount of space in the meeting, such that the more space allotted for proposals, the lower they will be in quality. More relevant text from the handbook further develops this supposition, and also provides a clue as to why the roundtable in particular might be considered a "booby prize" format. The text in question states that Effective with the 1994 Annual Meeting, the Council increased the limit of the number of 1-1/2 or 2-hour sessions from 775 to 969...Additional sessions can be accommodated in the 40 minute Small Roundtable format and are not counted in the cap. The ceiling is firm and cannot be exceeded. According to the handbook, then, there is a finite, absolute number of annual meeting sessions to be divided among divisions and SIGs each year. The limit on the number of sessions has been raised in recent years, but the fact remains that space, and hence opportunity to present original research at AERA’s annual meeting, are at a premium. Sessions such as papers or symposia are included in the cap, which means there is a finite number of them available to any given division or SIG. Thus, since each division usually receives far more proposals than can be accommodated in the limited number of sessions allotted, there must be some mechanism for determining which proposals will be selected to occupy those limited sessions.
This is where the peer review process comes in: it is used to help division and SIG chairs determine which annual meeting proposals to accept or reject, and indeed, this notion of scarcity is one of the premises upon which the peer review process is based. As Henneberg (1997) states, No economy is unlimited in size and, hence, no economy can support all possible experiments scientists might think of. Limited resources require that somebody decides which experiments are to be done and which are not. Such decisions necessarily involve an element of subjectivity and are constrained by social norms...With time it became obvious that not all written matter submitted to journals for publication or to societies for presentation could be published. A certain standard of quality had to be applied. As this statement from Henneberg (1997) indicates, then, both the opportunity to conduct research and the opportunity to communicate it to others are scarce commodities relative to the number of people desiring to conduct and communicate research. By subjecting scholars’ individual research offerings to a critical review process and then, on the basis of these reviews, approving this research for presentation or rejecting it, the annual meeting subtly rewards the efforts of some researchers while discouraging those of others. It puts power in the hands of some researchers by providing them with a forum for voicing their ideas while depriving others of similar power. The logical conclusion that follows from this connection between scarcity and research is that the research topics and products that are chosen to be conducted or presented are those most worth doing and learning about (Dalton, 1995).
In other words, in terms of AERA, if one’s work passes muster with reviewers and division or SIG chairs and is selected to occupy one of the precious spaces in the annual meeting program, it stands to reason that this work is of higher quality and significance than work that was not selected. Thus, a proposer who is awarded a paper or symposium session in which to present her work has achieved a professional coup. On the other hand, the handbook also clearly states that a division or SIG chair can request as many roundtables as he or she likes, because roundtables are not included in the cap of 969 sessions. Indeed, a drastic example of this comes from one AERA Standing Committee member, who commented that in a recent year a division or SIG chair requested some 93 roundtable sessions, a number disproportionately large compared to the number of sessions from the cap that that division or SIG had been allotted. By requesting so many roundtable sessions, this division or SIG chair was effectively able to include in the program all researchers who had submitted a proposal to that division or SIG. A comparison of the 100% acceptance rate of this division or SIG with the organization-wide trend to accept just over 50% (Annual Meeting Handbook, 199x) of submitted proposals demonstrates the way this division or SIG chair used the unlimited number of roundtable sessions to “beat the system.” What I conclude from the handbook information, coupled with the testimony from users of Tiger, is that this perception that the paper format is more prestigious than the roundtable format stems directly from the scarcity of the resource called the paper session. As the handbook indicates, and as the example from the Standing Committee member attests, a division or SIG chair can request as many roundtables as desired.
The implication that follows from this is that any old scholarly products, rather than only those that have withstood rigorous peer review, may be given air time in a roundtable session. Assuming this perception is shared by the research community, an assumption supported by the data, the scholarly value of the research presented as a roundtable is therefore immediately called into question, or is automatically perceived to be of lesser status than that of a paper session.

The Influence of Tiger on Institution of Inquiry #2.

As with institution of inquiry #1 (scholarship must appear scholarly), the breaching experiment effect of Tiger accounts for my ability to identify this institution of inquiry. Specifically, in many cases, Tiger neglected to provide reviewers with the session formats the proposers had requested on the submission forms, despite the fact that this information is always provided to reviewers in the context of the traditional review process for the annual meeting. Some reviewers complained about this lack of information about session format requests. Eventually, it became clear that reviewers were concerned about the lack of this information because, as the data indicate, they review proposals differently depending on the format requested by the proposer. With respect to Bruce’s (1993) taxonomy, the main type of change associated with this institution of inquiry that was in evidence was dissonant change related to the breaching experiment effect of Tiger. Specifically, the fact that Tiger did not consistently provide reviewers with information about the type of presentation that was being requested caused them confusion and difficulty in reviewing proposals. In the implications section, I will provide more detail about the subsequent cascades of changes that may take place as a result of the introduction of Tiger into AERA’s proposal processing activities.

Research Question #3: How Do the Institutions of Inquiry Within AERA Help and Hinder the Organization in Meeting Its Goals?
In response to this third research question, I take what I have learned about the institutions of inquiry within AERA and discuss the ways these institutions of inquiry help and hinder the organization as it attempts to meet its goals. I also draw upon information that could be considered part of the literature, to the extent that it comprises published articles, or that could be considered data, to the extent that much of it is from Educational Researcher, one of the pieces of AERA documentation that I have been referring to as data. In any event, this information further helps to explain how and why institutions of inquiry are so salient within educational research in particular, and also how and why they eventually reach a point of diminishing returns. In other words, in this section I discuss how the institutions of inquiry within AERA are helpful and harmful to the enterprise of educational research. This discussion centers on the tension in educational research between the need for credibility and the unpredictable byproducts of constraint.

Institutions of inquiry are adaptive because they afford credibility within the wider academic community.

What is the purpose of research? Scholars offer many reasons: to generate new knowledge; to develop a deeper understanding of the world we live in; to make a meaningful and memorable contribution to our intellectual history. The more focused question of what the purpose of educational research is generates a similar list of reasons, couched in the context of educational improvement: to develop a better understanding of factors that promote meaningful learning; to intervene in positive ways in children’s learning experiences; to contribute to the ongoing effort to build theories of learning and teaching. The issue of what constitutes educational improvement is not self-evident, however, but instead requires value judgments on the part of those conducting the research.
Progress can only be measured in relation to relatively stable notions of the current state of education and the sought-after, ideal state of education. Not surprisingly, these notions differ for different stakeholders. Indeed, recent trends within the educational research community have been characterized by a growing perception that facticity is relative, that different ways exist to make sense of the world, and that these different ways have the potential to enter into conflict with one another. And given that education is fundamentally applied, that is, located in the practices and activities of real people rather than abstracted from them, these conflicts about the nature of progress, meaningful education, and “proper” courses of action can become extremely volatile. It is for this very reason that if we as educational researchers wish to generate research that truly furthers the agenda of improving education, whatever that may mean, we cannot succumb to an “anything goes” approach. While it may be true that multiple perspectives and methods for understanding the world have validity, it does not logically follow from this that all perspectives and methods for understanding the world have validity. For instance, people are capable of deceit and mistakes, both of which would throw their research results and recommendations for future research and practice into question. All research, then, including educational research, is characterized by the need for rigor. The scientific method is the concept that embodies the elements that set scholarship apart from other forms of discourse; these elements include the replicability of research activities; a clear sense that what we conclude about our research is the result of the intervention or experimentation, and not the result of someone’s mistake or malicious intent; and the reporting of research in ways that make the different elements transparent and available for scrutiny by members of the research community.
In other words, the ways we as researchers carry out and communicate our research matter immensely within the research community. For this reason, what I have been calling the institutions of inquiry within scholarship count a great deal, and they are a standard, a shorthand on which we as a field can “hang our hat” when discussing our findings and proposing courses of action based on them. When we encounter scholarship that has been carried out in a manner consistent with the institutions of inquiry, we might not even notice, because our sensibilities have not been affronted. But when those institutions of inquiry are disrupted, as they were when Tiger was introduced into AERA, we take note and take action, often feeling viscerally that something is not right without knowing exactly what it is. Institutions of inquiry, then, are a tacit but deeply ingrained safeguard against un-“scientific” research activity. And for better or worse, education is definitely a field that requires this kind of vigilance. Within academia in general, education is typically perceived to be less rigorous and therefore less scholarly or scientific than other fields that adhere closely to the scientific method. Thus, the fact that AERA members subconsciously act in accordance with institutions of inquiry is adaptive first and foremost because these institutions of inquiry help us gauge scholarship by something approximating a stable standard; in other words, they afford credibility within the wider academic community. I argue this point in greater detail below, in terms of the strikes educational research already has against it with respect to credibility in the broader scholarly context.

The reputation of educational research.

The field of educational research has often been known informally but widely as the “stepchild” of the research world, perceived as a less rigorous, less challenging field of inquiry than other fields such as the physical sciences.
Educational research has a poor reputation for several reasons. Some of those that are more widely discussed have to do with the supposition that educational research tells us nothing new, that reported results are obvious (Gage, 1991), and that educational research agendas are time-consuming and costly, yet typically result in very little demonstrable change in actual educational practice (Kaestle, 1993; Scheurman, Heeringa, Rocklin, & Lohman, 1993; Carroll, 1993; Lagemann, 1996). This is problematic with respect to researchers’ credibility to funders, educators, and policymakers, because the ostensible goal of educational research is to exert a positive influence on the teaching and learning experiences of teachers and students. Thus, if educational research cannot be shown to have a demonstrable influence on teachers’ practice or student learning, then educational researchers are hard pressed to justify their work. There are other influential perceptions of educational research that contribute to its diminished status within academia in general. First of all, education is known as a social science. The very use of the word “social” to describe those fields that focus in some way on human activity diminishes or casts doubt on their status as rigorous intellectual activity. This is so because the term “social” implies subjectivity, whereas a science is traditionally perceived to be objective and devoid of bias and relativism. The prevailing wisdom is that the social sciences are messy, “wannabe” sciences filtered through the lens of human experience rather than standing on their own as “fact,” as a mathematical proof, a balanced chemical equation, or the structure of an atom can. It is important to note that the community of physical scientists is increasingly acknowledging the subjectivity inherent in the work of the physical sciences (House, 1991), a point I return to later.
However, in general practice, the academy has by no means relinquished the perceived distinction between the physical and the social sciences. Indeed, this distinction is codified in the very language used to describe these groups of disciplines. For example, the physical and social sciences are also known as the “hard” and “soft” sciences, respectively. “Hard” implies “hard and fast,” that is, trustworthy, dependable, truthful, while “soft” implies malleable, tentative, weak. Social sciences, then, may be perceived as less scholarly because they are perceived as less steadfast; the very content of the social sciences is perceived to be less systematic, less able to be compartmentalized, than the content of the physical sciences. Another reason educational research suffers from diminished status in academia has to do with the fact that education has historically been perceived as a woman’s field. Within education, women comprise quite a large percentage of practitioners and researchers alike, especially relative to other fields. While in a perfect world this would make no difference, the fact is that historically, the education of children has been seen as an extension of the maternal duties performed by the “weaker” or “fairer” sex, not as an intellectually challenging or theoretically significant activity. Further, the association in people’s minds of education with women has also facilitated the association of education and educational research with many of the stereotypes that prevail about women: that they are irrational, governed by feelings rather than reason or logic, and more concerned with personal relationships than with abstract principles. This furthers the characterization of educational research as subjective and untrustworthy. Finally, yet another reason educational research has suffered from diminished status within academia has to do with its multidisciplinary nature.
Educational research encompasses the examination of student learning of all types of disciplines, at all ages and developmental stages, and in a wide variety of contexts. By its very nature, it can be based upon multiple theoretical perspectives and carried out via multiple types of methodologies (Eisner, 1993; Glass, 1993; Howe & Dougherty, 1993). Thus, there is a striking lack of consensus as to the “proper” content or nature of educational research within the community of educational scholars; indeed, one researcher’s theoretical or methodological treasure may turn out to be another researcher’s trash. These disparate elements of educational research leave the field seeming quite fractionalized and wide-ranging. We can thus begin to piece together an explanation of the reasons the annual meeting might prove so difficult to change, an explanation that links back to the notion of the annual meeting as a complex social system. The foregoing description of the reputation of educational research relative to other academic fields delineates a social and political context within which educational research takes place. It enables us to explore the notion that the annual meeting might be difficult to change because its current structure has some measure of adaptive value for AERA and its members within this social and political context. Indeed, I have implied this by describing the tendency of educational researchers to adhere to certain types of annual meeting practices in the face of uncertainty. It is interesting to note that the habits of educational researchers die hard despite their explicitly and repeatedly stated desire to change these practices, and despite the fact that the perception of educational research as it currently stands garners relatively little respect from those outside the discipline.
Given the many perceptions, beliefs, and theoretical frameworks that characterize educational research, then, it appears that one element of this seemingly non-coherent field that gives it some coherence is the notion of institutions of inquiry and what they represent on a deeper level: conformity to a measurable standard. When all else fails, educational researchers can be secure in the knowledge that APA style and the notion of presenting a paper will mean something significant to others within their academic community. Further, as I have been arguing, these apparently trivial concerns actually index more complex and significant issues in the conduct of educational research; thus, adherence to them lends credibility that extends far beyond appearances or simple rule-following. This is important not only so that we can convince our hard science counterparts of the worthiness of our work, but also so that we ourselves can be assured that we have done work of some quality and of some potential use. On the flip side, however, educational research is caught between a rock and a hard place. These same institutions of inquiry and rigorous methods that help educational research retain its credibility in the larger research arena and ensure the conduct and communication of good quality scholarship also have the potential to severely limit the scope of what we consider to be scholarly. Unfortunately, our limited scope often proves to exclude ideas and methods that may actually aid us in trying to reach our stated goals as a field. I discuss this tension in the following section.

Institutions of inquiry are maladaptive because they constrain the scope of what we consider to be scholarly.

What is also clear about institutions of inquiry is that they do constrain our scholarly behavior in certain, sometimes maladaptive, ways.
The example discussed earlier represents how our beliefs about the “proper” ways to do things can influence our meaning-making, sometimes in ways that hinder rather than help us in our efforts to improve education. Given that educational research is so widespread and multifaceted, it does not make sense to expect that all research would neatly fit within the confines of the institutions of inquiry that currently exist. However, this is often exactly what happens, as the foregoing data analysis demonstrates. Madigan et al. (1995) demonstrate, using a specific example, how institutions of inquiry can have a very far-reaching influence on our activities. They argue that some of the physical characteristics of written scholarship in psychology as dictated by the APA style manual have broader social and intellectual implications than might be immediately apparent. Their argument is that “some of the less obvious characteristics of APA style support the discipline’s commitment to the empirical method and the discipline’s view of itself as a cumulative, collaborative enterprise” (p. 428). Further, they argue that “[s]tudents who enter the field of psychology acquire psychology’s language conventions, and in doing so they also come to implicitly endorse important values of their discipline” (p. 428). In other words, they argue that the discipline’s requirement that members adhere to APA style conventions when conducting inquiry shapes the types of inquiry that can take place, the types of knowledge that are considered relevant and accurate, and the types of social relations that surround the enterprise of psychological research. They write that APA conventions “may be viewed from a sociocultural perspective wherein APA style richly reflects psychology’s intellectual milieu, in which agreement about trivial details can carry with it agreement about more fundamental matters” (p. 429).
Seemingly inconsequential conventions of this disciplinary community’s written interaction patterns, such as the “overall organization of empirical reports” (p. 430), are actually seen to combine in ways that become symbolic of the broader rhetorical and intellectual tendencies and social habits of mind that influence it. This example demonstrates how the influence of our institutions of inquiry can extend far beyond appearances or levels of scholarship. Further, within the membership itself, other by-products of our institutions of inquiry, both immediate and long-term, may perpetuate practices and habits of mind that we don’t necessarily want perpetuated. For example, with respect to scholarship needing to look scholarly, this institution of inquiry could have an immediate impact on people who are not as familiar with the forms as others (e.g., people for whom English is not a native language), putting them at a disadvantage with respect to the perception of their competence. And given that one of the stated goals of AERA is to increase international participation, it would behoove us to examine our institutions of inquiry more closely to determine any unwanted side effects, such as these, that they may engender. Similarly, the institution of inquiry that suggests a hierarchy in formats of scholarship may be engendering maladaptive side effects that we do not even realize consciously. For instance, given that the paper presentation is considered to be the highest form of scholarship, and given that the paper presentation is the most like a lecture format of any of the choices for the annual meeting, it is a short jump to consider how this perception of presentation format might be having an immediate influence on our unconscious beliefs about activity in the classroom.
In other words, despite the fact that we as educational researchers talk a good game about desiring to help practicing teachers move away from the lecture format in the classroom toward more collaborative, active modes of interaction around content, the important question remains as to how much we are subconsciously influenced by this institution of inquiry to disparage these other types of interaction. If a researcher considers the paper format to be the best of the best in an educational research context, is it safe to assume that this researcher will be able to set this bias aside while working in a classroom, particularly since it often operates at a subconscious level? The point I am making is that perhaps our stated intentions about schooling and research are actually being undermined by our tacit belief systems in ways we do not even realize. This, clearly, could be a serious problem. The issue goes even deeper than these concerns, suggesting more long-term and deep-seated implications for the enterprise of educational research. Ultimately, the entire enterprise of scholarship hinges on the taken-for-granted assumption that work is judged on the basis of quality, such that the best, most rigorous, and most useful works are selected for publication in sanctioned journals and presentation at sanctioned meetings. Despite the existence of an “old boys’ network” that operates in scholarly circles, and despite the fact that peer review is a subjective process, the belief is that the highest quality scholarship will still somehow rise to the top. What my study suggests, however, is that the scholarship that rises to the top is often the scholarship that adheres to the institutions of inquiry; that, more than they realize, scholars rely on trivial, arbitrary proxies of quality rather than screening for the real thing.
This suggestion throws the entire enterprise of scholarship into question; if scholarship is not actually based on those values it espouses, but instead on ultimately meaningless proxies of quality, then what is the true nature of the scholarship that typically passes muster within the scholarly community? This is a troubling question with potentially far-reaching implications.

Research Question #4: How Will These Institutions of Inquiry Interact in the Future With a Technological Innovation Such as Tiger?

Another issue that will continue to be significant is the extent to which a technological innovation like Tiger will interact with these institutions of inquiry, and what the resulting combination of innovation and established practice will mean for AERA practices and habits of mind in general. It is exceedingly difficult to make such predictions with any measure of accuracy. However, this should not preclude us from trying, because in the attempt, we may learn more about those elements of scholarship that we claim to value, how our scholarly practices support and undermine those elements, and how our practices interact with, change, and are changed by novel artifacts such as computer technology. I discuss this issue below, speculating about what the institutions of inquiry I have described in the results section might look like in the future as they continue to interact with computer technologies. Again, Bruce’s (1993) taxonomy of types of technological change is relevant here. With respect to the notion that scholarship must look scholarly (institution of inquiry #1), the issue of formatting on Tiger is already being resolved. In keeping with Bruce’s (1993) notion of changes to the innovation, this year’s version of Tiger has already been modified so that it can easily accept graphics files and hypertext markup language (HTML), which will enable proposers to retain the formatting, the tables, and the graphs associated with their proposals.
So for this institution of inquiry, what Tiger facilitated was a “sneak peek” at this powerful institution of inquiry. As users become more proficient with HTML and learn ways to retain the formatting that is so important to them, they will more than likely be able to return to their familiar habits of using formatting to transmit meaning that goes far beyond appearances. And this institution of inquiry will once again become taken for granted: that is, until the next innovation wreaks havoc with it, surfacing it once again for our examination. On the other hand, it appears that the notion of a hierarchy of session formats (institution of inquiry #2) in the context of this technological innovation is well described by Bruce’s (1993) concept of cascades of changes. First, Tiger disrupted this institution of inquiry by stripping proposals of their formats and preventing reviewers from using the requested format information in their assessment of proposals. As with the formatting issue, this is a glitch that is easily remedied; indeed, it has already been addressed for the coming year. However, the cascades of changes have to do with the rationale behind the institution of inquiry, the notion, as I have been arguing, of scarcity. Given that scarcity of space is what drives this institution of inquiry, Tiger has the potential to wreak havoc with it because of the nature of the medium that supports Tiger. This is so because Tiger is based on the World Wide Web, which, theoretically, provides an infinite amount of space for research proposals and papers that are to be presented at the annual meeting, or to be published in a scholarly journal, for that matter. And since AERA officials are considering diversifying the use of Tiger in future years, it is actually conceivable that a lot of research that would normally be presented at the annual meeting could eventually be “presented” on the Web, via real-time conferencing and the like.
So the notion that there would only be a finite number of paper sessions would be called into question, which would undermine this apparent connection between scarcity and quality. The realization that a hierarchy of session formats exists may lead to a cascade of changes that will completely alter the nature of the annual meeting itself.

Chapter 5

CONCLUSIONS AND IMPLICATIONS

The main implication of this study is that the way we do things has a profound influence on the kinds of things we can do; within a scholarly context, our methods and tacit expectations about scholarship influence that which we consider scholarly. Our beliefs are shaped by our institutions of inquiry, and these beliefs also shape those things we can consider appropriate behavior or knowledge. This is not to say that having institutions or being influenced by them is a negative thing; rather, the institutions of inquiry within AERA help us establish a meaningful standard for evaluating our work. However, it is to say that a deeper awareness of the influence of these institutions on our behavior can only be a positive thing, because it will force us to look beneath what we perceive to be “proper” to access what we perceive to be adaptive or desirable or valuable in a scholarly setting. And subsequently, we would be better able to orchestrate those kinds of predictable organizational behaviors and beliefs that we think we want. This is so because a deeper understanding of our institutions of inquiry would increase our understanding not only of what it is that we truly value in scholarship, but also of how our practices and assumptions support, but also subconsciously hinder, what it is that we truly value. Another implication of this study is that the introduction of any technological innovation into an organizational context represents more than the introduction of a simple tool.
In addition, it represents a challenge to an accepted, taken-for-granted way of life—to a set of institutions—bringing as it does its own set of assumptions, values, and beliefs (Bruce & Hogan, 1998) to bear on the context, a set that might be at odds with this way of life. Within AERA, the new innovation required of the membership that they learn and incorporate a new set of complex, interdependent social practices that are not necessarily consistent with the social practices that already exist within AERA. Clearly, this complicates matters, but it also provides an opportunity to examine closely one example of organizational change, via the literature on technology and society. This is important because the fact of the matter is that even though change may seem infinitesimal, especially if we use the AERA annual meeting as an example, it does ultimately occur, with potentially far-reaching implications for members of the social group. In this context, researchers are coming to realize that there is a dire need for a vocabulary that can help us understand this more appropriately complex perception of technology’s interaction with established social practices. This study, then, makes a meaningful statement about the social construction of knowledge in a changing world. At first glance, AERA’s annual meeting appears to be an immovable object, unfathomable in its inertia. With the assistance of some literature on scientific inquiry and neoinstitutionalism, however, we can begin to look beneath the surface of the annual meeting to develop an understanding of the nature of the annual meeting and AERA. We can see that institutions of inquiry have an enduring and widespread influence on the social construction of knowledge. We can also see that the institutions structuring the social interactions that provide the context for the construction of knowledge are subject to the influence of technological innovation.
Thus, the introduction of Tiger into the social practices of AERA provides a timely example of the fact that many social organizations are facing the similar infiltration of technological innovation into their own practices, with as yet unclear implications about what “should” be happening in these organizations with respect to technology. In other words, examination of computer technology in the context of the practices of a knowledge-generating organization puts us in mind of Bazerman’s (1987) argument that we must constantly reevaluate our practices in a changing world. Further, the use of Bruce’s (1993) taxonomy of social change related to technological innovation can help us take a closer look and develop a clearer and more detailed picture of the nature of this social change, linking this picture back to what it means to engage in the production of scholarship. In the following sections, I discuss specific implications of this study for research practices, technological innovation, and education.

Implications for the Conduct and Practices of Scholarship

The findings from this study provide a context within which we as a scholarly community can think more deeply about those practices and expectations that form the very basis of what we consider to be scholarship. While these practices and expectations may differ depending on the discipline, they exist nonetheless; thus, the implications of this study are relevant to all scholarly disciplines and fields. When we examine these practices, or institutions of inquiry, it is distinctly possible that we would discover other instances in which our tacit expectations undermine our explicit claims about what we value and hope to achieve with scholarship. Peer review is a notable example of this possibility. As Campanario (1993) indicates, “Almost all leading journals use a peer review system in order to evaluate or select contributions.
Manuscripts are reviewed by members of the editorial staff, or by one or more external referees who are supposedly experts in their fields" (p. 343). Annual meeting proposals are also selected via peer review. As Campanario's description indicates, peer review involves the evaluation of original research by reviewers who are knowledgeable about the subject matter of a manuscript or proposal. Peer review is usually blind, which means that neither the author nor the reviewer knows the name or other identifying characteristics of the other. The rationale is that blindness enables the reviewer to provide candid feedback without fear of repercussion from the author, which is sometimes difficult when the feedback is negative.

In theory, peer review is a good idea: it enables members of a given scholarly discipline to police the works that eventually come to represent the field, and it theoretically enables the academy to ensure the rigor of the works that the field sanctions. It puts the responsibility for evaluating original work squarely in the hands of the author's colleagues, specifically those colleagues who have demonstrated expertise in the content area covered in the manuscript. For these reasons, peer review seems like a logical method for selecting scholarship.

However, the institution of peer review breaks down upon closer inspection. Since a major purpose of scholarship is to advance our understanding of the world and to contribute to or modify in systematic ways the pre-existing body of scientific knowledge, it is useful to examine those scientific discoveries that have had a significant impact on our understanding of the world. In this examination, it is useful to keep two questions in mind. First, can those individuals whose work advanced our understanding of the world in revolutionary ways really be said to have had peers?
And second, was the community within which these individuals did their work truly responsible for the advancement and endorsement of their revolutionary ideas?

With respect to the first question, who among his contemporaries could be considered Einstein's peer, for example? Einstein obviously worked within a scholarly context at Princeton University, but given the scope and radicalism of his ideas for their day, could those colleagues of his who were working within the existing scientific paradigm truly be considered peers? Einstein is widely considered a genius (as evidenced, among other things, by the fact that upon his death his brain was preserved for closer inspection of what it might tell us about the organic nature of genius); but how many true geniuses exist in one scholarly discipline or field? As Gibbons, Limoges, Nowotny, Schwartzman, Scott, and Trow (1994) write, "the evidence seems to say that most of the advances in science have been made by 5 per cent of the population of practising scientists" (p. 1). Thus, while we may speak broadly of our peers in the scholarly enterprise, those scholars who truly further the cause of science actually have far fewer peers than we assume.

With respect to the second question, how did Galileo's peers, for example, advance the knowledge that he put forth regarding the solar system and outer space? In fact, they did no such thing. Rather, Galileo's peers imprisoned him and excommunicated him from the Catholic Church because they considered his beliefs heretical. In Galileo's situation, then, the "peer review" process, such as it was, did not further our understanding of the world or help to generate discussion about a potentially exciting scientific discovery; instead, it attempted to force a renegade to renounce his radical, and therefore unpalatable, ideas and conform to the prevailing belief systems of the time. According to Campanario (1993), this is still a widespread problem in academia today.
He writes that

the assertion that scientists sometimes resist scientific discovery clashes with the stereotype of the scientist as 'the open-minded man'. However, it is also commonly accepted that new ideas and unexpected observations often have difficulty getting published. It is also a common idea that some discoveries are 'premature', and do not fit the current conceptual framework of a given discipline. Sometimes these considerations cause a delay between the discovery and acceptance of new ideas by the scientific community. (p. 342)

Thus, the plight of a modern-day Galileo would likely not differ in any significant respect from that of the original; while he might not actually be excommunicated or imprisoned, it is plausible that he would be silenced or otherwise professionally sanctioned for his "heresy." In light of these examples, we as a scholarly community must continually ask ourselves: Which of the other practices that we follow in the name of scholarship and the advancement of scientific knowledge are actually hindering our attempts to contribute to a systematic understanding of the world? This dissertation helps us address these questions by providing a language and a context for deepening our understanding of the influence of our scholarly practices on what we claim to believe and value within scholarship.

Implications for Technological Innovation

The findings of this dissertation also help in the development of a deeper understanding as to why technological innovations are so often resisted or watered down to the point that they make no significantly different contribution to society, and why we often find ourselves confronting the "no change" part of Bruce's (1993) taxonomy. The findings make clearer the notion that people have very ingrained and influential senses of what constitutes appropriate behavior (very ingrained and influential institutions) that are salient within a given social context.
The dissertation suggests that a new technological innovation may disrupt these expectations (Hodas, 1993), leaving members of a community or organization feeling anxious and upset because they are unable to rely on their shorthand understanding of the world to make sense of the new context. Further, the dissertation suggests that these members may not even realize why they are upset. This may be so because, as the findings indicate, the institutions that structure our behavior are often tacit and taken for granted rather than explicit and easily identifiable; thus, while we certainly get upset when a technological innovation enters the context, we may not be able to articulate the reasons.

Another implication of this study for technological innovation involves the breaching experiment effect. Now that we have a sense that the breaching experiment effect exists, particularly with respect to computer technologies, we can use it to our advantage, to help us determine how to incorporate computer technologies into our existing practices in ways that truly support our stated beliefs and values. We can do so by making a point of following up on the complaints and fears that organization or community members express when they are faced with the introduction of a piece of computer technology into their pre-existing practices and habits of mind. In so doing, we can reveal the institutions at work that are structuring our behaviors and expectations, and we can develop a better sense of how the computer technology is or, as is more likely, is not mapping onto them. From there, we can develop better methods for incorporating technology into existing practices in ways that are more likely to encourage rather than discourage community or organization members in their use of technology.
Implications for Education

The findings of this dissertation also speak to issues in educational settings such as K-12 classrooms. Insofar as institutions of inquiry are clearly at work in scholarly settings such as AERA, it is likely that there are also institutions of pedagogy and learning at work in classrooms that similarly hinder our explicitly stated goals of improving education. If, as I have been arguing, institutions influence our behavior and beliefs in educational research settings, it stands to reason that they would influence our behavior and beliefs in practitioner and teacher education settings as well. And if institutions of teaching and/or learning are indeed structuring our behavior in these settings, then understanding them would help us determine how to implement technology in classrooms in ways that are truly meaningful and truly in keeping with what we claim to want for our students. For instance, we could examine classroom discourse practices (Cazden, 1988) such as the initiate-respond-evaluate mode of communication, which I hypothesize is a deep-seated institution of the classroom. We could use the breaching experiment effect both to uncover the tacit practices that structure our behavior in classrooms and to determine the extent to which, and the reasons why, such practices do or do not appear to mesh with computer use. In addition, we would be able to access the deeper issues of what we aspire to make school and education look like, and how our actions and assumptions (our institutions) do and, equally importantly, do not support our efforts to achieve this goal. Thus, while an educational research organization was a compelling context for this study, it is clear that the findings of this dissertation have implications for other educational contexts.
The findings, then, could speak to the experiences of other knowledge-generating social organizations in the face of change agents, such as the one under investigation, that influence their social practices. This study provides us with the opportunity to move past the rhetoric about our goals for education, as well as the rhetoric about computer technology, and enables us to begin to think about the significant issues at hand.

References

American Educational Research Association 1999 annual meeting call for proposals. (1998). Educational Researcher, 27(4), 29-42.

American Psychological Association. (1994). Publication manual of the American Psychological Association (4th ed.). Washington, DC: American Psychological Association.

Annual report. (1995). Educational Researcher, 24(6), 30-39.

Bazerman, C. (1987). Codifying the social scientific style: The APA Publication Manual as a behaviorist rhetoric. In J. S. Nelson, A. Megill, & D. N. McCloskey (Eds.), The rhetoric of the human sciences: Language and argument in scholarship and public affairs (pp. 125-144). Madison, WI: University of Wisconsin Press.

Bazerman, C. (1988). Shaping written knowledge: The genre and activity of the experimental article in science. Madison, WI: University of Wisconsin Press.

Bloom, A. (1988). The closing of the American mind. New York: Touchstone Books.

Bogdan, R. C., & Biklen, S. K. (1992). Qualitative research for education: An introduction to theory and methods. Boston, MA: Allyn & Bacon.

Bruce, B. C. (1993). Innovation and social change. In B. C. Bruce, J. K. Peyton, & T. Batson (Eds.), Network-based classrooms (pp. 9-32). Cambridge, UK: Cambridge University Press.

Bruce, B. C., & Hogan, M. P. (1998). The disappearance of technology: Toward an ecological model of literacy. In D. Reinking, M. McKenna, L. Labbo, & R. Kieffer (Eds.), Handbook of literacy and technology: Transformations in a post-typographic world (pp. 269-281). Hillsdale, NJ: Erlbaum.

Burris, B. H. (1989).
Technocracy and the transformation of organizational control. The Social Science Journal, 26(3), 313-333.

Campanario, J. M. (1993). Consolation for the scientist: Sometimes it is hard to publish papers that are later highly-cited. Social Studies of Science, 23, 342-362.

Carroll, J. B. (1993). Educational psychology in the 21st century. Educational Psychologist, 28(2), 89-95.

Cazden, C. B. (1988). Classroom discourse: The language of teaching and learning. Portsmouth, NH: Heinemann.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.

Dalton, M. S. (1995). Refereeing of scholarly works for primary publishing. Annual Review of Information Science and Technology, 30, 213-250.

DiMaggio, P. (1988). Interest and agency in institutional theory. In L. G. Zucker (Ed.), Institutional patterns and organizations (pp. 3-21). Cambridge, MA: Ballinger Publishing Co.

DiMaggio, P. J., & Powell, W. W. (1991). Introduction. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 1-38). Chicago, IL: The University of Chicago Press.

D'Souza, D. (1991). Illiberal education. New York: The Free Press.

Eisner, E. W. (1997). The promise and perils of alternative forms of data representation. Educational Researcher, 26(6), 4-10.

Eisner, E. W. (1993). Forms of understanding and the future of educational research. Educational Researcher, 22(7), 5-11.

Gage, N. L. (1991). The obviousness of social and educational research results. Educational Researcher, 20(1), 10-16.

Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.

Gaskins, R., Kinzer, C. K., Mosenthal, P., Watts Pailliotet, A., Reinking, D., Hynd, C., & Oldfather, P. (1998). Bringing scholarly dialogue to the surface: A view of the JLR review process in progress. Journal of Literacy Research, 30, 139-176.

Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M.
(1994). The new production of knowledge: The dynamics of science and research in contemporary societies. London, England: Sage Publications.

Glass, G. V. (1993). A conversation about educational research priorities: A message to Riley. Educational Researcher, 22(6), 17-21.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage Publications.

Hammersley, M., & Atkinson, P. (1983). Ethnography: Principles in practice. London: Tavistock.

Harding, S. (1991). Whose science? Whose knowledge? Thinking from women's lives. Ithaca, NY: Cornell University Press.

Heath, S. B. (1983). Ways with words: Language, life, and work in communities and classrooms. New York: Cambridge University Press.

Henneberg, M. (1997). Peer review: The holy office of modern science. Natural Science, 1(2). http://naturalscience.com/ns/articles/01-02/ns_mh.html

Hirsch, E. D., Jr. (1988). Cultural literacy: What every American needs to know. New York: Vintage Books.

Hodas, S. (1993). Technology refusal and the organizational culture of schools. Education Policy Analysis Archives, 1(10). http://olam.ed.asu.edu/epaa/v1n10.html

hooks, b. (1989). Talking back: Thinking feminist, thinking black. Boston, MA: South End Press.

House, E. R. (1991). Realism in research. Educational Researcher, 20(6), 2-9, 25.

Howe, K. R., & Dougherty, K. C. (1993). Ethics, institutional review boards, and the changing face of educational research. Educational Researcher, 22(9), 16-21.

Jackson, P. W. (1990). The functions of educational research. Educational Researcher, 19(7), 3-9.

Jansen, G., & Peshkin, A. (1992). Subjectivity in qualitative research. In The handbook of qualitative research in education. San Diego, CA: Academic Press.

Jepperson, R. L. (1991). Institutions, institutional effects, and institutionalism. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 143-163). Chicago, IL: The University of Chicago Press.

Jepperson, R. L., & Meyer, J. W. (1991). The public order and the construction of formal organizations. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 204-231).
Chicago, IL: The University of Chicago Press.

Kaestle, C. F. (1993). The awful reputation of education research. Educational Researcher, 22(1), 23-31.

Kling, R. (1991). Computerization and social transformations. Science, Technology, & Human Values, 16(3), 342-367.

Krieger, S. (1991). Social science and the self: Personal essays on an art form. New Brunswick, NJ: Rutgers University Press.

Lagemann, E. C. (1996). Contested terrain: A history of education research in the United States, 1890-1990. Chicago, IL: The Spencer Foundation.

Landow, G. P. (1997). Hypertext 2.0. Baltimore, MD: Johns Hopkins University Press.

Lieberman, A. (1992). The meaning of scholarly activity and the building of community. Educational Researcher, 21(6), 5-12.

Madigan, R., Johnson, S., & Linton, P. (1995). The language of psychology: APA style as epistemology. American Psychologist, 50(6), 428-436.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279-300.

Maykut, P., & Morehouse, R. (1994). Beginning qualitative research: A philosophic and practical guide. London, England: The Falmer Press.

McLuhan, M. (1994). Understanding media: The extensions of man. Cambridge, MA: MIT Press.

Meyer, J. W., & Rowan, B. (1991). Institutionalized organizations: Formal structure as myth and ceremony. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 41-62). Chicago, IL: The University of Chicago Press.

Miles, M. B. (1994). Recasting the annual meeting: Reflections on a change process. Educational Researcher, 23(1), 21-27.

Miller, L., & Olson, J. (1994). Putting the computer in its place: A study of teaching with technology. Journal of Curriculum Studies, 26(2), 121-141.

Negroponte, N. (1995). Being digital. New York: Knopf Publishers.

Nelson, J. S., Megill, A., & McCloskey, D. N. (1987). Rhetoric of inquiry. In J. S. Nelson, A. Megill, & D. N. McCloskey (Eds.), The rhetoric of the human sciences: Language and argument in scholarship and public affairs (pp. 3-18). Madison, WI: The University of Wisconsin Press.

Ong, W. J. (1982).
Orality and literacy: The technologizing of the word. London: Methuen & Co., Ltd.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

Parker, G., Cole, D., Fetter, B., Hollier, D., Munselle, L., Murphy, T., Raji, O., Smithwick, C., & Wright, L. (1995). The learning connection: AERA's annual meeting as a catalyst. Educational Researcher, 24(1), 28-31.

Perrow, C. (1979). Complex organizations: A critical essay. Glenview, IL: Scott, Foresman, and Co.

Postman, N. (1992). Technopoly: The surrender of culture to technology. New York: Vintage Books.

Powell, W. W. (1988). Institutional effects on organizational structure and performance. In L. G. Zucker (Ed.), Institutional patterns and organizations (pp. 115-136). Cambridge, MA: Ballinger Publishing Co.

Rheingold, H. (1985). Tools for thought: The people and ideas behind the next computer revolution. New York: Simon & Schuster.

Rowan, B., & Miskel, C. G. (in press). Institutional theory and the study of educational organizations.

Russell, W. J. (1994). Achieving diversity in academe: AERA's role. Educational Researcher, 23(9), 26-28.

Scheurman, G., Heeringa, K., Rocklin, T., & Lohman, D. F. (1993). Educational psychology: A view from within the discipline. Educational Psychologist, 28(2), 97-115.

Scott, W. R., & Meyer, J. W. (1991). The organization of societal sectors: Propositions and early evidence. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 143-163). Chicago, IL: The University of Chicago Press.

Shulman, L. S. (1998). Disciplines of inquiry in education: An overview. In R. Jaeger (Ed.), Complementary methods for research in education (pp. 3-17). Washington, DC: American Educational Research Association.

Söderqvist, T., & Silverstein, A. M. (1994). Participation in scientific meetings: A new prosopographical approach to the disciplinary history of science: The case of immunology, 1951-72. Social Studies of Science, 24, 513-548.

Soltis, J. F. (1990). The ethics of qualitative research. In E. W. Eisner & A. Peshkin (Eds.),
Qualitative inquiry in education: The continuing debate. New York: Teachers College Press.

Tierney, W. G. (1993). Academic freedom and the parameters of knowledge. Harvard Educational Review, 63(2), 143-160.

Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.

White, R. (1995). International scholarship and the AERA. Educational Researcher, 24(6), 19-21.

Zucker, L. G. (1991). The role of institutionalization in cultural persistence. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 83-107). Chicago, IL: The University of Chicago Press.

Zuckerman, D. W. (1992). First questions, next steps: Reflections on the work of the ad hoc committee. Educational Researcher, 21(8), 36.

Appendices

Appendix A

General Questions

1. Demographic information (type of institution, size, community).
2. How have you been involved with Tiger (user, developer, and/or advocate within the organization)?
3. Why did you become involved with Tiger? Why did you decide to use it?
4. How have your expectations been met/not been met?

Research Question 1: How do they do things? (Also use the texts from the organization: journals, etc.)

1. Did you know there is an annual meeting handbook? What do you think of it? Is it a help or a hindrance as you try to do your work? What changes, if any, would you make to the handbook? If you don't use it, how do you do your work?
2. What do you think of the call for proposals? Is it a help or a hindrance as you try to do your work? What changes, if any, would you make to the call for proposals?
   chairs: Do you think proposers follow the rules adequately?
   reviewers: Do you think proposers follow the rules adequately?
   proposers: Do you worry if you depart from the rules in the call? What do you worry about?
3. Do you pay much attention to the theme of the meeting?
4. chairs: What do you think of the different presentation formats? For example, compare how you handle roundtables with how you handle paper presentations.
Why do we have these formats? What would you think about just having all paper presentations or all roundtable presentations?
   reviewers: When you review a proposal, do you want to know the format requested? Does this influence your review?
   proposers: Do you care about the type of presentation you get? How much time do you spend deciding which presentation type you want? Why does it or doesn't it matter?
5. chairs: How do you select reviewers? What kinds of instructions do you give them? How do you know the reviewers share your expectations and/or understandings about scholarship?
   reviewers: How do you think chairs select reviewers? What instructions did you receive for reviewing proposals? How do you know other reviewers and the chair share your expectations and/or understandings about scholarship?
6. chairs: How do you deal with differing reviews of the same proposal? What are your criteria for accepting and rejecting proposals? How much do you think your final decision about proposals is based on reviews? Do you read all the proposals?
   reviewers: How do you review proposals? Do you think all reviewers follow the same criteria?
   proposers: How do you reconcile conflicting reviews on your work? Do you think the review process is a fair and accurate way to decide who gets to present their work at the annual meeting?

Research Question 2: How does Tiger affect how they do things?

1. chairs: Compare the experience of processing proposals on paper and electronically. What were some notable similarities and differences?
   reviewers: Have you reviewed before? On paper, electronically, or both? Describe the experience of reviewing on paper, electronically, or both.
   proposers: Have you proposed before? On paper, electronically, or both? Describe the experience of proposing on paper, electronically, or both.
2. chairs: Describe the steps you take to process proposals electronically. Who is responsible for each step?
Describe the steps you take to process proposals on paper. Who is responsible for each step? What are the most important tasks?
3. all: Do you think reviewers do a better job in one or the other medium? What about proposers? Why or why not?
4. all: The system as it stands now puts some restrictions on the formatting of proposals (e.g., no tables or graphics). Do you think this affects reviewers' ratings of proposals? How do you know? Are you concerned about these restrictions? Why or why not?
5. all: There are some built-in assumptions associated with the system (e.g., classifying proposals as received, reviewed, decided). Do these kinds of assumptions help or hinder your work?
6. all: Do you think there should be an electronic submission option for all divisions and SIGs? Do you think the organization should go completely "electronic"? Can you think of any reasons that we should not support electronic submission?

Research Question 3: How are the ways they do things adaptive and maladaptive?

1. Do you think the demographics of people who submitted electronically differ from those of people who submitted on paper? What about reviewers?
2. Do you think the research agendas of people who submitted electronically differ from those of people who submitted on paper? What about reviewers?
3. The organization is committed to increasing the participation of international and novice scholars. Do you see electronic submission as providing any possibilities with these types of commitments (including others I may not have mentioned)? Can you think of any groups who might be disadvantaged or hindered by the introduction of electronic submission?
4. Can you think of any downsides to electronic submission? Have you spoken with people who refused to use electronic submission? What reasons did they give for their refusal? Why do you think the organization wanted to implement electronic submission in the first place?
Research Question 4: What can we extrapolate to the entire organization?

1. What, if anything, would you change about the organization if you could?
2. What are the implications of electronic submission for the organization as a whole?

Appendix B

Each entry below gives the theme, its description, and its information source(s); "(interview)" marks themes tied to interview questions.

aerareasons (interview): Alludes to reasons AERA would have wanted to have e-sub in the first place. (Sources: reviewer, proposer, chair)

meetingtheme (interview): Whether the user paid attention to the theme of the annual meeting. (Sources: reviewer, proposer, chair)

anticipation: What users expected before they used e-sub and what they found once they had used it. (Sources: reviewer, proposer, chair)

appropriateness: Whether the number, topic, and authors of the proposals assigned were appropriate for the reviewer. (Source: reviewer)

aspects (interview): Implication that there is the "thinkwork" and the "administrative work" to writing, reviewing, and/or coordinating proposals. (Sources: reviewer, proposer, chair)

audience: Who is the proposal or review written for? (Sources: reviewer, proposer)

authority: User's perception of who is in charge and what they can do to/for you. (Sources: reviewer, proposer)

bias: Seeing author names or having the potential for other biasing experiences. (Sources: reviewer, proposer)

clarity: How clearly the proposal process was explained, in the CFP, by chairs, by us. (Sources: reviewer, proposer, chair)

comparison: Curiosity about how reviewers handled particular issues. (Source: reviewer)

conflictreviews (interview): How to deal with conflicting reviews of the same proposal/paper. (Sources: reviewer, proposer, chair)

connection: Technical difficulties, including problems connecting to the site. (Sources: reviewer, proposer)

fairprocess (interview): Whether the blind, peer review process is a fair way to select proposals. (Sources: reviewer, proposer, chair)

formatting: Problems related to the appearance of a proposal, including text wrap, loss of text embellishment, loss of table/graphic formatting. (Sources: reviewer, proposer)
howprocess (interview): The steps the chair follows to process proposals. (Source: chair)

howreview (interview): How a reviewer went about reviewing. (Source: reviewer)

howselect (interview): How the chair selected reviewers. (Source: chair)

howto: Allusion to doing things the "right" way within AERA, playing the game properly. (Sources: documentation, proposer, reviewer, chair)

inclusion: Whether e-sub or the proposal process in general has the potential to include or exclude particular groups. (Sources: reviewer, proposer)

innovation: When users comment that e-sub is a significant development for AERA. (Sources: reviewer, proposer)

interests (interview): User's research interests. (Sources: reviewer, proposer, chair)

mix: Allusion to dealing with both paper and electronic proposals at once. (Sources: reviewer, proposer, chair)

nature: Whether the process affected the nature of users' proposals/reviews. (Sources: reviewer, proposer, chair)

papermention: Comparison or allusion to the traditional, paper-based submissions process. (Sources: reviewer, proposer)

partyline: The "official" stance on what is right and what is wrong within AERA. (Sources: documentation, reviewer, proposer, chair)

politics: Having to do things unrelated to your research in order to be able to do your research. (Sources: reviewer, chair)

prestype/hierarchy: Perception of a hierarchy of presentation type. (Sources: proposer, reviewer, chair)

proposalcriteria: Things that make a good proposal. (Sources: proposer, reviewer)

proposingprocess: How the proposer went about proposing. (Source: proposer)

responsibility: Relates to the vagueness of the "true" duties of e-sub people compared to duties the user or others should take care of themselves. (Sources: reviewer, chair)

reviewform: Comments about the proposal review form itself. (Sources: proposer, reviewer)

reviewprocess: How the reviewer reviewed. (Source: reviewer)

revinstructions: What instructions the reviewers were given on how to review (what to look for in a proposal). (Sources: reviewer, chair)

revising: Going back to reflect on or change elements of proposals/reviews. (Sources: reviewer, proposer)

reviewerresponsibility: What people perceive to be the responsibility of the reviewer. (Sources: reviewer, proposer, chair)

rules: Explicit instructions for how to do something. (Sources: reviewer, proposer, chair)

scholarly: Comments about the relative scholarliness of something. (Sources: reviewer, proposer, chair)

selfawareness: People say something that implies "I know I shouldn't think this way, but...". (Sources: reviewer, proposer)

shouldhave: Whether e-sub should be available for all divisions and SIGs. (Sources: reviewer, proposer, chair)

twochairs (interview): Whether there should be two chairs to deal with two types of submissions. (Source: chair)

userproblems: When users make mistakes that have repercussions for us or other users (e.g., when people put in incorrect e-mail addresses). (Sources: reviewer, proposer, chair)

whereproposed: E-mail or web submission. (Source: proposer)

wherereviewed: On- or off-line. (Source: reviewer)

whyelectronic (interview): Why the user decided to use or offer electronic submission. (Sources: reviewer, proposer, chair)