EXAMINING THE RESULTS OF AN INTERVENTION TO INFLUENCE FACTORS OF GROUP DYNAMICS IN VIDEO CONFERENCING LEARNING ENVIRONMENTS

By

William Christopher Cain

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Educational Psychology and Educational Technology – Doctor of Philosophy

2017

ABSTRACT

EXAMINING THE RESULTS OF AN INTERVENTION TO INFLUENCE FACTORS OF GROUP DYNAMICS IN VIDEO CONFERENCING LEARNING ENVIRONMENTS

By

William Christopher Cain

The following study was framed around a simple question: when a group of people is engaged in video conferencing, what sort of things can they do to improve their group dynamics? This is an important question for current and future educational practice because web-based video conferencing has increasingly become an important tool in online and distance education programs. Using computer-based audio and visual equipment, web-based video conferencing allows groups of students and teachers to see and hear each other in real time, providing a channel of communication that is often rich in information. Informal video chat, using applications like Skype, FaceTime, and Google Hangouts, has become a popular means of communication in much the same way as phone calls. Formal group video conferencing, however, is a different communication and interaction format from informal video chat, and many teachers and students are often unfamiliar with the rules and norms associated with it. For example, the best practices literature on video conferencing stresses that factors like framing, lighting, proximity to the camera, and the composition of the background can all affect the way a person is perceived by others. These factors can also affect the overall quality of the video conferencing session, making it easier or harder for people to hold sustained interactions with each other. In short, formal group video conferencing requires people to be mindful of certain things that they may not pay attention to when they are engaged in either face-to-face conversations or informal video chats. When people are not mindful, they can cause serious disruptions to overall group dynamics.

Group dynamics play a role in any setting where people come together for a period of time. Forsyth defines a group as "two or more individuals who are connected by and within social relationships" (Forsyth, 2009, p. 4). Dynamics are the interactions between and among factors in a context or system of elements. Group dynamics therefore refers to the qualities of members' interactions with one another in a group. Factors that influence group dynamics include morale, belongingness, tone, atmosphere, influence, participation, trust, leadership, conflict, competition, cooperation, etc. (Hanson, 2005).

The goal of this study was to design an intervention based on a series of activities that instructors or facilitators could use with students in simulated high-stakes video conferencing learning environments. The results were illuminating, but not in the way the author intended. The intervention at the heart of this study was not implemented as it was originally designed, which affected not only the results but the entire direction of analysis. This is not necessarily a bad thing. This study shows the importance of intervention design and the role that facilitators play in bringing the benefits of an intervention to those who need it.
The different chapters in this dissertation discuss why the author felt this study was important and necessary, how he went about designing the central intervention, what the results suggest about intervention design and implementation, and his recommendations for future research in the area of group dynamics in video conferencing learning environments. It is the author's wish that readers gain a new appreciation for the complexity of research in this area, as well as a newfound or renewed interest in seeing this research continue.

Copyright by
WILLIAM CHRISTOPHER CAIN
2017

ACKNOWLEDGEMENTS

Many wonderful individuals gave their time, energy, guidance, and support to helping me complete this study, and I wish to thank the following for their many contributions.

I wish to thank Dr. John Bell for his insights, guidance, and extraordinary patience in helping me navigate the complexities of my study and all its many twists and turns. As my advisor (and boss), John truly inspired me as both a mentor and a good friend, helping me not only find the value in my work but also the researcher within myself.

I would also like to thank Dr. Punya Mishra, my close mentor and friend during the first half of this study. Punya helped me see what my research was trying to tell the world and always encouraged me to give it a clearer voice. He has been a model for everything I could hope to be as a scholar and a mentor: creative and inspired, playful and deep, rigorous yet flexible, and always demanding of himself and those close to him to contribute more to this world than what we first found.

I owe a debt of thanks to my committee members, Dr. Patrick Dickson, Dr. Bill Donohue, and Dr. Rabindra "Robby" Ratan. Their scholarly insights, advice, and gentle exhortations helped me to see this project through from inception to conclusion. To each of them I offer my utmost respect and gratitude.

I would also like to thank Bing Tong and other members of the MSU Center for Statistical Research for helping me make sense of my qualitative data. I also would like to thank members of the SLATE Research Groups and the CEPSE/COE Design Studio for their sustained encouragement and camaraderie.

Finally, I would like to thank my wife, YoungJin, for her unwavering love and support. She is the truest light of my life and I cannot imagine having made this journey without her.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1
INTRODUCTION AND LITERATURE REVIEW
  Significance of the Study
  Review of the Literature
    Group Dynamics in Technology-mediated Environments
    Social Presence
    Group Cohesion
    Team training
    Fidelity of Implementation
  Study Purpose and Research Questions

CHAPTER 2
METHODS
  Purpose of the Study
  Setting: COM 100
    Course Personnel
    Using Video Conferencing for Recitations
  Implementation
    Pre-study design and planning
    Description of Intervention: Team Training Activities
  Study Design
    Orientation and Training for Implementation
    Implementation Protocol
  Instrumentation
  Participant Sampling and Recruitment
  Data Collection

CHAPTER 3
FINDINGS
  Research Question 1 & 2 Analysis
  Research Question 3 Analysis
  Fidelity of Implementation Analysis
    Adherence
    Frequency
    Duration
    Content
    Coverage
    Overall Adherence Scores
  Moderators of Fidelity of Implementation
    Intervention complexity
    Complexity – Activity Descriptions and Instructions
    Complexity – Task Analysis
    Facilitation strategies
    Participant responsiveness
    Quality of delivery
  Summary: Integrating the Results of Research Questions 1, 2 and 3

CHAPTER 4
DISCUSSION AND IMPLICATIONS
  Discussion of Results
    What FOI Analysis Can Tell Us About Intervention Design
  Implications
  Limitations
  Suggestions for Future Research
  Summary

APPENDICES
  APPENDIX A: Reliability Scores for Psychological Involvement, Behavioral Engagement, and Group Cohesion Across Times 1-4
  APPENDIX B: Means and Standard Deviations for 3 Factors for the Consolidated Data Set
  APPENDIX C: Results of Univariate Analyses of Copresence/Psychological Involvement (CPRe) for Times 1-4
  APPENDIX D: Pairwise Comparisons Across Groups for Copresence/Psychological Involvement (CPRe) for Times 1-4

REFERENCES
LIST OF TABLES

Table 1: Parallels in organizational and education outcomes associated with team training
Table 2-1: Organization of Recitation Groups by UTAs, Weekly Schedule, and Video Conferencing ID
Table 2-2: Team Training Activities, Themes, Interactions, and Theoretical Basis
Table 2-3: Task Protocol for Implementation
Table 2-4: Social Presence and Group Cohesion Survey Items
Table 2-5: Scorecard for Adherence in Videoconferencing Interventions (SAVI)
Table 3-1: MANOVA for Copresence/Psychological Involvement, Behavioral Engagement, and Group Cohesion by Group for Consolidated Data Set
Table 3-2: Univariate ANOVA for Copresence/Psychological Involvement for Consolidated Data Set
Table 3-3: Frequency by Group/UTA and Time (in %)
Table 3-4: Frequency FOI Scores and Averages by Group and Treatment
Table 3-5: Duration (in seconds) by Group/UTA and Time
Table 3-6: Duration FOI Scores and Averages by Group/UTA and Time
Table 3-7: Duration FOI Scores and Averages by Group/UTA and Treatment
Table 3-8: Content FOI Scores and Averages by Group/UTA and Time
Table 3-9: Content FOI Scores and Averages by Group/UTA and Treatment
Table 3-10: Coverage FOI Scores and Averages by Group/UTA and Time
Table 3-11: Coverage FOI Scores and Averages by Group/UTA and Treatment
Table 3-12: Adherence Scores and Averages by Group/UTA and Time
Table 3-13: Adherence Scores and Averages by Group/UTA and Treatment
Table 3-14: FOI Scores for Intervention Complexity – Activity Descriptions and Instructions
Table 3-15: Team Name Pre-Recitation Tasks
Table 3-16: Team Name In-Recitation Tasks
Table 3-17: Background Pre-Recitation Tasks
Table 3-18: Background In-Recitation Tasks
Table 3-19: Emotional Roleplay Pre-Recitation Tasks
Table 3-20: Emotional Roleplay In-Recitation Tasks
Table 3-21: Speak Up! Pre-Recitation Tasks
Table 3-22: Speak Up! In-Recitation Tasks
Table A-1: Improved Reliability Scores and Summary Item Statistics for Copresence/Psychological Involvement Scale
Table A-2: Improved Reliability Scores and Summary Item Statistics for Behavioral Engagement Scale
Table A-3: Reliability Scores and Summary Item Statistics for Group Cohesion Scale
Table B-1: Means and Standard Deviations for 3 Factors for the Consolidated Data Set
Table C-1: Univariate Analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 1
Table C-2: Univariate Analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 2
Table C-3: Univariate Analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 3
Table C-4: Univariate Analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 4
Table D-1: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 1
Table D-2: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 2
Table D-3: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 3
Table D-4: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 4

LIST OF FIGURES

Figure B-1: Means and Standard Deviations for Copresence/Psychological Involvement for Groups for the Consolidated Data Set (Times 1-4 Combined)
Figure B-2: Means and Standard Deviations for Behavioral Engagement for Groups for the Consolidated Data Set (Times 1-4 Combined)
Figure B-3: Means and Standard Deviations for Group Cohesion for Groups for the Consolidated Data Set (Times 1-4 Combined)

CHAPTER 1

INTRODUCTION AND LITERATURE REVIEW

Anyone who has played on a team, worked in a crew, or otherwise been part of a group is familiar with the importance of good group dynamics. Group dynamics play a role in any setting where people come together for a period of time. Groups – i.e., collections of people – form around, or are created for, a number of different purposes and often involve shared interests or objectives.
Dynamics are the interactions between and among factors in a context or system of elements. Group dynamics therefore refers to the qualities of interaction in a group. More precisely, Forsyth (2009) defines group dynamics as "the influential actions, processes, and changes that occur within and between groups; also, the scientific study of those processes" (Forsyth, 2009, p. 2). The benefits of good group dynamics can manifest in a number of different ways: high morale among team members; a sense of belongingness and identification with the group; a style of communication that works; an atmosphere of trust; high levels of participation; responsibility and commitment; and effective, productive cooperation (Hanson, 2005). The reverse is also true. Low morale, general distrust, poor communication, states of conflict or competition, and little or no shared commitment can all be symptoms of poor group dynamics.

Group dynamics can be especially important in learning situations where interactions within the group are key to the learning process. Prichard, Bizo, and Stratford (2006) note that outcomes of group dynamics like morale, belongingness, and trust can impact the quality of learning in educational contexts, and that teachers should understand and develop techniques and strategies for managing those factors.

With the rise of modern information and communications technologies, groups are no longer restricted to meeting face-to-face. Web-based applications like wikis, chat forums, and video chat have created new kinds of shared psychological spaces – mediated by technology – in which people can gather and interact. According to Vygotsky (1978) and others, mediation is the phenomenon by which an activity is shaped by the tools that are being used to accomplish the activity's objective. For example, mediation can be seen in the difference between using pen and paper or a computer application like Word to write an essay. There are significant physical and psychological differences between these two approaches. Consider the process of editing a sentence on a computer versus editing with pen and paper. A computer allows us to make as many revisions as we like, while pen and paper may demand we be more careful the first time. In turn, the affordances of these two technologies can lead to different feelings and affective outcomes. For example, an essay written on a computer may not be able to impart the same feelings of personality, style, and attention to detail as an essay written by hand. Yet the computer-written essay may be more grammatically precise and include images or other features that are not easily done with pen and paper. The point is, the same activity mediated by different tools can feel very different to the person doing the activity, while the activity itself may have different outcomes depending on which tool is used.

Understanding mediation as a real and active phenomenon, we can see how web-based mediating technologies like video conferencing create new contexts and psychological spaces in which groups can meet, interact, and pursue a variety of activities and objectives. As with most technologies, much of what happens in these spaces depends on the technological skills of the participants, and on their willingness to apply and adhere to technology-related rules and norms, among other factors. This is where group dynamics come into play, but it is also where fostering good group dynamics can become complex.
In face-to-face, in-person contexts, many rules and structures that guide and manage group dynamics can be learned and adopted rather quickly. Many of us spend the early parts of our lives becoming accustomed to being around others, learning as we go that different group contexts can require different styles of behavior and interaction. Even when the rules and norms of a particular group are not obvious, face-to-face interactions are usually rich enough in verbal, visual, and spatial cues that they can often be easily communicated and learned.

The same cannot be said when interactions are mediated through technology. In these cases, familiar rules and norms of group dynamics may not be immediately obvious, and it can take time and practice to establish new ones. For example, when a group decides to meet synchronously online (such as in a video conference), it can take time for members to learn how to not talk over one another. Familiar cues like body language and facial expressions that we use to take turns in face-to-face conversations are often absent or diminished in video conferencing, so people become uncertain as to when it is appropriate to speak. This can lead either to long pauses where people are unsure whether to speak or not, or to people speaking over one another, causing confusion and occasional embarrassment. Awkward moments like this may lead members of the group to feel they are not completely "present" with other members of the group, or that the group as a whole is not completely together. This in turn can lead to diminished perceptions of social presence, which is the degree to which people in mediated contexts are perceived as "real" to other group members. Therefore, in the absence of traditional cues for establishing cohesion and social presence among its members, groups that operate in technology-mediated environments must either learn how to adapt familiar ways of interacting to the new context or learn to create new ones.

Now an interesting question emerges: what if groups in a technology-mediated environment are asked to learn something new? Learning can be a challenge for individuals in any context or environment because of the complex cognitive, psychological, social, and emotional factors that may come into play. The same is also true when groups of people collaborate for the purpose of learning new ideas or acquiring new skills. Individual learning and group learning, however, involve somewhat different processes. Issues such as trust, interdependence, communication, belongingness, task awareness, and social norms can all play a role in collaborative learning situations and can have significant influence on learning and collaborative outcomes. That said, research suggests there may be many benefits to collaborative learning in technology-mediated environments, such as flexibility, diversity of perspectives, and enriched forms of pedagogy and content (Lawson, Comber, Gage, & Cullum-Hanshaw, 2010). Yet many teachers and educational institutions are adopting collaborative learning strategies to be used in technology-mediated environments without due consideration for developing and fostering healthy group dynamics (Burbach, Matkin, Gambrell, & Harding, 2010). As Rousseau, Aubé, and Savoie (2006) note in their study on frameworks for analyzing teamwork behaviors, "Indeed, it is not enough to put individuals together and expect that they will know automatically how to work in a team" (Rousseau, Aubé, & Savoie, 2006, p. 541).
While there has been significant research on how to foster group dynamics for collaborative learning in traditional, face-to-face contexts, group dynamics in technology-mediated contexts have not received as much attention. Understanding how group dynamics work, and how they can be pedagogically influenced, is an area worthy of more than observational study; it deserves an applied research agenda with an interventionist methodology at its core.

Significance of the Study

The significance of this study is based on the following. One, prior research has shown that group dynamics are a crucial part of making group activities and interactions effective. Two, although research on group dynamics in technology-mediated environments is sparse, the author's practical experience indicates group dynamics in these environments can be both subtly and significantly different from non-mediated environments. Finally, cognitive and social-emotional factors can make learning in technology-mediated environments a challenging proposition and add an additional layer of complexity. Given the increasing use of technology-mediated learning environments in education, as well as the rising prominence of collaborative learning approaches, this study was designed to examine whether facilitator-led team training activities can positively influence group dynamics in technology-mediated environments.

The remaining sections in this chapter review research and literature that is relevant to the purpose and scope of the study. Chapter 1 concludes with the research questions that will guide the overall purpose and direction of the study. The purpose of these sections is to provide support for the epistemological, theoretical, and conceptual ideas on which this study was based.

Review of the Literature

To better understand the purpose, goal, and method of this study, the following sections review literature and research on four central topics: group dynamics in technology-mediated environments; two factors crucial to group dynamics, social presence and group cohesion; and the concept of team training and how it may be applied to enhance group dynamics in technology-mediated environments.

Group Dynamics in Technology-mediated Environments

Research on group dynamics in technology-mediated environments like video conferencing is still in its infancy, but studies done within the field of computer-supported collaborative learning (CSCL) provide guidance on how that research could be conducted. As a branch of the learning sciences, CSCL centers on the phenomenon of people learning and working together through mediating technologies to achieve commonly understood objectives. According to Stahl, Koschmann, and Suthers (2006), unifying themes of CSCL research include: a) the interaction between people in a group as a key unit of analysis; and b) the mediating role of computer technology in interaction processes. With these themes in mind, the following paragraphs review definitions, theories, conceptual frameworks, and methods used in CSCL research.

Definition and Theory

CSCL as a field of research is situated in the even older tradition of collaborative learning. Collaborative learning has its roots in the industrial research of the 1930s, which sought to understand the processes and behaviors of groups of people engaged in collaborative and cooperative tasks, with an eye towards increasing measures like effective use of time.
Collaborative learning was then gradually adopted in education research in the 1980s as an alternative to research and practices that focused solely on individual cognitive perspectives and processes.

Researchers have proposed a number of definitions for CSCL that hint at the importance of group dynamics in their studies. According to Dillenbourg, Järvelä, and Fischer (2009), collaborative learning describes "a variety of educational practices in which interactions among peers constitute the most important factor in learning, although without excluding other factors such as the learning material and interactions with teachers" (Dillenbourg et al., 2009, p. 3). Implicit in this emphasis on the "interactions among peers" is the notion of group dynamics shaped by pedagogical strategies and decisions for the purpose of collaboration. Likewise, Prichard et al. (2006) define collaborative learning as "an educational approach in which the learning environment is structured so that students work together towards a common learning goal" (Prichard et al., 2006, p. 119). In this definition, group dynamics ("students work[ing] together") are shaped by the rules and norms of learning environments for the purpose of collaborative learning outcomes.

Stahl, Koschmann, and Suthers (2006) describe CSCL as a subset of collaborative learning "concerned with studying how people can learn together with the help of computers" (Stahl et al., 2006, p. 409). They note that the addition of the term "computer-supported" to collaborative learning refers not only to the act of using computers to connect remote students but also to using technologies to shape face-to-face interactions. Strijbos and Fischer (2007) define CSCL as "a multidisciplinary field in the learning sciences encompassing researchers with backgrounds in psychology, educational science, sociology, anthropology, communication science, and computer science" (p. 389). They stress that each discipline has a specific theoretical perspective on the group dynamics aspects of CSCL and specific methods to study it. Stahl et al. (2006) also note that CSCL should be viewed as a "vision" of possible interactions, outcomes, and learning scenarios rather than an "established body" of practices and methodologies (Stahl et al., 2006, p. 409). In this view, CSCL is a line of research and practice appropriate to address some of the challenges of group dynamics inherent in combining information-communication technology and collaborative learning.

In examining theoretical perspectives that address group dynamics in CSCL contexts, Dillenbourg et al. (2009) note two main research strands: sociocultural perspectives and constructivist/socio-constructivist perspectives. Sociocultural perspectives tend to study large-scale instances of CSCL where hundreds and even thousands of people collaborate through various media. Studies focused on smaller scale collaborative contexts trend towards constructivist, socio-constructivist, and socio-cognitive theories of interaction and learning. Note that the present study focused on small-group dynamics in video conferencing contexts and therefore drew from CSCL's constructivist and socio-constructivist theoretical perspectives.

Conceptualizing Group Dynamics in CSCL

Whether it is for sports teams, work groups, or student projects, effective or good group dynamics is a precursor to effective collaboration. Researchers working from constructivist CSCL perspectives conceptualize collaboration in terms of shared meaning.
As Stahl (2006) notes, "Collaboration is primarily conceptualized as a process of shared meaning construction. The meaning-making is not assumed to be an expression of mental representations of the individual participants, but is an interactional achievement" (Stahl, 2006, p. 415). Dillenbourg et al. (2009) note two important factors that play into shared meaning construction: grounding and cycles of divergence and convergence. Clark and Brennan (1991) identify grounding as the verbal and non-verbal communication mechanisms by which two discussants detect and reinforce common understandings and correct misunderstandings. The degree to which these grounding mechanisms come into play depends on the task at hand, known as the grounding criterion. Cycles of divergence and convergence describe the different states of shared understanding between discussants. Interactions in shared learning contexts start with certain levels of divergence and convergence in terms of knowledge, skills, and common agreement on task items and goal objectives. Dillenbourg et al. (2009) note that complete shared understanding is never fully achieved in shared learning interactions; rather, understanding cycles between the two conditions. In other words, items that cause divergence in shared understanding are negotiated through grounding mechanisms to give rise to cycles of convergence, which in turn produce new potential items of divergence. Dillenbourg et al. (2009) go on to identify three main categories of interactions that have been found to facilitate learning: 1) explanation, 2) argumentation/negotiation, and 3) mutual regulation. It should also be noted that the team training activities detailed in Chapter 2 were designed to correspond with these three types of collaborative learning interactions.

As for the technological component of CSCL, researchers have emphasized the importance of design when accounting for collaboration in technology-mediated environments. As Dillenbourg et al. (2009) observe:

"The key consequence [of CSCL] is not at the methodological level but at the design level: the purpose of a CSCL environment is not simply to enable collaboration across a distance but to create conditions in which effective group interactions are expected to occur." (Dillenbourg et al., 2009, p. 6, emphasis by original authors)

Aleven, Stahl, Schworm, Fischer, and Wallace (2003) observe that viable computer-supported collaborative environments are notable in part for their ability to foster rich social interactions among participants. This richness of social interaction in turn depends on both pedagogical and technological structures that support, constrain, and guide interactive processes (Aleven et al., 2003). At the same time, it seems that designing and creating technology that creates "conditions in which effective group interactions are expected to occur" is often perceived as the biggest challenge when it comes to CSCL. Bromme, Hesse, and Spada (2005) note there are a number of significant barriers, biases, and opportunities related to problems of communication and cooperation in CSCL contexts. These include (from Bromme et al., 2005, p. 4):
• Meaning barriers – meaning is constructed mutually between participants; the cooperative establishment of meaning is viewed as the central challenge
• Common ground barriers – the need to identify or create shared context
• Epistemic barriers – deficits of knowledge and skill on the part of the learner or other participants
• Structure barriers – social interactions are structured; missing, mismatched, or inadequate structure in computer-mediated communication represents a potential barrier
• Motivation barriers – computer-mediated environments may affect user motivation for some tasks

Common ground, epistemic, and structural barriers were thought to be particularly salient for the subjects in this study. For example, the students in this study potentially shared a great deal in common outside the course (e.g. similar ages, backgrounds, ethnicities, academic majors, etc.). Nevertheless, reinforcing common ground (e.g. task orientation) was expected to be a new experience for both students and facilitators as they learned to interact with one another through video conferencing. Likewise, epistemic barriers were expected to be challenging, in that students would have to balance two sets of knowledge during their collaborations: their knowledge of the course content and their knowledge of, and skills using, the facilitating technology (i.e., video conferencing). Finally, students may find the structural barriers of technology-mediated interactions challenging. Bromme et al. (2005) note that structural cues for effective face-to-face interactions are often explicit; those same cues may be absent or less obvious in technology-mediated contexts, leaving students unsure as to the appropriateness of their actions and behaviors. Bromme et al. observe that:

"…the technical side (hardware and software) is neither the sole cause of - nor the only solution to - the problems which occur with computer-mediated communication and cooperation. Many of these barriers are rather challenges which are present in all cooperation and communication scenarios. Some of these barriers are aggravated in computer-mediated settings, some are easier to overcome" (Bromme et al., 2005, p. 2).

While these three barriers – common ground, epistemic, and structural – may be common to many communication and collaboration scenarios, the author felt they were particularly relevant to the goal of influencing group dynamics in video conferencing situations. A central point of this study was to design a set of human-centric (as opposed to techno-centric) interventions that helped mitigate the common ground, epistemic, and structural barriers to effective group dynamics in video conferencing.

Summation of CSCL Research

CSCL research looks at the learning interactions of groups of people in technology-mediated contexts, a perspective the author considered useful for a study on influencing group dynamics in video conferencing. Constructivist and social-constructivist perspectives are the primary theoretical perspectives that guide CSCL research; the constructivist perspective primarily guides small-group CSCL research and was used in the present study. CSCL researchers have also identified certain interactions that are beneficial to collaborative learning in technology-mediated contexts – these are explanation, argument/negotiation, and mutual regulation. Finally, CSCL studies have identified barriers and biases that complicate interactions (and therefore group dynamics) in technology-mediated environments. In designing the interventions that were central to this study, the author chose to focus on three of these barriers
In designing the interventions that were central to this study, the author chose to focus on three of these barriers 11 and biases: establishing common ground among participants, differences in epistemic foundations (skills and knowledge), and missing social structural cues. Constructs for Group Dynamics in Technology-Mediated Contexts The above section provides grounding for how to design an intervention-based study of group dynamics in a technology-mediated context such as video conferencing. The following two sections will focus on two important constructs related to group dynamics in technologymediated contexts that will serve as dependent variables: social presence and cohesion. Social presence is a construct central to research in technology-mediated contexts because it is considered vital to establishing relational and emotional connections in distance interactions. Cohesion is similarly important to researchers as a measure of effective group dynamics and task performance. The purpose of these two sections is to provide evidence that both social presence and cohesion are appropriate dependent variables in relation to the independent variable of team training in the context of learning in technology-mediated learning and instruction. Social Presence Definition and Theory Social presence is a concept that has its basis in telecommunications literature. In their analysis of the social-psychological dimensions of mediated communication, Short, Williams, and Christie (1976) first defined social presence as “the degree of salience of the other person in the interaction and the consequent salience of the interpersonal relationships” (Short et al., p. 65). Taking social cues in communication as their point of analysis, they viewed social presence as a quality inherent in communications media but one that varies among different types of media. Users in turn are aware to some degree of the capacity for social presence in a given medium and tend to moderate their behaviors accordingly. For example, text chats and videoconferences both 12 operate in real-time but have different capacities for transmitting information about facial expressions and non-verbal cues. According to the social cues perspective, these different capacities contribute to the degree of social presence experienced in either medium. In this way, social presence “affects the nature of the interaction and it interacts with the purpose of the interaction to influence the medium chosen by the individual who wishes to communicate” (Short et al., p. 65). Short et al. (1976) identified two factors as integral to social presence: intimacy (Argyle & Dean, 1965) and immediacy (Wiener & Mehrabian, 1968). In their study of eye contact, distance, and affiliation, Argyle and Dean asserted that intimacy in a communication medium is influenced by the factors of physical distance, eye contact, smiling, and personal topics of conversation. Wiener and Mehrabian (1968) conceptualized immediacy as a measure of psychological distance that communicators keep between one another. Immediacy and nonimmediacy can be conveyed verbally or non-verbally through cues such as physical proximity, formality of speech, and facial expression. Advances in computer mediated communication have caused researchers to rethink the cues perspective of social presence first put forth by Short, et al. 
Examining the concepts of "social presence" and "interactivity", Rafaeli (1988, 1990) observes that social presence is a subjective measure of the presence of others as Short et al. defined it in 1976, while "interactivity" is the actual quality of a communication sequence or context. Interactivity is a quality (potential) that may be realized by some, or remain an unfulfilled option. When it is realized, and when participants notice it, there is "social presence".

Charlotte Gunawardena, a researcher in the area of social presence and computer-mediated conferencing, argued that "it is important to examine whether the actual characteristics of the media are the causal determinants of communication differences or whether users' perceptions of media alter their behavior" (Gunawardena, 1995, p. 164). Both Gunawardena (1995) and Walther (1992) note that the behaviors identified by Short et al. are in fact subject to cultural norms associated with communication. This leads to the notion that social presence can be "encultured" among teleconference participants, a position different from the view that social presence is largely an attribute of the communication medium. Moreover, their research demonstrated that social presence is both a factor of the medium and of the communicators and their presence in a sequence of interactions.

Scholars have linked the importance of social presence in CSCL to its role in social-constructivist principles of learning and development. Salomon (1998) notes that CSCL approaches, such as Scardamalia and Bereiter's CSILE (1996) and Anchored Instruction from the Cognition and Technology Group at Vanderbilt (1996), are "based on constructivist psychological and philosophical principles, team-based, often interdisciplinary, oriented toward the solution of complex, real-life problems, and utilizing a variety of technological means" (Salomon, 1998, p. 1). Likewise, Jonassen (1994) observes that social-constructivist epistemology grounds thinking in perceptions of physical and social experiences. The mind forms mental models from these perceptions and uses them to explain, predict, or infer phenomena in the real world. Individual models are then shared, verified, and modified through a process of social negotiation. Jonassen (1994) also discusses the implications of social-constructivism for instructional design and observes that purposeful knowledge construction may be facilitated by learning environments which: provide multiple representations of reality; focus on knowledge construction and not reproduction; provide real-world, case-based learning environments; foster reflective practice; enable context- and content-dependent knowledge construction; and support collaborative construction of knowledge through social negotiation.

Gunawardena (1995) argues that "Computer conferences can be designed to promote the construction of knowledge that is meaningful to the learner" (p. 164). Seen from this perspective, CSCL environments such as video conferencing may promote collaborative learning that involves the active construction of knowledge through social negotiation, but only if participants can relate to one another and share both a sense of community and a common goal. It is with this in mind that the development of social presence becomes key to fostering group dynamics for effective learning outcomes.

Conceptualization

There are a number of conceptual models associated with social presence.
Garrison, Anderson, and Archer (2000) included social presence in a model of community of inquiry developed for use as a conceptual framework in computer-mediated communication in higher education. The model identified three core elements of an educational experience: social presence and two other concepts, cognitive presence and teaching presence. Cognitive presence, a vital element in critical thinking, refers to the extent to which participants in a community of inquiry are able to construct meaning through sustained communication. Teaching presence refers to designing and managing learning, providing subject matter expertise, and facilitating active learning. In the model, social presence is defined as "the ability of participants in the community of inquiry to project their personal characteristics into the community, thereby presenting themselves to others as 'real people'" (Garrison et al., 2000, p. 89). Three categories of social presence are identified in the model: expression of emotion, open communication, and group cohesion. Emotional expression includes humor and self-disclosure. Open communication consists of reciprocal and respectful exchanges; examples of open communication are mutual awareness and recognition of each other's contributions. Group cohesion refers to activities that foster a sense of group commitment and a sense of belonging.

Garrison et al. (2000) argue that cognitive presence itself is not enough to sustain a community of learners; individuals must feel comfortable relating to each other. Therefore, social presence is critical to cognitive presence and to establishing a critical community of learners. In their words, "…social presence marks a qualitative difference between a collaborative community of inquiry and a simple process of downloading information" (Garrison et al., 2000, p. 96). The third element of the model, teaching presence, consists of the design of the educational experience and facilitation. Teaching presence is "a means to an end – to support and enhance social and cognitive presence for the purpose of realizing educational outcomes" (Garrison et al., 2000, p. 90). While the teaching role is pivotal in building a community of learners, when the Community of Inquiry model (e.g. cognitive presence, social presence, and teaching presence) is applied to a computer conferencing environment, social presence is regarded as a function of both learners and teachers (Rourke, Anderson, Garrison, & Archer, 2007). Rourke et al. (2007) postulated that while fairly high levels of social presence are necessary to support the development of deep and meaningful online learning, there is an optimal level above which too much social presence may be detrimental to learning.

Empirical Studies

In a review of social presence research, Cobb (2009) presents a number of studies that describe the impact of social presence in technology-mediated contexts. One of the earliest is Gunawardena's (1995) report on two studies of student perceptions of computer-mediated communication (CMC) in computer conferences in which graduate students discussed distance education issues and research related to distance education. Findings from both studies indicated that subjects characterized CMC as a highly interactive, active, stimulating, and social medium. It should be noted that the role of the moderator or facilitator was identified as critical to creating a sense of online community and enhancing social presence.
Relevant to the present study is Gunawardena's assertion that "…it is [pedagogical] techniques, rather than the medium, that will ultimately impact students' perception of interaction and social presence" (p. 165). In a later study, Gunawardena and Zittle (1997) differentiate social presence and interaction, indicating that interactivity is a potential quality of communication that may or may not be realized by the individual. When it is realized and noticed by participants, there is "social presence." Tu and McIsaac (2002) also supported the reciprocal relation of interaction and social presence, noting that in order to increase the level of online interaction, the degree of social presence must also be increased.

Another notable study is Richardson and Swan's (2003) examination of social presence among undergraduate and graduate students participating in online courses during a semester. A correlational design was used to examine the relationship of social presence, perceived learning, and satisfaction with the instructor. The authors used a modified version of Gunawardena and Zittle's (1997) Social Presence Scale, along with questions about students' overall perception of the course and general demographic items. In this study, Richardson and Swan report that students' perception of social presence served as a predictor of perceived learning.

Summation of Social Presence Research

The above section described the concept of social presence and highlighted its importance to interactivity and group dynamics in technology-mediated environments. It also presented arguments that facilitators in these environments may influence social presence in aid of effective group dynamics and discussed methods for assessment. The following section describes another social construct related to both social presence and group dynamics that was important to this study: group cohesion.

Group Cohesion

Definition and Theory

The concept of group cohesion has been actively studied since the mid-twentieth century, particularly in connection with small-group dynamics research (Drescher, Burlingame, & Fuhriman, 2012). It has been commonly referred to as a group's "sticking-togetherness" or, more formally, as "the resistance of the group to disruptive forces" (Gross & Martin, 1952, p. 535). Carron (1982, p. 124) defined cohesion as "the tendency for a group to stick together and remain united in the pursuit of its goals and objectives." In the 1930s and 1940s, Kurt Lewin and other researchers working at MIT laid the foundation for the concept of cohesion as an essential property of groups, without which they could not exist. Festinger, Schachter, and Back (1950) later proposed an early modern definition of group cohesion in their study of human factors affecting friendships and community life in dorms at MIT. Group cohesion in that study was defined as "the total field of forces which act on members to remain in the group" (Festinger et al., 1950, p. 164). Other definitions have focused on group cohesion as a multidimensional construct (Dion, 2000) and included such factors as the direction of cohesion (e.g. vertical (superior-subordinate) and horizontal (peer-to-peer)) and the functions of cohesion (e.g. task, goal, etc.).

Conceptualization

A central debate within group cohesion research has focused on the use of either objective or subjective measures. This distinction can have important ramifications for conceptualizing group cohesion as either the sum of its parts (e.g. the number of friendships, etc.)
or something greater (e.g. productivity). For example, recognizing it was difficult to precisely name and measure "the total field of forces" that might act on a group, Festinger moved away from his original definition and reconceptualized cohesion as the "resultant of all forces" that influence members to stay in a group. This move allowed for research that focused more on the effects of cohesion and away from the factors that cause cohesion.

Since Festinger's reconceptualization, researchers have taken at least two approaches to conceptualizing group cohesion (Bollen & Hoyle, 1990). The first involves forming some composite of each group member's judgment of his or her closeness to each of the other group members. For example, Gross (1954) looked at the average of each member's self-reported closeness to all other group members, while Hall (1995) summed the forces perceived by individuals that act against leaving a group. Bollen and Hoyle (1990) note that other researchers used sociometric choice measures to construct indexing instruments for objective measures of group cohesion, including an index of morale (Zeleny, 1939), an index of cohesiveness (Martin, Darley & Gross, 1952), and an index of morale cohesiveness (Fessenden, 1953). Dimock (1986) devised an index formed by dividing the actual number of mutual friendships in the group by the number of possible mutual friendships in the group. These researchers proposed that measuring "field of forces" factors like morale and intragroup friendships are effective ways of objectively measuring group cohesion.
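To make the arithmetic behind Dimock's (1986) index concrete, the following is a minimal formalization, assuming that "possible mutual friendships" means the number of unordered pairs of group members, n(n-1)/2; the symbols C, m, and n are introduced here for illustration and are not Dimock's own notation:

\[
C \;=\; \frac{m}{\binom{n}{2}} \;=\; \frac{2m}{n(n-1)}, \qquad 0 \le C \le 1,
\]

where \(n\) is the number of group members and \(m\) is the observed number of mutual friendships. Under this reading, a five-member group has \(\binom{5}{2} = 10\) possible mutual friendships, so six observed mutual friendships would yield an index of \(6/10 = 0.6\).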
For their study of cohesion in small work group decision-making processes, Chin et al. (1999) adapted the PCS instrument in ways that made it a better fit for small-group use (e.g., substituting small-unit words like “team” or “group” for words like “community”). Chin et al. performed factor analysis and fit assessment on the modified PCS instrument and found that their modifications did not adversely impact item loadings and goodness-of-fit indices for the two constructs of belongingness and morale (Chin et al., 1999, p. 757). This provided evidence that a) a reliable instrument for measuring factors of group cohesion existed, and b) the instrument could be modified successfully to focus on measuring cohesion in small-group dynamics in technology-mediated environments. Chin et al.’s study is also notable for its use of intervention-based methods that focus on group cohesion in team situations. As a means of influencing group dynamics in face-to-face settings, team training was considered a natural fit for use in the present study. The following section discusses how team training is defined and conceptualized as a means of affecting and improving group dynamics.

Team training

Background and definitions

Corporations, government agencies, and other collaboration-oriented institutions have long created programs and training regimens aimed at helping groups achieve more desired outcomes (Noe, 2002). One of the more common types of regimen is team training: activities or interventions used to develop beneficial group dynamics and facilitate team effectiveness. Buller (1986) writes that, “The primary purpose of team [training] is to improve the effectiveness of work teams within organizations” (p. 147). Team development researchers seek to understand how these programs and interventions are most effective. Broadly speaking, team training interventions fall into two categories: teambuilding and team-skills training. Both types of intervention generally aim to enhance group effectiveness by improving group members’ skills in areas such as goal setting, technical and performance competencies, problem solving, interpersonal relations, and role clarification (Klein, DiazGranados, Salas, Le, Burke, Lyons, & Goodwin, 2009). Although they are ultimately designed to improve team functioning and effectiveness, teambuilding and team-skills training differ in important ways (Tannenbaum, Beard, & Salas, 1992). Team-skills training focuses on gaining specific competencies. It is typically situated in context, includes practice elements, and is generally formal and systematic. Teambuilding, on the other hand, does not target skill-based competencies, is not systematic in nature, and is typically done in settings that do not approximate the actual performance environment. Teambuilding works by assisting individuals and groups to examine, diagnose, and act upon their behavior and interpersonal relationships (Schein, 1969, 1999). For the purposes of this study, the author elected to base the core intervention on Klein et al.’s definition of teambuilding as “a class of formal and informal team-level interventions that focus on improving social relations and clarifying roles, as well as solving task and interpersonal problems that affect team functioning” (Klein et al., 2009, p. 183). The author notes, however, that group interactions in this study were mediated in a way that was potentially unfamiliar to some participants.
To fit the context, therefore, the chosen definition of team training in this study reads: “A class of formal and informal group-level interventions that focus on imparting skills for effective communication and collaboration, building and improving social relations, and clarifying roles, tasks, and goals that affect group functioning.”

Theoretical foundations

Communications and organizational behavior theories have made significant contributions to the theoretical foundations of team training. Two theories stand out in particular: structuration theory and symbolic convergence theory. Structuration theory holds that group members interact according to particular rules, and that those group members also produce those rules through their interactions. This suggests that group members can negotiate group structures, yet at the same time, their interactions are constrained by those structures. Structuration theory distinguishes between systems, such as small groups, and structures, that is, the practices, rules, norms, and other resources the system uses to function and sustain itself (Poole, Seibold, & McPhee, 1996). When applied to small groups, structuration theory approaches small groups as systems that both produce structures and are produced by structures. This means that group members follow particular rules in their interactions with the expectation of achieving desired outcomes. Those outcomes in turn influence the group’s future interactions.

Symbolic convergence theory studies the sense-making function of communication. “Symbolic” refers to verbal and nonverbal messages, and “convergence” refers to shared understanding and meaning. In small groups, members develop private code words and signals that only those inside the group understand. When groups achieve symbolic convergence, they have a sense of community based on common experiences and understandings. Central to symbolic convergence is the idea that group members share fantasies that serve as critical communication episodes, forming the basis for members’ sense making (Bormann, 1996). Fantasy themes are stories or narratives that help group members interpret group interactions and their surrounding environment. Fantasy themes develop when group members actively engage in dramatizing, elaborating on, and modifying a story. Sharing fantasies helps group members create a social reality that indicates who is part of the group and who is not. Sharing fantasy themes increases group cohesiveness as members develop a common interpretation of their experiences. In this way, the story becomes publicly shared within the group as well as privately held by each group member. Fantasy themes are related to small-group culture in that the stories reveal the group’s identity and underlying values.

Conceptual frameworks

Team training as a standard or formulated set of activities with proven outcomes has not been well defined conceptually (Buller, 1986; Prichard, 2006). Nevertheless, team training models have been guided in part by conceptual frameworks relating to the development of interpersonal relationships and task activities in small groups. Among these is Tuckman’s model of developmental sequences for small groups (1965; 1977). Based on a review of existing literature, Tuckman proposed a group life cycle model of small-group development that accounted for structural acquisition and interpersonal dynamics.
The stages of this model are:
• Forming: orienting to tasks, rules, and interpersonal and group dynamics
• Storming: identifying misconceptions and interpersonal conflicts
• Norming: developing group cohesion and shared mental models; acceptance of personal idiosyncrasies; discovering effective collaboration strategies
• Performing: developing “functional role-relatedness”; structures support task performance; roles are flexible and functional; and
• Adjourning: disbanding or reorienting the group as a functioning entity

Beer (1976) presented a conceptual scheme describing several models of team training, including: 1) the goal setting model, 2) the interpersonal model, and 3) the role model. In the goal setting model, the teambuilding effort is aimed at establishing group goals and action plans to accomplish those goals. The interpersonal model focuses on improving interpersonal relations in the group, assuming that an interpersonally competent group is more effective than one that is not. The role model approach consists of activities and communication strategies to clarify team members’ roles. Both Beer (1976) and Buller (1986) note that each model carries methodological biases towards certain variables and problems within team contexts, and that these biases can often manifest during implementation. For example, a consultant or instructor who is knowledgeable about or more familiar with interpersonal strategies will tend to identify those types of problems in a team-training scenario. As a result of the instructor’s bias, efforts at team training will tend to focus more on improving interpersonal relationships, even at the expense of other types of training. Buller also notes that the models in Beer’s classification scheme “rarely exist in pure form; teambuilding programs usually involve elements from each of the models” (p. 149). This in turn can lead to uncertainty as to which aspects of a particular intervention are most effective for any given team. While taking this uncertainty into account, the present study made use of both teambuilding and team-skills training strategies in order to examine their effects on group dynamics by way of social presence and cohesion.

Empirical evidence

According to Sanborn and Huszczo (2007), the effectiveness of team training differs substantially from one organization to another. The most effective team training efforts occur when members of the team are highly interdependent in performing the task, highly knowledgeable and experienced in the task to be accomplished, and when organizational leadership actively establishes and supports the team. Sanborn and Huszczo note that effective team training must also incorporate an awareness of the ultimate objective of the task, and work to develop the goals, roles, and procedures needed to achieve it successfully. In addition, team training must often strike a balance between task-oriented and relationship-oriented strategies. To ensure effectiveness, team training should work toward establishing policies and procedures and engage with the environment, including relationship support systems. Sanborn and Huszczo (2007) caution that some elements of team training as an intervention are designed to work when the members of the team are actually involved in solving the problem and when they are already intact as a team (i.e., they have worked with each other before).
They also note that members of the team must have the willingness and ability to speak up about their needs. While evidence on the efficacy of team development interventions overall is mixed (Salas, Rozell, Mullen, & Driskell, 1999; Woodman & Sherwood, 1980), a more consistent finding is the effect of team training on affective measures and outcomes. In an early review of teambuilding research, Woodman and Sherwood (1980) found evidence of a range of post-intervention attitudinal improvements following training in almost all of the 30 studies they included in their review. These attitudinal improvements included variables such as organizational climate, task satisfaction, morale, and group cohesion.

Educational research on teambuilding and collaborative learning echoes these gains in outcomes. Research shows that college students across majors frequently respond favorably to group projects and suggests that team assignments are useful in team-skills acquisition (Deeter-Schmelz & Ramsey, 1998; Lancellotti & Boyd, 2008; McCorkle et al., 1999; McKinney & Graham-Buxton, 1993). In addition, Burbach, Matkin, Gambrell, and Harding (2010) reviewed a number of studies showing that team approaches to learning (as compared to lecture or individualist pedagogies) result in higher student achievement, greater use of higher-level reasoning and critical thinking skills, more positive attitudes toward the subject matter, higher levels of class satisfaction, better interpersonal and communication skills, and increased motivation to learn. Table 1 shows parallels in organizational and educational research on outcomes associated with the use of teams and team training.

Table 1: Parallels in organizational and educational outcomes associated with team training

Organizational outcomes | Educational outcomes
Improved climate | More positive attitudes toward the subject matter; higher levels of class satisfaction
Satisfaction | Higher levels of class satisfaction; increased motivation
Morale | More positive attitudes toward the subject matter; higher levels of class satisfaction
Cohesion | Better interpersonal and communication skills
All | Greater use of higher-level reasoning and critical thinking skills

Finally, there is research to suggest instructors can improve students’ ability to work together in teams successfully (Kapp, 2009). Johnson, Johnson, and Smith (1998) observed that an instructor’s role in structuring teams includes but is not limited to: specifying the objectives for the lesson; making instructional decisions (e.g., group size, method of assigning students to teams); explaining the task and the benefits of positive interdependence; monitoring students’ learning and intervening within the groups to provide task assistance or to increase students’ teamwork skills; and evaluating students’ learning and helping students process how well their group functioned. Students also expect instructors to be actively involved with many aspects of team functioning, and instructors’ active involvement is associated with positive student outcomes such as achieving learning goals and student satisfaction with collaboration (Lizzio & Wilson, 2005; Oakley, Hanna, Kuzmyn, & Felder, 2007). As it stands, there are common threads to be found in team training and collaborative learning research. In a review of both fields, Prichard et al. (2006) identified 5 main elements that are common across both team training and collaborative learning.
They are: the existence of a group goal; member interdependency; coordination of members’ activities; the structuring of group/task roles; and a focus on interactive processes. Prichard et al. (2006) also note that while collaborative learning approaches and methods are encouraged and often employed in educational settings, little attention is paid to training students and teachers on how to organize these activities in ways that are most effective. As such, the team training in this study can be viewed as an explicit strategy for giving students structural and relational supports for effective group dynamics and collaboration in a technology-mediated environment.

Summation of team training

The above section presented this study’s working definition of team training: “A class of formal and informal group-level interventions that focus on imparting skills for effective communication and collaboration, building and improving social relations, and clarifying roles, tasks, and goals that affect group functioning” (adapted from Klein et al., 2009). It also identified structuration and symbolic convergence as key theoretical perspectives underlying many team-training approaches. Conceptual models for small-group development were also discussed, in particular Tuckman’s model of sequenced development. In turn, the above section presented evidence of how various models for actual team training interventions can be developed to correspond with sequences in small-group development. This section also presented a review of empirical studies aimed at determining the effectiveness of team training interventions; this review suggests that the outcomes of team training interventions may vary according to context. Finally, the similarities between the tenets of team training and collaborative learning were discussed, as was the seeming need for team training approaches to influencing group dynamics in technology-mediated environments. Overall, the literature supports a central premise of this study: groups in technology-mediated environments can benefit from team training approaches that utilize structural and interpersonal strategies to improve their overall group dynamics.

The preceding sections established a basis for studying group dynamics in technology-mediated environments using perspectives and methods found in computer-supported collaborative learning research. These sections also identified two constructs – social presence and group cohesion – that could be assessed as dependent variables of group dynamics in an intervention-based research study. The final section discussed the concept of team training as a viable form of intervention for influencing group dynamics in technology-mediated environments and its role as the primary independent variable in the present study.

It should be noted that initial analysis of the data revealed that many of the team training activities were not conducted by the UTAs in this study. This put the validity of the data collected in the survey in serious doubt. Moreover, in order to make sense of the existing data (e.g., videos of Recitations when team training activities were conducted), a sub-analysis based on another research perspective was needed. The author evaluated the available choices for a situation like this and decided that an analysis of the study’s fidelity of implementation would be the best course of action.
The following is a review of the definition, theory, and conceptualizations of fidelity of implementation, as well as empirical research that was useful in conducting the sub-analysis.

Fidelity of Implementation

Definition and Theory

Fidelity of Implementation (FOI) has been defined as the determination of how closely a program is implemented according to its original design, or as intended (Carroll, Paterson, Wood, Booth, Rick, & Balain, 2007; Durlak & DuPre, 2008; Dusenbury, Brannigan, Falco, & Hansen, 2003; Gearing, El-Bassel, Ghesquiere, Baldwin, Gillies, & Ngeow, 2011; Ruiz-Primo, 2006). FOI is a field of research that examines the factors that affect the efficacy of interventions such as medical regimens, community and school-based programs, and contexts or situations that involve the transfer of social technologies (i.e., rules, guidelines, manuals, regulations, laws, etc.). FOI acts as a potential moderator of the relationship between interventions and their intended outcomes (Carroll et al., 2007) and can be an important yet overlooked source of variation in intervention-based research. Reviews of implementation research have shown that the fidelity with which an intervention is implemented affects how well it succeeds (Carroll et al., 2007; Fisher, Smith, Finney, & Pinder, 2014). FOI research and methods have been a part of medical studies for many years (Dane & Schneider, 1998; Ruiz-Primo, 2006) but have only become prominent in educational contexts since the 1990s (O'Donnell, 2008). O'Donnell notes that the US Department of Education (through the What Works Clearinghouse) specifies that “research designs should permit the identification and assessment of factors affecting fidelity of implementation, including considering its effects as a mediating or moderating variable” (O’Donnell, 2008, p. 35).

Conceptualization

There is a measure of consensus among researchers in conceptualizing factors considered critical for achieving effective fidelity of implementation (Dane & Schneider, 1998; Dusenbury et al., 2003; Gearing et al., 2011). These factors include: a) adherence: whether the necessary elements of an intervention are being delivered as designed; b) duration: the length of time over which the intervention is delivered; c) frequency: the number of sessions implemented; d) quality of delivery: the manner in which a facilitator delivers an intervention using prescribed techniques, processes, or methods; e) participant responsiveness: the extent to which participants are engaged by and involved in the activities and content of the program; and f) program differentiation: whether critical features that distinguish the program from the comparison condition are present or absent during implementation. The weight given to these factors in different studies may differ significantly according to context, research perspectives, methods, etc. Carroll et al. (2007) organize these factors in a way that treats adherence as a top-level factor of fidelity with 4 supporting sub-factors (frequency, duration, content (or “active ingredients”), and coverage). Carroll et al. also identify moderators that potentially influence adherence: a) intervention complexity, b) facilitation strategies, c) quality of delivery, and d) participant responsiveness. Intervention complexity relates to elements and factors (such as the number of tasks that must be completed or followed) that make an intervention easy or difficult to implement. Facilitation strategies relate to activities and resources (e.g., manuals, guidelines, etc.)
that support implementation efforts. Quality of delivery concerns whether an intervention is delivered in a manner appropriate to achieving what was intended. After evaluating available frameworks for assessing fidelity of implementation, the author chose Carroll et al.’s conceptualization as the guiding framework for the FOI analysis in this study, based on its completeness, clarity, and fit with the study’s research context.

Study Purpose and Research Questions

Based on the above literature review, this study aimed to test the general hypothesis that facilitator-led team training activities used to influence group dynamics in physical classroom settings may also be used in video conferencing settings. The facilitator-led activities were modeled as team training exercises and based on the theoretical principles of structuration and symbolic convergence. Structuration is the process by which rules and structures are socially constructed for the benefit of guiding and fostering group interactions and dynamics. Symbolic convergence is the process of sharing privileged or insider information for the purpose of mitigating uncertainty in individual and group identity profiles. Both structuration and symbolic convergence play important roles in team development strategies. In turn, these team development strategies have proven effective in promoting factors of group dynamics, such as cohesion, in non-mediated environments. What is less known, however, is how effective these strategies will be in developing group cohesion and social presence during collaborative learning activities in technology-mediated environments. The specific research questions were as follows:

R1: Does the use of facilitator-led team training affect social presence in small-group video conferencing?
• H1: In a video conferencing situation, groups that have facilitator-led team training will have higher social presence than groups that do not.
R2: Does the use of facilitator-led team training affect group cohesion in small-group video conferencing?
• H2: In a video conferencing situation, groups that have a facilitator using team training will have higher cohesion than groups that do not.
R3: What factors contributed to the results found in Research Questions 1 and 2?

These questions formed the basis of the present study. Subsequent chapters will detail the study’s research design and methods (Chapter 2), and report on findings from quantitative and qualitative analysis (Chapter 3). The final chapter (Chapter 4) contains a discussion of the findings, implications, and limitations of the study, as well as suggestions for future research.

CHAPTER 2

METHODS

The following chapter presents the purpose and research design of the study. In short, this study employed both quantitative and qualitative methods to answer its central research questions. It also provides a detailed description of the intervention itself, including its central setting, its theoretical foundations and connections to established research, and its intended outcomes. This chapter also describes the nature and rationale of the study’s mixed-methods design, the sample population, the intended data sources, the instruments and methods that were used for collection and analysis, and the methods used to determine the study’s validity and reliability. It concludes with a discussion of possible limitations to the study in terms of methodology and implementation.
Purpose of the Study

The purpose of this study was to advance research on telepresence in educational settings by a) designing a research-based intervention aimed at improving specific factors of group dynamics in video conferencing (i.e., social presence and group cohesion); and b) examining the effects of that intervention through quantitative and qualitative analyses. Chapter 1 argued for the need for this study, reviewing the concept of telepresence and the increased use of video telepresence in collaborative learning and work contexts, including education, business, and medicine (Henriksen, Mishra, Greenhow, Cain, & Roseth, 2014; Lawson et al., 2010; Roseth, Akcaoglu, & Zellner, 2013). Chapter 1 also reviewed literature on the importance of positive group dynamics for learning and work situations, as well as literature on two factors of group dynamics – social presence and group cohesion. Chapter 1 also reviewed the concepts and effectiveness of team training activities aimed at improving group dynamics, as well as literature that suggested ways team training might be applied to telepresence. Finally, the review indicated that very little research had been conducted to date on efforts to improve group dynamics in video telepresence situations through activities such as team training.

Setting: COM 100

The setting for this study was an undergraduate course on interpersonal communications and public speaking (hereafter referred to as “COM 100”) at a large Midwestern university in the United States (“the University”). COM 100 took place over 15 weeks during the University’s Fall 2015 semester.

Course Personnel

Course personnel in COM 100 consisted of 1 primary instructor, 5 graduate teaching assistants (GTAs), 23 undergraduate teaching assistants (UTAs), and 1 UTA Coordinator. The primary instructor was a full professor in the University’s College of Communication Arts & Sciences (“the College”). The 5 GTAs were 1st-year doctoral students in the College who had no prior experience with COM 100. The 23 UTAs were undergraduate students who had previously taken COM 100 and were therefore familiar with the course. Seven of these undergraduate students had prior experience serving as UTAs in the course. The UTA Coordinator was responsible for managing the UTAs and acting as their liaison to the primary instructor and the GTAs. A total of 598 students were enrolled in COM 100 at the time of the study; they were mostly, but not exclusively, freshmen in their first semester at the University.

Course Design

Students in COM 100 were organized into 3 different groupings for 3 different types of course activities: Section, Speech, and Lecture. First, students were organized by the Section of the course they enrolled in (e.g., COM 100-001, Wednesdays, 10:20-11:40 am). Second, students were organized into Speech groups, 4 Speech groups per Section. Last, there were 2 Lecture groups divided evenly among all students in the course. COM 100’s course design required students to participate in two periods of instruction each week: 1 Lecture and either a Section Recitation or a Speech Recitation. The primary instructor was responsible for leading the 80-minute Lectures twice a week. The Lectures were based on PowerPoint presentations given by the primary instructor, along with readings, assignments, and quizzes contained in the assigned course textbook.
The primary instructor gave the same lecture twice a week, on Mondays to one half of the enrolled students and then on Wednesdays to the other half. The GTAs and the UTAs were responsible for leading the weekly 80-minute Recitations. COM 100 featured 2 different types of Recitations: Section Recitations and Speech Recitations. During Section Recitations, the students in each Section listened to presentations from the GTAs that expanded on concepts covered during the Lectures. During Speech Recitations (hereafter referred to as “Recitations”), the Sections were each divided into 4 student groups in order to practice public speaking by making short speeches one at a time in front of their peers. Recitations were a prime feature of the course and accounted for 40% of the students’ final grade. UTAs were responsible for leading the Recitations, with 1 UTA per student group.

Using Video Conferencing for Recitations

In the fall semester of 2015, the primary instructor introduced a change in course design for COM 100: the use of video conferencing for Recitations. This change was planned before the present study was conceived and proposed. The primary instructor had little experience with using this technology for educational purposes, so he contacted the author (based on the author’s prior experience) to help assess and arrange the necessary technology requirements. The author was also asked to provide guidance and support to the GTAs and UTAs on how to use video conferencing effectively with their student groups.

Researcher’s note: The author considered this proposal to be an ideal opportunity for research and professional development for three reasons. One, he could gain experience redesigning a course to integrate a promising technology. Two, he would have the opportunity to design an intervention that could affect both technical and psychological aspects of educational video conferencing. Finally, he would have the opportunity to study the impact of the intervention with a large sample population of students.

Implementation

Pre-study design and planning

Preparations for moving elements of COM 100 to video conferencing began several months before the start of the course. The author held a series of meetings with the primary instructor and the UTA Coordinator to assess the technological and scheduling needs for the course in light of the introduction of video conferencing for Recitations. Discussions centered in particular on which videoconferencing solution would be used, how the students would access the videoconference sessions, how many students would be in each session, and how the technology could be used to positively impact the students’ learning experience. Shortly before the start of preparations for COM 100, the University had selected Zoom as its enterprise-level videoconferencing solution, which meant all faculty members, students, and staff had online access to the technology using their University IDs. Zoom is a web-based videoconferencing solution that can be accessed through an Internet connection and web browser both on and off campus. A Zoom session can be hosted from one person’s Zoom account; others may join the session through an HTTPS link or by typing a meeting ID number into their Zoom account. For privacy and other reasons, it was determined that UTAs should not use their own Zoom accounts to host Recitations. Separate course-affiliated Zoom accounts were therefore established, and a coding scheme was used to designate them.
A total of 10 Zoom accounts were created for COM 100, named comz01-comz10. Through an iterative process, the author, primary instructor, and UTA Coordinator determined the scheduling and other details of the Recitations for the semester, including which Zoom accounts the different UTAs would use and how many students would be in each session. Table 2-1 shows the organization of the Recitation groups, including Section numbers, UTA assignments, weekly schedule, and corresponding video conferencing session ID numbers.

Table 2-1: Organization of Recitation groups by UTAs, Weekly Schedule, and Video Conferencing ID. (The table assigns each UTA, identified by codes such as 5.1 or 3.1, to a Section (1-10) and a Zoom account (comz01-comz10) within one of six weekly time slots: Mondays 5:30-6:50; Wednesdays 10:20-11:40, 12:40-2:00, 3:00-4:20, and 5:30-6:50; and Fridays 12:40-2:00.)

Description of Intervention: Team Training Activities

The intervention in this study was framed as a series of team training activities to be used at the beginning of each Recitation. This was entirely new in the history of the course, and the activities were specifically designed for use in COM 100. The activities were based in part on the psychological principles of structuration and symbolic convergence (see Chapter 1) and on the author’s knowledge of best practices in videoconferencing. The intervention was also aligned with the 5 shared elements of team training identified by Prichard et al. (2006): 1) establishing group goals; 2) establishing member interdependency; 3) coordination of members’ activities; 4) structuring individual/group/task roles; and 5) focusing on interactive processes. The team training intervention in this study was designed to take into account both the study’s theoretical foundations and the context of COM 100. The four separate brief activities were meant to orient Recitation groups to some of the goals, roles, tasks, technologies, and guidelines for video conferencing interactions. Each activity was named according to its central theme: Team Name, Emotional Roleplay, Speak Up!, and Background. Table 2-2 organizes the 4 activities according to team training themes, type of interaction, and theoretical basis.

Table 2-2: Team Training Activities, Themes, Interactions, and Theoretical Basis

Activity | Team Training Themes | Type of Interaction | Theoretical Basis
Team Name | Group goal; member interdependency; interactive processes | Negotiation | Structuration
Emotional Roleplay | Structuring group task/roles; interactive processes | Explanation | Structuration
Speak Up! | Coordination of members’ activities; interactive processes | Mutual Regulation | Symbolic Convergence
Background | Member interdependency; interactive processes | Explanation | Symbolic Convergence

The following presents detailed descriptions of each of the 4 intervention activities.

Team Name: This was a negotiation activity that was designed to help students create a group identity.
It was a short intervention in creating and achieving a group goal, emphasizing the interdependency of the group through interactive processes. Students were given a short time to suggest different team names and then vote for the one they would use for the rest of the semester. The Team Name intervention was also intended to lend a measure of structure to the group’s activities and to allow for the exchange of the personal values and preferences symbolized by the name they chose.

Emotional Roleplay: This was an explanation activity designed to help students understand their roles and responsibilities during the Recitations. Students were asked to visually convey different emotional states as called for by the UTA. For example, the Presentation Group moderator would pick a group format (such as watching a football game) and a group emotion (excitement or dismay). Students would then try to enact the scenario using non-verbal expressions, as well as varied camera angles and distances. This activity was intended to show how visual information such as facial expressions and camera framing can aid in fostering social presence, morale, and belongingness. This explanation activity was intended to lend structure to the ways students present themselves during the actual Recitations. The Emotional Roleplay intervention was designed to have an influence on the group’s overall social presence, as well as on perceptions of belongingness, in order to improve group cohesion.

Speak Up!: This was a mutual regulation activity designed to help students understand that what they project in videoconferencing can impact the whole group in both positive and negative ways. It was also designed to reinforce the notion that they can help manage both their own projection and the projection of others for the benefit of the group. For this activity, UTAs were to contact 1-2 students to act as confederates who would cause a loud disruption or series of disruptions at the beginning of the Recitation. Students would then practice constructive ways of “speaking up” to solve or manage these disruptions. The UTA would lead the discussion by asking about common disruptions, noting that they affect the videoconferencing sessions as a whole, and stressing that “speaking up” to solve disruptions is actually a service to the group. The Speak Up! intervention was intended to stress the interdependence of groups in videoconferencing environments, and to have an impact on the social presence factors of psychological involvement and behavioral engagement. It was also meant as a fun, light-hearted exercise aimed at improving interpersonal communications and morale.

Background: This was an explanation exercise designed to highlight member interdependency while improving interactive processes through symbolic convergence. Students were to give information about the background they selected for their Recitations based on a couple of questions from the UTA. The rationale behind this activity is that having students explain their choice of background can prompt them to think more critically about the backgrounds they choose during videoconferencing. Background is an important aspect of videoconferencing because it is part of the overall visual impression a person presents. For instance, a bookshelf may imply studiousness or professionalism, while a cluttered background may imply disorganization or slovenliness.
Students were asked to briefly describe their choice of background and provide their rationale for why they chose it (i.e., the impression they wanted to make). This was an explanation exercise that was rich with the potential for symbolic convergence. The Background intervention was intended to promote perceptions of intimacy and group cohesion, as well as to increase the overall morale of the group.

Study Design

The author used a mixed methods (quantitative and qualitative) design to gather data and answer the research questions. Creswell notes that a mixed methods research design is appropriate when the objective is “to obtain statistical, quantitative results from a sample and then follow up with a few individuals to help explain those results in more depth” (Creswell, 2009, p. 121). Data for the quantitative methods were gathered through a series of online surveys. Data for the qualitative methods were gathered through an analysis of video recordings and a series of focus group interviews with UTAs and students.

Orientation and Training for Implementation

The UTAs in COM 100 were responsible for conducting the team training activities with the students in the Recitations. The author first met the UTAs for COM 100 one week before the start of the course. The meeting was a general orientation to the course led by the primary instructor and the UTA Coordinator. After the orientation by the primary instructor, the author, along with the UTA Coordinator, hosted a 1-hour technology orientation with the UTAs to explain how videoconferencing would be used in the Recitations. The UTAs practiced logging into Zoom and hosting sessions on their laptop computers. They also learned how to record the videoconferencing sessions using the Record feature in Zoom, as well as how to upload the recordings to MediaSpace, a video storage and viewing service provided by the University to students, faculty, and staff. After the 1-hour technology orientation, the author hosted a separate study orientation with the UTAs of Sections 1, 2, and 5 (12 UTAs in total). These Sections were selected on the basis of their full enrollments (75 students per Section); the author reasoned that choosing 3 full Sections would expose the most students to the intervention. This study orientation took approximately 30 minutes, and it served as the initial point of contact between the author and those responsible for leading the team training activities with the students. The author introduced the rationale and concepts behind the study, and gave the UTAs a breakdown and explanation of each of the team training activities. The author then asked if the UTAs would like to take part in the study; all of the UTAs agreed to take part. At this time, the author also announced the creation of a Facebook group for the study. The purpose of this social media group was to enhance communications between the author and the UTAs, as well as among the UTAs themselves, and to give them a forum in which to share their experiences with the team training activities.

Implementation Protocol

The author performed a series of facilitation support activities before and after the implementation of each team training activity.
Table 2-3 details the task protocol for administering each implementation:

Table 2-3: Task Protocol for Implementation

Implementation Task | Time Administered | Delivery | Participants

Before Recitation:
Discuss Team Training activity description with UTAs in person; practice the activity with UTAs | 1 week before Recitation | In person | Author, UTAs
Email Team Training activity description to UTAs (contains both an explanation of the activity for UTAs and an activity prompt for the UTAs to email to their students) | 1 week before Recitation | Email | Author, UTAs
Remind UTAs to email students the prompt for the Team Training activity; remind UTAs to have students take the appropriate online survey | 2 days before Recitation | Email | Author, UTAs; UTAs, students
Administer Team Training activity; remind students to take the online survey | At the beginning of Recitation | During videoconferencing | UTAs, students

After Recitation:
Confirm implementation and discuss outcomes of Team Training activity | Day after Recitation | Facebook (social media) | Author, UTAs
Send email to UTAs to forward to students reminding them to take the online survey | 1 week after Recitation | Email | Author, UTAs; UTAs, students

Instrumentation

A survey instrument, the Social Presence and Group Cohesion Survey, was designed and used for the quantitative portion of this study. This survey was based in part on two separate survey instruments used in prior studies involving the constructs of social presence and cohesion. First, a modified version of Gunawardena and Zittle’s Perceived Social Presence Scale (Gunawardena & Zittle, 1997) was used to design items that measured students’ perceptions of social presence. The 9 items (Likert 1-7) in this part of the survey were designed to address 3 factors of social presence: copresence, psychological involvement, and behavioral engagement. Second, a 6-item (Likert 1-5) modified version of Bollen and Hoyle’s Perceived Cohesion Scale (Bollen & Hoyle, 1990) was used to measure students’ perceptions of cohesion in relation to their Presentation Groups. Table 2-4 shows the list of the survey items:

Table 2-4: Social Presence and Group Cohesion Survey Items

Prompt: When I'm videoconferencing with my Recitation Group...
Item 1 | SP-Co1 | I feel like the other people are with me or close by.
Item 2 | SP-Psy1 | I sense other people in the group are thinking about the same things as I am.
Items 3-5 | SP-Be | We often communicate to each other with: a. Speaking; b. Gestures; c. Facial expressions
Item 6 | SP-Co2r | Other people seem far away from me.
Item 7 | SP-Psy2 | I have a good sense of what the other people are thinking and feeling.
Item 8 | SP-Co3 | It's like being with them in person.
Item 9 | SP-Psy3 | I feel a positive emotional connection to the other people in the group.

Prompt: When we are all videoconferencing...
Item 10 | GC1 | I feel I belong in my Recitation group.
Item 11 | GC2 | I am enthusiastic about getting together with my Recitation group.
Item 12 | GC3 | I have an important role to play in my Recitation group.
Item 13 | GC4 | My Recitation group is one of the best in the course.
Item 14 | GC5 | I feel I am a valued member of my Recitation group.
Item 15 | GC6 | I feel good after I've met with my Recitation group.

A scoring instrument was also used for the fidelity of implementation (FOI) analysis portion of this study: the Scorecard for Adherence in Videoconferencing Interventions (SAVI, or “the Scorecard”). The instrument was based on Carroll et al.’s (2007) conceptual framework for measuring and assessing FOI.
The SAVI was used to score UTAs’ Adherence to the intervention’s original design in individual Recitations based on the following sub-factors: Frequency, Duration, Content, and Coverage. Each sub-factor was weighted equally in the overall Adherence score. UTAs’ performances along these 4 sub-factors were scored based on observations of the video-recorded Recitations. Note that the SAVI was developed to score observed behaviors, but not the quality of those behaviors (e.g., the enthusiasm with which a UTA performed a particular team training activity). Therefore, while the SAVI may be considered a reliable objective measure of observed behaviors related to Frequency, Duration, Content, and Coverage, it does not measure subjective and/or affective factors that may have had an impact on Adherence. Table 2-5 shows the items in the SAVI instrument:

Table 2-5: Scorecard for Adherence in Videoconferencing Interventions (SAVI)
(Each intervention task was scored for each implementation, T1-T7, and each sub-factor was also expressed as a percentage of its maximum; the sub-factor scores combine into an overall Adherence total.)

Frequency (0-1 pt.; 1 pt. max):
• Does activity
Duration (0-3 pts.; 3 pts. max):
• Does activity for optimal length of time (does not seem rushed but does not linger)
Content (0-2 pts. per task; 14 pts. max):
• Asks about and confirms students received email prompts
• Explains purpose of activity by using email prompt
• Explains purpose of activity by using his/her informed interpretation
• Confirms whether the students understand the purpose of the activity
• Asks students if they have questions about the activity
• Does activity based on instructions from email and orientation
• Models activity for students
Coverage (0-2 pts. per task; 4 pts. max):
• Gets appropriate input from most or all students
• Gives appropriate feedback to most or all students

Participant Sampling and Recruitment

The participants in this study were the students already enrolled in COM 100; therefore, no direct recruitment of participants was required. Participation in the intervention was treated as part of normal course activities, although participation in the survey portions of the study was voluntary. As noted earlier, there were 10 Sections in COM 100 comprising a total of 544 students. The enrollment in each of the 10 Sections was unevenly distributed (range = 21-73). The 10 Sections were divided into treatment and control groups based solely on enrollment numbers (i.e., Sections 1, 2, and 5 had nearly full enrollment, so they were selected as Treatment groups). The treatment protocol called for 1 team training activity (Team Name) to be administered at the same time to every treatment group, and for 3 different activities to be administered at different times to different groups during the semester. Based on this protocol, the author reasoned that 3 fully enrolled Sections were needed to meet the study requirements. The treatment and control Sections were then treated as naturally randomized populations with only their enrollment in COM 100 as a common denominator. There was no attempt to identify whether there were significant differences between Sections in terms of student demographics (e.g., age, race, ethnicity, gender, etc.) since this kind of data did not play a role in the study design. In addition, all students received a 20-minute in-person orientation from the author on best practices in video conferencing prior to their Recitations.
All Sections and Recitation groups were treated as comparably representative populations of COM 100, save for differences in enrollment. The only difference between the treatment and control groups was exposure to the team training activities.

Data Collection

All COM 100 students in both the control and treatment groups were asked to complete the same online survey after each Recitation. Students had 2 weeks to complete a survey before it was closed. The survey questions were designed to elicit students’ responses regarding their feelings of social presence and group cohesion with other students during their Recitations. The scores from the survey responses were then used for the study’s quantitative analysis. The same survey was used after each Recitation so that changes in social presence and group cohesion might be detected both after each Recitation and cumulatively over the course of the semester.

Qualitative data came from several sources. First, the author reviewed the video recordings of the Recitations made by the UTAs using the Zoom Record feature. Special attention was paid to the treatment groups and to the implementation of the team training activities of the intervention. Additional data were gathered in focus group interviews that probed UTAs’ and students’ perceptions of cohesion and social presence in greater depth. The rationale for the qualitative interviews was to complement the survey data with additional insights and richer, more complete descriptions of the participants’ overall impressions of cohesion and social presence in video-mediated collaborative learning contexts.

Survey data were gathered from COM 100 students at 4 points during the semester. The surveys were administered immediately after the students’ four video conference Recitations. The speeches were part of the course curriculum and were assessed as partial fulfillment of the students’ course requirements. All elements of the Recitations (audio and video) were recorded and provided to students as review materials for feedback and reflective practice. The recorded Recitations were also used to gather observational research data. Finally, focus groups were conducted with UTAs in both the treatment and control groups to gather first-hand accounts, descriptions, and explanations from the participants themselves. The focus groups were structured around open-ended questions designed to elicit more detailed responses from the UTAs regarding their perceptions of students’ social presence and group cohesion in their Recitations. The focus group format was selected to allow for greater input and feedback from more of the participants. The focus groups and interviews were conducted via Zoom and video recorded. Transcripts from the audio portion of the recordings were then made and used for analysis.

CHAPTER 3

FINDINGS

Research Questions 1 & 2 Analysis

The purpose of this study was to examine whether facilitator-led team training activities affected factors of group dynamics in videoconferencing environments, specifically the factors of social presence and group cohesion. The first two research questions were as follows:

RQ1: Does the use of facilitator-led team training affect social presence in small-group videoconferencing?
RQ2: Does the use of facilitator-led team training affect group cohesion in small-group videoconferencing?

Data from the 4 surveys administered for this study were used in quantitative analysis to answer Research Questions 1 and 2 (RQ1 and RQ2).
The survey instrument was designed using items that would measure students’ self-reports of social presence and group cohesion after each Speech Recitation. The same instrument was used each time, meaning students took the same survey 4 times, once after each Speech Recitation. The purpose was to see if the treatment as administered by the UTAs (facilitators) had an impact on the students’ social presence and group cohesion.

The data were first cleaned to remove survey responses that were incomplete, corrupted, or from students under the age of 18. Data were then organized by Treatment (Groups 1, 2, and 3) or Control (Group 4) condition, as well as by Time. It should be noted that a posteriori content validity analysis revealed that 3 survey items (SP-Co3, SP-Psy2, SP-Psy3) could be viewed as related to both Copresence and Psychological Involvement. Given the close association between feelings of proximity (Copresence) and intimacy (Psychological Involvement) reflected in prior research (Wiener & Mehrabian, 1968), this result was not unusual and was not expected to change the overall findings.

Confirmatory factor analysis was conducted individually on the 4 data sets (n=358, n=384, n=382, and n=380, respectively), as well as on a consolidated set (n=1504). Analysis confirmed the presence of 3 moderately correlated factors related to items in the surveys: Copresence/Psychological Involvement, Behavioral Engagement, and Group Cohesion. This was understandable given the results of the content validity analysis that showed overlap among the survey items for Copresence and Psychological Involvement. Reliability analysis was then conducted to determine if alpha values would improve with the removal of items; 2 survey items were removed to improve reliability (see Appendix A). Means and standard deviations for the 3 factors were then calculated using the revised scale (see Appendix B).

MANOVA was then conducted on the consolidated data set with the three latent variables (labeled “copsych”, “behav”, and “cohesion”) as dependent variables (DVs), and Group and Time (corresponding to the different treatments each group received at different times) as independent variables (IVs). Pillai’s Trace results showed a significant multivariate effect for the three latent variables in relation to Group (p=.005). Note: The author used MANOVA rather than multiple ANOVAs for the initial statistical analysis because multiple ANOVAs ignore the correlation among the three factors, thereby increasing the possibility of incorrectly rejecting the null hypothesis (a Type I error); when multiple hypotheses are tested, the chance of a rare event increases, and with it the likelihood of a Type I error. A Bonferroni correction was used here, meaning the desired alpha value (.05) was divided by the number of factors (.05/3 = .017). Table 3-1 shows these results:

Table 3-1: MANOVA for Copresence/Psychological Involvement, Behavioral Engagement, and Group Cohesion by Group for Consolidated Data Set

Effect: Group
Statistic | Value | F | Sig. | Partial Eta Squared
Pillai's Trace | .014 | 2.358 | .005 | .012
Wilks' Lambda | .986 | 2.359 | .005 | .012
Hotelling's Trace | .014 | 2.358 | .005 | .012
Roy's Largest Root | .008 | 3.744 | .007 | .011

MANOVA results indicated a significant multivariate effect for Group, but with an effect size too small to be meaningful. Univariate analysis of the between-subject effects of Group on the three latent variables was then conducted on the consolidated data set.
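For readers interested in the mechanics of this kind of analysis, the pipeline can be illustrated with a minimal Python sketch. The file name, column names, and item-to-factor groupings below are hypothetical (the actual item assignments and removals follow the CFA and reliability results in Appendices A and B), so the fragment illustrates the general procedure rather than reproducing the study’s analysis:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("consolidated_survey_responses.csv")  # hypothetical file

    # Reverse-score the negatively worded 7-point item before compositing.
    df["sp_co2"] = 8 - df["sp_co2r"]

    # Composite items into the three latent factor scores (hypothetical sets).
    df["copsych"] = df[["sp_co1", "sp_co2", "sp_co3",
                        "sp_psy1", "sp_psy2", "sp_psy3"]].mean(axis=1)
    df["behav"] = df[["sp_be_a", "sp_be_b", "sp_be_c"]].mean(axis=1)
    df["cohesion"] = df[["gc1", "gc2", "gc3", "gc4", "gc5", "gc6"]].mean(axis=1)

    # Omnibus MANOVA of the three factors on Group (add C(time) for the Time IV).
    print(MANOVA.from_formula("copsych + behav + cohesion ~ C(group)",
                              data=df).mv_test())

    # Bonferroni-corrected univariate follow-ups: alpha = .05 / 3 factors = .017.
    alpha = 0.05 / 3
    for dv in ("copsych", "behav", "cohesion"):
        fit = ols(f"{dv} ~ C(group)", data=df).fit()
        p = sm.stats.anova_lm(fit, typ=3).loc["C(group)", "PR(>F)"]
        print(dv, f"p={p:.4f}", "significant" if p < alpha else "not significant")

The design choice the sketch reflects is the one described in the note above: a single multivariate test guards the family-wise error rate across the three correlated factors, and the divided alpha is applied only to the univariate follow-ups.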
Results showed a significant effect (p = .006) for only one of the factors, Copresence/Psychological Involvement, but again the effect size was too small to be meaningful (partial eta squared = .008). Table 3-2 shows these results:

Table 3-2: Univariate ANOVA for Copresence/Psychological Involvement for Consolidated Data Set

Dependent Variable: CPRe
Source | Type III Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared
Corrected Model | 6.937 | 3 | 2.312 | 4.227 | .006 | .008
Intercept | 6894.244 | 1 | 6894.244 | 12601.801 | .000 | .894
Group | 6.937 | 3 | 2.312 | 4.227 | .006 | .008
Error | 820.626 | 1500 | .547 | | |
Total | 11699.040 | 1504 | | | |
Corrected Total | 827.563 | 1503 | | | |

Subsequent univariate tests of the three latent variables for between-subject effects between Groups (1-4) at individual survey Times (1-4) again showed a significant effect for only one factor, Copresence/Psychological Involvement, and only at survey Time 4 (p = .014), with a small effect size (partial eta squared = .028; see Appendix C). In addition, pairwise comparison of Copresence/Psychological Involvement using the consolidated data set (see Appendix D) also showed significant mean differences between Group 3 and Groups 1, 2, and 4.

Results from the analyses for Research Questions 1 and 2 indicated that facilitator-led team training had a significant effect, with a weak effect size, on Copresence/Psychological Involvement, but not on Behavioral Engagement or Group Cohesion, when treated groups were compared with those that did not have facilitator-led team training; this was seen only in the data from survey Time 4 and in the consolidated data set. Finding significant effects with only a weak effect size for Copresence/Psychological Involvement served as a catalyst to begin looking at Research Question 3 in order to explain the results.

Research Question 3 Analysis

R3: What factors contributed to the results in Research Questions 1 & 2?

To determine what factors contributed to the results of Research Questions 1 & 2, the author conducted a fidelity of implementation (FOI) analysis. Implementation fidelity can be an important yet overlooked source of variation in a study. Data for the FOI analysis came from two sources: 1) video recordings of the Speech Recitations and 2) excerpts from the author’s email and social media communication with the UTAs. Please note that, as per the design of this study, the author did conduct focus groups and interviews with the UTAs. Due to information gained from analysis of the video-recorded Recitations, however, the data from these sessions were deemed unreliable and were not used in the following FOI analysis.

Fidelity of Implementation Analysis

FOI analysis was conducted using a conceptual framework proposed by Carroll et al. (2007) that was deemed appropriate for this study (see Chapter 2). According to this framework, researchers can conduct detailed FOI analysis by measuring and evaluating 3 key factors: adherence, moderators, and essential components. The Scorecard for Adherence in Videoconferencing Interventions (SAVI; see Chapter 2) was used to measure four sub-factors (Frequency, Duration, Content, and Coverage); the products of these scores were then used to create overall Adherence scores for the different team training activities. Because each sub-factor was equally important to Adherence in the implementation of the intervention, the product, rather than the sum or average, of the scores most accurately reflected the relationship between them.
Results from the analyses for Research Questions 1 and 2 indicated that facilitator-led team training had a statistically significant but weak effect on Copresence/Psychological Involvement - but not on Behavioral Engagement or Group Cohesion - when compared with groups that did not receive facilitator-led team training, and this effect was seen only in the data from survey Time 4 and in the consolidated data set. Finding significant effects with a weak effect size for Copresence/Psychological Involvement served as a catalyst to begin looking at Research Question 3 in order to explain the results.

Research Question 3 Analysis

R3: What factors contributed to the results in Research Questions 1 & 2?

To determine what factors contributed to the results of Research Questions 1 & 2, the author conducted a fidelity of implementation (FOI) analysis. Implementation fidelity can be an important yet overlooked source of variation in a study. Data for the FOI analysis came from two sources: 1) video recordings of the Speech Recitations and 2) excerpts from the author's email and social media communication with the UTAs. Please note that, as per the design of this study, the author did conduct focus groups and interviews with the UTAs. Due to information gained from analysis of the video-recorded Recitations, however, the data from these sessions were deemed unreliable and were not used in the following FOI analysis.

Fidelity of Implementation Analysis

FOI analysis was conducted using a conceptual framework proposed by Carroll et al. (2007) that was deemed appropriate for this study (see Chapter 2). According to this framework, researchers can conduct detailed FOI analysis by measuring and evaluating 3 key factors: adherence, moderators, and essential components. The Scorecard for Adherence in Videoconferencing Interventions (SAVI, Chapter 2, pg. 44) was used to analyze and measure four sub-factors (Frequency, Duration, Content, and Coverage); the products of those scores were then used to create overall Adherence scores for the different team training activities. Because each sub-factor was equally important to Adherence in the implementation of the intervention, the product, rather than the sum or average, of the scores most accurately reflected the relationship between them. For example, if a UTA conducted an activity with no Content or no Duration, the activity was in fact not really conducted, hence the Adherence score for that instance would be 0. The author also conducted a review of potential moderators of Adherence - intervention complexity, facilitation strategies, participant responsiveness, and quality of delivery. This chapter concludes with a brief summary and interpretation of the overall study findings.

Adherence

Carroll et al. (2007) note, "The measurement of implementation fidelity is the measurement of adherence, i.e., how far those responsible for delivering an intervention actually adhere to the intervention as it is outlined by its designers" (Carroll et al., 2007, p. 3). Adherence is measured in FOI analysis by examining 4 key sub-factors: content, frequency, duration, and coverage. To measure adherence across the different Groups, Treatments, and Times, the author developed a scoring instrument based on measurements and assessments of these 4 sub-factors (see Chapter 2). The following sections report the measures and assessments for each of the 4 sub-factors of adherence, as well as the findings from the final scoring instrument.

Please note: There were 18 instances when there was no video available for FOI analysis, which raised the question of how to treat missing FOI data. The author first used a conservative approach by treating the missing data as 0, but this approach risked unduly deflating scores for the 4 sub-factors and overall Adherence. Treating the missing data as Null, however, risked inflating the scores in a way that was potentially too optimistic. Instead of choosing one approach over the other, the author calculated scores for the Adherence sub-factors by treating missing data (N/A) as both Null and 0. The following FOI tables present side-by-side columns for treating missing data (N/A) as either Null or as 0.
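A hedged sketch of the two missing-data treatments is given below; the function name is illustrative, and the sample observations are the Frequency values for Group/UTA 1.1 from Table 3-3 in the next section. "Null" drops N/A sessions from the denominator, while "0" counts them as non-adherence.

```python
# Illustrative sketch of the Null-vs-0 treatment of missing FOI data,
# applied here as a simple average over per-session scores.
def average_score(observations, missing_as_zero: bool) -> float:
    """observations: per-session scores, with None marking missing video (N/A)."""
    if missing_as_zero:
        scored = [obs if obs is not None else 0 for obs in observations]
    else:  # "Null": exclude missing sessions from the denominator
        scored = [obs for obs in observations if obs is not None]
    return sum(scored) / len(scored) if scored else 0.0

# Frequency observations for Group/UTA 1.1 across Times 1-7 (Table 3-3):
group_1_1 = [1, 1, 0, 0, 0, 0, None]
print(round(average_score(group_1_1, missing_as_zero=False), 2))  # 0.33 (2/6)
print(round(average_score(group_1_1, missing_as_zero=True), 2))   # 0.29 (2/7)
```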
Frequency

Frequency was first measured by counting the number of times team training activities were conducted by the UTAs in the Speech Recitations. The number of activities actually conducted was scored and then compared to the potential number of times the intervention could have been conducted to generate a frequency percentage. Table 3-3 shows the percentages of times when the activities were conducted for each time and each UTA based on FOI Frequency scores. Table 3-4 shows only the FOI Frequency scores (not percentages) from the Adherence scorecard organized by treatments (activities). The purpose of this analysis was to see if some team training activities were conducted more than others, regardless of the time they were conducted.

Table 3-3: Frequency by Group/UTA and Time (in %)

Group/UTA                             Time 1  Time 2  Time 3  Time 4  Time 5  Time 6  Time 7  Avg (Null)  Avg (0)
Group 1.1                             1       1       0       0       0       0       N/A     33%         29%
Group 1.2                             1       1       1       0       0       0       0       43%         43%
Group 1.3                             N/A     1       0       1       N/A     0       N/A     50%         29%
Group 1.4                             1       1       N/A     N/A     N/A     N/A     N/A     100%        29%
Treatments (Group 1)                  TN      ER      ER      BGD     BGD     SPU     SPU
Group 1 Frequency - N/A (Null)        100%    100%    33%     33%     0%      0%      0%      38%
Group 1 Frequency - N/A (0)           75%     100%    25%     0%      0%      0%      0%      32%
Group 2.1                             1       0       0       0       0       0       0       14%         14%
Group 2.2                             1       0       0       0       1       0       0       29%         29%
Group 2.3                             0       0       0       0       0       0       0       0%          0%
Group 2.4                             1       1       0       1       0       1       N/A     67%         57%
Treatments (Group 2)                  TN      SPU     SPU     ER      ER      BGD     BGD
Group 2 Frequency - N/A (Null)        75%     25%     0%      25%     25%     25%     0%      25%
Group 2 Frequency - N/A (0)           75%     25%     0%      25%     25%     25%     0%      25%
Group 3.1                             1       1       1       1       1       1       0       86%         86%
Group 3.2                             1       1       1       N/A     0       1       0       67%         57%
Group 3.3                             1       1       1       0       0       1       N/A     67%         57%
Group 3.4                             N/A     1       N/A     N/A     N/A     N/A     N/A     100%        14%
Treatments (Group 3)                  TN      BGD     BGD     SPU     SPU     ER      ER
Group 3 Frequency - N/A (Null)        100%    100%    100%    50%     33%     100%    0%      69%
Group 3 Frequency - N/A (0)           75%     100%    75%     25%     25%     75%     0%      54%
Overall Frequency Total               9       9       4       3       2       4       0       31
Overall Frequency Average - N/A (Null) 90%    75%     40%     33%     22%     40%     0%      43%
Frequency Score Combined (Null), by survey Time 1-4: 90%, 58%, 28%, 20%
Overall Frequency Average - N/A (0)    75%    75%     33%     25%     17%     33%     0%      37%
Frequency Score Combined (0), by survey Time 1-4: 75%, 54%, 21%, 17%
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-3 shows the overall Frequency score by Group/Time was 43% (Null) and 38% (0). The highest Frequency average for the activities that were conducted twice in a single survey Time occurred at Times 2-3 (58% (Null), 54% (0)); the lowest Frequency average was at Times 6-7 (20% (Null), 17% (0)).

Table 3-4: Frequency FOI Scores and Averages by Group and Treatment

Group/UTA                        TN     BGD1   BGD2   ER1    ER2    SPU1   SPU2   Avg (Null)  Avg (0)
Group 1.1                        1.00   0.00   0.00   1.00   0.00   0.00   N/A    1.00        0.29
Group 1.2                        1.00   0.00   0.00   1.00   1.00   0.00   0.00   1.00        0.43
Group 1.3                        N/A    1.00   N/A    1.00   0.00   0.00   N/A    1.00        0.43
Group 1.4                        1.00   N/A    N/A    1.00   N/A    N/A    N/A    1.00        0.29
Group 1 Frequency - N/A (Null)   1.00   0.33   0.00   1.00   0.33   0.00   0.00   1.00        0.36
Group 1 Frequency - N/A (0)      0.75   0.25   0.00   1.00   0.50   0.00   0.00
Group 2.1                        1.00   0.00   0.00   0.00   0.00   0.00   0.00   1.00        0.14
Group 2.2                        1.00   0.00   0.00   0.00   1.00   0.00   0.00   1.00        0.29
Group 2.3                        0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00        0.00
Group 2.4                        1.00   1.00   N/A    1.00   0.00   1.00   0.00   1.00        0.57
Group 2 Frequency - N/A (Null)   0.75   0.25   0.00   0.25   0.25   0.25   0.00   0.75        0.25
Group 2 Frequency - N/A (0)      0.75   0.25   0.00   0.25   0.25   0.25   0.00
Group 3.1                        1.00   1.00   1.00   1.00   0.00   1.00   1.00   1.00        0.86
Group 3.2                        1.00   1.00   1.00   1.00   0.00   N/A    0.00   1.00        0.57
Group 3.3                        1.00   1.00   1.00   1.00   N/A    0.00   0.00   1.00        0.57
Group 3.4                        N/A    1.00   N/A    N/A    N/A    N/A    N/A    1.00        0.14
Group 3 Frequency - N/A (Null)   1.00   1.00   1.00   1.00   0.00   0.50   0.33   1.00        0.50
Group 3 Frequency - N/A (0)      0.75   1.00   0.75   0.75   0.00   0.25   0.00
Overall Frequency - N/A (Null)   0.90   0.55   0.38   0.73   0.22   0.22   0.13   0.43        0.38
Frequency Score Combined - N/A (Null): TN .90; BGD .46; ER .47; SPU .17
Overall Frequency - N/A (0)      0.75   0.50   0.25   0.67   0.25   0.17   0.08
Frequency Score Combined - N/A (0): TN .75; BGD .38; ER .46; SPU .13
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-4 shows the overall Frequency score by Group/Treatment was .43 (Null) and .38 (0). Team Name had the highest Frequency score for a single survey Time (.90 (Null), .75 (0), for Time 1). Emotional Roleplay had the highest combined Frequency score (.47 (Null), .46 (0)) of the 3 activities that were conducted twice in a survey Time, followed by Background (.46 (Null), .38 (0)) and Speak Up! (.17 (Null), .13 (0)).
Duration

Duration was first measured in seconds to assess the amount of time the UTAs actually spent administering the different activities (see Table 3-5).

Table 3-5: Duration (in seconds) by Group/UTA and Time

Group/UTA                  Time 1  Time 2  Time 3  Time 4  Time 5  Time 6  Time 7  Total Duration
Group 1.1                  30      110     0       0       0       0       N/A     140
Group 1.2                  45      60      60      0       0       0       0       165
Group 1.3                  N/A     120     0       120     N/A     0       N/A     240
Group 1.4                  105     440     N/A     N/A     N/A     N/A     N/A     545
Treatments (Group 1)       TN      ER      ER      BGD     BGD     SPU     SPU
Group 1 Duration Total     180     730     60      120     0       0       0       1090
Group 1 Duration Average   45      183     15      30      0       0       0       273
Group 2.1                  60      0       0       0       0       0       0       60
Group 2.2                  105     0       0       0       60      0       0       165
Group 2.3                  0       0       0       0       0       0       0       0
Group 2.4                  540     45      0       60      0       270     N/A     915
Treatments (Group 2)       TN      SPU     SPU     ER      ER      BGD     BGD
Group 2 Duration Total     705     45      0       60      60      270     0       1140
Group 2 Duration Average   176     11.25   0       15      15      68      0       285
Group 3.1                  60      330     210     35      60      60      0       755
Group 3.2                  250     870     550     N/A     0       320     0       1990
Group 3.3                  60      200     260     0       0       120     N/A     640
Group 3.4                  N/A     120     N/A     N/A     N/A     N/A     N/A     120
Treatments (Group 3)       TN      BGD     BGD     SPU     SPU     ER      ER
Group 3 Duration Total     370     1520    1020    35      60      500     0       3505
Group 3 Duration Average   93      380     255     9       15      125     0       876
Overall Duration Total     1255    2295    1080    215     120     770     0
Overall Duration Average   139     255     270     72      60      193     0
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-6 shows the Duration scores organized by group and time. The author scored Duration based on whether the amount of time a UTA conducted an activity was effective given the content of the activity. Scoring Duration required expert judgment on the part of the author because different team training activities theoretically required different durations to allow the UTAs and students enough time to deliver the Content and give proper Coverage to the participants. For example, the Background activity required more time to administer effectively (7-9 minutes) than either the Team Name (3-5 minutes), Speak Up! (3-5 minutes), or Emotional Roleplay (2-3 minutes) activities because of the amount of information students would need to communicate in order to do the activity effectively. Duration was assessed on a 0-2 point scale, from no duration (0 points) to moderate duration (1 point) to effective duration (2 points).
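A sketch of this scoring rule is given below. The per-activity effective ranges follow the paragraph above, but the numeric cut-off between "moderate" and "effective" duration is an assumption for illustration; the study's actual cut-offs rested on the author's expert judgment.

```python
# Hedged sketch of the 0-2 Duration scoring; the threshold logic is
# illustrative. Effective ranges (in minutes) follow the text above.
EFFECTIVE_RANGE_MIN = {
    "Background": (7, 9),
    "Team Name": (3, 5),
    "Speak Up!": (3, 5),
    "Emotional Roleplay": (2, 3),
}

def duration_score(activity: str, seconds: int) -> int:
    low, _high = EFFECTIVE_RANGE_MIN[activity]
    if seconds == 0:
        return 0   # no duration: the activity was not conducted
    if seconds < low * 60:
        return 1   # moderate duration (assumed cut-off at the low end)
    return 2       # effective duration

print(duration_score("Team Name", 30))   # 1: conducted, but well short of 3 minutes
print(duration_score("Background", 0))   # 0: not conducted at all
```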
Table 3-6: Duration FOI Scores and Averages by Group/UTA and Time

Group/UTA                        Time 1  Time 2  Time 3  Time 4  Time 5  Time 6  Time 7  Avg (Null)  Avg (0)
Group 1.1                        0.33    1.00    0.00    0.00    0.00    0.00    N/A     0.22        0.19
Group 1.2                        0.33    1.00    0.67    0.00    0.00    0.00    0.00    0.29        0.28
Group 1.3                        N/A     0.67    0.00    0.67    N/A     0.00    N/A     0.34        0.19
Group 1.4                        1.00    1.00    N/A     N/A     N/A     N/A     N/A     1.00        0.29
Treatments (Group 1)             TN      ER      ER      BGD     BGD     SPU     SPU
Group 1 Duration - N/A (Null)    0.55    0.92    0.22    0.22    0.00    0.00    0.00    0.27        0.24
Group 1 Duration - N/A (0)       0.42    0.92    0.17    0.17    0.00    0.00    0.00
Group 2.1                        0.33    0.00    0.00    0.00    0.00    0.00    0.00    0.05        0.05
Group 2.2                        1.00    0.00    0.00    0.00    0.33    0.00    0.00    0.19        0.19
Group 2.3                        0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00        0.00
Group 2.4                        1.00    0.33    0.00    0.66    0.00    1.00    N/A     0.50        0.43
Treatments (Group 2)             TN      SPU     SPU     ER      ER      BGD     BGD
Group 2 Duration - N/A (Null)    0.58    0.08    0.00    0.17    0.08    0.25    0.00    0.17        0.17
Group 2 Duration - N/A (0)       0.58    0.08    0.00    0.17    0.08    0.25    0.00
Group 3.1                        0.33    1.00    0.66    0.33    0.33    0.66    0.00    0.47        0.47
Group 3.2                        1.00    0.33    0.66    N/A     0.00    1.00    0.00    0.50        0.43
Group 3.3                        0.33    1.00    1.00    0.00    0.00    1.00    N/A     0.56        0.48
Group 3.4                        N/A     1.00    N/A     N/A     N/A     N/A     N/A     1.00        0.14
Treatments (Group 3)             TN      BGD     BGD     SPU     SPU     ER      ER
Group 3 Duration - N/A (Null)    0.55    0.83    0.77    0.17    0.11    0.89    0.00    0.47        0.38
Group 3 Duration - N/A (0)       0.42    0.83    0.58    0.08    0.08    0.67    0.00
Overall Duration Average - N/A (Null)  0.57  0.61  0.30  0.18  0.07  0.37  0.00   0.30        0.26
Duration Score Combined (Null), by survey Time 1-4: 0.57, 0.45, 0.13, 0.18
Overall Duration Average - N/A (0)     0.47  0.61  0.25  0.14  0.05  0.31  0.00
Duration Score Combined (0), by survey Time 1-4: 0.47, 0.43, 0.10, 0.15
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-6 shows the overall Duration score by Group/Time was .30 (Null) and .26 (0). The highest Duration score for a single time occurred at Time 2 (.61). The highest overall Duration score for a survey Time occurred at survey Time 1 (.57 (Null), .47 (0)). The table also shows a number of instances where Duration was scored at 0 (Group 2 at Time 3; Group 1 at Times 5-6; Groups 1-3 at Time 7).

Table 3-7 shows the Duration scores from the Adherence scorecard organized by treatments (activities).
Table 3-7: Duration FOI Scores and Averages by Group/UTA and Treatment

Group/UTA Group 1.1 Group 1.2 Group 1.3 Group 1.4 Group 1 Duration - N/A (Null) Group 1 Duration - N/A (0) Group 2.1 Group 2.2 Group 2.3 Group 2.4 Group 2 Duration - N/A (Null) Group 2 Duration - N/A (0) Group 3.1 Group 3.2 Group 3.3 Group 3.4 Group 3 Duration - N/A (Null) Group 3 Duration - N/A (0) Overall Duration Average - N/A (Null) Duration Score Combined (Null) Overall Duration Average - N/A (0) Duration Score Combined (0) TN 0.33 0.33 N/A 1.00 BGD1 0.00 0.00 0.66 N/A BGD2 0.00 0.00 N/A N/A ER1 1.00 1.00 0.66 1.00 ER2 0.00 0.66 0.00 N/A SPU1 0.00 0.00 0.00 N/A SPU2 N/A 0.00 N/A N/A Average Score – N/A (Null) 0.33 0.66 0.33 1.00 0.55 0.22 0.00 0.92 0.22 0.00 0.00 0.58 0.42 0.33 1.00 0.00 1.00 0.17 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 N/A 0.92 0.00 0.00 0.00 0.66 0.17 0.00 0.33 0.00 0.00 0.00 0.00 0.00 0.00 0.33 0.00 0.00 0.00 0.00 0.00 0.33 0.67 0.00 0.75 0.58 0.25 0.00 0.17 0.08 0.08 0.00 0.44 0.58 0.33 1.00 0.33 N/A 0.08 1.00 0.33 1.00 1.00 0.00 0.66 0.66 1.00 N/A 0.17 0.66 1.00 1.00 N/A 0.08 0.00 0.00 N/A N/A 0.25 0.33 N/A 0.00 N/A 0.00 0.33 0.00 0.00 N/A 0.50 0.75 0.83 1.00 0.55 0.83 0.77 0.89 0.00 0.17 0.11 0.77 0.42 0.83 0.58 0.67 0.00 0.08 0.08 0.57 0.45 0.29 0.63 0.11 0.07 0.04 0.57 0.47 0.47 0.37 0.36 0.19 0.37 0.59 0.30 0.08 0.33 Average Score – N/A (0) 0.19 0.28 0.19 0.29 0.24 0.05 0.19 0.00 0.43 0.17 0.47 0.43 0.48 0.14 0.38 0.60 0.06 0.11 0.03 0.26 0.04
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-7 shows the overall Duration score by Group/Treatment was .30 (Null) and .26 (0). Team Name had the highest Duration score for a single time instance (.57 (Null), .47 (0), for Time 1). Emotional Roleplay and Background tied for the highest combined Duration score of the 3 activities that could be conducted twice when N/A was treated as Null (.37), but Emotional Roleplay was higher when N/A was treated as 0 (.33 > .30). Speak Up! had the lowest Duration scores, at .06 (Null) and .04 (0).

Content

Content can be viewed as the "active ingredients" of a particular intervention (Carroll et al., 2007). The author assessed Content for each of the observed Speech Recitations by using a 7-item checklist covering how well the content in each activity was delivered by the UTA to the students. Each item on the checklist was worth between 0-2 points. Items on the checklist included: 1) asking and confirming students received the activity email prompt; 2) explaining the purpose of the activity using information in the email prompt; 3) explaining the purpose of the activity using the UTA's own interpretation and synthesis of the activity; 4) confirming whether students understood the purpose of the activity; 5) asking students if they had questions about the activity; 6) modeling the activity for the students; and 7) conducting the activity based on instructions from both the email prompt and the activity orientation.
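A small sketch of how Content could be scored from this checklist follows. The normalization by the 14-point maximum is an assumption rather than a documented rule, though it is consistent with reported Content scores such as .21 (3/14) and .14 (2/14).

```python
# Hedged sketch of Content scoring from the 7-item checklist above; each
# item is worth 0-2 points, for an assumed maximum of 14 points.
CHECKLIST_ITEMS = 7
MAX_POINTS = CHECKLIST_ITEMS * 2

def content_score(item_points: list[int]) -> float:
    assert len(item_points) == CHECKLIST_ITEMS
    assert all(0 <= p <= 2 for p in item_points)
    return sum(item_points) / MAX_POINTS

# E.g., partial credit on three checklist items yields 3/14:
print(round(content_score([1, 1, 0, 0, 1, 0, 0]), 2))  # 0.21
```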
Table 3-8 shows the Content scores from the FOI analysis organized by Group/UTA and Time.

Table 3-8: Content FOI Scores and Averages by Group/UTA and Time

Group/UTA                       Time 1  Time 2  Time 3  Time 4  Time 5  Time 6  Time 7  Avg (Null)  Avg (0)
Group 1.1                       0.21    0.28    0.00    0.00    0.00    0.00    N/A     0.25        0.07
Group 1.2                       0.21    0.28    0.18    0.00    0.00    0.00    0.00    0.22        0.10
Group 1.3                       N/A     0.07    0.07    0.00    N/A     0.00    N/A     0.07        0.02
Group 1.4                       0.14    0.36    N/A     N/A     N/A     N/A     N/A     0.25        0.07
Treatments (Group 1)            TN      ER      ER      BGD     BGD     SPU     SPU
Group 1 Content - N/A (Null)    0.19    0.25    0.08    0.00    0.00    0.00    0.00    0.20        0.06
Group 1 Content - N/A (0)       0.11    0.25    0.06    0.00    0.00    0.00    0.00
Group 2.1                       0.07    0.00    0.00    0.00    0.00    0.00    0.00    0.07        0.01
Group 2.2                       0.21    0.00    0.00    0.00    0.21    0.00    0.00    0.42        0.06
Group 2.3                       0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00        0.00
Group 2.4                       0.14    0.14    0.00    0.21    0.00    0.50    N/A     0.14        0.14
Treatments (Group 2)            TN      SPU     SPU     ER      ER      BGD     BGD
Group 2 Content - N/A (Null)    0.11    0.04    0.00    0.05    0.05    0.13    0.00    0.16        0.05
Group 2 Content - N/A (0)       0.11    0.04    0.00    0.05    0.05    0.13    0.00
Group 3.1                       0.07    0.14    0.14    0.21    0.21    0.14    0.00    0.15        0.13
Group 3.2                       0.43    0.14    0.14    N/A     0.00    0.21    0.00    0.23        0.13
Group 3.3                       0.07    0.75    0.75    0.00    0.00    0.21    N/A     0.45        0.25
Group 3.4                       N/A     0.14    N/A     N/A     N/A     N/A     N/A     0.14        0.02
Treatments (Group 3)            TN      BGD     BGD     SPU     SPU     ER      ER
Group 3 Content - N/A (Null)    0.19    0.29    0.34    0.11    0.07    0.19    0.00    0.17        0.13
Group 3 Content - N/A (0)       0.14    0.29    0.25    0.05    0.05    0.14    0.00
Overall Content Average - N/A (Null)  0.16  0.19  0.13  0.05  0.05  0.11  0.00   0.10        0.08
Content Score Combined (Null), by survey Time 1-4: 0.16, 0.16, 0.05, 0.05
Overall Content Average - N/A (0)     0.12  0.19  0.10  0.03  0.03  0.09  0.00
Content Score Combined (0), by survey Time 1-4: 0.12, 0.15, 0.03, 0.05
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-8 shows the overall Content score by Group/Time was .10 (Null) and .08 (0). The highest average overall and combined Content score (.16) occurred at Times 1 and 2 when N/A was treated as Null; the highest Content score occurred at Time 2 when N/A was treated as 0.

Table 3-9 shows the Content FOI scores from the Adherence scorecard organized by treatments (activities).
Table 3-9: Content FOI Scores and Averages by Group/UTA and Treatment

Group/UTA                       TN     BGD1   BGD2   ER1    ER2    SPU1   SPU2   Avg (Null)  Avg (0)
Group 1.1                       0.21   0.00   0.00   0.28   0.00   0.00   N/A    0.21        0.07
Group 1.2                       0.21   0.00   0.00   0.28   0.18   0.00   0.00   0.22        0.10
Group 1.3                       N/A    0.00   N/A    0.07   0.07   0.00   N/A    0.07        0.02
Group 1.4                       0.00   N/A    N/A    0.36   N/A    N/A    N/A    0.36        0.05
Group 1 Content - N/A (Null)    0.14   0.00   0.00   0.25   0.08   0.00   0.00   0.22        0.06
Group 1 Content - N/A (0)       0.11   0.00   0.00   0.25   0.06   0.00   0.00
Group 2.1                       0.07   0.00   0.00   0.00   0.00   0.00   0.00   0.07        0.01
Group 2.2                       0.21   0.00   0.00   0.00   0.21   0.00   0.00   0.21        0.06
Group 2.3                       0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00        0.00
Group 2.4                       0.14   0.50   N/A    0.21   0.00   0.14   0.00   0.25        0.14
Group 2 Content - N/A (Null)    0.11   0.13   0.00   0.05   0.05   0.04   0.00   0.13        0.05
Group 2 Content - N/A (0)       0.11   0.04   0.00   0.05   0.05   0.13   0.00
Group 3.1                       0.07   0.14   0.14   0.14   0.00   0.21   0.21   0.14        0.13
Group 3.2                       0.43   0.14   0.14   0.21   0.00   N/A    0.00   0.23        0.13
Group 3.3                       0.07   0.75   0.75   0.21   N/A    0.00   0.00   0.45        0.25
Group 3.4                       N/A    0.14   N/A    N/A    N/A    N/A    N/A    0.14        0.02
Group 3 Content - N/A (Null)    0.19   0.29   0.34   0.19   0.00   0.11   0.07   0.24        0.13
Group 3 Content - N/A (0)       0.14   0.29   0.26   0.14   0.00   0.05   0.05
Overall Content Average - N/A (Null)  0.14  0.15  0.13  0.16  0.05  0.04  0.03   0.20        0.08
Content Score Combined - N/A (Null): TN 0.14; BGD 0.14; ER 0.11; SPU 0.03
Overall Content Average - N/A (0)     0.12  0.11  0.09  0.15  0.04  0.06  0.02
Content Score Combined - N/A (0): TN 0.12; BGD 0.11; ER 0.09; SPU 0.02
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-9 shows the overall Content score by Group/Treatment was .20 (Null) and .08 (0). Team Name and Background tied for the highest Content scores (.14) when N/A was treated as Null; Team Name had the highest score when N/A was treated as 0. Team Name had the best Content score overall (.12), but again that was only for the one time it was conducted (Time 1). Speak Up! had the lowest Content scores (.03 (Null) and .02 (0)). The results show variation in the amount of Content the UTAs delivered for each team training activity.

Coverage

Coverage is a measure of "whether all the people who should be participating in or receiving the benefits of an intervention actually do so" (Carroll et al., 2007, p. 2). For this study, coverage was measured by assessing the degree to which a UTA would a) get all students to participate in the activity and b) give appropriate feedback on the students' participation. For instance, an example of good coverage would be a UTA who, while conducting the Background activity, would call on each student to ask about their choice of background for their Speech Recitation and give each brief feedback on the appropriateness of their selection and any tips for improvement for the future. An example of bad coverage might be a UTA who only calls on a few students and offers no feedback on their responses.
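A hedged sketch of a Coverage score combining these two assessed aspects is given below. The 0-2 point scales for each aspect and the normalization by 4 are assumptions for illustration, though they are consistent with the quarter-step values (e.g., .25, .75) reported in Table 3-10.

```python
# Illustrative Coverage scoring: (a) getting all students to participate
# and (b) giving appropriate feedback; scales and normalization assumed.
def coverage_score(participation_points: int, feedback_points: int) -> float:
    assert 0 <= participation_points <= 2 and 0 <= feedback_points <= 2
    return (participation_points + feedback_points) / 4

print(coverage_score(2, 1))  # 0.75: every student engaged, partial feedback
print(coverage_score(1, 0))  # 0.25: some participation, no feedback given
```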
Table 3-10 shows the Coverage scores from the FOI analysis organized by Group/UTA and Time.

Table 3-10: Coverage FOI Scores and Averages by Group/UTA and Time

Group/UTA Group 1.1 Group 1.2 Group 1.3 Group 1.4 Treatments Group 1 Coverage N/A (Null) Group 1 Coverage N/A (0) Group 2.1 Group 2.2 Group 2.3 Group 2.4 Treatments Group 2 Coverage N/A (Null) Group 2 Coverage N/A (0) Group 3.1 Group 3.2 Group 3.3 Group 3.4 Treatments Group 3 Coverage N/A (Null) Group 3 Coverage N/A (0) Overall Coverage Average - N/A (Null) Coverage Score Combined (Null) Overall Coverage Average - N/A (0) Coverage Score Combined (0) Average Score – N/A (Null) 0.17 0.18 0.19 0.88 Time 1 Time 2 Time 3 Time 4 Time 5 Time 6 Time 7 0.25 0.25 N/A 0.75 TN 0.75 0.50 0.75 1.00 ER 0.00 0.50 0.00 N/A ER 0.00 0.00 0.00 N/A BGD 0.00 0.00 N/A N/A BGD 0.00 0.00 0.00 N/A SPU N/A 0.00 N/A N/A SPU 0.42 0.75 0.17 0.00 0.00 0.00 0.00 0.19 0.13 0.25 0.50 0.00 0.75 TN 0.69 0.00 0.00 0.00 0.00 SPU 0.13 0.00 0.00 0.00 0.00 SPU 0.00 0.00 0.00 0.00 0.50 ER 0.00 0.00 0.50 0.00 0.00 ER 0.00 0.00 0.00 0.00 0.75 BGD 0.00 0.00 0.00 0.00 N/A BGD 0.04 0.14 0.00 0.33 0.38 0.00 0.00 0.13 0.13 0.19 0.00 0.12 0.38 0.25 0.75 0.50 N/A TN 0.00 0.75 0.75 1.00 1.00 BGD 0.00 0.75 1.00 1.00 N/A BGD 0.13 0.00 N/A 0.00 N/A SPU 0.13 0.00 0.00 0.00 N/A SPU 0.19 0.75 1.00 0.75 N/A ER 0.00 0.00 0.00 N/A N/A ER 0.36 0.58 0.54 1.00 0.50 0.88 0.92 0.00 0.00 0.83 0.00 0.45 0.38 0.88 0.69 0.00 0.00 0.63 0.00 0.43 0.54 0.33 0.06 0.06 0.33 0.00 0.16 0.43 0.29 0.52 0.27 0.12 0.40 0.06 0.04 0.04 0.04 Average Score – N/A (0) 0.14 0.18 0.11 0.11 0.13 0.04 0.14 0.00 0.29 0.12 0.36 0.50 0.46 0.14 0.37 0.25 0.16 0.27 0.00 0.21 0.14
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-10 shows the overall Coverage average by Group/Time was .25 (Null) and .21 (0). The highest combined score was for survey Time 2 (.43 (Null), .40 (0)); the lowest combined score was for survey Time 3 (.06 (Null), .04 (0)).

Table 3-11 shows the Coverage scores from the Adherence scorecard organized by treatments (activities).
Table 3-11: Coverage FOI Scores and Averages by Group/UTA and Treatment

Group/UTA Group 1.1 Group 1.2 Group 1.3 Group 1.4 Treatments Group 1 Coverage - N/A (Null) Group 1 Coverage - N/A (0) Group 2.1 Group 2.2 Group 2.3 Group 2.4 Treatments Group 2 Coverage - N/A (Null) Group 2 Coverage - N/A (0) Group 3.1 Group 3.2 Group 3.3 Group 3.4 Treatments Group 3 Coverage - N/A (Null) Group 3 Coverage - N/A (0) Overall Coverage Average - N/A (Null) Coverage Score Combined (Null) Overall Coverage Average - N/A (0) Coverage Score Combined (0) N/A 0.00 N/A N/A SPU Average Score – N/A (Null) 0.17 0.18 0.19 0.88 Average Score – N/A (0) 0.14 0.18 0.11 0.25 0.00 0.00 0.19 0.13 0.00 0.50 0.00 0.00 ER 0.00 0.00 0.00 0.00 0.00 SPU 0.00 0.00 0.00 0.00 0.00 SPU 0.04 0.14 0.00 0.33 0.14 0.04 0.14 0.00 0.29 0.13 0.13 0.00 0.00 0.12 0.00 0.75 1.00 1.00 N/A BGD 0.13 0.75 1.00 0.75 N/A ER 0.13 0.00 0.00 N/A N/A ER 0.00 0.00 N/A 0.00 N/A SPU 0.00 0.00 0.00 0.00 N/A SPU 0.36 0.58 0.54 1.00 0.12 0.36 0.50 0.46 0.14 0.88 0.92 0.83 0.00 0.00 0.00 0.45 0.38 0.29 0.26 0.63 0.00 0.00 0.00 0.43 0.39 0.34 0.55 0.11 0.00 0.00 0.43 0.37 0.29 0.16 0.09 0.29 0.12 Time 1 Time 2 Time 3 Time 4 Time 5 Time 6 Time 7 0.25 0.25 N/A 0.75 TN 0.00 0.00 0.00 N/A BGD 0.00 0.00 N/A N/A BGD 0.75 0.50 0.75 1.00 ER 0.00 0.50 0.00 N/A ER 0.00 0.00 0.00 N/A SPU 0.42 0.00 0.00 0.75 0.17 0.13 0.25 0.50 0.00 0.75 TN 0.00 0.00 0.00 0.00 0.75 BGD 0.00 0.00 0.00 0.00 N/A BGD 0.75 0.00 0.00 0.00 0.50 ER 0.38 0.19 0.00 0.38 0.25 0.75 0.50 N/A TN 0.19 0.75 0.75 1.00 1.00 BGD 0.50 0.33 0.50 0.08 0.22 0.26 0.00 0.00 0.29 0.00 0.00 0.16
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-11 shows the overall Coverage average by Group/Treatment was .26 (Null) and .16 (0). Team Name had the highest score for Coverage for a single time instance (.43, Time 1) when N/A was treated as Null; Team Name and Emotional Roleplay had the same Coverage scores when N/A was treated as 0 (.29, survey Times 1 and 3 respectively). Emotional Roleplay also had the highest combined Coverage score of the 3 activities that could be conducted twice (.29), followed by Background (.12) and Speak Up! (.00).

Overall Adherence Scores

Scores for overall Adherence were calculated by multiplying the scores for the above 4 sub-factors.
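Because Adherence is the product of the sub-factors, a zero on any one sub-factor zeroes out an activity's Adherence score, as the sketch below illustrates (the numeric inputs are illustrative only).

```python
from math import prod

# Overall Adherence as the product of the four sub-factor scores, per the
# scoring rule described above.
def adherence(frequency: float, duration: float, content: float,
              coverage: float) -> float:
    return prod([frequency, duration, content, coverage])

print(adherence(0.75, 0.47, 0.12, 0.29))  # small but non-zero
print(adherence(0.75, 0.47, 0.12, 0.00))  # 0.0: one zero sub-factor zeroes Adherence
```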
Table 3-12 lists the final scores from each of the Speech Recitations by Time and by Group number (UTA).

Table 3-12: Adherence Scores and Averages by Group/UTA and Time

Group/UTA Group 1.1 Group 1.2 Group 1.3 Group 1.4 Treatments Group 1 Adherence - N/A (Null) Group 1 Adherence - N/A (0) Group 2.1 Group 2.2 Group 2.3 Group 2.4 Treatments Group 2 Adherence - N/A (Null) Group 2 Adherence - N/A (0) Group 3.1 Group 3.2 Group 3.3 Group 3.4 Treatments Group 3 Adherence - N/A (Null) Group 3 Adherence - N/A (0) Overall Adherence Average - N/A (Null) Adherence Score Combined (Null) Overall Adherence Average - N/A (0) Adherence Score Combined (0) Time 7 N/A 0.00 N/A N/A SPU Average Score – N/A (Null) 0.05 0.02 0.02 0.23 Average Score – N/A (0) 0.04 0.02 0.01 0.07 0.00 0.00 0.04 0.00 0.00 0.04 0.00 0.00 ER 0.00 0.00 0.00 0.00 0.38 BGD 0.00 0.00 0.00 0.00 N/A BGD 0.00 0.02 0.00 0.09 0.04 0.00 0.02 0.00 0.08 0.02 0.01 0.09 0.00 0.03 0.00 0.07 0.10 0.64 N/A BGD 0.02 0.00 N/A 0.00 N/A SPU 0.01 0.00 0.00 0.00 N/A SPU 0.09 0.07 0.21 0.16 N/A ER 0.00 0.00 0.00 N/A N/A ER 0.04 0.11 0.24 0.14 0.03 0.04 0.10 0.21 0.02 0.23 0.27 0.00 0.00 0.15 0.00 0.11 0.08 0.23 0.20 0.00 0.00 0.11 0.00 0.07 0.14 0.09 0.01 0.00 0.08 0.00 0.07 0.11 0.06 0.14 0.07 0.06 0.11 Time 1 0.01 0.02 N/A 0.11 TN Time 2 0.27 0.11 0.04 0.36 ER Time 3 0.00 0.05 0.00 N/A ER Time 4 0.00 0.00 0.04 N/A BGD Time 5 0.00 0.00 N/A N/A BGD Time 6 0.00 0.00 0.00 N/A SPU 0.05 0.19 0.02 0.01 0.00 0.03 0.01 0.11 0.00 0.11 TN 0.19 0.00 0.00 0.00 0.00 SPU 0.01 0.00 0.00 0.00 0.00 SPU 0.01 0.00 0.00 0.00 0.07 ER 0.06 0.00 0.00 0.06 0.01 0.32 0.01 N/A TN 0.00 0.11 0.04 0.64 0.14 BGD 0.11 0.01 0.01 0.00 0.01 0.09 0.06 0.04 0.07 0.00 0.03 0.05
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-12 shows the overall Adherence average by Group/Time was .06 (Null) and .05 (0). The highest Adherence scores for a single group (Group 3) occurred at Times 2 and 3 (survey Time 2), with scores of .23 and .27 (Null) and .23 and .20 (0). There were multiple instances where Adherence was scored as 0 for both individual UTAs and groups for a particular time.

Table 3-13 shows the overall scores from the Adherence scorecard organized by treatments (activities).
Table 3-13: Adherence Scores and Averages by Group/UTA and Treatment

Group/UTA Group 1.1 Group 1.2 Group 1.3 Group 1.4 Treatments Group 1 Adherence N/A (Null) Group 1 Adherence N/A (0) Group 2.1 Group 2.2 Group 2.3 Group 2.4 Treatments Group 2 Adherence N/A (Null) Group 2 Adherence N/A (0) Group 3.1 Group 3.2 Group 3.3 Group 3.4 Treatments Group 3 Adherence N/A (Null) Group 3 Adherence N/A (0) Overall Adherence Average - N/A (Null) Adherence Score Combined (Null) Overall Adherence Average - N/A (0) Adherence Score Combined (0) TN 0.01 0.02 N/A 0.11 TN BGD1 0.00 0.00 0.04 N/A BGD BGD2 0.00 0.00 N/A N/A BGD ER1 0.27 0.11 0.04 0.36 ER ER2 0.00 0.05 0.00 N/A ER SPU1 0.00 0.00 0.00 N/A SPU SPU2 N/A 0.00 N/A N/A SPU Average Score – N/A (Null) 0.05 0.02 0.02 0.23 0.05 0.01 0.00 0.19 0.02 0.00 0.00 0.04 0.03 0.01 0.11 0.00 0.11 TN 0.01 0.00 0.00 0.00 0.38 BGD 0.00 0.00 0.00 0.00 N/A BGD 0.19 0.00 0.00 0.00 0.07 ER 0.01 0.00 0.04 0.00 0.00 ER 0.00 0.00 0.00 0.00 0.00 SPU 0.00 0.00 0.00 0.00 0.00 SPU 0.00 0.02 0.00 0.09 0.04 0.00 0.02 0.00 0.08 0.06 0.09 0.00 0.02 0.01 0.00 0.00 0.03 0.06 0.01 0.32 0.01 N/A TN 0.09 0.11 0.04 0.64 0.14 BGD 0.00 0.07 0.10 0.64 N/A BGD 0.02 0.07 0.21 0.16 N/A ER 0.01 0.00 0.00 N/A N/A ER 0.00 0.00 N/A 0.00 N/A SPU 0.00 0.00 0.00 0.00 N/A SPU 0.04 0.11 0.24 0.14 0.03 0.04 0.10 0.21 0.02 0.11 0.23 0.27 0.15 0.00 0.00 0.00 0.11 0.08 0.23 0.20 0.11 0.00 0.00 0.00 0.07 0.12 0.10 0.12 0.01 0.00 0.00 0.07 0.11 0.06 0.11 0.07 0.06 0.09 0.06 0.11 0.01 0.06 Average Score – N/A (0) 0.04 0.02 0.01 0.07 0.09 0.06 0.00 0.00 0.00 0.00 0.05
Table key: BGD - Background; ER - Emotional Roleplay; SPU - Speak Up!; TN - Team Name

Table 3-13 shows the overall Adherence average by Group/Treatment was .06 (Null) and .05 (0). Background had the highest scores for Adherence for a single time instance (.23 and .27 (Null), .23 and .20 (0), at Times 2-3 (survey Time 2)). Background also had the highest combined Adherence score of the 3 activities that could be conducted twice (.11 (Null), .09 (0)), followed by Emotional Roleplay (.06 (Null and 0)) and Speak Up! (.00 (Null and 0)).

It is worth pointing out that Speak Up! had an overall Adherence score of .00. Again, individual Adherence scores were calculated by multiplying the 4 sub-factor scores, meaning that a 0 in any one of these sub-factors would result in a 0 score for Adherence for that activity. So, while Speak Up! did have overall scores for Frequency, Duration, and Content, it had an overall score of .00 for Coverage. This means that even when UTAs took time to announce and explain the activity to their students, there was no observed evidence that any of the students got to engage in the activity, nor was there evidence of UTA feedback to any of the students about their engagement. Thus, the overall Adherence score for the Speak Up! activity was calculated as .00.

Moderators of Fidelity of Implementation

Moderators in an FOI analysis are those factors outside of adherence that can influence or moderate fidelity of implementation for a treatment or intervention. Carroll et al. (2007) list 4 moderators to be examined in an FOI analysis: intervention complexity, facilitation strategies, quality of delivery, and participant responsiveness. Each of these will be reviewed in turn, with an eye towards understanding how they may have influenced overall adherence.
Intervention complexity

One way researchers assess intervention complexity is by examining the descriptions, guidelines, recommendations, and other supporting materials that are used for implementation. Carroll et al. (2007) note that treatments with more detailed descriptions are more likely to be implemented with higher fidelity than those with vague or broad descriptions.

Complexity - Activity Descriptions and Instructions

To assess this aspect of intervention complexity in this study, the author examined the email prompts containing the intervention descriptions that were sent to the UTAs and students before their Speech Recitations. The prompts were scored on a scale of 1-3 (3 = broad or vague, 2 = somewhat detailed, 1 = very detailed) according to 5 factors: structure (ease of identifying instructions); clarity of instructions; conciseness; ease of usability (for UTAs); and ease of enactment (for students). Analysis found that the prompts overall did not provide clear step-by-step instructions for UTAs and students to conduct and participate in the activities. Rather, the email prompts relied heavily on persuasive language and were structured as evocative scenarios or explanations the author felt might capture the essence of the activities and the rationale for doing them. Instructions were often embedded in the second or third paragraphs, making them more difficult to identify as things the UTAs and students should do separately or together. However, some of the activities scored better than others. Table 3-14 shows the final scoring for each of the activities.

Table 3-14: FOI Scores for Intervention Complexity - Activity Descriptions and Instructions

Activity             Structure  Clarity  Conciseness  Ease of Usability  Ease of Enactment  Total  % Score
Background           3          3        3            1                  2                  12     53%
Emotional Roleplay   2          3        2            1                  1                  9      73%
Speak Up!            3          3        3            3                  3                  15     100%
Team Name            1          2        2            1                  1                  7      87%
Note: Each factor worth 1-3 points; 3 = broad or vague, 2 = somewhat detailed, 1 = very detailed

Scores in Table 3-14 show that Team Name's instructions were rated the most detailed overall (13 points when the 1-3 factor scores are reversed so that higher values indicate greater detail) in terms of Structure, Clarity, Conciseness, Ease of Usability, and Ease of Enactment. Emotional Roleplay came second (11 points), followed by Background (8 points). The Speak Up! activity was last, scoring the minimum (broad or vague) on every factor. This is one indication that the activity instructions provided to both the UTAs and the students varied in overall complexity depending on the activity.

Complexity - Task Analysis

Complexity was also analyzed by looking at the number and difficulty of the tasks associated with each activity that both UTAs and students had to perform in order to conduct them as originally designed. The following Gantt charts (Tables 3-15 through 3-22) show the pre- and in-Recitation tasks for each activity; the tasks are color coded to indicate difficulty (green = not complex, yellow = moderately complex, red = very complex, a potential activity breaker).

Table 3-15: Team Name Pre-Recitation Tasks
Task 1 (UTA): Read email
Task 2 (UTA): Forward email
Task 3 (Students): Read email
Task 4 (Students): Think of team names

Table 3-16: Team Name In-Recitation Tasks
Task 1 (UTA): Call for team name suggestions
Task 2 (Students): Give team name suggestions; vote for best team name
Task 3 (UTA): Tally votes for team names
Task 4 (UTA): Announce winner

Tables 3-15 and 3-16 show a low level of complexity for UTAs conducting Team Name, with 2 easy pre-Recitation tasks and 3 easy in-Recitation tasks. Complexity for students was slightly higher, with 1 easy and 1 moderate pre-Recitation task (thinking up 2 team name suggestions) and 2 easy in-Recitation tasks.
Table 3-17: Background Pre-Recitation Tasks
Task 1 (UTA): Read email
Task 2 (UTA): Forward email
Task 3 (Students): Read email
Task 4 (Students): Choose background

Table 3-18: Background In-Recitation Tasks
Task 1 (UTA): Ask each student 3 questions about background
Task 2 (Students): Answer 3 questions about background
Task 3 (UTA): Engage each student with feedback about answers

Task analysis in Tables 3-17 and 3-18 indicates a low level of complexity for UTAs conducting Background, with 2 easy pre-Recitation tasks and 2 easy in-Recitation tasks. Complexity for students was slightly higher, with 1 easy and 1 moderate pre-Recitation task (choosing a background) and 1 easy in-Recitation task.

Table 3-19: Emotional Roleplay Pre-Recitation Tasks
Task 1 (UTA): Read email
Task 2 (UTA): Forward email
Task 3 (Students): Read email

Table 3-20: Emotional Roleplay In-Recitation Tasks
Task 1 (UTA): Call out emotional prompts for students to display
Task 2 (Students): Respond to UTA emotional prompts

Task analysis in Tables 3-19 and 3-20 indicates a very low level of complexity for UTAs conducting Emotional Roleplay, with 2 easy pre-Recitation tasks and 1 easy in-Recitation task. Complexity for students was the same, with 1 easy pre-Recitation task and 1 easy in-Recitation task.

Table 3-21: Speak Up! Pre-Recitation Tasks
Task 1 (UTA): Read email
Task 2 (UTA): Forward email
Task 3 (Students): Read email
Task 4 (UTA): Devise plan for conducting activity using confederates
Task 5 (UTA): Identify possible confederates
Task 6 (UTA): Contact confederate students (2)
Task 7 (UTA): Explain plan to confederates

Table 3-22: Speak Up! In-Recitation Tasks
Task 1 (UTA): Explain purpose of activity and expectations for participation
Task 2 (Confederate #1): Cause disruption; confederate needs to know the right time and what is appropriate
Task 3 (Students): Identify disruption and correct it
Task 4 (UTA): Give students feedback
Task 5 (Confederate #2): Cause disruption; confederate needs to know the right time and what is appropriate
Task 6 (Students): Identify disruption and correct it
Task 7 (UTA): Give students feedback

Task analysis in Tables 3-21 and 3-22 indicates a very high level of complexity for UTAs conducting Speak Up!, with 2 easy and 4 moderate pre-Recitation tasks, and 1 easy and 2 moderate in-Recitation tasks. Complexity for students was also high, with 1 easy pre-Recitation task but 2 hard in-Recitation tasks (identifying disruptions and speaking up in a constructive way to resolve them). Added to this was the complexity of coordinating with the student confederates to create disruptions that the students could identify and correct using the Speak Up! method.

Facilitation strategies

The author used several facilitation strategies to help UTAs with the team training activities. The first was to meet with all of the UTAs face-to-face a week before a particular Speech Recitation to give them an orientation to the Team Training they would be doing that week. The orientations included much of the information contained in the email prompts, including the rationale for doing the activities. Orientations were also aimed at giving UTAs strategies for using the activities. However, because the orientations were held face-to-face, the author did not model these strategies for the UTAs in an actual video conferencing session.

The author used social media as another facilitation strategy. The UTA Coordinator had already created a Facebook group for all UTAs in the course that semester.
The author simply created a second Facebook group (titled "Group Dynamics") that was only open to UTAs who were conducting team training activities. The group was established so the author could communicate more directly with these UTAs through a medium that was arguably more helpful, more collaborative, and more social than emails or face-to-face discussions. To interact with the UTAs, the author would post questions asking for feedback and insights regarding the activities they had just done, as well as give them reminders about sending links to the surveys. For example, the following excerpts from the first Group Dynamics discussion thread (9/26/2016-9/27/2016) are examples of the type of interactions that took place for this facilitation strategy:

UTA 3.3: Team Catfish. It didn't take them that long because they didn't really talk to each other that much. When I said, you guys can talk and think of a team name, they went silent.

Author: Yeah, I guess it takes a little courage or familiarity to suggest a name for a group of people you hardly know... Still, glad to hear they were able to come up with something.

Note: This exchange suggests the UTA was not clear on the Content of the activity. The instructions asked for students to think of 2 names before they came to the Recitation, but the UTA said she asked them to "talk and think of a name". It is easy to imagine students who had just met would have a difficult time talking to one another about team names; instead, the UTA should have followed the instructions and asked each student for his/her suggestions. It is also worth noting that the author's response to the UTA did not explicitly ask or advise her on aspects of fidelity of implementation (e.g. facilitation strategies to get better compliance, questions about Duration, recommendations for modeling or other aspects of Content, etc.). It raises the question of whether the author himself gave tacit approval to the UTA's lack of adherence to the intervention design.

UTA 3.1: Hi [author], my session went super well after I go [sic] zoom to open. The name activity went well also. When zoom first started everyone was chatty and complimenting each other on their clothing and what not. The team name they decided on was "The Smooth Talkers". It was between that and Speech Spartans.

UTA 2.2: Our team decided to be the terrible 2's since we're group 2. The session went very well without any technical difficulties and everyone was well in the session before the start time! Also, the students were very supportive of one another.

UTA 1.4: I forgot to update you, but we named our Group the COMBeasts

Author: COMBeasts - I like it! It's got a little aggression, a little 'tude, kind of rough around the edges feel and still manages to be fun at the same time :) Was there a lot of negotiating or was it a pretty fast decision - I ask because the length of time might indicate people's interest in defining the group's Team identity...

UTA 1.4: Pretty fast' I put it out there for discussion and one girl suggested it and immediately I had 5 or 6 students say oooo I like that and then we decided

Note: Again, this exchange suggests the UTA did not adhere to the instructions of the activity. The UTA seems to suggest that one student made a team name suggestion and that, judging from the immediate reactions of some of the students, that was the name she chose, ending the activity at that point. It is unclear whether other students got to make suggestions of their own.
The author also used the Facebook group as a forum to share personal feedback and insights from the UTAs on each of the Team Training activities. Participation in these threads (3-4 posts per activity) was not as high as it was for the first discussion thread (9 posts total). The following are excerpts from discussions on the Background, Emotional Roleplay, and Speak Up! activities:

Background:

UTA 3.4: So not many of my kids had things in the background so rather than just talking to the kids that had them, I had everyone go through and say what they could have something in their background if they were have prepped earlier and all of them were pretty interesting and they ranged all over the place. One boy had an Elvis poster because he really liked Elvis and that type of music. One girl wanted a picture of a beach because she loves how it makes her feel calm and collected. One girl said her mom is an artist and she would want a picture of one of her mom's pieces because she loves and misses her. Another guy said he wanted a Texas flag because he is from Houston and another guy said he would want the American flag because he's proud of where he's from.

UTA 3.4: My kids didn't have anything in their background this time either, which was mostly disappointing, so I did what I did the last time and improvised. The kids were not very imaginative and the background objects were very boring. They mostly were flags of different things and the one boy said he would have a wake board in the back of his screen because he did his speech on how to properly wake board. I hope this helps you, as it did last time.

UTA 3.3: I thought that this activity went better than the team name activity. I think it was harder for my group to come together as a group for the team name. By going one by one and sharing information in the background activity, I think it made them a little more comfortable socially. I used [UTA's] advice if they did not have anything in their background and that worked out well.

Emotional Roleplay:

UTA 1.3: So I did do the Emotional Roleplay Activity and it went pretty well. The students didn't react to it with a whole lot of enthusiasm, but they participated. I hope that's helpful!!

UTA 1.2: For my section the activity went OK! Most of the students were confused about it, but just passively participated.

UTA 2.2: My activity went well, the students actively participated showing excitement, boredom, and a few others. They even giggled a little. I forgot to record the activity in the beginning but next week I'll make sure to!!

Speak Up!:

UTA 3.1: I just finished my recitation. Speeches went well. The activity was speak up. I teamed up with 3 kids. They did a really good job being distracting. I only had one student speak up and say something tho that wasn't a part of my little team. Kind of disappointing. I had the kids who knew about it speak up so maybe the other ones would feel more comfortable. No leaders in my group haha.. Maybe next week. It's recorded for your viewing and is only about 25-30 mins long if that.

UTA 3.3: it was kind of disappointing in my second section. I tried to have two people do the distractions for the first recitation and since they both knew what was going on, they were the only ones that spoke up about each other. I didn't want that to happen again so I chose one person to do the distraction and they never responded back to me and I ended up having no distraction in my second recitation.
I wish I could've been more helpful the second time around

A review of the group discussion threads shows fewer than half (5 out of 12) of the UTAs participated in posting feedback to the facilitation group after the first activity. However, cross-referencing participation in the Facebook group with Adherence scores did not reveal a moderating relationship; that is, it was not the case that UTAs who participated more in the Facebook group scored higher overall on Adherence.

Participant responsiveness

Participant responsiveness is defined as a factor impacting an intervention's "acceptance by and acceptability to those receiving it" (Carroll et al., 2007, p. 6). Participant responsiveness concerns the attitudes and practices of those receiving the treatment - in this case, the UTAs who received team training and were responsible for conducting the activities with the students, and the students who took part in the activities. Although video analysis provides some visual evidence about the UTAs' attitudes towards conducting the activities with their students, it is difficult to measure their participant responsiveness accurately. One potential objective measure of participant responsiveness was the number of times the UTAs actually conducted the activities with their students. As the Frequency scores (Table 3-3) showed, many of the UTAs attempted the first activity (Team Name), with 10 of the 12 UTAs conducting it with their students. Frequency then drops, with fewer and fewer UTAs conducting the activities with their students as the semester went on. On the other hand, the author recalls that none of the UTAs expressed reservations about conducting the activities with their students; some in fact expressed eagerness to be a part of the overall study. In short, there was a disconnect between what the UTAs as a group expressed to the author in person about the activities and the study and what the UTAs actually did with their students. This disconnect between what was said and what was done makes it hard to evaluate precisely the responsiveness of those responsible for conducting the activities or participating in the study overall.

As for student participant responsiveness, analysis of the recorded Recitations shows students were generally willing to follow the directions of the UTA. Student responsiveness, however, seemed to reflect UTA responsiveness, meaning that when a UTA showed enthusiasm for conducting the activity, students seemed to respond in kind. UTAs that paid attention to Adherence factors also seemed to garner better student responsiveness. For example, UTAs that paid attention to Coverage (e.g. calling on every student to make sure all students got an opportunity to engage) appeared to boost student responsiveness. On the other hand, some students, when called on (e.g. when they were asked to suggest team names), showed little interest in the activity and simply made a suggestion on the spot to satisfy the UTA's request. In short, the available data seem insufficient to draw definitive conclusions about the state of participant responsiveness to the intervention.

Quality of delivery

Quality of delivery concerns "whether an intervention is delivered in a way appropriate to achieving what was intended" (Carroll et al., 2007, p. 6). Review of the video-recorded Recitations suggested many of the UTAs had little commitment to what could be called the "spirit" of the intervention.
For example, the UTAs would often characterize the activities to their students as something that was "part of that study" or "some experiment we're supposed to do". These types of statements may have been interpreted by the students as meaning the activities had little or no relation to the rest of the class and hence were of little value. The author also evaluated the style of the UTAs when conducting the activities. Style and tone varied between the UTAs, presumably based on a number of personality factors. For example, some UTAs displayed lively and engaging styles of delivery by asking frequent questions, joking with students, and generally showing interest in the outcomes. Other UTAs conducted the activities in styles that could be characterized as monotone or wooden. Style of delivery also seemed to vary according to the activity being conducted. In particular, some of the UTAs (and students) would often smile and laugh during Emotional Roleplay, communicating a greater enthusiasm for the activity to their students. The other activities - Team Name, Background, and Speak Up! - did not elicit the same type of emotional response as reflected in the UTAs' style of delivery. Still, it is difficult to draw any clear conclusions because of a lack of multiple raters.

Summary: Integrating the Results of Research Questions 1, 2 and 3

The first part of this chapter reported on how the author used quantitative analysis to examine what impact, if any, an intervention based on theories of team training had on social presence (Research Question 1) and group cohesion (Research Question 2) in a videoconferencing learning environment. The results of these tests showed evidence that team training positively influenced one group's perceptions of a sub-factor of social presence (Copresence/Psychological Involvement, i.e. sensing the presence and thoughts of others) during survey Time 4 and in the consolidated data set, but that the effect size was too small to be important or meaningful. The data showed no significant effect for team training on the other sub-factor of social presence (Behavioral Engagement) or on group cohesion.

To better understand the results for Research Questions 1 and 2, the author conducted a fidelity of implementation (FOI) analysis, i.e. scoring the observable performance of the team training activities for adherence to the intervention's original design. Taken as a whole, FOI analysis showed all three treatment groups had generally low adherence to implementing the team training activities as they were originally designed. However, FOI analysis did identify that the point of highest scored adherence occurred in survey Time 2 and was achieved by Group 3 (.23 and .27 (Null) for the 2 Recitations in survey Time 2). Group 3 also had the highest overall Adherence average (.11 (Null)) compared to Group 1 (.04 (Null)) and Group 2 (.03 (Null)).

Stepping back, one can see how the findings from the three research questions inform one another. In particular, the results from the FOI analysis provided insight on the results of the quantitative analyses for Research Questions 1 and 2. That is, the survey data as a whole are consistent with low to no adherence to the intervention's original design on the part of the UTAs.
What is more, quantitative analysis of the survey data showed that even in the instance where adherence was highest (Time 2 for Group 3), univariate analysis showed no evidence of a statistically significant relationship between treatment group and outcome for any of the factors for that time or the time after. On the other hand, univariate analysis of Copresence/Psychological Involvement by group in Time 4 did show a significant effect, but with an effect size too small to be meaningful (see Appendix C). In addition, pairwise comparisons of the four groups showed significant variation in scores for Copresence/Psychological Involvement between Group 3 and the other treatment groups and the control group, again at Time 4 (see Appendix D). Again, that effect did not appear at the time when that group's adherence to fidelity of implementation was highest, but rather 2 survey times later. One possible hypothesis that could explain this finding is that the beneficial effects of Adherence to the original design of the interventions took time to become detectable in the survey results.

Finally, FOI analysis of possible moderators of adherence suggested that several factors - including complexity of design, facilitation support, quality of delivery, and the cultural milieu of the study context - may have influenced adherence. FOI analysis also identified variation in the complexity of the team training activities that may have influenced how often a team training activity was conducted. For example, Speak Up!, an activity rated as both broad (vague) in its instructional description and difficult in pre- and in-Recitation task complexity, also had the lowest Adherence scores of any of the team training activities. Clear conclusions, however, could not be drawn about the influence of other moderators, such as facilitation support and quality of delivery, on adherence to fidelity of implementation for this study's intervention because of limited data. In addition, the FOI analysis does not provide an explanation as to why the UTAs in Group 3 achieved higher Adherence relative to UTAs in the other treatment groups. Chapter 4 will discuss these findings and attempt to put their implications in perspective.

CHAPTER 4

DISCUSSION AND IMPLICATIONS

The purpose of this study was to test the general hypothesis that facilitator-led interventions can significantly improve factors of group dynamics in video conferencing settings. The facilitator-led interventions were modeled as team training exercises and based on the theoretical principles of structuration and symbolic conveyance. The interventions were then administered to roughly a third of the students in an introductory college-level course on public speaking. To determine whether the interventions had an effect, surveys designed to measure social presence and group cohesion were administered to all the students in the course. Results from a quantitative analysis of the survey data show the intervention had no meaningful effect on the group dynamics factors of social presence and group cohesion in the groups that took part in the intervention. This outcome led the author to conduct a fidelity of implementation (FOI) analysis to determine what factors may have played a role in the final results. This analysis produced important findings about factors that likely moderated fidelity of implementation in this study, and thus have potential implications for similar studies as well as interventions in similar contexts.
The purpose of this chapter is to discuss the findings in Chapter 3 in order to better understand the factors and processes at work. This discussion in turn is meant to help formulate recommendations on how a study like this might be conducted more effectively in the future.

Discussion of Results

What FOI Analysis Can Tell Us About Intervention Design

As shown in Chapter 3, there was very low fidelity of implementation in this study, and given that low fidelity, it is unreasonable to expect the interventions to have had a significant effect on group dynamics. Furthermore, fidelity of implementation analysis revealed that the UTAs varied in all 4 factors of adherence - duration, frequency, content, and coverage - with regard to implementing the different interventions. A better understanding of these variations can yield insights into the low adherence. The following sections discuss each of these factors in relation to the 4 activities. It is the position of the author that variation among the 4 factors of adherence may be due in part to the variation in complexity among the different team training activities.

Different interventions, different FOI results

TEAM NAME ACTIVITY

Team Name was the first team training activity conducted by the UTAs with their students. It was designed primarily to foster group cohesion through symbolic conveyance. The pre-Recitation task for the UTAs was to email a prompt that explained the benefits of having a team name for the semester and asked the students to think of 2 team names for their Recitation group; the pre-Recitation task for the students was to think of 2 team names. The in-Recitation tasks for the UTAs were to ask each of the students for their team name suggestions, ask for a vote on the different team names, tally the votes, and declare a winning team name for the group. The in-Recitation tasks required the students to present their team name suggestions when called upon and to cast their vote for one of the suggested names.

As an intervention, Team Name had the second highest overall Adherence score average (.07 (Null), .06 (0)) after Background (.11 (Null), .09 (0)). Task analysis indicates a low level of complexity for UTAs conducting Team Name, with 2 easy pre-Recitation tasks and 3 easy in-Recitation tasks. Complexity for students was slightly higher, with 1 easy and 1 moderate pre-Recitation task (thinking up 2 team name suggestions) and 2 easy in-Recitation tasks.

It should be noted that the Team Name adherence scores might have been influenced by the timing of the activity. Team Name was the first team training activity that all of the UTAs attempted, so there may have been a kind of "novelty effect" at work, but there are no data to confirm this. Team Name was attempted first because the author felt having the different groups create a team name would help forge group identity early in their time together. At first glance, the idea of having students create a team name for their group seemed like a simple and effective way to start to establish some sort of collective identity. A team name also seemed like a good way to boost morale and belongingness, in that students could feel they were no longer just a random collection of students but a team of individuals with a shared sense of purpose. Team Name may have benefited from being the first activity conducted by the UTAs with the students, but its actual adherence scores were still low compared to the possible scores for full adherence.
Much of this can be attributed to very low Content and Coverage scores, in that many UTAs simply announced the activity to their students with statements like, "Ok, so today if you read the email we're supposed to do...", and "So before we get started I'm supposed to have you all...". Introductions like this (which were in fact common to all the activities) implied that UTAs felt the activities were not a part of the course content or their responsibilities. It also suggests the UTAs wanted their students to know they were not directly involved with the creation of the activities and wanted to shift responsibility for any failure or confusion the activities might cause onto a third party. In addition, analysis of the Recitation videos shows the team names were rarely used by the UTAs or the students in regular group communication, implying that they considered it unnecessary, that it never became part of their communication routine, or that they had simply forgotten what it was or even that they had created one in the first place.

BACKGROUND ACTIVITY

The Background activity was designed to foster social presence and group cohesion through structuration and symbolic conveyance (e.g. a student describes how the background behind her in the Recitation session is both personal and professional). For the Background activity, the pre-Recitation task for the UTAs was to email a prompt that asked students to think carefully about their background and how it relates to the framing and composition of their image in a videoconference, and to be prepared to answer 3 questions about why they chose to use it during the Recitation. The pre-Recitation tasks for the students were to read the email, to think about what background they would use for their next Recitation, and to be prepared to answer the 3 questions in the email. The in-Recitation tasks for the UTAs were to ask each student the 3 questions about the background they had selected and to give them feedback on the personal and professional aspects of their selection. The in-Recitation task for the students was to answer the UTA's questions about their background selection.

As an intervention, Background had the highest overall Adherence score average (.11 (Null), .09 (0)). Task analysis indicates a low level of complexity for UTAs conducting Background, with 2 easy pre-Recitation tasks and 2 easy in-Recitation tasks. Complexity for students was slightly higher, with 1 easy and 1 moderate pre-Recitation task (choosing a background) and 1 easy in-Recitation task.

Background was designed to improve social presence through symbolic conveyance, in that the students could share personal and professional details about themselves through the backgrounds they chose to present to others in their Recitation group. The activity was also meant to enhance group cohesion through structuration, in that the students could think critically about the effect their choice of framing had on others in their group. Background was also a chance for the students to practice their interpretation of best practices of video conferencing (lighting, distance, background) and to get constructive feedback on their choices from their fellow students and the UTA.

While its complexity ranked low in terms of task analysis, video analysis showed Background could nonetheless be a tricky activity to conduct with a group of students.
The activity called for students to answer 3 questions about their choice of background in the hope that students would incorporate personal details, but in many instances students chose simple white backgrounds with very little personal detail. When the UTA asked the 3 questions, many students said they chose plain backgrounds because they seemed professional and nondistracting. While this may be true in most cases, it is also possible that some students were reluctant to share details of their personal space with others. However, questions 2 and 3 specifically asked students to consider what message or information the personal details in their backgrounds conveyed to others, so the choice of plain white backgrounds rendered those questions moot. The author did notice that plain white or nondescript backgrounds became a kind of de facto standard for the majority of the Recitations, and so, in a way, the students did establish a kind of norm in relation to video conferencing with one another. The norm just happened to run counter to what they were supposed to learn and practice in the Background activity. The students' choice of plain backgrounds is also possible evidence of students opting out of the complexity of the activity, choosing a simpler solution than thinking, and having to answer questions, about their background decisions. If true, this would indicate again that complexity played an important role in moderating fidelity of implementation, this time through participant responsiveness.

EMOTIONAL ROLEPLAY ACTIVITY

The Emotional Roleplay activity was designed to foster social presence and group cohesion through symbolic conveyance (e.g. students contribute their personal interpretation of different emotional states to create a shared group experience). The pre-Recitation task for the UTAs was to email the description and explanation of the activity to the students. The pre-Recitation task for the students was to read the email sent by their UTA. The in-Recitation task for the UTAs was to think of emotions the students could act out and to get the students to collectively engage. The in-Recitation task for the students was to act out the emotion the UTA called for, preferably in a way that others could see.

As an intervention, Emotional Roleplay had the third highest overall Adherence score average when missing data was treated as Null (.06) and was tied with Team Name when missing data was treated as 0 (.06). Task analysis indicates a very low level of complexity for UTAs conducting Emotional Roleplay, with 2 easy pre-Recitation tasks and 1 easy in-Recitation task. Complexity for students was the same, with 1 easy pre-Recitation task and 1 easy in-Recitation task.

Emotional Roleplay was designed to foster social presence among the students by having them mimic and share emotional states through behavioral engagement. The intended effect was to heighten the students' sense of copresence and psychological involvement with each other and with the group. Based on the Adherence sub-factor scores, Emotional Roleplay was conducted more frequently, for more appropriate periods of time, and with more participants than either Background or Speak Up!. Emotional Roleplay tended to score low on Content, however, because UTAs often failed to explain the purpose of the activity, possibly because they did not quite understand it themselves.
Content scores were also low because UTAs did not model the activity for their students, meaning that the UTA often watched the students perform the different emotions without joining in, thereby negating the group effect of shared emotional states (e.g. laughing together, feeling sad or disappointed together, etc.). Finally, video analysis showed some of the UTAs were not adept at calling out emotions that the students could convey easily through facial expression in a video conferencing environment (e.g. they called for emotions such as "suspicion", "bemusement", and "mistrust", which are much harder to convey than "joy" or "anger"). Emotional Roleplay seemed to garner the most engagement when students were mimicking emotions that were broad and expressive, such as "joy", "anger", and "confusion". Group size may have also been a factor, in that participation in smaller groups (4-5 students) seemed more halting and awkward than participation in larger groups. One interesting effect worth noting: when watching the video recordings of the sessions, the author often caught himself mimicking the emotions along with the students, and even experiencing their effects, even though he was not directly interacting with them.

SPEAK UP! ACTIVITY

The Speak Up! activity was designed to foster social presence and group cohesion through structuration. That is, students would see or hear a disturbance that affected their attention or concentration, recognize that the disturbance was a threat to the quality of the group's interaction, and take steps (i.e. speak up) to nullify the disturbance on behalf of themselves and the group. The pre-Recitation tasks for the UTAs were: to send students an email describing the importance of minimizing visual and audio distractions in video conferencing because they can affect the quality of the experience for everyone; to identify student confederates who would create disturbances during the Recitation for the other students to speak up about and correct; to contact the confederates and describe the activity and their roles; and to plan what each confederate would do to create a disturbance and at what time. The Speak Up! pre-Recitation task for all of the students was to read the email; in addition, 2 students were required to act as confederates and create some kind of disturbance during the Recitation that other students would identify and correct. The in-Recitation tasks for the UTAs were to explicitly give students permission to speak up if they saw or heard a disturbance during the Recitation, and to give the students feedback on the effectiveness of their attempts to correct the disturbances. The in-Recitation tasks for the students were to stay mindful of audio and visual disturbances, and to speak up to correct them when they occurred. Two students were also to act as confederates in the activity, creating disturbances for the other students to correct and doing so at appropriate or effective times. Note that the activity would not take place (i.e. there would be no disturbance for the students to correct and for the UTA to give feedback on) if the student confederates failed to do their part.

As an intervention, Speak Up! had the lowest overall Adherence score average at .00 (Null and 0). Video analysis showed that during the times UTAs attempted to conduct Speak Up!, there was never any discernible response from the students. Likewise, the UTAs never gave feedback on students' lack of response to possible Speak Up! disruptions.
In the video recordings, Speak Up! was announced as an activity in which the students would all participate, but it then never actually took place. Task analysis indicates a very high level of complexity for UTAs conducting Speak Up!, with 2 easy and 4 moderate pre-Recitation tasks, and 1 easy and 2 moderate in-Recitation tasks. Complexity for students was also high, with 1 easy pre-Recitation task but 2 hard in-Recitation tasks (identifying disruptions, and speaking up in a constructive way to resolve them). Added to this was the complexity of coordinating with the student confederates to create disruptions that the students could identify and correct using the Speak Up! method.

As an activity, Speak Up! was meant to foster group cohesion through structuration by reinforcing the idea that the students were all stewards of the standards of quality related to their video conferencing environments and experiences. Practically speaking, the activity was meant to help students practice managing the audio and video quality of their shared video conferencing experiences in helpful and constructive ways that benefited both themselves and the rest of the group. But Speak Up! proved an especially difficult activity, both for UTAs to conduct with the students and for the students to participate in as intended. A number of UTAs reported having difficulty finding confederates to help them stage the disruptions that were at the heart of the activity. In addition, video analysis and interviews with the UTAs show that even when disruptions occurred as planned, students either had a hard time identifying them (because other disruptions were taking place) or simply did not "speak up" to correct them. These findings suggest that Speak Up! failed in part because the activity depended on students knowing what to do and when to participate without prompting from the UTA. This was in stark contrast to the other activities, in which the UTA was the primary driver of student participation (e.g. asking questions about students' backgrounds or calling out emotions to mimic). In retrospect, Speak Up! was a complicated activity for all involved, both in terms of pre-Recitation preparation and in-Recitation execution. Given the role and significance of intervention complexity in fidelity of implementation, it is no surprise that everyone involved either failed to realize the activity in all its particulars or avoided it altogether.

Moderators Play an Important Role in FOI

Chapter 3 identified a number of factors as potential (i.e. likely) moderators of adherence with regard to fidelity of implementation. These factors included intervention complexity, facilitation strategies, participant responsiveness, and cultural milieu. The following sections discuss the impact each of these had on fidelity of implementation.

INTERVENTION COMPLEXITY

FOI analysis of all four measures of adherence (frequency, duration, content, and coverage), analysis of moderators, and task analysis of the different activities suggest that intervention complexity may have been the primary moderator of fidelity of implementation. A comparison of measures from 2 activities – Speak Up! and Emotional Roleplay – best illustrates this point.

Consider Speak Up! first. While all the activities suffered from low adherence to fidelity of implementation, Speak Up! had particularly bad scores.
It was rarely attempted by the UTAs (just 3 times out of a possible 24, compared to Emotional Roleplay's 11 times) and had the lowest scores across every measure of adherence to fidelity of implementation. At the same time, analysis of the different moderators identified in the FOI analysis shows that Speak Up! received the same kind of facilitation support as Emotional Roleplay and the other activities. In addition, the different activities were conducted within different groups at different times during the semester, and the data show that Emotional Roleplay (an easy activity) actually increased overall intervention frequency from 2 times (in Time 5) to 4 times (in Time 6). This suggests that the time at which an activity was conducted during the semester was not an important factor.

The task analysis described in Chapter 3 provides additional evidence for the importance of complexity as a moderator. The Speak Up! activity contained several pre-Recitation and in-Recitation steps that caused it to score as much more complex than Emotional Roleplay and somewhat more complex than the other activities. Conversely, Emotional Roleplay (the activity scored as the least complex) was conducted more times (11) than even the first activity tried (Team Name, 9 times). Scores for Frequency, Duration, and Coverage were also highest for Emotional Roleplay when it was first conducted by the different groups, again regardless of when in the semester it was conducted. One way to interpret this is that the simplicity of conducting Emotional Roleplay with others made it an attractive activity for the UTAs to try, regardless of when it was done in the semester. Emotional Roleplay required almost no preparation on the part of students and the UTA – the UTA simply had to call out a series of emotions for the students to mimic. Contrast this with the other activities, which required UTAs to interact one-on-one with students (e.g. Team Name and Background) or to make special arrangements with particular students (e.g. Speak Up!). At this point, the author wonders if this simplicity not only made doing Emotional Roleplay easier, it made thinking about doing Emotional Roleplay easier than the other activities.

Pulling back from the measurements, it seems intuitive that asking a group of people to do something complex will not be as successful as asking them to do something simple. It also seems obvious that an activity that hinges on simple call and response will be easier to grasp and do with others than an activity that requires extensive planning and a level of advance cooperation from some, but not all, of the participants. Indeed, task analysis of the different activities showed an activity like Speak Up! runs the risk of breaking down altogether if just one person fails to cooperate. Finally, and maybe not so obvious, is the idea that merely considering the complexity of how to go about conducting a team training activity with others might have been enough to impact fidelity of implementation. In other words, when considering whether or not to do a team training activity with their students, did some UTAs balk simply because they thought it would be too complex to try in the context of the Recitations? Further research on this question may be valuable, in that helping facilitators understand and navigate the complexity of an intervention ahead of time may help them feel more comfortable conducting it with their group and lead to greater fidelity of implementation.
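As a rough illustration of how the task-analysis counts reported above might be rolled up into a single per-activity complexity score, consider the sketch below. The 1/2/3 weighting of easy, moderate, and hard tasks is a hypothetical rubric chosen for illustration, not the scoring scheme actually used in Chapter 3; the task counts are those described earlier in this chapter for the UTAs' pre- and in-Recitation tasks combined.

```python
# Hypothetical roll-up of task-analysis counts into a single complexity score.
# Weights are an illustrative assumption; the task counts come from the
# activity descriptions above (UTA pre- and in-Recitation tasks combined).
WEIGHTS = {"easy": 1, "moderate": 2, "hard": 3}

uta_tasks = {
    "Team Name":          {"easy": 5, "moderate": 0, "hard": 0},
    "Background":         {"easy": 4, "moderate": 0, "hard": 0},
    "Emotional Roleplay": {"easy": 3, "moderate": 0, "hard": 0},
    "Speak Up!":          {"easy": 3, "moderate": 6, "hard": 0},
}

def complexity(counts):
    """Weighted sum of task counts, higher = more complex to conduct."""
    return sum(WEIGHTS[level] * n for level, n in counts.items())

for activity in sorted(uta_tasks, key=lambda a: complexity(uta_tasks[a])):
    print(f"{activity:<20} complexity = {complexity(uta_tasks[activity])}")
```

Even under this crude weighting, Speak Up! scores several times higher than the other activities, which is consistent with it being attempted least often (3 times) while the lowest-scoring activity, Emotional Roleplay, was attempted most often (11 times).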
FACILITATION STRATEGIES

As noted in Chapter 3, this study incorporated a number of facilitation strategies to familiarize the UTAs with the team training activities and how to conduct them with their students. These strategies included: holding a face-to-face orientation meeting with the UTAs a week before the semester began to explain the purpose and scope of the study; meeting with the UTAs face-to-face to give instructions and discuss the particulars of each activity; sending the UTAs emails 48 hours before their Recitation sessions that gave background information and explained what students and UTAs should do in each activity; and hosting a Facebook group where UTAs could share their results and experiences with the activities with the researcher and their UTA peers.

The UTAs' responses to most of these strategies seemed positive, but in retrospect a reevaluation of each is definitely in order. For example, holding an orientation prior to the course was a good strategy, but holding it only a week before classes started did not give the UTAs enough time to achieve the level of practical and theoretical fluency necessary for conducting the activities with their students. In other words, their knowledge of the activities was not well formed before they were asked to perform them with their students. Likewise, meeting with the UTAs face-to-face was seemingly effective at communicating the main ideas and goals behind each of the activities, but it did not give the researcher the opportunity to model the activities for them in an actual video conferencing environment, and it did not give the UTAs themselves a chance to practice conducting the activities with others. Seen in this light, it is not surprising that many UTAs missed points for "Modeling the Activity" in the Content portion of the FOI analysis – they had never seen the activities modeled for them in the first place, so they were unsure what that looked like! As for sending emails 48 hours ahead, this added an extra layer of complexity for the UTAs, in that they were required to forward these emails to their students. Judging by student responses in the Recitation sessions, it is unclear whether all the students a) received the emails and b) had sufficient time or inclination to read them. Compounding this complexity is the fact that the emails were not clearly structured to let UTAs and students know what they should actually do for each activity. Finally, hosting a Facebook group was a good communications and support strategy that in retrospect could have been used much more extensively and effectively. For example, the young adults in this study seemed more responsive to communicating through social media channels than through emails. Thus, communication could have been simplified and perhaps made more effective if the Facebook group had been used more extensively or had even been the exclusive communication channel.

Finally, the author made some effort to encourage UTAs to use team training techniques on a regular basis throughout the semester. For example, the author suggested the UTAs use the group's team name when communicating with the students. These efforts were inconsistent, however, in that the author may not have been explicit in his instructions to the UTAs that they should continue to use the techniques they learned in previous activities. The author assumed that participating in the activities meant the underlying techniques had been learned and that using them from then on would be obvious.
Video analysis shows this was not the case, as none of the videos showed either the UTAs or the students using the team training techniques beyond the actual activity time. A stronger facilitation strategy would have been to prompt the UTAs to use the techniques from each activity in other Recitations, and to monitor their use and offer feedback on how they were doing.

PARTICIPANT RESPONSIVENESS

It is the opinion of the author that participant responsiveness on the part of the UTAs, in relation to being a part of the study, was good. In other words, in instances when the author met with the UTAs in person, or when he communicated with them through email or the Facebook group, they generally seemed enthusiastic about the team training activities and curious about trying them with their students.

Analysis of the video recorded Recitations and the overall and individual FOI analysis scores tell a different story. The video recordings show that out of the 84 potential times the activities could have been conducted during the semester, there were 35 times when the UTAs simply chose not to do them. It is difficult to tell whether UTAs willfully ignored doing the activities with their students or whether they simply forgot – the data is incomplete in this regard. Video analysis also showed no evidence of follow-through, in that both UTAs and students seemed to treat the activities as one-off events that did not carry over from Recitation to Recitation. For example, team names were created but never used. UTAs and students rarely commented on the composition of each other's backgrounds beyond doing the actual Background activity. Disruptions continued to go unaddressed by the students even after they were given instructions and permission to speak up. As mentioned earlier, how simple or complex an activity was seems to have had a significant impact on adherence to fidelity of implementation. It also seems likely that complexity partly impacted UTA responsiveness as well. The simplest intervention in this study was attempted more times than the more complicated ones, suggesting issues of complexity were precursors to UTAs even attempting participation in the study with their students.

CULTURAL MILIEU

As discussed in Chapters 2 and 3, the term "therapeutic milieu" refers to "how responsive the environment is into which an intervention is introduced" (Carroll et al., 2007). The author has chosen to rename this term "cultural milieu" for the sake of clarity in this study, but the essence of the definition remains the same. As discussed in Chapter 2, the cultural milieu of COM 100 presented a number of complex challenges to successfully implementing the interventions at the core of this study. Chief among these were: the limited amount of time available to UTAs and students during the Recitations; the decentralized decision-making structure of the course leadership and the "outsider" status of team training relative to the core course requirements; and the occasionally awkward power dynamics between the UTAs and their students.

TIME CONSTRAINTS

Video analysis showed the UTAs and students were often keenly aware of time during their video conferencing Recitations. Recitations were scheduled to last for 80 minutes and there were usually between 10 and 12 students in each Recitation. Depending on the type of speech the students were supposed to make (e.g.
1 minute for each Special Occasion speech, 5-7 minutes for each Persuasive speech), time-in-Recitation was a commodity that changed in value over time. In other words, there was more time to make speeches in the earlier Recitations than there was in the later Recitations. In the author's opinion, the perceived amount of time available to the UTAs in-Recitation had an impact on their decisions to conduct team training activities with their students. Consider, for example, a Recitation of 10 students that meets to perform their Persuasive speeches. If each student takes the full 7 minutes to make their speech, it would take 70 minutes for the students to finish. Add to this the time it took for the UTA to mark the rubric sheet for each speech and to transition from student to student, and very little time remains for any activities beyond what was specified as a core course requirement.

Perhaps more significant was the fact that the Recitations left little time for the students to engage in interactions among themselves beyond either giving or listening to a speech. The roles and responsibilities for students in the Recitations were fairly well defined: if you were not speaking as a presenter, you should be listening as an attentive audience member. Video analysis shows that some of the UTAs in the treatment groups occasionally asked students to offer feedback to presenters after they spoke, but for the most part, UTAs and students rarely spoke or reacted to one another beyond asking questions about the course, such as when assignments were due and what was required. Both UTAs and students seemed mindful that time was of the essence and that too much interaction could force them all to stay past the time limit. In short, the team training activities were designed not to take a lot of time, but the structure of the Recitations meant there wasn't really a lot of time to give to conducting the activities in the first place.

SLOW MARGINALIZATION OF THE AUTHOR AND HIS STUDY

As originally designed, COM 100 had one primary instructor, 5 graduate teaching assistants (GTAs), and 1 UTA coordinator. For the UTAs in the course, that potentially meant looking to 3 different sources or "bosses" for answers to questions about the course. Enter the author of this study. He worked with the primary instructor pre-semester to devise a plan to implement video conferencing as a platform for hosting Recitations, with the expectation that he would be able to conduct the present study. However, as the semester got underway, both the UTA Coordinator and the GTAs extensively revised the changes the author had initially made to the syllabus. One change suggested by the UTA Coordinator was aimed at reducing the number of times the students met in video conferencing Recitations. Another change by both the GTAs and the UTA Coordinator removed a self-reflection assignment connected with watching the recorded Recitations. The end result of these changes was that UTAs were often unsure about what was still valid in the syllabus and what had changed from the author's original course design. It is the author's opinion that the GTAs and the UTA Coordinator had slightly different but not incompatible priorities. The GTAs were interested in changes that would make their responsibilities and duties simpler and less demanding. The UTA Coordinator was interested in basically the same thing, not for himself but for the UTAs.
What was not a priority for either the GTAs or the UTA Coordinator was the implementation of the present study. This is not to say that either party deliberately undermined or negatively impacted the study. On the contrary, the author felt they contributed as much support as they could and that they were very open to collaboration. In retrospect, however, the GTAs and the UTA Coordinator were focused on what they understood to be the primary purpose of the course – teaching and grading students on principles and concepts in public speaking. Team training activities to enhance group dynamics did not seem to be a part of that core purpose. For instance, the author was introduced to the UTAs at the beginning of the semester as someone who had helped redesign the syllabus to accommodate video conferencing for Recitations. Yet much of the correspondence in the UTA support group hosted by the UTA Coordinator on Facebook (not the author's Facebook group) focused on helping UTAs navigate their interactions with students. This meant posting messages about correct Zoom IDs for hosting Recitations, assignment due dates, changed schedules, possible instances of plagiarism and how to respond, questions about missed Recitations, and so on. On the other hand, when changes to the syllabus were made, the UTA Coordinator rarely if ever mentioned the author as someone who could help clarify issues. The end result was that the author became increasingly peripheral to decision-making and important course-related matters as the semester went on. In the context of COM 100's cultural milieu, the author slowly became someone the UTAs could potentially ignore without serious repercussions. It is possible, therefore, that some of the UTAs reasoned they could disregard instructions to conduct team training with their students because it wasn't part of their core commitments as UTAs. Again, it is possible, though unverifiable, that the drop in Frequency for all but one of the time periods in which activities could be conducted was due in part to UTAs silently opting out of the study simply because there would be no negative consequences for doing so.

LEADERS VS. PEERS

As noted earlier, the UTAs in this study were undergraduates who had taken COM 100 themselves only a year or two before. This meant most of the UTAs were not much older than the students they were asked to guide through the course. The result was a cultural milieu in which "peer-to-near-peer" instruction and assessment was one of the primary ways students engaged with the course. It is the informed opinion of the author that the slight differences in age and emotional maturity may have led some of the UTAs to regard and interact with the students more as peers and less as authorities in the course. The author observed a number of instances in Recitations where UTAs seemed reluctant to project themselves as leaders for their group. For instance, some UTAs would start their official Recitation time with phrases like, "Do you guys just want to get started?" or "It looks like most everyone is here, I guess we'll just go ahead and start." In another instance, some students joined their Zoom session early and waited for the rest of their peers to join. One of the students then began playing music containing profanity that was clearly audible. The UTA then joined the Zoom session, but instead of mentioning to the student that the music was not appropriate for this particular group setting, he simply let the music continue.
Indeed, the UTA and the student seemed to be friends or acquaintances, because they spoke to each other about issues unrelated to COM 100. Meanwhile, the lyrics in the music seemed to cause some students to feel awkward or embarrassed; one female student left the session and rejoined when it was closer to the actual start time of the Recitation. When it was time to start, the student turned off the music and the Recitation began without incident. Still, the UTA seemed to have difficulty switching from being a friend to being the UTA, and the team training activity he attempted with his students (in this case Emotional Roleplay) seemed perfunctory and lacking in both Duration and Content.

Recounting these exchanges between UTAs and students is not meant to question their abilities or commitment to their responsibilities in the course. Rather, these episodes may be evidence that in COM 100's unique cultural milieu (i.e. peer-to-near-peer instruction and assessment), some UTAs were not entirely comfortable in the role of leading their students through the different team training activities. The Background and Speak Up! activities in particular asked UTAs to assume a certain air of authority that may have felt awkward given the minor age difference between themselves and their students. In the opinion of the author, some low scores for Duration and Content may be evidence that UTAs felt that acting as leaders in the team training activities was outside the scope of their responsibilities as facilitators in the course.

TIMING OF THE TEAM TRAINING ACTIVITIES

Results from statistical analysis show there was a significant interaction effect with a negligible effect size for Copresence/Psychological Involvement at survey Time 4. Analysis also showed that Group 3, the group with the highest Adherence scores among the 3 treatment groups, was the source of this statistical significance. Moreover, UTAs in Group 3 achieved some of their highest Adherence scores for the interventions they conducted during the first 2 survey Times. While the data is inconclusive, one hypothesis for this finding is that achieving good Adherence at the beginning of the series of team trainings had a positive effect on Copresence/Psychological Involvement later in the semester. In addition, the team training activities they performed for survey Times 1 and 2 (Team Name and Background) ranked low in complexity. Again, while the data here is far from conclusive, one hypothesis is that the UTAs' early-semester Adherence to team training activities that were low in complexity had a positive effect on students' perceptions of Copresence/Psychological Involvement later in the semester, specifically at Time 4.

Implications

Drawing clear conclusions and implications from the available data in this study was challenging. Quantitative analysis of the survey data indicated that facilitator-led team training activities in video conferencing learning environments did not have a meaningful, significant effect on measures of social presence and group cohesion (Research Questions 1 and 2). However, analysis of the factors that influenced this outcome (Research Question 3) shows that no conclusion about the efficacy of the team training activities can be drawn because of severely low fidelity of implementation.
Nevertheless, available data from the fidelity of implementation analysis points to several factors that may have influenced the UTAs' adherence to the intervention's design. But drawing definitive conclusions about what may have caused the study's low fidelity of implementation is difficult because the available data is so limited (e.g. the focus group questions asked UTAs to evaluate the students' reactions to doing the activities, not their own reactions to conducting them). With that said, the following are several implications from this study that would serve to inform the author's own research, as well as others' research, in the future.

One implication of this study relates to intervention research in general. The results of this study suggest that when fidelity of implementation for an intervention is important, monitoring for adherence is vital. Adherence is a measure of how closely a person or group enacts the design of an intervention. For some interventions, strict adherence may be unnecessary to still achieve acceptable results. In other interventions, participants may view the difference between doing and not doing the intervention as negligible or unimportant. In these cases, monitoring for adherence may also be unimportant or unnecessary. Still other interventions may require very strict adherence, and in these cases monitoring can mean the difference between a successful intervention and no intervention at all.

This study featured strategies to help facilitate adherence before the interventions took place (face-to-face meetings, emailed instructions, etc.). The study also featured strategies to assess aspects of adherence after the interventions had been conducted (the Facebook group, student surveys). Yet there was no strategy in place to monitor adherence while the interventions were taking place. For the author, fidelity of implementation was important because it directly related to the success of his study. But the study itself – assessing the effects of team training activities on video conferencing group dynamics – had no direct bearing on what happened in COM 100 as a course. Participation by the students was treated as part of the course, but it was not measured or evaluated in any way, nor did participation have a positive or negative effect on what was being evaluated, namely homework assignments and speech performances. Participation by the UTAs was more complex. A central fact of this study is that, setting aside 18 instances where data was not available, the UTAs did not conduct team training activities with their students in 31 out of 66 possible times. Frequency scores across the different times indicated that the UTAs conducted the team training activities with their students more towards the beginning of the semester. As the semester progressed, however, fewer UTAs conducted the activities with their students, yet they continued to profess interest in the study when talking with the author. The result was a faulty assumption on the author's part about the UTAs' adherence as he was conducting the study.

As the study was taking place, the author assumed the strategies to implement the intervention were effective based on two sets of responses: verbal and written responses from the UTAs, and the survey responses from the students. The author had put in place facilitation strategies to support what he considered to be good training for and communication with the UTAs.
The UTAs in turn gave the author what he considered favorable and enthusiastic responses to taking part in the study. At the same time, the number of responses to the survey throughout the semester gave the author the impression that participation in the study was sufficient on the part of both the students and the UTAs. These two sources of feedback were helpful but ultimately lacking. Indeed, although survey participation was robust, the data it represented had limited value in addressing Research Questions 1 and 2 because of poor fidelity of implementation. Problems with adherence were only detected through video analysis after the study was completed. Monitoring for fidelity of implementation while the study was taking place, however, would have provided the author with opportunities to intervene, offer suggestions and support, and even propose changes to the interventions themselves to improve adherence factors (i.e. frequency, duration, content, and coverage). Monitoring could also have been an important source of contemporaneous data in this study, as well as a means of quality control for adherence factors like Content and Coverage. In short, results from the fidelity of implementation analysis suggest intervention studies such as this one miss a vital element of facilitation support, data collection, and quality control if they do not feature a method or strategy for monitoring adherence while interventions are taking place.

Another implication is that intervention complexity is an important consideration in intervention design. The results of this study seem to confirm findings from similar fidelity of implementation studies suggesting that interventions with higher levels of complexity are less likely to be adhered to than those with more modest levels of complexity (Carroll et al., 2007). Task analysis showed Emotional Roleplay took less preparation and required fewer steps to implement than Speak Up!. But the fact that Speak Up! was attempted so few times suggests that complexity is a factor not just during an activity but also before it. That is, the evidence does not show that UTAs tried the activity and then gave up when it got too hard; rather, one possible explanation is that many of the UTAs evaluated the complexity of Speak Up! beforehand and opted not to attempt it. The author used the same facilitation strategies (face-to-face training, emails, social media for feedback and discussion) for each of the team training activities. The uneven distribution of frequency, however, suggests that more or different facilitation support was needed for this particular intervention, both in terms of practical application and in terms of how it was perceived by the UTAs.

Related to intervention complexity is Duration, specifically the time required or estimated for effective implementation and how it relates to cultural milieu. Video analysis of the Recitations shows the author did not fully appreciate how much time was required to achieve sufficient Duration, Content, and Coverage. Background, for example, was envisioned as a brief 5-7 minute activity in which the UTAs would ask 3 questions about a student's background and the student would give simple, informative answers. In reality, Background often took much longer than 5-7 minutes, particularly if the UTA paid close attention to Content and Coverage. If done according to design, the author observed, it generally took a UTA 2-3 minutes for each student. In a Recitation with 10 students, this means the Background activity could take anywhere from 20-30 minutes (almost half the Recitation time), leaving very little time for students to perform their speeches. This may account for why the three activities that could be conducted twice (Background, Emotional Roleplay, and Speak Up!) were tried a first time but not a second. In short, adherence to fidelity of implementation may have been seriously impacted by how long the activities took and how much time was actually available to the UTAs to do everything they needed to do. Thus, the overall implication is that Duration relates not only to an intervention's ideal estimates for effectiveness but also to whether that ideal matches the intended cultural milieu.
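A back-of-the-envelope time budget makes this squeeze explicit. The sketch below uses the Recitation parameters described in this chapter (an 80-minute session, roughly 10 students, Persuasive speeches of up to 7 minutes, and 2-3 minutes of Background questioning per student); the half-minute per-student transition allowance is an assumption added for illustration.

```python
# Back-of-the-envelope Recitation time budget using the parameters described
# above. The per-student transition allowance is an assumed value.
RECITATION_MIN = 80
students = 10
speech_min = 7              # Persuasive speech, upper bound
transition_min = 0.5        # assumed: rubric marking + switching presenters
background_range = (2, 3)   # observed per-student minutes for Background

speech_block = students * (speech_min + transition_min)
activity_low = students * background_range[0]
activity_high = students * background_range[1]

print(f"Speeches + transitions: {speech_block:.0f} of {RECITATION_MIN} minutes")
print(f"Background activity:    {activity_low}-{activity_high} minutes")
overrun_low = speech_block + activity_low - RECITATION_MIN
overrun_high = speech_block + activity_high - RECITATION_MIN
print(f"Overrun if both run:    {overrun_low:.0f} to {overrun_high:.0f} minutes")
```

Under these assumptions, a full round of speeches and a by-design Background activity cannot both fit in a single 80-minute session, which is consistent with the twice-conductible activities being tried a first time but not a second.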
On this topic, another implication from this study is that cultural milieu can be a deceptively complex mix of factors and should be considered carefully as a part of intervention design. On the surface, COM 100 seemed like an ideal context in which to study the enhancement of group dynamics in small-group video conferencing learning environments. The cultural milieu of COM 100 during the time of this study, however, contained factors that worked against effective implementation of the intended interventions. First, the course was structured around learning concepts of public speaking. Public speaking is often a solo endeavor, meaning there is little opportunity or need for teamwork or collaboration. As such, the Recitations were designed to give students the chance to perform their speeches in front of their peers one at a time, and there was little opportunity for the development and exercise of the kind of group dynamics among the students that the interventions were meant to enhance. For example, the Recitations offered little opportunity for the students to interact in ways that demonstrated behavioral engagement (an aspect of social presence) beyond listening attentively to each other's speeches. The performance-driven structure of the Recitations also meant UTAs and students were not required to collaborate or work as a team on any given project – students were simply required to prepare and perform their speeches while the UTAs assessed them. Thus, activities like Speak Up! and Team Name that were designed to encourage collective action and group identity may have felt out of place because they did not serve the general purpose of the Recitations. In other words, team training interventions were not a good fit for a cultural milieu that didn't really have teams in it to begin with.

There is also the matter of incentives for conducting and performing the team training activities. As noted, there were none; the UTAs were asked to participate in conducting the activities with their students, but no compensation was offered. The students were asked to participate in the activities as a part of the normal operations of the course, but there was no assessment of their performance, beyond constructive feedback from the UTAs, that would impact their grades. Moreover, the concept of collaboration or working as a group was not expressly mentioned as an integral part of the course, which made it hard for UTAs and students to identify and appreciate it as something relevant or important in relation to their course duties.
While it is difficult to assess how much impact this may have had on adherence to fidelity of implementation given the available data, the fact that there was no penalty and no obvious reward for conducting and participating fully in the team training activities leads the author to believe it had an adverse impact on adherence in this study.

Limitations

One limitation of this study was the missing data for some of the Recitations during the semester. As noted in Chapter 3, there were 18 instances where video recordings of Recitations were not available for review (i.e. 21%, or about 1/5th, of the potential video data). This presented significant challenges for scoring in the fidelity of implementation analysis. While the author included FOI scores that treated missing data both as Null and as 0, the study does not present a complete picture of Adherence to fidelity of implementation as it actually occurred.

Another limitation of this study was a lack of basis for comparison. No previous data was collected on COM 100 students' feelings of social presence and group cohesion in their in-person Recitation groups. Had the interventions been implemented more successfully, survey results would still have measured social presence and group cohesion across control and treatment groups in video conferencing contexts only.

In terms of instrumentation, the study was also limited by questions of content validity in the surveys administered to the students (see Chapter 3). Again, even if the interventions had been more successfully implemented, a posteriori content validity analysis showed the survey data would still have suffered because several survey items that were meant to measure one factor of social presence (copresence) were actually more suited to measuring another (psychological involvement), and vice versa.

Suggestions for Future Research

It seems obvious that if this study were to be conducted again, a number of changes would have to be made. For one, more emphasis would be placed on ensuring adherence to fidelity of implementation. The UTAs were in retrospect the prime drivers of the intervention in this study, and yet they were treated as merely a step in the delivery. The findings of this study suggest greater attention should be paid to helping facilitators in interventions better understand their role and to supporting them in their efforts. Research perspectives from the field of andragogy (i.e. adult learning and instruction) would certainly be of value in this respect because they could inform strategies for facilitation support. Research on task assessment and compliance would also be of value because it would help in understanding how facilitators evaluate the tasks and methods associated with interventions, and it could inform features of intervention design.

In terms of intervention design, two directions might be taken in future studies. As noted earlier, the intervention in this study was in fact a poor fit for the cultural milieu of COM 100. Group dynamics among students did not play a role in normal course operations, assessments, or learning outcomes, so an intervention designed to enhance group dynamics was simply not appropriate. One direction for a future study of this type would be to find a different course with a cultural milieu that fits the intervention (team training activities) as currently designed. Another direction would be to alter the intervention to more closely match the cultural milieu as it currently exists.
One recommendation along these lines would be to focus more on the technological or compositional aspects that students can control in video conferencing (e.g. proximity, framing, lighting, background, etc.). For example, a study could be conducted on students using techniques to enhance their overall visual presentation to determine whether this had an impact on social presence. There are any number of directions a future study like this one could take, but the findings suggest significant modifications to intervention design and implementation would need to be made.

It is the position of the author that there is still valid and important research to be done in the area of group dynamics for students in video conferencing environments. Video conferencing and online video communication continue to become increasingly common in higher education and even in some K-12 learning contexts. Intervention- and design-based researchers should continue to explore how this type of mediated communication and interaction affects learning and instruction, but they should do so in ways that take into account the complexity of these particular contexts.

Summary

Looking back, it is both sobering and exciting to realize that if the quantitative analysis portion of this study had yielded significant results, the deeper analysis regarding fidelity of implementation might never have taken place. Initial success would in fact have been a failure that could have distorted the author's research for years to come. As it turned out, fidelity of implementation was a rich area of learning and insight, one the author has now fully embraced. It reminds him that failure can be a most instructive experience and that recognizing this is an essential part of being a researcher. In truth, studies rarely go completely as planned, and the most important findings are not always lying on the surface. One of the main takeaways for the author as a researcher is not to take results at face value, but to push beyond the initial findings, even when they are successful.

This study has also given the author a newfound respect for complexity in intervention design and research. It may be impossible to overestimate the complexity of a project or intervention and how different people will react to it, but it is certainly possible to underestimate it. The author's experiences in this study should serve as a warning to himself and others against falling into a false sense of complacency about what is required to conduct intervention studies of this size and complexity. The people involved were merely players in a carefully plotted narrative, or so he thought; all that was required was for them to fulfill their roles as they were written. But to paraphrase what a wise person once told the author, "It's difficult to get others to carry the tune you're whistling in your own head."

Creating effective environments for teaching and learning is an important matter that should concern all educators and educational researchers. Content and pedagogical practice can be rendered meaningless if the conditions for student engagement and presence are not tended with thoughtfulness and care. This study was an attempt to create conditions in a technology-mediated learning environment so that students felt they were more present with one another, allowing for a greater sense of group identity and common purpose. The results show that interventions like the one in this study can have a positive effect on measures important to interactions in mediated learning contexts.
The results also show that the strength of that effect depends on adherence to enacting interventions as they are designed. While this is not necessarily a major discovery, it should serve notice to educational researchers and practitioners as they prepare for a future of increasing technology mediation. It is not always obvious how to make the best use of the tools we have, and so we should always be mindful of the environments we create and how we go about trying to make them better.

APPENDICES

APPENDIX A: Reliability Scores for Psychological Involvement, Behavioral Engagement, and Group Cohesion Across Times 1-4

Reliability tests for both the Copresence/Psychological Involvement scale and the Behavioral Engagement scale showed improvement when a single item was removed from each scale (items SP-Co2R and SP-Bea, respectively; see Table 2-4). The Copresence/Psychological Involvement scale consisted of 5 items (α = .847) and the Behavioral Engagement scale consisted of 2 items (α = .802). The Group Cohesion scale consisted of 6 items (α = .921) and did not improve with the removal of any items. Below are the improved reliability scores for the Copresence/Psychological Involvement scale (Table A-1) and the Behavioral Engagement scale (Table A-2), as well as the original reliability score for the Group Cohesion scale (Table A-3).

Table A-1: Improved Reliability Scores and Summary Item Statistics for Copresence/Psychological Involvement Scale

Reliability Statistics for Copresence/Psychological Involvement
  Cronbach's Alpha: .847
  Cronbach's Alpha Based on Standardized Items: .849
  N of Items: 5

Summary Item Statistics for Copresence/Psychological Involvement
                  Mean   Minimum  Maximum  Range  Max/Min  Variance  N of Items
  Item Means      2.689  2.465    3.027    .561   1.228    .047      5
  Item Variances  .888   .739     1.198    .460   1.622    .035      5

Table A-2: Improved Reliability Scores and Summary Item Statistics for Behavioral Engagement Scale

Reliability Statistics for Behavioral Engagement
  Cronbach's Alpha: .802
  Cronbach's Alpha Based on Standardized Items: .804
  N of Items: 2

Summary Item Statistics for Behavioral Engagement
                  Mean   Minimum  Maximum  Range  Max/Min  Variance  N of Items
  Item Means      2.116  2.009    2.222    .213   1.106    .023      2
  Item Variances  .786   .705     .867     .162   1.230    .013      2

Table A-3: Reliability Scores and Summary Item Statistics for Group Cohesion Scale

Reliability Statistics for Group Cohesion
  Cronbach's Alpha: .921
  Cronbach's Alpha Based on Standardized Items: .921
  N of Items: 6

Summary Item Statistics for Group Cohesion
                  Mean   Minimum  Maximum  Range  Max/Min  Variance  N of Items
  Item Means      2.569  2.410    2.787    .378   1.157    .026      6
  Item Variances  .760   .726     .817     .091   1.126    .001      6
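For readers who want to reproduce reliability figures like those in the tables above from raw item responses, the following is a minimal sketch of the standard Cronbach's alpha computation. The response matrix shown is synthetic, for illustration only; the study's actual survey data are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic 5-point Likert responses: 8 respondents x 5 items (illustration only).
responses = np.array([
    [3, 3, 2, 3, 4],
    [2, 2, 2, 3, 3],
    [4, 4, 3, 4, 4],
    [1, 2, 1, 2, 2],
    [3, 2, 3, 3, 3],
    [5, 4, 4, 4, 5],
    [2, 3, 2, 2, 3],
    [3, 3, 3, 4, 4],
])
print(round(cronbach_alpha(responses), 3))
```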
APPENDIX B: Means and Standard Deviations for 3 Factors for the Consolidated Data Set

Table B-1 shows the means and standard deviations for Copresence/Psychological Involvement, Behavioral Engagement, and Group Cohesion for the consolidated data set.

Table B-1: Means and Standard Deviations for 3 Factors for the Consolidated Data Set

Descriptive Statistics
                      N     Minimum  Maximum  Mean    Std. Deviation
  CPRe                1504  1.00     5.00     2.6886  .74203
  BERe                1504  1.00     5.00     2.1157  .81010
  GCRe                1504  1.00     5.00     2.5691  .73807
  Valid N (listwise)  1504

Figures B-1, B-2, and B-3 are graphs of the means and standard deviations for the individual factors (Copresence/Psychological Involvement, Behavioral Engagement, and Group Cohesion) by Group for the consolidated data set (Times 1-4 Combined).

Figure B-1: Means and Standard Deviations for Copresence/Psychological Involvement by Group for the Consolidated Data Set (Times 1-4 Combined)

Figure B-2: Means and Standard Deviations for Behavioral Engagement by Group for the Consolidated Data Set (Times 1-4 Combined)

Figure B-3: Means and Standard Deviations for Group Cohesion by Group for the Consolidated Data Set (Times 1-4 Combined)

APPENDIX C: Results of Univariate Analyses of Copresence/Psychological Involvement (CPRe) for Times 1-4

Univariate tests were conducted on the factor of Copresence/Psychological Involvement (DV), with Group as the independent variable. The tests show the interaction effect is statistically significant for Copresence/Psychological Involvement at Time 4 (p = .014) but with an effect size too small to be meaningful (ηp² = .028; .009 after Bonferroni correction is applied). Tables C-1 through C-4 show the results of the univariate analyses conducted for Copresence/Psychological Involvement for survey Times 1-4.

Table C-1: Univariate analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 1

Univariate Tests (Dependent Variable: CPRe)
            Sum of Squares  df   Mean Square  F     Sig.  Partial Eta Squared
  Contrast  1.114           3    .371         .882  .450  .007
  Error     148.958         354  .421

Table C-2: Univariate analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 2

Univariate Tests (Dependent Variable: CPRe)
            Sum of Squares  df   Mean Square  F     Sig.  Partial Eta Squared
  Contrast  .613            3    .204         .421  .738  .003
  Error     184.419         380  .485

Table C-3: Univariate analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 3

Univariate Tests (Dependent Variable: CPRe)
            Sum of Squares  df   Mean Square  F     Sig.  Partial Eta Squared
  Contrast  1.737           3    .579         .997  .394  .008
  Error     219.667         378  .581

Table C-4: Univariate analysis of Copresence/Psychological Involvement (DV) by Group (IV) for Time 4

Univariate Tests (Dependent Variable: CPRe)
            Sum of Squares  df   Mean Square  F      Sig.  Partial Eta Squared
  Contrast  7.371           3    2.457        3.563  .014  .028
  Error     259.308         376  .690
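The partial eta squared values in Tables C-1 through C-4 follow directly from the reported sums of squares, so they can be checked by hand. A minimal sketch using the Time 4 values from Table C-4:

```python
# Partial eta squared from the sums of squares reported in Table C-4 (Time 4):
# eta_p^2 = SS_contrast / (SS_contrast + SS_error)
ss_contrast = 7.371
ss_error = 259.308

eta_p_squared = ss_contrast / (ss_contrast + ss_error)
print(round(eta_p_squared, 3))  # 0.028, matching the reported value
```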
APPENDIX D: Pairwise Comparisons Across Groups for Copresence/Psychological Involvement (CPRe) for Times 1-4

Pairwise comparisons across the different Groups were conducted on the factor of Copresence/Psychological Involvement (DV) as part of the overall univariate analysis for each of the 3 factors. Group was used as the independent variable in this analysis, and least significant difference was used as the adjustment for multiple comparisons (equivalent to no adjustments). The pairwise comparisons show significance for the mean differences between Group 3 and Groups 1, 2, and 4 for survey Time 4. Tables D-1 through D-4 show the pairwise comparisons across groups for Copresence/Psychological Involvement (DV) for Times 1-4. An asterisk (*) marks mean differences significant at the .05 level.

Table D-1: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 1

Pairwise Comparisons (Dependent Variable: CPRe)
  (I) Group  (J) Group  Mean Diff. (I-J)  Std. Error  Sig.  95% CI Lower  95% CI Upper
  1          2          -.190             .137        .168  -.460         .080
  1          3          -.035             .139        .802  -.309         .239
  1          4          -.125             .110        .258  -.342         .092
  2          1          .190              .137        .168  -.080         .460
  2          3          .155              .133        .246  -.107         .417
  2          4          .065              .102        .528  -.137         .266
  3          1          .035              .139        .802  -.239         .309
  3          2          -.155             .133        .246  -.417         .107
  3          4          -.090             .105        .392  -.297         .117
  4          1          .125              .110        .258  -.092         .342
  4          2          -.065             .102        .528  -.266         .137
  4          3          .090              .105        .392  -.117         .297

Table D-2: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 2

Pairwise Comparisons (Dependent Variable: CPRe)
  (I) Group  (J) Group  Mean Diff. (I-J)  Std. Error  Sig.  95% CI Lower  95% CI Upper
  1          2          -.006             .143        .965  -.288         .275
  1          3          .122              .142        .389  -.156         .401
  1          4          .011              .113        .921  -.212         .234
  2          1          .006              .143        .965  -.275         .288
  2          3          .129              .138        .352  -.143         .400
  2          4          .017              .108        .872  -.196         .231
  3          1          -.122             .142        .389  -.401         .156
  3          2          -.129             .138        .352  -.400         .143
  3          4          -.111             .107        .298  -.321         .099
  4          1          -.011             .113        .921  -.234         .212
  4          2          -.017             .108        .872  -.231         .196
  4          3          .111              .107        .298  -.099         .321

Table D-3: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 3

Pairwise Comparisons (Dependent Variable: CPRe)
  (I) Group  (J) Group  Mean Diff. (I-J)  Std. Error  Sig.  95% CI Lower  95% CI Upper
  1          2          .125              .150        .405  -.169         .419
  1          3          .189              .150        .208  -.105         .483
  1          4          .010              .115        .931  -.217         .237
  2          1          -.125             .150        .405  -.419         .169
  2          3          .064              .152        .675  -.236         .364
  2          4          -.115             .119        .336  -.349         .119
  3          1          -.189             .150        .208  -.483         .105
  3          2          -.064             .152        .675  -.364         .236
  3          4          -.179             .119        .134  -.413         .055
  4          1          -.010             .115        .931  -.237         .217
  4          2          .115              .119        .336  -.119         .349
  4          3          .179              .119        .134  -.055         .413

Table D-4: Pairwise Comparison Across Groups for Copresence/Psychological Involvement (DV) for Time 4

Pairwise Comparisons (Dependent Variable: CPRe)
  (I) Group  (J) Group  Mean Diff. (I-J)  Std. Error  Sig.  95% CI Lower  95% CI Upper
  1          2          -.065             .171        .704  -.400         .271
  1          3          .366*             .167        .029  .037          .694
  1          4          -.025             .135        .856  -.291         .242
  2          1          .065              .171        .704  -.271         .400
  2          3          .431*             .162        .008  .111          .750
  2          4          .040              .130        .756  -.214         .295
  3          1          -.366*            .167        .029  -.694         -.037
  3          2          -.431*            .162        .008  -.750         -.111
  3          4          -.390*            .125        .002  -.635         -.145
  4          1          .025              .135        .856  -.242         .291
  4          2          -.040             .130        .756  -.295         .214
  4          3          .390*             .125        .002  .145          .635

REFERENCES

Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking and help design in interactive learning environments. Review of Educational Research, 73(3), 277–320.

Argyle, M., & Dean, J. (1965). Eye-contact, distance and affiliation. Sociometry, 28(3), 289–304.

Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.

Beer, M. (1976). The technology of organization development. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology. Chicago: Rand McNally.

Bollen, K. A., & Hoyle, R. H. (1990). Perceived cohesion: A conceptual and empirical examination. Social Forces, 69(2), 479.

Bormann, E. (1996). Symbolic convergence theory and communication in group decision making. In R. Hirokawa & M. Poole (Eds.), Communication and group decision making (2nd ed., pp. 81–113). Thousand Oaks, CA: Sage.

Bromme, R., Hesse, F. W., & Spada, H. (2005). Barriers, biases and opportunities of communication and cooperation with computers: Introduction and overview. In R. Bromme, F. W. Hesse, & H. Spada (Eds.), Barriers and biases in computer-mediated knowledge communication (pp. 1–14). Springer US.

Buller, P. F. (1986).