SHARING JOY THROUGH TECHNOLOGY: THE EFFECT OF SYNCHRONICITY, NONVERBAL CUES, AND PERSISTENCE ON AFFECTIVE WELL-BEING

By

Lin Li

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Information and Media — Doctor of Philosophy

2021

ABSTRACT

SHARING JOY THROUGH TECHNOLOGY: THE EFFECT OF SYNCHRONICITY, NONVERBAL CUES, AND PERSISTENCE ON AFFECTIVE WELL-BEING

By

Lin Li

As an important form of self-disclosure, sharing positive events is related to increased positive affect and reduced negative affect, i.e., affective well-being. The act of communicating a positive personal event to another person and reaping the intrapersonal and interpersonal benefits of such communication is termed capitalization in social psychology. Previous research identified perceived partner responsiveness and memorability of the positive event as two central mechanisms underlying capitalization's positive effect on well-being. One understudied area in this line of research is the technologies through which capitalization processes are mediated. In terms of the processes and outcomes, does it make a difference whether capitalization is carried out in text, voice calls, video calls, or social media? By drawing upon literature in computer-mediated communication and interpersonal communication, this dissertation examines the mechanisms and outcomes of capitalization using various information and communication technologies (ICTs). In characterizing technology use, this study moves beyond the descriptive and objective differences between specific forms of ICTs. Instead, it focuses on individuals' perceived features of ICTs while sharing a particular positive event, including synchronicity, nonaudio-based nonverbal cues, audio-based nonverbal cues, and persistence. The main finding of the study was that perceived partner responsiveness was positively related to state affective well-being but not global affective well-being. The relationship between perceived partner responsiveness and state affective well-being was also moderated by relationship closeness, in that responsiveness from less close communication partners was related to a larger increase in state affective well-being. Synchronicity and audio-based nonverbal cues were positively related to perceived partner responsiveness, which, in turn, was positively related to state affective well-being. In contrast, nonaudio-based nonverbal cues were negatively associated with perceived partner responsiveness, yielding a negative indirect relationship with state affective well-being. None of the features' direct relationships with perceived partner responsiveness and affective well-being was moderated by relationship closeness. Still, the negative indirect relationship between nonaudio-based nonverbal cues and state affective well-being was moderated by relationship closeness: the negative indirect effect of nonaudio-based nonverbal cues on state well-being decreased as relationship closeness increased. Perceived nonaudio-based nonverbal cues also had a direct positive association with both state affective well-being and global affective well-being. Lastly, perceived persistence was related to both state and global well-being through memorability of the positive event. This study serves as a framework for uncovering how features of ICTs can play a role in the processes and outcomes of capitalization. Theoretical and practical implications and limitations of the findings are discussed.
Copyright by
LIN LI
2021

ACKNOWLEDGMENTS

It is hard to believe that it has been five years since I started the program. In this time, my growth as a person and a scholar would not have been possible without the support of my advisor, friends, and community. As the first person in my family to pursue a doctoral degree, I cannot imagine making it this far without the numerous people who have shown me kindness, care, and generosity.

First and foremost, I would like to thank my advisor, Dr. Wei Peng, for supporting my research and believing in my ability to succeed in the projects my interests have taken me to. I am proud of this dissertation, which is a culmination of my work in the graduate program. This work would not have been what it is without your patience, guidance, and feedback. I have learned and benefitted greatly from your ability to dissect a complicated phenomenon and your precision in connecting research methods to the relevant literature and research questions. I am fortunate to have had the opportunity to be your advisee and learn from the best over the past five years.

Second, I'd like to thank my committee members, who have always offered encouragement and insightful feedback. Thanks to Dr. Jingbo Meng, who modeled exemplary scholarship and kindness. Thanks to Dr. Anastasia Kononova, for being there as a mentor and as a friend. Thanks to Dr. Dave Ewoldsen, who has shown unwavering support for me even when I doubted myself. Thanks to Dr. Ryan Bowles, who is as good at his craft as he is honest about the limitations of every method. I have learned a great deal from each of you, and I am confident that our conversations will continue to serve me long after I finish my dissertation.

Third, I'd like to thank my friends and community. The patience and grace of my friends continuously bring light to my life and inspire me to be the best version of myself. Pursuing a Ph.D. in a foreign country with limited social support presented many challenges I could not have faced alone. I am lucky to have made friends in both China and the U.S. Thanks to my sister and cousin for always believing in me and cheering me on. Thanks to Shao for never hesitating to answer my call wherever you are in the world and for offering precise and reasonable guidance on some of the most challenging questions in life and academia. I could not have completed this journey without your friendship, wisdom, and support. Thanks to Hui, who understands me like no other. Thanks to Cynthia, who always knows and shares the truth with me. Thanks to Alice, Lirong, Sheng, and Jinping, for simply being there when I needed it. Thanks to friends in the IM program, MJ, Shaheen, Tim, Kelley, and Julia, for being open, honest, and resilient (yet never hiding your vulnerability). In the last year of my Ph.D. program, I was also a graduate fellow in the Residential College in the Arts and Humanities, which provided me with valuable opportunities to learn about undergraduate teaching, engage in professional development, and socialize with a lovely group of individuals. I also met Tara, Jeny, Zhenzhen, Keer, and Kenlea through the job club. The power that comes from a group of like-minded people exchanging information, support, and fellowship is eye-opening, and I am fortunate to have them in my life.

Lastly, I'd like to thank my parents and grandparents, who have sacrificed so much for me to be who I am today. My academic career is only made possible because of your belief in a better life through education.
TABLE OF CONTENTS

LIST OF TABLES .......... ix
LIST OF FIGURES .......... xi
CHAPTER ONE INTRODUCTION .......... 1
CHAPTER TWO LITERATURE REVIEW .......... 6
2.1. Capitalization as a form of supportive communication .......... 6
2.2. Antecedents and outcomes of capitalization .......... 8
2.3. Affective well-being .......... 11
2.4. The current gap in capitalization research: Mediated capitalization and its effects .......... 14
2.4.1. Perceived features of mediated channels and outcomes of supportive communication .......... 15
2.4.2. Perceived nonverbal cues and supportive communication .......... 21
2.4.3. Perceived synchronicity and supportive communication .......... 23
2.4.4. Perceived audio modality and supportive communication .......... 25
2.5. Relationship development as a moderator of perceived features of mediated channels and outcomes of supportive communication .......... 27
2.6. Perceived partner responsiveness and affective well-being .......... 32
2.6.1. Perceived partner responsiveness and its antecedents .......... 34
2.7. Memorability and affective well-being .......... 38
2.7.1. Perceived persistence as a determinant of memorability .......... 39
CHAPTER THREE SCALE DEVELOPMENT .......... 41
3.1. Chapter preview .......... 41
3.2. Item generation .......... 41
3.3. Establishing content validity .......... 43
3.4. Sample and procedure .......... 48
3.5. Measures .......... 50
3.6. Data analysis .......... 51
3.7. Results .......... 52
CHAPTER FOUR MAIN SURVEY STUDY .......... 61
4.1. Chapter preview .......... 61
4.2. Sample and procedure .......... 61
4.3. Measures .......... 63
4.4. Data analysis .......... 68
4.5. Results .......... 71
CHAPTER FIVE DISCUSSION .......... 92
5.1. The direct and conditional effect of perceived partner responsiveness on state affective well-being .......... 94
5.2. The direct and indirect effect of synchronicity, audio-based nonverbal cues, nonaudio-based nonverbal cues on perceived partner responsiveness and affective well-being .......... 96
5.3. The conditional effect of synchronicity, audio-based nonverbal cues, nonaudio-based nonverbal cues on perceived partner responsiveness and affective well-being .......... 101
5.4. The significant indirect effect of persistence on affective well-being through memorability of the positive event .......... 102
5.5. Limitations and Future Directions .......... 103
5.6. Conclusion .......... 106
APPENDICES .......... 108
APPENDIX A: Table 25: Study 1 Cognitive interview participant information .......... 109
APPENDIX B: Study 1 Cognitive interview probes .......... 110
APPENDIX C: Study 1 key survey questions .......... 111
APPENDIX D: Study 2 Main survey key questions .......... 118
REFERENCES .......... 128

LIST OF TABLES

Table 1: Means and standard deviation of all scale items of the EFA study .......... 52
Table 2: Zero-order correlations of items of the perceived audio-based nonverbal cues scale .......... 54
Table 3: Zero-order correlations of items of the perceived synchronicity scale .......... 54
Table 4: Zero-order correlations of items of the perceived persistence scale .......... 55
Table 5: Zero-order correlations of items of the perceived nonaudio-based nonverbal cues scale .......... 56
Table 6: EFA results for the perceived audio-based nonverbal cues scale .......... 59
Table 7: EFA results for the perceived persistence scale .......... 59
Table 8: EFA results for the perceived synchronicity scale .......... 59
Table 9: EFA results for the perceived nonaudio-based nonverbal cues scale .......... 60
Table 10: Means and standard deviation of all scale items of the CFA study .......... 72
Table 11: Zero-order correlations of items of the perceived audio-based nonverbal cues scale .......... 73
Table 12: Zero-order correlations of items of the perceived synchronicity scale .......... 73
Table 13: Zero-order correlations of items of the perceived persistence scale .......... 74
Table 14: Zero-order correlations of items of the perceived nonaudio-based nonverbal cues scale .......... 75
Table 15: Model fit indices of the CFA models .......... 79
Table 16: Descriptive statistics of main study variables .......... 80
Table 17: Zero-order correlations of main study variables .......... 81
Table 18: Basic moderation model: Relationship closeness as the moderator between responsiveness and affective well-being .......... 83
Table 19: Basic mediation model: Nonaudio cues, audio cues, and synchronicity and affective well-being via perceived partner responsiveness .......... 85
Table 20: Moderated mediation model: Synchronicity as the main independent variable .......... 86
Table 21: Moderated mediation model: Audio-based nonverbal cues as the main independent variable .......... 87
Table 22: Moderated mediation model: Nonaudio-based nonverbal cues as the main independent variable .......... 88
Table 23: Basic mediation model: Memorability as the mediator between persistence and affective well-being .......... 90
Table 24: A summary of the key findings .......... 91
Table 25: Study 1 Cognitive interview participant information .......... 109

LIST OF FIGURES

Figure 1: Conceptual diagram of the study .......... 40
Figure 2: CFA model for perceived audio-based nonverbal cues .......... 77
Figure 3: CFA model for perceived synchronicity .......... 78
Figure 4: CFA model for perceived persistence .......... 78
Figure 5: CFA model for perceived nonaudio-based nonverbal cues .......... 79
CHAPTER ONE INTRODUCTION

The desire to share positive news with others is an inherent characteristic of human sociality (Peters et al., 2018). Capitalization refers to the process of communicating a positive personal event to another person and reaping the additional benefit of such communication on individuals' well-being, independent of the positive event itself (Gable et al., 2010; Gable et al., 2004). Capitalization leads to intrapersonal benefits, such as increased positive emotions, subjective well-being, and self-esteem, and decreased loneliness (Gable et al., 2010; Gable et al., 2004). Capitalization research belongs to a larger body of literature on self-disclosure and social sharing, as it involves disclosing positive personal events and their related emotions to others. Nevertheless, it has a narrower focus by centering on the disclosure of important positive events.

Social support has long been viewed as a fundamental element of interpersonal relationships and has a significant relationship with individuals' health and well-being (Cohen et al., 2000; Uchino, 2004). In its recent reconceptualization, social support has been categorized into secure-base support, which takes place during non-adverse life contexts, and safe-haven support, which is enacted during times of adversity. As a form of secure-base support (Feeney & Collins, 2015), capitalization is also a form of supportive communication because it involves seeking understanding, acceptance, and affirmation from one's partners in celebrating positive events (Feeney & Collins, 2015). Social support has long been studied in both sociology and psychology and is categorized as social integration or functional support (i.e., informational, emotional, tangible, and companionship support) (Cohen et al., 2000). In the 1980s, communication scholars adopted a unique communicative approach to studying social support and conceptualized it as "supportive communication." Supportive communication is defined as "verbal and nonverbal behavior produced with the intention of providing assistance to others perceived as needing that aid" (MacGeorge et al., 2011, p. 317). Burleson et al. (1994) proposed that "social support should be studied as communication because it is ultimately conveyed through messages directed by one individual to another in the context of a relationship that is created and sustained through interaction" (p. xviii). Therefore, research on social support in communication focuses on the messages through which people seek and provide support, the interactions in which supportive messages are constructed and interpreted, and the relationships that are influenced by the supportive interaction in which people engage (Burleson et al., 1994). The sharing of positive events, and the subsequent interactions with one's partners (i.e., capitalization), most aptly fit into the realm of supportive communication.

One gap in the current research on capitalization is a lack of focus on the communication technologies through which the capitalization processes are mediated (Gable et al., 2010; Gable et al., 2004). Mediated capitalization is defined in this study as sharing positive events using information and communication technologies (ICTs), such as voice/video calls, texts, or social media. Few scholars have examined whether (and how) mediated capitalization generates the same benefits on well-being as face-to-face capitalization.
This is despite the numerous theories and empirical evidence concerning how the processes and outcomes of social sharing and self-disclosure through computer-mediated communication can differ from those in the traditional face-to-face context (Ruppel et al., 2017; Tong & Walther, 2011; Walther, 2011; Yang et al., 2019). One diary study that focused on channels of sharing found that participants utilized various channels, including texts, phone calls, Facebook, instant messaging, Twitter, video chats, blogs, and emails, to share positive events, and participants shared their positive events in a face-to-face context in only approximately half of the instances (Choi & Toma, 2014). It is arguable that, due to the introduction of new communication technologies, the phenomenon of capitalization has now become more frequent and integrated into people's daily lives. Given the current COVID-19 global pandemic and the severe limits governments and public health agencies have placed on face-to-face social interactions, particularly with people outside of one's household, it is important to understand the effects of using ICTs to share positive events with one's social networks.

Besides the lack of focus on mediated capitalization, most research on self-disclosure or social sharing has either been conducted on one specific social media platform (Naaman et al., 2010; Vitak, 2012; Young & Quan-Haase, 2009) or highlighted the differences between ICTs and face-to-face communication for their effects on self-disclosure (Gonzales, 2014; Ruppel et al., 2017), self-esteem (Gonzales, 2014), impression management (O'Sullivan, 2000), informational control (Feaster, 2013), social support (Wohn et al., 2017; Wohn & Peng, 2015), and intimacy (Jiang et al., 2011; Jiang & Hancock, 2013). However, these approaches are limited in the current polymedia environment, in which individuals use multiple technologies to consciously manage their emotions and relationships with a variety of social network members (Madianou & Miller, 2013). Therefore, to understand how the different effects revealed by previous literature may have come into being, the current literature needs to move beyond the differences between text, phone calls, Facebook, Snapchat, or WhatsApp, and focus on the features common to different technologies.

Therefore, in considering how mediated capitalization may occur through multiple ICTs, this study borrows from the literature concerning supportive communication and how features of ICTs may contribute to supportive communication processes and outcomes. Specifically, nonverbal immediacy (Jones & Guerrero, 2001), empathic accuracy (Verhofstadt et al., 2008), and synchronicity (Dennis et al., 2008) are essential components in facilitating social support. Features that allow nonverbal cues (whether nonaudio-based or audio-based) and synchronous feedback may be more conducive to achieving favorable outcomes related to supportive communication and capitalization. Additionally, given that the ways in which people use ICTs can also vary based on the relationship history between interaction partners (Carlson & Zmud, 1999), this study draws from the affordance utilization model (Ruppel, 2015), which argues that the effects of using ICTs on relationship outcomes are moderated by the stages of relationship development between interaction partners.
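To make the moderation logic just described concrete, it can be written as a simple interaction model. This is an illustrative sketch only: the single-feature, linear-interaction specification and the symbols below are expository assumptions rather than the models estimated in this dissertation, which are reported in Chapter Four.

\[
Y = b_0 + b_1 X + b_2 W + b_3 (X \times W) + e
\]

Here, X denotes a perceived ICT feature (e.g., perceived synchronicity), W denotes the stage of relationship development (operationalized in this study as relationship closeness), and Y denotes a capitalization outcome such as state affective well-being. The conditional effect of the feature at a given level of W is b_1 + b_3 W, so moderation is present to the extent that b_3 differs from zero; the same interaction logic extends to indirect paths through perceived partner responsiveness in the moderated mediation models summarized in the abstract.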
A key mechanism through which capitalization leads to changes in individuals' subjective well-being is perceived partner responsiveness, which refers to the extent to which individuals find their interactants to be supportive and understanding and to show excitement and positivity toward their capitalization attempts (Kumashiro, 2009; Reis et al., 2004). Prior research has not empirically examined the antecedents of perceived partner responsiveness; however, Davis (1982) theorized two factors that are particularly relevant to the mediated communication context and that determine perceived partner responsiveness: attention to one's partner and communication accuracy. Individuals form perceptions regarding the attentiveness and accuracy of an interaction partner based on the verbal and nonverbal cues available and the feedback received during social interaction (Davis, 1982), both of which are part of the ICT features individuals adopt for social interaction. Therefore, features related to nonaudio-based nonverbal cues, audio-based nonverbal cues, and synchronous feedback may also affect perceived partner responsiveness. Just as stages of relationship development between partners may moderate the effects of ICT features on capitalization outcomes, it is also possible that the same moderating effect exists between features and responsiveness.

Another mechanism underlying capitalization's positive effects on subjective well-being is the memorability of the event, which refers to the salience and accessibility of the event in memory (Gable et al., 2010). Capitalizing using ICTs that provide digital traces related to the relative permanence of communication should be more conducive to an individual remembering the event, consequently affecting well-being.

By drawing insights from social psychology and computer-mediated communication, this study aims to understand the effects of mediated capitalization and the associated mediating and moderating mechanisms. This study examines how features of ICTs are associated with individuals' perceived partner responsiveness and affective well-being. This study contributes to the literature by 1) introducing the role of ICTs during capitalization and examining how mediated capitalization shapes well-being, 2) moving beyond specific differences between text, phone calls, or social media and focusing on the features common to these ICTs (i.e., nonaudio-based nonverbal cues, audio-based nonverbal cues, synchronicity, and persistence), and 3) testing the mediating (perceived partner responsiveness) and moderating (relationship closeness) mechanisms associated with the effects of mediated capitalization on affective well-being.

CHAPTER TWO LITERATURE REVIEW

2.1. Capitalization as a form of supportive communication

As introduced above, capitalization can be viewed as a form of supportive communication, despite scholars of capitalization and social support generally having worked in independent traditions (Shorey & Lakey, 2011). Social support is traditionally viewed as occurring in the context of stress or adversity, and its primary goal is to alleviate adverse outcomes and return individuals undergoing hardships to a baseline level of functioning (Uchino, 2004). In this context, social support has been defined as the "provision of psychological and material resources intended to benefit an individual's ability to cope with stress" (Cohen, 2004, p. 676).
The stress-buffering model of social support has received extensive support in previous literature (Cohen & Wills, 1985; Lee et al., 2004; Uchino, 2004). However, in recent years, scholars have argued for a broader perspective on social support by suggesting that social support must also be considered in non-adverse life contexts (Cutrona & Russell, 2017). Social support has now been reconceptualized as an interpersonal process that can be enacted both in coping with adversities and in pursuing opportunities for growth and development (Collins & Feeney, 2000; Feeney, 2004; Feeney & Collins, 2015; Feeney & Thrush, 2010).

The reconceptualization of social support categorizes it into two types of support: safe-haven and secure-base support. Safe-haven support is the type most often examined in prior literature on social support and is defined as a safe place for individuals to enter and seek comfort, reassurance, and assistance during distress (Collins & Feeney, 2000; Feeney, 2004). In comparison, secure-base support functions by supporting behaviors that involve venturing out of a relationship and partaking in autonomous exploration and growth in the environment (Feeney, 2004; Waters & Cummings, 2000). Secure-base support is defined as support for an individual's autonomy and exploration when unburdened by stress or adversity (Feeney & Collins, 2015). Both types of support have reciprocal effects on each other, as secure-base support can build a reservoir of trust that individuals may draw upon during distress (Feeney & Collins, 2015).

Capitalization is the retelling and celebration of positive events, the goal of which is to augment and sustain positive emotions (Rimé, 2007). Capitalization research has been an integral part of the reconceptualization of social support, cited by scholars as evidence supporting the importance of social support in celebrating successes and accomplishments (Cutrona & Russell, 2017; Feeney & Collins, 2015). Supporting capitalization by providing active and constructive responses to one's partners is listed as a form of secure-base support, as secure-base support is about promoting thriving through full participation in life opportunities in the absence of adversity (Feeney & Collins, 2015). Research on capitalization reveals that, when close others respond to people's sharing of positive events actively and constructively, the discloser experiences positive affect and well-being beyond the impact of the positive event itself (Gable et al., 2004, 2006).

The shift toward broadening social support to incorporate supportive relational processes not only during but also outside of distressing situations is also supported by other empirical evidence. For example, one study reported that perceived social support and perceived capitalization support (measured by perceived responses to capitalization attempts by mothers, fathers, and peers) are substantially correlated and have similar links to other constructs, such as conflict, provider similarity, and provider agreeableness (Shorey & Lakey, 2011). Since the positive effect of perceived social support on stress buffering, emotional well-being, and mental health has sometimes been hard to replicate (Lakey & Cronin, 2008), scholars have proposed that social support's main effect on health may exist not through explicit conversations about how to cope with stress during distress (Lakey & Orehek, 2011).
Rather, such a favorable effect may reflect the moment-to-moment affect regulation by relationship partners through "ordinary yet affectively consequential conversations and shared activities," regardless of whether the subject at hand is a positive or a negative event. From this perspective, safe-haven support and secure-base support function in ways similar to supportive communication in helping to regulate recipients' affect, thoughts, and actions. Even though social support and capitalization support are not entirely redundant, given that capitalization emphasizes important positive events (Lakey & Orehek, 2011), these two constructs are interrelated in reflecting social relationships' supportive functions and, based on the evidence provided above, are interchangeable in certain situations. Therefore, the current study adopts the view that capitalization is a form of social support and offers arguments based on this premise.

2.2. Antecedents and outcomes of capitalization

A considerable amount of literature has examined how people cope with negative and stressful events through processes such as social support in relation to physical and mental health (Cohen et al., 2000). Despite the obvious consequences of negative life events, there are generally more positive than negative events in daily life, with approximately three positive events to every one negative event reported in daily experience studies (Gable & Haidt, 2005). How people respond to and "cope with" positive events has become a topic worth exploring given its frequency and connection to individuals' well-being and mental health.

In a seminal study on the effects of positive events, Langston (1994) used the term "capitalizing" to describe "the process of beneficially interpreting positive events" (p. 1112). The concept was later adapted into "capitalization," which refers to the process of communicating a positive personal event (by capitalizers) to another person (responders) and reaping the additional benefit of such communication (Gable et al., 2010; Gable et al., 2004). For capitalization, mundane everyday events may be just as influential as major life-changing events, as it is the act of sharing and the responses of one's partner that shape the intrapersonal and interpersonal outcomes of capitalization (Peters et al., 2018). Capitalization usually occurs between close relationship partners, such as friends, siblings, or parents (Gable et al., 2004).

Gable and Reis (2010) developed a theoretical model of the capitalization process by reviewing empirical evidence regarding the social sharing of positive events, its effects on personal and interpersonal benefits, and its mediating mechanisms. The capitalization process consists of sharing positive events (i.e., capitalization attempts) and the perceived response to those capitalization attempts by an interaction partner. Capitalization attempts provide opportunities for relationship partners to provide (or not) a positive and engaged response, which determines the success of capitalization attempts. When the capitalization process occurs successfully, it is followed by personal and interpersonal benefits. In contrast, unsuccessful capitalization causes harmful consequences to both the capitalizer and the relationship between the capitalizer and the partner.
There are many personal benefits associated with the capitalization process unfolding successfully (Gable et al., 2010; Gable et al., 2004; Lambert et al., 2013; Maisel & Gable, 2009; Reis et al., 2010). For example, experimental studies have repeatedly found that telling someone about the most positive event of the day was associated with higher positive affect and life satisfaction (Gable et al., 2004; Reis et al., 2010). In a four-week journal study, people who shared their positive experiences reported increased positive affect, happiness, and life satisfaction across the four weeks (Lambert et al., 2013). Successful capitalization is also associated with a host of interpersonal benefits, such as intimacy and daily marital satisfaction (Gable & Reis, 2010; Gable et al., 2004). In their interpersonal model of capitalization (InterCAP), Peters et al. (2018) summarized the empirical evidence for the intrapersonal and interpersonal benefits of individuals' successful capitalization attempts. The intrapersonal outcomes associated with capitalization processes include increased positive emotions, subjective well-being, and self-esteem, and decreased loneliness. The interpersonal outcomes associated with capitalization processes include satisfaction, intimacy, commitment, trust, liking, closeness, and stability.

In Gable and Reis's (2010) review, the mechanisms by which capitalization leads to intrapersonal benefits include the maximization of the significance of the event and increased memorability of the event. Maximizing refers to how capitalization can help increase the event's value through the feedback provided by one's valued others. Positive feedback validates the event itself and signals the partner's valuing of the person, which in turn helps maximize the perceived value of the event (Gable & Reis, 2010; Langston, 1994). Supporting the mechanism of maximizing, Reis et al. (2010) found that ratings of the positivity of the event increased not because of the act of recounting itself but because of a process of interaction in which the listeners provided enthusiastic responses. Gable et al. (2004) also provided initial empirical evidence of memorability as the mediator between capitalization and its intrapersonal benefits. Notably, memorability of the event does not appear to be contingent upon an enthusiastic response (Gable et al., 2004).

The mechanisms underlying the interpersonal benefits of capitalization include perceived responsiveness and positive emotions. Gable and Reis (2010) proposed that the interpersonal benefits of capitalization are contingent upon perceived partner responsiveness, which they describe as "a process by which individuals come to believe that relationship partners both attend to and react supportively to central, core defining features of the self" (Reis et al., 2004). A subsequent section regarding responsiveness examines this aspect in more detail. In general, active and constructive responses that indicate concern for the capitalizer's well-being create closeness with the partner, whereas passive or destructive responses that signal a lack of care for the capitalizer's well-being create gaps in the relationship. As capitalization attempts and enthusiastic responses are associated with the positive affect of the capitalizer, such positive affect also plays a role in capitalization's interpersonal benefits.
Gable and Reis (2010) outlined that even though perceived partner responsiveness represents only a single mediator underlying capitalization's interpersonal benefits, it can determine all of the interpersonal outcomes and part of the intrapersonal outcomes associated with capitalization. Maximization, as a process underlying intrapersonal benefits, occurs when a partner responds with enthusiasm that enhances an event's perceived significance; positive emotions, as a process underlying interpersonal benefits, only emerge when the partner displays similar emotions and provides positive responses. Therefore, for the purposes of this study, the memorability of the capitalizing event and perceived partner responsiveness are examined as two independently crucial pathways to capitalization's intrapersonal benefits.

2.3. Affective well-being

Capitalization's intrapersonal outcomes are predominantly studied in relation to their implications for well-being. Well-being is one of the most prominent areas of research in psychology. Well-being refers to "optimal psychological functioning and experience" and does not merely represent the absence of mental illness (Ryan & Deci, 2001). There are two overlapping yet theoretically and empirically different traditions of studying well-being: the hedonic approach, which defines well-being as subjective well-being, and the eudaimonic approach, which defines well-being as psychological well-being (PWB). Both subjective well-being and psychological well-being have distinct conceptualizations and operationalizations that are divergent in some respects and complementary in others (Ryan & Deci, 2001).

The hedonic approach focuses on happiness. Within this paradigm, well-being is defined as subjective well-being, which emphasizes minimizing pain and maximizing pleasure. Subjective well-being consists of two aspects of evaluating one's life: a cognitive aspect, which asks individuals about their life satisfaction, and an affective aspect, represented by both general and state-level positive and negative affect. The hedonic approach originated in the 1950s as scientists started to explore the quality of life (Keyes, Shmotkin, & Ryff, 2002). Kahneman et al. (1999) defined hedonic psychology as studying "what makes experiences and life pleasant and unpleasant" (p. ix). A significant element of subjective well-being research is that it is a subjective evaluation of one's own life, which may be less concerned with objective factors such as health, socioeconomic status, or income (Diener, 2009). Subjective well-being has been linked to heredity (Lykken & Tellegen, 1996), personality (McCrae & Costa, 1991), and currently accessible information (Schwarz & Strack, 1999). Research within this paradigm usually measures subjective well-being based on three components: life satisfaction, as a representation of cognitive well-being, and the presence of positive mood and the absence of negative mood (summarized as happiness), as representations of affective well-being (Diener et al., 1999). Life satisfaction measures an individual's perceived distance from their ideal life, which involves a judgmental and long-term assessment of one's life that stands in contrast to the positive and negative moods that reflect one's immediate experiences (Campbell, Converse, & Rodgers, 1976).
Despite the debate concerning whether positive and negative affect are independent, substantial evidence supports the separation of positive and negative affect within subjective well-being (Cacioppo et al., 1999; Diener et al., 1995).

Researchers have exclusively used subjective well-being (i.e., positive and negative affect and life satisfaction) to measure capitalization outcomes (Gable & Reis, 2010; Gable & Reis, 2004; Langston, 1994). The decision to use subjective well-being instead of PWB is intertwined with the nature of capitalization. Because capitalization involves the sharing of positive events, it varies with the frequency and nature of those events, which are only partially predictable. Studies have long established the relationships between daily pleasurable events and positive affect (Diener et al., 1999; Stallings et al., 1997). A sustained stream of successful capitalization attempts will likely boost individuals' PWB on a long-term basis. However, because capitalization focuses on micro-level perceptions that are primarily event-based, affective well-being, which better captures state-level fluctuations across the day, is more suitable for this line of research.

Scholars of capitalization have studied its relationship with subjective well-being and the underlying mechanisms of such relationships. Langston (1994) found that expressive responses (e.g., communicating an event to others, celebrating, etc.) were associated with positive affect in addition to the benefits drawn from the valence of the positive events themselves, an effect that was replicated in other studies (Gable et al., 2004; Lambert et al., 2013; Monfort et al., 2014; Otto et al., 2015). As mentioned above, Peters et al. (2018) provided strong empirical support for the positive associations between individuals' successful capitalization attempts and well-being.

2.4. The current gap in capitalization research: Mediated capitalization and its effects

Capitalization researchers have largely ignored the technologies through which capitalization processes are mediated. With the emergence of social media, studies have found that people often turn to channels through which they can share positive news with a larger audience, and such sharing is linked to positive affect and satisfaction with sharing (Bazarova et al., 2015; Choi & Toma, 2014). One study found that more responses received on Facebook were associated with an increase in the perceived value and memorability of an event to a poster. The study further found that the poster experienced a higher level of happiness and self-esteem, and perceived their Facebook network to be more caring, than when a lower number of likes or comments was received (Zell & Moeller, 2018). Another study that specifically focused on Facebook sharing found that people are more satisfied after sharing positive emotions than negative emotions across all channels on Facebook (i.e., status updates, posts written on others' timelines, and private messages; Bazarova et al., 2015). Moreover, posters' overall satisfaction after sharing emotions on Facebook was related to the quantity and quality of replies received (Bazarova et al., 2015).
Choi and Toma (2014) found that sharing the "most important" personal events across multiple media, including Facebook, face-to-face (FtF) communication, texting, and phone calls, increased positive affect (if the event was positive) and increased negative affect (if the event was negative), with FtF offering the largest increase in positive affect and the smallest increase in negative affect.

Given the current polymedia environment, it is less useful to focus on the descriptive differences between ICTs regarding their effects on relational processes and outcomes. Instead, scholars have argued that ICTs can be described in a more consistent and standardized way, so that direct comparisons across technologies can be made (Walther, 2013). Studying inherent functional attributes instead of specific ICTs can offer more detailed and enduring theorizing across contexts (Ellison et al., 2015). Therefore, to fill the gap concerning mediated capitalization, the current study focuses on the features of ICTs that are related to supportive communication. It is essential to consider the features of the mediated channels, as different channels may facilitate supportive communication processes, such as the production of supportive messages or the possibility for nonverbal immediacy during capitalization, in different ways.

2.4.1. Perceived features of mediated channels and outcomes of supportive communication

Computer-mediated communication (CMC) as a viable channel for seeking and providing social support has been supported by extensive literature (Rains & Wright, 2016; Rains & Young, 2009). Nevertheless, most research on CMC use and supportive outcomes has focused on anonymous online communities for support seekers who need specific emotional and informational support for managing a particular type of disease (Braithwaite et al., 1999; Coulson, 2005; Coursaris & Liu, 2009). CMC provides an environment in which individuals feel comfortable asking for help and can reach able and willing support providers conveniently (Rains & Wright, 2016). For example, studies found that online support groups provided access to relevant information for patients (Braithwaite, Waldron, & Finn, 1999; Coulson, 2005; Coursaris & Liu, 2009), while allowing them to share emotions and experiences as well as exchange support (Barak et al., 2008; Coursaris & Liu, 2009; Mo & Coulson, 2008). Ample research has also reported the positive role of social networking sites in providing social support to different people in different situations (Bender et al., 2011; Ellison et al., 2011; Greene et al., 2011). Compared with the average American, Americans who use Facebook perceived significantly higher levels of social support (Hampton, Goulet, Rainie, & Purcell, 2011). However, most of the empirical studies have focused on the reasons for support-seeking in a CMC context (lack of support offline, perceived stigma, interaction control, accessibility) and its outcomes (received and perceived social support in specific subject matters) (Rains & Wright, 2016). These studies have not, however, answered the question of how CMC has reshaped the entire support-exchanging process. Considering that social support is provided by close family members and friends in most scenarios (Cohen, 2004), it is more relevant to ask what ICTs are doing to traditional supportive exchanges offline.

As technology evolves rapidly, new forms of ICTs emerge and old forms disappear.
For example, Myspace used to dominate the social media space before Facebook emerged (boyd & Ellison, 2007), and teens are currently adopting platforms other than Facebook (Anderson & Jiang, 2018). By the same token, Zoom overtook Skype to become the leading platform for video conferencing during the 2020 pandemic (Stokel-Walker, 2020). More important than the IT industry's changing landscape, however, is that individuals of different ages and geographic locations are adopting different ICTs to form unique media repertoires or personal communication systems based on personal, social, and cultural attributes (Lim & Pham, 2016; Qiu et al., 2013; Yang et al., 2014). People who live in China likely use WeChat and Weibo for most of their communication, whereas people who reside in the U.S. use Facebook Messenger, Twitter, or WhatsApp (Ortiz-Ospina, 2019). Despite differences in specific platforms, what is common across these platforms are media features, such as the ability to send text messages, make voice/video calls, start group chats, share memes/gifs/stickers/emojis, and post social updates to multiple audiences (i.e., engage in masspersonal communication). Additionally, as individuals now live in a polymedia environment in which there is always a large selection of ICTs available, they usually use more than one form (Madianou, 2015; Madianou & Miller, 2013). Differences across specific and fleeting sites, platforms, and technologies require a uniform method of standardizing communication technologies and isolating common features of ICTs when discussing how they may impact supportive communication.

To address the moving-target problem in studying technology, scholars have advocated for a more robust approach to ensure the long-term replicability of such research (Bayer et al., 2020). Among the proposed approaches, the concept of affordance provides a flexible means of describing objects of study and facilitating comparisons regardless of the historical trajectories or geographical variations of various technologies (Bayer et al., 2020). The term affordance was first coined by Gibson (1979), who referred to it as an action possibility made possible by the environment. Affordances are the inherent functional attributes of a particular object that do not change even if the goals or needs of the actor interacting with the object change. In comparison, Norman (1988) viewed affordances as the design aspect of an object that informs how it should be used, as affordances provide strong clues as to the operation of objects through the psychology of causality. As scholarship has developed, the current approach to affordance takes a relational view in suggesting that the objective design of technology influences but does not determine how the object could be used. Individuals engage in experimentation and adaptation when interacting with technologies (Gaver, 1991; Leonardi, 2011). Affordances are thus negotiated within dynamic processes that include users' abilities, the materiality of technologies, and technology use contexts (Evans et al., 2017).

Nevertheless, the affordance approach is not without its criticisms and limitations. Notably, its conceptualization and application have been inconsistent (Evans et al., 2017). The primary inconsistency lies in the failure to link the object and outcome in research that invokes affordances, as research focuses either on people's perceptions of technologies (i.e., perceptions of usefulness) or on features used (Evans et al., 2017).
In contrast, media features, or capabilities, are less controversial in their definitions and applications. Within the study of communication technologies, media features have been a crucial component of channel-focused theories, such as social presence theory (Short et al., 1976), media richness theory (Daft et al., 1987), and channel expansion theory (Carlson & Zmud, 1999). Media features can also be perceived differently by different users, or by the same user in various manners across time (Carlson & Zmud, 1999; Fulk et al., 1990; Walther, 1992), without the complex relational connotations of affordances. For example, social information processing theory (later adapted as social influence theory) argues that, instead of being an objective and physical property of a medium, media richness as a feature (characterized by feedback, multiple cues, language variety, and personal focus) is partially socially constructed, as individuals may have different perceptions of the same medium's richness (Lee, 1994). As noted by Schmitz and Fulk (1991), viewing media richness as only an objective feature of the medium "reduces our ability to accurately predict communication behavior in organizations" (p. 491). For example, even though individuals can plan and edit messages when sending an email, the possibility of planning and editing is irrelevant if the individual does not have the knowledge or willingness to take advantage of this feature. Channel expansion theory also posits that users' perception of a communication medium's richness (e.g., email) varies according to their experiences with the technology, the communication partner, and their experiences within a given organizational context (Carlson & Zmud, 1999). Moreover, richness perceptions of a specific technology can also vary as a function of individuals' knowledge bases and perceived social influence. Individuals may also maintain different richness perceptions for the same technology, contingent upon situational contexts (Carlson & Zmud, 1999).

The conceptual debate on whether technology is best described as objective features or human perceptions of such features can be seen most clearly in the conceptual trajectory of interactivity. Defined as "technological attributes of mediated environments that enable reciprocal communication or information exchange, which afford interaction between communication technology and users, or between users through technology" (Bucy & Tao, 2007, p. 647), interactivity has been viewed as the defining characteristic of new media technologies, though the conceptualizations of interactivity have varied (Bucy & Tao, 2007; Leiner & Quiring, 2008; Liu, 2003; Lowry et al., 2009; Sundar et al., 2003). Initially, interactivity was viewed as either a medium or a message feature (Bucy, 2004a, 2004b; Bucy & Tao, 2007). The medium-centered approach treats interactivity as an inherent attribute of mediated communication through digital technologies, such as through hyperlinks, mouseovers, or the richness of the message presented, affecting the user's experiences indiscriminately (Sundar, 2004). The message-centered approach treats interactivity as a continuous process that facilitates user interactions with each other or with the system (Bucy & Tao, 2007; Stromer-Galley, 2004).
Nevertheless, as scholars noticed the discrepancy between objective and perceived interactivity, it became evident that the presence of interactive features does not ensure that users will perceive the same level of interactivity (Bucy & Tao, 2007; Voorveld et al., 2011). Consequently, after explicating interactivity by summarizing the different approaches to studying it, which include the message-centered, structural, and perceptual approaches, Bucy and Tao (2007) proposed a mediated moderation model of interactivity, which states that interactivity serves as the manipulated independent variable while perceived interactivity, i.e., user perceptions, transforms the impact of interactivity by serving as a mediator between intrinsic media attributes and media effects. Perceived interactivity occurs before media effects but stems from individuals' engagement with interactive media (Bucy & Tao, 2007). Such an argument that users' intentions and actions are more important than objective interactive attributes is also supported by theoretical formulations and empirical evidence (Chu & Yuan, 2013; Hoffman & Novak, 1998; Sundar et al., 2015; Wu, 2005). Therefore, the degree to which interactivity can be effective depends on the locus of interactivity (objective vs. perceived) tested in each study (Yang & Shen, 2018).

Additionally, research on communication technology has routinely relied on experimental methods to study the effects of computer-mediated communication features such as synchronicity and modality on relational processes and outcomes (Kashian & Walther, 2020; Park & Sundar, 2015). However, across these studies that manipulated modality or synchronicity as technological features, researchers consistently asked manipulation-check questions to confirm that the manipulations were successful based on the participants' perceptions of the specific features. Arguably, technological features' effects on interpersonal communication are based on user perceptions of these features rather than the features per se.

Overall, a variety of factors, ranging from an individual's cognitive capacity and media literacy to physical limitations, may influence individuals' ability to evaluate and utilize a channel's features (an area worth further exploration but beyond the scope of this study). Individual users may fail to perceive features that exist or perceive features that are not inherent to the technology. For example, even though most instant messaging apps allow audio or video calls, individuals still perceive them as similar to texting (Bailey et al., 2016). In treating them as texting, individuals may not perceive these technologies as highly facilitative of the perception of the interactant's nonverbal cues (whether audio-based or nonaudio-based). Similarly, when communicating with strangers, it is unlikely that individuals will save the messages on their devices indefinitely even though the technology allows a relatively permanent record of such communication. Therefore, in contrast to previous studies that have labeled various communication technologies as high in cue availability or low in synchronicity based on researchers' ratings (Hancock et al., 2004; Jiang & Hancock, 2013), the current study argues that users' behaviors are better predicted by their perceptions of features than by the actual features. In the following discussions, the current study uses the term perceived features to describe individuals' usage of ICTs.
Perceived nonverbal cues and supportive communication Social support consists of nonverbal and verbal messages that people use when trying to reduce others’ emotional anguish. Nonverbal and verbal messages, defined as nonverbal immediacy and verbal person-centeredness, have consistently been found to be particularly beneficial in prompting emotional change (Burleson & Holmstrom, 2008). Nonverbal immediacy describes behaviors that reflect interpersonal warmth, empathy, and psychological closeness (Andersen, 1989; Andersen, 2008), and are evidenced by close proximity, leaning forward, facial expressions, vocal expressiveness, eye gaze, smiling, or physical animation. The presence or absence of nonverbal cues that can convey nonverbal immediacy in mediated communication has been a long-standing topic of discussion in the literature of CMC (Walther, 2011). In the early stages of CMC research, scholars proposed cues-filtered-out theories to argue how the lack of nonverbal cues (e.g., clothing, facial expressions, posture, gestures) and para-verbal cues (e.g., voice pitch, talking speed) and an overreliance on textual communication create fewer effective interactions by leading to impersonal orientations among 21 users (Walther, 1992; Walther, 1996). The social presence theory (Short et al., 1976) and media richness theory (Daft & Lengel, 1986) are most representative of this research line. In contrast, another group of cues-filtered-in theories discusses how communicators adapt to or exploit the cue limitations of CMC systems to achieve or surpass FtF levels of closeness (Walther, 2011), by using emoticons or smileys to symbolize smiles “:-)” or frowns “:-(”, expressive disclaimers like “*shrug*”, “LOL”, “*sigh*”. Additionally, chronemics and proxemics cues, such as short response time, can also be used to convey thoughtfulness, eagerness, or closeness (Döring & Pöschl, 2009). Theories along this line include the social information processing model, the social identity model of deindividuation effects (Lea et al., 2001), and the hyperpersonal model (Walther, 1996). Nonverbal immediacy and ICTs’ facilitation of the perceptions of nonverbal immediacy provide a framework to understand ICTs’ associations with social support perceptions. Despite being unable to convey some of the components of nonverbal immediacy, such as physical touch, CMC still can transmit the other types of nonverbal cues displayed by an interaction partner, such as concerned facial expressions or body language that demonstrates interest. As ICTs advance, video conferencing tools are becoming widely adopted. The cues-filtered-out theories no longer apply to all scenarios of ICT usage, as the cues present in ICTs range from visual cues that allow interactants to see one another, auditory cues that enable interactants to hear speech, and textual cues via typed messages. Besides visual cues that provide an immediate visual representation of the interactants’ nonverbal body languages, the pervasive use of emoticons and emojis can also be understood as a form of nonverbal cues. Short for emotion icons and constructed as indicators of affective states, emoticons convey nonlinguistic or paralinguistic information that usually accompanies face-to-face communication through facial 22 expressions, gestures, or other bodily expressions (Dresner & Herring, 2010). Emoticons can compensate for the lack of nonverbal and para-verbal cues in the online context and act as surrogates for nonverbal cues (Derks et al., 2008). 
Individuals use emoticons to express emotion and humor and to strengthen the verbal part of their messages (Derks & Bos, 2008). As an evolution of emoticons, emojis convey more emotional messages (Ganster et al., 2012). More recently, the introduction of memes, GIFs, stickers, and other stimuli such as kinesics and reactions has added new formats for communicating nonverbal, visual, or verbal cues in a mediated environment. Therefore, it is arguable that CMC can facilitate the perception of nonverbal immediacy through either visual cues that allow interactants to see one another or the use of emoticons that act as a proxy for nonverbal cues. 2.4.3. Perceived synchronicity and supportive communication As previously indicated, person-centered messages are a critical element of supportive communication, and such messages are positively associated with both the perceived and actual effectiveness of social support outcomes (Bodie & Jones, 2012; Burleson & Holmstrom, 2008; High & Dillard, 2012; Jones, 2004). The hallmarks of sophisticated person-centered messages are that they acknowledge, contextualize, and legitimize another's feelings and encourage the interactants to explore and elaborate more on such feelings (High & Dillard, 2012). When offered in the context of soothing the distressed, person-centered messages are also likely to be a crucial component in capitalization successes. In the production of person-centered messages that enable the supportive communication process to unfold successfully, media synchronicity theory provides a useful perspective on how the communication process could benefit from the synchronous or asynchronous nature of the media. Synchronicity refers to the extent to which messages are exchanged instantaneously and in real time (Hancock et al., 2004), or how soon the messages occur in succession (Culnan & Markus, 1987; Rice & Steinfield, 1994). Media synchronicity theory proposes that communication processes can be distinguished into conveyance or convergence types, and media can be categorized as being high or low in synchronicity. Conveyance refers to the process of transmitting large amounts of information for later processing. Convergence refers to the process of exchanging messages about information that has already been processed to create shared meaning (Dennis et al., 2008). Convergence is an interactive communication process aimed at understanding interactants' interpretation of the information shared, whereas conveyance is a transmission process aimed at sharing the information itself. A medium with a high degree of synchronicity can facilitate a communication process aimed at better convergence, whereas a medium with a low degree of synchronicity is more suitable for communication aimed at conveyance. Interpersonal processes that require interactants to reach shared meaning and agreement are convergence processes that benefit from synchronous media (Jiang & Hancock, 2013; Kashian & Walther, 2020). Youngvorst (2018) found that synchronicity was a significant predictor of support receivers' emotional improvement. In another experimental study that manipulated synchronicity, synchronicity was found to be indirectly related to social support through perceived social presence (Petrocchi et al., 2020). Producing person-centered messages is oriented around acknowledging and caring for one's interaction partner, which is unlikely to be adequately enabled by a one-way conveyance process.
Therefore, as a convergence process that requires the continuous production and interpretation of person-centered messages, supportive communication could be best served by media with a high degree of synchronicity. 2.4.4. Perceived audio modality and supportive communication In addition to impacting the perceptions of nonverbal immediacy and the exchange of person-centered messages, another characteristic of using mediated technologies for social support lies in how the verbal messages are delivered. When individuals use communication technologies for interpersonal communication, they choose, through the media they adopt, whether to convey the messages textually, vocally, visually, paralinguistically, or through a mixture of these modalities. Such choices are closely related to another critical component of intimate relationships, which is empathic accuracy. As the cognitive aspect of empathy, empathic accuracy refers to an individual's ability to accurately read another's cognitive and affective states (Ickes et al., 1990; Ickes, 1993), which directly contributes to the understanding of one's partner. Most recently, empathic accuracy has also been viewed as the equivalent of successful perspective-taking in specific situations (Devoldre et al., 2010; Verhofstadt et al., 2016). Accuracy in recognizing a partner's emotional state and inferring the interaction partner's specific thoughts and concerns prompts the provision of support and facilitates the selection and enactment of appropriate and timely support behaviors (Verhofstadt et al., 2008). Through reviewing studies on cognitive empathy and social relationships, Davis (2017) concluded that there is consistent empirical support for cognitive empathy's effect on social support. Greater empathic accuracy was associated with social support in interactions with a romantic partner (Verhofstadt et al., 2008). Empathic accuracy also enabled more responsive behaviors in listeners with greater empathic concern in a sample consisting of couples (Winczewski et al., 2016). In summary, empathic accuracy should be positively related to social support, given the evidence mentioned above and its importance in intimate relationships in general (Davis, 2017; Ickes & Simpson, 2001; Simpson et al., 2001). In terms of the determinants of empathic accuracy, the modality through which viewing and listening to a partner occurs has been studied extensively. Voice, as the tool humans use to communicate the contents of their mind to others (Pinker & Bloom, 1990), has been found to play a crucial role in empathic accuracy, as extensive studies have found that a person's mind is most explicitly communicated through voice (with nonverbal vocal cues and verbal cues; Hall & Schmid Mast, 2007; Ickes, 2003; Kruger et al., 2005; Mehrabian & Wiener, 1967). Besides verbal information, paralinguistic cues, such as pauses, tones, pitches, and intonations, are also reflective of the process of conscious thinking and authentic internal emotional states (Ambady & Rosenthal, 1992; DePaulo & Rosenthal, 1979). In developing and administering a standardized empathic accuracy test, Gesn and Ickes (1999) presented it to perceivers in three modalities: original full video, audio only, and video plus electronically filtered audio. In a later study, perceivers viewed the test in four modalities: original full video, audio only, transcript only, and silent video only.
Both studies found that the conditions containing words have a much higher accuracy rate, highlighting the primary contribution of verbal content and its associated vocalic cues in empathic accuracy (Hall & Schmid Mast, 2007). Most recently, Kraus (2017) reported that voice-only communication (consisting of nonverbal vocal cues and verbal messages) elicits higher rates of empathic accuracy in comparison with vision-only and a combination of voice and vision communication. In sum, voice-only communication should be more conducive to the provision of support through its increased empathic accuracy. Some preliminary empirical evidence also demonstrated that ICTs with voice (Skype, telephone) were significantly related to emotional support (Wohn & Peng, 2015). Another study that added voice communication to an existing online gaming community also found that such addition led to an increase in liking and trust 26 among community members, in comparison with communication using text only (Williams et al., 2007). 2.5. Relationship development as a moderator of perceived features of mediated channels and outcomes of supportive communication Even though few survey-based studies exist that examine perceptions and effects of the specific features of mediated technologies during interpersonal communication, many theories and experimental and qualitative studies have indicated the importance of considering the relationship closeness of interaction partners in the context of CMC (Manago et al., 2020; Rains & Brunner, 2018; Walther, 2011). Closeness plays a critical role in how individuals select different media for communication with various members of their networks (Dimmick et al., 2000; Dimmick et al., 2007, 2011; Feaster, 2008, 2009; Ramirez et al., 2008). By drawing on the social penetration theory, the affordance utilization model (AUM) argues that the associations between communication technologies’ features, including asynchronicity and reduced cues, and communication competence and self-disclosure are moderated by relationship development. Although certain features of ICTs may be more useful for the supportive communication to unfold, the relational history between interaction partners can render the features more or less important in determining supportive communication outcomes. Specifically, the AUM model proposes that as a relationship develops further, the interaction partner will rely less on ICT features for self-presentation and communication competence (Ruppel, 2015). These two features, reduced cues and asynchronicity, are identified based on previous research (Nowak et al., 2005, 2009; Ramirez et al., 2008; Utz, 2007), as studies have found that people use these features to manage their conversations (Kalyanaraman & Sundar, 2008; Walther, 1996). Notably, in defining reduced cues, Ruppel (2015) refers to 27 verbal, nonverbal, vocal, nonverbal vocal, and visual cues delivered by various modalities, such as audio, video, or text. The outcomes of the AUM include communication competence and self-disclosure. Below I introduce the definitions of these two concepts and their relationships to the provision of supportive communication between partners. As there is no direct evidence on ICT features and social support, I argue that the relationship between ICT features and social support (related to competence and self-disclosure in previous literature as outlined below) may also vary by relationship development suggested by AUM. 
Communication competence refers to the ability to engage in interactions that are effective (i.e., accomplishing one's goals) and appropriate (i.e., adhering to relevant norms; Canary & Spitzberg, 1989; Spitzberg & Cupach, 1984). As relationships develop, relational partners tend to feel more competent in their interactions as they become familiar with each other's verbal and nonverbal cues and can better predict responses (Altman & Taylor, 1973). Communication competence is related to relational outcomes, such as increased relational satisfaction and social support (Canary & Lakey, 2006; Canary & Spitzberg, 1987; Query & Wright, 2003). Self-disclosure is at the core of building and maintaining intimate relationships and is closely related to intimacy and trust (Knapp et al., 2014). Social penetration theory predicts that relationship development changes communication patterns between partners (Altman & Taylor, 1973), including the breadth and depth of their self-disclosure. The breadth and depth of unacquainted dyads' self-disclosure increase as time passes (Taylor et al., 1973), and perceived friendship intensity is related to the breadth and depth of self-disclosure (Hays, 2016). The reciprocal act of making oneself known to others, and knowing them more deeply in turn, cultivates a relationship by increasing intimacy (Taylor & Altman, 1987). Self-disclosure helps buffer stressful life events due to the cathartic effect generated by the unburdening of negative thoughts and the reevaluation of the event (Feldman et al., 2008; Stiles, 1987). It has also been related to social support in mediated channels, such as social networking sites and online support groups (Yang et al., 2019; Zhang, 2017). The AUM categorizes relationship development into three stages (i.e., low, moderate, and high), represented as acquaintances, casual friends, and close friends in friendships (Hays, 2016), or casual dating, serious dating, and engagement in romantic relationships (Braiker et al., 1979). Different stages of relationship development involve different self-presentation concerns, which predispose individuals to utilize the features of ICTs to different degrees, leading to different relational outcomes (i.e., communication competence and self-disclosure). At low levels of relationship development, asynchronicity and reduced cues facilitate the idealized self-presentation of interaction partners by hiding undesirable nonverbal cues, allowing ample time for planning and editing messages, and reallocating cognitive resources from monitoring nonverbal and environmental behavior to the conversation itself. These features are likely to increase one's communication competence. The use of asynchronicity and reduced cues is also expected to facilitate more self-disclosure, as strangers disclose more frequently and reveal more private information when communicating through mediated technologies (Antheunis et al., 2007; Joinson, 2001; Tidwell & Walther, 2002). At moderate levels of relationship development, similar self-presentation concerns are present, but to a lesser degree, as interactions have become relatively more open and personal (Cupach & Metts, 1994). Partners are likely to rely less on asynchronicity and reduced cues for communication competence, as they can construct effective messages without the need to contrive, overthink, edit, re-edit, and mask nonverbal cues. Self-disclosure, at this stage, also starts to become more profound and comprehensive.
Nevertheless, asynchronicity and reduced cues in mediated technologies still provide a buffer against face threats (O'Sullivan, 2000; Rettie, 2009), which is necessary at this relationship development stage. As relationships continue to develop, self-presentation concerns are minimized, and the need to discuss intimate topics arises. Interaction partners have more experience with each other, allowing them to construct effective messages without relying on the asynchronicity and reduced cues of mediated technologies. The breadth and depth of self-disclosure are also less likely to be contingent on the communication medium. Studies have reported that discussing intimate topics occurs more through telephone than email (Dimmick et al., 2000; Utz, 2007), highlighting the limitations of asynchronicity and reduced cues in relation to communication competence and self-disclosure in highly developed relationships. The main propositions of the AUM can be summarized as asynchronicity and reduced cues having a stronger positive effect on communication competence and self-disclosure at low levels of relationship development. In contrast, these features have a weaker impact on both outcomes when relationships are highly developed. Furthermore, as relationships develop, individuals are also more likely to use synchronous channels with rich cues (Ruppel, 2015). The AUM provides a framework to reconsider how the utilization of ICT features is intertwined with relationship development stages. It suggests that ICT features are not used in the same way, with the same outcomes, for interpersonal communication across all interactions. The nuances outlined by the model help advance the understanding of how interactions in the same forms of mediated communication can result in different outcomes, depending on the relational history between partners. As mentioned above, communication competence and self-disclosure have both been found to be related to social support (Canary & Lakey, 2006; Canary & Spitzberg, 1987; Query & Wright, 2003; Yang et al., 2019; Zhang, 2017); therefore, the current study draws on the AUM in arguing that relationship strength between interaction partners moderates the associations between the features of ICTs and capitalization outcomes. Specifically, asynchronicity and reduced cues are better for supportive communication at the early stages of relationships, whereas at later stages of relationships, these features may be less relevant. Unlike the AUM, which did not differentiate between audio-based nonverbal cues and nonaudio-based nonverbal cues and referred to them as cues in general, the current study views them as two separate sets of constructs to emphasize the possibly different contributions of both types of cues, based on the literature on nonverbal immediacy and empathic accuracy. Nonverbal immediacy is conveyed by nonverbal cues, including gaze, facial, auditory, kinesic, and proxemic cues. However, as the literature also suggests that the audio modality alone is more predictive of empathic accuracy, it is justifiable to separate audio-based nonverbal cues from nonaudio-based nonverbal cues in identifying how both types of cues may be related differently to the processes and outcomes of support provision.
Given that the AUM suggests that benefits to well-being may be derived from asynchronicity and reduced cues when using mediated technologies for communication with partners with a lower degree of relationship closeness, I propose the following hypotheses: H1: During mediated capitalization with partners with whom relationship closeness is low, the use of ICTs with higher levels of perceived (a) synchronicity, (b) audio-based nonverbal cues, and (c) nonaudio-based nonverbal cues is negatively related to affective well-being. H2: During mediated capitalization with partners with whom relationship closeness is high, the use of ICTs with higher levels of perceived (a) synchronicity, (b) audio-based 31 nonverbal cues, and (c) nonaudio-based nonverbal cues is positively related to affective well- being. 2.6. Perceived partner responsiveness and affective well-being Perceived partner responsiveness has been identified as one of the central processes influencing intimate relationships (Graber et al., 2009; Laurenceau et al., 1998, 2005; Reis, 2012, 2014; Selcuk & Ong, 2013). Responsiveness is positively related to a host of outcomes related to physical, psychological, and relationship health, such as better sleep quality (Selcuk et al., 2017), healthier cortisol levels (Slatcher et al., 2015), eudaimonic well-being (Selcuk et al., 2016), personal well-being (i.e., affect, coping, self-efficacy), interpersonal well-being (i.e., more positive sentiments toward targets) (Lemay & Neal, 2014), and social support (Maisel & Gable, 2009). Gable and Reis (2010) identified perceived partner responsiveness as one of the critical components that determine the success of capitalization. When an individual’s interaction partner responds with supportiveness, understanding, excitement, and positivity toward their capitalization attempts, they report more significant positive affect, lower negative affect, increased subjective well‐being, and greater relationship satisfaction, commitment, intimacy, trust, and quality (Gable et al., 2006, 2004). Capitalization attempts do not guarantee responsiveness, as partners may respond to someone’s positive events with ambivalence, indifference, or negativity (Reis, 2014). More accurately, capitalization attempts create opportunities for responsiveness, which begets further positive affect for the capitalizer, improved personal well-being, trust and appreciation for relationship partners, and mutual growth (Reis, 2014). 32 Gable et al. (2004) found perceived partner responsiveness is associated with relationship satisfaction, trust, and intimacy. Feeney (2004) also found that responsive support for a relationship partner’s goal strivings and explorations predicted increases in the recipient’s personal well-being and self-reported likelihood of goal attainment. Gable et al. (2006) demonstrated that positive partner responses to capitalization attempts (as rated by the individual or independent observers) were associated with higher relationship well-being, which refers to a composite of scores on scales of commitment, satisfaction, and passionate love. In sum, past literature has demonstrated that perceived partner responsiveness is related to individuals’ well-being during capitalization. 
However, previous capitalization literature has found that an overwhelming percentage (as high as 98%) of capitalization attempts are directed at close others (e.g., friends, roommates, parents, romantic partners) (Gable et al., 2010; Gable et al., 2004), which suggests that the importance of perceived partner responsiveness concerning increased well-being during capitalization is predominantly established based on intimate ties (Reis et al., 2004). New technologies such as social media, which are now used by most of the U.S. population daily (Smith & Anderson, 2018), encourage individuals to mass-share their positive events to a vast and diverse group of network members (O’Sullivan & Carr, 2018). There is also a positivity bias in how people present themselves on social media (Bazarova, 2012; McLaughlin & Vitak, 2012), suggesting that capitalization may be a frequent phenomenon in such sites. When individuals mass-share, responses to mediated capitalization likely come from a mix of family members, close friends, acquaintances, or strangers. This phenomenon begs a new question regarding whether responsiveness from interaction partners who are not close others has the same effect on well-being as responsiveness from close others. One previous study found responsiveness on Facebook was associated with various positive indicators of well-being, 33 but the study measured a general level of perceived partner responsiveness by treating the Facebook audience as one homogenous community and did not discriminate between responses from close others or non-close others (Zell & Moeller, 2018). When a higher level of responsiveness is perceived from non-close ties, it is unclear whether it is also positively related to the benefits of capitalization (i.e., subjective well-being). Therefore, I ask the following research question: RQ1: Does perceived partner responsiveness relate to affective well-being differently depending on relationship closeness with one’s communication partner? 2.6.1. Perceived partner responsiveness and its antecedents Previous literature on responsiveness has primarily focused on its effects and failed to examine its antecedents (Demir & Davidson, 2013; Maisel & Gable, 2009; Selcuk et al., 2016, 2017; Slatcher et al., 2015). A general account of responsiveness’ antecedents involves it being a function of each partner’s needs and motives, expectations of the other’s responsiveness, and the existing relationship’s nature. It may also be influenced by individual differences in personality, expectations, and relationship history (Kumashiro, 2009). Davis (1982) presented a theoretical analysis of the determinants of responsiveness in dyadic interaction. Unpacking the implicit demands inherent in each communicative behavior, which include the responses from the interaction partner, the content relevance of the response, and a certain degree of elaboration in the response, Davis (1982) proposed that responsiveness may be viewed as the degree to which these demands are met. For each interaction, responsiveness contributes to the continuance of social interactions and a focus on conversation topics. Concerning interpersonal interactions in general, responsiveness facilitates the achievement of interpersonal goals, improves communication efficiency and accuracy, and 34 increases the attraction between communication partners. 
Even though Davis’s (1982) conceptualization of responsiveness is not precisely the same as Reis and Shaver’s (1988) or Gable and Reis’ (2004; 2006; 2010), the essence of capturing how an interaction partner is perceived to be responsive is arguably identical. Based on Davis (1982) account, cues and feedback facilitated by ICTs used for social interaction can impact responsiveness because they constitute the characteristics of the situation, which affect both attention to and accuracy in understanding one’s partner. Many other factors also impact attention and accuracy, such as interaction partners’ characteristics, role relationships, and individual differences in personalities. However, these factors are not the focus in this study as they are preexisting determinants inherent either in the individuals or relationships, which are unlikely to vary by the ICTs used for each interaction. As for the function of cues for attention to one’s partners, the number of cues and the specific categories of cues that one attends to is indispensable for individuals to notice their interaction partners’ behaviors that require responses. In Davis’ (1982) definition, cues refer to verbal, facial, vocal, postural, or other cues that may vary by the medium one adopts for communication. As delivered by the medium of interaction, the cues represent the demands of both the situation and one’s interaction partner. The lack of such cues undermines one’s attention to partners since they can neither receive nor process the information associated with a full range of nonverbal communication signals. There is a direct and empirical causal link between gaze perception and social attention that has been demonstrated in numerous experiments (Bayliss et al., 2011; Christopher Blair et al., 2017), highlighting the importance of cues such as eye gaze in directing an individual’s attention. 35 Cues also affect the accuracy of understanding one’s partners, given that communication accuracy is achieved when the content encoded (expressed) by the communicator corresponds to that decoded (understood) by the receiver (Mehrabian & Reed, 1968). The encoded content includes information that is intentionally (either verbally or nonverbally) or unintentionally delivered (e.g., nonverbal expressions of emotion). The encoded content also consists of the linguistic, paralinguistic, and nonverbal cues that indicate what types of response are demanded and when (Davis, 1982). Both the encoding and decoding process becomes limited when expressed through mediated technologies that do not allow a full range of verbal and nonverbal cues. Based on Davis (1982), the immediacy of the feedback allowed by the medium also impacts the accuracy of understanding one’s partners. Feedback allows the communicator to know if the interaction partner has decoded the message correctly and decide whether to adjust the message in subsequent conversations. Accuracy is shaped by (1) whether the feedback is available at all, (2) the amount of feedback available within a given channel (e.g., yes-no vs. detailed commentary), and (3) the number of channels (auditory, visual, textual, etc.) through which feedback is provided (Faules, 1967; Feffer & Suchotliff, 1966; Leavitt & Mueller, 1951). 
For example, when communicating FtF, any lack of comprehension becomes clear from the interaction partners' facial expressions or follow-up comments; however, this type of feedback information may not come as quickly or readily during texting or emails due to the asynchronous nature of these media. Mehrabian and Reed (1968), for example, presented results from multiple studies showing that communication that provides feedback from the interaction partner permits a more accurate understanding of the communicated message than communication without feedback. In more recent studies using video communication, feedback delay has also been found to interfere with the process of forming correct interpersonal judgments (Ehrlich et al., 2000; Hinds, 1999; Powers et al., 2011). Therefore, the sooner the interaction partner can receive feedback, the more smoothly the communication is likely to proceed because of the increased accuracy involved. It is implicit in Davis's (1982) account that features of ICTs such as cues and the immediacy of feedback should be more accurately termed perceived features. Responsiveness is determined by the attention to one's partners and the communication accuracy facilitated by these features. It is therefore unlikely that a channel's objective capabilities for transmitting certain cues automatically lead to communication partners' undivided attention or absolute accuracy during the conversation. Rather, responsiveness is the outcome of a combination of how these features are perceived and individuals' cognitive efforts in utilizing these features for communication. Therefore, following Davis's (1982) theorizing that the cues and feedback allowed by the medium of an interaction shape responsiveness through attention to, and accuracy in understanding, one's partner, perceived cue-related features of ICTs that facilitate comprehensive understanding, as well as perceived synchronicity that facilitates timely feedback, should be related to responsiveness. With the advancement of ICTs in the 21st century, new features in addition to cues and feedback, such as auto-filled responses, increased accessibility, and the mobility of various ICTs, may also affect response repertoires and motivations to be responsive. For example, when a response takes no more than a second, such as clicking the "like" button on social media, individuals may be more motivated to reply immediately. Nevertheless, Davis (1982) did not consider how relationship development between partners may play a role in how the perceived features of ICTs are related to responsiveness. As argued by the AUM, asynchronicity and reduced cues are more useful in achieving a higher level of communication competence and self-disclosure when relationship development between interaction partners is low (Ruppel, 2015). As perceived partner responsiveness is positively related to self-disclosure (Laurenceau et al., 1998, 2005; Manne et al., 2004) and likely to communication competence, associations between synchronicity, audio-based nonverbal cues, nonaudio-based nonverbal cues, and perceived partner responsiveness may also vary by relationship strength between partners during mediated capitalization.
Given that there is no previous literature that examines these specific relationships, the current study asks the following research question: RQ2: Do the perceived features of ICTs, including (a) synchronicity, (b) audio-based nonverbal cues, and (c) nonaudio-based nonverbal cues, relate to perceived partner responsiveness differently by relationship closeness with one's partner? 2.7. Memorability and affective well-being Enhanced memorability provides an additional mechanism for the benefits of capitalization (Gable & Reis, 2010). Capitalization entails rehearsing, reliving, and elaborating on a positive event, which increases the salience and accessibility of the event in memory. Langston (1994) first proposed that the memorability of an event may be the intervening mechanism underlying capitalization's effects on positive affect. In testing this proposition, Gable et al. (2004) found that participants' probability of recalling positive events was positively related to the number of people they told about the event, which suggests that sharing the event with many others may have increased its memorability. Moreover, they found that the more people participants told about the positive event, the greater the increase in positive affect and satisfaction with life (Gable et al., 2004). More responses received on Facebook were also associated with increased memorability of the event for the poster and higher levels of happiness and self-esteem (Zell & Moeller, 2018). Therefore, the current study proposes the following hypothesis: H3: Increased memorability of a capitalizing event is positively related to affective well-being. 2.7.1. Perceived persistence as a determinant of memorability In contrast to unmediated conversations, which are more ephemeral, persistence as a feature of ICTs refers to the relative permanence of the communication (Fox & McEwan, 2017). It is similar to recordability, which refers to the extent to which the interaction is automatically documented (Hancock et al., 2004), and reviewability, which describes how communication remains as an artifact that can be reviewed by either of the partners or a third party afterward (H. H. Clark & Brennan, 1991). The replicable nature of digital materials also enhances the persistence of communication, as anyone who views the content can easily capture, save, copy, and redistribute the materials so that they remain accessible even if the owner removes the content. Persistence has been proposed to be a critical feature that shapes organizational communication processes and the formation of a networked public (boyd, 2011; Treem & Leonardi, 2012). In the context of capitalization, using ICTs that preserve the entirety of past communication can help individuals remember positive events. Individuals' ability to retrieve messages describing an event and their partners' responses to those messages enhances their ability to recall the events. Therefore, the current study proposes the following hypothesis: H4: During mediated capitalization, the use of ICTs with higher levels of persistence is positively related to an individual's memorability of the capitalizing event. The conceptual diagram of the current study is shown below in Figure 1.
Figure 1: Conceptual diagram of the study
CHAPTER THREE
SCALE DEVELOPMENT
3.1. Chapter preview Chapter 3 describes study one, which involved developing scales that measure individuals' perceptions of the relevant features of ICTs.
The first study was conducted in three phases: 1) item development, which involved using deductive methods to identify items from similar existing scales and inviting expert judges to evaluate each item on its relevance, representativeness, and technical quality for each domain measured; 2) scale development, which involved four rounds of cognitive interviews and administering the survey to gather data for exploratory factor analysis; and 3) scale evaluation, which overlapped with the second study and involved confirming the factor structures using data from a different sample for confirmatory factor analysis. 3.2. Item generation A substantive literature review was conducted before constructing the items related to perceived features of communication technologies. Scholars have only recently started to understand and measure features or affordances that span different communication channels (Fox & McEwan, 2017; Rains, 2019). After identifying the lack of existing scales that simultaneously measure perceptions of multiple affordances associated with communication channels, Fox and McEwan (2017) reviewed the literature on various channels, media, technological features, attributes, and affordances and developed the perceived social affordances of communication channels scale. As the only scale designed to measure perceptions across mediated technologies, it includes three subscales (bandwidth, persistence, and synchronicity) that are closely related to the features outlined by this study. Bandwidth refers to the breadth of social cues potentially transmitted in a channel, including text, audio, nonverbal cues, photos, or graphical icons such as emojis; this measure encompasses both perceived audio-based nonverbal cues and perceived nonaudio-based nonverbal cues. Persistence refers to the relative permanence of the communication. Synchronicity refers to the capability for providing instant feedback without lag between message transmission, receipt, and subsequent response (Fox & McEwan, 2017). There is one critical difference between the current study and Fox and McEwan (2017) in the measurement of nonverbal cues. Fox and McEwan (2017) did not separate audio-based nonverbal cues from nonaudio-based nonverbal cues. Many previous studies have aggregated all types of cues when discussing the differences between communication technologies and face-to-face communication and their subsequent impacts on relationship formation and maintenance (Ruppel, 2015; Xu & Liao, 2020), which reflects how modern mediated technologies have converged multiple cues (e.g., text, voice, video) into one platform. However, such a design also obscured the possibly different functions of individual cues regarding their roles in supportive communication. The literature on empathic accuracy and nonverbal immediacy suggests that audio-based and nonaudio-based nonverbal cues could facilitate the perception of social support through different theoretical mechanisms, which necessitates measuring an individual's perceptions of both types of cues separately. Considering that there is previous work measuring the constructs of interest, i.e., perceived audio-based and nonaudio-based nonverbal cues, synchronicity, and persistence, the study used a deductive method, through literature review and assessment of existing scales, to generate an initial pool of items for further scale testing (Boateng et al., 2018).
Seventeen items related to perceived audio-based nonverbal cues and 27 items related to nonaudio-based nonverbal cues were identified and adapted for this study based on the literature on media richness (Daft & Lengel, 1986) and the coding of behaviors related to nonverbal immediacy and conversational involvement (Andersen, 1989; Coker & Burgoon, 1987; Walther et al., 2005). Twenty-one items measuring perceived synchronicity and seven items measuring perceived persistence were also identified (Fox & McEwan, 2017; Kashian & Walther, 2020; Park & Sundar, 2015; Park & Lee, 2019). In generating the initial item pool, issues regarding (a) wording clarity, (b) wording redundancy, (c) positively and negatively worded items, and (d) choice of response formats were carefully considered. A good item should be unambiguous to the degree that all interviewees understand its meaning in the same manner; questions that left interviewees confused about their intended meaning were eliminated. A certain degree of wording redundancy was incorporated to ensure reliability in measurement, without introducing purely trivial wording or grammar differences. All items measuring perceived features of ICTs are positively worded to reflect a high level of the construct, as it has been shown that positively worded items tend to load differently from negatively worded items (Herche & Engelland, 1996), which compromises the performance of the scale (DeVellis, 2012). Considering the advantage of multichotomous scales over dichotomous scales, a 6-point Likert-type response format without a middle point was used (Netemeyer et al., 2003). 3.3. Establishing content validity IRB approval was obtained before all procedures involving human subjects began. To determine the suitability and appropriateness of the initial pool for content validity, the items were evaluated by five expert judges holding a Ph.D. in media and communication-related fields with prior experience researching mediated communication. They were given definitions of the constructs that the items were intended to measure and asked to rate the extent to which each item adequately represented those constructs. The expert judges rated each item on a 4-point scale ranging from (1) "Not Relevant" to (4) "Highly Relevant," and the content validity index measuring proportional agreement was calculated. In addition, qualitative feedback was requested on the clarity and readability of each item, and improvements in wording and formatting were made based on this feedback. Items with an index of less than 0.83 were removed (Lynn, 1986). After eliminating items with low agreement among expert judges, 13 items for measuring perceived nonaudio-based nonverbal cues, nine for perceived audio-based nonverbal cues, 11 for perceived synchronicity, and seven for perceived persistence were retained.
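To make the item-retention rule above concrete, the sketch below computes an item-level content validity index (I-CVI) as the proportion of expert judges rating an item 3 ("Relevant") or 4 ("Highly Relevant") and flags items below the 0.83 cutoff (Lynn, 1986). The judge ratings and item labels shown are hypothetical placeholders rather than the study's data; this is a minimal illustration using only standard Python, not the procedure's actual implementation.

# Minimal sketch of the item-level content validity index (I-CVI) screen.
# Ratings are hypothetical: five judges rate each item from 1 (Not Relevant) to 4 (Highly Relevant).
ratings = {
    "synchronicity_01": [4, 4, 3, 4, 4],
    "synchronicity_02": [3, 2, 4, 3, 2],
    "persistence_01":   [4, 4, 4, 3, 4],
}

CUTOFF = 0.83  # items below this proportional-agreement threshold are removed (Lynn, 1986)

for item, judge_ratings in ratings.items():
    # I-CVI = proportion of judges rating the item as relevant (3 or 4)
    i_cvi = sum(r >= 3 for r in judge_ratings) / len(judge_ratings)
    decision = "retain" if i_cvi >= CUTOFF else "remove"
    print(f"{item}: I-CVI = {i_cvi:.2f} -> {decision}")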
Content validity was assessed further using cognitive interviews with 19 participants of various ages, levels of education, and races, including both men and women. The interviews took place virtually in April and May 2021. Each interview lasted about 60 to 90 minutes (demographic information for the interview participants is provided in the appendix). Interviewees were compensated with either 1 SONA credit or $20. During the interview, participants went through the drafted questionnaire with the researcher as if they were taking the survey. Participants answered the questions related to the perceived features of ICTs based on the first technology they reported using to share positive events, and the technologies they had in mind while answering the questionnaire varied. A combination of the think-aloud approach and the verbal probing approach was used (Beatty & Willis, 2007; Ryan et al., 2012). For the think-aloud method, respondents were instructed to verbalize the thought process through which they arrived at their answers (e.g., "tell me what you are thinking … how are you coming up with your answer to this?"). The open format allowed respondents to answer in ways they chose, independent of the interviewer's influence. In addition, concurrent verbal probing was used, in which the interviewer posed questions about the participants' perceptions of and answers to specific questions (e.g., "Can you walk me through the steps of how you came to that answer?" "How do you remember the perception on your interaction partners' nonverbal cues?"). The cognitive interview data revealed problems participants had with understanding questions and with retrieving and integrating the information needed to answer them, especially questions that may lead to "confusion, contradictions, ambiguity, and reluctance" in the respondent. The interviews were conducted iteratively, and after each round the results were used to modify, clarify, and augment questions to better fit the study objectives and to remove difficult or problematic questions (Boateng et al., 2018). As there is no consensus on the number of interviews needed to reach saturation, the general guiding principle was to conduct cognitive interviews in "rounds" with a small sample of relevant participants (5-15 participants) (Beatty & Willis, 2007; Boateng et al., 2018). The questionnaire was revised after each round of interviews for the next round, until the researchers decided to stop the process because additional rounds yielded relatively few new insights. In total, four rounds of cognitive interviews with four to five participants each were conducted. The criteria used in modifying questions were consistent with guidelines for writing good scale items, which include (1) relevance to the survey purpose and construct (i.e., items address the construct of interest, are logically related to the survey purpose, exhibit no crossover to related constructs, and are concrete, precise, and objective), (2) catering to the target audience (i.e., items are easily understood by and answerable for the target audience, the information needed to answer the items is easily retrievable, and items represent the diversity of the participants in their perceptions regarding their meaning and relevance), (3) appropriate language (i.e., items use words that are understood by participants and do not have multiple meanings, items use current language, and there are no abstractions), and (4) reasonable item structure (i.e., items are brief, are complete statements, present a single idea per item, use positive wording, and contain no unnecessary repetition in phrasing) (Johnson & Morgan, 2016). The most common problems revealed through the cognitive interviews involved unclear instructions and difficult-to-understand wording, which were corrected based on the results of the interviews. Additional instruction was also added before participants were presented with questions related to perceived nonverbal cues.
Specifically, participants were asked to think back to the scenario where they have shared this positive event (with piped text describing the specific event), and they were asked to “answer the following questions about whether you are able to have a mental image of your communication partners' nonverbal behaviors during communication.” Below, I outlined the specific changes of the scale items after each round based on the feedback from the participants. After the first round, one item for measuring nonaudio-based nonverbal cues, “If my communication partner(s) has an open posture (i.e., arms are not crossed, and legs are not crossed or pulled in toward the body” was changed to “If my communication partner(s) has an open posture (i.e., arms and legs are not crossed).” Five items related to the global assessment of one’s communication partner’s eye gaze, posture, gestures, facial expressions, and body movement/body language were added. Several items for measuring audio- based nonverbal cues were also changed, including “If my communication partner(s)'s voice involves variation in volume” was changed to “If my communication partner(s)'s voice is going up or down,” “If my communication partner(s)'s voice involves variation in pace (i.e., speed of talk and whether or not it is fast and hurried)” was changed to “If my communication partner(s)'s speech is fast or slow,” “If my communication partner(s)'s voice involves the rise and fall in pitch (i.e., amount of intonation or variation)” was changed to “If my communication partner(s)'s 46 voice rises and falls,” “If my communication partner(s)'s positive affect is communicated in the vocal tone” was changed to “If my communication partner(s)'s voice sounds happy or sad,” “The vocal expressiveness in my communication partner(s)'s voice” was changed to “If my communication partner(s)'s voice is expressive with emotions,” and the item “My communication partner(s) through many voice cues, such as volume variations, tone of voice, pace of speech, etc.” was dropped. One item for measuring synchronicity, “Expect to get my message right away,” was dropped. Another item for measuring synchronicity was changed from “Receive and promptly respond to my communication partner(s)'s messages” to “Promptly receive and respond to my communication partner(s)'s messages.” After the second round, one item for measuring audio-based nonverbal cues, “If my communication partner(s) use fillers such as um, and, uh” was dropped. Another item for measuring audio-based nonverbal cues was changed from “If my communication partner(s)'s voice is monotonous or dull” to “If my communication partner(s)'s voice is flat or dull.” One item for measuring persistence was changed from “Save and archive the conversation” to “Archive the conversation.” After the third round, only minimal changes were made, including the item “Archive the conversation” was changed to “Store the conversation.” The most crucial difference in this round was the order of the questions that were presented in the survey. Questions related to nonaudio-based and audio- based nonverbal cues were placed after synchronicity and persistence since it was evident that participants exerted more cognitive effort while answering questions related to nonverbal cues. Minor changes in grammar, word choice and answer options were also made throughout the revision process. 47 3.4. Sample and procedure The improved questionnaire was distributed among adult participants using Mturk. 
Multiple experimental and survey studies using Mturk samples have consistently replicated findings from prior research, supporting it as a valuable platform for social science research (Casler et al., 2013; Gosling et al., 2004; Holden et al., 2013; Mason & Suri, 2012). The inclusion criteria required participants to be at least 18 years old, reside in a country where English is one of the dominant languages, have used technology to communicate with others in the past week, and indicate that they were "somewhat likely" or "extremely likely" to respond to hypothetical positive events (e.g., getting a promotion at work, winning a small lottery, celebrating anniversaries/birthdays, accomplishing something they did not think they could, receiving compliments, receiving gifts, enjoying a great meal, spending quality time with family/friends/animals/nature) by sharing the event with someone else (Gentzler et al., 2016). After the screening questions, an attention check that required close attention to the instructions and a simple mental calculation was placed at the front of the survey to screen out participants who were not paying close attention. After the attention check question, participants were prompted to recall a scenario in which they shared a positive event using a technology randomly assigned to them from a list developed after consulting relevant literature and considering expert judges' feedback on this survey question: group text message, group voice call, group video call, group email, posting on social media, one-to-one text message, one-to-one voice call, one-to-one video call, and one-to-one email. If participants indicated that they did not share using the randomly assigned technology, they were asked to write a scenario in which they shared a positive event using the technology they used most frequently. Participants were instructed to "briefly describe what was the event, who did you share it with on this technology, and what was the most frequently used technology." An example write-up was also provided with the instructions. The sharing scenario provided by the participant was used to build the follow-up survey questions, as the questions that followed asked participants to provide more details about that specific sharing scenario, such as the specific feature/device used, the number of people, and the nature of the relationship with the person(s) with whom they shared the positive event. Participants rated their reported technology on nonaudio-based nonverbal cues, audio-based nonverbal cues, synchronicity, and persistence. Text provided by participants was also used as piped text to remind them of the specific experience of technology use while they answered questions related to the perceived features of communication technologies during that sharing scenario. Demographic information was also collected. The survey took about 20 minutes to complete. A total of 753 participants consented to the survey, and 491 participants finished the complete survey. Among them, 476 participants passed all attention filters (n = 4). However, considering that the survey was built upon participants' authentic experiences using technology to share positive events with their social networks, responses from participants who failed to describe an authentic, personal experience of using technology to share a positive event were eliminated.
A compensation of $2.50 was provided to those who provided usable data involving an accurate and careful description of their personal experience using technology. Participants were also asked questions regarding the attention they gave and the level of effort they made in answering the survey, and whether their data should be used in the final analysis. Only data from participants who reported that their data should be used were included. A total of 227 participant responses remained in the data. The average age of the sample was 36 (SD = 9.55). Sixty-three percent of the sample was male, and 37 percent were female. Sixty-five percent of the sample was white, nine percent were black or African American, and 20 percent were Asian. Participants resided in seven countries, including 80 percent in the U.S., 15 percent in India, and two percent in Brazil. In terms of education, 63% of participants reported earning a bachelor's degree, 21% had completed a master's degree, 7% had some college but no degree, and 5% had an associate degree. 3.5. Measures Perceived nonaudio-based nonverbal cues were measured by asking participants to indicate their agreement with statements that detailed their communication partners' nonaudio-based nonverbal behaviors while communicating about the specific positive event they described earlier in the survey. The specific instruction was "When communicating about this event with [number of] person/people, including my [the nature of the relationship with whom participants shared] using [feature used] on [app/program used], it allowed me to picture, or imagine…". Sample statements included "If my communication partner(s) is using various facial expressions," "If my communication partner(s) is smiling," and "If my communication partner(s) is gesturing using their hands or fingers." Participants rated each item on a 6-point Likert scale without a middle/neutral point (1 = "Strongly disagree," 6 = "Strongly agree"). Perceived audio-based nonverbal cues were measured using a similar instruction, except that the wording "picture, or imagine" was replaced with "hear or seemingly hear" to reflect the nature of audio cues. Sample statements included "If my communication partner(s)'s voice volume is going up or down," "If my communication partner(s)'s voice sounds happy or sad," "If my communication partner(s)'s voice is flat or dull," and "If my communication partner(s)'s voice suggests he/she is interested in the conversation." Participants rated each item on a 6-point Likert scale without a middle/neutral point (1 = "Strongly disagree," 6 = "Strongly agree"). Perceived synchronicity was measured using a similar instruction, except that the wording "it allowed me to picture or imagine" was replaced with "it allowed me to." Sample statements included "Engage in real-time back-and-forth interaction," "Quickly send messages or comments back and forth," and "Reply as soon as I receive a message or a comment from my communication partner(s)." Participants rated each item on a 6-point Likert scale without a middle/neutral point (1 = "Strongly disagree," 6 = "Strongly agree"). Perceived persistence was measured using the same instruction as perceived synchronicity.
Sample statements included "Keep a record of communication that I can go back and look at," "Keep a record of communication that can last long after the initial communication," and "Have my conversations with my communication partner(s) stay available after the conversation ends." Participants rated each item on a 6-point Likert scale without a middle/neutral point (1 = "Strongly disagree," 6 = "Strongly agree"). Demographic variables. Standard demographic variables, including age, sex, education, and race, were measured.
3.6. Data analysis Exploratory factor analysis (EFA) with principal factors extraction was conducted to determine the factor structure of the perceived features of different communication channels. Decisions on the number of factors to extract were based on eigenvalues, the scree plot, item loadings, the variance explained by each extracted factor, and a priori criteria grounded in theory (Netemeyer et al., 2003). Promax rotation was used to make factors more interpretable; however, because most of the factors were unidimensional, only the items of nonaudio-based nonverbal cues, which yielded a two-factor solution, were rotated. Decisions on whether to retain an item were based on item-level statistics (e.g., inter-item correlations, item-total correlations, coefficient alpha) in conjunction with the EFA results. Items that showed cross-loadings were deleted, and only items with a factor loading of .40 or above were retained for further analysis (Nunnally & Bernstein, 1994).
3.7. Results Nineteen percent of the participants reported using one-to-one text messages for sharing positive events. Sixteen percent used one-to-one video calls, 13 percent used one-to-one voice calls, 11 percent used group text messages, 11 percent used posting on social media, 10 percent used group video calls, and eight percent used group voice calls. Most participants shared using their cellphone, followed by 23 percent who shared using a laptop, 10 percent who shared using a desktop, and 6 percent who used a tablet. Thirty percent of the participants shared with only one individual, 60 percent shared with 2-20 individuals, and 95 percent shared with fewer than 50 individuals. The descriptive statistics of all the items and the correlations among the items within each scale are presented in Tables 1-5.
Table 1: Means and standard deviations of all scale items of the EFA study Mean S.D.
Minimum Maximum audio1 4.43 1.29 1 6 audio2 4.39 1.44 1 6 audio3 4.41 1.46 1 6 audio4 4.29 1.47 1 6 audio5 4.59 1.29 1 6 audio6 4.06 1.42 1 6 audio7 4.37 1.25 1 6 audio8 4.35 1.38 1 6 audio9 4.62 1.22 1 6 synchronicity1 4.95 0.82 1 6 synchronicity2 5.15 0.91 2 6 synchronicity3 5.12 0.96 1 6 52 Table 1 (cont’d) synchronicity4 5.07 0.89 1 6 synchronicity5 5.05 0.87 1 6 synchronicity6 5.06 0.88 1 6 synchronicity7 5.15 0.92 2 6 synchronicity8 5.03 0.92 2 6 synchronicity9 4.89 1.03 1 6 synchronicity10 4.97 0.94 1 6 persistence1 4.51 1.34 1 6 persistence2 4.52 1.42 1 6 persistence3 4.45 1.36 1 6 persistence4 4.45 1.43 1 6 persistence5 4.41 1.30 1 6 persistence6 4.59 1.38 1 6 persistence7 4.45 1.42 1 6 nonaudio1 4.78 1.14 1 6 nonaudio2 4.49 1.40 1 6 nonaudio3 4.07 1.55 1 6 nonaudio4 3.48 1.70 1 6 nonaudio5 3.60 1.70 1 6 nonaudio6 3.52 1.71 1 6 nonaudio7 3.58 1.70 1 6 nonaudio8 3.72 1.63 1 6 nonaudio9 3.55 1.76 1 6 nonaudio10 4.14 1.61 1 6 nonaudio11 3.90 1.59 1 6 nonaudio12 4.55 1.23 1 6 nonaudio13 4.55 1.25 1 6 nonaudio14 4.07 1.61 1 6 nonaudio15 4.00 1.56 1 6 nonaudio16 4.04 1.54 1 6 nonaudio17 4.34 1.47 1 6 nonaudio18 4.17 1.58 1 6 *N=227; Audio is short for perceived audio-based nonverbal cues; Synchronicity is short for perceived synchronicity; Persistence is short for perceived persistence; Nonaudio is short for perceived nonaudio-based nonverbal cues. 53 Table 2: Zero-order correlations of items of the perceived audio-based nonverbal cues scale 1 2 3 4 5 6 7 8 9 audio1 1.00 audio2 .67*** 1.00 audio3 .69*** .75*** 1.00 audio4 .72*** .75*** .75*** 1.00 audio5 .72*** .60*** .63*** .62*** 1.00 audio6 .60*** .69*** .72*** .69*** .57*** 1.00 audio7 .74*** .67*** .64*** .64*** .78*** .56*** 1.00 audio8 .69*** .61*** .66*** .66*** .67*** .62*** .70*** 1.00 audio9 .70*** .65*** .66*** .61*** .77*** .55*** .75*** .72*** 1.00 Note: N = 227; Audio is short for perceived audio-based nonverbal cues; * p < 0.05 ** p < 0.01 *** p < 0.001. Table 3: Zero-order correlations of items of the perceived synchronicity scale 1 2 3 4 5 6 7 8 9 10 synchronicity 1 1.00 synchronicity .43** 2 * 1.00 synchronicity .46** .47** 3 * * 1.00 synchronicity .36** .42** .38** 4 * * * 1.00 synchronicity .38** .37** .37** .55** 5 * * * * 1.00 synchronicity .59** .53** .49** .38** .46** 6 * * * * * 1.00 synchronicity .49** .54** .37** .35** .34** .46** 7 * * * * * * 1.00 synchronicity .42** .52** .61** .47** .40** .57** .45** 8 * * * * * * * 1.00 synchronicity .40** .51** .40** .39** .37** .41** .37** .48** 9 * * * * * * * * 1.00 synchronicity .42** .49** .36** .37** .48** .52** .42** .53** .58** 1.0 10 * * * * * * * * * 0 Note: N = 227; Synchronicity is short for perceived synchronicity; * p < 0.05 ** p < 0.01 *** p < 0.001. 54 Table 4: Zero-order correlations of items of the perceived persistence scale 1 2 3 4 5 6 7 persistence1 1.00 persistence2 .78*** 1.00 persistence3 .76*** .79*** 1.00 persistence4 .75*** .72*** .79*** 1.00 persistence5 .70*** .65*** .70*** .74*** 1.00 persistence6 .76*** .73*** .74*** .76*** .68*** 1.00 persistence7 .75*** .71*** .79*** .77*** .70*** .75*** 1.00 Mean 4.51 4.52 4.44 4.45 4.41 4.59 4.44 SD 1.35 1.42 1.37 1.43 1.31 1.38 1.43 Note: N = 227; Persistence1is short for perceived persistence; * p < 0.05 ** p < 0.01 *** p < 0.001. 
Table 5: Zero-order correlations of items of the perceived nonaudio-based nonverbal cues scale (lower triangle; columns follow the same item order, 1-18)
nonaudio1 1.00
nonaudio2 .56*** 1.00
nonaudio3 .47*** .63*** 1.00
nonaudio4 .22*** .33*** .47*** 1.00
nonaudio5 .31*** .51*** .61*** .71*** 1.00
nonaudio6 .28*** .48*** .59*** .61*** .75*** 1.00
nonaudio7 .30*** .47*** .54*** .57*** .66*** .76*** 1.00
nonaudio8 .31*** .49*** .59*** .42*** .53*** .61*** .47*** 1.00
nonaudio9 .21** .36*** .50*** .52*** .52*** .58*** .60*** .56*** 1.00
nonaudio10 .49*** .63*** .58*** .37*** .46*** .51*** .47*** .54*** .52*** 1.00
nonaudio11 .38*** .52*** .59*** .41*** .45*** .47*** .44*** .54*** .50*** .63*** 1.00
nonaudio12 .58*** .46*** .38*** .30*** .23*** .27*** .29*** .34*** .25*** .48*** .40*** 1.00
nonaudio13 .50*** .47*** .36*** .27*** .25*** .23*** .24*** .40*** .28*** .49*** .42*** .57*** 1.00
nonaudio14 .47*** .56*** .53*** .37*** .46*** .49*** .44*** .59*** .47*** .70*** .61*** .49*** .50*** 1.00
nonaudio15 .45*** .59*** .61*** .37*** .49*** .50*** .50*** .58*** .50*** .67*** .63*** .50*** .50*** .80*** 1.00
nonaudio16 .48*** .62*** .64*** .38*** .52*** .56*** .48*** .59*** .45*** .68*** .62*** .52*** .55*** .74*** .79*** 1.00
nonaudio17 .57*** .67*** .59*** .37*** .51*** .54*** .53*** .50*** .49*** .71*** .65*** .55*** .59*** .69*** .73*** .78*** 1.00
nonaudio18 .48*** .62*** .66*** .39*** .51*** .53*** .53*** .55*** .50*** .65*** .66*** .49*** .48*** .67*** .76*** .74*** .79*** 1.00
Note: N = 227; Nonaudio is short for perceived nonaudio-based nonverbal cues; * p < 0.05 ** p < 0.01 *** p < 0.001.
Based on eigenvalues and the percentage of variance accounted for by the factors, the scales of perceived audio-based nonverbal cues, synchronicity, and persistence each reached a one-factor solution, and perceived nonaudio-based nonverbal cues reached a two-factor solution. For perceived audio-based nonverbal cues, the single factor had an eigenvalue of 6.08, which explained 98% of the variance. It consisted of 9 items with loadings ranging from .76 to .84. The Cronbach's alpha coefficient was .95. For perceived synchronicity, the single factor had an eigenvalue of 4.51, which explained 97% of the variance. It consisted of 10 items with loadings ranging from .60 to .75. The Cronbach's alpha coefficient was .89. For perceived persistence, the single factor had an eigenvalue of 5.15, which explained 103% of the variance1. It consisted of 7 items with loadings ranging from .80 to .89. The Cronbach's alpha coefficient was .95. Lastly, for perceived nonaudio-based nonverbal cues, the two factors had eigenvalues of 9.52 and 1.50, which together explained 95% of the variance. Two items with cross-loadings were deleted. For the remaining 16 items, the factor loadings ranged from .54 to .90. Specific items and factor loadings for each scale are shown in Tables 6-9.
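As a rough illustration of the EFA procedure described in section 3.6, the sketch below uses the third-party factor_analyzer package (the software actually used for the EFA is not named here, so this is only an approximation); the data file and column names are hypothetical.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file holding the 18 nonaudio-based nonverbal cue items (nonaudio1-nonaudio18).
items = pd.read_csv("efa_study.csv").filter(like="nonaudio")

# Principal factors extraction with promax (oblique) rotation for the two-factor solution.
fa = FactorAnalyzer(n_factors=2, rotation="promax", method="principal")
fa.fit(items)

# Eigenvalues (inspected alongside a scree plot) guide how many factors to retain.
original_eigenvalues, common_factor_eigenvalues = fa.get_eigenvalues()
print(original_eigenvalues[:4].round(2))

loadings = pd.DataFrame(fa.loadings_, index=items.columns, columns=["factor1", "factor2"])

# One simple operationalization of the retention rules in the text:
# keep items loading at least .40 on one factor and drop items that cross-load.
primary = loadings.abs().max(axis=1)
secondary = loadings.abs().min(axis=1)
retained = loadings[(primary >= .40) & (secondary < .40)]
print(retained.round(2))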
Based on the results, all items of perceived audio-based nonverbal cues (9 items), synchronicity (10 items), and persistence (7 items), along with 16 items of perceived nonaudio-based nonverbal cues, were retained for the next stage of confirmatory factor analysis, which is detailed in the next chapter. 1 The proportion of variance exceeded one because principal factors extraction can produce negative eigenvalues, which means the cumulative proportion of variance explained can exceed 1 and then decline back to 1 as the negative eigenvalues are added (Rencher & Christensen, 2012).
Table 6: EFA results for the perceived audio-based nonverbal cues scale Perceived audio-based nonverbal cues Factor loadings
1. The tone, pace, and volume in my communication partner(s)' voice .84
2. If my communication partner(s)'s voice volume is going up or down .82
3. If my communication partner(s)'s speech is fast or slow .83
4. If my communication partner(s)'s voice pitch rises and falls .83
5. If my communication partner(s)'s voice sounds happy or sad .82
6. If my communication partner(s)'s voice is flat or dull .75
7. If my communication partner(s)'s voice is expressive with emotions .84
8. If my communication partner(s)'s vocal cues reflect attentiveness .81
9. If my communication partner(s)'s voice suggests he/she is interested in the conversation .83
*N = 227; EFA with principal factors extraction.
Table 7: EFA results for the perceived persistence scale Perceived persistence Factor loadings
1. Keep a record of communication that I can go back and look at .87
2. Keep a record of communication that can last long after the initial communication .84
3. Retrieve past communication in this space .88
4. Save the communication long after the interaction is finished .87
5. Find information about prior conversations with my communication partner(s) .80
6. Have my conversations with my communication partner(s) stay available after the conversation ends .85
7. Store the conversation .86
*N = 227; EFA with principal factors extraction.
Table 8: EFA results for the perceived synchronicity scale Perceived synchronicity Factor loadings
1. Give and receive timely feedback .65
2. Engage in real-time back-and-forth interaction .71
3. Engage in instant communication .65
4. Quickly send messages or comments back and forth .60
5. Promptly receive and respond to my communication partner(s)'s messages or comments .61
6. Immediately express my reactions to my communication partner(s) .74
7. Reply as soon as I receive a message or a comment from my communication partner(s) .63
8. Interact with my communication partner(s) without delay .75
9. Expect the other person(s) to respond quickly .65
10. Learn right away what my communication partner(s) thinks of my information shared .70
*N = 227; EFA with principal factors extraction.
Table 9: EFA results for the perceived nonaudio-based nonverbal cues scale Perceived nonaudio-based nonverbal cues Factor loadings
1. If my communication partner(s) is smiling .74
2. If my communication partner(s) is using various facial expressions .62
3. If my communication partner(s) is gesturing using their hands or fingers .38 .47
4. If my communication partner(s) is yawning .78
5. If my communication partner(s) is shaking his/her head side-to-side .87
6. If my communication partner(s)'s arms are folded .90
7. If my communication partner(s)'s eyes rolled .80
8. If my communication partner(s) has an open posture (i.e., arms and legs are not crossed) .31 .48
9. If my communication partner(s)'s face is turned away from me .65
10. If my communication partner(s) is looking directly at me .67
11. If my communication partner(s) is nodding his/her head .53
12. If my communication partner(s) is engaged and focused .76
13. If my communication partner(s) is at ease in the interaction .78
14. The direction of my communication partner(s)'s eye gaze .72
15. My communication partner(s)'s posture .72
16. My communication partner(s)'s gestures .74
17. My communication partner(s)'s facial expressions .79
18. My communication partner(s)'s body movements or body language .67
*N = 227; EFA with principal factors extraction and promax rotation.
CHAPTER FOUR MAIN SURVEY STUDY
4.1. Chapter preview Chapter 4 describes the second study, a cross-sectional survey that measured individuals' sharing of positive events using ICTs, their perceptions of ICT features during sharing, and individual trait variables.
4.2. Sample and procedure Before the study began, all study procedures received ethics approval from the Institutional Review Board at Michigan State University. Participants were recruited using CloudResearch (formerly TurkPrime)'s MTurk Toolkit. The Toolkit uses MTurk's global labor force but adopts a stringent set of measures to screen participants to ensure high-quality responses (Litman et al., 2017). Procedures similar to those described for the scale development study, with slight modifications detailed below, were adopted. The survey was conducted in August 2021. To be included, participants had to be at least 18 years old, use English as one of their primary languages, have used technology to communicate with others over the past week, indicate that they were "somewhat likely" or "extremely likely" to respond to hypothetical positive events, and have used technology to share a positive event during the past two weeks. Strict attention check questions were placed at the beginning of the survey to make sure participants were paying close attention to the survey instructions. After passing the screening criteria and attention check questions, participants selected the technology they first used to share their positive event from a list consisting of group text messages, group voice calls, group video calls, group emails, posting on social media, one-to-one text messages, one-to-one voice calls, one-to-one video calls, and one-to-one emails. A quota roughly based on the percentage of each technology used for sharing by participants in the scale development study was used to define how respondents were distributed across technologies, in order to increase the diversity of the technologies adopted for sharing within the sample. After identifying the technology they used to share a positive event during the past two weeks, participants were instructed to "briefly describe what/when was the event, and who did you share it with on this technology." An example write-up that read, "Last Friday, I used [piped text for the technology they selected before] to contact XXX after I got a promotion" was also provided. The time of the event, participants' recall of the event, and the perceived importance, desirability, and positivity of the event were assessed. Similar to the scale development study, the sharing scenario provided by the participant was used to build the follow-up survey questions.
Participants were asked to access their communication on the technology they used to share and to provide more details on the specific sharing scenario, including the feature/device used, the number of people, and the nature of the relationship with the person or people with whom they shared this positive event. Scenario descriptions provided by participants were also used as piped text to remind them of the specific experience of technology use while they answered questions about the perceived features of communication technologies during that sharing scenario. After rating the technology they reported on nonaudio-based nonverbal cues, audio-based nonverbal cues, synchronicity, and persistence, participants completed measures related to their well-being, life satisfaction, self-esteem, and attachment styles. Demographic information was collected at the end of the survey. The survey took about 25 minutes to complete. A total of 1,039 participants consented to the study, and 686 participants finished the complete survey. The attrition mainly resulted from participants being screened out of the study because they did not meet the participation criteria. Among the 686 participants, 667 passed more than half of the attention filters (n = 5), and their responses described an authentic and personal experience involving using technology to share. Participants who provided usable data were reimbursed $3 for completing the survey. The final sample consisted of 667 participants. The average age of the sample was 38 (SD = 12.52). Sixty-two percent of the sample was female, and 37 percent was male. Seventy-two percent of the sample was white, 12 percent was Black or African American, and eight percent was Asian. The majority (n = 665) of the participants came from the U.S., and the rest resided in Uruguay. Regarding education, 34% reported earning a bachelor's degree, 18% had completed a master's degree, 21% had some college but no degree, 13% had an associate degree, and 9% had a high school diploma or equivalent, including a GED.
4.3. Measures Sharing through technology. After providing a brief description of the positive event they had shared over the past two weeks using technology, participants provided more nuanced details regarding the feature they used for sharing, the number of people, and the nature of the relationship with whom they shared this positive event. Participants were asked to write down the feature (e.g., text, voice/video calls, status update), the app/program/software (e.g., phone/text app, FaceTime, Snapchat), the electronic device (e.g., desktop, laptop, tablet, cellphone), the specific number of people, and the nature of the relationship with the person/people they shared the event with. After typing their answers to these questions, participants were also given a series of follow-up questions to categorize the features, the number of people, and the nature of the relationships. Memorability of the event was measured using an adapted scale based on the memorability subscale of the Experience Outcome Measure developed by Zatori et al. (2018). Sample items included "I have wonderful memories of this event," "I remember many positive things about this event," "I will not forget my experience of this event," and "The details of this event are easy to recall." Participants rated each item on a 6-point Likert scale without a middle/neutral point (1 = "Strongly disagree," 6 = "Strongly agree"). The Cronbach's alpha for the scale was 0.86.
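Cronbach's alpha is reported for each multi-item scale in this section. As a minimal sketch (assuming the items are stored as columns of a pandas DataFrame and that any reverse-coded items have already been recoded), alpha can be computed from the item variances and the variance of the summed scale:

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative call with hypothetical column names for the memorability items.
# survey = pd.read_csv("main_survey.csv")
# print(round(cronbach_alpha(survey[["memor1", "memor2", "memor3", "memor4"]]), 2))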
Perceived significance of the event was measured by asking participants to indicate their attitude on a 7-point semantic differential scale regarding the importance, desirability, and positivity of the event (1 = not important/desirable/positive at all, 7 = extremely important/desirable/positive). The items on perceived importance and desirability were adapted from Tausig (1982), and the item on positivity was adapted from Gable et al. (2004). The Cronbach's alpha for the scale was 0.72. Relationship closeness was assessed in two separate ways for individuals who shared with one person and those who shared with multiple people. The same questions were administered in both scenarios with a slight change in wording. First, participants rated their closeness with the communication partner using the one-item Inclusion of Other in the Self scale (Aron et al., 1992). Second, participants indicated their level of agreement with a two-item scale including "I feel very close to the person I have shared the event with" and "The person I have shared the event with and I are very close to each other" on a 7-point scale with the anchors (1) strongly disagree and (7) strongly agree (Rains & Brunner, 2018). If they had shared the event with a group of individuals, the level of closeness was rated at the group level. The Cronbach's α for closeness with an individual partner was 0.96, and the Cronbach's α for closeness with a group was 0.95. Perceived features of the ICT, including perceived nonaudio-based nonverbal cues (α = .93), audio-based nonverbal cues (α = .97), synchronicity (α = .93), and persistence (α = .97), were measured by the scales developed in Study 1 (detailed descriptions of each scale were provided in Chapter 3). The detailed CFA and related results are reported in the Results section. Perceived partner responsiveness was measured by the 12-item Perceived Responses to Capitalization Attempts scale, which captures responses along two dimensions (Active-Passive, Constructive-Destructive) on a 6-point scale (Gable et al., 2004). Sample items included "My communication partner reacted to my good fortune enthusiastically," "My communication partner tried not to make a big deal out of it, but was happy for me," "My communication partner reminded me that most good things have their bad aspects as well," and "My communication partner didn't pay much attention to me." When the audience was more than one person, the word "partner" was replaced with "partners." The composite score was computed by subtracting the Active-Destructive, Passive-Destructive, and Passive-Constructive subscale scores from the Active-Constructive subscale score. The variable had a negative mean of -1.69, with a minimum value of -12 and a maximum of 3. The maximum value represents individuals who strongly agreed with all three statements on the active-constructive dimension and strongly disagreed with all statements on the other dimensions. The perceived valence of the responses of participants' communication partners was measured by adapting the enacted social support scale, which assesses the quality of supportive messages from one's support providers (Goldsmith et al., 2000). One item that measured the global level of perceived positivity in their communication partners' responses was added to the scale to reflect the nature of the study.
Participants rated their communication partner's reactions to their positive event using 11 pairs of opposing adjectives (positive-negative, helpful-unhelpful, supportive-unsupportive, sensitive-insensitive, generous-selfish, reassuring-upsetting, comforting-distressing, encouraging-discouraging, compassionate-heartless, considerate-inconsiderate, understanding-misunderstanding) on a 5-point semantic differential scale. The Cronbach's alpha for the scale was 0.91. Affective well-being was measured in two ways. State affective well-being was measured by asking participants about their affective states after receiving their communication partners' responses regarding this positive event. Participants rated the extent to which they experienced certain positive (happy, joyful, pleased, enjoyment/fun, delighted) and negative feelings (depressed/blue, unhappy, frustrated, angry/hostile, worried/anxious) using a 7-point scale that ranged from 0 to 6, where 0 means they did not experience this feeling at all and 6 means the feeling was very strong (Diener & Emmons, 1984; Dolan et al., 2017). Two variables, state negative affect and state positive affect, were constructed based on this set of questions. The Cronbach's α for state negative affect was 0.90, and the Cronbach's α for state positive affect was 0.93. Global affective well-being for the past two weeks was measured using the Positive and Negative Affect Schedule (PANAS) (Watson et al., 1988). The PANAS consists of 10 positive (e.g., interested, excited, enthusiastic, proud) and 10 negative emotion adjectives (e.g., irritable, upset, ashamed, nervous). The PANAS was adapted to measure participants' affect over the past two weeks, as participants were asked to indicate "the extent you have felt this way during the past two weeks." Each item was rated on a 5-point scale (1 = Very slightly or not at all, 2 = A little, 3 = Moderately, 4 = Quite a bit, 5 = Extremely) to measure the extent to which the affect had been experienced during the past two weeks. Two variables, global negative affect and global positive affect, were constructed based on this scale. The Cronbach's α for global negative affect was 0.88, and the Cronbach's α for global positive affect was 0.91. Previous literature also suggested that self-esteem, life satisfaction, and attachment styles are among the most salient predictors of individuals' well-being (Diener, 1994; Gable et al., 2010). Therefore, these variables were also measured for use as covariates in the analysis. Life satisfaction was measured using the five-item Satisfaction with Life Scale (Diener et al., 1985). Participants responded to each item using a 6-point scale (1 = strongly disagree, 6 = strongly agree). Example items included "In most ways, my life is close to my ideal," "I am satisfied with my life," and "The conditions of my life are excellent." Cronbach's α was 0.92. Self-esteem was measured with the 4-item shortened version of the Rosenberg Self-Esteem Scale (Rosenberg, 2015). The psychometric properties of the shortened scale were deemed acceptable by a previous study (Tambs & Røysamb, 2014).
The scale measures global self-worth, and sample items include "I take a positive attitude toward myself," "I certainly feel useless at times," "I feel I do not have much to be proud of," and "I feel that I am a person of worth, at least on an equal plane with others." Participants indicated their level of agreement with both positive and negative feelings about the self on a 6-point Likert scale. Two items were reverse coded so that a higher value stands for a higher level of self-esteem. Scores on the four items were averaged to create an indicator of self-esteem. Cronbach's α was 0.88. Attachment styles were measured using the Experiences in Close Relationships Scale (ECR)-Short Form (Wei et al., 2007), a scale widely used for measuring attachment styles. It consists of two subscales, anxiety (α = .84) and avoidance (α = .86). Sample items for the avoidant attachment subscale included "I want to get close to people, but I keep pulling back," "I am nervous when people get too close to me," and "I turn to others for many things, including comfort and reassurance." Sample items for the anxious attachment subscale included "I worry that people won't care about me as much as I care about them," "I do not often worry about being abandoned," and "I find that people don't want to get as close as I would like." Participants indicated their level of agreement with these statements on a 6-point Likert scale (1 = strongly disagree, 6 = strongly agree). Some items were reverse coded so that a higher value stands for a higher level of avoidant/anxious attachment. Demographic variables, including sex, education, age, race, and relationship status, were also measured.
4.4. Data analysis Prior to all analyses, the data were evaluated for univariate and multivariate normality by inspecting each variable's distribution and skewness and kurtosis scores. Normality assumptions were violated for the relevant variables. Two methods were adopted to deal with the nonnormal distributions. First, during the CFA, maximum likelihood parameter estimates with standard errors and a mean-adjusted chi-square test statistic robust to non-normality were adopted (Curran et al., 1996). Second, to correct for nonnormality during the SEM analyses, the maximum likelihood estimator with bootstrap standard errors and bias-corrected bootstrap confidence intervals based on 1,000 bootstrap samples was used to obtain the model parameter estimates (Nevitt & Hancock, 2001). When estimating indirect effects, bias-corrected bootstrap confidence intervals were obtained for the indirect effects. All analyses were conducted using Mplus 7. The final sample size was 667, and there were no missing data in the dataset other than two percent missing data on the variable of perceived event significance. These missing data resulted from the questions on event significance being added to the survey later; they were therefore assumed to be missing at random (Little & Rubin, 2019). Full information maximum likelihood estimation was used to estimate the models (Enders & Bandalos, 2001; Mplus, 2017). As a first step, this sample was used to test the factor structure identified in the EFA in Chapter 3. The purpose was to establish whether the hypothesized factor models (three unidimensional models and one two-factor model measuring the perceived features of ICTs) fit the data.
The number of factors, the factor structure (i.e., which items load on which factors), and whether the factors are correlated were specified a priori (Netemeyer et al., 2003). The CFA was undertaken within the framework of structural equation modeling (SEM) and assessed with common goodness-of-fit indices, including the chi-square test, the root mean square error of approximation (RMSEA), the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the standardized root mean square residual (SRMR) (Boateng et al., 2018; Hu & Bentler, 1999; Kline, 2015). If the model fit well, convergent and construct validity would be established using the statistical significance and magnitude of each item's loading (with a rule of thumb that standardized item-to-factor loadings average .70), as well as the reliability of each of the four scales (α > .80) (Netemeyer et al., 2003). Because CFA models place strict restrictions on the parameters, representing a perfect simple structure in which each variable reflects one factor and has no direct relationship (i.e., zero pattern coefficients) with other factors, it is often the case that they do not fit the data well (Brown & Moore, 2012). In this instance, the three unidimensional models and the one two-factor model did not initially fit the data well based on the fit criteria. Modification indices were used to determine the sources of the misfit. The modification indices revealed that all of the model misfit could be addressed by adding one or more error covariances among certain items, usually items that were placed next to each other in the survey and worded in very similar ways. The original CFA specification assumes that indicators are related only because of the shared influence of the latent construct, which may not always be the case, as the shared covariation could also be due to shared method variance (Brown, 2015). There are well-documented arguments for how covariation among indicators may be justified by method effects (Brown, 2015). Correlated errors are plausible among items that use similar wordings, appear near each other on the questionnaire, or share a repeated stem (Smolkowski, 2020), which was the case with the items that required error covariances in my analysis. After confirming the factor structure of the four scales of the perceived features of ICTs, mean scores of the items within each scale were computed and treated as observed variables. The same method was used for creating the values of all other variables, except for the dependent variables: global affective well-being and state affective well-being were specified as latent variables measured by global positive/negative affect and state positive/negative affect, respectively. As shown in Figure 1, H1-H2 and RQ2 focused on whether the direct relationships of perceived synchronicity, perceived nonaudio-based nonverbal cues, and perceived audio-based nonverbal cues with perceived partner responsiveness and subjective well-being were moderated by relationship closeness. For brevity, below I use "nonaudio cues" to refer to perceived nonaudio-based nonverbal cues and "audio cues" to refer to perceived audio-based nonverbal cues.
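The CFA models themselves were estimated in Mplus 7. For readers working in Python, the sketch below shows how a comparable one-factor model with a single method-effect error covariance could be specified with the third-party semopy package, which accepts lavaan-style syntax; it is only a rough analogue (the file and variable names are hypothetical, and the estimator options differ from those used here).

import pandas as pd
import semopy

# Hypothetical data frame containing the seven perceived persistence items.
data = pd.read_csv("main_survey.csv")

# One latent factor plus one correlated error between two similarly worded,
# adjacent items, standing in for the modification-index-guided adjustments above.
model_desc = """
persistence =~ persistence1 + persistence2 + persistence3 + persistence4 + persistence5 + persistence6 + persistence7
persistence1 ~~ persistence2
"""

model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())            # parameter estimates
print(semopy.calc_stats(model))   # chi-square, CFI, TLI, RMSEA, and related fit indices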
To answer the research question and test the two hypotheses, a basic mediation model was first run to determine the main effects of nonaudio cues, audio cues, and synchronicity on subjective well-being via perceived partner responsiveness, without relationship closeness as a moderator. Second, six moderated mediation models were constructed with the perceived features (along with the covariates, including the number of people shared with, self-esteem, life satisfaction, avoidant and anxious attachment styles, the perceived valence of responses, and perceived event significance) as independent variables, perceived partner responsiveness as the mediator, relationship closeness as the moderator (moderating both the direct independent variable to dependent variable paths and the independent variable to mediator paths), and global and state affective well-being as the dependent variables. All continuous independent variables were centered around their grand mean. For each independent variable (nonaudio cues, audio cues, and synchronicity), two models were tested, one using global well-being and one using state well-being as the dependent variable. Interaction terms were constructed between each independent variable and relationship closeness. Significant interaction terms were probed using the pick-a-point approach (Hayes, 2013). RQ1 concerned whether the relationship between perceived partner responsiveness and subjective well-being was moderated by relationship closeness. To answer RQ1, a simple moderation model was specified with perceived partner responsiveness as the independent variable, relationship closeness as the moderator, the other covariates, and global/state well-being as the two dependent variables. To test H3 and H4, a simple mediation model was specified with perceived persistence as the main independent variable, memorability of the event as the mediator, and global/state well-being as the two dependent variables.
4.5. Results Thirty-six percent of the participants reported using one-to-one text messages for sharing positive events, 18 percent used posting on social media, 14 percent used group text messages, 14 percent used one-to-one voice calls, 9 percent used one-to-one video calls, 3 percent used group video calls, and 2 percent used group voice calls. The least used technologies were one-to-one emails (1 percent) and group emails (1 percent). The predominant category of device was cellphones (84%), followed by laptops (8%), desktops (5%), and tablets (3%). About 46 percent of the participants shared with only one individual, and a total of 91 percent shared with fewer than 100 individuals. To confirm the factor structure of the four scales developed in Chapter 3, four separate CFA models were built. The descriptive statistics of all the items and the correlations among the items within each scale are presented in Tables 10-14.
Table 10: Means and standard deviations of all scale items of the CFA study Mean S.D.
Minimum Maximum audio1 4.34 1.55 1 6 audio2 4.03 1.71 1 6 audio3 4.01 1.72 1 6 audio4 3.99 1.73 1 6 audio5 4.47 1.58 1 6 audio6 3.76 1.77 1 6 audio7 4.35 1.60 1 6 audio8 4.24 1.67 1 6 audio9 4.41 1.61 1 6 synchronicity1 5.16 0.87 1 6 synchronicity2 5.17 1.00 1 6 synchronicity3 5.19 1.01 1 6 synchronicity4 5.22 0.90 1 6 synchronicity5 5.22 0.88 1 6 synchronicity6 5.30 0.82 1 6 synchronicity7 5.27 0.85 1 6 synchronicity8 5.15 0.94 1 6 synchronicity9 4.72 1.19 1 6 synchronicity10 5.00 1.03 1 6 persistence1 4.68 1.44 1 6 persistence2 4.63 1.50 1 6 persistence3 4.52 1.55 1 6 persistence4 4.44 1.62 1 6 persistence5 4.53 1.50 1 6 persistence6 4.51 1.60 1 6 persistence7 4.34 1.67 1 6 72 Table 10 (cont’d) nonaudio1 4.71 1.29 1 6 nonaudio2 4.23 1.46 1 6 nonaudio3 2.62 1.62 1 6 nonaudio4 2.58 1.61 1 6 nonaudio5 2.31 1.51 1 6 nonaudio6 2.45 1.64 1 6 nonaudio7 2.27 1.56 1 6 nonaudio8 3.35 1.87 1 6 nonaudio9 3.61 1.66 1 6 nonaudio10 4.45 1.43 1 6 nonaudio11 4.55 1.38 1 6 nonaudio12 3.27 1.79 1 6 nonaudio13 3.14 1.72 1 6 nonaudio14 3.43 1.72 1 6 nonaudio15 3.92 1.67 1 6 nonaudio16 3.46 1.73 1 6 *N=667; Audio is short for perceived audio-based nonverbal cues; Synchronicity is short for perceived synchronicity; Persistence is short for perceived persistence; Nonaudio is short for perceived nonaudio-based nonverbal cues. Table 11: Zero-order correlations of items of the perceived audio-based nonverbal cues scale 1 2 3 4 5 6 7 8 9 audio1 1.00 audio2 0.82*** 1.00 audio3 0.80*** 0.86*** 1.00 audio4 0.82*** 0.87*** 0.91*** 1.00 audio5 0.85*** 0.78*** 0.78*** 0.79*** 1.00 audio6 0.68*** 0.72*** 0.74*** 0.74*** 0.68*** 1.00 audio7 0.82*** 0.77*** 0.78*** 0.80*** 0.89*** 0.71*** 1.00 audio8 0.83*** 0.80*** 0.81*** 0.80*** 0.86*** 0.70*** 0.88*** 1.00 audio9 0.83*** 0.75*** 0.77*** 0.76*** 0.89*** 0.66*** 0.89*** 0.89*** 1.00 Note: N = 667; Audio is short for perceived audio-based nonverbal cues; * p < 0.05 ** p < 0.01 *** p < 0.001. Table 12: Zero-order correlations of items of the perceived synchronicity scale 1 2 3 4 5 6 7 8 9 10 synchronicit y1 1.00 synchronicit 0.57* y2 ** 1.00 synchronicit 0.52* 0.79* y3 ** ** 1.00 73 Table 12 (cont’d) synchronicit 0.50* 0.50* 0.58* y4 ** ** ** 1.00 synchronicit 0.57* 0.64* 0.65* 0.69* y5 ** ** ** ** 1.00 synchronicit 0.49* 0.55* 0.59* 0.49* 0.56* y6 ** ** ** ** ** 1.00 synchronicit 0.49* 0.55* 0.55* 0.61* 0.70* 0.62* y7 ** ** ** ** ** ** 1.00 synchronicit 0.47* 0.69* 0.69* 0.52* 0.63* 0.71* 0.68* y8 ** ** ** ** ** ** ** 1.00 synchronicit 0.40* 0.56* 0.53* 0.39* 0.51* 0.49* 0.49* 0.60* y9 ** ** ** ** ** ** ** ** 1.00 synchronicit 0.46* 0.63* 0.64* 0.47* 0.61* 0.59* 0.56* 0.64* 0.70* 1.0 y10 ** ** ** ** ** ** ** ** ** 0 Note: N = 667; Synchronicity is short for perceived synchronicity; * p < 0.05 ** p < 0.01 *** p < 0.001. Table 13: Zero-order correlations of items of the perceived persistence scale 1 2 3 4 5 6 7 persistence1 1.00 persistence2 0.92*** 1.00 persistence3 0.85*** 0.86*** 1.00 persistence4 0.84*** 0.87*** 0.88*** 1.00 persistence5 0.77*** 0.78*** 0.82*** 0.79*** 1.00 persistence6 0.81*** 0.84*** 0.87*** 0.88*** 0.81*** 1.00 persistence7 0.79*** 0.80*** 0.83*** 0.88*** 0.77*** 0.87*** 1.00 Note: N = 667; Persistence is short for perceived persistence; * p < 0.05 ** p < 0.01 *** p < 0.001. 
74 Table 14: Zero-order correlations of items of the perceived nonaudio-based nonverbal cues scale 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 nonaudi o1 1.00 nonaudi 0.68* o2 ** 1.00 nonaudi 0.23* 0.33* o3 ** ** 1.00 nonaudi 0.24* 0.40* 0.71* o4 ** ** ** 1.00 nonaudi 0.23* 0.37* 0.71* 0.77* o5 ** ** ** ** 1.00 nonaudi 0.23* 0.38* 0.73* 0.82* 0.80* o6 ** ** ** ** ** 1.00 nonaudi 0.21* 0.35* 0.70* 0.75* 0.79* 0.77* o7 ** ** ** ** ** ** 1.00 nonaudi 0.43* 0.45* 0.20* 0.30* 0.35* 0.27* 0.41* o8 ** ** ** ** ** ** ** 1.00 nonaudi 0.52* 0.58* 0.31* 0.45* 0.43* 0.38* 0.42* 0.65* o9 ** ** ** ** ** ** ** ** 1.00 nonaudi 0.56* 0.49* 0.17* 0.21* 0.19* 0.16* 0.22* 0.54* 0.57* o10 ** ** ** ** ** ** ** ** ** 1.00 nonaudi 0.56* 0.48* 0.15* 0.16* 0.14* 0.11* 0.17* 0.49* 0.50* 0.80* o11 ** ** ** ** ** * ** ** ** ** 1.00 nonaudi 0.41* 0.48* 0.22* 0.32* 0.37* 0.31* 0.43* 0.72* 0.62* 0.53* 0.48* o12 ** ** ** ** ** ** ** ** ** ** ** 1.00 75 Table 14 (cont’d) nonaudi 0.42* 0.49* 0.24* 0.32* 0.39* 0.29* 0.42* 0.65* 0.64* 0.50* 0.47* 0.72* o13 ** ** ** ** ** ** ** ** ** ** ** ** 1.00 nonaudi 0.53* 0.57* 0.26* 0.35* 0.39* 0.34* 0.39* 0.65* 0.68* 0.56* 0.50* 0.67* 0.82* o14 ** ** ** ** ** ** ** ** ** ** ** ** ** 1.00 nonaudi 0.57* 0.58* 0.22* 0.31* 0.32* 0.32* 0.33* 0.62* 0.65* 0.61* 0.57* 0.62* 0.70* 0.81* 1.0 o15 ** ** ** ** ** ** ** ** ** ** ** ** ** ** 0 nonaudi 0.49* 0.57* 0.22* 0.32* 0.37* 0.29* 0.36* 0.64* 0.68* 0.56* 0.52* 0.68* 0.80* 0.84* 0.80* 1. o16 ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** 00 Note: N = 667; Nonaudio is short for perceived nonaudio-based nonverbal cues; * p < 0.05 ** p < 0.01 *** p < 0.001. 76 The three unidimensional models and one two-factor model did not initially fit the data well. Modification indices were used for determining the sources of the misfit. After adjusting the correlated error terms due to the method effects, I reported the overall goodness-of-fit indices for each finalized model in Table 15. The unstandardized and standardized parameter estimates were presented in Figure 2-5. All freely estimated standardized parameters were statistically significant (p < .001) for all four models. Factor loading estimates revealed that the indicators were strongly related to the latent factor (range of R2s = .76–.88 for perceived audio-based nonverbal cues, range of R2s = .42–.70 for perceived synchronicity, range of R2s = .78–.89 for perceived persistence, range of R2s = .34–.83 for perceived nonaudio-based nonverbal cues). Moreover, estimates from the two-factor solution of the perceived nonaudio-based nonverbal cues indicate a moderate relationship between the two dimensions (.44). Figure 2: CFA model for perceived audio-based nonverbal cues Note: Unstandardized and completely standardized parameter estimates from the single factor CFA model of perceived audio-based nonverbal cues. Specific items for audio 1-9 can be found in Table 6. Completely standardized parameter estimates are presented in parentheses. All freely estimated unstandardized parameter estimates are statistically significant (p < .001). Estimates obtained from Mplus 7. 77 Figure 3: CFA model for perceived synchronicity Note: Unstandardized and completely standardized parameter estimates from the single factor CFA model of perceived synchronicity. Specific items for synchronicity 1-10 can be found in Table 8. Completely standardized parameter estimates are presented in parentheses. All freely estimated unstandardized parameter estimates are statistically significant (p < .001). Estimates obtained from Mplus 7. 
Figure 4: CFA model for perceived persistence
Note: Unstandardized and completely standardized parameter estimates from the single-factor CFA model of perceived persistence. Specific items for persistence 1-7 can be found in Table 7. Completely standardized parameter estimates are presented in parentheses. All freely estimated unstandardized parameter estimates are statistically significant (p < .001). Estimates obtained from Mplus 7.
Figure 5: CFA model for perceived nonaudio-based nonverbal cues
Note: Unstandardized and completely standardized parameter estimates from the two-factor CFA model of perceived nonaudio-based nonverbal cues. Specific items for perceived nonaudio-based nonverbal cues 1-16 can be found in Table 9. Completely standardized parameter estimates are presented in parentheses. All freely estimated unstandardized parameter estimates are statistically significant (p < .001). Estimates obtained from Mplus 7.
Table 15: Model fit indices of the CFA models
Perceived audio-based nonverbal cues: χ2(26) = 243.14, p < .001; RMSEA = .11 (90% CI = .10-.13); SRMR = .03; CFI = .95; TLI = .94
Perceived synchronicity: χ2(33) = 141.79, p < .001; RMSEA = .07 (90% CI = .06-.08); SRMR = .04; CFI = .95; TLI = .93
Perceived persistence: χ2(25) = 25.53, p > .05; RMSEA = .04 (90% CI = .02-.06); SRMR = .01; CFI = 1.00; TLI = .99
Perceived nonaudio-based nonverbal cues: χ2(101) = 511.21, p < .001; RMSEA = .08 (90% CI = .07-.09); SRMR = .06; CFI = .94; TLI = .93
After confirming the factor structure of the scales, mean scores were constructed for each of the scales. The descriptive statistics and correlations of all the variables are presented in Tables 16-17.
Table 16: Descriptive statistics of main study variables Mean S.D. Minimum Maximum
Nonaudio-based nonverbal cues 3.39 1.15 1 6
Audio-based nonverbal cues 4.18 1.50 1 6
Synchronicity 5.14 .74 1.9 6
Persistence 4.52 1.44 1 6
Relationship closeness 5.82 1.09 1 7
Memorability of the event 5.20 .75 1.75 6
Number of people shared 76.17 369.00 1 5000
Responsiveness -1.69 2.96 -12 3
State_positive 5.04 1.04 0 6
State_negative .24 .59 0 5.6
Global_positive 3.46 .82 1.1 5
Global_negative 1.61 .64 1 4.6
Valence of responses 4.44 .52 2 5
Self-esteem 3.65 .48 1 6
Attach_avoid 2.90 1.05 1 6
Attach_anxiety 2.92 1.08 1 6
Life satisfaction 3.86 1.20 1 6
Perceived event significance 6.29 .83 2.33 7
Education 4.40 1.39 1 8
Age 38.51 12.48 18 75
Note: N = 667.
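As a minimal illustration, descriptive statistics and zero-order correlations like those in Tables 16-17 can be generated from the composite scores with pandas; the file and column names below are hypothetical shorthand for the study variables.

import pandas as pd

# Hypothetical data frame of composite scores (one column per study variable).
df = pd.read_csv("main_study_composites.csv")

variables = ["nonaudio", "audio", "synchronicity", "persistence", "closeness",
             "memorability", "responsiveness", "state_positive", "state_negative",
             "global_positive", "global_negative"]

# Descriptive statistics (Table 16 analogue)
print(df[variables].agg(["mean", "std", "min", "max"]).T.round(2))

# Zero-order correlation matrix (Table 17 analogue)
print(df[variables].corr().round(2))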
80 Table 17: Zero-order correlations of main study variables 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Nonaudio 1.00 .48** 1.00 Audio * Synchron .21** .35*** 1.00 icity * Persisten -.15* -.35** -.13** 1.00 ce ** * .19** .20*** .34*** .02 1.00 Closeness * Memorab .15** .13** .24*** .15*** .25*** 1.00 ility * Number -.09* -.16** -.16*** .06 -.35*** -.05 1.00 of people * shared Responsi .07+ .23*** .32*** -.07+ .29*** .32*** -.16** 1.00 veness * State_pos .21** .23*** .22*** .06+ .29*** .47*** -.14** .37*** 1.00 itive * * State_neg -.03 -.06 -.15*** .03 -.18*** -.20** .10* -.33*** -.37** 1.00 ative * * Global_p .17** .11** .14*** .11** .22*** .38*** .02 .19*** .34*** -.16* 1.00 ositive * ** Global_n -.06 -.04 -.06 .03 -.16*** -.20** .00 -.24*** -.26** .42** -.34* 1.00 egative * * * ** Valence .19** .24*** .31*** .04 .31*** .40*** -.09* .38*** .46*** -.22* .32** -.19* 1.00 of * ** * ** responses Self- .03 .03 -.02 .02 .00 -.06 -.03 -.18*** -.03 .10** -.01 .26** -.04 1.00 esteem * 81 Table 17 (cont’d) -.03 -.07+ -.14*** -.03 -.21*** -.13** .02 -.26*** -.17** .20** -.24* .27** -.13* .16** 1.00 Avoidant * * * ** * ** * -.06 -.06 -.10* .07+ -.15*** -.17** .02 -.26*** -.13** .21** -.28* .40** -.14* .26** .30** 1.00 Anxious * * * ** * ** * * Life .01 -.00 .08* -.01 .16*** .16*** -.02 .14*** .16*** -.15* .39** -.37* .17** -.24* -.30* -.29* 1.00 satisfacti ** * ** * ** ** ** on Event .14** .18*** .25*** .05 .27*** .47*** -.01 .30*** .33*** -.12* .31** -.11* .40** -.02 -.12* -.09* .11* 1.00 significan * * * * * * * ce Note: Note: N = 667; Nonaudio is short for perceived nonaudio-based nonverbal cues; Audio is short for perceived audio-based nonverbal cues; Synchronicity is short for perceived synchronicity; Persistence is short for perceived persistence; + p < 0.1 * p < 0.05 ** p < 0.01 *** p < 0.001. 82 RQ1 asked whether perceived partner responsiveness relates to subjective well-being differently depending on relationship closeness. A basic moderation model was constructed with relationship closeness as the moderator between responsiveness and global/state well-being (Table 18). Results showed that there were interaction effects between perceived partner responsiveness and relationship closeness in predicting state affective well-being (β = -.036, p < .001)., but not in predicting global affective well-being (β = -.004, p > .05). By using picking a point approach (mean, one standard deviation below and above the mean) to probe the interaction, it was found that as the relationship closeness increased, perceived partner responsiveness’ positive effect on state affective well-being decreased. Specifically, when relationship closeness was low, the positive direct effect of responsiveness on state well-being was .113 with a 95% CI [0.071, 0.154]. When relationship closeness was at average, the positive direct effect of responsiveness on state affective well-being was .074 with a 95% CI [0.050, 0.099]. When relationship closeness was high, the positive direct effect of responsiveness on state affective well-being was .035 with a 95% CI [0.002, 0.071]. 
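A minimal sketch of the pick-a-point computation used to probe this interaction is shown below; the coefficients and moderator statistics are placeholders to be replaced with estimates from the fitted model, not outputs reproduced from the Mplus analysis.

def conditional_effect(b_focal: float, b_interaction: float, moderator_value: float) -> float:
    """Simple slope of the focal predictor at a chosen value of the moderator."""
    return b_focal + b_interaction * moderator_value

# Placeholder inputs: the focal coefficient, the interaction coefficient, and the
# moderator's mean and SD. If the moderator was mean-centered in the model,
# probe at -1 SD, 0, and +1 SD instead of the raw values used here.
b_responsiveness = 0.28
b_interaction = -0.04
closeness_mean, closeness_sd = 5.8, 1.1

for label, w in [("low closeness (-1 SD)", closeness_mean - closeness_sd),
                 ("average closeness", closeness_mean),
                 ("high closeness (+1 SD)", closeness_mean + closeness_sd)]:
    print(label, round(conditional_effect(b_responsiveness, b_interaction, w), 3))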
Table 18: Basic moderation model: Relationship closeness as the moderator between responsiveness and affective well-being State affective well-being Global affective well-being as DV as DV Global_positive 1.00 (.00) Global_negative -.755 (.077)*** State_positive 1.00 (.00) State_negative -.399 (.069)*** Perceived partner .282 (.082)*** .035 (.041) responsiveness Relationship closeness -.011 (.039) .030 (.024) Responsiveness* -.036 (.014)*** [RQ1] -.004 (.007) [RQ1] closeness Number of people shared -.011 (.013) .006 (.005) Self-esteem .072 (.039) .240 (.027)*** Attachment_avoid -.051 (.031) -.009 (.024) Attachment_anxiety .007 (.037) -.055 (.027)* 83 Table 18 (cont’d) Life satisfaction .026 (.032) .089 (.024) *** Valence of response .524 (.106) *** .186 (.052)*** Event significance .107 (.060) .061 (.031)* Model fit χ2(9) = 38.88, p < .001, χ2(9) = 43.74, p < .001, SRMR = .02, RMSEA SRMR = .02, RMSEA = .08 = .07 (90% CI = .05 (90% CI = .05 –.10), TLI –.09), TLI = .84, CFI = .93. = .82, CFI = .92. Note: Unstandardized coefficients; Standard error in parentheses. All continuous variables are centered at the grand mean. The variable number of people shared was divided by 100 to reduce the variance and make the model converge. * p < 0.05 ** p < 0.01 *** p < 0.001. A two-step approach was used for testing H1 and H2 and answering RQ2. First, a basic mediation model with the perceived features of ICTs as independent variables, the perceived partner responsiveness as the mediator, and global and state affective well-being as the dependent variables were constructed. Results were used to illustrate the main effects of each independent variable on the mediator and the dependent variables before testing moderations (Table 19). Responsiveness was positively related to state-well-being (β = .075, p < .001), and unrelated to global well-being (β = .014, p > .05). Nonaudio cues was negatively related to perceived partner responsiveness (β = -.264, p < .05), while being positively related to state affective well-being (β = .067, p < .05) and global affective well-being (β = .045, p < .05). Audio cues was positively related to perceived partner responsiveness (β = .224, p < .01), while being unrelated to state affective well-being (β = .015, p > .05) and global affective well-being (β = -.011, p > .05). Similarly, synchronicity was positively related to perceived partner responsiveness (β =.545, p < .001), while being unrelated to state affective well-being (β = -.028, p > .05) and global well-being (β = -.049, p > .05). Relationship closeness was unrelated to responsiveness (β =.143, p > .05), state affective well-being (β = .058, p > .05) and global affective well-being (β = .041, p > .05). 
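To illustrate how bias-corrected bootstrap confidence intervals for an indirect effect (a*b) of the kind reported below can be computed, here is a minimal, self-contained sketch that uses ordinary least squares for the two regressions; it is a simplified stand-in for the latent-variable models estimated in Mplus, and the column names are hypothetical.

import numpy as np
import pandas as pd
from scipy.stats import norm

def ab_indirect(data: pd.DataFrame, x: str, m: str, y: str) -> float:
    """a*b indirect effect from two OLS regressions: m ~ x and y ~ m + x."""
    X_a = np.column_stack([np.ones(len(data)), data[x].to_numpy()])
    a = np.linalg.lstsq(X_a, data[m].to_numpy(), rcond=None)[0][1]
    X_b = np.column_stack([np.ones(len(data)), data[m].to_numpy(), data[x].to_numpy()])
    b = np.linalg.lstsq(X_b, data[y].to_numpy(), rcond=None)[0][1]
    return a * b

def bias_corrected_ci(data, x, m, y, n_boot=1000, alpha=0.05, seed=2021):
    """Bias-corrected bootstrap confidence interval for the a*b indirect effect."""
    rng = np.random.default_rng(seed)
    estimate = ab_indirect(data, x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(data), len(data))   # resample cases with replacement
        boots[i] = ab_indirect(data.iloc[idx], x, m, y)
    z0 = norm.ppf((boots < estimate).mean())          # bias-correction factor
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return estimate, np.quantile(boots, [lo, hi])

# Hypothetical use: synchronicity -> responsiveness -> state affective well-being.
# df = pd.read_csv("main_study_composites.csv")
# print(bias_corrected_ci(df, x="synchronicity", m="responsiveness", y="state_wellbeing"))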
84 Table 19: Basic mediation model: Nonaudio cues, audio cues, and synchronicity and affective well-being via perceived partner responsiveness Perceived partner State affective well- Global affective responsiveness being well-being State_positive 1.000 (.000)*** State_negative -.365 (.065)*** Global_positive 1.000 (.000)*** Global_negative -.747 (.075)*** Perceived partner .075 (.014)*** .014 (.011) responsiveness Nonaudio cues -.264 (.113)* .067 (.033)* .045 (.021)* Audio cues .224 (.080) ** .015 (.026) -.011 (.018) Synchronicity .545 (.148) *** -.028 (.048) -.049 (.031) Relationship .143 (.116) .058 (.036) .041 (.023) closeness Number of people -.062 (.035) -.017 (.014) .005 (.005) shared Self-esteem -.073 (.127) .065 (.041) .240 (.027)*** Attachment_avoid -.397 (.106) -.045 (.032) -.010 (.024) Attachment_anxiety -.478 (.112) .010 (.037) -.053 (.026)* Life satisfaction -.024 (.127) .024 (.033) .090 (.024) *** Valence of response 1.247 (.228)*** .528 (.107)*** .188 (.053) *** Event significance .478 (.138)** .111 (.058)* .063 (.031)* Model fit χ (11) = 46.51, p 2 χ2(11) = 45.09, p < .001, SRMR = .02, < .001, SRMR = .02, RMSEA = .07 (90% RMSEA = .07 (90% CI = .05 –.09), TLI CI = .05 –.09), TLI = .80, CFI = .94. = .85, CFI = .95. Note: Unstandardized coefficients; Standard error in parentheses. All continuous variables are centered at the grand mean. The variable number of people shared was divided by 100 to reduce the variance and make the model converge. * p < 0.05 ** p < 0.01 *** p < 0.001. In terms of indirect effects, synchronicity was positively related to responsiveness, which was positively related to state affective well-being (effect estimate = .039, 95% confidence interval: [.016 - .069]). Audio cues was positively related to responsiveness, which was positively related to state affective well-being (effect estimate = .032, 95% confidence interval: [.011 - .066]). Nonaudio cues was negatively related to responsiveness, which was positively 85 related to state affective well-being (effect estimate = -.029, 95% confidence interval: [-.059 - .005]). None of the indirect effects were significant between the IVs and global well-being. After testing for the main effects, six moderated mediation models were constructed. Results were presented in Table 20-22. Each table contains results from two models since model results were largely identical with minor fluctuations in the last decimal points for responsiveness (and therefore only reported once) and only differed on the state/global affective well-being. For H1a, H1b, H1c, H2a, H2b, and H2c’s arguments on how the use of ICTs with perceived (a) synchronicity, (b) audio cues, and (c) nonaudio cues were differently related to well-being depending on relationship closeness with one’s communication partner, it was found that none of the interaction terms between (a) synchronicity, (b) audio cues, and (c) nonaudio cues and relationship closeness was significant in predicting state and global affective well-being (specific coefficients and standard errors can be seen in Table 20-22). Thus, H1 and H2 were not supported. RQ2 asked about how (a) synchronicity, (b) audio-based nonverbal cues, and (c) nonaudio-based nonverbal cues, may relate to perceived partner responsiveness differently by relationship closeness with one’s partner. 
Again, it was found that none of the interaction terms between (a) synchronicity, (b) audio cues, and (c) nonaudio cues and relationship closeness was significant in perceived partner responsiveness (coefficients and standard errors can be seen in Table 20-22). Table 20: Moderated mediation model: Synchronicity as the main independent variable Perceived partner State affective well- Global affective responsiveness being as DV well-being as DV Global_positive 1.00 (.00) Global_negative -.744 (.075)*** State_positive State_negative -.364 (.065)*** 86 Table 20 (cont’d) Perceived partner .075 (.014)*** .014 (.011) responsiveness Nonaudio cues -.271 (.114)* .065 (.033)* .045 (.021)* Audio cues .232 (.081)** .018 (.026) -.010 (.018) Synchronicity -.163 (.762) -.025 (.048) -.110 (.120) Relationship -.461 (.649) -.136 (.221) -.011 (.107) closeness Synchronicity* .123 (.126) .039 (.041) [H1a and .011 (.021) [H1a closeness [RQ2a] H2a] and H2a] Number of people -.066 (.040) -.018 (.014) .005 (.005) shared Self-esteem -.073 (.128) .065 (.041) .240 (.027)*** Attachment_avoid -.388 (.106)*** -.042 (.032) -.010 (.024) Attachment_anxiety -.479 (.112) *** .009 (.037) -.053 (.027)* Life satisfaction -.021 (.127) .025 (.033) .090 (.024) *** Valence of response 1.243 (.226)*** .527 (.107)*** .188 (.053)*** Event significance .462 (.140)*** .108 (.057) .063 (.031)* Model fit χ2(12) = 46.56, p χ2(12) = 45.63, p < .001, SRMR = .02, < .001, SRMR RMSEA = .07 (90% = .02, RMSEA CI = .05 –.09), TLI = .07 (90% CI = .05 = .81, CFI = .94. –.09), TLI = .85, CFI = .95. Note: Unstandardized coefficients; Standard error in parentheses. All continuous variables are centered at the grand mean. The variable number of people shared was divided by 100 to reduce the variance and make the model converge. * p < 0.05 ** p < 0.01 *** p < 0.001. Table 21: Moderated mediation model: Audio-based nonverbal cues as the main independent variable Perceived partner State affective Global affective responsiveness well-being as DV well-being as DV Global_positive 1.00 (.00) Global_negative -.744 (.075)*** State_positive State_negative -.364 (.065)*** Perceived partner .075 (.014) *** .014 (.011) responsiveness Nonaudio cues -.268 (.114)* .066 (.033)* .045 (.021)* Audio cues -.039 (.356) -.080 (.125) -.024 (.062) Synchronicity .556 (.149)*** -.025 (.048) -.049 (.031) Relationship -.041 (.259) -.009 (.098) .032 (.050) closeness 87 Table 21 (cont’d) Audio .045 (.059) [RQ2b] .017 (.021) [H1b -.021 (.014) [H1b cues*closeness and H2b] and H2b] Number of people -.067 (.035)* -.019 (.015) .005 (.005) shared Self-esteem -.070 (.127) .066 (.041) .240 (.027)*** Attachment_avoid -.391 (.107)*** -.043 (.032) -.010 (.025) Attachment_anxiety -.485 (.113) *** .007 (.038) -.054 (.027)* Life satisfaction -.021 (.127) .025 (.033) .090 (.024)*** Valence of response 1.235 (.226)*** .524 (.107)*** .187 (.053)*** Event significance .469 (.139)*** .110 (.057)* .063 (.031)* Model fit χ2(12) = 46.70, p χ2(12) = 46.71, p < .001, SRMR < .001, SRMR = .02, RMSEA = .02, RMSEA = .07 (90% CI = .07 (90% CI = .05 –.09), TLI = .05 –.09), TLI = .81, CFI = .94. = .85, CFI = .95. Note: Unstandardized coefficients; Standard error in parentheses. All continuous variable centered at grand mean. The variable number of people shared was divided by 100 to reduce the variance and make the model converge. * p < 0.05 ** p < 0.01 *** p < 0.001. 
Table 22: Moderated mediation model: Nonaudio-based nonverbal cues as the main independent variable

Predictor | Perceived partner responsiveness | State affective well-being as DV | Global affective well-being as DV
Global_positive |  |  | 1.00 (.00)
Global_negative |  |  | -.744 (.075)***
State_positive |  | 1.00 (.00) |
State_negative |  | -.364 (.065)*** |
Perceived partner responsiveness |  | .077 (.014)*** | .015 (.011)
Nonaudio cues | -1.433 (.647)* | .220 (.176) | .171 (.085)*
Audio cues | .229 (.078)** | .015 (.026) | -.012 (.018)
Synchronicity | .540 (.148)*** | -.028 (.048) | -.049 (.031)
Relationship closeness | -.494 (.32) | .141 (.098) | .109 (.050)*
Nonaudio cues*closeness | .197 (.105) [RQ2c] | -.026 (.028) [H1c and H2c] | -.021 (.014) [H1c and H2c]
Number of people shared | -.069 (.037)* | -.016 (.014) | .006 (.005)
Self-esteem | -.056 (.125) | .063 (.040) | .238 (.026)***
Attachment_avoid | -.396 (.106)*** | -.044 (.032) | -.010 (.024)
Attachment_anxiety | -.477 (.110)*** | .010 (.037) | -.053 (.026)*
Life satisfaction | -.018 (.126) | .023 (.033) | .090 (.024)*
Valence of response | 1.230 (.229)*** | .529 (.107)*** | .189 (.054)***
Event significance | .458 (.139)*** | .114 (.057)* | .066 (.032)*
Model fit |  | χ2(12) = 46.55, p < .001, SRMR = .02, RMSEA = .07 (90% CI = .05–.09), TLI = .81, CFI = .94 | χ2(12) = 45.96, p < .001, SRMR = .02, RMSEA = .07 (90% CI = .05–.09), TLI = .85, CFI = .95
Note: Unstandardized coefficients; standard errors in parentheses. All continuous variables are centered at the grand mean. The variable number of people shared was divided by 100 to reduce the variance and make the model converge. * p < 0.05, ** p < 0.01, *** p < 0.001.

However, the moderated mediation models also allowed testing of whether the indirect effects between the independent variables and state affective well-being identified in the basic mediation model were moderated by relationship closeness. An additional parameter was constructed as the product of the coefficient of the nonaudio cues*closeness interaction in predicting responsiveness and the coefficient of responsiveness in predicting state well-being. Testing this parameter showed that the indirect effect between nonaudio cues and state well-being was moderated by relationship closeness (effect estimate = .015, 95% CI [0.001, 0.033]). In probing the moderated mediation with a pick-a-point approach, the negative indirect effect of nonaudio cues on state well-being was found to decrease as relationship closeness increased. Specifically, when relationship closeness was low, the indirect effect of nonaudio cues on state well-being was -.039, 95% CI [-0.073, -0.016]. When relationship closeness was average, the indirect effect of nonaudio cues on state well-being was -.022, 95% CI [-0.044, -0.005]. When relationship closeness was high, the negative indirect effect of nonaudio cues on state well-being was no longer significant (effect estimate = -.006, 95% CI [-0.029, 0.016]).

Lastly, a basic mediation model was constructed to test H3 and H4 regarding the hypothesized positive relationship between memorability of the event and well-being and the hypothesized positive relationship between persistence and memorability (Table 23). Persistence was positively related to memorability of the event (β = .068, p < .001), and memorability of the event was positively related to state affective well-being (β = .412, p < .001) and global affective well-being (β = .166, p < .001).
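As another illustrative aside, the following minimal sketch lays out the arithmetic of the first-stage moderated mediation just described: the indirect effect of nonaudio cues on state well-being at a given level of closeness W is (a1 + a3*W)*b, and the additional parameter testing whether that indirect effect depends on closeness is the product a3*b (often called the index of moderated mediation). The a1, a3, and b values below are the unstandardized coefficients reported in Table 22 and the text; the low/average/high closeness values used for the pick-a-point probing are hypothetical stand-ins, and the confidence intervals reported above come from the fitted SEM rather than from this arithmetic.

```python
# Minimal sketch (illustration only, not the dissertation's analysis code):
# pick-a-point probing of a first-stage moderated indirect effect.
a1 = -1.433   # nonaudio cues -> responsiveness (conditional main effect, Table 22)
a3 = 0.197    # (nonaudio cues x closeness) -> responsiveness (Table 22)
b = 0.077     # responsiveness -> state affective well-being (Table 22)

def conditional_indirect(w: float) -> float:
    """Indirect effect of nonaudio cues on state well-being at closeness w."""
    return (a1 + a3 * w) * b

# Product term testing whether the indirect effect varies with closeness:
# .197 * .077 is approximately .015, consistent with the estimate reported above.
index_of_moderated_mediation = a3 * b

# Hypothetical pick-a-point values of closeness (stand-ins for low / mean / high):
for label, w in [("low", 4.5), ("average", 5.6), ("high", 6.7)]:
    print(f"closeness {label:>7} (w = {w}): indirect effect = {conditional_indirect(w):+.3f}")

# The same product-of-coefficients logic applies to the simple persistence model
# reported next: .068 * .412 is approximately .028 (state) and .068 * .166 is
# approximately .011 (global), matching the indirect estimates that follow.
```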
In the persistence model, the indirect effects were also significant: persistence was related to memorability of the event, which in turn was related to both state well-being (indirect effect estimate = .028, 95% CI [.015, .046]) and global well-being (indirect effect estimate = .011, 95% CI [.005, .020]).

Table 23: Basic mediation model: Memorability as the mediator between persistence and affective well-being

Predictor | Memorability | State affective well-being as DV | Global affective well-being as DV
Global_positive |  |  | 1.00 (.00)
Global_negative |  |  | -.709 (.071)***
State_positive |  | 1.00 (.00) |
State_negative |  | -.304 (.058)*** |
Memorability |  | .412 (.056)*** [H4] | .166 (.036)*** [H4]
Persistence | .068 (.017)*** [H3] | -.002 (.025) | .009 (.013)
Number of people shared | -.006 (.009) | -.030 (.012)* | .001 (.004)
Self-esteem | .017 (.033) | .047 (.042) | .241 (.027)***
Attachment_avoid | -.002 (.026) | -.079 (.031)* | -.017 (.024)
Attachment_anxiety | -.057 (.028)* | -.004 (.037) | -.054 (.026)*
Life satisfaction | .026 (.024) | .012 (.033) | .089 (.024)***
Valence of response | .323 (.062)*** | .579 (.096) | .172 (.046)***
Event significance | .324 (.044)*** | .054 (.053) | .029 (.030)
Model fit |  | χ2(9) = 31.78, p < .001, SRMR = .03, RMSEA = .07 (90% CI = .04–.09), TLI = .87, CFI = .96 | χ2(9) = 45.03, p < .001, SRMR = .03, RMSEA = .06 (90% CI = .05–.11), TLI = .84, CFI = .95
Note: Unstandardized coefficients; standard errors in parentheses. All continuous variables are centered at the grand mean. The variable number of people shared was divided by 100 to reduce the variance and make the model converge. * p < 0.05, ** p < 0.01, *** p < 0.001.

A summary of the main findings and the significance testing results in relation to the research questions and hypotheses is provided in Table 24.

Table 24: A summary of the key findings

Variable | Perceived partner responsiveness | Memorability | State affective well-being | Global affective well-being
Mediators
Perceived partner responsiveness |  |  | Positive association | No association
Memorability |  |  | H3 supported (positive association) | H3 supported (positive association)
Moderator
Relationship closeness | No association |  | No association | No association
Mediator*Moderator
Perceived partner responsiveness*closeness |  |  | RQ1 (negative association) | No association
Independent variables
Synchronicity | Positive association |  | No association | No association
Audio-based nonverbal cues | Positive association |  | No association | No association
Nonaudio-based nonverbal cues | Negative association |  | Positive association | Positive association
Persistence |  | H4 supported (positive association) |  |
Independent variables*Moderator
Synchronicity*closeness | RQ2a (no association) |  | H1a & H2a (no association) | H1a & H2a (no association)
Audio-based nonverbal cues*closeness | RQ2b (no association) |  | H1b & H2b (no association) | H1b & H2b (no association)
Nonaudio-based nonverbal cues*closeness | RQ2c (no association) |  | H1c & H2c (no association) | H1c & H2c (no association)

CHAPTER FIVE

DISCUSSION

Borrowing from capitalization research in social psychology, this study applies Gable and Reis's (2010) model of capitalization processes to the context of computer-mediated communication. Through extensive theoretical and empirical development, CMC scholars have identified a host of critical features or affordances that can shape interpersonal communication processes and outcomes (Park & Sundar, 2015; Sundar et al., 2015; Walther, 1996, 2011).
However, a gap remains in the CMC literature: findings from experimental or laboratory research on the effects of features of mediated communication (e.g., interactivity, synchronicity) on interpersonal communication may or may not generalize to everyday use scenarios. By focusing on how capitalization, as a form of secure-base support, can be shaped by common features or characteristics of communication technologies, and by doing so through scale development and a survey, this study outlines the mediating and moderating mechanisms through which the use of communication technologies is related to individuals' affective well-being, both immediately after a technology is used for capitalization and two weeks after the capitalization has taken place.

Given the prevalence of using technology for social interaction, especially during the COVID-19 pandemic when in-person interactions carry many risks and restrictions, understanding how technology use may be related to our well-being is imperative. Notably, with the advent of smartphones and social media, the relationship between technology use and well-being has become more salient (Hampton, 2019; Twenge, 2017; Twenge et al., 2020; Twenge & Farley, 2021). While social media has platform-specific features, it also shares a host of features with previous generations of communication technologies, such as text messages and voice/audio calls. The emergence of masspersonal communication, and its similarities to and differences from interpersonal communication in its effects on well-being, can also be better understood through direct comparisons of perceived features of ICTs. Therefore, when studying how media and technology use may be related to health and well-being, singling out one or two forms of technology without considering the current polymedia environment is less productive (Bayer et al., 2020; Fox & McEwan, 2017; Walther, 2013).

Based on this premise, and in response to previous scholars' calls to develop a more systematic way of examining features or affordances that transcend specific products or platforms (Bayer et al., 2020; Fox & McEwan, 2017; Walther, 2013), the first part of the dissertation is a scale development study aimed at developing more accurate and thorough scales to measure individuals' perceptions of different features of ICTs. Continuing Fox and McEwan's (2017) work on developing the perceived social affordances of communication channels scale, this study aims to measure the features that are most relevant to supportive communication when sharing positive events. In particular, by distinguishing between nonaudio-based cues and audio-based cues, the scales developed in the current study add nuance to the different implications of these two types of cues. Previous research has emphasized the importance of cues during mediated communication (Xu & Liao, 2020), and cues are the focus of several channel-focused theories and models (Daft & Lengel, 1986; Short et al., 1976; Walther, 2011). However, scholars have predominantly theorized and measured auditory and nonaudio-based cues in the same way, which obscures the distinct utility of each type of cue. Such choices may be necessary, and they reflect how modern mediated technologies have readily converged multiple cues on a single platform, which pushes researchers to think in similarly broad terms.
However, based on research related to empathic accuracy and nonverbal immediacy, this study distinguishes the perceptions and measurements of these two types of cues and subsequently uncovers their different relationships with key mechanisms and outcomes of capitalization.

Below, I discuss the main findings of the dissertation in four parts: (1) the significant direct and conditional effect of perceived partner responsiveness on state well-being; (2) the significant direct and indirect effects of synchronicity, audio-based nonverbal cues, and nonaudio-based nonverbal cues on perceived partner responsiveness and affective well-being; (3) the nonsignificant direct and conditional effects of synchronicity, audio-based nonverbal cues, and nonaudio-based nonverbal cues on state/global well-being; and (4) the significant indirect effect of persistence on state/global well-being through memorability of the positive event.

5.1. The direct and conditional effect of perceived partner responsiveness on state affective well-being

Consistent with previous literature, perceived partner responsiveness was found to have a positive association with state affective well-being, which reflects the intensity of positive emotions upon receiving interaction partners' responses during capitalization (Gable et al., 2010; Gable et al., 2004; Reis et al., 2010). It was not related to global affective well-being, a more general measure of individuals' emotions and mood over the past two weeks. This differential association is interesting, suggesting that the positive emotions obtained by capitalizing on a specific event may depend highly on the context and the partner and may not carry beyond the immediate situation.

Perhaps more interestingly, the relationship between perceived partner responsiveness and state well-being was significantly moderated by relationship closeness. With the advent of masspersonal communication, the ability to reach a broad range of audiences is reshaping individuals' social relationships, self-presentations, health behaviors, and psychological well-being (Chua & Chang, 2016; DeVito et al., 2017; O'Sullivan & Carr, 2018; Zhang, 2017). Previous literature primarily focuses on capitalization with close others. This study introduces more relationship variety by sampling various sharing methods that allow individuals to reach interaction partners with whom they are less familiar. As a result, it was found that the positive effect of perceived partner responsiveness on state affective well-being decreased as relationship closeness increased. When individuals perceived responsiveness, i.e., active and constructive feedback about the specific event, from interaction partners who were less close to them, it boosted their positive mood to a greater extent than when they received the same responsive feedback from individuals with whom they were already close. I explain a possible reason for this finding below.

This finding may be explained by the different expectations embedded in various types of social relationships. Based on Clark and Mills's (1979) research on communal relationships, whether people perceive a partner to be responsive depends on their expectations of the degree of obligation ascribed to that partner (Reis et al., 2004). Expected levels of responsiveness vary from person to person within a network, and such variations are contingent on the strength of communal relationships, or tie strength.
Simply stated, most people expect their romantic partners to be more responsive to their needs than their neighbors. Therefore, when one's romantic partner does his or her duty and fulfills the expectation of being responsive, the response does not exceed one's expectations and thus brings less positive affect. However, when one's neighbor goes above and beyond and offers unexpectedly positive, warm, and supportive feedback, that response plays a more significant role in lifting one's mood and spirits. Quality social interaction and receiving a supportive response are related to positive affect (Berry & Hansen, 1996; Monfort et al., 2014); in this case, a response is related to greater positive emotion when it comes from people one expects less from. It is also worth noting that this lift in mood was not sustained enough to boost global affective well-being.

5.2. The direct and indirect effect of synchronicity, audio-based nonverbal cues, and nonaudio-based nonverbal cues on perceived partner responsiveness and affective well-being

Davis (1982) argued that responsiveness is heavily influenced by attention to one's partner and the accuracy of one's understanding of the partner's communication. He theorized that the more cues and the more synchronous the feedback, the more responsive one perceives an interaction partner to be, through increased attention and accuracy. Following that logic, the number of cues and the degree of feedback available through a mediated channel can determine both attention to one's partner and communication accuracy, thereby shaping responsiveness. By measuring individuals' perceived synchronicity, audio-based nonverbal cues, and nonaudio-based nonverbal cues during mediated interactions, this study found that synchronicity and audio-based nonverbal cues are positively related to higher levels of perceived partner responsiveness, in accordance with Davis's (1982) argument. However, perceiving a higher level of nonaudio-based nonverbal cues is negatively related to perceived partner responsiveness, contradicting Davis's theorizing.

The expected positive relationships demonstrate that, when technology is used to share a positive event, the perceived features of a given medium or technology are related to whether individuals perceive their partners to be responsive. Again, responsiveness is characterized by active and constructive replies that appropriately celebrate the joyous event. As enthusiasm and joy are best conveyed in timely responses without much delay, they are most easily facilitated through synchronous media that allow instant communication, in accordance with media synchronicity theory (Dennis et al., 2008) and Davis's (1982) theorizing. When capitalizing using communication technologies that enable immediate and unrestricted feedback, the recipient is also given the most effective tools for communicating their responses to the communicator.

Considering that sharing positive events is a highly emotional experience, it is arguable that most of the information exchanged during capitalization is related to emotions. Previous research has demonstrated that voice-only communication is more conducive to empathic accuracy than vision-only communication, which explains why audio cues are positively related to responsiveness, as expected. However, it is somewhat surprising that nonaudio-based nonverbal cues are negatively associated with perceiving a higher level of responsiveness.
Davis (1982) argued that when recipients express their responses using multiple cues, both verbal and nonverbal, the communicator has an easier time deducing the meaning and nature of those responses, making it easier to perceive one's partner as responsive. However, Davis's work took place in the 1980s, when technology-mediated social interaction was not the norm, and the only medium Davis briefly used as an example of a different channel of cues was the telephone. The arguments about how cues are positively related to attention and accuracy were therefore most likely referring to offline interactions, where nonverbal cues come naturally with each interaction in an unmediated and undistorted manner. Below, I argue that nonverbal cues perceived through mediated technologies may not contribute to attention to one's interaction partner or to communication accuracy.

First, the online environment is notorious for being full of stimuli and distractions that make it harder for individuals to focus on a single task, regardless of the nature of the task. When communicating with one's partner about a specific event, it is highly likely that the partner, or the person sharing the event, is not giving their full attention to this one interaction; they may also be listening to music, watching Netflix, shopping, or writing a paper. So, whatever nonverbal cues are transmitted through the technology, individuals are processing them with limited cognitive capacity while engaging in other tasks.

Second, communication accuracy may also be compromised. When technology mediates and translates nonverbal cues during interpersonal communication, those cues are more likely to be limited, omitted, interrupted, distorted, imagined, or over-interpreted. For example, the most widely adopted technology for conveying a communication partner's nonverbal cues is currently video conferencing, which shows only facial expressions and limited body language. Individuals consciously or unconsciously adapt their nonverbal cues in ways that reflect their discomfort with video conferencing, while overcompensating for the deficit with artificially exaggerated cues. In fact, the prevalence of Zoom fatigue can be explained by the shortage of nonverbal cues and the cognitive load that comes with trying to produce and interpret nonverbal cues (Fauville et al., 2021).

In a way, the scope or definition of nonverbal cues fundamentally changed when humans transitioned from unmediated to mediated environments for communication. Social interactions have shifted from a fully present bodily experience offline to a disembodied experience online, where authentic nonverbal cues are impoverished. As a solution and substitution, new conventions of expression have evolved to clarify the meaning of messages (Crystal, 2008). Because the scope of nonverbal cues now incorporates new inventions in CMC, such as GIFs, emojis, emoticons, and chronemic and proxemic cues, interpreting them comes with a level of uncertainty, whether due to technical difficulties, digital skills or literacy, or individual nonverbal sensitivities, that must be tolerated. A great range of substitutes has developed to compensate for offline nonverbal cues, and different forms are perceived differently by different people. For example, one experimental study found that using an emoticon :-) (a character string) versus a smiley (a graphical pictogram) in text-based interactions was related to person perception; smilies exerted a stronger influence on readers' mood and on the perception of the sender's commitment (Ganster et al., 2012). Another study found that digital natives (those born after the mid-1980s) ascribe more negative meaning to the use of a period in text messages than digital immigrants (those born in 1984 or earlier), suggesting that such subtleties in mediated communication may be missed by many (Riordan et al., 2018). It is arguable that, in expanding the scope of nonverbal cues, the meaning of nonverbal cues online becomes less universal, explicit, or easily interpretable.

With these changes, the relationship between perceived nonverbal cues and what is perceived as responsive behavior from one's partner has also shifted. Whereas more offline nonverbal cues may be positively related to perceived responsiveness, more online nonverbal cues demonstrated a negative relationship with perceived partner responsiveness in the current study. Distortions, ambiguities, and uncertainties associated with processing and identifying the meanings of online nonverbal cues, and the decreased communication accuracy and attention they entail, could be responsible for the negative association found here.

The opposite findings regarding nonaudio-based and audio-based nonverbal cues' relationships to perceived partner responsiveness highlight the necessity of separating these two types of cues. Even though both are nonverbal cues, the mediated environment likely poses a more significant threat to the fidelity or authenticity of nonaudio-based nonverbal cues than to audio-based nonverbal cues. First, nonaudio-based nonverbal cues are much more varied than audio-based nonverbal cues, making it harder to reach a consensus about their meaning. Someone's posture may be tight and closed while their face is smiling, which leaves the interaction partner wondering about their actual emotional state. Voice cues, on the other hand, have been found to be more accurate at revealing emotions (Kraus, 2017). Second, there is less ambiguity in understanding the feelings embedded in the human voice than in the meanings conveyed by the less standardized, somewhat idiosyncratic expressions of nonaudio-based nonverbal cues in memes, GIFs, or stickers. Third, in terms of the technological requirements for transmitting the two types of cues, audio-based nonverbal cues do not require as much bandwidth or Internet speed, which means there is less chance of distortion and interruption.

Since perceived partner responsiveness had a positive relationship with state affective well-being, synchronicity's and audio-based nonverbal cues' indirect effects on state affective well-being were also significant. However, synchronicity and audio-based nonverbal cues did not directly relate to state or global affective well-being. Put differently, using a technology that is synchronous or that carries audio cues to share positive events does not, by itself, relate to increased positive affect; rather, it facilitates the perception of responsiveness from one's communication partner, which theoretically and empirically leads to an increase in positive affect. This finding highlights the highly situational and relationship-dependent nature of the effects of sharing positive events on affect. Additionally, nonaudio-based nonverbal cues had a negative indirect effect on state affective well-being due to their negative association with responsiveness.
At the same time, nonaudio-based nonverbal cues also had a direct positive effect on both state affective well-being and global affective well-being; they were the only independent variable that demonstrated a positive relationship with global affective well-being. Although biased or distorted online nonverbal cues can make it harder to discern whether one's communication partner is exhibiting responsive behaviors, the satisfaction that comes with seeing (or imagining) one's communication partner's smiling face in the wake of sharing a particular positive event seems to be enough to act as an immediate and relatively enduring booster of one's positive affect. Perceived partner responsiveness thus served as a suppressor of the positive direct effect of nonaudio cues on state affective well-being. In the section below, I also discuss how this suppression is moderated by relationship closeness.

5.3. The conditional effect of synchronicity, audio-based nonverbal cues, and nonaudio-based nonverbal cues on perceived partner responsiveness and affective well-being

Relationship closeness played a significant role in developing the research questions and hypotheses, which asked how individuals' use of the same features of ICTs with different communication partners can lead to different outcomes. This study found that none of the interaction terms between synchronicity, audio-based nonverbal cues, or nonaudio-based nonverbal cues and relationship closeness was significant in predicting responsiveness or state/global affective well-being. Nonetheless, the significant negative indirect effect of nonaudio cues on state affective well-being was moderated by relationship closeness.

Admittedly, the AUM proposes that features like asynchronicity and reduced cues are more conducive to self-disclosure and communication competence at the early stages of relationship development. Still, self-disclosure and communication competence are not equivalent to the outcomes of successful supportive communication, i.e., increased affective well-being. Knowing how to better utilize asynchronicity and reduced cues for self-disclosure and communication competence when a relationship is not yet close is not the same as experiencing an increase in positive affect. The failure to detect any interaction may reflect this gap in logic during the process of hypothesis development. However, this null effect does not suggest that relationship closeness does not matter in mediated interpersonal communication. Instead, the interaction effect may be more salient when the outcome variables concern not perceptions and consequences of other people's responsive behaviors but individuals' own technology use behaviors. For example, Zhang et al. (2021) found that the affordance of visibility control and relationship closeness interacted in influencing the depth of individuals' distress disclosure.

Even though it was not directly hypothesized, the negative indirect effect of nonaudio cues on state affective well-being through decreased partner responsiveness was moderated by relationship closeness. When individuals were communicating with someone they were less familiar with, the negative association between nonaudio cues and state affective well-being through the perception of decreased partner responsiveness was stronger. This could be because individuals know less about the meanings of the nonverbal behaviors of people they are not close to, making it harder to discern the meaning behind those partners' nonverbal cues.
Because of such unfamiliarity, individuals pay a higher price, in terms of a decrease in state affective well-being, when seeing or picturing responses from someone they do not know well. In contrast, when someone is communicating with people they know very well, the negative indirect effect between nonaudio cues and state affective well-being through perceived partner responsiveness becomes nonsignificant. The suppression effect caused by uncertainty in interpreting nonverbal cues goes away, and only the direct positive effects of nonverbal cues on state affective well-being remain.

5.4. The significant indirect effect of persistence on affective well-being through memorability of the positive event

As predicted by the model of capitalization processes, memorability of the positive event was positively related to state and global affective well-being, and using technology that provides a record of communication was indeed related to the memorability of the positive event. Even though previous studies have established that memorability is one of the underlying mechanisms of capitalization's positive effect, this is the first study to confirm that the persistence feature of technologies is directly related to individuals' memorability of the event, which in turn was related to an increase in affective well-being.

Before ICTs became widely adopted communication tools, media such as photographs and films were used as ways to access the past. There is no memory of the past in unmediated forms, as memory relies on "the materiality of the trace, the immediacy of the recording, the visibility of the image" (Kuhn, 1995, p. 13). Özkul and Humphreys (2015) illustrated how media and memory often mutually shape each other, as mobile media are used to remember and relive associated places and experiences, through which a renewed sense of meaning is achieved. Like the act of verbally telling others about a positive event during an in-person interaction, typing out the event and sending it to someone on a screen, or calling someone through a voice/video call, also helps individuals remember the details of the event. The unique feature of persistence further allows documentation of the conversation, especially through texts that are automatically saved on one's devices. Even though recording voice/video calls is not a common practice, individuals can also easily go back to the time stamps and other information related to the interaction on their devices. These contextual cues could also facilitate the memorization of positive experiences and events. Memorability of the positive event may also be influenced by perceived partner responsiveness, in that warm and supportive responses further augment and amplify the sharing experience.

5.5. Limitations and Future Directions

There are several limitations to the current study. First, since both surveys used self-report measures, participants' answers may be subject to recall bias and social desirability bias. Individuals may not accurately remember their partners' responses or how they felt at the time after sharing the event. Obtaining digital trace, behavioral, or physiological data on the processes and outcomes of sharing positive events across multiple forms of communication technologies is extremely difficult and raises significant concerns over privacy protection.
Even though perceptions may not accurately reflect the objective features of technologies or partners' actual behaviors, it is individuals' perceptions, more than the objective materiality of technology or the behaviors of partners, that dictate attitudes and affect. Considering that participants were only asked to report positive events from the past two weeks, the decay in participants' memory should be minimal. Nevertheless, future research could consider using both digital trace data and self-report surveys to examine the same relationships, compare the effects across the different data sources, and observe possible discrepancies.

Second, the mediation analysis is limited by the nature of the cross-sectional study design (O'Laughlin et al., 2018). Even though there is some temporal sequence to the processes tested in this study (i.e., perceptions of responsiveness in the sharing scenario precede global affect), future research should collect longitudinal data to allow testing of alternative longitudinal mediation models (e.g., cross-lagged panel, latent growth curve, and latent difference score models) to replicate the findings.

Third, the study may be underpowered to answer particular research questions, since the sample size for the main survey could be too small for estimating the complex moderated mediation models, which involve interactions between continuous variables with non-normal distributions (Kline, 2015). Even using a relatively liberal N:q ratio of 10:1 (Kline, 2015), where q is the number of free parameters (q = 123 in the moderated mediation models estimated in this study), the analysis would require almost double the sample size I collected to obtain more adequate and trustworthy significance testing results. Nevertheless, the sample size appears to be sufficient for estimating the main effects in the basic moderation and mediation models. The lack of power in SEM studies is a longstanding problem that is difficult to resolve; it requires balancing the need for complex modeling against the substantial time, effort, and resources needed to obtain an adequate sample size, which are frequently unavailable to academic researchers.

Fourth, this study focused on several features of ICTs that are the most relevant to responsiveness and affective well-being. Nevertheless, other ICT features may also play a role in the sharing process. For example, features related to audience transparency, which refers to how individuals come to understand the identity and scope of their audience, and visibility control, which relates to how individuals control the visibility of their user-generated content on a particular platform, can also be related to perceptions of one's interaction partners' responsiveness (DeVito et al., 2017). This study incorporated the role of the audience by controlling for the number of people with whom individuals had shared the joyous event. However, this approach could be refined by incorporating dimensions related not only to the size of the audience but also to its diversity, transparency, and visibility. Future research could build on the findings for the features studied here and incorporate the effects of other features of various technologies and platforms.

Fifth, this study did not focus on the factors that may be related to perceptions of the features of ICTs, even though I emphasized how perceptions are highly individual, situational, and relational.
For example, it is technically possible to achieve synchronous communication using almost any medium as long as there is cell service or an Internet connection, even media that were previously construed as primarily asynchronous, such as text messages or email. However, the fact that communication technologies are equipped with such capabilities does not automatically make individuals perceive and use the technology synchronously. Instead, whether the communication is perceived to be synchronous depends on individuals' use patterns for a specific form of technology and on the particular interaction history between partners via that form of technology. Future research could focus on identifying the mental models individuals employ when forming perceptions of the features of ICTs.

Lastly, this study measured well-being based on positive affect. Other dimensions of well-being, such as social, physical, cognitive, and psychological well-being, could be measured with various instruments to validate how using technology to share positive events may be related to other well-being indicators. Moreover, collecting physiological data on sharing and receiving responses in mediated environments, compared to offline environments, could also help compare and contrast the extent of the positive affect experienced via these two modes of sharing.

5.6. Conclusion

How people share positive events has received relatively little attention in either social psychology or communication. Plenty of literature has examined how individuals disclose distress through technologies such as social media (Andalibi & Forte, 2018; Bazarova et al., 2017; Zhang et al., 2021). However, little research has focused on the specific phenomenon of sharing positive events. A fact of individuals' everyday lives is that people commemorate special occasions and share positive events more often than they experience or disclose distress and stressors. Moreover, a positivity bias prevails in the social media space, as people are more motivated and willing to post positive events than negative ones (Bazarova, 2012; McLaughlin & Vitak, 2012). When discussing the effects of self-disclosure, as a form of support-seeking behavior, on individuals' health and well-being, the notion that "bad is stronger than good" may be outdated (Baumeister et al., 2001). Positive psychology, a new movement in psychology that shifts its focus from relieving human suffering to enabling human flourishing, emphasizes examining and amplifying the factors that facilitate positive emotions, good relationships, engagement, meaning, and accomplishment (Seligman, 2018). The model of thriving likewise outlines that the interpersonal processes through which close relationships promote well-being operate not only by helping individuals cope with adverse life events but also by aiding the pursuit of opportunities for growth and development (Feeney & Collins, 2015). Capitalization research has shown that when individuals share positive events with responsive others, the intrapersonal and interpersonal benefits of such sharing extend above and beyond the positive event itself (S. Gable et al., 2010; S. L. Gable et al., 2004a). However, considering that most capitalization research has not integrated the role of the technology through which individuals may share positive life events, a question that remains understudied is: what happens when we share using technology?
The current study set out to answer this question and found that, regardless of the specific platform, features embedded in technologies shape individuals' perceptions of their partners' responsiveness, which has immediate and short-term implications for their affective well-being. Theoretically, the study informs the understanding of the social-psychological processes that underlie the effects of sharing positive events through technology. The findings can also be used to develop better communication technologies that facilitate perceptions of responsiveness, a core and defining construct in relationship science, in the online environment and thereby help cultivate better social relationships and communities.

APPENDICES

APPENDIX A: Table 25: Study 1 Cognitive interview participant information

# | Race/ethnicity | Sex | Age | Education | First used technology to share
1 | White | Male | 18-24 | In college | Text with mom
2 | White | Female | 18-24 | In college | Text with mom
3 | White | Female | 18-24 | In college | Text with boyfriend
4 | Asian | Female | 18-24 | In college | Voice call on Wechat
5 | White | Male | 18-24 | In college | Snapchat messages with 11-15 friends
6 | White | Male | 18-24 | In college | Instagram posts with 900 people
7 | Asian | Female | 18-24 | In college | Wechat sharing posts with 200 people
8 | White | Male | 18-24 | In college | Text with girlfriend
9 | White | Male | 28 | In grad school | Imessage with friends
10 | White | Female | 26 | In grad school | Facebook post with acquaintances
11 | South Asian | Female | 27 | In grad school | Text message with friends/mostly family
12 | White (Arabic) | Male | 36-40 | Bachelor's degree | Text message on whatsapp/voice call on phone
13 | White | Female | 37 | Masters degree | Text message using phone
14 | White | Female | 48 | Bachelor's degree | Video call using whatsapp or zoom/text on whatsapp
15 | South Asian | Male | 35 | Masters degree | Group chats on Whatsapp
16 | White | Female | 38 | Bachelor's degree | Status update on Facebook
17 | South Asian | Female | 33 | Masters degree | Text message on Whatsapp/facebook
18 | White | Female | 41 | Bachelor's degree | Text on iMessage
19 | White | Female | 42 | Some college no degree | Voice call on phone

APPENDIX B: Study 1 Cognitive interview probes

1. How did you arrive at that answer?
2. Was that easy or hard to answer? I noticed that you hesitated (if they are hesitating). Tell me what you were thinking.
3. What were you thinking when you first answered the question? What is going through your mind?
4. Why did you respond that way?
5. What does that word (a specific term in the question text) mean to you?
6. Can you repeat the question I just asked you in your own words?
7. How do you remember that you perceive your interaction partners' nonverbal cues?
8. Can you walk me through the steps of how you came to that answer?

APPENDIX C: Study 1 key survey questions

After experiencing positive events, most of us would like to spread the joy by sharing the good news with others. We often share them using various technologies, especially when we cannot see the people we want to share the news with in person. Thinking back to the scenarios where you have experienced positive events, please briefly describe one scenario in which you shared a positive event with others using a technology that you used the most frequently. Please briefly describe what was the event, who did you share it with on this technology, and what was the most frequently used technology. Here is an example of a hypothetical write-up: After I got a promotion, I used group text on iMessage to contact XXX.
Please note that this survey is built on your accurate description of your experiences, so it is crucial that you provide an accurate and complete answer to the best of your ability. ________________________________________________________ Please provide more detail regarding the sharing scenario you mentioned (${q://QID159/ChoiceTextEntryValue}${q://QID141/ChoiceTextEntryValue}) by filling in the blanks below. I shared the positive event by using the (1)________ (feature, e.g., text, voice/video call, social media post) of (2)________ (app/program, e.g., phone/text app, FaceTime, Snapchat) on my (3)________ (electronic devices, e.g., desktop, laptop, tablet, cellphone) to contact (4)______ (number of, e.g., 1, 2, or more) person/people who is/are my (5)______ (nature of relationship, e.g., mom, dad, boyfriend, friends, or a mix of friends and acquaintances). If you shared with multiple people (with different nature of relationships) roughly at the same time on the same technology, please add them up. For example, if you messaged eight people (including your parents, friends, and siblings) through texting simultaneously, please put "8" as the people and parents, friends, and siblings as the nature of relationships. If you post something on social media, please put the estimated people who can see your post below. If you don't know, please put "999." (1) Please type the feature (e.g., text, voice/video calls, status update) below (1) (2) Please type the app/program/software (e.g., phone/text app, FaceTime, Snapchat) below (3) Please type the name of the electronic devices (e.g., desktop, laptop, tablet, cellphone) below (4) Please type the specific people or provide your best estimate (e.g., 1, 2, or more) below (5) Please type your relationship with the person/people you shared below You identified sharing using “${Q4/ChoiceTextEntryValue/7}” in the scenario. How would you categorize ${Q4/ChoiceTextEntryValue/7} into one of the following categories? One-to-one text messages (1) One-to-one voice calls (2) One-to-one video calls (3) One-to-one emails (4) Group text messages (5) Group voice calls (6) Group video calls (7) 111 Group emails (8) Posting on social media, such as Facebook or Instagram (9) Posting to an anonymous online community (10) Posting to a personal blog or website (11) None of the above, please specify below (12) ________________________________________________ When posting on social media, how would you categorize it into one of the following categories? A private audience is when your post cannot be accessible by anyone on the Internet. Posting to a private audience on social media that are permanent unless you delete it (1) Posting to a private audience on social media that automatically disappears (2) Posting to a public audience on social media that are permanent unless you delete it (3) Posting to a public audience on social media that automatically disappears (4) None of the above, please specify below (5) For "${Q4/ChoiceTextEntryValue/9}", how would you categorize it into one of the following categories? Desktop (1) Laptop (2) Tablet (3) Cellphone (4) Gaming consoles (5) None of the above, please specify below (6) You identified sharing with "${Q4/ChoiceTextEntryValue/10}" person/people. How would you categorize it into one of the following categories? 
1 (1) 2-5 (2) 6-15 (3) 16-30 (4) 31-50 (5) 51-100 (6) 101-150 (7) 151-200 (8) 201-500 (9) 500+ (10) You identified sharing with your ${Q4/ChoiceTextEntryValue/11}, how would you categorize it into one of the following categories (select all that apply)? immediate family member (1) significant other (2) close friend (3) friend (4) extended family member (5) acquaintance (6) 112 stranger (7) None of the above, please specify below (8) ________________________________________________ Please select the picture below that best describes your relationship with your ${Q4/ChoiceTextEntryValue/11} with whom you have shared the event. A (1) B (2) C (3) D (4) E (5) F (6) G (7) Please indicate your level of agreement with the following two items regarding your relationship with your ${Q4/ChoiceTextEntryValue/11} with whom you have shared the event. (1 = strongly disagree, 7 = strongly agree) I feel very close to the person I have shared the event with (1) The person I have shared the event and I are very close to each other (2) You identified sharing with ${Q4/ChoiceTextEntryValue/10} person/people, including your ${Q4/ChoiceTextEntryValue/11} in the sharing scenario. Please select the picture below that best describes your relationship with them as a group. A (1) B (2) C (3) D (4) E (5) F (6) G (7) 113 You identified sharing with ${Q4/ChoiceTextEntryValue/10} person/people, including your ${Q4/ChoiceTextEntryValue/11} in the sharing scenario. Please indicate your level of agreement with the following two items regarding your relationship with them as a group. (1 = strongly disagree, 7 = strongly agree) I feel very close to the group of people I have shared the event with (1) The group of people I have shared the event and I are very close to each other (2) Thinking back to the scenario where you have shared this positive event (${q://QID159/ChoiceTextEntryValue}${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below. When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11} using ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to… (1 = strongly disagree, 6 = strongly agree) Give and receive timely feedback (1) Engage in real-time back-and-forth interaction (2) Engage in instant communication (3) Quickly send messages or comments back and forth (4) Promptly receive and respond to my communication partner(s)'s messages or comments (5) Immediately express my reactions to my communication partner(s) (6) Reply as soon as I receive a message or a comment from my communication partner(s) (7) Interact with my communication partner(s) without delay (8) Expect the other person(s) to respond quickly (9) Learn right away what my communication partner(s) thinks of my information shared (10) Thinking back to the scenario where you have shared this positive event (${Q159/ChoiceTextEntryValue}${Q141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below. 
When communicating about this event with ${Q4/ChoiceTextEntryValue/10} person/people, including my ${Q4/ChoiceTextEntryValue/11} using ${Q4/ChoiceTextEntryValue/7} on ${Q4/ChoiceTextEntryValue/8}, it allowed me to… (1 = strongly disagree, 6 = strongly agree) Keep a record of communication that I can go back and look at (1) 114 Keep a record of communication that can last long after the initial communication (2) Retrieve past communication in this space (3) Save the communication long after the interaction is finished (4) Find information about prior conversations with my communication partner(s) (5) Have my conversations with my communication partner(s) stay available after the conversation ends (6) Store the conversation (7) Thinking back to the scenario where you have shared this positive event (${q://QID159/ChoiceTextEntryValue}${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below. When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11} using ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to picture, or imagine… (1 = strongly disagree, 6 = strongly agree) Please take into account of all the information available during mediated communication. For example, if you are texting, consider emoijs, memes, or reactions you received. Please answer based on how you think they would have behaved if they were interacting with you face to face. If you are finding it hard to picture or imagine certain behaviors, please choose "strongly disagree". If my communication partner(s) is smiling (1) If my communication partner(s) is using various facial expressions (2) If my communication partner(s) is gesturing using their hands or fingers (3) If my communication partner(s) is yawning (4) If my communication partner(s) is shaking his/her head side-to-side (5) If my communication partner(s)'s arms are folded (6) If my communication partner(s)'s eyes rolled (7) If my communication partner(s) has an open posture (i.e., arms and legs are not crossed) (8) If my communication partner(s)'s face is turned away from me (9) If my communication partner(s) is looking directly at me (10) If my communication partner(s) is nodding his/her head (11) If my communication partner(s) is engaged and focused (12) If my communication partner(s) is at ease in the interaction (13) The direction of my communication partner(s)'s eye gaze (14) My communication partner(s)'s posture (15) My communication partner(s)'s gestures (16) My communication partner(s)'s facial expressions (17) My communication partner(s)'s body movements or body language (18) Thinking back to the scenario where you have shared this positive event (${q://QID159/ChoiceTextEntryValue}${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below. When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11} using 115 ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to hear or seemingly hear…(1 = strongly disagree, 6 = strongly agree) Please answer based on how you think they would have behaved if they were interacting with you face to face. 
The tone, pace, and volume in my communication partner(s)' voice (1) If my communication partner(s)'s voice volume is going up or down (2) If my communication partner(s)'s speech is fast or slow (3) If my communication partner(s)'s voice pitch rises and falls (4) If my communication partner(s)'s voice sounds happy or sad (5) Please select "Strongly agree" for this statement (6) If my communication partner(s)'s voice is flat or dull (7) If my communication partner(s)'s voice is expressive with emotions (8) If my communication partner(s)'s vocal cues reflect attentiveness (9) If my communication partner(s)'s voice suggests he/she is interested in the conversation (10) Over the past week, how often have you communicated with others through the following channels? Face-to-face (1) Phone/voice call (2) Video call (3) Text messaging (4) Instant messaging (5) Email (6) Social media (e.g., posting, commenting, liking) (7) Video games (8) In which country do you reside in right now? ______________ What is your year of birth? ______________ What is the highest level of school you have completed or the highest degree you have received? Less than high school degree (1) High school graduate (high school diploma or equivalent including GED) (2) Some college but no degree (3) Associate degree in college (2-year) (4) Bachelor's degree in college (4-year) (5) Master's degree (6) Doctoral degree (7) Professional degree (JD, MD) (8) Are you Spanish, Hispanic, or Latino or none of these? Yes (1) None of these (2) Choose one or more races that you consider yourself to be: 116 White (1) Black or African American (2) American Indian or Alaska Native (3) Asian (4) Native Hawaiian or Pacific Islander (5) Other (6) _________________________________ Are you now married, widowed, divorced, separated or never married? Married (1) Widowed (2) Divorced (3) Separated (4) Never Married (5) What is your assigned sex at birth? Male (1) Female (2) 117 APPENDIX D: Study 2 Main survey key questions Thinking back to a positive event that you may have experienced during the past two weeks, did you share it with others using a technology such as voice call, video call, text message, email, or social media? Yes (1) No (2) What was the technology that you first used to share your positive event(s) during the past two weeks? One-to-one text messages (7) One-to-one voice calls (8) One-to-one video calls (9) One-to-one emails (10) Group text message (1) Group voice call (2) Group video call (4) Group email (5) Posting on social media, such as Facebook or Instagram (6) I did not share positive events using technology for the past two weeks (11) Please briefly describe what/when was the event, and who did you share it with using ${Q10/ChoiceGroup/SelectedChoices}. Please only describe one event that happened during the past two weeks. Here is an example of a hypothetical write-up: Last Friday, I used ${Q10/ChoiceGroup/SelectedChoices} to contact XXX after I got a promotion. This survey is built on your accurate description of your experiences and your submission may be rejected if you fail to do so. Please provide an accurate and complete answer to the best of your ability. ________________________________________________________ Please complete the statements regarding the significance and nature of the event you just described below (${q://QID141/ChoiceTextEntryValue}). 
(7-point semantic differential scale) I think this event is ______ Extremely important – Not important at all Extremely desirable – not desirable at all Extremely positive – not positive at all How many days ago did the positive event take place? ____________ Thinking back to the positive event you have shared over the past two weeks (${q://QID141/ChoiceTextEntryValue})... Please rate your level of agreement regarding your memory of the event. I have wonderful memories of this event. (1) I remember many positive things about this event. (2) 118 I will not forget my experience of this event. (3) This details of this event is easy to recall. (4) Please provide more detail regarding the sharing scenario you mentioned by filling in the blanks below (${q://QID141/ChoiceTextEntryValue}). I shared the positive event by using the (1)________ (feature, e.g., text, voice/video call, social media post) of (2)________ (app/program, e.g., phone/text app, FaceTime, Snapchat) on my (3)________ (electronic devices, e.g., desktop, laptop, tablet, cellphone) to contact (4)______ (number of, e.g., 1, 2, or more) person/people who is/are my (5)______ (nature of relationship, e.g., mom, dad, boyfriend, friends, or a mix of friends and acquaintances). If you shared with multiple people (with different nature of relationships) roughly at the same time on the same technology, please add them up. For example, if you messaged eight people (including your parents, friends, and siblings) through texting simultaneously, please put "8" as the people and parents, friends, and siblings as the nature of relationships. If you post something on social media, please put the estimated people who can see your post below. If you don't know, please put "999." You identified sharing using “${q://QID54/ChoiceTextEntryValue/7}” in the scenario. How would you categorize ${q://QID54/ChoiceTextEntryValue/7} into one of the following categories? One-to-one text messages (1) One-to-one voice calls (2) One-to-one video calls (3) One-to-one emails (4) Group text messages (5) Group voice calls (6) Group video calls (7) Group emails (8) Posting on social media, such as Facebook or Instagram (9) Posting to an anonymous online community (10) Posting to a personal blog or website (11) None of the above, please specify below (12) ________________________________________________ When posting on social media, how would you categorize it into one of the following categories? A private audience is when your post cannot be accessible by anyone on the Internet. A private audience is when your post cannot be accessible by anyone on the Internet. Posting to a private audience on social media that are permanent unless you delete it (1) Posting to a private audience on social media that automatically disappears (2) Posting to a public audience on social media that are permanent unless you delete it (3) Posting to a public audience on social media that automatically disappears (4) None of the above, please specify below (5) For "${q://QID54/ChoiceTextEntryValue/9}", how would you categorize it into one of the following categories? 119 Desktop (1) Laptop (2) Tablet (3) Cellphone (4) Gaming consoles (5) None of the above, please specify below (6) You identified sharing with "${q://QID54/ChoiceTextEntryValue/10}" person/people. How would you categorize it into one of the following categories? 
You identified sharing with "${q://QID54/ChoiceTextEntryValue/10}" person/people. How would you categorize it into one of the following categories?
1 (1)
2-5 (2)
6-15 (3)
16-30 (4)
31-50 (5)
51-100 (6)
101-150 (7)
151-200 (8)
201-500 (9)
500+ (10)

You identified sharing with your ${q://QID54/ChoiceTextEntryValue/11}. How would you categorize it into one of the following categories (select all that apply)?
Immediate family member (1)
Significant other (2)
Close friend (3)
Friend (4)
Extended family member (5)
Acquaintance (6)
Stranger (7)
None of the above, please specify below (8) ________________________________________________

Please select the picture below that best describes your relationship with your ${q://QID54/ChoiceTextEntryValue/11} before you have shared the event with him/her.
A (1)
B (2)
C (3)
D (4)
E (5)
F (6)
G (7)

Please indicate your level of agreement with the following two items regarding your relationship with your ${q://QID54/ChoiceTextEntryValue/11} with whom you have shared the event. (1 = strongly disagree, 7 = strongly agree)
I feel very close to the person I have shared the event with (1)
The person I have shared the event with and I are very close to each other (2)

You identified sharing with ${q://QID54/ChoiceTextEntryValue/10} person/people, including your ${q://QID54/ChoiceTextEntryValue/11}, in the sharing scenario. Please select the picture below that best describes your relationship with them as a group.
A (1)
B (2)
C (3)
D (4)
E (5)
F (6)
G (7)

You identified sharing with ${q://QID54/ChoiceTextEntryValue/10} person/people, including your ${q://QID54/ChoiceTextEntryValue/11}, in the sharing scenario. Please indicate your level of agreement with the following two items regarding your relationship with them as a group. (1 = strongly disagree, 7 = strongly agree)
I feel very close to the group of people I have shared the event with (1)
The group of people I have shared the event with and I are very close to each other (2)
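The relationship-closeness block combines a pictorial item (the overlapping-circles choices A-G) with two 7-point agreement items. Below is a minimal sketch, under the assumption that the picture choices map onto 1-7 and that the three responses are averaged into a single closeness index; the column names and data are hypothetical, and the composite actually used in the dissertation may differ.

```python
import pandas as pd

# Hypothetical responses: the overlapping-circles picture choice (A-G) and the
# two 7-point closeness items for the partner with whom the event was shared.
df = pd.DataFrame({
    "ios_picture": ["A", "D", "G", "E"],
    "close_item1": [2, 5, 7, 6],
    "close_item2": [1, 5, 7, 5],
})

# Map picture A (least overlap) through G (most overlap) onto 1-7.
ios_map = {letter: score for score, letter in enumerate("ABCDEFG", start=1)}
df["ios_score"] = df["ios_picture"].map(ios_map)

# One plausible composite: the mean of the picture score and the two items.
df["closeness"] = df[["ios_score", "close_item1", "close_item2"]].mean(axis=1)
print(df[["ios_score", "closeness"]])
```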
To answer the questions, please access your communication with ${q://QID54/ChoiceTextEntryValue/11} on your ${q://QID54/ChoiceTextEntryValue/9}, and browse through the related record of the ${q://QID88/ChoiceGroup/SelectedChoices} you used to share this positive event (${q://QID141/ChoiceTextEntryValue}). For example, if you used text messages, emails, or social media posts, please read through them. If you used voice/video calls, please locate the relevant information about the call, such as its time stamp and duration, on your device.

Thinking back to the scenario where you have shared this positive event (${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below.
When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11}, using ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to… (1 = strongly disagree, 6 = strongly agree)
Give and receive timely feedback (1)
Engage in real-time back-and-forth interaction (2)
Engage in instant communication (3)
Quickly send messages or comments back and forth (4)
Promptly receive and respond to my communication partner(s)'s messages or comments (5)
Immediately express my reactions to my communication partner(s) (6)
Reply as soon as I receive a message or a comment from my communication partner(s) (7)
Interact with my communication partner(s) without delay (8)
Expect the other person(s) to respond quickly (9)
Learn right away what my communication partner(s) thinks of the information I shared (10)

Thinking back to the scenario where you have shared this positive event (${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below.
When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11}, using ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to… (1 = strongly disagree, 6 = strongly agree)
Keep a record of communication that I can go back and look at (1)
Keep a record of communication that can last long after the initial communication (2)
Retrieve past communication in this space (3)
Save the communication long after the interaction is finished (4)
Find information about prior conversations with my communication partner(s) (5)
Have my conversations with my communication partner(s) stay available after the conversation ends (6)
Store the conversation (7)
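Each of the perceived-feature blocks above (synchronicity and persistence) is a set of 1-6 agreement items, so a per-respondent score is naturally the item mean, typically reported alongside an internal-consistency estimate. The sketch below illustrates that computation with simulated data; the column names are hypothetical, and the alpha values obtained from random responses are not meaningful.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of items keyed in the same direction."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated 1-6 agreement ratings: 10 synchronicity items, 7 persistence items.
# (Random data only keeps the sketch runnable; the resulting alpha is not meaningful.)
rng = np.random.default_rng(0)
sync = pd.DataFrame(rng.integers(1, 7, size=(50, 10)),
                    columns=[f"sync_{i}" for i in range(1, 11)])
pers = pd.DataFrame(rng.integers(1, 7, size=(50, 7)),
                    columns=[f"pers_{i}" for i in range(1, 8)])

# Per-respondent perceived-feature scores as item means.
sync_score = sync.mean(axis=1)
pers_score = pers.mean(axis=1)

print(f"alpha(synchronicity) = {cronbach_alpha(sync):.2f}")
print(f"alpha(persistence)   = {cronbach_alpha(pers):.2f}")
```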
Thinking back to the scenario where you have shared this positive event (${q://QID141/ChoiceTextEntryValue}), please answer the following questions about whether you are able to have a mental image of your communication partners' nonverbal behaviors during communication.

Thinking back to the scenario where you have shared this positive event (${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below.
When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11}, using ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to picture, or imagine… (1 = strongly disagree, 6 = strongly agree)
Please take into account all the information available during mediated communication. For example, if you are texting, consider the emojis, memes, or reactions you received. Please answer based on how you think they would have behaved if they were interacting with you face to face. If you find it hard to picture or imagine certain behaviors, please choose "strongly disagree."
If my communication partner(s) is smiling (1)
If my communication partner(s) is using various facial expressions (2)
If my communication partner(s) is yawning (3)
If my communication partner(s) is shaking his/her head side-to-side (4)
If my communication partner(s)'s arms are folded (5)
If my communication partner(s)'s eyes rolled (6)
If my communication partner(s)'s face is turned away from me (7)
If my communication partner(s) is looking directly at me (8)
If my communication partner(s) is nodding his/her head (9)
If my communication partner(s) is engaged and focused (10)
If my communication partner(s) is at ease in the interaction (11)
The direction of my communication partner(s)'s eye gaze (12)
My communication partner(s)'s posture (13)
My communication partner(s)'s gestures (14)
My communication partner(s)'s facial expressions (15)
My communication partner(s)'s body movements or body language (16)

Thinking back to the scenario where you have shared this positive event (${q://QID141/ChoiceTextEntryValue}), please rate your level of agreement with the statements below.
When communicating about this event with ${q://QID54/ChoiceTextEntryValue/10} person/people, including my ${q://QID54/ChoiceTextEntryValue/11}, using ${q://QID54/ChoiceTextEntryValue/7} on ${q://QID54/ChoiceTextEntryValue/8}, it allowed me to hear or seemingly hear… (1 = strongly disagree, 6 = strongly agree)
Please answer based on how you think they would have behaved if they were interacting with you face to face.
The tone, pace, and volume in my communication partner(s)' voice (1)
If my communication partner(s)'s voice volume is going up or down (2)
If my communication partner(s)'s speech is fast or slow (3)
If my communication partner(s)'s voice pitch rises and falls (4)
If my communication partner(s)'s voice sounds happy or sad (5)
Please select "Strongly agree" for this statement (6)
If my communication partner(s)'s voice is flat or dull (7)
If my communication partner(s)'s voice is expressive with emotions (8)
If my communication partner(s)'s vocal cues reflect attentiveness (9)
If my communication partner(s)'s voice suggests he/she is interested in the conversation (10)
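The audio-based cue block embeds an instructed-response item (item 6, "Please select 'Strongly agree' for this statement"), which would typically be excluded from the scale score and used to flag inattentive responses. The sketch below illustrates that screening step, assuming 6 corresponds to "strongly agree" on the 1-6 scale; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical 1-6 ratings for the ten audio-based cue items; "audio_6" is the
# embedded attention check ("Please select 'Strongly agree' for this statement").
audio = pd.DataFrame({
    "audio_1": [5, 4, 2], "audio_2": [5, 4, 3], "audio_3": [6, 3, 2],
    "audio_4": [5, 4, 1], "audio_5": [6, 5, 2], "audio_6": [6, 6, 3],
    "audio_7": [4, 3, 2], "audio_8": [5, 4, 1], "audio_9": [6, 5, 2],
    "audio_10": [5, 4, 3],
})

# Keep only respondents who selected "strongly agree" (6) on the check item,
# then score the construct from the nine substantive items.
passed = audio["audio_6"] == 6
substantive_items = [c for c in audio.columns if c != "audio_6"]
audio_cue_score = audio.loc[passed, substantive_items].mean(axis=1)
print(audio_cue_score)
```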
To answer the questions below regarding your communication partner(s)' response, please access your communication with ${q://QID54/ChoiceTextEntryValue/11} on your ${q://QID54/ChoiceTextEntryValue/9}, and browse through the related record of the ${q://QID88/ChoiceGroup/SelectedChoices} you used to share this positive event (${q://QID141/ChoiceTextEntryValue}). Please take a moment to read through or recall your communication with ${q://QID54/ChoiceTextEntryValue/11} about this event and rate the statements below.

When I tell ${q://QID54/ChoiceTextEntryValue/11} about the event, I think my ${q://QID54/ChoiceTextEntryValue/11}'s response(s) is/are ________ (5-point semantic differential scale)
Positive – negative
Helpful – unhelpful
Supportive – unsupportive
Sensitive – insensitive
Generous – selfish
Reassuring – upsetting
Comforting – distressing
Encouraging – discouraging
Compassionate – heartless
Considerate – inconsiderate
Understanding – misunderstanding

Please consider to what extent your communication partner(s) did the following things in response to the positive event you shared and rate your agreement with the statements below. (1 = strongly disagree, 6 = strongly agree)
My communication partner(s) reacted to my good fortune enthusiastically (1)
I got the sense that my communication partner(s) is even more happy and excited than I am (2)
My communication partner(s) asked a lot of questions and showed genuine concern about the good event (3)
My communication partner(s) tried not to make a big deal out of it but was happy for me (4)
My communication partner(s) was silently supportive of the good things that occur to me (5)
My communication partner(s) said little, but I know he/she is happy for me (6)
My communication partner(s) found a problem with it (7)
My communication partner(s) reminded me that most good things have their bad aspects as well (8)
My communication partner(s) pointed out the potential downsides of the good event (9)
I got the impression that my communication partner(s) doesn't care much (10)
My communication partner(s) didn't pay much attention to me (11)
My communication partner(s) seemed disinterested (12)

Please use the scale from 0 to 6, where a 0 means you did not experience this feeling at all and a 6 means the feeling was very strong, to indicate how you felt after receiving your communication partners' responses regarding this positive event (${q://QID141/ChoiceTextEntryValue}).
Happy (1)
Joyful (2)
Pleased (3)
Enjoyment/Fun (4)
Delighted (5)

Please use the slider scale from 0 to 6, where a 0 means you did not experience this feeling at all and a 6 means the feeling was very strong, to indicate how you felt after receiving your communication partners' responses regarding this positive event (${Q12/ChoiceTextEntryValue}).
Depressed/blue (1)
Unhappy (2)
Frustrated (3)
Angry/hostile (4)
Worried/anxious (5)

Indicate the extent to which you have felt this way during the past two weeks. (1 = very slightly or not at all, 5 = extremely)
__________ 1. Interested
__________ 2. Distressed
__________ 3. Excited
__________ 4. Upset
__________ 5. Strong
__________ 6. Guilty
__________ 7. Scared
__________ 8. Hostile
__________ 9. Enthusiastic
__________ 10. Proud
__________ 11. Irritable
__________ 12. Alert
__________ 13. Ashamed
__________ 14. Inspired
__________ 15. Nervous
__________ 16. Determined
__________ 17. Attentive
__________ 18. Jittery
__________ 19. Active
__________ 20. Afraid

Please indicate your agreement with the five statements below. Please be open and honest in your responding. (1 = strongly agree, 6 = strongly disagree)
In most ways, my life is close to my ideal. (1)
The conditions of my life are excellent. (2)
I am satisfied with my life. (3)
So far, I have gotten the important things I want in life. (4)
If I could live my life over, I would change almost nothing. (5)
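The 20 mood adjectives rated above for the past two weeks are the PANAS items, which are conventionally scored as separate 10-item positive-affect and negative-affect subscales. The sketch below applies that standard item assignment to hypothetical data; whether subscales are summed or averaged is a scoring choice, and the exact procedure used in the dissertation may differ.

```python
import pandas as pd

# Hypothetical 1-5 ratings for the 20 mood adjectives above, keyed by adjective.
panas = pd.DataFrame({
    "interested": [4, 2], "distressed": [1, 3], "excited": [4, 2], "upset": [1, 4],
    "strong": [3, 2], "guilty": [1, 2], "scared": [1, 3], "hostile": [1, 2],
    "enthusiastic": [4, 2], "proud": [4, 1], "irritable": [2, 4], "alert": [3, 3],
    "ashamed": [1, 2], "inspired": [4, 1], "nervous": [2, 4], "determined": [4, 2],
    "attentive": [3, 3], "jittery": [1, 3], "active": [4, 2], "afraid": [1, 3],
})

# Standard PANAS subscales: 10 positive-affect and 10 negative-affect adjectives.
positive = ["interested", "excited", "strong", "enthusiastic", "proud",
            "alert", "inspired", "determined", "attentive", "active"]
negative = ["distressed", "upset", "guilty", "scared", "hostile",
            "irritable", "ashamed", "nervous", "jittery", "afraid"]

pa = panas[positive].mean(axis=1)  # or .sum(axis=1), depending on convention
na = panas[negative].mean(axis=1)
print(pd.DataFrame({"positive_affect": pa, "negative_affect": na}))
```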
The following questions concern how you generally feel in important close relationships in your life. Think about your past and present relationships with people who have been especially important to you, such as family members, romantic partners, and close friends. Please use the scale below to indicate how you generally feel in these relationships. (1 = strongly disagree, 6 = strongly agree)
I want to get close to people, but I keep pulling back (1)
I am nervous when people get too close to me (2)
I try to avoid getting too close to people (3)
I usually discuss my problems and concerns with others (4)
It helps to turn to other people in times of need (5)
I turn to others for many things, including comfort and reassurance (6)
I worry that people won't care about me as much as I care about them (7)
My desire to be very close sometimes scares people away (8)
I need a lot of reassurance that I am loved by others (9)
I do not often worry about being abandoned (10)
I find that people don't want to get as close as I would like (11)
I get frustrated if people are not available when I need them (12)

To what extent do you agree or disagree with the following statements? (1 = strongly disagree, 6 = strongly agree)
I take a positive attitude toward myself (1)
I certainly feel useless at times (2)
I feel I do not have much to be proud of (3)
I feel that I am a person of worth, at least on an equal plane with others (4)

Over the past week, how often have you communicated with others through the following channels?
Face-to-face (1)
Phone/voice call (2)
Video call (3)
Text messaging (4)
Instant messaging (5)
Email (6)
Social media (e.g., posting, commenting, liking) (7)
Video games (8)

In which country do you reside right now? ____________

What is your year of birth? ____________

What is the highest level of school you have completed or the highest degree you have received?
Less than high school degree (1)
High school graduate (high school diploma or equivalent including GED) (2)
Some college but no degree (3)
Associate degree in college (2-year) (4)
Bachelor's degree in college (4-year) (5)
Master's degree (6)
Doctoral degree (7)
Professional degree (JD, MD) (8)

Are you Spanish, Hispanic, or Latino or none of these?
Yes (1)
None of these (2)

Choose one or more races that you consider yourself to be:
White (1)
Black or African American (2)
American Indian or Alaska Native (3)
Asian (4)
Native Hawaiian or Pacific Islander (5)
Other (6) ________________________________________________

Are you now married, widowed, divorced, separated or never married?
Married (1)
Widowed (2)
Divorced (3)
Separated (4)
Never Married (5)

What is your assigned sex at birth?
Male (1)
Female (2)
https://www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018/ Antheunis, M. L., Valkenburg, P. M., & Peter, J. (2007). Computer-mediated communication and interpersonal attraction: An experimental test of two explanatory hypotheses. Cyberpsychology & Behavior: The Impact of the Internet, Multimedia and Virtual Reality on Behavior and Society, 10(6), 831–835. https://doi.org/10.1089/cpb.2007.9945 Aron, A., Aron, E. N., & Smollan, D. (1992). Inclusion of Other in the Self Scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63(4), 596–612. http://dx.doi.org.proxy1.cl.msu.edu/10.1037/0022-3514.63.4.596 Bailey, S. K. T., Schroeder, B. L., Whitmer, D. E., & Sims, V. K. (2016). Perceptions of mobile instant messaging apps are comparable to texting for young adults in the united states. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 1235–1239. https://doi.org/10.1177/1541931213601288 Barak, A., Boniel-Nissim, M., & Suler, J. (2008). Fostering empowerment in online support groups. Computers in Human Behavior, 24(5), 1867–1883. https://doi.org/10.1016/j.chb.2008.02.004 129 Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370. https://doi.org/10.1037/1089- 2680.5.4.323 Bayer, J. B., Triệu, P., & Ellison, N. B. (2020). Social media elements, ecologies, and effects. Annual Review of Psychology, 71(1), null. https://doi.org/10.1146/annurev-psych- 010419-050944 Bayliss, A. P., Bartlett, J., Naughtin, C. K., & Kritikos, A. (2011). A direct link between gaze perception and social attention. Journal of Experimental Psychology: Human Perception and Performance, 37(3), 634–644. https://doi.org/10.1037/a0020559 Bazarova, N. N. (2012). Public intimacy: Disclosure interpretation and social judgments on Facebook. Journal of Communication, 62(5), 815–832. https://doi.org/10.1111/j.1460- 2466.2012.01664.x Bazarova, N. N., Choi, Y. H., Schwanda Sosik, V., Cosley, D., & Whitlock, J. (2015). Social sharing of emotions on Facebook: Channel differences, satisfaction, and replies. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing - CSCW ’15, 154–164. https://doi.org/10.1145/2675133.2675297 Bazarova, N. N., Choi, Y. H., Whitlock, J., Cosley, D., & Sosik, V. (2017). Psychological distress and emotional expression on facebook. Cyberpsychology, Behavior, and Social Networking, 20(3), 157–163. https://doi.org/10.1089/cyber.2016.0335 Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311. https://doi.org/10.1093/poq/nfm006 Bender, J. L., Jimenez-Marroquin, M.-C., & Jadad, A. R. (2011). Seeking support on Facebook: A content analysis of breast cancer groups. Journal of Medical Internet Research, 13(1), e16. https://doi.org/10.2196/jmir.1560 Berry, D. S., & Hansen, J. S. (1996). Positive affect, negative affect, and social interaction. Journal of Personality and Social Psychology, 71(4), 796–809. https://doi.org/10.1037/0022-3514.71.4.796 Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health, 6. https://doi.org/10.3389/fpubh.2018.00149 Bodie, G. D., & Jones, S. M. (2012). 
The nature of supportive listening II: The role of verbal person centeredness and nonverbal immediacy. Western Journal of Communication, 76(3), 250–269. https://doi.org/10.1080/10570314.2011.651255 130 boyd, D. M. (2011). Social network sites and networked publics: Affordances, dynamics and implications. In Networked Self & Z. Papacharissi (Eds.), A networked self: Identity, community and culture on social network sites (pp. 39–58). Routledge. http://qut.eblib.com.au.ezp01.library.qut.edu.au/patron/Read.aspx?p=574608&pg=48 boyd, D. M., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210–230. https://doi.org/10.1111/j.1083-6101.2007.00393.x Braiker, H. B., Kelley, H. H., Burgess, R. L., & Huston, T. L. (1979). Conflict in the development of close relationships. In Social exchange in developing relationships. Academic Press. Braithwaite, D., Waldron, V., & Finn, J. (1999). Communication of social support in computer- mediated groups for people with disabilities. Papers in Communication Studies. https://digitalcommons.unl.edu/commstudiespapers/104 Brown, T. A. (2015). Confirmatory factor analysis for applied research (Second Edition). Guilford Publications. Brown, T. A., & Moore, M. T. (2012). Confirmatory factor analysis. In Handbook of structural equation modeling (pp. 361–379). The Guilford Press. Bucy, E. P. (2004a). Interactivity in society: Locating an elusive concept. The Information Society, 20(5), 373–383. https://doi.org/10.1080/01972240490508063 Bucy, E. P. (2004b). The debate. The Information Society, 20(5), 371–371. https://doi.org/10.1080/01972240490508045 Bucy, E. P., & Tao, C.-C. (2007). The mediated moderation model of interactivity. Media Psychology, 9(3), 647–672. https://doi.org/10.1080/15213260701283269 Burleson, B., & Holmstrom, A. (2008). Comforting communication. In D. Wolfgang (Ed.), The International Encyclopedia of Communication. Blackwell Publishing. Burleson, B. R., Albrecht, T. L., Goldsmith, D. J., & Sarason, I. G. (1994). The communication of social support. In B. R. Burleson, T. L. Albrecht, & I. G. Sarason (Eds.), Communication of social support: Messages, interactions, relationships, and community (pp. xi–xxx). Sage. Cacioppo, J. T., Gardner, W. L., & Berntson, G. G. (1999). The affect system has parallel and integrative processing components: Form follows function. Journal of Personality and Social Psychology, 76(5), 839–855. https://doi.org/10.1037/0022-3514.76.5.839 131 Campbell, A., Converse, P. E., & Rodgers, W. L. (1976). The quality of American life: Perceptions, evaluations, and satisfactions. Russell Sage Foundation; JSTOR. http://www.jstor.org/stable/10.7758/9781610441032 Canary, D. J., & Lakey, S. G. (2006). Managing conflict in a competent manner: A mindful look at events that matter. In J. G. Oetzel & S. Ting-Toomey (Eds.), The Sage handbook of conflict communication (pp. 185–210). Sage. Canary, D. J., & Spitzberg, B. H. (1987). Appropriateness and effectiveness perceptions of conflict strategies. Human Communication Research, 14(1), 93–120. https://doi.org/10.1111/j.1468-2958.1987.tb00123.x Canary, D. J., & Spitzberg, B. H. (1989). A model of the perceived competence of conflict strategies. Human Communication Research, 15(4), 630–649. https://doi.org/10.1111/j.1468-2958.1989.tb00202.x Carlson, J., & Zmud, R. (1999). Channel expansion theory and the experiential nature of media richness perceptions. The Academy of Management Journal, 42, 153–170. 
https://doi.org/10.2307/257090 Casler, K., Bickel, L., & Hackett, E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29(6), 2156–2160. https://doi.org/10.1016/j.chb.2013.05.009 Choi, M., & Toma, C. L. (2014). Social sharing through interpersonal media: Patterns and effects on emotional well-being. Computers in Human Behavior, 36, 530–541. https://doi.org/10.1016/j.chb.2014.04.026 Christopher Blair, Francesca Capozzi, & Jelena Ristic. (2017). Where is your attention? Assessing individual instances of covert attentional orienting in response to gaze and arrow cues. Vision, 1(3), 19. https://doi.org/10.3390/vision1030019 Chu, K.-M., & Yuan, B. J. C. (2013). The effects of perceived interactivity on e-trust and e- consumer behaviors: The application of fuzzy linguistic scale. 14(1), 13. Chua, T. H. H., & Chang, L. (2016). Follow me and like my beautiful selfies: Singapore teenage girls’ engagement in self-presentation and peer comparison on social media. Computers in Human Behavior, 55, 190–197. https://doi.org/10.1016/j.chb.2015.09.011 Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In Perspectives on socially shared cognition (pp. 127–149). American Psychological Association. https://doi.org/10.1037/10096-006 132 Clark, M. S., & Mills, J. (1979). Interpersonal attraction in exchange and communal relationships. Journal of Personality and Social Psychology, 37(1), 12–24. https://doi.org/10.1037/0022-3514.37.1.12 Cohen, S. (2004). Social relationships and health. The American Psychologist, 59(8), 676–684. https://doi.org/10.1037/0003-066X.59.8.676 Cohen, S., Underwood, L. G., & Gottlieb, B. H. (2000). Social support measurement and intervention: A guide for health and social scientists. Oxford University Press. Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98(2), 310–357. https://doi.org/10.1037/0033-2909.98.2.310 Coker, D. A., & Burgoon, J. (1987). The nature of conversational involvement and nonverbal encoding patterns. Human Communication Research, 13(4), 463–494. https://doi.org/10.1111/j.1468-2958.1987.tb00115.x Collins, N. L., & Feeney, B. C. (2000). A safe haven: An attachment theory perspective on support seeking and caregiving in intimate relationships. Journal of Personality and Social Psychology, 78(6), 1053–1073. https://doi.org/10.1037/0022-3514.78.6.1053 Coulson, N. S. (2005). Receiving social support online: An analysis of a computer-mediated support group for individuals living with irritable bowel syndrome. CyberPsychology & Behavior, 8(6), 580–584. https://doi.org/10.1089/cpb.2005.8.580 Coursaris, C. K., & Liu, M. (2009). An analysis of social support exchanges in online HIV/AIDS self-help groups. Computers in Human Behavior, 25(4), 911–918. https://doi.org/10.1016/j.chb.2009.03.006 Crystal, D. (2008). Txtng: The Gr8 Db8. OUP Oxford. Culnan, M. J., & Markus, M. L. (1987). Information technologies. In F. M. Jablin, L. L. Putnam, K. H. Roberts, & L. W. Porter (Eds.), Handbook of organizational communication (pp. 420–443). Sage. Cupach, W., & Metts, S. (1994). Facework (Vol. 1–7). https://doi.org/10.4135/9781483326986 Curran, P. J., West, S. G., & Finch, J. F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods, 1(1), 16– 29. https://doi.org/10.1037/1082-989X.1.1.16 Cutrona, C. 
E., & Russell, D. W. (2017). Autonomy promotion, responsiveness, and emotion regulation promote effective social support in times of stress. Current Opinion in Psychology, 13, 126–130. https://doi.org/10.1016/j.copsyc.2016.07.002 133 Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32(5), 554–571. https://doi.org/10.1287/mnsc.32.5.554 Daft, R. L., Lengel, R. H., & Trevino, L. K. (1987). Message equivocality, media selection, and manager performance: Implications for information systems. MIS Quarterly, 11(3), 355– 366. https://doi.org/10.2307/248682 Davis, D. (1982). Determinants of responsiveness in dyadic interaction. In W. Ickes & E. S. Knowles (Eds.), Personality, Roles, and Social Behavior (pp. 85–139). Springer. https://doi.org/10.1007/978-1-4613-9469-3_4 Davis, M. (2017). Empathy, compassion, and social relationships. In E. M. Seppälä, E. Simon- Thomas, S. L. Brown, M. C. Worline, C. D. Cameron, & J. R. Doty (Eds.), The Oxford Handbook of Compassion Science. Oxford University Press. Demir, M., & Davidson, I. (2013). Toward a better understanding of the relationship between friendship and happiness: Perceived responses to capitalization attempts, feelings of mattering, and satisfaction of basic psychological needs in same-sex best friendships as predictors of happiness. Journal of Happiness Studies, 14(2), 525–550. https://doi.org/10.1007/s10902-012-9341-7 Dennis, A. R., Fuller, R. M., & Valacich, J. S. (2008). Media, tasks, and communication processes: A theory of media synchronicity. MIS Quarterly, 32(3), 575–600. https://doi.org/10.2307/25148857 DePaulo, B. M., & Rosenthal, R. (1979). Telling lies. Journal of Personality and Social Psychology, 37(10), 1713–1722. https://doi.org/10.1037/0022-3514.37.10.1713 Derks, D., & Bos, A. E. R. (2008). Emoticons in computer-mediated communication: Social motives and social context. CyberPsychology & Behavior, 11(1), 99–101. https://doi.org/10.1089/ cpb.2007.9926 Derks, D., Fischer, A. H., & Bos, A. E. R. (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24(3), 766–785. https://doi.org/10.1016/j.chb.2007.04.004 DeVellis, R. F. (2012). Scale development: Theory and applications. SAGE Publications. DeVito, M. A., Birnholtz, J., & Hancock, J. T. (2017). Platforms, people, and perception: Using affordances to understand self-presentation on social media. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 740–754. https://doi.org/10.1145/2998181.2998192 134 Devoldre, I., Davis, M. H., Verhofstadt, L. L., & Buysse, A. (2010). Empathy and social support provision in couples: Social support and the need to study the underlying processes. The Journal of Psychology, 144(3), 259–284. https://doi.org/10.1080/00223981003648294 Diener, E. (1994). Assessing subjective well-being: Progress and opportunities. Social Indicators Research, 31(2), 103–157. https://doi.org/10.1007/BF01207052 Diener, E. (Ed.). (2009). The science of well-being: The collected works of Ed Diener. The Science of Well-Being: The Collected Works of Ed Diener., xi, 271–xi, 271. https://doi.org/10.1007/978-90-481-2350-6 Diener, E., & Emmons, R. A. (1984). The Independence of Positive and Negative Affect. Journal of Personality and Social Psychology, 47(5), 1105–1117. https://doi.org/10.1037//0022-3514.47.5.1105 Diener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). 
The satisfaction with life scale. Journal of Personality Assessment, 49(1), 71–75. https://doi.org/10.1207/s15327752jpa4901_13 Diener, E., Smith, H., & Fujita, F. (1995). The personality structure of affect. Journal of Personality and Social Psychology, 69, 130–141. https://doi.org/10.1037/0022- 3514.69.1.130 Diener, E., Suh, E. M., Lucas, R. E., & Smith, H. L. (1999). Subjective well-being: Three decades of progress. 27. Dimmick, J., Feaster, J. C., & Ramirez, A. (2011). The niches of interpersonal media: Relationships in time and space. New Media & Society, 13(8), 1265–1282. https://doi.org/10.1177/1461444811403445 Dimmick, J., Kline, S., & Stafford, L. (2000). The gratification niches of personal e-mail and the telephone: Competition, displacement, and complementarity. Communication Research, 27(2), 227–248. https://doi.org/10.1177/009365000027002005 Dimmick, J., Ramirez, A., Wang, T., & Lin, S.-F. (2007). `Extending Society’: The role of personal networks and gratification-utilities in the use of interactive communication media. New Media & Society, 9(5), 795–810. https://doi.org/10.1177/1461444807081225 Dolan, P., Kudrna, L., & Stone, A. (2017). The measure matters: An investigation of evaluative and experience-based measures of wellbeing in time use data. Social Indicators Research, 134(1), 57–73. https://doi.org/10.1007/s11205-016-1429-8 Döring, N., & Pöschl, S. (2009). Nonverbal cues in mobile phone text messages: The effects of chronemics and proxemics. In The reconstruction of space and time: Mobile communication practices (pp. 109–135). Transaction Publishers. 135 Dresner, E., & Herring, S. C. (2010). Functions of the nonverbal in CMC: Emoticons and illocutionary force. Communication Theory, 20(3), 249–268. https://doi.org/10.1111/j.1468-2885.2010.01362.x Ehrlich, S. M., Schiano, D. J., & Sheridan, K. (2000). Communicating facial affect: It’s not the realism, it’s the motion. CHI ’00 Extended Abstracts on Human Factors in Computing Systems, 251–252. https://doi.org/10.1145/633292.633439 Ellison, N. B., Steinfield, C., & Lampe, C. (2011). Connection strategies: Social capital implications of Facebook-enabled communication practices. New Media & Society. https://doi.org/10.1177/1461444810385389 Ellison, N. B., Vitak, J., & Sundar, S. (2015). Social network site affordances and their relationship to social capital processes. In The handbook of the psychology of communication technology (pp. 205–227). Wiley & Sons. Enders, C. K., & Bandalos, D. L. (2001). The relative performance of full information maximum likelihood estimation for missing data in structural equation models. Structural Equation Modeling: A Multidisciplinary Journal, 8(3), 430–457. https://doi.org/10.1207/S15328007SEM0803_5 Evans, S. K., Pearce, K. E., Vitak, J., & Treem, J. W. (2017). Explicating affordances: A conceptual framework for understanding affordances in communication research. Journal of Computer-Mediated Communication, 22(1), 35–52. https://doi.org/10.1111/jcc4.12180 Faules, D. (1967). The relation of communicator skill to the ability to elicit and interpret feedback under four conditions. Journal of Communication, 17(4), 362–371. https://doi.org/10.1111/j.1460-2466.1967.tb01194.x Fauville, G., Luo, M., Muller Queiroz, A. C., Bailenson, J. N., & Hancock, J. (2021). Nonverbal mechanisms predict Zoom fatigue and explain why women experience higher levels than men [SSRN Scholarly Paper]. https://doi.org/10.2139/ssrn.3820035 Feaster, J. C. (2008). 
A planning approach to interpersonal media use and selection [The Ohio State University]. https://etd.ohiolink.edu/pg_10?0::NO:10:P10_ACCESSION_NUM:osu1214939245#abst ract-files Feaster, J. C. (2009). The repertoire niches of interpersonal media: Competition and coexistence at the level of the individual. New Media & Society, 11(6), 965–984. https://doi.org/10.1177/1461444809336549 Feaster, J. C. (2013). Great expectations: The association between media-afforded information control and desirable social outcomes. Communication Quarterly, 61(2), 172–194. https://doi.org/10.1080/01463373.2012.751434 136 Feeney, B. C. (2004). A secure base: Responsive support of goal strivings and exploration in adult intimate relationships. Journal of Personality and Social Psychology, 87(5), 631– 648. https://doi.org/10.1037/0022-3514.87.5.631 Feeney, B. C., & Collins, N. L. (2015). A new look at social support: A theoretical perspective on thriving through relationships. Personality and Social Psychology Review, 19(2), 113– 147. https://doi.org/10.1177/1088868314544222 Feeney, B. C., & Thrush, R. L. (2010). Relationship influences on exploration in adulthood: The characteristics and function of a secure base. Journal of Personality and Social Psychology, 98(1), 57–76. https://doi.org/10.1037/a0016961 Feffer, M., & Suchotliff, L. (1966). Decentering implications of social interactions. Journal of Personality and Social Psychology, 4(4), 415–422. https://doi.org/10.1037/h0023807 Feldman, G. C., Joormann, J., & Johnson, S. L. (2008). Responses to positive affect: A self- report measure of rumination and dampening. Cognitive Therapy and Research, 32(4), 507–525. https://doi.org/10.1007/s10608-006-9083-0 Fox, J., & McEwan, B. (2017). Distinguishing technologies for social interaction: The perceived social affordances of communication channels scale. Communication Monographs, 84(3), 298–318. https://doi.org/10.1080/03637751.2017.1332418 Fulk, J., Schmitz, J., & Steinfield, C. (1990). A social influence model of technology use (pp. 117–140). https://doi.org/10.4135/9781483325385.n6 Gable, S., & Haidt, J. (2005). What (and why) is positive psychology? Review of General Psychology, 9. https://doi.org/10.1037/1089-2680.9.2.103 Gable, S. L., Gonzaga, G. C., & Strachman, A. (2006). Will you be there for me when things go right? Supportive responses to positive event disclosures. Journal of Personality and Social Psychology, 91(5), 904–917. https://doi.org/10.1037/0022-3514.91.5.904 Gable, S. L., Reis, H. T., Impett, E. A., & Asher, E. R. (2004a). What do you do when things go right? The intrapersonal and interpersonal benefits of sharing positive events. Journal of Personality and Social Psychology, 87(2), 228–245. https://doi.org/10.1037/0022- 3514.87.2.228 Gable, S., Reis, H., Reis, H., & Reis, H. (2010). Good news! Capitalizing on positive events in an interpersonal context. Advances in Experimental Social Psychology, 42, 195–257. Ganster, T., Eimler, S. C., & Krämer, N. C. (2012). Same Same But Different!? The Differential Influence of Smilies and Emoticons on Person Perception. Cyberpsychology, Behavior, and Social Networking, 15(4), 226–230. https://doi.org/10.1089/cyber.2011.0179 137 Gaver, W. W. (1991). Technology affordances. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Reaching through Technology - CHI ’91, 79–84. https://doi.org/10.1145/108844.108856 Gentzler, A. L., Palmer, C. A., & Ramsey, M. A. (2016). 
Savoring with intent: Investigating types of and motives for responses to positive events. Journal of Happiness Studies; Dordrecht, 17(3), 937–958. http://dx.doi.org.proxy2.cl.msu.edu/10.1007/s10902-015- 9625-9 Gesn, P. R., & Ickes, W. (1999). The development of meaning contexts for empathic accuracy: Channel and sequence effects. Journal of Personality and Social Psychology, 77(4), 746– 761. https://doi.org/10.1037/0022-3514.77.4.746 Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin. Goldsmith, D. J., McDermott, V. M., & Alexander, S. C. (2000). Helpful, supportive and sensitive: Measuring the evaluation of enacted social support in personal relationships. Journal of Social and Personal Relationships, 17(3), 369–391. https://doi.org/10.1177/0265407500173004 Gonzales, A. L. (2014). Text-based communication influences self-esteem more than face-to- face or cellphone communication. Computers in Human Behavior, 39, 197–203. https://doi.org/10.1016/j.chb.2014.07.026 Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. The American Psychologist, 59(2), 93–104. https://doi.org/10.1037/0003-066X.59.2.93 Graber, E., Laurenceau, J.-P., & Belcher, A. (2009). Interpersonal process model of intimacy. In Encyclopedia of Human Relationships (pp. 899–900). SAGE Publications, Inc. https://doi.org/10.4135/9781412958479 Greene, J. A., Choudhry, N. K., Kilabuk, E., & Shrank, W. H. (2011). Online social networking by patients with diabetes: A qualitative evaluation of communication with facebook. Journal of General Internal Medicine, 26(3), 287–292. https://doi.org/10.1007/s11606- 010-1526-3 Hall, J. A., & Schmid Mast, M. (2007). Sources of accuracy in the empathic accuracy paradigm. Emotion, 7(2), 438–446. https://doi.org/10.1037/1528-3542.7.2.438 Hampton, K., Goulet, L. S., Rainie, L., & Purcell, K. (2011, June 16). Social networking sites and our lives. Pew Research Center: Internet, Science & Tech. http://www.pewinternet.org/2011/06/16/social-networking-sites-and-our-lives/ Hampton, K. N. (2019). Social media and change in psychological distress over time: The role of social causation. Journal of Computer-Mediated Communication, 24(5), 205–222. https://doi.org/10.1093/jcmc/zmz010 138 Hancock, J. T., Thom-Santelli, J., & Ritchie, T. (2004). Deception and design: The impact of communication technology on lying behavior. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 129–134. https://doi.org/10.1145/985692.985709 Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press. Hays, R. B. (2016). The development and maintenance of friendship. Journal of Social and Personal Relationships. https://doi.org/10.1177/0265407584011005 Herche, J., & Engelland, B. (1996). Reversed-polarity items and scale unidimensionality. Journal of the Academy of Marketing Science, 24(4), 366. https://doi.org/10.1177/0092070396244007 High, A. C., & Dillard, J. P. (2012). A review and meta-analysis of person-centered messages and social support outcomes. Communication Studies, 63(1), 99–118. https://doi.org/10.1080/10510974.2011.598208 Hinds, P. J. (1999). The cognitive and interpersonal costs of video. Media Psychology, 1(4), 283–311. https://doi.org/10.1207/s1532785xmep0104_1 Hoffman, D. L., & Novak, T. P. (1998). Bridging the racial divide on the Internet. 
Science, 280(5362), 390–391. https://doi.org/10.1126/science.280.5362.390 Holden, C. J., Dennie, T., & Hicks, A. D. (2013). Assessing the reliability of the M5-120 on Amazon’s mechanical Turk. Computers in Human Behavior, 29(4), 1749–1754. https://doi.org/10.1016/j.chb.2013.02.020 Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118 Ickes, W. (1993). Empathic accuracy. Journal of Personality, 61(4), 587–610. https://doi.org/10.1111/j.1467-6494.1993.tb00783.x Ickes, W. J. (2003). Everyday mind reading: Understanding what other people think and feel. Prometheus Books. Ickes, W., & Simpson, J. (2001). Motivational aspects of empathic accuracy (pp. 229–249). https://doi.org/10.1002/9780470998557.ch9 Ickes, W., Stinson, L., Bissonnette, V., & Garcia, S. (1990). Naturalistic social cognition: Empathic accuracy in mixed-sex dyads. Journal of Personality and Social Psychology, 59(4), 730–742. https://doi.org/10.1037/0022-3514.59.4.730 139 Jiang, L. C., Bazarova, N. N., & Hancock, J. T. (2011). The disclosure-intimacy link in computer-mediated communication: An attributional extension of the hyperpersonal model. Human Communication Research, 37(1), 58–77. https://doi.org/10.1111/j.1468- 2958.2010.01393.x Jiang, L. C., & Hancock, J. T. (2013). Absence makes the communication grow fonder: Geographic separation, interpersonal media, and intimacy in dating relationships. Journal of Communication, 63(3), 556–577. https://doi.org/10.1111/jcom.12029 Johnson, R. L., & Morgan, G. B. (2016). Survey scales: A guide to development, analysis, and reporting. Guilford Publications. Joinson, A. N. (2001). Self-disclosure in computer-mediated communication: The role of self- awareness and visual anonymity. European Journal of Social Psychology, 31(2), 177– 192. https://doi.org/10.1002/ejsp.36 Jones, S. (2004). Putting the person into person-centered and immediate emotional support: Emotional change and perceived helper competence as outcomes of comforting in helping situations. Communication Research, 31(3), 338–360. https://doi.org/10.1177/0093650204263436 Jones, S. M., & Guerrero, L. K. (2001). The effects of nonverbal immediacy and verbal person centeredness in the emotional support process. Human Communication Research, 27(4), 567–596. https://doi.org/10.1111/j.1468-2958.2001.tb00793.x Kahneman, D., Diener, E., & Schwarz, N. (Eds.). (1999). Well-being: The foundations of hedonic psychology. Well-Being: The Foundations of Hedonic Psychology., xii, 593–xii, 593. Kalyanaraman, S., & Sundar, S. S. (2008). Impression formation effects in online mediated communication. Mediated Interpersonal Communication, 217–233. https://doi.org/10.4324/9780203926864 Kashian, N., & Walther, J. B. (2020). The effect of relational satisfaction and media synchronicity on attributions in computer-mediated conflict. Communication Research, 47(5), 647–668. https://doi.org/10.1177/0093650218789581 Keyes, C. L. M., Shmotkin, D., & Ryff, C. D. (2002). Optimizing well-being: The empirical encounter of two traditions. Journal of Personality and Social Psychology, 82(6), 1007– 1022. https://doi.org/10.1037//0022-3514.82.6.1007 Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Publications. Knapp, M. L., Vangelisti, A. L., & Caughlin, J. (2014). 
Interpersonal communication and human relationships (7th ed.). Pearson Allyn & Bacon. 140 Kraus, M. W. (2017). Voice-only communication enhances empathic accuracy. American Psychologist, 72(7), 644–654. https://doi.org/10.1037/amp0000147 Kruger, J., Epley, N., Parker, J., & Ng, Z.-W. (2005). Egocentrism over e-mail: Can we communicate as well as we think? Journal of Personality and Social Psychology, 89(6), 925–936. https://doi.org/10.1037/0022-3514.89.6.925 Kumashiro, M. (2009). Responsiveness. In Encyclopedia of Human Relationships (pp. 1374– 1376). SAGE Publications, Inc. https://doi.org/10.4135/9781412958479 Lakey, B., & Cronin, A. (2008). Low social support and major depression: Research, theory and methodological issues. In K. S. Dobson & D. J. A. Dozois (Eds.), Risk Factors in Depression (pp. 385–408). Elsevier. https://doi.org/10.1016/B978-0-08-045078-0.00017- 4 Lakey, B., & Orehek, E. (2011). Relational regulation theory: A new approach to explain the link between perceived social support and mental health. Psychological Review, 118(3), 482– 495. https://doi.org/10.1037/a0023477 Lambert, N. M., Gwinn, A. M., Baumeister, R. F., Strachman, A., Washburn, I. J., Gable, S. L., & Fincham, F. D. (2013). A boost of positive affect: The perks of sharing positive experiences. Journal of Social and Personal Relationships, 30(1), 24–43. https://doi.org/10.1177/0265407512449400 Langston, C. A. (1994). Capitalizing on and coping with daily-life events: Expressive responses to positive events. Journal of Personality and Social Psychology, 67(6), 1112–1125. http://dx.doi.org.proxy1.cl.msu.edu/10.1037/0022-3514.67.6.1112 Laurenceau, J.-P., Barrett, L. F., & Pietromonaco, P. R. (1998). Intimacy as an interpersonal process: The importance of self-disclosure, partner disclosure, and perceived partner responsiveness in interpersonal exchanges. Journal of Personality and Social Psychology, 74(5), 1238–1251. https://doi.org/10.1037/0022-3514.74.5.1238 Laurenceau, J.-P., Barrett, L. F., & Rovine, M. J. (2005). The interpersonal process model of intimacy in marriage: A daily-diary and multilevel modeling approach. Journal of Family Psychology, 19(2), 314–323. https://doi.org/10.1037/0893-3200.19.2.314 Lea, M., Spears, R., & de Groot, D. (2001). Knowing me, knowing you: Anonymity effects on social identity processes within groups. Personality and Social Psychology Bulletin, 27(5), 526–537. Leavitt, H. J., & Mueller, R. A. H. (1951). Some effects of feedback on communication. Human Relations, 4(4), 401–410. https://doi.org/10.1177/001872675100400406 Lee, A. S. (1994). Electronic mail as a medium for rich communication: An empirical investigation using hermeneutic interpretation. MIS Quarterly, 18(2), 143–157. https://doi.org/10.2307/249762 141 Lee, J.-S., Koeske, G. F., & Sales, E. (2004). Social support buffering of acculturative stress: A study of mental health symptoms among Korean international students. International Journal of Intercultural Relations, 28(5), 399–414. https://doi.org/10.1016/j.ijintrel.2004.08.005 Leiner, D. J., & Quiring, O. (2008). What interactivity means to the user essential insights into and a scale for perceived interactivity. Journal of Computer-Mediated Communication, 14(1), 127–155. https://doi.org/10.1111/j.1083-6101.2008.01434.x Lemay, E. P., & Neal, A. M. (2014). Accurate and biased perceptions of responsive support predict well-being. Motivation and Emotion, 38(2), 270–286. https://doi.org/10.1007/s11031-013-9381-2 Leonardi, P. M. (2011). 
When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies. MIS Quarterly, 35(1), 147–167. https://doi.org/10.2307/23043493 Lim, S. S., & Pham, B. (2016). ‘If you are a foreigner in a foreign country, you stick together’: Technologically mediated communication and acculturation of migrant students. New Media & Society, 1461444816655612. https://doi.org/10.1177/1461444816655612 Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442. https://doi.org/10.3758/s13428-016-0727-z Little, R. J. A., & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). John Wiley & Sons. Liu, Y. (2003). Developing a scale to measure the interactivity of websites. Journal of Advertising Research, 43(2), 207–216. https://doi.org/10.2501/JAR-43-2-207-216 Lowry, P. B., Romano, N. C., Jenkins, J. L., & Guthrie, R. W. (2009). The CMC interactivity model: How interactivity enhances communication quality and process satisfaction in lean-media groups. Journal of Management Information Systems, 26(1), 155–196. https://doi.org/10.2753/MIS0742-1222260107 Lykken, D., & Tellegen, A. (1996). Happiness is a stochastic phenomenon. Psychological Science, 7(3), 186–189. https://doi.org/10.1111/j.1467-9280.1996.tb00355.x Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research, 35(6), 382–386. MacGeorge, E. L., Feng, B., & Burleson, B. R. (2011). Supportive communication. In M. L. Knapp & J. A. Daly (Eds.), Handbook of interpersonal communication (4th ed., pp. 317– 354). Sage. 142 Madianou, M. (2015). Polymedia and ethnography: Understanding the social in social media. Social Media + Society, 1(1), 2056305115578675. https://doi.org/10.1177/2056305115578675 Madianou, M., & Miller, D. (2013). Polymedia: Towards a new theory of digital media in interpersonal communication. International Journal of Cultural Studies, 16(2), 169–187. https://doi.org/10.1177/1367877912452486 Maisel, N. C., & Gable, S. L. (2009). The paradox of received social support: The importance of responsiveness. Psychological Science, 20(8), 928–932. https://doi.org/10.1111/j.1467- 9280.2009.02388.x Manago, A. M., Brown, G., Lawley, K. A., & Anderson, G. (2020). Adolescents’ daily face-to- face and computer-mediated communication: Associations with autonomy and closeness to parents and friends. Developmental Psychology, 56(1), 153–164. https://doi.org/10.1037/dev0000851 Manne, S., Ostroff, J., Rini, C., Fox, K., Goldstein, L., & Grana, G. (2004). The interpersonal process model of intimacy: The role of self-disclosure, partner disclosure, and partner responsiveness in interactions between breast cancer patients and their partners. Journal of Family Psychology, 18(4), 589–599. https://doi.org/10.1037/0893-3200.18.4.589 Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon’s Mechanical Turk. Behavior Research Methods, 44(1), 1–23. https://doi.org/10.3758/s13428-011-0124-6 McCrae, R. R., & Costa, P. T. (1991). Adding liebe und arbeit: The full five-factor model and well-being. Personality and Social Psychology Bulletin, 17(2), 227–232. https://doi.org/10.1177/014616729101700217 McLaughlin, C., & Vitak, J. (2012). Norm evolution and violation on Facebook. New Media & Society, 14(2), 299–315. https://doi.org/10.1177/1461444811412712 Mehrabian, A., & Reed, H. (1968). 
Some determinants of communication accuracy. Psychological Bulletin, 70(5), 365–381. Mehrabian, A., & Wiener, M. (1967). Decoding of inconsistent communications. Journal of Personality and Social Psychology, 6(1), 109–114. https://doi.org/10.1037/h0024532 Mo, P. K. H., & Coulson, N. S. (2008). Exploring the communication of social support within virtual communities: A content analysis of messages posted to an online hiv/aids support group. CyberPsychology & Behavior, 11(3), 371–374. https://doi.org/10.1089/cpb.2007.0118 Monfort, S. S., Kaczmarek, L. D., Kashdan, T. B., Drążkowski, D., Kosakowski, M., Guzik, P., Krauze, T., & Gracanin, A. (2014). Capitalizing on the success of romantic partners: A laboratory investigation on subjective, facial, and physiological emotional processing. 143 Personality and Individual Differences, 68, 149–153. https://doi.org/10.1016/j.paid.2014.04.028 Naaman, M., Boase, J., & Lai, C.-H. (2010). Is it really about me? Message content in social awareness streams. Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work - CSCW ’10, 189. https://doi.org/10.1145/1718918.1718953 Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scaling procedures: Issues and applications. SAGE Publications. Nevitt, J., & Hancock, G. R. (2001). Performance of bootstrapping approaches to model test statistics and parameter standard error estimation in structural equation modeling. Structural Equation Modeling, 8(3), 353–377. https://doi.org/10.1207/S15328007SEM0803_2 Norman, D. A. (1988). The psychology of everyday things. Basic Books. Nowak, K. L., Watt, J., & Walther, J. B. (2005). The influence of synchrony and sensory modality on the person perception process in computer-mediated groups. Journal of Computer-Mediated Communication, 10(3). https://doi.org/10.1111/j.1083- 6101.2005.tb00251.x Nowak, K. L., Watt, J., & Walther, J. B. (2009). Computer mediated teamwork and the efficiency framework: Exploring the influence of synchrony and cues on media satisfaction and outcome success. Computers in Human Behavior, 25(5), 1108–1119. https://doi.org/10.1016/j.chb.2009.05.006 O’Laughlin, K. D., Martin, M. J., & Ferrer, E. (2018). Cross-sectional analysis of longitudinal mediation processes. Multivariate Behavioral Research, 53(3), 375–402. https://doi.org/10.1080/00273171.2018.1454822 Ortiz-Ospina, E. (2019). The rise of social media. Our World in Data. https://ourworldindata.org/rise-of-social-media O’Sullivan, P. B. (2000). What you don’t know won’t hurt me: Impression management functions of communication channels in relationships. Human Communication Research, 26(3), 403–431. https://doi.org/10.1111/j.1468-2958.2000.tb00763.x O’Sullivan, P. B., & Carr, C. T. (2018). Masspersonal communication: A model bridging the mass-interpersonal divide. New Media & Society, 20(3), 1161–1180. https://doi.org/10.1177/1461444816686104 Otto, A. K., Laurenceau, J.-P., Siegel, S. D., & Belcher, A. J. (2015). Capitalizing on everyday positive events uniquely predicts daily intimacy and well-being in couples coping with breast cance. Journal of Family Psychology : JFP : Journal of the Division of Family Psychology of the American Psychological Association (Division 43), 29(1), 69–79. https://doi.org/10.1037/fam0000042 144 Özkul, D., & Humphreys, L. (2015). Record and remember: Memory and meaning-making practices through mobile media. Mobile Media & Communication, 3(3), 351–365. https://doi.org/10.1177/2050157914565846 Park, E. K., & Sundar, S. S. (2015). 
Can synchronicity and visual modality enhance social presence in mobile messaging? Computers in Human Behavior, 45, 121–128. https://doi.org/10.1016/j.chb.2014.12.001 Park, Y. W., & Lee, A. R. (2019). The moderating role of communication contexts: How do media synchronicity and behavioral characteristics of mobile messenger applications affect social intimacy and fatigue? Computers in Human Behavior, 97, 179–192. https://doi.org/10.1016/j.chb.2019.03.020 Peters, B. J., Reis, H. T., & Gable, S. L. (2018). Making the good even better: A review and theoretical model of interpersonal capitalization. Social and Personality Psychology Compass, 12(7), e12407. https://doi.org/10.1111/spc3.12407 Petrocchi, S., Marciano, L., Annoni, A. M., & Camerini, A.-L. (2020). “What you say and how you say it” matters: An experimental evidence of the role of synchronicity, modality, and message valence during smartphone-mediated communication. PLOS ONE, 15(9), e0237846. https://doi.org/10.1371/journal.pone.0237846 Pinker, S., & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4), 707–784. Powers, S. R., Rauh, C., Henning, R. A., Buck, R. W., & West, T. V. (2011). The effect of video feedback delay on frustration and emotion communication accuracy. Computers in Human Behavior, 27(5), 1651–1657. https://doi.org/10.1016/j.chb.2011.02.003 Qiu, L., Lin, H., & Leung, A. K. -y. (2013). Cultural differences and switching of in-group sharing behavior between an american (Facebook) and a Chinese (Renren) social networking site. Journal of Cross-Cultural Psychology, 44(1), 106–121. https://doi.org/10.1177/0022022111434597 Query, J. L., & Wright, K. (2003). Assessing communication competence in an online study: Toward informing subsequent interventions among older adults with cancer, their lay caregivers, and peers. Health Communication, 15(2), 203–218. https://doi.org/10.1207/S15327027HC1502_8 Rains, S. (2019). Communication technology affordances and social support provision. In N. Egbert & K. Wright (Eds.), Social Support and Health in the Digital Age (pp. 29–46). Rowman & Littlefield. 145 Rains, S. A., & Brunner, S. R. (2018). The outcomes of broadcasting self-disclosure using new communication technologies: Responses to disclosure vary across one’s social network. Communication Research, 45(5), 659–687. https://doi.org/10.1177/0093650215598836 Rains, S. A., & Wright, K. B. (2016). Social support and computer-mediated communication: A state-of-the-art review and agenda for future research. Annals of the International Communication Association, 40(1), 175–211. https://doi.org/10.1080/23808985.2015.11735260 Rains, S. A., & Young, V. (2009). A meta-analysis of research on formal computer-mediated support groups: Examining group characteristics and health outcomes. Human Communication Research, 35(3), 309–336. https://doi.org/10.1111/j.1468- 2958.2009.01353.x Ramirez, A., Dimmick, J., Feaster, J., & Lin, S.-F. (2008). Revisiting interpersonal media competition: The gratification niches of instant messaging, e-mail, and the telephone. Communication Research, 35(4), 529–547. https://doi.org/10.1177/0093650208315979 Reis, H. T. (2012). Perceived partner responsiveness as an organizing theme for the study of relationships and well-being. In Interdisciplinary research on close relationships: The case for integration (pp. 27–52). American Psychological Association. https://doi.org/10.1037/13486-002 Reis, H. T. (2014). Responsiveness: Affective interdependence in close relationships. 
Reis, H. T., Clark, M. S., & Holmes, J. G. (2004). Perceived partner responsiveness as an organizing construct in the study of intimacy and closeness. In Handbook of closeness and intimacy (pp. 201–225). Lawrence Erlbaum Associates Publishers.
Reis, H. T., Smith, S. M., Carmichael, C. L., Caprariello, P. A., Tsai, F.-F., Rodrigues, A., & Maniaci, M. R. (2010). Are you happy for me? How sharing positive events with others provides personal and interpersonal benefits. Journal of Personality and Social Psychology, 99(2), 311–329. https://doi.org/10.1037/a0018344
Rencher, A. C., & Christensen, W. F. (2012). Methods of multivariate analysis. John Wiley & Sons.
Rettie, R. (2009). SMS: Exploiting the interactional characteristics of near-synchrony. Information, Communication & Society, 12(8), 1131–1148. https://doi.org/10.1080/13691180902786943
Rice, R. E., & Steinfield, C. (1994). Experiences with new forms of organizational communication via electronic mail and voice messaging. In J. H. E. Andriesson & R. A. Roe (Eds.), Telematics and work (pp. 109–132). Erlbaum.
Rimé, B. (2007). Interpersonal emotion regulation. In Handbook of emotion regulation (pp. 466–485). The Guilford Press.
Riordan, M. A., Kreuz, R. J., & Blair, A. N. (2018). The digital divide: Conveying subtlety in online communication. Journal of Computers in Education, 5(1), 49–66. https://doi.org/10.1007/s40692-018-0100-6
Rosenberg, M. (2015). Society and the adolescent self-image. Princeton University Press.
Ruppel, E. K. (2015). The affordance utilization model: Communication technology use as relationships develop. Marriage & Family Review, 51(8), 669–686. https://doi.org/10.1080/01494929.2015.1061628
Ruppel, E. K., Gross, C., Stoll, A., Peck, B. S., Allen, M., & Kim, S.-Y. (2017). Reflecting on connecting: Meta-analysis of differences between computer-mediated and face-to-face self-disclosure. Journal of Computer-Mediated Communication, 22(1), 18–34. https://doi.org/10.1111/jcc4.12179
Ryan, K., Gannon-Slater, N., & Culbertson, M. J. (2012). Improving survey methods with cognitive interviews in small- and medium-scale evaluations. American Journal of Evaluation, 33(3), 414–430. https://doi.org/10.1177/1098214012441499
Ryan, R. M., & Deci, E. L. (2001). On happiness and human potentials: A review of research on hedonic and eudaimonic well-being. Annual Review of Psychology, 52(1), 141–166. https://doi.org/10.1146/annurev.psych.52.1.141
Schmitz, J., & Fulk, J. (1991). Organizational colleagues, media richness, and electronic mail: A test of the social influence model of technology use. Communication Research, 18(4), 487–523. https://doi.org/10.1177/009365091018004003
Schwarz, N., & Strack, F. (1999). Reports of subjective well-being: Judgmental processes and their methodological implications. In Well-being: The foundations of hedonic psychology (Vol. 178, pp. 61–84).
Selcuk, E., Gunaydin, G., Ong, A. D., & Almeida, D. M. (2016). Does partner responsiveness predict hedonic and eudaimonic well-being? A 10-year longitudinal study. Journal of Marriage and the Family, 78(2), 311–325. https://doi.org/10.1111/jomf.12272
Selcuk, E., & Ong, A. D. (2013). Perceived partner responsiveness moderates the association between received emotional support and all-cause mortality. Health Psychology, 32(2), 231–235. https://doi.org/10.1037/a0028276
Selcuk, E., Stanton, S. C. E., Slatcher, R. B., & Ong, A. D. (2017). Perceived partner responsiveness predicts better sleep quality through lower anxiety. Social Psychological and Personality Science, 8(1), 83–92. https://doi.org/10.1177/1948550616662128
Seligman, M. (2018). PERMA and the building blocks of well-being. The Journal of Positive Psychology, 13(4), 333–335. https://doi.org/10.1080/17439760.2018.1437466
Shorey, R. C., & Lakey, B. (2011). Perceived and capitalization support are substantially similar: Implications for social support theory. Personality & Social Psychology Bulletin, 37(8), 1068–1079. https://doi.org/10.1177/0146167211406507
Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. Wiley.
Simpson, J., Ickes, W., & Oriña, M. (2001). Empathic accuracy and pre-emptive relationship maintenance (pp. 27–46).
Slatcher, R. B., Selcuk, E., & Ong, A. D. (2015). Perceived partner responsiveness predicts diurnal cortisol profiles 10 years later. Psychological Science, 26(7), 972–982. https://doi.org/10.1177/0956797615575022
Smith, A., & Anderson, M. (2018, March 1). Social media use 2018: Demographics and statistics. Pew Research Center. http://www.pewinternet.org/2018/03/01/social-media-use-in-2018/
Smolkowski, K. (2020, July 18). Correlated errors in CFA and SEM models. https://homes.ori.org//keiths/Tips/Stats_SEMErrorCorrs.html
Spitzberg, B. H., & Cupach, W. R. (1984). Interpersonal communication competence. SAGE Publications.
Stallings, M. C., Dunham, C. C., Gatz, M., Baker, L. A., & Bengtson, V. L. (1997). Relationships among life events and psychological well-being: More evidence for a two-factor theory of well-being. Journal of Applied Gerontology, 16(1), 104–119. https://doi.org/10.1177/073346489701600106
Stiles, W. B. (1987). “I Have to Talk to Somebody.” In V. J. Derlega & J. H. Berg (Eds.), Self-Disclosure: Theory, Research, and Therapy (pp. 257–282). Springer US. https://doi.org/10.1007/978-1-4899-3523-6_12
Stokel-Walker, C. (2020, May 12). How Skype lost its crown to Zoom. Wired UK. https://www.wired.co.uk/article/skype-coronavirus-pandemic
Stromer-Galley, J. (2004). Interactivity-as-product and interactivity-as-process. The Information Society, 20(5), 391–394. https://doi.org/10.1080/01972240490508081
Sundar, S. S. (2004). Theorizing interactivity’s effects. The Information Society, 20(5), 385–389. https://doi.org/10.1080/01972240490508072
Sundar, S. S., Jia, H., Waddell, T. F., & Huang, Y. (2015). Toward a theory of interactive media effects (TIME): Four models for explaining how interface features affect user psychology. In S. S. Sundar (Ed.), The Handbook of the Psychology of Communication Technology (1st ed., pp. 47–86). Wiley. https://doi.org/10.1002/9781118426456.ch3
Sundar, S. S., Kalyanaraman, S., & Brown, J. (2003). Explicating web site interactivity: Impression formation effects in political campaign sites. Communication Research, 30(1), 30–59. https://doi.org/10.1177/0093650202239025
Tambs, K., & Røysamb, E. (2014). Selection of questions to short-form versions of original psychometric instruments in MoBa. Norsk Epidemiologi, 24(1–2). https://doi.org/10.5324/nje.v24i1-2.1822
Tausig, M. (1982). Measuring life events. Journal of Health and Social Behavior, 23(1), 52–64. https://doi.org/10.2307/2136389
Taylor, D. A., & Altman, I. (1987). Communication in interpersonal relationships: Social penetration processes. In Interpersonal processes: New directions in communication research (pp. 257–277). Sage Publications, Inc.
Taylor, D. A., Wheeler, L., & Altman, I. (1973). Self-disclosure in isolated groups. Journal of Personality and Social Psychology, 26(1), 39–47. https://doi.org/10.1037/h0034233
Tidwell, L. C., & Walther, J. B. (2002). Computer-mediated communication effects on disclosure, impressions, and interpersonal evaluations: Getting to know one another a bit at a time. Human Communication Research, 28(3), 317–348. https://doi.org/10.1111/j.1468-2958.2002.tb00811.x
Tong, S. T., & Walther, J. B. (2011). Relational maintenance and CMC. Computer-Mediated Communication in Personal Relationships, 98–118.
Treem, J. W., & Leonardi, P. M. (2012). Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association [SSRN Scholarly Paper]. https://papers.ssrn.com/abstract=2129853
Twenge, J. M. (2017, September). Have smartphones destroyed a generation? The Atlantic. https://www.theatlantic.com/magazine/archive/2017/09/has-the-smartphone-destroyed-a-generation/534198/
Twenge, J. M., Blake, A. B., Haidt, J., & Campbell, W. K. (2020). Commentary: Screens, teens, and psychological well-being: Evidence from three time-use-diary studies. Frontiers in Psychology, 11. https://doi.org/10.3389/fpsyg.2020.00181
Twenge, J. M., & Farley, E. (2021). Not all screen time is created equal: Associations with mental health vary by activity and gender. Social Psychiatry and Psychiatric Epidemiology, 56(2), 207–217. https://doi.org/10.1007/s00127-020-01906-9
Uchino, B. N. (2004). Social support and physical health: Understanding the health consequences of relationships. Yale University Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10170042
Utz, S. (2007). Media use in long-distance friendships. Information, Communication & Society, 10(5), 694–713. https://doi.org/10.1080/13691180701658046
Verhofstadt, L., Devoldre, I., Buysse, A., Stevens, M., Hinnekens, C., Ickes, W., & Davis, M. (2016). The role of cognitive and affective empathy in spouses’ support interactions: An observational study. PLOS ONE, 11(2), e0149944. https://doi.org/10.1371/journal.pone.0149944
Verhofstadt, L. L., Buysse, A., Ickes, W., Davis, M., & Devoldre, I. (2008). Support provision in marriage: The role of emotional similarity and empathic accuracy. Emotion, 8(6), 792–802. https://doi.org/10.1037/a0013976
Vitak, J. (2012). The impact of context collapse and privacy on social network site disclosures. Journal of Broadcasting & Electronic Media, 56(4), 451–470. https://doi.org/10.1080/08838151.2012.732140
Voorveld, H. A. M., Neijens, P. C., & Smit, E. G. (2011). The relation between actual and perceived interactivity. Journal of Advertising, 40(2), 77–92. https://doi.org/10.2753/JOA0091-3367400206
Walther, J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19(1), 52–90. https://doi.org/10.1177/009365092019001003
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43. https://doi.org/10.1177/009365096023001001
Walther, J. B. (2011). Theories of computer-mediated communication and interpersonal relations. The Handbook of Interpersonal Communication, 4, 443–479.
Walther, J. B. (2013). Commentary: Affordances, effects, and technology errors. Annals of the International Communication Association, 36(1), 190–193. https://doi.org/10.1080/23808985.2013.11679131
Walther, J. B., Loh, T., & Granka, L. (2005). Let me count the ways: The interchange of verbal and nonverbal cues in computer-mediated and face-to-face affinity. Journal of Language and Social Psychology, 24(1), 36–65. https://doi.org/10.1177/0261927X04273036
Waters, E., & Cummings, E. M. (2000). A secure base from which to explore close relationships. Child Development, 71(1), 164–172. https://doi.org/10.1111/1467-8624.00130
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–1070. https://doi.org/10.1037/0022-3514.54.6.1063
Wei, M., Russell, D. W., Mallinckrodt, B., & Vogel, D. L. (2007). The Experiences in Close Relationship Scale (ECR)-short form: Reliability, validity, and factor structure. Journal of Personality Assessment, 88(2), 187–204. https://doi.org/10.1080/00223890701268041
Williams, D., Caplan, S., & Xiong, L. (2007). Can you hear me now? The impact of voice in an online gaming community. Human Communication Research, 33(4), 427–449. https://doi.org/10.1111/j.1468-2958.2007.00306.x
Winczewski, L. A., Bowen, J. D., & Collins, N. L. (2016). Is empathic accuracy enough to facilitate responsive behavior in dyadic interaction? Distinguishing ability from motivation. Psychological Science, 27(3), 394–404. https://doi.org/10.1177/0956797615624491
Wohn, D. Y., & Peng, W. (2015). Understanding perceived social support through communication time, frequency, and media multiplexity. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA ’15, 1911–1916. https://doi.org/10.1145/2702613.2732866
Wohn, D. Y., Peng, W., & Zytko, D. (2017). Face to face matters: Communication modality, perceived social support, and psychological wellbeing. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA ’17, 3019–3026. https://doi.org/10.1145/3027063.3053267
Wu, G. (2005). The mediating role of perceived interactivity in the effect of actual interactivity on attitude toward the website. Journal of Interactive Advertising, 5(2), 29–39. https://doi.org/10.1080/15252019.2005.10722099
Xu, K., & Liao, T. (2020). Explicating cues: A typology for understanding emerging media technologies. Journal of Computer-Mediated Communication, 25(1), 32–43. https://doi.org/10.1093/jcmc/zmz023
Yang, C., Brown, B. B., & Braun, M. T. (2014). From Facebook to cell calls: Layers of electronic intimacy in college students’ interpersonal relationships. New Media & Society, 16(1), 5–23. https://doi.org/10.1177/1461444812472486
Yang, D., Yao, Z., Seering, J., & Kraut, R. (2019). The channel matters: Self-disclosure, reciprocity and social support in online cancer support groups. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3290605.3300261
Yang, F., & Shen, F. (2018). Effects of web interactivity: A meta-analysis. Communication Research, 45(5), 635–658. https://doi.org/10.1177/0093650217700748
Young, A. L., & Quan-Haase, A. (2009). Information revelation and internet privacy concerns on social network sites: A case study of Facebook. Proceedings of the Fourth International Conference on Communities and Technologies, 265–274. https://doi.org/10.1145/1556460.1556499
Youngvorst, L. (2018). The influence of communication modality on verbal person-centered supportive conversations between friends. http://conservancy.umn.edu/handle/11299/201152
Zatori, A., Smith, M. K., & Puczko, L. (2018). Experience-involvement, memorability and authenticity: The service provider’s effect on tourist experience. Tourism Management, 67, 111–126. https://doi.org/10.1016/j.tourman.2017.12.013
Zell, A. L., & Moeller, L. (2018). Are you happy for me … on Facebook? The potential importance of “likes” and comments. Computers in Human Behavior, 78, 26–33. https://doi.org/10.1016/j.chb.2017.08.050
Zhang, R. (2017). The stress-buffering effect of self-disclosure on Facebook: An examination of stressful life events, social support, and mental health among college students. Computers in Human Behavior, 75, 527–537. https://doi.org/10.1016/j.chb.2017.05.043
Zhang, R., Bazarova, N. N., & Reddy, M. (2021). Distress disclosure across social media platforms during the COVID-19 pandemic: Untangling the effects of platforms, affordances, and audiences. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–15). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445134