EXPLORING FEASIBILITY, EFFECTIVENESS, AND ACCEPTABILITY OF AN ONLINE TRAUMA INTERVENTION FOR RACIALLY AND ETHNICALLY MINORITIZED YOUNG PEOPLE ENTERING COLLEGE

By Madeline Esterer

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of School Psychology – Doctor of Philosophy

2024

ABSTRACT

The current study extends an uncontrolled pilot study of Life Improvement for Teens (LIFT; Jaycox et al., 2019), an online stress and trauma program, to examine its pre-post effectiveness (depression, anxiety, posttraumatic stress, coping, negative cognitions) via a waitlist randomized controlled design and an intent-to-treat sample (N = 47) of 18–19-year-old racial or ethnic minority (REM) young people entering college, with the program delivered in a fully self-administered format. Students were randomly assigned to the treatment condition (N = 24) or waitlist control condition (N = 23). LIFT was provided over 4-7 weeks to the treatment group with external weekly reminders. Feasibility analyses found low retention (6 out of 24; 25%), moderate adherence for those who began LIFT (6 of 11 met minimum criteria indicative of program completion; 55%), and varied engagement patterns with LIFT (e.g., 5 out of 11 participants who began LIFT did not meet minimum criteria for engagement; 45%). In sum, many participants did not engage with LIFT content as intended. Effectiveness findings, accounting for missing data via multiple imputation (40 imputations), indicated positively trending but nonsignificant improvements in anxiety and depression in comparison to the control condition. Contrary to hypotheses based on prior literature, no improvements were found for PTSS or any mechanisms of action (negative cognitions, coping skills). Global acceptability of LIFT was reported as moderate-to-high, with more mixed/neutral qualitative reports of acceptability. Cultural acceptability findings were inconclusive due to the small interview sample size, but promising strengths and areas for improvement in the cultural acceptability of LIFT were highlighted. This study contributes to the literature on effective and accessible mental health programs for REM young people impacted by traumatic stress.

ACKNOWLEDGEMENTS

I would first like to thank my family and my friends for supporting me throughout my Ph.D. and dissertation journey. You are appreciated for always providing a listening ear and helping remind me of the importance of work/life balance and taking time away from work and research. I am also deeply indebted to my amazing cohort-mates. I could not have completed this Ph.D. if not for the friends I made within my cohort, who taught me about discipline and inner strength, advocacy, and how to say no, and who helped me overcome my imposter syndrome. I am also grateful to my advisor, Dr. John Carlson, who always supported my individuality as a scientist-scholar-practitioner and greatly contributed to the timely completion of this dissertation. Finally, thank you to everyone else along the way over these past 5 years who supported me emotionally, socially, physically, and professionally. I am deeply thankful for this village behind me who helped me realize that I can do hard things.

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION .................................................................................................... 1
CHAPTER 2 LITERATURE REVIEW ....................................................................................... 21
CHAPTER 3 METHODS ............................................................................................................. 55
CHAPTER 4 RESULTS ............................................................................................................... 96
CHAPTER 5 DISCUSSION ....................................................................................................... 132
REFERENCES ........................................................................................................................... 159
APPENDIX A: CONSENT FORM ............................................................................................ 177
APPENDIX B: RECRUITMENT FLYER FOR SOCIAL MEDIA .......................................... 182
APPENDIX C: QUALITATIVE INTERVIEW PROTOCOL ................................................... 183
APPENDIX D: ACCEPTABILITY CODEBOOK .................................................................... 185
APPENDIX E: ANCOVAS FOR MECHANISMS OF ACTION (5 OUT OF 40 IMPUTATIONS AND ORIGINAL DATA) .............................................................................. 186
APPENDIX F: POOLED INTENT-TO-TREAT ANCOVA RESULTS FOR MECHANISMS OF ACTION ............................................................................................................................... 194
APPENDIX G: ANCOVAS FOR TARGETED OUTCOMES (5 OUT OF 40 IMPUTATIONS AND ORIGINAL DATA) .......................................................................................................... 195
APPENDIX H: POOLED INTENT-TO-TREAT ANCOVA RESULTS FOR TARGETED OUTCOMES .............................................................................................................................. 198
APPENDIX I: ACCEPTABILITY CODES AND QUOTES ACROSS OPEN RESPONSE AND INTERVIEW TOOLS ................................................................................................................ 199
APPENDIX J: BY-PARTICIPANT RESULTS FOR N = 6 ADHERENT SUB-GROUP ........ 206
APPENDIX K: DEMOGRAPHIC QUESTIONNAIRE AND TRAUMATIC EVENTS SCREENER ................................................................................................................................ 207
APPENDIX L: CHILD PTSD SYMPTOM SCALE .................................................................. 209
APPENDIX M: CHILDREN'S COPING STRATEGIES CHECKLIST .................................. 210
APPENDIX N: CHILD POST-TRAUMATIC COGNITIONS INVENTORY SHORT FORM MEASURE ................................................................................................................................. 211
APPENDIX O: REVISED CHILDREN'S ANXIETY AND DEPRESSION SCALE .............. 212

CHAPTER 1 INTRODUCTION

It is estimated that about 1 in 5 young people meet criteria for a mental health disorder (Whitney & Peterson, 2019), yet most do not receive any formal support (Merikangas et al., 2011). This gap is exacerbated for those transitioning from high school into higher education, as a variety of novel stressors (e.g., academic, social) increases the likelihood of mental health challenges like depression and anxiety (Garett et al., 2017; Hankin et al., 1998; Crone & Dahl, 2012; Bertha & Balázs, 2013). Recently graduated high schoolers are at unique risk of experiencing mental health issues during the transition to college if adaptive coping skills are not acquired (Garett et al., 2017; Cleary et al., 2011; Cook, 2007).
Further, disparities exist for racial and ethnic minority (REM; i.e., Black, Latinx, Asian American or Pacific Islander) young people, who experience disproportionately higher mental health challenges than their White counterparts; yet communities with higher percentages of REMs often have fewer services embedded into their systems (Alegria et al., 2015). The COVID-19 pandemic further exacerbated student distress, contributing to a national mental health crisis (i.e., post-traumatic stress, depression, anxiety; Racine et al., 2021; Guessoum et al., 2020; Office of the Surgeon General, 2021). Educational systems are typically the first system in which mental health concerns are treated (Lu, 2020), yet many students with concerns remain untreated into college (Bruffaerts et al., 2019).

One area of particular importance for students is managing stressful or potentially traumatic events (PTEs). The definition of a PTE has widened over the years to include various types of common stressful events (e.g., parental divorce, witnessing violence, significant health/violent threat towards a family member) that have the potential to cause emotional harm (i.e., worry about one's wellbeing). A PTE meets the criteria to become a traumatic event when it evokes post-traumatic stress symptoms (PTSS; intrusion, avoidance, negative cognitions/mood, arousal/reactivity symptoms, impaired functioning), since not everyone develops PTSS after a PTE (APA, 2013). Both unmitigated traumatic stress and PTEs have the potential to disturb functioning if one lacks the skills to manage them. Recent research suggests that by age 16-18, two-thirds of youth have experienced at least one PTE, and one-third of these youth have experienced more than one (Porche et al., 2016; Copeland et al., 2007). By the time individuals reach college age, they are highly likely to have experienced at least one PTE, and having experienced a PTE prior to college is associated with additional stress in college freshmen (Filipkowski et al., 2016). PTEs can lead to traumatic stress reactions within young people, ultimately increasing risk for mental health concerns including post-traumatic stress disorder (PTSD; Schnurr & Lohman, 2013). Individuals experiencing PTSS require support if they are experiencing functional difficulties. Negative outcomes associated with untreated traumatic stress may include social, emotional, cognitive, and physiological symptoms that can alter functioning in home, community/workplace, and educational environments (Porche et al., 2016; Perfect et al., 2016).

It is critical to assess the intersection of race, ethnicity, and the prevalence of PTSS, especially for incoming college freshmen. There are many factors associated with increased risk for PTE exposure, including urban residence, lower socioeconomic status, and REM status (Stein et al., 2003a; Crouch et al., 2000). Additionally, young people from REM backgrounds have to acquire additional competencies to cope with systemic racism (Garcia-Coll et al., 1996); specifically, adaptive coping patterns are necessary to manage race-related stressors, such as racist/hostile interactions facilitated through macro- and micro-aggressions or witnessing perpetration towards others in one's racial or ethnic group (Williams, 2018).
The literature also supports a rise in discriminatory events perpetrated via the internet among young people aged 11-19, either through consumption of media (i.e., seeing a video of someone in one's racial group as a victim of police brutality or immigration detention; Tynes et al., 2019) or through consuming discriminatory messages on social media (Ermis-Demirtas et al., 2022). Schools and universities are settings where racism and discrimination are rampant (Gonzalez et al., 2014; Ingram & Wallace, 2019), signifying the importance across systems of care of ensuring students have the skills necessary to manage stress associated with PTEs (Hope et al., 2014; Cogburn et al., 2011). Race-related stressors have been demonstrated to cause psychological and emotional distress (i.e., avoidance, re-experiencing, helplessness, fear; Bryant-Davis & Ocampo, 2005; Flores et al., 2010) to a similar degree as other PTEs (e.g., abuse, neglect). For instance, epigenetic vulnerability from intergenerational trauma interacts with current racist event(s) to cause PTSS in African American (Seaton et al., 2008), Latinx (Gonzalez et al., 2014), and Asian American (Ermis-Demirtas et al., 2022; Saleem et al., 2020) young people. However, race-related traumas (Comas-Diaz et al., 2019) remain an understudied area of the literature (Jernigan & Daniel, 2011), even though most REM individuals have experienced race-related and discriminatory events before age 18 (Seaton et al., 2008). This indicates the importance of providing stress/trauma intervention at this key transition time in a REM young person's developmental trajectory. Emerging young adults experience more advanced forms of racism and are more acutely aware of the meaning of racial encounters, increasing sensitivity to these events and changing coping patterns (i.e., rebellion, social withdrawal, dissociation; Saleem et al., 2020). REM high schoolers and college students are at increased risk for PTSD and may reside in communities with fewer trauma resources (Roberts et al., 2011; Garland et al., 2005; Cappella et al., 2008). REM emerging young adults, as they straddle adolescence and adulthood, often experience a combination of race-related stressors across settings, including overt or covert discrimination in educational settings or the workplace; interactions with police or the juvenile justice system, such as racial profiling; increased exposure to race-related events on social media; collective traumatic experiences of intergenerational trauma; navigating new culturally influenced roles and relationships (e.g., during a critical period of racial and ethnic identity development); medical care discrimination; and more (Jones et al., 2020).

Cognitive-behavioral therapy (CBT) has emerged as the gold standard for young people with PTSS, with meta-analyses suggesting effectiveness in different formats against waitlist and active controls (Mavranezouli et al., 2019). CBT strategies include psychoeducation, gradual exposure (for anxiety and PTSS), addressing cognitive distortions, problem-solving, safety planning, and teaching coping skills. Reviews and meta-analyses (Dorsey et al., 2017; Mavranezouli et al., 2020) highlight that individual CBT approaches, particularly Trauma-Focused CBT (TF-CBT), have the highest level of evidence for treating PTSS in young people using rigorous methods criteria (Southam-Gerow & Prinstein, 2014). CBT treatments are recommended by the American Psychological Association for treating PTSS in adults and youth (APA, 2008; APA, 2017).
Specific CBT treatment models with the most evidence are TF-CBT (Cohen et al., 2017), an individual approach, and Cognitive Behavioral Intervention in the Schools (CBITS; Jaycox et al., 2012), a group approach used in schools. Reviews suggest CBT provided in educational settings can improve overall mental health (ages 11-19; Kavanaugh et al., 2009; d = .15-.27) and reduce PTSS (ages 6-19; Rolfsnes et al., 2011; d = .68).

In order to understand treatment efficacy at a deeper level for this age group, treatment feasibility/adherence and acceptability should be explored alongside effectiveness. Theoretical models posit that acceptability (i.e., the extent to which people receiving an intervention consider it to be appropriate based on anticipated or experiential cognitive and emotional responses to the intervention) and user engagement (i.e., adherence, under the umbrella of feasibility) interact with cultural variables to simultaneously influence effectiveness (Sekhon et al., 2017; Perski et al., 2021). Few of the meta-analyses on traditional CBT address cultural generalizability of samples, feasibility/adherence, and acceptability of interventions. There are also few studies within these reviews that included 18–19-year-olds, as most studies focused solely on adolescents 18 and under or on young adults older than 18. These gaps limit understanding of how evidence-based trauma practices work in various systems of care with various populations (i.e., REM, 18- and 19-year-old students who are of consenting age). 18- and 19-year-olds represent a uniquely important age group given the new developmental challenges often present at this time (e.g., transitioning from high school into the workforce or higher education, increased independence). The literature suggests high levels of depression and stress during this time that are predicted by existing depression and stress symptoms from high school (Marcotte et al., 2017). This transition period is characterized, especially for those with preexisting PTSS, by increases in depression, risky behaviors, and a higher likelihood of dropping out of higher education, particularly for REM young people (Boyraz et al., 2013; Rytwinski et al., 2013). Those entering college are also at increased risk for acquiring additional PTEs (Galatzer-Levy et al., 2012).

As alluded to above, there are important cultural variables that require consideration when supporting the mental health needs of young people. Considerations related to race and ethnicity (i.e., client identity factors, acknowledging racism/discrimination) were not taken into account during the development process of the "gold standard" CBT treatments described above, as shown through early empirical examinations of entirely White samples and the centering of therapy within Western/White value systems (Bernal & Scharrón-del-Rio, 2001; Kira, 2010); this is problematic given that one's cultural identity has been shown to significantly impact processing of traumatic events, help-seeking behaviors, and attitudes towards mental health treatments (Roberts et al., 2011; Yamashiro & Matsuoka, 1997). Research also suggests various pitfalls in therapeutic contexts when cultural factors are not addressed, such as impacts on acceptability and response to services (Meyer & Zane, 2013). Despite this need, there are few "well-established" general mental health treatments, and only a handful of "probably efficacious" and "possibly efficacious" treatments, for REM young people (Huey & Polo, 2008; Pina et al., 2019).
For PTSS, there are a few efficacious treatments for different subgroups (i.e., Resilient Peer Treatment for African American youth, TF-CBT, Fostering Individualized Assistance Program). However, these treatments lack manualization, replication, strong effect sizes, and/or randomized control trial evidence, making all of these treatments only "possibly efficacious". TF-CBT is an exception, as it has been defined as a "well-established" treatment for REM young people (Huey & Polo, 2008), but reservations about cultural acceptability remain (Pina et al., 2019).

Educational settings may be systems of care in which treatments are more accessible to REM young people; however, there is even less support (i.e., educational samples) for the effectiveness of CBT-based practices to treat post-traumatic stress in these settings. For instance, although a systematic review found that interventions in educational settings, particularly cognitive-behavioral therapy (CBT)-based programs, have displayed initial effectiveness and potential for success in reducing PTSS in young people, there has not been consistent inclusion of, and disaggregation by, REM status or young people in the transition between educational systems. Also, there are established barriers that arise when attempting to administer services to young people entering college in lower-resource communities (e.g., availability of qualified providers, waitlists, transportation, stigma; Kosyluk et al., 2021; Beames et al., 2021). However, it is essential to explore whether these interventions can reach REM young people who face barriers to accessing care during transitions when traditional services may not be accessible (i.e., the summer between graduating high school and starting college). Additionally, exploring the cultural acceptability of interventions is critical to ensure intervention strategies/procedures are perceived as appropriate and not in conflict with cultural values (Njardvik & Kelley, 2008; Barker et al., 2010; Parra Cardona et al., 2012). Prioritizing this within implementation of an intervention can address barriers related to cultural mistrust of mental healthcare systems, and not addressing it can hinder effectiveness (Whaley, 2001; Orengo-Aguayo et al., 2020). Cultural acceptability is also hypothesized to be related to perceived effectiveness, fidelity, and user engagement (Perski et al., 2021; Sekhon et al., 2017). Thus, some suggest that more flexible intervention modalities (e.g., self-administered formats; Elgar & McGrath, 2003) may be more feasible and acceptable for low-resource community providers to implement in order to address systemic and logistical barriers while facilitating increased access to mental health treatments for diverse youth.

Online Self-Administered Mental Health Interventions

Online, self-administered interventions are believed to be a more flexible mental health treatment option for addressing student needs across an array of systems of care, including schools. Self-administered programs inherently require little to no provider involvement, and the client can access content at their own pace (Elgar & McGrath, 2003). Computerized versions of gold standard treatments are evidenced to improve depression and anxiety symptoms in adolescent and young adult populations compared to passive controls, with small-to-medium pooled effect sizes (e.g., computerized CBT; Wickersham et al., 2022; Christ et al., 2020).
Similar results for depression and anxiety were found in a sample of Black adults (Jonassaint et al., 2020). Game-based digital interventions (CBT-based and non-CBT-based) also have foundational evidence for adults experiencing depression (Li et al., 2014; d = -.47). For college students specifically, a recent review found that 81% of digital programs were effective or partially effective (72 programs; Lattie et al., 2019), but effects for college freshmen specifically were not disaggregated. Digital programs may be more cost-effective than in-person supports (e.g., Lee et al., 2021), but this has not yet been summarized within reviews, limiting conclusions about global cost-effectiveness (Lehtimaki et al., 2021); however, the case has been made for using digital programs within stepped-care service delivery models to improve resource allocation and reduce high caseloads of mental health professionals (Taylor et al., 2020). Digital programs are generally considered to improve accessibility (i.e., more flexibility with scheduling, immediate access instead of waitlists) for young people, while also improving feasibility for providers to implement programs and collect progress/outcome data (Andersson & Titov, 2014). Digital interventions may also be able to bridge the gap for digitally marginalized young people (Piers et al., 2023). Recent national data suggest that 97% of young people ages 18-29 say they have access to the internet, 97% report having a smartphone, and 20% of those with smartphones in this age group report relying solely on their smartphone for internet access (Pew Research Center, 2024). Digital, self-administered treatments can be viewed as a replacement for in-person therapy for some individuals (Bucci et al., 2019); however, digital interventions are generally considered most useful as universal or indicated tools for people experiencing mild/moderate symptoms (i.e., a "gateway" to in-person help; Barak & Grohol, 2011), not severe symptoms.

Further, self-administered interventions are considered feasible and acceptable because they function like "self-help books" and promote autonomy by allowing individuals to control their pacing and focus on components aligned with their values/goals. Interventions that allow for online self-directed administration can be easily implemented and accessed in various settings to reach a wide array of individuals. Also, mental health providers and extra time/resources (i.e., trainings) are not required for implementation. Previous studies have found that young people, including students in higher education, enjoy the increased anonymity/privacy and decreased stigmatization of digital mental health platforms (Garrido et al., 2019; Oti & Pitt, 2021). It is critical to prioritize these benefits of digital care for 18- and 19-year-olds, who are part of the "digital generation," have access to devices, and, unlike their younger peers, can make treatment decisions without parental consent. Clinician- and client-rated acceptability of digital mental health programs is robust for adults (e.g., Gamble et al., 2021; Stallard et al., 2010), with no disaggregation for this unique 18–19 age group. Digital approaches within various settings (i.e., schools, universities, clinics) have been highlighted as particularly beneficial for REM young people who face disproportionate barriers to accessing services (e.g., transportation, cost, stigma; Lu et al., 2021; Lee et al., 2021; Lattie et al., 2019).
Digital interventions for young people have been summarized within systematic reviews (e.g., Lehtimaki et al., 2021; Zhou et al., 2021; Garrido et al., 2019; Bergin et al., 2020; Reyes-Portillo et al., 2014) and show promise for improving a variety of mental health symptoms (e.g., depression, anxiety) in community samples; in some cases, digital mental health interventions have shown comparable effect sizes to in-person counterparts (Lehtimaki et al., 2021). Table 1 summarizes student demographics/settings, effectiveness, feasibility/adherence, and acceptability findings of digital mental health studies that included school samples of 18–19-year-olds in high school (n = 6) or with a focus on freshmen in higher education settings (age 18-19; n = 3).

Table 1. Summary of Findings From CBT-Based Digital Intervention Studies with Educationally-Recruited Samples that Include 18–19-Year-Old High Schoolers or College Freshmen

Lillevoll et al., 2014; MoodGYM
- Sample demographics / Setting: N = 707 across 4 groups; Norway; 15–20-year-old students; race/ethnicity NR
- Effectiveness: No significant change in depression, self-efficacy, or self-esteem compared to control group
- Feasibility / Adherence: 8.5% accessed; non-use due to time, forgetting, and doubt of usefulness of program
- Acceptability: NR

Hetrick et al., 2017; Reframe-IT
- Sample demographics / Setting: N = 50 across two groups; Australia; 13–19 years w/ suicidal ideation in last 4 weeks; race/ethnicity NR
- Effectiveness: No significant change in depression, anxiety, suicide ideation, hopelessness, negative problem ideation, or skill acquisition
- Feasibility / Adherence: Recruitment difficulties reported; 50% dropout rate; average of 5/8 modules completed
- Acceptability: NR

Robinson et al., 2016; Reframe-IT
- Sample demographics / Setting: N = 27; Australia; 14–18 years w/ suicidal ideation in last month; 0% Aboriginal or Torres Strait Islander
- Effectiveness: Decreases in suicidal ideation*, depressive symptoms*, and hopelessness* (not against control)
- Feasibility / Adherence: 78% retention rate; reasons for dropout included feeling better, changing schools, and having schoolwork
- Acceptability: NR

Stasiak et al., 2014; The Journey
- Sample demographics / Setting: N = 34 across two groups; New Zealand; 13-18 years; 71% NZ European, 6% Maori, 9% Chinese, 6% Pacific Island, 6% South African, 3% Indian
- Effectiveness: Decreases in clinician-rated depression* and improvements in problem-solving coping* compared to a control group; no significant changes in self-reported depression compared to a control
- Feasibility / Adherence: 94% retention rate; feasible to deliver within school counseling service
- Acceptability: 89% liked it a lot or okay; 89% would recommend the program with minimal improvements

Jaycox et al., 2019; Life Improvement for Teens (LIFT)
- Sample demographics / Setting: N = 51; United States; 11-18 years; 49% Hispanic, 51% Non-Hispanic Black
- Effectiveness: Decreases in PTSS*, total difficulties*, negative thinking*, and approach coping*; no significant findings for avoidant coping, depression, or anxiety
- Feasibility / Adherence: Retention ranged 36%-100% across schools (avg. 78%); avg. of 6.37/7 chapters; students who started a chapter watched between 63-89% of video content
- Acceptability: Avg. of 1.85 (0-3 scale) on survey = moderately satisfied; middle schoolers rated lower

Calear, 2016; E-Couch Program
- Sample demographics / Setting: N = 1767 across 3 groups; Australia; 12-18 years; 3% indigenous background
- Effectiveness: No significant changes in anxiety, social anxiety, anxiety sensitivity, depression, or wellbeing
- Feasibility / Adherence: 36% completed entire 6-week program; 43% completed at least 4 weeks of program
- Acceptability: n/a

Melnyk et al., 2015; Creating Opportunities for Personal Empowerment (COPE)
- Sample demographics / Setting: N = 121 college freshmen across 2 groups; United States; 86% freshmen (average age: 18.6); 82% White, 4% Asian, 2% Black, 12% unreported
- Effectiveness: No significant change in depression; decrease in anxiety* only for students with high anxiety at baseline
- Feasibility / Adherence: 99% adherence, with required homework completion between modules
- Acceptability: 58% reported COPE was helpful and 61% would recommend it to peers

Kanuri et al., 2020; Mana Maali Digital Anxiety Program
- Sample demographics / Setting: N = 15; India; one group; 60% college freshmen aged 18-19; race/ethnicity NR
- Effectiveness: n/a
- Feasibility / Adherence: "High" usability scores; feasibility supported by qualitative data
- Acceptability: Qualitative data suggest high acceptability

Short et al., 2020; FSET Anxiety and Sleep Treatment
- Sample demographics / Setting: N = 61 undergraduates across 2 groups; U.S. (average age: 19.43); 80% White, 10% Black, 3% Asian/Pacific Islander, 7% other; with subclinical anxiety and insomnia
- Effectiveness: Decreases in anxiety* compared to control group
- Feasibility / Adherence: 100% adherence due to one-session nature of program
- Acceptability: High acceptability (credibility and expectancy)

*Statistically significant change (p < .05). NR: Not reported.
Note: Interventions were considered Cognitive Behavioral Therapy (CBT) if they self-identified as such or were focused on building coping skills, addressing cognitive and behavioral factors, and reducing symptoms of depression, anxiety, and/or PTSD as the primary goals of the intervention.

College/university-based studies were only included in Table 1 if at least 50% of the sample was identified as age 18-19 or as college freshmen. 82% of articles in Zhou and colleagues' (2021) systematic review were educational samples (i.e., high school, college). From recent review articles, n = 22 studies took place in college settings, but only three contained >50% of the sample identified as 18–19-year-olds (Levin et al., 2014; Short et al., 2020; Melnyk et al., 2015). This indicates that few studies have examined the efficacy of digital mental health interventions in educational samples with young people in the transition phase between high school and college. The data in the table also suggest digital programs have promise, but the mechanisms for symptom change (i.e., coping, negative cognitions) were not examined as consistently as targeted symptom outcomes.

Despite these promising results, feasibility/adherence and acceptability data have not been reported consistently and are mixed. Adherence and retention for digital programs with young people vary widely across studies (retention range: 36%-100%; adherence range: 32-39%; Liverpool et al., 2020; O'Dea et al., 2015; Table 1). Researchers have been challenged to pinpoint effective retention strategies (e.g., use of reminders, frequent check-ins; Lillevoll et al., 2014). A systematic review found that only 17% (14/83) of digital intervention studies reported acceptability findings (Liverpool et al., 2020), and the studies highlighted in Table 1 parallel this. No studies have yet reported on the cultural acceptability of the interventions being investigated.
This is problematic given that user engagement and adherence often interact with cultural variables to influence the effectiveness and acceptability of a digital intervention. The literature supports qualitative methods (open responses, interviews) in addition to quantitative surveys to fully capture acceptability of digital interventions, given the complex overlap with other constructs (i.e., usability, perceived effectiveness; Perski et al., 2021). There have been few digital mental health studies in the U.S. and on REM subgroups (see Table 1). More research is needed at the intersection of effectiveness, feasibility, and acceptability of digital interventions.

Digital Self-Administered CBT Programs for PTSS

Few interventions from Table 1 directly discuss PTSS following adverse events or explore outcomes tied to negative cognitions and PTSS. Depression and anxiety often co-occur with PTSS (24% of young people with PTSS report depression; Vibhakar et al., 2019). It is evident that young people today require access to effective mental health supports that explicitly use CBT approaches to build resilience following traumatic events. Adult studies with large age ranges testing the effectiveness of TF-CBT-based digital interventions (i.e., PE Coach; Trauma TIPS; PTSD Online) found improved PTSS and depression, with mostly positive, but mixed, results (Kuhn et al., 2017; Mouthaan et al., 2013; Klein et al., 2010). There are larger effect sizes for self-administered digital programs than for mobile applications in adults (Kuhn et al., 2020). Digital adult CBT programs were most effective for PTSS (g = .60-.72), depression, and anxiety (Kuester et al., 2016), with trauma protocols acting as a moderator (Lewis et al., 2019) in these adult samples. Digital trauma intervention studies in adult populations typically had wide age ranges (i.e., few studies with average age under 30) or focused on other unique sub-populations (i.e., veterans), making it hard to generalize effects to the unique 18–19-year-old emerging adult population (Wickersham et al., 2019); these adult reviews have found inconsistent evidence of effectiveness of PTSD-focused digital interventions for adults (i.e., some studies led to improvement, with small effect sizes), and had underpowered study designs and little evidence of superiority to control conditions. The most commonly studied adult digital PTSD intervention beyond veteran populations is PTSD Coach, which has inconsistent findings: it was effective only when clinician-guided (rather than fully self-guided), and it worsened outcomes when provided in a self-guided format (Wickersham et al., 2019). No meta-analyses have summarized PTSS treatments provided in digital formats for young people, only in these general adult populations (e.g., Wickersham et al., 2019; Stefanopoulou et al., 2020; Simblett et al., 2017). It is important to disaggregate findings for 18–19-year-olds due to the unique effects of digital interventions that may exist separate from larger general samples of adults. In the current "digital generation", many young people entering college are already familiar with digital tools to manage their mental health (Topoco et al., 2022).
Literature reviews examining the effects of digital interventions on college students found inconsistent, yet promising, findings on effectiveness (81% were found to be at least partially effective), but did not disaggregate effects for PTSS outcomes for college freshmen (Lattie et al., 2019). Table 2 summarizes digital CBT interventions for this unique age group that target PTSS, aggregated from the separate adolescent/child and adult systematic reviews cited above. Feasibility/adherence data were mixed, and only one study utilized a school sample, while two utilized a university/college sample. Only 50% of studies from a college digital intervention review included usability or acceptability outcomes (Lattie et al., 2019). Table 2 highlights that few digital PTSS interventions target older or recently graduated high schoolers/college freshmen. No studies specifically targeted emerging adults transitioning between educational systems. Many of the PTSS programs in Table 2 were developed for younger adolescents or general adult populations. One study found improvement for a REM sample of Hispanic college students after engaging in an online CBT-like writing task related to processing emotions tied to trauma, while controlling for acculturation (Hirai et al., 2012), but only 4% were freshmen.

Table 2. Summary of digital intervention studies targeting posttraumatic stress in young people

Life Improvement for Teens (LIFT); Jaycox et al. (2019)
- Study Design / Setting: One group pre-post pilot study; school; U.S.
- Sample / Race and Ethnicity: N = 51 students across n = 5 urban and charter schools; 7th-12th graders; 100% African American and Latinx sample
- Intervention / Implementation: 7 modules (1/week); after-school setting; CBT strategies meant for those with low-to-moderate symptoms who have experienced a stressful/traumatic event; self-administered with adult supervision in after-school environment
- Effectiveness (Measures): Decreases in PTSS* (CPSS), negative thinking* (CPTAS), and coping skills* (approach; CCSC); no significant findings for avoidant coping (CCSC), depression, or anxiety (RCADS)
- Feasibility / Adherence: Retention averaged 78% (87% omitting disciplinary cases); students on avg. completed 6.37/7 modules and watched 63-89% of the video content
- Acceptability / Cultural Acceptability: Moderately satisfied (mean = 1.85; range 0-3 across 14 items); middle school reported lower satisfaction than high school*; cultural acceptability not explored

Coping Coach; Kassam-Adams et al. (2016)
- Study Design / Setting: RCT; hospital; U.S.
- Sample / Race and Ethnicity: N = 72 pediatric medical patients who experienced an acute medical event (ages 8-12); N = 36 in intervention group; 33% Black, 63% White, 4% Other
- Intervention / Implementation: 3 modules over 6 weeks; interactive game with storyline; game-based; self-administered with regular reminders; prevention for PTSD (provided within 2 weeks of event)
- Effectiveness (Measures): Decreases in PTSS (CPSS) at 6 weeks (d = -.68) and 12 weeks (d = -.55) based on mean change scores; no significant effect against waitlist group; no significant findings for maladaptive cognitive appraisals (CPTCI) or coping (HICUPS)
- Feasibility / Adherence: Children spent around an hour engaging with activities; 97% logged on at least once; 53% completed program
- Acceptability / Cultural Acceptability: n/a

Coping Coach; Marsac et al. (2015)
- Study Design / Setting: One group feasibility study; community; hospital; U.S. and Australia
- Sample / Race and Ethnicity: N = 42 pediatric medical patients (ages 8-12); 60% White, 24% Black, 7% Asian, 10% Other; 12% Hispanic
- Intervention / Implementation: Same intervention as above; children guided by researcher with parent watching nearby; entirely self-administered with reminders to parents
- Effectiveness (Measures): n/a
- Feasibility / Adherence: Some difficulties with functionality of platform (i.e., too much text); overall engagement was high
- Acceptability / Cultural Acceptability: High user engagement and satisfaction; overall reported as easy to use (parent and child reports)

Bounce Back Now (BBN); Ruggiero et al. (2015)
- Study Design / Setting: RCT; community sample; U.S.
- Sample / Race and Ethnicity: Natural disaster-affected (N = 2000) in Missouri and Alabama (12-17 years); 62.5% White, 22.6% Black, 3.8% Other; 2.7% Hispanic
- Intervention / Implementation: 4 modules, self-selected content on website; preventative intervention with adolescent and parent components (separate); focused on depression, PTSD, and substance use; entirely self-administered with reminders
- Effectiveness (Measures): Decreases in PTSD* (NSA-PTSD) and depression* (NSA-D) symptoms post-intervention and at 1-year follow-up, when compared to control condition
- Feasibility / Adherence: 49.4% of families accessed BBN; 37.5% completed at least 1 module; 43.9% accessed all modules
- Acceptability / Cultural Acceptability: n/a

Kids and Accidents; Cox et al. (2010); Kenardy et al. (2015)
- Study Design / Setting: RCT; community sample; hospital; Australia
- Sample / Race and Ethnicity: N = 85 children (7-16 years) in Australia, N = 56 completed; no race/ethnicity data reported
- Intervention / Implementation: Prevention website combined with parent information in print; for youth who experienced unintentional injury and overnight hospitalization
- Effectiveness (Measures): Decrease in anxiety* (d = -.33) compared to control group; no significance for PTSS or depression (TSCC-A); high initial distress moderated effectiveness for PTSS*
- Feasibility / Adherence: 56% of children reported accessing website; some dropout
- Acceptability / Cultural Acceptability: Self-reported as "helpful"; not as many said it was "effective"

Survivor to Thriver; Littleton et al. (2016)
- Study Design / Setting: RCT compared to psychoeducational program; college sample; U.S.
- Sample / Race and Ethnicity: N = 87 college women with rape-related PTSD; 41% White, 22% Black, 9% Asian-American, 20% Multi-ethnic
- Intervention / Implementation: Therapist-facilitated 9-module website; CBT training and rape-specific content; tailored written/video feedback from therapist
- Effectiveness (Measures): Decrease in PTSS* (PSS-I) post-intervention and at 3-month follow-up
- Feasibility / Adherence: 84% accessed the program at least once; 15% dropped
- Acceptability / Cultural Acceptability: High satisfaction

Emotion-Focused Expressive Writing; Hirai et al. (2012)
- Study Design / Setting: 2-group trial (emotion/fact focused vs. fact focused); college sample; U.S.
- Sample / Race and Ethnicity: N = 104 Hispanic undergraduate students; 4 (4%) freshmen, 96% sophomore+
- Intervention / Implementation: Log on 3 times over 3 days, guided to write about emotions and facts related to PTE; fact group was instructed to write only about facts
- Effectiveness (Measures): Both groups decreased in PTSS* (IESR); combined group showed stronger effect at 5-week follow-up
- Feasibility / Adherence: 22% dropped out after first writing day
- Acceptability / Cultural Acceptability: n/a

Note: *Statistically significant at p < .05 level
Note: Child PTSD Symptom Scale (CPSS); Child Post-Traumatic Cognitions Inventory (CPTCI); How I Coped Under Pressure Scale (HICUPS); Pediatric Quality of Life Inventory (PedsQL); Self-Efficacy Questionnaire for Children (SEQ-C); Children's Coping Strategies Checklist (CCSC); Child Post-Trauma Attitudes Scale (CPTAS); Strengths and Difficulties Questionnaire (SDQ); Revised Children's Anxiety and Depression Scale (RCADS); National Survey of Adolescents PTSD module (NSA-PTSD) and Depression module (NSA-D); Trauma Symptoms Checklist for Children-A (TSCC-A); Impact of Events Scale Revised (IESR); PTSD Symptom Scale Interview (PSS-I)

None of the studies presented in Table 2 directly explored cultural acceptability or race-related traumas.
Perski and Short (2021) note that the literature does not provide enough information on how social/cultural norms influence the acceptability of digital interventions. For PTSD interventions to be effective for REMs, researchers need to consider how race-related trauma can manifest. For instance, an overtly racist event (i.e., harassment by law enforcement, being called a racial slur) may cause PTSS; one may experience PTSS as a result of accumulating environmental stressors (e.g., racial microaggressions, discrimination); or one may experience vicarious cultural trauma as a result of exposure to sociopolitical racist events in the media (Williams et al., 2018). These events need to be reflected within digital PTSS supports. To summarize, there are no digital interventions with a focus on trauma/stress that have been examined with 18–19-year-old REM young people in educational settings, except for the Life Improvement for Teens (LIFT) program (Jaycox et al., 2019).

Life Improvement for Teens (LIFT) Intervention Program

LIFT is an online, self-administered program for adolescents that aims to build resilience and teach coping skills for PTSS, anxiety, and depression, while helping teens learn more about stress and trauma. LIFT is the only known digital mental health intervention program that has been developed for and specifically examined with REM older adolescents (Jaycox et al., 2019; see Table 2). Its fully self-administered modality (i.e., flexibility), privacy/anonymity, and initial development with cultural considerations in mind indicate the potential for LIFT to address cultural, logistical, and accessibility barriers. LIFT contains 7 sequential modules (1/week) with videos, interactive questions, activities, and games.

An uncontrolled pre-post pilot study highlighted the positive impact of LIFT on targeted symptom outcomes and mechanisms of change in a sample (N = 51) of African American and Latinx middle and high schoolers (Jaycox et al., 2019). Variables with significant improvements from pre-to-post were PTSS, negative cognitions, and approach coping. No changes were reported on measures of depression, anxiety, or avoidant coping skills. LIFT was determined to be feasible in a high-resource implementation context (i.e., after-school setting with a supervising adult; 78% retention, average of 6/7 modules) and moderately acceptable within the REM sample (African American, Latinx; Jaycox et al., 2019). Cultural acceptability and race-related PTE exposure, however, were not directly explored. Further study of LIFT is warranted utilizing a control condition, along with further exploration of feasibility/acceptability without school-based participation supports, given that the pilot researchers administered LIFT in a school-based context with a high level of implementation support.

Pilot Studies

Given that LIFT has research demonstrating promise in an uncontrolled pilot study in a high-resource setting, a logical next step in the process is piloting the intervention against a waitlist control with increased focus on feasibility and acceptability in a way that mirrors semi-real-world implementation (i.e., no adult supervision, minimal external supports to facilitate completion). Sheridan (2014) notes that focusing on feasibility/adherence, effectiveness, and acceptability is justified within an intervention study when an intervention is still in its infancy.
Sheridan’s (2014) 10-step intervention trajectory model indicates the importance of first examining feasibility and acceptability in addition to effectiveness in a real world setting before moving into large-N randomized controlled trials. Bowen and colleagues (2009) propose specific methods that are appropriate for small-N studies examining interventions that are in an early stage of development and require efficacy testing. Examining feasibility is of great importance for pilot studies that need to provide 19 rationale via their results for larger-N randomized controlled trials. In order to provide that rationale, Bowen and colleagues (2009) note that important areas of focus include implementation (i.e., adherence), limited efficacy testing, and satisfaction of participants. The research design that can most efficiently and effectively answer these questions about an intervention in its early stages is a pre-post small-scale RCT with a waitlist control. This method helps establish whether the intervention is appropriate for a particular population/setting (Eldridge et al., 2016). Qualitative methods are also encouraged to explore acceptability and culture within intervention feasibility studies (Bowen et al., 2009). Current Study The current study used a pre-post waitlist RCT design to examine the feasibility and effectiveness of LIFT (7 chapters over 7 weeks, online, self-administered) compared to a waitlist control group on outcomes of interest (depression, anxiety, PTSS, coping skills, and negative cognitions). In addition, the acceptability/cultural acceptability of LIFT for REM young people who have recently graduated high school and are entering college was explored after completing LIFT in a self-administered fashion. Participants were randomized on a non-concurrent rolling basis either to the intervention group, who received access to LIFT immediately, or the waitlist control group, who received access to LIFT 7 weeks after consent. This study sheds light on how LIFT’s adherence, effectiveness, and acceptability may present in a more realistic context against a control condition. This program’s modality allowed for young people of consenting age to receive CBT without experiencing common barriers to access (e.g., need for parent consent, stigma associated with seeking out mental health support, transportation, cost). Overall, the current study contributes to the literature on how digital interventions can support young people with PTSS and provides context on cultural acceptability and fidelity of digital interventions. 20 CHAPTER 2 LITERATURE REVIEW This literature review provides an overview of the importance of the current study. The following sections include: (a) trauma defined, (b) evidence-based practice for trauma, (c) barriers to implementing EST’s, (d) digital self-administered EST’s, (e) mechanism for change, (f) digital CBT treatments for PTSS in young people, (g) evaluation of digital CBT programs for mental health, (h) rationale for a small-n randomized study on intervention feasibility, and (i) the current study’s research questions and hypotheses. Trauma Defined Trauma is defined as an adverse reaction to a potentially traumatic event. 
Per the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5; American Psychiatric Association [APA]) definition of post-traumatic stress disorder (PTSD), one must have experienced "exposure to actual or threatened death, serious injury, or sexual violence in one (or more) of the following ways: directly experiencing the traumatic event, witnessing, in person, the event as it occurred to others, learning that the traumatic event occurred to a close family member or friend, or experiencing repeated or extreme exposure to aversive details of the traumatic event" (APA, 2013). Post-traumatic stress symptoms (PTSS) must be present across four clusters: intrusion (i.e., recurrent memories/dreams, dissociation, distress when exposed to cues/triggers), avoidance (i.e., avoiding distressing memories, thoughts, or feelings associated with the event or external reminders of the event), negative cognitions/mood (i.e., dissociative amnesia, negative beliefs or expectations about oneself/others/the world, distortions about the cause of the event, negative emotional state, diminished interests, anhedonia), and alterations in arousal/reactivity (i.e., irritability, self-destructive behavior, hypervigilance, concentration and sleep issues; APA, 2013).

Research has suggested the salience of race-related traumas for individuals who identify as REMs (i.e., African American, Latinx; Comas-Diaz et al., 2019); specifically, racism can lead to PTSS via macro-aggressions, accumulation of microaggressions, racist macro-aggressions occurring to loved ones, or vicariously via the media, in such a way that causes worry about safety/wellbeing (Williams et al., 2018). Tools exist to examine the prevalence of PTSS following race-related events (e.g., UCONN Racial/Ethnic Stress and Trauma Survey [UnRESTS]; Williams et al., 2018), and researchers advocate for adding race-related events to screeners that assess for common PTEs (e.g., Jernigan & Daniel, 2011; culturally informed ACES model; Bernard et al., 2021). PTSS following these events, once occurring to a degree that impacts functioning, can constitute a diagnosis of PTSD and increase the likelihood of negative developmental outcomes in young people (Saleem et al., 2020).

Exposure to potentially traumatic events (PTEs) is staggeringly high among young people in the U.S., with nationally representative samples estimating that 60.8% of all individuals aged 17 and younger have experienced at least one PTE in their lifetime (Finkelhor et al., 2015). Other studies estimate that 32% of young people have experienced two or more PTEs prior to age 18 (McLaughlin et al., 2013). Individuals who identify as REMs are also more likely to be exposed to certain PTEs (e.g., death of a loved one, assault by a romantic partner or caregiver; McLaughlin et al., 2013). Lifetime prevalence of PTSD for young people who have experienced at least one PTE is estimated to be 7.6% (McLaughlin et al., 2013). Some traumas become more likely in emerging adulthood as a function of gender (e.g., peer assaults in males, sexual victimization in females), but most PTEs are equally likely across the lifespan (Finkelhor et al., 2009). Still, research shows that by the time one reaches 18, one will likely have experienced a PTE (Copeland et al., 2007). Pooled estimates suggest that 15% of young people aged 2-18 develop PTSD after exposure to a PTE, and 13% of people aged 12-25 meet the criteria for PTSD (Alisic et al., 2014; Nooner et al., 2012).
Some studies even suggest that rates of PTSD are highest for adolescents aged 16-17 specifically, given the peak levels of risk behaviors that occur during this developmental period and increased risk for PTEs (Nooner et al., 2012). REM young people are likely to experience an interaction between intergenerational trauma and negative race-based experiences that leads to unique presentations of PTSS, and potentially PTSD (Saleem et al., 2020). Within a national sample of 1,170 African American and Caribbean American young people, all reported at least one discriminatory experience in the last year, with higher perceptions of discrimination associated with higher depression (Seaton et al., 2008). Most research on racism has been conducted with African Americans; however, there are similarly high prevalence rates in other subgroups. A study of N = 179 Latinx students uncovered that 80% reported at least one experience with discrimination at school (Gonzalez et al., 2014), with research also supporting potential post-traumatic effects of discrimination for Mexican American young people (Flores et al., 2010). The prevalence of online race-related traumatic events (i.e., vicarious experiences via the media) is estimated to be high in African American and Latinx students aged 11-19 and is linked to increased depression and PTSS (Tynes et al., 2019). A study of N = 116 Asian students aged 12-19 reported that 50.9% of the sample had experienced at least one race- or ethnicity-based discriminatory experience in their lifetime and 73% had experienced a separate discriminatory experience (either offline or online) related to the COVID-19 pandemic; further, discrimination related to COVID-19 predicted PTSS in this sample of Asian Americans (Ermis-Demirtas et al., 2022). Discrimination experiences may differ across groups but are still highly salient in the developmental process for REM young people (Garcia-Coll et al., 1996).

Untreated PTSS can increase the likelihood of negative social, emotional, academic, and relational outcomes across the settings of a young person's life (i.e., home, school, work, social contexts; Perfect et al., 2016; Copeland et al., 2007; Villalta et al., 2018). It is important for various systems of care that work with young people, including educational systems, to provide trauma supports to students due to the strong connection between PTSS and school problems (e.g., failing grades; Nooner et al., 2012; Perfect et al., 2016). PTSD and high exposure to PTEs are commonly comorbid with other mental health concerns such as depression and anxiety in young people (Famularo et al., 1996; Porche et al., 2016), indicating the importance of developing treatments that are effective in reducing commonly overlapping concerns. There are particularly unique and severe consequences of untreated PTSS in young people, such as increased rates of substance abuse, suicide, and risk behaviors that can contribute to increased health problems (Nooner et al., 2012). Young people experiencing race-related traumas and PTSS are also more likely to experience negative outcomes (e.g., higher depression; Tynes et al., 2019), especially those entering college (Cusack et al., 2019). Without proper skills in place, one may have difficulty coping with feelings/thoughts tied to a PTE and thus may experience PTSS that impedes functioning. This suggests that emerging adults could benefit from trauma-focused supports.
Evidence-Based Practice for Trauma

Evidence-Based Practice in Psychology (EBPP) is defined by the APA as "the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences" (2005). This highlights the importance of considering individual differences in treatment and ensuring that treatment is acceptable and culturally sensitive, not merely effective for targeted outcomes. Standardized criteria have been developed to determine which treatments qualify as an "empirically supported treatment" (EST). EBPP is said to include ESTs as specific treatments that have established efficacy in randomized control trial (RCT) studies (the larger EBPP umbrella includes general practices such as establishing therapeutic alliance; APA Presidential Task Force on Evidence Based Practice, 2006).

Southam-Gerow and Prinstein (2014) have provided guidelines to accomplish this goal of evaluating which treatments are ESTs, expanding on previous standardization criteria (e.g., Chambless et al., 1996) with a more intentional focus on the methods used to obtain results. Research methods that are most often used to explore the characteristics of interventions include RCTs, meta-analyses, single-case designs, systematic case studies, clinical observation, ethnography, and qualitative methods (Greenberg & Newman, 1996). RCTs are seen as the "gold standard" for determining treatment efficacy via comparison to a control group, and the best way to initially determine if a treatment is efficacious (APA, 2002).

Using Southam-Gerow and Prinstein's (2014) criteria, treatment families (rather than "brand name" treatments) are distinguished as either well-established (Level 1; highest distinction of efficacy), probably efficacious (Level 2), possibly efficacious (Level 3), or experimental (Level 4). Each intervention is provided one distinction based on both methodological criteria and evidence criteria. For a well-established treatment, the methodological criteria include RCT design, manualization, a defined population and problem, reliable and valid measures, and appropriate data analyses, in addition to adequacy of sample size to detect effects. The well-established evidence criteria entail statistical significance compared to a placebo or active treatment (or equivalence to an already well-established treatment within the experiment) and exploration by at least two independent research settings and teams (Southam-Gerow & Prinstein, 2014). A Level 1 treatment is considered to be an EST.

While APA has an official document outlining best practices for the treatment of PTSD in adults (2017), there is not the same degree of official guidance offered by APA on how to treat PTSD in young people (i.e., ESTs for PTSD). There is outdated guidance from APA that suggests that CBT techniques are the most effective course of treatment to pursue for young people who are experiencing PTSS (2008). Thus, a starting point to identify ESTs for young people with PTSS is to use the guidelines set forth by Southam-Gerow and Prinstein (2014). Dorsey and colleagues (2017) provided a review of psychosocial treatments for young people exposed to traumatic events using Southam-Gerow and Prinstein's (2014) criteria.
They expanded on previous summaries of the literature (Silverman et al., 2008) by also including rigorous studies that were non-RCTs, including studies that measured trauma/mental health symptom impact, and by providing evaluations of classes of treatments instead of specific manualized treatments (N = 37 studies; ages 0-18). Well-established treatments included variations of CBT with similar shared components (psychoeducation, emotion regulation strategy training, imaginal exposure, in vivo exposure, cognitive processing, and problem solving; Dorsey et al., 2017). A meta-analysis of CBT-based interventions across settings for young people through age 18 also found significant improvement in PTSS and depression symptoms for young people following TF-CBT (d = -.66 for PTSS; d = -.44 for depression), CBITS (d = -.53 for PTSS; d = -.41 for depression), and Teaching Recovery Techniques (d = -.32 for PTSS; d = -.25 for depression); there were more wide-ranging, yet still positive and encouraging, results for other CBT-based treatments (e.g., prolonged exposure therapy for adolescents, ERASE Stress; Yohannan et al., 2022). A recent network meta-analysis indicated similar findings: cognitive therapy (standardized mean difference [SMD] = -2.94), the Cohen TF-CBT model/Cognitive Processing Therapy (CPT; SMD = -1.74), narrative exposure (SMD = -1.49), and prolonged exposure (SMD = -1.34) provided via individual TF-CBT were more effective than waitlist conditions in reducing PTSS to below clinically significant levels in individuals through age 18 (N = 32 studies, 17 treatments, 2,260 participants; Mavranezouli et al., 2020), with less robust findings at 1-4-month follow-ups (SMD = -1.74 for Cohen TF-CBT/CPT; SMD = -.94 for narrative exposure; SMD = -.92 for prolonged exposure; no SMD reported for cognitive therapy). The critical CBT components for trauma included cognitive therapy, narrative exposure, exposure/prolonged exposure, and/or the Cohen TF-CBT model (Mavranezouli et al., 2020). Neither review adequately covered the unique 18-19-year-old demographic. One system of care in which young people can easily access mental health treatment is their education (high schools, college campuses). A systematic review of group CBT programs in schools for young people (ages 11-19) suggested that, across N = 17 randomized controlled trials, CBT-based treatments were effective at reducing depression in the short term (i.e., three months) when provided in primary and secondary educational environments; however, the effect sizes were small (d = .15-.27) and there was inconsistent data to support long-term gains (Kavanaugh et al., 2009). Other systematic reviews of school-based trauma interventions for young people up to age 19 found that CBT had a medium-to-large positive effect (N = 20/21 studies had significant reductions in PTSS; d = .68; Yohannan & Carlson, 2019; Rolfsnes & Idsoe, 2011). At the college level, a recent meta-analysis (N = 51 RCT articles) suggests that CBT-based treatments also demonstrate moderate effects for depression (g = -.60) and anxiety (g = -.48; Huang et al., 2018), but PTSS intervention findings were inconsistent and limited given a small sample of three articles. To determine if a treatment is an EST for a given population, it needs to be explored whether all demographic characteristics of that population are represented in the research (Southam-Gerow & Prinstein, 2014).
Thus, there are significant limitations to the conclusions noted above for REMs and 18-19-year-olds. Specifically, many of the studies/reviews cited above had strict inclusion criteria (e.g., only single-incident traumas, did not include race-related traumas) or did not disaggregate by race/ethnicity. Studies that included REMs consistently had smaller samples (Dorsey et al., 2017). Young people aged 18-19 were also excluded from reviews (Mavranezouli et al., 2020 excluded 18-19; Dorsey et al., 2017 excluded 19+) or were often left out of trials that prioritized either youth younger than 18 or adults older than 18. Despite the clear need for supports tailored towards REM emerging adults with PTSS, the research does not robustly support any one treatment in a school or university setting. In university settings, a recent systematic review found four studies examining CBT interventions for PTSD in college samples, and only one of the four found a large effect on PTSS following treatment (g = .92), indicating that more research is needed on PTSD interventions in college populations (Barnett et al., 2019); however, CBT-based treatments in college settings for anxiety (17 studies; g = .62, medium effect) and depression (24 studies; g = .71) displayed more robust significance across studies. Few studies from this review differentiated results based on race or ethnicity, and none focused on freshmen specifically. Barriers to Implementing ESTs There are various barriers, stemming from the widening research-to-practice gap, that can hinder the effectiveness of an EST. Barriers experienced by stakeholders include accessibility, systems-level, logistical/implementation, and cultural barriers. Accessibility Barriers. Research has demonstrated that a high percentage of young people with mental health symptoms, especially those living in low-income neighborhoods and those identifying as REMs (holding SES constant), experience more barriers to initiating and completing treatment (Saloner et al., 2014). This is due to a combination of factors: structural factors (i.e., poverty, systemic racism/discrimination, access to insurance, waitlists), perceptions about mental health (i.e., cultural factors influencing conceptualization of symptoms, knowledge about symptom severity), and perceptions about services (i.e., lack of trust in providers, stigma, cultural values conflicting with treatment approach; Owens et al., 2002). In a sample of 479 college/university students from 23 institutions, 91% self-reported experiencing barriers to accessing mental health care on their campuses and therefore "preferred to deal with issues on my own" (Topoco et al., 2022). Systems-Level Barriers. Educational settings (i.e., schools, universities) are a natural setting in which to intervene with young people experiencing mental health challenges. Adolescents are more likely to receive mental health treatment via their educational environment than to be identified and receive treatments in the community; this trend holds true for REM and low-income populations (Lu, 2020; Kataoka et al., 2003). However, school-based providers have reported that more than 50% of the students on their caseload have at least one PTE they are aware of, and over 50% are actively showing a symptom of PTSD (Connors et al., 2021); similarly, a university sample found that 70% of freshmen reported experiencing at least one PTE and 34% displayed significant PTSS (Cusack et al., 2019).
There is a shortage of qualified mental health professionals in these educational settings who can provide services, and university settings are especially understaffed to address increasing needs for services (Xiao et al., 2017). Educational settings need to employ more innovative and resource-efficient approaches to trauma supports in order to meet this need (Chafouleas et al., 2016). Additionally, barriers exist for students who are in a transition period between two systems (e.g., the summer between graduating high school and starting college). Logistical/Implementation Barriers. Many of the school-based and university-based programs highlighted above take place in an individual or group setting where a student attends regular sessions. A qualitative study examining middle school mental health providers' experiences implementing a common group program, CBITS, uncovered difficulties with competing responsibilities (limited time/flexibility) and logistics (scheduling, space, materials) when implementing an intensive program in person (Langley et al., 2010; Atkinson et al., 2013). Educational staff have also reported difficulties with stigma associated with participation in a trauma program (Connors et al., 2021; Langley et al., 2010). For education systems with fewer resources (e.g., time, personnel), supporting individuals or small groups without waitlists is not always feasible. Also, as mentioned above, many young people face logistical challenges in accessing direct services during the transition period between high school and college (Cleary et al., 2011). Cultural Barriers. REM young people experience more frequent barriers to accessing mental health care than their White, non-Hispanic counterparts (Fraynt et al., 2014) for many reasons. As noted above, cultural factors have a strong influence on one's understanding of their mental health symptoms, their willingness to engage with a treatment, and their perception of a given treatment. Reactions to PTEs are heavily influenced by cultural norms, which can impact PTSS presentations or help-seeking (Roberts et al., 2011). For instance, there may be salient feelings of stigma or mistrust due to past experiences in discriminatory or low-quality systems of care (Whaley, 2001). This has been found in prior TF-CBT studies (Orengo-Aguayo et al., 2020) and in samples of college freshmen from underrepresented backgrounds, who reported increased stigma related to seeking out traditional mental health services through their university despite increased feelings of depression in their first semester (Kook et al., 2023). However, culture, race, and ethnicity are not consistently considered by providers when implementing treatments. Some suggest that the diagnostic criteria for PTSD and the assumptions within treatments such as CBT are Westernized and White-centered (i.e., focused on a single trauma, developed using White and Western samples; Ennis et al., 2019; Kira, 2010; Bernal & Scharrón-del-Rio, 2001). Others suggest that CBT can simultaneously improve the cultural acceptability of an intervention by helping clients develop a sense of control (Kira, 2010). Examples of group CBT components that may not align with other cultures include the expectations to face the traumatic event by talking about it with others (sharing with others as a medium to symptom relief; Nicolas et al., 2015) and to progress through universal stages of trauma (Ennis et al., 2019).
If culture is not addressed within an intervention and the client perceives cultural factors to be salient to their needs, there is potential for negative therapeutic outcomes such as dissatisfaction, a poor therapeutic alliance, and a lower likelihood of achieving treatment goals (Meyer & Zane, 2013; Huang & Zane, 2016). Despite this need, no known CBT interventions for REM young people have included race-related PTEs. Given the many barriers that REM individuals face, there is a clear need for flexible and innovative treatment modalities. Digital Self-Administered ESTs Thus, researchers have begun to explore the possibilities of leveraging technology to remove barriers to care, which may help providers reach more individuals, including those at risk for mental health challenges. The larger context of "distance-delivered" therapy encompasses online, self-administered approaches as well as telehealth approaches (phone calls, videoconferencing apps). Distance-delivered practices are generally more accessible (fewer waitlists and quicker progression from inquiry to point of access to treatment) and more cost-effective for providers than in-person supports; another advantage is increased efficiency for clinicians to monitor client progress/outcomes and to implement a program as part of a stepped care model to improve allocation of resources (Andersson & Titov, 2014). A cost-benefit analysis of a digital mental health program (MoodGym) for Australian young people aged 11-17, compared to an active control condition, found significant net savings (Lee et al., 2021). As noted above, distance-delivered interventions exist on a spectrum from fully self-administered (i.e., no therapist involvement) to therapist-guided (e.g., self-administered by the client with therapist moderation) to entirely therapist-administered (Elgar & McGrath, 2003). Fully self-administered programs provide evidence-based components of treatments in a more flexible manner, where clients can autonomously complete them at their own pace rather than completing sessions at a specific time. Digital self-administered interventions leverage videos, audio recordings, virtual games, and other interactive components to teach evidence-based strategies (e.g., coping skills). A combined digital and self-administered modality can ameliorate many barriers faced by students (e.g., transportation, scheduling, stigma, waitlists) and mental health professionals (e.g., time/resource constraints; Kazdin & Blase, 2011). Meta-analyses suggest that computerized CBT (cCBT; digital self-administered CBT) can reduce depression and anxiety in young people. A systematic review (N = 18 studies) of digital mental health interventions found that cCBT was the only effective treatment against passive controls for depression and anxiety (not against active controls; Lehtimaki et al., 2021). Two more recent meta-analyses on cCBT found consistent results. Specifically, one meta-analysis of N = 24 studies examined the response of young people ages 12-25 to cCBT and found effects of g = .51 for depression and g = .44 for anxiety (Christ et al., 2020). Another meta-analysis of N = 16 cCBT studies of young people ages 11-19 found effect sizes, expressed as standardized mean differences (SMD), of -0.23 for depression and -0.21 for anxiety (Wickersham et al., 2022). These results held against passive controls, but not active controls (e.g., in-person CBT or other treatment).
This was replicated in an African American sample of adults aged 18-75 (d = -.47 for depression, d = -.54 for anxiety; Jonassaint et al., 2020), but no other studies have examined cCBT in racial or ethnic minority young people. A systematic review of RCTs examining web-based CBT interventions (N = 17) with individuals aged 7-25 found effect sizes against control conditions falling between 0.15 and 3.65 for internalizing symptoms (Reyes-Portillo et al., 2014); however, no interventions were "well established" (Southam-Gerow & Prinstein, 2014). At the college level, a systematic review suggests that CBT-based digital programs are effective (81% of programs were partially or very effective; Lattie et al., 2019). None of these reviews summarized digital CBT interventions for PTSS in young people. Some have found that a degree of therapist support or an in-person component within a self-administered intervention produces better outcomes for adults (Olthuis et al., 2016; Lehtimaki et al., 2021), but this finding is inconsistent within samples of young people (Christ et al., 2020). Lehtimaki and colleagues (2021) discussed this in the context of adherence and reported that in the few reviews that did report adherence data (6/18 articles), there were wide ranges (10%-94% completion). Even fewer report retention and adherence results in samples of higher education students (i.e., N = 3 studies from a scoping review reported drop-out rates; range 20%-54%; Oti & Pitt, 2021). Few studies report on acceptability. Game-based CBT has foundational research for improving depression (Li & Foo, 2014), with interactive components improving user engagement for digital CBT (Garrido et al., 2019). Other design elements that have been tied to acceptability/adherence for digital programs in young people, including college students, are videos, personalization, flexibility, anonymity and privacy, non-judgmental language, options to receive text reminders, limited text, surface credibility, an age-appropriate aesthetic, simple tasks, clear instructions, and the availability of self-monitoring (Wozney et al., 2017; Liverpool et al., 2020; Garrido et al., 2019; Oti & Pitt, 2021). A sample of 479 college students across 23 colleges and universities suggested that college students see the benefits of digital mental health interventions, and 74% of the sample had used some sort of digital tool in the past to manage their physical/mental health (Topoco et al., 2021). Despite the strong evidence for digital mental health interventions for young people, few studies explore the implementation of programs in educational samples (i.e., college, university). Digital interventions can span settings in terms of their ease of access; however, there are many school-specific considerations related to implementation (i.e., feasibility, adherence) that warrant exploration, given that educational systems are often the first system of care in which young people access mental health treatment (Lu et al., 2021). Few studies have explored whether samples of young people can complete and benefit from self-administered programs without external support. To exemplify this, systematic reviews on digital mental health interventions for young people (Lehtimaki et al., 2021; Garrido et al., 2019; Clarke et al., 2015) cited only a few articles that included educational samples, and a recent systematic review in college populations yielded limited articles with a focus on college freshmen (Lattie et al., 2019).
Only nine studies from these reviews included 18-19-year-old high schoolers or a focus on college freshmen and provided a CBT intervention in a self-administered format (see Table 1). These included: MoodGYM, Reframe-IT (2 studies), The Journey, the E-Couch Anxiety and Worry Program, Life Improvement for Teens (LIFT), Creating Opportunities for Personal Empowerment, the Mana Maali Digital Anxiety Program, and the FSET Anxiety and Sleep Treatment. Three took place in colleges with a focus (> 50% of sample) on college freshmen (age 18-19; Short et al., 2020; Melnyk et al., 2015; Kanuri et al., 2020). Only four of the nine studies provided effectiveness, feasibility/adherence, and acceptability data together, and five found significant results (four against a control) for either PTSS, depression, anxiety, or coping. However, results by outcome area were inconsistent across studies. All nine studies reported some feasibility/adherence data, but the type of descriptive information reported was not consistent (e.g., retention rates, number of modules accessed, percentage of videos watched), making it difficult to summarize. In a systematic review of digital mental health interventions in college populations, one component of feasibility, uptake, was variable (range: 32%-100%) but overall high among those who initiated use of the digital tool (D'Adamo et al., 2023). Retention rates ranged from 36% to 94%, with some studies finding that weekly email reminders did not improve adherence (e.g., Lillevoll et al., 2014). Six studies took place internationally, and two studies examined a sample of 100% REMs (Jaycox et al., 2019; Hirai et al., 2012), limiting generalizability. Acceptability information was collected in six of the nine studies, with overall moderate satisfaction reported using a variety of tools (e.g., 89% of participants "liked [the intervention] a lot" or thought it was "just okay"; an average of 1.85 on a 14-item survey using a 0-3 scale). The article with the most information on acceptability in college freshmen populations included check-ins with a therapist external to the intervention, which was a substantial contributor to acceptability (Hirai et al., 2012). Only partial acceptability information was reported for all other studies; for instance, qualitative data from older adolescents uncovered a tension between the increased privacy/anonymity associated with digital interventions and a desire for human interaction or moderation by a professional in addition to the online content (Garrido et al., 2019; Lehtimaki et al., 2021). This suggests that comprehensive research on the acceptability of and adherence to digital mental health programs, both qualitative and quantitative, is necessary. Only two studies utilized an educational sample made up entirely of REM students, examined outcomes related to PTSS, and provided a fully self-administered, digital, trauma-focused intervention based on CBT principles (Hirai et al., 2012; Jaycox et al., 2019; see Table 2); both found at least moderate acceptability. Mechanisms for Change The goal of both in-person and digital trauma-focused CBT programs is to reduce trauma-related and associated mental health symptomology. In order to facilitate symptom change for PTSS, certain mechanisms of change need to be targeted first within treatments.
For instance, negative cognitions about the world and oneself are highly correlated with levels of PTSS, such that negative post-trauma cognitions mediate the relationship between trauma-focused treatments and PTSS outcomes (Brown et al., 2019); this relationship holds true cross-culturally (Berzengi et al., 2017). Similar mediation findings for depression and PTSS have emerged within samples of traumatized adolescents engaging in exposure and client-centered therapies (McLean et al., 2015; Jensen et al., 2018). Researchers highlight the importance of measuring beliefs about oneself and about the world when measuring negative post-trauma cognitions, to more comprehensively capture the range of cognitive distortions that individuals may be experiencing. The Child Post-Traumatic Cognitions Inventory (CPTCI; Meiser-Stedman et al., 2009; McKinnon et al., 2016) has subscales for both types and has been used in trauma intervention research with young people (Kassam-Adams et al., 2016). Furthermore, individuals who have experienced a stressful event or PTE require coping skills to manage difficult thoughts and feelings following the event. Coping researchers categorize coping strategies as either avoidant or approach-oriented (Compas et al., 2001). Most evidence-based trauma treatments use a cognitive-behavioral framework to change coping behaviors and challenge negative/inaccurate cognitions, which will ideally facilitate a reduction in post-traumatic stress and associated mental health (i.e., depressive, anxious) symptoms. Higher levels of perceived stress in adolescents are linked to indicators of mental health (i.e., internalizing symptoms) via the amount and type of coping skills used (Galaif et al., 2003; Suldo et al., 2008). Certain coping skills can act as protective factors against developing PTSD, but results are mixed regarding which types of coping skills are most effective in reducing PTSS. A study of adolescent girls (ages 12-18) found that participants with higher trauma exposure had less PTSS, but only when more avoidant coping was endorsed (Elzy et al., 2013). Avoidant coping is often not supported as an adaptive and helpful type of coping for reducing PTSS, but the literature supports some level of nuance (potential moderators: trauma type [whether the individual has control], gender; Elzy et al., 2013; Tiet et al., 2006). One can shape avoidant strategies to be healthy and decrease unhealthy avoidance (e.g., substance use). However, the literature robustly supports approach coping (i.e., problem-solving) for improving psychological distress in young people (Clarke, 2006). There are tools for examining coping skills in young people that measure both approach and avoidant coping (e.g., the Children's Coping Strategies Checklist; CCSC; Ayers et al., 1996) and have been used in digital trauma intervention studies (Jaycox et al., 2019). Literature in adult populations suggests that one's confidence in one's ability to cope with trauma-related symptoms and the activation of these skills are predictors of PTSD symptom reduction (Yeager & Benight, 2022). Coping with traumatic events becomes more nuanced when exploring how members of racial and ethnic groups cope with racial trauma. The coping patterns required to moderate positive mental health outcomes after a non-race-related traumatic event may not directly translate to which coping patterns are adaptive versus maladaptive for REMs.
Coping with systemic racism, acts of discrimination, and cumulative microaggressions may require different coping resources. The literature is not clear on which coping patterns may be adaptive for Black individuals, for instance. Some suggest that an active (approach) coping style is more effective at buffering the negative effects of racial discrimination on mental health (i.e., PTSS, depression) than a passive (avoidant) coping style (Mekawi et al., 2022). However, others suggest avoidance may be adaptive in the short term in uncontrollable situations, such as overtly racist experiences (Gaylord-Harden et al., 2008). This suggests the importance of exploring whether trauma treatments that aim to increase approach behaviors and decrease avoidance are perceived as helpful by REM young people who are experiencing stressful race-related events. Digital CBT Treatments for PTSS in Young People As noted above, no systematic reviews have summarized digital treatments for PTSS in young people, only those in general adult populations without proper disaggregation (Wickersham et al., 2019; Stefanopoulou et al., 2020). One meta-analysis on digital interventions for PTSD (CBT and non-CBT) in adults summarized N = 33 RCTs and found significant improvement in PTSS compared to an active control condition (SMD = .35; Barnett et al., 2021), but it also lacked proper disaggregation for young emerging adults. Many individual studies have explored the efficacy, feasibility/adherence, and acceptability of digital trauma interventions in different contexts (see Table 2). One digital CBT intervention, Kids and Accidents, is a preventative program provided via a website to young people who have experienced a medical PTE (i.e., unintentional injury, hospitalization; Cox et al., 2010; Kenardy et al., 2015). The program consists of psychoeducation, relaxation strategies, coping statements, problem solving, identifying strengths, pleasant events, and reflection on the event to promote growth (Cox et al., 2010). An efficacy study in Australia (N = 56 individuals aged 7-16) found a significant decrease in anxiety following the intervention compared to a control group, but not in PTSS (which trended positively). Fifty-six percent of the sample accessed the site, and acceptability data suggested moderate satisfaction (more participants reported that it was "helpful" than "effective"). Another study found that high initial distress moderated the effectiveness of Kids and Accidents for PTSS (Kenardy et al., 2015). This intervention is for ages 7-16 in a hospital setting and has not been examined in a racial/ethnic minority or school sample. It also has parent components, making it difficult to utilize when trying to reduce barriers to care for young people. Another digital CBT intervention, Coping Coach, is a preventative program that aims to improve psychological functioning (ages 8-12) following a distressing event by first improving coping skills (i.e., decreasing avoidant coping) and improving the accuracy of cognitive appraisals (i.e., promoting appraisals that are adaptive) using an online game-like structure (Kassam-Adams et al., 2016). Components include identifying feelings, understanding connections between thoughts/feelings/behaviors, recognizing unhelpful thoughts, developing adaptive appraisals, and discussing the pros/cons of avoidant behaviors (Kassam-Adams et al., 2016).
One published RCT found significant decreases in PTSS at 6 and 12 weeks post-intervention within the N = 36 participants who completed the intervention, but this effect was not consistent when compared to a waitlist group and controlling for baseline distress (Kassam-Adams et al., 2016). There were also no significant findings related to improvement in post-trauma cognitive appraisals or coping. Feasibility, adherence, and acceptability data from this study and a separate feasibility study in the U.S. and Australia (Marsac et al., 2015) suggested that engagement was moderate-to-high (53% completed the program, 97% logged on at least once, an average of one hour spent on activities) and satisfaction was also high. These studies were both in hospital settings and had samples of < 50% racial or ethnic minorities. Another intervention, Bounce Back Now, is a preventative CBT program for young people aged 12-17 who have experienced a natural disaster (Ruggiero et al., 2015). This intervention is primarily psychoeducational but includes interactive modules on stress (i.e., PTSS), substance use (i.e., smoking and alcohol), and mood (i.e., depression). The PTSS module includes exposures and activities to support reduction in avoidant coping, improvements in approach coping, and anxiety reduction. The depression module facilitates behavioral activation. The two substance use modules focus on motivational enhancement and cognitive-behavioral activities. The one published RCT on this intervention found significant decreases in PTSS and depression at post-intervention and at a one-year follow-up when compared to a control group (Ruggiero et al., 2015). Feasibility data suggested moderate adherence (44% accessed all modules, 38% accessed at least one module), and no acceptability information was collected. Another intervention, From Survivor to Thriver, is a therapist-facilitated TF-CBT program specifically for PTSD related to rape (Littleton et al., 2017) that has been analyzed in college samples. This program comprises nine modules and includes traditional CBT activities with a focus on common rape-related traumatic reactions (e.g., self-blame), plus individually tailored written and video feedback from a program therapist. Results from an RCT with college women (freshman status not disaggregated) showed that this program had a large effect for improving PTSS, and a medium/large effect for improving depression and anxiety, immediately following the intervention and at a 3-month follow-up. "High satisfaction" was reported, in addition to a strong perception of the working therapeutic alliance; 84% of participants accessed the program at least once, and 15% of participants dropped out (Littleton et al., 2017). These results should be contextualized within the higher external support provided compared to other studies. Another intervention, Emotion-Focused Expressive Writing, is an expressive writing protocol loosely based in CBT and the theoretical framework that writing about the emotions related to a trauma, in addition to the facts of the trauma, improves exposure and reduces PTSS (Hirai et al., 2012). The program is a three-day writing protocol in which participants log on at least once a day over three days to write, following instructions that prompt them to describe the emotions and facts tied to their traumatic experience, as an exposure modality.
A two-group trial examining the effectiveness of this program against a facts-only writing control group found improvements in PTSS compared to the control group and at a 5-week follow-up. Twenty-two percent dropped out of the study after accessing the first writing day, and no acceptability data were reported as part of the study. Life Improvement for Teens (LIFT) LIFT was developed for students experiencing PTSS and/or difficulties with coping after a variety of stressful events or PTEs. Components include psychoeducation, relaxation skills, identifying and challenging dysfunctional thinking, approaching rather than avoiding trauma reminders, developing a trauma narrative/anxiety hierarchy, and social problem-solving. These are facilitated via seven sequential modules/chapters (see Table 3). LIFT has two tracks, a stress track and a trauma track, which share very similar content. All participants start on the stress track by default, but individuals are routed onto the trauma track if they meet a PTSS threshold (> 10 on the embedded CPSS survey; at least one PTE reported). The literature supports that one way to make trauma treatments more culturally sensitive is to not force youth to conceptualize their experiences as "trauma," but rather to allow them to conceptualize them as "stress," in order to be responsive to cultural differences in attitudes about mental health (Langley et al., 2013).

Table 3. Life Improvement for Teens (LIFT) Chapter Descriptions
Chapter 1, Welcome to LIFT: Introduction and self-assessment of past experiences, emotions, and behaviors.
Chapter 2, Feelings: Reducing stress through relaxation and activity scheduling.
Chapter 3, Thoughts: Understanding the link between thoughts and feelings; how to combat unhelpful thoughts.
Chapter 4, Facing fears: Reducing avoidance of anxiety-provoking situations.
Chapter 5, Processing trauma: Link between thoughts and actions; writing or talking about stress and trauma (narrative).
Chapter 6, Problem solving: Social problem-solving; writing about stress and trauma (trauma narrative).
Chapter 7, Putting it all together: Putting it all together (digital game).

There is one published trial examining LIFT. Jaycox and colleagues (2019) explored the effectiveness, feasibility, and general acceptability of LIFT in middle and high school REM students (N = 51; 100% African American or Latinx; average age: 15.02 [SD = 1.86]; 25.5% 12th graders). Students completed the modules independently in a supervised after-school computer lab setting once a week for 7 weeks. Results displayed significant decreases in PTSS and negative cognitions and significant increases in approach coping (direct problem solving, cognitive decision making, optimistic thinking) from pre to post. There were no significant findings for depression, anxiety, or avoidant coping. This may be due to low distress at baseline, potentially minimizing treatment effects; LIFT is described as appropriate for students who are experiencing a certain level of symptoms or impairment due to stress/trauma, not as a universal intervention or as one for those with severe mental health needs, suggesting the importance of screening participants for match with intervention criteria (i.e., using cutoff scores for inclusion; Foa & Meadows, 1997). Results also suggested high fidelity (an average of 6/7 modules completed; overall 63-89% of video content watched if a video was started) and moderate retention (average 78%).
However, LIFT was provided with high implementation support, in an after-school setting with an adult, signifying that the feasibility and adherence characteristics of LIFT in its intended format (i.e., fully self-administered) have not yet been explored. The literature is inconclusive regarding whether online, self-administered interventions are appropriate and feasible for young people (e.g., Lillevoll et al., 2014) given the sustained intrinsic motivation needed to reach adequate fidelity (goal of 70% adherence; see the feasibility/adherence columns in Tables 1 and 2). Additionally, LIFT is the only known intervention to have been examined with REM young people that included 18-year-olds experiencing PTSS in an educational setting (Jaycox et al., 2019; see Table 2). LIFT was originally developed alongside REM young people with cultural factors in mind and displayed initial effectiveness for REM (e.g., African American, Latinx) high school students in the prior study (Jaycox et al., 2019). However, this was not against a control condition, indicating the importance of exploring the true efficacy of this intervention for this population within future research. A quantitative survey was used to measure the acceptability of LIFT (e.g., perceived effectiveness, platform user-friendliness). Results suggested LIFT had moderate acceptability (M = 1.85 on a 0-3 scale). Cultural acceptability was not explored; however, scores on the satisfaction questionnaire did not vary by race/ethnicity of participants. Middle school students were significantly less satisfied with LIFT than high school students. These findings indicate a need to explore the cultural and general acceptability of LIFT in depth (i.e., qualitatively) with REM young people. Evaluation of Digital CBT Programs for Mental Health User engagement and adherence (i.e., exposure to intervention components) are linked to the effectiveness of interventions targeting internalizing symptoms, generally and within educational samples (N = 97 articles; positive association in 30%-100% of cases; Rojas-Andrade & Bahamondes, 2019). Figure 1 displays the theorized dynamic relationships between the constructs of effectiveness, engagement/fidelity, and acceptability within digital intervention research (Perski & Short, 2021).

Figure 1. Dynamic Model of Engagement/Adherence, Effectiveness, and Acceptability for Digital Interventions (adapted from Perski & Short, 2021). [The figure displays Acceptability (Perceived Effectiveness, Ethicality, Usability, Perceived Norms, Want/Need, Burden, Knowledge, Opportunity Costs, Affective Attitude), Engagement/Adherence (amount of use, depth of use, interest, attention, enjoyment; opportunity to engage), and Effectiveness.]

Acceptability is "the extent to which people receiving a healthcare intervention consider it to be appropriate, based on experiential cognitive and emotional responses to the intervention" (Sekhon et al., 2017), which is made up of the following Perski and Short sub-components: Affective Attitude, Burden, Ethicality, Want/Need, Knowledge, Usability, Opportunity Costs, Perceived Norms, and Perceived Effectiveness. Cultural factors are intertwined with these elements of the cultural acceptability of an intervention (see Figure 1), which in turn influence engagement and effectiveness.
Specifically, sociocultural context (socioeconomic class, race, ethnicity, geographic region) and cultural norms/values interact with acceptability, which can predict participation via "motivation to use" and "user engagement" and ultimately impact "perceived effectiveness" (Sekhon et al., 2017). Further, if an intervention does not fit within a cultural value system or does not have face validity (i.e., does not appear likely to achieve its purpose), then there may be a decrease in motivation to use, a decrease in engagement with the program, and less likelihood that the intervention will be received as intended (i.e., low fidelity), which links directly to effectiveness via exposure to the intervention's active ingredients (Sekhon et al., 2017; Perski & Short, 2021). Thus, some suggest that CBT may not be culturally acceptable for all individuals (Ennis et al., 2019; Kira, 2010), potentially impacting engagement and effectiveness. It is critical to examine these interrelated constructs (adherence, effectiveness, acceptability) alongside how culture impacts acceptability (via Ethicality, Perceived Norms, etc.) in a way that trickles down to effectiveness. Some research has been conducted using Sekhon's (2017) base framework, which later led to Perski and Short's (2021) framework, as a codebook for the qualitative acceptability of a web-based psychological intervention to prevent and reduce self-harm in adults (Keyworth et al., 2021; Keyworth et al., 2022), and in adolescent medical contexts (mixed reality CBT for n = 3 participants aged 13-17 with asthma in Australia; Sharrad et al., 2023). Positive user comments within qualitative studies were consistent for Affective Attitude (i.e., related to using technology for mental health; 76% of comments were positive) and Opportunity Costs (73% positive; Sharrad et al., 2023). More negative user comments were related to Ethicality (i.e., concerns with privacy and accessibility; 33% negative; 53% neutral) and Intervention Coherence (similar to the Usability and Knowledge codes in the updated Perski & Short framework); further, users had neutral or mixed overall perceptions for Perceived Effectiveness (66% of comments positive; 14% negative; 25% neutral) and Burden (43% positive and 50% neutral in Sharrad et al., 2023; overall positive in Keyworth et al., 2022). Another study quantified the base framework and had participants rate the different components for a digital intervention in adults; positive acceptability perceptions were related to Affective Attitude, Opportunity Costs, intervention coherence (i.e., Usability and Knowledge in Perski & Short), and Perceived Effectiveness, while Burden and Ethicality had more negative perceptions (Keyworth et al., 2021). However, these investigations of the acceptability of digital interventions using this framework were not focused on trauma, did not sample the target population of the current study, and did not explicitly weave culture into the discussion of acceptability elements as outlined in the updated Perski and Short model. No studies could be located that used the updated Perski and Short (2021) model to evaluate the acceptability of a digital intervention. Rationale for Small-N, Randomized Study on Intervention Feasibility Initial literature (i.e., a pilot study) highlighted that providing CBT in an online, self-administered fashion via the LIFT platform may be effective for REM students experiencing PTSS and associated mental health concerns (Jaycox et al., 2019).
Sheridan (2014) highlights the different purposes of complementary methodologies (e.g., large-N studies vs. single-case designs) in helping researchers answer different, hierarchical questions in a sequential fashion to build knowledge around intervention effectiveness after initial pilot feasibility studies. Sheridan (2014) provides a 10-step intervention research trajectory that first highlights the importance of contributions from smaller-sample studies that examine the mechanisms of change in the relationship between a novel intervention and the outcome variables of interest (i.e., the "why"). Once this initial efficacy data is collected, Sheridan suggests that more rigorous studies can be undertaken with larger sample sizes to answer more complex questions about what works for whom, and when. Jaycox and colleagues' (2019) study on LIFT fills the role of a pilot feasibility study that examined the initial effectiveness (the "why"), acceptability, and feasibility of LIFT, without an experimental design and with implementation protocols in place to support fidelity. Thus, per Sheridan's (2014) recommendation, the next logical step was to bridge the gap between what Jaycox and colleagues (2019) produced and a deeper understanding of the "why" (i.e., targeting efficacy against a control condition), and to assess feasibility in a truly self-administered context with a narrower sample, random assignment, and a control group in order to better understand what works for whom, and when (i.e., Steps 4-5; Sheridan, 2014). When engaging with Step 4, a large sample size is not necessary, given that research questions at this point are still focused mostly on feasibility, acceptability, and understanding the mechanisms of change that facilitate meaningful outcomes. These areas of focus are supported by literature suggesting that pilot studies need to select constructs and research designs with strong rationale based on the status of the literature (Bowen et al., 2009); given LIFT's current status, it was logical to pursue an RCT. These results inform the next steps (i.e., Steps 7-10), which require larger sample sizes to answer questions tied to the generalizability of effects. To rigorously explore the feasibility of a self-administered program such as LIFT at Steps 4-5 of Sheridan's (2014) intervention research trajectory, an experimental design was harnessed to examine the questions noted above related to feasibility, mechanisms of change that facilitate outcomes, and acceptability, while providing important information on how outcomes compared to a control condition. LIFT had only been examined in one context, which supported the need for within- and between-group outcome research in other contexts more aligned with a true self-administered approach. Thus, a randomized controlled trial in a real-world context was a logical next step. As highlighted above, this study explores the feasibility (i.e., adherence), acceptability, and effectiveness of LIFT in a self-administered context using a control group; this design for a relatively novel intervention is best described as a "randomized pilot study," falling under the umbrella of a feasibility study (Eldridge et al., 2016).
Research Questions and Hypotheses The purpose of this study was to examine the feasibility/adherence, effectiveness, and acceptability of LIFT compared to a waitlist control for 18- and 19-year-old REM recently graduated high schoolers and college freshmen who are experiencing at least mild PTSS. Measures were collected on depression, anxiety, PTSS, coping skills, negative cognitions, usage of LIFT, and the general/cultural acceptability of LIFT. Data were collected via a screener, a pre-test, throughout the LIFT intervention (i.e., embedded into the system for those in the LIFT group) over the course of 7 weeks, at post-test, and via qualitative interviews with interested participants. Research questions and hypotheses are depicted below and in Table 4. Treatment Feasibility/Adherence Question 1. To what extent do 18- and 19-year-old recent high school graduates who are entering college complete LIFT in an entirely self-administered format with adequate fidelity, as measured by usage metrics and self-reported information about LIFT activities and self-care plan completion? User engagement and adherence are operationalized as aspects of feasibility within this study. Researchers have demonstrated that fidelity to intervention protocols and user engagement may significantly influence an intervention's effectiveness, perceived effectiveness, and acceptability (Sekhon et al., 2017), especially for digital self-administered formats where participants can flexibly access components (Perski & Short, 2021) and for digital interventions targeting PTSS (Yeager & Benight, 2018). It was important to first establish whether 18-19-year-old students can feasibly complete the program with adequate adherence before assessing outcomes. Adherence was reported within LIFT's pilot study as adequate (average of 6.37/7 chapters; 63-89% of video content per chapter when a video was started; Jaycox et al., 2019) when completed in a supervised after-school setting. Uptake and retention rates varied (averages of 36%-100% for cCBT and 44%-78% for PTSD interventions; see Tables 1 and 2), and true adherence among users who began an intervention was not consistently reported. Thus, it is important to explore retention and adherence to LIFT when it is provided in a self-administered and remote format (with weekly external reminders). Interventions provided in this way have found mixed adherence, signifying the importance of this question. It was hypothesized that participants would display at least moderate retention and adherence (i.e., 70%). Treatment Effectiveness: Mechanisms of Action Question 2. To what extent is LIFT effective in improving self-reported coping skills and negative cognitions from pre- to post-intervention for REM recent high school graduates entering college when compared to a waitlist control group, as measured by the CCSC and the CPTCI-S? Certain mechanisms/skills have been linked to improvements in psychological symptoms for young people (i.e., increases in coping skills and decreases in negative cognitions improve depression; Clarke, 2006; Brown et al., 2019), including within REM samples (Mekawi et al., 2022; Berzengi et al., 2017). Digital CBT interventions have the potential to improve important mechanisms of action for some young people (i.e., Stasiak et al., 2014), and LIFT facilitated improvements in negative cognitions and approach coping skills in a previous trial with high implementation support (Jaycox et al., 2019).
Some trials have reported mixed or null results regarding the effectiveness of digital CBT-based interventions in improving these mechanisms within samples that were not racially and ethnically diverse or that did not disaggregate for 18-19-year-olds (Kassam-Adams et al., 2016; Hetrick et al., 2017), and not all trials have been controlled. This demonstrates the need to explore whether LIFT improves approach coping, reduces avoidant coping, and reduces negative cognitions against a control condition for REM young people. It was hypothesized that LIFT would increase approach coping, decrease avoidant coping, and decrease negative cognitions in 18-19-year-old REM young people who received LIFT compared to a waitlist control, assessed with the Children's Coping Strategies Checklist (CCSC) and the Child Post-Traumatic Cognitions Inventory – Short Form (CPTCI-S) at pre- and post-test. Treatment Effectiveness: Targeted Outcomes Question 3. To what extent is LIFT effective in improving self-reported PTSS, anxiety, and depression from pre- to post-intervention for REM recent high school graduates entering college when compared to a waitlist control group, as measured by the CPSS and the RCADS? Digital CBT treatments consistently facilitate improvements in psychological symptoms for young people (e.g., Wickersham et al., 2022; Lattie et al., 2019). Depression, anxiety, and PTSS have been observed to improve following young people's participation in digital interventions against control groups (e.g., Littleton et al., 2017; Robinson et al., 2016; Ruggiero et al., 2015; Cox et al., 2010), but not all studies found improvements against control groups or similar findings across multi-operationalized variables (e.g., Lillevoll et al., 2014; Stasiak et al., 2014; Calear et al., 2016; Kassam-Adams et al., 2016). Few studies included or disaggregated by REM status and 18-19-year-olds, especially during the transition between high school and college. The uncontrolled pilot study on LIFT found improvements in PTSS for Black and Latinx youth aged 11-18, but not for depression or anxiety (Jaycox et al., 2019), potentially due to low reported symptoms at baseline. This suggests that an exploration of LIFT's effectiveness in improving symptoms (PTSS, anxiety, depression) is necessary using a controlled design and a narrower sample. It was hypothesized that REM young people with at least mild PTSS would experience reductions in PTSS, anxiety, and depression symptoms following LIFT compared to a waitlist control. This was assessed using the Child PTSD Symptom Scale (CPSS) and the Revised Children's Anxiety and Depression Scale (RCADS). Treatment Acceptability Question 4. How acceptable is LIFT as rated by REM recent high school graduates who are entering college, as measured via a quantitative acceptability survey and open-response questions? Treatment acceptability can significantly influence user engagement/adherence and treatment outcomes for those completing digital mental health interventions (Sekhon et al., 2017; Perski & Short, 2021). Treatment acceptability has been measured in samples of young people via quantitative surveys and qualitative interviews developed for specific interventions. Moderate-to-high treatment acceptability has been found in multiple trials via these methods (i.e., Cox et al., 2010; Marsac et al., 2015; Stasiak et al., 2014; Littleton et al., 2017).
The pilot study on LIFT found moderate acceptability [M = 1.85 on a 0 (low) to 3 (high) scale] when provided in a self-administered format with high implementation support. It is crucial to explore the acceptability of LIFT when provided in a purely self-administered format with minimal external support, given the frequent barriers to accessing and benefitting from mental healthcare reported in the literature (e.g., stigma, scheduling). This is not a full examination of acceptability in a true "real-world" setting, but it is a closer approximation to one than the pilot study. It is also important to explore acceptability within specific age groups, which the current study does (18-19-year-olds). It was hypothesized that participants would report at least moderate acceptability of LIFT. This was assessed using a 14-item questionnaire created by the LIFT developers (see Jaycox et al., 2019) that aligns with some elements of Perski and Short's (2021) conceptual model. An average score across items of 2 or above indicated "high acceptability," an average of 1-2 indicated "moderate acceptability," and an average of less than 1 indicated "low acceptability," in line with Jaycox and colleagues' (2019) conceptualization. Additional qualitative data were obtained at the end of the questionnaire via three open-response questions: "Please write in the things you liked best about LIFT," "Please write in the things that you didn't like about LIFT," and "Please write in the things you would change about LIFT." Question 5. For REM recent high school graduates who are entering college, how culturally acceptable is LIFT as measured by individual qualitative interviews? One's cultural identity, a construct made up of one's race and ethnicity alongside other factors, influences experiences with mental health, reactions after a PTE, and approaches to treatment (Roberts et al., 2011; Yamashiro & Matsuoka, 1997). Exploring the cultural acceptability of an intervention is crucial to understanding the full scope of an individual's perception of the intervention, given that acceptability is a construct made up of variables such as Ethicality and Perceived Norms (Perski & Short, 2021), along with elements outside of this framework that were salient within the literature review (i.e., perceptions of using LIFT to cope with race-related stressors; the relevance and relatability of content for cultural groups). Specific CBT techniques and interventions have been described by some as potential facilitators for improving the cultural sensitivity of an intervention (Kira, 2010), while others have described CBT-based interventions as potentially not universally culturally acceptable due to a lack of acknowledgement of systemic racism or intergenerational trauma (Hays, 2009). However, there are no published studies on the cultural acceptability of any digital CBT intervention in young people who identify as REMs (see Tables 1 and 2). LIFT is the only known digital trauma intervention developed with cultural considerations in mind, and the previous pilot study did not specifically evaluate the cultural acceptability of the program. Many suggest that qualitative methods are the most effective way to gather information about the cultural acceptability of an intervention, so the current study utilized qualitative interviews to answer this question.
This question did not have a specific directional hypothesis due to the lack of research in this area, so the researchers instead investigated this construct using an exploratory approach to inform how culturally acceptable LIFT might be for 18-19-year-old recently graduated high school students and college freshmen who identified as REMs.

Table 4. Research Questions, Hypotheses, Measures, and Data Analyses

Question 1: To what extent do 18- and 19-year-old recent high school graduates who are entering college complete LIFT in an entirely self-administered format with adequate fidelity as measured by usage metrics and self-reported information about LIFT activities and self-care plan completion? Hypothesis: Usage data indicate adherence and retention similar to that reported in the digital mental health literature (70%), with similar rates of self-reported completion of activities. Measure(s): Retention data, usage metrics, self-reported completion of LIFT activities. Data analyses: Descriptive analyses.

Question 2: To what extent is LIFT effective in improving self-reported coping skills and negative cognitions from pre- to post-intervention for REM recent high school graduates entering college when compared to a waitlist control? Hypothesis: Participants in the LIFT intervention group experience more significant improvements in coping skills and negative cognitions than those in the waitlist control group. Measure(s): Children's Coping Strategies Checklist (CCSC); Child Post-Traumatic Cognitions Inventory – Short Form (CPTCI-S); self-reported progress towards goals. Data analyses: ANCOVAs for each variable, with baseline scores as a covariate.

Question 3: To what extent is LIFT effective in improving self-reported PTSS, depression, and anxiety from pre- to post-intervention for REM recent high school graduates entering college when compared to a waitlist control group as measured by the CPSS and the RCADS? Hypothesis: Participants in the LIFT intervention group experience more significant reductions in PTSS, anxiety, and depression symptoms than those in the waitlist control group. Measure(s): Revised Children's Anxiety and Depression Scale (RCADS) Anxiety and Depression subscales; Child PTSD Symptom Scale (CPSS); self-reported progress towards goals. Data analyses: ANCOVAs for each variable, with baseline scores as a covariate.

Question 4: How acceptable is LIFT as rated by REM recent high school graduates entering college, measured via a quantitative acceptability survey and open-response questions? Hypothesis: Participants in the LIFT intervention group report LIFT to be a moderately acceptable program. Measure(s): Quantitative acceptability questionnaire; open-response questions. Data analyses: Descriptive analyses, narrative content analysis.

Question 5: For REM recent high school graduates entering college, how culturally acceptable is LIFT as measured by individual qualitative interviews? Hypothesis: Exploratory question, no directional hypothesis. Measure(s): Qualitative interview questionnaire (see Appendix C). Data analyses: Narrative content analysis.

CHAPTER 3 METHODS Positionality Statement The author is a White cisgender female with a master's degree who is pursuing a doctorate in school psychology. The author acknowledges her many privileged identities and her position of power as a researcher who is conducting a study alongside a REM sample.
She has engaged in reflection about her unconscious biases and the ways in which the limitations of her knowledge and worldview may impact her interpretation of the results, including what qualitative information is highlighted, which cases are included/excluded in analyses, and the a priori codebook, research questions, theoretical framework, and epistemological approach (i.e., postpositivist) selected for the study. She also has clinical experience supporting young people with PTSS, anxiety, and depression. The author has completed a graduate-level quantitative methods course, a qualitative methods course (with a focus on positionality), a mixed methods seminar, a course on African American families, and a course on cultural considerations in research. To address these biases, reliance on quotes in the acceptability results section, a second coder, and member checks were utilized to promote validity of findings. The second coder identifies as a White cisgender female who is pursuing an education specialist degree in school psychology. Further discussion of how the author’s positionality may have influenced the research process, and of the author’s process of ongoing reflexivity, is included in the Methods and Limitations sections to contextualize the power and privileged status of the researcher in relation to the study participants, who hold more marginalized identities.

Participants

See Figure 2 for a flow diagram of participant retention throughout the study.

Figure 2. Flow Diagram of Participants Through the Trial

Screening: Assessed for eligibility (n = 127). Excluded (n = 80): not meeting inclusion criteria (n = 67); duplicates/bots removed (n = 13).
Enrollment: Randomized (n = 47).
Allocation:
- Allocated to Waitlist Control Group (n = 23).
- Allocated to LIFT Treatment Group (n = 24): received access to LIFT (n = 21); did not receive login access to LIFT due to non-response to initial email contact (n = 3). Began LIFT intervention (n = 11): started LIFT but did not finish (n = 2); completed all study requirements1 (n = 9), of whom three finished LIFT without meeting minimum dosage criteria2 and six finished LIFT while meeting minimum dosage criteria2.
Follow-Up:
- Waitlist: lost to follow-up due to non-response to post-survey contacts (n = 3); completed all study requirements1 (n = 20).
Analysis:
- Waitlist: analyzed for intent-to-treat analysis (n = 23), with imputed post-tests (all variables; n = 3) and original pre- and post-data1 (n = 20); analyzed for post-hoc analysis1 (n = 20).
- Treatment: analyzed for intent-to-treat analysis (n = 24), with original pre- and post-data1 (n = 9 for depression and anxiety; n = 12 for all other variables), imputed post-tests (n = 15 for depression, anxiety, and PTSS; n = 12 for all other variables), and imputed pre-tests (depression and anxiety only; n = 13); analyzed for post-hoc analysis (n = 6 who met minimum dosage criteria2).

1 Completed all study requirements (pre-test, 7 chapters of LIFT, and post-test for the treatment group; pre-test and post-test for the waitlist group); all data included in analyses for these participants is original (non-imputed).
2 Minimum dosage adherence criteria established post hoc (completing all 7 chapters, > 50% of content watched per video in each chapter across > 50% of chapters, > 10 minutes duration across > 50% of chapters).
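For illustration only, the following minimal sketch (Python; not part of the study's actual data pipeline) shows how these post-hoc minimum dosage criteria could be checked against per-chapter usage records. The record fields ("completed", "pct_video_watched", "minutes") are hypothetical assumptions, not the LIFT platform's real export format.

```python
# Illustrative check of the post-hoc minimum dosage criteria from Figure 2.
# Per-chapter record fields are hypothetical stand-ins for LIFT usage data.

def meets_minimum_dosage(chapters: list) -> bool:
    """Apply the three adherence criteria to per-chapter usage records."""
    # Criterion 1: all 7 chapters completed
    all_seven = len(chapters) == 7 and all(c["completed"] for c in chapters)
    # Criterion 2: > 50% of video content watched in > 50% of chapters
    video_ok = sum(c["pct_video_watched"] > 50 for c in chapters) > len(chapters) / 2
    # Criterion 3: > 10 minutes duration in > 50% of chapters
    duration_ok = sum(c["minutes"] > 10 for c in chapters) > len(chapters) / 2
    return all_seven and video_ok and duration_ok

example = [{"completed": True, "pct_video_watched": 80, "minutes": 15}] * 7
print(meets_minimum_dosage(example))  # True
```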
Participants were recruited between March and September 2023 following approval from the MSU IRB. Participants were included in the study if they were at least 18 years of age, identified as a racial or ethnic minority, reported having access to technology/internet, reported at least one PTE, and reported at least mild PTSS at baseline. Participants whose PTSS scores fell above the upper cutoff or below the lower cutoff, or who did not meet the other criteria within the screener, were excluded. A total of 127 participants accessed and completed the screener survey and consent form. Sixty-seven participants were excluded automatically by the Qualtrics system and were not randomized to a group or routed to a pre-test due to not meeting inclusion criteria: 11 participants were excluded for not meeting demographic criteria (age, race, ethnicity); 37 participants were excluded for exceeding the PTSS cutoff (these participants were encouraged to seek more intensive/appropriate services for their challenges); 18 participants were excluded because their score fell below the lower threshold for PTSS for the study; and one participant was excluded for not reporting experiencing a PTE. Finally, 13 screener/pre-test responses were excluded by the researcher after Qualtrics identified the response as a “bot” or the researcher ascertained that an individual was completing the survey multiple times under the same IP address with different responses. All participants screened out were directed to a page of stress resources. Thus, 47 participants met study criteria, were randomized, completed a pre-survey, and were enrolled. The participants were randomized non-concurrently at the point of consent into either the treatment or waitlist control group: 24 participants were assigned to the treatment group and 23 participants were assigned to the waitlist group. All 47 participants were included in primary ANCOVA analyses, consistent with an intent-to-treat analytic approach.
Treatment group participants were required to communicate with the study coordinator via email or text, depending on preference, on three occasions. The first was to establish communication for the study by responding to confirm that the study coordinator had accurate contact information for each participant. Three treatment group participants completed all pre-surveys but did not respond to multiple first contact attempts, and thus were considered non-responsive and removed from the study. The remaining 21 participants were sent instructions to access the LIFT online system directly after this first contact. Participants received weekly reminders to move on to the next chapter. The second and third check-ins, where participants were required to communicate with the study coordinator, were in response to the Chapter 4 and Chapter 7 reminder emails. The other weekly chapter reminders did not request a participant response. Treatment group participants were required to check in with a “yes/no” to the study coordinator in response to their Chapter 4 and Chapter 7 check-in reminders, to indicate whether they were on track. The messages stated that participants could have extra time to catch up if they were behind, as long as they communicated via a response in a timely manner. Six participants did not respond to the Chapter 4 reminder email/text after multiple attempts and were excluded from the study. In response to the LIFT Chapter 7 reminder, treatment group participants were required to share whether they had finished LIFT or needed more time to complete the final chapter. Three participants did not respond to the Chapter 7 check-in after multiple attempts and were excluded from the study.
These nine non-responsive participants did not receive a post-test or the second incentive. Lastly, three participants indicated to the study coordinator via these check-ins that they had completed LIFT, leading them to receive and complete a post-test, but their user data was later found to be incomplete in the LIFT system (i.e., they completed less than one chapter of LIFT or never logged in). These three participants’ pre- and post-test scores were still included in effectiveness analyses in line with an intent-to-treat framework. Three waitlist control group participants did not respond to the post-survey sent 7 weeks following their date of consent, after multiple contact attempts. Nine treatment group participants completed the LIFT program with complete LIFT user data and all pre- and post-measures, and 20 waitlist group participants completed all pre- and post-measures. However, all participants who completed a pre-test survey were included in primary intent-to-treat analyses (N = 47).
The demographic characteristics and mean pre-scores of participants across groups are summarized in Table 5. Due to the small sample sizes of the two groups, chi-square tests (for categorical demographic variables) and independent samples t-tests (for continuous pre-test variables) were conducted to examine whether the two groups systematically differed at baseline on these characteristics. The RCT design was selected to answer the study research questions in a way that addresses this concern via randomization, but baseline equivalence analyses are still important to ensure no additional confounds are present. There was a significant difference across groups at baseline for sex assigned at birth (p = .045; more females in the treatment group [83.3%] than the waitlist group [56.5%]), but not for gender identity (cisgender vs. gender non-binary/non-conforming). There was also a significant difference across groups at baseline for history of pharmacological treatment (p = .018; more participants in the treatment group had previously taken a pharmacological medication to manage their mental health [29.2%] compared to the waitlist group [0%]), but not for current pharmacological/psychotherapy treatment. Based on the literature, the researcher was not concerned that either of these baseline differences would confound study results, but they are important for contextualizing findings. There were also significant differences (p < .05; see Table 5) at baseline across groups for pre-Cognitive Coping and FPSW cognitions (i.e., treatment participants reported less cognitive coping and more negative cognitions related to FPSW at baseline compared to the waitlist group). This underscores the importance of controlling for pre-scores within the analyses described below to account for these differences. The remaining variables were not significantly different across groups at baseline.
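As an illustration of these baseline equivalence tests, the sketch below (Python with invented toy data; the study's analyses were run in SPSS) pairs a chi-square test of independence for a categorical demographic variable with an independent samples t-test for a continuous pre-score. Column names, group labels, and all values shown are assumptions for the example only.

```python
# Illustrative baseline equivalence checks with toy data (not study data).
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group": ["treatment"] * 4 + ["waitlist"] * 4,
    "sex_assigned_at_birth": ["F", "F", "F", "M", "F", "M", "M", "F"],
    "pre_cognitive_coping": [6.1, 5.8, 6.3, 5.5, 8.4, 8.9, 8.2, 8.8],
})

# Chi-square test of independence for a categorical demographic variable
crosstab = pd.crosstab(df["group"], df["sex_assigned_at_birth"])
chi2, p_categorical, dof, expected = stats.chi2_contingency(crosstab)

# Independent samples t-test for a continuous pre-treatment variable
treatment = df.loc[df["group"] == "treatment", "pre_cognitive_coping"]
waitlist = df.loc[df["group"] == "waitlist", "pre_cognitive_coping"]
t_stat, p_continuous = stats.ttest_ind(treatment, waitlist)

print(f"chi-square p = {p_categorical:.3f}; t-test p = {p_continuous:.3f}")
```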
Table 5. Demographic Breakdown of Participants Across Analytic Groups in Intent-to-Treat Analysis

Demographic Characteristic: Treatment Group (N = 24), % (n) | Waitlist Group (N = 23), % (n)

Race
- African American/Black: 29.2% (7) | 21.7% (5)
- American Indian/Alaska Native: 0% (0) | 0% (0)
- Asian: 12.5% (3) | 26.1% (6)
- Native Hawaiian/Pacific Islander: 0% (0) | 0% (0)
- Hispanic/Latinx: 29.2% (7) | 34.8% (8)
- Multiracial: 12.5% (3) | 13.0% (3)
- Biracial: 12.5% (3) | 0% (0)
- Other (Non-White): 4.2% (1) | 4.3% (1)
Ethnicity
- Hispanic/Latinx: 41.7% (10) | 43.5% (10)
- Non-Hispanic/Latinx: 54.2% (13) | 56.5% (13)
- Other: 4.2% (1) | 0% (0)
Age
- 18: 87.5% (21) | 91.3% (21)
- 19: 12.5% (3) | 8.7% (2)
Sex Assigned at Birth*
- Male: 16.7% (4) | 43.5% (10)
- Female: 83.3% (20) | 56.5% (13)
Gender Identity
- Male/Cisgender Male: 16.7% (4) | 43.5% (10)
- Female/Cisgender Female: 70.8% (17) | 47.8% (11)
- Non-binary/Genderqueer: 8.3% (2) | 0% (0)
- Genderfluid: 4.2% (1) | 4.3% (1)
- Agender: 0% (0) | 4.3% (1)
Primary Language
- English: 87.5% (21) | 78.3% (18)
- Spanish: 12.5% (3) | 17.4% (4)
- Portuguese: 0% (0) | 4.3% (1)
Currently Receiving (or Received in Last Year) Financial Government Assistance
- Yes: 41.7% (10) | 39.1% (9)
- No: 58.3% (14) | 60.9% (14)
Previous Psychotherapy Treatment
- Met with School/University Provider: 8.3% (2) | 4.3% (1)
- Met with Non-School Provider: 41.7% (10) | 21.7% (5)
- Not met with anyone: 50.0% (12) | 73.9% (17)
Current Psychotherapy Treatment
- With School/University Provider: 4.2% (1) | 0% (0)
- With Non-School Provider: 16.7% (4) | 0% (0)
- Not meeting with anyone: 79.2% (19) | 100% (23)
History of Pharmacological Treatment*
- Yes, Currently Taking: 8.3% (2) | 8.7% (2)
- Yes, Not Currently Taking: 29.2% (7) | 0% (0)
- None: 62.5% (15) | 91.3% (21)
Pre-PTSS Total Score (M, SD): 18.88 (5.59) | 17.96 (5.14)
# of PTEs (M, SD): 4.08 (2.80) | 4.48 (2.47)
Pre-Anxiety (M, SD): 9.64 (3.80)a | 9.78 (3.72)
Pre-Depression (M, SD): 12.64 (6.56)a | 12.04 (5.18)
Pre-Avoidant Coping (M, SD): 7.46 (2.17) | 8.00 (2.00)
Pre-Problem Solving Coping (M, SD): 6.25 (2.82) | 7.13 (2.20)
Pre-Cognitive Coping (M, SD)*: 6.00 (2.41) | 8.57 (2.56)
Pre-Optimistic Coping (M, SD): 6.38 (2.06) | 7.65 (2.93)
Pre-Negative Cognitions Total (M, SD): 21.88 (4.19) | 20.52 (5.86)
Pre-PDC Cognitions (M, SD)*: 10.63 (2.58) | 11.00 (4.06)
Pre-FPSW Cognitions (M, SD)*: 11.25 (2.92) | 9.52 (2.54)

* p < .05, significant difference across groups at baseline
a n = 11, original data only, prior to multiple imputation and pooling of missing pre-depression and pre-anxiety scores

Participants reported a variety of racial and ethnic identities, with the most common being African American/Black, Hispanic/Latinx, and Asian. The majority of participants reported being 18 years old at the time of consent, with only 5 total participants across groups reporting being 19 years old. All participants confirmed that they graduated high school in Spring 2023, but participants were not required to confirm whether they were starting college in Fall 2023. It was assumed that participants were starting college in Fall 2023 due to their participation/engagement with associated social media pages (Class of 2027 pages for universities/colleges). More cisgender females than cisgender males participated in the study across both groups, with four total participants reporting a non-cisgender identity. No participants identified as Transgender, Androgynous, Bigender, Gender Questioning, or Two Spirit.
The majority of participants reported that English was their primary language, with 7 participants reporting Spanish and 1 participant reporting Portuguese. Slightly less than half of participants reported receiving financial government assistance over the last year. Only 11% (n = 5) of participants across groups were currently seeing a provider for psychotherapy, but 38% (n = 18) of participants across groups reported previously participating in psychotherapy or counseling. States participants reported currently living in at the time of pre-test included Texas, New Jersey, Oregon, California, New York, Maryland, Virginia, Iowa, Florida, and Washington; 20 participants did not report a current state of residence.
Participants across groups reported mild-to-moderate pre-PTSS symptoms on average at baseline (> 10; Foa et al., 2001). Participants’ pre-scores on the RCADS across both groups fell above the clinical cutoff for predicting a DSM-IV disorder for anxiety (> 7) and for depression (> 11; Chorpita et al., 2005). However, only 11 treatment group participants had RCADS pre-scores available. The other 13 participants did not reach the point in Chapter 1 of LIFT where RCADS data is collected. Treatment group participants did not receive an RCADS in their pre-test, since the LIFT system collects this data as a part of the program; waitlist control participants received the RCADS within their pre-test. Pre-test scores for four subscales of coping and two subscales of negative cognitions are also reported in Table 5. No subscale-specific cutoff scores exist within the literature for either measure.
Participants randomized to the treatment group averaged 4.08 (SD = 2.80) PTEs at pre-test, and participants randomized to the waitlist control group averaged 4.48 (SD = 2.47) PTEs at pre-test. The most commonly reported PTEs across both groups were: “Someone close to you was very sick or injured”; “You were impacted by racism via something you saw (e.g., on the news) that involved someone you did not know, that made you worry about your own wellbeing, health, or safety (i.e., seeing videos of people from your racial group being beaten, or arrested) [Vicarious Racism]”; “You’ve experienced more than one racial micro-aggression (i.e., comments/insults/gestures that send disrespectful messages to people of color because they belong to a minority group) that over time have made you feel worried about your wellbeing, health, or safety”; “Someone close to you died”; and “Someone very close to you had an upsetting experience with another person that was related to their race or ethnicity, where you feared for the life, health, or safety of that person”. See Figure 3 below for a summary of PTEs endorsed across groups for all participants at pre-LIFT. The majority of participants (18/24 in the treatment group and 16/23 in the waitlist group; 72% of total participants) indicated experiencing at least one PTE related to their race or ethnicity. One treatment group participant’s only PTE was a race-related trauma. This participant did not move beyond the first chapter of LIFT, and they would likely have been routed to the stress track within LIFT if they had not indicated any other PTEs, since the LIFT system does not include race-related PTEs as inclusion criteria for the trauma track.

Figure 3. Potentially Traumatic Events (PTEs) Across Groups at Baseline. Note: PTE = Potentially Traumatic Event; WLC = Waitlist Control Group; Tx = Treatment Group.

Missing Data.
As proposed, within an intent-to-treat framework it is essential to assess the amount and scope of missing data within a clinical trial prior to proceeding with analyses, in order to take the proper steps to account for this missing data and prevent the introduction of bias into the analyses (e.g., CONSORT Updated Clinical Trial Standards; Butcher et al., 2022). There was no missing data for demographic variables; however, data was missing, as alluded to above, for pre- and post-test outcome variables. Overall rates of incomplete cases and missing data across groups showed that significantly more participants in the treatment group did not complete study requirements compared to the waitlist control group (50% completed study requirements in the treatment group vs. 87% in the waitlist group), but this alone is not adequate to summarize the complexities associated with the missing data. This difference across groups was expected, given that the treatment group faced much higher expectations (i.e., two more required check-ins, completion of weekly LIFT chapters) during the 7-week period between pre/post timepoints compared to the waitlist group. This significant difference suggested a need to look at demographic and pre-treatment predictors of missingness across all participants. Chi-square tests of independence (for categorical demographic variables) and binary logistic regressions (for continuous pre-treatment variables) were used to analyze whether any demographic or pre-treatment variables were predictors of missingness. No demographic characteristics were predictors of missingness. The three pre-treatment scores that were significant predictors of missingness were negative cognitions (FPSW, PDC) and coping skills (cognitive). This suggests that participants with higher FPSW, PDC (i.e., more negative cognitions), and cognitive coping skills (i.e., stronger cognitive coping) at pre-test were more likely to have missing data throughout the study. Identification of these significant predictors of missingness indicates that the mechanism of missing data within the study is at least Missing At Random (MAR), rather than Missing Completely At Random (MCAR), and there is no evidence it is Missing Not At Random (MNAR). MAR suggests that missing data within the current study are related to observed values (i.e., predictors of missingness), while MNAR suggests that missing data are related to unobserved values, thus supporting that the study’s data are at least MAR given these identified predictors of missingness (Enders, 2022).
The nuanced complexities and scope of the study’s missing data are summarized within Table 6 below, specified by variable, by variable/timepoint, by group, by analysis, and as a total across groups, timepoints, and variables. The literature supports that the overall fraction of missing information (FMI) and the percentage of incomplete cases are two metrics within clinical trials that can guide decision-making about how to address missing data appropriately (White et al., 2010). The current study contained an overall FMI of 20% and 38% incomplete cases across all variables and participants (see Table 6 below).
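To make this missingness analysis concrete, here is a minimal, illustrative Python sketch (the study's analyses were run in SPSS): a binary incomplete-case indicator is regressed on a pre-treatment score via binary logistic regression, and the FMI as defined in Table 6 (incomplete values over total possible values) is computed directly. The toy data and column names are assumptions.

```python
# Illustrative missingness analysis with toy data (not study data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pre_fpsw": rng.normal(10, 3, 47),         # pre-treatment predictor
    "post_depression": rng.normal(10, 4, 47),  # outcome with missingness
})
df.loc[rng.choice(47, size=18, replace=False), "post_depression"] = np.nan

# 1 = participant is missing at least one data point ("incomplete case")
df["incomplete_case"] = df["post_depression"].isna().astype(int)

# Does the pre-score predict missingness? Significant predictors of
# missingness from observed values support MAR over MCAR.
fit = sm.Logit(df["incomplete_case"], sm.add_constant(df[["pre_fpsw"]])).fit(disp=0)
print(fit.pvalues)

# FMI as defined in Table 6: incomplete values over total possible values
print("overall FMI:", round(170 / 846, 2))     # 0.20
print("incomplete cases:", round(18 / 47, 2))  # 0.38
```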
Table 6. Missing Data By-Variable and By-Analysis

Variable and Timepoint: Treatment Group FMI1 (N = 24), % (n) missing | Waitlist Group FMI1 (N = 23), % (n) missing | Overall FMI1 (N = 47), % (n) missing

- Depression Pre: 54% (13) | 0 | 28% (13)
- Depression Post: 63% (15) | 13% (3) | 38% (18)
- Anxiety Pre: 54% (13) | 0 | 28% (13)
- Anxiety Post: 63% (15) | 13% (3) | 38% (18)
- PTSS Pre: 0 | 0 | 0
- PTSS Post: 63% (15) | 13% (3) | 38% (18)
- Coping – Problem Solving Pre: 0 | 0 | 0
- Coping – Problem Solving Post: 50% (12) | 13% (3) | 32% (15)
- Coping – Cognitive Pre: 0 | 0 | 0
- Coping – Cognitive Post: 50% (12) | 13% (3) | 32% (15)
- Coping – Optimism Pre: 0 | 0 | 0
- Coping – Optimism Post: 50% (12) | 13% (3) | 32% (15)
- Coping – Avoidance Pre: 0 | 0 | 0
- Coping – Avoidance Post: 50% (12) | 13% (3) | 32% (15)
- Negative Cognitions – FPSW Pre: 0 | 0 | 0
- Negative Cognitions – FPSW Post: 50% (12) | 13% (3) | 32% (15)
- Negative Cognitions – PDC Pre: 0 | 0 | 0
- Negative Cognitions – PDC Post: 50% (12) | 13% (3) | 32% (15)
- FMI by group (missing values over total possible values): 33% (143/432) | 7% (27/414) | 20% across groups (170/846)

By-analysis FMI1 (out of N = 94 total pre and post values) and % of incomplete cases (out of N = 47):
- Depression: 33% FMI, 38% incomplete cases
- Anxiety: 33% FMI, 38% incomplete cases
- PTSS: 19% FMI, 38% incomplete cases
- Coping – Problem Solving: 16% FMI, 32% incomplete cases
- Coping – Cognitive: 16% FMI, 32% incomplete cases
- Coping – Optimism: 16% FMI, 32% incomplete cases
- Coping – Avoidance: 16% FMI, 32% incomplete cases
- Negative Cognitions – FPSW: 16% FMI, 32% incomplete cases
- Negative Cognitions – PDC: 16% FMI, 32% incomplete cases
- Across groups: 38% incomplete cases (n = 18 out of 47)

1 FMI: fraction of missing information (the number of incomplete values over total possible values). PTSS = posttraumatic stress symptoms; FPSW = Fragile Person in a Scary World; PDC = Permanent and Disturbing Change.

Looking closer, the variables and subsequent analyses with the most missing data were depression and anxiety, with 54% of pre-data and 63% of post-data missing within the treatment group specifically, equating to 33% FMI and 38% incomplete cases for the depression and anxiety ANCOVA analyses. These numbers are referenced below in relation to the next steps taken by the researcher to address these significant proportions of missing data within the study, in line with an intent-to-treat framework.
Other important components of a missing data analysis are examining missing data by-participant and by-item. There was no missing data at the item level. By-participant, 38% of total participants were missing at least one data point (“incomplete cases”), as stated above. For the treatment group, all participants who were missing a pre-test variable (depression and anxiety) were also missing a post-test (all post-test variables). The 13 participants who were missing depression and anxiety pre-scores, but no other pre-test scores, did not begin the LIFT program and therefore did not receive the measures embedded within Chapter 1 of LIFT (see Table 8 for a description of which measures were provided in the treatment group pre-test and which were provided within Chapter 1 of LIFT). Special attention was paid to depression and anxiety given that 54% of the treatment group participants were missing both pre- and post-depression and anxiety scores. The 12 participants missing all post-test variables, including depression and anxiety, had exited the study at an earlier timepoint due to non-response to email/text contacts.
Three additional participants were missing anxiety, depression, and PTSS at post-test due to not completing these measures as embedded in Chapter 7 of LIFT (see Table 8), but they completed other post-measures through the Qualtrics post-test. These three participants had responded to all email check-ins with the study coordinator confirming completion of LIFT and study milestones, but after the study concluded and LIFT user data was obtained from the online platform, these participants had user data showing that they completed less than one chapter or never logged into LIFT. It is possible that the participants completed more of LIFT and that user data did not reflect their progress due to a technical error; however, the LIFT system is believed to provide a reliable and valid representation of user data, and when asked anonymously about any issues with LIFT (inclusive of technical issues) within the acceptability post-test items, these participants did not report anything that would lead the study coordinator to believe they had experienced technical difficulties. Thus, these participants’ available post-test scores from the Qualtrics survey were still included in effectiveness analyses in line with the intent-to-treat framework, but their acceptability data was invalidated and not included in acceptability analyses. One of these participants was also an interviewee, which led their interview to be subsequently invalidated as well and not included in qualitative results. The researcher acknowledged that the decision to remove participants from acceptability analyses, based on user data suggesting they had not completed any of the LIFT program, carries certain limitations in line with culturally sensitive research practices. These will be discussed in more detail within the Discussion section below. The 3 waitlist participants who were missing all post-test variables did not complete the Qualtrics post-survey following their waitlist period.
The 15 participants who exited the study due to non-response were sent an exit survey with one question (“Please share why you did not finish the LIFT surveys [or respond to LIFT emails/texts], or any reasons why you no longer wanted to participate in the LIFT study. Your responses will remain anonymous and private and will only be used for continuous improvement of the study/program. Feel free to share as much as you’d like”), and only one participant completed this survey, with the response: “Got busy and it got tedious”. These responses were intended to be included as a part of study findings, but were excluded due to this low response rate (1/15; 7%).
As proposed a priori, data for all enrolled and randomized participants were utilized, consistent with an intent-to-treat framework. Intent-to-treat purports that all participants, regardless of adherence or dropout from the study, should be included within final analyses, with an attempt to replace any missing values and preserve the power and size of the original randomized sample (Armijo-Olivo et al., 2009). Intent-to-treat is the gold standard design for evaluating the effectiveness of an intervention in a clinical setting, where patients are not guaranteed to follow the treatment exactly as prescribed. However, given the presence of missing data and issues with treatment adherence, intent-to-treat can underestimate treatment effects if the treatment is effective for those who adhere most to the intervention (Armijo-Olivo et al., 2009).
Given this potential concern and the degree of missing data that occurred within the study (see Table 6), a review of the literature was conducted on best practices for handling missing data within an intent-to-treat framework, and alternative options were reviewed to determine which analytic step would best answer the research questions set forth by the researcher while introducing the least amount of bias possible.
The literature supports that when missing data are MAR within a clinical trial, a researcher can still move forward with intent-to-treat analyses even if the FMI or percentage of incomplete cases is high (Madley-Dowd et al., 2019). Specifically, using a technique such as multiple imputation (MI) to replace missing values in MAR simulation studies with 50% missing data (Enders, 2022) and up to 90% missing data (Madley-Dowd et al., 2019) can produce unbiased estimates of missing values and accurate results. The accuracy of MI is improved when the researcher includes the identified predictors of missingness as auxiliary variables within the imputation model, which reduces the standard error of the estimates by relying on these predictors to better estimate what the missing values may be as a part of the imputation process (White et al., 2010).
Alternatives to intent-to-treat that were considered included using available complete data as a part of a complete case analysis, to avoid reliance on imputed estimates. For a trial where significant concerns with intervention adherence were observed, benefits of this approach would include trying to establish a mediation or moderation effect of adherence on outcomes for those treatment group participants with complete pre- and post-data (n = 9 for depression, anxiety, and PTSS; n = 12 for all other variables). However, research suggests that complete case analyses are severely biased unless data are Missing Completely At Random (MCAR) with less than 5% of data missing (Jakobsen et al., 2017; Salim et al., 2008), and these criteria are not met within the current study. Benefits of complete case analyses would include reduced Type 1 error, due to relying only on raw data values rather than imputed or replaced values. However, results of a complete case analysis would only represent those who were compliant with the intervention and study requirements, even with a mediation/moderation analysis for adherence, since so little post-data was obtained from those who were non-adherent; this would be a biased representation that decreases the external validity of the study (Salim et al., 2008). A complete case analysis would also further decrease power, reducing the researcher’s ability to detect any effects present. Complete case analyses also break randomization, reducing the integrity of the RCT design and the researcher’s ability to compare outcomes from a real-world treatment group to “treatment as usual” (waitlist group). Even though a complete case analysis with a mediation/moderation effect for adherence could produce an important finding in line with the study’s theoretical framework (Perski & Short, 2021), the literature supports that intent-to-treat is still a less biased and appropriate approach in line with the study’s a priori plan.
Additionally, the extended 2022 CONSORT standards for intervention clinical trials report that using MI is appropriate when data are at least MAR, and that missing data must not be ignored (as they would be within a complete case analysis) or else bias will be introduced when estimating real-world clinical effects (Butcher et al., 2022). Further, recently published clinical trials examining similar digital mental health interventions have utilized intent-to-treat approaches with similar amounts of missing data and sample sizes (43% of cases missing a post-test out of N = 46 within Littleton et al., 2016; > 50% of data imputed at one timepoint within Mouthaan et al., 2013), and subsequently moved forward with either a rigorous missing data replacement technique (e.g., MI, Full Information Maximum Likelihood; Cook et al., 2019; Littleton et al., 2016; Short et al., 2020) or a less rigorous technique (e.g., Last Observation Carried Forward; Stasiak et al., 2014).
MI is a valid and appropriate method to handle the missing data within the current study in the least biased fashion to estimate treatment effects of LIFT (Salim et al., 2007; Armijo-Olivo et al., 2009). MI is stronger than other missing data techniques, such as single imputation or last observation carried forward, because it relies on a regression that uses all constructs/variables to inform the replacement values across a specified number of imputations, leveraging constructs/participants with more complete data to inform replacement values for those with more missing data, within plausible ranges for each value (Van Ginkel et al., 2020). In other words, MI creates complete datasets of the full randomized sample (N = 47) by completing random draws from a conditional distribution based on a linear regression model when variables are continuous, and it can correct the bias identified from predictor variables by incorporating the variables that predict missingness into the regression model as auxiliary variables, which are therefore reflected in the imputed values to produce more valid estimates (van Ginkel et al., 2020). Other research has suggested that MI outperforms complete case analyses in both smaller and larger datasets (Kontopantelis et al., 2017), and that MI is appropriate with up to 40% missing data for key variables; MI can still be used if more than 40% of data is missing for a given variable, but the results then need to be interpreted as “hypothesis-generating” (Jakobsen et al., 2017). All variables in the current study fell below this threshold (see Table 6), indicating the appropriateness of using MI to develop multiply imputed datasets of complete data for the full randomized sample of N = 47 prior to performing the ANCOVAs. Overall, using intent-to-treat within the current study with subsequent rigorous missing data replacement techniques (e.g., MI), for a study with 20% FMI and 38% incomplete cases, is appropriate and aligned with best practices for clinical trials and handling missing data, as well as with recently published digital intervention research. Thus, the researcher performed MI using SPSS Statistics Version 27 to replace the missing observations with a set of possible random values within a conditional distribution (e.g., set minimum and maximum values for each variable based on the possible range of scores that can be obtained).
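The study performed MI in SPSS Statistics 27; purely as an illustration of the same chained-equations workflow, the sketch below uses statsmodels' MICE utilities in Python with toy data. The variable names, toy data, and the clipping step standing in for SPSS's min/max constraints are all assumptions, not the study's actual procedure.

```python
# Illustrative MICE-style multiple imputation with toy data (not study data).
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
df = pd.DataFrame(
    rng.normal(10, 3, size=(47, 4)),
    columns=["pre_anxiety", "pre_fpsw", "pre_cog_coping", "post_anxiety"],
)
df.loc[rng.choice(47, size=18, replace=False), "post_anxiety"] = np.nan

# Keeping the pre-scores (including identified predictors of missingness)
# in the imputation dataset lets them act as auxiliary variables in the
# conditional regression models.
imp = MICEData(df)
completed_datasets = []
for _ in range(40):   # 40 imputations, >= the 38% incomplete-case rate
    imp.update_all()  # one full cycle of chained-equation imputation draws
    completed = imp.data.copy()
    # Stand-in for SPSS's min/max constraints: clamp imputed values to a
    # plausible score range (0-18 for the RCADS anxiety subscale).
    completed["post_anxiety"] = completed["post_anxiety"].clip(0, 18)
    completed_datasets.append(completed)

# Each completed dataset would then be analyzed (e.g., ANCOVA) and the
# resulting estimates pooled across the 40 imputations.
print(len(completed_datasets), completed_datasets[0]["post_anxiety"].isna().sum())
```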
Following the results of the missing data analysis above, the researcher included the predictors of missingness (FPSW, PDC, and Cognitive Coping), along with all other pre-scores, as auxiliary variables in the imputation process to inform the replacement values, as described above (Enders, 2022). Recent literature suggests that the number of imputations when using MI should be equal to or greater than the percentage of incomplete cases (38% in the current study = 38 imputations) to increase confidence in the reproducibility of the methods/results and reduce the potential for Monte Carlo error (White et al., 2010). In line with this recommendation, MI with n = 40 imputations was used to address missing data for pre-scores (anxiety, depression) and post-scores (all variables) due to high dropout from the study. MI is generally agreed upon as a less biased method of addressing missing data to estimate a treatment effect than complete case analyses, and the literature suggests that the decision to use MI should not be influenced by the amount of missing data, as long as data are MAR (Enders, 2017). The researcher performed data analyses for effectiveness using the intended statistical method across all 40 imputed datasets and provides below the by-imputation results for the first 5 imputations, available pooled results for each analysis, and ranges for statistics where pooled results were not available (van Ginkel et al., 2020).

Measures

Screener. Participants first provided their consent or dissent to the screening process via an online form on Qualtrics. Following their consent, they were directed to a screener to ensure fit with inclusion criteria. Participants were asked which racial identity they identify with, from the options of “African American”, “American Indian/Alaska Native”, “Asian”, “Native Hawaiian/Pacific Islander”, “White”, “Multiracial”, “Biracial”, or “Other”. Any category other than “White” qualified for inclusion within the study as a racial minority. Participants were also asked their ethnicity of identification, either “Hispanic/Latino” or “Not Hispanic/Latino”. If participants selected “Hispanic/Latino” in addition to a race selection of “White”, they were also eligible as an ethnic minority. Students who identified as either a racial or ethnic minority were included within the study. Students also indicated their age, in years. Students under 18, or who did not report identifying as a racial/ethnic minority, were excluded. Participants were also asked whether they had access to reliable technology/internet, or a device available to them in an after-school or weekend setting that they do not own (after-school program, library, etc.). Participants would have been excluded if they reported no access to technology, but no participants were excluded for this reason.
Next, participants were asked to read a list of potentially traumatic events (PTEs) and select which event(s) they had experienced in their lifetime within the screener (see Figure 3). This traumatic life events screener consisted of 17 items directly pulled from previous school-based trauma intervention screenings (Jaycox et al., 2019; Jaycox et al., 2009), in addition to four novel items. The four additional items added to the traumatic events checklist were about experiencing a racial or ethnic-related trauma (see Appendix K; items adapted from the UConn Racial/Ethnic Stress and Trauma Survey; Williams et al., 2018; Tynes et al., 2019).
Students received a score from 0-21 based on how many events were selected (≥ 1 = eligible; see Table 5). LIFT has two tracks (stress and trauma). Participants began in the stress track and self-selected into the trauma track based on indication of a traumatic event via the same checklist in Chapter 1 (see Jaycox et al., 2019), in addition to meeting a certain threshold of PTSS. Participants received the same PTE checklist again at post-test to account for any additional traumas that occurred during LIFT (i.e., occurred within the past 7 weeks of the LIFT program). All nine participants who completed a valid post-test reported at least one PTE (range 1-6; M = 2.67; SD = 1.66) that occurred during the LIFT program; the most common PTEs reported at post were experiencing vicarious racism via the media, an accumulation of micro-aggressions related to race/ethnicity, and an upsetting event that occurred with another person related to their race.
Lastly, participants were screened for their current post-traumatic stress symptoms (PTSS) using the Child PTSD Symptom Scale (CPSS; Foa et al., 2001). The total score was used for screening and analysis. This tool measures the severity of PTSS in alignment with DSM-IV characteristics of PTSD. Respondents rated 17 items using a 4-point Likert scale (0 = not at all; 3 = almost always). The total score can range from 0-51 (higher score = more symptoms). It has demonstrated strong internal consistency reliability within multiple studies of individuals 8-19 years of age, including a Hispanic sample (total score α = .83-.93; subscale range = .70-.80; Foa et al., 2001; Stewart et al., 2015; Nixon et al., 2012; Hasson et al., 2021). One-to-two-week test-retest reliability is acceptable (range: r = .63-.85). This measure has convergent validity with other PTSD tools (e.g., UCLA PTSD-RI; CRTES-R), moderate correlation with PTSD diagnosis (r = .51), and divergent validity with other constructs (e.g., loneliness measured via the LQSF; Stewart et al., 2015).
Within intervention research, it is suggested to clearly define target symptoms as a part of inclusion criteria, given that it may become more difficult to detect effects if participants are experiencing few symptoms (Foa & Meadows, 1997). Solely experiencing a traumatic event is not strong enough rationale that a trauma-focused treatment is appropriate without assessing one’s current PTSS. Thus, the current study required that all participants score a 10 or more (range 0-51; considered to be a “mild-moderate” threshold of symptoms) on the CPSS (Foa et al., 2001) to be included. Participants were excluded if their score fell above a 31 (one standard deviation above the average in the Jaycox et al., 2019 study), given that LIFT is not appropriate for severe PTSS. If participants scored above this threshold, they were directed to a page of resources. This concluded the screening phase.
Demographic Information. Participants completed a pre-test that asked participant grade, sex assigned at birth (Male, Female, Intersex, choose not to disclose), gender identity (Man/Cisgender male, Woman/Cisgender female, Transgender Man, Transgender Woman, Agender, Androgynous, Bigender, Genderfluid, Gender Questioning, Non-binary/Genderqueer, Two Spirit, Other, or choose not to disclose), income level (“Are you currently receiving, or have received in the last year, any financial government assistance?
[e.g., food stamps, free or reduced lunch, etc.]”), primary language (English, Spanish, Other), status in high school and/or college (“I am still in high school”, “I have recently graduated high school (Spring 2023)”, “I graduated high school before Spring 2023”, or “Other”), and whether they were currently participating in mental health services or taking any psychotropic medications. Participants were also asked about mental health service participation and psychotropic medications after the LIFT/waitlist period at post-test, using the same questions over a different timeframe (i.e., over the past 7 weeks). Of the treatment group participants who completed the post-test, three indicated that during LIFT they met with a counselor or therapist outside of school/university, one met with a counselor or therapist through their school/university, two reported that they started or continued taking pharmacological medication to manage their mental health, and four reported engaging in no psychotherapy or pharmacological treatment since starting LIFT.
Feasibility/Adherence. Usage data was collected by the LIFT platform regarding the number of chapters accessed/completed, the amount of time used to complete each chapter, and the percentage of videos watched in each chapter for participants in the treatment group. This provided information on fidelity to the LIFT model and engagement. The device(s) used to complete LIFT were also assessed at post-test (computer, tablet, phone, or combination). The LIFT system also records how many times participants replay key games or activities, as a measure of how many times a skill was practiced. Participants were asked questions at the beginning of modules 2-7 about whether they had applied strategies from LIFT to their life since the last session, and specifically whether the self-care plan components from the prior module were completed. These variables were the primary measures of feasibility/adherence of completing an online self-administered mental health intervention.
Mechanisms of Action. Coping skills were measured at pre- and post-LIFT using four subscales from the first primary instrument, the Children’s Coping Strategies Checklist (CCSC; Ayers et al., 1996): direct problem solving (4 items), cognitive decision making (4 items), optimistic thinking (4 items), and avoidance (4 items). The first three subscales are considered types of approach coping, while the last subscale represents avoidant coping. This measure was administered to all participants across both groups, and the subscale scores were analyzed from pre- to post-LIFT. Participants were asked to frame their answers within the last month. The 16 items are rated on a 0-3 scale (0 = Never; 1 = Sometimes; 2 = Often; 3 = Most of the time). An example item from the approach coping styles is “You did something to solve the problem”. An example item from the avoidant coping style is “You avoided it by going to your room”. Internal consistency reliability is mixed (α range across subscales = .59-.76; Jaycox et al., 2019; Ayers et al., 1996). A confirmatory factor analysis confirmed four factors (Active Coping [approach subscales], Avoidance [avoidance subscale], Distraction, and Support Seeking; Ayers et al., 1996); however, some suggest that a two-factor model is better supported for African American adolescents (mapped onto Active Coping and Avoidant Coping with evenly distributed factor loadings; Gaylord-Harden et al., 2008).
Test-retest reliability ranges from r = .49-.73 (Program for Prevention Research, 1999). Psychometric support for this measure only extends to age 14, but the tool has appropriate face validity for young adults; also, given that LIFT is a novel program, it is important to prioritize comparison of results across studies, which can be accomplished by using the same measure as previous researchers (e.g., Jaycox et al., 2019, used it with 18-year-olds). Also, this is the only known coping measure that has evidence of invariance for Black adolescents (Gaylord-Harden et al., 2008). No other coping measures in this population (18–19-year-old REMs) have strong alphas across all subscales for approach and avoidant coping, suggesting that this measure is the most appropriate for the current study.
Second, negative cognitions were measured at pre- and post-LIFT using the second primary instrument, the Child Post-Traumatic Cognitions Inventory – Short Form (CPTCI-S; 10 items). This self-report measure has two subscales: “Permanent and Disturbing Change” (PDC; example item: “I will never be able to have normal feelings again”) and “Fragile Person in a Scary World” (FPSW; example item: “I don’t trust people”). The short form has moderate two-month test-retest reliability (r = .78 overall; r = .74 for the PDC subscale; r = .77 for the FPSW subscale; McKinnon et al., 2016). The short form measure has strong construct validity (factor loadings range: .64-.79) when compared to the long version (factor loadings range: .53-.81), and strong internal consistency (α = .92 overall; α = .91 for PDC; α = .81 for FPSW; McKinnon et al., 2016). All items are moderately correlated with PTSD status (all items r > .49; McKinnon et al., 2016). Strong sensitivity and specificity were found for cutoff scores of 16-18 to indicate clinically significant levels of maladaptive cognitions (McKinnon et al., 2016). This tool measured degree of agreement with two types of cognitions (beliefs about oneself and the world). Participants rated each item on a 4-point Likert scale: Don’t agree at all, Don’t agree a bit, Agree a bit, Agree a lot. Scores for subscales are summed (higher score = more frequent maladaptive cognitions, less frequent adaptive cognitions). Limitations of this measure are its lack of validation with REMs and the fact that it diverges from the measure used within the previous study on LIFT; however, this is a criticism of all negative cognition measures for young people present in the literature, and this measure has much stronger psychometrics than the measure used in the 2019 evaluation, with similar subscales, allowing it to remain conducive to comparisons across studies.
Third, a secondary instrument used to measure mechanisms of action within this study was embedded into the LIFT program. Within Chapter 1 of LIFT, participants selected specific goal(s) from up to 9 options answering the following prompt, “I want to change the way I do things and think about things so that I can…”. Examples of goals participants could select include “stop avoiding things that make me nervous” and “calm myself down when I feel upset”. Participants were asked again in Chapter 7 about all 9 goals from Chapter 1 with the prompt, “Which of the following things have you accomplished in LIFT?” (0-2 Likert scale; 0 = Not true at all, 1 = Sort of True, 2 = Very True). These responses are compared to responses from Chapter 1 below.
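Both mechanism measures described above are scored by summing Likert-type item ratings into subscale totals. As a brief, illustrative sketch (Python; the item responses shown are invented, not real data), the CCSC's four subscales can be scored as follows.

```python
# CCSC scoring sketch: 16 items rated 0 (Never) to 3 (Most of the time),
# grouped into four 4-item subscales; each subscale score is the item sum
# (possible range 0-12) and is analyzed from pre- to post-LIFT.
ccsc_responses = {  # invented example responses, not real data
    "direct_problem_solving": [2, 3, 1, 2],
    "cognitive_decision_making": [1, 2, 2, 1],
    "optimistic_thinking": [3, 2, 2, 3],
    "avoidance": [0, 1, 2, 1],
}
subscale_scores = {scale: sum(items) for scale, items in ccsc_responses.items()}
print(subscale_scores)
```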
Targeted Outcomes. First, participants self-reported their post-traumatic stress symptoms using the first primary instrument, the Child PTSD Symptom Scale (CPSS; Foa et al., 2001), as highlighted above within the screener for the study. Intervention group participants received this same measure in Chapters 1 and 7 within the LIFT platform (the Chapter 7 assessment acts as the post-test). Waitlist group participants received it within their Qualtrics pre- and post-tests.
Second, participants self-reported their depression and anxiety symptoms using the second primary instrument, the Revised Children’s Anxiety and Depression Scale (RCADS; Chorpita et al., 2000). The Major Depression (10 items) and Generalized Anxiety (6 items) subscales were administered at pre- and post-LIFT. A 4-point Likert scale is used for both subscales (0 = never, 3 = always). Scores are summed, and total scores were used for analyses (Depression range: 0-30; Anxiety range: 0-18). Higher scores indicate higher symptoms. Symptoms are reported from the last week. A meta-analysis aggregating psychometric studies (N = 146) on the RCADS found that it has robust alpha coefficients and is reliable and appropriate for use across different cultures (Anxiety: α = .93; Depression: α = .82; Piqueras et al., 2017). Convergent, discriminant, and factorial validity of the RCADS Anxiety and Depression scales is supported (Chorpita et al., 2005). The literature suggests that a clinical cutoff (i.e., adequate sensitivity and specificity for predicting a DSM-IV disorder) for the Depression scale is 11 and a cutoff for the Anxiety scale is 7 (Chorpita et al., 2005). This tool is sensitive to changes in symptom levels in young people (Mathyssek et al., 2013). Even though the psychometric studies only used samples through age 18, face validity of the tool indicates appropriateness for 19-year-olds. If participants scored in a clinically significant range at post-test, they were provided with a handout of resources.
Third, a secondary instrument used to measure targeted outcomes within this study was embedded into the LIFT program. In Chapter 1 of LIFT, participants were asked which emotions they hoped to feel less of (i.e., upset, angry, sad, nervous, scared) and which emotions they hoped to feel more of (i.e., happy, calm, excited, relaxed) by the end. In the final chapter, participants were asked again about the emotions from Chapter 1 that they wanted to feel “more of” or “less of”. They were provided with the prompt, “Since completing the LIFT program, I have been feeling…” and a Likert rating scale option for each emotion they selected in Chapter 1 (Much Less, A Little Less, About the Same, A Little More, and Much More). Goal attainment scaling was used to measure perceptions of improvement for each emotion (scale: -1 to 1; a negative score indicates feeling less of an emotion after LIFT, a positive score indicates feeling more of an emotion after LIFT, and 0 indicates no change). Participants were also asked to rate on a sliding scale (0-100; 0 = not well at all; 100 = very well), “How do you think you did overall?” in relation to the emotion goals they selected in Chapter 1 (e.g., “When you started the LIFT course, you said you wanted to feel LESS angry and MORE happy. How do you think you did overall?”).
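A minimal sketch of this goal attainment scaling logic follows (Python). The -1, 0, and +1 anchors are given in the text above, while the intermediate +/-0.5 values are an assumed, plausible spacing for illustration.

```python
# Goal attainment scaling sketch: map each Chapter 7 emotion rating onto
# the -1 to 1 scale described above. Endpoint and midpoint values follow
# the text; the intermediate values (+/-0.5) are an assumption.
RATING_TO_SCORE = {
    "Much Less": -1.0,
    "A Little Less": -0.5,
    "About the Same": 0.0,
    "A Little More": 0.5,
    "Much More": 1.0,
}

def gas_score(rating: str) -> float:
    """Negative = feeling less of the emotion after LIFT; positive = more."""
    return RATING_TO_SCORE[rating]

# A participant who wanted to feel LESS nervous and rated "A Little Less"
# shows change in the intended direction:
print(gas_score("A Little Less"))  # -0.5
```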
Acceptability. It is critical to explore the general and cultural acceptability of an intervention to develop a deep understanding of how an intervention can be transported into a particular setting/population; effectiveness is heavily influenced by, and thus can be hindered by, both treatment adherence and treatment acceptability (Perski & Short, 2021). The current study examined whether LIFT is generally perceived as an acceptable program in its intended format (i.e., self-administered, no adult involvement). The current study explored general facets of acceptability and how culturally acceptable LIFT was perceived to be by REM young people who recently graduated high school and are entering college.
Treatment group participants completed a 14-item quantitative survey at post-test regarding satisfaction with LIFT (the same items used in Jaycox et al., 2019). Example items include “LIFT talks about stress and trauma that I can relate to” and “The things I learned in LIFT helped me solve my problems” (4-point Likert scale; 0 = not at all true; 3 = very true). The coders reached 94.4% agreement (disagreement on one code) regarding which codes map onto which quantitative survey items. Two items did not map onto any codebook items due to their specificity (Items 2 and 3). Items 13 and 14 did not map onto a codebook item due to their general nature, and were conceptualized as global acceptability and global social validity of LIFT, respectively. The following open-ended items were asked after the quantitative items to gain additional detail on acceptability within this post-survey: “Please write in the things that you liked best about LIFT”, “Please write in the things that you didn’t like about LIFT”, and “Please write in the things that you would change about LIFT”. Lastly, participants were asked whether they would be willing to be contacted to participate in an optional 30-minute Zoom interview regarding their experiences with LIFT to earn a final $30 gift card (Yes, No, or Unsure and want more information).
Six treatment group participants indicated interest in completing a qualitative interview for an additional gift card. Three participants were contacted to schedule individual interviews via Zoom within 1-2 weeks of completing LIFT, and all three completed an individual interview. Interview data was removed from analysis for one of the participants who completed an interview, due to LIFT user data displaying that this participant completed less than one chapter of LIFT. The researcher acknowledged that the decision to remove this participant’s interview from qualitative analyses occurred in the context of the power differential between the researcher and participant in relation to race, educational/professional status, and potentially other factors. Before LIFT user data was obtained, this interview was conducted, and the participant’s responses appeared surface-level to the researcher and were difficult to code (e.g., not many details, one-word answers). Additionally, even though there is a possibility that the participant completed LIFT and their responses were not saved due to a technical error, the LIFT user system has been tested and retested for both reliability and validity of user behavior.
User data lost to a technical error was considered but determined to be an unlikely possibility, given that the participant also did not mention any specific technical errors in their qualitative responses. Despite these factors, the researcher’s unconscious biases and positionality that may have contributed to this decision are important to acknowledge, given the historical lack of inclusion of the experiences of people from marginalized backgrounds in digital intervention research (Ellis et al., 2022). Thus, the two interviewees included in analyses identified as an African American cisgender female (Interview 1) and a Hispanic/Latinx cisgender female (Interview 2).
The responses were transcribed using Otter.io, and the transcriptions were checked by the researcher for accuracy prior to coding. Two coders reached 82% overall intercoder agreement (ICA) using the codebook, which is above the minimum standard agreed upon within the literature (80%; O’Connor & Joffe, 2020), and added one additional code to the codebook following coding of the transcripts (i.e., engagement with study components outside of the LIFT program; see Appendix D). The coders also noted that some of the codes pulled from Perski & Short’s (2021) model are worded in a way that sounds inherently positive or negative (e.g., Burden); however, the coders agreed to use each code for either a positive or negative variation of that code (e.g., whether LIFT felt like a burden or did not feel like a burden). The coders openly acknowledged their biases as highly educated White women coding data from REM young people, and had regular check-ins at every coding session regarding quotes where biases may have been impeding judgment. This led to open conversations regarding biases, but it remains a limitation of this analytic procedure.
The interview protocol (see Appendix C) was administered via a Zoom call. The protocol was developed using models of acceptability for DMHIs (Sekhon et al., 2017; Perski et al., 2021), information from the LIFT developer about what cultural factors went into intervention development, and interview tools that measure race-related PTEs and PTSS (UnRESTS; Williams et al., 2018). The interview protocol began with the interviewer showing a series of images and brief objective descriptions of major activities in each chapter of LIFT to refresh participants’ memory of the program elements (e.g., the videos, the games, self-care plan elements). The protocol was developed with the theoretical framework and a priori codebook in mind (see Appendix D), aligned with procedures for direct content analysis research. It is common in direct content analysis qualitative research to ask open-ended questions first, then more direct questions in line with theoretical framework elements (Hsieh & Shannon, 2005). As evidenced by the middle column in Table 15, not all elements of acceptability and Perski & Short’s (2021) model are represented in the acceptability quantitative survey questions (e.g., Ethicality, Opportunity Costs). Additionally, critical elements of young people of color’s experiences were not directly addressed in these questions (e.g., coping with racism and cultural stress, relevance/relatability of characters and examples in LIFT), along with items meant to pull apart the participants’ experiences specific to the research study procedures outside of the standard LIFT program (e.g., weekly reminders).
Given these gaps, the researcher used qualitative interviews, described below, to explore additional depth of participant experiences with LIFT and to gather data more comprehensively in line with Perski & Short's (2021) model. As seen in Appendix C, the qualitative interview questions mapped onto all elements of Perski & Short's (2021) model, and four additional codes were added to the qualitative codebook, designed to pull out cultural and study-specific elements of participant experiences from the interviews (see Appendix D). The codes from the final codebook considered to be directly tied to cultural acceptability are Ethicality, Perceived Norms, Coping with Racism and Cultural Stress, and Relevance/Relatability/Realistic. All other codes are considered to influence cultural acceptability but are part of more general acceptability and are still discussed within analyses. At the onset of the interview, the researcher confirmed consent for the call to be audio recorded, stored without any identifying information, and deleted following completion of the study. Each interview lasted around 30 minutes. The interviewer acknowledged their identity as a White woman during the interview and explained the purpose of the interview to the participant (see Appendix C). A risk management protocol was developed (see Appendix C) in case of participant crisis, but it was not indicated for use in either interview. The researcher established face validity of the interview tool by obtaining and integrating feedback from a school psychologist who works with high school seniors. Following their interviews, participants received a third $30 gift card.

Design

A two-group randomized controlled trial with a waitlist control group was used to compare the effects of the intervention from pre to post against a control condition. Participants were randomly placed into either the LIFT intervention group or the waitlist control group. Those in the waitlist group received "treatment as usual" during the 7-week period (i.e., they were directed to a page of mental health resources following the pre-test) and received access to LIFT 7 weeks from their consent date. Randomization occurred non-concurrently to either the treatment group or waitlist control group at the point of consent, given that recruitment and subsequent access to LIFT occurred on a rolling basis. Randomizing at the point of consent ensured that group assignment was not influenced by rolling recruitment procedures, bolstering internal validity. The waitlist control group received LIFT following their waitlist period, but their post-LIFT data were not available to the researcher for analyses. A mixed methods design for acceptability was also used to promote triangulation of findings in an effort to enhance validity of results (Creswell & Clark, 2007).

Recruitment

Participants were recruited through two phases. Phase A of recruitment targeted high schoolers aged 18+ across the US from March to June 2023. As shown in Table 7, the researchers used a variety of methods to reach high schoolers through school mental and behavioral health professionals via email, provider Facebook groups, and a statewide behavioral health provider training, with a request to distribute the study flyer with a QR link to the consent/screener form. Eighteen providers indicated interest and intent to distribute the flyer, but many found difficulty gaining student buy-in or did not follow up with the study coordinator to provide updates on distribution.
Providers were instructed to distribute the flyer in whichever way they saw fit (e.g., to individual students, to student group leaders, posted in high-traffic areas in the school, or to psychology teachers to distribute to students in their classes). The researcher used a strategic education approach with school staff when discussing LIFT, an evidence-based implementation approach to recruitment that supports adherence and recruitment efforts by ensuring that providers have a clear understanding of how directing youth to this service opportunity can support closing the mental health gap in their schools (Lyon et al., 2019). These approaches are summarized in Table 7, with Phase A yielding no participant enrollment.

Table 7. Outcomes of Recruitment Phases

Recruitment phase, dates, and associated recruitment strategies | # Provider contact attempts | # Providers distributed | Participants enrolled
Phase A: High school seniors (3/23/23-6/29/23) — direct contacts to school professionals to distribute the flyer to specific eligible students or to student groups, psychology teachers, etc.; presentation at a statewide school behavioral health provider training; Facebook groups of school mental and behavioral health providers | N = 45 individuals, schools, districts, and Facebook groups contacted; 90 providers at 1 statewide training | 18 indicated intent to distribute | 0
Phase B: Recently graduated high school seniors and college freshmen (6/30/23-9/30/23) — reached out to administrators of "Class of 2027" University/College Instagram and Facebook accounts to request a post of the study flyer and screener link to their Instagram profile or an Instagram story (24-hour post) | N = 141 Instagram accounts and 4 Facebook pages messaged | 35 posted flyer (4 posted twice) | 47

Thus, Phase B of recruitment was initiated from July to September 2023, as shown in Table 7, and yielded N = 47 participants enrolled. In Phase B, the researcher reached out to 141 Instagram accounts and 4 Facebook pages designed for incoming freshmen at a particular university (e.g., XYZ University Class of 2027) to connect with each other. The researcher direct-messaged these accounts and asked if they would be willing to post the study flyer as an Instagram story (visible for 24 hours) with an easily clickable link to the study screener. Thirty-five accounts posted the study flyer for a 24-hour period; 11 of those accounts requested a small fee from the researcher to post the flyer, and four posted the study flyer twice across two separate weeks. Interested students were able to click the screener/consent link directly from the Instagram story. Recruitment, screening, and subsequent access to LIFT or waitlist materials occurred on a rolling basis until the pool of potential participants reachable via these methods had been exhausted. Participants were incentivized to participate via two $30 Amazon gift cards (one after pre-test and one after post-test), with the potential to earn an additional $30 gift card.

Study Phases and Procedures

Directly after the screener survey, participants were randomized into either the intervention group or the waitlist control group, non-concurrently, by the Qualtrics survey system. Following randomization, participants were immediately routed to the appropriate pre-test survey.
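Group assignment was handled by the Qualtrics randomizer; purely as an illustration of non-concurrent, rolling randomization at the point of consent, a minimal sketch follows (the participant IDs and 1:1 allocation are assumptions).

```python
import random

# Simple 1:1 randomization at the point of consent (rolling/non-concurrent);
# the study used the Qualtrics randomizer, so this is illustrative only.
rng = random.Random(2023)  # fixed seed so the illustration is reproducible

def assign_at_consent(participant_id: str) -> str:
    """Assign a newly consented participant to a condition immediately."""
    return rng.choice(["treatment", "waitlist_control"])

for pid in ["p001", "p002", "p003"]:
    print(pid, assign_at_consent(pid))
```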
The groups received slightly different versions of the pre-assessment and post-assessment surveys because the intervention group received the depression, anxiety, and PTSS measures embedded within Chapter 1 (pre) and Chapter 7 (post) of LIFT, while the waitlist control group did not. Group placement determined whether participants received access to the intervention immediately (treatment) or in 7 weeks (waitlist control). Following the pre-test, participants were immediately provided expectations for the study (i.e., one module per week, weekly reminders, and required responses to three check-ins throughout the study) and a subsequent email with directions for how to access LIFT and begin Chapter 1. The LIFT developers provided the researcher with a list of new, de-identified LIFT account logins (e.g., lift1@email.com), and the researcher provided these to participants on a rolling basis to begin LIFT. This allowed participants to keep their private information off of the LIFT website. Participants were able to change their password after the first log-on. Since the program is self-directed, the intervention group was provided reminders to stay on track for completing a module per week for 7 weeks, either via text or email depending on preference. Eleven out of 24 (45.83%) participants randomized to the treatment group opted for text reminders in addition to email correspondence for communication related to the study and reminders to start the next chapter of LIFT, while the remainder preferred email reminders only. Eight out of 23 (34.78%) participants randomized to the waitlist group opted for text reminders in addition to email correspondence for communication related to the study and reminders for LIFT once they received access to the program after their waitlist period. Participants were directed to leave at least two days between modules if they completed two in one week, to ensure there was enough time to engage with the "self-care plan" in between chapters. Participants were required to send a brief response (yes/no) to the Chapter 4 and Chapter 7 reminder messages to let the research coordinator know whether they were on track to finish. Up to two additional contacts were sent to participants if they did not respond to the initial Chapter 4 and Chapter 7 check-ins. Participants were provided up to two extra weeks to finish the modules if they reported at the check-ins that they needed more time. Participants were discontinued from the study, and a post-test was not sent, if they did not respond to either the Chapter 4 or Chapter 7 check-in attempts. When placed in the waitlist control group, participants were informed that they would receive access to the program approximately 7 weeks from their consent date. In the meantime, they were provided with a page of mental health resources directly following the pre-test. All participants completed the post-test 7 weeks from their date of consent. After completing post-tests, both groups received their second $30 gift card, and the waitlist group received access to LIFT (via an emailed de-identified login code). See Table 8 for a breakdown of treatment phases. Not included in this table is the final phase of the study, which entailed post-LIFT data collection within the group that was initially a waitlist control. Also not included in the table is mid-test data, which were collected by the LIFT platform. Neither was summarized or analyzed within the current study.
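The weekly reminder and check-in protocol described above can be expressed as a simple schedule; the sketch below is illustrative only, and the field names and consent date are hypothetical.

```python
from datetime import date, timedelta

def reminder_schedule(consent_date: date) -> list[dict]:
    """One reminder per chapter for 7 weeks; the Chapter 4 and Chapter 7
    reminders require a yes/no response (check-ins)."""
    return [
        {
            "chapter": week,
            "send_on": consent_date + timedelta(weeks=week - 1),
            "requires_response": week in (4, 7),
        }
        for week in range(1, 8)
    ]

for reminder in reminder_schedule(date(2023, 7, 3)):  # hypothetical consent date
    print(reminder)
```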
Similarly, the waitlist control group's post-LIFT and LIFT user data were not available to the researcher and could not be included in analyses as part of the current study.

LIFT Program. LIFT has 7 chapters (see Table 3), completed sequentially, with each building on skills learned in earlier chapters. One chapter is intended to be completed in a single sitting, but it could be completed over the course of multiple days if needed. Within each chapter are a variety of engaging videos, games, and activities. Cognitive-behavioral content is provided (e.g., psychoeducation about stress and trauma, teaching of relaxation skills, challenging dysfunctional thinking, approaching rather than avoiding trauma reminders, problem-solving skills) in video format, with many application opportunities. These opportunities take the form of games (e.g., create-your-own-adventure) that students could replay if desired, matching or drag-and-drop activities, open-response questions, and prompts to think about examples in their own lives. Each chapter also has users develop a "self-care plan" (i.e., homework) at the end, prompting users to select a goal for the next week related to LIFT content learned. At the beginning of Chapters 2-7, there are brief check-ins about self-care plan completion and any barriers to completion. The LIFT program has two tracks, a "trauma track" and a "stress track". The trauma track contains additional trauma-focused content compared to the stress track, including a trauma-specific fear hierarchy activity that uses exposures to address traumatic experiences and PTSS (increasing approach and decreasing avoidance). This track also includes a trauma narrative ("newspaper story"), which is central to the success of evidence-based trauma treatments (TF-CBT; Cohen et al., 2017). Users on the stress track receive similar content that uses language tied to general stressful events. LIFT routes users onto the trauma track if they indicate specific trauma criteria in Chapter 1 (i.e., >1 PTE and mild PTSS). No participants were routed onto the stress track.

Table 8. Breakdown of Study Phases and Administered Measures

Phase | Intervention Group – Measures Administered | Waitlist Control Group – Measures Administered
Phase 1: Screener (same for both groups; Qualtrics) | Screener/treatment consent, demographic questionnaire, traumatic experiences checklist, post-traumatic stress (CPSS) | Same as intervention group
Phase 2: Randomization into 2 groups, followed by Pre-Assessment (Qualtrics) | Coping skills (CCSC), negative cognitions (CPTCI-S) | Post-traumatic stress (CPSS), depression (RCADS), anxiety (RCADS), coping (CCSC), negative cognitions (CPTCI-S)
Phase 3: Measures embedded in the LIFT platform (intervention group only; Chapter 1 [pre-score] and Chapter 7 [post-score]) | Depression and anxiety (pre/post; RCADS), post-traumatic stress (post; CPSS), other feasibility and effectiveness variables | n/a
Phase 4: Post-Assessment (Qualtrics) | Coping skills (CCSC), negative cognitions (CPTCI-S), acceptability survey (quant/qual) | Post-traumatic stress (CPSS), depression (RCADS), anxiety (RCADS), coping (CCSC), negative cognitions (CPTCI-S)
Phase 5: Qualitative interviews (selected participants) | Cultural acceptability (qualitative interview) | n/a
Note: Child PTSD Symptom Scale (CPSS); Child Post-Traumatic Cognitions Inventory – Short Form (CPTCI-S); Revised Children's Anxiety and Depression Scale (RCADS); Children's Coping Skills Checklist (CCSC)
All content and data were stored securely and confidentially on the LIFT platform, and the researchers did not have access to any of the open responses that participants typed in (e.g., the trauma narrative). Participants had the option to print out open-response activities.

Data Analysis: Treatment Feasibility/Adherence

Feasibility and adherence data were collected by the LIFT system (usage metrics) and analyzed descriptively to answer Research Question 1 across all participants randomized to the treatment group. These data include how many participants started LIFT out of those provided with access to the program, the number of chapters completed, the number of participants who completed the program out of those who started, the percentage of videos watched, the number of minutes spent per chapter (duration), and self-reported engagement with LIFT activities (i.e., questions within the chapters asking whether "self-care plan" elements were completed and which activities were replayed). Examining these different facets of engagement with LIFT is in line with literature recommendations for measuring digital intervention adherence using multiple metrics of the "enacted dose" in relation to the "intended dose": "Intervention Actions" (receiving [reading, listening to] intervention content; duration and % of videos watched), "Participant Actions" (producing and delivering content to the intervention; replaying activities), and "Behavioral Target Actions" (engaging in health behaviors outside of the intervention; self-care plan; McVay et al., 2019).

Data Analysis: Treatment Effectiveness

To measure effectiveness across groups and over time for variables of interest, Analysis of Covariance (ANCOVA) was used for Research Questions 2 and 3, with pre-treatment scores as the covariate. As proposed and discussed above, analyses followed an intent-to-treat approach, and missing data were addressed using multiple imputation (MI). Intent-to-treat approaches are best practice for intervention trial research, even with significant proportions of missing data (Madley-Dowd et al., 2019), and MI is one of the more rigorous strategies for replacing missing values in line with an intent-to-treat approach that includes all participants in analyses regardless of non-adherence or dropout from the study, replacing their missing values with plausible values across a range of imputations (N = 40 imputations; Salim et al., 2007). ANCOVA is widely accepted as an efficient and rigorous method to reduce bias and error variance when calculating pre-post changes in outcome variables for studies that use RCT designs (Dimitrov & Rumrill, 2003; Wan, 2021). The literature supports that ANCOVAs are an appropriate statistical method across two groups and two time points (Liang & Zeger, 2000; Gliner et al., 2003). Homogeneity of variances across groups was also assessed below, since group sizes were unequal. Secondary data on treatment effectiveness for those who completed the LIFT program are also presented below; these include self-reported questions throughout LIFT regarding self-care plan completion (Chapters 3-7) and goal attainment scaling initially completed in Chapter 1 and then again in Chapter 7.
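As a minimal sketch of the ANCOVA model described above (post-score regressed on group with pre-score as the covariate), fit here with statsmodels on toy data rather than SPSS, which the study used; the column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: group 1 = treatment, 0 = waitlist control; columns are hypothetical.
df = pd.DataFrame({
    "pre":   [18, 22, 15, 20, 17, 19, 24, 16],
    "post":  [14, 16, 13, 19, 18, 17, 21, 15],
    "group": [1, 1, 1, 1, 0, 0, 0, 0],
})

# ANCOVA as a linear model: post-score on group, controlling for pre-score.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.params)    # adjusted group effect and covariate slope
print(model.pvalues)   # significance of the group term and covariate
```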
Data Analysis: Treatment Acceptability

Quantitative acceptability data were summarized for the intervention group descriptively by item and via an average of all items from the acceptability questionnaire. Jaycox and colleagues (2019) defined a total score of 1.85 as "moderate"; for the current study, an average score of < 1.0 indicates "low" acceptability, an average score between 1.0 and 1.9 indicates "moderate" acceptability, and an average score of > 2.0 indicates "high" acceptability. The last three questions on the acceptability questionnaire were open-response questions regarding experiences with LIFT. Participant responses to these questions were mapped onto Perski & Short's (2021) model for understanding the complexities of engagement and acceptability in relation to effectiveness. This addresses Research Question 4. Three individual interviews were conducted to gather information on general and cultural acceptability to answer Research Question 5. A directed content analysis approach was utilized (Hsieh & Shannon, 2005), which is appropriate when using existing theory (Perski & Short, 2021) to guide data interpretation. Each interview was coded by both coders separately, with an initial a priori codebook, in addition to open coding of new inductive codes that arose. It is common for qualitative content analyses to take this approach by including a combination of deductive codes based on theory and inductive codes derived once immersed in the data (Forman & Damschroder, 2008). A priori codes included affective attitude, burden, ethicality, opportunity costs, perceived effectiveness, usability, perceived norms, knowledge, want/need (aligned with the Perski et al., 2021 model), coping with racism, relevant/relatable/realistic, and suggestions for how LIFT can be improved (see Appendix D). Following an initial discussion about the transcripts after the first round of coding, the two coders updated the codebook with clearer operationalizations and one new code (engagement with study components outside of the LIFT program), then moved on to the second transcript. Next, they completed a second round of individual coding for each transcript using the updated codebook. Intercoder reliability was calculated (see above). Disagreements were resolved through discussion and reference to the operationalizations that had been established. Data were summarized numerically and narratively. The researchers completed member checks following the coding process by emailing a brief summary of qualitative results to interviewed participants, with the option to provide feedback via an anonymous Qualtrics survey during a two-week period. The member check process used the structured McKim (2023) framework by presenting a summary of acceptability findings in an email followed by these optional questions: After reading through the findings, what are your general thoughts? How accurately do you feel the findings captured your thoughts and experiences? What could be added to the findings to capture your experiences better? If there is anything you'd like removed, what would that be and why? Qualitative and quantitative acceptability findings were triangulated using the finalized codes within the discussion, which promotes increased validity of findings (Creswell & Clark, 2007).

CHAPTER 4

RESULTS

Research Question 1: Feasibility/Adherence

Usage data were collected by the LIFT platform surrounding the number of chapters accessed/completed. This provided information on retention and adherence, components of intervention feasibility, which are summarized below. This information on retention and adherence is also partially summarized in Figure 2 in relation to participant follow-up throughout the study and analyses.
Only 6 of the 24 (25%) participants allocated to the treatment group completed all chapters of LIFT and met minimum adherence criteria, indicating meaningful interaction with the content and videos such that the LIFT intervention could have its intended effects. Of the 18 treatment group participants who did not complete LIFT as intended, 3 (17%) did not receive their login information for LIFT due to non-response to the initial study contact, 10 (56%) received access to LIFT but never began the program, 2 (11%) began LIFT but did not complete all chapters, and 3 (17%) finished all 7 chapters of LIFT but failed to engage with the content in a meaningful way in line with the minimum dosage criteria outlined as part of post-hoc analyses, described below. Adherence to LIFT was calculated only for those who received login information for and started the LIFT program (n = 11), using the same usage data on LIFT chapter completion. Of these 11 participants, 9 (82%) completed all 7 chapters, but only 6 (55%) engaged with the content in a meaningful way in line with the minimum dosage criteria. This rate of true adherence to LIFT (55%) is below the study hypothesis of 70%. Finally, the retention rate was higher in the waitlist group (87%; N = 20/23) due to study-specific factors (i.e., no study requirements during the 7 weeks between pre and post).

Table 9. Summary of Engagement Characteristics of N = 11 Participants Who Began LIFT

Chapter-level values of the percentage of videos with > 50% of content watched and chapter duration informed the dose determinations below.

Participant | Median chapter duration (minutes) | Minimum dose met – videos | Minimum dose met – duration
1 | 21.23 | No | Yes
2* | 18.33 | Yes | Yes
3* | 381.30 | Yes | Yes
4 | 8.72 | No | No
5* | 22.95 | Yes | Yes
6 | 15.83 | No | Yes
7* | 21.27 | Yes | Yes
8* | 61.63 | Yes | Yes
9* | 18.48 | Yes | Yes
10 | DNF | — | —
11 | DNF | — | —

Median chapter duration (range; outliers) for participants 1-9, by chapter:
Chapter 1 (Psychoeducation): 22.03 min (range: 10.47 min-1.12 weeks; outlier: 1.12 weeks)
Chapter 2 (Relaxation, Activity Scheduling): 32.55 min (range: 6.28 min-2.51 days; outliers: 2.51 days, 22.48 hours)
Chapter 3 (Addressing Cognitive Distortions): 20.53 min (range: 12.20 min-6.90 hours; outliers: 6.90 hours, 6.36 hours)
Chapter 4 (Reducing Avoidance): 15.90 min (range: 9.85 min-18.79 days; outliers: 18.79 days, 4.85 hours)
Chapter 5 (Trauma Narrative): 17.07 min (range: 6.77 min-18.91 hours; outliers: 16.21 hours, 3.95 hours, 18.91 hours)
Chapter 6 (Social Problem Solving): 10.73 min (range: 6.02 min-37.60 hours; outlier: 37.60 hours)
Chapter 7 (Application of Content): 24.67 min (range: 18.33 min-30.72 hours; outlier: 30.72 hours)

* = minimum dosage met across video content and chapter duration (> 50% of content watched per video in a given chapter, > 10 minutes duration across > 50% of chapters); minimum dosage criteria were developed post hoc as inclusion criteria for post-hoc analyses
Note: DNF = did not finish
Note: Participants 12-21, who were given access to the LIFT program but did not begin it, are excluded from this table
Note: Outlier = > 3 hours chapter duration (likely completed chapter in more than one sitting)

Usage data were also collected by the LIFT platform surrounding chapter duration and the percentage of videos watched in each chapter for participants in the treatment group who started LIFT (N = 11; see Table 9). This provided information on adherence to and general engagement with LIFT, subcomponents of feasibility. Given the low retention rate and moderate adherence rate for the treatment group reported above, it was essential to examine these subcomponents of engagement and adherence to more comprehensively understand how participants interacted with the LIFT program. Within Table 9, it is evident that participants clicked through much of the LIFT content quickly, given the low ends of the chapter duration ranges across chapters. Medians are presented rather than averages given the many outliers. A duration was considered an outlier if it was longer than 3 hours for a given chapter, as this suggests the participant completed the chapter over multiple sittings. Some participants took multiple days or weeks to complete a given chapter.
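Under one reading of the minimum dosage criterion footnoted in Table 9 (> 50% of content watched per video, > 10 minutes duration across > 50% of chapters), a participant's chapter-level usage could be screened as in the sketch below; the exact operationalization and data structure are assumptions made for illustration.

```python
def minimum_dose_met(chapters: list[dict]) -> dict:
    """chapters: one dict per chapter with 'frac_videos_over_half' (share of
    that chapter's videos with > 50% of content watched) and 'duration_min'."""
    n = len(chapters)
    videos_ok = sum(c["frac_videos_over_half"] > 0.5 for c in chapters) > n / 2
    duration_ok = sum(c["duration_min"] > 10 for c in chapters) > n / 2
    return {"videos": videos_ok, "duration": duration_ok,
            "met": videos_ok and duration_ok}

example = ([{"frac_videos_over_half": 1.0, "duration_min": 21.2}] * 4
           + [{"frac_videos_over_half": 0.4, "duration_min": 8.7}] * 3)
print(minimum_dose_met(example))  # {'videos': True, 'duration': True, 'met': True}
```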
These usage data display the varied patterns of engagement with the LIFT program and the need for the researcher to establish a minimum dosage criterion for the intervention, since some participants appeared to move through the chapters more quickly than would be expected to absorb the content. Even though only 6 participants completed LIFT as intended, the engagement characteristics described below are presented for the group of 9 participants who completed all chapters of LIFT, to better understand the range of engagement patterns observed for those in the treatment group who met all study requirements. First, it took on average 5.73 weeks for these 9 participants to complete the program from start to finish (range: 4.47-7.47 weeks), indicating that most completed the chapters at a quicker pace than intended by the study instructions and reminders (i.e., more than one chapter in some weeks). The available LIFT usage data did not include how many times participants logged in for each chapter. Some participants appeared to complete a chapter over more than one sitting (see "outliers" in Table 9), and others appeared to click through chapters at a quicker rate (< 10 minutes) than expected to complete all activities and videos. Five participants reported completing LIFT entirely on their personal computer/laptop, three used a combination of their personal computer/laptop and smartphone, and one used a combination of a smartphone and tablet. Participants engaged with LIFT in ways inconsistent with the intended usage characteristics from the pilot study. The LIFT user metrics also provided information on whether self-care plan components were completed between LIFT chapters and how many times participants replayed key games or activities within chapters. Self-care plan completion and activity replays are important secondary subcomponents of engagement, adherence, and feasibility for LIFT, in line with "Participant Actions" and "Behavioral Target Actions" cited within the literature as components of adherence (McVay et al., 2019). As displayed in Table 10, the 9 participants who completed LIFT self-reported (yes/no), starting in Chapter 3, whether they had the chance to practice skills from previous weeks since completing the last chapter. Based on these data, the most common self-care plan items not completed between chapters were items related to exposures: writing or drawing about a stressful event and facing fears. Most participants reported completing at least half or all of the self-care plan items since the last chapter. All but one of the nine participants who fully completed LIFT attended to every self-care plan element at least once (see Table 10). All participants were required to complete activities a certain number of times within a chapter to move on, but could revisit/replay activities more than once if desired. For instance, of the nine participants who fully completed LIFT, six completed the minimum number of relaxation practices (three).
Table 10. Self-Reported Self-Care Plan Completion by Chapter (N = 9)

Starting in Chapter 3, participants self-reported which self-care plan elements they had completed since the last chapter. Key: AS = activity scheduling; RLX = relaxation; NG = replacing negative thoughts with helpful thoughts; FF = facing fears; WD = writing or drawing about a stressful event.

% of self-care plan elements completed at least one time: Participants 1-4: 100%; Participant 5: 80%; Participants 6-9: 100%.

Two participants did one extra practice each, and one participant did four extra practices. Overall, 25% (6 out of 24) of participants in the treatment group completed LIFT with appropriate adherence. Global LIFT usability concerns remain, as more than half (13/24; 54%) of those randomized to the treatment group did not begin LIFT at all despite consenting to do so. Of those who began LIFT, about half adhered to the treatment as intended (6 out of 11; 55%). In addition, some participants who began LIFT engaged inconsistently with chapter content (5 out of 11 did not meet minimum dosage criteria; 45%), despite consistent engagement with self-care plans between chapters, leading to questionable feasibility of LIFT in this context.

Research Question 2 – Effectiveness (Mechanisms of Action)

Prior to examining the primary effectiveness data for both mechanisms of action and targeted outcome analyses using an intent-to-treat framework, the large amount of missing data needed to be systematically addressed. Multiple imputation (N = 40 imputations) was used to address missing data for all pre-post variables of interest due to noncompliance with the LIFT program and dropout from the study. Original data (n = 9 for the treatment group; n = 20 for the waitlist group) were used to impute data for the remaining participants with missing pre or post scores, allowing the analyses to proceed with the full sample of 24 treatment group participants and 23 waitlist control group participants. This is aligned with an intent-to-treat approach that includes data from all participants randomized to a group in the analysis, regardless of noncompliance or dropout from the study (Armijo-Olivo et al., 2009). For both mechanism of action and targeted outcome variables used within the ANCOVAs below, equal variances across groups for all imputed datasets were established (p > .05) prior to analysis via Levene's Test of Equality of Error Variances, and all other assumptions were met (i.e., independent observations, normality of dependent variables, homogeneity of regression slopes, and linear relationships between covariates [pre-test scores] and dependent variables [post-test scores]). A post-hoc power analysis was calculated with G*Power software using the obtained sample size (N = 47) and an alpha of .05, to detect an effect size of .25 (power = 0.39), as reproduced in the sketch below.
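The reported figure can be reproduced outside of G*Power; assuming the effect size refers to Cohen's f for the two-group comparison, the following sketch uses statsmodels' power module with the same inputs.

```python
from statsmodels.stats.power import FTestAnovaPower

# Same inputs as the G*Power calculation reported above.
power = FTestAnovaPower().power(effect_size=0.25, nobs=47,
                                alpha=0.05, k_groups=2)
print(round(power, 2))  # ~0.39
```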
This power estimate indicates a moderate likelihood of Type II error and underestimation of effects within all ANCOVA analyses in the current study given the obtained sample size. See Table 11 below for a summary of the descriptive statistics for the mechanisms of action. See Appendix E for the ANCOVA results for the first 5 imputations and Appendix F for a pooled summary of intent-to-treat findings across all 40 imputations.

Table 11. Pre and Post Mean Scores for Mechanisms of Action

Measure | Tx Group (N = 24): Pre-Score (Original data) | Tx Group: Observed Post-Score (Original data [12] / Original + Imputed^a) | WLC Group (N = 23): Pre-Score (Original data) | WLC Group: Observed Post-Score (Original data [20] / Original + Imputed^a)
Coping – Problem Solving | 6.25 | 6.75 / 6.88 | 7.13 | 7.70 / 7.74
Coping – Cognitive | 6.00 | 7.08 / 7.02 | 8.57 | 8.10 / 8.01
Coping – Optimistic | 6.38 | 7.67 / 6.91 | 7.65 | 6.70 / 6.69
Coping – Avoidance | 7.46 | 7.42 / 6.93 | 8.00 | 6.80 / 6.84
Negative cognitions – PDC | 10.63 | 11.33 / 12.26 | 11.00 | 12.40 / 12.55
Negative cognitions – FPSW | 11.25 | 8.25 / 9.04 | 9.52 | 9.60 / 9.73
Note: Pooled standard deviations are not available
^a Pooled scores across N = 40 imputations
PDC = Permanent and Disturbing Change; FPSW = Fragile Person in a Scary World

Coping Skills. Four one-way ANCOVAs were conducted to examine whether LIFT was effective in improving self-reported coping skills (CCSC; Ayers et al., 1996) across the four subscales for participants in the treatment group compared to the WLC group while controlling for pre-test scores. Self-reported coping skills did not significantly change between the LIFT and waitlist groups for any subscale across imputations while holding pre-scores constant, so effect sizes are not reported. For the Problem Solving subscale, there was not a significant difference in self-reported problem-solving coping scores across imputed datasets, displayed by the range of F-values [F(1, 46) = .000-4.627, pooled p = .46] between the intervention group and the control group. The problem-solving coping pre-test was not significant at the .05 level at the pooled level, suggesting that problem-solving coping pre-scores did not impact problem-solving coping post-scores. When computing the pooled estimated marginal means (EMMs), participants in the treatment group reported similar problem-solving coping at post-test (M = 6.953, SE = .816) compared to participants in the waitlist group (M = 7.671, SE = .621). The problem-solving coping treatment group post-score using original data only (n = 9) was similar (M = 6.56) to the imputed and EMM post-scores. For the Cognitive Coping subscale, there was not a significant difference in self-reported cognitive coping scores across imputed datasets and the original dataset, displayed by the range of F-values [F(1, 46) = .001-5.928, pooled p = .55] between the intervention group and the control group. The cognitive coping pre-score was not significant at the .05 level at the pooled level, suggesting that cognitive coping pre-scores did not impact cognitive coping post-scores. When computing the pooled EMMs, participants in the treatment group reported similar cognitive coping at post-test (M = 7.244, SE = .700) compared to participants in the waitlist group (M = 7.777, SE = .551). The cognitive coping treatment group post-score using original data only (n = 9) was similar (M = 7.99) to the imputed and EMM post-scores.
For the Optimistic subscale, there was not a significant difference in self-reported optimism coping scores across imputed datasets, displayed by the range of F-values [F(1, 46) = .000-4.809, pooled p = .88] between the intervention group and the control group. The optimism coping pre-test was not significant at the .05 level at the pooled level, suggesting that optimism coping pre-scores did not impact optimism coping post-scores. When computing the pooled EMMs, participants in the treatment group reported similar optimism coping at post-test (M = 6.901, SE = 1.069) compared to participants in the waitlist group (M = 6.695, SE = .778). The optimistic coping treatment group post-score using original data only (n = 9) was similar (M = 7.67) to the imputed and EMM post-scores. For the Avoidance subscale, there was not a significant difference in self-reported avoidance coping scores across imputed datasets, displayed by the range of F-values [F(1, 46) = .000-3.627, pooled p = .89] between the intervention group and the control group. The avoidant coping pre-score was not significant at the .05 level at the pooled level, suggesting that avoidant coping pre-scores likely did not impact avoidant coping post-scores. When computing the pooled EMMs, participants in the treatment group reported similar avoidant coping at post-test (M = 6.948, SE = .783) compared to participants in the waitlist group (M = 6.823, SE = .543). The pooled EMMs represent the pooled post-score after adjusting for the pre-score. The avoidant coping treatment group post-score using original data only (n = 9) was similar (M = 7.22) to the pooled imputed and EMM post-scores. These results indicate that when controlling for pre-test, participants in the treatment group did not display significantly different changes in approach or avoidant coping at post-test compared to the waitlist group participants following LIFT. Negative Cognitions. Two one-way ANCOVAs were conducted to examine whether LIFT was effective in decreasing negative cognitions (PDC; FPSW; CPTCI-S; 10 items) for participants in the treatment group compared to the waitlist group while controlling for pre-test scores. Self-reported negative cognitions did not significantly change between the LIFT and waitlist groups for either subscale across imputations while holding pre-scores constant, so effect sizes are not reported. There was not a significant difference in self-reported negative cognitions related to Permanent and Disturbing Change (PDC) across imputed datasets, displayed by the range of F-values [F(1, 46) = .000-1.737, pooled p = .89] between the intervention and the control group. The PDC pre-test was not significant at the .05 level at the pooled level, suggesting that PDC pre-scores did not impact PDC post-scores. When computing the pooled EMMs, participants in the treatment group reported similar PDC-related cognitions at post-test (M = 12.297, SE = 1.276) compared to participants in the waitlist group (M = 12.513, SE = 1.059). The PDC treatment group post-score using original data only (n = 9) was slightly lower (M = 11.78) than the imputed and EMM post-scores, suggesting that further analyses may be useful with the original data from an adherent sub-sample of participants (see Post-Hoc Analysis section below).
There was not a significant difference in self-reported negative cognitions related to Fragile Person in a Scary World (FPSW) across imputed datasets, displayed by the range of F-values [F(1, 46) = .049-7.154, pooled p = .37] between the intervention and control group. The FPSW pre-test was not significant at the .05 level at the pooled level, suggesting that FPSW pre-scores likely did not impact FPSW post-scores. When computing the pooled EMMs, participants in the treatment group reported similar FPSW-related cognitions at post-test (M = 8.921, SE = .771) compared to participants in the waitlist group (M = 9.852, SE = .693). The FPSW treatment group post-score using original data only (n = 9) was slightly lower (M = 7.89) than the imputed and EMM post-scores, suggesting that further analyses may be useful with the original data from an adherent sub-sample of participants (see Post-Hoc Analysis section). These results indicate that when controlling for pre-test, participants in the treatment group did not display significantly different changes in negative cognitions at post-test compared to the waitlist participants following LIFT. The following secondary data on treatment effectiveness were only available for the 9 participants who completed all 7 chapters of LIFT (see Table 12). Participants could endorse any of the following goals from a checklist in response to the question, "I want to change the way I do things and think about things so I can…", as summarized in Table 12 (# Endorsed Goal at Ch. 1). In Chapter 7 of LIFT, participants were provided with the same list of these goals and asked, "Which of the following things have you accomplished in LIFT?", which is also summarized in Table 12 (% Endorsed "Sort of True" or "Very True" for Reaching Goal).

Table 12. Participant Perceptions of Progress Towards LIFT Goals (out of N = 9)

Response Options | # (%) Endorsed Goal at Ch. 1 | # (%) Endorsed "Sort of True" or "Very True" for Reaching Goal
Calm myself down when I feel upset | 8 (88.9%) | 9 (100%)
Stop avoiding things that make me nervous | 6 (66.7%) | 9 (100%)
Do more of the things that I used to do | 6 (66.7%) | 8 (88.89%)
Think more about things before I do them | 5 (55.6%) | 9 (100%)
Have fewer problems with my family | 5 (55.6%) | 8 (88.89%)
Think about things that happened without feeling upset | 5 (55.6%) | 8 (88.89%)
Talk about things that happened without feeling upset | 4 (44.4%) | 8 (88.89%)
Make better decisions | 4 (44.4%) | 9 (100%)
Have fewer problems with my friends | 3 (33.3%) | 9 (100%)

These secondary effectiveness data support that participants self-reported meeting the goals they set forth within Chapter 1, and also that participants brought many different goals into the LIFT program that may not entirely align with study measures (e.g., have fewer problems with my family). Between 89% and 100% of treatment group participants who completed LIFT self-reported improved coping (i.e., "calm myself down when I feel upset") and reduced avoidance (i.e., "stop avoiding things that make me nervous"), which is not aligned with the intent-to-treat analysis findings above that found no improvement in these areas.

Research Question 3 – Effectiveness (Targeted Symptoms)

PTSS. A one-way ANCOVA was conducted to examine whether LIFT was effective in improving PTSS (CPSS; Foa et al., 2001) for participants in the treatment group compared to the waitlist group while controlling for pre-test scores.
Self-reported PTSS did not significantly change between the LIFT and waitlist groups across imputations or in the original dataset while holding pre-scores constant, so an effect size is not reported. There was not a significant difference in self-reported PTSS scores across imputed datasets, displayed by the range of F-values [F(1, 46) = .000-4.097, pooled p = .82] between the intervention group and the control group. The PTSS pre-test was not significant at the .05 level at the pooled level, suggesting that PTSS pre-scores did not impact PTSS post-scores. When computing the pooled EMMs, participants in the treatment group reported similar PTSS at post-test (M = 19.276, SE = 4.667) compared to participants in the waitlist group (M = 18.122, SE = 3.013). The pooled EMMs represent the pooled post-score after adjusting for the pre-score. The ANCOVA conducted with the original non-imputed dataset (n = 9 treatment, n = 20 waitlist) was also non-significant across groups for PTSS (see Appendix G). The PTSS treatment group post-score using original data only (n = 9) was slightly lower (M = 14.89) than the imputed and EMM post-scores, suggesting that further analyses may be useful with the original data from an adherent sub-sample of participants (see Post-Hoc Analysis section below). Scores in this range indicate participants across groups were still experiencing mild-moderate PTSS at post-test (Foa et al., 2001). These results indicate that when controlling for pre-test scores, participants in the treatment group reported no significant changes in PTSS compared to participants in the waitlist group following LIFT. Depression and Anxiety. Two one-way ANCOVAs were conducted to examine whether LIFT was effective in decreasing anxiety and depression symptoms (RCADS Generalized Anxiety and Depression subscales; Chorpita et al., 2000) for participants in the treatment group compared to the WLC group while controlling for pre-test scores across N = 40 imputations. It is important to note that missing pre- and post-score values were also imputed for the 13 participants who were randomized to the treatment group but did not complete the RCADS pre- and post-assessment embedded within Chapter 1 and Chapter 7 of LIFT, as outlined in Table 13 below. Best practices for handling significant portions of missing data suggest that, as long as important variables have < 40% missing information (the current study is within this threshold [33% FMI, 38% incomplete]; see Table 6), MI can be used and can produce valid estimates of treatment effects (Jakobsen et al., 2017). See Appendix G for the ANCOVA results for the first 5 imputations and Appendix H for a more detailed summary of intent-to-treat findings across all 40 imputations. See Table 13 below for a summary of the descriptive statistics for targeted outcome measures.

Table 13. Pre and Post Mean Scores for Targeted Outcomes

Measure | Tx Group (N = 24): Pre-Score (Original [11] / Original + Imputed^a) | Tx Group: Observed Post-Score (Original data [9] / Original + Imputed^a) | WLC Group (N = 23): Pre-Score (Original data) | WLC Group: Observed Post-Score (Original data [20] / Original + Imputed^a)
PTSS | 18.88 (Original only) | 14.89 / 19.38 | 17.96 | 17.70 / 18.01
Depression* | 12.64 / 13.49 | 7.56 / 10.62 | 12.04 | 11.55 / 11.77
Anxiety* | 9.64 / 10.15 | 5.33 / 7.29 | 9.78 | 9.25 / 9.33
Note: Pooled standard deviations are not available
Note: Clinical significance cut-offs to determine sub-threshold symptoms: ≤ 10 PTSS, ≤ 11 Depression, ≤ 7 Anxiety
^a Pooled score across N = 40 imputations
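For reference, the "pooled" estimates cited throughout these analyses combine the m = 40 imputation-specific estimates via Rubin's rules, and the partial eta-squared effect sizes reported below follow the standard definition; these are the textbook formulas rather than study-specific derivations:

$$\bar{Q} = \frac{1}{m}\sum_{i=1}^{m}\hat{Q}_i, \qquad T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr)B, \qquad B = \frac{1}{m-1}\sum_{i=1}^{m}\bigl(\hat{Q}_i - \bar{Q}\bigr)^2,$$

where $\hat{Q}_i$ is the estimate from imputation $i$, $\bar{U}$ is the average within-imputation variance, and $T$ is the total variance used for pooled inference; and

$$\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}.$$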
There were no significant improvements in self-reported depression symptoms overall across imputations, displayed by the range of F-values [F(1, 46) = 0.070-12.986, pooled p = .18], for those in the treatment group compared to those in the waitlist group. Depression symptoms appear to be somewhat trending in a positive direction, given that 17 of the 40 imputations produced significant results, although the effect was still statistically insignificant at the pooled level. The pooled p-value is typically considered the best indicator of overall significance across imputations. Partial eta-squared was selected as the measure of effect size due to literature suggesting that it is an appropriate option that can be applied universally to all research designs with clear interpretation standards (Richardson, 2011), and due to its availability within the selected statistical program (SPSS). Pooled partial eta-squared estimates could not be obtained within the statistical program utilized, so partial eta-squared was calculated for each imputation separately and reported in range format (see Appendix H). For the 17 of 40 imputations with a significant p-value, 7 were considered to represent a large effect and 10 a medium effect. The depression pre-test was significant at the .05 level across imputations, suggesting that depression pre-scores significantly impacted depression post-scores and that controlling for pre-score was important within the model. A one-point increase in the depression pre-score was associated with a .585-point increase in the depression post-score (pooled p = .007). When computing the EMMs, participants in the treatment group reported slightly fewer depression symptoms (M = 9.793, SE = 1.918) compared to those in the control group (M = 12.628, SE = 1.453) at post-test, after adjusting for pre-test. The ANCOVA conducted with the original dataset (n = 9 treatment, n = 20 waitlist) was also significant across groups for depression. The depression treatment group post-score using original data only (n = 9) was slightly lower (M = 7.56) than the imputed and EMM post-scores, suggesting that further analyses may be useful with the original data from an adherent sub-sample of participants (see Post-Hoc Analysis section below). Additionally, there were no significant improvements in self-reported anxiety symptoms overall across imputations, displayed by the range of F-values [F(1, 46) = 0.006-21.302, pooled p = .11] for those in the treatment group compared to those in the waitlist group. Given that more than half of the imputations (23 out of 40) led to a statistically significant positive outcome, anxiety symptoms appear to be trending in a positive direction, although they were still statistically insignificant at the pooled level. Pooled partial eta-squared estimates could not be obtained within the statistical program utilized, so partial eta-squared was calculated for each imputation separately and reported in range format (see Appendix H). For the 23 of 40 imputations with a significant p-value, partial eta-squared was calculated as a measure of effect size, where 19 were considered to represent a large effect and 4 a medium effect. The anxiety pre-test was significant at the .05 level in the original data and in 39 of 40 imputations, suggesting that anxiety pre-scores impacted anxiety post-scores and that controlling for pre-score was important within the model.
A one-point increase in the anxiety pre-score was associated with a .596-point increase in the anxiety post-score (pooled p = .005). When computing the pooled EMMs, participants in the treatment group reported slightly fewer anxiety symptoms (M = 7.081, SE = 1.495) compared to those in the control group (M = 9.549, SE = .991) at post-test. The pooled EMMs represent the pooled post-score after adjusting for the pre-score. The ANCOVA conducted with the original dataset (n = 9 treatment, n = 20 waitlist) was also significant across groups for anxiety. The anxiety treatment group post-score using original data only (n = 9) was also slightly lower (M = 5.33) than the imputed and EMM post-scores, suggesting that further analyses may be useful with the original data from an adherent sub-sample of participants (see Post-Hoc Analysis section below). These results indicate that when controlling for pre-test scores, participants in the treatment group experienced some improvements in anxiety and depression symptoms at post-test, albeit statistically insignificant at the pooled level, compared to participants in the waitlist group. Despite overall non-significance of the ANCOVAs, treatment group post-scores fell on or below the RCADS clinical cut scores (7 for anxiety; 11 for depression; Chorpita et al., 2005), while waitlist group post-scores fell above these cutoffs, demonstrating clinical significance of these trending symptom improvements. See Table 13 above for a brief summary of the intent-to-treat analyses for targeted outcomes and associated pre- and post-means. The secondary data on treatment effectiveness were only available for the 9 participants who completed all 7 chapters of LIFT. First, participants were asked at the beginning of LIFT what emotions they wanted to feel more or less of. Table 14 contains a summary of goal attainment scaling information for each emotion. Four participants reported wanting to feel less sad, two participants reported wanting to feel less angry, three participants reported wanting to feel less upset, six participants reported wanting to feel less nervous, and four participants reported wanting to feel less scared. Seven participants reported wanting to feel happier, six participants reported wanting to feel more excited, six participants reported wanting to feel more calm, and five participants reported wanting to feel more relaxed.

Table 14. Goal Attainment Scaling for Participant Emotion Goals (n = 8)

Emotion | Goal Attainment Scaling* | % of Participants (Out of Those Who Endorsed This Goal in Ch. 1) Who Reported Improvement at LIFT Ch. 7
Sad | -0.56 | 50% (2/4) reported improvement, 50% (2/4) reported no change
Angry | -0.56 | 100% (3/3) reported improvement
Upset | -0.67 | 100% (2/2) reported improvement
Nervous | -0.56 | 66.67% (4/6) reported improvement, 16.67% (1/6) reported no change, 16.67% (1/6) reported worsening
Scared | -0.56 | 50% (2/4) reported improvement, 25% (1/4) reported no change, 25% (1/4) reported worsening
Happy | 0.78 | 71.43% (5/7) reported improvement, 28.57% (2/7) reported no change
Excited | 0.56 | 66.67% (4/6) reported improvement, 33.33% (2/6) reported no change
Calm | 0.56 | 50% (3/6) reported improvement, 50% (3/6) reported no change
Relaxed | 0.33 | 25% (1/4) reported improvement, 50% (2/4) reported no change, 25% (1/4) reported worsening
*Range: -1 to 1; -1 = a little or much less; 1 = a little or much more; 0 = about the same
At the end of LIFT, participants were reminded of these individualized goals (e.g., "When you started the LIFT course, you said you wanted to feel LESS sad and MORE calm. How do you think you did overall?") and asked to rate their progress using a slider from 1-100%. Across the 8 participants with these data available, the average percentage endorsed was 66% (median = 76%; SD = 25.97; range: 19-89%). Goal Attainment Scaling was used to measure participant perceptions of improvement on each individual emotion goal (scale: -1 to 1; a negative score indicates feeling less of an emotion after LIFT, a positive score indicates feeling more of an emotion after LIFT, and 0 indicates no change). This is interpreted as "improvement", "no change", or "worsening" in the table based on the coding of the emotion (e.g., -1 for Sad is "improved", but -1 for Happy is "worsened"). These secondary effectiveness data support that participants mostly met, or experienced little change related to, their emotion-regulation goals set in Chapter 1. These secondary data are in alignment with the ANCOVA analyses above that found some improvements in anxiety and depression for the treatment group. Overall, effectiveness analyses for targeted outcomes display positively trending improvements in depression, anxiety, and emotion regulation following the LIFT program.

Research Question 4 – Acceptability Questionnaire

All 12 treatment group participants completed the LIFT post-survey and the corresponding acceptability survey measure, inclusive of quantitative acceptability items and 3 open-response items. However, responses from three participants were excluded because LIFT data indicated they had failed to complete at least one chapter. Thus, the evaluation of LIFT acceptability excluded those who failed to complete any of the LIFT intervention. For the treatment group participants (n = 9) with complete LIFT usage data (i.e., all 7 chapters completed), the mean score was 2.00 (SD = 0.54), which suggests moderate/high acceptability of the LIFT program per the interpretation standards used in the LIFT pilot study (Jaycox et al., 2019). Similarly, for the global acceptability item (Item 13), participants rated LIFT as "highly" acceptable (M = 2.33, SD = .71), and for the global social validity item (Item 14), participants rated LIFT as "highly" acceptable (M = 2.22, SD = .83). Overall, the quantitative data suggest that participants who completed LIFT viewed it as moderately/highly acceptable. See Table 15 for a by-item summary of participant acceptability ratings from this tool and the associated codes for each item from Appendix D. Subscale analyses indicated that Relevance/Relatability/Realistic items had a mean of 1.95 (positive view), Knowledge items had a mean of 2.03 (positive view), Burden items had a mean of 2.22 (positive view), Perceived Effectiveness items had a mean of 1.89 (moderately positive view), and the single Usability item had a mean of 1.89 (moderately positive view).

Table 15. Quantitative Acceptability Survey Results and Applicable Codes (N = 9)

Acceptability Survey Item | M (SD) | Applicable Codes
1. LIFT graphics and design are interesting and appealing to me. | 1.89 (.93) | Usability
2. The LIFT voices are interesting and appealing to me. | 2.11 (.93) | n/a
3. I like the LIFT music. | 1.56 (.88) | n/a
4. LIFT talks about stresses and trauma that I can relate to. | 2.22 (.83) | Relevance, relatability, and degree of realistic
5. The material presented in LIFT is clear and I can understand it. | 2.44 (.73) | Knowledge, Burden
6. Instructions in LIFT are clear and I knew what to do in each section. | 2.0 (.87) | Knowledge, Burden
7. I got information about the kinds of problems I've been having. | 1.67 (.71) | Knowledge, Relevance, relatability, and degree of realistic
8. I got advice on what I can do to feel better. | 2.0 (.50) | Knowledge
9. The things I learned in LIFT help me feel calmer. | 2.0 (.71) | Perceived effectiveness
10. The things I learned in LIFT help me solve my problems. | 1.67 (1.0) | Perceived effectiveness
11. The things I learned in LIFT helped me feel better about stresses that happened to me. | 2.0 (.71) | Perceived effectiveness
12. It helped to write about stresses. | 1.89 (1.05) | Perceived effectiveness
13. I liked LIFT. | 2.33 (.71) | Global acceptability
14. If my friend had something bad happen to him/her, I'd recommend that he/she try LIFT. | 2.22 (.83) | Global social validity
Overall Average | 2.00 (0.54) |
Note: average < 1.0 = "low" acceptability; average between 1.0-1.9 = "moderate" acceptability; average > 2.0 = "high" acceptability; rating of 0 = "not at all true", 1 = "a little bit true", 2 = "mostly true", 3 = "very true"

The lowest-rated items were "I like the LIFT music", "I got information about the kinds of problems I've been having", and "It helped to write about stresses", rated in the "moderate" acceptability range (between "a little bit true" and "mostly true"). No items were rated in the "low" acceptability range. Eight participants wrote in at least one response to the three open-response items, which prompted participants to report on anything they liked, disliked, or would change about the LIFT program, respectively. A brief summary of responses is presented narratively below and in more detail in Appendix I, in alignment with the acceptability codebook created for this study (Appendix D), inclusive of Perski and Short's (2021) digital intervention framework. Open responses reflecting positive perceptions of LIFT first indicated reduced feelings of Burden or stigma compared to what participants might encounter through traditional mental health services: "I liked being able to be honest about my experiences without having to talk to someone about it face to face" (also displaying a positively viewed Opportunity Cost [not having to sacrifice personal values or routines]). Participants also reported low burden related to the time commitment of the program: "The lessons were concise and quick to complete yet very informative." Participants also reported on Knowledge gained, such as liking "the different situations they placed in those games and for me to figure out what would be the best option to lower stress". Additionally, one participant implied that the scenarios felt Relevant, Relatable, and Realistic by stating, "I liked how in each section they gave interesting videos about certain situations and how we react to them". Open responses reflecting negative perceptions of LIFT or recommended changes suggested first that some elements of LIFT felt burdensome: "I felt that the flow of the activities was sometimes off… sometimes it felt more like a tedious assignment rather than a helpful tool," and multiple participants noted that graphics, instructions, survey questions, videos, or prompts felt repetitive. In relation to Knowledge, another participant implied a lack of building knowledge across chapters by stating, "I don't like how repetitive it was. Every other chapter felt like I did the same thing as the last".
Next, one participant implied a lack of Relevance, Relatability, and Degree of Realistic by stating a need to “include more games related to situations of different types of trauma.” Finally, two participants reported a neutral view of the Usability of LIFT, demonstrated by conflicting comments with suggested changes such as “I like the flow… [but,] some of the games were mediocre” and, “[I liked] the variety of exercises and approaches… [but,] add in more choices for exercises in individual chapters for accessibility”.

Across qualitative and quantitative items, this tool pulled for some similar constructs (Burden, Knowledge, Relevance, and Usability). Conclusions can be drawn for the n = 8 participants who completed both the quantitative and qualitative portions regarding convergence of findings for each code. Across this tool, participants appeared to perceive LIFT neutrally in relation to Burden (some skewing more negatively), with noted concerns related to the design, pacing, and flow of the content, but rated the associated quantitative items highly. For Knowledge, participants rated quantitative items moderate/high in terms of skills/knowledge learned, but qualitative comments were more neutral and less specific about knowledge learned. For Relevance, Relatability, and Degree of Realistic, participants rated quantitative items moderately/highly, but reported mixed statements about the relatability of LIFT content. Finally, for Usability, participants rated the quantitative item moderately, and made neutral statements about the LIFT platform’s design strengths and areas for improvement within the qualitative items. A data triangulation approach across qualitative and quantitative items typically improves the internal validity of findings; in this case, it displays at least moderate acceptability overall, but with many mixed perceptions of specific acceptability facets.

Research Question 5 – Cultural Acceptability Within Qualitative Interviews

Two qualitative interviews were completed to gather cultural acceptability information, with an African American cisgender female (Interviewee 1) and a Hispanic/Latinx cisgender female (Interviewee 2). Both met the minimum dose criteria (i.e., watched at least 50% of video content in, and spent more than 10 minutes on, at least 50% of chapters). The qualitative results are summarized in Appendix I and below. Many elements of acceptability were present in the interviews in line with Perski and Short’s (2021) digital intervention framework. Certain codes in the study-created codebook are directly related to cultural elements of acceptability (i.e., Ethicality, Perceived Norms, Coping with Racism or Cultural Stress), while all other codes are indirectly related to culture but are still an important component of understanding perceptions of cultural acceptability. Interview comments related to overall Affective Attitude (global acceptability) and engagement with study elements outside of LIFT are not summarized in Appendix I but are summarized below, as they contribute to the interviewees’ perceptions of LIFT acceptability.

Codes Directly Tied to Cultural Acceptability. No mention of cultural acceptability was made in response to the initial questions about liking/disliking the LIFT program and its components. The second half of the interview protocol probed for concerns and strengths related to the cultural acceptability of LIFT.
First, Ethicality describes whether an intervention fits within or conflicts with someone’s values/morals, as influenced by one’s cultural identities, given that a value conflict can negatively impact motivation to engage with a program. Interviewee 1 was not able to think of anything that did not feel aligned with her values when asked this question in the interview. Interviewee 2 shared that discussing aspects of mental health with her family, including sharing things from the LIFT program, would conflict with her family’s values around mental health, in that she does not “generally feel comfortable speaking with mental health with my family.” Completing LIFT independently, because it is focused on mental health, was “outside of the norms of my own culture [Latinx community]”. Overall, completing a mental health program online seems to be aligned with both interviewees’ personal values and goals, but not necessarily with Interviewee 2’s family’s values. Interviewee 2’s tone indicated appreciation of the program’s online self-administered modality for this reason.

Next, Perceived Norms represents how the sociocultural context in which an individual resides, and the associated sociocultural norms, influence how someone engages with or perceives LIFT. Interviewee 1 described how she did not ever feel “personally targeted” by any questions in LIFT as a person of color, which neutrally influenced her perception of the program (see Appendix I). Interviewee 2 did not share any information directly related to Perceived Norms; however, her responses related to Ethicality above seem to also reflect how the sociocultural context of her Latinx family influenced her perceptions of LIFT.

Coping with Racism and Race-Related Stress was added to the codebook outside of the Perski and Short (2021) framework to specifically isolate comments describing how someone used, could use, or did not use LIFT to help cope with race-related/cultural stressors. Interviewee 1 described how the mock scenarios within LIFT feel specifically applicable to young people of color for problem-solving and managing stress around peer pressure related to drugs and alcohol (see Appendix I). Interviewee 1 also shared: “…and I like gained some insight on how to deal with stuff, especially as a person who is targeted in the world and is not treated fairly than other people like the majority. So yeah, I think I think I would rate it like an eight [on a scale of 1-10, regarding how much I’d recommend the tool to people my age who share my cultural background]”. Conversely, Interviewee 2 described that LIFT could be improved by adding more examples of microaggressions as a type of stressor, and examples of stigmatizing statements that young people from diverse backgrounds might be hearing from their communities related to mental health, with examples of how to problem-solve those situations (see Appendix I). Interviewees 1 and 2 thus presented differing opinions related to whether LIFT in its current state is helpful for young people coping with stressors related to racism or one’s cultural background.

Codes Indirectly Tied to Cultural Acceptability. Findings pertaining to the remaining codes are summarized below. Using the Perski and Short (2021) model, it is important to consider how all facets of acceptability may contribute to cultural acceptability. Affective Attitude is a measure of global acceptability and reflective of general impressions of LIFT.
Interviewee 1’s impressions were generally neutral or positive, seen through statements such as “I think the program is awesome” and “I liked how the first the week was… so I was like, oh, yeah. I just wanted to kind of, you know, go back in and see how good it was… okay, I'll keep logging back in basically, right.” Interviewee 2 reported an overall positive perception of LIFT, such that many activities were “helpful and calming” and the “pace of the program was really good”.

Burden represents whether LIFT feels effortful to use. Interviewee 1 reported that she had to repeat a section following a technical difficulty (described below); but outside of this, Interviewee 1 reported low Burden, in that “it wasn't really anything that was like heavy… it was low key like light work” and the homework assignments weren’t overwhelming because she “put it like near somewhere where I can see it every day and try to at least complete one thing for the day or for the week. Be like, you know, not make my mind go like all crazy.” Interviewee 2 reported that none of the chapters or videos felt too long, “…which made it a lot easier to process”. She reported that if she got distracted, she was able to re-do things, except for the choose-your-own-adventure games. Overall, interviewees reported few concerns with Burden.

Knowledge represents existing or gained knowledge about the purpose of LIFT and mental health. Interviewee 1 shared that it was helpful that LIFT reinforced content/skills learned from previous therapy settings, alongside learning new knowledge related to strategies and handling stressful situations. She shared how the “advice” given in the chapters motivated her to log on again. Interviewee 2 reported that she enjoyed the “tips” (self-care plan) to use in her life to promote her health. Overall, interviewees cited learning many skills that were also tied to perceptions of effectiveness of the program, such as learning how to break down stressful tasks in day-to-day life (described below) and recognizing cognitive distortions.

Opportunity Cost reflects the extent to which benefits, profits, values, or routines must be given up to engage with LIFT. Interviewee 1 shared that it was hardest for her to complete LIFT because, “If I'm just like, tired from work, or anything, when I do kind of, like, go into like a little sleep hibernation, where I just don't really want to do anything…”. Interviewee 1 also shared that the flexibility of being able to log in at any time was helpful in the summertime with her work schedule, which allowed her to “put her full attention to it”. Interviewee 2 shared that she liked learning exercises (deep breathing) to practice at any time of the week. Neither interviewee appeared to give up any routines or benefits to engage in LIFT.

Perceived Effectiveness represents whether the intervention appeared likely to achieve, or did achieve, its purpose or met someone’s goals. Interviewee 1 reported more specific skills learned (Knowledge), but did not discuss the applicability of skills to daily life beyond learning “how to handle her stress if I’m feeling out of place in my body”. Interviewee 2 described more about how specific activities improved her functioning in her daily life, such as the stress thermometer (scaffolding situations that make her feel overwhelmed) and the newspaper publication (written exposure; see Appendix I).
Interviewee 2 specifically shared how LIFT supported her in processing grief, and that LIFT is “really great for anxiety, it gave great tips and like the situations all were more about anxiety than depression…”.

Relevance, Relatability, and Degree of Realistic was added to the codebook outside of the Perski and Short (2021) framework to specifically isolate how realistic and relevant the LIFT examples/characters felt relative to the problems in participants’ own lives. Interviewee 1 shared how the situations felt relatable for most people, including culturally diverse folks: “it was just like questions genuinely anybody can have about a stressful event. And that stressful event could come from like, you know, racism, or not come from racism, depending on, you know, what area you’re from…”. Interviewee 1 also talked about how the examples for social problem-solving related to common scenarios youth of color might encounter in areas where drug or alcohol usage is common (see Appendix I). Interviewee 2 discussed, as described above, how including more situations related to microaggressions might make the content more relatable. Interviewee 2 shared that the most relatable examples were tied to grief, and she appreciated seeing people of color among the characters, which contributed to relatability as a person of color.

Usability represents technology issues or issues with the LIFT online platform. Interviewee 1 reported “two moments out of the whole thing where the website wasn’t working as much” and “it was a tiny bit buggy”, but overall “it was an easy format, and you aren’t confused about where to go”. Interviewee 2 shared that it may help if the user interface were updated to a more “modern” appearance, compared to its current state as seeming “outdated”, but that this concern did not impact the helpfulness of the content.

Want/Need is a code that represents one’s motivation to change. However, participants did not often speak to their internal motivation to change within the interviews. Interviewee 2 stated, “…just like wanting to finish the course and see how I would be different from start to end. So yeah, I just wanted to finish it just so I could see like, learn more skills and implement into my life” when asked about motivation. Interviewee 1 shared that she liked the strategies in the first chapter, which motivated her to log back on again.

Finally, interviewee comments related to engagement with study components outside of LIFT were coded separately to pull apart study-specific factors contributing to acceptability. Both interviewees positively discussed the utility of the weekly text/email reminders sent external to the LIFT program. Interviewee 1 shared: “[the reminders were] definitely helpful. Because sometimes I did forget at times… And then once you once I got like, a little daily reminder, I did it automatically.” Interviewee 2 shared that reminders were helpful for building her motivation, particularly the text reminders: “I have ADHD. And that's definitely, that was what made me finish the chapter every week was on the last day it like reminded me. And if I couldn't finish it all in one sitting, like, the email would remind me to go back to it. And finish the chapter that I hadn't finished. So that was really helpful. Both the text and the email, because I don't check my emails often. But the text was like, the primary resource.” Overall, mixed data were provided related to the facets of acceptability that directly and indirectly make up cultural acceptability.
These results display that LIFT has the potential to be culturally acceptable for some young people of color, via the applicability of existing content to stressors associated with racism and via allowing young people who hold different values related to mental health than their families to engage with a mental health program; however, it is hard to draw conclusions about concrete cultural acceptability given the low sample size. One participant responded to the anonymous member check survey sent following completion of data analyses and confirmed the accuracy of most study conclusions from the interviews, but noted she did not agree with a statement made by the other interviewee. This participant stated: “Pretty accurate. I didn't personally agree with the point about LIFT helping ‘managing stress around peer pressure related to drugs and alcohol’ but I especially agree with the bullet point explaining how ‘LIFT could be improved by adding more explicit examples of microaggressions as a type of stressor and examples of stigmatizing statements that people of color might be hearing from their communities related to mental health, with examples for how to problem-solve those situations.’”

Interviewer positionality codes were applied when the interviewer said anything outside of the interview protocol: Validates (5 occurrences) and Reflects/Summarizes (12 occurrences). These were essential to foster a safe space within the interview, given the inherent power differentials between the interviewer and interviewee and the potentially sensitive topics.

Post-Hoc Analysis

Post-hoc analyses were undertaken to supplement overall findings in an adherent sub-group of LIFT treatment group participants. The rationale for doing so included the low retention rate, moderate adherence to LIFT treatment expectations, and an unexpectedly high reliance on imputed data within the intent-to-treat analysis (i.e., see differences between imputed [non-bolded] and original [bolded] observed post-score estimates in Tables 10 and 12). Given these limitations of the engagement, effectiveness, and acceptability analyses, minimum dosage adherence criteria for LIFT were established post-hoc by the researcher in collaboration with the intervention developers to determine the cut-off for an adherent sub-group. Even though participants clicked through the content of all 7 chapters, the self-guided nature of the program allowed users to click quickly through certain videos or activities. Most activities and games were required to move forward but could be completed quickly, and videos could also be skipped. It is essential that participants spent enough time on each chapter and watched enough of the video content to assume that the core ingredients of the intervention were received in a meaningful way. If a participant watched at least 50% of the video content in fewer than 50% of chapters, or spent at least 10 minutes on fewer than 50% of chapters, it was determined that they did not receive a minimum dosage of LIFT and could not be assumed to have adequately received the intervention. The current study’s adherence hypothesis aimed to see whether participants could reach 70% adherence to LIFT, which is in line with the standards in the literature and the levels of adherence from the pilot study; however, the “minimum dosage” of LIFT was set at a lower percentage (50%) because it represents the minimum amount of LIFT that needs to be engaged with in order to receive the main components of LIFT.
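Expressed operationally, this minimum-dose rule is a simple check over per-chapter usage logs. The sketch below is an illustration of the criterion only; the field names (pct_video_watched, minutes_spent) are hypothetical and not drawn from the LIFT platform or the study's scoring code.

def met_minimum_dose(chapters):
    """chapters: one dict per completed chapter, with 'pct_video_watched'
    (0-100) and 'minutes_spent' recorded for that chapter."""
    n = len(chapters)
    # Count chapters where each per-chapter bar was met.
    video_ok = sum(ch["pct_video_watched"] >= 50 for ch in chapters)
    time_ok = sum(ch["minutes_spent"] >= 10 for ch in chapters)
    # Minimum dose: both bars met in at least half of the chapters.
    return video_ok >= n / 2 and time_ok >= n / 2

# Hypothetical participant who completed all 7 chapters but rushed one:
usage = [{"pct_video_watched": 80, "minutes_spent": 25}] * 6 + \
        [{"pct_video_watched": 30, "minutes_spent": 8}]
print(met_minimum_dose(usage))  # True: only 1 of 7 chapters falls short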
This criterion was determined with input from one of the LIFT developers and with consideration of aligning the criterion with the theoretical model and LIFT’s theoretical change model (Perski & Short, 2021; Jaycox et al., 2019); specifically, components throughout LIFT build on each other across chapters using a CBT framework (e.g., building skills related to mechanisms of change [coping and negative cognitions] will facilitate improvements in outcomes), signifying the importance of receiving at least 50% of content across at least 50% of chapters (via duration spent on each chapter and videos watched). The active ingredients within LIFT are spread across these chapters and reiterated in each subsequent chapter, building up to the user engaging in exposures to reduce avoidance and anxiety while implementing the skills taught and reinforced in prior chapters. It is hypothesized that, within this theory of change, engaging significantly with at least half of the chapters would allow the participant to receive all core skills at least once. This was also reflected secondarily through LIFT self-care plans, where participants were asked to practice each previously learned skill and new skills simultaneously between subsequent chapters. These duration and video criteria for the “minimum dose” are also in line with digital intervention dose-response theories that emphasize the importance of measuring “intervention actions” (McVay et al., 2019); secondary adherence criteria that were included in results, but not proposed a priori as the primary adherence criteria within the current study (self-care plan completion and activity replays), are also in line with other recommended components of adherence measures for digital interventions (“Participant actions”, “Behavioral target actions”; McVay et al., 2019). Without evidence that LIFT had been accessed as expected, exploring intervention effects in those not adequately exposed to LIFT seemed futile. Thus, those who failed to meet minimum dosage criteria were excluded from these post-hoc analyses. Post-hoc analyses are summarized below in relation to engagement, effectiveness, and acceptability within the sub-sample of participants who met these criteria (n = 6). These criteria are also reflected in Table 9 and Figure 2. These post-hoc findings are discussed in conjunction with intent-to-treat findings to establish greater validity/depth of overall findings, and should not be viewed in isolation as they may be biased over-estimates of real-world treatment effects (Armijo-Olivo et al., 2009).

First, selected demographic and pre-treatment characteristics for the group of 6 adherent participants, in comparison to the 3 participants who completed study requirements without meeting the minimum dose (not including the three participants who reported completion of study requirements but were later discovered to have incomplete LIFT user data), are summarized in Table 16 below. Medians and ranges are provided for the targeted outcome variables at baseline. There were no major differences in mechanisms of action at pre-test, and thus these values are not included in the table. Some minor demographic differences were apparent across those who were adherent to LIFT, those who completed study requirements but were not adherent to LIFT, and the larger N = 24 group of participants randomized to the treatment group (see Table 5), but no specific conclusions can be made regarding potential differences with such small sample sizes across these sub-groups (see Table 16).
Table 16. Demographic Breakdown of Treatment Group Participants Who Met Study Requirements and Participants Who Met Minimum Dose

Demographic Characteristic | Met Study Requirements, Without Minimum Dose (n = 3) - n | Met Study Requirements and Minimum Dose (n = 6) - n
Race
  African American/Black | 1 | 2
  American Indian/Alaska Native | 0 | 0
  Asian | 1 | 1
  Native Hawaiian/Pacific Islander | 0 | 0
  Hispanic/Latinx | 0 | 3
  Multiracial | 0 | 0
  Biracial | 1 | 0
  Other (Non-White) | 0 | 0
Ethnicity
  Hispanic/Latinx | 1 | 2
  Non-Hispanic/Latinx | 2 | 3
  Other | 0 | 1
Age
  18 | 3 | 6
  19 | 0 | 0
Sex Assigned at Birth
  Male | 0 | 1
  Female | 3 | 5
Gender Identity
  Male/Cisgender Male | 2 | 1
  Female/Cisgender Female | 0 | 3
  Non-binary/genderqueer | 0 | 1
  Genderfluid | 0 | 1
  Agender | 1 | 0
Pre-PTSS - Median (Range) | 18.00 (11) | 19.00 (12)
Pre-Anxiety - Median (Range) | 10 (11) | 8.50 (7)
Pre-Depression - Median (Range) | 11.00 (20) | 12.00 (15)

Note: PTSS = post-traumatic stress symptoms

This table suggests potential areas for future research in relation to determining which participant characteristics may be associated with varying levels of adherence to digital interventions.

For engagement analyses within this sub-group, Table 9 above also summarizes engagement information in relation to the minimum dosage criteria. Three participants (Participants 1, 4, and 6; 33%) did not meet video criteria. One participant (Participant 4; 11%) did not meet duration criteria. Participants watched the most video content in Chapter 1, and the least video content in Chapter 6. This displays why it is important to look at both facets of engagement with digital self-guided programs, given that two participants met one component of the minimum dosage but not the other. For instance, it appears that completing a chapter over multiple sittings can still result in watching an adequate amount of video content. For the 6 participants who received the minimum dosage of LIFT, it took 6.29 weeks (SD = 1.26; range: 4.57-7.47) to complete the program, which is slightly longer than the larger sample of those who completed LIFT study requirements, and closer to the intended duration of 7 chapters over 7 total weeks.

The same ANCOVA procedures for the variables representing mechanisms of change and targeted outcomes were used for this post-hoc analysis, but only within the treatment-adherent sub-group (n = 6) and the waitlist participants who completed all study requirements (n = 20; see Figure 2), with no reliance on imputed data. For effectiveness related to mechanisms of action within this sub-group, the post-hoc analyses found no significant changes across groups, in line with the intent-to-treat findings, for problem solving coping [F (1, 25) = 3.837, p = 0.062], cognitive coping [F (1, 25) = 2.560, p = 0.123], optimistic coping [F (1, 25) = .557, p = 0.463], avoidant coping [F (1, 25) = .006, p = 0.937], PDC-related negative cognitions [F (1, 25) = .690, p = 0.415], and FPSW-related negative cognitions [F (1, 25) = 4.132, p = 0.054]. For effectiveness related to targeted outcomes within this sub-group, the post-hoc analyses found no significant changes across groups, in line with the intent-to-treat findings, for PTSS [F (1, 25) = .606, p = 0.444].
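In form, the ANCOVA used across these comparisons is a regression of each post-score on condition with the pre-score entered as a covariate. The following is a minimal sketch with invented data (Python/statsmodels is shown purely for illustration; the dissertation does not specify the software used), including how a partial eta squared can be derived from the resulting sums of squares.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical pre/post anxiety scores for treatment vs. waitlist cases.
df = pd.DataFrame({
    "group": ["treatment"] * 6 + ["waitlist"] * 8,
    "pre":   [10, 8, 12, 7, 9, 11, 9, 10, 8, 12, 11, 7, 10, 9],
    "post":  [5, 4, 7, 3, 6, 6, 9, 10, 8, 11, 10, 7, 9, 8],
})

# ANCOVA: post-score ~ pre-score covariate + group (condition).
model = smf.ols("post ~ pre + C(group)", data=df).fit()
table = anova_lm(model, typ=2)  # F tests for the covariate and for group

# Partial eta squared for the group effect: SS_group / (SS_group + SS_error).
ss_group = table.loc["C(group)", "sum_sq"]
ss_resid = table.loc["Residual", "sum_sq"]
print(table)
print(f"partial eta squared for group: {ss_group / (ss_group + ss_resid):.3f}")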
The post-hoc findings related to anxiety were aligned with the intent-to-treat analyses, but provided stronger evidence for significant improvements, in that participants in the treatment group who met the minimum dosage displayed statistically significantly larger improvements in anxiety than those in the waitlist group [F (1, 25) = 6.712, p = 0.016], with a large effect size (partial eta squared = .226). Participants who met the minimum dosage in the treatment group reported significantly lower anxiety post-scores (M = 5.17; SD = 5.622) compared to the waitlist group’s anxiety post-scores (M = 9.25; SD = 3.768), but similar scores to those of the larger group of 9 participants who completed LIFT (M = 5.33). This suggests that meeting the minimum dose criteria likely did not make an additional difference in further improving anxiety compared to completing all 7 chapters without meeting the minimum dose. The anxiety post-score also falls below the tool’s established clinical cutoff for anxiety (< 7; Chorpita et al., 2000). The post-hoc findings were more closely aligned with the intent-to-treat findings for depression, such that depression was positively trending but not statistically significant across groups from pre-to-post LIFT [F (1, 25) = 3.741, p = 0.065]. This suggests that completing the minimum dosage of LIFT did not lead to consistently improved outcomes across all targeted outcomes in this adherent sub-sample. These secondary analyses suggest that anxiety may improve most following LIFT for those who complete at least the minimum dosage of the LIFT program, compared to depression and PTSS, and, in combination with the ITT findings, support that LIFT may be able to improve anxiety overall for users. It appears the minimum dosage may have been essential for facilitating a stronger reduction in anxiety at post-test, but it may not have been essential for facilitating changes in depression or any mechanisms of action from pre-to-post.

Next, quantitative acceptability scores were moderate within this sub-group (M = 1.96; SD = .61) and similar to the overall acceptability reported by the larger group of nine participants who completed LIFT (M = 2.00). Finally, outcomes and acceptability for the n = 6 adherent sub-group were examined descriptively on a by-participant basis to add depth to the interpretation of how effective LIFT was for each participant who received the intervention. See Appendix J for a table summarizing these by-participant outcomes. Only 2 participants appeared to have improvements in mechanisms of action, though these measures lack clear interpretation standards for clinical significance. Participants 3 and 9 displayed improved coping (Cognitive and Optimistic) and improved FPSW negative cognitions from pre-to-post LIFT. All other participants displayed minimal changes in mechanisms of action (i.e., less than a 4-point change on the coping measure; score < 16 on the negative cognitions measure). For targeted outcomes, Participants 3 and 8 displayed significant improvement across depression, anxiety, and PTSS, moving below the respective clinical thresholds. Participant 9 experienced clinically significant improvement in PTSS (i.e., moving below the clinical threshold), and minimal to no changes in the other outcomes given that their pre-scores were already under the clinical thresholds.
Participants 2 and 5 displayed minimal to no changes in any outcome, and Participant 7 displayed worsening of depression and PTSS and no change in anxiety. All participants’ overall acceptability scores fell at least into the moderate range. Participants 3 and 8 reported high acceptability, and they were the two participants who completed valid qualitative interviews. This suggests that the interviewee perspectives in this study only represent the experiences of those who benefitted most from LIFT and had the most positive overall perceptions of LIFT. This information also suggests that, out of those who were most adherent to LIFT within the treatment group, few to no participants experienced negative effects, 3 participants reported clinically significant improvements in at least one outcome, and 3 participants experienced positively trending improved outcomes that were not clinically significant.

CHAPTER 5 DISCUSSION

This study was the first to examine a digital mental health intervention using a methodologically rigorous RCT design with a mixed methods cultural acceptability focus in a sample of 18–19-year-old REM recently graduated high schoolers and incoming college freshmen. This study intentionally examined the feasibility, effectiveness, and cultural acceptability of LIFT alongside REM young people (Perski & Short, 2021). Feasibility must first be established before examining treatment outcomes (Perski & Short, 2021). When discussing the overall feasibility of LIFT, it is essential to review data related to recruitment/retention, adherence, and other measures of engagement (qualitative and quantitative).

First, Phase B of recruitment was very successful (n = 47 participants enrolled) in comparison to Phase A, indicating that remote social media-based recruitment methods may be more feasible and effective at reaching REM recently graduated high schoolers and incoming college freshmen in a summer and early fall timeframe. A recent systematic review found that social media recruitment methods with adults were similarly or more effective than traditional recruitment methods in 68.3% of studies (N = 176 articles; Sanchez et al., 2020). Research supports the use of social media (i.e., Facebook, Instagram) to recruit “harder to reach” individuals for mental health research, such as REMs (Kayrouz et al., 2016), adolescents (Kutok et al., 2021), and emerging adults aged 18-23 (Loxton et al., 2015). As shown in Table 6, strategies for recruitment used in Phase A were unsuccessful (n = 0 participants enrolled) and may not be a feasible approach for recruiting interested and symptomatic REM 18- and 19-year-olds for a program such as LIFT. Many school-based providers indicated interest and buy-in related to the program in Phase A, but struggled to find students who met the criteria, or disseminated the flyer but struggled to obtain buy-in from students who fit the study criteria. The LIFT pilot study did not describe challenging recruitment efforts in schools, potentially due to (a) an earlier timeline than the current study, which was trying to recruit high school seniors during their final few months of school, (b) use of existing recruitment strategies inherent within relationships already in play when study personnel are embedded within schools, and/or (c) greater youth involvement given the use of parent consent that was required for many participants under 18 (Jaycox et al., 2019).
Overall retention for the LIFT treatment group was quite low, as only 25% (6/24) of those who agreed to participate completed the online intervention as intended. This suggests questionable feasibility of LIFT with this sub-population when delivered within the contextual set-up of this study. This retention rate falls at the lower end of the range reported for digital CBT interventions (36%-100%) and slightly below the completion rates for PTSS digital interventions (44%-85%; see Table 2). The retention rate in the LIFT pilot study (78%; Jaycox et al., 2019) was more than three times that found within the current study; however, the pilot study did not establish minimum dosage criteria, as was done in the current study to explore outcomes in those who received an adequate dosage of LIFT, where changes in key variables could be attributed to LIFT. The pilot study likely did not see a need to address this, due to fewer issues with intervention uptake following consent compared to the current study, and higher implementation support throughout LIFT compared to the fully remote, self-guided delivery of LIFT within the current study. Over half of the allocated treatment group participants in the current study never began the program (13 out of 24; 54%), displaying low uptake of LIFT. Given the remote recruitment methods (compared to school-based recruitment), this lower uptake is in line with the mixed findings from digital intervention studies with children and adolescents (range of 50%-97% of participants accessed a digital program at least once; D’Adamo et al., 2023; Ruggiero et al., 2015; Kassam-Adams et al., 2016; Calear et al., 2016), and is lower than the uptake rate from a digital trauma intervention study with a college sample (84% accessed once; Littleton et al., 2017).

Results indicated that about half of participants who began LIFT were able to complete the program as intended. Specifically, participants who began LIFT displayed moderate levels of overall adherence to the program (i.e., finished all chapters with minimum dosage; 55% adherence). The digital intervention literature has not reported true “adherence” using the current study’s operationalization; other digital mental health studies have instead reported how many participants completed each module or logged on at least once. Fewer studies specifically examined adherence among only those participants who actually started a program, as the current study calculated. Ruggiero and colleagues (2016) found that 53% of those who accessed their program completed all modules, and out of those who completed one module, 26% completed the entire program. Littleton and colleagues (2017) found that out of those college students who initiated once, 16% completed all modules of a trauma intervention. This displays the importance of reporting this metric, since true adherence rates, considered alongside uptake rates, may help contextualize the low/mixed retention rates found in the literature. Adherence has also not been measured consistently across studies, and most studies have not operationalized a “minimum” or “therapeutic” dosage of digital programs as was done in the current study. In the pilot study’s higher implementation support environment (e.g., 1 chapter per week completed in an after-school setting), participants who started LIFT completed an average of 6.37/7 chapters, and those who started a video watched 63-89% of video content per chapter (Jaycox et al., 2019).
Participants watched notably less video content in the current study (i.e., 3 participants watched < 50% of video content within and across chapters; see Table 9) than was reported in the pilot study, and likely spent less time on the chapters (i.e., 1 participant spent less than 10 minutes on at least half of the chapters) than the pilot study participants. The range of median durations across chapters in the current study was 10.73 (Ch. 6) to 32.55 (Ch. 2) minutes. Chapter durations were not reported in the pilot study, nor was the amount of time participants were provided in the after-school setting to complete the modules. Participants in the current study watched the most video content in Chapter 1 and the least in Chapter 6, which differs slightly from the pilot study (most: Ch. 1; least: Ch. 3; Jaycox et al., 2019). On the other hand, participants in the current study also self-reported completing high rates of self-care plan elements (all elements completed at least once by 100% of participants who completed LIFT), and some participants replayed key relaxation activities. The pilot study did not report these data. Self-care plan completion and activity replays within LIFT are considered sub-components of engagement, in line with “Participant Actions” and “Behavioral target actions” cited within the literature as components of digital intervention adherence (McVay et al., 2019). One other digital intervention study has reported duration (i.e., young people used a digital trauma tool for an hour total over one month; Kassam-Adams et al., 2016), and few other studies reported duration or specific percentages of content/videos accessed within modules (e.g., an average of 50% of activities completed; Hetrick et al., 2017) beyond reporting the number of modules completed.

There is limited to no research defining what “minimum” dosage of a digital intervention needs to be received to conclude with certainty that an individual received the core evidence-based ingredients of a program. It is critical for future research to explore this issue in greater depth, as many digital intervention studies conducted remotely do not delineate a minimum or “therapeutic” dosage, which may be over-inflating, or undervaluing, intervention effects. Non-digital CBT-based intervention adherence research suggests that an optimal dose of CBT for anxiety/depression/PTSS for adolescents and adults is 5-8 sessions, that many young people respond well to lower doses of trauma treatment (e.g., 4 sessions of TF-CBT; Wamser-Nanney et al., 2014; Levy et al., 2020; Klein et al., 2024), and that not all young people with PTSS (e.g., those with more mild/moderate symptoms) may need “in depth trauma processing” to benefit from a CBT-based trauma therapy. This all suggests that the current study’s set minimum dosage for a digital intervention is appropriate for this mildly/moderately symptomatic sub-group of young people. The current study’s minimum dosage is also overall a more liberal cut-off than in the traditional intervention literature; recent articles suggest that the traditional dose-response literature, which focuses mostly on the number of sessions attended, cannot be directly generalized to the digital intervention space due to its self-paced elements (McVay et al., 2019).
This is an especially salient issue for PTSS research (Yeager & Benight, 2018), since repeated exposures are often considered the cornerstone of trauma interventions, and adherence to those exposure elements is unclear for digital trauma interventions. Future research should more intentionally measure the frequency and duration of exposures within digital interventions such as LIFT, so that exposure can be included within minimum adherence criteria. As seen in the varied types and metrics of engagement and adherence data reported in Tables 2 and 3, standardization of engagement/adherence expectations and of a minimum therapeutic dose is needed for self-guided trauma interventions.

Both interviewees cited their reliance on the external reminders provided within the study. This study did not measure the impact of the reminders on adherence, but given the remote nature of the study and the self-paced nature of the program, it can be assumed that adherence would have been lower without the external reminders provided by the study coordinator. Given that the effectiveness of digital interventions relies so heavily on engagement from the user due to the self-administered nature of these programs, it is essential to promote consistent engagement for users. Engagement strategies supported by the research to improve adherence include “human-support-mimicking features” (i.e., e-coaching, personalized feedback, in-app reminders; Saleem et al., 2021; Hollis et al., 2017), where the user still has check-ins with a provider. Some literature has also suggested that personalized and standardized email reminders may not improve adherence (Lillevoll et al., 2014). Future research should explore the degree of external support that may be necessary to promote more consistent engagement with digital interventions. The literature also supports that, when examining digital intervention engagement, it is essential to continue to examine “Intervention Actions” (duration and % of videos watched), “Participant Actions” (replaying activities), and “Behavioral Target Actions” (self-care plans; McVay et al., 2019) to understand the “enacted dose” in comparison to the intended or minimum dose. Overall, the current study findings support low uptake of LIFT, low retention, moderate adherence for those who accessed the program at least once, varied engagement in terms of time spent on chapters and videos watched, and high engagement with self-care plan tasks in between chapters.

Following this exploration of feasibility and adherence, effectiveness is an important next construct to measure for the LIFT program. Feasibility, made up of the sub-components of adherence and engagement, directly links to effectiveness via participant exposure to an intervention’s “active ingredients” (Perski & Short, 2021). Given the self-administered nature of the LIFT program, varying levels of engagement were anticipated a priori, which led the researchers to select an intent-to-treat analytic framework to account for concerns related to non-adherence and non-response. The researchers also intentionally selected a waitlist randomized control trial design to account for any potential history effects, maturation effects, or selection biases that were not addressed in the pilot study (Jaycox et al., 2019).
This rigorous study design can facilitate a higher level of certainty that LIFT is more effective than a “treatment as usual” condition, and it can increase the ability to rule out potential confounds through the inclusion of a control condition, utilizing non-concurrent randomization to conditions, establishing baseline equivalence of key variables across groups, and collecting pre- and post-data at similar intervals (e.g., 7 weeks). Reducing concerns for these confounds increases the ability of the researcher within the current study to establish that changes in mechanisms of action or targeted outcomes were likely related to the LIFT intervention. This is a more rigorous research design that extends the LIFT pilot study findings to ascertain whether observed changes from the pilot study can be replicated with more potential confounds taken into consideration.

Mechanisms of action are critical to examine within digital intervention effectiveness studies due to the importance of understanding the “why” (Sheridan et al., 2014) and what mechanisms facilitate symptom improvement (Perski & Short, 2021). Cognitive-behavioral interventions, including similar trauma-focused CBT-based digital interventions and the LIFT pilot study, have measured changes in coping skills and cognitions as the primary mechanisms. No significant improvement in either coping skills or negative cognitions across any subscale following LIFT was found in the larger treatment group (n = 24) when compared to the larger control group (n = 23). Post-hoc analyses for those who met the minimum dosage criteria (n = 6) showed similarly nonsignificant findings. This is discrepant with the LIFT pilot study (Jaycox et al., 2019), where LIFT significantly improved negative cognitions (incompetence and danger) and approach coping skills (cognitive, problem-solving, and optimism). This previous trial had high implementation support and used a different measure for negative cognitions than the current study (CPTAS; Jaycox et al., 2019), which could be reasons for this discrepancy. Participants in the treatment group displayed similar problem solving (M = 6.25), cognitive (M = 6.00), optimistic (M = 6.38), and avoidant (M = 7.46) coping at baseline compared to the pilot study’s problem-solving (M = 6.55), cognitive (M = 6.59), optimistic (M = 6.37), and avoidant (M = 7.35) coping pre-scores using the same tool (CCSC; Ayers et al., 1996; Jaycox et al., 2019). Participants also displayed high overall negative cognitions at baseline (M = 21.88 total score across PDC and FPSW subscales; above the clinical cutoff of 16-18; CPTCI-S; Meiser-Stedman et al., 2009), similar to the high baseline measurements in the pilot study (Incompetence and Danger subscales within the CPTAS; Jaycox et al., 2019). At post-test, participants across both groups were reporting moderate negative cognitions. These results from the current study extend the literature on mechanisms leading to symptom improvement within digital CBT interventions, because few studies have explored mechanisms in college-aged populations of 18–19-year-olds or REMs (see Tables 1-2).
The research supports that cognitive and behaviorally-oriented mechanisms (i.e., improving approach, reducing avoidance) are a pathway to reducing depression, anxiety, and PTSS (Berzengi et al., 2017); however, a trauma-focused digital intervention study measuring coping skills and post-trauma cognitive appraisals found no changes in coping (avoidance, cognitive, support-seeking) from pre-to-post (Kassam-Adams et al., 2016) using the long version of the current study’s measure, the CPTCI (Meiser-Stedman et al., 2009). That study did not include 18–19-year-olds or REMs. Another, non-trauma digital CBT intervention study measured similar mechanisms and found no significant effects for negative problem ideation or CBT skill acquisition (behavioral activation and cognitive restructuring) against a control group (Hetrick et al., 2017), but did find significant improvements in problem solving coping from pre-to-post against a control. No available digital mental health intervention studies have used the CCSC to measure coping skills, and no digital mental health intervention studies focusing on college freshmen have explored coping skills or negative cognitions as mechanisms. Overall, this study extended previous literature in that most digital intervention studies cited in Tables 1 and 2 did not examine variables related to mechanisms of action.

Future research is needed to explore which mechanisms might be leading to targeted outcome improvement, as other mechanisms or types of coping may be more relevant for this population. Additionally, coping may be more difficult to improve using a digital intervention given the inherent aversiveness of increasing approach and reducing avoidance, as suggested by the exposure-based self-care plan items being the most frequently skipped elements of self-care plans. Additionally, the literature conflicts as to whether the same mechanisms facilitate symptom improvement (Berzengi et al., 2017) or whether different mechanism pathways may exist outside of traditional CBT conceptualizations for REM young people, especially in the context of racial or cultural stressors; for instance, higher approach coping (i.e., direct problem solving, cognitive coping, optimistic coping; Mekawi et al., 2022) and higher avoidant coping (Gaylord-Harden et al., 2008) may both contribute to managing stress around discrimination and racism. This opposes traditional CBT pathways, which hold that reductions in avoidance are essential to improve PTSS and anxiety, as cited in the purpose of LIFT (Jaycox et al., 2019). Future research should explore different mechanisms using measures that have been consistently validated with REM samples. Secondary effectiveness data also support that participants engaged in self-care plan tasks targeting coping and cognitive distortions outside of chapters, despite these mechanisms not improving.

Qualitative data related to the Knowledge code provide supplementary information around skills or knowledge learned within LIFT that may shed light on these mechanisms, more so in the interviews than in the open response questions. The two interviewees reported learning knowledge and skills within LIFT and having content from previous therapy reinforced (i.e., breaking down stressful situations into small steps, and learning to address cognitive distortions), so it is possible the quantitative mechanism measures did not accurately capture the specific types of coping and cognitive skills that led to the symptom improvement observed.
These interviewee results should not be viewed in isolation, as they may present a biased perspective given the small sample and high engagement/motivation to participate in interviews.

Despite nonsignificant findings for mechanisms of action within the current study, changes in targeted outcomes also require attention, as symptom improvement is the overall goal of LIFT. No significant improvements were observed for PTSS following LIFT across either intent-to-treat or post-hoc analyses. This contrasts with the pilot study’s finding that PTSS improved following LIFT (Jaycox et al., 2019). One noted difference was that the pilot study appeared to involve a more distressed sample of participants (M = 24.48; SD = 11.0), indicative of a more moderate level of stress symptoms compared to the current study’s mild-to-moderate baseline PTSS (M = 18.88; SD = 5.59). This lower baseline PTSS score in the current study may be due to the many potential participants (n = 37) who were screened out for severe PTSS (> 31 on the CPSS), despite efforts to recruit a more distressed subgroup with PTSS inclusion criteria not used within the pilot study (> 10 on the CPSS).

Due to persistent challenges in digital intervention research with adherence and retention, many digital intervention studies have either relied on datasets with estimated/replaced values (Littleton et al., 2017; Short et al., 2020) or only examined those cases with high adherence, when sample sizes were large enough to accommodate this and when data were MCAR. Both of these approaches have distinct limitations, so the current study attempted to bridge this gap by conducting the primary intent-to-treat analyses with all randomly assigned participants (with complete case analyses using “original data” included), as well as post-hoc analyses involving only those who received enough of the LIFT intervention to attribute any participant changes to the content and skill training provided. Slight improvements in PTSS within the treatment group were observed from pre-to-post using original post-scores (a decrease of 4 points), compared to the imputed/pooled post-scores (an increase of 1 point), suggesting that those most compliant with the LIFT study may have experienced slightly greater, albeit still statistically nonsignificant as shown through post-hoc analyses, reductions in PTSS. These decreases in PTSS are smaller than those of participants in the pilot study (a decrease of 7 points, still in the “mild-to-moderate” range; Jaycox et al., 2019). The current study’s treatment group post-scores suggest that participants were still experiencing mild-to-moderate PTSS following LIFT (see Table 13; Foa et al., 2001). The current study’s imputed dataset post-score (M = 19.38) was slightly higher than the post-score from the pilot study (M = 17.91), but both fell into the mild-to-moderate range on the CPSS, suggesting that the current and pilot studies reduced PTSS to similar post-intervention levels.

The current study’s nonsignificant findings for PTSS align with the mixed literature on whether digital interventions can improve PTSS. The only other trauma digital intervention study in the literature that used the CPSS found a significant 5-point within-group reduction in PTSS following the intervention, but found no significant effect against a waitlist condition; that study’s average PTSS pre-score was similar to that of the current study (M = 18.40; Kassam-Adams et al., 2016).
Other digital trauma intervention studies have found improvements in PTSS from pre-to-post against control groups using different PTSS measures in college populations (Littleton et al., 2017; Hirai et al., 2012; Short et al., 2020) and adolescents (Ruggiero et al., 2015), as well as significant findings not tested against control groups within adults (Mouthaan et al., 2013; Klein et al., 2010), with medium to large effect sizes. Fewer studies have found effects similar to the current study, with no significant improvements in PTSS compared to a control condition (Cox et al., 2010). The current study findings suggest LIFT may not lead to improved PTSS within this unique sub-population or via this fully self-administered modality.

Additionally, study results indicated that young people did not consistently improve in their depression and anxiety symptoms following LIFT compared to young people in the waitlist group. Positively trending, but statistically nonsignificant, results were found for both depression and anxiety, with large effect sizes in the individual imputed datasets where significant results were obtained. This was consistent across the full imputed dataset (n = 24 in the treatment group) and the original dataset complete case (n = 9) analyses. Post-hoc analyses supported improvements in anxiety in the subgroup of 6 participants who met the minimum dosage criteria, but did not support improvements in depression within this adherent subgroup. However, complete case post-hoc analyses should not be interpreted in isolation given the high risk for bias. Triangulated findings across these analyses suggest that LIFT is more likely to be effective for anxiety symptoms than depression symptoms within 18–19-year-old REM youth. The pilot study found less promising, but overall similar, results, such that no significant effects were uncovered for depression or anxiety from pre-to-post. Treatment group participants in the current study also displayed notably higher depression (M = 13.49 imputed data; M = 12.64 original data) and anxiety (M = 10.15 imputed data; M = 9.64 original data) at baseline compared to the pilot study’s depression (M = 10.10; SD = 6.90) and anxiety (M = 6.66; SD = 4.30) pre-scores as measured with the same tool (RCADS; Chorpita et al., 2000). It is essential when examining intervention efficacy to ensure enough symptomology at baseline to detect change over time. Score reductions from pre-to-post for the treatment group were slightly larger for both depression and anxiety (see Table 13) compared to the pilot study (depression reduced 1.16 points; anxiety reduced .58 points; Jaycox et al., 2019). The anxiety and depression treatment group post-scores using original data only (n = 9) were each a few points lower than their respective imputed post-scores, displaying that those who completed all study requirements may have experienced slightly greater reductions in these symptoms compared to the larger treatment group in the current study. Despite a lack of consistent statistical significance across imputations, the findings appeared to have potential clinical significance; scores for the current study’s treatment group at post-test for depression and anxiety were at or below the clinical thresholds on the RCADS (Chorpita et al., 2005).
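For context, pooled estimates and p values of the kind reported across these analyses are conventionally combined across the 40 imputed datasets via Rubin's rules. In standard notation (not drawn from the study's own materials), with m imputations yielding estimates \hat{Q}_i and within-imputation variances \hat{U}_i:

\bar{Q} = \frac{1}{m}\sum_{i=1}^{m}\hat{Q}_i, \qquad
\bar{U} = \frac{1}{m}\sum_{i=1}^{m}\hat{U}_i, \qquad
B = \frac{1}{m-1}\sum_{i=1}^{m}\left(\hat{Q}_i - \bar{Q}\right)^2, \qquad
T = \bar{U} + \left(1 + \frac{1}{m}\right)B

Inference is then based on \bar{Q}/\sqrt{T} against a t reference distribution. Because the total variance T adds between-imputation variance B to the average within-imputation variance, a pooled result can remain nonsignificant even when some individual imputed datasets reach significance, which is consistent with the pattern described above.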
The current study's findings also indicated that pre-test anxiety scores significantly predicted anxiety post-test scores, and pre-test depression scores significantly predicted depression post-test scores, underscoring the importance of controlling for baseline scores in future statistical analyses. Controlling for pre-test scores and including a control group extended the rigor of the pilot study's analytic methods (Jaycox et al., 2019), and the intent-to-treat approach better accounted for missing data. The pilot study did not report the amount of missing data or the missingness mechanism before conducting a complete case analysis (best practice holds that data must be MCAR before conducting a complete case analysis; Jakobsen et al., 2017), which may have introduced bias (Salim et al., 2008).

The results of the current study fit into the mixed literature on digital mental health intervention trials, which includes trials that found improvements in depression and anxiety against a control condition in general child (e.g., Ruggiero et al., 2015; Stasiak et al., 2014) and college (e.g., Melnyk et al., 2015; Short et al., 2020) samples, and trials reporting insignificant changes in these target areas over time against control conditions (e.g., Lillevoll et al., 2014; Hetrick et al., 2017; Calear et al., 2016; Mouthaan et al., 2013). Some trials examining a trauma-focused digital intervention found improvements in anxiety (Cox et al., 2010) and depression (Ruggiero et al., 2015) at post-test, but they did not include the current study's target demographic (REM 18–19-year-olds). The current study suggests that this unique population may benefit from LIFT in relation to anxiety if entering the program with a moderate level of symptoms at baseline.

Qualitative interview findings under the code Perceived Effectiveness can also be considered secondary data for understanding participant perceptions of effectiveness. As described in Appendix I, an interviewee specified that they thought LIFT was more directly helpful for managing anxiety than for depression, which aligns with overall study findings, and that it was also helpful for improving grief symptoms. Interviewees also cited generalization of newly learned skills to day-to-day functioning over time, a benefit not covered by study measures. Interviewee results should not be viewed in isolation, as they may present a biased perspective given the small sample and the high engagement/motivation required to participate in interviews.

It was also essential to measure the acceptability of LIFT, given that perceptions of acceptability can influence engagement with and adherence to self-administered digital interventions, which directly influences what components of the intervention are received, and thereby how effective the intervention can be (Perski & Short, 2021). General acceptability was measured with a post-survey tool containing quantitative and qualitative items, administered to the sample of participants who completed LIFT, and cultural acceptability was measured within two qualitative interviews.
Quantitatively, the current study found that participants who completed all study requirements (n = 9) viewed LIFT as a moderately-to-highly acceptable program overall, especially related to Affective Attitude (global acceptability), via the high post-survey overall average rating (M = 2.00; SD = .54), the highly rated global acceptability item (M = 2.33; SD = .71), and the highly rated global social validity item (M = 2.22; SD = .83; average item score < 1.0 = "low" acceptability; 1.0–1.9 = "moderate" acceptability; ≥ 2.0 = "high" acceptability), as well as the fact that all participants who completed the open-ended part of the tool wrote in at least one thing they liked about LIFT. However, many participants also expressed overall neutral or mixed feelings about the LIFT program within the open response questions, as evidenced by 7 out of 8 respondents providing responses to all 3 optional open response questions (likes, dislikes, and things to change). The overall mean across items in the current study indicates slightly higher perceptions of acceptability compared to the pilot study (M = 1.85), which used the same acceptability tool, but both studies appear to suggest at least moderate acceptability of LIFT. No differences in acceptability scores were observed between the larger sample of LIFT completers and those who met the minimum dosage, suggesting that acceptability does not appear to be influenced by time spent on chapters or the amount of video content watched. This suggests that LIFT was at least moderately acceptable overall to the current study's population of 18–19-year-old REM recent high school graduates and college freshmen who completed LIFT in a fully self-administered fashion, though many areas for improvement were also highlighted.

This finding aligns with previous literature on the acceptability of digital health tools (see Tables 1 and 2), but this study is the first to examine the acceptability of a trauma-focused digital tool in this unique population with fully self-guided and remote delivery. When acceptability was included within digital intervention trials, it was often reported as a one-question measure of global acceptability or a short quantitative measure. The current study's acceptability ratings align with literature displaying moderate-to-high global acceptability for digital interventions (e.g., 56% "liked" the program and 33% thought it was "ok"; Stasiak et al., 2014). Acceptability results related to global social validity have been mixed within the research, with some studies finding low social validity (e.g., 22% would recommend the program to peers once improved; Stasiak et al., 2014) and others finding moderate/high social validity in college freshman samples following digital programs (62% endorsed that all first-year students should receive the program; Melnyk et al., 2015). Another study measured perceptions of the "credibility" of a one-session digital intervention focused on improving insomnia alongside mental health outcomes with college students, as a measure of pre-intervention acceptability, and found mostly positive expectations that the intervention would be successful (Short et al., 2020). However, none of the above acceptability findings were sought out in previous trials of digital trauma interventions, a major gap in the literature that prompted the current study to measure acceptability more comprehensively. The pilot study also did not report any qualitative data related to acceptability. The current study further extends the pilot study by providing acceptability data outside of a school setting, with less implementation support, for emerging adults entering college. The current study's qualitative acceptability items on the post-survey indicated an overall mixed view of LIFT.
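Before turning to the individual acceptability domains, the interpretive bands applied to the quantitative ratings above (< 1.0 low; 1.0–1.9 moderate; ≥ 2.0 high) can be illustrated with a short sketch. The function and its inputs are hypothetical and are not the study's scoring procedure.

```python
from statistics import mean

def interpret_acceptability(item_ratings: list[float]) -> tuple[float, str]:
    """Average the post-survey item ratings and map the mean onto the
    study's interpretive bands."""
    avg = mean(item_ratings)
    if avg < 1.0:
        band = "low"
    elif avg < 2.0:
        band = "moderate"
    else:
        band = "high"
    return avg, band

# For example, an overall mean of 2.00 falls in the "high" band,
# while the pilot study's overall mean of 1.85 falls in "moderate".
```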
One domain, Perceived Effectiveness, was reflected only in quantitative items and indicated moderately positive ratings of the helpfulness of LIFT. Across qualitative and quantitative items, participants who completed LIFT had a generally positive view of the Opportunity Cost of LIFT (i.e., they did not have to give up routines or values to engage), while most other elements of LIFT were viewed neutrally or with mixed responses (Knowledge, Usability, Relevance/Relatability/Realistic), and some responses were more negative and tied to suggestions for improvement (Burden). Only one published digital intervention study, on college students in India (Kanuri et al., 2020), found high Usability and Acceptability using a quantitative measure; supplementary qualitative data indicated high perceived feasibility and low burden, given how conveniently participants could access the program on their phones and fit it into their schedules, but some also reported that activities were inefficient or not user-friendly. This maps onto the current study's findings of mixed perceptions of digital tools related to Burden and Usability. The current study's findings related to general acceptability also align with previous research on other non-trauma digital interventions with adults that used a similar qualitative framework for acceptability (Sekhon et al., 2017, the base framework for Perski & Short, 2021). These prior studies found more positive perceptions of global Affective Attitude towards using technology to manage mental health, positive views of Opportunity Cost, and mixed/neutral perceptions of the Burden and "Intervention Coherence" (understanding the intervention and how it works, a combination of Usability and Knowledge; Sekhon et al., 2017) of digital programs (Sharrad et al., 2023; Keyworth et al., 2022; Keyworth et al., 2021), but did not explore the perceived relevance of programs.

It is essential to distinctly assess cultural acceptability when evaluating how helpful an intervention is for REM young people managing stressors, to ensure the content and delivery of the intervention align with cultural values and are relatable for young people from a variety of cultural backgrounds. If an intervention lacks cultural validity, motivation to use and engagement with the program may decrease, which directly links to effectiveness via exposure to the intervention's active ingredients (Sekhon et al., 2017; Perski & Short, 2021). When two participants were asked about the cultural acceptability of LIFT within qualitative interviews, varied experiences were shared and inconsistent findings were obtained regarding whether LIFT is a culturally appropriate program for REM young people. Due to the small sample of interviews, the current study was not able to conclude with certainty how culturally acceptable LIFT is in its current state. Interviewees had differing opinions for questions relating to Coping with Racism and Cultural Stress and the Relevance, Relatability, and Realistic characteristics of LIFT, in that both strengths and areas for improvement were highlighted, with one participant displaying a neutral-to-positive tone regarding these cultural acceptability facets and the other displaying a more mixed-to-negative tone.
All other codebook areas related directly (Ethicality, Perceived Norms) or indirectly (Perceived Effectiveness, Burden, Knowledge, Opportunity Cost, Usability) to cultural acceptability were mostly positive or neutral, or did not have enough quotes to draw any conclusions (Want/Need). Global acceptability of LIFT within the interviews, also shown through Affective Attitude (see Appendix I), appeared to be positively perceived by interviewees overall, but there are not enough data to support conclusions regarding Affective Attitude towards cultural acceptability specifically.

CBT may not inherently teach the skills needed to manage race-related stressors, or may teach skills that are not adaptive in certain cultural contexts (Hays, 2009); some also suggest CBT may be invalidating for individuals experiencing discrimination or microaggressions whose presentation of PTSS does not align with the traditional "stages" of trauma as conceptualized within CBT (Ennis et al., 2019). LIFT was developed in collaboration with racially diverse youth (Jaycox et al., 2019), and an interview participant cited appreciation of visuals/characters that represented people of color, but true cultural acceptability of an intervention clearly goes beyond these surface components (e.g., characters with varying skin tones). An interview participant suggested LIFT needed to more explicitly discuss how to manage mental health stigma within one's culture and family, and to include more explicit examples of race-related stressors (e.g., microaggressions). Some participants provided similar, but more general, feedback in the open response questions about increasing the variety of trauma/stress scenarios and examples within the program.

Thus, the current study extends the pilot study (Jaycox et al., 2019) and previous literature on digital interventions (see Table 1) and digital trauma interventions (see Table 2) by intentionally evaluating the cultural acceptability of the LIFT program within a REM sample in which 72% of participants endorsed a previous PTE at baseline related to their race or ethnicity, and all participants who completed LIFT endorsed a race-related PTE that occurred during LIFT. This is the first digital intervention study to include racial traumas as PTEs and to intentionally ask participants about culture-specific experiences related to a digital intervention for trauma. Kanuri and colleagues (2020) reported high cultural acceptability of their intervention with a sample of college students in India, but did not clearly operationalize this construct or finding. No studies in the literature have used the Perski and Short (2021) or Sekhon et al. (2017) framework to examine cultural acceptability specifically, beyond general acceptability, within a digital intervention evaluation study. There are also limited digital intervention studies across adult and youth populations that specifically sampled REM individuals and also explored acceptability (i.e., only 2 studies within Table 2, including the LIFT pilot), and neither included qualitative methods. A previous study examining the acceptability of a non-trauma digital program using the Sekhon et al. (2017) base framework in adults provided brief information about negative views of Ethicality, but the quotes were from the health professionals sampled (not young people/users) and discussed concerns with privacy or accessibility, not specific cultural factors (Sharrad et al., 2023).
Mixed data on Ethicality were also found in a sample of adults participating in another non-trauma digital intervention, where participants suggested that some assumptions made, and the lack of flexibility for entering different responses to activities, did not align with their values, but no specific cultural factors were mentioned and the sample contained only 3% REM individuals (Keyworth et al., 2021). Previous literature has not explored the use of digital interventions for Coping with Race or Cultural Stressors, or perceptions of the Relevance/Relatability/Realistic qualities of a digital intervention for REM young people. This is a significant limitation of the literature, and in combination with the results from the current study, it is evident that cultural acceptability needs to be explored more concretely within future research using qualitative and quantitative measures, in addition to recruiting more REM samples.

Across the two acceptability tools in this study, some commonalities emerged (Appendix I). Participants shared similar perceptions across tools related to Knowledge (neutral/positive) and Opportunity Cost (neutral/positive), and divergent or mixed perceptions across tools related to Burden (open responses more negative, interviews more positive), Relevance, Relatability, and Realistic (mixed perceptions across both tools), and Usability (most participants mentioned positive and negative examples). Future research should prioritize triangulation of acceptability data across tools for digital interventions in REM samples, given the complex perceptions of acceptability found within the current study.

Limitations

Many limitations are important to acknowledge within the current study, including (a) concerns related to recruitment and the study's small sample size; (b) varied engagement, high dropout, and high non-adherence; (c) concerns related to potential biases, the influence of systemic racism, and inequities; and (d) reliance on self-report for key variables.

First, due to recruitment challenges, the small sample size led to an underpowered study that may underestimate true effects for key variables of interest. The researcher also completed only two valid qualitative interviews, fewer than intended for exploring cultural acceptability, which limited conclusions. Recruitment challenges also prolonged recruitment and shifted the sample from the original population of interest (high schoolers) to college freshmen, even though LIFT was initially developed for use through age 18 in school settings. Students who have just graduated high school and are entering college may exhibit different engagement and acceptability characteristics. Also, baseline equivalence analyses found that there were more females (based on sex assigned at birth, not gender identity) in the treatment group, and more individuals in the treatment group who had previously (but not currently) received pharmacological treatment for mental health, compared to the waitlist group. These factors were acknowledged to contextualize group findings but may have clinically influenced study results (e.g., sex assigned at birth could have confounded ANCOVA findings).

Second, many participants in the current study did not complete study requirements or completed LIFT with concerningly lower levels of engagement than expected.
Due to limitations of the study design and the limited access the researcher had to LIFT platform user metrics (e.g., chapter completion) during the study, some critical data issues had to be addressed at the analysis stage (i.e., removal of three invalid post-tests). Having access to in-vivo engagement and adherence data within LIFT user metrics would have improved validity. The intent-to-treat analytic model and subsequent multiple imputation procedures to address missing data concerns were able to account for these considerations within statistical findings, increasing the accuracy of results and reducing bias; however, despite being the least biased approach, intent-to-treat procedures may also increase the likelihood of underestimating a true effect when there is significant non-adherence and a subsequent reliance on estimated values (McCoy, 2017). Future research should attempt to recruit and retain larger samples to be able to conduct complete case analyses and measure the impact of adherence on outcomes. It is important to contextualize the depression and anxiety findings with the caveat that those analyses relied more heavily on imputed data than did analyses for other variables; they should be interpreted with this consideration in mind and in relation to the post-hoc analyses using original data. The researchers conducted a post-hoc analysis intended to balance these concerns by analyzing changes in outcome variables for those who had received a minimum dosage of LIFT, but this analysis should also be viewed with caution given the high risk of bias and of overestimating real-world effects by including only the most compliant participants. Future research should recruit larger samples and promote more consistent engagement with LIFT to enable more robust statistical tests of effectiveness.

Third, various concerns related to potential biases and inequities present in the study methods are critical to acknowledge. The post-positivist lens of the current study has limitations because the researcher's biases may have affected decisions at each step of the research process (Sabnis & Wolgemuth, 2023). The purpose of the current study, as set out a priori, was to learn more about the acceptability of the LIFT program in the context of specific cultural codes, to begin the conversation in a sparse area of the digital mental health literature. However, this approach is subject to confirmation bias and to the limitation of assuming there is "one objective truth" answering the question of how acceptable LIFT is, so future research should consider other epistemological approaches that foster a deeper interpretation of participants' culturally tied experiences with digital programs from the "bottom-up" rather than the "top-down" (e.g., constructionism; Sabnis & Newman, 2023). Despite the promise of digital interventions as a modality to bridge the digital divide for digitally marginalized young people, this study's social media recruitment tactics do not accomplish this purpose and do not represent the true characteristics of engagement, effectiveness, and acceptability among those who may lack access to technology. These findings should be understood in the context of the Instagram recruitment procedures conducted through college social media accounts. Further, the current study discusses "cultural acceptability" broadly, but findings should not be generalized to any specific cultural group given the small sample size and the high racial and ethnic heterogeneity of the sample.
Not all study measures had been validated within this age group and REM sample, as the researcher was attempting to balance replication of the pilot study measures against the scarcity of tools validated in these populations. Next, the researcher and secondary coder identified as highly educated cisgender White women and thus may not have fully represented the experiences of the REM sample in the interpreted findings. The researcher attempted to proactively address concerns related to systemic racism by openly acknowledging their race during the interviews. There is inherent bias in being interviewed by the study coordinator, which may have prevented participants from being honest about negative experiences with LIFT. Further, the researcher proactively attempted to reduce bias by including a second coder, incorporating regular discussions about biases and "blind spots" during the coding process, and utilizing anonymous member checks. The researcher tried to balance asking open-ended interview questions first with following up with specific probes regarding cultural factors to address the research questions and codebook elements; these probes may have primed participants for certain responses (Hsieh & Shannon, 2005). Using directed content analytic approaches in qualitative research can lead to biased findings when an a priori codebook is applied rather than letting the data guide development of the codebook. Future research should balance these factors and continue to contextualize findings given the lack of research in this area and the pervasive effects of systemic racism in intervention efficacy research with minority populations.

Finally, this study was conducted remotely and required self-report for all elements of data collection, including demographic factors, inclusion criteria, measures for key variables of interest, and reporting to the study coordinator throughout the study regarding LIFT progress, as discussed above. It is possible that participants were untruthful about their characteristics in relation to the study criteria, based on the inclusion criteria listed in the study flyer. The researcher removed 13 participants at the screening stage after Qualtrics identified them as a "bot," or after it was evident they had completed the screener multiple times from the same IP address and only met study eligibility on a second survey attempt. Key variables of interest should also be measured using tools other than self-report rating scales (e.g., interview measures) in the future.

Future Directions and Conclusions

Overall, more research is needed at the intersection of culturally informed evidence-based practices for trauma and digital modalities for interventions, to increase young people's access to evidence-based supports. This pre-post waitlist RCT design was a logical next step in evaluating the feasibility and adherence of LIFT, following the existing one-group pilot study and the lack of RCTs examining trauma-focused digital programs for REM young people. It is important to determine the feasibility/appropriateness of more novel interventions, such as LIFT, within certain populations and contexts, using this design as a rationale for future large-scale RCTs (Eldridge et al., 2016; Bowen et al., 2009). Sheridan (2014) suggests that feasibility trials focusing on adherence, effectiveness, and acceptability are essential before initiating large-scale RCTs, to establish the overall appropriateness and mechanisms of change (the "why") associated with the intervention.
Given the promising and positively trending findings related to anxiety within the current study compared to a control condition, future evaluations of LIFT in adolescent and emerging adult populations are necessary, with larger samples, less missing data due to non-adherence and non-response, and samples representative of more REM groups. It is also essential to continue collecting mixed methods data on engagement, to be able to detect more concrete effects, mechanisms of action (the "why"), and acceptability, and to develop a more comprehensive understanding of the interplay between these factors for digital tools. Culturally responsive, evidence-based practice in psychology (EBPP) should always include the voices of those intended to be served by digital interventions in the development and refinement of these interventions (Oti & Pitt, 2021). Participant suggestions from the study's qualitative data should be incorporated into future digital tool development.

Future research should first focus on developing an understanding of how to improve uptake, retention, and engagement in digital interventions, given the potentially large reach of digital tools (D'Adamo et al., 2023), while also considering the significant impact non-adherence can have on effectiveness. Further, standardized minimum dosages for digital tools should be established and evaluated statistically, in an attempt to quantify the amount of engagement with self-paced intervention components that leads to positive outcomes in line with the cognitive behavioral theory of change (see the illustrative sketch below). Future research should also attempt to collect more exit survey and acceptability information from non-adherent users. Additionally, it is critical to examine the degree of external support that leads to stronger treatment outcomes (e.g., weekly reminders, daily reminders). Further, digital interventions have the potential to break down barriers to care for emerging adults, especially cultural barriers (e.g., stigma); there is a need for future research to use "evidence-based digital inclusion strategies" to avoid perpetuating the digital divide, and to continue to co-design interventions while elevating the voices of young people to avoid "digital exclusion by default" (Piers et al., 2023). Once engagement and adherence are prioritized in self-guided digital intervention studies, the validity of future research findings will increase, allowing researchers to better pinpoint how effective these interventions are at improving outcomes for emerging adults and what mechanisms may facilitate those outcomes.

Implications for practice include opportunities for mental health providers across contexts to implement digital tools in their practice. On college campuses, research has found low reach when advertising the availability of digital interventions in college counseling centers (D'Adamo et al., 2023). These publicly available tools may be a method to reduce long waitlists and connect incoming college freshmen to digital care during the transition to college, before traditional care can be established, by breaking down barriers to access that are common for REM young people (e.g., stigma, cost). It is essential to be a critical consumer of the mixed evidence base for digital CBT and trauma interventions in relation to the clinical population served.
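Returning to the minimum-dosage recommendation above, the sketch below illustrates how such a rule might be operationalized from platform user metrics. The metrics and thresholds (chapters completed, minutes of video watched) are hypothetical assumptions for illustration only and are not LIFT's actual adherence criteria.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    chapters_completed: int  # chapters the platform marks as finished
    video_minutes: float     # total minutes of video content watched

def meets_minimum_dosage(usage: UsageRecord,
                         min_chapters: int = 5,
                         min_video_minutes: float = 30.0) -> bool:
    """Classify a participant as adherent only if every engagement metric
    reaches its (hypothetical) minimum-dosage threshold."""
    return (usage.chapters_completed >= min_chapters
            and usage.video_minutes >= min_video_minutes)

# e.g., meets_minimum_dosage(UsageRecord(chapters_completed=6,
#                                        video_minutes=45.0))  # -> True
```

Standardizing a rule of this form across trials would allow dose-response relationships to be tested statistically rather than defined post hoc for each study.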
Within the current literature, it is recommended to use digital tools alongside some form of check-in, or in a "blended" approach (i.e., completing a digital intervention alongside psychotherapy, which can reinforce concepts between sessions, inform homework, or collect in-vivo data used in later sessions; Hollis et al., 2017), rather than utilizing entirely self-administered delivery. Once implemented, frequent progress monitoring is necessary to ensure digital tools are the most appropriate modality of care for an individual. The benefit of these tools is that they can be used and accessed across settings and do not necessarily require a provider to facilitate them.

In conclusion, the current study was a novel exploration of the feasibility/adherence, effectiveness, and general/cultural acceptability of the LIFT program provided during a vulnerable transition period within a unique REM sample of 18–19-year-old recent high school graduates entering college. The results suggest users of LIFT engage with the program in a variety of ways when it is delivered remotely with limited implementation support (i.e., weekly email/text reminders). Low retention and varied adherence rates may have impacted the ability to establish accurate estimates of LIFT's effects for participants, per the Perski and Short (2021) model, which outlines how engagement directly influences intervention effectiveness for digital interventions via the degree of exposure to the intervention's evidence-based components. Despite the moderate observed adherence for those who started LIFT, there are still lingering questions regarding the feasibility of LIFT delivered in this context. Larger-scale feasibility trials are necessary to further untangle why this digital intervention appears to create challenges for completion. Effectiveness findings supported positively trending, yet insignificant, improvements in anxiety, with more robust improvements in the adherent subgroup, and limited evidence supporting changes in PTSS, depression, coping skills, and negative cognitions. Further exploration is needed of the mechanisms of action leading to symptom improvement for those who complete LIFT in the intended manner, with clear exposure to active intervention ingredients (Perski & Short, 2021). General acceptability findings reflected moderate-to-high perceptions of the overall acceptability of LIFT, but cultural acceptability findings were inconclusive. Overall, these findings add to the literature on culturally informed evidence-based practices that can meet the mental health and traumatic stress needs of young people who experience barriers to care, via innovative digital modalities. Yet questions remain pertaining to for whom, and under what conditions, LIFT might be most effective within the framework that has been established for evidence-based digital mental health interventions (Perski & Short, 2021). It is essential for future researchers to explore and further disentangle how cultural factors may influence the perceived acceptability of, engagement with, and therefore effectiveness of digital mental health programs.

REFERENCES

Alegria, M., Carson, N. J., Goncalves, M., & Keefe, K. (2011). Disparities in treatment for substance use disorders and co-occurring disorders for ethnic/racial minority youth. Journal of the American Academy of Child & Adolescent Psychiatry, 50(1), 22–31. https://doi.org/10.1016/j.jaac.2010.10.005
Alisic, E., Zalta, A. K., Van Wesel, F., Larsen, S. E., Hafstad, G. S., Hassanpour, K., & Smid, G. E. (2014). Rates of post-traumatic stress disorder in trauma-exposed children and adolescents: Meta-analysis. The British Journal of Psychiatry, 204(5), 335–340.

American Psychological Association. (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57, 1052–1059.

American Psychological Association. (2008). Children and trauma: Update for mental health professionals. http://www.apa.org/pi/families/resources/children-trauma-update.aspx

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). https://doi.org/10.1176/appi.books.9780890425596

American Psychological Association. (2017). Clinical practice guideline for the treatment of PTSD. Washington, DC: Author.

American Psychological Association Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285.

Andersson, G., & Titov, N. (2014). Advantages and limitations of internet‐based interventions for common mental disorders. World Psychiatry, 13(1), 4–11.

Armijo-Olivo, S., Warren, S., & Magee, D. (2009). Intention to treat analysis, compliance, drop-outs and how to deal with missing data in clinical research: A review. Physical Therapy Reviews, 14(1), 36–49.

Ayers, T. S., Sandler, I. N., West, S. G., & Roosa, M. W. (1996). A dispositional and situational assessment of children's coping: Testing alternative models of coping. Journal of Personality, 64, 923–958.

Barak, A., & Grohol, J. M. (2011). Current and future trends in internet-supported mental health interventions. Journal of Technology in Human Services, 29(3), 155–196.

Barker, C. H., Cook, K. L., & Borrego Jr., J. (2010). Addressing cultural variables in parent training programs with Latino families. Cognitive and Behavioral Practice, 17(2), 157–166.

Barnett, P., Arundell, L. L., Saunders, R., Matthews, H., & Pilling, S. (2021). The efficacy of psychological interventions for the prevention and treatment of mental health disorders in university students: A systematic review and meta-analysis. Journal of Affective Disorders, 280, 381–406.

Beames, J. R., Johnston, L., O'Dea, B., Torok, M., Christensen, H., Boydell, K. M., & Werner-Seidler, A. (2021). Factors that help and hinder the implementation of digital depression prevention programs: School-based cross-sectional study. Journal of Medical Internet Research, 23(8), e26223.

Bernal, G., & Scharrón-del-Río, M. R. (2001). Are empirically supported treatments valid for ethnic minorities? Toward an alternative approach for treatment research. Cultural Diversity and Ethnic Minority Psychology, 7(4), 328.

Bernard, D. L., Calhoun, C. D., Banks, D. E., Halliday, C. A., Hughes-Halbert, C., & Danielson, C. K. (2021). Making the "C-ACE" for a culturally-informed adverse childhood experiences framework to understand the pervasive mental health impact of racism on Black youth. Journal of Child & Adolescent Trauma, 14(2), 233–247.

Bertha, E. A., & Balázs, J. (2013). Subthreshold depression in adolescence: A systematic review. European Child & Adolescent Psychiatry, 22(10), 589–603.

Berzengi, A., Berzenji, L., Kadim, A., Mustafa, F., & Jobson, L. (2017). Role of Islamic appraisals, trauma-related appraisals, and religious coping in the posttraumatic adjustment of Muslim trauma survivors. Psychological Trauma, 9(2), 189–197.
Bowen, D. J., Kreuter, M., Spring, B., Cofta-Woerpel, L., Linnan, L., Weiner, D., ... & Fernandez, M. (2009). How we design feasibility studies. American Journal of Preventive Medicine, 36(5), 452–457.

Boyraz, G., Horne, S. G., Owens, A. C., & Armstrong, A. P. (2013). Academic achievement and college persistence of African American students with trauma exposure. Journal of Counseling Psychology, 60(4), 582–592.

Brown, L. A., Belli, G. M., Asnaani, A., & Foa, E. B. (2019). A review of the role of negative cognitions about oneself, others, and the world in the treatment of PTSD. Cognitive Therapy and Research, 43(1), 143–173.

Bruffaerts, R., Mortier, P., Auerbach, R. P., Alonso, J., Hermosillo De la Torre, A. E., Cuijpers, P., ... & WHO WMH‐ICS Collaborators. (2019). Lifetime and 12‐month treatment for mental disorders and suicidal thoughts and behaviors among first year college students. International Journal of Methods in Psychiatric Research, 28(2), e1764.

Bryant-Davis, T., & Ocampo, C. (2005). The trauma of racism: Implications for counseling, research, and education. The Counseling Psychologist, 33(4), 574–578.

Bucci, S., Schwannauer, M., & Berry, N. (2019). The digital revolution and its impact on mental health care. Psychology and Psychotherapy: Theory, Research and Practice, 92(2), 277–297.

Butcher, N. J., Monsour, A., Mew, E. J., Chan, A. W., Moher, D., Mayo-Wilson, E., ... & Offringa, M. (2022). Guidelines for reporting outcomes in trial reports: The CONSORT-Outcomes 2022 extension. JAMA, 328(22), 2252–2264.

Calear, A. L., & Christensen, H. (2010). Systematic review of school-based prevention and early intervention programs for depression. Journal of Adolescence, 33(3), 429–438.

Calear, A. L., Christensen, H., Brewer, J., Mackinnon, A., & Griffiths, K. M. (2016). A pilot randomized controlled trial of the E-Couch anxiety and worry program in schools. Internet Interventions, 6, 1–5.

Cappella, E., Frazier, S. L., Atkins, M. S., Schoenwald, S. K., & Glisson, C. (2008). Enhancing schools' capacity to support children in poverty: An ecological model of school-based mental health services. Administration and Policy in Mental Health and Mental Health Services Research, 35(5), 395–409.

Chafouleas, S. M., Johnson, A. H., Overstreet, S., & Santos, N. M. (2016). Toward a blueprint for trauma-informed service delivery in schools. School Mental Health, 8(1), 144–162.

Chambless, D. L., Sanderson, W. C., Shoham, V., Johnson, S. B., Pope, K. S., Crits-Christoph, P., ... & McCurry, S. (1996). An update on empirically validated therapies. The Clinical Psychologist, 49(2), 5–18.

Chorpita, B. F., Moffitt, C. E., & Gray, J. (2005). Psychometric properties of the Revised Child Anxiety and Depression Scale in a clinical sample. Behaviour Research and Therapy, 43(3), 309–322.

Chorpita, B. F., Yim, L., Moffitt, C., Umemoto, L. A., & Francis, S. E. (2000). Assessment of symptoms of DSM–IV anxiety and depression in children: A revised child anxiety and depression scale. Behaviour Research and Therapy, 38, 835–855.

Christ, C., Schouten, M. J., Blankers, M., van Schaik, D. J., Beekman, A. T., Wisman, M. A., ... & Dekker, J. J. (2020). Internet and computer-based cognitive behavioral therapy for anxiety and depression in adolescents and young adults: Systematic review and meta-analysis. Journal of Medical Internet Research, 22(9), e17831.

Clarke, A. T. (2006). Coping with interpersonal stress and psychosocial health among children and adolescents: A meta-analysis. Journal of Youth and Adolescence, 35(1), 10–23.
Clarke, A. M., Kuosmanen, T., & Barry, M. M. (2015). A systematic review of online youth mental health promotion and prevention interventions. Journal of Youth and Adolescence, 44(1), 90–113.

Cleary, M., Walter, G., & Jackson, D. (2011). "Not always smooth sailing": Mental health issues associated with the transition from high school to college. Issues in Mental Health Nursing, 32(4), 250–254.

Cogburn, C. D., Chavous, T. M., & Griffin, T. M. (2011). School-based racial and gender discrimination among African American adolescents: Exploring gender variation in frequency and implications for adjustment. Race and Social Problems, 3(1), 25–37.

Cohen, J. A., Mannarino, A. P., & Deblinger, E. (2017). Treating trauma and traumatic grief in children and adolescents (2nd ed.). Guilford Publications.

Compas, B. E., Connor-Smith, J. K., Saltzman, H., Thomsen, A. H., & Wadsworth, M. E. (2001). Coping with stress during childhood and adolescence: Problems, progress, and potential in theory and research. Psychological Bulletin, 127(1), 87.

Cook, L. J. (2007). Striving to help college students with mental health issues. Journal of Psychosocial Nursing and Mental Health Services, 45(4), 40–44.

Comas-Díaz, L., Hall, G. N., & Neville, H. A. (2019). Racial trauma: Theory, research, and healing: Introduction to the special issue. American Psychologist, 74(1), 1.

Copeland, W. E., Keeler, G., Angold, A., & Costello, E. J. (2007). Traumatic events and posttraumatic stress in childhood. Archives of General Psychiatry, 64(5), 577–584.

Cox, C. M., Kenardy, J. A., & Hendrikz, J. K. (2009). A randomized controlled trial of a web-based early intervention for children and their parents following unintentional injury. Journal of Pediatric Psychology, 35(6), 581–592.

Creswell, J. W., & Clark, V. L. P. (2017). Designing and conducting mixed methods research. Sage Publications.

Crone, E. A., & Dahl, R. E. (2012). Understanding adolescence as a period of social–affective engagement and goal flexibility. Nature Reviews Neuroscience, 13(9), 350–636.

Crouch, J. L., Hanson, R. F., Saunders, B. E., Kilpatrick, D. G., & Resnick, H. S. (2000). Income, race/ethnicity, and exposure to violence in youth: Results from the National Survey of Adolescents. Journal of Community Psychology, 28, 625–641.

Curran, G. M., Bauer, M., Mittman, B., Pyne, J. M., & Stetler, C. (2012). Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50(3), 217.

Cusack, S. E., Hicks, T. A., Bourdon, J., Sheerin, C. M., Overstreet, C. M., Kendler, K. S., ... & Amstadter, A. B. (2019). Prevalence and predictors of PTSD among a college sample. Journal of American College Health, 67(2), 123–131.

D'Adamo, L., Paraboschi, L., Grammer, A. C., Fennig, M., Graham, A. K., Yaeger, L. H., ... & Fitzsimmons-Craft, E. E. (2023). Reach and uptake of digital mental health interventions based on cognitive-behavioral therapy for college students: A systematic review. Journal of Behavioral and Cognitive Therapy.

Dimitrov, D. M., & Rumrill Jr., P. D. (2003). Pretest-posttest designs and measurement of change. Work, 20(2), 159–165.

Dorsey, S., Briggs, E. C., & Woods, B. A. (2011). Cognitive-behavioral treatment for posttraumatic stress disorder in children and adolescents. Child and Adolescent Psychiatric Clinics of North America, 20, 255–269.
Dorsey, S., McLaughlin, K. A., Kerns, S. E., Harrison, J. P., Lambert, H. K., Briggs, E. C., ... & Amaya-Jackson, L. (2017). Evidence base update for psychosocial treatments for children and adolescents exposed to traumatic events. Journal of Clinical Child & Adolescent Psychology, 46(3), 303–330.

Elgar, F. J., & McGrath, P. J. (2003). Self‐administered psychosocial treatments for children and families. Journal of Clinical Psychology, 59(3), 321–339.

Ellis, D. M., Draheim, A. A., & Anderson, P. L. (2022). Culturally adapted digital mental health interventions for ethnic/racial minorities: A systematic review and meta-analysis. Journal of Consulting and Clinical Psychology, 90(10), 717.

Elzy, M., Clark, C., Dollard, N., & Hummer, V. (2013). Adolescent girls' use of avoidant and approach coping as moderators between trauma exposure and trauma symptoms. Journal of Family Violence, 28(8), 763–770.

Enders, C. K. (2022). Applied missing data analysis. Guilford Publications.

Enders, C. K. (2017). Multiple imputation as a flexible tool for missing data handling in clinical research. Behaviour Research and Therapy, 98, 4–18.

Ermis-Demirtas, H., Luo, Y., & Huang, Y. J. (2022). The trauma of COVID-19–fueled discrimination: Posttraumatic stress in Asian American adolescents. Professional School Counseling, 26(1b), 2156759X221106814.

Famularo, R., Fenton, T., Kinscherff, R., & Augustyn, M. (1996). Psychiatric comorbidity in childhood posttraumatic stress disorder. Child Abuse & Neglect, 20(10), 953–961.

Filipkowski, K. B., Heron, K. E., & Smyth, J. M. (2016). Early adverse experiences and health: The transition to college. American Journal of Health Behavior, 40(6), 717–728.

Finkelhor, D., Ormrod, R. K., & Turner, H. A. (2009). The developmental epidemiology of childhood victimization. Journal of Interpersonal Violence, 24(5), 711–731.

Finkelhor, D., Turner, H. A., Shattuck, A., & Hamby, S. L. (2015). Prevalence of childhood exposure to violence, crime, and abuse: Results from the National Survey of Children's Exposure to Violence. JAMA Pediatrics, 169(8), 746–754.

Flores, E., Tschann, J. M., Dimas, J. M., Pasch, L. A., & de Groat, C. L. (2010). Perceived racial/ethnic discrimination, posttraumatic stress symptoms, and health risk behaviors among Mexican American adolescents. Journal of Counseling Psychology, 57(3), 264.

Foa, E. B., Johnson, K. M., Feeny, N. C., & Treadwell, K. R. (2001). The Child PTSD Symptom Scale: A preliminary examination of its psychometric properties. Journal of Clinical Child Psychology, 30(3), 376–384.

Foa, E. B., & Meadows, E. A. (1997). Psychosocial treatments for posttraumatic stress disorder: A critical review. Annual Review of Psychology, 48(1), 449–480.

Forman, J., & Damschroder, L. (2007). Qualitative content analysis. In Empirical methods for bioethics: A primer (pp. 39–62). Emerald Group Publishing Limited.

Galaif, E. R., Sussman, S., Chou, C. P., & Wills, T. A. (2003). Longitudinal relations among depression, stress, and coping in high risk youth. Journal of Youth and Adolescence, 32(4), 243–258.

Galatzer-Levy, I. R., Burton, C. L., & Bonanno, G. A. (2012). Coping flexibility, potentially traumatic life events, and resilience: A prospective study of college student adjustment. Journal of Social and Clinical Psychology, 31(6), 542–567.

Gamble, B., Depa, K., Holmes, E. A., & Kanstrup, M. (2021). Digitalizing a brief intervention to reduce intrusive memories of psychological trauma: Qualitative interview study. JMIR Mental Health, 8(2), e23712.
Garcia-Coll, C., Crnic, K., Lamberty, G., Wasik, B. H., Jenkins, R., Garcia, H. V., & McAdoo, H. P. (1996). An integrative model for the study of developmental competencies in minority children. Child Development, 67(5), 1891–1914.

Garett, R., Liu, S., & Young, S. D. (2017). A longitudinal analysis of stress among incoming college freshmen. Journal of American College Health, 65(5), 331–338.

Garland, A. F., Lau, A. S., Yeh, M., McCabe, K. M., Hough, R. L., & Landsverk, J. A. (2005). Racial and ethnic differences in utilization of mental health services among high-risk youths. American Journal of Psychiatry, 162(7), 1336–1343.

Garrido, S., Millington, C., Cheers, D., Boydell, K., Schubert, E., Meade, T., & Nguyen, Q. V. (2019). What works and what doesn't work? A systematic review of digital mental health interventions for depression and anxiety in young people. Frontiers in Psychiatry, 10, 759.

Gaylord-Harden, N. K., Gipson, P., Mance, G., & Grant, K. E. (2008). Coping patterns of African American adolescents: A confirmatory factor analysis and cluster analysis of the Children's Coping Strategies Checklist. Psychological Assessment, 20(1), 10.

Gliner, J. A., Morgan, G. A., & Harmon, R. J. (2003). Pretest-posttest comparison group designs: Analysis and interpretation. Journal of the American Academy of Child & Adolescent Psychiatry, 42, 500–503. https://doi.org/10.1097/01.CHI.0000046809.95464.BE

Gonzalez, L. M., Stein, G. L., Kiang, L., & Cupito, A. M. (2014). The impact of discrimination and support on developmental competencies in Latino adolescents. Journal of Latina/o Psychology, 2(2), 79.

Greenberg, L. S., & Newman, F. L. (1996). An approach to psychotherapy change process research: Introduction to the special section. Journal of Consulting and Clinical Psychology, 64(3), 435.

Guessoum, S. B., Lachal, J., Radjack, R., Carretier, E., Minassian, S., Benoit, L., & Moro, M. R. (2020). Adolescent psychiatric disorders during the COVID-19 pandemic and lockdown. Psychiatry Research, 291, 113264.

Hankin, B. L., Abramson, L. Y., Moffitt, T. E., Silva, P. A., McGee, R., & Angell, K. E. (1998). Development of depression from preadolescence to young adulthood: Emerging gender differences in a 10-year longitudinal study. Journal of Abnormal Psychology, 107(1), 128.

Hasson, R. G., Easton, S. D., Iriarte, A. D. V., O'Dwyer, L. M., Underwood, D., & Crea, T. M. (2021). Examining the psychometric properties of the Child PTSD Symptom Scale within a sample of unaccompanied immigrant children in the United States. Journal of Loss and Trauma, 26(4), 323–335.

Hays, P. A. (2009). Integrating evidence-based practice, cognitive–behavior therapy, and multicultural therapy: Ten steps for culturally competent practice. Professional Psychology: Research and Practice, 40(4), 354.

Hetrick, S. E., Yuen, H. P., Bailey, E., Cox, G. R., Templer, K., Rice, S. M., ... & Robinson, J. (2017). Internet-based cognitive behavioural therapy for young people with suicide-related behaviour (Reframe-IT): A randomised controlled trial. Evidence-Based Mental Health, 20(3), 76–82.

Hirai, M., Skidmore, S. T., Clum, G. A., & Dolma, S. (2012). An investigation of the efficacy of online expressive writing for trauma-related psychological distress in Hispanic individuals. Behavior Therapy, 43(4), 812–824.

Hollis, C., Falconer, C. J., Martin, J. L., Whittington, C., Stockton, S., Glazebrook, C., & Davies, E. B. (2017). Annual research review: Digital health interventions for children and young people with mental health problems–a systematic and meta‐review. Journal of Child Psychology and Psychiatry, 58(4), 474–503.
Hope, E. C., Skoog, A. B., & Jagers, R. J. (2015). "It'll never be the white kids, it'll always be us": Black high school students' evolving critical analysis of racial discrimination and inequity in schools. Journal of Adolescent Research, 30(1), 83–112.

Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288.

Huang, C. Y., & Zane, N. (2016). Cultural influences in mental health treatment. Current Opinion in Psychology, 8, 131–136.

Huang, J., Nigatu, Y. T., Smail-Crevier, R., Zhang, X., & Wang, J. (2018). Interventions for common mental health problems among university and college students: A systematic review and meta-analysis of randomized controlled trials. Journal of Psychiatric Research, 107, 1–10.

Huey Jr., S. J., & Polo, A. J. (2008). Evidence-based psychosocial treatments for ethnic minority youth. Journal of Clinical Child & Adolescent Psychology, 37(1), 262–301.

Ingram, L., & Wallace, B. (2019). "It creates fear and divides us": Minority college students' experiences of stress from racism, coping responses, and recommendations for colleges. Journal of Health Disparities Research & Practice, 12(1).

Jakobsen, J. C., Gluud, C., Wetterslev, J., & Winkel, P. (2017). When and how should multiple imputation be used for handling missing data in randomised clinical trials–a practical guide with flowcharts. BMC Medical Research Methodology, 17, 1–10.

Jaycox, L. H., Ayer, L., Vona, P., Hehman, C., Stein, B. D., Mahmud, A., Woodley, M., Meza, E., Thornton, E., & Venkatesh, B. (2019). Development and preliminary evaluation of a self-guided, internet-based tool for coping with stress and trauma: Life Improvement for Teens (LIFT). Psychological Services, 16(1), 85.

Jaycox, L. H., Kataoka, S. H., Stein, B. D., Langley, A. K., & Wong, M. (2012). Cognitive behavioral intervention for trauma in schools. Journal of Applied School Psychology, 28(3), 239–255.

Jaycox, L. H., Langley, A. K., Stein, B. D., Wong, M., Sharma, P., Scott, M., & Schonlau, M. (2009). Support for students exposed to trauma: A pilot study. School Mental Health, 1, 49–60.

Jensen, T. K., Holt, T., Mørup Ormhaug, S., Fjermestad, K. W., & Wentzel-Larsen, T. (2018). Change in post-traumatic cognitions mediates treatment effects for traumatized youth—A randomized controlled trial. Journal of Counseling Psychology, 65(2), 166.

Jernigan, M. M., & Daniel, J. H. (2011). Racial trauma in the lives of Black children and adolescents: Challenges and clinical implications. Journal of Child & Adolescent Trauma, 4(2), 123–141.

Johnson, K. M., Foa, E. B., Jaycox, L. H., & Rescorla, L. (1996, November). Post-trauma attitudes in traumatized children. Paper presented at the 12th Annual Meeting of the International Society for Traumatic Stress Studies, San Francisco, CA.

Jonassaint, C. R., Belnap, B. H., Huang, Y., Karp, J. F., Abebe, K. Z., & Rollman, B. L. (2020). Racial differences in the effectiveness of internet-delivered mental health care. Journal of General Internal Medicine, 35(2), 490–497.

Jones, S. C. T., Anderson, R. E., Gaskin-Wasson, A. L., Sawyer, B. A., Applewhite, K., & Metzger, I. W. (2020). From "crib to coffin": Navigating coping from racism-related stress throughout the lifespan of Black Americans. American Journal of Orthopsychiatry, 90(2), 267–282. https://doi.org/10.1037/ort0000430
Kanuri, N., Arora, P., Talluru, S., Colaco, B., Dutta, R., Rawat, A., ... & Newman, M. G. (2020). Examining the initial usability, acceptability and feasibility of a digital mental health intervention for college students in India. International Journal of Psychology, 55(4), 657–673.

Kassam-Adams, N., Marsac, M. L., Kohser, K. L., Kenardy, J., March, S., & Winston, F. K. (2016). Pilot randomized controlled trial of a novel web-based intervention to prevent posttraumatic stress in children following medical events. Journal of Pediatric Psychology, 41(1), 138–148.

Kavanagh, J., Oliver, S., Lorenc, T., Caird, J., Tucker, H., Harden, A., ... & Oakley, A. (2009). School-based cognitive-behavioural interventions: A systematic review of effects and inequalities. Health Sociology Review, 18(1), 61–78.

Kayrouz, R., Dear, B. F., Karin, E., & Titov, N. (2016). Facebook as an effective recruitment strategy for mental health research of hard to reach populations. Internet Interventions, 4, 1–10.

Kazdin, A. E., & Blase, S. L. (2011). Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspectives on Psychological Science, 6(1), 21–37.

Kenardy, J. A., Cox, C. M., & Brown, F. L. (2015). A web‐based early intervention can prevent long‐term PTS reactions in children with high initial distress following accidental injury. Journal of Traumatic Stress, 28(4), 366–369.

Keyworth, C., O'Connor, R., Quinlivan, L., & Armitage, C. J. (2021). Acceptability of a brief web-based theory-based intervention to prevent and reduce self-harm: Mixed methods evaluation. Journal of Medical Internet Research, 23(9), e28349.

Keyworth, C., Quinlivan, L., Leather, J. Z., & Armitage, C. J. (2022). Exploring the acceptability of a brief online theory-based intervention to prevent and reduce self-harm: A theoretically framed qualitative study. BJPsych Open, 8(6), e184.

Kira, I. A. (2010). Etiology and treatment of post-cumulative traumatic stress disorders in different cultures. Traumatology, 16(4), 128.

Klein, B., Mitchell, J., Abbott, J., Shandley, K., Austin, D., Gilson, K., ... & Redman, T. (2010). A therapist-assisted cognitive behavior therapy internet intervention for posttraumatic stress disorder: Pre-, post- and 3-month follow-up results from an open trial. Journal of Anxiety Disorders, 24(6), 635–644.

Klein, T., Breilmann, J., Schneider, C., Girlanda, F., Fiedler, I., Dawson, S., ... & Kösters, M. (2024). Dose–response relationship in cognitive behavioral therapy for depression: A nonlinear metaregression analysis. Journal of Consulting and Clinical Psychology, 92(5), 296.

Kontopantelis, E., White, I. R., Sperrin, M., & Buchan, I. (2017). Outcome-sensitive multiple imputation: A simulation study. BMC Medical Research Methodology, 17, 1–13.

Kook, Y., Le, T. P., Robey, N., & Raposa, E. B. (2023). Mental health and resource utilization among underrepresented students transitioning to college in the United States. Journal of Diversity in Higher Education. Advance online publication. https://doi.org/10.1037/dhe0000496

Kosyluk, K. A., Conner, K. O., Al-Khouja, M., Bink, A., Buchholz, B., Ellefson, S., ... & Corrigan, P. W. (2021). Factors predicting help seeking for mental illness among college students. Journal of Mental Health, 30(3), 300–307.

Kuester, A., Niemeyer, H., & Knaevelsrud, C. (2016). Internet-based interventions for posttraumatic stress: A meta-analysis of randomized controlled trials. Clinical Psychology Review, 43, 1–16.
Kuhn, E., Kanuri, N., Hoffman, J. E., Garvert, D. W., Ruzek, J. I., & Taylor, C. B. (2017). A randomized controlled trial of a smartphone app for posttraumatic stress disorder symptoms. Journal of Consulting and Clinical Psychology, 85(3), 267.

Kuhn, E., & Owen, J. E. (2020). Advances in PTSD treatment delivery: The role of digital technology in PTSD treatment. Current Treatment Options in Psychiatry, 7(2), 88–102.

Kutok, E. R., Doria, N., Dunsiger, S., Patena, J. V., Nugent, N. R., Riese, A., ... & Ranney, M. L. (2021). Feasibility and cost of using Instagram to recruit adolescents to a remote intervention. Journal of Adolescent Health, 69(5), 838–846.

Langley, A. K., Nadeem, E., Kataoka, S. H., Stein, B. D., & Jaycox, L. H. (2010). Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health, 2(3), 105–113.

Langley, A., Santiago, C. D., Rodríguez, A., & Zelaya, J. (2013). Improving implementation of mental health services for trauma in multicultural elementary schools: Stakeholder perspectives on parent and educator engagement. The Journal of Behavioral Health Services & Research, 40(3), 247–262.

Lattie, E. G., Adkins, E. C., Winquist, N., Stiles-Shields, C., Wafford, Q. E., & Graham, A. K. (2019). Digital mental health interventions for depression, anxiety, and enhancement of psychological well-being among college students: Systematic review. Journal of Medical Internet Research, 21(7), e12869.

Lee, Y. Y., Le, L. K. D., Lal, A., Engel, L., & Mihalopoulos, C. (2021). The cost-effectiveness of delivering an e-health intervention, MoodGYM, to prevent anxiety disorders among Australian adolescents: A model-based economic evaluation. Mental Health & Prevention, 24, 200210.

Lehtimaki, S., Martic, J., Wahl, B., Foster, K. T., & Schwalbe, N. (2021). Evidence on digital mental health interventions for adolescents and young people: Systematic overview. JMIR Mental Health, 8(4), e25847.

Lewis, C., Roberts, N. P., Simon, N., Bethell, A., & Bisson, J. I. (2019). Internet‐delivered cognitive behavioural therapy for post‐traumatic stress disorder: Systematic review and meta‐analysis. Acta Psychiatrica Scandinavica, 140(6), 508–521.

Levin, M. E., Pistorello, J., Seeley, J. R., & Hayes, S. C. (2014). Feasibility of a prototype web-based acceptance and commitment therapy prevention program for college students. Journal of American College Health, 62(1), 20–30.

Levy, H. C., Worden, B. L., Davies, C. D., Stevens, K., Katz, B. W., Mammo, L., ... & Tolin, D. F. (2020). The dose-response curve in cognitive-behavioral therapy for anxiety disorders. Cognitive Behaviour Therapy, 49(6), 439–454.

Li, J., Theng, Y. L., & Foo, S. (2014). Game-based digital interventions for depression therapy: A systematic review and meta-analysis. Cyberpsychology, Behavior, and Social Networking, 17(8), 519–527.

Liang, K. Y., & Zeger, S. L. (2000). Longitudinal data analysis of continuous and discrete responses for pre-post designs. Sankhyā: The Indian Journal of Statistics, Series B, 134–148.

Lillevoll, K. R., Vangberg, H. C. B., Griffiths, K. M., & Eisemann, M. R. (2014). Uptake and adherence of a self-directed internet-based mental health intervention with tailored e-mail reminders in senior high schools in Norway. BMC Psychiatry, 14(1), 1–11.

Littleton, H., Grills, A. E., Kline, K. D., Schoemann, A. M., & Dodd, J. C. (2016). The From Survivor to Thriver program: RCT of an online therapist-facilitated program for rape-related PTSD. Journal of Anxiety Disorders, 43, 41–51.
M., Čuš, A., Carletto, S., Hancheva, C., ... & Edbrooke-Childs, J. (2020). Engaging children and young people in digital mental health interventions: Systematic review of modes of delivery, facilitators, and barriers. Journal of Medical Internet Research, 22(6), e16317.
Loxton, D., Powers, J., Anderson, A. E., Townsend, N., Harris, M. L., Tuckerman, R., ... & Byles, J. (2015). Online and offline recruitment of young women for a longitudinal health survey: Findings from the Australian Longitudinal Study on Women's Health 1989-95 cohort. Journal of Medical Internet Research, 17(5), e4261.
Lu, W. (2020). Treatment for adolescent depression: National patterns, temporal trends, and factors related to service use across settings. Journal of Adolescent Health, 67(3), 401-408.
Madley-Dowd, P., Hughes, R., Tilling, K., & Heron, J. (2019). The proportion of missing data should not be used to guide decisions on multiple imputation. Journal of Clinical Epidemiology, 110, 63-73.
Marcotte, D., Diallo, T. M., & Paré, M. L. (2018). Adjustment to college and prediction of depression during post-secondary transition. European Journal of Psychology of Education, 33, 727-748.
Marsac, M. L., Winston, F. K., Hildenbrand, A. K., Kohser, K. L., March, S., Kenardy, J., & Kassam-Adams, N. (2015). Systematic, theoretically grounded development and feasibility testing of an innovative, preventive web-based game for children exposed to acute trauma. Clinical Practice in Pediatric Psychology, 3(1), 12.
Mathyssek, C. M., Olino, T. M., Hartman, C. A., Ormel, J., Verhulst, F. C., & Van Oort, F. V. (2013). Does the Revised Child Anxiety and Depression Scale (RCADS) measure anxiety symptoms consistently across adolescence? The TRAILS study. International Journal of Methods in Psychiatric Research, 22(1), 27-35.
Mavranezouli, I., Megnin-Viggars, O., Daly, C., Dias, S., Stockton, S., Meiser-Stedman, R., ... & Pilling, S. (2020). Research Review: Psychological and psychosocial treatments for children and young people with post-traumatic stress disorder: A network meta-analysis. Journal of Child Psychology and Psychiatry, 61(1), 18-29.
McKim, C. (2023). Meaningful member-checking: A structured approach to member-checking. American Journal of Qualitative Research, 7(2), 41-52.
McKinnon, A., Smith, P., Bryant, R., Salmon, K., Yule, W., Dalgleish, T., ... & Meiser-Stedman, R. (2016). An update on the clinical utility of the children's post-traumatic cognitions inventory. Journal of Traumatic Stress, 29(3), 253-258.
McLean, C. P., Yeh, R., Rosenfield, D., & Foa, E. B. (2015). Changes in negative cognitions mediate PTSD symptom reductions during client-centered therapy and prolonged exposure for adolescents. Behaviour Research and Therapy, 68, 64-69.
McVay, M. A., Bennett, G. G., Steinberg, D., & Voils, C. I. (2019). Dose–response research in digital health interventions: Concepts, considerations, and challenges. Health Psychology, 38(12), 1168.
Meiser-Stedman, R., Smith, P., Bryant, R., Salmon, K., Yule, W., Dalgleish, T., & Nixon, R. D. (2009). Development and validation of the Child Post-Traumatic Cognitions Inventory (CPTCI). Journal of Child Psychology and Psychiatry, 50(4), 432-440.
Melnyk, B. M., Amaya, M., Szalacha, L. A., Hoying, J., Taylor, T., & Bowersox, K. (2015). Feasibility, acceptability, and preliminary effects of the COPE online cognitive-behavioral skill-building program on mental health outcomes and academic performance in freshmen college students: A randomized controlled pilot study.
Journal of Child and Adolescent Psychiatric Nursing, 28(3), 147-154.
Merikangas, K. R., He, J. P., Burstein, M., Swendsen, J., Avenevoli, S., Case, B., Georgiades, K., Heaton, L., Swanson, S., & Olfson, M. (2011). Service utilization for lifetime mental disorders in US adolescents: Results of the National Comorbidity Survey–Adolescent Supplement (NCS-A). Journal of the American Academy of Child & Adolescent Psychiatry, 50(1), 32-45.
Meyer, O. L., & Zane, N. (2013). The influence of race and ethnicity in clients' experiences of mental health treatment. Journal of Community Psychology, 41(7), 884-901.
Mouthaan, J., Sijbrandij, M., De Vries, G. J., Reitsma, J. B., Van de Schoot, R., Goslings, J. C., ... & Olff, M. (2013). Internet-based early intervention to prevent posttraumatic stress disorder in injury patients: Randomized controlled trial. Journal of Medical Internet Research, 15(8), e2460.
Nicolas, G., Wheatley, A., & Guillaume, C. (2015). Does one trauma fit all? Exploring the relevance of PTSD across cultures. International Journal of Culture and Mental Health, 8(1), 34-45.
Nixon, R. D., Meiser-Stedman, R., Dalgleish, T., Yule, W., Clark, D. M., Perrin, S., & Smith, P. (2013). The Child PTSD Symptom Scale: An update and replication of its psychometric properties. Psychological Assessment, 25(3), 1025.
Njardvik, U., & Kelley, M. L. (2008). Cultural effects on treatment acceptability. Nordic Psychology, 60(4), 283-294.
Nooner, K. B., Linares, L. O., Batinjane, J., Kramer, R. A., Silva, R., & Cloitre, M. (2012). Factors related to posttraumatic stress disorder in adolescence. Trauma, Violence, & Abuse, 13, 153-166. https://doi.org/10.1177/1524838012447698
O'Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19, 1609406919899220.
O'Dea, B., Calear, A. L., & Perry, Y. (2015). Is e-health the answer to gaps in adolescent mental health service provision? Current Opinion in Psychiatry, 28(4), 336-342.
Office of the Surgeon General (OSG). (2021). Protecting youth mental health: The U.S. Surgeon General's advisory. US Department of Health and Human Services.
Olthuis, J. V., Watt, M. C., Bailey, K., Hayden, J. A., & Stewart, S. H. (2016). Therapist-supported Internet cognitive behavioural therapy for anxiety disorders in adults. Cochrane Database of Systematic Reviews, (3).
Orengo-Aguayo, R., Stewart, R. W., Villalobos, B. T., Hernandez Rodriguez, J., Dueweke, A. R., de Arellano, M. A., & Young, J. (2020). Listen, don't tell: Partnership and adaptation to implement trauma-focused cognitive behavioral therapy in low-resourced settings. American Psychologist, 75(8), 1158.
Oti, O., & Pitt, I. (2021). Online mental health interventions designed for students in higher education: A user-centered perspective. Internet Interventions, 26, 100468.
Parra Cardona, J. R., Domenech-Rodriguez, M., Forgatch, M., Sullivan, C., Bybee, D., Holtrop, K., ... & Bernal, G. (2012). Culturally adapting an evidence-based parenting intervention for Latino immigrants: The need to integrate fidelity and cultural relevance. Family Process, 51(1), 56-72.
Perfect, M. M., Turley, M. R., Carlson, J. S., Yohannan, J., & Saint Gilles, M. P. (2016). School-related outcomes of traumatic event exposure and traumatic stress symptoms in students: A systematic review of research from 1990 to 2015. School Mental Health, 8(1), 7-43.
Perski, O., & Short, C. E. (2021).
Acceptability of digital health interventions: Embracing the complexity. Translational Behavioral Medicine, 11(7), 1473-1480.
Pew Research Center. (2024). Mobile fact sheet. Pew Research Center.
Piers, R., Williams, J. M., & Sharpe, H. (2023). Can digital mental health interventions bridge the 'digital divide' for socioeconomically and digitally marginalised youth? A systematic review. Child and Adolescent Mental Health, 28(1), 90-104.
Pina, A. A., Polo, A. J., & Huey, S. J. (2019). Evidence-based psychosocial interventions for ethnic minority youth: The 10-year update. Journal of Clinical Child & Adolescent Psychology, 48(2), 179-202.
Piqueras, J. A., Martín-Vivar, M., Sandin, B., San Luis, C., & Pineda, D. (2017). The Revised Child Anxiety and Depression Scale: A systematic review and reliability generalization meta-analysis. Journal of Affective Disorders, 218, 153-169.
Porche, M. V., Costello, D. M., & Rosen-Reynoso, M. (2016). Adverse family experiences, child mental health, and educational outcomes for a national sample of students. School Mental Health, 8(1), 44-60.
Program for Prevention Research. (1999). Manual for the Children's Coping Strategies Checklist and the How I Coped Under Pressure Scale.
Racine, N., McArthur, B. A., Cooke, J. E., Eirich, R., Zhu, J., & Madigan, S. (2021). Global prevalence of depressive and anxiety symptoms in children and adolescents during COVID-19: A meta-analysis. JAMA Pediatrics, 175(11), 1142-1150.
Reyes-Portillo, J. A., Mufson, L., Greenhill, L. L., Gould, M. S., Fisher, P. W., Tarlow, N., & Rynn, M. A. (2014). Web-based interventions for youth internalizing problems: A systematic review. Journal of the American Academy of Child & Adolescent Psychiatry, 53(12), 1254-1270.
Richardson, J. T. (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6(2), 135-147.
Roberts, A. L., Gilman, S. E., Breslau, J., Breslau, N., & Koenen, K. C. (2011). Race/ethnic differences in exposure to traumatic events, development of post-traumatic stress disorder, and treatment-seeking for post-traumatic stress disorder in the United States. Psychological Medicine, 41(1), 71-83.
Robinson, J., Hetrick, S., Cox, G., Bendall, S., Yuen, H. P., Yung, A., & Pirkis, J. (2016). Can an Internet-based intervention reduce suicidal ideation, depression and hopelessness among secondary school students: Results from a pilot study. Early Intervention in Psychiatry, 10(1), 28-35.
Rojas-Andrade, R., & Bahamondes, L. L. (2019). Is implementation fidelity important? A systematic review on school-based mental health programs. Contemporary School Psychology, 23(4), 339-350.
Rolfsnes, E. S., & Idsoe, T. (2011). School-based intervention programs for PTSD symptoms: A review and meta-analysis. Journal of Traumatic Stress, 24(2), 155-165.
Ruggiero, K. J., Davidson, T. M., McCauley, J., Gros, K. S., Welsh, K., Price, M., ... & Amstadter, A. B. (2015). Bounce Back Now! Protocol of a population-based randomized controlled trial to examine the efficacy of a Web-based intervention with disaster-affected families. Contemporary Clinical Trials, 40, 138-149.
Rytwinski, N. K., Scur, M. D., Feeny, N. C., & Youngstrom, E. A. (2013). The co-occurrence of major depressive disorder among individuals with posttraumatic stress disorder: A meta-analysis. Journal of Traumatic Stress, 26(3), 299-309.
Sabnis, S. V., & Newman, D. S. (2023). Epistemological diversity, constructionism, and social justice research in school psychology.
School Psychology Review, 52(5), 625-638.
Sabnis, S. V., & Wolgemuth, J. R. (2023). Reflexive disclosure in qualitative research publications in school psychology (2006–2021). Psychology in the Schools, 60(10), 3738-3753.
Saleem, F. T., Anderson, R. E., & Williams, M. (2020). Addressing the "myth" of racial trauma: Developmental and ecological considerations for youth of color. Clinical Child and Family Psychology Review, 23(1), 1-14.
Salim, A., Mackinnon, A., Christensen, H., & Griffiths, K. (2008). Comparison of data analysis strategies for intent-to-treat analysis in pre-test–post-test designs with substantial dropout rates. Psychiatry Research, 160(3), 335-345.
Saloner, B., Carson, N., & Cook, B. L. (2014). Episodes of mental health treatment among a nationally representative sample of children and adolescents. Medical Care Research and Review, 71(3), 261-279.
Sanchez, C., Grzenda, A., Varias, A., Widge, A. S., Carpenter, L. L., McDonald, W. M., ... & Rodriguez, C. I. (2020). Social media recruitment for mental health research: A systematic review. Comprehensive Psychiatry, 103, 152197.
Schnurr, M. P., & Lohman, B. J. (2013). Longitudinal impact of toddlers' exposure to domestic violence. Journal of Aggression, Maltreatment & Trauma, 22(9), 1015-1031.
Seaton, E. K., Caldwell, C. H., Sellers, R. M., & Jackson, J. S. (2008). The prevalence of perceived discrimination among African American and Caribbean Black youth. Developmental Psychology, 44(5), 1288.
Sharrad, K., Martini, C., Tai, A., Spurrier, N., Smith, R., Esterman, A., ... & Carson-Chahhoud, K. (2023). Mixed reality technology to deliver psychological interventions to adolescents with asthma: Qualitative study using the Theoretical Framework of Acceptability. JMIR Human Factors, 10, e34629.
Sheridan, S. M. (2014). Single-case designs and large-N studies: The best of both worlds. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 299-308). Washington, DC: American Psychological Association.
Short, N. A., & Schmidt, N. B. (2020). Developing and testing a novel, computerized insomnia and anxiety intervention to reduce safety aids among an at-risk student sample: A randomized controlled trial. Behavior Therapy, 51(1), 149-161.
Simblett, S., Birch, J., Matcham, F., Yaguez, L., & Morris, R. (2017). A systematic review and meta-analysis of e-mental health interventions to treat symptoms of posttraumatic stress. JMIR Mental Health, 4(2), e5558.
Southam-Gerow, M. A., & Prinstein, M. J. (2014). Evidence base updates: The evolution of the evaluation of psychological treatments for children and adolescents. Journal of Clinical Child & Adolescent Psychology, 43(1), 1-6.
Stallard, P., Richardson, T., & Velleman, S. (2010). Clinicians' attitudes towards the use of computerized cognitive behaviour therapy (cCBT) with children and adolescents. Behavioural and Cognitive Psychotherapy, 38(5), 545-560.
Stasiak, K., Hatcher, S., Frampton, C., & Merry, S. N. (2014). A pilot double blind randomized placebo controlled trial of a prototype computer-based cognitive behavioural therapy program for adolescents with symptoms of depression. Behavioural and Cognitive Psychotherapy, 42(4), 385-401.
Stefanopoulou, E., Lewis, D., Mughal, A., & Larkin, J. (2020). Digital interventions for PTSD symptoms in the general population: A review. Psychiatric Quarterly, 91(4), 929-947.
Stein, B. D., Jaycox, L. H., Kataoka, S., Rhodes, H. J., & Vestal, K. D. (2003a).
Prevalence of child and adolescent exposure to community violence. Clinical Child and Family Psychology Review, 6, 247-264.
Stein, B. D., Jaycox, L. H., Kataoka, S. H., Wong, M., Tu, W., Elliot, M. N., & Fink, A. (2003b). A mental health intervention for schoolchildren exposed to violence: A randomized controlled trial. Journal of the American Medical Association, 290, 603-611.
Stewart, R. W., Ebesutani, C., Drescher, C. F., & Young, J. (2017). The Child PTSD Symptom Scale: An investigation of its psychometric properties. Journal of Interpersonal Violence, 32(15), 2237-2256.
Suldo, S. M., Shaunessy, E., & Hardesty, R. (2008). Relationships among stress, coping, and mental health in high-achieving high school students. Psychology in the Schools, 45(4), 273-290.
Taylor, C. B., Ruzek, J. I., Fitzsimmons-Craft, E. E., Sadeh-Sharvit, S., Topooco, N., Weissman, R. S., ... & Oldenburg, B. (2020). Using digital technology to reduce the prevalence of mental health disorders in populations: Time for a new approach. Journal of Medical Internet Research, 22(7), e17493.
Tiet, Q. Q., Rosen, C., Cavella, S., Moos, R. H., Finney, J. W., & Yesavage, J. (2006). Coping, symptoms, and functioning outcomes of patients with posttraumatic stress disorder. Journal of Traumatic Stress, 19(6), 799-811.
Topooco, N., Fowler, L. A., Fitzsimmons-Craft, E. E., DePietro, B., Vázquez, M. M., Firebaugh, M. L., ... & Taylor, C. B. (2022). Digital interventions to address mental health needs in colleges: Perspectives of student stakeholders. Internet Interventions, 28, 100528.
Tynes, B. M., Willis, H. A., Stewart, A. M., & Hamilton, M. W. (2019). Race-related traumatic events online and mental health among adolescents of color. Journal of Adolescent Health, 65(3), 371-377.
Van Ginkel, J. R., Linting, M., Rippe, R. C., & van der Voort, A. (2020). Rebutting existing misconceptions about multiple imputation as a method for handling missing data. Journal of Personality Assessment, 102(3), 297-308.
Vibhakar, V., Allen, L. R., Gee, B., & Meiser-Stedman, R. (2019). A systematic review and meta-analysis on the prevalence of depression in children and adolescents after exposure to trauma. Journal of Affective Disorders, 255, 77-89.
Villalta, L., Smith, P., Hickin, N., & Stringaris, A. (2018). Emotion regulation difficulties in traumatized youth: A meta-analysis and conceptual review. European Child & Adolescent Psychiatry, 27(4), 527-544.
Wamser-Nanney, R., Scheeringa, M. S., & Weems, C. F. (2016). Early treatment response in children and adolescents receiving CBT for trauma. Journal of Pediatric Psychology, 41(1), 128-137.
Wan, F. (2021). Statistical analysis of two arm randomized pre-post designs with one post-treatment measurement. BMC Medical Research Methodology, 21(1), 1-16.
Whaley, A. L. (2001). Cultural mistrust and mental health services for African Americans: A review and meta-analysis. The Counseling Psychologist, 29(4), 513-531.
White, I. R., Royston, P., & Wood, A. M. (2011). Multiple imputation using chained equations: Issues and guidance for practice. Statistics in Medicine, 30(4), 377-399.
Whitney, D. G., & Peterson, M. D. (2019). US national and state-level prevalence of mental health disorders and disparities of mental health care use in children. JAMA Pediatrics, 173(4), 389-391.
Wickersham, A., Barack, T., Cross, L., & Downs, J. (2022).
Computerized cognitive behavioral therapy for treatment of depression and anxiety in adolescents: Systematic review and meta-analysis. Journal of Medical Internet Research, 24(4), e29842.
Wickersham, A., Petrides, P. M., Williamson, V., & Leightley, D. (2019). Efficacy of mobile application interventions for the treatment of post-traumatic stress disorder: A systematic review. Digital Health, 5, 2055207619842986.
Williams, D. R. (2018). Stress and the mental health of populations of color: Advancing our understanding of race-related stressors. Journal of Health and Social Behavior, 59(4), 466-485.
Wozney, L., Huguet, A., Bennett, K., Radomski, A. D., Hartling, L., Dyson, M., ... & Newton, A. S. (2017). How do eHealth programs for adolescents with depression work? A realist review of persuasive system design components in internet-based psychological therapies. Journal of Medical Internet Research, 19(8), e7573.
Xiao, H., Carney, D. M., Youn, S. J., Janis, R. A., Castonguay, L. G., Hayes, J. A., et al. (2017). Are we in crisis? National mental health and treatment trends in college counseling centers. Psychological Services, 14(4), 407-415. https://doi.org/10.1037/ser0000130
Yamashiro, G., & Matsuoka, J. K. (1997). Help-seeking among Asian and Pacific Americans: A multiperspective analysis. Social Work, 42(2), 176-186.
Yeager, C. M., & Benight, C. C. (2018). If we build it, will they come? Issues of engagement with digital health interventions for trauma recovery. Mhealth, 4.
Yeager, C. M., & Benight, C. C. (2022). Engagement, predictors, and outcomes of a trauma recovery digital mental health intervention: Longitudinal study. JMIR Mental Health, 9(5), e35048.
Yohannan, J., & Carlson, J. S. (2019). A systematic review of school-based interventions and their outcomes for youth exposed to traumatic events. Psychology in the Schools, 56(3), 447-464.
Yohannan, J., Carlson, J. S., & Volker, M. A. (2022). Cognitive behavioral treatments for children and adolescents exposed to traumatic events: A meta-analysis examining variables moderating treatment outcomes. Journal of Traumatic Stress, 35(2), 706-717.

APPENDIX A: CONSENT FORM

SCREENER CONSENT:

Study Title: Exploring Feasibility, Effectiveness, and Acceptability of an Online, Self-Administered Stress Reduction Program for Racial and Ethnic Minority High School (Or Recently Graduated) Students
Researcher(s) and Title: Maddy Esterer, M.A., John S. Carlson, PhD, LP, HSP
Department and Institution: Michigan State University School Psychology Doctoral Program
Contact Information: estererm@msu.edu

Thank you for visiting the QR code or survey link regarding the Life Improvement for Teens (LIFT) program. This survey includes a few questions to determine whether you may be eligible for the research. You must be at least 18 years of age to be eligible for this study. Before you begin the screening, I would like to tell you a little bit about the research. In this study, we are trying to learn whether LIFT is appropriate and helpful for racial and ethnic minority students, and also how high school (or recently graduated) students feel about the program. If you would like to continue the screening, please answer and submit the questions below. The screening will take 15 minutes. The survey includes questions that ask about your age, race, ethnicity, internet connection, stressful events you may have experienced, and your reactions to those stressful events. You do not have to answer any questions you do not wish to answer or are uncomfortable answering, and you may stop at any time.
Your participation in the screening is voluntary. However, if you choose not to answer any of the screener questions, you may not be eligible to participate. Your answers will be confidential. No one will know your answers except for the research team. Once you have finished the screener, you will be notified immediately whether or not you are eligible for the study. If you do not meet the qualifications of the research study, your information will be destroyed, and you will be provided with resources for stress. If you do qualify for the research study, you will be provided a consent form asking whether you would like to participate in the study. If you provide consent, your survey data (e.g., name, email) will be de-identified and kept in a secure location in our research lab until the completion of the study.

Thank you for answering the screening questions. You are eligible for the study based on your responses. Please finish reading this page and click to the next page to advance to the next part of the process (consenting to receive the LIFT treatment and completing one additional 15-minute survey before you receive your group assignment and first $30 gift card).

If you have concerns or questions about this study, such as scientific issues, how to do any part of it, or to report an injury, please contact Maddy Esterer through email (estererm@msu.edu), by phone (614-315-7646), or by mail (439 Erickson Hall, Michigan State University, East Lansing, MI 48823). You may also contact Dr. John Carlson through email (carlsoj@msu.edu) or by phone (517-432-4856).

If you have questions or concerns about your role and rights as a research participant, would like to obtain information or offer input, or would like to register a complaint about this study, you may contact, anonymously if you wish, Michigan State University's Human Research Protection Program at 517-355-2180, fax 517-432-4503, e-mail irb@msu.edu, or regular mail at 4000 Collins Rd, Suite 136, Lansing, MI 48910. Thank you again for your willingness to answer our questions.

CONSENT FOR TREATMENT AND TO PARTICIPATE IN A RESEARCH STUDY: Life Improvement for Teens (LIFT) Program

BRIEF SUMMARY (EXPLANATION OF THE RESEARCH):
You are being asked to participate in a research study. Researchers are required to provide a consent form to explain the purpose of the study and let you know that participation is optional. The form will explain the potential risks and benefits of your participation to help you make an informed decision. You can ask the researcher any questions that you have about the study. This research study is examining the effects of a stress and coping program called LIFT (Life Improvement for Teens) that is meant to help young people manage stressful life events. LIFT is publicly available online and was developed by psychology researchers. You must be at least 18 years old to participate in this research. If you choose to participate, you will be expected to participate for 7-14 weeks (depending on group placement). You will be placed into one of two groups. The first group will receive access to the program immediately and be expected to complete it within 7 weeks (1 hour/week on your own time) online. The second group will receive delayed access 7 weeks from your date of consent. The purpose of the delay is for the researchers to have confidence that this intervention can work for others. Both groups will be asked to complete one survey immediately following consent and one survey after 7 weeks.
If you are in the delayed access group, you will receive one additional survey after the LIFT program is provided to you for 7 weeks following the delayed period (14 total weeks from now). If you decide not to participate in the study, there are other options for coping support within your school and community. The most likely risk of participating is learning how severe your negative feelings/thoughts are compared to those of other students your age. Information will be provided on the intervention website if you feel like you are in crisis. You will be required to think about stressful or traumatic situations that you have experienced and self-report your feelings of depression, anxiety, and stress during this intervention. You may feel a little uncomfortable doing so, but this discomfort should be temporary, and you are allowed to log in or log out of the website at any time if you need a break. The benefits of participating include learning more about skills to cope with stressful situations.

PURPOSE OF RESEARCH:
You are being asked to participate in a research study on the LIFT program. You have been selected as a participant because you completed the screener and were encouraged to participate by someone at your school. The screener you already filled out determined that you are eligible for the study. From this study, the researchers hope to learn about how this online program works for students your age.

WHAT IS EXPECTED OF ME? (Procedures and Time Commitment)
If you agree to take part:
• You will be randomly placed into one of two groups, both of which will receive the online program. One group will have immediate access and one group will have delayed access (7 weeks later). The immediate access group will be asked to log on to the LIFT website 7 times (i.e., once per week for 7 weeks), for about an hour each time. You will complete this program on your own time on a computer/tablet you have access to. You can complete LIFT at home, at the library, or you could request a room with a computer to use after school. Completion of LIFT on your smartphone is not recommended, but a smartphone can be used if you have no other options. Within LIFT, you will watch videos and play games to learn some new ways to handle stressful events. You will provide your email (required) and phone number (optional) so that the researchers can send you an email/text reminder once per week to remind you to complete each LIFT chapter. The researchers will check in with you either via text or email (based on your indicated preference) halfway through LIFT (after 3 weeks) and at the end of LIFT (after 7 weeks). You will be expected to reply "Yes" or "No" at each checkpoint based on whether you have completed the modules yet or whether you need more time. You are not required to provide your phone number at the onset of the study unless you would prefer to opt in to text message reminders instead of email reminders. There will be an opportunity to receive an extra gift card if you are placed in this group and choose to participate in an optional interview.
• If you are placed into the second group, you will be placed on a waitlist to gain access to the intervention in 7 weeks. After you are granted access, you will be able to access and complete the LIFT modules on your own time on a computer/tablet you have access to regularly (at home, the library, at school, etc.).
In the meantime, you will receive a list of resources and have access to your school counselor if you are experiencing discomfort at any point. After you finish the 7-week LIFT intervention (7-week delayed period + one module from the LIFT website per week for 7 weeks = 14 total weeks), there is one final survey to complete. You will receive an extra gift card if you are in this group and complete all surveys.
• To see how the program is working for you, we will ask you to fill out a survey two times: before you start LIFT, and after you finish it. We will give you a $30 gift card to Amazon after the first survey and a $30 gift card after the second survey, for a total of $60. You will not be sent the second survey until after you have confirmed completion of all 7 LIFT modules. Regardless of which group you are placed in, you will take both surveys and be eligible to receive both gift cards. As noted above, there will also be an incentive of a third $30 gift card, for a total of $90, if you complete one additional task (survey or interview) after you finish the LIFT intervention.
• LIFT will record your answers to some checklists about mood, stressful experiences, and goals for the course. We will use that information to see how you used the program.
• If you begin LIFT and do not complete LIFT and/or the final survey, the researchers may reach out to you with questions or a short survey about your experiences.

PRIVACY AND CONFIDENTIALITY
• Your information will be safely and anonymously stored within the LIFT online platform. We will de-identify your scores on the questionnaires you fill out, and they will not be shared with anyone. There are open response boxes within LIFT that you can type in, but these are not saved (no one from the research team can see them).
• We will use what you tell us on the questionnaires only for the program. We will not tell anyone that you are in the program or tell anything about you to anyone outside of the program. However, there are exceptions to this.
• We will give information about you to others if we need to protect you from harm (for example, a medical emergency or if you are going to harm yourself).
• If you tell us about the abuse of a child or an older person, or that you are being abused, we must report it to the authorities, like the Department of Child and Family Services. ("Abuse" includes physical, sexual, or emotional abuse, and neglect.)

DATA TRANSFER FROM LIFT ONLINE PLATFORM TO MSU STUDY COORDINATOR:
• An administrator of the LIFT program will download your information (data collected by the LIFT system [i.e., chapter completion, responses to questionnaires]) from the LIFT platform. The information will not contain your name or real email address and will be securely transferred to the MSU study coordinator via a secure file sharing application.

EXPECTED BENEFITS
• LIFT may help you learn how to handle stressful events better, so they bother you less and you are able to function better in school and at home.

COMPENSATION
• You will have the opportunity to earn up to $90. You will be compensated with a gift card to Amazon.com for each survey you complete (1st survey = $30; 2nd survey = $30). Regardless of which group you are placed in, you will take both surveys and be eligible to receive $60 in gift cards as long as you complete the surveys.
You will not be eligible to receive the second survey and gift card if you have not confirmed via text or email (with a response of "yes") your completion of all 7 modules. If you are placed in the immediate access group, you may have the opportunity to receive a third $30 gift card for participating in a short interview, but this is not required. If you are placed into the delayed group, you will take a third survey after your delayed access to LIFT in order to receive a third $30 gift card. Thus, no matter what group you are in, you can earn up to $90 in gift cards. You will not receive any extra credit in school for participating in this study.

POTENTIAL RISKS
• Answering the questions on the surveys or in the LIFT program may make you feel uncomfortable. You can skip any question or stop a survey at any time. Skipping questions or stopping the questionnaire won't change your grades at school. Your counselor and teachers will not be informed of your participation in LIFT. If you feel like you need to talk to someone about stressful life events or feelings of depression or anxiety while completing LIFT, or during your waitlist period, your school counselor will be available.
• Within the LIFT program, we will ask you to write about a stressful or traumatic event, and this can sometimes be uncomfortable. You also may be asked about stressful experiences that happened to you as a result of your race or ethnicity. Writing about these events should make you feel better over time. What you type will not be saved or recorded anywhere, but you can print it out if you want. It will be up to you to decide how much to write about it and whether to print it out and show it to anyone. You may refuse to take part in the study, and you can stop being in the program at any time; doing so won't change anything about your standing at your school or your grades.

YOUR RIGHT TO PARTICIPATE, SAY NO, OR WITHDRAW
Participation in this research project is completely voluntary. You have the right to say no. You may change your mind at any time and withdraw. You may choose not to answer specific questions or to stop participating at any time. Whether you choose to participate or not will have no effect on your grade or evaluation. No one at your school will know whether you decide to participate. If you choose not to consent to this study, please know that this program is publicly available for a small cost ($20) at the following link: https://www.lift-program.org/

CONTACT INFORMATION FOR QUESTIONS OR CONCERNS
If you have concerns or questions about this study, such as scientific issues, how to do any part of it, or to report an injury, please contact the researcher (Maddy Esterer; estererm@msu.edu; (614) 315-7646; 439 Erickson Hall, Michigan State University, East Lansing, MI 48823). If you have questions or concerns about your role and rights as a research participant, would like to obtain information or offer input, or would like to register a complaint about this study, you may contact, anonymously if you wish, Michigan State University's Human Research Protection Program at 517-355-2180, fax 517-432-4503, e-mail irb@msu.edu, or regular mail at 4000 Collins Rd, Suite 136, Lansing, MI 48910.
By answering the question below, you indicate your voluntary agreement to participate in this online survey (your answer to this question acts as your signature): In order to consent to participate in this study, you must fill out the contact information below (your name and a valid email address). By typing in this information and clicking to the next page, you are consenting to participate and to be contacted by the study coordinator. Your contact information will only be used to send you gift cards for participation, give you access to LIFT, and contact/remind you about study requirements (once/week for 7 weeks). Your contact information will not be used for any other purpose, shared with anyone, or linked to the online LIFT system in any way. If you do not want to consent to participate, do not type in your contact information below.

APPENDIX B: RECRUITMENT FLYER FOR SOCIAL MEDIA

Figure B1. Recruitment Flyer for Social Media

FREE ONLINE STRESS & COPING PROGRAM FOR RACIALLY/ETHNICALLY DIVERSE HIGH SCHOOL SENIORS AND INCOMING COLLEGE FRESHMEN!
Earn up to $90 by sharing your experiences about an online program and helping future students! Click the link below to see if you are eligible!
What is Life Improvement for Teens (LIFT)?
• A confidential and secure online program completed at your own pace that helps you learn how to cope with stress via videos, games, and activities.
How long will it take? 1 hour/week for 7 weeks (on your own time!) + a few short surveys and an optional interview
We are looking for students who:
- Graduated high school (or are a high school senior) in 2023 and are at least 18 years old
- Identify as a Black, Indigenous, or Person of Color (Black, Hispanic or Latinx, Asian American, Native American, Biracial, etc.)
- Have internet access
- Have experienced something stressful or difficult in their life
This is a dissertation study for a student at Michigan State University. Your name and contact information will be kept anonymous and private. Contact the study coordinator with any questions: Maddy Esterer (estererm@msu.edu)
Note: Link to screener and consent form embedded in the flyer within social media posts

APPENDIX C: QUALITATIVE INTERVIEW PROTOCOL

Introduction: "I'm Maddy, the study coordinator you've been emailing with. Thank you so much for being willing to chat with me for 30 minutes about your experience with the LIFT program. Are you okay with me audio recording this interview? It will be de-identified, and you are welcome to keep your camera off if you prefer. I will be asking you about your experiences with LIFT and how you felt while completing the program. This was the 7-week online program you completed, where you watched videos and did activities to help you cope with stress in your life. This is a safe space to share whatever feelings you have about LIFT, good or bad, and I want to know as much as you are comfortable sharing, even if you think it is something that I might not understand. Everything you share will stay anonymous and private unless you share anything that makes me concerned about someone's safety. Our conversation should last ~30 minutes. Do you have any questions?"

1. Describe LIFT briefly to remind participants of the program (e.g., share screen with LIFT intervention guide visual reminders of main LIFT activities throughout Chapters).
2. Ask about each aspect of the LIFT program below:
a. What did you think about the fact that this program was online? (Usability)
b. What did you think about the fact that you could complete the LIFT chapters at your own pace each week? (Opportunity Cost, Burden)
c. What did you think about the videos? (probe for content and length) (Usability, Knowledge, Burden)
d. What did you think about the games and activities? (Engagement with the Program, Knowledge, Usability, Burden)
e. What did you think about the amount of content provided in each chapter? (Burden, Opportunity Cost)
f. What did you think about the homework/self-care plan assignments? (Engagement with the Program, Knowledge, Burden, Opportunity Cost)
g. What did you think about the weekly reminders? (probe for email vs. text) (Engagement with Study Components Outside of the LIFT Program)
h. What did you think about the examples provided within LIFT of problems that students your age typically face? Probe: were the examples relevant/relatable? (Perceived Norms; Relevance, Relatability, and Degree of Realistic)
3. What would you change about the format of LIFT to make it more helpful? (Suggestions for How LIFT Can Be Improved, Usability, Perceived Effectiveness)
4. What motivated you to log on to LIFT? (Want/Need, Engagement with Study Components Outside of the LIFT Program)
5. Tell me about anything that made it hard for you to complete LIFT. (Opportunity Cost, Burden, Engagement with the Program)

My last few questions are a little different. LIFT is a program designed for students from racial and ethnic minority backgrounds. For these next few questions, think about your race, ethnicity, and culture (however you define culture; the norms, values, and customs of people who share your identities [give other examples of identities outside of race/ethnicity]). For example, if you remember, at the very beginning of the program we asked you about different stressful events you may have experienced. We specifically asked about experiences with stressful things that were related to your race or ethnicity (i.e., microaggressions) because we know that racism can impact students' feelings of stress and cause strong reactions and feelings just like any other stressful event. We want to know whether LIFT is helpful or is not helpful for young people who are dealing with stressors related to their race, ethnicity, or cultural background. I am not asking you to retell anything about these stressful events. As a reminder, feel free to share as much or as little as you'd like for these questions. We appreciate your honesty if you do have critiques of the program, including things you aren't sure I would understand. Since I identify as White, I recognize it might feel uncomfortable to share some of these things. This is a safe space where you can share any aspects of your experience that you would like.

6. Did you feel like any of the aspects of your identity (i.e., race, ethnicity, or culture) impacted the way you used LIFT or influenced how you felt about LIFT? If yes, how so? (Perceived Norms, Affective Attitude)
7. Was there anything in the LIFT videos or games that felt like it conflicted with your culture or your values? (Ethicality, Perceived Norms)
a. Anything that you felt like you wanted to skip because it didn't align with what is "normal" for your family or community?
8. Thinking about the examples of problems and scenarios from the activities in LIFT, do you think they are relatable/relevant for other students your age from your cultural background? (Relevance, Relatability, and Degree of Realistic; Coping with Racism or Cultural Stress)
9. Did you share anything from this program with a family member or close friend? (Perceived Norms, Ethicality)
10. On a scale of 1-10, how strongly would you recommend this program to students who share your race, identity, or culture? Why did you select that rating? (Coping with Racism or Cultural Stress, Affective Attitude, Perceived Norms, Ethicality)
11. Tell me about anything you think adding to or changing about LIFT might make it more helpful for students from your race, ethnicity, or culture. (Coping with Racism or Cultural Stress, Perceived Effectiveness, Perceived Norms, Ethicality)
12. Do you have any other final thoughts about LIFT you'd like to share?

Thank you for participating! Your gift card will be sent to your email within 48 hours.

Risk management protocol: If the participant discloses feelings of current, mild discomfort, redirect them to the stress resources provided at treatment onset. If the participant discloses distress beyond mild discomfort (e.g., appears outwardly distressed or expresses a desire to hurt themselves or another person), conduct a risk assessment and, if they are in imminent danger, call 911. If the participant discloses any ongoing child or elder abuse, inform them that I have to break confidentiality and will need to collect more information from them to report it.

APPENDIX D: ACCEPTABILITY CODEBOOK

Table D1. Acceptability Codebook

Code: Operationalization
Affective attitude / Global acceptability: Willingness to engage in an intervention; an overall impression of how someone feels about the intervention (emotions, feelings words are used)
Ethicality: Whether an intervention fits within or conflicts with someone's value system
Burden: Whether it feels effortful or not to use
Want/need: Motivation to change
Knowledge: Existing or gained knowledge about the purpose of the intervention and about mental health; learning content or new skills
Usability: Technology issues or issues with the platform (bugs, crashes, tech not working)
Opportunity costs: The extent to which benefits, profits, values, or routines must be given up to engage in the intervention
Perceived norms: Influence of the sociocultural context in which the individual resides and the associated sociocultural norms [on how someone engages with or perceives the program]
Perceived effectiveness: Whether the intervention appeared likely to achieve or did achieve its purpose or met someone's goals; application of skills into your life, saying something is helpful
Engagement with the program: Description of how someone engaged with the program (attention, amount of use, depth of use, interest, enjoyment)
Coping with racism or cultural stress: Describing how someone used, could use, or did not use LIFT to help cope with racism or race-related/cultural stressors
Relevance, relatability, and degree of realistic: Whether or not participants could relate to the examples/scenarios in the games and videos; how realistic and relevant the examples and characters felt compared to the problems they experience in their own life
Engagement with study components outside of the LIFT program: Weekly reminders, communications with the study coordinator, surveys, gift cards, etc.

APPENDIX E: ANCOVAS FOR MECHANISMS OF ACTION (5 OUT OF 40 IMPUTATIONS AND ORIGINAL DATA)

Table E1.
ANCOVAs for Mechanisms of Action (5 Out of 40 Imputations and Original Data)

[Table E1 reports ANCOVA results (F statistic, parameter estimate, 95% confidence interval lower and upper bounds, p value, and partial eta squared) for the Model, Intercept, Pre-Score, and Group terms of each mechanism-of-action outcome (Coping – Problem Solving, Coping – Cognitive, Coping – Optimism, Coping – Avoidance, Negative Cognitions – PDC, Negative Cognitions – FPSW), across the original data (N = 32), Imputations 1-5 (N = 47), and the pooled results (N = 47; 40 imputations).]
*p < .05

APPENDIX F: POOLED INTENT-TO-TREAT ANCOVA RESULTS FOR MECHANISMS OF ACTION

Table F1. Means, Standard Deviations, and One-Way Analyses of Covariance for Mechanisms of Action (Across N = 40 Imputations)

Measure                           Treatment M¹ (SD)   Waitlist M¹ (SD)   F(1, 46) Range   η² for Significant Imputations (# of significant imputations out of N = 40)
Coping Skills – Problem Solving   6.88 (n/a)          7.74 (n/a)         .000-4.627       .095 (1)
Coping Skills – Cognitive         7.02 (n/a)          8.01 (n/a)         .001-9.928       .119 (1)
Coping Skills – Optimism          6.91 (n/a)          6.69 (n/a)         .000-4.809       .099 (1)
Coping Skills – Avoidance         6.93 (n/a)          6.84 (n/a)         .000-3.627       n/a (0)
Negative Cognitions – PDC         12.26 (n/a)         12.55 (n/a)        .000-1.737       n/a (0)
Negative Cognitions – FPSW        9.04 (n/a)          9.73 (n/a)         .049-7.154       n/a (0)

¹Pooled across N = 40 imputations
n/a = pooled estimate unavailable and/or no statistically significant findings by imputation, so no partial eta squared was reported
*pooled p < .05
PDC = Permanent and Disturbing Change; FPSW = Fragile Person in a Scary World
Coping Skills subscale score ranges: 0-12
Negative Cognitions PDC score range: 6-24
Negative Cognitions FPSW score range: 4-16
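For reference, the pooling that "across N = 40 imputations" refers to follows the standard combining rules (Rubin's rules). Stated minimally, with m imputations, per-imputation estimates \(\hat{Q}_i\), and their squared standard errors \(U_i\):

\[
\bar{Q} = \frac{1}{m}\sum_{i=1}^{m}\hat{Q}_i, \qquad
\bar{U} = \frac{1}{m}\sum_{i=1}^{m}U_i, \qquad
B = \frac{1}{m-1}\sum_{i=1}^{m}\left(\hat{Q}_i - \bar{Q}\right)^{2}, \qquad
T = \bar{U} + \left(1 + \tfrac{1}{m}\right)B
\]

with m = 40 here; pooled 95% confidence intervals take the form \(\bar{Q} \pm t_{\nu}\sqrt{T}\), with degrees of freedom \(\nu\) from the usual small-sample adjustment.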
APPENDIX G: ANCOVAS FOR TARGETED OUTCOMES (5 OUT OF 40 IMPUTATIONS AND ORIGINAL DATA)

Table G1. ANCOVAs for Targeted Outcomes (5 Out of 40 Imputations and Original Data)

Note. Rows report the F statistic, the parameter estimate (b) with its 95% confidence interval, the p value, and partial eta squared (η2). Model rows have no parameter estimate; pooled rows report only b, the 95% CI, and p.

PTSS

Original Data (N = 29)
Model: F = .927, p = .41, η2 = .067
Intercept: F = .245, b = 6.832, 95% CI [-12.194, 25.858], p = .63, η2 = .009
Pre-Score: F = 1.500, b = .575, 95% CI [-.390, 1.540], p = .232, η2 = .055
Group: F = .712, b = -4.083, 95% CI [-14.027, 5.862], p = .406, η2 = .027

Imputation 1 (N = 47)
Model: F = .073, p = .930, η2 = .003
Intercept: F = 8.551, b = 18.867, 95% CI [5.163, 32.571], p = .005*, η2 = .163
Pre-Score: F = .008, b = -.031, 95% CI [-.735, .673], p = .929, η2 = .000
Group: F = .142, b = 1.391, 95% CI [-6.042, 8.825], p = .708, η2 = .003

Imputation 2 (N = 47)
Model: F = .528, p = .594, η2 = .023
Intercept: F = 4.739, b = 13.616, 95% CI [-.352, 27.584], p = .035*, η2 = .097
Pre-Score: F = .538, b = .261, 95% CI [-.456, .979], p = .467, η2 = .012
Group: F = .427, b = 2.456, 95% CI [-5.121, 10.032], p = .517, η2 = .010

Imputation 3 (N = 47)
Model: F = .903, p = .413, η2 = .039
Intercept: F = 2.181, b = 8.632, 95% CI [-4.802, 22.066], p = .147, η2 = .047
Pre-Score: F = 1.336, b = .396, 95% CI [-.294, 1.086], p = .254, η2 = .029
Group: F = .339, b = 2.106, 95% CI [-5.181, 9.393], p = .563, η2 = .008

Imputation 4 (N = 47)
Model: F = .228, p = .797, η2 = .010
Intercept: F = 5.256, b = 15.822, 95% CI [1.072, 30.572], p = .027*, η2 = .107
Pre-Score: F = .300, b = .206, 95% CI [-.552, .964], p = .586, η2 = .007
Group: F = .120, b = 1.372, 95% CI [-6.628, 9.373], p = .731, η2 = .003

Imputation 5 (N = 47)
Model: F = .677, p = .513, η2 = .030
Intercept: F = 10.397, b = 20.356, 95% CI [5.995, 34.716], p = .002*, η2 = .191
Pre-Score: F = .008, b = -.032, 95% CI [-.770, .706], p = .931, η2 = .000
Group: F = 1.354, b = 4.497, 95% CI [-3.293, 12.286], p = .251, η2 = .030

Pooled (N = 47; 40 imputations)
Intercept: b = 13.759, 95% CI [-9.600, 37.117], p = .245
Pre-Score: b = .237, 95% CI [-.938, 1.411], p = .690
Group: b = 1.154, 95% CI [-9.061, 11.369], p = .824

Depression

Original Data (N = 29)
Model: F = 14.002, p < .001*, η2 = .519
Intercept: F = .509, b = 3.908, 95% CI [.060, 7.755], p = --, η2 = .019
Pre-Score: F = 22.635, b = .682, 95% CI [.388, .977], p < .001*, η2 = .465
Group: F = 8.506, b = -5.071, 95% CI [-8.645, -1.497], p = .007*, η2 = .247

Imputation 1 (N = 47)
Model: F = 7.133, p = .002*, η2 = .245
Intercept: F = 3.685, b = 4.341, 95% CI [-.056, 8.739], p = .061, η2 = .077
Pre-Score: F = 14.264, b = .557, 95% CI [.260, .853], p = .000*, η2 = .245
Group: F = .180, b = -.757, 95% CI [-4.355, 2.842], p = .674, η2 = .004

Imputation 2 (N = 47)
Model: F = 25.538, p = .000*, η2 = .537
Intercept: F = .250, b = 1.863, 95% CI [-1.778, 5.503], p = .620, η2 = .006
Pre-Score: F = 49.472, b = .845, 95% CI [.603, 1.088], p = .000*, η2 = .529
Group: F = 1.334, b = -1.852, 95% CI [-5.085, 1.380], p = .254, η2 = .029

Imputation 3 (N = 47)
Model: F = 4.172, p = .022*, η2 = .159
Intercept: F = 6.468, b = 5.672, 95% CI [.870, 10.474], p = .015*, η2 = .128
Pre-Score: F = 7.758, b = .453, 95% CI [.125, .781], p = .008*, η2 = .150
Group: F = .070, b = .510, 95% CI [-3.373, 4.392], p = .793, η2 = .002

Imputation 4 (N = 47)
Model: F = 26.841, p = .000*, η2 = .550
Intercept: F = 2.076, b = 3.346, 95% CI [.173, 6.519], p = .157, η2 = .045
Pre-Score: F = 52.544, b = .751, 95% CI [.542, .960], p = .000*, η2 = .544
Group: F = 3.246, b = -2.433, 95% CI [-5.155, .289], p = .078, η2 = .069

Imputation 5 (N = 47)
Model: F = 5.409, p = .008*, η2 = .197
Intercept: F = 17.811, b = 7.751, 95% CI [4.362, 11.140], p = .000*, η2 = .288
Pre-Score: F = 9.101, b = .331, 95% CI [.110, .552], p = .004*, η2 = .171
Group: F = 2.462, b = -2.292, 95% CI [-5.235, .652], p = .124, η2 = .053

Pooled (N = 47; 40 imputations)
Intercept: b = 4.724, 95% CI [-.471, 9.920], p = .074
Pre-Score: b = .585, 95% CI [.164, 1.006], p = .007*
Group: b = -2.836, 95% CI [-7.015, 1.343], p = .183

Anxiety

Original Data (N = 29)
Model: F = 15.623, p = .000, η2 = .546
Intercept: F = .027, b = 2.330, 95% CI [-1.082, 5.743], p = .871, η2 = .001
Pre-Score: F = 20.823, b = .728, 95% CI [.400, 1.056], p = .000*, η2 = .445
Group: F = 11.512, b = -4.119, 95% CI [-6.614, -1.624], p = .002*, η2 = .307

Imputation 1 (N = 47)
Model: F = 2.598, p = .086, η2 = .106
Intercept: F = 8.256, b = 5.333, 95% CI [1.417, 9.248], p = .006*, η2 = .158
Pre-Score: F = 5.076, b = .379, 95% CI [.040, .719], p = .029*, η2 = .103
Group: F = .006, b = .115, 95% CI [-2.811, 3.042], p = .937, η2 = .000

Imputation 2 (N = 47)
Model: F = 6.649, p = .003*, η2 = .232
Intercept: F = 1.561, b = 3.613, 95% CI [-.017, 7.244], p = .218, η2 = .034
Pre-Score: F = 11.723, b = .546, 95% CI [.225, .868], p = .001*, η2 = .210
Group: F = 4.186, b = -2.656, 95% CI [-5.273, -.040], p = .047*, η2 = .087

Imputation 3 (N = 47)
Model: F = 11.370, p = .000*, η2 = .341
Intercept: F = 1.568, b = 3.090, 95% CI [-.458, 6.637], p = .217, η2 = .034
Pre-Score: F = 20.179, b = .711, 95% CI [.392, 1.030], p = .000*, η2 = .314
Group: F = 2.913, b = -2.002, 95% CI [-4.365, .362], p = .095, η2 = .062

Imputation 4 (N = 47)
Model: F = 16.997, p = .000*, η2 = .436
Intercept: F = .752, b = 2.498, 95% CI [-.293, 5.289], p = .391, η2 = .017
Pre-Score: F = 33.316, b = .705, 95% CI [.459, .951], p = .000*, η2 = .431
Group: F = 6.038, b = -2.517, 95% CI [-4.582, -.453], p = .018*, η2 = .121

Imputation 5 (N = 47)
Model: F = 6.105, p = .005*, η2 = .217
Intercept: F = 6.345, b = 4.762, 95% CI [1.519, 8.006], p = .015*, η2 = .126
Pre-Score: F = 8.999, b = .429, 95% CI [.141, .717], p = .004*, η2 = .170
Group: F = 3.110, b = -1.966, 95% CI [-4.214, .281], p = .085, η2 = .066

Pooled (N = 47; 40 imputations)
Intercept: b = 3.505, 95% CI [-.766, 7.776], p = .107
Pre-Score: b = .596, 95% CI [.183, 1.009], p = .005*
Group: b = -2.468, 95% CI [-5.499, .562], p = .110

*p < .05

APPENDIX H: POOLED INTENT-TO-TREAT ANCOVA RESULTS FOR TARGETED OUTCOMES

Table H1. Means, Standard Deviations, and One-Way Analyses of Covariance for Targeted Outcomes (Pooled Across N = 40 Imputations)

Measure      Treatment (N = 24) M1 (SD)   Waitlist (N = 23) M1 (SD)   F(1, 46) Range   η2 Range for Significant Imputations (# of significant imputations out of N = 40)
PTSS         19.38 (n/a)                  18.01 (n/a)                 .000-4.097       .085 (1)
Depression   10.62 (n/a)                  11.77 (n/a)                 .070-12.986      .091-.228 (17)
Anxiety      7.29 (n/a)                   9.33 (n/a)                  .006-21.302      .084-.326 (23)

1 Pooled across N = 40 imputations.
n/a = pooled estimate unavailable.
*pooled p < .05.
PTSS = post-traumatic stress symptoms.
PTSS score range: 0-51. Depression score range: 0-30. Anxiety score range: 0-18.
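A note on the pooled rows and columns in Appendices E through H: estimates are combined across the m = 40 imputed datasets. The tables do not spell out the pooling rule; the conventional procedure for multiply imputed data (Rubin's rules), assumed here, combines each per-imputation estimate \( \hat{Q}_j \) and its squared standard error \( \hat{U}_j \) as

\[ \bar{Q} = \frac{1}{m}\sum_{j=1}^{m}\hat{Q}_j, \qquad \bar{U} = \frac{1}{m}\sum_{j=1}^{m}\hat{U}_j, \qquad B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{Q}_j - \bar{Q}\bigr)^2, \]

\[ T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr)B, \]

where \( \bar{Q} \) is the pooled estimate and T is the total variance underlying the pooled confidence intervals and p values. This reading is consistent with the layout of Tables F1 and H1: F statistics and partial eta squared values do not pool into a single closed-form summary, so those tables instead report the by-imputation F range and the number of imputations reaching significance.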
APPENDIX I: ACCEPTABILITY CODES AND QUOTES ACROSS OPEN RESPONSE AND INTERVIEW TOOLS

Table I1. Acceptability Codes and Quotes Across Open Response and Interview Tools

Note. Quotes are grouped by acceptability code, then by source (interviews, N = 2; open-response questions, N = 8) and valence (positive vs. negative or would change). Cells without corresponding quotes were n/a and are omitted below.

Burden
Interviews - Positive:
"If there was one that was long, I think it was just like the those like, like, choose your own adventure type ones. So it was like, an active. And so I wasn't, I wasn't bored." (Interview 2)
"The length of the videos was fine. Honestly, they wasn't too long…It wasn't too long or too short. So like, it wasn't like unbearable to sit through" (Interview 1)
"And I just put it like near somewhere where I can see it every day and try to at least complete one thing for the day or for the week. Be like, you know, not make my mind go like all crazy. Right. So I saw I did think the homework was like, what the homework assignments were good." (Interview 1)
Open Response - Positive:
"The lessons were concise and quick to complete yet very informative."
Open Response - Negative or Would Change:
"Re-doing the same prompts over and over again… add new prompts for writing sections"
"I felt that the flow of the activities was sometimes off… sometimes it felt more like a tedious assignment rather than a helpful tool."
"Some of the videos were too long and repetitive"
"Being forced to write was a love-hate relationship"

Coping with racism or cultural stress
Interviews - Positive:
"To most likely a lot of people in the like in the community because, you know, drugs and stuff was brought up to kind of destroy the community, and it kind of runs in the blood ever since. So I do think it's very relatable and it does, like, it does help I feel like it is gonna help a lot of people of color be like, you know, just recognize that you can be like, No, I don't want to do it. It's just not me, you know... other people in the city, just like me has most likely encountered peer pressure to do something that you know, they wasn't brought up into doing but their fun was at like a young age. I remember there was like a situation [in LIFT] where it was like, hey… there's gonna be like drinks and stuff at the party. And technically like the person didn't want to do it. And I feel like that's probably very realistic to a lot of people in the city who are also people of color that was brought up or brought up on alcohol and stuff." (Interview 1)
Interviews - Negative or Would Change:
"...I never really saw any stressful experiences that were specifically like about like, microaggressions or like, any, I think if it's… more targeted for that purpose, like to help minorities, I think some of the stressful situations, at least more of them should have been related to like microaggressions or like, maybe not as big as like hate crimes, but I think just there should be more examples related to instances of aggression..." (Interview 2)
"... Like, maybe more situations related to like, like, mental health within like certain communities would have been more helpful… if the program addressed that more specifically, and maybe in other cultures, it's like different words, or different phrases, or different presentations, I think having those specific situations would have been helpful to see." (Interview 2)
"And maybe also me personally, I know, like the Latinx community is, there's a huge stigma around mental health, but I'm pretty open to it. So I guess, me doing this program is already more just like me, going outside of like the norms of my own culture, like, this isn't something that I would see there. So I don't think there was ever a possibility for this program, or like, any mental health like, program to maybe fit into my culture… I managed this [LIFT] on my own… I don't really generally feel comfortable speaking with mental health with my family." (Interview 2)

Ethicality
Interviews - Positive:
"I feel like the questions wasn't, it didn't feel like it was targeted to people of color in like a bad way…Like how other companies…. Like, they kind of use it for like, let's say oh, like, for, you know, like gimmicks and pain and like laughter or whatnot, but… I think in this program, it didn't, because I do feel like, um if you're like a person of color... [LIFT] wasn't anything that was like racially targeted or anything…" (Interview 1)

Knowledge
Interviews - Positive:
"…I did, like, when it was talking about some strategies, I did already know because like, I've kind of learned them in therapy. It was, like, good to have that little reminder of what I can do." (Interview 1)
"And the breathing exercise that was super helpful, because I made sure to do that, like every week. A thing I could do whenever" (Interview 2)
"I used it mostly like to deal with, like I said, my grief…And I never thought about tackling it as like in steps. And I think I'm definitely using that more nowadays as like when I feel like I have something big like stressful coming up… I like start doing small things to like move up to it. So maybe my laundry has been piling up. So maybe the first step is just doing my laundry. And then maybe the like, the next step is like actually confronting the person or something." (Interview 2)
"Oh, the publication as well, when you had to read like the newspaper story from like a third person perspective, I hadn't, like I had written about my feeling.., but from a third person perspective, like having that activity was really helpful. Because it helped me like analyze my emotions…Like more coldly and not like, like getting so caught up in what I'm feeling the more just like from an outsider's perspective." (Interview 2)
Open Response - Positive:
"[I liked] the different situations they placed in those games and for me to figure out what would be the best option to lower stress."
Open Response - Negative or Would Change:
"Every other chapter felt like I did the same thing as the last"

Opportunity costs
Interviews - Positive:
"I feel like to me, it was helpful, because at the time, you know, it was summer, and I was working. So it did give me a lot of like, space to be flexible with when I wanted to log into the program, when I'm not busy. So that way, I can put my full attention to it. So I did really like that part of it." (Interview 1)
Interviews - Negative or Would Change:
"If I'm just like, tired from work, or anything, when I do kind of, like, go into like a little sleep hibernation, where I just don't really want to do anything. So that was probably one of like, the only period of finicky time with completing it." (Interview 1)
Open Response - Positive:
"I liked being able to be honest about my experiences without having to talk to someone about it face to face"

Perceived effectiveness
Interviews - Positive:
"…and the one that helped me the most was, it was like, putting the, like addressing your biggest stressor as like a, like levels of goals, you had like a level one stressor that you had to do. And then like, once you were comfortable doing that you moved on to the like the level three one, the level four, eventually getting to like the level 10. So I didn't feel like I had to go straight to like, addressing my like trauma, like straight to the 10th when I could build up to it that was like the most helpful homework I had. Because even though I was uncomfortable, like I eventually got there, and I liked that it was like broken up into little steps." (Interview 2)
"I think because of my own trauma, like the course was more helping me process like death and the death of my mom and also my dog this summer… I think it's great for anxiety. It's really great for anxiety, it gave great tips and like the situations all were more about anxiety than depression..." (Interview 2)
"And it did teach me some new stuff on how to handle my stress, if I'm like, feeling out of, you know, out of place in my body at the moment." (Interview 1)
Interviews - Negative or Would Change:
"…identifying the emotions. Like me, personally, I didn't need help with that. But I can see how maybe someone else would have been able to benefit from that activity." (Interview 2)

Perceived norms
Interviews - Positive:
"So I didn't, I didn't like feel like I was personally… like, I was personally targeted in the questions [as a person of color] and like, I like in the little quizzes. So I did like that about the program." (Interview 1)

Relevance, relatability, and degree of realism
Interviews - Positive:
"And so some of the examples, anything that was related to grief was on point and relevant to me. But that's my personal experience. And I so I personally, can like relate to the ones that were about more like PTSD types of stressors." (Interview 2)
"I remember a lot of the characters in these situations being like.. animated characters, they were some more POC" (Interview 2)
Interviews - Negative or Would Change:
"I think there was some like about school shooting related or like gunshot related and that I didn't relate to those [situations]." (Interview 2)
Open Response - Positive:
"… in each section they gave interesting videos about certain situations and how we typically react to them"
Open Response - Negative or Would Change:
"Include more games related to situations of different types of trauma"

Usability
Interviews - Positive:
"…I wouldn't really change nothing, because I had an easy format. And so like, you're not confused on where to go." (Interview 1)
Interviews - Negative or Would Change:
"…it was a bit like outdated, but I don't see how it maybe having a more modern like, appearance would help at all, I don't really care." (Interview 2)
"it was a tiny bit buggy…" (Interview 1)
Open Response - Positive:
"I like the flow"
Open Response - Negative or Would Change:
"The user interface could have been more appealing… Maybe more updated graphics"
"Adding in more choices for exercises in individual chapters for accessibility."

Want/need
Interviews - Positive:
"…just like wanting to finish the course and see how I would be different from the start to the end… I just wanted to finish it just so I could see like, learn more skills and implement into my life." (Interview 2)

APPENDIX J: BY-PARTICIPANT RESULTS FOR N = 6 ADHERENT SUB-GROUP

Table J1. By-Participant Results for N = 6 Adherent Sub-Group

Participant #   PTSS1 Score Pre → Post   Anxiety2 Score Pre → Post   Depression3 Score Pre → Post   Mechanisms of Action4 (Coping and Negative Cognitions)                         Acceptability5 - Overall Mean
2               19 → 17                  7 → 3                       20 → 13                        Minimal changes                                                                1.07 (moderate)
3               19 → 9                   13 → 1                      11 → 5                         Improvement in coping (Cognitive, Optimism) and cognitions (FPSW)             2.64 (high); Interviewee 2
5               29 → 23                  12 → 12                     18 → 14                        Minimal changes                                                                1.64 (moderate)
7               25 → 32                  10 → 10                     5 → 10                         Minimal changes                                                                1.86 (moderate)
8               17 → 4                   7 → 2                       13 → 5                         Minimal changes                                                                2.64 (high); Interviewee 1
9               18 → 6                   6 → 3                       5 → 2                          Improvement in Optimism coping                                                 1.93 (moderate)

1 Post-traumatic stress symptoms; score range 0-51; clinical mild threshold = 10.
2 Anxiety score range 0-18; clinical threshold = 7.
3 Depression score range 0-30; clinical threshold = 11.
4 Minimal = less than a 4-point change on the coping measure; a score above the clinical cut-off of 16 on the negative cognitions measure.
5 Range: 0-3.
CA = cultural acceptability; FPSW = Fragile Person in a Scary World subscale.

APPENDIX K: DEMOGRAPHIC QUESTIONNAIRE AND TRAUMATIC EVENTS SCREENER

Screener Demographic Questions:

What race do you identify with? Options: African American, American Indian/Alaska Native, Asian, Native Hawaiian/Pacific Islander, White, Multiracial, Biracial, or Other (can type in)

What ethnicity do you identify with? Options: Hispanic/Latino, Non-Hispanic/Latino, Other (can type in)

What is your age? Dropdown menu

Do you have consistent access on a weekly basis to a computer, tablet, or smartphone AND an internet connection (either at your house or a public place, such as your school or a library)? This question is meant to ensure that you will have a device and internet to complete the LIFT program. Options: yes or no

Traumatic Events Screener:

Check off any of the following things that have EVER happened to you at any time in your life. Make sure you only check things that really happened, not things in movies or video games.

o You were in a serious accident or natural disaster where you could have been badly hurt or killed.
o You've seen a serious accident where someone could have been (or was) badly hurt or died.
o Someone close to you was very sick or injured.
o Someone close to you died.
o You had a serious illness or injury or had to be rushed to the hospital.
o You were attacked by a dog or other animal.
o Someone told you they were going to hurt you.
o You've been slapped, punched, hit, or beaten up by someone.
o You've seen someone being slapped, punched, hit, or beaten up by someone else.
o You've been threatened or attacked with a weapon (gun or knife).
o You've seen someone else being threatened or attacked with a weapon (gun or knife).
o You had an upsetting experience with another person(s) that was related to your race or ethnicity (Black, White, Asian, Native American, Hispanic) where you feared for your life, health, or safety (i.e., you were followed in a store, called a racial slur).
o Someone very close to you had an upsetting experience with another person that was related to their race or ethnicity, where you feared for the life, health, or safety of that person.
o You were impacted by racism as a result of something you learned about – for example, on the news or in your community – that involved someone you did not know personally, in such a way that made you worry about your own wellbeing, health, or safety (i.e., seeing videos/pictures of people from your racial or ethnic group being detained, beaten, killed, or arrested).
o You have experienced more than one racial micro-aggression (i.e., brief comments/insults/gestures that send disrespectful messages to people of color because they belong to a minority group) that over time have made you feel worried about your wellbeing, health, or safety.
o NONE of these things happened to you.

How upsetting was the worst thing(s) that happened to you?
o not at all upsetting
o somewhat upsetting
o quite upsetting
o extremely upsetting

Pre-Test Demographic Questions:

What is your sex assigned at birth? Options: Male, Female, Intersex

What is your gender identity? Options: Man/Cisgender male, Woman/Cisgender female, Transgender Man, Transgender Woman, Agender, Androgynous, Bigender, Genderfluid, Gender Questioning, Non-binary/Genderqueer, Two Spirit, Other, or choose not to disclose

What is your primary language? Options: English, Spanish, Other

Do you have either of the following supports within your school? Options: IEP, 504 Plan, Neither, Unsure

In the last year, I have:
o Met with a therapist outside of school to talk about my mental health
o Met with my counselor or psychologist at school to talk about my mental health
o Not met with anyone to discuss my mental health
o Other (type in)

I am currently:
o Meeting with a therapist outside of school to talk about my mental health
o Meeting with my counselor or psychologist at school to talk about my mental health
o Not meeting with anyone to discuss my mental health
o Other (type in)

Have you ever been prescribed and taken medication for your mental health or behavioral health issues? (e.g., anti-depressant, anti-anxiety, or ADHD medicine)
o Yes, and I am currently taking this medication
o Yes, but I am not currently taking this medication
o No, I have never been prescribed or taken a medication for my mental health.

APPENDIX L: CHILD PTSD SYMPTOM SCALE

The Child PTSD Symptom Scale (CPSS) - Part I

The following is a list of problems people can have after a traumatic event. If you checked off more than one event, please think about the event that had the biggest impact on you or the one that bothers you the most. Then answer how often you had each problem in the LAST WEEK.

Each item is rated: 0 = Not at all; 1 = Once in awhile; 2 = Half the time; 3 = Almost always

1. Having upsetting thoughts or images about the event that came into your head when you didn't want them to
2. Having bad dreams or nightmares
3. Acting or feeling as if the event was happening again (for example, hearing something or seeing a picture about it and feeling as if I am there again)
4. Feeling upset when you think about it or hear about the event (for example, feeling scared, angry, sad, guilty)
5. Having feelings in your body when you think about or hear about the event (for example, breaking out into a sweat, heart beating fast)
6. Trying not to think about, talk about, or have feelings about the event
7. Trying to avoid activities, people, or places that remind you of the event (for example, not wanting to go to school or the park)
8. Not being able to remember an important part of the upsetting event
9. Having much less interest or not doing things you used to do
10. Not feeling close to people around you
11. Not being able to have strong feelings (for example, being unable to cry or unable to feel happy)
12. Feeling as if your future plans or hopes will not come true (for example, you will not have a job or getting married or having kids)
13. Having trouble falling or staying asleep
14. Feeling irritable or having fits of anger
15. Having trouble concentrating (for example, losing track of a story on the television, forgetting what you read, not paying attention in class)
16. Being overly careful (for example, checking to see who is around you and what is around you)
17. Being jumpy or easily startled (for example, when someone walks up behind you)

APPENDIX M: CHILDREN'S COPING STRATEGIES CHECKLIST

Sometimes kids have problems or feel upset about things. When this happens, they may do different things to solve the problem or to make themselves feel better. For each item below, choose the answer that BEST describes how often you usually did this to solve your problems or make yourself feel better during the past month. Just indicate how often YOU USUALLY did each thing in order to solve your problems or make yourself feel better during the past month.

Each item is rated: 0 = Never; 1 = Sometimes; 2 = Often; 3 = Most of the time

When you had problems in the past month…
1. You thought about what you could do before you did something
2. You tried to stay away from the problem
3. You did something to make things better
4. You told yourself that things would get better
5. You thought about what would happen before you decided what to do
6. You told yourself that it would be OK
7. You tried to stay away from things that made you feel upset
8. You did something to solve the problem
9. You tried to make things better by changing what you did
10. You told yourself that in the long run, things would work out for the best
11. You thought about which things are best to do to handle the problem
12. You told yourself that it would work itself out
13. You avoided the people who made you feel bad
14. You thought about what you needed to know so you could solve the problem
15. You avoided it by going to your room
16. You did something in order to get the most you could out of the situation

APPENDIX N: CHILD POST-TRAUMATIC COGNITIONS INVENTORY SHORT FORM MEASURE

CPTCI & CPTCI-S
How I've been thinking and feeling since the frightening event

We would like to know what kinds of thoughts and feelings you've been having after the frightening event. Below is a list of statements. Please read each statement carefully and tell us how much you AGREE or DISAGREE with each statement by ticking one box. People react to frightening events in many different ways. There are no right or wrong answers to these statements.

Response options for each statement: Don't agree at all; Don't agree a bit; Agree a bit; Agree a lot

1. My reactions since the frightening event mean I have changed for the worse.
2. I don't trust people.
3. My reactions since the frightening event mean something is seriously wrong with me.
4. I am no good.
5. I can't cope when things get tough.
6. I used to be a happy person but now I am always sad.
7. Bad things always happen.
8. I will never be able to have normal feelings again.
9. My life has been destroyed by the frightening event.
10. My reactions since the frightening event show that I must be going crazy.

APPENDIX O: REVISED CHILDREN'S ANXIETY AND DEPRESSION SCALE