AN INVESTIGATION OF THE EFFECTIVENESS AND TRANSPORTABILITY OF THE INCREDIBLE YEARS SELF-ADMINISTERED PARENT TRAINING PROGRAM WITH AN AT-RISK HEAD START SAMPLE

By

Jessica L. Osburn

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Psychology-Doctor of Philosophy

2013

ABSTRACT

AN INVESTIGATION OF THE EFFECTIVENESS AND TRANSPORTABILITY OF THE INCREDIBLE YEARS SELF-ADMINISTERED PARENT TRAINING PROGRAM WITH AN AT-RISK HEAD START SAMPLE

By

Jessica L. Osburn

The primary purpose of this study was to evaluate the effectiveness of the Incredible Years Self-Administered Parent Training Program (IY-SAPTP) by assessing the degree to which parents alter their parenting strategies and in turn impact their children's conduct problems, carry out the program with integrity, and consider it an acceptable way to treat their child's behavior problems. Thirty-seven parents of children enrolled in Head Start who were identified as behaviorally at-risk via a reliable and valid screening approach served as participants in this research project. A replicated AB design across participants (N = 37) was used to investigate individual change resulting from completing the IY-SAPTP. Effectiveness data were gathered across two levels: the parent and the child. Parent report of their own parenting practices revealed statistically significant increases in the use of positive parenting practices (i.e., appropriate discipline, positive parenting, monitoring, clear expectations) and statistically significant decreases in the use of negative parenting practices (i.e., use of harsh discipline, inconsistent discipline) from pre- to post-intervention. The effectiveness of the intervention on child behavior change was assessed through both pretest-posttest group design and single-case research design techniques. There was a statistically significant increase in pro-social skills based on the Devereux Early Childhood Assessment-Preschool (DECA; LeBuffe & Naglieri, 1999) Total Protective Factors (TPF) scores from pre-intervention to post-intervention (M = 47.65, SD = 8.82), t(36) = 6.53, p < .0005 (two-tailed). There was also a statistically significant decrease in problem behavior scores from pre-intervention (M = 71.65, SD = 1.67) to post-intervention (M = 59.14, SD = 6.79), t(36) = 11.32, p < .0005 (two-tailed) on the DECA Behavior Concerns scale. On average, parents completed 67% (SD = 21.10) of workbook activities and 66% (SD = 19.52) of target behaviors, indicating an overall high level of treatment integrity. Parents also reported a high level of treatment acceptability (M = 93.89, SD = 11.15) on the Treatment Evaluation Questionnaire-Parent form (Kelley, Heffer, Gresham, & Elliott, 1989).

A secondary purpose of this study was to evaluate the transportability of the IY-SAPTP to a community-based setting. Transportability was evaluated through the random assignment of participants (N = 37) to one of two conditions. One condition involved implementation of the steps needed for parents to complete the self-administered program by those external to the agency. The other condition involved implementation of the self-administered program via collaboration with agency-related mental health consultants. Contrary to expectations, results indicated that there was not a difference in the implementation practices of Head Start consultants when compared to university-based consultants based on the time it took for families to complete the program or in attrition rate.
Consistent with expectations, there were no statistically significant differences in effectiveness, integrity, or acceptability between the groups. This study adds to the body of evidence supporting the effectiveness of the IY-SAPTP and supports the notion that this intervention can be effectively utilized within community-based settings. While there are design limitations important to note, results indicate that this intervention may be an accessible and beneficial program for parents of children at-risk for behavior problems.

I dedicate this dissertation and degree to my family, without whose love, support and inspiration this would not have been possible. To my sweet Natalie Pearl. You make me want to be the best at everything I do, so that I can always give the best to you. To Chad. I am forever grateful to have the most wonderful husband, father and friend by my side. Your support has made all the difference.

ACKNOWLEDGEMENTS

This project represents a monumental accomplishment in my professional and personal life. As my wise advisor has been known to say, "Disequilibrium equals growth." Upon reflecting on this dissertation process, no statement has ever been truer for me. I have questioned my choices and considered quitting on many occasions throughout this process. Thankfully, I have had many supportive, patient, and encouraging people on this path with me to help me see the project through until the end.

First and foremost, I want to thank my advisor, Dr. John Carlson. He has been through many ups and downs with me, but has guided me through the process. Throughout my graduate school career he taught me very valuable lessons about life and work balance ("Work hard, play hard"). There were definitely times when my balance was off, and he knew to push me when I needed to be pushed and was supportive when I needed the support. I most certainly would not have completed this project without his guidance. I would also like to thank my other committee members, Drs. Magen, Oka, and Witmer, for their valuable insights. Their varied perspectives and thoughtful questioning undoubtedly strengthened my final product.

My dissertation certainly would not have been possible without the collaborative efforts with Head Start. Both the Michigan State and Head Start consultants were dedicated to improving the outcomes of the families they served. I appreciate the time and effort they provided to both me and the Head Start families. And finally, I greatly appreciate and admire the parent participants in this study. Thank you for being open and willing to try new parenting strategies. I wish you and your families the best!
TABLE OF CONTENTS

LIST OF TABLES ix
LIST OF FIGURES x

CHAPTER 1
INTRODUCTION 1
Statement of the Problem 1
Call for Early Intervention 1
Current Study 4

CHAPTER 2
LITERATURE REVIEW 6
Head Start 6
Parent Training as an Evidence Based Intervention 9
The Incredible Years Self-Administered Parent Training Program (IY-SAPTP) 12
Efficacy and Effectiveness 14
From Efficacy to Effectiveness 17
Pre-Posttest Design 18
Single-Subject Research Design 19
Treatment Integrity 22
Increasing Treatment Integrity 25
Treatment Acceptability 26
Research Questions, Hypotheses and Rationale 27
Effectiveness 28
Transportability 33

CHAPTER 3
METHODS 37
Attrition 38
Measures 38
Pre-Post Measures 38
Progress Monitoring Measures 40
Treatment Acceptability Measures 42
Procedure 42
Research Design 44
Data Analysis 46
Repeated Measures 46

CHAPTER 4
RESULTS 48
Change in Parent Behavior 48
Change in Child Behavior 50
Change in Pro-Social Skills 50
Change in Disruptive Behavior 52
Treatment Integrity 54
Relationship Between Treatment Integrity and Behavior Change 56
Treatment Acceptability 57
System Outcomes/Transportability 58
Consultant Feedback 59
Life Events 60
Parent Comprehension of Materials 60
Consultant Barriers 61

CHAPTER 5
DISCUSSION 62
Change in Parent Behavior 63
Change in Child Behavior 65
Pro-social Skills 65
Externalizing Behavior 66
Treatment Integrity 68
Relationship Between Treatment Integrity and Behavior Change 69
Treatment Acceptability 70
Transportability of IY-SAPTP 71
Implications of Study Findings on Intervention Research 73
Limitations 74
Future Directions 76

APPENDICES 79
APPENDIX A Table 1. Program Contents 80
APPENDIX B Table 2. Measures Collected 81
APPENDIX C Table 3. Dependent Variables and Measures 83
APPENDIX D Table 4. Demographic Information 85
APPENDIX E Table 5. DECA Protective Factors and Behavioral Concerns 88
APPENDIX F Table 6. Paired Sample t-test Results on LIFT PPI 89
APPENDIX G Table 7. Effect Size Calculation on GAS and GCF Pro-Social Skills 90
APPENDIX H Table 8. Effect Size Calculation on GCF Negative Behaviors 92
APPENDIX I Table 9. Effect Size Calculations on BASC Monitor 94
APPENDIX J Table 10. Mean Treatment Integrity on Videos, Workbooks, and Target Activities 96
APPENDIX K Table 11. Mean Effect Sizes for Participants with Low and High Treatment Integrity 98
APPENDIX L Table 12. Mean Parent Treatment Evaluation Questionnaire Scores 99
APPENDIX M Figure 1. Global Change Form Social Change Scores from Baseline through Intervention 100
APPENDIX N Figure 2. Goal Attainment Scaling Scores from Baseline through Intervention 101
APPENDIX O Figure 3. Global Change Form Attention Scores from Baseline through Intervention 102
APPENDIX P Figure 4. Global Change Form Hyperactivity/Impulsivity Scores from Baseline through Intervention 103
APPENDIX Q Target Activity Checklists 104

REFERENCES 109

LIST OF TABLES

Table 1. Program Contents 80
Table 2. Measures Collected 81
Table 3. Dependent Variables and Measures 83
Table 4. Demographic Information 85
Table 5. DECA Protective Factors and Behavioral Concerns 88
Table 6. Paired Sample t-test Results on LIFT PPI 89
Table 7. Effect Size Calculation on GAS and GCF Pro-Social Skills 90
Table 8. Effect Size Calculation on GCF Negative Behaviors 92
Table 9. Effect Size Calculations on BASC Monitor 94
Table 10. Mean Treatment Integrity on Videos, Workbooks, and Target Activities 96
Table 11. Mean Effect Sizes for Participants with Low and High Treatment Integrity 98
Table 12. Mean Parent Treatment Evaluation Questionnaire Scores 99
LIST OF FIGURES

Figure 1. Global Change Form Social Change Scores from Baseline through Intervention 100
Figure 2. Goal Attainment Scaling Scores from Baseline through Intervention 101
Figure 3. Global Change Form Attention Scores from Baseline through Intervention 102
Figure 4. Global Change Form Hyperactivity/Impulsivity Scores from Baseline through Intervention 103

CHAPTER 1
INTRODUCTION

Statement of the Problem

Disruptive behavior in preschool-age children is the most common reason for referral to child mental health services (Breitenstein, Hill, & Gross, 2009). Studies have reported prevalence rates as high as 23% for clinically significant disruptive behavior among toddlers (O'Brien, 1996), and as high as 35% for economically disadvantaged preschoolers (Webster-Stratton, Reid, & Hammond, 1998). In addition to its high prevalence, disruptive behavior exhibits a high degree of stability over time if not treated (Lahey et al., 1995). Behavior problems exhibited in early childhood pose significant challenges, not only for the affected child, but also for their family and for society as a whole. The presence of behavior problems in young children appears to be a common pathway to a wide range of psychiatric disorders in adolescence and adulthood, as well as to delinquency and criminal behavior (Farrington, 1995).

Call for Early Intervention

Given the enormous potential long-term societal costs of childhood disruptive behavior disorders, the need for early intervention is strongly indicated. The preschool years appear to be an optimal time for treating disruptive behavior disorders for several reasons. First, behavior problems in young children are less entrenched relative to older children and, second, parents have more influence on their child's behavior at this young age (Capage, Foote, McNeil, & Eyberg, 1998). In addition, available evidence suggests that interventions are more effective at the preschool age than at later ages (Capage et al., 1998).

Recognizing that early intervention can have a lasting impact on children, many states across the country have expressed interest in identifying and serving young children at-risk for behavioral problems (Feil, Walker, Severson, & Ball, 2000). This recognition has created the opportunity for several promising interventions for child and adolescent behavioral problems to become more fully researched and utilized. These interventions include group therapy, social skills training, family therapy, pharmacological interventions, and parent education. Of these interventions, parent education is probably the best documented cost-effective treatment for child and adolescent behavioral problems (Mash & Dozois, 2003). There is clear evidence from randomized trials (Scott, Spender, Doolan, Jacobs, & Aspland, 2001; Webster-Stratton, Reid, & Hammond, 2004) and systematic reviews (Barlow & Stewart-Brown, 2000) that conduct problems can be prevented and treated with parenting interventions that combine both cognitive and behavioral techniques. For example, a meta-analysis of 26 controlled studies found that the average child whose parents received parent training had lower levels of observed and parent-reported problem behaviors than 80% of children whose parents did not (Serketich & Dumas, 1996).
These interventions help parents learn positive parenting techniques, including enhancing play and supportive interactions, and employing more consistent discipline and encouragement for good behavior. However, many intervention trials have been carried out in specialist clinics, with only a handful located in "real-world" child mental health settings (Scott et al., 2001). While the parent training model is promising, parent training tends to be less effective and less available to socioeconomically disadvantaged families (Webster-Stratton & Hammond, 1990; Serketich & Dumas, 1996). Families with a low socioeconomic status (SES) face many barriers to effective treatment, as they often lack financial resources, reliable transportation, and childcare (Serketich & Dumas, 1996). Parent training often occurs in group formats, which may be threatening to many parents who are not confident speaking in public or disclosing private family information (Gordon, 2000). In addition, parents may not seek help for their children's problems because they are concerned about social stigma (Gordon, Graves, & Arbuthnot, 1995). Increasingly, home-based interventions are being advocated for use with families who are isolated from outside services because of distance, lack of transportation, or personal concerns associated with treatment (Gordon et al., 1995).

An alternative to the group parent training model is a self-administered program. Multiple studies have established the efficacy of a self-administered videotape modeling training program for parents (Webster-Stratton, Hollinsworth, & Kolpacoff, 1988; Webster-Stratton, 1992; Kratochwill, Elliott, Loitz, Sladeczek, & Carlson, 2003), yet little research has been conducted on the effectiveness of these programs. While there has been a strong emphasis placed on the use of evidence-based interventions, little research has been conducted regarding the "real world" application of these practices. How these interventions are used outside of the context of controlled clinical research settings is an area that needs closer examination. As researchers such as Chorpita (2003) and Fixsen (2005) have highlighted, simply putting a research-based intervention in the hands of community-based settings may not be enough to achieve the same results that were obtained by clinicians in more controlled settings. A systematic evaluation of the effectiveness of these programs would seem valuable given the increasing need for efficient and effective parenting interventions. Many parents who could benefit from parent-training services never seek services, never follow through in obtaining those services when they are recommended, or terminate training early (Knitzer, 2000).

Chorpita (2003) acknowledges that the gap between clinical service and research-based practices remains a significant one for the field of psychology. He proposes a model for how evidence may ultimately be connected to practice by highlighting four lines of research (I. Treatment Efficacy; II. Effectiveness: Transportability; III. Effectiveness: Dissemination; IV. Effectiveness: System Evaluation). In order for research to find its way into practice, all four of these research lines must be carried out and their results examined. The majority of existing research is limited to Type I: Treatment Efficacy (Chorpita, 2003). A prerequisite to carrying out this model is to develop a collaborative relationship with the entity in which shifts in practice may be sought.
In line with this school of thought, the United States Centers for Disease Control (2004) recommend greater use of parenting interventions for preventing youth violence and conduct disorder. They stress the importance of interventions starting early and being locally based and accessible, particularly given that families most at risk may find it hard to access conventional services. In sum, there is a great need to examine "efficacious" interventions within real-world settings, taking socioeconomic factors into consideration, while meeting the needs of families with children at-risk for the development of behavior problems. This project specifically targets Chorpita's (2003) Type II: Effectiveness: Transportability line of research by examining an efficacious treatment in a self-administered format (Incredible Years) for parents of preschool children at-risk for later behavioral problems.

Current Study

The primary purpose of this study was to investigate the effectiveness of a self-administered, evidence-based parent training program [i.e., the Incredible Years Self-Administered Parent Training Program (IY-SAPTP)] with children enrolled in Head Start who exhibit behavior problems. Please see Table 1 for an overview of program contents (Webster-Stratton, 2002). A secondary purpose of this study was to closely examine possible barriers to successful implementation by Head Start staff. While numerous studies have demonstrated the efficacious nature of the Incredible Years program (e.g., Webster-Stratton, Reid, & Hammond, 2004), previous work with Head Start revealed that the program was difficult to carry out with integrity given system constraints and barriers faced by the mental health consultants working within Head Start. A significant need to understand these challenges emerged. This study was designed to more closely examine the effectiveness, treatment integrity, acceptability, and the process of implementing an evidence-based program (IY-SAPTP) within this community-based setting.

CHAPTER 2
LITERATURE REVIEW

This literature review begins by highlighting the importance of intervening early with problem behaviors. Addressing these early emerging problem behaviors through the use of empirically supported interventions in community-based settings is discussed. Specifically, the research on the Incredible Years Parent Training Program is outlined. Finally, given the emphasis on maximizing cost effectiveness and efficiency, research on the IY-SAPTP is highlighted. Treatment integrity and treatment acceptability are crucial constructs as they pertain to the self-administered format, and research on these constructs is presented. The purpose of the current study is described and the research questions, hypotheses, and rationale are outlined.

It has been estimated that the prevalence of maladaptive aggressive behavior in young children within low-income preschool settings such as Head Start may be as high as 35%, or more than twice that found within the general preschool population (Webster-Stratton & Hammond, 2001). There is an increasing body of empirical literature indicating these early emerging problems are likely to persist over time (Feil, Walker, Severson, & Ball, 2000).

Head Start

Head Start is a preschool program for disadvantaged children that aims to improve their skills so that they can begin schooling on an equal footing with their more advantaged peers.
Begun in 1965 as part of President Johnson's "War on Poverty," Head Start now serves over 900,000 children, mainly enrolled in part-day programs (www.headstartinfo.org). This enrollment represents 50 percent of eligible three- and four-year-old poor children (Children's Defense Fund, 2000). Head Start divides skill development into three areas: cognitive skills, school readiness, and social and emotional development. All Head Start programs are governed by a set of federally legislated Program Performance Standards. First published in the 1970s and revised during subsequent Congressional reauthorizations, the Head Start Program Performance Standards define the services that programs are required to provide to the children and families they serve. These standards provide standardized definitions of Head Start quality and are used as the structure for monitoring services at both the local and federal levels.

Research shows that children learn better when they have good physical and mental health and have families whose own needs are met so they can devote their energy to nurturing and educating their children. Therefore, the standards emphasize not only children's cognitive development, but also their social, emotional, and physical development, as well as parent involvement. The emphasis on social emotional competencies comes from compelling evidence from developmental research that has revealed that early experiences and relationships at home and school set the stage for how a child learns self-regulation skills, as well as the ability to manage emotions, take the perspective of others, and develop close relationships (National Research Council and Institutes of Medicine, 2000). Evidence also exists that children's social and emotional competence is linked to their cognitive and academic competencies as manifested by their ability to learn and be successful at school (Mash & Dozois, 2003). Furthermore, evidence suggests that without intervention, emotional and behavioral problems in young children (e.g., aggression, antisocial behavior patterns) may be less amenable to intervention as a child ages (Barnett, 1995), resulting in an escalation of academic problems and antisocial behavior and eventual school dropout in later years (Qi & Kaiser, 2003).

A growing body of evidence indicates that preschool children experience emotional and behavioral difficulties at prevalence rates similar to those of older children. Two independent studies reported that 20% of preschool children exhibit moderate to significant emotional and behavioral problems, and that percentage is believed to be higher for preschoolers in a Head Start setting (Lavigne et al., 1996; Pianta & Caldwell, 1990). In recent years, emotional and behavioral problems have been observed in children at even younger ages. For example, over 5,000 preschool children were expelled in 2005, a rate over three times that of their K-12 peers in the same time frame (Gilliam, 2005). Unsurprisingly, Gilliam found that children expelled from preschool do not acquire the social, behavioral, and cognitive experiences that provide a foundation for later school success. They are thereby at an even higher risk of later school failure.

Early identification of children at-risk for externalizing behavior problems is essential in order to mitigate these negative outcomes. It has been shown that behavior problems are relatively persistent and predictive of future developmental maladjustment (Campbell, 1997).
Empirical studies indicate that young children living in high-risk environments are the most likely to manifest emotional and behavioral maladjustment. Major risk factors associated with emotional and behavioral problems include poverty, living in a single-female-headed household, and exposure to the multiple stressors associated with densely populated urban settings (Campbell, 1997; Lavigne et al., 1996). Identifying children at-risk for behavioral problems is the first step in linking them with appropriate services. For this reason, the Head Start Performance Standards require agencies to screen for social emotional difficulties within the first 45 days of school. Screening plays a vital role in the identification of children at-risk for behavioral problems and is significant because it leads to the early treatment of potential behavioral and developmental problems. Only children who have been identified can receive an appropriate intervention, and the earlier the intervention is received, the less costly, in personal and monetary terms, the behavioral and developmental problems will be. In this sense, screening must be carried out in the context of a continuum of care that also provides for both assessment and treatment.

Parent Training as an Evidence-Based Intervention

It has recently been recognized that the optimal time for early intervention for behavior problems is the preschool age. This is because during this stage behaviors tend to first emerge and occur at a high rate (Sanders et al., 2004), and because parents' and preschoolers' behaviors are less entrenched and more amenable to change (Williford & Shelton, 2008). Therefore, there is a need for effective interventions to ameliorate behavior problems in preschoolers.

After studying children displaying disruptive behaviors, Patterson and colleagues concluded that contingencies in the child's social environment, rather than internal psychological traits, were most responsible for the child's adjustment problems. They suggested that retraining the child's parents may be not only desirable but often absolutely necessary. Patterson published the first widely used parent training book in 1968 (as cited in Serketich & Dumas, 1996). Since then, a steady stream of parenting books has proliferated in the market (Christophersen & Mortweet, 2003). Research has shown that adverse parenting practices have a major influence on the development, maintenance, and exacerbation of behavior problems by unintentionally discouraging pro-social behaviors and inadvertently teaching negative behaviors through modeling and reinforcement (Kazdin & Weisz, 2003). Discipline may be inconsistent or overly permissive, and it is harsh discipline in particular that is the strongest predictor of externalized child behavior (Reyno & McGrath, 2006). Conversely, children who are given clear, firm, consistent and appropriate consequences for misbehavior exhibit fewer behavior problems (Arnold, O'Leary, Wolf & Acker, 1993).

Incorporating the parent as the change agent is intended to provide (a) improved access to the child's natural environment, (b) more reliable and valid information, (c) better generalization, maintenance, and prevention, and (d) improved cost efficiency (Webster-Stratton, 1998).
Once researchers demonstrated that parents could learn behavioral principles and apply them to change the behavior of their children, they began to investigate the effectiveness of parent training in ameliorating a variety of child behavior problems and focused on the most effective and efficient means of training parents. Parent training has been applied to non-compliant children, hyperactive children, and children with specific behavioral problems. The following sections, while not an exhaustive review, discuss key findings on the use of parent training to treat childhood behavior problems and on the use of various training formats to teach parents new skills.

The prominent role of dysfunctional parent-child interaction in the development of disruptive behavior problems (Campbell, 1997; Patterson, 1982) suggests the need for interventions aimed at modifying the contingencies that shape these dysfunctional interactions. Parent training focuses on the interactions at home between parents and children, particularly those that are coercive in nature. While there is no universal standard parent training protocol, many of the most effective and empirically supported versions target similar parenting behaviors. Parent training packages differ primarily in where they place their emphasis. Parenting behaviors frequently emphasized are: giving effective directions, noticing and rewarding good behavior, using non-coercive discipline, monitoring child activities, communicating about emotions, and problem solving. While more recent research has focused on validating the efficacy of different parent training packages, early parent training research focused on evaluating the effectiveness of its various components (Maughan et al., 2005). Knowledge of key findings in this area is valuable when assessing the relative strengths and liabilities of current parent training packages.

For example, a meta-analysis conducted by Maughan and colleagues (2005) concluded that behavioral parent training is an efficacious method for addressing the needs of children with externalizing behavior problems. Overall, parent training in both group and one-on-one formats has been found to be an effective treatment for child behavior problems (Moore & Patterson, 2003). It has been found to be more effective than therapists' treatment as usual and, based on observations of parent and child behaviors, more effective than family therapy (Webster-Stratton & Hooven, 1998). Additionally, greater improvements in child behavior at home, mothers' confidence, and client satisfaction have been found for parent training than for eclectic treatments at a child mental health center (Webster-Stratton & Hooven, 1998). Further, a meta-analysis found that parent training resulted in 80% of children being "better adjusted" compared to control groups (Serketich & Dumas, 1996). Furthermore, two-thirds of children whose parents participate in parent training show clinically significant improvement at 1-year follow-up (Reid & Webster-Stratton, 2001). Overall, parent training has been effective in improving parent and child behaviors, communication, and parenting self-esteem (Reyno & McGrath, 2006). There is considerable support for the use of parent training to improve behaviors in children with externalizing disorders.
For example, Brestan and Eyberg (1998) conducted a meta-analysis of studies that examined psychosocial treatments for conduct-disordered children and adolescents and evaluated them based on the American Psychological Association (APA) Division 12 criteria. Two programs met the stringent criteria to be recommended by APA Division 12 and were considered well established. These programs were videotaped modeling parent-training programs, specifically The Incredible Years and Patterson's parent training based on Living with Children (1968). This recommendation means that these programs have produced statistically significant results in randomized, controlled group trials that used reliable and valid outcome measures and have been replicated at least twice in independent studies.

The Incredible Years Self-Administered Parent Training Program (IY-SAPTP)

Given the significant demands placed on Head Start, it is important that a given intervention is not only proven effective, but is also one that can be easily carried out. For example, videotaped parent-training programs that are self-administered have been shown to work as well as group-administered techniques, and represent a much more cost-effective strategy (Webster-Stratton et al., 1988). The parent training portion of The Incredible Years program emphasizes positive strategies for behavior management administered in the home setting and is delivered in a self-administered videotape format. The topics that are part of this series include: How to Play with a Child and Helping Children Learn, Praise & Rewards, Effective Limit Setting and Dealing with Noncompliance, and Handling Misbehavior. In addition to the videotapes themselves, there are also workbooks for parents, books for parents, specific activity directions, and refrigerator notes. Six randomized control group evaluations (www.incredibleyears.com) have revealed that The Incredible Years parent training has significantly (a) increased parents' positive affective responses and decreased the use of criticism, harsh discipline, and negative comments, (b) increased parents' use of effective limit-setting and non-violent discipline, (c) reduced parental depression and increased parental self-confidence, (d) increased positive family communication and problem solving, (e) increased parents' bonding and involvement with teachers and classrooms, and (f) reduced conduct problems in children's interactions with parents. Additionally, studies have indicated that the benefits of this program have persisted over time (Webster-Stratton & Hammond, 1997; Webster-Stratton, Hollinsworth, & Kolpacoff, 1989).

One study by Webster-Stratton and colleagues (1988) found no significant difference in improvement between self-administered parent training, group discussion video modeling, and group discussion conditions for improving behavior in conduct-disordered children. Mothers (n = 114) and fathers (n = 80) of children with conduct problems between the ages of 3 and 8 were randomly assigned to one of the three treatment groups or to a wait-list control group. Both the group-administered and self-administered formats in this study were found to show clinically significant parent-reported child behavior change and parent-child interaction improvements over the control group. Additionally, the effects reported in this study were maintained one year later in all groups (Webster-Stratton, Hollinsworth, & Kolpacoff, 1989).
The study also indicated the relevance of techniques that employ real-life settings and situations, which self-administered formats often have. This type of program can also be easily transported and replicated to other situations and families. Utilization by a variety of professionals is essentially built into the program. The use of a manual and prescribed activities allows parents to guide themselves through the treatment program.

The self-administered format has also been compared to a self-administered format with therapist consultation and a control group (Webster-Stratton, 1990). Results indicated positive outcomes for both treatment groups with regard to parent-reported improvement in child behavior, decreased parental stress, and improvement in some parenting practices. In terms of child behavior change, the therapist consultation group was superior to the strictly self-administered group.

Kratochwill and colleagues (2003) compared manual- and videotape-based parent and teacher training with a control group. Participants in this study included Head Start children with both internalizing and externalizing behavioral concerns. While the researchers reported some behavioral improvements, the results were less robust compared to the aforementioned Webster-Stratton studies. However, it should be noted that the researchers were not certified trainers, thus limiting their ability to carry out the treatment as intended. Moreover, they did not carry out the comprehensive program in a manner consistent with expectations (i.e., content was not fully presented in the order expected). The concern with certification is eliminated with the self-administered format.

Ogg and Carlson (2009) utilized the IY-SAPTP in a single case study with children diagnosed with ADHD. With regard to perceived effectiveness, all participants (n = 5) reported improvements in adaptive skills, with improvements in other behavioral areas being less consistent. Walcott, Carlson, and Beamon (2009) also studied the effectiveness of a self-administered parent training program for parents of children with ADHD. Three of the four participants demonstrated positive behavioral change during the intervention phase. Consistent with the aforementioned Ogg and Carlson (2009) study, changes in core ADHD symptoms were less consistent.

Efficacy and Effectiveness

Despite high levels of concern about the early identification of problem behaviors, and evidence of potentially effective treatments, only a small proportion of children access services for behavior problems (Webster-Stratton, 1998). Furthermore, despite many "efficacy" trials, studies suggest relatively few services have a firm evidence base. Therefore, a key policy question is how services can reach large numbers of families through provision that is effective, yet accessible, low-cost, and able to be carried out with integrity. This is an important line of research given that the demand for mental health consultants in the Head Start community appears to far outnumber the availability (Knitzer, 2000).

Health insurance companies and government agencies are pressuring workers in health-related fields to create cost-effective, short-term treatments and interventions and have become reluctant to reimburse for long-term therapy (Currie, 2001).
This has resulted in a shift from long-term psychotherapy or incarceration to community-based agencies and managed care, with treatment focusing on brief behavioral interventions, validated cost-effective treatments, multidisciplinary case management, and in-home services (Currie, 2001). This shift has created significant pressure to provide evidence for, and to be accountable for, the services that are provided to children and adolescents.

The United States Centers for Disease Control (2004) recommend greater use of parenting interventions for preventing youth violence and conduct disorder. They stress the need for interventions to start early, and to be locally based and accessible, particularly given that families most at risk may find it hard to access conventional services. To achieve this, they emphasize partnership between health services and community-based organizations. In keeping with these policies, parenting interventions for troubled families and children are increasingly being provided by community-based agencies. These agencies typically aim to provide services that, in many cases, reach families who are marginalized. On the other hand, common challenges for such agencies may include insecure funding and employment; partial reliance on volunteers; and neighborhood-based facilities which, although accessible for families, may be experienced by staff as dispersed, inconvenient, and poorly equipped. Furthermore, given the many demands on the staff, they are less likely to have formal professional training in social emotional interventions, raising issues about what is likely to be appropriate training and supervision for carrying out such complex interventions. Given these contextual differences between interventions in community-based settings and those in more specialized clinic settings, it is necessary to investigate whether evidence-based programs can be translated into such settings and still be effective.

In psychology, defining what constitutes an "evidence-based" psychological intervention has been challenging. Yet, some general conclusions have been drawn. Characteristics of studies that provide empirical support for interventions (i.e., "treatment efficacy" studies) include the use of a manual or protocol, appropriate use of experimental design including randomized controlled trials, comparisons to other treatments or placebo, replication across research teams, and replication found within studies utilizing single-case design methodology. Efficacy research tells us how strongly an intervention works to create change or improve functioning within an identified target syndrome (e.g., Conduct Disorder, Attention-Deficit/Hyperactivity Disorder). A criticism of this work is that these studies fail to bring to light how, or whether, these treatments actually work within practice (i.e., real-world settings). It is important to recognize the ways in which effectiveness research not only can complement efficacy research but also can provide important information that is not otherwise available. There is long-standing recognition in intervention research involving both treatment and prevention that studies demonstrate what can happen under often ideal or quite special conditions, but that this may differ from what happens when the intervention is extended to the situations and settings of everyday life. Extension of findings to clinical practice is critically important. Both internal validity and external validity are critical aspects of evaluating interventions.
The major difference is that effectiveness research relaxes exclusionary criteria and broadens the scope of outcome measurement. Because the results can be generalized, effectiveness research has the added capability of informing decision making in realistic settings.

From Efficacy to Effectiveness

Dissemination of evidence-based programs is often compromised by low adherence to protocols, misapplication across diverse populations, inadequate resources, and poor infrastructure, support, training and planning (Fixsen et al., 2005). Investigations of effectiveness still involve oversight by the researcher, adherence to protocol, and systematic data gathering practices (Shirk, 2004). Transportability refers to the ease with which an empirically supported treatment can be moved from the research setting in which it was developed and tested to the "real world" (Schoenwald & Hoagwood, 2001). Transportability is integral to effectiveness, as a treatment must be transportable in order to investigate its effectiveness (Schoenwald & Hoagwood, 2001).

While both efficacy and effectiveness studies are needed to advance the use of evidence-based practice in the "real world," it is widely recognized that the majority of research studies emphasize the efficacy of interventions. Chorpita (2003) introduced a framework to advance evidence-based practice. This framework involves four lines of research. Type I involves Efficacy studies. These studies evaluate interventions in a controlled research context. Type II studies are called Effectiveness: Transportability studies. These studies examine the degree to which intervention effects generalize from research to practice settings, and also examine the acceptability and feasibility of implementing the intervention in a practice setting (Schoenwald & Hoagwood, 2001). Type III studies are called Effectiveness: Dissemination studies; they involve the implementation of a research protocol entirely by the practice-based implementer (i.e., not by the primary researcher). While this research is carried out in a practice setting by a practitioner, Type III research still involves a research protocol and researcher control, which could have an impact on the effectiveness of the intervention. Finally, Type IV research is termed Effectiveness: System Evaluation. Chorpita (2003) characterized these studies as involving "the final inference to be made: whether the practice elements can lead to positive outcomes where a system stands entirely on its own" (p. 46).

Pre-Posttest Design

Evaluating the effectiveness of an intervention in a community setting can be challenging (Chambless & Hollon, 1998). There are several methodological challenges inherent in conducting intervention research. First, while random assignment to intervention conditions is the gold standard, it is often difficult to achieve due to practical obstacles as well as ethical and philosophical objections. Second, intervention studies can be expensive and difficult for community agencies to conduct. Most intervention studies have to accept limitations to outcomes and generalization when they are measured against the ideal prototypes of experimental studies. Still, they can offer important contributions to the growing body of knowledge of what works in the treatment of problem behavior (Chorpita, 2003). As previously mentioned, it is often difficult to meet the demands of the ideal experimental research design, which requires randomization and the use of a control group.
The primary benefit of the pre-post design, in which pre-intervention and post-intervention data are compared to investigate an intervention's effectiveness, is its simplicity and ease of use in community settings. However, despite its practical appeal, without a control group it is impossible to rule out other factors such as historical events (other practice or policy changes occurring during the same time that the intervention takes place), the effects of repeated assessment (previous assessment exposure or test learning effects), or participant maturation (natural, development-related increases in performance) as equally plausible alternative explanations for any observed study finding.

Using the participant as his or her own control (i.e., single-case research design) offers an improvement over pre-post designs with regard to internal validity. For example, in the one-group pretest-posttest design, the differences in the simple arithmetic means for the group on the pretest and posttest are compared. However, when the individual is used as his or her own control, pretest and posttest comparisons are made by pairing each individual's pretest and posttest scores, calculating the differences, and then determining whether these calculated differences across the group are significantly different from pretest to posttest using, for example, a paired t-test analysis. The main threat to internal validity that remains with this paired-comparisons design is that of history, because other simultaneous events cannot be ruled out as affecting pre- to post-test changes (Horner et al., 2005).

Single-Subject Research Design

Single-subject research designs represent another strategy that can be used in practice-based research to explore the effectiveness of interventions within clinical practice (Horner et al., 2005). This design promotes the close inspection of potential intervention effects at the individual level through the use of the participant as his or her own control. Single-subject research design has been described as any research involving one subject or one group that is treated as a single entity (Gresham, 1998). Using repeated observations, the effect of an intervention can be established. Single-subject design has substantial support as a valid research design, and can effectively help answer questions about whether the independent variable(s) is having an effect on the dependent variable(s). Gresham (1998) suggests that the single-case design is useful for a number of purposes, including applying new treatment techniques and determining areas for future research. Additionally, Kazdin (1982) suggests that this technique helps strengthen the connection between research and practice, which is central to thinking about the balance between the efficacy and effectiveness of a treatment (Chorpita, 2004), a critical aspect of this study.

Historically, single-subject researchers have not used statistics to support conclusions about intervention effectiveness but have relied on the visual inspection of data (Olive & Smith, 2005). Single-subject researchers have also cited the priority of applied clinical importance over statistical relevance as justification for their strong reliance on visual inspection of data (Kazdin, 1982). A second reason often cited for not relying on statistics to support research conclusions is that data from single-case designs often violate some of the assumptions on which various statistical tests depend (Kazdin, 1982).
For example, data in single-case designs are autocorrelated, increasing the likelihood of Type I errors during calculations. Also, the participants often used in single-subject research do not represent the normal population, nor do they meet the assumption of homogeneity of variance. Despite these valid reasons for not using statistical analysis with single-subject data, the realities of conducting research in applied settings may weaken the internal validity of single-case research designs, thus limiting the ability to rely solely on visual inspection of data (Kazdin, 1982). Recently, the topic of using statistical techniques has begun to receive increased attention within the literature. For example, several authors have started reporting effect sizes for experiments conducted using single-subject designs (Scruggs & Mastropieri, 2001).

To date, no statistical approach for examining single-case research has been able to control for autocorrelation (i.e., the fact that scores are not independent) while providing a metric that integrates the full constellation of variables used in visual analysis of single-case designs to assess the level of experimental control demonstrated by the data. Different proposals have succeeded in elements of this task (Busk & Serlin, 1992; Parker & Hagan-Burke, 2007; Parker, Hagan-Burke, & Vannest, 2007), but none has offered a model that addresses each of those concerns. The majority of effect size calculations that have been studied for use with single-case designs (Scruggs & Mastropieri, 2001) are either regression-based, measure the percentage of non-overlapping data (PND), or use standardized mean difference (SMD) methods.

SMD methods are similar to the effect size calculations used for group designs (e.g., Cohen's d). Busk and Serlin (1992) described how to calculate effect sizes using the standardized mean difference (SMD) equation. Essentially, this method involves dividing the mean difference between the baseline and intervention phases by the pooled standard deviation from both phases. This procedure was first recommended for use with single-case designs by White and colleagues (1989), and gives an output similar to Cohen's d (Ross, 2012).

Scruggs, Mastropieri, and Casto (1987) first introduced the percentage of non-overlapping data (PND) technique. This technique involves determining the percentage of data points in the intervention phase that exceed the most extreme data point in the baseline phase. While some researchers support this method because of its ease of calculation, there are several situations in which the PND statistic will be inaccurate. First, if even one outlier is present in the baseline data, the PND will be greatly impacted. Second, outliers in the intervention phase can lead to a small positive PND even if the general effect is negative. Finally, a positive score can be obtained even if the baseline trend simply continues throughout the intervention.

Regression methods used to analyze single-case research data are based on the assumption that linearity exists. This is problematic, as data obtained from single-subject studies are not linear. Olive and Smith (2005) argue against the use of the regression effect size (r) with single-subject data. However, other studies (Brossart et al., 2005) conclude that regression approaches, specifically the Allison-MT method, are the best available.
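To make the SMD and PND metrics described above concrete, the following sketch (written in Python purely for illustration, using hypothetical baseline and intervention observations that are not drawn from this study) shows one way each statistic could be computed for a single AB data series.

    import numpy as np

    def smd(baseline, intervention):
        # Standardized mean difference: mean change from baseline to intervention,
        # divided by the standard deviation pooled across both phases.
        baseline = np.asarray(baseline, float)
        intervention = np.asarray(intervention, float)
        pooled_sd = np.sqrt(
            ((len(baseline) - 1) * baseline.var(ddof=1) +
             (len(intervention) - 1) * intervention.var(ddof=1)) /
            (len(baseline) + len(intervention) - 2)
        )
        return (intervention.mean() - baseline.mean()) / pooled_sd

    def pnd(baseline, intervention, desired_direction="increase"):
        # Percentage of intervention-phase points that exceed the most extreme
        # baseline point (the baseline maximum when improvement means an increase,
        # the baseline minimum when improvement means a decrease).
        if desired_direction == "increase":
            non_overlapping = sum(x > max(baseline) for x in intervention)
        else:
            non_overlapping = sum(x < min(baseline) for x in intervention)
        return 100.0 * non_overlapping / len(intervention)

    # Hypothetical weekly ratings of a target pro-social behavior (AB design)
    baseline_phase = [2, 3, 2, 3, 2]
    intervention_phase = [4, 5, 5, 6, 5, 6]
    print(round(smd(baseline_phase, intervention_phase), 2),
          round(pnd(baseline_phase, intervention_phase), 1))

In this illustrative series the PND is 100% because every intervention point exceeds the baseline maximum of 3; a single unusually high baseline observation would lower that figure sharply, which is precisely the sensitivity to outliers noted above.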
Olive and Smith (2005) conducted a study in which they compared the results of using visual analysis with regression, SMD, and PND models of calculating effect sizes. They found that each of the methods was successful in detecting intervention effects and, when rank ordered, each method was consistent in identifying the participants with the largest effect. The authors ultimately recommended the use of the SMD approach when calculating effect sizes for single-subject data.

Treatment Integrity

Before manualized treatments like the Incredible Years are implemented in "real world" settings, research is needed that involves taking empirically supported treatments out of the tightly controlled research trials where they were developed and studying them in practice. The purpose would be to find out what modifications are needed to make the treatments effective with the clientele and within the real-life constraints of clinical practice, where many challenges serve as barriers to similar levels of treatment response. Several investigators have taken steps in this direction, for example, by treating children with clinically significant levels of difficulties in university-based lab clinics (e.g., Kendall, 1994). However, more extensive attempts may be needed to incorporate lab-tested treatments into actual clinical practice, and to test their effects, before it can be known just how exportable the experimentally derived treatments are and what changes will be needed to make them work in a practice setting.

Several reviews of the literature suggest that the measurement of treatment integrity is uncommon (Gresham, Gansle, & Noell, 1993; Wheeler, Baggett, Fox, & Blevins, 2006). Gresham, Gansle, and Noell examined 158 child studies (participants under 19 years of age) published in the Journal of Applied Behavior Analysis between 1980 and 1990. Of these 158 studies, only 16% (25 studies) systematically measured and reported levels of treatment integrity.

Specific to the Incredible Years Program, efforts have been made not only to put more emphasis on the real-world application of the program but also to systematically measure treatment integrity. Two recent studies have employed single-case design methodologies to examine the effectiveness of the self-administered program. One study, conducted by Ogg and Carlson (2009), examined the effectiveness, acceptability, and integrity of the self-administered program with children exhibiting symptoms of ADHD. Parents found the program to be an acceptable treatment. The researchers found that parents carried out the program with a high level of treatment integrity, with the exception of the completion of workbook forms. With regard to perceived effectiveness, all participants (n = 5) reported improvements in adaptive skills, with improvements in other behavioral areas being less consistent. Walcott, Carlson, and Beamon (2009) also studied the effectiveness of a self-administered parent training program for parents of children with ADHD. Three of the four participants demonstrated positive behavioral change during the intervention phase. Consistent with the aforementioned Ogg and Carlson (2009) study, changes in core ADHD symptoms were less consistent. Treatment integrity varied across participants and was found to coincide with child behavioral outcomes.

Treatment integrity is one of the key variables related to the success of an intervention.
One reason for failed research-based interventions may be that the steps needed to achieve the results were overlooked or not closely followed (Gresham, 1989). As previously indicated, while much research focuses on various interventions aimed at addressing particular types of behavior, very few studies identify the steps required to achieve a successful intervention (Gresham et al., 1993; Lane, Beebe-Frankenberger, Lambros & Pierson, 2001).

Treatment integrity is defined as the degree to which an intervention plan is implemented in the manner that was originally intended (Gresham, 1989; Gresham et al., 1993). Treatment integrity is also characterized as the technical precision and consistency with which an intervention is implemented across time (Detrich, 1999). Lane et al. (2001) defined treatment integrity as "the measurement of the accuracy and consistency with which a treatment is implemented" (p. 367). Treatment adherence, or the precise delivery of a treatment on a consistent basis (Allen & Warzak, 2000), is also frequently used when discussing issues related to treatment implementation. Meichenbaum and Turk (as cited by Telzrow, 1995) defined treatment adherence as "the degree to which the consultee is committed to implementation of a specific intervention and actively demonstrates intervention-related behaviors" (p. 501). Finally, plan implementation, or the systematic, step-by-step process of implementing an intervention as planned (Flugum & Reschly, 1994), is also a term that is used synonymously with treatment integrity.

Given the nature of a self-administered program, it is crucial to have knowledge of the level of treatment integrity and implementation integrity. In this study, the term treatment integrity refers to the degree to which the parent carries out the program as outlined in the treatment manual. Implementation integrity refers to the degree to which the trained consultants facilitate or hinder the parent's ability to carry out the program. Both are essential to examine when bringing evidence-based approaches into practice within real-life conditions.

Treatment integrity must be measured to gain a better understanding of why specific results were achieved. Gresham et al. (1993) noted that the lack of data on treatment integrity issues has significantly impeded school psychologists' understanding of which interventions or intervention components are the most effective in bringing about a desired behavioral change. Therefore, unless a research study specifically monitors treatment adherence, implementers in many studies may have made changes in the way in which the intervention was implemented. Such modifications should be reported, as they may be the key ingredient in the success of the intervention.

Another reason for the importance of treatment integrity is linked to the issue of replicating the success of an intervention. Moncher and Prinz (1991) highlighted data indicating that higher levels of treatment integrity might directly increase the probability of replicating previous studies. Failure to control the experimental variables may significantly impact the ability to replicate similar behavioral changes. Lastly, in most behavioral studies, the psychologist seeks to target and manipulate the conditions in which the desired behavioral change should occur.
By controlling the experimental conditions, the psychologist is able to control the internal validity of the study, thereby ensuring that the variable the psychologist intended to study is the one producing the desired or undesired results, as opposed to other, unwanted variables. Moncher and Prinz (1991) noted that without a systematic examination of treatment implementation, the results of any study might be open to threats to internal and external validity, as information on how changes in the independent variable affect changes in the dependent variable cannot be verified.

Increasing Treatment Integrity

Research suggests that many people implement interventions in a manner different from how they were intended (Jones, Wickstrom, & Friman, 1997; Noell et al., 2001). As previously indicated, implementation integrity is the degree to which staff are committed to implementing an intervention according to the mutually agreed upon and prescribed specifications (Telzrow, 1995). Variables discussed in relation to implementation integrity include whether there is a clearly developed treatment/intervention plan, opportunities for adapting the intervention, provision of ongoing support and/or consultation, and acceptability of the intervention to the implementers. There are several reasons why implementation integrity may not occur. One reason implementation integrity fails to occur is that the treatment plan was not clearly defined or discussed in a manner that the treatment implementer fully grasps (Gresham, 1989; Telzrow, 1995). Witt and Elliott (1985) observed, "A prerequisite to insuring treatment integrity is knowing exactly how an intervention should be conducted" (p. 266). Many interventions also do not allow room for adaptation to individual needs or circumstances. Witt and Elliott (1985) noted that teachers frequently modify complex interventions to make them more user-friendly. Implementers need to feel a sense of ownership of the intervention before they can embrace and adopt it; interventions that do not allow for modification are likely to be regarded as less acceptable. Beyond the clarity of the treatment plan, the presence or absence of appropriate support structures can also significantly impact treatment adherence. Telzrow (1995) observed that support structures provide the assistance and guidance needed to encourage staff members to monitor and correct the way in which they are implementing an intervention. One form of supportive assistance utilized to increase treatment adherence is consultation, which can take a variety of forms.

Treatment Acceptability

Elliott, Von Brock, and Robertson (1991) noted the importance of treatment acceptability to intervention adherence. Treatment acceptability is considered to be a social validity construct. Social validity is "the degree that behavior-change efforts impact favorably upon consumers" (Carr, Austin, Britton, Kellum & Bailey, 1999, p. 223). Treatment acceptability is defined as the "judgments from treatment consumers pertaining to whether or not they like the treatment procedures or effects. It is the subjective evaluation of an individual's satisfaction with treatment" (Witt & Elliott, 1985, p. 254).
Treatment acceptability is an important variable that is closely related to treatment integrity, as the degree to which an intervention is acceptable will have a direct impact on the degree to which the intervention is properly implemented (Reimers, Wacker & Koeppl, 1987). Research suggests that interventions will have a greater likelihood of being implemented if the implementers find them acceptable (Witt & Elliott, 1985). Interventions not deemed acceptable by those who will be using them can compromise the treatment adherence of the intervention (Detrich, 1999). Specific to the IY-SAPTP (2002), Stewart and Carlson (2010) obtained acceptability data from thirty parents of children ages 5 to 12 years old with reported externalizing behavioral difficulties. They found that parents considered the IY-SAPTP an acceptable and appropriate treatment for their child in terms of effectiveness and the amount of time needed for improvement.

Research Questions, Hypotheses and Rationale

While there has been a strong emphasis placed on the use of evidence-based interventions, little research has been conducted regarding "real world" application of these practices. How these interventions are used outside of the context of controlled clinical research settings is an area that needs closer examination (Chorpita, 2003). The primary purpose of this study is to evaluate the effectiveness of the IY-SAPTP by assessing the degree to which parents alter their parenting strategies and in turn impact their children's conduct problems, carry out the program with integrity, and consider it an acceptable way to treat their child's behavior problems after being treated with the Incredible Years program carried out in a self-administered format. This format allows a well-researched, evidence-based program to be implemented in a real-world context by taking it out of the lab setting and assessing the use of the program by Head Start. A secondary purpose of this study is to assess whether the IY-SAPT program is a treatment approach that can be feasibly carried out in the Head Start setting. Numerous studies have demonstrated the efficacy of the IY-SAPTP; therefore, randomization to group is not a necessary component for effectiveness research. However, because implementation integrity has not been evaluated in previous studies, and because this program was not successfully transported to this Head Start agency in the past, it was determined that random assignment to group based on consultant affiliation was necessary to control for implementation differences. This study specifically examined some of the factors (e.g., time limitations) that may contribute to differences in program effectiveness when transporting an evidence-based intervention (IY-SAPTP) to a community-based setting. The following six research questions were addressed in the current study. The first four research questions address the effectiveness of the IY-SAPTP and build on recently published Type II research that utilized only single-case design methods with fewer than 15 children across all studies (Ogg & Carlson, 2009; Walcott et al., 2009). Both pre-posttest and single-case research methodology were utilized in this investigation. The final two research questions address the transportability of this program to a community-based setting by utilizing random assignment to group and qualitative methodology within the study methods.

Effectiveness
1. Will parents who complete the Incredible Years Parent Training Program show an increase in positive parenting practices and a decrease in negative parenting practices?

Hypothesis: Parents who complete the IY-SAPT program will show an increase in their ratings of positive parenting practices. After undergoing the training for 10 weeks, parents will use more skills related to (a) promoting positive relationships with their child, (b) use of appropriate discipline, (c) setting clear expectations, and (d) monitoring their child, and will show a decrease in their use of (e) harsh discipline and (f) inconsistent discipline, based on the six subscale scores of the LIFT (Linking the Interests of Families and Teachers) Parent Practices Interview (1998).

Rationale: While not all risk factors for conduct disorders are amenable to intervention (e.g., economic status), risk factors such as parenting practices, specifically harsh and inconsistent discipline and low nurturing, have been identified as the most important risk factors for early-onset conduct problems. As such, the primary approach to treating externalizing behavior problems in children has been to teach parents to alter their parenting practices. Support for this hypothesis can be found in the numerous efficacy studies conducted by Webster-Stratton. For example, in a study (Webster-Stratton, 2000) of the self-administered format, parents reported significantly less use of spanking and lower stress levels.

Analysis: A paired sample t-test will be conducted to analyze the six subscale scores (promoting positive parenting, use of appropriate discipline, setting clear expectations, monitoring, harsh discipline, and inconsistent discipline) on the Parent Practices Interview to determine if there is a measurable difference in parenting techniques from pre to posttest for the 37 parents who completed the program.

2. Will children identified with early onset conduct problems whose parents complete the training program show significant behavioral improvement?

Hypothesis: Children whose parents fully complete the IY-SAPTP (2002) will show an increase in pro-social behavior (e.g., DECA TPF, Global Change Form, and Goal Attainment Scaling) and a decrease in disruptive behaviors (e.g., DECA Behavior Concerns scale [pre and post test], BASC Monitor subscales, and Global Change Form).

Rationale: Children whose parents' discipline approaches are inconsistent and erratic and who are physically abusive, highly critical, or lacking in warmth are at high risk for conduct disorder, as are children whose parents are disengaged from their children's school experiences and provide little instruction for pro-social behavior (Patterson, Capaldi & Bank, 1991). Moreover, the risk of a child developing conduct disorders appears to increase as these risk factors accumulate. The Incredible Years Parent Training Program (2002) is intended to alter the coercive relationship and halt the progression of externalized behavior problems by helping to establish a positive relationship between parent and child. In theory, by breaking the coercive relationship cycle and teaching pro-social skills, children should exhibit fewer externalizing behavior problems. This has been demonstrated in a number of studies by Webster-Stratton, as well as by those conducting independent replications.

Analysis: The change in pre and post test scores on the Devereux Early Childhood Assessment, Total Protective Factors and Behavior Concerns subscales, will be analyzed using a paired sample t-test.
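As a minimal illustration of this planned analysis, the sketch below shows how such a paired sample t-test could be computed; the subscale arrays and variable names are hypothetical placeholders rather than the study's actual data.

```python
# Minimal sketch of a paired sample t-test for one Parent Practices Interview
# subscale; the score arrays are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

pre_appropriate_discipline = np.array([5.1, 5.4, 4.9, 5.6, 5.0])   # one value per parent
post_appropriate_discipline = np.array([5.6, 5.5, 5.3, 5.9, 5.4])  # one value per parent

# Tests whether the mean pre-to-post difference is reliably different from zero
t_stat, p_value = stats.ttest_rel(post_appropriate_discipline, pre_appropriate_discipline)
print(f"t({len(pre_appropriate_discipline) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```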
To ensure that individual variability is not obscured by the group analysis, Standardized Mean Difference (SMD) effect size calculations for each individual child will also be calculated to determine the degree of behavioral change based on the data collected from the progress monitoring tools: the Behavior Assessment System for Children-Monitor (BASC-Monitor; Attention, Hyperactivity, Internalizing Problems, and Adaptive Skills subscales), the Global Change Form, and the Goal Attainment Scale. The SMD approach (Busk-Serlin model 2) was selected because the Busk and Serlin models were less affected by autocorrelation than the regression-based methods (Manolov & Solanas, 2008). PND and the Busk-Serlin Models 1 and 2 were also better able to differentiate between effective and non-effective interventions than the regression techniques, although they were affected by general trends in the data not related to the treatment's introduction (Ross, 2012). Additionally, Cohen's suggestion that effect sizes (d) of 0.20 are small, 0.50 are medium, and 0.80 are large enables comparisons to known benchmarks.

3. a) To what degree are the parent management techniques taught in the IY-SAPTP carried out with integrity?

Hypothesis: Parents will demonstrate a high level of integrity when following the structured manualized approach of the IY-SAPTP.

Rationale: Treatment integrity is the degree to which an intervention is implemented in the manner in which it was intended (Gresham, 1998). In this case, the level of integrity with which parents carried out the behavior management techniques taught in the parent training was measured. Within this study, treatment integrity refers to the degree to which parents carry out the program as intended. If a treatment appears effective but integrity was not monitored, one cannot be certain that the treatment, rather than some extraneous event, caused changes in the dependent variable. Within this study, it is believed that the manualized nature of the evidence-based program will facilitate the parents' ability to carry out the program with a high level of treatment integrity.

Analysis: Treatment integrity was calculated based on the percentage of workbook activities completed, the percentage of videos watched, and the percentage of reported involvement in the target activities. Consistent with Rhymer and colleagues (2002), integrity levels of 50 to 75 percent will indicate a high level of integrity, as this is the level found to be sufficient for producing rather large changes in behavior.

3. b) Will higher levels of treatment integrity influence the degree of behavior change?

Hypothesis: Higher levels of treatment integrity will more consistently result in behavior change as assessed by the BASC-Monitor (Attention, Hyperactivity, Internalizing Problems, and Adaptive Skills subscales, to be analyzed via effect size calculations).

Rationale: The exact relationship between treatment integrity and effectiveness is not well known. For instance, Gresham, Gansle, and Noell (1993) report a median correlation of 0.54 between treatment integrity and treatment outcome. Similarly, Rhymer and colleagues (2002) suggest integrity levels of 50 to 75 percent were sufficient for producing rather large changes in behavior. Results from these studies have been consistent in confirming the necessity of having high but not total integrity for effecting behavior change.
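To make this analytic chain concrete, the following sketch shows how an individual standardized mean difference could be computed and then related to treatment integrity. All scores, integrity percentages, and helper names are hypothetical illustrations; the denominator follows the baseline-standard-deviation variant (with an overall-SD fallback) described later in the Data Analysis and Results sections, which is one of several SMD formulations.

```python
# Sketch of an individual SMD effect size (in the spirit of Busk & Serlin, 1992)
# and its correlation with treatment integrity; all values are hypothetical.
import numpy as np
from scipy import stats

def smd_effect_size(baseline, treatment):
    """(mean treatment score - mean baseline score) / baseline SD,
    falling back to the overall SD if the baseline shows no variability."""
    baseline, treatment = np.asarray(baseline, float), np.asarray(treatment, float)
    sd = baseline.std(ddof=1)
    if sd == 0:
        sd = np.concatenate([baseline, treatment]).std(ddof=1)
    return (treatment.mean() - baseline.mean()) / sd

def cohen_label(d):
    """Benchmark labels following Cohen (1992): 0.2 small, 0.5 medium, 0.8 large."""
    d = abs(d)
    return "large" if d >= 0.8 else "medium" if d >= 0.5 else "small" if d >= 0.2 else "negligible"

# One hypothetical child: three baseline ratings, five intervention-phase ratings
d = smd_effect_size([62, 58, 65], [58, 55, 54, 52, 50])
print(f"SMD = {d:.2f} ({cohen_label(d)})")

# Hypothetical per-participant effect sizes and treatment integrity percentages
effect_sizes = np.array([0.9, 0.2, 1.4, 0.6, 0.1])
integrity_pct = np.array([80, 45, 90, 70, 30])
r, p = stats.pearsonr(integrity_pct, effect_sizes)
print(f"r = {r:.2f}, p = {p:.3f}")
```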
Analysis: The correlation between the effect size calculation and the treatment integrity percentage for each participant was examined using Pearson's correlation. Consistent with Gresham, Gansle, and Noell (1993), correlations approaching 0.54 or higher would indicate that greater integrity is associated with better behavioral outcomes. Additionally, based on the Rhymer and colleagues (2002) criteria, the behavioral outcomes on the BASC-Monitor for participants who report greater than 50% treatment integrity will be compared to those who report less than 50% treatment integrity.

4. Do parents find the Incredible Years Training Program an acceptable way to learn to address the needs of their children with behavioral problems?

Hypothesis: It is hypothesized that parents will find the Incredible Years Parent Training Program an acceptable way to address their child's behavioral difficulties, as indicated by their responses on the treatment acceptability form completed at the end of the intervention.

Rationale: Acceptability is a critical component in the transportability of efficacious and effective interventions from controlled to "real world" settings (Chorpita, 2003). Research on treatment acceptability has found that treatments that use positive approaches rather than negative ones are reported as more acceptable. Those that take less time to implement are also rated as more acceptable (Cowan & Sheridan, 2003). Previous research on the acceptability of the self-administered format of the Incredible Years Parent Training Program has found that parents report it as an acceptable way to address behavioral concerns (Stewart & Carlson, 2010; Ogg & Carlson, 2009; Walcott et al., 2009).

Analysis: The Consumer Satisfaction Survey (Kelley, Heffer, Gresham, & Elliott, 1989) and the Parent Video Evaluation forms (Webster-Stratton, 2001) will be used to examine how acceptable each parent found the program for addressing their child's behavioral concerns. The overall mean on this 21-item scale will be used to determine the overall level of satisfaction. A mid-point score of 3.5 on each item indicates moderate acceptability; therefore, an overall score of 73.5 across the 21 items represents an adequate rating of acceptability. Additionally, the following subscale scores have been used in recent research (Kratochwill et al., 2003; Stewart & Carlson, 2010) to indicate high acceptability for each of the following subscales: Acceptability, 55; Effectiveness, 39; Amount of Time, 9.

Transportability

5. Will implementation integrity influence participant behavior change?

Hypothesis: Michigan State consultants will have a higher level of implementation integrity than Head Start consultants.

Rationale: Another aspect of integrity is implementation integrity. Implementation integrity refers to the extent to which the treatment is implemented as a function of the number of times the opportunity exists to apply the treatment; in this study, it reflects the degree to which the consultants follow the provided protocol for meeting with and calling families. It is important to monitor both aspects of integrity because, if treatments fail and treatment integrity and implementation integrity have not both been monitored, one does not know if the treatment lacked sufficient strength or was simply not implemented accurately and/or consistently (Yeaton & Sechrest, 1981). The IY-SAPTP (2002) has shown strong efficacy in past research when investigated within a Head Start population. It has clearly met the needs of a diverse racial, cultural, and economic group of parents.
With this knowledge, this program was adopted for use within our local Head Start agency. Similar to Fixsen and colleagues' (2005) findings on the implementation of evidence-based interventions, significant barriers to the implementation of this approach within the Head Start agency emerged: despite a year of consultation and support in implementation during the year prior to the study, none of the six mental health consultants were able to implement all aspects of the intervention. It is believed that the graduate student consultants represent a condition closer to the "laboratory" and therefore will be better able to implement the program as intended. Southam-Gerow et al. (2012) have identified three levels of barriers to the dissemination of empirically supported interventions: client-level, therapist-level, and system-level barriers. The self-administered nature of this program alleviates many of the client- and therapist-level barriers. However, it is hypothesized that system issues could impact dissemination (e.g., high caseloads, intervention methods incongruent with typical service delivery) and that Head Start consultants will demonstrate those challenges within their implementation integrity measures.

Analysis: The phone logs and implementation integrity checklists will be analyzed to determine if the program was delivered as intended. The mean completion rate on the logs and the checklists will be combined to make one mean implementation integrity rate for each of the two groups. An independent samples t-test will be used to test for group differences.

6. What facilitated or hindered the mental health consultants' ability to work with the parents to implement the program within the larger roles and functions of their position within Head Start?

Rationale: Testing effectiveness is vital for policy; however, investigators have also pointed to the clinical and theoretical importance of examining intervention mechanisms (Rutter, 2005). Researchers have recognized that making the right information available is only one piece of the puzzle. The process of transferring knowledge about evidence-based practices and implementing these in day-to-day work is highly complex. The study of this process is every bit as important as the study of the evidence-based practices themselves (Fixsen et al., 2005). To address both effectiveness and system-level implementation in a community-based setting, an intervention with a strong evidence base for reducing children's conduct problems, the Webster-Stratton Incredible Years program, was delivered in multiple neighborhood sites. Given the shift to effectiveness research, not only should quantitative measures be used to assess the integrity, acceptability, and effectiveness of the intervention, but mechanisms related to system-level implementation should also be explored to determine predictors of child problem behavior outcomes (Fixsen et al., 2005). Southam-Gerow et al. (2012) have developed the Mental Health Service Ecological model to identify barriers to the dissemination of research-based interventions. Their model indicates that there can be 1) child/family factors, 2) therapist factors, or 3) system-level factors. Due to the self-administered nature of this evaluation, many of the child/family factors (e.g., access to a facility, time to attend, insurance) and therapist factors (e.g., level of training and treatment preference/problem match) are non-issues.
However, a qualitative study by Stern, Alaggia, Watson, and Morton (2008) identified some key issues with implementing the Incredible Years with adherence in a community setting. They identified barriers of time constraints, confusion with materials (e.g., time out), and lack of control (i.e., life events), among other barriers specific to the group format. Interviews of all consultants were conducted to assess the level of acceptability and perceived utility of this training program by the Head Start Administrators and Mental Health Consultants. This interview explored the following issues: 1) how they felt the treatment went, 2) what barriers were encountered to successful implementation, and 3) what could be done differently to be sure that this evidence-based approach can be carried out as intended.

Analysis: Interviews were transcribed, and multiple coders were used to maximize consistency and breadth of themes, to increase researcher neutrality, and to reduce bias (Miles & Huberman, 1994). These coders carried out independent analyses and then the group discussed themes until satisfactory agreement was reached. A summary of the prevalence of codes that discusses similarities and differences in related codes across the Head Start and Michigan State groups will be provided.

CHAPTER 3

METHOD

Participants

Fifty families met criteria and expressed interest in participating in the research project. Thirty-seven of those families (74%) completed the program and were included in the analysis. To investigate issues of transportability, families were randomly assigned to one of two groups: 1) the intervention provided by mental health consultants (N = 14) and 2) the intervention provided by graduate students trained in the implementation of the program (N = 23). All participants were the biological mother except for one biological grandmother who was the legal guardian. Twenty of the mothers reported being single, three were living with their child's father, ten were married, and four were separated from their child's father. Eleven of the parents (30%) reported that they had no more than an 11th grade education. Fifteen parents (41%) reported earning their high school diploma or GED. Nine parents (24%) reported completing some college, while two parents (5%) completed their college education. The majority of the parents identified themselves as Caucasian (73%), while 6 parents (16%) identified themselves as African American, and two parents each identified as Latina (5%) and Asian (5%). The children of the participants ranged in age from three years, 11 months old to five years, eight months old. The sample consisted of 15 female children (41%) and 22 male children (59%). A summary of demographic information can be found in Table 4. All parents who indicated interest in participation and viewed all video segments of the program received a $100 gift card to compensate them for the time it took to participate in this study.

Attrition

Overall, a high retention rate was achieved, with 74% of parents completing the program from beginning to end. Three families stopped participating prior to random assignment (after the first baseline meeting). Eight other families dropped out before the first set of videos was delivered (before the end of baseline). Two of the families who did not complete the program (one from each group) moved away while they were in the middle of the intervention.
Six families with consultants from Head Start failed to complete the program, while four families with Michigan State consultants did not complete the program. The families who dropped out prior to completing the study did not differ significantly from the remaining 37 families with respect to child or family demographic characteristics (see Table 4 for demographic information).

Measures

A variety of tools were used to collect effectiveness, integrity, and acceptability data. In addition to standardized pre-posttest measures of parent-reported behavior changes (child and parent) that serve as the primary sources of effectiveness data, three measures were administered repeatedly over time to track child behavior changes in response to the intervention. These tools were included to ensure that mean differences did not obscure behavioral change. Additionally, treatment integrity checklists and parent acceptability measures were used to gauge adherence to the program curriculum. See Table 2 for a list of the measures and when they were collected.

Pre-posttest Measures

Devereux Early Childhood Assessment-Preschool (DECA, LeBuffe & Naglieri, 1999): The DECA is a nationally-normed behavior rating scale that evaluates within-child protective and risk factors in preschool children, ages 2-5. This instrument may be completed by parental guardians and/or early childhood professionals (preschool teachers and childcare providers), yet all raters complete the same 37 items. The DECA evaluates the frequency of 27 positive behaviors (i.e., strengths) exhibited by preschoolers, broken into three subscales (i.e., Initiative, Self-Control, Attachment), and also contains a 10-item problem behavior screen. On the protective factor scales, children can receive ratings of "strength" (t > 60), "typical" (40 < t < 60), or "concern" (t < 40); on the Behavior Concerns screen, scores above 60 indicate concern. Internal reliability alpha coefficients for the scales based on parent report are as follows: Initiative, .84; Self-Control, .86; Attachment, .76; Total Protective Factors, .91; and Behavior Concerns, .71 (LeBuffe & Naglieri, 1999). The standardization sample of the DECA Preschool consisted of 2,000 preschool children with a diversity profile consistent with preschool children in the United States (LeBuffe & Naglieri, 1999). Lien and Carlson (2009) examined internal consistency, standard error of measurement, and factor structure in a sample of 1,208 children enrolled in Head Start programs in Michigan and replicated the original factor structure almost precisely. The internal consistency and standard error of measurement coefficients in this study also replicated the original findings. These findings indicate that the DECA remains a reliable assessment and support its validity for use with children from low-income backgrounds (Lien & Carlson, 2009).

Parent Practices Interview (Webster-Stratton, 1998): This form was adapted from the Oregon Social Learning Center's (OSLC) discipline questionnaire and revised for young children. It contains seven scales with the following standardized alpha coefficients: Harsh Discipline, .75; Harsh for Age, .78; Inconsistent Discipline, .62; Appropriate Discipline, .82; Positive Parenting, .72; Clear Expectations, .62; and Monitoring, .64. The questionnaire takes approximately 15-25 minutes to complete (Webster-Stratton, 1998).
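As a small illustration of how these cutoffs translate into screening decisions, the sketch below categorizes DECA T-scores using the ranges described above and flags the Behavior Concerns criterion (T-scores above 60) used in this study to identify behaviorally at-risk children; the function names are hypothetical and for illustration only.

```python
# Illustrative categorization of DECA T-scores; function names are hypothetical.
def protective_factor_category(t_score: float) -> str:
    """Protective factor scales (Initiative, Self-Control, Attachment, TPF)."""
    if t_score > 60:
        return "strength"
    if t_score < 40:
        return "concern"
    return "typical"

def behavior_concerns_at_risk(t_score: float) -> bool:
    """Behavior Concerns screen: T-scores above 60 fall in the 'concern' range."""
    return t_score > 60

print(protective_factor_category(47))  # typical
print(behavior_concerns_at_risk(72))   # True (would be flagged for recruitment)
```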
Progress Monitoring Measures

Behavior Assessment System for Children-Monitor for ADHD-Parent (Kamphaus & Reynolds, 1998): The BASC-Monitor is a norm-referenced rating scale used to measure attention problems, hyperactivity, internalizing problems, and adaptive skills (Kamphaus & Reynolds, 1998). The form is ideal for repeated use during treatment evaluation. The rating form uses a simple four-point response scale for each behavior, ranging from "Never" to "Almost Always." This instrument takes approximately 10-20 minutes to complete. The BASC-Monitor was found to be highly valid and reliable, with test-retest reliability coefficients ranging from 0.60 to 0.90. The internal consistency coefficients for the four subscales are as follows: Attention Problems, .81; Hyperactivity, .71; Internalizing Problems, .74; and Adaptive Skills, .79 (Kamphaus & Reynolds, 1998).

Global Change Form (Jaeschke, Singer, & Guyatt, 1989): This is a scale used to determine changes parents perceive in their child's behavior over the previous week. This seven-point scale assesses parent perceptions of change in the following domains: attention, hyperactivity/impulsivity, social, and academic. Ratings of 1-3 indicate change in the positive direction (e.g., these behaviors are somewhat easier for the child), a rating of 4 indicates "no change," and ratings of 5-7 indicate change in the negative direction (e.g., these behaviors are somewhat more difficult for the child). The GCF was designed by the National Institute of Mental Health as a measure to evaluate drug treatment effects. This tool has frequently been used in evaluation studies of the Incredible Years program.

Goal Attainment Scales (GAS) (Kiresuk, Smith & Cardillo, 1994): The GAS is a 5-point rating scale used to determine progress made on a target behavior and is helpful for progress monitoring, ranging from the worst possible behavior change (–2) to the best possible behavior change (+2). A score of zero indicates no change in the target behavior. Reliability studies revealed high interrater reliability for this measure (product-moment correlations of r = .87 to r = .99) and lower reliability estimates when scores were compared across different occasions, which is expected when measuring change. Kiresuk, Smith, and Cardillo (1994) recommend that the GAS, as a measure of change induced by treatment, not be used as an outcome measure alone.

Treatment Integrity Measures

Treatment Integrity Checklists: With each video segment, the Incredible Years program has a list of target behaviors that the parents are assigned to complete while watching that video series. When they have completed a video segment, consultants ask the parents if they have completed each target behavior. Checklists were specific to each of the program topic areas and obtained information regarding: (a) how much of the videos the parent watched, (b) how much of the workbooks were completed by the parents, and (c) how many of the target behaviors the parents had practiced. Parents self-reported their participation in the video completion and engagement in target behaviors during the interviews conducted by investigators either by phone or during home visits. The intervention manuals that accompanied the program were collected, and the percentage of homework completion was calculated for each parent. There is currently no psychometric information reported on this measure.

Implementation Integrity Checklists: Each consultant was given a binder to collect all data for each family.
The binder contained outlines for what was to be accomplished during each visit, and also contained calling logs for the consultant that were created by this author. These call logs and checklists were assessed to determine if the consultants were implementing and delivering the program as intended. There is currently no psychometric information reported on this measure. 41 Treatment Acceptability Measures Parent Video Evaluation (Webster-Stratton, 2001): On the parent video evaluation, parents are asked four questions related to each set of videos. The questions are rated on a five point scale with a rating of “1” meaning that the participant did not find the video helpful to “5” which meant that the participant found the video series very helpful. The Parent Video Evaluations are part of the Incredible Years system of materials. There is currently no psychometric information reported on this measure. Treatment Evaluation Questionnaire- Parent Form (TEQ-P) (Kelley, Heffer, Gresham, & Elliott, 1989): The development of this questionnaire is based on the Treatment Evaluation Inventory (Kazdin, 1980) which has an internal consistency reliability of .97. The TEQ-P is used to measure parents‟ perceptions of acceptability, appropriateness, and effectiveness of an intervention. It consists of 21 items and has three scales: the Acceptability Scale, the Effectiveness scale, and the Amount of Time scale. Responses to the questions were made on a 1-6 scale. A response of 1 indicates “strong disagreement” while a score of 6 represents “strong agreement”. A midpoint score of 3.5 on each item would indicate moderate acceptability. Therefore, an overall score at or above 73.7 on the 21 items would represent an adequate rating of acceptability. While there are no additional guidelines for interpretation of the TEQ-P, previous researchers (Kratochwill et al., 2003) interpreted the following subscale scores as reflective of high treatment acceptability: Acceptability, 55; Effectiveness, 36; and Amount of Time, 9. Procedure Participants in this study were recruited from a local Head Start agency. This agency has partnered with the university to bring evidence based treatments into their practice. Prior to the 42 initiation of this study, the agency attempted to implement the self-administered Incredible Years Parent Training Program (2002). Mental Health consultants attempted to deliver this program to 21 interested parents. However, despite high reported levels of treatment acceptability by the mental health consultants, none of the families completed the program. These results were unlike others reported by research on the self-administered program. Because of the promising results of this program in other studies by the university, this agency chose to continue the use of this program for the purposes of the current study. The Devereux Early Childhood Assessment (DECA) was administered to all families of children in this Head Start agency. For the purposes of this study, it was used to identify children who were at risk for the development of behavior problems. Based on these results, parents who rated their children‟s problem behaviors in the “concern” range (t > 60) were notified of the opportunity to participate in the parent training program via recruitment mailings and letters administered during parent/teacher conferences. Upon recruitment, families were randomly assigned to either have the program delivered by university or Head Start consultants. 
Because the program is manualized and the lessons are actually delivered via a videotape, the assumption is that there will not be a difference in parent and child outcomes. However, based on the Healthcare Ecological Model (Southam-Gerow et al., 2012) that recognizes that dissemination difficulties can be related to system and therapist barriers, it is important to examine these possibilities (i.e. dissemination barriers at the system level could impact effectiveness of the program). Regardless of group, researchers met with participants in their home or other community setting to complete baseline measures. During the first meeting parents completed a demographic information form, the Parenting Practices Interview, the DECA, the BASC-PMR, the GCF, and the GAS. Participants completed the 43 BASC-PMR, GCF and GAS one week later, and again at the end of baseline. The total baseline included three data collection points. The first set of measures took approximately two hours to complete, while the second meeting, as well as all future home visits, lasted approximately thirty minutes. After gathering three weeks of baseline information, participants were supplied with segments of the Incredible Years parent-training program on a bi-weekly basis. The four segments of the program were: How to Play with a Child and Helping Children Learn, Praise and Rewards, Effective Limit Setting and Dealing with Noncompliance, and Handling Misbehavior. Consultants were to be in contact with the parents, either via phone or in-person, every week. Phone calls were intended to be made to participant‟s homes to gather progress-monitoring information on weeks that videotapes and manuals were not distributed. In total, there were six visits to the home and five phone calls. See Table 2 for a summary of this process. The perceived effectiveness of the intervention was assessed through a pretest-posttest design. Some measures were administered pre- and post treatment (LIFT Parenting Practices Interview Form and DECA); other measures were repeatedly administered during each of the home visits (BASC-PMR, Parent Video Evaluation, Treatment Integrity Checklists); and a final group of measures was administered weekly (Global Change Form and Goal Attainment Scaling). Research Design This study involves both pretest-posttest design as well as single-case outcome research design strategies. The primary research design used to answer the first four research hypotheses related to the effectiveness of the IY-SAPT intervention was a pretest-posttest repeated measures design. Numerous studies have demonstrated the efficacy of IY-SAPTP (e.g., Webster-Stratton, 44 1989); therefore randomization to group is not perceived to be a necessary or even useful component for conducting effectiveness research. However, implementation integrity has not been evaluated in previous studies, paired with the knowledge that this program was not successfully transported to this Head Start agency in the past, it was determined that random assignment to group based on consultant affiliation was necessary to control for implementation differences and assess the transportability of the intervention (research questions 5 & 6). The use of randomized pre-posttest studies has obscured the individual change that occurs throughout an intervention (Horner et al., 2005). This is important because the outcomes from self-administered interventions have been found to have great individual variability (Mains & Scogin, 2003). 
Therefore, because it is essential to effectiveness research to pick up on the nuances of individual change, aspects of single-case design were also implemented. A replicated AB design was employed for this purpose, as it effectively measures changes in the dependent variables within different phases of the study (Gresham, 1998). Because of the single-case design, the baseline phase is of critical importance for defining both the child's and parents' current level of functioning so that an accurate prediction can be made regarding the level of behavior if the intervention had not been implemented. With this in mind, baseline data were collected three times over a two-week period. Furthermore, by employing an AB design in which the same instruments are used in the baseline and treatment phases, investigators are able to obtain estimates of variance between the two conditions. This allows more precise estimation of treatment effects by examining effect sizes on the behavior rating scales. There are a number of ways to calculate effect sizes. For the purposes of this study, an SMD approach was used, in which the mean standard score at the end of treatment minus the mean standard score at baseline is divided by the pooled standard deviation, to determine behavioral change (Busk & Serlin, 1992). The open-ended responses to the consultant interviews were analyzed using qualitative methods. A phenomenological approach was used in order to identify themes that derive from the consultants' responses (Creswell, 1998).

Data Analysis

The analysis varies depending on whether a measure was repeatedly administered or administered only pre- and post-treatment.

Pre-post Measures

A paired samples t-test was used to assess pre and posttest changes on the LIFT Parent Practices Interview for the 37 participants. The paired samples t-test is appropriate because data were collected for the group on two different occasions (pre and posttest). This test compares the means of the two measurements for each case and tests whether the average difference is significantly different from zero. A paired sample t-test was also used to assess child behavior change on the DECA Total Protective Factors and Behavior Concerns scales.

Repeated Measures

For the repeatedly administered outcome measures (Goal Attainment Scaling, GCF, BASC-PMR), effect sizes were calculated by finding the difference between the mean of the treatment standard scores and the mean of the baseline standard scores and dividing this difference by the standard deviation of the baseline. The SMD approach was selected because it was found to be less affected by autocorrelation and better able to discriminate between effective and non-effective interventions compared to regression-based and PND models (Manolov & Solanas, 2008). Effect size calculations were used to assess behavioral change on the progress monitoring tools (attention and hyperactivity scales of the BASC Monitor, the GCF, and the GAS) for the 37 children.

The Incredible Years Parent Training Program, in both the group and self-administered format, includes many features that would suggest good transportability into various settings while maintaining the fidelity of treatment (Webster-Stratton & Herman, 2010). The manuals are extensive and include scripts. Further, the videotape content increases the standardization and provides modeling (Kazdin, 2005).
Based on these features of the self-administered program, it is believed that there will not be a difference in effectiveness based on the random assignment of group for families that complete the entire program. While not a hypothesis within this study, it was important to show that the two groups benefited equally, per our assumption. We tested this assumption by taking the effect size calculations for the group to create a mean effect size for each group to test for equivalency in outcomes across the MSU consultant-lead group of parents when compared to the Head Start consultant-lead group of parents. The decision to include random assignment of groups was made in an effort to gather the data necessary to demonstrate the transportability of this program from a research setting to a community-based setting (Chorpita‟ s Type II). In sum, it was expected that those outside of the system may be able to engage in stronger implementation integrity when compared to those professionals who were carrying out the intervention within the larger scope of their roles and responsibilities in that system. This hypothesis was tested in addressing the fifth research question. 47 CHAPTER 4 RESULTS The primary purpose of this study was to engage collaboratively with a community-based Head Start agency to investigate the effectiveness, treatment integrity and acceptability of a selfadministered evidenced-based parent training program on children who exhibit behavior problems. In addition to the evaluation of the IY-SAPTP, there is also a systemic focus added to this study in an effort to closely examine the transportability of the program and potential barriers to successful implementation as intended by staff. Change in Parent Behavior It was hypothesized that parents would show an increase in the four positive parenting practices addressed in the Incredible Years Program: Use of appropriate discipline, use of positive parenting, monitoring your child and setting clear expectations and a decrease in the use of negative parenting practices: use of harsh discipline and inconsistent discipline. Paired sample t-tests were conducted to determine if there were significant changes in the pre and post summary scale scores for the 37 participants. A paired sample t-test was conducted to evaluate changes in parents‟ ratings on the Parent Practices Interview as it relates to their use of appropriate discipline. There was a statistically significant increase from pre (M = 5.31, SD = 0.57) to post-intervention (M = 5.63, SD = 0.64), t (36) = 3.45, p < .005 (two- tailed). Parents showing improvements in this area reported being more likely to give their child a brief time-out, take away privileges or discuss the problem rather than showing anger or saying something they did not mean when disciplining their child. 48 A paired sample t-test also indicated a statistically significant improvement from pre (M = 5.20, SD = 0.72) to post-intervention (M = 5.58, SD = 0.63), t (36) = 7.35, p < .005 (twotailed) on the use of positive parenting subscale of the Parent Practices Interview. This indicates that parents reported being more likely to complement their child or praise their child for doing something well after completing the program. Parents also reported a statistically significant increase from pre (M = 5.90, SD = 0.34) to post-intervention (M = 6.13, SD = 0.43), t (36) = 3.80, p < .005 (two-tailed) in their monitoring of their child based on a paired samples t-test. 
This indicates that parents reported being more likely to know their child's friends and activities throughout the day. Finally, in terms of increased positive parenting practices, parents reported a statistically significant increase from pre- (M = 5.77, SD = 0.92) to post-intervention (M = 6.19, SD = 0.61), t (36) = 4.73, p < .005 (two-tailed), in their setting of clear expectations. For example, these parents reported that they were more likely to set clear rules and expectations for chores and bedtime routines. In addition to statistically significant increases in positive parenting practices, parents reported significant decreases from pre to post intervention on both scales of negative parenting practices. In terms of the use of harsh discipline, parents reported a significant decrease from pre- (M = 3.17, SD = 0.53) to post-intervention (M = 2.75, SD = 0.60), t (36) = 7.13, p < .005. This indicates that parents reported being less likely to spank their child or use physical force. Parents also reported a statistically significant decrease in their use of inconsistent discipline practices from pre- (M = 3.33, SD = 0.69) to post-intervention (M = 2.80, SD = 0.55), t (36) = 6.44, p < .005. See Table 6 for a summary of results.

Change in Child Behavior

Research consistently indicates that children given clear, firm, consistent, and appropriate consequences for misbehavior exhibit fewer externalized behaviors (Arnold, O'Leary, Wolf & Acker, 1993). The strong influence of parental behaviors on externalized child behavior has led to the focus on altering parenting practices to ameliorate externalized child behavior problems. In fact, parent-focused intervention is the most extensively studied and supported form of treatment for child behavior problems (Weisz, Hawley & Doss, 2004). Further, a review of the literature regarding youth conduct problems concluded that parent and family skills training should be a mandatory intervention, given the clear role parent factors play in the escalation of conduct problems and in the effectiveness of parenting interventions (Bloomquist & Schnell, 2002). Thus, it was hypothesized that children of parents who complete the self-administered format of the Incredible Years Parent Training Program will show an increase in pro-social behavior and a decrease in disruptive behavior. Having established that the IY-SAPTP led to parenting practice changes in the hypothesized direction, and consistent with prior research, significant changes in child behavior were also predicted.

Change in Pro-Social Skills

Pre- and post-intervention T-scores on the DECA Total Protective Factors scale were analyzed. Protective factors are characteristics that are thought to buffer the negative effects of stress and result in more positive behavioral outcomes in at-risk children (Masten & Garmezy, 1985). Children whose behavior reflects these protective factors tend to have positive outcomes despite stress and are often characterized as "resilient." Children lacking, or with underdeveloped, protective factors are more likely to develop emotional and behavioral problems under similar risk conditions and are described as "vulnerable." The DECA evaluates the frequency of 27 positive behaviors (i.e., strengths) exhibited by preschoolers. Typical items include "have confidence in his/her abilities," "act good-natured or easygoing," and "ask adults to play with or read to her/him." The mean pre-intervention score on the Total Protective Factors (TPF) scale was 40.73 (SD = 9.28).
A paired-samples t-test was conducted to evaluate the impact of the intervention on parents' ratings on the DECA TPF scale. There was a statistically significant increase in TPF scores from pre-intervention to post-intervention (M = 47.65, SD = 8.82), t (36) = 6.53, p < .0005 (two-tailed). Although not part of the hypothesis, the assumption that there would not be group differences was confirmed, as there were no significant differences in the pre- or post-test DECA TPF scales based on group. See Table 5 for a complete review of pre- and post-intervention scores on the Total Protective Factors scale. In addition to group-level data, pro-social skills were also assessed at the individual level. On the Global Change Form, parents were also asked if they perceived social and/or academic changes in their children after the treatment. Effect sizes were calculated for each individual by finding the difference between the mean of the treatment standard scores and the mean of the baseline standard scores and dividing this difference by the standard deviation of the baseline (or the overall standard deviation, if there was no variability during the baseline). Based on Cohen's (1992) benchmarks (an effect size of 0.2 indicates a small effect, 0.5 a medium effect, and 0.8 a large effect), the mean effect size for the group was large (M = 1.44, SD = 0.50). Thirty-five parents (95%) reported large improvements (effect size calculations ranged from 0.89 to 1.97), one (3%) reported a moderate improvement (0.45), and one parent (3%) reported no change. Changes in pro-social skills were also addressed through Goal Attainment Scaling. Effect size calculations reveal an overall mean effect size of 0.92 (SD = 0.56). Twenty-nine parents (78%) reported large effect size changes from pre to post intervention (effect sizes ranged from 0.81 to 2.82), while one reported a moderate improvement (effect size = 0.71), one (3%) reported a small improvement (effect size = 0.40), and six parents (16%) reported no change in their child's pro-social skills. See Table 7 for a summary of changes in pro-social behaviors.

Change in Disruptive Behavior

In addition to examining the increase in pro-social behaviors, change in disruptive behaviors was also analyzed. The Behavior Concerns scale on the DECA was analyzed pre and post intervention. A paired-samples t-test was conducted to evaluate the impact of the intervention on parents' scores on the Behavior Concerns scale. There was a statistically significant decrease in problem behavior scores from pre-intervention (M = 71.65, SD = 1.67) to post-intervention (M = 59.14, SD = 6.79), t (36) = 11.32, p < 0.0005 (two-tailed). Another dependent variable focused on decreasing disruptive behaviors at an individual level. Effect sizes on the BASC Monitor scales were calculated for the thirty-seven participants to determine if using the Incredible Years Parent Training Program resulted in behavioral change. An examination of the effect sizes in relation to attention reveals that the mean effect size for participants on the attention scale was 0.56 (SD = 0.53). The range of effect size calculations for those who exhibited some level of improvement on the attention scale of the BASC Monitor from baseline to the end of treatment was between 0.23 and 2.03. There was significant improvement in attention for 13 of the thirty-seven participants (35%). While not considered large, four participants (11%) showed moderately large improvements, and eight (22%) showed small improvements.
Twelve participants (32%) showed no significant change in attention. A complete analysis of effect size by participant can be found in Table 10. Consistent with the study assumption, an independent sample t-test was conducted to compare the mean effect sizes for the MSU lead group compared to the Head Start group and there was no 52 significant difference in the mean effect size for the MSU group (M = 0.54, SD = 0.50) and the Head Start group (M= 0.60, SD = 0.59); t (35) = 0.32, p = 0.75 (two tailed). The mean effect size for participants as it related to the hyperactive/impulsive behavior rating was 0.64 (SD = 0.58). The range of effect size calculations for those who showed improvements on the hyperactive/impulsive scale of the BASC monitor was between 0.25 and 2.03. The effect size calculations related to hyperactivity/impulsivity revealed significant improvements in 13 (35%) of the participants. Nine participants (24%) were rated as showing moderately large improvements, while five (14%) showed small improvements. One participant (2%) showed a moderately large decrease in their behavior as it relates to hyperactivity while another showed a small decrease in their behavior. Again, there was no significant difference in the mean effect size for the MSU group (M = 0.64, SD = 0.54) and the Head Start group (M= 0.64, SD = 0.65); t (35) = 0.35, p = 0.97 (two tailed). Another measure used to examine change in child behavior was the Global Change Form (GCF). The GCF was used to measure parent perceptions of change in their child‟s behavior since the beginning of the treatment phase. Two domains assessed that specifically addressed behavior were change in “attention” and “hyperactivity/impulsivity.” Effect size calculations indicate an overall large effect size change on the attention scale for the group (M = 1.39, SD = 0.60). Thirty-three parents (89%) reported a large effect size change on the attention scale (effect size range from 0.89 to 1.95). One parent reported a moderate change (effect size = 0.45) and three parents (8%) reported no change. Effect size calculations also revealed an overall large positive change on the hyperactivity/impulsivity scale of the GCF (M = 1.31, SD = 0.58), meaning that parents reported decreases in hyperactive/impulsive symptomology. Thirty parents (81%) reported large changes 53 in their child‟s hyperactivity/impulsivity(effect sizes ranged from 0.89 to 1.95). Four parents (11%) reported moderate improvements (effect sizes ranged from 0.53 to 0.73) and one parent reported a small improvement (effect size = 0.45). Two parents (5%) reported no change in their child‟s hyperactivity/ impulsivity on the GCF. See Table 9 for a summary of effect size changes on the GCF. Changes in internalizing behaviors were also assessed. The mean effect size on the internalizing scale was 0.53 (SD=0.52). Eleven participants (30%) showed significant improvement in internalizing behaviors based on the effect size calculations. Nine (24%) showed moderately large improvements, while five (14%) showed small improvements. Two participants (5%) were rated as exhibiting decreased behaviors. There was no significant difference in the mean effect size for the MSU group (M = 0.54, SD = 0.54) and the Head Start group (M= 0.53, SD = 0.49); t (35) = 0.08, p = 0.93 (two tailed). In terms of adaptive skills, effect size calculations ranged from -0.21 to 2.17. The mean effect size was 0.57 (SD=0.53). 14 participants (38%) were rated as making large improvements. 
Seven participants (19%) showed moderately large improvements, and three (8%) were rated as making minor improvements in their adaptive skills. The MSU group had a mean effect size of 0.59 (SD = 0.57), while the Head Start group had a mean effect size of 0.55 (SD = 0.48). There was not a significant difference between these groups (t (35) = 0.20, p = 0.84).

Treatment Integrity

Given that the IY-SAPTP is manual-based, investigating how well the treatment program was carried out as intended is essential for interpreting study results. To measure treatment integrity, parents were asked about the percentage of videos that they viewed and about the number of target behaviors that they engaged in during each of the three series of tapes; finally, the completion of workbook activities was assessed by researchers after the end of each series. While all but one parent reported watching all of the videos, the time it took to complete the program varied. The videos were intended to be completed in eight weeks. However, when the consultants called to schedule their meeting to drop off the next series of tapes, some families reported that they had not completed the previous videos. It took the 37 families an average of 11.4 weeks to complete the program once the first set of tapes was delivered (it was intended to take 8 weeks). There was not a significant difference in the amount of time it took the Head Start group (M = 11.85, SD = 1.83) to complete the program compared to the MSU group (M = 11.17, SD = 1.90), t (35) = 1.07, p = 0.29. As part of the self-administered format, parents were asked to complete a number of activities in workbooks that coincide with the concepts being taught in the video vignettes. The workbooks were reviewed, and the percentage of completed workbook activities was recorded. On average, participants completed 67% (SD = 21.10) of workbook activities. Again, there was no significant difference between the MSU group (M = 69.70, SD = 19.32) and the Head Start group (M = 62.93, SD = 23.88), t (35) = 0.95, p = 0.35. Participants were also asked to report on their engagement in the target behaviors that coincide with the video series. Participants reported engaging in an average of 66% of the target behaviors. Participation ranged from engaging in 90% of the behaviors to as little as 25%. There was no significant difference between the groups in their report of target activity engagement. The MSU group reported engaging in an average of 69.22% of the activities (SD = 17.89), while the Head Start group reported engaging in 61.92% of the activities (SD = 21.86), t (35) = 1.11, p = 0.28. See Table 10 for a summary of mean treatment integrity scores.

Relationship Between Treatment Integrity and Behavior Change

It was hypothesized that higher levels of treatment integrity would result in greater behavioral change. To examine this relationship, the mean treatment integrity score was correlated with the effect size calculations for each of the behavioral measures on the BASC Monitor. The correlation between treatment integrity and the effect size on the attention scale was r = +0.72, n = 37, p < .01. The correlation between the level of treatment integrity and the effect size on the hyperactive/impulsive scale was also strong and positive (r = +0.60, n = 37, p < .01), thus indicating that higher levels of treatment integrity are associated with better outcomes in terms of behavior change on the BASC Monitor scales.
Rhymers, Evan-Hampton, McCurdy, and Watson (2002) suggest that a 50% integrity level is sufficient for behavioral change. With this in mind, two groups were created: group 1 comprised participants who reported less than 50% treatment integrity, and group 2 comprised participants who reported greater than 50% treatment integrity. When the mean effect sizes on the four BASC-Monitor rating scales were recalculated for participants (N = 29) with over 50% treatment integrity (based on workbook completion and engagement in target behaviors) and participants (N = 8) with under 50% treatment integrity, a clear difference in behavioral outcomes on the BASC Monitor emerged. The mean effect size calculations for the entire group were in the moderately large improvement range on each of the four subscales of the BASC Monitor (range from 0.54 to 0.64). However, when the effect sizes were recalculated for the group reporting less than 50% treatment integrity (N = 8), each subscale score fell in the no-change category (range of effect sizes from 0.05 to 0.15). A summary of mean effect sizes based on percentage of treatment integrity can be found in Table 11.

Treatment Acceptability

This study not only examined the effectiveness of the intervention on the child's behavior, but also evaluated the intervention based on its acceptability to participants. It is important to consider whether or not parents felt comfortable with the intervention prescribed. Producing results may not be enough; positive outcomes may be difficult to achieve if a participant does not find the intervention acceptable. Results likely need to occur in a context where the client understands why certain actions are being prescribed, accepts the level of effort required by the intervention, and views the results as satisfactory given the time, energy, and effort invested in achieving them.

On the parent video evaluation, parents were asked four questions related to each set of videos. The questions were rated on a five-point scale, where a rating of "1" indicated that the parent did not find the video helpful and a rating of "5" indicated that the participant found the video series very helpful. In relation to the first video set, How to Play with a Child and Helping Children Learn, parents on average rated the content of the video as "helpful" (M = 3.72, SD = 0.45). Parents also felt that the video examples were "helpful" (M = 3.76, SD = 0.49). In terms of changing their own behavior and their children's behavior, parents rated the techniques as "helpful" (M = 4.0, SD = 0.62 and M = 4.03, SD = 0.64, respectively). Similar to the first video series, parents found every aspect of the series on Praise and Rewards to be "helpful." The average scores for content, video examples, techniques for changing parent behavior, and techniques for changing child behavior were as follows: M = 3.73, SD = 0.50; M = 3.54, SD = 0.60; M = 3.95, SD = 0.66; and M = 4.0, SD = 0.67. Parents also rated the video series on Effective Limit Setting as "helpful" in terms of the content, video examples, and techniques for changing their child's behavior. However, on average parents rated the series as "very helpful" in terms of presenting techniques for changing their own behavior (M = 4.59, SD = 0.55).
Parents also felt that the techniques presented in the Handling Misbehavior series were "very helpful" in terms of changing both their own and their child's behavior (M = 4.67, SD = 0.52 and M = 4.54, SD = 0.55). Overall, they found the content of that series and the examples to be helpful (M = 4.13, SD = 0.54 and M = 3.72, SD = 0.65).

Upon completion of the treatment phase, participants were asked a series of 21 questions related to how satisfied they were with the program. The total TEQ-P ratings ranged from 76 to 115, with an overall mean of 96.89 (SD = 11.15). The overall mean was greater than the midpoint score of 73.5, indicating that parents found this program an acceptable and appropriate way to address their child's behavior concerns. Additionally, the mean subscale scores for acceptability (M = 53.95, SD = 6.62), effectiveness (M = 34.84, SD = 6.05), and time (M = 8.11, SD = 2.33) are similar to those of previous studies assessing the acceptability of the Incredible Years (Kratochwill et al., 2003; Stewart & Carlson, 2010). See Table 12 for a summary of parent responses to the Treatment Evaluation Questionnaire.

System Outcomes/Transportability

The failure to provide intervention services to children with behavioral problems is not the result of a lack of appreciation of the importance of addressing these issues, nor of a lack of evidence clearly demonstrating that young children with early-onset behavior problems are at significantly greater risk of severe difficulties into adolescence and adulthood (Costello, Egger & Angold, 2004). Further, decades of research have resulted in the identification of a number of high-quality programs for parents which have been shown to reduce conduct problems and strengthen pro-social behavior. Despite the acknowledgement of the problem and the identification of a myriad of possible solutions, the challenge lies in the transportation and dissemination of these programs. In addition to examining the effectiveness of the IY-SAPTP, this study therefore also assessed the transportability of the program.

Beyond the treatment integrity of the parent participants, this study also evaluated the integrity of implementation of the Incredible Years program. The phone logs and checklists were assessed to determine whether the program was being administered by the Michigan State and Head Start consultants as intended. Phone logs were examined to determine whether the Head Start and Michigan State consultants were contacting participants with the appropriate frequency for accurate data collection. On average, it took consultants 19 attempted contacts to complete all home visits and phone calls (11 is the minimum number necessary). There was no difference in the number of attempted contacts between the MSU group (M = 18.78, SD = 3.99) and the Head Start group (M = 18.92, SD = 4.32); t(35) = 0.10, p = 0.92. As previously mentioned, it took participants almost four weeks longer than intended to complete the program, which can account for the increased number of contacts. Both groups of consultants contacted the participants at least once weekly. At the home visits, both groups of consultants delivered all of the necessary materials and collected the appropriate data. Based on the phone logs and delivery checklists, there appears to be virtually no difference in the implementation of the program based on consultant.
Consultant Feedback

The 15 consultants (n = 6 from Head Start, n = 9 from Michigan State) were asked three open-ended questions about how they felt the treatment went, what barriers they encountered, and whether they would do anything differently in the future. All respondents were asked identical questions in the same sequence, but interviewers probed inductively on key responses. The interviews were taped, the responses were transcribed, and the interviews were coded based on recurring themes.

Fourteen of the 15 consultants felt that the treatment was helpful for their families. Nine consultants noted that the self-administered format helped to meet the needs of their parent participants. Some themes emerged related to facilitators of adherence. Specifically, eight consultants reported that the structure of data collection helped keep parents and facilitators on track by holding them accountable. Also, parents' knowledge that the manuals would be collected could have helped increase treatment integrity. In terms of barriers to successful implementation of the program, three primary themes emerged.

Life Events

Ten consultants said that "life events" (of the parent participant) interfered with their ability to deliver the contents of the program in a timely fashion (one consultant cited weather as their own personal life event that delayed delivery). For example, three consultants noted that their parents moved over the course of the intervention. Four other consultants noted that there were problems with phone lines being disconnected.

Parent Comprehension of Materials

Five consultants also questioned whether their parent was functioning at a high enough level to benefit from the self-administered format. One consultant noted that because their parent could not read, the parent needed a great deal of support. Further, three consultants noted that their parents needed a great deal of guidance because of a suspected limited vocabulary.

Consultant Barriers

The Head Start consultants also felt that their own time was a barrier. In general, consultants worked with one family at a time and felt that they could not manage more than that. A few of the consultants felt comfortable managing two families at a time. Clinically, this would indicate that a group format might be a better use of resources, as there would be potential to reach a greater number of families in the same amount of time. However, as previously outlined, there are a number of barriers for some families in terms of completing a group program, including time, transportation, and child care.

The consultants had a variety of ideas about what they would do differently in the future. Seven consultants felt that the program needed to be more consultative (rather than strictly self-administered). Although all of the Head Start consultants noted time as a barrier, they also said they felt the families would benefit from more time with the consultant to discuss the strategies and examples in the videos. Eight consultants noted that they felt the incentive was very important and necessary for continued success.

CHAPTER 5

DISCUSSION

The primary purpose of this study was to examine the effectiveness, treatment integrity, acceptability, and the process of implementing an evidence-based program in a community-based setting. A secondary purpose of this study was to evaluate the transportability of the IY-SAPTP. This is the first study of this scale to evaluate the IY-SAPTP; prior studies reported within the literature involved a total of ten families.
The significant impact on parenting and child behavior with this at-risk population speaks to the potential utility of this intervention in a community-based setting. It is also the only study that has utilized randomization to assess the transportability of the intervention. While a strong emphasis has been placed on the use of evidence-based interventions, how interventions are used outside the context of controlled clinical research settings is an area that needs closer examination. As researchers such as Chorpita and Fixsen have highlighted, there is a great need to closely examine the effectiveness of bringing a research-based intervention into community-based settings. The demand for interventions that can produce strong outcomes at low cost is increasing. It is thought that while the discovery of new interventions for particular disorders may lead to positive outcomes, an even greater impact may result from focusing on how to transfer interventions that have already proven themselves in a research setting into community-based settings.

The main hypotheses of this study were that completing the IY-SAPTP would result in increases in positive parenting practices, decreases in negative parenting practices, improvements in children's pro-social behaviors, and decreases in child externalizing behaviors. Each of the hypotheses and associated findings, together with their relevance to the literature, will now be discussed.

Change in Parent Behavior

Negative, coercive parenting practices not only exacerbate problems associated with a behavior disorder but may contribute to the development of additional behavior problems (Patterson, Capaldi, & Bank, 1991; Reid, 1993). It follows that the implementation of prevention and early intervention strategies is critical in providing parents the skills necessary to overcome or correct early child difficulties and manage their day-to-day parenting stress. Consistent with the first hypothesis, parents reported a significant increase in their use of positive parenting strategies, appropriate use of discipline, improved monitoring of their children, and the setting of clear expectations from pre- to post-test on the LIFT Parenting Practices Interview. These findings are similar to those of Webster-Stratton, Hollinsworth, and Kolpacoff (1988), who examined the effectiveness of three parent training programs based on videotape modeling, group discussion, or both. All three parent training programs led to clinically significant improvements in child problem behaviors and parenting behaviors after one year for approximately two thirds of the sample. They also found that mothers were less critical and had more positive affect in their interactions with their child compared to a control group. Kratochwill, Elliott, Loitz, Sladeczek and Carlson (2003) found that, compared to pre-intervention, parents of children with externalizing or internalizing behavior problems raised their voices less and decreased their threats of punishment after using a self-administered parent training intervention. While it is difficult to compare the results of these studies with the current study because they used different measures and ways of interpreting parenting practices, the present results are promising in that they show large effects similar to those of studies using a randomized control design.
These results indicate that improvement in terms of increasing positive parenting practices and reducing negative parenting practices is possible with a self-administered parent training program. Two studies that also evaluated changes in parenting practices after use of the self-administered format had mixed results. The results of the present study are consistent with Ogg and Carlson's (2009) assessment of the Incredible Years SAPT. In that single-subject design study of five parents of children with symptoms of ADHD, parents reported a decrease in all of the negative parenting practices and improvement in all of the positive parenting practices except their use of appropriate discipline. In the current study, by contrast, use of appropriate discipline strategies did significantly improve. The current results differ from those found by Walcott, Carlson and Beamon (2009). In their assessment of parenting practices using the Incredible Years SAPT with parents of children with ADHD, only two of the four parents had improved ratings on any of the four positive parenting practices measured, and both improved in only one area. That study differs from the current study in a few meaningful ways. First, the authors only reported on changes in positive parenting strategies. Second, the target audience for the study was parents of children between 7 and 12 years of age. Research suggests that change in parent and child behaviors is more likely for younger children, as the coercive cycle is less ingrained (Dunst et al., 2004).

As illustrated by the results of this study, parents can change their parenting practices by learning behavioral techniques through a self-administered, videotape-based parent training program. Because professional time constraints can have an impact on which types of parent training formats can be implemented effectively and efficiently, research on the self-administered format holds great promise for a program such as Head Start, where budget and time constraints are ever-present. Additionally, this format is accessible to families who, for a variety of potential reasons, have not typically accessed services.

Change in Child Behavior

Pro-social Skills

Consistent with hypothesis two, this study shows that a self-administered intervention was effective in increasing pro-social behaviors and reducing levels of disruptive child behavior for some of the Head Start children. Pre-intervention, parents rated their children as having an average T-score of 40.73, at the edge of the "concerning" range (T-scores of 40 and below). Post-intervention, T-scores increased significantly to an average of 47.65. Additional support for an improvement in pro-social behavior comes from the parent reports on the Goal Attainment Scale and Global Change Forms. Specifically, based on parent report, there was a large overall positive effect size change on the Goal Attainment Scale (M = 0.92, SD = 0.56) and the social scale of the Global Change Form (M = 1.44, SD = 0.50). While these data reveal improvement for all but seven participants, a degree of change not evident on the other measure of pro-social skills, it is possible that the changes noted on these measures were a function of responder bias. It is also possible that the DECA was not sensitive or specific enough to detect a noticeable change within the short time period for all participants.
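For reference, and assuming the pre- to post-intervention DECA comparison was a paired-samples t-test, as the reported 36 degrees of freedom for the 37 dyads and the pretest-posttest design suggest, the statistic takes the standard form below; no study-specific values beyond the sample size are assumed.

```latex
% Paired-samples t statistic for the pre/post comparison (n = 37 dyads),
% where \bar{d} is the mean pre-to-post difference and s_d its standard deviation.
t = \frac{\bar{d}}{s_d / \sqrt{n}}, \qquad df = n - 1 = 36
```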
While there have been a number of recent studies on the effectiveness of the IY SAPT (Ogg & Carlson, 2009; Walcott et al., 2009), the Webster-Stratton et al. (1988) study is one of the few to examine changes in both a child's pro-social behavior and parenting strategies. Consistent with the current study, they determined that the self-administered videotape intervention resulted in significantly increased pro-social behavior. As children with externalizing behavior problems are at risk of developing poor relationships (Webster-Stratton & Hooven, 1998), the increase in pro-social skills could have important implications for improving peer relationships. Kaiser et al. (2011) also found that a parenting style characterized by warmth, support, and moderately directive parenting practices is associated with youth who have more positive peer interactions. Given that positive parenting practices also improved, it is possible that as parents modeled and reinforced pro-social skills, parent-child interactions improved, which in turn resulted in improved pro-social behaviors (i.e., a positive behavior cycle).

Externalizing Behavior

Further, as hypothesized, reductions in children's disruptive behavior occurred on measures of both the core (i.e., attention and hyperactivity) and peripheral features (i.e., adaptive and internalizing) of disruptive behavior problems. Based on the effect size calculations, parents reported a moderately large change in their child's behavior in all areas assessed, with the greatest change reported in the hyperactivity/impulsivity domain (M = 0.64, SD = 0.58). At least 11 of the 37 participants (30%) had large effect size changes in each of the domains assessed from baseline to the end of treatment. Overall, these findings support some of the research that has examined the use of self-administered treatments with children with behavior disorders (Webster-Stratton & Taylor, 2001). However, the outcomes were more favorable than those found in both the Ogg and Carlson (2009) and Walcott, Carlson and Beamon (2009) studies of the effectiveness of the self-administered format. Both of those studies found greater changes in the peripheral features of behavior concerns (adaptive skills, internalizing behaviors), and both specifically studied children who were either formally diagnosed with ADHD or who had symptoms consistent with an ADHD diagnosis. It is possible that both of these samples were more resistant to change in the core features of ADHD (hyperactivity/impulsivity and attention) because the core features depend more heavily on neurobiological substrates. Additionally, the participants in the aforementioned studies were on average two years older than the children in the current study, potentially making the older students less amenable to change.

While the group analysis is compelling, the power of a single-subject design comes from looking at individual variability. Findings in the present study suggest that many of the mothers who improved their discipline and used strategies to foster good behavior (e.g., participants 6, 7, 12, 27, 31) were those whose children showed the greatest reductions in disruptive behaviors and increases in pro-social behaviors post-intervention, which supports the underlying premise of parent training. It also suggests that despite a myriad of possible causes of disruptive behavior, changing just one environmental determinant of problem behaviors in children (parents' behavior management) can have a substantial impact on improved child behavior.
However, despite the many families who experienced positive gains and the overall moderately large positive behavior changes, a segment of the participants (21%) reported little-to-no gain, and 13 families dropped out prior to completing the program. It is possible that participants with low levels of integrity or lower treatment acceptability have such scores because they are in need of additional support in order to carry out the program. Webster-Stratton and colleagues have recognized that parents with multiple risk factors may not have the same child and family intervention outcomes as parents with fewer risk factors (Webster-Stratton & Hammond, 1990). It is possible that in these cases the self-administered format may not be supportive enough to meet their needs. To address this issue, Webster-Stratton created the ADVANCE series to supplement the basic training program. The ADVANCE series targets parental risk factors by promoting effective coping and communication strategies (Webster-Stratton, 2000). Given the many risk factors Head Start families face, some may also benefit from the ADVANCE components. In the context of the self-administered format, it would be beneficial to know whether there is an identifiable factor that can help predict positive outcomes with minimal support. Conversely, it would also be beneficial, and potentially save time, money, and resources, if there were identifiable risk factors that could predict a higher chance of not responding to the self-administered format and thus a potential benefit from a supplemental program such as the ADVANCE series.

Another focus of this study was related to distinguishing between efficacy and effectiveness research and understanding the gap between research and practice. The remainder of this paper is devoted to the discussion of these distinctions. These ideas were the primary reasons for measuring acceptability and integrity in this study, because if a treatment is viewed as acceptable and can be carried out with integrity in a self-administered format, the likelihood of a successful transfer from a clinical to a community-based setting will increase.

Treatment Integrity

Given the self-administered format, it was necessary to determine whether this efficient format could be carried out with the same level of integrity as the traditional format, which is delivered within a controlled, therapist-directed setting. Based on a study by Rhymers, Evan-Hampton, McCurdy, and Watson (2002), integrity levels between 50% and 75% have been found to be sufficient for producing rather large changes in behavior. Based on the planned analysis, in which treatment integrity was the mean of the percentage of videos watched, workbook activities completed, and engagement in target behaviors, all parents reported a high level of treatment integrity. However, that number may be positively skewed by the fact that, with only one exception, parents reported watching all of the videos (and consultants reported delaying the delivery of the next videos until the previous ones were viewed). When considering only the completion of workbook activities and reported engagement in target behaviors, the mean treatment integrity for the group declined from 78% to 67%, and 29 families met criteria for a high level of treatment integrity. On average, parents reported completing a nearly equal percentage of workbook activities and target activities (M = 67.13, SD = 21.10 and M = 66.46, SD = 19.52, respectively). Both of these averages met the threshold for being considered high enough to result in behavior change.
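The drop from roughly 78% to 67% is consistent with the overall means reported in Table 10 (videos 99.3%, workbook activities 67.1%, target behaviors 66.5%), assuming the group figures were computed as simple means of those indicators; a quick check of the arithmetic:

```latex
% Mean of all three integrity indicators versus the two parent-effort indicators.
\frac{99.3 + 67.1 + 66.5}{3} \approx 77.6\%
\qquad \text{versus} \qquad
\frac{67.1 + 66.5}{2} \approx 66.8\%
```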
Relationship between Treatment Integrity and Behavior Change

Treatment integrity is thought to be an important variable, especially in the self-administered format, because if a family is unable to carry out the treatment, theoretically there would be little chance for behavioral change. However, the degree of that relationship is not well understood. It was hypothesized that higher levels of treatment integrity would be associated with greater behavioral change. There was a strong positive correlation between overall treatment integrity and the effect sizes on the attention (r = 0.73) and hyperactivity/impulsivity (r = 0.61) scales of the BASC Monitor. This means that the participants who reported engaging in more of the target behaviors and completing more of the workbook activities experienced greater change in their child's ability to attend and maintain behavioral control. Interestingly, the correlations between treatment integrity and internalizing problems (r = 0.34) and adaptive skills (r = 0.40) were not nearly as strong. Since the Incredible Years program targets disruptive behavior, it follows that the strongest relationship would be with the behaviors directly targeted by the program. The strong positive correlation between treatment integrity and behavioral outcomes may be the result of having a structured manual and video examples when conducting a self-administered treatment.

Consistent with Rhymers, Evan-Hampton, McCurdy, and Watson (2002), when the mean effect sizes on the four BASC-Monitor rating scales were recalculated for participants with over 50% treatment integrity (based on workbook completion and engagement in target behaviors) and participants with under 50% treatment integrity, the group of participants with less than 50% treatment integrity did not achieve significant behavioral improvement (effect sizes less than 0.2 on the four BASC Monitor subscales). While this construct has not previously been systematically examined within the Incredible Years SAPT literature, Walcott et al. (2009) found a similar result in their single-case design study. As they hypothesized, the parent-child dyad with the greatest level of treatment integrity also demonstrated the best outcomes across the target behaviors. Conversely, the dyad with the lowest treatment integrity demonstrated change in the fewest target behaviors. Given the strong association between treatment integrity and positive behavioral outcomes in the Incredible Years SAPT, it is crucial that future research examine ways to increase treatment integrity.

Treatment Acceptability

Overall, parents agreed that the program was a satisfactory way to address their child's behavioral difficulties, which supports the hypothesis that parents would find the IY-SAPTP an acceptable treatment for their child. The overall mean score on the TEQ-P was 96.89 (SD = 11.15). The overall mean was greater than the midpoint score of 73.5, indicating that parents found this program an acceptable and appropriate way to address their child's behavior concerns. Additionally, the mean subscale scores for acceptability (M = 53.95, SD = 6.62), effectiveness (M = 34.84, SD = 6.05), and time (M = 8.11, SD = 2.33) are similar to those of previous studies assessing the acceptability of the Incredible Years (Kratochwill et al., 2003; Stewart & Carlson, 2010).
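The 73.5 midpoint is consistent with the scale ranges reported in the note to Table 12, where the three subscale ranges (acceptability 11-66, effectiveness 8-48, time 2-12) imply a possible total of 21 to 126 for the 21-item questionnaire:

```latex
% Minimum, maximum, and midpoint of the TEQ-P total score implied by Table 12.
\text{min} = 11 + 8 + 2 = 21, \qquad
\text{max} = 66 + 48 + 12 = 126, \qquad
\text{midpoint} = \frac{21 + 126}{2} = 73.5
```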
While there have been recent studies demonstrating the effectiveness of the IY-SAPTP (Kratochwill et al., 2003; Ogg & Carlson, 2009; Walcott, Carlson & Beamon, 2009), there are few reported data about its acceptability. Treatment acceptability is an important component of transportability to real-world settings because higher treatment acceptability is likely to improve treatment integrity by increasing compliance (Chorpita, 2003). In addition to the parents finding the Incredible Years an acceptable way to address their child's behavior problems, the MSU and Head Start consultants also felt this was an appropriate way to address the needs of the Head Start families. Overall, results from this study are consistent with previous research on adherence challenges and facilitators reported in the literature on group-based interventions (Webster-Stratton, 2004). Key contributors to adherence included the structure of the materials (e.g., manuals, videos, handouts) and the integrity checklists. Schoenwald (2000) suggests that adherence is increased when supervision and monitoring procedures are strong. What that looks like in the self-administered program has not been previously explored, but based on the consultant interviews it appears that when parents are aware that their materials will be reviewed and there is built-in support (checklists), adherence to a program's protocol potentially improves.

Transportability of IY-SAPTP

A focus of this study was bridging the research-to-practice gap. The partnership with Head Start allowed for the promotion of evidence-based practices and brought best practices to a community-based setting. However, making the right information available is only one piece of the puzzle. The process of transferring knowledge about evidence-based practices and implementing them in day-to-day work with children and families is challenging. The study of this process is every bit as important as the study of the evidence-based practices themselves, because without "buy-in" (i.e., treatment acceptability) from the community there can be no transfer from efficacious to effective practice. Dissemination of evidence-based programs is often compromised by low adherence to protocols, misapplication to the wrong populations, inadequate resources, and poor infrastructure, support, training, and planning (Fixsen, Naoom, Blasé, Friedman & Wallace, 2005). Within this study, a self-administered, manualized approach was employed in an effort to ease this transition and increase the likelihood of a successful transfer from the lab to the real-world setting.

There was no difference between the implementation integrity of the Head Start consultants and the MSU consultants. It was hypothesized that because of system- and job-related constraints, the Head Start consultants would have more difficulty implementing the program as intended. This did not prove to be the case. However, in the open-ended interviews, the Head Start consultants did note that they did not believe they would be able to complete this program with more than two families at a time. It is possible that this concern was not raised by the MSU consultants because of their greater number. It should be noted, however, that the concerns about time are actually a function of data collection and not the delivery of the program itself.
Teasing apart the research aspects of this program (data collection) from the systematic delivery of the program is important from an efficiency standpoint. However, with respect to the implementation of this program, the self-administered nature led to equivalent implementation across consultant groups. The program did take approximately one month longer to complete than intended. Given the data collection technique, it is difficult to ascertain whether the delay was a result of parent or consultant barriers. Based on the interviews (of consultants only), travel to some of the more rural areas during the winter was a barrier; however, the most common barrier to implementation noted was "life events" of the parents, such as job changes, moves, and relationship issues.

Implications of Study Findings on Intervention Research

In the past several years, there has been an emphasis on bridging the gap between our knowledge of efficacious treatments and the services currently being received by consumers (Fixsen et al., 2005). There is agreement that much is known about effective interventions, but that for a variety of reasons the field has been unable to make use of them to help achieve important behavioral outcomes in "real world" settings. The United States Centers for Disease Control (2004) recommend greater use of parenting interventions for preventing youth violence and conduct disorder. They stress the need for interventions to start early and to be locally based and accessible, particularly given that families most at risk may find it hard to access conventional services. To achieve this, they emphasize partnership between health services and community-based organizations. These policy recommendations are in line with the current study, and the study design was an attempt to bridge the research-practice gap by demonstrating the transportability of the IY-SAPTP to an at-risk and oftentimes difficult-to-reach Head Start population.

While there are flaws in a pretest-posttest design, this study can be viewed as a step along Chorpita's proposed research lines. As revealed in this study, intervention researchers encounter numerous challenges associated with conducting careful implementations, following and measuring people over time, using appropriate designs for evaluation, and using the best available methods for data analysis. While there are significant limitations to the use of a pretest-posttest design, in a community-based setting the collection of data at both baseline and treatment phases is an improvement over "business as usual" practices. Additionally, this is the first study to attempt to systematically evaluate the transportability of this program through the use of random assignment. While the results of this study are consistent with previous research indicating that the more time-consuming and resource-demanding group format of the IYPT appears warranted only for more severe cases of child behavior problems, this statement cannot be made more conclusively without a more rigorous research design and replication of the current findings. Results tentatively indicate that the IY-SAPTP may serve as a first line of treatment before the need for greater resource consumption is determined.

Limitations

By the nature of the research design, this study presented a number of limitations related to threats to both internal and external validity. A primary threat to internal validity was related to history. From the beginning of baseline through the intervention, the children were also in school.
It is possible and even likely that the school experience was also contributing to improvements in the children's behavior. Secondly, the current study was unable to control for maturation. As with many studies that examine children, it is difficult to rule out maturational effects. Research suggests that as children grow throughout the preschool years, behaviors related to attention and impulse control improve. While parents rated their children as improving in these externalizing behaviors, it is difficult to determine whether that was a function of time or an impact of the intervention. To best assess the effectiveness of this program, it is important to ensure that treatment improves children's behavior beyond the natural maturation effects they are experiencing. The use of a control group could help address this issue in future studies.

A third threat to internal validity was related to the reliance on self-report measures. This reliance presented a challenge to documenting the "true" growth of the participants. Most importantly, there is an inherent response bias present in self-report measures. The participants spent several hours watching the parent training videos in addition to completing workbook activities and engaging in target behaviors. Therefore, it is reasonable to assume that they would potentially rate themselves more positively on parenting measures and rate their child's behavior more favorably. While it would perhaps be ideal to conduct observations of parenting practices, research suggests that there are a number of disadvantages to using direct observation when parents are the primary change agent. For example, a parent may implement with integrity only while being observed and not at other times (Sterling-Turner & Watson, 2002). Given the considerable time, expense, and training necessary to conduct direct observations, it was decided that this would not be an appropriate step at this time. However, an observational measure of parent-child interaction could help assess changes in parent and child behaviors in future studies. A measure of child behavior in, perhaps, a school setting would also strengthen the generalizability of the findings. Additionally, while implementation integrity had not been measured at all in prior studies of this program, the methods used in this study were somewhat limited and may not have fully addressed the issue. Because of the number of participants who dropped out of the study, it is possible that the implementation integrity rates were inflated.

It should be noted that there was an incentive ($100 gift card) provided to families who completed the program. This is a deviation from a complete transition to the community-based setting, as $100 is not typically offered to families in Head Start. However, some level of incentive is typically provided in this setting. Further, research suggests that incentives can motivate families to become interested in activities that they might not have been interested in initially and can also motivate them to continue their involvement in those activities. However, it is possible that the incentive positively influenced parent perceptions of intervention acceptability.

A threat to external validity was related to generalization across settings. This study involved a relatively small mid-Michigan sample of Head Start children who met specific inclusion criteria.
In summary, while the results of the current study are promising and add further support for the potential effectiveness of the IY-SAPTP, future studies would benefit from the inclusion of a control group and data from multiple sources to reduce threats to internal validity. The generalizability of the findings would also improve with broader inclusion criteria. Finally, replication of the results is crucial.

Future Directions

Results from this study indicate that the IY-SAPTP may be an effective treatment for some Head Start families with children at risk for behavioral problems. This research represents one part, Effectiveness: Transportability, of the research model proposed by Chorpita. Replication of this research is crucial to further progress along Chorpita's four proposed research lines. A meta-analysis of the IYPTP suggests that the more time-consuming and resource-intensive group-administered IYPTP, relative to the IY-SAPTP, appears warranted only for the most severe cases of child conduct problems (Sougstad, 2010). The IY-SAPTP may serve as an initial gateway for determining the need for the greater resource investment of the group format.

Additionally, future research is needed to identify the predictors of high treatment integrity, as the current study indicates that more favorable outcomes are related to higher levels of treatment integrity. For the successful dissemination and transportability of the self-administered format, it is essential to systematically examine what the important variables are and what steps will increase treatment integrity. This would help implementers zero in on appropriate targets for change and craft strategies to influence that change. Knowing which factors predict successful implementation with high treatment integrity will also assist implementers in overcoming barriers to behavior change and providing the necessary level of support.

A recent study by Phaneuf and McIntyre (2011) suggested the implementation of a three-tiered model of parent training. They used the tiered model often implemented in educational systems, termed a response to intervention (RTI) model, which involves intensifying, modifying, or changing an intervention based on a student's response to intervention (Gresham, 2002). Webster-Stratton and Herman (2010) also considered a tiered intervention system based on the severity of initial levels of problem behaviors. Considering that school settings experience many of the same treatment barriers inherent in parent training programs (lack of time, resources, etc.), they hypothesized that a parent training model using methods similar to RTI approaches may optimize efficiency without sacrificing treatment effectiveness. In theory, individuals who require little intervention to gain significant improvement do not require any added intervention intensity. Following this model, there were 29 families in the current study that demonstrated high levels of treatment integrity and experienced moderate to large effect size improvements on a variety of target behaviors. Based on these findings, it is possible that the level of support offered by the IY-SAPTP adequately met their needs. With this success rate in mind, it seems appropriate to consider the self-administered version as an entry-level or first-tier approach. If the somewhat time-consuming data collection process were reduced, it is possible that more families could be reached.
The issue for future research then becomes, at what point is the decision made to increase the level of support, and what is the next step. Future research should consider how to balance the demands of the self-administered format with the demands of research, the characteristics of those families who are most likely to 77 benefit from this cost-effective and efficient format. This study adds to the body of evidence in support of the self-administered format of the Incredible Years as an effective alternative means of intervention, and supports the notion that this intervention can be transferred to a community based setting. These findings also support the tenet of parent training, that child behavior can be modified by changing parenting behavior. This means this intervention is accessible and beneficial for mothers who are concerned about their child‟s behavior problems, and also provides a research based option for the Head Start system. 78 APPENDICIES 79 APPENDIX A Table 1. Program Contents The Incredible Years Basic Parent Training Program- Early Childhood Program Title and Contents Video 1: How to Play with a Child and Helping Children Learn How to Play with your Child Helping Children Learn Video 2: Praise and Rewards The Art of Effective Praising Tangible Rewards Video 3: Effective Limit Setting and Dealing with Noncompliance How to Set Limits Helping Children Learn to Accept Limits Dealing with Noncompliance Video 4: Handling Misbehavior Avoiding and Ignoring Misbehavior Time Out and Other Penalties Preventive Approaches 80 APPENDIX B Table 2. Measures Collected Week Meeting 1 Baseline 1 Home Visit 2 Baseline Phone Call 3 End of Baseline Home Visit 4 Phone Call 1 5 Home Visit 6 Phone Call 2 7 Home Visit 8 Phone Call 3 9 Home Visit 10 Phone Call 4 Measures Behavior Assessment System for ChildrenParent Rating Scales (BASC-PRS) Demographic Information Form Parent Practices Interview DECA Goal Attainment Scale, Global Change Form BASC Parent Rating Scales Goal Attainment Scale Global Change Form BASC Parent Rating Scales (BASC-PRS) Parent Practices Interview Goal Attainment Scale Global Change Form BASC Parent Rating Scales Goal Attainment Scale Global Change Form BASC Parent Rating Scales Global Change Form Goal Attainment Scale Parent Video Evaluation Treatment Integrity Checklist Goal Attainment Scale Global Change Form BASC Parent Rating Scales Global Change Form Goal Attainment Scale Parent Video Evaluation Treatment Integrity Checklist Goal Attainment Scale Global Change Form BASC- Parent Rating Scales Goal Attainment Scale Global Change Form Parent Video Evaluation Treatment Integrity Checklist Goal Attainment Scale Global Change Form 81 Table 2 (cont‟d) 11 Home Visit BASC- Parent Rating Scales Goal Attainment Scale Global Change Form Parent Practices Interview Parent Video Evaluation Treatment Integrity Checklist Consumer Satisfaction Survey DECA 82 APPENDIX C Table 3. Dependent Variables and Measures Research Question 1) A. Will parents who complete the Incredible Years Parent Training Program show an increase in positive parenting practices and a decrease in negative parenting practices? Dependent Variable Parent Report of Parenting Practices Parent Perception of Child 2) Will students identified with early Behavior onset conduct problems, whose parents participate in the training program, show significant behavioral improvement? 83 Measure - Parenting Practices Interview -DECA - BASC Monitor - Global Change Form - Goal Attainment Scale Table 3 (cont‟d) 3) A. 
To what degree are the parent management techniques taught in the Incredible Years Parent Training Program implemented with integrity? Treatment Integrity Treatment Integrity Checklist Percentage of Workbook Completion Treatment Integrity, Effectiveness Video Completion B. Will higher levels of treatment integrity influence degree of behavior change? 4) Do parents find the Treatment Acceptability Incredible Years Training Program an acceptable way to learn to address the needs of their children with behavioral problems? Parent Video Evaluation 5) Will higher levels of implementation integrity influence degree of behavior change? 6) What facilitated or hindered the consultants‟ ability to work with parents to implement the program? Implementation Integrity, Effectiveness Implementer checklist Qualitative Analysis Interview 84 Consumer Satisfaction Survey APPENDIX D Table 4. Demographic Information 1 Child‟s Sex F Child‟s Age 5-1 Marital Status Single 2 M 4-9 Married 3 M 4-10 4 F 4-8 Living together Married 5 M 4-0 Single 6 F 4-11 Single 7 M 4-4 Single 8 M 5-2 Married 9 F 4-2 Married 10 F 4-6 Single 11 F 5-8 12 M 4-1 Living together Married 13 F 4-7 Single 14 M 4-9 Separated 15 F 4-0 Single 16 M 3-11 Married 17 F 4-1 Single Parent Education Some college High school or GED College graduate Grades 911 High school or GED Grades 911 High school or GED Some college Some college Grades 911 Grades 911 Some college Grades 911 High school or GED Some college Postcollege High school or GED 85 Race Language Caucasian English Caucasian English Asian English/Korean Caucasian English African American English African English American Caucasian English Caucasian English Caucasian English Latino English/Spanish African English American Caucasian English Caucasian English Caucasian English African American Asian English Other-Urdu Caucasian English Table 4 (cont‟d) 18 M 4-11 Single 19 F 4-7 Separated 20 M 5-0 Separated 21 M 4-6 Single 22 F 4-4 Married 23 M 5-0 Single 24 F 4-6 Single 25 M 4-2 Single 26 M 5-1 Married 27 M 4-8 Single 28 F 4-7 Single 29 M 4-4 Separated 30 F 4-10 Single 31 M 4-0 Married 32 M 5-3 33 M 5-1 Living together Single 34 M 4-0 Single Grades 911 High school or GED Some college Some college High school or GED Grades 911 High School or GED Grades 911 Some College High School or GED High School or GED Grades 911 High School or GED Some College Grades 911 High School or GED Grades 911 86 Caucasian English Caucasian English Caucasian English Caucasian English Caucasian English Caucasian English Caucasian English Caucasian English Caucasian English African American English Caucasian English Caucasian English Caucasian English Caucasian English African English American Caucasian English Latino English Table 4 (cont‟d) 35 F 4-9 Single 36 M 4-9 Married 37 M 4-11 Single 38* M 4-0 Single 39 F 4-11 Single 40 M 4-4 Single 41 M 4-3 Single 42 F 4-6 Married 43 F 4-5 Single 44 M 4-2 Married 45 M 4-8 Single 46 M 4-6 Single 47 F 4-4 Married 48 M 4-7 Single 49 M 4-10 Single 50 F 4-8 Separated High School or GED High School or GED High School or GED High School or GED High School or GED High School or GED Grades 911 Grades 911 High School or GED Some college High School or GED Some college Grades 911 Some college High School or GED Some college *Participants 38-50 dropped out of the study 87 Caucasian English Caucasian English Caucasian English African American English Caucasian English Caucasian English Caucasian English African English American Caucasian English Caucasian English African American English 
Caucasian English Caucasian English Caucasian English Caucasian English Caucasian English APPENDIX E Table 5. DECA Protective Factors and Behavioral Concerns Total Protective Factors Behavioral Concerns Mean Pre-Test 40.73 (9.28) 71.65 (1.67) Mean Post-Test 47.65 (8.82) 59.14 (6.79) *= p<.0005 88 t df 6.53* 36 11.32* 36 APPENDIX F Table 6. Paired Sample t-test results on LIFT PPI Appropriate Discipline Positive Parenting Monitoring Clear Expectations Harsh Discipline Inconsistent Discipline *= p<.0005 Mean Pre-Test 5.32 (0.57) 5.20 (0.72) 5.91 (0.34) 5.77 (0.92) 3.18 (0.53) 3.32 (0.69) Mean Post-Test 5.63 (0.64) 5.58 (0.63) 6.13 (0.43) 6.19 (0.61) 2.74 (0.60) 2.80 (0.55) 89 t df -3.45* 36 -7.35* 36 -3.80* 36 -4.73* 36 7.13* 36 6.44* 36 APPENDIX G Table 7. Effect Size Calculations on GAS and GCF Pro-Social Skills Participant GAS 1 2.82 GCF Academic/Social 1.80 2 0.81 1.60 3 1.11 1.58 4 1.01 1.39 5 1.21 0.89 6 0.81 1.81 7 0.91 1.60 8 1.01 1.11 9 1.01 1.48 10 0.81 1.36 11 0 1.81 12 1.11 1.75 13 1.01 1.60 14 0 1.39 15 1.41 1.21 16 0.91 1.94 17 1.01 1.60 18 0.71 1.64 19 0 0.89 20 0 0.89 21 1.01 1.36 22 1.01 1.82 23 1.11 1.66 24 1.62 1.65 25 1.62 1.58 90 Table 7 (cont‟d) 26 0.40 1.21 27 0.81 1.81 28 0.91 1.70 29 0 0.45 30 1.61 1.39 31 1.41 1.36 32 1.11 1.95 33 0 1.65 34 0.91 1.51 35 1.21 1.65 36 0.91 1.95 37 0.81 0 Overall Mean (SD) 0.92 (0.56) 1.44 (0.50) 91 APPENDIX H Table 8. Effect Size Calculations on GCF Negative Behaviors Participant GCF Attention 1 1.79 GCF Hyperactivity 1.75 2 1.60 1.60 3 1.58 1.22 4 1.52 1.47 5 0.89 0.68 6 1.82 1.60 7 1.66 1.58 8 0 0 9 1.58 1.58 10 1.36 0.74 11 1.81 1.68 12 1.75 1.75 13 1.60 1.60 14 1.52 1.52 15 1.36 1.36 16 1.94 1.60 17 1.66 1.66 18 1.60 1.60 19 0.89 0.89 20 0 0 21 1.36 1.36 22 1.81 1.81 23 1.79 1.79 24 1.65 1.79 25 1.58 1.58 92 Table 8 (cont‟d) 26 1.36 1.36 27 1.82 1.82 28 1.75 1.81 29 0.45 0.45 30 1.51 0.53 31 1.36 1.36 32 1.94 1.84 33 0.89 0.74 34 1.58 0.84 35 1.65 1.65 36 1.95 1.95 37 0 1.1 Overall Mean (SD) 1.39 (0.60) 1.31 (0.58) 93 APPENDIX I Table 9. Effect Size Calculations on BASC Monitor Participant Attention Hyperactivity/Impulsivity Internalizing Adaptive 1 0.15 -0.29 -0.58 0.87 2 0.83 0.8 0.84 1.02 3 0.25 0.68 0.53 0.83 4 0.75 0.68 0.53 0.83 5 -0.1 -0.1 0.02 0.6 6 1.29 2.03 0.92 1.31 7 0.82 0.27 0.86 0.81 8 -0.05 0.12 1.06 1.3 9 0.65 0.82 0.03 0.05 10 0.93 0.78 1.16 0.88 11 0.46 0.22 0.67 1.24 12 1.86 1.75 0.75 0.36 13 0.9 1.02 0.97 0.63 14 0.1 0.92 1.17 0.11 15 0.35 0.65 0.12 0.06 16 0.75 0.72 0.25 0.15 17 0.23 0.4 0.02 -0.03 18 0.1 0.56 0.1 0.09 19 0.05 0.13 0.32 -0.1 20 0.08 0.34 0.06 0.04 21 0.35 0.25 0.16 -0.02 22 1.33 1.26 0.73 23 0.35 0.65 0.42 0.68 0.55 24 0.91 0.75 -0.21 0.68 25 1.18 1.02 0.23 0.12 94 Table 9 (cont‟d) 26 0.63 0.78 0.74 0.58 27 2.03 1.16 1.35 0.54 28 0.25 0.82 1.2 0.92 29 0.04 -0.53 -0.08 0.21 30 -0.12 0.15 0.64 -0.21 31 1.03 1.07 0.96 1.01 32 0.93 2.03 0.68 1.42 33 0.15 0.12 -0.13 0.23 34 0.35 0.14 0.75 0.83 35 0.15 0.18 0.35 0.18 36 0.82 1.2 0.76 1.2 37 0.02 0.13 0.15 0.05 Mean MSU (SD) Mean Head Start (SD) Overall Mean (SD) 0.54 (0.50) 0.60 (0.59) 0.56 (0.53) 0.64 (0.54) 0.64 (0.65) 0.64 (0.58) 0.54 (0.54) 0.53 (0.49) 0.54 (0.52) 0.59 (0.57) 0.55 (0.48) 0.58 (0.53) * There were no significant differences between groups on any of the scales. 95 APPENDIX J Table 10. 
Mean Treatment Integrity on Videos, Workbooks and Target Activities Parent of Child Percent of Videos Completed 1 100 Percent of Workbook Activities Completed 86 2 100 84 75 3 100 60 75 4 100 62 82 5 75 48 25 6 100 100 90 7 100 86 65 8 100 62 72 9 100 68 70 10 100 86 75 11 100 77 60 12 100 80 90 13 100 88 90 14 100 70 67 15 100 65 70 16 100 87 80 17 100 20 45 18 100 35 33 19 100 72 55 20 100 40 50 21 100 68 67 22 100 84 88 23 100 75 80 24 100 77 90 25 100 87 90 96 Percent of Reported Engagement in Target Behaviors 88 Table 10 (cont‟d) 60 26 100 65 27 100 90 90 28 100 55 65 29 100 43 33 30 100 55 33 31 100 80 80 32 100 87 75 33 100 67 66 34 100 55 50 35 100 45 25 36 100 75 50 37 100 0 60 Mean MSU (SD) 98.9 69.70 (19.33) 69.22 (17.89) Mean Head Start (SD) 100 67.78 (16.20) 61.93 (21.86) Overall Mean (SD) 99.32 67.13 (21.10) 66.46 (19.52) *There were no significant differences between groups on any measure of treatment integrity 97 APPENDIX K Table 11. Mean Effect Sizes for Participants with Low and High Treatment Integrity BASC Monitor Subscale Effect Sizes for Low Treatment Integrity Total Group Effect Size Effect Sizes for High Treatment Integrity Attention 0.05 (0.12) 0.56 (0.58) 0.70 (0.51) Hyperactivity/Impulsivity 0.14 (0.34) 0.64 (0.58) 0.78 (0.55) Internalizing 0.15 (0.23) 0.54 (0.52) 0.64 (0.53) Adaptive Skills 0.12 (0.23) 0.58 (0.53) 0.70 (0.52) 98 APPENDIX L Table 12. Mean Parent Treatment Evaluation Questionnaire Scores Subscale n M SD Acceptability 37 53.95 6.62 Effectiveness Amount of Time for Improvement Total 37 34.84 6.05 37 8.11 2.33 37 93.89 11.15 Note- Acceptability scores range from 11 to 66 with greater scores indicating greater acceptability; Effectiveness scores range from 8 to 48 with greater scores indicating stronger perceived effectiveness; Amount of Time for Improvement scores range from 2 to 12 with greater scores indicating perception of faster behavioral improvement. 99 APPENDIX M Figure 1. Global Change Form Social Change Scores from Baseline through Intervention For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation. 100 APPENDIX N Figure 2. Goal Attainment Scaling Scores from Baseline through Intervention 101 APPENDIX O Figure 3. Global Change Form Attention Change Scores from Baseline through Intervention 102 APPENDIX P Figure 4. Global Change Form Hyperactivity/Impulsivity Scores from Baseline through Intervention 103 APPENDIX Q Target Activity Checklists Video 1: Play Ask the parent if they: _____ Completed all videos _____ Completed handouts in workbooks _____ Engaged in the following behaviors: _____ Played with your child for a minimum of 10 to 15 minutes every day doing a learning activity (ex. From workbook: reading, sharing a story, coloring or painting). _____ Kept track of these play periods on the Record Sheet: Play Times Handout _____ Filled in the two checklists on evaluating play 104 Video 2: Praise and Rewards Ask the parent if they: _____ Completed all videos _____ Completed handouts in the workbook _____ Engaged in the following behaviors: _____ Played (and read) with your child every day for at least 10 minutes _____ Practiced using praise during play time _____ Chose one behavior you would like to see your child engage in more frequently and systematically praised it every time it occurred during the week _____ Increased the number of praises you gave and observed what effect this had on your child. 
_____ Kept track on the results on the “record sheet: praises” handout _____ Listed the behaviors you want to see more of on the Behavior Record Handout _____ Chose one behavior from the Behavior Record Handout to work on with a chart or sticker system. _____ Explained the star or chart system to your child for the behavior you want to encourage _____ Made the chart together with your child _____ If your child was having problems at school, set up a program that includes tangibles for “good behavior” notes from teachers. _____ Shared with teachers what incentives motivate your child 105 Video 3: Effective Limit Setting Ask the parent if they: _____ Completed all videos _____ Completed handouts in workbook _____ Engaged in the following behaviors: _____ Decreased the number of commands you gave to those that are most important. _____ When necessary, gave positive and specific commands. _____ Avoided using question commands, “let‟s” commands, negative commands, vague commands and chain commands. _____ Monitored and recorded the frequency and type of commands you gave at home for a 30 minute period on the “Record Sheet: Commands” handout, and recorded your child‟s responses to those commands. _____ Praised you child every time he or she complied with a command. _____ Used the Household Rules worksheet to establish some of the rules that your think are most important and wrote these ideas down on the handout. _____ Continued playing with your child for at least 10 minutes each day. _____ Ignored inappropriate responses to commands. _____ Avoided arguing with your child about rules and commands. _____ Used a distraction or diversion after you told your child that he or she cannot do something. _____ Made a list of behaviors you would like to see less of on the Behavior Record handout. _____ Gave commands only when you were prepared to follow through with it. _____ Chose and appropriate and safe place for Time Out. _____ Explained to your child how Time Out will work and when it will be used. _____ On the “Record Sheet: Commands and Time Out” handout, wrote down an example of a situation when you used Time Out for noncompliance. 106 Video 4: Handling Misbehavior Ask a parent if they: _____ Completed all videos _____ Completed handouts in workbook _____ Engaged in the following behaviors: _____ Decreased the number of commands given to those that are most important. _____ Gave commands only when you were prepared to follow through with it. _____ Ignored inappropriate or annoying behaviors that are non-destructive. _____ Used a distraction or diversion when possible. _____ Praised your child for appropriate behavior. _____ Practiced using self-praise and challenging negative thoughts with positive coping thoughts. _____ Completed self-talk and positive coping statements. _____ Identified behaviors that might result in a logical or natural consequence and identified what privileges would be appropriate to remove. _____ From your list of negative behaviors you wanted to see less of, thought of the opposite behavior. Then systematically praised this positive behavior every time it occurred during the week. _____ Use the self-talk in the Problem Situations handout to record upsetting thoughts you have in problem situations, and write down some alternative calming thoughts. _____ Chose a problem to discuss with your child using the problem-solving approach and recorded the interaction on the “Record Sheet: Problem Solving” handout. 
APPENDIX M

Figure 1. Global Change Form Social Change Scores from Baseline through Intervention

For interpretation of the references to color in this and all other figures, the reader is referred to the electronic version of this dissertation.

APPENDIX N

Figure 2. Goal Attainment Scaling Scores from Baseline through Intervention

APPENDIX O

Figure 3. Global Change Form Attention Change Scores from Baseline through Intervention

APPENDIX P

Figure 4. Global Change Form Hyperactivity/Impulsivity Scores from Baseline through Intervention

APPENDIX Q

Target Activity Checklists

Video 1: Play

Ask the parent if they:
_____ Completed all videos
_____ Completed handouts in the workbook
_____ Engaged in the following behaviors:
     _____ Played with your child for a minimum of 10 to 15 minutes every day doing a learning activity (e.g., from the workbook: reading, sharing a story, coloring, or painting).
     _____ Kept track of these play periods on the "Record Sheet: Play Times" handout
     _____ Filled in the two checklists on evaluating play

Video 2: Praise and Rewards

Ask the parent if they:
_____ Completed all videos
_____ Completed handouts in the workbook
_____ Engaged in the following behaviors:
     _____ Played (and read) with your child every day for at least 10 minutes
     _____ Practiced using praise during play time
     _____ Chose one behavior you would like to see your child engage in more frequently and systematically praised it every time it occurred during the week
     _____ Increased the number of praises you gave and observed what effect this had on your child
     _____ Kept track of the results on the "Record Sheet: Praises" handout
     _____ Listed the behaviors you want to see more of on the Behavior Record handout
     _____ Chose one behavior from the Behavior Record handout to work on with a chart or sticker system
     _____ Explained the star or chart system to your child for the behavior you want to encourage
     _____ Made the chart together with your child
     _____ If your child was having problems at school, set up a program that includes tangibles for "good behavior" notes from teachers
     _____ Shared with teachers what incentives motivate your child

Video 3: Effective Limit Setting

Ask the parent if they:
_____ Completed all videos
_____ Completed handouts in the workbook
_____ Engaged in the following behaviors:
     _____ Decreased the number of commands you gave to those that are most important
     _____ When necessary, gave positive and specific commands
     _____ Avoided using question commands, "let's" commands, negative commands, vague commands, and chain commands
     _____ Monitored and recorded the frequency and type of commands you gave at home for a 30-minute period on the "Record Sheet: Commands" handout, and recorded your child's responses to those commands
     _____ Praised your child every time he or she complied with a command
     _____ Used the Household Rules worksheet to establish some of the rules that you think are most important and wrote these ideas down on the handout
     _____ Continued playing with your child for at least 10 minutes each day
     _____ Ignored inappropriate responses to commands
     _____ Avoided arguing with your child about rules and commands
     _____ Used a distraction or diversion after you told your child that he or she cannot do something
     _____ Made a list of behaviors you would like to see less of on the Behavior Record handout
     _____ Gave commands only when you were prepared to follow through with them
     _____ Chose an appropriate and safe place for Time Out
     _____ Explained to your child how Time Out will work and when it will be used
     _____ On the "Record Sheet: Commands and Time Out" handout, wrote down an example of a situation when you used Time Out for noncompliance

Video 4: Handling Misbehavior

Ask the parent if they:
_____ Completed all videos
_____ Completed handouts in the workbook
_____ Engaged in the following behaviors:
     _____ Decreased the number of commands given to those that are most important
     _____ Gave commands only when you were prepared to follow through with them
     _____ Ignored inappropriate or annoying behaviors that are non-destructive
     _____ Used a distraction or diversion when possible
     _____ Praised your child for appropriate behavior
     _____ Practiced using self-praise and challenging negative thoughts with positive coping thoughts
     _____ Completed self-talk and positive coping statements
     _____ Identified behaviors that might result in a logical or natural consequence and identified what privileges would be appropriate to remove
     _____ From your list of negative behaviors you wanted to see less of, thought of the opposite behavior, then systematically praised this positive behavior every time it occurred during the week
     _____ Used the self-talk in the Problem Situations handout to record upsetting thoughts you had in problem situations, and wrote down some alternative calming thoughts
     _____ Chose a problem to discuss with your child using the problem-solving approach and recorded the interaction on the "Record Sheet: Problem Solving" handout
     _____ Thought of some ways to teach your child how to be verbally assertive
     _____ Reviewed your list of behaviors you wanted to see more of and less of and thought of the parenting strategy that works for each one

REFERENCES

Allen, K. D., & Warzak, W. J. (2000). The problem of parental nonadherence in clinical behavior analysis: Effective treatment is not enough. Journal of Applied Behavior Analysis, 33, 373-391.

Arnold, D. S., O'Leary, S. G., Wolff, L. S., & Acker, M. M. (1993). The parenting scale: A measure of dysfunctional parenting in discipline situations. Psychological Assessment, 8(2), 137-144.

Barlow, D. H., & Hersen, M. (1984). Single-case experimental designs: Strategies for studying behavior change (2nd ed.). New York: Pergamon.

Barlow, J., & Stewart-Brown, S. L. (2000). Review article: Behavior problems and parent-training programs. Journal of Developmental and Behavioral Pediatrics, 21, 356-370.

Barnett, W. S. (1995). Long-term effects of early childhood programs on cognitive and school outcomes. The Future of Children, 5, 25-50.

Bloomquist, M. L., & Schnell, S. (2002). Helping children with aggression and conduct problems: Best practices for intervention. New York: Guilford Press.

Breitenstein, M., Hill, C., & Gross, D. (2009). Understanding disruptive behavior problems in preschool children. Journal of Pediatric Nursing, 24, 3-12.

Brestan, E. V., & Eyberg, S. M. (1998). Effective psychosocial treatments of conduct disordered children and adolescents: 29 years, 82 studies, and 5,272 kids. Journal of Clinical and Consulting Psychology, 27, 180-189.

Brossart, D., Parker, R., Olson, E., & Mahadevan, L. (2005). The relationship between visual analysis and five statistical analyses in a simple AB single-case research design. Behavior Modification, 30, 531-563.

Busk, P., & Serlin, R. (1992). Meta-analysis for single-participant research. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case research design and analysis: New directions for psychology and education (pp. 187-212). Mahwah, NJ: Erlbaum.

Campbell, S. B. (1997). Behavior problems in preschool children: Developmental and family issues. Advances in Clinical Child Psychology, 19, 1-26.

Capage, L. C., Foote, R., McNeil, C. B., & Eyberg, S. M. (1998). Parent-child interaction therapy: An effective treatment for young children with conduct problems. Behavior Therapist, 21, 137-138.

Carr, J. E., Austin, J. L., Britton, L. N., Kellum, K. K., & Bailey, J. S. (1999). An assessment of social validity trends in applied behavior analysis. Behavioral Interventions, 14, 223-231.

Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7-18.

Chorpita, B. F. (2003). The frontier of evidence-based practice. In A. E. Kazdin & J. R. Weisz (Eds.), Evidence-based psychotherapies for children and adolescents (pp. 42-59). New York: Guilford Press.

Christophersen, E. R., & Mortweet, S. L. (2003). Parenting that works: Building skills that last a lifetime. Washington, DC: American Psychological Association.

Costello, E. J., Egger, H., & Angold, A. J. (2005). Ten-year research update review: The epidemiology of child and adolescent psychiatric disorders. Journal of the American Academy of Child and Adolescent Psychiatry, 44, 972-986.

Cowen, R. J., & Sheridan, S. M. (2003). Investigating the acceptability of behavioral interventions in applied conjoint behavioral consultation: Moving from analog conditions to naturalistic settings. School Psychology Quarterly, 18, 1-21.
Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Currie, J. (2001). Early childhood education programs. Journal of Economic Perspectives, 15, 213-238.

Detrich, R. (1999). Increasing treatment fidelity by matching interventions to contextual variables within the educational setting. School Psychology Review, 28, 608-620.

Dunst, C. J., Hamby, D. W., & Trivette, C. M. (2004). Evidence-based approaches to early childhood development. Centerscope, 4, 1-10.

Elliott, S. N., Von Brock, M. B., & Robertson, S. (1991). Response cost as a classroom intervention: Teachers' and psychologists' ratings of acceptability. Canadian Journal of School Psychology, 29, 43-52.

Farrington, D. P. (1995). The development of offending and antisocial behaviour from childhood: Key findings from the Cambridge Study in Delinquent Development. Journal of Child Psychology and Psychiatry, 36, 929-964.

Feil, E. G., Walker, H., Severson, H., & Ball, A. (2000). Proactive screening for emotional/behavioral concerns in Head Start preschools: Promising practices and challenges in applied research. Behavioral Disorders, 26, 13-25.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).

Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes.

Gilliam, W. S. (2005). Prekindergarteners left behind: Expulsion rates in state prekindergarten systems. New Haven, CT: Yale University Child Study Center.

Gordon, D. A. (2000). Parent training via CD-ROM: Using technology to disseminate effective prevention practice. Journal of Primary Prevention, 21, 227-251.

Gordon, D. A., Graves, K., & Arbuthnot, J. (1995). The effect of functional family therapy for delinquents on adult criminal behavior. Criminal Justice and Behavior, 22, 60-73.

Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18, 37-50.

Gresham, F. M. (1998). Designs for evaluating behavior change: Conceptual principles of single-case methodology. In T. S. Watson & F. M. Gresham (Eds.), Handbook of child behavior therapy (pp. 23-40). New York: Plenum.

Gresham, F. M., Gansle, K. A., Noell, G. H., Cohen, S., & Rosenblum, G. (1993). Treatment integrity of school-based behavioral intervention studies: 1980-1990. School Psychology Review, 22, 254-272.

Head Start Information Center. www.headstarinfo.org.

Heinrichs, N., Bertram, H., Kuschel, A., & Hahlweg, K. (2005). Parent recruitment and retention in a universal prevention program for child behavior and emotional problems: Barriers to research and program participation. Prevention Science, 6(4), 275-286.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179.

Jaeschke, R., Singer, J., & Guyatt, G. (1989). Measurement of health status: Ascertaining the minimal clinically important difference. Controlled Clinical Trials, 10, 407-415.

Jones, K. M., Wickstrom, K. F., & Friman, P. C. (1997). The effects of observational feedback on treatment integrity in school-based behavioral consultation. School Psychology Quarterly, 12, 316-326.
Kaiser, N. M., McBurnett, K., & Pfiffner, L. J. (2011). Child ADHD severity and positive and negative parenting as predictors of social functioning: Evaluation of three theoretical models. Journal of Attention Disorders, 15, 193-203.

Kamphaus, R. W., & Reynolds, C. R. (1998). Behavior Assessment System for Children: Monitor for ADHD. Circle Pines, MN: American Guidance Services.

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.

Kazdin, A. E. (2005). Parent management training: Treatment for oppositional, aggressive, and antisocial behavior in children and adolescents. New York: Oxford University Press.

Knitzer, J. (2000). Early childhood mental health services: A policy and systems development perspective. In J. P. Shonkoff & S. J. Meisels (Eds.), Handbook of early childhood intervention (2nd ed., pp. 416-438). New York: Cambridge University Press.

Kratochwill, T. R., Elliott, S. N., Loitz, P. A., Sladeczek, I., & Carlson, J. S. (2003). Conjoint consultation using self-administered manual and videotape parent-teacher training: Effects on children's behavioral difficulties. School Psychology Quarterly, 18, 269-302.

Kratochwill, T., & Stoiber, K. (2002). Evidence-based interventions in school psychology: Conceptual foundations for the procedural and coding manual of Division 16 and Society for the Study of School Psychology Task Force. School Psychology Quarterly, 17, 341-389.

Lahey, B. B., Loeber, R., Hart, E. L., Frick, P. J., Applegate, B., Zhang, Q., et al. (1995). Four-year longitudinal study of conduct disorder in boys: Patterns and predictors of persistence. Journal of Abnormal Psychology, 104, 83-93.

Lane, K. L., Beebe-Frankenberger, M. E., Lambros, K. M., & Pierson, M. (2001). Designing effective interventions for children at risk for antisocial behavior: An integrated model of components necessary for making valid inferences. Psychology in the Schools, 38, 365-379.

Lavigne, J. V., Gibbons, R. D., Christoffel, K. K., Arend, R., Rosenbaum, D., Binns, H., et al. (1996). Prevalence rates and correlates of psychiatric disorders among preschool children. Journal of the American Academy of Child and Adolescent Psychiatry, 35, 204-214.

LeBuffe, P. A., & Naglieri, J. A. (1999). The Devereux Early Childhood Assessment. Lewisville, NC: Kaplan Press Publishing.

LeBuffe, P. A., & Shapiro, V. B. (2004). Lending "strength" to the assessment of preschool social-emotional health. The California School Psychologist, 9, 51-61.

Lien, M. T., & Carlson, J. S. (2009). Psychometric properties of the Devereux Early Childhood Assessment in a Head Start sample. Journal of Psychoeducational Assessment, 27, 386-396.

Mains, J. A., & Scogin, F. G. (2003). The effectiveness of self-administered treatments: A practice-friendly review of the research. Journal of Clinical Psychology, 59, 237-246.

Manolov, R., & Solanas, A. (2008). Comparing N = 1 effect size indices in the presence of autocorrelation. Behavior Modification, 32, 860-875.

Mash, E. J., & Dozois, D. J. A. (2003). Child psychopathology: A developmental systems perspective. In E. J. Mash & R. A. Barkley (Eds.), Child psychopathology (pp. 3-71). New York: Guilford Press.

Masten, A., & Garmezy, N. (1985). Risk, vulnerability, and protective factors in developmental psychopathology. In B. Lahey & A. Kazdin (Eds.), Advances in clinical child psychology (Vol. 8, pp. 1-52). New York: Plenum Press.

Maughan, D. R., Christiansen, E., Jenson, W. R., Olympia, D., & Clark, E. (2005). Behavioral parent training as a treatment for externalizing behaviors and disruptive behavior disorders: A meta-analysis. School Psychology Review, 34, 267-286.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed., pp. 10-12). Newbury Park, CA: Sage.

Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11, 247-266.

National Research Council and Institute of Medicine. (2000). From neurons to neighborhoods: The science of early childhood development (J. Shonkoff & D. Phillips, Eds.). Washington, DC: National Academy Press.

Noell, G. H., Witt, J. C., LaFleur, H., Mortenson, B. P., Ranier, D. D., & LeVelle, J. (2001). Increasing intervention implementation in general education following consultation: A comparison of two follow-up strategies. Journal of Applied Behavior Analysis, 33, 271-284.

O'Brien, M. (1996). Child-rearing difficulties reported by parents of infants and toddlers. Journal of Pediatric Psychology, 21, 433-446.

Ogg, J. A., & Carlson, J. S. (2009). The self-administered Incredible Years Parent Training Program: Perceived effectiveness, acceptability, and integrity with children exhibiting symptoms of Attention-Deficit/Hyperactivity Disorder. Journal of Evidence-Based Practices for Schools, 10, 143-165.

Olive, M. L., & Smith, B. W. (2005). Effect size calculations and single subject designs. Educational Psychology: An International Journal of Experimental Educational Psychology, 25, 313-324.

Patterson, G., & Gullion, M. (1968). Living with children: New methods for parents and teachers. Champaign, IL: Research Press.

Patterson, G. (1982). Coercive family process. Eugene, OR: Castalia.

Patterson, G. R., Capaldi, D., & Bank, L. (1991). An early starter model for predicting delinquency. In D. Pepler & R. K. Rubin (Eds.), The development and treatment of childhood aggression. Hillsdale, NJ: Lawrence Erlbaum Associates.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15, 477-492.

Phaneuf, L., & McIntyre, L. L. (2011). The application of a three-tier model of intervention to parent training. Journal of Positive Behavior Interventions, 13, 198-207.

Pianta, R. C., & Caldwell, C. B. (1990). Stability of externalizing symptoms from kindergarten to first grade and factors related to instability. Development and Psychopathology, 2, 247-258.

Qi, C. H., & Kaiser, A. P. (2003). Behavior problems of preschool children from low-income families: A review of the literature. Topics in Early Childhood Special Education, 23, 188-216.

Reid, J. (1993). Prevention of conduct disorder before and after school entry: Relating interventions to developmental findings. Development and Psychopathology, 5, 243-262.

Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavior interventions: A review of literature. School Psychology Review, 16, 212-227.

Report of the Surgeon General's Conference on Children's Mental Health: A National Action Agenda. (2000). Washington, DC: US Public Health Service.

Rhymer, K. N., Evans-Hampton, T. N., McCurdy, M., & Watson, T. S. (2002). Effects of varying levels of treatment integrity on toddler aggressive behavior. Special Services in the Schools, 18, 75-82.

Rutter, M. (2005). Environmentally mediated risks for psychopathology: Research strategies and findings. Journal of the American Academy of Child and Adolescent Psychiatry, 44, 3-18.
Sanders, M. R., Markie-Dadds, C., Tully, L. A., & Bor, W. (2000). The Triple P-Positive Parenting Program: A comparison of enhanced, standard, and self-directed behavioral family intervention for parents of children with early onset conduct problems. Journal of Consulting and Clinical Psychology, 68, 624-640.

Schoenwald, S. K., & Hoagwood, K. (2001). Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services, 52, 111-119.

Scott, S., Spender, Q., Doolan, M., Jacobs, B., & Aspland, H. (2001). Multicenter controlled trial of parenting groups for childhood antisocial behavior in clinical practice. British Medical Journal, 323, 194-203.

Scruggs, T. E., & Mastropieri, M. A. (2001). How to summarize single participant research: Ideas and applications. Exceptionality, 9, 227-244.

Scruggs, T. E., Mastropieri, M. A., & Casto, G. (1987). The quantitative synthesis of single-subject research: Methodology and validation. Remedial and Special Education, 8, 24-33.

Serketich, W. J., & Dumas, J. E. (1996). The effectiveness of behavioral parent training to modify antisocial behavior in children: A meta-analysis. Behavior Therapy, 27, 171-186.

Shernoff, E., Kratochwill, T., & Stoiber, K. (2002). Evidence-based interventions in school psychology: An illustration of task force coding criteria using single-participant research designs. School Psychology Quarterly, 17, 390-422.

Southam-Gerow, M. A., Rodriguez, A., Chorpita, B. F., & Daleiden, E. L. (2012). Dissemination and implementation of evidence-based treatments for youth: Challenges and recommendations. Professional Psychology: Research and Practice, 1, 1-8.

Sterling-Turner, H. E., & Watson, T. S. (2002). An analog investigation of the relationship between treatment acceptability and treatment integrity. Journal of Behavioral Education, 11, 39-50.

Stern, S. B., Alaggia, R., Watson, K., & Morton, T. R. (2008). Implementing an evidence-based parenting program with adherence in the real world of community practice. Research on Social Work Practice, 18, 543-554.

Stewart, L. S., & Carlson, J. S. (2010). Investigating parental acceptability of the Incredible Years Self-Administered Parent Training Program for children presenting externalizing behavior problems. Journal of Applied School Psychology, 26, 162-175.

Stewart-Brown, S., Patterson, J., Mockford, C., Barlow, J., Klimes, I., & Pyper, C. (2004). Impact of a general practice-based group parenting program: quantitative

Task Force on Evidence-Based Interventions in School Psychology. (2003). Procedural and Coding Manual for Review of Evidence-Based Interventions. Accessed .

Telzrow, C. F. (1995). Best practices in facilitating treatment adherence. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology-III (pp. 501-510). Washington, DC: National Association of School Psychologists.

The Incredible Years website. www.incredibleyears.com.

United States Centers for Disease Control (2004). Best practices of youth violence prevention: A sourcebook for community action. http://www.cdc.gov/ncipc/dvp/bestpractices.htm

Walcott, C. M., Carlson, J. S., & Beamon, H. L. (2009). Effectiveness of a self-administered training program for parents of children with ADHD. School Psychology Forum: Research in Practice, 3, 44-62.

Webster-Stratton, C. (1992). The Incredible Years parent training: For parents of children ages 3-8. Video series. Toronto: Umbrella Press.

Webster-Stratton, C. (1998). Preventing conduct problems in Head Start children: Strengthening parenting competencies. Journal of Consulting and Clinical Psychology, 66, 715-730.
Webster-Stratton, C. (2000). The Incredible Years training series. Office of Juvenile Justice and Delinquency Prevention Juvenile Justice Bulletin, 1-22.

Webster-Stratton, C., & Hammond, M. (1990). Predictors of treatment outcome in parent training for families with conduct problem children. Behavior Therapy, 21, 319-337.

Webster-Stratton, C., & Hammond, M. (1997). Treating children with early-onset conduct problems: A comparison of child and parent training interventions. Journal of Consulting and Clinical Psychology, 65, 93-109.

Webster-Stratton, C., & Hammond, M. (1998). Conduct problems and level of social competence in Head Start children: Prevalence, pervasiveness, and associated risk factors. Clinical Child and Family Psychology Review, 1, 101-124.

Webster-Stratton, C., & Herman, K. C. (2010). Disseminating the Incredible Years Series early-intervention program: Integrating and sustaining services between school and home. Psychology in the Schools, 47, 36-54.

Webster-Stratton, C., Hollinsworth, T., & Kolpacoff, M. (1988). Self-administered videotape therapy for families with conduct problem children: Comparison with two cost-effective treatments and a control group. Journal of Consulting and Clinical Psychology, 56, 558-566.

Webster-Stratton, C., Hollinsworth, T., & Kolpacoff, M. (1989). The long-term effectiveness and clinical significance of three cost-effective training programs for families with conduct-problem children. Journal of Consulting and Clinical Psychology, 57, 550-553.

Webster-Stratton, C., Reid, M. J., & Hammond, M. (2001). Preventing conduct problems, promoting social competence: A parent and teacher training partnership in Head Start. Journal of Clinical Child and Adolescent Psychology, 30, 283-302.

Webster-Stratton, C., Reid, M. J., & Hammond, M. (2004). Treating children with early-onset conduct problems: Intervention outcomes for parent, child, and teacher training. Journal of Clinical Child and Adolescent Psychology, 33, 105-124.

Webster-Stratton, C., Reid, M. J., & Hammond, M. (2008). LIFT Parenting Practices Interview information. Unpublished manuscript. Retrieved January 4, 2012, from http://www.incredibleyears.com/Resources/.

Webster-Stratton, C., & Taylor, T. (2001). Nipping early risk factors in the bud: Preventing substance abuse, delinquency, and violence in adolescence through interventions targeted at young children (0-8 years). Prevention Science, 2, 165-192.

Weisz, J. R., Hawley, K. M., & Doss, A. J. (2004). Empirically tested psychotherapies for youth internalizing and externalizing problems and disorders. Child and Adolescent Psychiatric Clinics of North America, 13, 729-816.

Wheeler, J. L., Baggett, B. A., Fox, J., & Blevins, L. (2006). Treatment integrity: A review of intervention studies conducted with children with autism. Focus on Autism and Other Developmental Disabilities, 21, 45-54.

Williford, A. P., & Shelton, T. L. (2008). Using mental health consultation to decrease disruptive behavior in preschoolers: Adapting an empirically-supported intervention. The Journal of Child Psychology and Psychiatry, 49, 191-200.

Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom intervention strategies. In T. R. Kratochwill (Ed.), Advances in school psychology (pp. 251-288). Hillsdale, NJ: Lawrence Erlbaum Associates.