ALL OUTCOMES MATTER: EVALUATING PRACTITIONER IMPLEMENTED NATURALISTIC DEVELOPMENTAL BEHAVIORAL INTERVENTIONS WITHIN PRESCHOOL CLASSROOMS

By

Sophia D’Agostino

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Human Development and Family Studies—Doctor of Philosophy

2019

ABSTRACT

ALL OUTCOMES MATTER: EVALUATING PRACTITIONER IMPLEMENTED NATURALISTIC DEVELOPMENTAL BEHAVIORAL INTERVENTIONS WITHIN PRESCHOOL CLASSROOMS

By

Sophia D’Agostino

Naturalistic Developmental Behavioral Interventions (NDBIs) are influenced by applied behavioral and developmental sciences and are considered evidence-based practices for children with disabilities. NDBIs are a strong match for early childhood practitioners in inclusive preschool classrooms because they embed theoretical views often held by early childhood practitioners while encompassing empirically based strategies for children with disabilities. NDBIs produce positive objective outcomes for children and are designed to be socially valid approaches. Yet, to successfully bring NDBIs into the unique cultural context of inclusive preschool classrooms, the procedures of both the training and coaching intervention and the NDBI approach must be socially valid to practitioners and stakeholders. However, the prevalence of social validity assessment within published intervention studies is low. Along with objectively measured behaviors, researchers must holistically investigate social validity to increase the accessibility of NDBIs implemented by practitioners in natural settings like inclusive preschool classrooms. Study 1 systematically reviews the published practitioner implemented NDBI research to determine current social validity practices related to the goals, procedures, and outcomes of each aspect of intervention implementation. Twenty-two practitioner implemented NDBI studies were identified; of those, 11 conducted social validity assessment and were further reviewed to evaluate the components (i.e., goals, procedures, and outcomes) and features of the social validity assessment practices. An in-depth analysis of the relationship of social validity data with outcome data and the use of social validity results in drawing effectiveness conclusions was also conducted. Results indicate that social validity assessment was lacking within practitioner implemented NDBI studies. Training and coaching practitioners to implement NDBIs effectively with high social validity outcomes requires exploration of innovative models. The National Professional Development Center on Autism Spectrum Disorder (NPDC) provides a model for practitioner training and coaching as well as online training modules, including modules on NDBIs. Yet the NPDC model requires in-person coaching, which may be less time- and cost-effective than ongoing technology-based coaching. Study 2 evaluates the use of the Adapted NPDC Model, which incorporates technology-based coaching and self-evaluation as well as extensive assessment of social validity. A single-case multiple probe across participants design was employed to evaluate the effects of the Adapted NPDC Model on six participant dyads’ practitioner implementation fidelity, frequency of communication opportunities, and frequency of independent child target skill behavior. Within five sessions, all six practitioners reached the pre-set performance criterion of two consecutive sessions above 90% implementation fidelity.
Changes in practitioner and child behavior generalized to a different activity context and maintained up to five weeks post intervention. Social validity results suggest direct and indirect consumer satisfaction across goals, procedures, and outcomes related to the Adapted NPDC intervention and NDBI implementation. Taken together, Study 1 and 2 findings have implications for research and practice related to the service-need gap in training practitioners of inclusive preschool classrooms. These studies illuminate the importance of interactions and relationships within NDBI implementation research. Overall implications for research and practice as well contributions to the field are also discussed. ACKNOWLEDGEMENTS Personally, experience has been my best teacher. Through my years of studies at Michigan State University, I was afforded many experiences and life events that I have learned so much from. I am thankful for my graduate school years during which I have gained invaluable knowledge. I would like to first thank God for guidance and support through these last four years. Also, many individuals provided and supported my learning experiences and their assistance deserves acknowledgement. First, I wish to acknowledge the assistance and support provided by my committee. I would like to express my very great appreciation to Dr. Sarah Douglas. Sarah, thank you for accepting me as your student and continually supporting me while holding high standards. Many times, you went above and beyond to develop my skills. Dr. Claire Vallotton and Dr. Hope Gerde, thank you for providing me with very valuable feedback and insight, which enhanced this dissertation greatly. I am particularly grateful for the assistance given by Dr. Josh Plavnick. Thank you for nurturing my interest in applied behavior analysis, supporting my first single-case research project, and for supervising independent field work. Again, thank you to my committee for teaching me, challenging me, advising me, and supporting me over the past four years. I wish to also acknowledge the help provided by Laurie Linscott. Thank you for allowing me to focus on topics of interest, like inclusion, during my time at the Child Development Laboratories; your support along these four years is truly appreciated. My special thanks are extended to my friends and colleagues for always cheering me on and reminding me to stay grounded. Finally, I wish to thank my parents and my husband for their unwavering support and encouragement as well as their unconditional love throughout my graduate career. iv My dissertation research could not have been possible without the monetary support of the department of Human Development and Family Studies. This research was also made possible through the resources afforded to me through Dr. Sarah Douglas. Her willingness to provide resources so generously has been very much appreciated. Also, thank you to Ana Duenas and Dr. Libbey Horton for spending your time coding studies and video for this research. Finally, thank you to the local public schools who welcomed me into their classrooms to conduct research. I would like to sincerely thank the amazing practitioners who consented to participate in my study. Without you, this research would not have been possible. At times, it is difficult to comprehend that this journey is ending. I am so thankful for the experiences that have set me on a path toward a successful career. 
It is my greatest hope to continue to work with early childhood practitioners with the shared goal of improved child outcomes utilizing socially valid practices. Thank you to everyone who played a part in my graduate school experience; no matter the size of your role, it was necessary, and I am eternally grateful.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1: INTRODUCTION
Background
Naturalistic Developmental Behavioral Interventions
Purpose
Theoretical Framework
Behavioral Approaches
Applied behavior analysis
Developmental Learning Principles
Constructivism
Sociocultural theory
Social-pragmatic theory of language acquisition
Merging Theories
Practitioner Implemented NDBIs in Inclusive Settings
Social Validity
Support for Practitioners
Chapter Summary
REFERENCES
CHAPTER 2: STUDY 1
Abstract
Introduction
Practitioner Implemented NDBIs
Purpose of Social Validity
Previous Reviews
Method
Search Procedures
Initial screening
Reliability of study inclusion
Review Procedures
Coding form
Training and interrater agreement for coding
Results
Social Validity Assessment Components
Practitioner participants
Child participants
Social Validity Features
Types of instruments
Respondents
Administration of social validity assessments
Relationship Between Social Validity, Intervention Outcomes, and Maintenance
Social Validity Assessment for Conclusions
Discussion
Lack of Comprehensive Social Validity Assessment
Quality of Social Validity Description
Value of Social Validity Results
Limitations
Implications for Future Research and Practice
Conclusion
REFERENCES
CHAPTER 3: STUDY 2
Abstract
Introduction
Technology-based Intervention Training
National Professional Development Center
Adapted NPDC model
The Present Study
Method
Participants
Practitioners
Children
Dyad A
Dyad B
Dyad C
Dyad D
Dyad E
Dyad F
Coach
Setting
Materials
Dependent Variables and Measurement
Practitioner implementation fidelity
Target skill communication opportunities
Child target skill
Interobserver agreement
Experimental Design
Procedures
Baseline
Intervention
Treatment integrity
Maintenance
Generalization
Social Validity
Results
Practitioner Behavior
Implementation fidelity
Dyad A
Dyad B
Dyad C
Dyad D
Dyad E
Dyad F
Frequency of communication opportunities
Dyad A
Dyad B
Dyad C
Dyad D
Dyad E
Dyad F
Child Behavior
Child target social communication skill
Dyad A
Dyad B
Dyad C
Dyad D
Dyad E
Dyad F
Social Validity
Goals
Procedures
NDBI procedures
Training and coaching procedures
Outcomes
Practitioner self-efficacy
Child outcomes
Discussion
Innovative Delivery Method
Social Validity
Relationships with Child Learning Outcomes
Limitation and Future Research Directions
Implications for Practice
Conclusion
APPENDICES
APPENDIX A NPDC Model
APPENDIX B Adapted NPDC Model
APPENDIX C Practitioner Recruitment Letter
APPENDIX D Practitioner Consent
APPENDIX E Practitioner Participant Demographic Questionnaire
APPENDIX F Parent Recruitment Letter
APPENDIX G Parental Consent
APPENDIX H Child Information Questionnaire
APPENDIX I Procedural Fidelity Checklists
APPENDIX J Target Skill Prioritization
APPENDIX K Intervention Activity Form
APPENDIX L Self-Evaluation Checklists
APPENDIX M Coaching Fidelity Checklist
APPENDIX N Teacher Efficacy for the Inclusion of Young Children with Disabilities Scale
APPENDIX O Practitioner Intervention Rating Profile
APPENDIX P Outside Rater Intervention Rating Profile
APPENDIX Q Training and Coaching Social Validity Scale
REFERENCES
CHAPTER 4: COMBINED CONCLUSION
Implications and Contributions
Study 1
Study 2
Intervention characteristics
Fidelity factors
Individual characteristics
Relationships Matter
Competent coaching
Conclusion
REFERENCES

LIST OF TABLES

Table 2.1 Components of Social Validity Assessment by Participants
Table 2.2 Reported Features of Social Validity Assessments
Table 2.3 Study Outcome, Maintenance, and Social Validity Results
Table 3.1 NDBI Approach Descriptions
Table 3.2 Practitioner Participant Information
Table 3.3 Child Participant Information

LIST OF FIGURES

Figure 2.1 PRISMA Flow Chart of Study Inclusion
Figure 3.1 Practitioner Implementation Fidelity and Frequency of Child Behavior
Figure 3.2 Frequency of Communication Opportunities and Child Behavior

CHAPTER 1: INTRODUCTION

Background

Inclusive environments are an important piece of the educational landscape for young children with disabilities. Early childhood inclusion was established over 30 years ago with the 1986 reauthorization of the Education of the Handicapped Act (Public Law 99-457), which required services for young children with disabilities to be delivered in least restrictive or natural environments by adults who regularly interact with the child. Definitions of early childhood inclusion have evolved over time to include access, participation, and supports in the least restrictive environment for children with disabilities (Division for Early Childhood/National Association for the Education of Young Children, 2009). An inclusive model is effective if practitioners are proficient in evidence-based practices to support the development and learning of children with special needs (Odom, Buysse, & Soukakou, 2011). One specialized category of intervention practices designed to be delivered in the young child’s natural context is referred to as Naturalistic Developmental Behavioral Interventions (NDBI; Schreibman et al., 2015). Although NDBIs were originally designed for children with autism spectrum disorder (ASD), these intervention practices encompass skills in a variety of child development domains and are therefore appropriate for children with differing needs.

Naturalistic Developmental Behavioral Interventions

Schreibman and colleagues (2015) refer to evidence and research-based intervention practices that are based firmly on methods from both developmental and behavioral science as NDBIs. NDBIs have several core components including developmentally appropriate learning targets, intervention in the natural environment, and instructional strategies from both behavioral and developmental science (Schreibman et al., 2015). Additionally, NDBIs support the practitioner in selecting individualized and developmentally sequenced learning targets for the child. NDBIs are also incorporated into the child’s natural environment, routines, and activities by individuals with whom the child has a relationship (e.g., parent, caregiver, teacher).
Within NDBIs, developmental strategies are merged with behavioral strategies to support child outcomes making them an excellent match for inclusive preschool classroom intervention (Odom & Wolery, 2003; Schreibman et al., 2015). Purpose While numerous studies have provided empirical support for the effectiveness of NDBIs (see Schreibman et al., 2015), there is a need for continued research to understand the necessary components within treatment packages that are acceptable for individual practitioner use with different children and contexts while remaining effective. To fit the cultural environment of the inclusive preschool classroom, the procedures for training and implementation of NDBIs must be socially valid. The procedures for assessing social validity, procedures for training, and procedures for establishing fidelity must be accessible to researchers. Therefore, this dissertation includes two studies: (1) a systematic review of social validity assessment practices utilized in practitioner implemented NDBI studies, and (2) a single case investigation of practitioner coaching and NDBI implementation on participant outcomes including fidelity of implementation, frequency of communication opportunities, child behavior outcomes, and social validity. Theoretical Framework The overall goal of these studies is to expand the literature base of practitioner implemented NDBI studies and contribute to closing the service-need gap in inclusive preschool classrooms. Historically, NDBIs have drawn from various theoretical perspectives that seemed 2 incompatible, yet NDBIs are grounded in traditional behaviorist perspectives as well as traditional developmental perspectives (Schreibman et al., 2015). These theoretical frameworks are integrated, and together, provide intervention strategies that are empirically supported and compatible with early intervention legislation. NDBIs merge developmental perspectives, often held by early childhood practitioners, and behavioral perspectives, often held by practitioners who design specialized supports for children with disabilities. When brought together, they provide the perfect union to help early childhood practitioners support children with disabilities, who may not respond to developmental approaches alone (Odom & Wolery, 2003). To further understand the development of NDBIs, their historical roots are identified and discussed next. Behavioral Approaches Behavioral approaches play an essential role in NDBIs. Behaviorism is the theory that an individual’s behavior is influenced by their environment. One dominant theory within behaviorism is operant conditioning. Operant conditioning was established by Skinner (1953) and is a commonly used behavioral approach. In operant conditioning, positive reinforcement is used to increase a behavior (Cooper, Heron, & Heward, 2007). For example, a child who is learning to clean up their toys will be more likely to continue to clean up their toys if they are provided with positive reinforcement from a caregiver – such as praise when they are cleaning up their toys. Operant conditioning establishes a relationship between behavior (e.g., cleaning up toys) and its consequences (e.g., positive reinforcement; Cooper et al., 2007). In NDBIs, natural, positive reinforcement is used in every approach to increase the child’s target skill. Applied behavior analysis. As behaviorism evolved and approaches were published, applied behavior analysis emerged (ABA; Baer, Wolf, & Risley, 1968). 
The main focus of ABA is to apply behavioral principles to understand how changes in the environment affect human 3 behavior (Baer et al., 1968). Baer and colleagues (1968) proposed that ABA should be applied, behavioral, analytic, technological, systematic, effective, and promote generalization of skills. Applied behavior analysis procedures involve the manipulation of one or more parts of the three- term contingency (Cooper et al., 2007); the antecedent (i.e., what happens before the behavior), the behavior, and the consequence (i.e., what happens directly after the behavior). In 1987, Lovaas adopted a structured ABA approach to treat young children with ASD referred to as discrete trial training (DTT). DTT is a highly-structured intervention where skills are broken into small components and taught one at a time in a one-on-one format with the child and an adult. During DTT, the child’s correct responses are reinforced. This method, although effective in teaching skills, has limitations including a lack of skill generalization (Delprato, 2001), tendency for prompt dependence (Smith, 2001), and the display of challenging behaviors, like escape and avoidance, by the child (Steege, Mace, Perry, & Longenecker, 2007). DTTs limitations led to the increased popularity and continued creation of NDBIs, which were viewed as more appropriate and acceptable (i.e., socially valid) to use in natural, inclusive settings (Odom, 2000; Schreibman et al., 2015; Wolery, 2005). Developmental Learning Principles Child development is the process through which humans grow and develop from infancy to adulthood (Berk, 2013). Numerous theories exist to explain child development. These theories outline the belief that development is dependent on the context, experiences, and interactions of the child. Developmentally appropriate practice (DAP; Copple & Bredekamp, 2009) supports cognitive learning theories, especially the work of Piaget, Bruner, and Vygotsky, which later formed the concepts of constructivism and sociocultural theory. 4 Constructivism. In 1952, Piaget established the theory, now referred to as constructivism, in which child development progresses through distinct stages at specific times, which include sensorimotor, preoperational, concrete operations, and formal operations. The theory emphasizes that children interact with their environment (i.e., physical and social) as they learn and develop through each stage. The child is an active participant in “constructing” an understanding of the world in which they live. The theory helped explain how children engage with the environment, develop symbolic play, and contributed a clear explanation of the preoperational stage of development where children begin to learn language and see the world from the perspective of others (Piaget & Inhelder, 1969). The constructivist approach to learning was also influenced by Bruner (1978). Bruner’s framework emphasized the active process of learning as children construct new concepts and ideas based on their knowledge of the world. Instead of stages, Bruner believed that development was a continuous process in which the adult plays an important role. Bruner (1983) also emphasized the importance of language and how the use of words can assist in the development of the concepts they represent. This approach to constructivism emphasizes that children learn through developmentally appropriate experiences that they construct and initiate in the natural context with adult facilitation. Sociocultural theory. 
Much of sociocultural theory is based on the work of Vygotsky (1978), which introduced the idea that children learn when they are taught within their “zone of proximal development”. The zone of proximal development is the level just above the child’s current cognition. In this approach, adults and more advanced peers play a role in socially mediating learning through the use of “scaffolding”, which includes models of behavior, thoughtful planning of meaningful activities, discourse during activities, and asking the child 5 questions. Sociocultural theory emphasizes the social context in which children learn through observation, and imitation of adults and peers during natural, every day, motivating activities. Social-pragmatic theory of language acquisition. Another developmental theory that influenced NDBIs is the social-pragmatic theory of language acquisition. This theory, supported by the works of Bruner (1978), Tomasello (2000), and Snow (1977), emphasizes the social nature of learning language. In the process of learning language, children pay attention to social cues. The adult’s responsiveness to the child’s lead results in joint attention and the child’s understanding of the adults’ intent. Specifically, a typically developing child can learn a new word by noting where an adult is looking while the verbal label is given to an object. For instance, a child may learn the label for sink by using the social cue of the adult’s gaze and nod to the sink while saying, “Use the sink to rinse off.” Social-pragmatic theory of language acquisition supports the importance of social engagement and social experiences and its connection to language learning. Merging Theories As the fields of behavioral science and developmental science matured and began to tackle the same early intervention needs, it was apparent that intervention approaches needed to consider advances in both fields when choosing treatment targets and strategies. NDBIs merge the strengths of DAPs and ABA approaches to create intervention practices that are developmentally appropriate, delivered in the child’s natural context, and combine development- enhancing strategies from both fields (Schreibman et al., 2015). The specific strategies that make NDBIs potent were derived from developmental and behavioral perspectives. NDBIs fully meet criteria as ABA techniques including: (a) intervention protocols that are composed of operant teaching techniques; (b) intervention goals that are 6 socially significant; and (c) intervention results are analyzed objectively by assessing a child’s progress before, during and after the intervention (Baer et al. 1968). It was recognized that highly-structured ABA techniques needed to advance to incorporate developmental practices as research demonstrated that learning was facilitated through social engagement (Rogers & Pennington, 1991) and children with ASD followed similar developmental paths (Lifter, Sulzer- Azaroff, Anderson, & Cowdery, 1993; Mundy, Sigman, Ungerer, & Sherman, 1987). Hence, NDBIs also meet DAP components including: (a) age appropriate; (b) individually reflect children’s strengths, interests, and needs through social relationships; and (c) are sensitive to the child’s social and cultural context (Copple & Bredekamp, 2009). For young children with disabilities, NDBIs are friendly for natural implementers like practitioners to increase the quantity and quality of early intervention during natural, ongoing activities. 
One NDBI, identified as an evidence-based practice (Wong et al., 2015), is Pivotal Response Training (PRT). PRT is designed to target pivotal areas of child functioning that, if promoted, would result in widespread changes in other behaviors not specifically targeted during intervention (Koegel, Koegel, Harrower, & Carter, 1999). Providing intervention in pivotal areas, such as communication, teaches children to become increasingly responsive to diverse learning opportunities and interactions in the child’s natural environments. PRT involves presenting the child with a clear opportunity to use the skill that is understandable to the child, uninterrupted, and developmentally appropriate (i.e., the antecedent). Next, the practitioner pauses and waits for the child’s response (i.e., the behavior). Finally, the practitioner reinforces appropriate responses and reasonable attempts with natural reinforcement (rather than artificial or arbitrary rewards) or provides a prompt to the child to complete the task (i.e., the 7 consequence; Stahmer, Suhrheinrich, Reed, Schreibman, & Bolduc, 2011). PRT also involves turn taking with the practitioner, providing choices, and following the child’s lead. Another example of an evidence-based NDBI (Wong et al., 2015) is Incidental Teaching (IT). In IT, the practitioner constructs communication opportunities for the child within the context of natural activities (Hart & Risley, 1975). Through the use of strategic environmental arrangement, the practitioner creates communication opportunities. The practitioner then utilizes systematic prompting and naturally occurring reinforcement to increase the child’s use of the target skill. For instance, the practitioner may place a preferred toy out of reach yet in sight of the child to create opportunities for the child to request items. The merging of theories has allowed researchers to create evidence-based NDBIs, which are more socially valid and more ideal for inclusive preschool practitioners to implement. Practitioner Implemented NDBIs in Inclusive Settings Two thirds of all children with disabilities, ages three through five, participate in an inclusive setting for some portion of their school day (U.S. Department of Education, 2017). The prevalence of inclusive preschool demands effective professional development models for inclusive classroom settings. Recommendations in the field focus on practitioner implementation of evidence-based practices to support the development of children with disabilities (Division for Early Childhood, 2014; Odom et al., 2009). NDBI strategies have been promoted for inclusive early education settings and are included in the Division for Early Childhood Recommended Practices (Wolery, 2000) as they take into consideration the whole child and support development within the natural environment. Although NDBIs are an evidence-based practice suited for inclusive early childhood settings, there is a paucity of research focused on training practitioners to implement NDBIs. This service-need gap may be addressed by following the 8 principles of implementation science, which presumes that all of the methods used within a research study, including intervention practices (e.g., NDBIs) and implementation strategies (e.g., practitioner training), are evidence-based practices and their characteristics promote acceptance and adoption (Dunst, Trivette, & Raab, 2013). One variable to determine acceptance and adoption of the intervention package is social validity. 
Social Validity Social validity can be defined as satisfaction of the intervention package by various consumers (e.g., parents, practitioners, community members; Wolf, 1978). Social validity assessments are imperative in determining the social importance of applied research and adapting intervention packages as needed to promote adoption and maintained use. Without social validation, there is perhaps less chance NDBIs will be implemented effectively by practitioners in the natural classroom environment (Carter, 2010; Kazdin, 1977; Schwartz & Baer, 1991; Wolf, 1978). Improved social validity was a driving force behind the initial emergence of NDBIs as a greater focus on development addressed limitations of highly-structured behavioral approaches appealing to users in the child’s natural environment (Schreibman et al., 2015). Historically, the rate of researchers utilizing social validity assessments in ABA intervention research was low (Schwartz & Baer, 1991), and reports indicate that although prevalence has increased, social validity remains understudied in recent single-case research (Snodgrass, Chung, Meadan, & Hall, 2018). Hence, it can be argued that social validity is a necessary consideration in the selection and effective use of NDBIs by practitioners to ensure acceptability and promote maintained use (Carter, 2010). 9 Future research should explore social validity measurement to determine potential barriers to practitioner implemented NDBIs. Wolf (1978) recommended three components through which consumers (e.g., parents, practitioners, children) can socially validate the intervention package. These include assessing the following: (1) the social significance of the goals; (2) the acceptability of the procedures; and (3) the social importance of the results of the intervention package. Practitioner implemented NDBI studies involve both implementation practices (i.e., intervention training model) and intervention practices (i.e., NDBI). Practitioner adoption and maintained use of NDBIs depends on the effectiveness and social validity outcomes of both of these practices. Support for Practitioners Many early childhood practitioners do not feel equipped to effectively support children with disabilities (Brookman-Frazee, et al., 2010; Stahmer, Collings, & Palinka, 2005) and may require additional training to effectively deliver specialized interventions (Bruns & Mogharreban, 2007; Killoran et al., 2001). Also, practitioners who do report using evidence- based practices report adapting them to fit their preferences and perceived child needs (Stahmer et al., 2005) therefore impairing implementation fidelity and positive intervention outcomes. Providing practitioners with socially valid and evidence-based intervention training to implement NDBIs is needed to address this service-need gap. As implementation science research implies, both the training intervention and NDBI must include evidence-based characteristics that are acceptable to practitioners in order to have intended effects (Dunst et al., 2013). Therefore, social validity factors of practitioner implemented NDBI studies should not be ignored. 10 Chapter Summary It is clear that NDBIs are an evidence-based intervention that integrate behavioral and developmental theories into socially valid interventions implemented by practitioners in inclusive settings. However, there is a need for further research focused on the social validity of practitioner implemented NDBIs. 
For this reason, Study 1 will examine the social validity measurement practices of prior research focused on training practitioners to implement NDBIs. Study 2 will examine an efficient, effective, and socially valid intervention model to train practitioners to implement NDBIs in the inclusive early childhood setting with high fidelity and generalized and maintained behavior change. Each study has the ability to inform early intervention practices. Specifically, through a systematic review, Study 1 will identify the social validity assessment practices incorporated into practitioner implemented NDBI studies and illuminate implications for future research and practice. Informed by Study 1 findings, Study 2 will employ a single-case experimental design to investigate a socially valid intervention package. Study 2 will train practitioners to implement an NDBI in an inclusive preschool classroom to evaluate effectiveness and social validity. Overall, these studies contribute to the broader discussion of the service-need gap related to preschool inclusion and training general education practitioners to implement NDBIs with effective and socially valid outcomes.

REFERENCES

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.
Beavers, A. (2009). Teachers as learners: Implications of adult education for professional development. Journal of College Teaching & Learning, 6, 25.
Berk, L. E. (2013). Child development (9th ed.). Boston: Pearson.
Bruner, J. S. (1978). Learning how to do things with words. In J. S. Bruner & A. Garton (Eds.), Human growth and development: The Wolfson lectures (pp. 62–84). Oxford, England: Clarendon Press.
Bruns, D. A., & Mogharreban, C. C. (2007). The gap between beliefs and practices: Early childhood practitioners’ perceptions about inclusion. Journal of Research in Childhood Education, 21, 229-241.
Carter, S. L. (2010). The social validity manual: A guide to subjective evaluation of behavioral interventions. Boston, MA: Academic Press.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Copple, C., & Bredekamp, S. (2009). Developmentally appropriate practice in early childhood programs serving children from birth through age 8. Washington, DC: National Association for the Education of Young Children.
Delprato, D. J. (2001). Comparisons of discrete-trial and normalized behavioral language intervention for young children with autism. Journal of Autism and Developmental Disorders, 31, 315-325.
Division for Early Childhood. (2014). DEC recommended practices in early intervention/early childhood special education 2014. Retrieved from http://www.decsped.org/recommendedpractices
Division for Early Childhood/National Association for the Education of Young Children. (2009). Early childhood inclusion: A joint position statement of the Division for Early Childhood (DEC) and the National Association for the Education of Young Children (NAEYC).
Dunst, C. J., Trivette, C. M., & Raab, M. (2013). An implementation science framework for conceptualizing and operationalizing fidelity in early childhood intervention studies. Journal of Early Intervention, 35, 85-101. doi:10.1177/1053815113502235
Education of the Handicapped Act Amendments of 1986, Pub. L. 99-457.
Hart, B., & Risley, T. R. (1975). Incidental teaching of language in the preschool. Journal of Applied Behavior Analysis, 8, 411–420.
Individuals with Disabilities Education Act of 2004 (PL 105-17), Part C, Sections 632, 635, and 636.
Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1, 427-452. doi:10.1177/014544557714001
Killoran, J., Templeman, T. P., Peters, J., & Udell, T. (2001). Identifying paraprofessional competencies for early intervention and early childhood special education. Teaching Exceptional Children, 34, 68-73.
Koegel, L. K., Koegel, R. L., Harrower, J. K., & Carter, C. M. (1999). Pivotal response intervention, I: Overview of approach. Journal of the Association for Persons With Severe Handicaps, 24, 174–185.
Koegel, R. L., & Koegel, L. K. (2006). Pivotal response treatments for autism: Communication, social & academic development. Baltimore, MD: Paul H. Brookes Publishing.
Lifter, K., Sulzer-Azaroff, B., Anderson, S. R., & Cowdery, G. E. (1993). Teaching play activities to preschool children with disabilities: The importance of developmental considerations. Journal of Early Intervention, 17, 139–159.
Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55, 3–9. Retrieved from https://thelovaascenter.es/media/attachments/2017/03/08/article_87.pdf
Lovaas, O. I. (2003). Teaching individuals with developmental delays: Basic intervention techniques. Austin, TX: Pro-Ed.
Mundy, P., Sigman, M., Ungerer, J., & Sherman, T. (1987). Nonverbal communication and play correlates of language development in autistic children. Journal of Autism and Developmental Disorders, 17, 349–364.
Odom, S. L. (2000). Preschool inclusion: What we know and where we go from here. Topics in Early Childhood Special Education, 20, 20-27.
Odom, S. L. (2016). The role of theory in early childhood special education and early intervention. In B. Reichow, B. Boyd, E. Barton, & S. Odom (Eds.), Handbook of early childhood special education. Cham: Springer.
Odom, S. L., & Bailey, D. B. (2001). Inclusive preschool programs: Ecology and child outcomes. In M. Guralnick (Ed.), Early childhood inclusion: Focus on change (pp. 253-276). Baltimore, MD: Brookes.
Odom, S. L., Buysse, V., & Soukakou, E. (2011). Inclusion for young children with disabilities: A quarter century of research perspectives. Journal of Early Intervention, 33, 344-356.
Odom, S. L., & Wolery, M. (2003). A unified theory of practice in early intervention/early childhood special education: Evidence-based practices. The Journal of Special Education, 37, 164-173.
Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice, 5, 2.
Piaget, J., & Inhelder, B. (1969). The psychology of the child. New York, NY: Basic Books.
Rogers, S. J., & Dawson, G. (2010). Early Start Denver Model for young children with autism: Promoting language, learning and engagement. New York: The Guilford Press.
Rogers, S. J., & Pennington, B. F. (1991). A theoretical approach to the deficits in infantile autism. Development and Psychopathology, 3, 137–162.
Schreibman, L., Dawson, G., Stahmer, A. C., Landa, R., Rogers, S. J., McGee, G. G., ... & McNerney, E. (2015). Naturalistic developmental behavioral interventions: Empirically validated treatments for autism spectrum disorder. Journal of Autism and Developmental Disorders, 45, 2411-2428.
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice the state of the art? Journal of Applied Behavior Analysis, 24, 189–204.
Skinner, B. F. (1953). Science and human behavior. New York, NY: Macmillan.
Smith, T. (2001). Discrete trial training in the treatment of autism. Focus on Autism and Other Developmental Disabilities, 16, 86–92. doi:10.1177/108835760101600204
Snodgrass, M. R., Chung, M. Y., Meadan, H., & Halle, J. W. (2018). Social validity in single-case research: A systematic literature review of prevalence and application. Research in Developmental Disabilities, 74, 160-173.
Snow, C. E. (1977). The development of conversation between mothers and babies. Journal of Child Language, 4, 1-22.
Stahmer, A. C., Suhrheinrich, J., Reed, S., Schreibman, L., & Bolduc, C. (2011). Classroom pivotal response teaching for children with autism. New York: Guilford Press.
Steege, M. W., Mace, F. C., Perry, L., & Longenecker, H. (2007). Applied behavior analysis: Beyond discrete trial teaching. Psychology in the Schools, 44, 91-99. doi:10.1002/pits.20208
Tomasello, M. (2000). First steps toward a usage-based theory of language acquisition. Cognitive Linguistics, 11, 61-82.
Wolery, M. (2000). Recommended practices in child-focused interventions. In S. Sandall, M. McLean, & B. Smith (Eds.), DEC recommended practices in early intervention/early childhood special education (pp. 29–38). Longmont, CO: Sopris West.
Wolery, M. (2005). DEC recommended practices: Child-focused practices. In S. Sandall, M. L. Hemmeter, B. J. Smith, & M. E. McLean (Eds.), DEC recommended practices: A comprehensive guide for practical application in early intervention/early childhood special education (pp. 71-106). Longmont, CO: Sopris West.
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.
Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., & Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45, 1951–1966. doi:10.1007/s10803-014-2351-z

CHAPTER 2: STUDY 1

Abstract

A systematic review was conducted to evaluate the components (i.e., goals, procedures, and outcomes) and features of social validation assessment practices within studies in which researchers trained practitioners to implement Naturalistic Developmental Behavioral Interventions (NDBIs) with young children with disabilities in early childhood settings. NDBIs are evidence-based approaches implemented by the adults in a child’s natural environment (e.g., by practitioners in schools). The social validity of this body of research may impact maintained practitioner implementation of NDBIs. Results of the systematic review indicate that social validity component examination within practitioner implemented NDBI studies is lacking. Of the 22 practitioner implemented NDBI studies identified, analysis indicated that 11 studies reported social validity assessment. An in-depth analysis of social validity assessment features, relationship with outcome data, and the use of social validity results in drawing effectiveness conclusions was conducted. Implications for future research and practice are discussed.

Introduction

Social validation of the goals, procedures, and outcomes of intervention packages for young children with disabilities is important to ensure effective application of evidence-based practices (Schwartz & Baer, 1991; Wolf, 1978). Kazdin (1977) introduced the term social validity as a way to evaluate the importance of behavior change through normative comparisons and through subjective measures of consumers within the natural environment. Wolf (1978) advised researchers in the field of applied behavior analysis to include social validity measures, as well as objective outcome measures, to determine consumer satisfaction with an intervention package, or the degree to which society validated interventions. Schwartz and Baer (1991) posited that social validity measures could provide an understanding of what programs and aspects of programs were liked and disliked, help inform researchers of adaptations and applications of evidence-based practices, and provide information to ensure an intervention package could be effectively implemented in the natural environment. Further, Kennedy (2002) asserted that researchers should integrate maintenance of behavior change data, as an indicator of social validity, with normative comparisons and subjective evaluations in order to detect and address possible consumer concerns. Increased attention to social validity assessment within special education research occurred in 2005, when Horner and colleagues emphasized that implementation by natural implementers in natural environments over extended time periods enhanced social validity. Within early childhood, practitioners (i.e., teachers, paraeducators, therapists) are an essential part of the education of children with disabilities in the natural environment.
The development and application of practitioner implemented interventions represents one area in which there is a critical need for 18 social validity measurement to implement, refine, and ensure evidence-based interventions are adopted and maintained by practitioners. Practitioner Implemented NDBIs Naturalistic developmental behavior interventions (NDBIs) is one category of evidence- based interventions for young children with disabilities that can be implemented by practitioners (Schreibman et al., 2015). A primary aim of NDBIs is to provide practitioners with tools to implement developmentally appropriate, systematic instruction in a child’s natural educational environment (Schreibman et al., 2015). NDBI approaches can be traced back to Hart and Risley’s (1975) work on incidental teaching, where teaching arrangements capitalize on naturalistic consequences to teach new skills and promote generalization of learned skills. Common NDBI approaches include Natural Language Paradigm (NLP; Koegel, O’Dell, & Koegel, 1987), Pivotal Response Training (PRT; Koegel et al., 1989) and Enhanced Milieu Teaching (EMT; Kaiser & Hester, 1994; see Schreibman et al., 2015). Researcher developed NDBIs were designed to be implemented by adults within the child’s natural environment during existing activities. In 2005, Horner and colleagues suggested that practitioner implementation of evidence-based practices in natural contexts led to increased social validity. Previous systematic reviews have established NDBIs as effective, evidence-based practices for young children with disabilities (e.g., Lane, Lieberman-Betz, & Gast, 2016; Mrachko & Kaczmarek, 2017; Snyder et al., 2015; Verschuur, Didden, Lang, Sigafoos, & Huskens, 2014). Yet, these reviews have given little attention to the social validity of practitioner implementation of NDBIs in early childhood settings, a critical component that must be explored to ensure adoption and effective implementation of NDBIs by practitioners. Further, results of previous reviews on caregiver and practitioner training also indicate that NDBIs can be 19 implemented effectively (Lang, Machalicek, Rispoli, & Regeaster, 2009; Mrachko, & Kaczmarek, 2017; Patterson, Smith, & Mirenda, 2012; Rispoli, Neely, Lang, & Ganz, 2011; Verschuur, Didden, Lang, Sigafoos, & Huskens, 2014). However, effective interventions are not always adopted by practitioners (Cook & Odom, 2013), which may be connected with social validity concerns. Reviews on caregiver and practitioner training indicate that studies in this area rarely include systematic assessment of social validity. Additionally, results of these reviews indicate that maintenance of intervention use by practitioners and maintenance of skills by children are often not assessed, and when they are, results are often mixed which may indicate a lack of social validity within the intervention package (Kennedy, 2002). Despite ample research highlighting promising evidence-based practices related to early intervention and training in natural settings (e.g., Snyder et al., 2015), social validity reports of these practices are consistently lacking (e.g., Verschuur et al., 2014). Indeed, the lack of social validation research may be a contributing factor to the research to practice gap (i.e., implementation of evidence-based practices in education; Cook & Odom, 2013) of practitioner implementation of NDBIs in early childhood education settings. 
Therefore, it is imperative to explore the inclusion of social validity assessments within practitioner implemented NDBI studies. Purpose of Social Validity The purpose of social validity assessment is to measure the overall acceptability of an intervention beyond treatment effectiveness. Implementation of evidence-based practices is influenced by both the effectiveness and acceptability of the intervention (Hanley, 2010). Social validity is also a key variable to the implementation of evidence-based practices in natural contexts as evidence-based interventions may not be implemented if they are not considered 20 socially valid by consumers (Carter, 2010). The social validity of an intervention allows researchers to discern consumer preference and in turn design sustainable interventions (Schwartz & Baer, 1991). Ideally, social validation considers consumer acceptability of the goals, procedures, and outcomes of an intervention package (Wolf, 1978). As such assessments should aim to evaluate the social validity of the goals of the intervention, determine the appropriateness and acceptability of the intervention procedures, and evaluate the importance of the outcome effects of the intervention package for all participants (e.g., practitioners and children). Seminal works in the field recommend measurement of social validity at multiple stages of intervention development (i.e., before, during, and after intervention) with feedback from direct and indirect consumers as well as the greater community (Schwartz & Baer, 1991; Wolf, 1978). Previous Reviews Researchers have conducted systematic literature reviews to identify the state of social validity assessment within various categories of special education intervention research (i.e., social competence interventions, students with autism or emotional impairments, and singe-case design studies). Yet, none of the previous reviews have focused on practitioner implementation of specific evidence-based practices. Previous reviews have identified a lack of social validity assessment within published studies despite a steady rise in the rate of social validity assessments beginning in the 1970s (Hurley, 2012; Ledford et al., 2016; Snodgrass, Chung, Meadan, & Halle, 2018; Spear, Strickland-Cohen, Romer, & Albin, 2013). Further, these previous reviews have focused on varying aspects of social validity assessment practices leading to conclusions that the state of social validity within published work is lacking. 21 Specifically, Hurley (2012) reported that only 27% of the included studies in their review of social competence interventions for preschoolers assessed at least one component of social validity (i.e., goals, procedures, and outcomes) and noted that outcomes were assessed most often. Further, Spear and colleagues (2013) explored how involving students at risk for emotional or behavioral disorders that employed single-case research methodology addressed social validity as defined by Horner and colleagues (2005). Horner and colleagues noted four quality indicators of social validity including the social importance of: (1) the dependent variable, (2) the magnitude of change in the dependent variable, (3) the implementation of the independent variable, and (4) the typical intervention agents and contexts over time. Findings suggest that although researchers conducted interventions within natural contexts, none of the studies met all four of Horner and colleagues (2005) quality indicators. 
Additionally, only 50% of studies met at least one quality indicator. In a broad review of single-case methodology, Ledford and colleagues (2016) coded studies for objective and subjective social validity measures and compared the reported social validity results of 48 studies to the single-case experimental results. They concluded that positive social validity results and the presence of a functional relation (i.e., positive experimental results) within a study most often existed when objective social validity measures were used. Lastly, Snodgrass and colleagues (2018) conducted a review of single-case research, narrowing inclusion to articles in which all three components of social validity were assessed (i.e., goals, procedures, and outcomes). This effort was made to determine the scientific rigor of social validity as reported and discussed in the results and discussion sections of studies. Findings revealed a clear lack of rigor, with none of the reviewed articles encompassing all steps of the scientific method and nearly half discussing experimental results and social validity findings together.

In sum, previous reviews have focused on examining social validity within single-case experimental design studies and have identified relevant aspects of social validity assessment to analyze (i.e., prevalence rates, rigor, components assessed, methods used, etc.). Using this framework, this review examines published studies evaluating the social validation of practitioner implemented NDBI approaches within early childhood settings, a type of intervention specifically designed to be socially valid (Schreibman et al., 2015). To our knowledge, reviews of this subset of the intervention literature have not been conducted. Specifically, this review sought to answer the following research questions: (a) What components of social validity (i.e., goals, procedures, outcomes) were assessed for practitioner and child participants; (b) What features of social validity assessments were reported (i.e., types of instruments, respondents, administration); (c) What relationships exist between social validity results and reported intervention outcomes (i.e., implementation fidelity, child target skill outcomes, and evidence of maintenance of behavior change); and (d) Were social validity assessment results used to draw conclusions of intervention effectiveness for practitioner and child participants?

Method

Search Procedures

Studies were identified for inclusion in the review using the following methods. First, an electronic search was conducted of four databases: ProQuest (i.e., ERIC, PsycINFO, PsycARTICLES), EBSCOhost (i.e., Education Full Text), Web of Science, and Scopus (limited to Title-Abs-Key search). The following search terms were used: (preschool or young child* or early child*) and (train* or coach*) and (teacher or practitioner or para* or aide or staff or assistant or educator) and (incidental teaching or pivotal response training or pivotal response treatment or PRT or natural language paradigm or NLP or early start Denver model or reciprocal imitation training or joint attention symbolic play engagement and regulation or milieu teaching or enhanced milieu teaching or project impact or natural environment teaching or naturalistic intervention). Search terms for locating NDBIs were selected based on the work of Schreibman and colleagues (2015) and Snyder and colleagues (2015).
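Because the full Boolean string is long and the exact truncation and field-limit syntax differs across ProQuest, EBSCOhost, Web of Science, and Scopus, the following minimal Python sketch (illustrative only, not part of the original search procedures) shows how the four term groups used above could be assembled into a single query string for reuse across databases.

```python
# Illustrative sketch: assembling the Boolean search string from the four term groups
# reported in the Method. Actual truncation and field codes (e.g., Title-Abs-Key in
# Scopus) would need to be adapted to each database's syntax.
population = ["preschool", "young child*", "early child*"]
training = ["train*", "coach*"]
implementer = ["teacher", "practitioner", "para*", "aide", "staff", "assistant", "educator"]
ndbi = [
    "incidental teaching", "pivotal response training", "pivotal response treatment", "PRT",
    "natural language paradigm", "NLP", "early start Denver model",
    "reciprocal imitation training",
    "joint attention symbolic play engagement and regulation",
    "milieu teaching", "enhanced milieu teaching", "project impact",
    "natural environment teaching", "naturalistic intervention",
]

def or_group(terms):
    """Join a list of terms into a single parenthesized OR group."""
    return "(" + " or ".join(terms) + ")"

query = " and ".join(or_group(group) for group in [population, training, implementer, ndbi])
print(query)
```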
Following the initial search, ancestral searches were conducted of the reference lists of related systematic reviews (i.e., Lane et al., 2016; Mrachko & Kaczmarek, 2017; Snyder et al., 2015; Verschuur et al., 2014). See Figure 2.1 for a PRISMA flow chart (Moher, Liberati, Tetzlaff, & Altman, 2009) outlining the specific systematic review sequence, number of studies located, and exclusions.

Initial screening. Abstracts were screened for each study using a screening checklist of the inclusion and exclusion criteria. If an abstract did not contain sufficient information to determine inclusion, the full text was reviewed using the same screening checklist. To be included in the review, the study had to (a) be published in a peer-reviewed journal; (b) be published in English; (c) include both training and implementation of an NDBI by a practitioner; (d) include a child three to five years old with a disability as recipient of the NDBI; and (e) take place in an early childhood classroom setting. Articles were excluded if they included an NDBI implemented by someone other than a practitioner (e.g., researcher, outside interventionist) or if the training provided to the practitioner was not part of the research investigation. The screening process resulted in 22 articles eligible for further review. In instances where articles included more than one study, only studies meeting inclusion criteria were included in the review. See Figure 2.1 for reasons for exclusion.

Figure 2.1 PRISMA Flow Chart of Study Inclusion. [Flow chart: articles identified through database searching (n = 128); additional articles identified through other sources (n = 6); articles after duplicates removed (n = 89); articles screened (n = 89); articles excluded* (n = 67); full-text articles assessed for eligibility (n = 22); full-text articles excluded that did not assess social validity (n = 11); studies included in systematic review (n = 11).] Note. *Reasons for exclusion: did not involve practitioner training, did not include a child between the ages of 3 and 5 years, intervention was not implemented in the preschool classroom.

Reliability of study inclusion. The first and third authors independently conducted the article search to ensure reliability of the search results. The reliability of the search was determined by calculating the number of articles identified by both authors out of the total number of articles identified through the database and ancestral searches of previous reviews (98% reliability for the initial search). Of the 89 unique articles identified, the third author screened 45% (i.e., 40 articles) for eligibility. Agreement for eligibility in the systematic review was 95%. Disagreements between coders were discussed until agreement was reached.

Review Procedures

Coding form. A coding form was developed based on the procedures in previous systematic reviews (i.e., Hurley, 2012; Ledford et al., 2016; Snodgrass et al., 2018) to evaluate each included study. First, article PDFs were electronically searched for the following terms: social validity, treatment acceptability, consumer satisfaction, and satisfaction survey or interview, and the methods and results sections were read to determine if social validity assessment data were gathered.
If an article included social validity assessment, coding proceeded to indicate (a) the type of social validation measurement (i.e., goals, procedures, outcomes) reported for practitioner and child participants; (b) what methods of assessment were used for each type and participant group (e.g., questionnaire, interview) and how they were administered (e.g., in person, mail); (c) who responded to social validity assessment (e.g., practitioner, family, peers); and (d) the time point(s) of assessment (e.g., before, during, or after the study). Each component of social validity assessment (i.e., goals, procedures, outcomes) was coded separately for measures relating to practitioner and child participants. Also, each social validity assessment utilized in a study was coded separately to determine the results and respondents. Social validity assessment results were coded to indicate the degree to which reported results were positive, using the categories positive, mixed, or negative for practitioner and child participants. Mixed ratings could occur when results were positive for some items and negative for others or when results were positive for some respondents and negative for others. Finally, the discussion section of each article was read to determine if social validity assessment results were used to draw conclusions of intervention effectiveness for practitioner or child participants. Additionally, results of primary data (i.e., implementation fidelity, maintenance, child behavior data) were identified based on author report and categorized as positive, mixed, or negative for practitioner implementation fidelity and reported child behavior outcomes. Because the purpose of this review was not to evaluate the effectiveness of intervention packages but to review the social validity assessment present in the articles, guidelines such as the What Works Clearinghouse standards or determinations of functional relations using visual analysis were not applied within this review. Last, the type of NDBI(s) used in each study (e.g., incidental teaching, pivotal response training, etc.) was recorded, as was the research design (e.g., single-case, group design, etc.).

Training and interrater agreement for coding. Coders in this study (i.e., the first and third author) were doctoral students with expertise in early intervention and applied behavior analysis. The first author coded all of the studies. During training, the third author reviewed the coding form and coded a study not included in this review. The first and third authors discussed all agreements and disagreements. The third author then coded two additional articles until agreement met or exceeded 90%. After coding training was complete, the third author independently coded 32% of the studies included in this review (i.e., 7 studies). Purposeful sampling was used to select studies for interrater agreement to ensure a balanced reliability check across articles that did and did not include social validity assessment. Interrater agreement was calculated by dividing the number of agreements for each code by the number of agreements plus disagreements and then multiplying by 100. Interrater agreement was 99% (range = 97%-100%). All disagreements between coders were discussed until agreement was reached.
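As a concrete illustration of this calculation (the counts below are hypothetical and are not drawn from the review data), if two coders agreed on 33 of 34 codes for an article, interrater agreement for that article would be 33 / (33 + 1) × 100 ≈ 97%, the lower bound of the range reported above.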
Results

Twenty-two articles in which practitioners were trained to implement an NDBI with preschool age children met inclusion criteria and were eligible for further review to determine if they included an assessment of social validity. However, only 11 studies (i.e., 50%) included assessment of social validity. These 11 studies were further analyzed using the coding methods described above (see Table 2.1). Studies were published between 2008 and 2017 and included practitioner training to implement the following NDBIs: Incidental Teaching (Fetherston & Sturmey, 2014; Hall, Grundon, Pope, & Romero, 2010; Neely, Rispoli, Gerow, & Hong, 2016; Ryan, Hemmes, Sturmey, Jacobs, & Grommet, 2008), Natural Language Paradigm (Gianoumis, Seiverling, & Sturmey, 2012; Seiverling, Pantelides, Ruiz, & Sturmey, 2010), Pivotal Response Training (Hall et al., 2010; Robinson, 2011; Suhrheinrich, 2015), Early Start Denver Model (Vismara, Young, Stahmer, Griffith, & Rogers, 2009; Vismara, Young, & Rogers, 2013), and Project ImPACT (Wainer, Pickard, & Ingersoll, 2017). The 11 studies that included assessment of social validity were primarily single-case experimental designs (n = 8; 73%). However, group designs (Ryan et al., 2008; Vismara et al., 2009; Vismara et al., 2013) and a quasi-experimental design (Wainer et al., 2017) were also used.

Table 2.1
Components of Social Validity Assessment by Participants

Article                          Practitioner                        Child
                                 Goals   Procedures   Outcomes       Goals   Procedures   Outcomes
Fetherston & Sturmey, 2014 (a)   -       T            X              -       -            -
Gianoumis et al., 2012           -       -            X              -       -            -
Hall et al., 2010                -       T            X              -       -            -
Neely et al., 2016               -       T, I         X              -       -            -
Robinson, 2011                   -       T            X              X       X            X
Ryan et al., 2008 (b)            -       T            -              -       -            -
Seiverling et al., 2010          -       -            X              -       -            -
Suhrheinrich, 2015               -       T, I         X              -       -            X
Vismara et al., 2009             -       T            X              -       -            -
Vismara et al., 2013             -       T            X              -       -            -
Wainer et al., 2017 (c)          -       T            X              -       -            -
Total                            0%      82% (T);     91%            9%      9%           18%
                                         18% (I)

Note. X = component assessed; T = training; I = intervention; - = not assessed; (a) Experiment 2; (b) Experiment 3; (c) Phase 1.

Social Validity Assessment Components

Practitioner participants. The results of social validity assessment by component (i.e., goals, procedures, and outcomes) for practitioner participants are displayed in Table 2.1. First, the social significance of the treatment goals related to practitioner participants was not measured in any of the reviewed studies. Practitioners were the focal participants in all of the included studies, yet none of the studies conducted social validation assessment to detect unacceptability or to analyze whether or why the targeted intervention goals for practitioners were liked or disliked. Next, social validity assessments related to procedures were categorized by training and intervention (see Table 2.1). Practitioner training procedures, those used during the training of practitioners, were assessed using social validity measures in nine studies. The NDBI intervention procedures, those used by practitioners to provide support to children in the early childhood setting, were measured using social validity assessments in only two studies. Suhrheinrich (2015) and Neely and colleagues (2016) assessed social validity (i.e., consumer satisfaction) for both training and intervention procedures. Of the 11 studies reviewed, two (Gianoumis et al., 2012; Seiverling et al., 2010) did not assess the social validity of either training or intervention procedures. Social validity assessment of the outcomes of the intervention package was identified in 10 of the 11 articles reviewed. Of note, none of the reviewed studies assessed all three components of social validity (i.e., goals, procedures, and outcomes) for practitioner participants. Neely and colleagues (2016) and Suhrheinrich (2015) assessed the most components related to practitioners, with assessments present for two (i.e., procedures and outcomes) of the three components.
Child participants. The prevalence of social validity assessments related to the goals, procedures, and outcomes for child participants is displayed in Table 2.1. Child participant data were lacking in the reviewed studies, which may be, in part, because the primary dependent variables in the studies were related to practitioner participants, not child participants. The social validity of the target goal for child participants was assessed through informal discussion with stakeholders in one of the reviewed studies (Robinson, 2011). In this study, the target children's Individualized Education Programs (IEPs) were reviewed with stakeholders to establish a target social communication goal for each child participant. In the area of procedures, one study (Robinson, 2011) reported that the NDBI procedures were assessed for social validity from the viewpoint of child participants. In this study, an observation rating method was used to rate child affect as a measurement of intervention acceptability during each phase of the study. Last, the social validity of the child participant outcomes of the intervention package was assessed in only two studies (Robinson, 2011; Suhrheinrich, 2015). These studies utilized a researcher-made social validity assessment for practitioners that included a rating of the benefit or success of the NDBI for child behavior.

Social Validity Features

Table 2.2 displays the reported measurement features of social validity assessments. Features are reported in terms of the type(s) of social validity assessment instruments, the respondents involved, and how the assessments were administered (i.e., who administered them, the timing of administration, and the method of administration).

Types of instruments. Questionnaires were the most common measure of social validity in the reviewed studies. Questionnaires included rating scales or Likert-type scales. Only one study (Neely et al., 2016) utilized a validated measure, the Treatment Evaluation Inventory Short Form (TEI-SF; Kelly et al., 1989). The remaining studies used questionnaires that were developed for the purpose of assessing the specific treatment program. Some questionnaires (n = 3; 27%) included open-ended questions, which expanded the understanding of the social validity of treatment procedures. Notably, Vismara and colleagues (2013) identified themes from their open-ended questionnaire designed to measure the social validity of the training methods. Additionally, two studies (Fetherston & Sturmey, 2014; Seiverling et al., 2010) showed video of the intervention to respondents to complete social validity questionnaires. Robinson (2011) used single-case design methods to display observational social validity data on child affect throughout the study. Overall, social validity assessment instruments lacked diversity. Methods such as interviews and focus groups were not present.
Respondents. Respondents who participated in social validity assessments were identified using the following categories: direct participants, indirect participants, and external stakeholders (Schwartz & Baer, 1991; Snodgrass et al., 2018). All of the studies utilized direct respondents (i.e., practitioners or children) for social validity. No studies involved indirect consumers, those who are affected by the intervention package yet are not its recipients (e.g., administrators or parents; Schwartz & Baer, 1991). Two studies included board certified behavior analysts as external stakeholders (Fetherston & Sturmey, 2014; Seiverling et al., 2010).

Table 2.2
Reported Features of Social Validity Assessments
[Table not reproduced in full; for each of the 11 studies it indicates the instruments used (questionnaire, open-ended items, single-case design display of observational data), the respondents (direct, indirect, external), who administered the assessment (researcher administered or cannot determine), when it was administered (pre, during, post), and how it was administered. Column totals include questionnaire 100%, open-ended items 27%, single-case design 9%; direct respondents 100%, indirect 0%, external 18%; researcher administered 45%, cannot determine 55%; pre 27%, during 27%, post 100%.]
Note. Q = questionnaire; OE = open-ended items; SCD = single-case design; CD = cannot determine; (a) Experiment 2; (b) Experiment 3; (c) Phase 1.

Administration of social validity assessments. Social validity assessment administration was coded to indicate who administered the assessment and the timing of administration, and to determine if researchers reported how assessments were administered (see Table 2.2). Five studies indicated the researcher administered the social validity assessments. The remaining studies did not indicate who administered the social validity assessments. Social validity assessments were conducted after the conclusion of the study in all of the reviewed studies. However, two studies also included social validity assessments before the study began (Gianoumis et al., 2012; Seiverling et al., 2010), and another two studies included assessments during the study (Robinson, 2011; Vismara et al., 2009). One study (Wainer, Pickard, & Ingersoll, 2017) included social validity assessment before, during, and after the study concluded. Only two studies reported how social validity assessments were gathered. Suhrheinrich (2015) gathered questionnaires by mail, and Robinson (2011) gathered social validity assessment of child affect ratings using direct observation. However, Robinson (2011) did not report how the questionnaires were administered to participants. Details of social validity assessment administration were unclear in the majority of studies.

Table 2.3
Study Outcome, Maintenance, and Social Validity Results
[Table not reproduced in full; for each of the 11 studies it reports practitioner implementation fidelity, practitioner maintenance, and practitioner social validity results by component (goals, intervention procedures, training procedures, outcomes), as well as child target skill outcomes, child maintenance, and child social validity results by component (goals, procedures, outcomes), each coded as positive, mixed, negative, or not collected/reported.]
Note. SV = social validity assessment; G = goals; IP = intervention procedures; TP = training procedures; O = outcomes; P = positive; M = mixed; N = negative; - = not collected/reported; (a) Experiment 2; (b) Experiment 3; (c) Phase 1.

Relationship Between Social Validity, Intervention Outcomes, and Maintenance

The reported results of practitioner and child participant intervention outcomes, including implementation fidelity, target skill outcomes, maintenance, and social validity assessment by component, were identified for each study (see Table 2.3). When comparing practitioner implementation fidelity to the results of social validity assessment of practitioner outcomes, a positive relationship was often found. Studies with positive social validity outcome data (n = 9) often reported positive results regarding implementation fidelity (n = 8). One study (Ryan et al., 2008) did not measure social validity of outcomes for practitioners, so a relationship could not be determined. The remaining two studies reported mixed results. Hall and colleagues (2010) reported mixed results in implementation fidelity and positive social validity assessment of the intervention outcomes, while Seiverling and colleagues (2010) reported positive implementation fidelity and mixed social validity assessment of intervention outcomes. The relationship between child outcomes and social validity was less clear. Five studies (45%) reported child outcome data, yet only two of these (Robinson, 2011; Suhrheinrich, 2015) also reported social validity assessment outcome data, for which results were positive. Maintenance data for practitioner participants were reported in seven studies, of which four studies (57%) reported positive results and three studies (43%) reported mixed results. Child maintenance data were reported in four studies, of which two studies (50%) reported mixed results and two studies (50%) reported positive results. The relationship between maintenance data and social validity results regarding outcomes appears unclear, as reported results are sparse and mixed. Of note, a single study (Robinson, 2011) reported results for practitioner and child participants' intervention outcomes, maintenance of behavior change, and social validity assessment of outcomes, all of which were positive.

Social Validity Assessment for Conclusions

Studies were also reviewed to determine if social validity assessment results were used to draw conclusions of intervention effectiveness for practitioner and child participants. Of the 11 studies reviewed, eight (73%) included social validity and experimental findings together in the discussion section. In the discussion section of two studies (Gianoumis, Seiverling, & Sturmey, 2012; Robinson, 2011), a positive relationship was reported for both practitioner and child participant social validity and primary data results. A positive relationship was reported for practitioner social validity and implementation fidelity results in the discussion of four studies (Fetherston & Sturmey, 2014; Hall et al., 2010; Vismara et al., 2009; Wainer, Pickard, & Ingersoll, 2017). Within the discussion sections of two studies (Neely et al., 2016; Seiverling et al., 2010), the authors indicated that a relationship between practitioner implementation fidelity results and social validity assessment results did not exist. In both of these studies, the absence of a relationship was noted as a limitation.
Discussion

Findings from this study identify a limited number of practitioner implemented NDBI studies (i.e., 22), of which half included reports of social validity assessment. The 11 studies that measured social validity were published over the past decade, which suggests that researchers are seeing some value in social validity assessment. It is important to note that researchers may face challenges in reporting adequate social validity assessment data due to publishing conventions (e.g., page limits, peer review priorities). However, the low prevalence of social validity assessment within NDBI studies is similar to what others have found in the broader literature (Hurley, 2012; Ledford et al., 2016; Snodgrass et al., 2018). That is, the lack of thoroughness and diversity among social validity practices is consistent with findings of previous reviews (Hurley, 2012; Ledford et al., 2016; Snodgrass et al., 2018; Spear et al., 2013). Because this review focused on research studies containing intervention methods created to be effective and socially valid (i.e., NDBIs) with typical intervention agents and contexts (i.e., practitioners in early childhood settings), it is concerning that social validity practices are not more prevalent. Given the findings, several concerns are discussed: (a) lack of comprehensive social validity assessment, (b) quality of social validity description, and (c) value of social validity results. Limitations and implications for research and practice are also discussed.

Lack of Comprehensive Social Validity Assessment

Within this review, studies most often assessed the procedures and outcomes related to practitioner participants. Researchers are collecting data on the acceptance of the training intervention used and its outcomes, which is commended. Yet, none of the studies assessed all three components of social validity (i.e., goals, procedures, and outcomes); it is critical that researchers assess all three components for both participant groups (i.e., practitioner and child) when making claims regarding the social validity of an intervention package. The lack of social validity assessment of the goals for practitioner and child participants is particularly concerning. Wolf (1978) states that the goals of the treatment must be socially significant, meaning the desired outcomes are both essential and appropriate to consumers. Reviewing the IEP with practitioners can be a practical method for ensuring social validity of intervention goals, as the IEP reflects the family's and educational team's contributions to a child's prioritized educational goals (Carter, 2010; Hurley, 2012). Also, researchers should ensure that the treatment goals for practitioners are important and consistent with their individual values and characteristics. Results of this review also highlight that researchers are using methods to assess the social validity of the training procedures and outcomes related to practitioner participants, but less often related to child participants. The lack of social validity assessments for child participants may be related to the fact that children are often not the primary participants. However, child perceptions and experiences are still an important consideration for researchers. Although practitioner implemented NDBIs are considered effective practices, social validity assessment can provide researchers with additional information and understanding to inform future intervention adjustments and applications (Carter, 2010).
Next, questionnaires were primarily utilized to assess social validity, which is consistent with previous findings (Ledford et al., 2016). Additionally, the majority of studies relied on self-made questionnaires, even though validated measures exist (Carter, 2010). Informal measures without established validity norms should be used only to supplement established standardized measures (Carter, 2010). Recommendations for improving social validity questionnaires include: (1) numerous response options, (2) questions that require the rater to span the range of the scale to ensure active consideration, (3) specification of the exact portion of the intervention package being rated, (4) questions that cover all of the relevant areas of the intervention package, and (5) questions that focus on collecting specific information rather than general statements to make the information useful (Schwartz & Baer, 1991). Further, a lack of diversity in other methods and measures was apparent in this review. It is recommended that researchers explore methods such as interviews, normative comparison data, affect ratings, and self-efficacy ratings in addition to or in place of researcher made questionnaires (Carter, 2010).

General consensus indicates that direct consumers (i.e., practitioner and child participants) of the intervention package should be primary respondents when social validity assessments are collected (Schwartz & Baer, 1991). Yet, recommendations also encourage researchers to gather social validity data from carefully chosen indirect consumers and external stakeholders (e.g., family members, coworkers, administrators, education professionals) who are influenced by the intervention package and/or influence intervention program survival (Carter, 2010; Schwartz & Baer, 1991). However, the studies within this review focused on social validity measurement with only direct consumers.

Next, the timing (i.e., pre, during, post) of social validity assessment should be planned and purposeful to support the objective of each assessment. Researchers should consider how ongoing, unobtrusive assessment can be used to increase social validity during the study. Data from this and previous reviews (e.g., Hurley, 2012) suggest that social validity assessments are primarily conducted at the end of the study, limiting opportunities to determine how consumers perceived the NDBI or training methods before and during implementation. Although valuable for future research, post-study data do not inform how to adapt procedures to make them most compatible within early childhood settings and for children with disabilities (Gresham & Lopez, 1996). Future research might follow the example of Wainer and colleagues (2017), who assessed direct consumers' self-efficacy with the NDBI at three time points, which allowed them to measure the social validity outcome of practitioner self-efficacy over time and analyze the scores for comparison.

Quality of Social Validity Description

When planning who to include in social validity assessment, researchers should also consider who will administer the assessments and how they will be administered. The reviewed studies provided limited details of these features. When information was provided, it was evident that researchers were administering methods of social validity assessment, but the process was not clear. Schwartz and Baer (1991) recommend that respondents to social validity assessment be informed of why the information is being collected and how it will be used.
Carter (2010) also recommends that respondents have a clear understanding of treatment outcomes before completing social validity assessments, as this may increase the value of the assessment. Even when questionnaires are anonymous, respondents, especially direct consumers, know who will be reviewing the information, which may lead to hesitation in providing negative, constructive feedback due to relationships with researchers (Garfinkle & Schwartz, 2002). Methods to improve anonymity and decrease the pressure to provide only positive reports include anonymous electronic questionnaires and social validity assessments with a larger number of consumers (i.e., direct, indirect, external). Additionally, it is important for researchers to clearly and completely describe how social validity assessments are administered, as various methods may impact the interpretation of the results.

Value of Social Validity Results

Results from this review indicate that social validity is often treated as a stand-alone variable that researchers do not integrate with primary data outcomes. As Snodgrass and colleagues (2018) point out, the purpose of conducting social validity assessment in the first place is lost if authors do not use the data when discussing claims of intervention effectiveness. Carter (2010) also emphasizes the importance of considering social validity results along with intervention effects. Of the reviewed studies, 73% described a relationship between primary data and social validity assessment outcomes, which indicates that researchers are, to some extent, valuing social validity as a component of intervention effectiveness. Researchers should continue to value the results of social validity assessment as they do primary data results. Even in instances where the data are contradictory, researchers can use these findings to make suggestions for intervention adjustments. One promising example of researchers aiming to improve social validity assessment in future studies is Seiverling and colleagues (2010), who gathered social validity data on changes in practitioner and child behavior and noted that respondents were unable to detect any changes despite positive intervention effects. They suggested revisions to their social validity questions to more accurately measure changes in practitioner and child behavior.

When social validity assessment results were compared to maintenance results of the reviewed studies, the relationship was unclear. Kennedy (2002) explains that this relationship may be influenced by other factors (e.g., policies, laws, organizational procedures), which may limit the use of maintenance data as an indicator of social validity. Perhaps planned social validity assessments, such as semi-structured post-intervention interviews or validated instruments, could illuminate sustainability factors that may be associated with social validity. The collection of maintenance of behavior change data and further post-study social validity assessment could illuminate this relationship and should be of value to researchers.

Limitations

Despite the important findings within this study, several limitations of this review are acknowledged. First, the inclusion and exclusion criteria resulted in a small number of studies available for review.
Nonetheless, the narrow scope of this review (i.e., practitioner implemented NDBIs in preschool settings) was justified given the recommendation for the use of evidence-based practices in the child's natural environment (DEC/NAEYC, 2009; DEC, 2014; Schreibman et al., 2015; Wong et al., 2015). The limited number of included studies highlights a lack of attention to this topic within published experimental research. It is possible that some of the reviewed studies did not report social validity assessment in the experimental article but published this information in a separate manuscript. It should also be noted that this review was limited to peer reviewed publications. As such, dissertations and other unpublished work were not included; their inclusion may have yielded a larger number of studies and different conclusions. Finally, while every attempt was made to utilize search terms relevant to NDBIs based on past research in this area, it is recognized that the use of a variety of terms within the field may have limited the ability to locate studies for review.

Implications for Research and Practice

Improving the examination of social validity is the professional responsibility of researchers. It is recommended that researchers use more than one method to measure social validity, as each method focuses on a different component of social validity (i.e., goals, procedures, and outcomes). In order to improve social validity assessment practices and advance implementation rates, researchers need to be aware of best practices in social validity assessment. This awareness may increase efficiency in the selection and use of social validity assessments. When designing intervention studies, researchers should pair high-quality methodological designs with rigorous and thoughtful assessments of social validity (Kramer, 2011; Leko, 2014; Snodgrass et al., 2018). Researchers may begin this process by incorporating a research question around social validity and then following the basic steps of scientific inquiry. As Snodgrass and colleagues (2018) found, researchers do not use the scientific method to guide their process even when all three components of social validity are assessed. Hence, social validity assessment findings and primary data results are less likely to be discussed together in the discussion section. Methodological practices related to social validity (e.g., mixed methods research designs) can provide a more credible assessment of social validity and increase the likelihood of researchers valuing social validity data as they contribute this important research to the field (Kramer, 2011; Leko, 2014; Snodgrass et al., 2018).

In order to conduct meaningful social validity assessments, researchers must be competent in the various methods used to gather information and become familiar with the components and features of social validity assessment. Researchers should initially determine the factors (e.g., goodness of fit, consumer acceptability, normative comparison) and types (e.g., interview, validated questionnaire, etc.) of measurement from each component (i.e., goals, procedures, and outcomes) that are considered most valuable based on the purpose of the study (Carter, 2010). Researchers can utilize tools such as the Social Validity Measurement Inventory and the Social Validity Measurement Matrix (see Carter, 2010) for this purpose.
Next, it is recommended that researchers plan for collaboration, choice, and input from practitioners and indirect consumers (e.g., family, co-workers) in order to socially validate the goals and procedures of the intervention. This process can create buy-in before the intervention begins, which may impact primary data and social validity results (Hieneman, Dunlap, & Kincaid, 2005). This may entail a choice of intervention, implementation setting, individualized child goals, etc. Carter (2010) suggests methods including a semi-structured interview and formal, validated instruments like the 15-item Intervention Rating Profile (IRP-15; Martens, Witt, Elliott, & Darveaux, 1985), which was developed to bring awareness to what treatments practitioners find acceptable. Also, researchers can identify methods to utilize with participants and stakeholders using existing tools like the Checklists for Exploring the Social Significance of Treatment Goals and the Treatment Goal Prioritization questionnaire (Carter, 2010).

Finally, in order to push current social validity assessment practices forward, I reiterate the call by Hurley (2012) and Snodgrass et al. (2018) for higher standards and stricter requirements from editors and peer reviewers. Because many early intervention research studies focus primarily on demonstrating behavior change techniques, social validity assessment has become a secondary priority, if it is included at all. Although existing quality indicators (i.e., Horner et al., 2005) and the dimensions of applied behavior analysis emphasize consideration of social validity, it does not appear that journals are holding to these standards. Requiring social validity assessment for dissemination of intervention studies may lead to the necessary increase in social validity assessment presence, quality, and rigor as well as the inclusion of social validity data when making effectiveness claims.

Conclusion

Social validity assessments play an important role in early intervention research aimed at training practitioners to implement naturalistic interventions (i.e., NDBIs) because of their relationship with consumer satisfaction, adoption, and continued use. NDBIs, by definition, are designed to be socially valid as they are evidence-based, implemented in natural settings, and involve both shared control and natural contingencies to teach developmentally appropriate skills (Schreibman et al., 2015). Researchers should rely on high quality and rigorous social validity assessment methods to support the implementation fidelity and acceptability of practitioner implemented NDBIs. Considering the current call for increased attention to social validity assessment along with the urgency to increase evidence-based practices in natural settings, this review fills a gap in the literature. Future research should include a greater emphasis on the inclusion and reporting of high-quality social validity assessments.

REFERENCES

*References marked with an asterisk indicate studies included in the systematic review.

Brookman-Frazee, L. I., Taylor, R., & Garland, A. F. (2010). Characterizing community-based mental health services for children with autism spectrum disorders and disruptive behavior problems. Journal of Autism and Developmental Disorders, 40, 1188-1201. doi: 10.1007/s10803-010-0976-0

Carter, S. L. (2010). The social validity manual: A guide to subjective evaluation of behavioral interventions. Boston, MA: Academic Press.
Cook, B. G., & Odom, S. L. (2013). Evidence-based practices and implementation science in special education. Exceptional Children, 79, 135-144.

Division for Early Childhood. (2014). DEC recommended practices in early intervention/early childhood special education 2014. Retrieved from http://www.decsped.org/recommendedpractices

Division for Early Childhood/National Association for the Education of Young Children. (2009). Early childhood inclusion: A joint position statement of the Division for Early Childhood (DEC) and the National Association for the Education of Young Children (NAEYC). Chapel Hill: The University of North Carolina, Frank Porter Graham Child Development Institute.

*Fetherston, A. M., & Sturmey, P. (2014). The effects of behavioral skills training on instructor and learner behavior across responses and skill sets. Research in Developmental Disabilities, 35, 541-562. doi: 10.1016/j.ridd.2013.11.006

*Gianoumis, S., Seiverling, L., & Sturmey, P. (2012). The effects of behavior skills training on correct teacher implementation of natural language paradigm teaching skills and child behavior. Behavioral Interventions, 27, 57-74. doi: 10.1002/bin.1334

Gresham, F. M., & Lopez, M. F. (1996). Social validation: A unifying concept for school-based consultation research and practice. School Psychology Quarterly, 11, 204. doi: 10.1037/h0088930

*Hall, L. J., Grundon, G. S., Pope, C., & Romero, A. B. (2010). Training paraprofessionals to use behavioral strategies when educating learners with autism spectrum disorders across environments. Behavioral Interventions, 25, 37-51. doi: 10.1002/bin.294

Hanley, G. P. (2010). Toward effective and preferred programming: A case for the objective measurement of social validity with recipients of behavior-change programs. Behavior Analysis in Practice, 3, 13-21. doi: 10.1007/BF03391754

Hart, B., & Risley, T. R. (1975). Incidental teaching of language in the preschool. Journal of Applied Behavior Analysis, 8, 411-420. doi: 10.1901/jaba.1975.8-411

Hieneman, M., Dunlap, G., & Kincaid, D. (2005). Positive support strategies for students with behavioral disorders in general education settings. Psychology in the Schools, 42, 779-794. doi: 10.1002/pits.20112

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-179. doi: 10.1177/001440290507100203

Hurley, J. J. (2012). Social validity assessment in social competence interventions for preschool children: A review. Topics in Early Childhood Special Education, 32, 164-174. doi: 10.1177/0271121412440186

Kaiser, A. P., & Hester, P. P. (1994). Generalized effects of enhanced milieu teaching. Journal of Speech, Language, and Hearing Research, 37, 1320-1340. doi: 10.1044/jshr.3706.1320

Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1, 427-452. doi: 10.1177/014544557714001

Kennedy, C. H. (2002). The maintenance of behavior change as an indicator of social validity. Behavior Modification, 26, 594-604. doi: 10.1177/014544502236652

Koegel, R. L., O'Dell, M. C., & Koegel, L. K. (1987). A natural language teaching paradigm for nonverbal autistic children. Journal of Autism and Developmental Disorders, 17, 187-200. doi: 10.1007/BF01495055
Koegel, R. L., Schreibman, L., Good, A., Cerniglia, L., Murphy, C., & Koegel, L. K. (1989). How to teach pivotal behaviors to children with autism: A training manual. Santa Barbara: University of California.

Kramer, J. M. (2011). Using mixed methods to establish the social validity of a self-report assessment: An illustration using the Child Occupational Self-Assessment (COSA). Journal of Mixed Methods Research, 5, 52-76. doi: 10.1177/1558689810386376

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Lane, J. D., Lieberman-Betz, R., & Gast, D. L. (2016). An analysis of naturalistic interventions for increasing spontaneous expressive language in children with autism spectrum disorder. The Journal of Special Education, 50, 49-61. doi: 10.1177/0022466915614837

Ledford, J. R., Hall, E., Conder, E., & Lane, J. D. (2016). Research for young children with autism spectrum disorders: Evidence of social and ecological validity. Topics in Early Childhood Special Education, 35, 223-233. doi: 10.1177/0271121415585956

Leko, M. M. (2014). The value of qualitative methods in social validity research. Remedial and Special Education, 35, 275-286. doi: 10.1177/0741932514524002

Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16, 191. doi: 10.1037/0735-7028.16.2.191

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6, e1000097. doi: 10.1371/journal.pmed.1000097

Mrachko, A. A., & Kaczmarek, L. A. (2017). Examining paraprofessional interventions to increase social communication for young children with ASD. Topics in Early Childhood Special Education, 37, 4-15. doi: 10.1177/0271121416662870

*Neely, L., Rispoli, M., Gerow, S., & Hong, E. R. (2016). Preparing interventionists via telepractice in incidental teaching for children with autism. Journal of Behavioral Education, 25, 393-416. doi: 10.1007/s10864-016-9250-7

Odom, S. L., Buysse, V., & Soukakou, E. (2011). Inclusion for young children with disabilities: A quarter century of research perspectives. Journal of Early Intervention, 33, 344-356. doi: 10.1177/1053815111430094

*Robinson, S. E. (2011). Teaching paraprofessionals of students with autism to implement pivotal response treatment in inclusive school settings using a brief video feedback training package. Focus on Autism and Other Developmental Disabilities, 26, 105-118. doi: 10.1177/1088357611407063

*Ryan, C. S., Hemmes, N. S., Sturmey, P., Jacobs, J. D., & Grommet, E. K. (2008). Effects of a brief staff training procedure on instructors' use of incidental teaching and students' frequency of initiation toward instructors. Research in Autism Spectrum Disorders, 2, 28-45. doi: 10.1016/j.rasd.2007.02.002

Schreibman, L., Dawson, G., Stahmer, A. C., Landa, R., Rogers, S. J., McGee, G. G., Kasari, C., Ingersoll, B., Kaiser, A. P., Bruinsma, Y., McNerney, E., Wetherby, A., & Halladay, A. (2015). Naturalistic developmental behavioral interventions: Empirically validated treatments for autism spectrum disorder. Journal of Autism and Developmental Disorders, 45, 2411-2428. doi: 10.1007/s10803-015-2407-8
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice the state of the art? Journal of Applied Behavior Analysis, 24, 189-204. doi: 10.1901/jaba.1991.24-189

*Seiverling, L., Pantelides, M., Ruiz, H. H., & Sturmey, P. (2010). The effect of behavioral skills training with general-case training on staff chaining of child vocalizations within natural language paradigm. Behavioral Interventions, 25, 53-75. doi: 10.1002/bin.293

Snodgrass, M. R., Chung, M. Y., Meadan, H., & Halle, J. W. (2018). Social validity in single-case research: A systematic literature review of prevalence and application. Research in Developmental Disabilities, 74, 160-173. doi: 10.1016/j.ridd.2018.01.007

Snyder, P. A., Rakap, S., Hemmeter, M. L., McLaughlin, T. W., Sandall, S., & McLean, M. E. (2015). Naturalistic instructional approaches in early learning: A systematic review. Journal of Early Intervention, 37, 69-97. doi: 10.1177/1053815115595461

Spear, C. F., Strickland-Cohen, K., Romer, N., & Albin, R. W. (2013). An examination of social validity within single-case research with students with emotional and behavioral disorders. Remedial and Special Education, 34, 357-370. doi: 10.1177/0741932513490809

*Suhrheinrich, J. (2015). A sustainable model for training teachers to use pivotal response training. Autism, 19, 713-723. doi: 10.1177/1362361314552200

Verschuur, R., Didden, R., Lang, R., Sigafoos, J., & Huskens, B. (2014). Pivotal response treatment for children with autism spectrum disorders: A systematic review. Review Journal of Autism and Developmental Disorders, 1, 34-61. doi: 10.1007/s40489-013-0008-z

*Vismara, L. A., Young, G. S., Stahmer, A. C., Griffith, E. M., & Rogers, S. J. (2009). Dissemination of evidence-based practice: Can we train therapists from a distance? Journal of Autism and Developmental Disorders, 39, 1636-1651. doi: 10.1007/s10803-009-0796-2

*Vismara, L. A., Young, G. S., & Rogers, S. J. (2013). Community dissemination of the Early Start Denver Model: Implications for science and practice. Topics in Early Childhood Special Education, 32, 223-233. doi: 10.1177/0271121411409250

*Wainer, A. L., Pickard, K., & Ingersoll, B. R. (2017). Using web-based instruction, brief workshops, and remote consultation to teach community-based providers a parent-mediated intervention. Journal of Child and Family Studies, 26, 1592-1602. doi: 10.1007/s10826-017-0671-2

Wolery, M., & Hemmeter, M. L. (2011). Classroom instruction: Background, assumptions, and challenges. Journal of Early Intervention, 33, 371-380. doi: 10.1177/1053815111429119

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., ... & Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45, 1951-1966. doi: 10.1007/s10803-014-2351-z

CHAPTER 3: STUDY 2

Abstract

This single-case investigation was designed to evaluate the effects of technology-based intervention training on practitioners' implementation of a naturalistic developmental behavioral intervention (NDBI). A total of six general education preschool practitioners engaged in an intervention with six children with varying disabilities in the inclusive classroom setting to increase targeted social communication skills.
The technology-based intervention training package included a collaborative approach to intervention planning, an online training module, self-evaluation, and ongoing coaching and feedback through videoconferencing. Following the technology-based intervention program, practitioners reached preset mastery criteria for implementation fidelity and increased communication opportunities for the child target skill. Additionally, child participants increased social communication target skills above baseline levels. All results generalized to a different activity context (e.g., small group, choice time, etc.) and maintained over time. Social validity was also measured, and results suggest high levels of acceptability for the technology-based intervention package.

Introduction

Inclusive programs, classrooms that integrate children with and without disabilities, are widespread in early childhood, with two-thirds of all children with disabilities, ages three through five, participating in inclusive settings for some portion of their school day (U.S. Department of Education, 2017). However, inclusive models are only effective if practitioners are proficient in evidence-based practices to support the development and learning of children with disabilities (Odom, Buysse, & Soukakou, 2011). Research has shown that many early childhood practitioners do not feel equipped to effectively support children with disabilities (Stahmer, Collings, & Lawrence, 2005). The Division for Early Childhood (DEC) and the National Association for the Education of Young Children (NAEYC) jointly recommend that inclusive programs hold high standards for practitioner professional development and professional competencies (DEC/NAEYC, 2009). Therefore, practitioners require training to support the sustained use of evidence-based practices.

Naturalistic Developmental Behavioral Interventions (NDBIs) are an evidence-based approach influenced by applied behavior analysis and developmental theory (Schreibman et al., 2015). NDBIs are well matched for inclusive early childhood settings as they are designed to teach developmentally appropriate skills in the natural setting during typically occurring activities (Schreibman et al., 2015; Snyder et al., 2015). However, implementing evidence-based practices like NDBIs can be challenging for practitioners within inclusive preschool classrooms, as the practices are often complex and require training and ongoing coaching to implement with fidelity. Further, practitioners often report barriers to training due to time, resources, training costs, and implementation comfort (Langley et al., 2010; Wainer & Ingersoll, 2013). The purpose of this study is to support practitioners in implementing an NDBI through a socially valid coaching and training intervention package to decrease the gap between research and recommended evidence-based practice.

Intervention training models must include characteristics associated with increased outcomes in order to be effective for practitioners (Dunst, Trivette, & Raab, 2013). Research suggests that adults learn best when they are actively involved, when they can relate new information to past experience, and when the learning has a direct application to their daily responsibilities (Knowles, 1980).
Involving practitioners in professional development using a more authentic approach can include focusing on practitioner needs when planning intervention (Boudah, Blair, & Mitchell, 2003), allowing practitioners the opportunity to practice learned skills in the natural environment (e.g., in their classroom; Garet, Porter, Desimone, Birman, & Yoon, 2001), and providing ongoing support to improve practice (Harwell, 2003). Further, a meta-analysis of adult learning interventions found that significant effect sizes were associated with the use of evaluation strategies such as self-evaluation (Dunst & Trivette, 2009). Similarly, Hemmeter, Snyder, Kinder, and Artman (2011) suggested that early childhood practitioner training programs must identify and provide ongoing, individualized support and feedback that has been demonstrated to be effective in supporting the implementation of evidence-based practices. Additionally, results of previous research have identified performance feedback as an essential training component (Brock & Carter, 2016; Rispoli, Neely, Lang, & Ganz, 2011). One promising and innovative training structure that can effectively incorporate adult learning needs (e.g., active involvement, ongoing performance feedback, and practice in the natural environment) and addresses barriers like time and cost is technology-based intervention training (Ferguson, Craig, & Dounavi, 2018; Neely, Rispoli, Gerow, Hong, & Hagan-Burke, 2017).

Technology-based Intervention Training

Technology-based intervention is an innovative training approach that allows for the maximization of resources through the use of components such as online instruction and videoconferencing to train and coach practitioners from a distance. The education field has modeled its use of technology-based intervention after the health care field's use of telemedicine, which is defined as the use of telecommunication to provide health care at a distance (Augestad & Lindsetmo, 2009). Systematic reviews (e.g., Neely et al., 2017) and meta-analyses (e.g., Morin et al., 2018) of technology-based intervention training within the education field have identified the method as efficient and effective. Specifically, results from these technology-based interventions have shown marked increases in implementation fidelity by practitioners and gains in child outcomes (Ferguson et al., 2018), all while delivering the intervention in a time- and cost-effective manner (Neely et al., 2017). Technology-based intervention training methods have been utilized to teach practitioners to implement NDBI approaches including incidental teaching (IT; Neely et al., 2016; Neely, Rispoli, Boles, Morin, Gregori, Ninci, & Hagan-Burke, 2018), Project ImPACT (Wainer, Pickard, & Ingersoll, 2017), and the Early Start Denver Model (Vismara, Young, Stahmer, Griffith, & Rogers, 2009), with study results supporting this method as a feasible (i.e., cost-effective) way to increase access to high quality training. Further, when technology-based training and coaching was compared to live instruction, results clearly demonstrated that technology-based instruction was as effective as live instruction (Vismara et al., 2009). In addition, results of the study by Pantermuehl and Lechago (2015) demonstrate that feedback delivered in person and through videoconference have comparable effects on performance.

The existing literature offers evidence supporting technology-based intervention training to prepare practitioners to use NDBIs, yet some notable directions for future research exist.
First, existing technology-based NDBI training research has focused on children with autism (e.g., Neely et al., 2016; Neely et al., 2018; Vismara et al., 2009; Wainer et al., 2017). However, children with identified developmental delay or speech and language impairment experience difficulty in developing and using social communication skills, so they may also benefit from NDBIs. Additionally, prior research has not included collaborative approaches (i.e., intervention and child target skill choice) with the practitioner. This process can create buy-in before the intervention begins, which may impact primary outcomes and social validity results (Hieneman, Dunlap, & Kincaid, 2005). Also, previous studies have not trained general education preschool practitioners in inclusive classroom settings. Two-thirds of children with a disability, ages three through five, spend some portion of their day in a general education early childhood classroom (U.S. Department of Education, 2018). Hence, general education early childhood practitioners are an essential part of the education of a young child with a disability. There is a need to extend technology-based intervention training and research in these distinct ways.

National Professional Development Center

One existing training method that provides instruction related to NDBIs is the model from the National Professional Development Center (NPDC) on Autism Spectrum Disorder (ASD; Appendix A). The NPDC on ASD model was established to promote the use of evidence-based practices by practitioners through efficient (i.e., cost and time saving) and effective (i.e., positive outcomes) training and coaching while addressing the needs of individual practitioners and children (Odom et al., 2013). The NPDC on ASD includes resources for coaching and for selecting evidence-based practices, and provides online, self-paced learning modules on evidence-based practices. Two specific NDBI approaches included in NPDC on ASD training materials are incidental teaching (IT; Hart & Risley, 1982), which is also known as Naturalistic Intervention (Wong et al., 2015), and pivotal response training (PRT; Koegel et al., 1989; Wong et al., 2015). IT and PRT were designed to provide support for the communication and social skills of young children with disabilities and are recognized as evidence-based intervention practices through comprehensive reviews (Simpson, 2005; Wong et al., 2014, 2015). Table 3.1 describes IT and PRT. However, the NPDC on ASD model is autism-focused, relies on in-person coaching and feedback, and does not include self-evaluation or assessment of social validity.

Table 3.1
NDBI Approach Descriptions

Incidental Teaching (IT). Practitioner establishes the child's interest in a learning event within the natural setting and context through arrangement of the environment. Practitioner follows the child's lead and provides necessary support (i.e., prompting) for the learner to engage in the targeted behavior and provides natural consequences for the targeted behavior or skills (Hart & Risley, 1982; Wong et al., 2015).

Pivotal Response Training (PRT). Practitioners build on the child's initiative and interest to develop communication, play, language, and social behaviors. Practitioner first arranges the environment with preferred objects or activities and follows the child's lead. The practitioner establishes the child's attention before providing a clear, developmentally appropriate learning opportunity.
If needed, the practitioner prompts the child and naturally reinforces the target behavior or skill. Practitioner uses child choice, shared control, and varied tasks throughout implementation to maintain the child's interest and engagement (Koegel et al., 1989; Wong et al., 2015).

Adapted NPDC model. The Adapted NPDC Model (Appendix B) is designed to focus on the use of evidence-based NDBI approaches for children with a variety of disabilities in a collaborative, effective, and accessible manner. The Adapted NPDC Model is focused on the child and the practitioner and the use of an NDBI approach to improve one target skill. Adaptations to the original model include a streamlined assessment process in which the child's needs are understood through practitioner and family inquiry, reference to Individualized Education Plan (IEP) goals, and consideration of child characteristics. Assessment results are then used in combination with sample intervention video recordings and summaries to support practitioner selection of one of the two NDBI approaches (i.e., PRT or IT). The practitioner then completes a self-paced, online module on the chosen approach provided by the Autism Focused Intervention Resources and Modules (AFIRM, 2018) project, which is an extension of the NPDC on ASD. AFIRM modules were designed to introduce and provide practitioners with basic procedural information about each intervention approach, and users report the modules are relevant and useful to their work (Sam, Cox, Savage, Waters, & Odom, 2019). Further, greater adaptations include the use of technology and social validity assessment. First, rather than in-person coaching, the Adapted NPDC Model utilizes technology-based intervention through practitioner collected video data and videoconferenced coaching. The use of technology can alleviate many of the issues practitioners note with training accessibility, time, and expense. Second, the Adapted NPDC Model also adds video self-evaluation to coaching sessions. Studies using videoconferencing benefit from the use of video self-evaluation (Neely et al., 2016; Neely et al., 2018; Wright, Ellis, & Baxter, 2012). Video self-evaluation involves the practitioner reviewing a video of their own performance and evaluating the adequacy of their implementation. Research has demonstrated the use of self-evaluation in supporting skill acquisition and maintenance of learned skills (e.g., Neely et al., 2016; Neely et al., 2018; Wright et al., 2012). Third, the Adapted NPDC Model is designed to promote social validity through active participation by the practitioner; consideration of practitioner experiences, characteristics, and preferences; and social validity assessment of direct consumers and stakeholders throughout the intervention process.

The Present Study

The purpose of this study is to extend the current literature by exploring an efficient, effective, and socially valid technology-based intervention training model for practitioners to implement NDBI approaches in inclusive preschool classrooms. This study assessed the effects of the Adapted NPDC model (i.e., collaborative selection of child target skills and NDBI approach, completion of an online training module corresponding to the selected NDBI approach, self-evaluation, videoconferenced coaching related to practitioner and child behavior, and measures of social validity). Specifically, this study addressed the following research questions:
1. Does the Adapted NPDC model support practitioner implementation of NDBIs? Do effects generalize to a different activity context and maintain over time?
2. Does the Adapted NPDC model increase the frequency of communication opportunities provided for the child's target skill? Do effects generalize to a different activity context and maintain over time?
3. Does the Adapted NPDC model increase the frequency of child participants' target skill, and do effects generalize to a different activity context and maintain over time?
4. What is the social validity related to the goals, procedures, and outcomes of the Adapted NPDC model?

Method

Participants

Practitioners. A total of six early childhood general education practitioners participated in this study. Practitioners were recruited from separate classroom settings based on the following criteria: (a) lead or assistant teacher in an inclusive preschool classroom where at least one child with a disability was enrolled and (b) no previous training in NDBIs. A recruitment letter (Appendix C) and consent form (Appendix D) with a demographic questionnaire attached (Appendix E) were emailed to early childhood practitioners in two Midwest school districts. Each of the six consenting practitioners described themselves as white, female, early childhood teachers in a lead teacher position. Further information for each of the practitioners is presented in Table 3.2.

Table 3.2 Practitioner Participant Information

Dyad  Name      Age  Highest Degree  Years Teaching  Prior Intervention Training  NDBI  Primary Activity Context  Gen. Activity Context
A     Amy       32   Bachelor's      5               Visual Supports              IT    Choice Time               Small Group
B     Betty     46   Bachelor's      15              None                         IT    Choice Time               Small Group
C     Carey     25   Bachelor's      4               None                         PRT   Small Group               Choice Time
D     Danielle  51   Bachelor's      17              None                         PRT   Choice Time               Small Group
E     Emily     42   Associate's     2               None                         PRT   Choice Time               Small Group
F     Fae       36   Master's        12              None                         IT    Choice Time               Small Group

Note. IT = incidental teaching; PRT = Pivotal Response Training; Gen. = generalization.

Children. Following practitioner consent, children within the classroom were recruited to participate with each practitioner (i.e., one per practitioner to create a dyad). Child participants were eligible if they had: (a) a school disability label of any of the following: early childhood developmental delay, speech and language impairment, autism spectrum disorder, other health impairment, and/or cognitive impairment; (b) Individualized Education Program goal(s) in the area of social and/or language domains; and (c) regular classroom attendance (i.e., the child attends school on average three days a week). To obtain consent for child participants, a parent recruitment letter (Appendix F), consent form (Appendix G), and Child Information Questionnaire (Appendix H) were presented by the practitioner at the child's home visit or program open house. If more than one eligible child consented per practitioner, the practitioner selected a target child in collaboration with the coach based on the benefits they felt the child might gain by participating in the intervention. Each consenting child participant's race was described as White by their parent. Table 3.3 contains further information about each of the six child participants. Each practitioner and child pair is referred to as a dyad hereafter.

Dyad A. Amy and Aaron had no experience working together before the study.
Amy had two years of prior experience as a lead teacher to four-year-old children in an inclusive preschool setting and was in her fifth year of teaching. Aaron had spent one year in a segregated early childhood special education classroom for children with or at risk for ASD prior to joining Amy's general education inclusive preschool classroom. At the start of the study, Aaron was able to use nouns to name common familiar objects related to home, community, and the classroom. He was not yet able to use verbs in response to a "what" question when visuals were present (e.g., asking, "What is he doing?" while showing him a visual of a person eating). Aaron was able to verbally initiate a request for assistance using one word (i.e., help). Further, he was able to point to individual icons to sequence phrases using visual sentence strips to make requests for desired items, but he was still working on independent verbal requests for desired or needed items during tasks (i.e., without visual sentence strips and a core vocabulary board). Aaron's parent reported that he enjoyed the car wash, trains, reading, counting, and memorizing things like environmental signs.

Dyad B. Betty and Bryan had not worked together prior to the study. Betty had one year of experience as a general education inclusive preschool teacher to four-year-old children before the start of the study and 15 years of teaching experience. Bryan had one year of segregated early childhood special education schooling prior to joining Betty's general education inclusive preschool classroom. Bryan demonstrated delays in expressive communication and intelligible speech at the start of the study. Bryan was able to use one to two words together and was working on verbalizing three or more words with different combinations of subjects, objects, and actions in meaningful utterances without adult modeling. Bryan's parent reported that he enjoyed superheroes and pretend play and used non-verbal expressions (i.e., gestures) to meet his needs.

Dyad C. Carey and Colin had not worked together prior to the study. At the start of the study, Carey was beginning her first year as a general education inclusive preschool teacher to four-year-old children and was in her fourth year of teaching. Colin was beginning his second year in an inclusive preschool classroom and was able to independently use two to three words to label a variety of items but was not yet using independent vocalizations to request wants and needs. Colin had a current behavior intervention plan to address physical aggression for the purpose of gaining tangible items and/or attention from adults and peers. Colin's parent reported that he enjoyed construction toys (e.g., blocks, trains, Legos) and physical activities like running, kicking and throwing balls, and biking.

Table 3.3 Child Participant Information

Dyad A: Aaron, age 4:1, male, eligibility ASD. Independent target goal: Request desired or needed items using three or more words. Definition of target goal: Verbalizations directed toward an adult or peer for the purpose of obtaining a desired or needed item that were not prompted by the practitioner. These three words could not be individually prompted (i.e., prompted one word at a time).

Dyad B: Bryan, age 4:3, male, eligibility SLI. Independent target goal: Use three or more words to communicate wants and needs. Definition of target goal: Verbalizations directed toward an adult or peer for the purpose of obtaining a wanted or needed item or action (e.g., "Spiderman shoot webs."). These three words could not be individually prompted (i.e., prompted one word at a time).

Dyad C: Colin, age 4:4, male, eligibility SLI. Independent target goal: Use three or more word phrases to communicate wants and needs with others. Definition of target goal: Verbalizations directed toward an adult or peer for the purpose of obtaining a desired or needed item that were not prompted by the practitioner. These three words could not be individually prompted (i.e., prompted one word at a time).

Dyad D: David, age 4:5, male, eligibility ECDD. Independent target goal: Use two or more word phrases to request items from an adult. Definition of target goal: Verbalizations directed toward an adult for the purpose of obtaining a desired or needed item that were not prompted by the practitioner. These three words could not be individually prompted (i.e., prompted one word at a time).

Dyad E: Ethan, age 3:8, male, eligibility SLI. Independent target goal: Use two or more words to label items during classroom activities. Definition of target goal: Verbalization for the purpose of labeling. Verbal labels were not scored if they were echoed from a peer or adult verbalization. These two or more words could not be individually prompted (i.e., prompted one word at a time).

Dyad F: Fiona, age 4:8, female, eligibility OHI. Independent target goal: Use two or more word phrases to communicate wants and needs. Definition of target goal: Verbalizations directed toward an adult for the purpose of obtaining a desired or needed item that were not prompted by the practitioner. These two or more words could not be individually prompted (i.e., prompted one word at a time).

Note. Age is reported in years:months. ASD = autism spectrum disorder; SLI = speech and language impairment; ECDD = early childhood developmental delay; OHI = other health impairment.

Dyad D. Danielle and David had not worked with each other prior to the study. David received five months of special education support in an inclusive three-year-old preschool setting prior to the start of the study, and Danielle was beginning her second year as a general education inclusive preschool teacher to four-year-old children and her 17th year in a teaching position. At the start of the study, David's speech and language pathologist reported in his IEP that he had significant delays in receptive and expressive language. He was able to label common items at home, school, and in the community using single-word vocalizations. He also used his name to convey a variety of messages (e.g., calling out "David" to convey, "I want that" or "My turn") to adults and peers. However, he did not independently combine words to verbally communicate with peers and teachers in the classroom. Additionally, parent reports indicated that David enjoyed objects with wheels, farm animals, and music.

Dyad E. Emily and Ethan had not worked together prior to the study. Ethan was beginning his first year of school, and Emily was beginning her second year as a general education inclusive preschool teacher to three-year-old children. Ethan's speech-language pathologist indicated on his IEP that he had significant delays in expressive language at the start of the study. Ethan's IEP also indicated that he used noises rather than words to label items but was beginning to use one-word verbalizations when prompted. He was also able to imitate some words but did not perform this skill consistently. Parents reported that Ethan preferred to play with tractors or farm equipment, animals, and barns.

Dyad F. Fae and Fiona had not worked together prior to the study.
Fae was in her twelfth year of teaching and her eighth year as a general education inclusive preschool teacher to four-year-old children. Fiona was beginning her second year in an inclusive preschool classroom placement. Fiona's IEP noted that she was eligible for special education services under the disability area of other health impairment, and her parent noted a previous atrioventricular canal repair and a diagnosis of Down syndrome. Fiona's IEP also noted a delay in the area of speech and language. At the start of the study, Fiona was able to follow routine directions and could accurately select common objects in her environment when provided with a verbal prompt (e.g., the practitioner says, "Show me the red ball," and she would touch the red ball). She was independently using one word to label and make requests for objects but was not yet communicating in phrases or sentences. Parents reported that Fiona enjoyed music and art activities but at times displayed oppositional behavior (e.g., non-compliant behavior).

Coach. The coach (i.e., the author) had a master's degree in special education and was a doctoral candidate in child development. Additionally, the coach held early childhood and autism teaching endorsements and had completed courses toward a certificate in applied behavior analysis. The coach provided all of the coaching and feedback to the practitioners. The coach also had previous experience in practitioner training and coaching as well as in implementing interventions with young children with disabilities in inclusive school settings. Specifically, the coach had consulted with the practitioner participants for two years prior to this study as an employee of the local intermediate school district.

Setting

The study was implemented at the beginning of the school year in six separate early childhood inclusive preschool classrooms where children with and without disabilities between the ages of 36 and 72 months were educated together. Each classroom's daily schedule included at least one small group and large group activity, choice time (i.e., free play) lasting at least one hour, and a meal or snack time. All dyads were in programs that operated four days a week, yet Dyads A, B, and F were educated in half-day programs (i.e., three hours per day) while Dyads C, D, and E were in full-day programs (i.e., six and a half hours per day). Specific contexts during the typical school day were chosen by individual practitioners (e.g., choice time, small group, etc.). Each practitioner selected one context for baseline, intervention, and maintenance probes (i.e., the primary context), and a separate context was chosen by each practitioner for generalization probes (see Table 3.2).

Materials

A variety of toys and materials in the child's educational environment were used throughout the study. The classroom materials used varied by dyad and session. A tablet was provided to each practitioner to video record sessions and share them with the researcher during the study. A secure file sharing host (i.e., Dropbox) was used by the practitioners with individual password-protected accounts. The training was completed online using the provided tablet or other computer equipment available to the practitioner. Additionally, video conferencing with the practitioner was conducted using the provided tablet or other available device. Materials used during intervention (e.g., online training module, self-evaluation, etc.) are explained in the procedure section.
Dependent Variables and Measurement

Three dependent variables representing practitioner and child outcomes were measured in this study. The primary dependent variable was the percentage of NDBI strategy steps performed correctly (i.e., practitioner implementation fidelity). The frequency of target skill communication opportunities offered by the practitioner to the target child within each 10-min session was also measured. To measure change in the distal outcomes for child participants, the frequency of the child's independent use of the chosen target skill was measured. Decisions related to movement between phases (e.g., meeting criterion) were based solely on the primary dependent variable (i.e., practitioner implementation fidelity).

Practitioner implementation fidelity. Practitioners were taught to implement the chosen NDBI with at least 90% fidelity across two consecutive sessions (based on the NDBI steps in the Procedural Fidelity Checklist; see Appendix I). Practitioner implementation fidelity was calculated by dividing the number of steps completed correctly by the total number of steps and multiplying by 100. Since practitioners could offer multiple target skill communication opportunities within one session, the resulting percentages for each opportunity were aggregated into an overall mean percentage of steps completed correctly during each session, as summarized in the formula at the end of this section. For example, if the practitioner offered three opportunities during a session, with resulting 90%, 85%, and 95% fidelity, the overall fidelity for the session was 90%.

Target skill communication opportunities. Target skill communication opportunities occurred when the practitioner arranged the environment to elicit the child's use of the target skill. For example, a target skill communication opportunity could occur when the practitioner put a preferred item in sight but out of reach. A target skill communication opportunity was only counted when the practitioner displayed an action (e.g., moving pieces out of reach of the child) or vocalization (e.g., "I see more colors over here") that encouraged the child to request assistance or materials and the child initiated physically (e.g., reaching, pointing) and/or verbally (e.g., requesting or labeling with or without the target skill requirement) toward the item or activity. Further, only communication opportunities for the chosen child target skill were counted. Specifically, if the practitioner arranged the environment for a communication opportunity that was not the child's target skill, this was not counted.

Child target skill. Each child participant had one target goal that was assessed during the study to determine the impact of practitioner intervention on child behavior. The child's target skill behavior was measured with a frequency count during the 10-min video probes, and the total number of occurrences per probe was recorded. An occurrence was defined as an independent display of the target skill chosen for the child. However, practitioner-prompted displays of the target skill were also measured. Operational definitions of child target skills were established with the practitioner based on the child's IEP prior to data collection (see Table 3.3). All of the child participants had expressive communication goals in their IEP that practitioners deemed socially important.
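For reference, the session-level fidelity computation described above can be stated compactly. The expression below is simply a restatement of the scoring procedure and of the illustrative example already given (a session with k opportunities, here three opportunities scored at 90%, 85%, and 95%):

$$\text{Fidelity}_{i} = \frac{\text{steps completed correctly in opportunity } i}{\text{total steps}} \times 100, \qquad \text{Fidelity}_{\text{session}} = \frac{1}{k}\sum_{i=1}^{k}\text{Fidelity}_{i} = \frac{90 + 85 + 95}{3} = 90\%$$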
Interobserver agreement. To ensure reliable coding of data, a second researcher collected reliability data across 32% of baseline sessions, 40% of intervention sessions, 40% of maintenance sessions, and 33% of generalization sessions, which were randomly selected and evenly distributed across all six dyads. Interobserver agreement (IOA) data were collected for all three dependent variables (i.e., practitioner implementation fidelity, target skill communication opportunities, and child target skill). The second observer held a doctorate in special education and was a board certified behavior analyst. Prior to coding, the first author provided the secondary coder with training specific to coding each dependent variable in the study. Training continued until the secondary coder demonstrated at least 90% agreement with the first author on two training videos. IOA was calculated using point-by-point agreement. Agreement was noted if both observers coded the same behavior within a trial; disagreement was noted if the observers' coded behaviors did not match within a trial. Agreement was calculated by dividing the total agreements by the sum of agreements and disagreements and multiplying by 100 to produce a percentage. IOA was 100% for Dyad A across all conditions and behaviors. Dyad B had 97.5% (range: 83-100%) IOA overall, with 100% IOA for all behaviors across baseline, maintenance, and generalization and 93% (range: 83-100%) across intervention. For Dyad C, IOA was 99% (range: 92-100%) overall, with 100% IOA across baseline, intervention, and generalization for all behaviors. IOA was 96% (range: 92-100%) across maintenance sessions for Dyad C. For Dyad D, IOA was 99% (range: 94-100%) overall, with 100% IOA across baseline, intervention, and maintenance, and 94% for the generalization probe. IOA for Dyad E was 99% (range: 94-100%) overall, with 100% IOA across baseline, maintenance, and generalization. IOA was 97% (range: 94-100%) across intervention sessions for Dyad E. For Dyad F, IOA was 98% (range: 92-100%) overall, with 100% IOA across baseline and generalization. IOA was 96% (range: 92-100%) across intervention and 97% (range: 93-100%) across maintenance sessions.
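Stated as a formula, the point-by-point agreement computation described above is the standard percent-agreement calculation. The numbers below are hypothetical and included only for illustration (e.g., a session coded with 19 agreements and 1 disagreement):

$$\text{IOA} = \frac{\text{agreements}}{\text{agreements} + \text{disagreements}} \times 100 = \frac{19}{19 + 1} \times 100 = 95\%$$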
Experimental Design

A single-case experimental research design was used for this study. Specifically, a concurrent multiple probe across participants design (Horner & Baer, 1978) was employed to investigate the use of the Adapted NPDC Model to collaboratively select a child target skill and an appropriate NDBI and to complete online training, self-evaluation, and videoconference coaching. Single-case research design is a rigorous, quantitative scientific methodology used to determine whether a functional relation exists between the introduction of a researcher-manipulated independent variable and a dependent variable (Horner, Carr, Halle, McGee, Odom, & Wolery, 2005; Kratochwill et al., 2010). A multiple probe across participants design was chosen because the dependent variables within this study were unlikely to reverse for the participants. This design also requires at least three demonstrations of effect and controls for threats to internal validity (i.e., history and maturation) through the staggering of experimental phases across participants (Lane & Gast, 2014). Participants were randomly assigned within the multiple probe design, and intervention was initiated for subsequent participants once a consistent effect was demonstrated for practitioner procedural fidelity.

Procedures

In line with the Adapted NPDC Model, each participant completed an assessment prior to the start of data collection. During this step, the coach met with each practitioner at their school. During this meeting, each child's needs were identified to establish a socially important treatment goal. Child target skills were selected collaboratively by the coach and practitioner from the child's IEP. Each target skill fell under the social skills or language domain and was appropriate for frequency measurement during data collection. In instances where two or more target skills were noted for inclusion in the study, the Target Skill Prioritization form (adapted from Carter, 2010; Appendix J) was used to rank target skills and assist in selecting the child's target skill. The coach guided the practitioner to identify a target skill that matched the child's learning needs as outlined in the IEP and was viewed as socially important by the practitioner. The selected target skill was then operationally defined by the coach and practitioner. Next, the practitioner was presented with the two NDBI approaches, PRT (Koegel et al., 1989) and IT (Hart & Risley, 1982), from which to choose an intervention approach to use with the child. The coach provided a video example of each NDBI (see Amsbary & AFIRM Team, 2017; Suhrheinrich et al., 2018) along with written descriptions of each intervention (Table 3.1) to assist the practitioner in intervention selection. Then, the operational definition of the child's target skill, the chosen NDBI approach, the child's preferred toys or activities, and the practitioner-chosen context and generalization context for the NDBI were recorded by the practitioner on the Intervention Activity Form (Appendix K). Last, the practitioner and coach practiced the video recording procedure and the video upload procedure and tested the videoconferencing program using the provided tablet.

Baseline. Following the assessment process (i.e., child target skill selection and NDBI approach selection), probes were conducted for all dyads. Probe sessions during baseline were conducted in the natural environment of the participant's school. During the baseline phase, the practitioners were directed to work on the target skill with the assigned child as outlined on the Intervention Activity Form and to videotape a 10-min session with the child. The practitioners were instructed not to watch the videos following the sessions. No instructions or feedback from the coach were given during this phase. Each practitioner conducted at least five baseline probes, with a maximum of one baseline session per day and four per week. Practitioners remained in baseline while the first practitioner began the intervention phase of the study. Once the first practitioner displayed an increasing trend of steps implemented independently, the researcher administered probes to the second dyad to confirm stable behavior before introducing the independent variable. This process was repeated for the remaining participant dyads until all six dyads completed the intervention phase of the study.

Intervention. Following completion of the baseline phase, each practitioner completed the Autism Focused Intervention Resources and Modules (AFIRM) module that corresponded with their selected intervention approach (i.e., IT or PRT). If the practitioner was using IT, they completed the Naturalistic Intervention module (Amsbary & AFIRM Team, 2017).
If the practitioner was using PRT, they completed the Pivotal Response Training module (Suhrheinrich, Chan, Melgarejo, Reith, Stahmer, & AFIRM Team, 2018). The online module provided the practitioner with background knowledge on the NDBI approach and took approximately one and a half to two hours to complete. To confirm completion of the online training, the practitioner took the module's post-assessment and submitted the completion certificate to the coach. Upon receipt of the post-assessment certificate, the practitioner was instructed to videotape a 10-min session in which they implemented the selected NDBI approach to work on the child's target skill. The practitioner had access to the Self-Evaluation Checklist (Appendix L) corresponding to their chosen NDBI but had not received any feedback from the coach. Following the session, the practitioner uploaded the video to Dropbox using the same procedure as in baseline. The coach and practitioner then both independently viewed the videotaped session and separately evaluated the video using the Self-Evaluation Checklist corresponding to the chosen NDBI. After both the practitioner and coach viewed and evaluated the video, they met at a mutually agreeable time via secure videoconference (i.e., Zoom) to discuss the session. The videoconference coaching sessions were recorded, using the built-in video recording capabilities of the videoconference program, for later analysis. During the videoconference coaching session, the coach reviewed each step of the evaluation checklist with the practitioner and provided feedback according to the Coaching Fidelity Checklist (Appendix M). The coach also asked the practitioner open-ended questions to solicit their thoughts and ideas (e.g., "What are your thoughts about today's session?"). When requested or as needed, video segments were replayed by the coach during videoconferencing sessions. The mean length of a videoconferencing session across all six practitioners was 18.6 min (range: 9.42-26.34 min). The intervention phase of the study continued until the practitioner implemented the NDBI steps with 90% or greater fidelity for two consecutive sessions. This criterion was chosen to ensure practitioner mastery of the NDBI approach.

Treatment integrity. An independent observer scored treatment integrity for coaching by viewing 100% of coaching session videos for each practitioner (i.e., 30 sessions) and rating them using the Coaching Fidelity Checklist. The independent observer was trained on the Coaching Fidelity Checklist using coaching videos from a separate pilot study. Treatment integrity was calculated by dividing the number of steps completed correctly by the total number of steps and multiplying the quotient by 100 to obtain a percentage. Treatment integrity averaged 98% across all sessions (range: 90-100%). The step missed most often was "Ask the practitioner if they have any questions," which was missed in three of the total sessions.

Maintenance. Practitioners entered the maintenance phase once the mastery criterion was met and stability in trend and level was demonstrated for the dependent variable across at least five probes in the intervention phase. A minimum of five maintenance follow-up probes were collected after the last intervention session. Follow-up probes were collected once a week. The target goal and context remained the same for each dyad during maintenance.
As in the baseline and generalization phases, no feedback was provided, and practitioners did not complete the self-evaluation checklist. Practitioners videotaped a 10-min session for data collection purposes and uploaded it as in previous phases of the study but were instructed not to view the video recordings.

Generalization. Generalization was assessed through application of the intervention in a different context than the baseline and intervention probes. The generalization context was chosen by the practitioner prior to the baseline phase. Each practitioner videotaped a 10-min session implementing the NDBI approach with the child in the generalization context and uploaded video clips as described previously. During baseline, practitioners recorded a generalization session after the third baseline session. During intervention and maintenance, practitioners were asked to record a generalization session as the last probe of the phase. Practitioners were instructed not to view the video recordings. Additionally, no video feedback or self-evaluation occurred for generalization probe sessions.

Social Validity

Four measures of social validity were collected to obtain consumer feedback regarding the intervention package. Each social validity measure was chosen to assess the overall importance and acceptability of the goals, procedures, and outcomes of the intervention package from the viewpoint of consumers, as suggested by Wolf (1978). Each measure described below was converted to an electronic format (i.e., Qualtrics), and a link was emailed to respondents by the coach (i.e., the author) to be completed without the presence of the coach. All four measures also included an option to leave comments at the end.

First, the practitioner participants completed the Target Skill Prioritization Questionnaire (adapted from Carter, 2010) prior to the baseline phase in order to determine the social validity of the child target skill. The eight-item Likert-style questionnaire allowed the practitioner to rate their agreement or disagreement with statements regarding the target skill (e.g., "This is the best target skill that could be chosen"). Target Skill Prioritization was measured using a six-point scale ranging from 1 (strongly disagree) to 6 (strongly agree), with a higher score signifying a higher priority of the target skill. The Target Skill Prioritization questionnaire took approximately one minute to complete.

Next, the Teacher Efficacy for the Inclusion of Young Children with Disabilities scale (TEIYD; adapted from Esposito, Guarino, & Caywood, 2007; Walls, 2007; Appendix N) was distributed to practitioner participants before the baseline phase and after the last maintenance probe was collected. The TEIYD was used to measure practitioner self-efficacy in working with children with disabilities in their inclusive classroom over time. The TEIYD contained survey items on efficacy in three subtopics: knowledge of young children with disabilities (5 items), teaching confidence with young children who have a disability and are included in the general education classroom (7 items), and perceptions of abilities to implement both effective teaching strategies and modifications to the general education curriculum to meet the needs of young children with disabilities (3 items). Each question asked the practitioner to rate their response on a five-point Likert-type scale with the following response options: 1—no confidence, 2—little confidence, 3—moderate confidence, 4—confident, and 5—very confident (e.g., "I can modify instructional practices to meet the needs of young children with disabilities in an inclusive setting."), with a higher score signifying a higher level of confidence in each topic area. The TEIYD took approximately five minutes to complete at each time point.

Additionally, the Intervention Rating Profile-15 (IRP-15; Martens, Witt, Elliot, & Darveaux, 1985; Appendix O) was used to examine NDBI approach acceptability by practitioners and indirect consumers. Practitioners completed the IRP-15 before the baseline phase and after the final maintenance probe session. The IRP-15 consisted of 15 items (e.g., "I find this intervention suitable for the child's target skill development."), which were rated on a six-point Likert-type scale ranging from 1 (strongly disagree) to 6 (strongly agree). Overall scores ranged from 15 to 90, with higher scores reflecting greater acceptability. The IRP-15 took approximately 3 minutes to complete. Also, an electronic link to the IRP-15 was emailed by the author to an indirect consumer in the school community (e.g., administrator, special education provider) who observed one implementation session during the intervention phase through video or in person (Appendix P).

Last, a researcher-developed questionnaire providing a social validity rating of the training and coaching procedures was emailed to practitioners when they entered the maintenance phase (Appendix Q). This questionnaire included eight questions for practitioners with a six-point Likert-type response scale ranging from 1 (strongly disagree) to 6 (strongly agree) and took approximately four minutes to complete. Total scores ranged from 8 to 48, with higher scores reflecting greater acceptability of the training and coaching procedures. Each question targeted a component of the training and coaching package (e.g., "I enjoyed the self-evaluation process"; "I found the video conferencing to be effective."). A higher score indicated a higher level of acceptability as perceived by practitioners.

Results

The intervention package (i.e., the Adapted NPDC Model) produced increased implementation fidelity while simultaneously increasing the frequency of target skill communication opportunities offered by the practitioner to the target child. The intervention package also increased the frequency of the child's independent use of the chosen target skill. The increases in the dependent variables occurred when the intervention was introduced to each dyad at six different points in time. These changes in performance suggest a functional relationship (Horner, Carr, Halle, McGee, Odom, & Wolery, 2005) between the coach's implementation of the Adapted NPDC Model and the dependent measures. Additionally, generalization across activity contexts was observed for each dyad.

Practitioner Behavior

Implementation fidelity. Analysis of level, trend, and variability of the primary dependent variable (i.e., practitioner implementation fidelity) was conducted to establish a functional relation between the Adapted NPDC Model and the implementation fidelity of the chosen NDBI. The percentage of the practitioner-chosen NDBI steps performed correctly is represented in Figure 3.1 for each practitioner participant.
Analysis of these data across practitioners and phases suggests a clear functional relation between the Adapted NPDC Model and practitioner implementation fidelity. Overall, all six practitioners reached the fidelity criterion (i.e., above 90% for two consecutive sessions) within the first three sessions of the intervention phase. Each practitioner conducted five sessions to meet the What Works Clearinghouse standards outlined for single case research and maintained high levels of implementation fidelity above 90% (Kratochwill et al., 2016). Analysis also suggests that implementation fidelity generalized across activity contexts.

Dyad A. Amy demonstrated zero-rate responding across all baseline sessions and immediately increased to 86% of NDBI steps implemented correctly during the first intervention session, with a mean fidelity score of 94% (range: 86-100%). Her implementation fidelity remained high in maintenance, with a mean of 99.6% (range: 98-100%) for probes conducted weekly for five weeks post intervention. Amy demonstrated generalization to a different activity context (i.e., small group), increasing from 0% implementation fidelity in baseline to 100% implementation during intervention and maintenance.

Dyad B. Betty demonstrated zero-rate responding across all baseline sessions and immediately increased to 98% implementation fidelity during the first intervention session, with an average level of 99% (range: 98-100%) during the intervention phase. Her implementation fidelity remained high in maintenance, with a mean of 99.6% (range: 99-100%). Betty demonstrated generalization of implementation fidelity to a different activity context (i.e., small group) with 0%, 100%, and 100% accuracy during baseline, intervention, and maintenance probes, respectively.

Dyad C. Carey demonstrated zero-rate responding across all baseline sessions. Carey immediately implemented the NDBI steps with high fidelity (i.e., 95%) during the first intervention session, with mean implementation fidelity of 98% (range: 95-100%). Implementation fidelity remained high in maintenance, with a mean of 99.6% (range: 99-100%). Carey demonstrated generalization to a different activity context (i.e., free choice). Generalization probes increased from 0% implementation fidelity during baseline to 100% during intervention and maintenance probe sessions.

Figure 3.1 Practitioner Implementation Fidelity and Frequency of Child Behavior

Note. Practitioner fidelity of implementing the NDBI is shown in closed triangles corresponding to the left y-axis (percent correct of NDBI implementation). Frequency of independent child target behavior is shown in open squares corresponding to the right y-axis. The x-axis represents sessions across the baseline, intervention, maintenance, and generalization probe phases for Dyads A through F.

Dyad D. Danielle demonstrated zero-rate responding across all baseline sessions and immediately increased the percentage of NDBI steps implemented correctly to 75% during the first intervention session, with a mean of 93% (range: 75-100%) when the intervention package was applied. During the maintenance phase, implementation fidelity remained high, with a mean of 99.6% (range: 98-100%).
Danielle demonstrated generalization to a different activity context (i.e., small group), increasing from 0% implementation fidelity in baseline to 99% implementation during intervention and maintenance probes.

Dyad E. Emily demonstrated a low level of implementation fidelity, with a mean of 19% of steps performed correctly (range: 0-37%) across baseline sessions and the last baseline session at 0%. She immediately increased to 100% during the first intervention session, with a mean of 99% (range: 96-100%). Emily's implementation fidelity remained high in maintenance, with a mean of 99.6% (range: 98-100%). Generalization of implementation fidelity was demonstrated to a different activity context (i.e., small group), increasing from 0% accuracy during the baseline probe to 100% accuracy during intervention and maintenance probes.

Dyad F. Fae demonstrated a low level of implementation fidelity during the first three probe sessions and then zero-rate responding for the five remaining probes during baseline. Fae's implementation fidelity averaged 18% (range: 0-59%) across baseline sessions and immediately increased to 86% during the first intervention session, with a mean of 96% (range: 86-100%). Generalization probes to a different activity context (i.e., small group) increased from 0% implementation fidelity during baseline to 95% during intervention and remained high at 93% during maintenance.

Frequency of communication opportunities. The frequency of communication opportunities for the child target skill offered by the practitioner is displayed in Figure 3.2. Analysis of level, trend, and variability of this secondary dependent variable across phases suggests an effect of the Adapted NPDC Model on the frequency of communication opportunities, with generalization across activity contexts.

Dyad A. Amy provided 0 target skill communication opportunities across all baseline sessions and immediately increased to 9 opportunities during the first intervention session, remaining stable with an average level of 8.8 opportunities (range: 8-9) during the intervention phase. Her frequency of target skill communication opportunities provided increased in maintenance, with an average level of 13.2 (range: 9-19). Amy generalized the frequency of communication opportunities offered for the target skill to a different activity context. She increased the frequency of communication opportunities from 0 at baseline to 10 in intervention and 13 following the final maintenance probe.

Dyad B. Betty provided 0 target skill communication opportunities across all baseline sessions and immediately increased to 7 opportunities during the first intervention session, continuing on an accelerating trend with an average level of 10.8 opportunities (range: 6-18) during the intervention phase. Betty's frequency of target skill communication opportunities remained high in maintenance, with an average level of 13 communication opportunities (range: 10-16). Betty demonstrated an increasing trend in target skill communication opportunities provided across generalization probes, with 0 opportunities offered in baseline, 4 opportunities during intervention, and 5 opportunities after the final maintenance probe.

Dyad C. Carey provided 0 target skill communication opportunities across all baseline sessions and immediately increased to 13 opportunities during the first intervention session. Carey demonstrated a stable trend during intervention, offering an average level of 11.8 opportunities (range: 9-13).
Her frequency of target skill communication opportunities provided remained high in maintenance, with some variability and an average level of 14.6 communication opportunities (range: 6-23). The frequency of target skill communication opportunities demonstrated by Carey during generalization probes increased from 0 opportunities in baseline to 15 opportunities during intervention and 10 opportunities after the final maintenance probe.

Figure 3.2 Frequency of Communication Opportunities and Child Behavior

Note. Practitioner frequency of opportunities is shown in closed triangles and frequency of independent child behavior in open squares. Generalization probes are shown in closed circles for practitioner frequency and open circles for frequency of independent child behavior. The y-axis represents the frequency of communication opportunities and child behavior, and the x-axis represents sessions for Dyads A through F.

Dyad D. Danielle provided 0 target skill communication opportunities across all baseline sessions and immediately increased to 5 opportunities during the first intervention session, displaying an increasing trend during the intervention phase with an average level of 11.8 opportunities (range: 5-21). Her frequency of target skill communication opportunities provided increased in maintenance, with an average level of 15 communication opportunities (range: 13-18). Danielle demonstrated generalization of the frequency of communication opportunities offered for the target skill to a different activity context (i.e., small group) at an increasing trend, from 0 opportunities offered at baseline to 8 opportunities in intervention and 15 opportunities offered post-maintenance.

Dyad E. Emily offered a mean of 1.3 communication opportunities for the target skill (range: 0-2) across baseline, showing a decelerating trend to 0 at the final baseline probe session. Emily immediately increased to 10 communication opportunities offered during the first intervention session, with an average level of 12.4 opportunities (range: 7-20) across the intervention phase. The average level of communication opportunities offered increased to 13.5 (range: 13-15) in maintenance. Emily demonstrated generalization to a different activity context (i.e., small group), with 0 communication opportunities offered at baseline increasing to 7 communication opportunities offered at both the intervention and maintenance phase generalization probes.

Dyad F. Fae offered an average level of 1.5 communication opportunities (range: 0-8) across baseline with a decreasing trend, concluding with a stable and low baseline level of 0 communication opportunities offered for the child target skill. Fae demonstrated an immediate increase to 13 communication opportunities during the first intervention session, with an average level of 13 communication opportunities (range: 2-18) during the intervention phase. Fae demonstrated some variability during intervention with one overlapping data point. Yet, communication opportunities provided increased in maintenance and remained stable and high, with a mean of 14.2 communication opportunities (range: 14-15). Fae demonstrated generalization to a different activity context (i.e., small group), with an increasing trend from 0 communication opportunities at baseline to 7 and 10 communication opportunities provided in intervention and maintenance probes, respectively.
Child Behavior

Child target social communication skill. The third dependent variable was the frequency of independent occurrences of the child's target social communication behavior. Figure 3.1 displays the frequency of independent occurrences of child participants' target skill behavior with practitioner implementation fidelity during each phase of the study. Figure 3.2 also displays the frequency of independent occurrences of child participants' target skill behavior along with the frequency of communication opportunities provided by the practitioner. Analysis of these data across participants and phases suggests an effect of the Adapted NPDC Model on independent child target skill behavior. Analysis also suggests that child target skill behavior generalized across activity contexts.

Dyad A. Aaron demonstrated 0 instances of the target communication behavior (i.e., requesting) across all baseline sessions and immediately increased to 2 independent requests, with an average level of 5 independent requests per session (range: 2-8) during intervention. His frequency of independent target skill behavior remained high in maintenance, with a mean of 12 independent requests (range: 9-16). At generalization probes, Aaron performed the target behavior 0, 10, and 13 times during the baseline, intervention, and maintenance conditions, respectively.

Dyad B. Bryan demonstrated 0 instances of the target communication behavior (i.e., requesting) across all baseline sessions. Bryan immediately demonstrated an increase in the target skill behavior to 5 independent requests, with an average level of 9.2 (range: 3-16) and an increasing trend across intervention sessions. His frequency of independent requests remained high in maintenance, with a mean of 12.8 (range: 10-16). Generalization probe data depict an increasing trend in Bryan's generalization of his target skill behavior to a different activity context (i.e., small group) across phases, with an increase from 0 independent requests in baseline to 5 at both the intervention and maintenance session probes.

Dyad C. Colin demonstrated 0 instances of the target communication behavior (i.e., requests) across all baseline sessions and immediately increased to 10 independent requests during the first intervention session, with an average level of 10.2 independent requests (range: 7-13). His frequency of independent requests remained high in maintenance, with a mean of 13.8 (range: 6-20). Colin demonstrated generalization of his target skill behavior to a different activity context (i.e., small group), with 0 requests at baseline, 13 requests during the intervention phase, and 10 requests at the final generalization probe during the maintenance phase.

Dyad D. David demonstrated 0 instances of the target communication behavior (i.e., requests) across all baseline sessions. David immediately increased to 1 independent request during the first intervention session and demonstrated an increasing trend, with an average level of 8.4 target skill occurrences (range: 1-18) during the intervention phase. His independent requests remained high and stable in maintenance, with a mean of 14.8 (range: 13-18). Generalization probe data demonstrate an increasing trend of independent requests from 0 in baseline to 7 during intervention and 14 independent requests at the final generalization probe during maintenance.

Dyad E.
Ethan demonstrated 0 instances of the target communication behavior (i.e., labeling) across all baseline sessions and an immediate increase to 6 independent labels during the first intervention session, with an average level of 12.4 (range: 7-20) across intervention sessions. Ethan's independent target skill communication behavior remained high in maintenance, with an increasing trend and a mean of 13 (range: 9-16). Ethan demonstrated generalization of independent target skill behavior to a different activity context across phases. Ethan's generalization probe pre-intervention shows no instances of target skill communication behavior, while the generalization probe during intervention shows 6 instances, which increased to 7 at the final probe session during the maintenance phase.

Dyad F. Fiona demonstrated her target communication skill independently (i.e., requests) 1 time during the second baseline session, with an average level of 0.13 (range: 0-1). Fiona demonstrated an immediate increase to 7 independent requests during the first intervention session, with an average level of 13 independent requests (range: 5-24) and some variability. Her independent requests in maintenance remained higher than the baseline level, with a mean of 7.8 independent target skill occurrences (range: 3-11). Fiona demonstrated generalization of independent requests to a different activity context (i.e., small group) across phases, with 0 independent requests in baseline, 2 during the intervention phase, and 6 independent requests during the final generalization probe in maintenance.

Social Validity

Goals. The target social communication skill goals were selected by the practitioners with assistance from the coach based on the child's current IEP. Practitioner participants completed the social validity instrument independently after the target skill was chosen and prior to the beginning of the study. Results of the Target Skill Prioritization questionnaire (adapted from Carter, 2010) revealed that all six of the practitioners agreed (rating of 5) or strongly agreed (rating of 6; M = 5.67) that the child target skill was the best choice for the child and of the highest priority.

Procedures. The treatment procedures within the intervention package included the training and coaching procedures as well as the NDBI procedures. The study took place in natural settings (i.e., early childhood general education classrooms), and NDBIs were implemented by natural change agents (i.e., general education early childhood teachers).

NDBI procedures. Before the intervention phase began, all six practitioners rated their chosen NDBI at a high level of acceptability, with a total mean rating of 81.6 out of 90 on the IRP-15 (Martens et al., 1985). Following the intervention phase, all six practitioners rated the intervention at a higher level of acceptability, with a total mean rating of 87.48 out of 90 on the IRP-15. In open comments, one practitioner wrote, "I loved this intervention. It really taught me a lot about myself." Another practitioner commented, "10 minutes was too long for this particular child. Five minute sessions would have been more appropriate." Additionally, outside raters, who worked with the practitioner participants, were asked to complete the IRP-15. The four outside raters included an administrator, a special education teacher, an assistant teacher, and a speech and language pathologist.
Eight responses were recorded for the IRP-15, as some outside raters completed the questionnaire for more than one practitioner and child dyad they worked with. Each of the six practitioners was represented at least once within the eight responses. Outside raters watched the practitioner implement the intervention via video or in person and reported that the practitioner's chosen NDBI was highly acceptable, with a mean rating of 81.42 out of 90. In open comments, outside raters noted that they hoped the interventions would be used with all children in the classroom who may need them and that they saw great improvements in the target children. Additionally, one respondent wrote, "I like how the teacher is given shared control and the child is rewarded and encouraged. I believe this is a technique that could be used with a variety of students and goal skills."

Training and coaching procedures. Upon completion of the intervention phase, practitioners were sent a researcher-developed questionnaire to assess the acceptability of the training and coaching procedures. Overall, all six practitioners found the procedures to be acceptable, with a total mean rating of 45.16 out of 48. Specifically, practitioners found the video conferencing (M = 5.83) and self-evaluation (M = 5.5) to be enjoyable and effective and agreed that the online training module was an acceptable way to initially train practitioners on the NDBI (M = 5.67). Practitioners also agreed that the procedures were efficient (i.e., time, cost, etc.; M = 5.5) and that their overall experience was positive (M = 5.83). One practitioner wrote in open comments:

I am older and not very good with technology but the online training was a breeze for me. I was honestly very nervous about taking videos of myself. I have always heard if you videotape yourself it will make you a better teacher but way too scared to ever do it. Well, setting the baseline was easy as I did not have to watch the videos but once I started having video conferencing sessions with my coach, I learned so much! She helped me understand how to implement the PRT and gave me confidence along the way. I believe this type of training is a great strategy that uses the child's natural environment to learn. Plus, with me learning right from my house (Online training module and having my personal coach) gaining essential skills to help me grow. It was amazing to watch the videos and get immediate feedback to correct anything that needed tweaking. I am in my second year of leading an inclusion class and this is by far the best thing that has happened for not only me but ALL the students in my class!

Another practitioner also indicated in open comments, "I definitely learned a lot from the coaching. The online training was fine, but I tend to learn more from doing. The coaching was key in self-reflection as well as how to move forward and improve with this intervention." One practitioner also noted, "I LOVED THIS!!! I felt at first it might be a lot of work, but at the end I have learned a ton and I think I am a better teacher after doing this."

Outcomes. The treatment outcomes within the intervention package included consumer satisfaction with both practitioner and child outcomes. Practitioners were surveyed to gather data related to self-efficacy changes over time and the social validity of child outcomes.

Practitioner self-efficacy.
The TEIYD (Esposito et al., 2007; Walls, 2007) was completed by practitioner participants pre-study and post intervention phase to assess the social validity of practitioner self-efficacy in working with children with disabilities in their inclusive classroom over time. Practitioner self-efficacy was determined by computing an overall mean and means relating to each topic area. Pre-study results revealed that, overall, practitioners held moderate confidence (M = 3.8 out of 5) in their efficacy for the inclusion of young children with disabilities, which increased to a mean of 4.5, indicating a higher level of confidence post-study. Prior to the study, practitioners were confident in their knowledge of how disabilities can impact young children (M = 4.4), which increased to a slightly higher confidence rating post-study (M = 4.6). Practitioners' confidence in their ability to teach young children with disabilities included in the general education classroom was rated as moderately confident (M = 3.4) pre-study and increased to confident (M = 4.2) post-study. Perceptions of practitioners' ability to implement both effective teaching strategies and modifications to the general education curriculum to meet the needs of young children with disabilities were rated as confident (M = 3.9) pre-study and increased to very confident (M = 4.7) post-study. One practitioner indicated in open comments pre-study, "I feel I have relied on the special education teacher to teach and maintain a lot of the IEP goals for the children unless they were more academic goals." Another comment pre-study stated, "I feel that I still have a lot to learn." Post-study practitioner comments reveal:

This experience has given me the confidence to dig into an IEP and see how to create activities to help a child with a disability. Now that this experience is done, I want to continue to learn about other children's IEP goals in our classroom and provide them with activities to help them meet and exceed their goals! This is the second year I have been teaching in an inclusive classroom and I feel this was the first time that I really had someone give me the tools to help a child with a disability.

Child outcomes. Results from a statement on the practitioner training and coaching social validity questionnaire (i.e., "I see improvement in the child's target skill as a result of this intervention.") completed after the intervention phase revealed that practitioners strongly agreed (M = 5.67) that the intervention was effective for child participants. Also, open comments provided on the IRP-15 questionnaire following study completion indicated positive social validity related to child outcomes. One practitioner wrote, "I am absolutely amazed at the results I have seen in the student I worked with! His target goal was to put together three to five words to request and he far exceeded that. It has carried over to other parts of the day as well." Another practitioner's comment stated, "It is unbelievable to me how this child is talking now using four words to request items! The child's confidence has also grown!"

Discussion

This study demonstrated the effectiveness of the Adapted NPDC Model to train and coach practitioners to implement NDBIs in inclusive general education preschool classrooms.
The innovative delivery method of the Adapted NPDC Model resulted in increased outcomes for all six of the practitioner and child dyads as well as generalization of skills across contexts and maintenance of behavior change up to five weeks following the removal of the intervention package. Additionally, respondents found the intervention package socially valid and practitioner self-efficacy increased. Several aspects of this study deserve further discussion, including the technology-based intervention training method, social validity assessment, and child learning outcomes. Innovative Delivery Method The Adapted NPDC Model incorporated technology-based intervention to train practitioners. First, an online training module was included as initial training. The training was didactic in nature, which is consistent with other technology-based training studies (Neely et al., 2017). All six practitioners in the current study improved their fidelity of implementing an NDBI above baseline levels following completion of the online training module only. These results are consistent with previous research identifying online modules as an effective initial training for practitioner implementation of an NDBI (Neely et al., 2016; Vismara et al., 2012; Wainer & Ingersoll, 2013). Additionally, this study included the innovative addition of ongoing coaching via videoconferencing with video-based performance feedback and self-evaluation following online training. The addition of coaching, performance feedback, and self-evaluation is supported in previous syntheses of technology-based training for practitioner implemented NDBIs (Ferguson et al., 2018; Neely et al., 2017) and provided practitioners with added support to help increase and maintain implementation fidelity. Further, the implementation of the intervention package was practical and cost effective. The training and coaching were conducted based on practitioner preference for time and location using a low-cost tablet that was provided to them. The duration of this innovative training method was also practical for practitioners. The training module duration was approximately 1.5 hours and the average coaching session lasted 22 min, with each practitioner participating in just five coaching sessions (M = 1.83 hr per practitioner). These durations are substantially lower than the training durations of previous studies (Neely et al., 2016). Although training duration may not be an indicator of training effectiveness (Brock & Carter, 2015), training duration has the potential to impact acceptance of procedures and subsequent intervention fidelity (Ingersoll & Berger, 2015). Results of the current study indicate that practitioner and child behavior changes endured over time. Specifically, results of weekly probes up to five weeks showed continuing durability of behavior at high levels once mastery was achieved and the intervention package was removed. These results are noteworthy as literature reviews of technology-based intervention training note limited reports of the maintenance of behavior change (Neely et al., 2017) despite the fact that maintenance of behavior change is considered to be an indicator of intervention effectiveness (Baer, Wolf, & Risley, 1987) and social validity (Kennedy, 2002).
Social Validity The Adapted NPDC Model included multiple social validity measures to assess the social significance of the goals, the social appropriateness of the procedures, and the social importance of the effects of the intervention package (Wolf, 1978). All of the participants responded to the social validity questionnaires, and their responses indicated high acceptability of the Adapted NPDC Model. External consumers (i.e., assistant teachers, speech and language pathologists, and administrators), who were blind to the study purpose and outcomes, also indicated high acceptability. Maintenance or sustained use data are considered an indicator of social validity (Kennedy, 2002). Specifically, when practitioners do not consider an intervention package acceptable, even when research has demonstrated evidence of effectiveness, they may be less likely to implement it as intended or maintain its use (Greenwood & Abbott, 2001). Initial findings suggest that the Adapted NPDC Model is effective in increasing implementation fidelity and maintaining behavior change and is also socially valid to practitioner participants and stakeholders. Positive social validity may also be due to the study's alignment with Horner and colleagues' (2005) quality indicators of social validity for single-case research: (a) the intervention was implemented with fidelity by practitioners in the natural classroom setting, (b) the intervention was feasible (i.e., practical and cost effective), and (c) the effects were maintained over time after coaching was removed. This study also included a social validity measure of practitioner self-efficacy to teach young children with disabilities in inclusive classrooms. Increased practitioner self-efficacy is a significant finding as attitudes and beliefs are the primary barrier to preschool inclusion (Barton & Smith, 2015). Specifically, within this study, practitioners reported increased confidence in implementing evidence-based teaching strategies with young children with disabilities post-intervention. These results align with research indicating that training and ongoing support increase practitioner efficacy (Gebbie, Ceglowski, Taylor, & Miels, 2011). The Adapted NPDC Model shows promise in increasing practitioner efficacy, which can impact the learning of children with disabilities in inclusive classroom settings. Relations with Child Learning Outcomes Although improvement in practitioner implementation fidelity as a result of the intervention package is an important finding, child improvements are also an important measure of effectiveness. Results of this study suggest that, in addition to practitioners adhering to NDBI implementation at high levels of fidelity, increases in communication opportunities (i.e., dosage) were followed by increases in child behavior. However, practitioners were not coached to increase the number of communication opportunities and were unaware of the collection of data on this behavior. Results demonstrate that independent child target skill behavior increased as the practitioners improved their fidelity of implementing the NDBI and provided more communication opportunities. Future research should continue to explore the impact of dosage on child outcomes.
This finding aligns with research suggesting that it is important to provide multiple learning opportunities, as young children may not respond to all opportunities provided and therefore may require more opportunities in order to exhibit the target skill independently (Douglas, Light, & McNaughton, 2013). Limitations and Future Research Directions Several limitations exist within this study which could be explored in future research. First, a multi-component training package (e.g., online training module, videoconferencing, self-evaluation) was used in this study. Therefore, it was not possible to isolate the impact of individual components. Future research might include component analysis to assess the most essential components of the Adapted NPDC Model. Next, the current study included one practitioner in the Adapted NPDC Model per classroom. Future research should implement a train-the-trainer model to promote expanded and sustained implementation of this intervention package (e.g., Neely et al., 2018; Suhrheinrich, 2015). Consultants, special education teachers, and administrators could be trained to implement the Adapted NPDC Model with general education preschool inclusion practitioners with the addition of evidence-based coaching skills, which could increase crucial outcomes for all participants. The Adapted NPDC Model relies on videos recorded by the practitioner and sent to the coach for data collection and further coaching. Perhaps the fact that practitioners videotaped themselves impacted the outcome variables, as they may have knowingly implemented with higher fidelity while the camera was recording. Yet, the same could be stated for in-person observations when the practitioner knows the coach is watching. Implementation fidelity may also have been inflated due to measurement procedures. Incidental teaching and pivotal response training fidelity procedures were based on prior research and trainings (see Neely et al., 2016; Suhrheinrich et al., 2018), which included "environmental arrangement" as the first step in a trial. This step was operationally defined as the practitioner displaying an action (e.g., moving pieces out of reach of the child) or vocalization (e.g., "I see more colors over here") that encouraged the child to request assistance or materials and the child initiating physically (e.g., reaching, pointing) and/or verbally (e.g., requesting or labeling with or without the target skill requirement) toward the item or activity. As such, the inclusion of this step could have artificially inflated the fidelity of implementation since many trials or communication opportunities could be offered in the duration of a session. Individual differences in data require further discussion. Dyad F displayed variability in practitioner communication opportunities and child target skill behavior during intervention, yet behaviors stabilized at high levels in the maintenance phase. This difference may be due to individual child characteristics, as Fiona's parents and IEP reported characteristics of challenging behavior (i.e., stubbornness). Modifications to the NDBI, such as the addition of an antecedent-based strategy or reinforcement procedures to decrease a challenging behavior, may increase the immediacy of intervention effects (Feeley & Jones, 2006). Future research examining characteristics of young children with disabilities and modifications to interventions based on child characteristics is recommended.
Finally, when considering the generalizability of this intervention package to other practitioners and settings, it is possible that additional training may be necessary to ensure effectiveness. Practitioners in the current study were involved in fully inclusive programs (i.e., no self-contained early childhood special education) and voluntarily agreed to participate. As such, practitioners were motivated to learn the behaviors and implement the chosen NDBI. Additionally, the coach had worked with the practitioners previously, which suggests a prior relationship that may have influenced implementation fidelity. Future research should investigate factors that might moderate the effectiveness of the Adapted NPDC Model by considering coach (e.g., relationship and coaching skills), practitioner (e.g., motivation and vested interest), and school setting characteristics (e.g., administrator support). Implications for Practice These findings contribute to the broader discussion about the service-need gap related to preschool inclusion and training general education practitioners to implement NDBIs. The results of this study provide further evidence for an alternative method in the dissemination of NDBIs to practitioners. Since videos can be recorded during naturally occurring activities and ongoing coaching and training is flexible and unobtrusive, the Adapted NPDC Model is a feasible option for education professionals. When education professionals (e.g., administrators, coaches, university professors, etc.) consider professional development models to prepare practitioners to support preschool-age children with disabilities in their classrooms, they may consider using the Adapted NPDC Model, which has been identified through this work as effective, acceptable, and practical for general education practitioners. The collection of videos using this training model may provide important video examples for additional staff and provide the IEP team with additional progress monitoring opportunities. Such video examples of child and practitioner success may also influence the perceptions of other practitioners working with children with disabilities in inclusive settings and may increase the likelihood of inclusive placements for children with disabilities (Praisner, 2003). Targeting stakeholder teams (e.g., assistant practitioners, paraeducators, ancillary staff) for training could potentially increase the quantity and the quality of intervention for children with disabilities and stakeholders' perceived ability to work with children with disabilities within the inclusive classroom. Conclusion This study provides initial evidence for the effectiveness and efficiency of the Adapted NPDC Model for disseminating training and ongoing coaching of NDBIs to general education practitioners of young children with disabilities included in their preschool classrooms. The use of innovative technology and training methods allowed for an efficient, effective, and socially valid approach to practitioner professional development. As children with disabilities are increasingly included in general education preschool classrooms, the need to implement effective, feasible, and socially valid interventions becomes pressing. Current research surrounding the use of technology-based intervention training as a means to train practitioners, although still limited, is progressing.
This study adds to the current literature demonstrating that technology-based intervention training, like the Adapted NPDC Model, can be an effective platform for addressing the service-need gap in early childhood inclusive classrooms. APPENDICES APPENDIX A NPDC Model APPENDIX B Adapted NPDC Model [Figure: Adapted NPDC Model (NPDC Model adapted from Odom et al., 2009). Assessment (child IEP goals; child strengths, interests, history; target skill prioritization; selection of NDBI approach), Implementation (implementation of NDBI with fidelity; child and practitioner data collection), and Outcomes, supported by coaching components: online training, delayed videoconferencing, and self-evaluation.] APPENDIX C Practitioner Recruitment Letter Human Development and Family Studies 552 W. Circle Drive 7 Human Ecology East Lansing, MI 48824 517-355-7680 Fax: 517-432-2953 http://hdfs.msu.edu Dear Educator, May, 2018 You are being invited to participate in a study. The purpose of this study is to train and coach early childhood practitioners in inclusive preschool classrooms to implement a play-based intervention during naturally occurring activities utilizing strategies from developmental and behavioral science called Naturalistic Developmental Behavioral Intervention. As the practitioner, you will choose the specific intervention to work with a child with a disability (e.g., autism, early childhood developmental delay, speech and language impaired) using collaborative and technology-based training and coaching methods. During this study, you will work with the coach (Sophia D’Agostino) to choose a child target skill and intervention approach to implement. As part of the study, you will provide video recordings of yourself interacting with the child throughout each phase. Videos will be uploaded to a secure network and you will receive the electronic materials and directions you need to participate. You will complete a self-paced online training module and implement the intervention with one child with a disability in your classroom. As you are learning the intervention, you will self-evaluate each video using a checklist and meet with the coach via videoconferencing at a time of your choosing for feedback. The process of self-evaluation and videoconferencing feedback with the coach will only occur during the intervention phase for an estimated four to six sessions based on the number of sessions it takes you to master the intervention strategy. Sessions will take place during typically occurring classroom activities and videoconferencing sessions will be scheduled at a time that is convenient for the practitioner. The study is expected to occur between August and January. If you are willing to participate, please complete the attached consent and demographic forms and return via email to Sophia D’Agostino at sophiad@msu.edu. Please contact me if you have any questions about the study. Thank you, Sophia D’Agostino Sophia D’Agostino, M.Ed Michigan State University 515 Pleasant St SE Grand Rapids, MI 49503 616-202-8175 sophiad@msu.edu APPENDIX D Practitioner Consent APPENDIX E Practitioner Participant Demographic Questionnaire Practitioner Participant Demographic Questionnaire Please answer the following demographic questions. 1. Age: __________________________________________________________________ 2. What gender do you identify with? __________________________________________ 3. What is your race/ethnicity? ________________________________________________ 4.
What is your highest degree and what major is it in?_____________________________ 5. What is your current job title? ______________________________________________ 6. What certifications do you hold? ____________________________________________ 7. How long have you been in an early childhood educator role? ______________________________________________________________________ 8. How long have you worked in an inclusive preschool classroom in which children with identified disabilities are included in a classroom with typically developing children?________________________________________________________________ 9. What type of training have you received to work with children with disabilities? Please include the content provided and training format (in person, online, from the district or outside source, etc.). _______________________________________________________________________ APPENDIX F Parent Recruitment Letter Human Development and Family Studies 552 W. Circle Drive 7 Human Ecology East Lansing, MI 48824 517-355-7680 Fax: 517-432-2953 http://hdfs.msu.edu Dear Parent, August, 2018 Your child is being invited to participate in a research study. The purpose of the study is to train and coach early childhood practitioners in inclusive preschool classrooms to implement a play-based naturalistic intervention for a child with a disability that impacts social and communication skills. The intervention will use collaborative and technology-based methods with a teacher in your child’s classroom. During the study, your child will work with their teacher to increase a target skill that is selected by the teacher with your input. Your child will participate in normally occurring classroom activities (like free play) while the teacher follows your child’s lead and implements the play-based naturalistic intervention. Your child will also be video recorded by their teacher using a tablet. The video recordings will be uploaded to a secure network for teacher training purposes and only shared with approved members of the research team. The study will occur during normally occurring classroom activities and is expected to take place between September and January. If you are willing to allow your child to participate, please complete the enclosed consent and demographic forms and return them to your classroom teacher at your child’s school. I have provided an additional copy of the consent form for your records. Please contact me if you have any questions about the study. Thank you, Sophia D’Agostino Sophia D’Agostino, M.Ed Michigan State University 552 West Circle Dr East Lansing, MI 48824 616-202-8175 sophiad@msu.edu APPENDIX G Parental Consent APPENDIX H Child Information Questionnaire APPENDIX I Procedural Fidelity Checklists Incidental Teaching Procedural Fidelity Practitioner Initials: Phase, Session Number, Date: Observer: Child Target Skill: Directions: Watch the video session and choose “+” if the practitioner emitted the technique as described and “-“ if the practitioner did not emit the technique as described. Mark “na” if practitioner did not need to prompt.
The practitioner must create an opportunity for the child to use the target skill above or it is noted as no opportunity with “no opp” Technique Description Environmental Arrangement Practitioner arranges preferred materials in the environment to encourage child communication. 1 2 3 Implementation 7 4 6 5 8 9 10 1 2 Follow Child’s Lead Restrict Access 3 4 Waiting 5 Prompting 6 7 Reinforcing Practitioner follows the child’s initiation toward the communication opportunity Practitioner restricts access to the item, activity, etc. Practitioner waits at least 5 seconds for the child to practice the target skill before initial prompting If necessary, practitioner presents prompt for communication (e.g., model,gestural/physical prompt, vocal prompt) Practitioner delivers prompts only if the child demonstrates interest in the item/activity. Practitioner waits at least 5 seconds between prompts. If child demonstrates target skill, practitioner provides access to the activity/toy after for at least 20 s or less if activity naturally ends before 20 s. 109 If child does not demonstrate target skill, practitioner provides another model of the correct response. If the child has not demonstrated the targeted skill after the second prompt, practitioner provides a final model of the correct response and provides access to the item/activity. TOTAL Video Time Number observed ____ / ____ possible Procedural Fidelity = ____ % 8 Directions: Tally each occurrence of the child’s target skill behavior as prompted or independent. Tally Total Per Session Prompted Target Skill Independent Target Skill 110 Pivotal Response Training Procedural Fidelity Practitioner Initials: Phase, Session Number, Date: Observer: Child Target Skill: Directions: Watch the video session and choose “+” if the practitioner emitted the technique as described and “-“ if the practitioner did not emit the technique as described. Mark “na” if practitioner did not need to prompt. The practitioner must create an opportunity for the child to use the target skill above or it is noted as no opportunity with “no opp” Technique Description Implementation 1 Environmental Arrangement Practitioner arranges the environment to get the child’s attention by limiting distractions, having preferred materials ready and additional materials nearby if child interests change. 2 Present a Clear Opportunity 3 Wait 4 Adult Response Prompt (if needed) Practitioner presents a clear opportunity for the child to use the skill that is understandable to the child, uninterrupted, and developmentally appropriate. This can include: Gesture/Play model, Verbal model, Instruction, Question, Facial Expression, Comment, Situational*see list with operational definitions for clarity Pause for at least 5 seconds to wait for the child to respond. The child may respond before 5 second wait time is up and the practitioner should then respond. If an appropriate response or reasonable attempt, practitioner gives reward paired with specific verbal praise. An attempt is a behavior that serves the same function as the target skill, without the accuracy or complexity of a correct response. If an inappropriate or incorrect response, practitioner provides a prompt/presents the instruction again to the child to complete the task. If appropriate response or reasonable attempt after prompt, practitioner gives reward with verbal praise. 5 Provides Choice Within the opportunity, the practitioner provides the child control with choices. Choices can be between and/or within activities. 
111 1 2 3 4 5 6 7 8 9 10 6 Follows the Child’s Lead 7 Turn Taking Within the opportunity, the practitioner follows the child’s lead by responding to how the child’s interests change and incorporates how the child wants to interact with the materials. Within the opportunity, the practitioner incorporates turn- taking (e.g., regain control of materials and model the skill, facilitate interactions, encourage a turn with a peer). 8 Use Varied Instruction Within the opportunity, the practitioner uses a mix of both easy and difficult tasks TOTALS Video Time Number observed ____ / ____ possible Procedural Fidelity = ____ % Directions: Tally each occurrence of the child’s target skill behavior as prompted or independent. Tally Total Per Session Prompted Target Skill Independent Target Skill 112 APPENDIX J Target Skill Prioritization 113 Target Skill Prioritization (adapted from Carter, 2010) Directions: Complete one form for each treatment goal that is being considered. Rate your agreement/disagreement with each of the questions below for the each treatment goal. (1 = strongly disagree, 2 = disagree, 3 = somewhat disagree, 4 = somewhat agree, 5 = agree, 6 = strongly agree) Treatment Goal: ________________________________________________________________ ______________________________________________________________________________ 1. This is the best treatment goal that could be chosen. 1 2 3 4 5 6 2. This treatment goal focuses on the most important issues. 1 2 3 4 5 6 3. This treatment goal increases opportunities to engage in activities that may currently be limited. 1 2 3 4 5 6 4. This is a reasonable treatment goal to accomplish. 1 2 3 4 5 6 5. This treatment goal will not have negative side effects. 1 2 3 4 5 6 6. This treatment goal will increase opportunities for positive feedback. 1 2 3 4 5 6 7. This treatment goal will promote other needed skills. 1 2 3 4 5 6 8. This treatment goal is needed more than other goals. 1 2 3 4 5 6 APPENDIX K Intervention Activity Form Practitioner Initials: Directions: With the coach, complete the chart below taking into consideration the information listed in the Child Information Questionnaire. Child Target Skill (operational definition): NDBI Approach: Child Preferred Activities and Toys: Setting and Time of Day Appropriate for Intervention (e.g., inside at play time): Generalization Context: 114 APPENDIX L Self-Evaluation Checklists 115 116 117 APPENDIX M Coaching Fidelity Checklist 118 Coaching Fidelity Checklist Practitioner Initials: NDBI Approach: Session Number: Observer: Videoconferencing Coaching Procedures Completed Video Time 1. Coach and practitioner both have the completed evaluation checklist in front of them. + - 2. Coach provides overall positive statement about the practitioner’s performance. + - 3. Coach and practitioner review each step of the self-evaluation checklist. + - 4. When agreement occurs, the coach will state that they have reached agreement and ask if the practitioner has any questions. + - NA 5. When disagreement occurs, the coach will state in a neutral voice that they disagree and provide a rationale for the disagreement. + - NA 6. Coach shares the screen to watch segments of the video session simultaneously. + - 7. Coach points out instances when strategy was used correctly giving specific praise (with or without video). + - NA 8. Coach points out opportunities to use strategies in the future. + - NA 9. Coach provides constructive feedback when strategy is used incorrectly. + - NA 10. 
Coach asks whether the practitioner has any questions. + - 11. Coach will end with providing an overall positive statement about the practitioner’s performance. + - 12. Each session will conclude with scheduling or attempting to schedule the next videoconferencing session. + - Number observed _____ / _____ possible Procedural Fidelity = ______% APPENDIX N Teacher Efficacy for the Inclusion of Young Children with Disabilities Scale 119 The Teacher Efficacy for the Inclusion of Young Children with Disabilities (TIEYD; adapted from Esposito et al., 2007; Wells, 2007) Practitioner Initials: Choose One: Before the start of the study OR End of the study Directions: Using the 5-point scale below, indicate your confidence level for each of the following statements: 1 = no confidence, 2 = little confidence, 3 = moderate confidence, 4 = confident, 5 = very confident 1. I know how disabilities can impact a young child’s social relationships. 2. I know how disabilities can impact a young child’s language development. 3. I know how disabilities can impact a young child’s cognitive skills. 4. I know how disabilities can impact a young child’s motor skills. 5. I know how disabilities can impact a young child’s self-help skills. 6. I understand my role in serving children with an active IEP. 7. I know the most effective teaching strategies for young children with disabilities. 8. I know the most effective strategies for working with families of young children with disabilities. 9. I can modify instructional practices to meet the needs of young children with disabilities in an inclusive setting. 10. I understand how to break learning tasks down into smaller components. 11. I understand what appropriate learning tasks are for young children with disabilities. 12. I can develop learning tasks for the inclusive setting based on IEP goals and objectives. 13. I can make appropriate classroom environment modifications to meet the needs of young children with disabilities in inclusive settings. 14. I can select appropriate curriculum for young children with disabilities in inclusive settings. 15. I can modify classroom activities for young children with disabilities in inclusive settings. APPENDIX O Practitioner Intervention Rating Profile 120 Intervention Rating Profile-15 (IRP-15; adapted from Martens et al., 1985) Practitioner Version Practitioner Initials: Circle One: Before the start of the study OR After the study Please respond to the following items with the number that best describes your agreement or disagreement with each statement pertaining to the chosen Naturalistic Developmental Behavioral Intervention. 1 = strongly disagree, 2 = disagree, 3 = somewhat disagree, 4 = somewhat agree, 5 = agree, 6 = strongly agree 1. This is an acceptable intervention for the child’s target skill. 2. I find this intervention appropriate for the child’s target skill. 3. This intervention should/did prove effective in increasing the child’s target skill. 4. I would suggest the use of this intervention to other practitioners. 5. The child’s skill deficit is appropriate for this intervention. 6. I find this intervention suitable for the child’s target skill development. 7. I am willing to continue using this intervention in the classroom. 8. This intervention would not result in negative side-effect for the child. 9. This intervention would be appropriate for a variety of children. 10. This intervention is consistent with those I have used in classroom settings. 11. 
This intervention was an acceptable way to increase the child’s target skill. 12. This intervention is reasonable for the child’s target skill. 13. I like the procedures used in this intervention. 14. This intervention was a good way to respond to the child’s skill deficit. 15. Overall, this intervention is beneficial for the child. Comments:_________________________________________________________________________________________________________________________________________________ APPENDIX P Outside Rater Intervention Rating Profile 121 Intervention Rating Profile-15 (IRP-15; adapted from Martens et al., 1985) Outside Rater Version Role: Practitioners Initials: Child’s Initials: Child’s Target Skill: Please respond to the following items with the number that best describes your agreement or disagreement with each statement pertaining to the chosen Naturalistic Developmental Behavioral Intervention. 1 = strongly disagree, 2 = disagree, 3 = somewhat disagree, 4 = somewhat agree, 5 = agree, 6 = strongly agree 1. This is an acceptable intervention for the child’s target skill. 2. I find this intervention appropriate for the child’s target skill. 3. This intervention should/did prove effective in increasing the child’s target skill. 4. I would suggest the use of this intervention to other practitioners. 5. The child’s skill deficit is appropriate for this intervention. 6. I find this intervention suitable for the child’s target skill development. 7. I support the continued use of this intervention in the classroom. 8. This intervention would not result in negative side-effect for the child. 9. This intervention would be appropriate for a variety of children. 10. This intervention is consistent with those I have used/observed in classroom settings. 11. This intervention is an acceptable way to increase the child’s target skill. 12. This intervention is reasonable for the child’s target skill. 13. I like the procedures used in this intervention. 14. This intervention was a good way to respond to the child’s skill deficit. 15. Overall, this intervention is beneficial for the child. Comments:_________________________________________________________________________________________________________________________________________________ APPENDIX Q Training and Coaching Social Validity Scale 122 Practitioner Training and Coaching Social Validity Questionnaire Practitioner Initials: Please respond to the following items to evaluate the training and coaching procedures (online training module, video conferencing and self-evaluation). 1 = strongly disagree, 2 = disagree, 3 = somewhat disagree, 4 = somewhat agree, 5 = agree, 6 = strongly agree 1. I found the online training module to be an acceptable way to train practitioners. 2. I foud the self-evaluation to be effective. 3. I foud the video conferencing to be effective. 4. I enjoyed the self-evaluation process. 5. I enjoyed video conferencing with the coach. 6. I believe this training and coaching approach is an efficient (time, cost, etc.) training method. 7. I see improvement in the child’s target skill as a result of this intervention. 8. Overall, my experience with this training and coaching approach was positive. 
Comments: ________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________ REFERENCES 123 REFERENCES Amsbary, J., & AFIRM Team. (2017). Naturalistic intervention. Chapel Hill, NC: National Professional Development Center on Autism Spectrum Disorder, FPG Child Development Center, University of North Carolina. Retrieved from http://afirm.fpg.unc.edu/Naturalistic-intervention Augestad, K. M., & Lindsetmo, R. O. (2009). Overcoming distance: Video-conferencing as a clinical and educational tool among surgeons. World Journal of Surgery, 33, 1356–1365. doi:10.1007/s00268-009-0036-0. Autism Focused Intervention Resources and Modules Team. (2018). Autism Focused Intervention Resources and Modules. Chapel Hill, NC: National Professional Development Center on Autism Spectrum Disorder, FPG Child Development Center, University of North Carolina. Retrieved from: http://afirm.fpg.unc.edu. Barton, E. E., & Smith, B. J. (2015). Advancing high quality preschool inclusion: A discussion and recommendations for the field. Topics in Early Childhood Special Education, 35, 69- 78. doi:10.1177/0271121415583048 Boudah, D. J., Blair, E., & Mitchell, V. J. (2003). Implementing and sustaining strategies instruction: Authentic and effective professional development or “business as usual”? Exceptionality, 11, 3–23. doi:10.1207/S15327035EX1101_2 Brock, M. E., & Carter, E. W. (2017). A meta-analysis of educator training to improve implementation of interventions for students with disabilities. Remedial and Special Education, 38, 131-144. doi: 10.1177/0741932516653477 interventions. Boston, MA: Academic Press. Carter, S. L. (2010). The social validity manual: A guide to subjective evaluation of behavioral Division for Early Childhood/National Association for the Education of Young Children. (2009). Early childhood inclusion: A joint position statement of the Division for Early Childhood (DEC) and the National Association for the Education of Young Children (NAEYC). Chapel Hill: The University of North Carolina, Frank Porter Graham Child Development Institute. Douglas, S. N., McNaughton, D., & Light, J. (2013). Online training for paraeducators to support the communication of young children. Journal of Early Intervention, 35, 223-242. doi: 10.1177/1053815114526782 Dunst, C. J., Trivette, C. M., & Raab, M. (2013). An implementation science framework for conceptualizing and operationalizing fidelity in early childhood intervention studies. Journal of Early Intervention, 35, 85-101. doi: 10.1177/1053815113502235 124 Esposito, M. C., Guarino, A. J., & Caywood, K. D. (2007). Perceived Teacher Efficacy Beliefs for the Inclusion of the Student with Learning Disabilities. Learning Disabilities: A Multidisciplinary Journal, 14, 265-272. Retrieved from https://eric-ed- gov.proxy2.cl.msu.edu/?id=EJ803320 Feeley, K., & Jones, E. (2006). Addressing challenging behaviour in children with Down syndrome: The use of applied behaviour analysis for assessment and intervention. Down Syndrome Research and Practice, 11, 64-77. doi:10.3104/perspectives.316 Ferguson, J., Craig, E. A., & Dounavi, K. (2018). Telehealth as a Model for Providing Behaviour Analytic Interventions to Individuals with Autism Spectrum Disorder: A Systematic Review. Journal of Autism and Developmental Disorders, 49, 1-35. 
doi: 10.1007/s10803- 018-3724-5 Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What makes professional development effective? Results from a national sample of teachers. American Educational Research Journal, 38, 915–945. doi:10.3102/00028312038004915 Gebbie, D. H., Ceglowski, D., Taylor, L. K., & Miels, J. (2012). The role of teacher efficacy in strengthening classroom support for preschool children with disabilities who exhibit challenging behaviors. Early Childhood Education Journal, 40, 35-46. doi: 10.1007/s10643-011-0486-5 Greenwood, C. R., & Abbott, M. (2001). The research to practice gap in special education. Teacher Education and Special Education, 24, 276-289. doi: 10.1177/088840640102400403 Hart, B. M., & Risley, T. R. (1982). How to use incidental teaching for elaborating language. Lawrence, KS: H & H Enterprises. Harwell, S. H. (2003). Teacher professional development: It’s not an event, it’s a process. Waco, TX: Center for Occupational Research and Development. Hieneman, M., Dunlap, G., & Kincaid, D. (2005). Positive support strategies for students with behavioral disorders in general education settings. Psychology in the Schools, 42, 779- 794. doi: 10.1002/pits.20112 Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: a variation on the multiple baseline. Journal of applied behavior analysis, 11, 189-96. doi: 10.1901/jaba.1978.11- 189 Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional children, 71, 165-179. Retrieved from http://ezproxy.msu.edu.proxy2.cl.msu.edu/login?url=https://search-proquest- com.proxy2.cl.msu.edu/docview/201222049?accountid=12598 125 Ingersoll, B., & Berger, N. I. (2015). Parent engagement with a telehealth-based parent-mediated intervention program for children with autism spectrum disorders: Predictors of program use and parent outcomes. Journal of Medical Internet Research, 17, 227–245. doi:10.2196/jmir.4913. Kennedy, C. H. (2002). The maintenance of behavior change as an indicator of social validity. Behavior Modification, 26, 594–604. Retrieved from http://dx.doi.org/10.1177/014544502236652. Knowles, M. S. (1980). The modern practice of adult education: From pedagogy to andragogy (2nd ed.). Chicago: Follett. Koegel, R. L., Schreibman, L., Good, A., Cerniglia, L., Murphy, C., & Koegel, L. K. (1989). How to teach pivotal behaviors to children with autism: A training manual. Santa Barbara: University of California. Kratochwill T. R., Hitchcock J. H., Horner R. H., Levin J. R., Odom S. L., Rindskopf D. M., et al. (2010). Single Case Designs Technical Documentation. In What Works Clearinghouse: Procedures and Standards Handbook (Version 2.0). Retrieved from http://ies.ed.gov.proxy2.cl.msu.edu/ncee/wwc/pdf/wwc_scd.pdf Lane, J. D., & Gast, D. L. (2014). Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological rehabilitation, 24, 445- doi: 10.1080/09602011.2013.815636 Langley, A. K., Nadeem, E., Kataoka, S. H., Stein, B. D., & Jaycox, L. H. (2010). Evidence- based mental health programs in schools: Barriers and facilitators of successful implementation. School mental health, 2, 105-113. doi: 10.1007/s12310-010-9038-1 Martens, B. K., Witt, J. C., Elliott, S. N., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional psychology: Research and practice, 16, 191. 
doi: 10.1037/0735-7028.16.2.191 McDuffie, K. A., & Scruggs, T. E. (2008). The contributions of qualitative research to discussions of evidence-based practice in special education. Intervention in School and Clinic, 44, 91-97. doi: 10.1177/1053451208321564 Neely, L., Rispoli, M., Gerow, S., & Hong, E. R. (2016). Preparing interventionists via Neely, L., Rispoli, M., Boles, M., Morin, K., Gregori, E., Ninci, J., & Hagan-Burke, S. (2018). telepractice in incidental teaching for children with autism. Journal of Behavioral Education, 25, 393-416. doi: 10.1007/s10864-016-9250-7 Interventionist Acquisition of Incidental Teaching Using Pyramidal Training via Telehealth. Behavior Modification. Retrieved from https://doi.org/10.1177/0145445518781770 126 Neely, L., Rispoli, M., Gerow, S., Hong, E. R., & Hagan-Burke, S. (2017). Fidelity outcomes for autism-focused interventionists coached via telepractice: A systematic literature review. Journal of Developmental and Physical Disabilities, 29, 849-874. doi: 10.1007/s10882-017-9550-4 Odom, S. L., Buysse, V., & Soukakou, E. (2011). Inclusion for young children with disabilities: Odom, S. L., Cox, A. W., Brock, M. E., & National Professional Development Center on ASD. A quarter century of research perspectives. Journal of Early Intervention, 33, 344-356. (2013). Implementation science, professional development, and autism spectrum disorders. Exceptional Children, 79, 233-251. Retrieved from http://ezproxy.msu.edu.proxy2.cl.msu.edu/login?url=https://search-proquest- com.proxy2.cl.msu.edu/docview/1270781690?accountid=12598 Pantermuehl, R. M., & Lechago, S. A. (2015). A comparison of feedback provided in vivo versus an online platform on the treatment integrity of staff working with children with autism. Behavior Analysis in Practice, 8, 219-222. doi: 10.1007/s40617-015-0059-y Praisner, C. L. (2003). Attitudes of elementary school principals toward the inclusion of students with disabilities. Exceptional Children, 69, 135–145. doi: 10.1177/001440290306900201 Rispoli, M., Neely, L., Lang, R., & Ganz, J. (2011). Training paraprofessional to implement interventions for people with autism spectrum disorder: A systematic review. Developmental Neurorehabilitation, 14, 378–388. doi:10.3109/17518423.2011.620577. Sam, A.M., Cox, A.W., Savage, M.N., Waters, V., Odom, S.L. (2019). Disseminating information on evidence-based practices for children and youth with autism spectrum disorder: AFIRM. Journal of Autism and Developmental Disorders. 1-10. Retrieved from https://doi-org.proxy2.cl.msu.edu/10.1007/s10803-019-03945-x Schreibman, L., Dawson, G., Stahmer, A. C., Landa, R., Rogers, S. J., McGee, G. G., Kasari, C., Ingersoll, B., Kaiser, A.P., Bruinsma, Y, & McNerney, E. (2015). Naturalistic developmental behavioral interventions: Empirically validated treatments for autism spectrum disorder. Journal of autism and developmental disorders, 45, 2411-2428. doi: 10.1007/s10803-015-2407-8 Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice the state of the art? Journal of Applied Behavior Analysis, 24, 189–204. doi: 10.1901/jaba.1991.24- 189 Snyder, P. A., Rakap, S., Hemmeter, M. L., McLaughlin, T. W., Sandall, S., & McLean, M. E. Stahmer, A. C., Collings, N. M., & Palinkas, L. A. (2005). Early intervention practices for (2015). Naturalistic instructional approaches in early learning: A systematic review. Journal of Early Intervention, 37, 69-97. doi: 10.1177/1053815115595461 127 children with autism: Descriptions from community providers. 
Focus on Autism and Developmental Disabilities, 20, 66-79. doi: 10.1177/10883576050200020301 Suhrheinrich, J. (2015). A sustainable model for training teachers to use pivotal response training. Autism, 19, 713-723. doi: 10.1177/1362361314552200 Suhrheinrich, J., Chan, J., Melgarejo, M., Reith, S., Stahmer, A., & AFIRM Team. (2018). Pivotal response training. Chapel Hill, NC: National Professional Development Center on Autism Spectrum Disorder, FPG Child Development Center, University of North Carolina. Retrieved from http://afirm.fpg.unc.edu/Pivotal-response-training Vismara, L. A., Young, G. S., Stahmer, A. C., Griffith, E. M., & Rogers, S. J. (2009). Dissemination of evidence-based practice: Can we train therapists from a distance? Journal of Autism and Developmental Disorders, 39, 1636-1651. doi: 10.1007/s10803-009-0796-2 Wainer, A. L., & Ingersoll, B. R. (2013). Disseminating ASD interventions: A pilot study of a distance learning program for parents and professionals. Journal of Autism and Developmental Disorders, 43, 11-24. doi: 10.1007/s10803-012-1538-4 Wainer, A. L., Pickard, K., & Ingersoll, B. R. (2017). Using web-based instruction, brief workshops, and remote consultation to teach community-based providers a parent-mediated intervention. Journal of Child and Family Studies, 26, 1592-1602. doi: 10.1007/s10826-017-0671-2 Walls, S. D. (2007). Early childhood preservice training and perceived teacher efficacy beliefs concerning the inclusion of young children with disabilities. Auburn University. Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214. Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., Brock, M. E., Plavnick, J. B., Fleury, V. P., & Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45, 1951-1966. Wright, M. R., Ellis, D. N., & Baxter, A. (2012). The effect of immediate or delayed video-based teacher self-evaluation on Head Start teachers’ use of praise. Journal of Research in Childhood Education, 26, 187–198. doi: 10.1080/02568543.2012.657745 CHAPTER 4: COMBINED CONCLUSION Implications and Contributions The overall goal of these studies was to expand the literature base of practitioner implemented NDBI studies and contribute to closing the service-need gap in training practitioners of inclusive preschool classrooms. The specific goals of this dissertation were to (1) identify the social validity assessment practices incorporated into practitioner implemented NDBI studies and illuminate implications for future research and practice, and (2) evaluate the effectiveness and social validity outcomes of the Adapted NPDC Model to train practitioners to implement an NDBI in an inclusive preschool classroom. These studies help highlight the importance of the acceptability of intervention packages and the importance of interactions and relationships within intervention packages. Specifically, intervention planning processes lead to higher implementation fidelity, maintained behavior change, and more effective outcomes. Practitioners and the children and families they work with deserve effective and efficient training and coaching informed by empirical evidence.
This work adds to existing literature by providing an initial understanding of what models might work for general education inclusive preschool teachers and the children with disabilities they support. Study 1 Study 1 showed that social validity measurement was lacking within practitioner implemented NDBI studies. The role of social validity in educational and social inquiry research is valuable because it highlights relevant stakeholders’ perceptions of the intervention’s goals, procedures, and outcomes. Researchers can then use this information to refine or adjust the intervention package in order to promote maintained use. The lack of comprehensive social validity assessment (i.e., assessing the goals, procedures, and outcomes related to practitioner and child participants) may exist for many reasons. First, researchers may not be aware of existing, quality methods to guide social validity assessment (Carr et al., 1999; Schwartz & Baer, 1991). Second, the addition of rigorous assessments (e.g., mixed methods) may increase researcher cost and time, which can be a barrier (Leko, 2014). Strict journal requirements, review practices, and/or page restrictions may also hinder the inclusion of social validity within accepted manuscripts (Carr et al., 1999). Finally, social validity assessments are subjective and therefore may be viewed as less important to researchers compared to objective data (Wolf, 1978). Overall, findings like those from Study 1, which highlight the low prevalence of social validity assessment, are concerning and similar to findings in previous reviews of social validity. Study 1 outlines specific suggestions to researchers to promote the expansion of social validity assessment in applied research. Increased attention to social validation can improve the application of evidence-based interventions in children’s natural settings. Study 2 Study 2 highlights the use of an innovative technology-based training intervention. The Adapted NPDC Model took into consideration the intervention characteristics and individual characteristics that affect implementation. Intervention characteristics. The Adapted NPDC Model draws from Horner and colleagues’ (2005) criteria for evaluating social validity and from implementation science research and theories (e.g., Damschroder, Aron, Keith, Kirsh, Alexander, & Lowery, 2009; Dunst, Trivette, & Raab, 2013) by utilizing intervention characteristics that are associated with increased outcomes. Specifically, the NDBI approaches utilized in this research are empirically supported and have a perceived advantage over highly structured interventions as they are developmentally and behaviorally based, which may support implementation (Dunst et al., 2013; Damschroder et al., 2009). Similarly, the Adapted NPDC Model is technology-based and research supported by reputable external entities (e.g., research groups, NPDC on ASD). Further, both aspects of the intervention package (i.e., NDBIs and the training and coaching model) were successfully adopted and rated acceptable by practitioners. Since the overall investment required for interventions, including monetary, time, supply, and opportunity costs, affects implementation, it is imperative that professional development systems require an overall low investment (Damschroder et al., 2009; Wlodkowski, 2004). The results of Study 2 suggest that the Adapted NPDC Model can provide an affordable professional development option that requires little time and financial investment. Fidelity factors.
Fidelity factors may be essential in ensuring positive outcomes for practitioner and child participants (Barton & Fettig, 2013). These fidelity variables refer to the NDBI procedures implemented by the practitioner and the Adapted NPDC Model procedures implemented by the coach. Study 2 incorporated measurement of practitioner behavior through implementation fidelity as well as measurement of coach behaviors through treatment integrity (also known as treatment fidelity). Collecting and reporting treatment integrity is a strength of Study 2 as such results are often not reported in technology-based training intervention literature (Neely et al., 2018). It is vital to collect and report treatment integrity data since it has the potential to moderate practitioner implementation fidelity and subsequent outcomes (i.e., child target skills and social validity; e.g., Dunst et al., 2013). Individual characteristics. The Adapted NPDC Model individualized practitioner implementation of the NDBI and child target skill to each dyad based on their unique needs and characteristics. The intervention involved practitioner input at multiple stages, including the planning stage (i.e., participant selection, goal selection, NDBI selection, setting selection), the intervention stage (i.e., implementation and self-evaluation), and the evaluation stage (i.e., social validity assessment). Involving practitioners at each stage of the intervention package can support intervention adoption and implementation, social validity, and self-efficacy. Individual factors are known to influence adoption and implementation of a new intervention. These factors include knowledge and beliefs towards the intervention, personal traits (e.g., values, competence), relationship and commitment to the organization, individual phase or stage of change (e.g., contemplation, action, maintenance), and level of self-efficacy (Damschroder et al., 2009). Individual characteristics of participants can and should be recognized and responded to by researchers in order to promote adoption and maintained implementation. From the beginning, practitioners were given choices within the Adapted NPDC Model, including the target child, the target child’s goal, and the NDBI. Choices allow the practitioner to follow their knowledge, beliefs, and values, which can affect adult learning and implementation (Damschroder et al., 2009; Wlodkowski, 2004). Adoption and maintenance of implementation can also be supported through ensuring social validity and empowering participants and stakeholders at multiple levels. The Adapted NPDC Model ensured social validity through numerous assessments at various levels of the intervention process. Researchers should also recognize the importance of self-efficacy as this construct can play a strong role in individual behavior change (Bandura, 1997). Higher self-efficacy is associated with greater likelihood of embracing change and sustaining the use of a novel intervention, even when faced with obstacles (Damschroder et al., 2009). Study 2 results confirm that practitioner self-efficacy for working with children with disabilities increased post-study, which is a notable and important finding supporting the efficiency of the Adapted NPDC Model and maintained behavior change.
This research also suggests that it is important to provide adequate training and coaching to practitioners of inclusive classrooms as a means to increase their self-efficacy and skill development in a novel intervention, and subsequently increase the likelihood that they continue to implement the intervention (Vismara, Young, Stahmer, Griffith, & Rogers, 2009). Another effect of increasing practitioner self-efficacy may be decreasing burnout, since practitioners who work with children with special needs experience high rates of attrition (Billingsley, 2004). Implementing a high-quality preschool program that uses evidence-based practices, like NDBIs, with high fidelity and alignment with the practitioner’s teaching philosophy may correspond to lower burnout (Coman et al., 2013). Hence, the use of the Adapted NPDC Model with an increase in practitioner self-efficacy may lead to lower rates of practitioner burnout. Relationships Matter This dissertation recognizes the role that social relationships play within implementation. As Damschroder and colleagues (2009) describe, implementation is a social process, and the context of an implementation effort includes a set of active variables that interact and impact outcomes. Contributing variables include the theories underpinning the intervention and the implementation. Both developmental and behavioral approaches have recognized the importance of social relationships as an essential context for early intervention for children of typical and atypical development. NDBIs, mutually informed by developmental and behavioral principles, recognize that relationships are an important characteristic of the learning context (Schreibman et al., 2015). Establishing relationships is an essential feature of a child’s development. “Human relationships, and the effects of relationships on relationships, are the building blocks of healthy development” (Phillips & Shonkoff, 2000, p. 4). Adult-child social interactions are recognized as important indicators of high-quality classroom experiences, which influence children’s development (Pianta, 1999). Research strongly supports the impact that high-quality, sustained, and reciprocal adult-child social interactions have on children’s healthy development (Hamre & Pianta, 2001). Early childhood observational measures of classroom quality, like the Classroom Assessment Scoring System (La Paro, Pianta, & Stuhlman, 2004) and the Inclusive Classroom Profile (Soukakou, 2012), include adult-child social interactions as a principal measure of quality. Within NDBIs, practitioners establish engaging activities between the adult and child that may contain meaningful, relationship-building social interactions leading to enhanced child learning (Schreibman et al., 2015). The Adapted NPDC Model provides the practitioner with ongoing coaching and performance-based feedback to enhance adult-child interactions and promote positive outcomes within a high-quality early childhood program. The Adapted NPDC Model recognizes the importance of relationships not only through the use of relationship-based child interventions, but also through collaboration between the coach and practitioner within the implementation context. Health care field research highlights that relationships between patients and providers are correlated with patient satisfaction or social validity, patient adherence to treatment, and improved outcomes (e.g., Kirby, Tellegen, & Steindl, 2017).
The early development and early intervention fields have recognized the critical nature of relationships, including those between the child and caregiver, the child and practitioner, and practitioners and parents (Phillips & Shonkoff, 2000). The field of behavior analysis has also recognized the importance of enhancing relationships with caregivers (Taylor, LeBlanc, & Nosik, 2018). Within the Adapted NPDC Model, the empirically based methods used by the coach do not exist separately from relationships with practitioners. Therefore, implementation outcomes are enhanced not only by what is done, but by how it is done. The outcomes influenced by relationship building are far-reaching and include social validity, practitioner implementation fidelity, increased child skills, and maintained behavior change, which, in the end, positively impact all participants.

Competent coaching. This dissertation research reveals an important gap: in order to promote future replication of this research and adoption of the Adapted NPDC Model into practice, the competency of the coach requires further exploration. The relationship-building skills of the coach may affect implementation. The coaching skills that may influence relationship building stem from work in both the developmental and behavioral fields. Both of these fields offer curriculum content for establishing and maintaining successful relationships (e.g., Gilkerson & Ritzler, 2005; Kelly, Zuckerman, Sandoval, & Buehlman, 2003; Lown, 2016; Taylor et al., 2018). Broadly, these fields merge to highlight interpersonal skills and strategies involving empathy and compassion, which can lead to successful implementation of an intervention. These skills include communication (e.g., listening, responding, asking reflective and open-ended questions), collaboration, and understanding and respecting culture (Gilkerson & Ritzler, 2005; Kelly et al., 2003; Taylor et al., 2018). For instance, common communication and collaboration errors include judging, using jargon, and jumping to conclusions (Taylor et al., 2018). Skills that can enhance the relationship include making positive comments about behavior, demonstrating enthusiasm for observed improvement, and asking how the practitioner is doing and whether they are happy with how things are going (Taylor et al., 2018). These behaviors are observable and can be operationally defined on a coaching protocol to be examined or taught. Attention to these factors can improve coaching and influence other important outcomes. As previously stated, treatment fidelity was reported at high levels in Study 2. Treatment fidelity was based on evidence-based intervention practices and included observable coaching behaviors, yet it did not include all of the recommended relationship-building skills. Of note, the coach (i.e., the author) in Study 2 had an established rapport with the practitioners; she had worked with them as a consultant provided by the local intermediate school district for three years prior to the start of the study. The coach may have used relationship-building skills during the sessions. Because all of the coaching sessions were recorded, with the exception of the only in-person meeting pre-baseline, core relationship-building skills could be observed and measured. These skills could be added to the coaching implementation protocol to promote relationship building and thereby enhance outcomes. Future research should incorporate these relationship-building coaching skills into the Adapted NPDC Model.
Conclusion

The findings of these studies suggest the importance of socially valid and adequate training and coaching to support general education inclusive preschool practitioners' adoption and maintained use of evidence-based NDBIs. This dissertation highlights the importance of both objective and subjective outcomes within applied research. The hope is that this dissertation begins to answer questions about socially valid and effective early childhood practitioner professional development and advances the scientific basis for understanding which implementation models work for practitioners in inclusive preschool classrooms. Further, future research is warranted to explore the unique factors that may support implementation in order to successfully bring effective practices to practitioners serving young children with disabilities.

REFERENCES

Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W. H. Freeman.

Barton, E. E., & Fettig, A. (2013). Parent-implemented interventions for young children with disabilities: A review of fidelity features. Journal of Early Intervention, 35, 194–219. doi:10.1177/1053815113504625

Billingsley, B. S. (2004). Special education teacher retention and attrition: A critical analysis of the research literature. The Journal of Special Education, 38, 39–55. doi:10.1177/00224669040380010401

Carr, J. E., Austin, J. L., Britton, L. N., Kellum, K. K., & Bailey, J. S. (1999). An assessment of social validity trends in applied behavior analysis. Behavioral Interventions, 14, 223–231. doi:10.1002/(SICI)1099-078X(199910/12)14:4<223::AID-BIN37>3.0.CO;2-Y

Coman, D., Alessandri, M., Gutierrez, A., Novotny, S., Boyd, B., Hume, K., Sperry, L., & Odom, S. (2013). Commitment to classroom model philosophy and burnout symptoms among high fidelity teachers implementing preschool programs for children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 43, 345–360. doi:10.1007/s10803-012-1573-1

Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50. doi:10.1186/1748-5908-4-50

Dunst, C. J., Trivette, C. M., & Raab, M. (2013). An implementation science framework for conceptualizing and operationalizing fidelity in early childhood intervention studies. Journal of Early Intervention, 35, 85–101. doi:10.1177/1053815113502235

Gilkerson, L., & Ritzler, T. T. (2005). The role of reflective process in infusing relationship-based practice into an early intervention system. In The handbook of training and practice in infant and preschool mental health (pp. 427–452).

Hamre, B. K., & Pianta, R. C. (2001). Early teacher–child relationships and the trajectory of children's school outcomes through eighth grade. Child Development, 72, 625–638. doi:10.1111/1467-8624.00301

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179. Retrieved from http://ezproxy.msu.edu.proxy2.cl.msu.edu/login?url=https://search-proquest-com.proxy2.cl.msu.edu/docview/201222049?accountid=12598

Kelly, J. F., Zuckerman, T. G., Sandoval, D., & Buehlman, K. (2003). Promoting First Relationships: A curriculum for service providers to help parents and other caregivers meet the social and emotional needs of young children. Seattle, WA: NCAST Publications.
Kirby, J. N., Tellegen, C. L., & Steindl, S. R. (2017). A meta-analysis of compassion-based interventions: Current state of knowledge and future directions. Behavior Therapy, 48, 778–792. doi:10.1016/j.beth.2017.06.003

La Paro, K. M., Pianta, R. C., & Stuhlman, M. (2004). The Classroom Assessment Scoring System: Findings from the prekindergarten year. The Elementary School Journal, 104, 409–426. doi:10.1086/499760

Leko, M. M. (2014). The value of qualitative methods in social validity research. Remedial and Special Education, 35, 275–286. doi:10.1177/0741932514524002

Lown, B. A. (2016). A social neuroscience-informed model for teaching and practicing compassion in health care. Medical Education, 50, 332–342. doi:10.1111/medu.12926

Neely, L., Rispoli, M., Boles, M., Morin, K., Gregori, E., Ninci, J., & Hagan-Burke, S. (2018). Interventionist acquisition of incidental teaching using pyramidal training via telehealth. Behavior Modification. doi:10.1177/0145445518781770

Phillips, D. A., & Shonkoff, J. P. (Eds.). (2000). From neurons to neighborhoods: The science of early childhood development. Washington, DC: National Academies Press.

Pianta, R. C. (1999). Enhancing relationships between children and teachers. Washington, DC: American Psychological Association.

Schreibman, L., Dawson, G., Stahmer, A. C., Landa, R., Rogers, S. J., McGee, G. G., Kasari, C., Ingersoll, B., Kaiser, A. P., Bruinsma, Y., McNerney, E., Wetherby, A., & Halladay, A. (2015). Naturalistic developmental behavioral interventions: Empirically validated treatments for autism spectrum disorder. Journal of Autism and Developmental Disorders, 45, 2411–2428. doi:10.1007/s10803-015-2407-8

Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice the state of the art? Journal of Applied Behavior Analysis, 24, 189–204.

Soukakou, E. P. (2012). Measuring quality in inclusive preschool classrooms: Development and validation of the Inclusive Classroom Profile (ICP). Early Childhood Research Quarterly, 27, 478–488. doi:10.1016/j.ecresq.2011.12.003

Taylor, B. A., LeBlanc, L. A., & Nosik, M. R. (2018). Compassionate care in behavior analytic treatment: Can outcomes be enhanced by attending to relationships with caregivers? Behavior Analysis in Practice, 1–13. doi:10.1007/s40617-018-00289-3

Vismara, L. A., Young, G. S., Stahmer, A. C., Griffith, E. M., & Rogers, S. J. (2009). Dissemination of evidence-based practice: Can we train therapists from a distance? Journal of Autism and Developmental Disorders, 39, 1636–1651. doi:10.1007/s10803-009-0796-2

Wlodkowski, R. J. (2004). Creating motivating learning environments. Adult learning methods: A guide for effective instruction, 3, 141–164. Retrieved from http://raymondwlodkowski.com/Materials/AdultLearningMethods.pdf

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.