TEACHING A SELECTION RESPONSE FOR NO-ACCESS VIDEO-BASED PREFERENCE ASSESSMENTS TO CHILDREN WITH AUTISM

By

Emma Mitchell

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Applied Behavior Analysis—Master of Arts

2019

ABSTRACT

TEACHING A SELECTION RESPONSE FOR NO-ACCESS VIDEO-BASED PREFERENCE ASSESSMENTS TO CHILDREN WITH AUTISM

By

Emma Mitchell

This study examined the prerequisite skills necessary to conduct a no-access video-based preference assessment for two children with Autism Spectrum Disorder (ASD) who did not demonstrate a selection response during a video-based preference assessment. A brief Multiple Stimulus Without Replacement (MSWO) preference assessment of videos of toys was used to teach a selection response by systematically fading the participants’ access to toys during the assessment. Once the selection response was successfully taught with toys, it was assessed with videos of social interactions. A progressive ratio (PR) schedule and a concurrent operant reinforcer assessment were then conducted to determine the reinforcing effectiveness of the highest and lowest preferred social interactions from a brief MSWO for each participant. The selection response was successfully taught to both participants, and mixed results were found in both reinforcer assessments for both participants.
Keywords: autism, preference assessment, selection response, reinforcer assessment, video

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
KEY TO SYMBOLS AND ABBREVIATIONS
Introduction
Method
  Participants
  Setting
  Materials
  Measurement
  Interobserver Agreement and Treatment Fidelity
  Experimental Design
  Teaching a Selection Response
    General Procedures
    Baseline (social interactions)
    Baseline (toys)
    100% toy access
    Toy probe
    Social interaction probe
    80% toy access
    60% toy access
    40% toy access
    20% toy access
  Progressive Ratio Schedule
    General Procedures
    Baseline
    HP Social Interaction
    LP Social Interaction
    Control
  Concurrent Operant
Results
  Teaching a Selection Response
  Progressive Ratio Schedule
  Concurrent Operant
Discussion
APPENDIX
REFERENCES

LIST OF TABLES

Table 1. Operational definitions of social interactions
Table 2. Dependent variables

LIST OF FIGURES

Figure 1. Teaching a selection response to Daniel and Ben using a systematic fading of access to toys during a brief MSWO with embedded reversals
Figure 2. Progressive ratio schedule results for Daniel and Ben. Four conditions are depicted on Daniel’s graph: baseline sessions (circle), LP sessions (diamond), HP sessions (triangle), and control sessions (square)
Figure 3. Results of the concurrent operant reinforcer assessment for Daniel and Ben

KEY TO SYMBOLS AND ABBREVIATIONS

ABA   Applied Behavior Analysis
ASD   Autism Spectrum Disorder
EIBI  Early Intensive Behavioral Intervention
PR    Progressive Ratio
HP    High Preferred
LP    Low Preferred

Introduction

Autism spectrum disorder (ASD) is a developmental disability characterized by deficits in social communicative behavior and excesses in repetitive behavior or restricted interests (American Psychiatric Association, 2013). These deficits can substantially limit the ability of individuals with ASD to become independent, build meaningful relationships, and make academic gains (Bellini, Peters, Benner, & Hopf, 2007). Through the process of reinforcement, instructors of children with ASD can promote the occurrence of socially appropriate and adaptive behaviors that improve the lives of individuals with ASD throughout their lifespan. Therefore, it is critical that behavioral interventions for children with ASD systematically and frequently identify stimuli that may function as reinforcers, in order to enhance treatment success.

There are many different procedures that have been used to identify effective reinforcers, collectively referred to as preference assessments. Free operant preference assessments consist of observing an individual and recording the activities he or she engages in when given unrestricted access to different stimuli (Cooper, Heron, & Heward, 2007). Selection-based (forced choice) procedures are another mode of preference assessment in which an individual is presented with an array of stimuli and asked to make a selection from that array. Following multiple stimulus presentations, a ranking of highest to lowest preferred stimuli is determined (Higbee, 2000). Common forms of selection-based preference assessments include the multiple-stimulus without replacement (MSWO; DeLeon & Iwata, 1996), the brief MSWO (Carr, Nicolson, & Higbee, 2000), and the paired-stimulus preference assessment (Fisher et al., 1992).
Over the last 30 years, preference assessments have been used to assess preference for leisure and edible stimuli (DeLeon & Iwata, 1996), food and drinks (Windsor et al., 1994), and tactile and auditory stimuli (Paclawskyj & Vollmer, 1995), among others. A review by Kang et al. (2013) confirmed that stimuli selected first in preference assessments have a higher probability of functioning as reinforcers than stimuli selected last. The results of Kang et al.'s review suggest that preference assessments are an effective tool for identifying reinforcers across many different classes of stimuli (e.g., toys, food, auditory stimuli).

Recently, researchers have begun to use video-based preference assessments to determine preference across these different categories of stimuli. One benefit of the video-based format is that it can depict stimuli with their natural characteristics (e.g., how a toy functions) and in the contexts in which those stimuli are often used. For example, Snyder, Higbee, and Dayton (2012) compared the results of a video-based preference assessment to a tangible paired-stimulus preference assessment and found that the results were similar, in that the highly preferred items corresponded across both assessments. These results suggest that video-based preference assessments can be effective in identifying reinforcers for children with developmental disabilities.

Researchers have also started to examine whether video-based preference assessments without access to selected stimuli accurately depict preference. In a no-access preference assessment, stimuli are not immediately provided to the individual when he or she makes a selection (Snyder et al., 2012).
Benefits of the no-access assessment are that it reduces administration time, allowing more time for instruction, and that it permits assessment of preference for stimuli that may not be readily available. Clark, Donaldson, and Kahng (2015) conducted a no-access video-based preference assessment with five individuals with ASD and found that video-based preference assessments with no access to stimuli accurately predicted the relative reinforcing value of stimuli. Brodhead, Abston, Mates, and Abel (2017) and Brodhead, Kim, and Rispoli (2019) both examined no-access and access video-based preference assessments and found that no-access assessments may accurately predict preference. However, these authors noted that it is unclear what prerequisite skills are necessary for these preference assessments to accurately predict preference, and they encouraged future research to evaluate what prerequisite skills may be necessary for valid outcomes in a video-based preference assessment.

Given the recommendations of previous research on no-access video-based preference assessments (e.g., Clark et al., 2015; Brodhead et al., 2017; Brodhead et al., 2019), the purpose of this study was to evaluate the prerequisite skills for no-access video-based preference assessments and to teach a selection response to two children with ASD who exhibited deficits in selection responses during video-based preference assessments. A reinforcer assessment using a PR schedule and a concurrent operant reinforcer assessment were conducted to evaluate the reinforcing strength of the highest and lowest preferred social interaction for each participant.

Method

Participants

The participants in this study were two children with a diagnosis of ASD, Daniel and Ben, both of whom attended a university-based Early Intensive Behavioral Intervention (EIBI) program for 30 hours a week that was located in a public school.
Daniel and Ben were three years old, had begun the EIBI program 4 months prior to this study, and had never received Applied Behavior Analysis (ABA) services before entering the program. Participants were included in the study if they demonstrated 100% accuracy during a pre-study matching evaluation (Wolfe et al., 2017) and did not engage in a selection response during a video-based paired-stimulus preference assessment.

Setting

All sessions were conducted at the EIBI program location in a room separate from the treatment room, with the participant and experimenter seated at a table. No other employees or children were in the room while research sessions were being conducted.

Materials

All sessions were recorded with a video camera and later reviewed on a computer. After each session, the videos were reviewed, and data were collected on a datasheet with a pen.

Five social interactions were identified for each participant to include in this study. Videos of each social interaction were recorded and embedded within the application Keynote on an iPad using the procedures described in Brodhead, Al-Dubayan, Mates, and Brouwers (2016). These videos were approximately five seconds long and played on a continuous loop (see Brodhead et al., 2016, for more details). The videos showed an adult and the experimenter engaging in one of the identified social interactions, with the experimenter and adult as the main focus of the video. Each interaction was paired with a vocal statement from the therapist (e.g., “Clap!”). Social interactions were selected based on their social appropriateness and on whether they could be easily replicated by other therapists or teachers (see Table 1 for a list of social interactions and accompanying operational definitions).

Videos of tangible items were also used in this study.
These videos depicted five toys being manipulated according to their intended function and were identical to those used in Brodhead et al. (2016).

Measurement

The number of selections served as the primary dependent measure for the first part of this study. A selection was defined as any instance of the participant touching a video on the iPad. During the brief MSWO of social interactions, we measured each social interaction the participant selected, along with the order in which they were selected (see Table 2 for response definitions).

During the PR schedule reinforcer assessment, we measured the breakpoint for each session. The breakpoint was the last schedule requirement the participant completed before the session was terminated (Roane, Lerman, & Vorndran, 2001). For example, if the participant strung 5 beads onto a pipe cleaner and the session was then terminated because of no responding or off-task behavior, while the response requirement for that session was 16 beads, the breakpoint would be recorded as 8, because PR 8 was the previous schedule requirement. The larger the breakpoint, the more reinforcing the social interaction can be inferred to be.

During the concurrent operant reinforcer assessment, we measured the number of responses that occurred during each 2-min session. Specifically, we measured the number of times the participant put a Bingo chip in a cup during each session. Placing a Bingo chip in a cup was defined as the participant picking up one Bingo chip with their hand and placing it inside one cup. Instances in which the participant put two Bingo chips in a cup or put a Bingo chip in two cups simultaneously were not scored. Measuring the number of Bingo chips placed in each cup during a session allowed us to determine the relative reinforcing effectiveness of each participant’s highest and lowest preferred social interactions.
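The breakpoint arithmetic described above (a geometric progression of doubling requirements, with the breakpoint being the previously completed requirement) can be sketched as a brief illustration. This is a hypothetical example, not code from the study; the function names are my own.

```python
# Hypothetical sketch of the geometric PR progression and the breakpoint
# rule described in the text: requirements double (1, 2, 4, 8, 16, ...),
# and the breakpoint is the last requirement the participant completed
# before the session was terminated.

def pr_requirements(n):
    """First n schedule requirements of the geometric progression."""
    return [2 ** i for i in range(n)]

def breakpoint_for(responses_on_final_requirement, current_requirement):
    """Breakpoint = previously completed schedule requirement.

    If the session ends while the participant is partway through the
    current requirement, the breakpoint is the requirement before it
    (or 0 if the first requirement was never completed).
    """
    if responses_on_final_requirement >= current_requirement:
        return current_requirement
    return current_requirement // 2 if current_requirement > 1 else 0

# Worked example from the text: 5 beads strung against a requirement of
# 16 yields a breakpoint of 8, the previously completed requirement.
print(pr_requirements(6))      # [1, 2, 4, 8, 16, 32]
print(breakpoint_for(5, 16))   # 8
```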
Interobserver Agreement and Treatment Fidelity

Interobserver agreement (IOA) and treatment fidelity data were collected for 30% of sessions during the teaching of the selection response, 30% of sessions during the PR schedule, and 30% of sessions during the concurrent operant reinforcer assessment. For IOA, an agreement was defined as the observer and experimenter scoring the same occurrence or non-occurrence of a response during a session. IOA was calculated by dividing the number of agreements by the number of opportunities and multiplying by 100 (Gast & Ledford, 2014). IOA for Daniel and Ben was 100% across sessions for teaching a selection response, the PR schedule, and the concurrent operant reinforcer assessment.

Treatment fidelity was scored on a task analysis of each behavior the researcher engaged in during teaching sessions for a selection response. Treatment fidelity was calculated by dividing the number of researcher behaviors observed by the number of occurrences plus non-occurrences and multiplying by 100 (Gast & Ledford, 2014). Treatment fidelity for the reinforcer assessments was also scored on a task analysis of researcher behaviors for each session of the assessment. Treatment fidelity for Daniel ranged between 86% and 100% with a mean of 99%, and fidelity for Ben ranged between 83% and 100% with a mean of 99%.

Experimental Design

A non-concurrent multiple baseline across participants was used. A non-concurrent multiple baseline across participants involves implementing the intervention at different points in time across participants in order to control for threats of history and to demonstrate an intervention effect (Cooper et al., 2007).
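The percent-agreement calculation used for both IOA and treatment fidelity above amounts to simple arithmetic, sketched here as a hypothetical illustration (not code from the study).

```python
# Hypothetical illustration of the percent-agreement arithmetic described
# above: agreements (or observed researcher behaviors) divided by total
# opportunities, multiplied by 100 (Gast & Ledford, 2014).

def percent_agreement(agreements, opportunities):
    """Percent agreement between two observers (or a fidelity score)."""
    return agreements / opportunities * 100

# e.g., agreement on 13 of 15 scored opportunities is about 86.7%,
# consistent with the lower end of the fidelity range reported above.
print(round(percent_agreement(13, 15), 1))   # 86.7
print(percent_agreement(15, 15))             # 100.0
```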
Teaching a Selection Response

We used a multiple baseline with embedded reversals across participants (Gast & Ledford, 2014) to evaluate our instructional procedures for teaching a selection response to Daniel and Ben.

General Procedures. We taught a selection response to Daniel and Ben by systematically fading access to toys during a brief MSWO. Access to toys was systematically faded in order to strengthen the selection response and make access to reinforcement unpredictable to the participant. Each session consisted of the researcher and participant seated next to each other at a child-sized table with a stopwatch, datasheet, and pen. For sessions with videos of toys, an iPad was used; for sessions with videos of social interactions, a laptop computer was used.

Each session began with the researcher presenting the iPad or computer to the participant with five videos displayed on the screen and saying, “Touch the one you want.” The participant was then given the opportunity to make a selection by touching one of the videos. Each session allowed the participant to make 15 selections. If the participant made a selection, the experimenter removed the iPad or screen, rearranged the videos to the correct array and array size, and presented the next trial. Sessions were terminated if the participant did not make a selection within 30 seconds of the instruction or engaged in challenging behavior for 30 seconds after the instruction was given.

Baseline (social interactions). Baseline sessions for videos of social interactions consisted of the researcher conducting a brief MSWO of videos of social interactions without giving access to the social interactions after the participant made a selection. Once the participant made a selection, the researcher recorded the selection, rearranged the array and array size on the computer, and presented the next trial.
No social interactions or feedback were provided when a participant made a selection. It is important to note that during the teaching of a selection response, participants did not receive access to the actual social interactions they were selecting during the brief MSWO of videos of social interactions.

Baseline (toys). Baseline sessions for videos of toys were identical to baseline sessions for social interactions. The participant did not receive access to the selected toy after making a selection.

100% toy access. Teaching sessions with 100% toy access consisted of the researcher conducting a brief MSWO of videos of toys and giving the participant access to the selected toy immediately after a selection on every trial. The participant was provided access to the toy for 30 seconds after making a selection.

Toy probe. This condition was identical to baseline (toys).

Social interaction probe. This condition was identical to baseline (social interactions).

80% toy access. This condition was identical to the 100% toy access sessions, except that the participant received access to the toys during 80% of trials in which he made a selection. The participant always received access to the toy on the first trial of every session.

60% toy access. This condition was identical to the 100% toy access sessions, except that the participant received access to the toys during 60% of trials in which he made a selection.

40% toy access. This condition was identical to the 100% toy access sessions, except that the participant received access to the toys during 40% of trials in which he made a selection.

20% toy access. This condition was identical to the 100% toy access sessions, except that the participant received access to the toys during 20% of trials in which he made a selection.
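One way to operationalize the fading conditions above is to fix the number of reinforced trials in each 15-trial session and always reinforce the first trial. The sketch below is a hypothetical illustration under those assumptions, not the authors' procedure; the text does not specify how the remaining reinforced trials were distributed, so random selection is my own assumption.

```python
import random

def reinforced_trials(access_pct, n_trials=15, rng=random):
    """Pick which trial numbers (1-indexed) deliver toy access.

    Assumes the session has n_trials selection opportunities, the first
    trial always delivers access (as described for the 80% condition),
    and the remaining reinforced trials are chosen at random.
    """
    n_reinforced = round(n_trials * access_pct / 100)
    rest = rng.sample(range(2, n_trials + 1), n_reinforced - 1)
    return sorted([1] + rest)

trials = reinforced_trials(80)
print(len(trials))   # 12 of 15 trials deliver access at the 80% level
print(trials[0])     # 1: the first trial is always reinforced
```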
Progressive Ratio Schedule

A multi-element design (Roane, Lerman, & Vorndran, 2001) was used to evaluate the relative strength of each social interaction identified as either high or low preferred during the last reversal of the no-access brief MSWO of videos of social interactions. The reinforcer assessment was conducted using a PR schedule, similar to Roane, Call, and Falcomata (2005), to determine the absolute reinforcing strength of each social interaction. The multi-element design consisted of three conditions: baseline (control), HP social interaction, and LP social interaction.

PR schedule requirements were determined using a geometric progression (see Roane, 2008). Specifically, each PR schedule requirement was double the previous schedule requirement. For example, the assessment began with a PR 1 (CRF) schedule, then progressed to a PR 2, then a PR 4. Subsequent schedule requirements continued in the following order: PR 8, PR 16, PR 32, PR 64, PR 128, and so on.

Stringing beads was identified as a task that both Daniel and Ben could complete independently during the reinforcer assessment.

General Procedures. Each session of the PR schedule consisted of the researcher seated next to the participant at a child-sized table. A computer with the video of the HP or LP social interaction for that participant was present. The researcher gave the instruction, “When you string these beads, you get (HP, LP, or nothing),” and either showed the participant the video of the social interaction they would receive or told them they would receive nothing. The researcher then prompted the participant to string one bead and delivered the designated outcome for that session. The session began with the researcher presenting the pipe cleaner and pile of beads in front of the participant and giving the instruction again.
If the participant engaged in stringing beads, the actual social interaction was delivered to the participant according to the PR schedule requirement, and a new trial was presented with the next response requirement in the geometric progression. If the participant did not engage in any responses for one minute or engaged in challenging behavior for one minute, the session was terminated. During HP and LP sessions of the PR schedule, the participant received access to the actual social interaction once they engaged in the number of responses that met the schedule requirement.

Baseline. The experimenter gave the instruction, “When you string these beads, you get nothing.” No outcomes were delivered whether or not the participant completed the task. Sessions were terminated after one minute of off-task behavior or if the participant did not engage in a response for one minute.

HP Social Interaction. For Ben, sessions started with the researcher giving the instruction, “When you string these beads, you get bumble bee,” and for Daniel, sessions started with the instruction, “When you string these beads, you get wiggle arms.” The appropriate social interaction was provided to each participant dependent on the number of responses they engaged in relative to the PR schedule requirements.

LP Social Interaction. This condition was identical to the HP social interaction condition, except that the LP social interaction was provided upon completion of the relevant PR schedule requirements.

Control. This condition was identical to baseline. No consequences were provided when the participant engaged in a target response.

Concurrent Operant

Responding during the PR schedule reinforcer assessment did not demonstrate strong reinforcing effectiveness of the HP and LP social interactions for either participant, so we conducted a concurrent operant reinforcer assessment similar to the procedures of Brodhead et al.
(2016) to ensure that the participants accessed reinforcement more frequently for engaging in a response. This form of reinforcer assessment allows for the evaluation of the relative reinforcing value of social interactions.

Sessions lasted 2 minutes for both Ben and Daniel. Two sessions were conducted each day, with a 5- to 10-minute break between sessions for each participant. Each participant was required to drop a Bingo chip into a cup as the response evaluated during the concurrent operant reinforcer assessment.

Before each session, three cups were placed in a row, equidistant from one another, in front of the participant in a quasi-random order. The location of each cup changed between sessions. To aid discrimination of the outcomes associated with each cup, the participant’s HP social interaction was paired with the green cup and the participant’s LP social interaction was paired with the orange cup. The HP and LP videos of social interactions were displayed on iPads placed behind the respective cups. When a participant placed a chip in the HP or LP cup, they immediately received the corresponding social interaction for five seconds. A blank iPad was placed behind a blue cup, which served as the control and was not associated with any outcome.

Prior to each session, the researcher modeled the target response while playing the corresponding video of the HP or LP social interaction on the iPad behind the cup that corresponded with each video. Then, the researcher physically guided the participant to place a Bingo chip into each cup, provided the corresponding consequence, and instructed them to begin. Each participant could respond until the session ended. During the concurrent operant reinforcer assessment, the participant received the actual HP or LP social interaction immediately after placing a Bingo chip in the corresponding cup. A response was counted if a single chip was placed into a cup.
If the participant placed multiple chips at a time into a cup, or placed chips into two cups at the same time, those responses were not counted. The total number of responses for each cup was calculated for each session.

Results

Teaching a Selection Response

Daniel and Ben’s results for teaching a selection response are depicted in Figure 1. The x-axis represents the number of sessions conducted and the y-axis represents the number of selections the participant made during teaching sessions. Daniel engaged in zero selections during both the baseline social interaction and baseline toy sessions. Ben engaged in zero selections during baseline for social interactions and made one to two selections during baseline for toys.

During teaching sessions with 100% access to toys, Daniel made 11 selections during the first teaching session and made all 15 selections during the last three teaching sessions. When assessing for transfer of the skill during the first toy probe, Daniel made 10 selections. We then moved him to 80% toy access, where he made 15 selections during the first session; responding then decreased to 12 selections during the second session. Daniel’s responding increased to 15 selections after two teaching sessions and remained stable for three sessions. When assessing transfer of the selection response to the no-access social interaction probe, Daniel made nine responses in each of the first two sessions and increased to 15 selections for the last three sessions.

Ben made 15 selections during all four 100% toy access sessions and the toy probe session. When assessing transfer of the selection response during the social interaction probe, Ben made one selection. We then continued teaching through 80%, 60%, 40%, and 20% toy access sessions because Ben selected a variable number of times during the social interaction probe sessions.
After the third 20% toy access teaching session, Ben made 15 selections during the following social interaction probe sessions and maintained stable responding. Both participants demonstrated transfer of the selection response during no-access video-based preference assessments by reaching 15 selections during the social interaction probes.

Progressive Ratio Schedule

Data for the PR schedule are depicted in Figure 2 for Daniel and Ben. The number of sessions is displayed on the x-axis, and the breakpoint is displayed on the y-axis. Neither Daniel nor Ben strung any beads during baseline sessions. Daniel did not string any beads during LP or control sessions of the PR schedule and strung one bead during sessions 9 and 11 for his HP social interaction, making one response the breakpoint for those sessions. Ben did not string any beads during his LP or control sessions of the PR schedule and strung one bead during session 12 for his HP social interaction, making one response the breakpoint for that session.

Concurrent Operant

Data for the concurrent operant reinforcer assessment are depicted in Figure 3 for Daniel and Ben. The number of sessions is displayed on the x-axis, and the number of responses is displayed on the y-axis. The cumulative number of responses for Daniel’s HP social interaction (8 responses) was higher than the cumulative number of his LP responses (6 responses) and his responding for the control (5 responses). The cumulative number of responses for Ben’s HP social interaction (81 responses) was higher than for the LP condition (0 responses) and his control condition (42 responses).

Discussion

In this study, a selection response was taught through systematic fading of access to toys during a brief MSWO. This instructional strategy was effective in teaching the selection response to both Daniel and Ben.
Both participants made selection responses during no-access brief MSWOs of videos of social interactions, whereas before the study they did not make selections during the brief MSWO with no access to social interactions. This teaching method could be used to teach a selection response to children who do not demonstrate this skill during a no-access video-based preference assessment and who have the prerequisite skills of video-to-object and object-to-video matching.

During the teaching of a selection response, Daniel and Ben both learned the selection response, but at different rates. Daniel demonstrated a selection response after just 10 teaching sessions, while Ben learned the selection response after 17 teaching sessions. An explanation for this could be that, for Daniel, the unpredictability of access to toys during the 80% access teaching sessions strengthened his selection response, which then came under stimulus control of the instruction “Touch the one you want” and successfully generalized to the no-access social interaction sessions. For Ben, the increased unpredictability of access to toys during the 80%, 60%, 40%, and 20% teaching sessions was necessary to strengthen his selection response. His selection response came under the control of the instruction “Touch the one you want” after this repeated exposure, and he was then able to generalize the selection response to the no-access social interaction sessions.

One benefit of this systematic fading of access to toys over other teaching methods (e.g., least-to-most or most-to-least prompting) is that it allows the instructor to eliminate physical prompts from teaching sessions, which also removes the necessity of a second prompter.
Teaching the selection response allows an individual to make selections during no-access video-based preference assessments, which opens up options for researchers and clinicians to use items or activities that are not readily available to the individual. Another benefit of this teaching method is that it used an intermittent schedule of reinforcement for the response of making a selection, thereby strengthening the response and making it more resistant to extinction (Catania, 2013). Once the selection response was taught to both participants, a PR schedule was conducted to determine whether the participants’ selected HP and LP social interactions functioned as reinforcers. The results of the PR schedule did not demonstrate differentiated responding between the HP and LP social interactions for Daniel or Ben. Responding for both participants was low during PR schedule sessions, and when the participants did engage in a response, the breakpoint was one for their HP social interaction and zero for their LP and control sessions. One explanation for low responding during the PR schedule could be that the participants’ behavior was not under the control of the contingencies of the PR schedule. For example, the time from prompting the participant through the sequence of the PR schedule to the start of the PR schedule could have been too long. It is also possible that the selected social interactions were not strong enough reinforcers for the participants to emit the number of responses required to gain access to them. Future research should consider shortening the PR schedule by shortening the initial instruction in order to make the contingency clearer for the participants.
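The breakpoint measure discussed above can be sketched as follows. The breakpoint is the last ratio requirement the learner fully completed before responding ceased; the step values in the example schedule are assumptions for illustration, since this section does not list the study’s actual ratio progression.

```python
def pr_breakpoint(emitted_per_step, schedule):
    """Return the progressive-ratio breakpoint: the last response
    requirement that was fully completed before responding stopped."""
    breakpoint_value = 0
    for required, emitted in zip(schedule, emitted_per_step):
        if emitted >= required:
            breakpoint_value = required
        else:
            break
    return breakpoint_value

# A learner who completes only the first requirement (one response),
# as both participants did in their HP sessions, has a breakpoint of 1.
hypothetical_schedule = [1, 2, 4, 8]  # assumed doubling steps, for illustration
assert pr_breakpoint([1, 0, 0, 0], hypothetical_schedule) == 1
assert pr_breakpoint([0, 0, 0, 0], hypothetical_schedule) == 0
```

This makes concrete why a breakpoint of one is weak evidence of reinforcer effectiveness: the learner never contacted any requirement beyond a single response.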
Considering each participant had a breakpoint of one for their HP social interaction during the PR schedule, a concurrent operant reinforcer assessment was conducted to further assess the reinforcing effectiveness of these social interactions. For Daniel, the concurrent operant reinforcer assessment indicated that his HP social interaction was a more effective reinforcer than his LP social interaction by only one response, and than the control by two responses. Daniel engaged in responses during all three conditions of the concurrent operant reinforcer assessment, which suggests that the social interactions were not highly preferred, because he also responded during the control sessions, in which responses produced no social interaction. For Ben, the concurrent operant reinforcer assessment found that, cumulatively over 10 sessions, he engaged in 39 more responses to gain access to his HP social interaction than in the control condition. He did not engage in any responses to gain access to his LP social interaction. Ben’s high rate of responding for his HP social interaction suggests that it could function as a reinforcer, but his responding in the control condition suggests that receiving no consequence for a response could also function as a potential reinforcer. It is also possible that Ben’s behavior was not under the control of the contingencies of the concurrent operant reinforcer assessment, resulting in his responding during the control condition. Similar to the PR schedule, the delay between the initial instruction and the point at which the participants could engage in the behavior of dropping a Bingo chip into a cup during the concurrent operant reinforcer assessment could have been too long to reinforce behavior.
Therefore, the participants’ behavior may not have been under the control of the consequence associated with task completion. Future assessments might reduce the time between the initial instruction and the beginning of a session by having the videos of social interactions playing on the iPads at the start of the session and only physically guiding the participant to complete the task and providing the corresponding social interaction, instead of modeling the task first and playing the video. Another explanation for why the PR schedule and concurrent operant reinforcer assessment did not produce differentiated results could be that the social interactions did not function as effective reinforcers for either participant. This possibility is supported by the finding that neither participant engaged in any responses to gain access to the social interactions during the PR schedule. Ben’s high rate of responding during the control condition and the fact that Daniel responded during every condition of the concurrent operant reinforcer assessment further support this possibility. It could also be that the contingency of the PR schedule was not clear to either participant, and therefore their behavior was not under the control of the PR schedule and corresponding task. It is also possible that the participants lacked other prerequisite skills necessary for completing this task (e.g., following one-step directions). Because the participants demonstrated deficits in selection and stimulus discrimination prior to beginning this study, the reinforcer assessments used here may not have been appropriate for them. Future research may examine what the prerequisite skills are for successful completion of reinforcer assessments by individuals with deficits similar to those of the participants in our study.
It should also be noted that during sessions of the PR schedule and concurrent operant reinforcer assessment, both Daniel and Ben engaged in high rates of vocal and motor stereotypy. This stereotypy could have functioned as a stronger reinforcer than the social interactions that were presented, thereby decreasing their responding during the PR schedule and concurrent operant reinforcer assessment because they were accessing reinforcement through their own stereotypy. Because we were unable to produce differentiated results during our reinforcer assessments, the extent to which the participants’ selections were indicative of true preference for stimuli is a major limitation of this study. Therefore, the instructional procedures for teaching a selection response should not be used in applied practice until further research is conducted. It is important that future research continue to evaluate the reinforcing effectiveness of stimuli selected in a preference assessment through reinforcer assessments such as the ones conducted in this study. Prerequisite skills of participants should be considered before conducting video-based preference assessments and reinforcer assessments in order to accurately depict preference. Although this study helps to investigate what these prerequisite skills might be and ways in which we could potentially teach them, it is still unclear what prerequisite skills are necessary for no-access assessments. Future research should consider the findings of this study when conducting no-access assessments, specifically the demonstration of a selection response, but should continue to explore what prerequisite skills are necessary for no-access assessments.

APPENDIX

Table 1. Operational definitions of social interactions

Social Interaction | Operational Definition | Therapist Statement
Dancing | The therapist’s arms are in the air and moving side to side. Legs are moving in an up and down motion. | “Dance!”
Clapping | The therapist forcefully brings hands together with palms facing each other and makes a noise. | “Clap!”
Funny Face | Therapist makes eye contact with the child and sticks their tongue out. | “Funny face!”
Thumbs Up | Therapist holds arm away from body, makes a fist with their hand, and points thumb up. | “Thumbs up!”
Crack an Egg* | Therapist makes a fist and taps the child’s head three times, then slowly separates all ten fingers to move down their head. | “Crack!”
Fist Pound* | Therapist makes a fist with their hand and touches knuckles with the child. | “Fist pound!”
Spin* | Therapist picks the child up under the child’s armpits and spins them around one revolution. | “Spin!”
Wiggle Arms* | Therapist takes the child’s arms and moves them up and down in a rapid motion for 5 seconds. | “Wiggle arms!”
*Indicates that this interaction requires physical touch

Table 2. Dependent variables

Response | Response Description
Selection | Child touches a video on an iPad or computer screen.
Stringing Beads | Child puts a bead onto a pipe cleaner.
Bingo chip in cup | Child picks up a Bingo chip that is on the table and places it into a cup.

[Figure 1: two panels (Daniel, Ben); y-axis Number of Selections (0-15), x-axis Sessions (1-29); phases: baseline (social interactions), baseline (toys), 100%-20% toy access, and SI probes.]
Figure 1. Teaching a selection response to Daniel and Ben using a systematic fading of access to toys during a brief MSWO with embedded reversals.
[Figure 2: two panels (Daniel, Ben); y-axis Breakpoint (0-15), x-axis Sessions; conditions: baseline, LP, HP, control.]
Figure 2. Progressive ratio schedule results for Daniel and Ben. Four conditions are depicted on Daniel’s graph: baseline sessions (circles), LP sessions (diamonds), HP sessions (triangles), and control sessions (squares).

[Figure 3: two panels (Daniel, Ben); y-axis Number of Responses (0-30), x-axis Sessions (1-10); conditions: HP, LP, control.]
Figure 3. Results of the concurrent operant reinforcer assessment for Daniel and Ben.

REFERENCES

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Bellini, S., Peters, J. K., Benner, L., & Hopf, A. (2007). A meta-analysis of school-based social skills interventions for children with autism spectrum disorder. Remedial and Special Education, 28, 153-162. doi: 10.1177/07419325070280030401

Brodhead, M. T., Al-Dubayan, M., Mates, M., Abel, E., & Brouwers, L. (2016). An evaluation of a brief video-based multiple-stimulus without replacement preference assessment. Behavior Analysis in Practice, 9, 160-164. doi: 10.1007/s40617-015-0081-0

Brodhead, M. T., Abston, G. W., Mates, M., & Abel, E. A. (2017). Further refinement of video-based brief multiple stimulus without replacement preference assessments. Journal of Applied Behavior Analysis, 50, 170-175. doi: 10.1002/jaba.358

Brodhead, M. T., Kim, S. Y., & Rispoli, M. J. (2019). Further examination of video-based preference assessments without contingent access. Journal of Applied Behavior Analysis, 52, 258-270. doi: 10.1002/jaba.507

Carr, J. E., Nicolson, A. C., & Higbee, T. S. (2000). Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis, 33, 353-357. doi: 10.1901/jaba.2000.33-353

Catania, A. C. (2013). Learning (5th ed.). Cornwall-on-Hudson, NY: Sloan.

Clark, D. R., Donaldson, J. M., & Kahng, S. (2015). Are video-based preference assessments without access to selected stimuli effective? Journal of Applied Behavior Analysis, 48, 895-900. doi: 10.1002/jaba.246

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

DeLeon, I. G., & Iwata, B. A. (1996). Evaluation of a multiple-stimulus presentation format for assessing reinforcer preferences. Journal of Applied Behavior Analysis, 29, 519-532. doi: 10.1901/jaba.1996.29-519

Fisher, W., Piazza, C. C., Bowman, L. G., Hagopian, L. P., Owens, J. C., & Slevin, I. (1992). A comparison of two approaches for identifying reinforcers for persons with severe and profound disabilities. Journal of Applied Behavior Analysis, 25, 491-498. doi: 10.1901/jaba.1992.25-491

Gast, D. L., & Ledford, J. R. (2014). Single case research methodology (2nd ed.). New York, NY: Taylor & Francis.

Higbee, T. S. (2000). Reinforcer identification strategies and teaching learner readiness skills. In G. Eifert, J. Forsyth, & S. Hayes (Eds.), Derived relational responding: Applications for learners with autism and other developmental disabilities (pp. 7-24). Oakland, CA: New Harbinger Publications, Inc.

Kahng, S., O’Reilly, M., Lancioni, G., Falcomata, T. S., Sigafoos, J., & Xu, Z. (2013). Comparison of the predictive validity and consistency among preference assessment procedures: A review of the literature. Research in Developmental Disabilities, 34, 1125-1133. doi: 10.1016/j.ridd.2012.12.021

Paclawskyj, T. R., & Vollmer, T. R. (1995). Reinforcer assessment for children with developmental disabilities and visual impairments. Journal of Applied Behavior Analysis, 28, 219-224. doi: 10.1901/jaba.1995.28-219

Roane, H. S. (2008). On the applied use of progressive-ratio schedules of reinforcement. Journal of Applied Behavior Analysis, 41, 155-161. doi: 10.1901/jaba.2008.41-155

Roane, H. S., Call, N. A., & Falcomata, T. S. (2005). A preliminary analysis of adaptive responding under open and closed economies. Journal of Applied Behavior Analysis, 38, 335-348. doi: 10.1901/jaba.2005.85-04

Roane, H. S., Lerman, D. C., & Vorndran, C. M. (2001). Assessing reinforcers under progressive schedule requirements. Journal of Applied Behavior Analysis, 34, 145-167. doi: 10.1901/jaba.2001.34-145

Snyder, K., Higbee, T. S., & Dayton, E. (2012). Preliminary investigation of a video-based stimulus preference assessment. Journal of Applied Behavior Analysis, 45, 413-418. doi: 10.1901/jaba.2012.45-413

Windsor, J., Piche, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities, 15, 439-455.

Wolfe, K., Kunnavatana, S. S., & Shoemaker, A. M. (2017). An investigation of a video-based preference assessment of social interactions. Behavior Modification, 1-18. doi: 10.1177/014544551773106