PROMPTME: A PROCESS GUIDE FOR DEVELOPING NEW TECHNOLOGY FOR THE COMPOSITION CLASSROOM

By

Howard E. Fooksman

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Digital Rhetoric and Professional Writing – Master of Arts

2017

ABSTRACT

PROMPTME: A PROCESS GUIDE FOR DEVELOPING NEW TECHNOLOGY FOR THE COMPOSITION CLASSROOM

By

Howard E. Fooksman

This thesis describes the process by which a new piece of educational technology, PromptMe, was conceived, developed, and tested. The paper documents how the researchers identified an existing issue in writing classes and undertook a multi-stage research process to refine the exigency and identify possible solutions. In addition, this thesis documents the testing process, situated in a theoretical framework of design, that the researchers used to prove their design concept and observe user interactions. This work builds on a history of software development by writing instructors in the 1980s and 90s, and uses the PromptMe development process to create a list of key steps that future creators can follow to build their own academic technology. This addresses a gap in the current literature covering the process of conceiving and testing new software.

ACKNOWLEDGMENTS

This thesis would not have been possible without the contributions of many other people. I want to specifically recognize the contributions of some of them here.

First, I want to thank my research partners, Rebecca Zantjer and Dr. Laura Gonzales. The first decision I made in graduate school, asking them to work with me on this project, was the smartest choice I could have made, and none of this would have been possible without their collaboration. I also want to thank Rebecca for her work on designing many of the figures I use later in this document. Much of the work in Chapter Three comes from our coursework.

I want to express my gratitude to my committee chair, Dr. Bill Hart-Davidson, whose endless patience was surely tested by this process, and without whose direction and support this paper would never have been completed. I also want to thank my committee members: Dr. Scott Schopieray, for introducing me to the potential impact of academic technology in the humanities; Dr. Ben Lauren, for his mentorship during the early stages of this work, and his support throughout; and Dr. Danielle DeVoss, whose ability to cut through the bullshit and motivate me to finish was sorely needed.

The mentorship, friendship, and support of Dr. Elizabeth Keller was invaluable as I worked through this process. Dr. Keller was just one of an uncountable number of friends and family members who offered critical feedback and emotional support over the last few years.

Finally, I want to acknowledge the support of the Department of Writing, Rhetoric & American Cultures at Michigan State University, and of Department Chair Dr. Malea Powell. I battled depression, along with other health issues, during this process, and if not for Dr. Powell's intervention, I do not believe that this thesis would have been completed.

TABLE OF CONTENTS

LIST OF FIGURES……………………………………………………………………………...vii
CHAPTER ONE: INTRODUCTION ......................................................................................................... 1
Introduction ..................................................................................................................................................
1 Brief History of Software Development in Composition ..................................................................... 1 Readers Guide ............................................................................................................................................... 5 CHAPTER TWO: PROJECT OVERVIEW ................................................................................................ 8 Guiding Methodology For This Project.................................................................................................... 8 Project Background ...................................................................................................................................... 8 The Research Team ...................................................................................................................................... 8 Research Approach and Timeline ............................................................................................................ 10 Research Locations ..................................................................................................................................... 11 Testing Timeline and Locations ............................................................................................................... 12 Resources ..................................................................................................................................................... 12 CHAPTER THREE: RESEARCH PROCESS AND FINDINGS ........................................................ 13 The Original Research Goal...................................................................................................................... 13 Initial Observations.................................................................................................................................... 13 The Revised Research Problem (Part One) ............................................................................................ 15 Analysis of Writing Prompts .................................................................................................................... 16 The Revised Research Problem (Part Two) ............................................................................................ 18 Faculty Focus Groups ................................................................................................................................ 18 General Findings and Final Revised Research Problem ....................................................................... 20 Determining the Points of Intervention ................................................................................................. 21 Defining Potential User Roles .................................................................................................................. 23 Primary Users ........................................................................................................................................ 23 Secondary Users .................................................................................................................................... 24 Proposing a Solution .................................................................................................................................. 24 CHAPTER FOUR: THE TRANSFORMED SCENARIO .................................................................... 
25 Imagining the Transformed Scenario ...................................................................................................... 25 PromptMe User Classes ............................................................................................................................ 27 Prompt Managers .................................................................................................................................. 27 Prompt Evaluators ................................................................................................................................ 28 Prompt Viewers ..................................................................................................................................... 29 PromptMe Activities .................................................................................................................................. 29 The Design Process.................................................................................................................................... 30 The Manager Interface .............................................................................................................................. 30 Heat Map and Primary Report Page ........................................................................................................ 31 Confusing Parts Page ................................................................................................................................. 33 v Definition Page ........................................................................................................................................... 36 Paraphrase Page .......................................................................................................................................... 37 Evaluator Report Page ............................................................................................................................... 38 Needs Assistance Page ............................................................................................................................... 40 CHAPTER FIVE: TESTING AND FEEDBACK................................................................................... 42 Proof of Concept Testing ........................................................................................................................ 42 Methodological Foundation for Testing.................................................................................................. 43 Initial Testing ............................................................................................................................................... 44 Lessons Learned From Testing ................................................................................................................ 46 Additional Testing Options ....................................................................................................................... 46 Additional Demonstrations and Feedback ............................................................................................. 47 CHAPTER SIX: CONCLUSIONS .............................................................................................................. 49 Current Standing Summary ....................................................................................................................... 
49
Next Steps .................................................................................................................................................... 50
Fundraising and Entrepreneurship .......................................................................................................... 51
Implications of the Development of PromptMe .................................................................................. 53
APPENDICES ................................................................................................................................................ 55
APPENDIX A: STUDENT INTERVIEW QUESTIONS AND TEST VOCABULARY ............... 56
APPENDIX B: FOCUS GROUP MODERATOR GUIDE .................................................................. 58
BIBLIOGRAPHY ........................................................................................................................................... 62

LIST OF FIGURES

Figure 1: This figure shows the timeline for our process and revisions of our research goal ............ 11
Figure 2: This figure shows the timeline of our initial writing center observation ................................ 15
Figure 3: This figure demonstrates the transformed use case after adding PromptMe to the current process of assigning and explaining writing prompts. ............................................................................... 26
Figure 4: Example of PromptMe's Heat Map Interface ............................................................................ 32
Figure 5: Example of PromptMe's Confusing Parts Interface ................................................................. 35
Figure 6: Example of PromptMe's Definitions Interface.......................................................................... 37
Figure 7: Example of PromptMe's Paraphrase Interface .......................................................................... 38
Figure 8: Example of PromptMe's Evaluators Report Interface ............................................................. 39
Figure 9: Example of PromptMe's Needs Assistance Interface ............................................................... 41

CHAPTER ONE: INTRODUCTION

Introduction

This thesis chronicles the development of a new piece of educational technology, PromptMe, for the composition classroom. As a member of the team that conducted this research, I document the process from the classroom to the project's current standing, explore strategies for research and testing, and lay out the potential avenues of development that the project may take going forward. By detailing the steps that the research team took to take PromptMe from an initial research hypothesis to a tested concept ready for development, this paper will serve as a process guide for writing instructors, instructional designers, and other developers seeking to find technological solutions to classroom exigencies.

Brief History of Software Development in Composition

As composition studies scholars work to better understand the work that goes into developing college writers, the computers and writing subfield looks specifically for ways that technology can be applied to that effort. There is an active body of scholars working on applying technology to the writing classroom, and even developing their own technology.
The earliest work in the field comes from scholars like Hugh Burns, Mimi Schwartz, Wayne Butler, Fred Kemp, and Helen Schwartz in the 1980s, with their work first cataloged by Paul LeBlanc in Writing Teachers Writing Software in 1993. In order to understand the work that goes into developing a new piece of technology for the composition classroom, it is necessary to first understand the importance of the role of academic researchers in this process. Going back to that initial era in the field helps establish the foundations of the work being done today.

In Writing Teachers Writing Software, LeBlanc explains that "the people who build tools, and their methods for doing so, have great power to define their use" (7). By placing the design of new software in the hands of people who specialize in computer aided composition (CAC), the end results will be tools that are more useful to the writing instructor. William Condon, in "Selecting Computer Software For Writing Instruction," adds that new technologies have made writing software more accessible (in 1992) and allow "users who know very little about computer programming to develop applications that fit a variety of needs and that, unlike commercial applications, are actually designed by people who know something about writing instruction" (54).

If, as Condon argues, there are now fewer technical barriers for instructors to develop technology, and with there being a clear need for writing instructors to be involved in the development of new software, it becomes imperative to ask why more instructors aren't involved in that development. LeBlanc complains that departments are doing a poor job of recognizing the role of instructors in the development of new tools, insisting that "software design should be as mainstream an activity for composition professionals as teaching a writing class" (10). Despite that argued imperative, the work is not typically recognized or rewarded within the university system, creating a situation where instructors who develop technology are doing so on their own time (89). This lack of consideration is in spite of the fact that instructors recognize the value of new technology in their classrooms. Instructors can see how "new technologies … improve our working conditions and provide better ways to help our students" (Anson 268), even if existing power structures aren't built to reward the creation process.

In "Technology and Literacy: A Story about the Perils of Not Paying Attention," Cynthia Selfe explains that departments tend to "allocate the responsibility of technology decisions – and oftentimes the responsibility of technology studies – to a single faculty member," making it easy to ignore both the function and support of computers in the classroom (413). This, however, is a dangerous practice, as students are already using technology outside of the classroom, and it becomes an obligation for faculty to be able to "understand and make sense of, to pay attention to, how technology is now inextricably linked to literacy and literacy education in this country" (414). Even updates to existing technology bring their own challenges, as instructors feel that "with every change in technology, teachers have a choice to upgrade and invest time in learning new functions or not upgrade and work with what is available" (Mishra, Koehler & Kereluik 50), so there is a general lack of impetus to introduce completely new systems.
It's not just making the technology itself visible that helps to overcome some of the structural challenges that discourage the development of new technology by instructors, but also making sure the work of creation is visible as well. Creating a clear model for development helps to dispel myths about development and encourage more people to get involved. By failing to show the efforts involved in the development of new technology, a system is created where users take the creation process for granted. As Diehl et al. explain in "Grassroots: Supporting the Knowledge Work of Everyday Life," "when work is invisible, all sorts of poor decisions can result because of the illusions of technological functions" (8).

This visibility issue can also help to explain why so much of the literature on faculty development of technology comes out of the 1980s and 90s. While researchers like LeBlanc, Hawisher, Selfe, Hugh Burns, and Fred Kemp were publishing extensively on the subject in that era, the work has largely been abandoned in recent years. Work in those early days focused on justifying the use of computers as a means of intervention in the writing classroom (Burns 1984), arguing that the introduction of educational technology was a necessary development. This argument took a back seat in the 1990s, as computers became a bigger part of instruction.

At the same time, the actual labor of developing new technology moved from the classroom instructor to instructional designers and developers (Reiser 2001). Reiser pinpoints this shift to the release of a new description of the field of instructional design in 1994. In that statement, the AECT (Association for Educational Communications and Technology) first includes development as a core function of that field. They still cared about the learning process itself, but, much like the CAC work of the preceding decade, development became the core focus (3). With that shift, instructional designers assumed much of the work done to actually create and test new classroom technology.

The other factor that influenced this shift was the widespread introduction of the internet on college campuses. As more students began to use the internet in their personal and classroom practices, the more that research shifted from the development of new technology to the study of how students were using it. In their 1997 study State of Technology in US Classrooms (which is primarily focused on secondary schools), Coley, Cradler and Engel argue that future research should focus on the effectiveness of existing tools, and the pedagogy that supports them (53).

That isn't to say that there isn't development work currently taking place in composition. While instructor-designed technologies, like Eli Review, have been introduced in the last decade (Hart-Davidson, Grabill & McLeod), that's the exception rather than the rule. Instructors who may have worked to research new educational technology are now publishing on the use of the technology rather than the development. While this sort of work is valuable, it's left a gap in the field, as most recent work has focused on application rather than development. To quote a review of Pullman and Gu's 2014 Designing Web-Based Applications for 21st Century Writing Classroom, the most recent survey of work on software created by writing instructors, "the collection is less a manual than a heuristic that does not show 'how to' create web applications but demonstrates what writing instructors can do with these applications" (Petit 237).
Readers Guide

The remainder of this thesis serves as a guide for future researchers who want a model to follow for similar projects. Each chapter discusses one element of the project, and, taken as a whole, will take the reader from the start of the project through its current status and potential implications.

Chapter two serves as an introduction to the key elements that guided this research project. It provides an introduction to the research team, examining their backgrounds and research interests, and explains how they came together. This chapter also provides a timeline for the research and testing phases of the project, and the locations where those phases took place. Finally, it provides an overview of the resources available to the research team. By making the who, what, when, and how of the project explicit, this chapter provides a framework for the rest of the thesis, allowing the reader to better understand the decisions made in the chapters that follow.

The third chapter is where the research process begins. For this project, we started by conducting an analysis of a current scenario impacting the composition classroom. The scenario we chose to address was how students interpret their assigned writing prompts. We then conducted three rounds of research with potential stakeholders (instructors, faculty members, writing center consultants) who were being impacted by the current scenario. This chapter explains the decision making process behind each round of research, and why we revisited the research goal of the project after each round.

Chapter four examines the next phase of the project, our proposal for the transformed scenario. It details the needs and roles of each user in our new model, looking at how they would interact with a software interface and how it could change their current practices. The remainder of the chapter provides an in-depth look at the wireframe model for the PromptMe application, explains the function of each section, and explains how these functions impact user experience. By the end of this chapter the reader will have a clear view of how the project was conceived before entering the testing phase.

Concept testing this proposed application is covered in chapter five. It starts by providing a methodological framework for the testing process, situating the work within the field of user-centered design. The chapter then goes on to examine the procedures used for initial field testing, and what we learned from testing that we could apply to future testing and development. The chapter concludes by looking at the other activities researchers participated in to receive feedback.

The final chapter provides the reader with the current status of PromptMe, and examines what brought it to this point. I also provide a look at potential development paths that PromptMe can take moving forward, and why a future project would want to take each of the paths. The chapter, and this thesis, concludes with a look at how the work done for this project can provide an example for other writing teachers and researchers to follow.

CHAPTER TWO: PROJECT OVERVIEW

Guiding Methodology For This Project

In "Beyond a Narrow Conception of Usability Testing," Patricia Sullivan argues that a research study must first be situated in a specific time, location, timeline, society, and research group (257), so that is what you will find in this chapter.
Before examining our research question, process, or findings, it's important to understand the fundamental factors that influenced all of our work. I introduce our research team, because our experience in the classroom helped us define the area we wanted to examine. I situate our work within a specific timeline and explain why we chose the locations we did, because those factors helped us design our activities. To be clear, the research question does not come first in this chapter because it wasn't the first step of our process. In fact, the development of a strong research question is itself a product of research.

Project Background

PromptMe originated as a course assignment in WRA 482, Information and Interaction Design, which is typically taken by graduate students in Rhetoric and Writing, as well as undergraduate students in the Professional Writing or Experience Architecture majors. The course challenges project teams to find a problem faced by college students, perform research to determine the factors that generate that problem, and then suggest a potential transformed scenario that addresses this exigency. The course took place during the fall semester of 2014.

The Research Team

The research team for this project consisted of three students in the Rhetoric and Writing graduate program at Michigan State. Each of us came into this project with an interest in educational technology, while also working as teaching assistants in the first year writing program. Looking at our individual experiences can provide some context for how we identified a research location and our eventual problem.

Laura Gonzales: Laura was a second year PhD student at MSU at the start of this project. She came into this project with seven years of experience teaching composition or professional writing courses. Her teaching experience included five years at the University of Central Florida, where she had an opportunity to work with first generation and language learning students. Her research focus is on the translation practices of multilingual students, and how researchers can learn from those experiences. As of this writing, Laura holds her doctorate from MSU and serves as an Assistant Professor at the University of Texas at El Paso.

Rebecca Zantjer: Rebecca was a second year MA student in the Digital Rhetoric and Professional Writing track at MSU at the start of this project. She was in her second year of teaching first year writing, and had worked with the teaching assistant training program. Rebecca's research interests focused on usability, user experience, and the intersections between online activity and learning. Rebecca completed her MA in 2015 and currently works as a user-experience professional.

Howard Fooksman: At the start of this project I was a first year student in the Digital Rhetoric and Professional Writing MA at MSU, and was in my first semester of teaching first-year composition. I came into this project having spent the previous few years working as a content producer and freelance writer. My research interests focus on the implementation and support of technology in the classroom, looking primarily at how instructors use technology to support their pedagogy.

Dr. William Hart-Davidson: Dr. Hart-Davidson served as the instructor for WRA 482, and the primary investigator for our IRB application. He served as a mentor through this process, though he was not directly involved in the research or development of PromptMe.
His background and experience in developing Eli Review, software that supports providing feedback on student writing, helped us shape the direction of this project from the beginning.

Research Approach and Timeline

For this project, we took an iterative approach to both identifying our research question and our research design (Cobb). Rather than starting with a fixed question and research trajectory, we decided to let our findings determine the path of the research. We started with a research goal, to discover a problem that currently exists in our classrooms, then conducted three rounds of research: writing center observations, student interviews, and faculty focus groups. After each round of research findings we revisited our original research goal, using assumptions based on our findings to revise the scope of the project and shape our next round of research. By taking this approach to research we were able to be more responsive to our findings throughout the process, letting our research determine the next step we took. Over the course of four months, we moved from a narrow research focus to a broader one, determined some specific areas of intervention, and moved on to our design and testing phase. Our initial goal for this project was not to develop a new piece of academic technology; that possible solution only presented itself as we worked through our research process.

Figure 1 (below) shows our research timeline from the start of the course through the end of the semester. As you can see, each research activity led us to revisit our research goals and design the next research activity. This timeline covers the work described in Chapters Three and Four of this thesis.

Figure 1: This figure shows the timeline for our process and revisions of our research goal.

Research Locations

All of the initial research for this project took place at Michigan State University. Writing center observations took place at the main location for the MSU Writing Center. We conducted our student interviews outside of Wells Hall at MSU. Wells Hall is the largest academic building at Michigan State, and hosts the widest variety of courses, allowing us to speak to students with a wider range of majors than we would have at other locations on campus. Additionally, the location we chose was just outside of a popular campus Starbucks, meaning that we were assured of a heavy flow of foot traffic. We chose to do these interviews in public, as they did not ask students to disclose any information that would be protected, and we hoped that overhearing other participants admit to confusion or difficulties would make other students more willing to share their own experiences.

Faculty focus groups took place in Bessey Hall, home of the Department of Writing, Rhetoric & American Cultures. We chose this location because it was the most convenient gathering point for faculty in the department, and we hoped that would lead to more participants for our focus groups. These research activities are discussed in more detail in Chapter Three.

Testing Timeline and Locations

Once we completed the initial research, we had identified a clear problem (see General Findings and Final Revised Research Problem), a proposed solution (Chapter Three), and a model for the transformed activity (Chapter Four). Once those were in place, we spent the next year testing some of our core activities and presenting the model at academic conferences.
Testing took place primarily within the first year writing program at MSU, with sessions in the classroom and in instructor training sessions. These sessions took place between January 2015 and August 2016, and lasted between 45 minutes and two hours.

Resources

Through this process we drew on resources from the College of Arts and Letters (CAL) and the WIDE Research Center at MSU. We received the 2015 Pathways to Entrepreneurship grant from CAL, which provided us with developmental support and travel funding to present this project at an international conference. We also had access to design software to develop our wireframe models, video cameras to record student interviews, and other materials provided by the Department of Writing, Rhetoric & American Cultures that allowed us to design and test without personal investment (see Chapter Five for more details about our testing process).

CHAPTER THREE: RESEARCH PROCESS AND FINDINGS

In the last chapter I provided an overview of the elements of this research project. I introduced the members of the research team and provided details about their teaching experience and research interests that influenced this project. I provided a timeline for the research and testing phases of PromptMe, and detailed the locations where those phases took place. Finally, I listed the resources that our research team had available as we moved into the research phase of the project. In this chapter, I move on to the research process itself, exploring how we arrived at our research question through multiple rounds of research with potential stakeholders.

The Original Research Goal

For the development of PromptMe, we started with the assumption that international students and language learners face unique challenges dealing with translating academic language in the classroom (Gonzales and Zantjer). Based on this research, we saw that there were certain words that are untranslatable across different languages, and believed that these words caused students to have difficulties in navigating the assignments and interactions in a class. With this problem in mind, we sought to find out how international or language learning students currently deal with the problem of untranslatable words, and what potential solutions may be introduced to make this process simpler for students, instructors, and other stakeholders.

Initial Observations

With this initial question in mind, we began our field research by observing a writing center appointment at Michigan State's Writing Center (see Figure 1, Chapter 2, Writing Center Observation, for this activity's place on the timeline). We initially planned to conduct a series of observations of a variety of writing center consultations, with different students and consultants, but in this first session we observed activities that forced us to go back to our original research assumptions and change our research activities.

The chart in Figure 2 shows the timeline for our writing center observation, a thirty-minute appointment between an undergraduate student client and a graduate student tutor. For this particular assignment, the student was required to find a job opening for a position in his field, and craft a cover letter stating his interest in the position. This student was a junior at MSU, and spoke English as his primary language, which led us to assume that he would already have a grasp on most of the language translations required to complete this assignment.
In addition, the student had already successfully completed a first year writing course at MSU.

In Figure 2, you can see the different activities that took place in the writing center consultation. The boxes marked in red are translation moments, when the client needed help translating a particular term into language he understood. These moments took up most of the appointment.

Figure 2: This figure shows the timeline of our initial writing center observation

After this observation we were able to make some assumptions that we used to revise our initial research question:

● Translation events took up the vast majority of the appointment time. During this appointment, the time taken up by translation amounted to 62.5% of the total appointment time.

● The need for translation help was not limited exclusively to international or second language students. Domestic students, who we originally assumed came into the classroom already prepared to complete these translation events on their own, may also face difficulty in translating academic terminology.

● A student's ability to complete an assignment is highly reliant on the quality of the translations that take place. In this case, we saw the writing center tutor translate the term "forecast" with the task of "identify". From our own reading of the prompt, and our experience as instructors, we would have expected "forecast" to be translated as "predict," rather than "identify," and understand that this could lead a student to submit a paper that does not adequately address the prompt.

● Terms like "design" and "forecast", which, as instructors, we believed to be universally understood by college students, could still pose a challenge. In this student's case, he had seen the term design in a number of different contexts, and was unsure which task he was actually being asked to perform for this assignment.

Following these observations, we realized that we were unnecessarily narrowing the problem that our students faced in translating writing prompts. Rather than just posing a problem for international students and language learners, the process of translating assignment prompts into actionable activity was a challenge for all students, regardless of primary language or academic experience. Students relied on a variety of means to try to determine what they were being asked to do in each assignment, and thus found themselves relying on these translations to successfully complete their assignments.

The Revised Research Problem (Part One)

At this point, we had identified a new potential problem. We had seen that students may have difficulty translating academic language, and that those translations may impact their ability to successfully complete their assignments. We believed that what we had observed in our writing center observation (that students have to dedicate significant time and outside resources to translate unfamiliar terms) could be a problem for a wider student population than we originally suspected, and that the vocabulary wasn't limited just to untranslatable terms but included terms commonly used in assignment prompts. At this point, we decided to follow up on this discovery by talking to students directly to determine how widespread this problem was for students, and how they dealt with the problem in the past.

Analysis of Writing Prompts

With the revised problem identified, we needed to confirm that our observations were correct.
We did this by conducting a series of interviews with both students and faculty, asking both groups to describe their interactions with the writing prompts (see Figure 1, Chapter 2, Student Interviews, for where this fits in our research timeline). In order to have the best data possible for these interviews, we pulled a sample set of 100 prompts used in the first year writing courses at MSU. From these prompts we extracted key verbs (Appendix A) that indicated actions that students were supposed to complete. From our writing center and classroom observation we understood that it was these action words that often caused the most translation events to occur.

Using this list of words, we conducted a series of 30-40 random student interviews over the course of an afternoon. For each student, we presented them with one or more action words, and asked them to define the term, what they thought they were supposed to do when presented with the activity in a prompt, and what contexts they had seen the words in before. From these interviews, we discovered that students often lacked workable definitions for specific words, and that they had different understandings of these words based on personal and academic experience. We also confirmed that these challenges were not limited to international students or language learners, contrary to our initial hypothesis.

When asked how they would determine what to do when presented with one of these unfamiliar terms, students revealed that they often would search for definitions of these terms online, resulting in decontextualized meanings that potentially led students to complete assignments incorrectly. Students also admitted that most often these activities caused them to ask instructors for further clarification, potentially resulting in additional burdens placed on course instructors. We created a short video capturing some of our interview activities, stored here, https://www.youtube.com/watch?v=2SzMWLoR4C8, which captures the general process of these interviews.

The Revised Research Problem (Part Two)

While our initial research centered on student experience, our findings from our student interviews led us to realize that translation issues happened between the students and faculty, and that it would be difficult to address the issue without also speaking to the instructors who are actually creating these prompts. For our next round of research we chose to focus on the instructors' part of the process, as we sought to determine why they used the language that they did, how they wrote their assignment prompts, and how they perceived the issue of student translations. At this point, our research was focused on discovering where translation errors were taking place in the process, and if there was something that an instructor could do to prevent them.

Faculty Focus Groups

For the next step in our research project we invited the teaching faculty in MSU's Department of Writing, Rhetoric and American Cultures to sit down for a series of focus group discussions about writing prompts. These faculty members taught six sections of writing-intensive courses a year, primarily first year or professional writing, with an average of nineteen years of teaching experience. We conducted two focus groups in total, the first consisting of four female instructors who had been teaching at Michigan State for more than a decade, and the second consisting of two male instructors in their first or second year in the department.
For these focus groups, we asked each participant to identify assignments that students had the most difficulty with, and why they thought these problems occurred. Some reasons suggested by instructors for why students struggled with specific assignments included a failure to pay attention to the scaffolding of an assignment, missing the conversation that took place around the prompt when it was first assigned, or being unprepared for or unfamiliar with academic language. We also asked instructors to provide us with a sample prompt that they use in their classrooms. Looking at these prompts, we asked the instructors a series of questions that let us see how they were writing the assignments and introducing them to their students (Appendix B).

When asked to interrogate their own process of creating these assignments, participants had many of the same issues that students did. They often were unsure of the specific reasons that they used certain words in their assignments, and had the same difficulties students did when we asked them to define terms they used in the context of their own assignments. Some participants had borrowed all or part of the assignment from another instructor, but did not see the scaffolding that went into each assignment before and after it was given to students. In addition, participants explained that assignments tended to evolve over time, so the prompts were not always indicative of the activities that they expected students to complete.

Some of our participants also admitted to a lack of faith in the usefulness of their assignment sheets, believing that students often failed to even read through them. Most participants relied on the context of classroom discussion to provide the definitions required to successfully complete these assignments, asking students to actively participate in the process of clarifying what they were asked to do by asking questions and engaging in conversation in the classroom. These conversations provided the first set of translation moments for assignment prompts, but were often wasted as students were "distracted or texting on their phones" rather than being active participants.

These conversations also left out students who were uncomfortable admitting to confusion publicly. From their classroom experience, participants observed that language learners in particular were less likely to participate in these discussions, as they didn't want to admit to having problems with the language. Students would often wait until after class, sometimes until the last minute, before emailing instructors asking for clarification. These conversations would happen repeatedly for each assignment, forcing instructors to spend extra time repeatedly answering the same question outside of the classroom. In addition, these verbal instructions required students to take detailed notes in order to have the full assignment context when they started to work, something that rarely took place in first year writing classrooms. Students who missed the class where assignments were explained were often never provided the full context of the assignment, and had no way of even knowing that additional definitions were provided along with the assignment prompt. Finally, instructors informed us that students often waited until the last minute to determine what they were being asked to do for each task, resulting in last minute emails to instructors asking for clarification right before an assignment was due.
General Findings and Final Revised Research Problem

After three rounds of research (the writing center observation, student interviews, and faculty focus groups), we believed that we had established that translating the writing prompt was an existing exigency within the college writing class, and one that we should look for a solution to address. We saw, in our writing center observation, that students could spend significant preparatory time trying to determine what they were actually being asked to do in a writing prompt, and that errors in that translation could lead to poor outcomes. We learned, from talking to students, that they often came across words or tasks where they didn't understand what they were being asked to do, and that, even having seen the terms before, they still had difficulty meeting instructor expectations. Finally, we learned from instructors that they believed they were adequately explaining the prompts in class, but that students failed to capture those conversations to work off of as they completed their assignments. We also learned from those focus groups that instructors sometimes had trouble articulating why they made the decisions they did in the writing of prompts.

Even with an established problem, we still had two big tasks that we had to address before moving forward with suggesting a solution. The first was to determine the points of intervention: where in the writing process a proposed solution would do the most good. The second task was to establish who our potential users would be, and what their roles in the current system entailed. By establishing what the current process looked like, we could establish where our solution could intervene, and what that transformed scenario would look like.

Determining the Points of Intervention

Following our research, we realized that students and instructors often had different understandings of what they were being asked to do in prompts, and relied on other tools in order to make and complete assignments. We identified four key moments where we could look for a solution that would intervene and improve outcomes:

● During the creation of the prompt – Instructors often create prompts with the assumption that they will be providing context during classroom instruction. Instructors make assumptions about the vocabulary and experience of their students, but know that each class and student brings different lived experience to their work. By providing instructors with a tool that flags or identifies terms that students have had difficulty with, either in previous courses or during the current course, and providing alternatives, instructors are more likely to produce assignments that are understandable and actionable by their students.

● As the prompt is introduced – Instructors can use a tool that can help visualize student confusion in real time, and indicate specific points they should be addressing during classroom discussion. By directing their conversation to the points where students are having the most difficulty, instructors can provide students with appropriate context and clarification. For students, a tool that lets them anonymously admit to confusion without asking questions in front of the class should enable more students to comfortably participate in these discussions.

● As an assignment is completed – By providing a record of the translation moments, a tool could also intervene at the moment students start to work.
These notes will provide students with additional background for assignments, and make it less likely that they will need to ask instructors for additional help. In addition, as students seek help from third parties (like writing center consultants) to understand the teacher's feedback and improve their paper, a tool could be used to provide additional context to someone who didn't participate in the class discussion.

● During grading and assessment – As instructors read and grade student papers, having a record of the definitions they agreed to during initial conversation can help reduce the disconnect between student production and instructor expectation. Instructors can see how their provided translations impacted the work that students submitted, and make notes for future iterations of the prompt.

Defining Potential User Roles

Having discovered an existing exigency that needed to be addressed, and identified the potential moments of intervention, we had to define the potential users of any new system we designed. In order to do this, we needed to decide what each user's current process looks like. For a writing assignment, we established that there will be primary users, those that work in the system to complete the translation process, and secondary users, who are stakeholders in student outcomes and the writing process, but don't directly participate in the writing or grading process. Each of these user groups has its own interests and interactions with writing prompts.

Primary Users

Instructors: Instructors create the initial prompt, both in the written document and with the associated classroom instruction. They then assign the prompt to students, who provide feedback on the prompt. Instructors then receive a completed paper from students, providing additional feedback on the prompt. Instructors then provide additional information to students based on this feedback. They also use this feedback to evaluate and revise the prompt for future use. Instructors are also responsible for evaluating student performance, based largely on students' ability to act on the prompt assigned.

Students: Students receive a prompt from instructors. They then interpret and evaluate the prompt, and provide feedback to the instructor (through questions). Students then use the prompt to complete the assignment, receiving feedback on it from instructors and third parties, including writing center consultants and peers.

Secondary Users

Writing Tutors (and other third parties): Students often seek help in completing assignments from a variety of third party sources, including parents, peers, and academic support like advisors and writing center consultants. These parties help students interpret the prompt, generate feedback on it that students can send to instructors, and provide feedback on student writing (relying on the prompt to do so).

Writing Program Administrators (and other stakeholders): Program directors primarily see prompts during assessment and evaluation of instructors. They may provide feedback to instructors on the prompt, receive feedback from students and instructors on the prompt, and evaluate student and instructor performance. In addition, WPAs set course requirements and expectations, including learning goals, which often determine what a prompt is designed to accomplish.

Proposing a Solution

With our points of intervention and user groups identified, the next step in the development process for this project was to suggest a potential solution.
Based on our personal experiences with classroom software such as Eli Review, which offered a way for instructors to facilitate and intervene in the peer review process, we chose to suggest a technological solution to our research problem. This decision led us to the creation of PromptMe, and a search for a transformed scenario that would assist students and instructors in translating writing prompts into actual writing projects.

CHAPTER FOUR: THE TRANSFORMED SCENARIO

In the last chapter I described the three rounds of research that our research team conducted to develop our research question. I then offered a detailed look at the current process students use to translate their writing prompts, and the roles of the student, instructor, and other stakeholders in the current scenario. In this chapter, I offer one potential transformed scenario and examine how distinct user groups will interact with our proposed application. I will then walk the reader through a wireframe model of the new application's interface.

Imagining the Transformed Scenario

For the first stage of development for PromptMe, we focused on the two primary user groups. In the current system, instructors need a way to identify moments where translation and translation errors occur, identify students who are having the most difficulty with the assignment, and have a record of these interactions for future revision and analysis. Students need a system that helps them identify specific terms or tasks that are unclear, provide alternative or clarifying language, and provide actionable feedback to their instructor on the prompt. Both user groups need a record of these activities that they can refer back to when completing their follow up tasks.

In this transformed scenario, we imagined a system where instructors create a prompt, and allow students to interact with it in a number of ways. Students will be able to identify unclear or confusing words or phrases, and mark them up in the system, providing feedback for the instructor that shows them where students are having the most problems, and which students are having the most difficulty. The system will also allow students to add paraphrased definitions in their own words to the system, helping instructors see where translation errors are occurring. Finally, the system we imagined will provide an archive of these translation moments and definitions, accessible through the system dashboard, which will let users refer back to these changes and conversations going forward.

Figure 3: This figure demonstrates the transformed use case after adding PromptMe to the current process of assigning and explaining writing prompts.

This initial workflow model shaped our design process, though it failed to capture a number of interactions that went into the final design of PromptMe. These differences were only discovered as we started visualizing what the new system would do and look like. Many of the changes we made going forward concentrated on the two primary user groups, and how they could interact with the prompt in a new system.

PromptMe User Classes

While we started with four potential user groups for any system we developed, the final model of PromptMe we propose only has three classes of users. We've also chosen to use new labels to refer to each user class in the system, as we believe that the system should define users by their interactions with the prompt, not by their title. We've defined these groups as Prompt Managers, Prompt Evaluators, and Prompt Viewers.
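To make this role model concrete, the listing below is a minimal sketch, in Python, of one way the three user classes and their prompt-centered permissions could be represented. It is an illustration only, not part of the wireframe prototype the team actually built; the class names, permission strings, and the can() helper are assumptions introduced for this example.

# Hypothetical sketch: one way PromptMe's three user classes could be modeled.
# All names and permissions here are illustrative assumptions, not the team's
# actual implementation (the project stopped at a clickable PDF wireframe).
from enum import Enum, auto


class UserClass(Enum):
    MANAGER = auto()    # typically the instructor
    EVALUATOR = auto()  # typically the students
    VIEWER = auto()     # administrators, tutors, other instructors


# Users are defined by what they may do to a prompt, not by their title.
PERMISSIONS = {
    UserClass.MANAGER: {
        "upload_prompt", "assign_activities", "view_heat_map",
        "endorse_translations", "share_with_viewers",
    },
    UserClass.EVALUATOR: {
        "highlight_confusing_terms", "add_definitions",
        "paraphrase_prompt", "request_private_help",
    },
    UserClass.VIEWER: {
        "view_shared_prompts", "view_anonymous_feedback",
        "archive_prompts",
    },
}


def can(user_class, action):
    """Return True if this user class is allowed to perform the action."""
    return action in PERMISSIONS.get(user_class, set())


if __name__ == "__main__":
    print(can(UserClass.EVALUATOR, "highlight_confusing_terms"))  # True
    print(can(UserClass.VIEWER, "endorse_translations"))          # False

Expressing access as a mapping from user class to allowed prompt interactions mirrors the decision to define users by what they do with a prompt rather than by their institutional title, and it also accommodates one person belonging to more than one class.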
These class labels, we hope, will help democratize the process of creating usable assignment prompts, and fight against some of the systemic problems with classroom power structures. Each user class will be enabled to complete unique actions in the system, and have access to different types of data that PromptMe generates. One user can find themselves belonging to multiple classes in this system.

Prompt Managers

Prompt managers in PromptMe will primarily be the instructors, though we imagine scenarios where students can workshop activities within PromptMe as managers as well. We chose to call this group managers, as they will direct the available activities and process flow through the system. Managers will have a number of abilities in PromptMe:

● Upload and share a prompt with evaluators or viewers.
● Determine which activities evaluators will be asked to perform.
● Receive real-time feedback on the assignment sheet.
● View a heat map of where evaluators are having the most difficulty translating the assignment.
● Identify which evaluators are having the most problems with the assignment.
● Identify which evaluators are most confident in their ability to translate the prompt.
● See how evaluators are redefining or translating the prompt.
● Endorse translations that best match their goals for the assignment.
● Provide context for these translations that can be used by evaluators and viewers.
● Provide additional definitions where evaluators are struggling in translating the assignment.
● Receive private help requests from individual students.
● Save a record of these activities for future evaluation and revision.
● Choose which viewers or other managers can have access to these records.

Prompt Evaluators

Prompt evaluators will primarily be students, though we hope that instructors will use other instructors as evaluators to test future assignments before they ever get in front of students. We chose to label this group as evaluators, as they will generate the feedback that goes into the system. Evaluators interact with PromptMe in a number of ways:

● Read a prompt assigned to the evaluator by a manager.
● Identify terms and words that are confusing or unclear, or moments where they don't understand the context.
● Create and share alternative definitions for parts of the assignment with the manager and other evaluators.
● View and evaluate alternative definitions offered by other users.
● Compare their understanding of the assignment with other evaluators.
● See which alternative definitions the manager has endorsed as useful.
● Privately request additional individual help from the manager if needed.
● Access and share a record of endorsed translations while completing the assignment or seeking third party assistance.

Prompt Viewers

The final user class for PromptMe is the viewer. The prompt viewer has limited access to the system, enabled by the prompt manager, for use in assessment or instruction. The viewer can be a program administrator, a teaching assistant or third party tutor, or another manager looking for help in crafting their own assignments. We labeled this class viewers because PromptMe will not allow them to participate in the live translation and negotiation of prompts that take place between the manager and evaluator. Viewers will be able to interact with PromptMe in a number of unique ways:

● View current and past assignment prompts that have been shared with them.
● View anonymous feedback on prompts, including translations and endorsements.
• Associate parts of a writing prompt with specific learning goals for assessment.
• Request additional feedback from managers on planned revisions.
• Archive shared prompts for assessment and future instruction.

By enabling these three classes of users to complete each of these tasks, we had the general structure for a new system in place. Each class interacts with the prompt and interface in unique ways, but all of the processes are complementary within our proposed model.

PromptMe Activities

Managers will upload and share prompts with students to begin the translation process. They can ask evaluators to complete one to three activities within the interface, with each activity assigned either individually or at the same time. Evaluators can indicate confusing terms by highlighting them in the system, add their own definitions for those terms (or for other terms assigned by the manager), or restate the task in their own words. The manager will see the results of each of these activities in their interface, which will allow them to provide additional clarification, endorse definitions or paraphrases that they feel are the most useful, or revise and reassign the prompt based on evaluator feedback.

The Design Process

Once we had an idea of how the system could work, we decided to design a prototype of what the software application would look like. By creating a visual representation of our solution, we were able to identify the core functions of our system, determine how users would interact with the system, and work out how we could replicate these activities for participatory design sessions and other testing activities. We used a PDF to create the following wireframe prototype, which allows users to click through the different functions. This became useful as we began to present PromptMe to academics, as they were able to interact with the file and see how it would work. There was a drawback, however, to using this particular design method: revisiting and revising the original model was labor intensive. This had a significant impact on our decision to use paper prototyping in our testing.

The Manager Interface

PromptMe's entire function revolves around the prompt manager interface and the activities that managers can schedule in the system. We've mocked up a number of the activities that a manager can create within the system and what each step looks like in the interface. The following figures give examples of how an instructor, working as a prompt manager, can improve the translation of their prompts into language that students can understand and act on.

The first activity for a prompt manager is to upload and assign a prompt to students. Students will see the prompt within the PromptMe interface, and will be asked to highlight terms that they find unclear or confusing. This information generates a heat map on the manager's screen, showing them, in real time, where students are having the most difficulty.

Heat Map and Primary Report Page

The heat map (top half of Figure 4) provides the first point of intervention for a prompt manager in the system. Managers will be provided with a heat map showing which words and terms evaluators identify as the most difficult to translate. The heat map indicates problem terms in three ways: by changing the font color, on a scale from yellow (fewest highlights) to red (most highlights); by increasing the text size based on the number of evaluator highlights; and by adding a small indicator with the number of evaluators who highlighted the term.
By mousing over the number, managers can see a list of all evaluators who marked up that particular term. The heat map also provides some sortable views, allowing instructors to see which terms were highlighted by individual evaluators and which evaluators were most confused. Managers will be able to set the threshold for which evaluators need assistance, but evaluators will also be able to self-nominate for additional help, just as if they had asked for extra help during class.

Figure 4: Example of PromptMe's Heat Map Interface

There are three additional information blocks available on the main manager interface (lower half of Figure 4), providing the results of each of the activities evaluators can complete within the system. The first box, labeled Confusing Parts, shows a full list of terms that evaluators have highlighted, independent of the written prompt. The view on the primary page provides a total count of how many terms are marked by different percentages of evaluators.

The second box, located in the bottom center of the main page, includes a link to evaluator-defined terms. These terms are ones that have been selected by either managers or evaluators as needing to be defined, and clicking on them will take the manager to a full list of defined terms, with the provided definitions. Terms can be sorted by the order in which they appear in the prompt, alphabetically, or by the frequency of evaluator definitions. If this activity has not been enabled, the box will include an option to assign the task of creating definitions to students.

The final box, located in the bottom right corner of the main manager interface (named "In Their Words"), provides a link to the paraphrased prompts that evaluators can be asked to complete. This box indicates the number of evaluators who have completed their restatement of the prompt, and clicking on it will take managers to a complete list of restated prompts, along with a list of evaluators who have failed to complete the activity. If this activity has not been enabled, the box will include an option to assign evaluators the task of restating the assignment in their own terms.

The evaluator version of this page will depend heavily on what the manager has enabled. Evaluators can see the prompt itself, along with the three task boxes. If a specific box is enabled, clicking on it will take evaluators to the place where they can complete the activity. Evaluators may be able to see the heat map, without the names of students attached, if their prompt manager has chosen to share it. Otherwise they will only see their own version of the prompt with their own highlights. The viewer version of this page can mirror the instructor page, without the ability to edit or assign any part of the assignment. Managers will be able to anonymize all data before sharing it, to protect student privacy.

Confusing Parts Page

The first activity that evaluators can be asked to complete is to indicate words or terms that they find confusing or unclear. In addition to generating a heat map, this activity creates the Confusing Parts page (Figure 5), which can be accessed by clicking on the Confusing Parts box on the main page. This page repeats the same information that managers see on the heat map (Figure 4), sorted by frequency. We use the same color index (from yellow to red) to indicate which terms evaluators find the most confusing.
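To make this three-part encoding concrete, the following illustrative Python sketch maps highlight counts to display attributes. PromptMe has not been built, so the thresholds, hex color values, and font sizes here are placeholder assumptions; only the general behavior, where more highlights produce a warmer color, a larger size, and a visible count, reflects our design.

# Hypothetical evaluator highlights: term -> set of evaluators who marked it as confusing.
highlights = {
    "synthesize": {"s01", "s02", "s03", "s07"},
    "rhetorical situation": {"s02", "s05"},
    "audience": {"s09"},
}
TOTAL_EVALUATORS = 10  # assumed class size

def heat_style(count: int, total: int) -> dict:
    """Map a highlight count to placeholder display attributes (color, size, count badge)."""
    share = count / total
    if share >= 0.5:
        color, size = "#cc0000", 22   # red: most highlights
    elif share >= 0.25:
        color, size = "#e07000", 18   # orange: moderate
    else:
        color, size = "#e0c000", 15   # yellow: fewest highlights
    return {"color": color, "font_px": size, "badge": count}

report = {term: heat_style(len(who), TOTAL_EVALUATORS) for term, who in highlights.items()}
# Sorted by frequency, as on the Confusing Parts page.
for term, style in sorted(report.items(), key=lambda kv: -kv[1]["badge"]):
    print(term, style)

Using the share of the class rather than a raw count would keep the color scale meaningful for classes of different sizes, which is one reason the manager-set threshold matters.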
By displaying the terms in this form, managers can determine which parts of the prompt are presenting the most difficulty, and can generate a list of terms that they can ask students to define. Each term will be displayed within the sentence in which it appears, to help managers contextualize how they used the term. Managers will be able to click on each specific term to see which evaluators indicated that they were having problems translating that term. Instructors will be able to choose to display this list to evaluators either live or once the activity is completed. By providing live feedback, we believe that evaluators will feel more comfortable admitting that they don't understand specific terms after seeing that other evaluators have done the same. Evaluators will not be able to see the names of those who highlighted each term.

One point of feedback we received while demonstrating this system to instructors was the need to set a minimum or maximum number of terms that students can highlight, or to set limits on how long a highlighted section can be. While we understand how instructors may fear that students will not take the activity seriously and wind up highlighting everything, or not highlighting anything at all, we believe that the onus for building value in the process falls on the teacher, not the technology, and that a strong instructor commitment to acting in the prompt manager role will result in improved student activity as evaluators.

Figure 5: Example of PromptMe's Confusing Parts Interface

Definition Page

The definition page (Figure 6) is where managers can assign evaluators to provide definitions for either highlighted or assigned terms. Evaluators will provide their self-generated definitions for each of the terms, which will be compiled on the definition page. Managers can then sort these terms by student name, alphabetically, or by the frequency with which the terms have been defined. Once the definitions are submitted, managers are able to ask evaluators to select which definitions they find the most helpful, as well as endorse the definitions they feel most accurately represent what they are asking evaluators to do in the assignment. By offering the opportunity to endorse evaluator-generated definitions, managers empower evaluators to take control of the translation process. Evaluators who see their own definition endorsed by fellow evaluators or managers will become more confident in their ability to read and interpret prompts.

Evaluators will be able to see their own definitions immediately, as well as rate their comfort with and confidence in the terms they selected. Evaluators will only be able to see and endorse the definitions provided by each other once the manager has enabled that activity. Evaluators will not be able to see the names of the peers who provided each definition, removing the incentive to endorse the definitions of people they believe have a greater grasp of the assignment based on outside experience.

Figure 6: Example of PromptMe's Definitions Interface

Paraphrase Page

The final activity managers can generate in PromptMe is asking evaluators to paraphrase the assignment, or to restate the parts they understand in their own words. This activity allows managers to see how students are translating the prompt, and to endorse and share restatements that best translate the original prompt into workable language.
The mockup below (Figure 7) shows a typical user response, which can include both a restatement of the task and additional evaluator-generated comments. Again, as with definitions, managers can share evaluator paraphrases with the entire group, allowing them to endorse the ones they think come closest to their understanding of the assignment. This sharing can happen live (as restatements are created) or delayed (after all evaluators have completed the task). While live sharing will allow evaluators to see other approaches as they craft their own restatement of the prompt, it is also likely to allow misunderstandings by early submitters to influence other evaluators.

Figure 7: Example of PromptMe's Paraphrase Interface

Evaluator Report Page

One way that managers can focus on individual evaluator activities within PromptMe is to view the activity report for a single evaluator (Figure 8). This report can be accessed by clicking the evaluator's name anywhere in the interface, or from a class roster available from a toolbar dropdown list. This page provides a guide to all highlighted terms, definitions, and paraphrases provided by that evaluator, along with their confidence levels and number of endorsed definitions. A manager can look at this page for a snapshot of how each evaluator is working through the prompt translation, and use it to identify evaluators who are doing well and those who are struggling. Evaluators who demonstrate a firm grasp of the assignment prompt can be used in the classroom as peer mentors, while evaluators who are struggling can be flagged, using the "assist this evaluator" button, and added to a list of participants who need extra attention or instruction. This sort of feedback will again allow early intervention moments to take place, but relies on students actively engaging as evaluators in order to provide actionable data to their instructors.

Figure 8: Example of PromptMe's Evaluator Report Interface

The evaluator version of this page will provide the same information on the evaluator's individual performance and activity. Activities that have yet to be completed will display in the corresponding boxes, while the "assist this evaluator" button is replaced by an "I need assistance" button that allows evaluators to privately alert the prompt manager that they need additional help.

Needs Assistance Page

The final page available in our proposed model is the needs assistance page (Figure 9), where managers can see reports on students whom they flagged as needing extra attention, or who self-nominated for additional attention. Students who self-nominate are able to add comments, readable only by the prompt manager, to explain what they need additional help with. Each evaluator who needs additional assistance will be listed, along with their activity in the system and their comments (if entered). Clicking on a name will take managers back to the evaluator report page for more details.

These functions combine to address all of the needs we identified in our early research, while minimizing the changes to the existing user roles. With these activities defined and modeled, it was time to begin testing.

Figure 9: Example of PromptMe's Needs Assistance Interface

CHAPTER FIVE: TESTING AND FEEDBACK

In the last chapter I defined the potential user roles and activities in the transformed scenario. I also showcased the wireframe model for each activity and explained what need is addressed by each function.
In this chapter, I walk through our concept testing process for our proposed application. I describe the methodological framework we used to conduct that testing, detail a typical testing session, and discuss how we gathered feedback on those tests.

Proof of Concept Testing

It is all well and good to design a theoretical system to improve the classroom experience, but we knew that, before the full system could be built and implemented, we needed to test some of our activities in a real-world environment, something emphasized by Lean UX (Gothelf and Seiden 2013). In Lean UX: Applying Lean Principles to Improve User Experience, Gothelf and Seiden provide a heuristic for developing a testing prototype that includes determining who is going to be using it, what you want to learn from it, and how much time you want to invest in development. They encourage this step as a way to reduce later work, as "knowing your audience allows you to create the smallest possible prototype that will generate meaningful feedback from this audience" (59). For our efforts, we concentrated on low-fidelity prototypes like paper and wireframes, as they provide an easy way to get something in front of users and gather feedback. By breaking out some of PromptMe's key functionality into manageable, repeatable classroom activities, we hoped to observe student interactions with these activities, garner feedback on how students felt while using them, and look for ways to continue fine-tuning the process flow before putting the software into development.

Methodological Foundation for Testing

Once we had a system designed, we decided to apply a participatory design model to our testing process. One reason we took this approach is that participatory design treats design as a product of user-centered research (Spinuzzi), instead of relying on the designer to establish the parameters, and relies on the knowledge of our testers to provide generative feedback. We wanted to place ourselves as facilitators of the testing process, allowing our participants to control much of the interaction with our model, and allowing us to be responsive to the feedback that we received during each testing session. This process becomes iterative, as each set of interactions leads to a new model for testing. Here we drew heavily on Kuniavsky's view of interviewing (119), as we understood that observing and interviewing participants work together to provide actionable feedback on a testing session. Kuniavsky argues for impartial interviewers who allow subjects to control the direction of the interview. This was particularly challenging for us, as we were heavily invested in the success of the PromptMe model, and had to work to avoid allowing our own biases to creep into our post-session interviews.

For most of our classroom activities, we relied on paper prototyping to create simple, easily adjustable activities for our participants. Paper prototyping allowed us to create materials for testing cheaply and quickly, while providing us with documents that could be changed on the fly to react to specific challenges. This sort of prototyping is designed to test core concepts, rather than design, which made it a great first step in the process.

Initial Testing

We began this process by hosting a series of workshops, both for first year writing classes and for faculty who teach first year writing.
We had been invited to these sessions by the instructors or trainers, who wanted to see if we could help students improve their process of translating prompts. By coming into these classrooms as an impartial third party, we could assure students that their participation wouldn't impact their grades. For the instructor sessions, we were invited as guests of the first year writing program to demonstrate activities that instructors could reuse in their own classes.

Our initial test activities were meant to mimic the annotation and definition stages of the PromptMe model. Without a model of the software available to test, we provided students with sticky notes and markers, printed a general prompt provided by the first year writing program on a 3' x 4' page, and asked them to work with it. While one member of our research team facilitated the activity, another member of the team would circulate the room, assisting participants with completing the activity and collecting feedback on their impressions of it.

Since much of the work in PromptMe is collaborative, we began by placing participants into groups of two or three. We asked participants to underline the terms or tasks that they found confusing. Each group was asked to select one member to highlight at a time. In situations where multiple participants wanted to underline the same part of the prompt, we asked them to mark next to the word, so we could see how many people were having trouble in the same places. This process took between 15 and 20 minutes on average to complete. As we expected, many of the same words we had identified as causing problems in our earlier research, such as synthesize and compare, were the words that participants were highlighting within the context of the actual prompt. This activity generated a heat map of where participants were confused, mirroring the purpose of the confusing words activity in PromptMe.

Once we had a marked-up prompt, we collected the most popular translation problems and wrote them on the board. Selecting the most frequently marked words, we asked participants to work in their groups to provide definitions based on their own experiences. Unexpectedly, even though these activities were analog, many groups sought out dictionary definitions online rather than risk providing inaccurate definitions. After giving groups 15-20 minutes to complete this activity, we brought everyone back together to compare results. Each group wrote their definitions on a sticky note, which we collected. We then sought to reach consensus by comparing the provided definitions, and came up with an instructor-endorsed definition for each term. This model closely resembles the definition and endorsement activity in PromptMe, and showed us that students were interested in trying to redefine activities, but often relied on expert sources rather than their own ability to deduce definitions from personal experience and context clues.

The second workshop activity was the paraphrase activity from PromptMe. We asked each group of participants to restate parts of the prompt in their own words. Again, we had them submit these restatements on sticky notes, then read and voted on which new wording best represented the activity that the instructor intended. In this scenario, we saw participants draw more from their own experience, as they often provided new language that they had seen in other assignments.
This meant that participants were able to connect this assignment to activities that they had completed in the past, hopefully leading to a better chance of them completing the assignment correctly.

Lessons Learned From Testing

By conducting analog testing with students and instructors, we collected valuable feedback that will help with the construction and implementation of PromptMe. We noted that structuring the sequence of activities is crucial. By assigning each activity separately, students were best able to focus on the task at hand and gave robust responses. Adding time to compare and reflect on results helped situate each activity as a new step, and encouraged students to refer to their previous actions when creating new answers. At the same time, we learned that asking students to complete these tasks individually could keep them from relying on classmates, who may also be confused, to ask their questions for them.

In addition to these observations, we also saw how doing this activity in our proposed environment would add benefits for instructors and students. Even with two people facilitating and observing, we had a difficult time identifying the students who were having the most difficulty, which shows the potential value of a flagging system for students who highlight the most terms. Additionally, while the sticky notes gave us a way to visualize where the problems were coming from, they do a poor job of archiving the process. As instructors, we would want to be able to refer back not just to the accepted definitions that the group agreed to, but to the process the group went through to arrive at those definitions. Associating students with their answers, even if those answers were rejected, would provide instructors with a reference frame during evaluations and spotlight student growth over the course of the semester.

Additional Testing Options

As with any new piece of technology, testing is a constant process. The next big move for PromptMe is the move to digital testing. A number of free tools offer functionality that we can use to further test our system before we begin development. Using Google Docs, we can create a shared prompt and allow students to simultaneously highlight terms, and we have tried this in both standard and online classes. While a full class can be somewhat chaotic, we did see an increase in activity compared to the paper version of the prompt. Testing some of the activities in Twine, a web application designed for storytelling that allows for the creation of simple interactive online activities, has allowed us to gauge students' willingness to participate in the definition process individually, and gives students a place to experiment with different versions of the same prompt. The drawback here is that the activities are individual, and students can't see what their classmates are generating. In addition, the Twine model forces instructors to select specific terms in advance, and can't be replicated quickly.

While we've been pleased with the success of our designed activities, and the feedback we've received from participants, we realize that the next step of building PromptMe is going to be the most difficult. Now that we have a sense of what the core activities will need to look like, we seek to move PromptMe from the theoretical to reality, which will require raising the funds necessary to hire developers and programmers to actually build the application.
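Because the shared-document and Twine activities already capture who highlighted what, the flagging behavior described under Lessons Learned could be explored with a short script well before any development work begins. The Python sketch below is purely illustrative: the export format, the threshold value, and the self-nomination list are assumptions rather than features of any existing tool or of PromptMe itself.

# Hypothetical export from a shared-document highlight activity:
# one (student, term) row per highlight.
rows = [
    ("s01", "synthesize"), ("s01", "compare"), ("s01", "audience"),
    ("s02", "synthesize"),
    ("s03", "synthesize"), ("s03", "compare"), ("s03", "genre"), ("s03", "audience"),
]
self_nominated = {"s04"}   # students who used a hypothetical "I need assistance" control
ASSIST_THRESHOLD = 3       # manager-set: flag anyone with this many highlights or more

counts: dict[str, int] = {}
for student, _term in rows:
    counts[student] = counts.get(student, 0) + 1

needs_assistance = {s for s, n in counts.items() if n >= ASSIST_THRESHOLD} | self_nominated
print(sorted(needs_assistance))  # ['s01', 's03', 's04']

Combining a manager-set threshold with self-nomination is exactly the mix of signals the Needs Assistance page is meant to surface, and it relies only on data the analog activities already produce.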
Additional Demonstrations and Feedback

In addition to testing our activities, we also presented our research and design to a variety of potential users for feedback. These presentations included meetings with the Writing Program Administrators at Michigan State, and presentations at the HASTAC and SIGDOC conferences. Feedback from these potential users that we have incorporated into our design includes adding the comfort-level ratings to the evaluator pages, allowing managers to create their own list of terms to be defined, and allowing managers to release activities to evaluators in stages. By adding these options to the system we understand that we are creating a more complex application for users, but we believe the usefulness of the new tools outweighs the risks. These additions will be tested once we've moved past the paper and wireframe prototypes.

CHAPTER SIX: CONCLUSIONS

In chapter five I provided the methodological framework we used during our project's testing phase. I described the typical testing session and the rationale behind each decision. Finally, I discussed the feedback we received from testing and demonstrations. In this final chapter I discuss the project's current status, our potential avenues for continuing, and what we've learned from this process. Finally, I present a summary of the project that future writing instructors and developers can follow.

Current Standing Summary

Thirty months after the start of this project, the development of PromptMe is stalled in the paper testing phase. This does not, however, mean that the project is dead, but rather that we have identified the next steps we need to take and haven't yet pursued them. One challenge with graduate student projects, we've discovered, is that it becomes hard to maintain a research trajectory when members of the team graduate and move on. Even if the project has stagnated, however, that doesn't mean that there aren't lessons that can be taken from our work for future development.

This paper lays out the process by which our team identified a current need in the composition classroom and conducted research to determine where a new process could improve the current situation. We used an iterative model of research design that allowed us to be flexible and responsive to our findings, and used that responsive research to develop our transformed scenario in only three months. We then used that new interaction model to develop a wireframe model that demonstrated the functionality of the system. We successfully moved the project from its classroom origin into the field, and used a participatory design model and paper prototypes to test some of the core functions of our future solution. We collected feedback on our testing model to revisit our initial design, and made changes based on the feedback we collected. While the eventual success of PromptMe depends on factors outside of our control, including procuring funding to build a sustainable model, the process we used to get to this point revealed key steps that can be followed in order to develop a new piece of educational technology. By connecting those steps, this thesis can serve as a roadmap that future researchers can follow to develop other software solutions that may improve student experiences or outcomes. A brief summary of those steps is available in the Implementation section at the end of this chapter.
Next Steps

Moving forward, there are a number of steps that we plan to take to make PromptMe a working tool that can be used in the classroom.

1. Establish a Minimum Viable Product (MVP) – We need to determine which core functions are essential to PromptMe, and what the software can live without in the early phase of implementation. Establishing this will let us move forward with talking to developers and software architects to determine the cost and time needed to develop a testing model.

2. Secure Funding – I go into more detail on this below, but we need to determine what model we want to use for funding, and then secure enough to build out our MVP.

3. Publishing – As we continue the testing process, we need to look at avenues for publishing our research and testing findings. Publications can help establish our credibility as we look for both financial and institutional partners going forward.

4. Test Alternative Solutions – In addition to Twine, there are other existing and emerging technologies, like MIT's Annotation Studio, that may replicate some of the functions of PromptMe. Keeping abreast of new technologies that will either compete with PromptMe or allow us to test other functions will be essential in the next phase.

5. Determine Where PromptMe Lives – One of the final decisions we have to make as we look toward testing a functional PromptMe is where the final product will be situated. Based on our own classroom experience, we've learned that students are reluctant to use too many different software packages in a single class. While we initially conceived of PromptMe as a standalone software solution, we need to consider whether it would be better situated inside existing software like Google Docs or a learning management system like Blackboard. Remaining independent will allow us to have more control over the software as it is implemented, but will require more work to distribute, support, and market it.

Fundraising and Entrepreneurship

In order to progress past our current development point, we have to begin actually developing the software. To do that, we will need to secure the financing to hire outside labor to create the application. We had already secured a guarantee of server space from MATRIX at MSU, and began the process of trying to secure funding to hire the necessary staff to build and support PromptMe. Without a developer or software architect on our team, we knew that we would need to raise enough money to hire both just to get a beta version that could be tested and showcased for future development.

• Self-Funding: We knew, realistically, that this was not an option. Like most graduate student project teams, we simply didn't have the financial assets to invest in building this without outside assistance.

• Grant Funding: Academic organizations and private organizations offer funding to build technology. Some options we researched included the National Council of Teachers of English, the National Endowment for the Humanities, Google, and Microsoft. These organizations usually back only projects that already have a working model, or that are led by more experienced researchers, so we chose to wait until stage two of development to apply.

• Sale or Partnerships: We initially explored partnering with educational technology companies, software companies, or Michigan State in order to raise the money to build our initial model.
While these options may have allowed us to see PromptMe built sooner, we ultimately decided that the loss of control over the project that these partnerships would have necessitated was not something we were comfortable with at this point. While we will eventually consider licensing PromptMe to a company that can provide the distribution and support that a fully functional application will require, we want to retain the rights to our intellectual property until we have something ready to test.

• Entrepreneurship: The growing start-up culture in Michigan provided us with a model for raising funds without sacrificing any of our autonomy. By participating in a series of pitch competitions and public showcases, we hoped to receive grant and award funding that would allow us to remain independent, while at the same time attracting future investors who would be interested in partnering with us down the road.

Over the following six months, we applied to six major local pitch competitions, and were selected to present at three of them. Our first application was to the College of Arts and Letters Pathways to Entrepreneurship program, which provided us with funding to travel to showcase our research, while offering a series of classes on the entrepreneurship process. These classes helped us develop our pitch for non-academic audiences, who may not be as familiar with the concepts we were discussing, and who might have trouble seeing a reason to invest in a project that wasn't financially motivated.

Implications of the Development of PromptMe

Despite the value of understanding how technology comes to be, there has been limited recent research into the actual process of developing educational technology in composition, with much of the literature focusing on bringing preexisting solutions into the classroom, or on adapting pedagogy to existing tools. The work we did with PromptMe puts control of the development of the technology in the hands of the teachers who work with students, was responsive to instructor and student feedback, and showed how modern UX processes can help guide academic research. The process we followed with PromptMe was ethical, research based, and student focused, and it incorporated feedback from all of our potential stakeholders. This thesis, by making all the work that went into developing PromptMe explicit and visible, can provide a model that can be adapted to address other exigencies in the classroom. In summary, here are the steps we took to arrive at this point:

• Locate a Current Scenario

For our research group, our teaching experience made the classroom a natural place to start. We could also have looked to our experience as graduate students, researchers, or members of the campus community to find a scenario we wanted to address. By focusing on an area where we had access and experience, we were better able to navigate the research process.

• Conduct Research to Find an Existing Exigency and its Impacts

Once we located a scenario, we worked to identify a need, or exigency, that existed somewhere in that scenario. For our team, the exigency was that students face problems interpreting the language of assignment prompts. We then conducted three rounds of research activities, talking to different stakeholders, to determine which users were currently being affected and how they interacted in the scenario. After each research activity we revised our research question based on what we had discovered.
• Propose a Revised Scenario that Addresses that Exigency

Once we understood the problem, the users, and their current roles, we proposed a potential revised scenario that introduced a software application into the current scenario. The decision to use an application was based on our interest and experience in using technology in the classroom. Our revised scenario introduced new user roles and activities.

• Conduct Testing on the Proposed Solution

Once we had envisioned a potential solution, we had to field test it. We started this process by grounding our approach in user-centered research, though we could have selected other approaches. Once we had a methodological framework in place for testing, we began working with potential users, both faculty and students, to test key functions of the application.

• Gather and Examine Feedback for Future Revisions

We used our testing activities to collect feedback that we could use to revise both our model and our testing. This process required us to have an open exchange with our users, and to apply our findings to our future work. This process left us with a model ready for development, and demonstrated a repeatable process that we or other researchers can use in the future.

APPENDICES

APPENDIX A: STUDENT INTERVIEW QUESTIONS AND TEST VOCABULARY

Word List

Pulling from a sample of over 100 writing prompts used in MSU's first year writing classes, we produced the following list of common "action words": words or phrases used by the prompt's writer to indicate a specific task they wanted students to complete as part of the project. We then selected a small sample of these terms and asked students about their experiences seeing them in class. The terms or words we used for this activity were:

Reflect
Analyze
Synthesize
Explore
Identify
Evaluate
Compare
Summarize
Write a Thesis
Use Flawless Grammar
Take Notes On/Of

Interviews

For each interview we had our student participant select a random word or term from a hat. Once the student had the specific term, we asked them the following questions:

Have you seen this term before in one of your classes?
If you saw this term in an assignment prompt, what sort of activity would you do?
Have you seen this term before in an assignment? In what context have you seen it?
Has an inability to understand a writing prompt ever impacted your class performance?

APPENDIX B: FOCUS GROUP MODERATOR GUIDE

I. Introduction (5 minutes)

Hi. My name is Laura, and I'll be your moderator today. Rebecca, Howard and I are hosting this focus group to learn a little more about how you design assignment sheets. We are working on a project in Bill Hart-Davidson's Interaction Design class, where we are hoping to learn more about how students interpret the instructions we provide when assigning writing tasks. As writing instructors ourselves, we often wonder where our students mess up… We know that many of them are trying to complete our assignments to the best of their ability, but we know that our instructions can be misinterpreted or misleading. We want to learn more about how we communicate with our students through our assignment sheets, in order to possibly design a system that may facilitate this communication in some way. We don't know what this system will entail yet, but we know that you're the right people to ask for help and we're so thankful for your time. (Say more about the background and design of the course and our purpose/goals.)
If you are not sure what a focus group is, it is simply a group of people who get together to talk about a topic chosen by researchers. The group usually talks about the topic for about an hour, until all of the questions are answered and everybody has said what they want to say. We are thankful for your time and look forward to learning more about how you approach assignment sheet design.

We have a lot to discuss in the next hour, so we might move our talk along at a slower or faster speed, depending on how much we are getting through. If we move on to something else when you still have not said what is on your mind, please don't be afraid to stop me at any time.

Whatever is said in this room will not be shared directly with anyone else. We will not refer to you by name in our findings without consulting you first. We will, however, with your permission, be recording this talk with video recorders so we don't forget what was said. Be sure to speak loudly and clearly so the recorder accurately picks up your voice. Even though your words are being recorded, nothing that you say will be connected with your name. Please take a look at the consent form in front of you and let me know if you have any questions about it. Know that continuing with the focus group after reading the form tells me you're consenting to participate. You can, however, walk out and leave at any point. Finally, if you have to go to the bathroom, please get up quietly and do what you need to do. Then come on back so we can hear your thoughts some more. Any questions? Okay, let's get started!

II. Questions

A. General background (approx. 15 minutes)

1. First, before we get into evaluating any of the information, we'd like to understand a little more about everyone's background and experience, so… Can you tell us what courses you teach and how long you've been teaching?

2. When did you first come to MSU?

3. Excellent. So, part of the reason we asked you to join us was because all of you teach writing in some capacity at MSU. We want to learn a little bit more about the assignments you give your students in your writing courses. Can we go around the room and briefly share what assignments we give our students in our courses? For example, I'm currently teaching the introduction to professional writing course, and I assign a rhetorical analysis of communication strategies, a profile of a PW major, and a group project where students help a client revise and redesign specific documents. What assignments do you all have in your courses?

B. Specific Design Questions (approx. 40 minutes)

Great. So we have a few similarities in the assignments we teach.

1. You should have a sheet of paper in front of you. If I were to ask which assignment your students struggle with the most, which one would you point to? Can you write the number one on the sheet of paper in front of you, and then list the assignment? Now let's go around and share the assignments we wrote down. What do you think makes this particular assignment so difficult for students to understand?

2. You should have brought an assignment sheet with you today. If possible, could you trade that assignment sheet with someone else in the group? Now, take a couple of minutes to read that assignment sheet. Let's go around the room and have your partner explain what they think this assignment sheet is asking students to do.

3. What are your goals for the assignment you brought with you today? What do you hope your students will do in this assignment?

4.
What are the biggest feedback comments you give to students based on this assignment? Are there things you address in class now based on your experience with this assignment? What do your students do really well on? Not so well?

5. How do you present this assignment sheet to your students? Do you print it? Do you just put it on D2L? What do you say or remind them of?

6. Where or how do you typically write assignment sheets? In Word? Watch video and react to it?

MODERATOR: Well, that's all for today… thanks again for your help!

BIBLIOGRAPHY

Anson, Chris M. "Distant Voices: Teaching and Writing in a Culture of Technology." College English, vol. 61, no. 3, 1999, pp. 261–280, www.jstor.org/stable/379069.

Burns, Hugh L., and George H. Culp. "Stimulating Invention in English Composition through Computer-Assisted Instruction." Educational Technology, vol. 20, no. 8, 1980, pp. 5–10.

Burns, Hugh. "The Challenge for Computer-Assisted Rhetoric." Computers and the Humanities, vol. 18, no. 3/4, 1984, pp. 173–181, www.jstor.org/stable/30204327.

Cobb, Paul, et al. "Design Experiments in Educational Research." Educational Researcher, vol. 32, no. 1, 2003, pp. 9–13.

Condon, William. "Selecting Computer Software for Writing Instruction: Some Considerations." Computers and Composition, vol. 10, no. 1, 1992, pp. 53–56.

Diehl, Amy, et al. "Grassroots: Supporting the Knowledge Work of Everyday Life." Technical Communication Quarterly, vol. 17, no. 4, 2008, pp. 413–434. ProQuest. Accessed 28 Mar. 2017.

DeWitt, Scott Lloyd. Writing Inventions: Identities, Technologies, Pedagogies. SUNY Press, 2001.

Gonzales, Laura, Rebecca Zantjer, and Howard Fooksman. "Portable Pedagogy: How Interaction Design Made Us Better Teachers." Proceedings of the 33rd Annual International Conference on the Design of Communication, ACM, 2015.

Gonzales, Laura, and Rebecca Zantjer. "Translation as a User-Localization Practice." Technical Communication, vol. 62, no. 4, 2015, pp. 271–284.

Gothelf, Jeff, and Josh Seiden. Lean UX: Applying Lean Principles to Improve User Experience. O'Reilly Media, Inc., 2013.

Hart-Davidson, William, Jeffrey Grabill, and Michael McLeod. "Systems and Methods for Tracking and Evaluating Review Tasks." U.S. Patent Application No. 13/045,632.

Hawisher, Gail E., and Cynthia L. Selfe. "The Rhetoric of Technology and the Electronic Writing Class." College Composition and Communication, vol. 42, no. 1, 1991, pp. 55–65, www.jstor.org/stable/357539.

Kemp, Fred. "Who Programmed This? Examining the Instructional Attitudes of Writing-Support Software." Computers and Composition, vol. 10, no. 1, 1992, pp. 9–24.

Kuniavsky, Mike. Observing the User Experience: A Practitioner's Guide to User Research. Morgan Kaufmann, 2003.

LeBlanc, Paul J. Writing Teachers Writing Software: Creating Our Place in the Electronic Age. Advances in Computers and Composition Studies Series, National Council of Teachers of English, 1993.

Mishra, Punya, Matthew J. Koehler, and Kristen Kereluik. "Looking Back to the Future of Educational Technology." TechTrends, vol. 53, no. 5, 2009, p. 49.

Petit, Angela. "Designing Web-Based Applications for 21st Century Writing Classrooms." IEEE Transactions on Professional Communication, vol. 57, no. 3, 2014, pp. 235–237.

Reiser, Robert A. "A History of Instructional Design and Technology: Part I: A History of Instructional Media." Educational Technology Research and Development, vol. 49, no. 1, 2001, pp. 53–64.

Selfe, Cynthia L. "Technology and Literacy: A Story about the Perils of Not Paying Attention." College Composition and Communication, vol. 50, no. 3, 1999, pp. 411–436, www.jstor.org/stable/358859.
Spinuzzi, Clay. "The Methodology of Participatory Design." Technical Communication, vol. 52, no. 2, 2005, pp. 163–174.