This is to certify that the thesis entitled

EVALUATING WORKPLACE ENGLISH LANGUAGE PROGRAM SUCCESS: QUANTITATIVE AND QUALITATIVE ASSESSMENTS

presented by Kristin Joy Ekkens has been accepted towards fulfillment of the requirements for the M.A. degree in Linguistics and Germanic, Slavic, Asian, and African Languages.

Major Professor's Signature

Date

EVALUATING WORKPLACE ENGLISH LANGUAGE PROGRAM SUCCESS: QUANTITATIVE AND QUALITATIVE ASSESSMENTS

By

Kristin Joy Ekkens

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF ARTS

Department of Linguistics and Germanic, Slavic, Asian, and African Languages

2007

ABSTRACT

EVALUATING WORKPLACE ENGLISH LANGUAGE PROGRAM SUCCESS: QUANTITATIVE AND QUALITATIVE ASSESSMENTS

By

Kristin Joy Ekkens

Research on workplace English programs suggests that program evaluation is essential to program success. Although researchers agree that program evaluation involves the learners, practitioners, and company management (Burt, 1995; Mikulecky & Lloyd, 1996), no one evaluation method has been established or systematically implemented. The purpose of this paper is to investigate the effectiveness of a variety of evaluation tools within D. L. Kirkpatrick and J. D. Kirkpatrick's (2006) four-level training evaluation model. The participants for this study come from three groups (n = 36) of non-native English-speaking employees in healthcare and manufacturing companies in West Michigan, company representatives (n = 12) such as supervisors and training facilitators, and the service providers (n = 4). The learners participated in an onsite, ten-week workplace English language training and were evaluated using learning journals, standardized tests, pre- and post-interviews, and end-of-class evaluations. The results indicate that to prove workplace English program success, program administrators must work together with company representatives and practitioners to evaluate the reaction to the program, the measurable learning gains, changes in the learners' workplace behavior, as well as the overall results or programmatic impact on the company. It can be concluded that providing stakeholders with accurate program outcomes is only possible when informal assessments are used in conjunction with standardized tests.

Copyright by KRISTIN JOY EKKENS 2007

ACKNOWLEDGMENTS

I would not have been able to complete this thesis without the guidance, support, feedback, and encouragement of many others. I would like to thank my committee, Dr. Paula Winke and Dr. Senta Goertler, for providing excellent feedback. I would like to thank those who have inspired me along the way: Thank you to ESP experts Anne Lomperis, Kay Westerfield, and Barbara Tondre-El Zorkani for your advice and guidance (TESOL Conference 2007). I wish to sincerely thank Kathy Emmenecker, my mentor, as well as Susan Ledy, Vera Grishkina, and Kelly Hernandez, my extremely hard-working colleagues.
Thank you for your encouragement and your patience. Thank you also to my undergraduate advisors, Dr. Elizabeth Vander Lei, James VandenBosch, and Dr. William VandeKopple, who encouraged me to pursue my dreams and gave me a strong foundational education to do so. And finally, I want to extend a special thank you to my husband, Dave Ekkens, who encouraged me to continue working when I lacked focus, who patiently supported my studies in graduate school, and who always encouraged me to do my best. Thank you also to family and friends for their patience, understanding, and encouragement. I could not have done it without all of your support.

TABLE OF CONTENTS

LIST OF TABLES .......... vii

CHAPTER I: INTRODUCTION .......... 1
1.1 General Introduction .......... 1
1.2 Background to the Study .......... 1
1.3 Theoretical Grounding .......... 3
1.4 Rationale for the Study .......... 3
1.5 Organization of the Thesis .......... 5

CHAPTER II: THEORETICAL FRAMEWORK AND LITERATURE REVIEW .......... 6
2.1 Introduction .......... 6
2.2 Kirkpatrick's Four-Level Evaluation Framework .......... 6
2.2.1 Reaction .......... 6
2.2.2 Learning .......... 7
2.2.3 Behavior .......... 7
2.2.4 Results .......... 8
2.2.5 Limitations of Kirkpatrick's Evaluation Framework .......... 9
2.3 Introduction to Workplace Literacy Programs .......... 9
2.3.1 Types of Program Models .......... 10
2.3.2 National Workplace Literacy Funding .......... 12
2.3.3 Current Resources .......... 14
2.3.3.1 Current Programs .......... 14
2.3.3.2 Current Handbooks .......... 16
2.4 Program Assessment and Evaluation .......... 17
2.4.1 Formative and Summative Evaluations .......... 18
2.4.2 Types of Assessment Tools .......... 21
2.4.2.1 Standardized Assessment Tools .......... 22
2.4.2.2 Qualitative Informal Assessments .......... 24
2.4.2.3 Quantitative Informal Assessments .......... 26
2.5 Benefits of Evaluation for Stakeholders .......... 28
2.6 Issues in Evaluation for Workplace English Programs .......... 30
2.7 Gaps in the Literature .......... 33

CHAPTER III: METHODS AND PROCEDURES .......... 35
3.1 Introduction .......... 35
3.2 Definition of Key Concepts .......... 35
3.3 Research Questions .......... 36
3.4 Setting .......... 37
3.5 Subjects .......... 40
3.5.1 Learners .......... 40
3.5.2 Company Representatives .......... 41
3.5.3 Service Provider .......... 42
3.6 Procedures .......... 43
3.6.1 Instruments and Data Collection .......... 43
3.6.2 Data Analysis .......... 49

CHAPTER IV: RESULTS AND DISCUSSION .......... 51
4.1 Introduction .......... 51
4.2.1 Research Question 1 .......... 51
4.2.2 Research Question 2a .......... 57
4.2.3 Research Question 2b .......... 61
4.2.4 Research Question 3 .......... 67
4.3 Summary .......... 72

CHAPTER V: SUMMARY AND CONCLUSIONS .......... 73
5.1 Introduction .......... 73
5.2 General Discussion .......... 73
5.3 Implications Based on Kirkpatrick's Framework .......... 79
5.4 Implications for Company Management .......... 80
5.5 Implications for Program Administration .......... 81
5.6 Limitations .......... 84
5.7 Directions for Future Research .......... 87
5.8 Final Comments .......... 87

APPENDICES .......... 88
APPENDIX A: Handbooks and Reviews on Workplace Literacy from the 1990s .......... 89
APPENDIX B: Participants' Personal Data .......... 90
APPENDIX C: Employee Journal Guidelines .......... 91
APPENDIX D: Sample Learning Journal Entry of Student's Work .......... 92
APPENDIX E: Participant Interview Questions .......... 93
APPENDIX F: Supervisor Interview Questions .......... 94
APPENDIX G: Participant Questionnaire: Employees' Development Effects .......... 95
APPENDIX H: Supervisors' General Rating of Participants .......... 96
APPENDIX I: Supervisors' Evaluation of Program Effects in Their Departments .......... 97
APPENDIX J: Learning Journal Participants' CASAS Scores and Learning Gain Realized .......... 98
APPENDIX K: Participants' Perception of Improvement in Speaking .......... 99
APPENDIX L: Learners' Responses to Behavioral Questionnaire by Company ID .......... 100

REFERENCES .......... 101

LIST OF TABLES

Table 1: Supervisors' Participation in Kirkpatrick's Behavioral Level .......... 8
Table 2: Handbooks and Reviews on Workplace Literacy from the 1990s .......... 89
Table 3: Six Workplace Literacy Programs from NWLP in the 1990s .......... 13
Table 4: Six Workplace Education Programs Recognized by OVAE in 2005 .......... 15
Table 5: Workplace Literacy Handbooks in the 2000s .......... 17
Table 6: Examples of the Four Levels of Evaluation .......... 29
Table 7: Participants' Personal Data .......... 90
Table 8: Learners' Reaction to Workplace English Training .......... 54
Table 9: Evaluating Reactions: Supervisor Interviews and Learners' Journal Comments .......... 56
Table 10: Descriptive Statistics for Standardized Test Scores .......... 58
Table 11: Paired Samples Correlations .......... 59
Table 12: Journal Sample: Paired Samples t-Test within Subjects Design with Two Levels .......... 59
Table 13: Journal Response .......... 62
Table 14: Participants' Perception of Improvement in Listening .......... 62
Table 15: Participants' Perception of Improvement in Reading .......... 64
Table 16: Learners' Responses to Behavioral Questionnaire .......... 67
Table 17: Questionnaire Results: Supervisor Ratings on Learners' Behavior at Work .......... 69
Table 18: Questionnaire Results: Supervisors' Evaluation of Workplace ESL Program Behavioral Effects on Department .......... 71

CHAPTER I: INTRODUCTION

1.1 General Introduction

The purpose of this thesis is to propose a solution to the complex problem of evaluating program effectiveness. As a program administrator for the past five years, I have been faced with finding effective evaluations in order to demonstrate the quality of our program to all stakeholders. Since I was not able to find current and empirically tested evaluation processes for program administration, I developed my own research project, which investigates the effectiveness of qualitative and quantitative assessments of program success. Thus, the goal of this thesis is fourfold: (1) to investigate effective program evaluation; (2) to find a theoretical framework for evaluating training programs; (3) to share my results and their implications with other workplace programs; and (4) to share with other workplace programs findings and implications from previous research.

1.2 Background to the Study

Many workplaces in the United States offer English language classes to their workforce members who are non-native speakers of English. This is done not only to help the workers improve their English; effective training programs lead to positive outcomes for all stakeholders involved. Non-native English-speaking employees participating in language training are found to have increased job satisfaction, enhanced self-esteem, greater job mobility, and higher earning potential along with their increased language skills (Friedenberg, Kennedy, Lomperis, Martin, & Westerfield, 2003). In an effective program, the employers see a return on their investment (ROI) (Martin & Lomperis, 2002). This can be measured by increased productivity, quality of work, and positive work attitudes, as well as a decreased number of accidents, errors, misunderstandings, employee turnover, and absenteeism. And finally, optimal outcomes for the language training provider include enhanced capability, broader professional recognition, and increased profitability (Friedenberg et al., 2003). However, measuring these outcomes and deeming the programs successful and effective involves the cooperation and efforts of all of the stakeholders.

For fee-based programs paid for by the employer and delivered by an external organization, the success of the program depends on the employer's buy-in, especially at the level of the frontline supervisor (Burt, 1997; Grognet, 1996), and on the quality of the external organization's program. That being said, if the employees, or learners, are not pleased with the instruction or course content, they will not be motivated to attend class, let alone achieve considerable gains. The learners need the instructor's feedback as well as encouragement from management. Just as the general and frontline managers must actively support the program, it is important to help them see the transferability of the newly attained language skills to the participants' jobs (Burt, 1995). Moreover, if the course is partially or fully funded by a grant, the grantors often require data measured by state-approved standardized tests.
Therefore, to make a workplace English program successful, it is essential to have the buy-in of all stakeholders (Mikulecky, Lloyd, Kirkley, & Oelker, 1996). In their handbook for practitioners and trainers in workplace education, Mikulecky, Lloyd, Kirkley, et al. stress that workplace literacy programs are similar to businesses in that both must produce quality goods (or services) and monitor that quality in order to survive. Research on workplace literacy programs suggests that program evaluation and participant assessment are essential to program survival. Sticht (1999) argues that evaluation should not be completed only at the conclusion of a program to measure overall accomplishments; rather, it is an integral part of designing and implementing a program. Although researchers in the field agree that program evaluation involving the learners, the practitioners, and the employer is necessary, no one evaluation method has been established or systematically implemented. This presents program administrators with the challenge of not only finding or developing measures to show the effectiveness of the language training, but also presenting the success in a manner that is meaningful for each of the stakeholders. A more thorough discussion of the research on program evaluation will follow in chapter two. Since no systematic measures have been developed, and because program evaluation can be very context-specific, an in-depth descriptive study of evaluating the effectiveness of workplace literacy programs is necessary.

1.3 Theoretical Grounding

This study is informed by Kirkpatrick's (2006) four-level training evaluation framework, which will be discussed in Chapter II. In short, the four levels of this framework are as follows: 1) reaction, 2) learning, 3) behavior, and 4) results. In this framework the levels are considered to be sequential, and are used to measure training program effectiveness. In workplace language training, program administrators and employers can use Kirkpatrick's framework as a base for selecting appropriate evaluation methods for their unique settings.

1.4 Rationale for the Study

Previous research on workplace literacy programs has focused on implementing effective assessments. Many researchers conducted studies 1) to find out what programs were doing to evaluate their effectiveness, and 2) to create an effective assessment model that could be used across sites. Previous research does not paint a consistent picture, and more research is necessary. In this section I briefly describe two major studies on evaluating workplace literacy programs to illustrate the existing need for an effective evaluation framework.

In a national evaluation, Moore, Myers, Silva, and Alamprese (1998) evaluated five workplace language programs and concluded that program effectiveness is associated with instructional time. They argue that due to limited instructional time (16-30 hours per individual), standardized assessments are not appropriate in workplace literacy programs such as those evaluated in their study. Moore et al. (1998) noted that out of 45 programs funded by the National Workplace Literacy Program in the 1990s, only one third used standardized assessments, which may be caused by the short instructional period. The authors conclude that the result of this is the lack of an effective instrument for assessing changes in cognitive skills in short workplace language trainings.
In Mikulecky and Lloyd's (1996b) study evaluating six workplace literacy programs, they stated that evaluation of workplace literacy programs was a relatively new area and that programs lacked an assessment approach flexible enough to address this type of program. Therefore, they attempted to develop a data-driven, systematic approach to evaluating workplace literacy programs that could be used across several small programs. Mikulecky and Lloyd's (1996b) conclusion that they created an assessment model that could feasibly be used across sites is debatable, as most program administrators have less funding and time available for interviews, and more participants to assess, than were included in this three-year study. Furthermore, the focus was on literacy (reading and writing) performance and not exclusively on English for Speakers of Other Languages (ESOL).

Researchers in the field of training agree that a key component of any training or educational program is the program evaluation process (Burkhart, 1996; D. L. Kirkpatrick & J. D. Kirkpatrick, 2006). Unfortunately, after numerous attempts at finding effective evaluation measures, to my knowledge, stakeholders of workplace literacy programs have not accepted a systematic evaluation framework. Considering this recurring dilemma, this thesis is important because it explores a topic that seeks a solution, a solution which previous research has not provided.

1.5 Organization of the Thesis

After this short overview of the study and its context, chapter two will illustrate the theoretical framework and present previous research. First, I outline Kirkpatrick's (2006) training evaluation framework. Then I introduce past and current research on workplace language training programs. And finally, I present previous research on evaluation in language programs. In chapter three I discuss my research questions and how they were examined. I present a detailed description of the context, the methods, the participants, the instruments, and the data collection and data analysis procedures. In chapter four, I present the findings of the data analysis as well as a discussion of the findings as they relate to previous research. In chapter five I summarize the most important findings from chapter four and I offer some implications for company managers, program administrators, and practitioners involved in workplace language training programs. Finally, the limitations of the study are discussed and potential directions for further research are given.

CHAPTER II: THEORETICAL FRAMEWORK AND LITERATURE REVIEW

2.1 Introduction

This thesis operates within D. L. Kirkpatrick and J. D. Kirkpatrick's (2006) four-level framework for evaluating the effectiveness of training programs. First, in this chapter I will address the theoretical framework. Then, I will summarize relevant past and current research on workplace language training programs. And finally, I will outline evaluation tools and methods recommended by researchers and practitioners for workplace language training programs.

2.2 Kirkpatrick's Four-Level Evaluation Framework

For almost fifty years, starting with the first edition in 1959 until the most recent in 2006, businesses and academics alike have turned to Kirkpatrick's four-level training evaluation model (Nickels, 2000). According to this model, evaluation should begin with the first level and move sequentially through levels two, three, and four. The four levels in order from first to last are 1) reaction, 2) learning, 3) behavior, and 4) results (D. L. Kirkpatrick & J. D. Kirkpatrick, 2006).
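Before turning to each level in detail, the sketch below shows one hypothetical way the model's sequential structure could be encoded for a workplace program's evaluation plan. The four level names come from the model itself; the instruments attached to each level are illustrative assumptions drawn from the tools used later in this thesis, not part of Kirkpatrick's framework.

```python
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    """The four sequential levels of D. L. Kirkpatrick and
    J. D. Kirkpatrick's (2006) training evaluation model."""
    REACTION = 1
    LEARNING = 2
    BEHAVIOR = 3
    RESULTS = 4

# Hypothetical mapping of evaluation instruments to each level,
# loosely based on the assessment tools discussed in this thesis.
INSTRUMENTS = {
    KirkpatrickLevel.REACTION: ["end-of-class evaluations", "learner journal comments"],
    KirkpatrickLevel.LEARNING: ["standardized pre-/post-tests", "informal classroom assessments"],
    KirkpatrickLevel.BEHAVIOR: ["supervisor questionnaires", "workplace observations"],
    KirkpatrickLevel.RESULTS: ["company records (quality, turnover, safety)"],
}

def print_evaluation_plan():
    """Walk the levels in the order the model requires and list
    the instruments planned for each."""
    for level in KirkpatrickLevel:
        tools = "; ".join(INSTRUMENTS[level])
        print(f"Level {level.value} ({level.name.title()}): {tools}")

print_evaluation_plan()
```

Encoding the levels as an IntEnum makes the model's core claim, that evaluation proceeds in a fixed order from reaction to results, explicit in the data structure rather than leaving it implicit in prose.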
In the following section, these levels will be discussed in more detail as they relate to workplace language training.

2.2.1 Reaction

The first level of the model evaluates how those who participate in the training react to it. Kirkpatrick calls this a measure of customer satisfaction, the customer being the participant in the training. He argues that the participants' reactions can make or break the training because what they say to their bosses often is passed along to higher-level management who make the funding decisions (D. L. Kirkpatrick & J. D. Kirkpatrick, 2006).

2.2.2 Learning

The second level of this framework is learning. In this evaluation model learning is defined as "the extent to which participants change attitudes, increase knowledge, and/or increase skill as a result of attending the program" (D. L. Kirkpatrick & J. D. Kirkpatrick, 2006, p. 22). Some training programs aim to accomplish one of those goals, and others aim to accomplish all three. In order to evaluate learning, specific objectives must be determined. In this model, D. L. Kirkpatrick and J. D. Kirkpatrick (2006) argue that learning has taken place if one or more of the following has occurred: 1) attitudes have changed, 2) knowledge has increased, and/or 3) skills have improved.

2.2.3 Behavior

D. L. Kirkpatrick and J. D. Kirkpatrick (2006) explain that the third level evaluates "the extent to which change in behavior occurred" (p. 22). According to this framework, four conditions are necessary for a change in a participant's behavior to occur: 1) the person must have a desire to change, 2) the person must know what to do and how to do it, 3) the person must work in the right climate, and 4) the person must be rewarded for changing (2006). Regarding the third condition, D. L. Kirkpatrick and J. D. Kirkpatrick go on to describe five different climates that supervisors create that may or may not encourage a change in behavior: preventing, discouraging, neutral, encouraging, and requiring. These climates lie on a continuum, described in Table 1.

Table 1
Supervisors' Participation in Kirkpatrick's Behavioral Level: Continuum of Work Climates

Preventing: Forbids participation. Leadership style conflicts with what was taught.
Discouraging: Demonstrates a negative example. Makes it clear that the participant should not change behavior.
Neutral: Ignores the learner's participation in training. May discourage or prevent participation if negative results occur because behavior changes.
Encouraging: Encourages learning and its application to the job. Discusses the program with the participant before and after.
Requiring: Knows what the participant learned and helps with knowledge transfer.

Note. The information in this table was adapted from D. L. Kirkpatrick & J. D. Kirkpatrick (2006, pp. 23-24).

Regarding the fourth condition, the rewards can be a) intrinsic, such as feelings of satisfaction, pride, and achievement; b) extrinsic, such as praise from the boss, recognition by others, or monetary rewards; or c) both intrinsic and extrinsic. D. L. Kirkpatrick and J. D. Kirkpatrick (2006) argue that if the first two levels are bypassed, and if a change in behavior is not discovered, the training will be viewed as unsuccessful. This may or may not be a faulty assumption, because the participants may have reacted positively, or they may have increased in knowledge, but these gains obviously cannot be taken into account if they are not evaluated.

2.2.4 Results
D. L. Kirkpatrick and J. D. Kirkpatrick (2006) define results as the final outcomes that occurred because of the training, for example, increased production, improved quality, decreased costs, reduced frequency and/or severity of accidents, increased sales, reduced turnover, and higher profits. If the reason the company is offering the training is to achieve these results, D. L. Kirkpatrick and J. D. Kirkpatrick (2006) argue that the final objectives of the training should be stated in these terms. As it is difficult if not impossible to measure some types of programs in terms of tangible results, this framework suggests that the final results then have to be measured in terms of improved morale or other nonfinancial terms.

2.2.5 Limitations of Kirkpatrick's Evaluation Framework

This framework has been used in many training programs, as exemplified in the 16 case studies included in the third edition of D. L. Kirkpatrick and J. D. Kirkpatrick's (2006) book. The case studies report on evaluations of leadership programs, training programs for new supervisors and managers, courses, performance learning models, and career development, for example. Nonetheless, this framework as presented in their book is limited in scope. Understandably, the authors cannot incorporate every type of training in their case studies; however, the book lacks a case study on the four levels of evaluation within workplace language training programs. As a program administrator it leaves me to wonder: can this framework function in a workplace English program where linguistic and cultural barriers play a huge role? Moreover, will these barriers make it nearly impossible to measure program effectiveness?

2.3 Introduction to Workplace Literacy Programs

Although workplace language training programs have many names for similar services [Vocational English as a Second Language (VESL), Workplace Education Programs (WEP), Workforce English, Workplace Literacy Programs (WLP), English for occupational purposes (EOP), English for professional purposes (EPP), English for business purposes (EBP), Business English], the programs face similar challenges when measuring program effectiveness. In general, workplace language training programs focus on authentic tasks and materials gathered during a needs assessment built on knowledge of the workplace, since the participants often have urgent, work-related goals (Friedenberg et al., 2003). The participants are often immigrants or refugees from various language backgrounds who are employed and taking on-site English language training. The training is often paid for or offered by the employer; moreover, the employer decides whether the training occurs off the employees' shift, on the shift, or partly on and partly off. That is, some employers pay for their employees to attend the training and others offer training as a convenient benefit. These classes are often provided to increase retention, to prevent job loss, or to promote deserving employees (Friedenberg et al., 2003).

2.3.1 Types of Program Models

As mentioned previously, there are many varieties of workplace language training. Some researchers have divided the variety of programs into subgroups according to whom they intend to serve. For example, the term workplace literacy instruction, as used by Mikulecky and Lloyd (1996b), encompasses a number of distinct subgroups, including workplace readiness, a work-centered approach, and a worker-centered approach (Jurmo, 2004; McGroarty & Scott, 1993).
Workplace readiness programs, or pre-workplace classes, often serve unemployed adult ESL learners who are preparing to enter the workplace. Topics covered may be related to interviewing, writing a resume, or filling out the forms needed to obtain a job. Some pre-workplace classes focus on specific types of jobs such as manufacturing, custodial positions, or food service (McGroarty & Scott, 1993).

A work-centered approach is most commonly known as workplace ESL and is second language instruction held at the work site. Goals for instruction are often developed using information collected from a needs assessment in which the service provider interviews the employer and employees and observes and gathers information about the language tasks needed to perform on the job. Often the goals are competency-based (McGroarty & Scott, 1993).

A worker-centered approach, on the other hand, takes a more holistic approach to determining the participants' second language needs. This approach focuses both on the tasks needed to perform on the job and on the personal goals and language needs the participants may have (McGroarty & Scott, 1993).

Other researchers have attempted to define the types of workplace English programs in terms of where the programs are housed. For example, in a report of interviews with 18 workplace literacy education providers across the United States, Burt (1997) categorized the programs into five types: a workplace-educational institute partnership model, a workplace-union partnership, English as a Second Language (ESL) education within a workplace model, a workplace-private contractor partnership model, and a workplace-community-based organization partnership model. The most common model was that of the workplace partnering with an educational institution such as a community college, university, adult education program, or public school system (for further information on program types see Burt, 1997; Friedenberg et al., 2003; Grognet, 1994). More recently, Friedenberg et al. (2003) listed seven different models for workplace English, including a corporate training company, a government-funded organization, and a nongovernmental organization. The authors suggested that some providers may represent a combination of the models.

In sum, it is important to define the program model type because the purpose and objectives of the educational program, and the assessment types required, will often differ depending on a) the type of program model and b) the source of the program funding. The next section will discuss one of the sources that funded a number of workplace literacy programs between 1988 and 1994. This funding promoted a variety of workplace programs including instruction in reading, ESL, GED, and basic math.

2.3.2 National Workplace Literacy Funding

Workplace literacy became a focus of attention in the field of adult literacy in the United States from the mid-1980s to the mid-1990s (Jurmo, 2004). During that time, many state and federal workplace education initiatives were funded, resulting in numerous research reports and how-to guides, some of which are listed chronologically in Table 2, found in Appendix A (see also Burt, 1994a, 1997; Burt & Saccomano, 1995, for reviews of various workplace literacy programs). Many of the programs were designed to help employees (incumbent workers) strengthen basic skills such as oral language, comprehension, reading, writing, and other skills related to the workplace.
From 1988 to 1994 the US Department of Education funded over 300 workplace projects offering instruction in basic skills, literacy, and English as a Second Language (ESL) through the National Workplace Literacy Program (Burt & Saccomano, 1995; Jurmo, 2004). Listed chronologically in Table 3 are five NWLP programs that were recognized for best practices by the US Department of Education. The authors and dates of the final reports and the programs' funding sources are included in Table 3.

Table 3
Six Workplace Literacy Programs from NWLP in the 1990s

1. Food and Beverage Industry ESL Workplace Literacy Curriculum for Hotels. Author of report: VanDuzer. Year: 1990. Funding source: Department of Education.
2. Syracuse Labor/Management Consortium Workplace Literacy Skills Improvement Project. Authors of report: Mosenthal & Hinchman. Year: 1993. Funding source: Office of Vocational and Adult Education (OVAE).
3. Arlington Education and Employment Program (REEP). Author of report: Mansoor. Year: 1993. Funding source: US Department of Education.
4. Skills Enhancement Training Program. Author of report: Burt. Year: 1994. Funding source: The Food and Beverage Workers Union Local 32 & Employers Benefits Fund & OVAE.
5. Globe 2000. Year: 1998. Funding source: Office of Vocational and Adult Education (OVAE).

Nevertheless, as the nearly $133 million dedicated to the National Workplace Literacy Program was depleted, so were many of the workplace programs, as evidenced in the dramatic decline of research and reports on workplace literacy after 1998. Given that the current situation of federal and state funds for adult literacy is uncertain and often limited, the decision to provide workplace education for employees is often left to the employer, as are the costs. Imel (2003) argued that even though the so-called National Workplace Literacy Program era is in the past, workplace literacy programs have not disappeared. Rather, there has been a shift in leadership. Instead of residing at the federal level (i.e., the National Workplace Literacy Program), the leadership for the programs now resides at the state level. National-level efforts can be seen in attempts to provide standards through The National Institute for Literacy's (NIFL) Equipped for the Future project, as well as research compiled by the US Department of Education's Office of Vocational and Adult Education. Imel (2003) mentioned that at the local level many programs are operating without federal assistance or through funding provided by the Workforce Investment Act. Without federal assistance, the service provider depends on contracts with the employers, which often depend on the size of the company, its commitment to education/training in general, and the economic status of its industry (Grognet, 1994). Not surprisingly, as federal and state funding for workplace education programs dwindled, so did the number of active programs.

2.3.3 Current Resources

Although the number of active programs dwindled, many are still functioning. However, to my knowledge, there is no exhaustive list of current workplace literacy programs, possibly as a result of inconsistent funding and due to the difficulties in reporting the different types of programs (for example, adult education, universities, non-profits, for-profits). Although insightful, the handbooks from the 1990s were dated and did not incorporate the technology or the mindset of the 21st century. Therefore, in this section, I have compiled lists of more recent programs (in the 2000s), online resources, and handbooks.
2.3.3.1 Current Programs

In 2005, the Office of Vocational and Adult Education (OVAE) identified the six workplace education programs reported below as noteworthy. That is, OVAE determined that these programs demonstrated the following: 1) significant quantifiable learner gains that are measured by standardized assessments, 2) workplace-related instruction, 3) a foundational skills component customized to a specific workplace, and 4) employer involvement as a full partner (OVAE, 2005). The results of each program are briefly described in Table 4 as reported by OVAE (see OVAE, 2005, for a more detailed description and summary).

Table 4
Six Workplace Education Programs Recognized by OVAE in 2005

1. English Works (Indiana). Participants: 341 tested. Average hours of instruction: 66. Assessments: CASAS; informal assessments. Average results: 5.5 points.
2. Everett Community College Limited English Proficient/WorkFirst Program (Washington). Participants: 400 per year. Average hours of instruction: 36. Assessments: CASAS. Average results: CASAS not available; 85% found employment.
3. Miami Valley Career Technology Center's Hospitality On-Site Training/Professional Cook (HOST/PC) program (Ohio). Participants: 110 per year. Average hours of instruction: 36. Assessments: WorkKeys. Average results: 100% improved scores by at least 1 level.
4. Pennsylvania Workforce Improvement Network (PA WIN) (Pennsylvania). Participants: 721. Average hours of instruction: 30. Assessments: customized rubric. Average results: all 721 demonstrated gains on the customized rubric.
5. Pima College Adult Education's Workplace Education program (Arizona). Participants: NA. Average hours of instruction: NA. Assessments: BEST/TABE. Average results: 55% made 1 or more federal level gains.
6. Workforce Development Council of Seattle-King County "Literacy Works" (Washington). Participants: 227. Average hours of instruction: NA. Assessments: learner and employer evaluations; individual learning goals. Average results: 94% achieved goals; 70% of employers noted improved morale.

In addition to the OVAE-selected adult education programs, many non-profit and for-profit programs exist throughout the United States. The following is a brief list of other workplace literacy programs that have received awards and/or have been recommended by members of NIFL's Workplace Literacy Discussion List. The programs and their website addresses are listed alphabetically below.

a. Creative Workplace Learning (CWL) (www.creativeworkplacelearning.org), located in Brighton, Massachusetts;
b. Customized Workplace English (CWE), Literacy Center of West Michigan (www.kentliteracy.org), located in Grand Rapids, Michigan;
c. Essential Language (www.essentiallanguage.com);
d. Literacy@Work (http://www.literacychicago.org/workplace_literacy.htm);
e. Minneapolis Community and Technical College (http://www.minneapolis.edu/cect/workforce_esl.cfm);
f. New England Literacy Resource Center (www.nelrc.org/practice/workplace.html);
g. South Seattle Community Colleges in Seattle, Washington;
h. Texas LEARNS (http://www-tcall.tamu.edu/texasLearns/); and
i. Workplace ESL Solutions (www.WorkplaceESL.com), located in Utah.

2.3.3.2 Current Handbooks

As discussed previously, since the 1990s researchers have published numerous handbooks on workplace literacy programs. In the 2000s more handbooks were published, this time focusing on how to plan, implement, and evaluate workplace literacy and English as a Second Language (ESL) programs. Again, just as in the 1990s, most of these were funded by the government. Most of the handbooks were created so that program administrators, employers, and instructors can better understand how to create and sustain effective workplace literacy programs. All of them draw on previous research and articles published in the 1990s.
Although all of these handbooks are easily accessible on the web, it is difficult to find a resource that lists all available handbooks. The list provided in Table 5 is not exhaustive; however, it is a good starting point, as it provides a list of six recent handbooks along with pertinent information and a direct link to each of the resources.

Table 5
Workplace Literacy Handbooks in the 2000s

1. Learning at Work. Author: Gardner. Year: 2000. Funding source: Center for Literacy Studies & The University of Tennessee. Online access: http://aeonline.coe.utk.edu/pdf/learnatwrk.pdf
2. An Introduction to ESL in the Workplace. Authors: Crocker, Sherman, Dlott, & Tibbetts. Year: 2002. Funding source: US Dept. of Ed., Adult Ed. & Literacy. Online access: http://www.pro-net2000.org/CM/content_files/89.pdf
3. Tennessee ESL in the Workplace: A Training Manual for ESOL Supervisors and Instructors. Authors: Sawyer & Tondre. Year: 2003. Funding source: Tennessee Dept. of Labor and Workforce Development, Office of Adult Ed., & The University of Tennessee, Center for Literacy Studies. Online access: http://www.cls.utk.edu/pdf/esol_workplace/Tenn_ESOL_in_the_Workplace.pdf
4. Exploring Assessment in Flexible Delivery of Vocational Education and Training Programs. Authors: Hyde, Clayton, & Booth. Year: 2004. Funding source: National Centre for Vocational Education Research (NCVER) & Australian National Training Authority (ANTA). Online access: http://www.ncver.edu.au/research/proj/nr0007.pdf
5. OHIO Workplace Education Resource Guide. Author: ABLE. Year: 2005. Funding source: Adult Basic and Literacy Education (ABLE) & Ohio Dept. of Ed. Online access: https://www.owens.edu/workforce_cs/WorkplaceEducationGuide.pdf
6. Charting A Course: Responding to the Industry-Related Instructional Needs of the Limited English Proficient. Author: Tondre-El Zorkani. Year: 2006. Funding source: Texas Adult Ed. and Family Literacy Partnership. Online access: http://www-tcall.tamu.edu

2.4 Program Assessment and Evaluation

In this section I first define the terms assessment and evaluation. Next, I define and explore the use of formative and summative evaluations in workplace English programs. In addition, I discuss the types of standardized and informal assessments used in workplace English programs. Lastly, I present common issues with assessments related to workplace English programs.

In this thesis the terms assessment and evaluation will be based on Lynch's (2003) definitions. He defines language assessment as "the range of procedures used to investigate aspects of individual language learning and ability, including the measurement of proficiency, diagnosis of needs, determination of achievement in relation to syllabus objectives and analysis of ability to perform specific tasks" (p. 1). On the other hand, program evaluation is "the systematic inquiry into instructional sequences for the purpose of making decisions or providing opportunity for reflection and action" (p. 1). Lynch explains that the areas of evaluation and assessment overlap, as evaluation often uses the data from language assessment to arrive at its conclusions.

2.4.1 Formative and Summative Evaluations

Many researchers have noted that it is difficult to find workplace English programs that have been extensively evaluated, or ones that have gathered convincing data (Mikulecky & Lloyd, 1996; Sticht, 1999). Mikulecky and Lloyd (1996) observed that most of the programs that reported evaluation data often provided simplistic information based on learners' satisfaction surveys and anecdotal reports on effectiveness. After surveying the state of workplace literacy programs in the 1990s, Mikulecky, Lloyd, Horwitz, et al.
(1996) found that few programs reported quantifiable data demonstrating what learners had gained from attending the program. Furthermore, few models from the NWLP era reported both formative and summative evaluations.

Formative evaluations involve an evaluation process that is generally used during program operation to identify program areas that can be addressed, whereas summative evaluations take place at the end of the program and are designed to assess overall program success (Burkhart, 1996). Lynch (2003) relates summative and formative evaluations and assessments to their purpose. A summative evaluation, for example, can have an administrative purpose, such as making decisions about placing individuals in a language program and organizing and developing the program. Another purpose of a summative evaluation is to assess the total effectiveness of the program. It evaluates three perspectives: achievement of program goals, learner gains, and impact on company productivity. In addition, summative evaluations can establish accountability and can provide evidence of program effectiveness for possible continued funding (Mikulecky, Lloyd, Kirkley, et al., 1996). On the other hand, a formative evaluation can relate to instructional purposes, including making decisions about what individuals have achieved and what they still need to learn, as well as evaluating how well components of the language program are working (Lynch, 2003). A formative evaluation checks the progress of a program while it is still under way, so that changes can be made. It addresses four areas: goals, resources, instructional processes, and impact (Mikulecky, Lloyd, Kirkley, et al., 1996) (for more information on formative and summative evaluations see Lynch, 2003; Mikulecky & Lloyd, 1996; Mikulecky, Lloyd, Horwitz, et al., 1996).

The consensus among researchers and practitioners in workplace English seems to be that evaluation involves both summative and formative evaluations, including quantitative as well as qualitative measures. Some of the assessment tools used in workplace literacy programs are: focus groups and stakeholder interviews, observations, participant and supervisor interviews, questionnaires, job-performance measures, commercially available tests, and classroom assessments such as student portfolios, journals, checklists, samples of class work, and self-assessments (Burt & Saccomano, 1995).

As mentioned earlier, most programs in the NWLP era did not provide formative and summative evaluations in their final reports. The REEP Project (mentioned in Table 3) was an exception, however. REEP provided an evaluation framework created by the internal staff and the project advisory committee (Mansoor, 1993). The stakeholders had five informal meetings focused on the formative evaluation process, which took place before, during, and after the course. The framework considered the following questions: "Who needs to know what? When do they need to know it? What data collection instruments are needed?" (Mansoor, 1993, p. 43). The framework indicated that all stakeholders (the learners, teachers, employers, administrators, and the federal government) need to be involved in determining program goals.

Similarly, in her mid-1990s survey of 18 workplace literacy programs, Burt (1997) found that the stakeholders' involvement was important not only in planning and implementation, but also in formative and summative evaluations.
For example, Burt interviewed program directors, curriculum writers, teacher trainers, and teachers about program goals, stakeholder involvement, critical points of instruction, curricula, and program accomplishments and failures. During the interviews, the project directors stressed the issue of the stakeholders' involvement in program planning, assessing participants' needs, and, most importantly, encouraging the participants to attend.

Around the same time, Moore et al. (1998) examined the impact of the National Workplace Literacy Program through a final national evaluation. The purpose of the study was to describe the implementation and institutionalization of workplace literacy programs as well as to assess the effects that workplace literacy instruction has on participating workers. They focused on five programs in particular and found that across sites, employers relied on staff to assess how useful their programs were. The main types of information used to assess the programs' effectiveness were a mixture of formative and summative evaluations, including workers' performance on instructor-developed tests, attendance data, and anecdotal information. However, Moore et al. (1998) reported that standardized tests were infrequently used for assessing how well the learners mastered the skills they were taught. This issue will be explored in more detail towards the end of this chapter.

2.4.2 Types of Assessment Tools

To better understand how success can be effectively measured in workplace English programs, it is necessary to explore a variety of assessment tools, both standardized and informal (also known as mainstream and alternative assessments) (Lynch, 2003). Most state and federal evaluation and accountability procedures require that performance levels be expressed in an objective, quantifiable, and measurable form (Sticht, 1999), which is the purpose of a standardized test. Sticht defines a standardized test as "a test that is administered under standard conditions to obtain a sample of learner behavior that can be used to make inferences about the learner's ability" (p. 59). These tests can be particularly useful for making comparisons in an individual's ability pre- and post-instruction (summative) and also among programs.

A number of standardized assessment tools are available for adult ESOL programs. The following list is a sampling of standardized assessment tools that programs across the United States employ: Comprehensive Adult Student Assessment System (CASAS), Basic English Skills Test (BEST), BEST Plus, the Test of Adult Basic Education (TABE), Adult Language Assessment Scales (A-LAS), New York State Placement Test for English as a Second Language Adult Students (NYS Place), Adult Basic Learning Examination (ABLE), English as a Second Language Oral Assessment (ESLOA), and the Test of Applied Literacy Skills (TALS) derived from NALS (for detailed descriptions see Sticht, 1999; Van Duzer & Berdan, 1999).

On the other hand, the term informal assessments, as defined by Grognet (1997), is interchangeable with the term alternative assessments and indicates the following:

- Any method, other than a standardized test, of determining what a student knows or can do;
- Activities that reflect tasks typical of classroom instruction and real-life settings, and that represent actual progress toward curricular goals and objectives;
- Activities that are monitored and recorded in some way, either by teacher observation, peer observation, or student self-assessment (p. 21).
Performance-based assessment is a type of informal (or alternative) assessment and is characterized by activities that are specifically designed to assess performance on one or more instructional tasks and/or those in which students demonstrate specific skills and competencies (Grognet, 1997; Pierce & O'Malley, 1992).

2.4.2.1 Standardized Assessment Tools

There are a number of commercially available standardized assessment tools as well as research-based methods and templates for informal assessments. Because the number of evaluation and assessment tools available is extensive, the following list will 1) mention only the most commonly used measurement tools in workplace English programs, 2) briefly describe their purposes, and 3) explain some of their advantages and disadvantages.

2.4.2.1.1 Comprehensive Adult Student Assessment System (CASAS). https://www.casas.org. CASAS was originally created for the State of California, but is now one of the most commonly used assessments in adult education. It is competency-based and offers a number of types of tests with different levels (A, B, C, D) and forms for pre- and post-testing (63/64, 65/66, etc.) that can be used to assess Adult Basic Education (ABE), ESL, and employability. The tests include ESL Appraisal, Life Skills, Life & Work, Employability Competency System (ECS), and Workplace Speaking. However, CASAS has only published a reading test for Life & Work and is currently field-testing the listening test. Life Skills and Life & Work are administered in groups and generally take 1.5 to 2 hours to conduct both listening and reading. To complement Life Skills and Life & Work, which only assess receptive skills, Workplace Speaking was developed to measure interactive skills. It is administered individually and measures the oral skills of learners at a low-intermediate level and above. This test is difficult to administer in larger classes, as each interview can take 15 to 20 minutes. Furthermore, the test administrators must be thoroughly trained to properly administer CASAS. Many CASAS tests are available in a paper-and-pencil format as well as a computer-based form. Training from CASAS is necessary in order to purchase test materials (CASAS, 2007; Sticht, 1999).

2.4.2.1.2 Basic English Skills Test (BEST). http://www.cal.org. The BEST was originally funded by the Office of Refugee Resettlement (ORR) and was developed and field-tested by the Center for Applied Linguistics (CAL). It is a low-level proficiency test that focuses on survival and pre-employment language skills. It consists of two parts: an oral interview and literacy skills. CAL has made available a short-form oral interview, which takes ten to fifteen minutes to administer to each learner. There is one level of the test and four forms (A, B, C, D) (Sticht, 1999; Van Duzer & Berdan, 1999). BEST was one of the most commonly used tests in adult education during the NWLP era; however, it is being replaced with BEST Plus.

2.4.2.1.3 BEST Plus. www.best-plus.net. The BEST Plus (Oral English Proficiency Test) was created by CAL to complement the BEST Literacy test. As of October 1, 2006, BEST Plus took the place of the BEST Oral Interview Section and can be used with BEST Literacy, which is the Literacy Skills Section of BEST. BEST Plus is a functional oral language assessment of the interpersonal communicative skills of adult English language learners. It is available in two formats, computer-adaptive and print-based semi-adaptive. The scoring rubric measures listening comprehension, language complexity, and communication. It has similar content to BEST Literacy. In contrast to the print-based format, the computer-adaptive format of BEST Plus scores the test immediately, is more reliable, and provides higher test security, as the software customizes the test so that learners do not take the same test twice. It takes five to twenty minutes to administer.
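To make concrete the pre-/post-instruction comparison these standardized instruments support, the following minimal sketch computes learning gains for a small group of learners and runs the paired-samples t-test used later in this thesis (Table 12). All score values are invented for illustration only; a real analysis would use the publisher's actual scale scores and a sample appropriate to the design.

```python
from statistics import mean
from scipy import stats

# Hypothetical pre- and post-test scale scores for eight learners;
# the values are invented for illustration only.
pre_scores  = [210, 215, 208, 221, 217, 212, 219, 214]
post_scores = [216, 219, 211, 228, 220, 218, 224, 217]

# Gain per learner: the difference between matched post- and pre-test scores.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean gain: {mean(gains):.1f} scale points")

# Paired-samples t-test on the same learners' pre/post scores,
# i.e., a within-subjects design with two levels.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A paired test is the natural choice here because each learner serves as their own control: the analysis asks whether the same individuals scored higher after instruction, rather than comparing two unrelated groups.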
2.4.2.2 Qualitative Informal Assessments

Program administrators and practitioners have used various informal assessments to evaluate workplace English programs. The following two sections provide a non-exhaustive list and brief descriptions of commonly used qualitative and quantitative informal assessments. Practitioners often use the five qualitative informal assessments described below to evaluate the learners' reaction to the training and to measure learning gains achieved in the classroom.

2.4.2.2.1 Interviews. Interviewing representatives from each group of stakeholders (managers, front-line supervisors, employee representatives, learners, and instructors) is an important factor in both formative and summative evaluations. Interviews are used to 1) determine course goals and objectives, 2) assess ongoing needs and impact, and 3) evaluate the total effectiveness of the program. The stakeholders' responses can be compared for any potential disagreements, for example, conflicting goals. Also, it is important to ask open-ended questions and to ask for more examples or reasons when the interviewee gives a brief response. Both the participants and their direct supervisors should be interviewed before instruction to solicit information on their goals, at mid-point to assess impact, and after instruction as an exit interview. The purpose of such interviews is to obtain information about learner perceptions rather than to assess their skills. An advantage of interviewing stakeholders is that the qualitative information collected can be quantified so that improvement in subjective areas can be demonstrated to funders and managers in a numeric form (Burt & Saccomano, 1995; Mikulecky, Lloyd, Kirkley, et al., 1996).

2.4.2.2.2 Journals. Journals (also known as learner-generated learning logs or learning journals) are a type of program-developed assessment that is learner-created and instructor-monitored. A typical journal consists of a notebook with page headings such as "Things I Learned This Week," "Things I Find Easy in English," "Things I Find Hard in English," "Things I Would Like to Be Able to Do in My Work in English," and "Things I Have Learned in ESL Training That I Have Practiced on My Job." Weekly entries on one or more pages, combined with a periodic review from the instructor, can help the learners see their progress, as well as help the instructor individualize instruction (Crocker, Sherman, Dlott, & Tibbetts, 2002).

2.4.2.2.3 Informal classroom assessments. Performance-based assessments can be created by the instructor using authentic materials to measure specific workplace tasks taught in the classroom. Examples of performance-based tests are completing job schedules and pay stubs, and responding to scenarios. For goals that relate to long-term and more general literacy gains, Mikulecky and Lloyd (1994) suggest measuring changes in lifestyles through interviews or questionnaires with questions on learner practices, beliefs, and plans.
If the goal for the learner is to increase the ability to read workplace materials, a prose scenario or cloze exercise will most likely be appropriate (Burt & Saccomano, 1995; see Mikulecky & Lloyd, 1994, for examples of classroom assessments).

2.4.2.2.4 Portfolios. Portfolios can mean different things to different people. They can be showcase portfolios that are used to display the learner's best work, collections portfolios that show daily class assignments, or assessment portfolios that are focused reflections of specific learning goals and contain systematic collections of student work, student self-assessment, and instructor assessment (O'Malley & Pierce, 1996). In general, portfolios are individual learning folders which include samples of written work, checklists, pre- and posttest scores, self-analysis, and program-developed assessment instruments such as journals where learners record their reactions. Portfolios can serve as a self-assessment and can provide vital information about learners' attitudes and motivations. They also allow participants who have difficulty with standardized tests to demonstrate their progress (Burt & Saccomano, 1995; Crocker, et al., 2002; Pierce & O'Malley, 1992; for an example of a student self-evaluation see Mansoor, 1993, p. 86).

2.4.2.2.5 Observations. Observations include job-shadowing to gain an understanding of workplace tasks, attending workplace-related activities such as team meetings and other functions, as well as visiting the classroom to observe what is being taught. When combined with interview responses, classroom observations can help determine whether or not the program's goals match what is being taught in class. The observer should take notes, describe the activities, and record the time of each activity (Burt & Saccomano, 1995; for a template see Mikulecky & Lloyd, 1994, p. 49).

2.4.2.3 Quantitative Informal Assessments

Program administrators as well as practitioners often use the following three quantitative informal assessments to evaluate goal attainment, the learners' behavioral changes in the workplace, and the impact the training has on the company.

2.4.2.3.1 Checklists. The practitioner can create different types of checklists, such as aural/oral, reading, or writing, based on the objectives of the course. The objectives for the course, or even for specific lessons, can be used to form the basis of the checklist (Crocker, et al., 2002). The checklists are used to reflect the level of competence each learner is demonstrating. The instructor rates learners on their ability to perform tasks such as the following: follows simple directions, reads the alphabet in English, begins short journal entries, discusses feelings about work with some elaboration, draws conclusions from reading. The instructor is then able to note strengths and weaknesses as well as rate the learners (for examples see Burt, 1994b, p. 26; Grognet, 1997, pp. 27-29; Mansoor, 1993, p. 89; Mikulecky & Lloyd, 1994, pp. 45-47).

2.4.2.3.2 Employee Job Performance Ratings. Employee Job Performance Ratings are created so that supervisors or team leaders of employees with similar jobs can rate their job performance (i.e., completing paperwork, working in teams).
Typically, supervisors and key employees assist in developing the ratings on various aspects of job performance, including job-related behaviors such as reading measurements and setting up equipment, and communication indicators related to literacy and basic skills such as offering suggestions at team meetings. To assess the impact of training on the employees, the ratings should be created with anchored scales and used as pre- and post-indicators. A more accurate analysis will involve a larger number of rated employees; for that reason, it is best to cover a broad range of behaviors that encompass several jobs (Mikulecky, Lloyd, Kirkley, et al., 1996; for examples see Manager's Informational Packet, 1998, pp. 27-28; Mansoor, 1993, p. 87; Mikulecky & Lloyd, 1994, pp. 52-53; for procedures on developing supervisor ratings see Mikulecky, Lloyd, Kirkley, et al., 1996, p. 60).

2.4.2.3.3 Company Records. Collecting and analyzing company records will help assess the program's impact on productivity. Records from which information can be gathered on individuals, such as customer complaints, attendance, safety, error or scrap rates, applications for promotions, enrollment in later courses, the number of suggestions made in team meetings, the number of requests to be cross-trained, and the learners' willingness to indicate lack of comprehension, will show not only the learners' improvement but also an improvement on the bottom line. Mikulecky, Lloyd, Kirkley, et al. (1996) suggest collecting data on the participants for a period before instruction and after, and, if possible, comparing the data to that of a control group. By showing an impact on the bottom line, workplace education programs are more likely to receive continued funding (for more information on return on investment, see Martin & Lomperis, 2002; for more information on employee productivity indicators, see Mikulecky & Lloyd, 1994; Mikulecky, Lloyd, Kirkley, et al., 1996).

2.5 Benefits of Evaluation for Stakeholders

Many programs create an evaluation framework that specifies which stakeholders need to know about which evaluations, which instruments will be used to assess which factors, and when the assessments will take place. For example, the REEP project (Mansoor, 1993, p. 44) developed a framework that specified which evaluation methods the learners, the teachers, the employers, and other stakeholders (i.e., administrators, the external evaluator, the public) needed to know about. The learners needed to be aware of program/learner goals, their proficiency levels, progress, and skill achievement. They would be evaluated using outreach materials, pretest data, progress reports, and self-evaluations. Each assessment tool was to be given at a certain time in relation to instruction (before, during, or after). The teachers, employers, and other stakeholders needed to be aware of such things as employer goals, impact on the employees and the workplace, and learner progress (Mansoor, 1993). These goals are assessed using the formal and informal assessments described above. Typically, the learners and instructors are interested in formative, ongoing assessments that measure learner progress and achievement. The employers, administrators, funders, and other stakeholders are often more interested in summative evaluations that produce statistical data evaluating the impact of the program on the employees and the workplace.
In the same way, the formative and summative assessments discussed above can be more clearly discussed in terms of Kirkpatrick's four-level framework. Table 6 illustrates the four levels applied to workplace education program evaluation and provides a brief summary of the strengths and weaknesses of each level. Table 6 is adapted from The Bottom Line: Basic Skills in the Workplace (1988, as cited in Burkhart, 1996).

Table 6
Examples of the Four Levels of Evaluation

Level 1: Student Reaction
  Purpose: Measure participant/student feelings about a program/course, including the instructors, facilities, and program design.
  Strengths: Easy to administer; provides immediate feedback on the program.
  Weaknesses: Subjective; provides no measurement of learning, transfer of skills, or benefit to the organization.
  Examples: Completed student feedback questionnaires or "happiness" reports; informal student/instructor interviews; focus group discussions; informal comments from participants.

Level 2: Student Learning
  Purpose: Measure the amount of learning that has occurred in a program/course.
  Strengths: Provides objective data on the effectiveness of training; data can be collected before students leave the training program.
  Weaknesses: Requires skill in test construction; provides no measurement of transfer of skills or benefit to the organization.
  Examples: Written pre/post tests; standardized pre/post-test scores; on-the-job assessments; supervisor reports; skills laboratories; role plays; simulations; projects or presentations; oral examinations.

Level 3: Student Performance/Behavior
  Purpose: Measure the transfer of training to the job situation.
  Strengths: Provides objective data on impact to the job.
  Weaknesses: Requires task analysis skills to construct and is time-consuming to administer; can be a sensitive issue.
  Examples: Performance checklists; performance appraisals; critical incident analysis; self-appraisal; on-the-job observation; reports from customers, peers, and the participant's manager.

Level 4: Organization Results
  Purpose: Measure the impact of training on the organization.
  Strengths: Provides objective data for cost/benefit analysis and organizational support.
  Weaknesses: Requires a high level of evaluation design skills; requires collection of data over a period of time; requires knowledge of organization needs and goals.
  Examples: Employee suggestions; manufacturing indexes (cost, scrap, schedule compliance, quality, equipment downtime); interviews with the sales manager; financial reports; union grievances; absenteeism rates; accident rates; customer complaints.

2.6 Issues in Evaluation for Workplace Literacy Programs

Many researchers and practitioners in adult education and workplace education report difficulties finding an appropriate measure or balancing assessment types for workplace literacy programs (Moore, et al., 1998). Some of the issues that researchers and practitioners have expressed are that standardized assessment tools do not match course objectives and materials, that time standards are being abused during administration, that negative gains are reported instead of learning gains, and that there is insufficient time to develop program-specific tools. These issues are described in more detail in the following paragraphs.
It is not a new concept that assessments should be linked to the learning objectives of the class as well as be relevant to both teaching and jobs (Mikulecky, Lloyd, Kirkley, et al., 1996); however, workplace English service providers often customize the curriculum for specific industries and therefore often have a difficult time finding standardized tools that are relevant. To complicate matters further, many grantors only permit the use of certain standardized tests, which narrows the selection. For example, in the State of Michigan, adult education programs are currently only permitted to report data using CASAS, TABE, or WorkKeys. Standardized tests can be used as part of a summative evaluation. However, the assessments are most effective when the curriculum matches the types of materials and skills being taught (Mikulecky, Lloyd, Kirkley, et al., 1996). It is possible that when the curriculum aligns with the test, these types of standardized tests can be appropriate. Yet some researchers argue that because standardized tests do not strongly represent "the literacy requirements of actual jobs," they should not be considered appropriate in workplace English programs (Sticht, 1999). It has been suggested that while developing the curriculum, practitioners should also develop pre- and post-assessments that reflect the objectives of the curriculum. However, many issues around reliability, validity, and development time arise when developing custom-designed measures.

Another issue around standardized tests is what Sticht (1999) calls ignoring time standards. Many workplace English courses are ten to twelve weeks long, providing 40 to 50 hours of instruction, and are too short for both pre- and post-testing. For example, if a company commits to 40 hours of instruction over 10 weeks, a CASAS pretest could be given and used as a placement tool. However, the CASAS guidelines state that the posttest should be given after 90 to 100 hours of instruction in order to correctly assess the language gains that were made (CASAS, 2007). Mikulecky (2000) suggests that even though the number of contact hours is minimal, the instructors should encourage the learners to practice a couple of hours outside of class. Therefore, he argues that the problem is not the use of standardized tests; rather, the problem is that standardized tests do not measure everything that a program is doing. To compound the problem, in some programs only standardized tests are being used, and other program goals, such as changed goals and behaviors, improved literacy strategies, improved productivity, and increased confidence and willingness to take further training, are not being assessed (Mikulecky, 2000).

Other programs have complained about the negative gain phenomenon. Sticht (1999) reported that "it is not unusual to find that 10 to 20 percent of learners score poorer on the posttest than they do on the pretest" (p. 67).
Sticht gave two possible reasons and one possible solution for this problem: 1) learners spend more time on the posttest than they do on the pretest because they have a new desire to perform accurately rather than guess, which may result in an incomplete posttest and thus negative gain scores; 2) negative gain scores reflect guessing or other regression effects, which can be reduced by offering alternatives to multiple-choice tests; and 3) programs with a higher potential for negative gains (those using any multiple-choice tests) should report frequency distributions showing the various amounts of negative and zero gain occurring in the program. Sticht (1999) argues that "simply showing average pre-and posttest scores that include the zero and negative gain scores obscures this valuable information and produces inaccurate indications of lower improvement in the program than actually occurs" (p. 67).

The last issue addressed in this literature review regarding assessments in adult and workplace education is that standardized tests may not adequately address learners' strengths and weaknesses, especially at the lowest levels of literacy (Burt & Keenan, 1995). It is not always clear why the ESL learners are having trouble with the test. In addition, some low-literate ESL learners do not have the functional skills necessary for reading test questions and are not familiar with classroom behaviors such as test-taking. Burt and Keenan (1995) argue that the tests themselves are not the problem, but rather their inappropriate use, for example, administering a language proficiency test to measure achievement. This inappropriate use is sometimes encouraged by funding stipulations, as mentioned previously. This is one of the reasons Mikulecky, Lloyd, Kirkley, et al. (1996) argue that "items such as relevant pre- and post-test results, meeting minutes, class materials, samples of classroom work, and other relevant materials should be gathered as evidence that goals were achieved" (p. 56). In sum, it is clear that measuring success in workplace education programs has been a challenge for many programs, and practitioners have created numerous tools to assess and evaluate their programs. It is unclear, however, which tools most efficiently and effectively measure success.

2.8 Gaps in the Literature

As illustrated previously, researchers and practitioners alike consider both formative and summative program evaluation to be vital for workplace English program sustainability. Researchers have suggested a multitude of informal assessments and encourage practitioners to use a variety of them, such as learning journals, initial and exit interviews, checklists, and portfolios, in workplace English programs. However, the uniqueness of each workplace English course (i.e., different settings, goals, materials, and learner populations) presents a challenge for program administrators and standardized testing. For this reason, Moore, et al. (1998) found that standardized testing was inadequate for many of the NWLP programs. Furthermore, qualitative assessments require a commitment on the part of the participants (Mackey & Gass, 2005), not to mention the time and effort the program administrators must take to analyze and report the data.
Moreover, if the course is partially or fully funded by a grant, the grantors often require quantitative data measured by state-approved standardized tests, which takes additional time away from instruction and, given the short duration of workplace English courses, does not always yield reliable results (Sticht, 1999). For these reasons, program administrators are faced with the challenge of effectively and accurately demonstrating program effectiveness given that the results from standardized assessments are not always reliable. Unfortunately, no current studies provide a solution to this challenge.

CHAPTER III: METHODS AND PROCEDURES

3.1 Introduction

This thesis is a multiple-case study which investigates the effects of workplace language training programs provided by a non-profit literacy organization in a healthcare and a manufacturing company. As discussed in Chapter II, according to D. L. Kirkpatrick and J. D. Kirkpatrick (2006), measuring training effectiveness means assessing at four levels (reaction, learning, behavior, and results) and involving multiple stakeholders, such as the company management, the language learners, and the service provider, who may have differing viewpoints. To measure the effectiveness of the training from these different perspectives, a variety of instruments was used, and the data were analyzed both quantitatively and qualitatively. Furthermore, much of the literature discussed in Chapter II forms the basis for this study in terms of its theoretical framework and research design. The purpose of this study is multifaceted, and since each workplace language training setting is unique, this study is also exploratory in nature. This study seeks to explore the use of different qualitative and quantitative assessments to evaluate reaction to the training, learning gains, and the changes in behavior in the workplace as a result of the workplace language training program.

3.2 Definition of Key Concepts

The following definitions of key concepts are an introduction to the terminology used in this thesis. First, the term workplace literacy program can be used very specifically to mean a reading program for native and/or non-native English-speaking employees. However, it is also used among researchers in a broad sense to cover workplace English as a Second Language (ESL), reading, GED, and math programs. In this thesis, the term workplace literacy program will include ESL and will be used interchangeably with workplace language training and workplace education program. Second, related to training, the term effective will be used in this thesis synonymously with the term success as it relates to the stakeholders' perspective. For example, from the Human Resources perspective, an effective training is a successful training. Similarly, measuring success is measuring a program's effectiveness. More specifically, a successful or effective training can be defined by positive results in one or all levels (reaction, learning, behavior, results) of Kirkpatrick's (2006) four-level model, depending on the outcomes established for the training. Additionally, the reaction level of this framework is equivalent to measuring customer satisfaction.

3.3 Research Questions

The study and the research questions are based on Kirkpatrick's training evaluation framework discussed in Chapter II, which, in brief, encourages evaluating on multiple levels and viewing the training from multiple perspectives.
For this reason, the research is intended to assess the viewpoints of different stakeholders using a variety of quantitative and qualitative assessment tools. Hence, the research is guided by three questions:

1. How do supervisors' reactions to the workplace English training compare to language learners' reactions as measured by informal interviews and learning journals?

2. To what extent do workplace language learners demonstrate learning as measured by:
   a. Standardized assessments?
   b. Learning journals?

3. How does learners' behavior change as a result of workplace language training as measured by learner and supervisor questionnaires?

3.4 Setting

This study was conducted in the workplaces of two major employers in Western Michigan. To protect the companies' identities, they have been assigned pseudonyms. The two companies are an automobile manufacturing company, given the pseudonym Alpha Manufacturing, and a healthcare organization, Beta Health (North and South Campuses). Alpha Manufacturing offered the English language training to its employees as a benefit, and attendance was voluntary, though most supervisors recommended which of their limited-English-proficient employees should attend. Alpha Manufacturing paid the employees to attend the training, even those who attended classes off shift. Classes at Alpha Manufacturing were held on-site at the company in a fully-equipped training room and met twice a week for one-and-a-half-hour sessions. The instruction occurred over ten weeks, from January until March 2007, for a total of 30 hours of instruction. The materials used in this class were customized materials and an off-the-shelf text, English ASAP by Steck-Vaughn Company. English ASAP emphasizes teaching the Secretary's Commission on Achieving Necessary Skills (SCANS) competencies in the context of workplace scenarios and includes topics such as communication, technology, time management, customer service, the culture of work, health and safety, and working with people.

At Beta Health North, language training was held in a small departmental conference room, and at the South Campus, in a large training room. The classes met three times a week for one-hour sessions. The employees were paid to attend 30 minutes of the training and used their lunch break for the other 30 minutes. Like Alpha Manufacturing, the treatment occurred over ten weeks, from January to March 2007. Instructional materials used in these classes included Ready to Go, an off-the-shelf textbook by Joan Saslow that focuses on language for communicating in the community, at home, and in the workplace. Ready to Go correlates to the following national standards: SCANS Competencies and Foundation Skills, CASAS Life Skills Competencies, Equipped for the Future (EFF) Content Standards, and the National Reporting System (NRS) ESL Levels.

Both companies in this study contracted with a non-profit organization of which I am the workplace English program director. To protect its identity, the organization has been assigned the pseudonym Literacy Organization. Literacy Organization was established in 1988, providing one-on-one literacy tutoring for native and non-native English speakers. Since 2001, Literacy Organization has provided on-site, fee-based workplace language training for companies in various industries throughout western Michigan. In its workplace language training program, Literacy Organization served nearly 40 companies between 2001 and 2007.
After contracting with a company, Literacy Organization's program staff conducts a needs analysis involving the company management, supervisors, and employees in order to customize the program to the company's specific needs. Typically, the classes are partially customized and partially taught from off-the-shelf books. The customized lessons are developed by material developers/trainers who have an MA in TESOL and are trained and guided by the program coordinator. When the company elects to have customized training, Literacy Organization's program staff conducts a thorough analysis involving job shadowing, videotaping, audio recordings, and digital photography. The goal of the analysis is to gather information that can be used to simulate real-life situations within the classroom; these situations are then used to accomplish the course objectives established by the advisory team (company management, the language learners, and Literacy Organization). To teach the classes, Literacy Organization employs full-time and part-time instructors and also hires independent contractors to teach specific courses. All trainers have an MA in TESOL or a related field, are working towards their MA, or have a teaching certificate with an ESL endorsement. Literacy Organization evaluates for effectiveness throughout the trainings using standardized and informal assessments, both quantitative and qualitative. At the end of each training course, the program staff meets with the company management to review the effectiveness of the training and provide a formal report.

Funding for the training programs in this study came from a combination of fees paid by the employers and grants awarded to Literacy Organization to help offset the cost of instruction. The two grants were 1) WIA Incumbent Worker funds awarded by a local MI Works! Agency's Workforce Development Board, and 2) EL/Civics grant funding awarded by the State of Michigan's Office of Adult Education in the Department of Labor and Economic Growth (DLEG).

Before instruction began at Alpha Manufacturing and Beta Health, the non-native English-speaking employees interested in taking English classes were given a standardized appraisal that measured their speaking, listening, reading, and writing abilities. Those selected to attend the classes and participate in this study placed at no lower than an intermediate English proficiency level in listening and reading as measured by the Comprehensive Adult Student Assessment System (CASAS). Those lower than an intermediate proficiency level were placed into a lower-level class or held until a lower-level class began; hence, they were excluded from this study. Before proceeding with the study, permission to conduct the study was obtained from Literacy Organization (which provided the language training), the employers, and the language learners. All who participated in this study volunteered.

3.5 Subjects

The subjects for this study include 36 employees, 12 company representatives, and 4 service provider program staff. In this section, background information is provided for each of these groups.

3.5.1 Learners

The learner participants for this qualitative study come from three groups (n = 36) of non-native English-speaking employees who participated in a ten-week workplace English language training at their companies in Grand Rapids, Michigan.
Of the 36 employees, 10 worked in a variety of positions at a manufacturing company, Alpha Manufacturing, and 26 worked in Nutritional and Environmental Services at a healthcare organization, Beta Health (North and South Campuses). The typical learner in this study had worked for the current employer about six years and had been in the United States between 10 and 15 years. The learners' ages ranged from the low 20s to over 60, with the average age being 43. Among this group were 25 females and 11 males from 16 countries, including Sudan, the Dominican Republic, Vietnam, Cuba, and Ethiopia. For the participants' personal data, see Table 7 in Appendix B.

Although 36 learners participated in the training, only 21 completed learning journals as well as a pre- and posttest of either the listening or the reading test, or both. This is because all 36 participants were encouraged but not required to complete a journal, and some were absent when pre- or post-testing was conducted. As for the data analysis, all 36 were still included in parts of this study, for it was designed to be an action research study. That is, the study occurred in a realistic setting in which the service provider must report to the company on all the participants, not excluding those who were absent on testing days. Therefore, results from all 36 participants are included in research questions 1 and 3, which use qualitative assessments. That is, interviews were conducted with and evaluations were collected from those who participated in the training (n = 36) yet possibly did not complete a learning journal or take the pre- and posttests.

3.5.2 Company Representatives

Twelve company representatives participated in the present study. They held various positions within the companies. The following company representatives participated in the study: two training facilitators and two supervisors from Alpha Manufacturing; one human resources officer overseeing both Beta Health North and South; one manager and three front-line supervisors from Beta Health North; and two managers and one front-line supervisor from Beta Health South. The company representatives participated in midpoint and final group interviews and were asked to complete evaluations at the end of the 10-week training. Although all of the front-line supervisors who oversaw employees in the training were invited to participate, not everyone attended the meetings or completed questionnaires. Of the 12 company representatives, eight females and four males participated. As the study was conducted in a natural setting, background questionnaires were not obtained from the company management. In addition to the interviews and questionnaires, before instruction began the company representatives participated in a needs assessment with Literacy Organization to explain the language requirements of the employees' positions and the language and cultural barriers they had encountered with these employees, as well as to determine objectives for the training.

3.5.3 Service Provider

As mentioned before, Literacy Organization is a non-profit organization that serves clientele throughout West Michigan. Before the study began, permission to conduct the study using Literacy Organization's clients was granted by the executive director.
Other staff members involved in this study were a) the program coordinator, who administered the standardized tests in each program and observed the trainings; b) a full-time instructor, who taught both healthcare classes; and c) a part-time instructor, who taught the manufacturing class. As the director of the program, I was responsible for hiring and supervising these employees. The instructors had been employed by Literacy Organization for less than a year when the study was conducted. When they were hired, both instructors had between one and five years of teaching experience, with some experience teaching workplace English in national and international settings. Both had an MA degree in Teaching English to Speakers of Other Languages (TESOL) along with volunteer and/or translation experience in the manufacturing or healthcare industries. Both instructors were fluent in at least one language other than English, with the part-time instructor being a native speaker of Portuguese. Prior to the study, the instructors were thoroughly trained by the program director and coordinator in conducting a needs analysis, customizing lessons, administering standardized tests (CASAS), and following procedures specific to workplace language training. They were also trained to use a variety of language teaching methodologies and approaches, such as a structured communicative approach, task-based and situational approaches, and aspects of an intensive language teaching methodology created by Professor John Rassias of Dartmouth College. The Rassias methodology that the instructors were trained in is based on the audiolingual approach and incorporates different types of drills (grammar and vocabulary) with activities such as role plays and dialogues. Their instruction focused on functional English and communicating (speaking, listening, reading, writing, and pronunciation) in the workplace.

3.6 Procedures

To reduce interviewer bias and to enhance dependability, methodological triangulation was used in this investigation (Mackey & Gass, 2005). That is, multiple qualitative and quantitative data types were used to look at the research questions from a variety of angles. In the following section, the data collection procedures are reported with a brief description (as the instruments have been discussed in detail previously) of each of the following: a standardized assessment, learning journals, interviews, and questionnaires.

3.6.1 Instruments and Data Collection

Standardized Assessment. The Comprehensive Adult Student Assessment System (CASAS) is a competency-based standardized assessment that can be used to assess Adult Basic Education (ABE), ESL, and employability. Although many researchers have discouraged the use of standardized tests in workplace literacy programs, CASAS was incorporated into this study because grantors in the State of Michigan often request results as measured by CASAS and Educational Functioning Levels (EFL). As our non-profit organization often receives grant funding to supplement the cost of instruction, we have made it a common practice to use standardized assessments with most companies. Thus, the purpose for administering the standardized assessment in this study was to measure the learning that occurred because of the training. The appraisal, listening, and reading tests were multiple-choice and entirely in English.
About four weeks before classes began, the CASAS ESL Appraisal was administered to all employees interested in attending the ESL training. This was done in order to place the learners appropriately in classes of similar English proficiency levels. As mentioned previously, those who were lower than an intermediate level were placed in a lower-level class not included in this study. Then, one week before the ten weeks of instruction began, the Life & Work reading test and the Employability Competency System (ECS) listening test were administered by the program coordinator as a pretest. To measure learning gains, the program coordinator, along with the instructors, post-tested the learners the week after instruction concluded. Both the pre- and post-listening and reading tests were administered to each class separately in two one-hour sessions. The same level but different forms were used for the pretest and the posttest, in accordance with the progress testing selection recommendations found in the CASAS Catalogue (2007) and previously described in Chapter II. For example, for the ECS listening test, if form 63B was given as the pretest, form 64B was administered as the posttest. After the tests were administered, the answer sheets were scored and entered into a Microsoft Excel spreadsheet, and gain scores for both the listening and reading tests were calculated. To measure learning gains, the pre- and posttests were analyzed using SPSS, looking at the descriptive statistics and the results of paired correlations and a paired samples t-test (an illustrative sketch of this computation appears below, after the description of the learning journals).

Learning Journals. Learning journals are a type of program-developed assessment that is learner-created and instructor-monitored. In this study, the goal of the journals was to promote written interaction in English and to elicit feedback from the participants on their perceived learning. The learning journals used in this study were divided into five sections, as adapted from Crocker, et al. (2002): Things I find easy in English, Things I find hard in English, Things I would like to be able to do in my work in English, Things I learned this week, and Examples of when I used the new things I learned. This journal format was chosen because it has been used and reported on in similar workplace training situations. During the first and second weeks of training, the researcher visited each class and explained the learning journal procedures, giving each participant journal guidelines (see Appendix C for the learning journal guidelines) and a five-subject spiral-bound notebook. The learners were asked to make a minimum of three entries each week (see Appendix D for a journal entry sample). The journals were to be completed outside of class in addition to other assignments given by the instructor. The learners were asked to complete the journals outside of class time because instructional time was limited to three hours a week, mostly paid for by the company. For this reason, and because the advisory teams did not choose the improvement of writing abilities as one of the main training objectives, the journals were given as supplementary work. However, because the learners were not required to write in the journals as part of the training, only 21 out of 36 learners elected to write in them, and the population that decided to write in the journals may have influenced the data. Thus, for triangulation purposes, multiple instruments such as interviews and questionnaires were used to collect data.
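To make the standardized-test analysis referred to above concrete, the following minimal sketch shows how gain scores, a paired correlation, and a paired samples t-test could be computed. It is an illustration only, not the study's actual SPSS procedure: the scores are hypothetical, and it assumes Python with SciPy installed.

```python
# Minimal sketch of the gain-score and paired-samples analysis described
# above. The scores below are hypothetical, for illustration only; the
# study's actual data appear in Appendix J and were analyzed in SPSS.
from scipy import stats

pre = [214, 220, 209, 225, 217, 212]   # hypothetical CASAS pretest scaled scores
post = [218, 221, 207, 231, 224, 215]  # hypothetical posttest scaled scores

# Gain score for each learner: posttest minus pretest.
gains = [b - a for a, b in zip(pre, post)]

# Paired correlation between pretest and posttest scores.
r, r_p = stats.pearsonr(pre, post)

# Paired samples t-test; SPSS-style pairing (pretest minus posttest),
# so a negative t indicates higher posttest scores.
t, t_p = stats.ttest_rel(pre, post)

print("gain scores:", gains)
print(f"paired correlation: r = {r:.3f}, p = {r_p:.3f}")
print(f"paired t-test: t = {t:.3f}, p = {t_p:.3f}")
```

Computing the correlation alongside the t-test, as this sketch does, mirrors the reporting convention used later in the results chapter, where the paired correlation and the t statistic are presented in separate tables.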
Near the course midpoint, the instructors collected the learning journals, made comments, and returned the journals the next week. In addition, the instructors asked the participants to respond to a brief reflection on the learning journals based on the following open-ended questions: 1) How do you feel about writing in your journal?, 2) Is it difficult to express your ideas?, 3) Do you find it hard to remember to write in it every week?, 4) What do you enjoy the most about writing in it?, and 5) Would you want to continue using a learning journal in the future? At the end of the course, the learning journals were collected again, analyzed, and returned to the language learners. The journal entries were analyzed in a qualitative fashion, using a categorization of comments. The learning journals were collected and analyzed to measure the learners' reaction to the training as well as to measure their learning as compared with the standardized test scores.

Interviews. After the standardized pretests were conducted and the journals distributed, a semi-targeted sample of learner participants (n = 8) from the initial 36 was selected for one-on-one interviews in English. The selection criteria were that participants a) had a high-intermediate or above speaking level, b) had differing language backgrounds, and c) attended class the day interviews were conducted. The sample group included three of the ten participants from the manufacturing company and six of the 26 healthcare participants. Not all 36 participants were interviewed, for this study was conducted as a typical program might be conducted (for purposes of replication), under time and staffing constraints as well as limited funds for evaluation. The initial informal interviews were conducted by the researcher within the first two weeks of each ten-week class. The focus of the interviews was to solicit the learners' perspectives on course goals, instructional needs, and their reactions to the journals and the standardized tests (see Appendices E and F for a complete list of interview questions).

Next, exit interviews were conducted during the final week of each ten-week course with eight of the participants previously interviewed and three additional participants who were unavailable during the initial interviews. The focus of the exit interview was to solicit a) the learners' reactions to the usefulness of the training, b) their perceptions of progress toward their learning goals, and c) examples of their achievements as a result of the training. Each interview was conducted one-on-one with the learner. On average, the interviews lasted about ten minutes. The participants were given non-directive prompts and probes when a more extensive response was needed.

Finally, midpoint and end-point group interviews were conducted with the company representatives. All managers and front-line supervisors were asked to attend the meetings. The meetings were conducted on-site at the company and at a time convenient for the managers. The purpose of the supervisor interviews was a) to measure the supervisors' reactions to the training, and b) to gather feedback so that the initial assessment of learner needs and expected outcomes could be compared with the actual results. All interview responses were recorded using a digital audio recorder. While recording the interviews, the researcher took notes; therefore, the interview responses were only partially transcribed before they were analyzed.
Just as with the journal entries, the interview responses were analyzed in a qualitative fashion, using a categorization of comments in order to assess the learners' and company management's reactions to the workplace English language training.

Questionnaires. The learners, managers, and front-line supervisors were asked to complete questionnaires at the end of the ten-week course. The questionnaires were intended to evaluate changes in behavior. The learner questionnaire, adapted from Sticht (1999), was divided into three sections: at work, at home, and in the community (the learner questionnaire can be found in Appendix G). It had 20 questions in which the learners marked yes/no/I don't know and were asked to provide an example. In the first section, the learners provided feedback about how the ESL program had helped them at work. Questions in this section included the following: Read job materials better? Speak/communicate on the job better? Work better in teams? Feel confident about trying for a promotion? and Improve your morale with the company? In the second section of the questionnaire, the learners provided examples of how the program had helped them at home, answering questions such as Have you started reading more at home? Do you communicate in English at home? and Do you help your children/grandchildren with homework more? In the third and final section, the learners answered questions on how the program had helped them in the community: Do you feel more confident about reading in stores and offices? Do you feel more confident writing on government forms? On the last day of class, the instructors allowed 10 to 15 minutes for the participants to complete the questionnaire. The program director explained the purpose of the questionnaire, read through it with the learners, and helped answer the learners' questions. All who attended the last day of class (n = 22) completed the learner questionnaire.

At the end of the course, the front-line supervisors were given two questionnaires. The purpose of the first questionnaire was for the supervisors to rate the general improvement of each individual involved in the ESL training in the following five categories: increased communication, improved productivity, increased attendance at work, increased self-esteem, and improved safety. The supervisors rated each individual under their supervision who attended the training, giving the learner a 0 (no change or worse) to 3 (greatly improved) in each category. The supervisors were then encouraged to provide comments or further explanation on the back side of the questionnaire. This questionnaire was adapted from Mansoor (1993) and is found in Appendix H. The second supervisor questionnaire measured program effects on an entire department (see Appendix I). For four of the eight questions, the front-line supervisors were asked to circle a number from 1 (greatly decreased/many more errors/much more difficult, etc.) to 5 (greatly improved/increased, etc.); three other questions called for short answers. The categories being rated included production, quality of work, transferability, attitude, and future trainings. This second questionnaire was adapted from The Manager's Informational Packet (1998). Both questionnaires were chosen because they had been previously piloted in similar scenarios and because together they addressed behavioral changes in both the individual and the entire department. Copies of the questionnaires were given to the managers and front-line supervisors who attended the midpoint interview.
Extras were provided for those who could not attend. Then, towards the end of the 10-week session, an electronic version was created and emailed as a reminder to those who had attended the meeting. The managers and supervisors were asked to return the questionnaires by the next meeting, scheduled for the end of the 10-week training. Five front-line supervisors completed the questionnaires. The questionnaires were not anonymous so that the questionnaire responses could be matched and triangulated with the interview responses, and because Literacy Organization wanted to be able to address any issues that arose from the questionnaires. The learner and supervisor questionnaires were analyzed, looking at numerical trends as well as a categorization of comments. The questionnaires were used to identify how the learners' behavior changed in the workplace after the workplace English language training.

3.6.2 Data Analysis

As described above, the data for this study consisted of pre- and posttest scores, participant writing samples from learning journals, and audio-recorded initial and exit interviews. The pre- and posttests were compared for the individuals in a course using a paired-samples t-test within-subjects design to detect gains brought about by the program. In keeping with qualitative research procedures, I used an inductive data analysis in which frequent, dominant, and/or significant themes emerged from the raw data (Mackey & Gass, 2005).

CHAPTER IV: RESULTS AND DISCUSSION

4.1 Introduction

In the previous two chapters, the theoretical framework, the previous research, the research methods, the subjects, and the instruments in this study were discussed. In this chapter, the results from the research are presented and discussed in response to the research questions. As mentioned in Chapter III, the research questions are answered using various quantitative and qualitative measurements. The data sets will be discussed in detail in order of the questions they apply to and will later be referred back to when appropriate. The first and third research questions address the results from all the learner participants (n = 36), as the supervisors' comments relate to the collective group of learners. Correspondingly, research questions 2a and 2b only address the results from the participants who completed learning journals and a pre- and post-CASAS test. The discussions will point out which data set is being considered.

4.2.1 Research Question 1

The first research question stated: How do supervisors' reactions to the workplace English training compare to language learners' reactions as measured by informal interviews and learning journals? To address this research question, first the results of the supervisor group interviews will be presented and discussed, then the learners' journal comments will follow, and finally the supervisor and learner comments will be compared and discussed. When asked about the changes they had observed as a result of the training, the supervisors at Alpha Manufacturing shared the following comments:

[Supervisor 3] I have one employee (ID 21) her self-esteem has just gone through the roof. It's just unbelievable how well she has improved. In fact, she just started training a new employee today.

[Supervisor 4] Self-esteem has improved across the board. [Learner ID 56] is the one who has improved the most because he is more talkative now. Before he was just quiet and sat by himself and now he is talking more with others.
[Supervisor 3] Just the improvement of communication skills has to be a benefit to the company. If you have supervisors and employees on the floor who are able to understand each other it has to be a big help.

[Supervisor 4] Communication skills have increased with all of them. They are speaking more with the technical staff. For productivity, it has increased for two of the people. Everyone else was normal before, and there were no attendance issues before.

Not surprisingly, both of the supervisors who attended the meeting made comments related to increased self-esteem and improved communication among their employees. Importantly, their reactions to the training were very positive, and they requested that instruction continue. One supervisor added that there were at least 10 more people in his department who could benefit from the training. Unfortunately, neither supervisor was able to observe the training during the 10 weeks and, therefore, had no reaction to the trainer or the materials.

Similar to the Alpha Manufacturing supervisors, the supervisors from Beta Health reported an increase in self-esteem as well as improved communication among their employees who attended the training:

[Supervisor 6] Self-esteem in all of [the learners] probably has improved. Just in the smiles when they come out of class and observing them in class, you can tell they thoroughly enjoy going to class.

[Supervisor 58] [Learner ID 44] is now more confident when she looks at me and talks very openly. She is really trying to explain herself.

[Supervisor 58] [Learner ID 49] is pretty quiet in our department but she is opening up. She is asking more questions whereas before she would just try to figure it out or ask someone she is comfortable with. And now she is coming to me a lot more.

In addition, some of the supervisors provided examples of how the training had positively affected the workplace, such as in performance reviews and staff meetings, as demonstrated in the following comment:

[Supervisor 12] My performance reviews have really improved from last year. It was a struggle last year. I've just seen a big improvement when I did them last month. I did them on PowerPoint and they were reading along. When I did the questions, I noticed a big improvement. And it didn't take us as long, the time was shortened!

Another interesting finding was that at the time of the group meeting only one of the seven supervisors from Beta Health had observed the class. Although all of the supervisors seemed eager to attend, none had made it a priority before the end of the training. The supervisor who observed the class commented that the trainer did a great job and had excellent rapport with the learners. The supervisor also mentioned that the lessons, based on Ready to Go, were applicable to the learners' work even though they were not customized. This means that the supervisor had a positive reaction not only to the instruction but also to the course materials. It is important to mention that although most of the supervisors did not observe the class, many still had a positive reaction to the training, as demonstrated in the following comment:

[Supervisor 59] Basically the employees have all come to me and just praised the program; they just really like it. They all ask me to say words for them because they want to get the appropriate pronunciation. They write words down and ask if they are spelling it right. They really are trying and really like it.
These findings suggest that when the learners communicate a positive reaction to the training, their supervisors will more than likely also have a positive reaction. Interestingly, one frustration that was shared was the fact that employees were still speaking Spanish together in front of patients or other non-Spanish-speaking co-workers. This frustration was expressed by supervisors from both Nutritional and Environmental Services. These findings possibly suggest that the supervisors had not clearly communicated their expectations for behavior at work. Otherwise, it is possible that expectations had been communicated, but the employees were ignoring the English-only "rule," which is often unspoken yet expected in the American workplace.

Based on their learning journals, the learners from all three groups reacted to the training in a fashion similar to the supervisors. On multiple occasions the learners wrote unsolicited entries on their enjoyment of the class and the instructor, and were thankful for being given the opportunity to participate in the training. A portion of these reactions, categorized by company and learner, are listed in Table 8.

Table 8
Learners' Reactions to the Workplace English Training

Company     Learner ID   Learning Journal Entry
Beta South  32           I like my English class. I'm feel improve a lot.
Beta South  53           I'm so proud of myself because I had the opportunity to show my boss what I'm learning and now I'm doing things that I wasn't able to do before my English classes.
Beta South  38           [I'm] learning a lot. I will be practicing all the time to make a progress.
Beta South  39           I like to talk with my friends in English I tell them Im learning English and I want to practice with them.
Beta South  53           [The teacher's] English is very understandable for Spanish and African speaking people. I'm very happy to have her as my teacher.
Beta South  35           Is [very] easy for me understand my teacher. she speaks clearly. I love grammar. The teacher's explination is very clear.
Beta North  51           Communication whit my coworkers beter in English
Alpha       25           I'm very thankful to this opportunity [English classes] that was given to us.
Alpha       NA           I enjoy my classes very much. I'm learning a lot of thing and I want to learn how to write in the journal every week without missing anything.
Alpha       NA           Yes. I want to continue learning evry thing you teach me.

Note. NA = not available; these comments came from anonymous journal reflections. Entries are quoted verbatim, including the learners' original spelling.

These comments, coming from a variety of learners from different training groups, suggest that the learners not only had a positive reaction to the classes but also were learning and were motivated to practice what they had learned. In a comparison of the supervisors' and learners' comments, it was found that the learners typically provided evidence to support the supervisors' statements. For instance, during the end-of-training interviews, the supervisors commented that 1) communication with the learners at work had improved; 2) the learners were speaking more confidently and were asking more questions; 3) the learners had improved in American cultural norms such as making eye contact when speaking to supervisors; and 4) there was a positive reaction to the training. Similarly, in their journals the learners mentioned an increase in communication with their co-workers, an increase in self-confidence in showing the supervisor newly acquired skills, and taking initiative to speak with native-English-speaking co-workers.
Specifically, one supervisor described an increase in the self-esteem of a particular learner, noting that as a result that employee would be training new employees. That same learner [ID 21] wrote in her journal that she was talking more with her supervisor and having more communication with co-workers. Interestingly, both the supervisor and the learner noted a difference: the supervisor noted a behavioral change in the learner, whereas the learner noted improved capabilities to function and communicate on the job. The fact that both recognized related improvements provides strong evidence that the training achieved an important goal, to improve communication in the workplace. In addition, this finding suggests that a positive reaction to the training from both supervisors and learners may lead to promotions or job mobility, which is a positive outcome for both the learner and the company.

Table 9 below provides a comparison of the supervisors' and the learners' reactions to the workplace English language training. A total of six comments from the supervisor interviews and six of the learners' journal responses are provided.

Table 9
Evaluating Reactions: Supervisor Interview and Learner Journal Comments

Alpha Manufacturing
  Supervisor 3: I have one employee (ID 21) her self-esteem has just gone through the roof. It's just unbelievable how well she has improved. In fact, she just started training a new employee today.
  Learner 21: Now I takin more with my supervisor an [my team leader] and wi have more comunication.

  Supervisor 4: Communication skills have increased with all of them. They are speaking more with the technical staff. For productivity, it has increased for two of the people. Everyone else was normal before, and there were no attendance issues before.
  Learner 23: Right now I understood some worker very [well]. If I continues to practice with them, they will be show me how to pronouce.

  Supervisor 3: I think the program is going well. I've seen improvement. It's really up to them to learn. They need to go to class wanting to learn.
  Learner 25: I'm very thankful to this opportunity [English classes] that was given to us. It will not be difficult for us anymore the ideas that we want to express because the subjects are practical and interesting beside we have a very expert teachers.

Beta North
  Supervisor 58: With their work day, they are looking at us and are speaking to us more now than before the instruction. They are making eye-contact.
  Learner 51: Communication whit my coworkers beter in English.

  Supervisor 59: One observation I've had is when I had two Spanish-speaking employees together, they would switch over and speak in Spanish. But now they often speak in English, or switch back and forth.
  Learner 39: I like to talk with my friends in English I tell them Im learning English and I want to practice with them.

Beta South
  Supervisor 12: I've seen a lot of improvement in my staff meetings such as in staff being willing and feeling more confident [to ask] if they didn't understand instead of not asking at all. That's a big improvement.
  Learner 53: I'm so proud of myself because I had the opportunity to show my boss what I'm learning and now I'm doing things that I wasn't able to do before my English classes.

The comparison in Table 9 shows that the supervisors and the learners noticed changes and reacted to evidence of both learning and changed behaviors.
Furthermore, the supervisor and learner comments show increased language skills, enhanced self-esteem, improved ability, increased job satisfaction, and possibly a motivation for greater job mobility. These are all qualities, as mentioned previously (Friedenberg et al., 2003), that benefit not only the employee but also the employer. In sum, according to the interviews and learning journals, both the supervisors and the learners had a positive reaction to the 10-week workplace language training.

4.2.2 Research Question 2a

The research question is: To what extent do workplace language learners demonstrate learning as measured by standardized assessments? To answer this research question the standardized test scores are analyzed with a paired samples correlation and t-test. As mentioned before, this research question only addresses the results from the participants who completed language journals and standardized pre- and post-tests.

Table 10 shows the descriptive statistics used to compare the range of scores, the minimum and maximum scores, as well as the mean and standard deviation from the listening and reading pre- and posttests from the sample group (n = 20). Twenty learners took the listening pretest, scoring between 203 and 229 with a mean of 216.60 and a standard deviation of 6.54, whereas 19 learners took the listening posttest, scoring between 207 and 235 with a mean of 218.42 and a standard deviation of 7.29. This means that the lowest listening score increased by four points and the highest listening score increased by six points. Moreover, the mean increased 1.82 points from the pre- to posttest. For the reading test, 18 learners took the pretest, scoring between 209 and 237 with a mean of 217.56 and a standard deviation of 7.55. On the other hand, 19 learners completed the reading posttest, scoring from 210 to 247 with a mean of 222.74 and a standard deviation of 8.86. This means that the lowest reading score increased by one point and the highest reading score by 10 points. The mean increased 5.18 points from the pre- to posttest.

Table 10
Descriptive Statistics for Standardized Test Scores

                      N    Range   Min   Max   M        SD
Listening Pretest     20   26      203   229   216.60   6.541
Listening Posttest    19   28      207   235   218.42   7.290
Reading Pretest       18   28      209   237   217.56   7.548
Reading Posttest      19   37      210   247   222.74   8.856

It should be noted here that out of this sample group, only 19 of the 20 learners who took the listening pretest completed the listening posttest, and more learners completed the reading posttest (19) than the reading pretest (18). The reason for this is that a few of the learners were unable to attend the pre- and post-testing for both reading and listening, and others came to the post-test but were not pre-tested. Due to time restrictions in the natural setting, the program coordinator was unable to assess all the learners who participated in the training. All of the individual pre- and posttest scores, as well as the learning gain or loss realized, are listed in Appendix J. Generally, the learning gains ranged between 2 and 14 points. Negative gains ranged between -1 and -16 points. On average, two to three of the learners in each class made no gains or had negative gains on both the listening and the reading tests. That is, eight out of 20 learners made no gain or negative gain on the listening test; similarly, seven out of 18 learners made no gain or negative gain on the reading test.
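For readers who wish to replicate this kind of summary, the statistics reported in Table 10 can be computed with a short script. The sketch below (in Python) uses invented placeholder scores rather than the study's raw data, which appear in Appendix J; only the procedure is illustrated.

    from statistics import mean, stdev

    def describe(label, scores):
        # Print N, range, min, max, mean, and sample standard deviation for one test.
        print(f"{label:<20} N={len(scores)} range={max(scores) - min(scores)} "
              f"min={min(scores)} max={max(scores)} "
              f"M={mean(scores):.2f} SD={stdev(scores):.3f}")

    listening_pre = [203, 210, 212, 215, 216, 217, 218, 219, 220, 229]   # placeholder scores
    listening_post = [207, 211, 213, 216, 218, 219, 220, 222, 226, 235]  # placeholder scores

    describe("Listening pretest", listening_pre)
    describe("Listening posttest", listening_post)

    # Per-learner gain (posttest minus pretest), of the kind tabulated in Appendix J:
    gains = [post - pre for pre, post in zip(listening_pre, listening_post)]
    print("Gains:", gains)

Reporting the per-learner gains alongside the summary statistics keeps zero and negative gains visible rather than folding them into the mean.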
To establish whether or not there was improvement from pre- to post-test in listening and reading, the scores were analyzed using paired samples t-tests in a within-subjects design with two levels. In this analysis the within-subject factor is test, pre-test versus post-test. Table 11 provides the correlations between the two paired scores. The findings for listening and reading are presented together but discussed separately.

Table 11
Paired Samples Correlations

Pair   Source                           N    r       p
1      Listening Pre - Listening Post   19   0.657   0.002
2      Reading Pre - Reading Post       17   0.666   0.003

Listening. The r of .66 shown in Table 11 indicates that there is a relatively high positive association between the listening pretest and the listening posttest. This means that the learners who tended to do well on the pretest did well on the posttest, and those who did poorly on the pretest did poorly on the posttest. Table 12 shows the results of the paired samples t-tests. Note that the posttest mean is about two points higher than the pretest mean and that this is not a statistically significant difference: t(18) = -1.263, p > .05. This means that it is not the case that learners in this study tended to do significantly better on the listening posttest after 30 hours of instruction.

Table 12
Journal Sample: Paired Samples t-Tests (Within-Subjects Design with Two Levels)

                                        Paired Differences
Pair   Source                           M        SD      SE      t        df   p
1      Listening Pre - Listening Post   -1.684   5.812   1.333   -1.263   18   0.223
2      Reading Pre - Reading Post       -3.529   6.663   1.616   -2.184   16   0.044

Reading. The r of .67 shown in Table 11 indicates that there is a relatively high positive association between the reading pretest and the reading posttest, just as in the listening test. Again, this means that the learners who tended to do well on the pretest did well on the posttest, and those who did poorly on the pretest did poorly on the posttest. Table 12 shows the paired samples t-test for the reading tests. Note that the posttest mean is 3.5 points higher than the pretest mean and that this is a statistically significant difference: t(16) = -2.184, p < .05. This means that after 30 hours of instruction, the learners did significantly better in reading.

It should be noted here that participant ID 51 was excluded from the paired samples correlations and t-tests due to a score of -16 on the reading test, which was an obvious outlier and greatly skewed the results from this small sample set of 21 participants; hence the decreased number of participants (n = 17) shown in Table 11. A comparison of the pre- and posttest scores shows that out of 21 participants, seven performed worse from the pre- to the posttest on both the listening and the reading tests. That is, two-thirds of the class made some improvement whereas one-third did not improve or did worse. As stated previously, the non-significant listening results from the quantitative data suggest that after 30 hours of instruction the learners made no progress in improving their listening ability. Yet, the statistically significant quantitative reading results suggest that after 30 hours of instruction, participants made significant learning gains. It is possible, however, that the negative gain could be attributed to a number of other factors. First, although the standardized test scores may be reliable with a standardized group of learners, this group is anything but standard.
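Before turning to those factors, note that the analyses in Tables 11 and 12 can be reproduced with standard statistical routines. The following sketch assumes scipy is available and again uses invented placeholder score pairs, not the study's data.

    from scipy import stats

    pre = [203, 210, 212, 215, 216, 217, 218, 219, 220, 229]   # placeholder pretest scores
    post = [207, 211, 213, 216, 218, 219, 220, 222, 226, 235]  # placeholder posttest scores

    # Pearson correlation between the paired scores (cf. Table 11)
    r, p_corr = stats.pearsonr(pre, post)

    # Paired samples t-test (cf. Table 12); degrees of freedom = n - 1
    t, p_t = stats.ttest_rel(pre, post)

    print(f"r = {r:.3f} (p = {p_corr:.3f})")
    print(f"t({len(pre) - 1}) = {t:.3f}, p = {p_t:.3f}")

Because ttest_rel subtracts the second list from the first, a posttest mean above the pretest mean yields a negative t statistic, matching the signs reported in Table 12.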
The negative gain on the standardized tests could be a result of the variety of educational levels (from less than three years to a university degree), first languages, or lengths of time in the United States (from less than one year to over twenty years), which makes this group an atypical testing group. For many of the learners, standardized testing is not a familiar event. This can cause anxiety and poor pre- or post-testing results. Another possibility is that the CASAS test is not valid in this learning context. It is competency-based, but if the lessons are customized to the companies' needs and goals, the material being taught is not necessarily the material being tested. This would suggest that a different test be used to measure gain more accurately. However, CASAS is currently the only state-approved standardized test in Michigan that measures English as a Second Language (ESL) and must be used when there are grant restrictions. Therefore, the answer to research question 2a is that learners demonstrate learning on the reading test but not on the listening test after 30 hours of instruction.

4.2.3 Research Question 2b

The research question is: To what extent do workplace language learners demonstrate learning as measured by learning journals? The answer to question 2b, as demonstrated in the findings discussed below, is that learning journals indirectly measure learning and provide evidence of learning. From the 21 learning journals and 111 entries that were analyzed, five categories emerged: course goals, motivation to learn, perceived successes and/or achievements, perceived struggles and/or failures, and miscellaneous comments (see Table 13 for the number of entries per category). Interestingly, the learning journal responses in these categories provided evidence of effectiveness not only in learning but also in attitude and behavior. However, to address the research question at hand, only the data from the learners' perceived successes and/or achievements in listening and reading will be presented and discussed in this section, which will then be compared to the findings of the standardized test scores.

Table 13
Journal Responses

Category                             No. of Comments (out of 111)
Course goals                         21
Motivation to learn                  10
Perceived successes/achievements     51
Perceived struggles/failures         14
Miscellaneous comments               15

The following results were taken from the 51 out of 111 comments in the perceived successes and/or achievements category. In order to compare the quantitative and qualitative results, the learners' journal responses were divided into two subgroups: listening and reading. First, the eight listening achievements out of the 51 comments on perceived success are presented in Table 14 and discussed below.

Table 14
Participants' Perception of Improvement in Listening

ID   Journal Responses
40   I feeling exciting in my work today because before the class the patient ask me. -How long have you been working here? How long have you been living in the US?- I answerd the question but, today I can [hear] very clearly. I'm so Happy. Thank you teacher.
40   This week I hear the nurses talking about the copier it was jammed I was feeling happy because I understood
53   I'm so happy! There have been almost 3 month and I'm feeling more and more confident when I talk English or when I start a small conversation. I'm can listen more now [than before]. I am more confortable when I have to talk with the staff about somethings. They can understand me better and I'm can explain better.
53   Now on, I will go to the training that [Beta Health] offers too management. There are a lot like crucial conversations, corrective actions, how to encourage staff to do a better job, etc. But in order to do some of those training I have to continue studying very hard.
53   I am very good at listening now. I think [this class] is helping a lot I like when someone, my co-worker or visitor or family member (Spanish) ask me to help them or ask them to translate and I can help them, it feel so good.
35   When somebody call me by phone in English I can understand much better than before. When someone called me before I always said Sorry, I don't speak English
35   I took my car for to clean, to the carwash and could understand and I was understood
35   Is easy for me to talk with the patients I don't not why, but they understand me and I can understand them.

The findings from the perceived improvement in listening ability suggest the following about the learners as a result of the training: 1) increased self-esteem (i.e., I feeling exciting in my work today... I'm so Happy. Thank you teacher; I was feeling happy because I understood; I am very good at listening now. I think [this class] is helping a lot... I can help [with translation] now, it feel so good; I'm feeling more and more confident when I talk English or when I start a small conversation); 2) improved customer service (i.e., patients are better understood and learners are responding to more of the patients' questions; When somebody call me by phone in English I can understand much better than before); and 3) improved communication among co-workers (i.e., Now on, I will go to the training that [Beta Health] offers too management; I am more confortable when I have to talk with the staff about some things. They can understand me better and I'm can explain better). These findings demonstrate not only a learning gain, but also a change in behavior.

Next, I will present the perceived achievements in reading. Only four out of the 51 comments fell into this category. The findings demonstrate that the participants had success both in the workplace and in their personal lives. All of the comments came from different learners. The learners commented that they are now able to understand certain postings at the workplace, can check their email, and can read magazines at home. This demonstrates learning, as these are tasks that it appears they could not do before. The exact comments about reading improvement are listed in Table 15.

Table 15
Participants' Perception of Improvement in Reading

ID   Journal Responses
34   The writing reading. pronunciation getting better in my self. I try to do my homework and practicing my lecture.
40   When sometimes on the mural my supervisor put some paper with the invitation for the [potluck] or something [now] I can understand.
32   I learned how to check my email.
17   At home: I readed the magazine, watch tv

Finally, the journal comments will be compared to the results of the standardized tests. Unlike the standardized tests, the learning journals allowed the learners to document specific improvements in their listening abilities such as understanding small talk, understanding work-related trainings, and understanding phone calls. Out of the remaining 38 journal comments in the perceived successes/achievements category, 23 related to improved speaking skills, 11 to general increased ability, three to improved writing, and one to correct grammar use.
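Tallies of this kind are simple to compute once each comment has been hand-coded. The sketch below invents a coded list that merely mirrors the counts in Table 13, purely to show the mechanics; the coding itself remains a manual, qualitative step.

    from collections import Counter

    # One hand-assigned code per journal comment (invented to match Table 13's counts).
    codes = (["course goals"] * 21
             + ["motivation to learn"] * 10
             + ["perceived successes/achievements"] * 51
             + ["perceived struggles/failures"] * 14
             + ["miscellaneous comments"] * 15)

    tally = Counter(codes)
    for category, n in tally.most_common():
        print(f"{category:<34} {n:>3}")
    print("Total comments:", sum(tally.values()))  # 111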
Not surprisingly, the largest number of comments related to improved speaking ability (see Appendix K). That is, not only did the learners perceive gains in their listening and reading abilities (on which they were tested), they perceived even more achievements in their speaking abilities. This finding strongly suggests that the learners learned considerably more than the skills on which they were tested. It is possible as well that because of their gain in self-confidence they were more willing to apply their already existing speaking skills. For instance, the learners expressed the ability to perform better in the workplace with their improved speaking abilities, as demonstrated in their written comments:

• I'm feeling happy because today I can to give a direction for a family patient she was lost she need to find the exit.
• ...Now my supervisor give some address for something she needs. I can ask her If I don't understand
• When Im working and the people, Patients, Family Patients, or Co Workers ask me how to get to any area. I Try to give them the more easy way.
• My biggest challenge is to be able to talk with my supervisor. The most frustrated thing is when I have to say something to [the supervisor], [the supervisor] never listens, [my supervisor] always tried to finish the word for me or to put words in my mouth that I don't want to say. Now, I can stop [the supervisor] and I am very certain that [the supervisor] can understand better when I talk. Last week I reported a toilet clogged or plugged up in 3E room 30.
• Now I takin more with my supervisor an [my team leader] and wi have more comunication.

Additionally, participants commented in the learning journals that because of instruction they were able to check e-mail, read the announcement bulletin board at work, and pay their phone bill on-line without help from bilingual family members.

Moreover, a discrepancy exists between the results from the standardized tests in reading and listening and the learners' perceptions of increased speaking, listening, and reading abilities. Perhaps the discrepancy is not in the validity of the quantitative versus the qualitative assessments, but rather in how the data are presented. As demonstrated in research question 2a, the test scores can be reported in different ways, showing a non-significant improvement as well as an above-average gain (as was the case for the listening scores). Although the differences were non-significant when compared to other studies, there is an above-average gain in terms of raw scores. While this raw-score difference is not tested statistically, it is a difference that would be noted by the stakeholders. The learning journals, however, provide concrete examples of learning and mastery of different speaking, listening, reading, and writing tasks. The standardized tests only measured listening and reading abilities, whereas the journals demonstrated gain in the productive skills as well as the receptive skills. Surprisingly, during the interviews, when asked about their perception of the standardized tests in relation to measuring course outcomes, many of the learners thought the scores would be helpful and were eager to improve on the test, though they mentioned not enjoying taking it. When asked to reflect on the journals, many of the learners commented that they enjoyed writing in the journals because it helps them practice their writing.
One learner commented, "I think [the journal] is good because you can express how much your learning in class and you can see how much you have to learn." Another learner wrote the following:

If anybody of us find it hard to remember to write in it every week then something is missing. Like for example this opportunity, this class, never it will happen in our life that's why we must give enough time to study it once a week, even an hour or two. For me, I never find it hard to remember to write in it every week because this is a very rare opportunity. there's a time for work and a time of studying beside that this course has an end and after this, it is a knowledge that we have acquired and nobody can take it.

It should also be mentioned that out of all the journal responses, only 14 were perceived struggles and/or failures. Generally, these comments dealt with low self-esteem (i.e., Is hard for me when I go to the store don't be understood and someone asks me what? I feel so bad) and perceived lack of ability (i.e., I don't understand at meeting. At meeting sometime they don't understand what I say; I can't give direction to a person to a certain place.). And out of those 14, none were negative reactions to the training itself, just perceived struggles in learning and using the English language. The journal comments listed above come from various points in the training, meaning that the learners may have had lower self-esteem at the beginning of the training as compared to the end. As mentioned before, this research question only addresses the results from the participants (n = 21) who completed language journals and the standardized pre- and post-tests. Thus, it is important to note that these responses come from a sample group of the 36 learners, and therefore, may not be representative of the entire group.

4.2.4 Research Question 3

The third and final research question is: How does learners' behavior change as a result of workplace language training as measured by learner and supervisor questionnaires? To address this question, first the results of the learner questionnaire will be presented and discussed, followed by the results of the supervisor questionnaire. Table 16 presents a summary of the responses from 22 learners from the two companies (see Appendix L for learner questionnaire responses by company). When the data from the learners' questionnaires were analyzed, the findings showed that learners reported a change in their own behavior in the workplace, at home, and in the community.

Table 16
Learners' Responses to Behavioral Questionnaire

Questions                                                   YES   NO   DK
Has this ESL program helped you at work?
1  Read job materials                                        21    1    -
2  Write job materials                                       19    1    1
3  Listen/understand                                         22    -    -
4  Speak/communicate                                         20    2    -
5  Work better in teams                                      22    -    -
6  Improve confidence                                        20    1    1
7  Reduce waste, scrap, errors                               17    2    2
8  Know more about company policies                          12    4    3
9  In trying for a promotion                                 17    3    2
10 In company training programs                              13    1    6
11 Improve morale                                            14    1    5
   Total                                                    197   16   20
Has this ESL program helped you at home?
12 Reading more                                              21    -    -
13 Writing more/better                                       18    1    1
14 Communicate in English                                    19    3    -
15 Helping (grand)children with homework                      9    9    2
16 Reading to your (grand)children more                       9    8    2
   Total                                                     76   21    5
Has this program helped you in your community?
17 More confidence about reading in stores, offices          20    1    1
18 More confident writing in government forms                15    2    3
19 Easier to speak in public                                 20    1    -
20 Consider taking more education or training programs       20    -    2
   Total                                                     75    4    6

Note. DK = Don't Know; (-) = none.

Calculated from 22 questionnaires and summed over the three training groups, 81% of the learners reported the ESL training helped them at work; 69% reported it helped them at home; and 85% of the learners reported the ESL training helped them in the community. Interestingly, all 22 learners agreed on only two questions. They agreed that the ESL program helped them listen and understand better at work and work better in teams. The next highest number of yes responses dealt with reading job materials and reading more at home. On the other hand, nearly half of the learners who completed the questionnaire felt that the ESL program did not help them at home assisting their children and grandchildren with homework or reading to them. In the example section, most of those who responded no also explained that they did not have children or grandchildren at home. The highest number of unknown responses was associated with the ESL program helping the learners in company training programs and improving morale. In addition to selecting Yes/No/Don't Know, the learners provided examples to justify their responses. Some examples for reading include the computer, process materials, inventory, bills, and chemical bottle labels. For speaking and listening, some examples include speaking with the nurse on the floor, with co-workers, with Americans, and with supervisors.

Next, I will present the findings from the supervisors' questionnaires. In the first questionnaire, where the supervisors evaluated the individual learners on their behavior in the workplace, only five supervisors responded, each reporting on one to three of the learners participating in this study. That is, two of the 18 learners from Beta Health South, three of the eight learners from Beta Health North, and six of the 10 learners from Alpha Manufacturing were reported on in the evaluations. The findings from the first questionnaire show that for 10 out of the 11 learners evaluated, the supervisors noted a behavioral change in at least one of the five categories (communication skills, productivity, attendance, self-esteem, and safety). Table 17 presents the categories and ratings representing the amount of improvement for each individual as reported by his/her supervisor. In general, the supervisors reported no change in attendance or safety, and only one reported increased productivity in one employee. They reported the most behavioral improvement in self-esteem and communication skills.

Table 17
Questionnaire Results: Supervisor Ratings on Learners' Behavior at Work

                                        Amount of Improved Behavior
Company   Supervisor   Learner   Communication Skills   Productivity   Attendance at Work   Self-esteem   Safety
BHS       NA           35        -                      -              -                    2             -
BHS       NA           27        -                      -              -                    2             -
BHN       59           51        3                      N/A            N/A                  3             -
BHN       59           50        2                      N/A            N/A                  3             -
BHN       59           47        2                      N/A            N/A                  3             -
AM        3            26        -                      -              -                    -             -
AM        3            21        3                      -              -                    3             -
AM        3            22        1                      -              -                    1             -
AM        4            17        2                      -              -                    2             -
AM        4            25        2                      -              -                    1             -
AM        4            20        2                      3              -                    3             -

Note. BHS = Beta Health South, BHN = Beta Health North, AM = Alpha Manufacturing. Rating scale: 3 = greatly; 2 = moderately; 1 = slightly; (-) = no change; N/A = not applicable.

The most noted change was in self-esteem. For example, the managers rated almost half of the learners as having greatly increased in self-esteem.
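As an aside on the percentages reported for Table 16: the thesis does not state the formula, but under the assumption that each percentage is the yes-count divided by the total possible responses in a section (22 respondents times the number of questions), the reported figures are reproduced exactly. A minimal sketch:

    # Assumption: percentage = yes total / (22 respondents x questions in section).
    sections = {
        "work":      (197, 11),  # yes total from Table 16, number of questions
        "home":      (76, 5),
        "community": (75, 4),
    }
    respondents = 22

    for name, (yes_total, n_questions) in sections.items():
        share = 100 * yes_total / (respondents * n_questions)
        print(f"{name:<10} {share:.0f}%")  # prints: work 81%, home 69%, community 85%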
Returning to the supervisor ratings, when asked on the questionnaire about the perceived effectiveness of the training, manager 59 commented, "Yes, [the training] has help self esteem greatly. Work attitude and production increased." The managers chose communication skills as the second largest behavioral change. Most of the comments regarding increased communication skills were discussed during the company interviews. Specific comments supporting this finding include the following:

• [Manager 59] They talk with each other more to solve issues that may arise.
• [Manager 3] The more knowledge the employee has of the English language, the more comfortable they will be in their position.

Conversely, as seen in Table 17, not all the supervisors felt that other behaviors in the workplace were affected by the training. For instance, a supervisor from Beta Health wrote the following:

I didn't see that much change in our employees who participated in the class other than self-esteem. The employees we sent had been in this country for years. Their understanding was better than some. I didn't think this was helpful to the employees we sent. Maybe a more basic class for staff having more difficulty communicating and absorbing information would be more useful.

Interestingly, the only negative reactions from supervisors came from Beta Health South. And the two supervisors offering the comments did not participate in any pre-instruction, mid-point, or post-instruction meetings. This could possibly be because 1) they were told to attend and did not, 2) they were given the choice to attend and chose not to, or 3) their manager did not inform them of the meetings. Furthermore, both commented that the timing of the Beta Health classes was inconvenient. That being said, both of these supervisors reported on only one employee each, whereas other supervisors had more participants in the training and more schedules to flex. Fortunately, as the learners conveyed in their attitudes towards class during their interviews, the environment created by the unsupportive manager did not seem to affect the learners' perceived benefits of the training.

The second supervisor questionnaire evaluated the behavioral effects the training had on the entire department. The findings in Table 18 suggest that after 10 weeks of instruction the majority of supervisors thought the following improved: 1) the attitude of the employees in their department, 2) the supervisory workload, and 3) the learners' skills transferability. Only one of the five supervisors did not see a change in departmental attitude, or morale among native and non-native English-speaking employees, whereas the others reported some to a lot of improvement. Three of the five reported that their workload somewhat increased. Similarly, three of the supervisors thought that the learners' ability to handle new technical equipment or training in the department was better than their ability to handle those situations prior to instruction.

Table 18
Questionnaire Results: Supervisors' Evaluation of Workplace ESL Program Behavioral Effects on Department

                          Categories for Improvement
Company   Supervisor   Production   Quality of Work   Transferability   Attitude   Supervisor's Workload
BHS       NA           -            -                 Same              -          -
BHS       NA           -            -                 Same              +          -
BHN       59           +            -                 Better            ++         +
AM        3            -            -                 Better            +          +
AM        4            +            +                 Better            ++         +

Note. BHS = Beta Health South, BHN = Beta Health North, AM = Alpha Manufacturing. Rating key:
greatly increased (++), somewhat increased (+), no change (-).

On the other hand, the majority thought that there was no change in job productivity or quality of work. That being said, some supervisors expressed in the first questionnaire that productivity, attendance, and safety were not issues with the people who attended the class. Possibly for this reason they did not report a change in productivity. Furthermore, during the mid-point and final interviews most of the supervisors and managers in charge of each training group expressed and provided evidence of changed behavior in the workplace. For example, one of the supervisors from Alpha Manufacturing stated that the learners were speaking more with the technical staff, and therefore, there was an increase in production. The other supervisor from Alpha Manufacturing mentioned that one of the learners had begun training other employees since the ESL training. Some supervisors from Beta Health South reported that as a result of the class, the learners were more confident to ask questions or for help when they did not understand and were communicating better over the phone. Unfortunately, two of the supervisors from Beta Health South who in the interviews reported the most improvement in their workers did not submit a completed questionnaire, and therefore, are not included in the supervisor questionnaire results.

4.3 Summary

In conclusion, the significant findings for each research question were the following: 1) According to the findings from the interviews and learning journals, both the supervisors and the learners had a positive reaction to the 10-week workplace language training. Moreover, when the learners communicated to their supervisors a positive reaction to the training, their supervisors most often had a positive reaction. 2a) After 30 hours of instruction, learners demonstrated learning on the standardized reading test but not on the standardized listening test. 2b) The learning journals indirectly measured learning and provided evidence of learning as well as socio-affective behavioral changes. 3) According to the questionnaires, the learners reported a change in their own behavior in the workplace, at home, and in the community. In general, when rating individuals the supervisors reported no change in attendance or safety. They reported the most behavioral improvement in self-esteem and communication skills. In evaluating the effects on the department, the supervisors reported the most improvement in (a) the learners' attitude, (b) supervisory workload, and (c) employee skills transferability. Undoubtedly, without both the supervisors' and the learners' perspectives, captured through interviews and learning journals in this study, it would not have been possible to identify accurate learning gains or the perceived benefits of the workplace English language courses.

CHAPTER V: SUMMARY AND CONCLUSIONS

5.1 Introduction

In this chapter I first provide a brief general discussion of the findings from the research questions as they relate to current research and Kirkpatrick's four-level evaluation model. Then, I provide practical implications for company management and program administrators based on the research and findings of this thesis. Finally, I address limitations of the study and suggest areas for further research.

5.2 General Discussion

The reaction level has been argued to be a relatively easy yet necessary step in the evaluation process (D. L. Kirkpatrick & J. D. Kirkpatrick, 2006).
Without the supervisors’ and learners’ positive feedback, the program would most likely fail to continue. In this study during the reaction level of evaluation as well as the other two levels analyzed, both the learners and supervisors reported an increase in the leamers’ self-esteem and motivation to learn, which can be argued to lead to increased learning. D. L. Kirkpatrick & J. D. Kirkpatrick (2006) support this statement by saying, The future of a program depends on positive reaction. In addition, if participants do not react favorably, they probably will not be motivated to learn. Positive reaction may not ensure learning, but negative reaction almost certainly reduces the possibility of its occurring (p. 22). Similarly, in their study investigating the role of motivation and other related individual difference variables such as anxiety and willingneSs to communicate in oral task performance, Kormos and Demyei (2004) found that accuracy increased when the students had a generally positive attitude to the language course itself. However, a learner’s positive reaction to the training does not guarantee improved language skills as measured by standardized tests. Ekkens (2006) investigated the amount of impact each 73 of the following had on standardized test scores: 1) the test-takers attitude toward the test and test administration, 2) the test-takers motivation to improve on the posttest, and 3) the degree to which the test content tested what was taught in class. Through informal interviews, other qualitative assessments, and a test item analysis, Ekkens found that although motivated to do well in the class, some learners purposefully performed worse on the standardized test in an effort not to graduate from the training provided by the company. In other words, in some cases because of the learners’ positive reaction to the instruction and motivation to improve their language skills, their attitude toward the test and motivation to improve on the test decreased. Therefore, both the learner’s attitude toward the test and motivation to take the training play a considerable role in obtaining accurate test results. Interestingly, even when the learners were motivated to do well on the test and in the training and had a positive reaction to the class, some did not improve on the standardized tests. These findings demonstrate the need for triangulation with assessment instruments as well as a multiple-level training evaluation. Furthermore, these findings also suggest two things of importance: 1) the company and service provider’s expectations must be clearly communicated before the training begins, and 2) program outcomes should involve incentives or tangible rewards such as a promotion for those who graduate at an advanced level of English proficiency. The finding from the second research question that discrepancies exists between the results from the standardized tests in reading and listening and the learners’ perceptions of increased speaking, listening, and reading abilities also demonstrates that evaluating learning cannot only be based on standardized assessments. For example, the reading gain in the present study is similar to the gain reported by English Works in the report by OVAE profiling exceptional workplace education programs (2005). English 74 Works reported a 5.5 gain on a CASAS test (which test was not specified, i.e. listening or reading) after an average of 66 hours of instruction. 
It is assumed that this gain was calculated by adding the individual gains and dividing by the number of test-takers. When this same calculation is applied to the sample group (n = 21) in this study, the learners demonstrate a 3.4-point gain after only 30 hours of instruction. Although it is impossible to know if the learners would continue to make learning gains at this rate, for the sake of comparison, the learners would make a 6.8 gain in 60 hours of instruction compared to English Works' 5.5 in 66 hours. On the other hand, when the average point gain is calculated for the listening test, the results show that the learners averaged a 1.9 gain after 30 hours of instruction; extrapolated to 60 hours (3.8 points), that is about 70 percent of English Works' 5.5 gain. However, according to the CASAS Life & Work Test Administration Manual (2003), the average point gain after 90 to 100 hours of instruction is 4 to 5 points. That is, even the listening scores, taken as an average, demonstrate higher per-hour gains than the averages reported by CASAS. This finding raises questions regarding how workplace education programs should report their results so that learning gains can be compared across programs. Interestingly, in the mid-1990s Sticht and Armstrong (1994) reported on adult education programs across the nation and found that the programs only reported test scores as an average program score. They argued that,

Averages conceal the large differences that may occur among the individuals in the programs. Many of the programs may have included adult students who developed quite a bit of skill. Similarly, however, there may have been many students who made little or no gain (1994, Executive Summary, Part III).

Unfortunately, most of these programs used TABE to evaluate literacy gains; hence the gains cannot be compared to the results of the present study. This issue is further complicated because many workplace education programs used and continue to use different assessment tools such as CASAS, BEST, TABE, WorkKeys, and customized assessments, as demonstrated in the OVAE report (presented in Table 4 on page 15).

Alternatively, because the results of the quantitative measures often speak more loudly to those who provide the funding than the qualitative assessments, it is important to discuss in more detail the negative gains one-third of the learners made on the CASAS tests. Negative gain is not uncommon in workplace English programs. Sticht (1999) argues that it is not unusual to find that 10 to 20 percent of learners score more poorly on the post-test than they do on the pre-test. He suggests that this negative gain is often caused by guessing or other regression effects, which often happens with multiple-choice tests. He recommends that programs with a higher potential for negative gain include frequency distributions in company reports showing the numbers and percentages of learners making various amounts of negative and zero gain. Sticht (1999) argues that reporting only average pre- and post-test scores, with the negative gains folded in, understates the improvement that actually occurs in the program. Even though the participants in this study were explicitly told not to guess on pre- or posttest items, guessing is something that could not be strictly monitored.

A comparison of the findings from research questions 2a and 2b also suggests that success is differentially measured by the standardized tests and the learning journals, and that program success is closely related to increased self-esteem and learner motivation.
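On the reporting point, Sticht's recommended frequency distribution is straightforward to produce alongside the average. The sketch below uses an invented list of per-learner gains (placeholders, not this study's data) to show the kind of report he describes.

    from collections import Counter

    gains = [14, 9, 7, 6, 5, 4, 3, 2, 2, 0, 0, -1, -3, -16]  # placeholder per-learner gains

    def band(gain):
        # Classify one learner's gain for frequency reporting.
        if gain < 0:
            return "negative gain"
        if gain == 0:
            return "zero gain"
        return "positive gain"

    distribution = Counter(band(g) for g in gains)
    total = len(gains)
    for label in ("negative gain", "zero gain", "positive gain"):
        n = distribution.get(label, 0)
        print(f"{label:<14} {n:>2} learners ({100 * n / total:.0f}%)")
    print(f"Average gain: {sum(gains) / total:+.1f}")  # the average alone conceals the spread

Reported this way, stakeholders see both the program average and how many learners fall into each band, which addresses Sticht's concern that averages conceal individual differences.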
In her discussion of language use in the workplace, Burt (2004) mentions that social factors affect a learner's attitude, effort, classroom behavior, and achievement, which supports the findings in this study. Yet, as previously demonstrated, the social factors that affect learners extend beyond the training room to the work floor. It is likely that when learners are supported by their supervisors they will have a positive attitude towards learning English. For this reason, the service providers must work to get the companies to understand that much more occurs than just measured improvements in listening and reading abilities. This study demonstrates that other social and behavioral effects, such as increased self-esteem and improved confidence to speak to native-English-speaking co-workers, occur as a result of workplace English language training.

Additionally, the results of this study suggest that a successful program must have not only supervisor buy-in (Burt, 1996), but also the buy-in of the learners. As adult learners, they need to become active learners. Marshall (2002) argues that "To become active learners, adults need to know why they are learning the content and processes they are and where and how they will use them" (p. 93). Marshall continues, stating that "the learners need to (a) have the skills that they and other stakeholders are seeking, (b) know that they have them, and (c) show that they have them" (p. 93). It is the program administrator's responsibility to provide opportunities for the learner to reflect on learning, a platform so that the learners can demonstrate the ability to apply what they learned, and assessment tools with which learners can evaluate their own progress (Marshall, 2002). The learning journals piloted in this study provided all three opportunities.

That being said, it is not an easy task to provide the company representatives with evidence of demonstrated learning as measured by the learning journals. That is, the learning journal entries collected were confidential information that had to be shared anonymously with the supervisors. This responsibility falls to the program administrators, and although it takes time, it is an important step in the evaluation process. In addition, the learners should be supplied with course objectives and intended outcomes for the course. In the first of her two handbooks, entitled Charting a Course: Planning and Implementation Tips for Program Planners & Administrators,
Similarly, Sticht (1999) in his survey of 22 employees from four companies found that the training was not viewed as only restricted to helping the employees at work. In fact, he reports that more than half of the employees thought that the programs not only helped them at work, but also at home. He also found that some 40 percent thought the programs had helped them in their communities. When the percentages reporting that the ESL program helped the learners at work, home, and/or the community were calculated, the percentages were higher than those reported by Sticht (1999). Calculated from 22 questionnaires and summed over the three training groups, 81% of the learners reported the ESL training helped them at work; 69% reported it helped them at home; and 85% of the learners reported the ESL training helped them in ’ Because this handbook was just recently published by Texas LEARN S and is not yet publicly available, the checklists were not included in this study as an evaluation method. Contact Barbara Tondre-El Zorkani at btondre@earthlink.net for more information. 78 the community. These numbers are compared to the following ieported by Sticht (1999, p. 49): 62-76% at work, 33-64% at home, and 0-55% in the community. 5.3 Implications based on Kirkpatrick’s Framework As discussed before, the fact that not only learning gains but also changes in behavior result from workplace English training proves to program administrators, practitioners, and company representatives that it takes much more than just administering one pre- and post- standardized test or a few informal classroom assessments for a workplace language program to be effective. Rather, to prove that a workplace English program was successfiil, program administrators must work together with the practitioners and the company representatives to evaluate the learners’ reaction to the program, their measurable learning, changes in their workplace behavior, as well as the overall results or programmatic impact on the company. As demonstrated in this study, this takes a combination of quantitative and qualitative assessments. However, the results of some types of programs are more difficult to measure than others. D. L. Kirkpatrick and J. D. Kirkpatrick (2006) give the example of diversity training. They mention that one major objective for diversity training is to change the attitudes of supervisors and managers toward minorities in their departments. Although the company hopes tangible results will follow, diversity training does not have tangible results that can be measured in dollars and cents. They suggest that programs with topics such as leadership, communication, motivation, time management, and empowerment, are also difficult if not impossible to measure finals results. I would add workplace language training to this list. Although gathering final results in workplace language training programs is possible, as Martin and Lomperis (2002) have demonstrated in their studies on calculating a company’s Return On Investment (ROI), it can be a very difficult 79 process for two reasons: 1) the service provider is dependent on the company to gather information, and 2) collecting data on company impact takes more time than most companies and service providers are willing to spend on evaluation. Workplace language program effectiveness is not only linked to course outcomes. In fact, experts would argue that many other factors play a role in training success. 
Such factors include a) setting explicit goals that are supported by top management, b) having an expert teaching staff and providing opportunities for staff training, c) conducting a proper needs assessment and job skills analysis and creating an operational plan that links the activities and the objectives, d) developing a strong curriculum related to the jobs and workplace materials, e) having sufficient instructional time and opportunities for practice, and finally, f) providing the learners with incentives and support services such as child care and transportation (Moore et al., 1998). Although this study does not offer a complete solution to the challenge of evaluating workplace language training programs, it does provide insight into different qualitative and quantitative evaluation tools that can be replicated in other programs. In the next two sections I provide implications for company management and program administrators.

5.4 Implications for Company Management

As demonstrated by Kirkpatrick's framework and in this study, the training facilitator or contact person at the company for the training must be sure the work environment, or "work climate," is neutral or positive. That is, everyone from the front-line supervisor to the CEO or president of the company must be on board for the training to be effective. In this study, I found that in general the front-line supervisors who attended meetings, participated in interviews, and completed questionnaires had a positive reaction to the training and documented an overall positive change in the learners' behavior. On the other hand, those who did not participate in the program development and evaluation process were more likely to complain about the inconvenience of the class time and did not react as positively to the training. Yet, it is possible that this reaction was a result of the inconvenient class time and not the fact that these supervisors were not involved in the training development and evaluation.

The fact that 91% of the learners who completed the questionnaire, or 20 out of 22 learners, felt an increase in self-esteem, and that the supervisors identified an increase in self-esteem in 10 of the 11 learners evaluated, means that the outcomes of workplace language training are not just about language. This study demonstrates an increase in teamwork among people of different nationalities and language backgrounds, which encourages and enhances the cultural competency of all employees. Moreover, workplace English language training breaks down not only language barriers in the workplace but also cultural barriers built by ignorance and incorrect stereotypes that are prevalent in our society. For this reason, cross-cultural training for the supervisors and management in the company providing training is an integral component of the workplace English training package. The cross-cultural training can be helpful not only in explaining some of the various world views, but also in providing suggestions for avoiding communication breakdowns and suggesting repairs when one occurs.

5.5 Implications for Program Administrators

As the results of this study demonstrate, the supervisors and the learners may have differing perspectives on the effectiveness of the training program. These perceptions can only be captured through a variety of qualitative and quantitative assessments.
Therefore, the program administrators and practitioners should do everything they can to be sure the learning climate within the company is at least neutral, if not positive, as described in Table 1 on page 8. D. L. Kirkpatrick and J. D. Kirkpatrick (2006) recommend that one way to create a positive climate is to involve management in the development of the program, such as in determining the training needs. Another approach would be to present a condensed version of the training to the management before conducting the training with the participants. The results of this study also suggest that it is important to evaluate not only behavior, but also reaction and learning, in case no directly measurable or observable change in behavior occurs. Then the program administrator is able to determine whether there was no change as a result of the training, the wrong job climate, or a lack of rewards (D. L. Kirkpatrick & J. D. Kirkpatrick, 2006). As mentioned previously in chapter two and in the previous section, management needs to be involved in the evaluation process.

Another consideration that program administrators should take into account is weighing the costs and benefits of conducting the four levels of evaluation. D. L. Kirkpatrick and J. D. Kirkpatrick (2006) point out that if the training program is only going to run one time, it might not be justified to spend the time and money it takes to evaluate behavior and/or results. However, if the program is going to be repeated, the possible improvements resulting from the four-level evaluation may justify the extensive evaluation effort. For example, both of the programs evaluated in this study were 10-week classes. Based on our workplace language program's past relationship with Alpha Manufacturing, future training programs were likely as long as the program proved to be effective. Similarly, although Beta Health was a new client, management had expressed interest in continuing, as they had a large non-native English-speaking population that was interested in the training. Importantly, continuation for both companies depended on the effectiveness of the training program. Therefore, a more extensive evaluation process was important for the future of these two programs. For this reason, I highly encourage workplace language program administrators to discuss the possibility of future training before the training session even begins. If possible and needed, the program administrator should encourage the company representative to commit to a program rather than just a training session. For example, if a company asks me to recommend the best training program for their workers, and through a needs assessment I find that most of the potential participants are at a beginning English proficiency level, I will recommend a 30-week program that is spread over a year. That would provide the learners with short, intense training sessions with specific objectives, and also combat the misconception that the learners will be fluent in English after a 10-week session (see Burt, 2004, for more detail on misconceptions).

Finally, as the findings for research question 2a demonstrate, standardized tests cause many challenges in accurately reporting learning gains in workplace language programs. For example, there is not one specific reason why learners may achieve negative gains.
As mentioned before, negative gains could be attributed to a number of factors such as varied educational levels, language backgrounds, length of time in the United States, and exposure and familiarity with taking standardized tests. However, that does not mean that standardized tests should not be used to evaluate workplace training. To ensure quality programs, and in hopes of one day comparing workplace language programs across the United States, program administrators must be held accountable and must implement a variety of assessments, including standardized tests of very high quality. The benefits of standardized tests are that they are easily used as a quantitative measure of learning gains, can be used to compare various workplace English programs, and have high face validity. Moreover, they may be able to demonstrate general proficiency gains over time. However, the interpretation of such test results does need to be considered in light of other measures, such as qualitative ones like the learning journals used in this study, in order to understand the full picture of the benefits related to workplace English courses. Notably, at the end of the training both companies in this study decided to continue partnering with the Literacy Organization to provide language training for their employees. I found that the evaluation process made it easier to build stronger and more trusting relationships with the company representatives, which encouraged repeat business.

5.6 Limitations

While some results were discovered in response to the research questions, the data analysis faced limitations. First of all, this study is a case study that investigated only three workplace classes; it is therefore difficult to draw generalizations from the results. Another challenge was the inconsistency of those who completed the different evaluative measures. For example, not all the learners who attended training were present for pre- and post-testing. Not all the learners participated in the learning journals, due to low-level writing abilities. Not all the supervisors participated in the meetings and the evaluations. Hence, a limitation of this study is that the results come only from the perspectives of those who participated in the journals, interviews, and meetings, even though more people were involved in the training. That being said, in workplace training situations perfect attendance is nearly impossible. Program administrators can send out reminders and work with the company to provide incentives for attendance for both the learners and the supervisors; yet, it is also important for program directors to assess and monitor how much staff time is being spent on gathering evaluation data in relation to the benefits of collecting the data.

As discussed in chapter two, coordinating and evaluating workplace language training programs takes a considerable amount of time and effort. Extensive evaluation of the training program adds an extra burden to the service provider's already limited time. For this reason, and because it was my hope to replicate this evaluation process in other workplace language training settings, these case studies were conducted in their natural settings. For instance, we held a maximum of one additional testing session for participants who did not come to the scheduled testing session. As another example, I reminded the supervisors only twice to complete and submit their questionnaires.
Although this is the reality of working with multiple stakeholders in workplace language training settings, it is also a limitation of this study. I was unable to pre- and posttest all of the learners who participated in the training. In addition, although I interviewed most of the supervisors, I did not receive completed surveys and questionnaires from all of them, and therefore could not include that data in this study. The interviews with the learners and supervisors were the most time-intensive measure, as I conducted each interview myself for consistency, and travel time should be included in that accounting. In future studies, I recommend that the interview workload be divided among the service provider staff (director, coordinator, instructors), perhaps even including a company training facilitator.

Although this is in part a qualitative study, a possible limitation is that a second researcher did not verify the categories that were created for the learning journal entries. Perhaps for a larger study this would be necessary. Furthermore, it is important to consider the possibility that the data were influenced by the Hawthorne Effect. In other words, it is possible that the presence of the researcher, who was also the program director, may have resulted in changed behavior of the learners and the company representatives due to the fact that they were included in the study (Mackey & Gass, 2005). Because I have a supervisory role in the Literacy Organization, it is possible that 1) the learners did not respond as honestly as they would have if I had no relation to the instructor or their bosses, and 2) the company management reacted differently to me in person rather than giving their honest opinion to a third party. In a larger study that also includes an in-depth study of return on investment and overall programmatic results, I would recommend program administrators use an outside evaluator to objectively gather and report on the data. This, however, is costly and is most likely not feasible for smaller programs or the initial evaluation levels.

Another limitation is that this study focuses more heavily on levels one through three of Kirkpatrick's (2006) training evaluation model. That is, the focus of this study is on reaction, learning, and behavior, but not results as defined by D. L. Kirkpatrick and J. D. Kirkpatrick (2006). Martin and Lomperis (2002) argue that a necessary component of evaluation is determining cost benefits. This study begins to look at the participants' reaction to the course, the learning that took place, as well as on-the-job impact; however, it does not offer proof that the training is leading to increased profits for the companies. From these results, for example, we cannot calculate the cost benefits or return on investment (Martin & Lomperis, 2002). Despite these limitations, the data provided concrete examples of positive reaction to the training, learning as a result of the training, and behavioral changes in the workplace, and also provided a foundation to build on for future research.

5.7 Directions for Future Research

As I have mentioned before, incorporating the four levels of evaluation is quite time-intensive and therefore costly. However, it will cost program administrators immeasurably more to lose a client because they were unable to prove the workplace language program was effective.
For this reason, and based on the limitations of this study, in the future I hope to further research and implement the fourth level of Kirkpatrick's framework, drawing on the return on investment (ROI) models that have been proposed by Martin and Lomperis (2002) and others. There is a need for more empirical studies that demonstrate the effectiveness of, and best practices for, using return on investment as an evaluation tool. Furthermore, it would be advantageous to replicate this study in a number of programs over time, and to conduct delayed performance and/or reaction evaluations with both the learners and the company management.

5.8 Final Comments

In conclusion, the English language learners, the company representatives, and the service provider found the workplace English language training to be successful. Despite the challenges associated with evaluating workplace English training programs at multiple levels, this study found that to prove whether or not a workplace English program is successful, program administrators must work together with the practitioners and the company representatives to evaluate (a) the reaction to the program, (b) the learners' measurable learning gains, (c) changes in the learners' workplace behavior, and (d) the overall results or programmatic impact on the company. As demonstrated in this study, this takes a combination of quantitative and qualitative assessments.

APPENDICES

APPENDIX A

Table 2
Handbooks and Reviews on Workplace Literacy from the 1990s

 #   Title                                                         Author                   Year  Funding Source
 1   The Impact of Workplace Literacy Programs: A New Model for
     Evaluating the Impact of Workplace Literacy Programs          Mikulecky & Lloyd        1993  National Center on Adult Literacy (NCAL)
 2   Handbook of Ideas for Evaluating Workplace Literacy
     Programs                                                      Mikulecky & Lloyd        1994  Indiana University
 3   Assessment Approaches and Impact Results in Workplace         Mikulecky, Lloyd,
     Literacy Programs                                             & Kirkley                1995  National Center on Adult Literacy (NCAL)
 4   The Military Experience and Workplace Literacy: A Review
     and Synthesis for Policy and Practice                         Sticht                   1995  National Center on Adult Literacy (NCAL)
 5   Evaluation of Workplace Literacy Programs: A Profile of
     Effective Instructional Practices                             Mikulecky & Lloyd        1996  National Center on Adult Literacy (NCAL)
 6   A Review of Recent Workplace Literacy Programs                Mikulecky & Lloyd        1996  Office of Educational Research and Improvement
 7   Developing and Evaluating Workplace Literacy Programs:        Mikulecky, Lloyd,
     A Handbook for Practitioners and Trainers                     Kirkley, & Oelker        1996  National Center on Adult Literacy (NCAL)
 8   Interviews from the Field                                     Burt                     1997  Office of Educational Research and Improvement
 9   Adult Education at Work
     (http://slincs.coe.utk.edu/pdf/adulted.pdf)                   Davis                    1997  Tennessee Department of Education
10   Addressing Literacy Needs at Work: Implementation and        Moore, Myers, Silva,           US Department of Education, Planning and
     Impact of Workplace Literacy Programs                         & Alamprese              1998  Evaluation Service
11   Manager's Informational Packet                                Project VISIONS/               Office of Vocational and Adult Education
                                                                   National Workplace       1998  (OVAE)
                                                                   Skills Project
12   Testing and Accountability in Adult Literacy Education        Sticht                   1999  Applied Behavioral & Cognitive Sciences
13   Turning Skills into Profit: Economic Benefits of Workplace                                   Office of Vocational and Adult Education
     Education Programs                                            The Conference Board     1999  (OVAE)

APPENDIX B

Table 7
Participants' Personal Data

Company         Participant   Age  Country of           Length in US  Years of                Yrs in      Interview  Journal
                ID/Gender          Origin               in Years      Education               Company
Alpha           17 Male       60   Vietnam              10 to 15      HSD                     5 yrs 9 m              Yes
Manufacturing   20 Male       33   NA                   1 to 5        HSD                     11 m        Yes        Yes
                21 Female     29   Dominican Republic   1 to 5        HSD                     1 yr 4 m    Yes        Yes
                23 Female     37   Vietnam              1 to 5        HSD                     2 yrs 5 m   Yes        Yes
                24 Male       30   Cuba                 ≤ 1           University              2 m                    Yes
                25 Male       61   Philippines          10 to 15      University              9 m                    Yes
                26 Male       60   Vietnam              20+           HSD                     20 yrs 5 m             Yes
                16 Female     23   Vietnam              20+           10th grade              7 m
                22 Female     31   Dominican Republic   10 to 15      HSD                     16 m        Yes
                56 Female     31   Ethiopia             1 to 5        HSD                     1 yr 3 m
Beta Health     47 Female     53   Bangladesh           1 to 5        HSD                     5 yrs                  Yes
North           49 Female     48   Italy                20+           HSD                     10 yrs      Yes        Yes
                50 Female     53   Chile                10 to 15      HSD                     9 yrs                  Yes
                51 Female     45   Mexico               20+           8th grade               14 yrs      Yes        Yes
                44 Female     52   Cambodia             15 to 20      ≤ 3 yrs                 14 yrs
                45 Female     40   Eritrea              5 to 10       ≤ 3 yrs                 7 yrs
                46 Male       53   Cambodia             20+           8th grade               6 yrs
                48 Male       56   Dominican Republic   10 to 15      8th grade               8 yrs
Beta Health     32 Female     42   Guatemala            20+           6th grade               7 yrs                  Yes
South           34 Female     55   Dominican Republic   15 to 20      HSD/GED                 6 yrs       Yes        Yes
                35 Female     64   Cuba                 20+           10th grade              8 yrs 3 m   Yes        Yes
                36 Female     42   Eritrea              10 to 15      8th grade               14 yrs      Yes        Yes
                37 Female     42   Cuba                 5 to 10       HSD                     3 yrs       Yes        Yes
                38 Female     41   Puerto Rico          15 to 20      GED                     6 yrs 3 m   Yes        Yes
                39 Male       42   Mexico               10 to 15      GED                     2 yrs       Yes        Yes
                40 Female     25   Cuba                 5 to 10       HSD                     4 yrs 10 m             Yes
                41 Female     46   Russia               10 to 15      HSD                     7 yrs                  Yes
                53 Female     49   Dominican Republic   20+           HSD + 2 yrs university  6 yrs 8 m              Yes
                27 Female     29   Sudan                5 to 10       ≤ 3 yrs                 5 yrs
                28 Female     41   Ethiopia             10 to 15      10th grade              9 yrs 1 m
                29 Male       32   Sudan                10 to 15      10th grade              3 yrs
                30 Male       35   Sudan                10 to 15      6th grade               5 yrs
                31 Female     44   El Salvador          20+           ≤ 3 yrs                 3 yrs
                33 Female     36   Sudan                10 to 15      ≤ 3 yrs                 5 yrs       Yes
                42 Female     39   Ethiopia             1 to 5        HSD                     5 yrs
                43 Male       33   Sudan                1 to 5        HSD + 2 yrs university  6 m

Note. 21 participants out of the 36 completed a learning journal.
Note. HSD = High School Diploma

Appendix C

Employee Journal Guidelines

Purpose: To help you see the improvements you are making in learning English

1. Things I Find Easy in English
2. Things I Find Hard in English
3. Things I Would Like to Be Able to Do in My Work in English
4. Things I Learned This Week
5. Examples of When I Used the New Things I Learned (At work, home, or in the community)

GUIDELINES:
• Write your goal for the first 10 weeks of this class. Example: Goal #1: Communicate better with my supervisor
• Put your name inside the cover of the notebook.
• Before each journal entry, write the date (month/day/year).
• Make a journal entry each week for at least 3 of the categories. You may write as often as you like, but no less than once a week.
• Give as many examples as you can.

Example of a journal entry:

Examples of When I Used the New Things I Learned (At work, home, or in the community)
1/23/07 or January 23, 2007
Last week I learned how to make [suggestions at] team meetings. Today, for the first time, I wrote [one] on a piece of paper and gave it to my supervisor. She liked my idea and [said we would talk] about it at the [next] team meeting.

• You do NOT need to bring the journal to class every day.
• Your instructor will collect your journals every three weeks to make a photocopy and will then return them to you.
• At the end of the class, you may keep your journal.

Appendix D

Sample Learning Journal Entry of Student's Work

02-04-2007
[Photocopy of a handwritten journal entry; the handwriting is not legible in this copy.]

Appendix E

PARTICIPANT INTERVIEW QUESTIONS

Pre-Instruction
Purpose: solicit information on goals, perceived needs, reason for enrolling

1. When do you have to use English on the job? With whom? What do you need to talk about? (What do you do?)
2. What do you have difficulty in understanding on the job?
3. What English language skills (reading, writing, speaking, comprehension, pronunciation, grammar, American culture) do you need to improve in order to do your job?
4. Why are you taking this class? (Tell me about the class, the materials you are using in the class, the journal.)
5. What do you hope to achieve in this class? What are the most important goals for this class?
6. How do you measure your success in this class? / How will you know you have learned something? (i.e., teacher feedback, grades, test scores, homework, improved communication on the job, teacher's opinion, feeling, etc.) (What do you think about the standardized test?)

Exit-Interview
Purpose: learners' perceptions (sharing experiences in learning)

1. What was your goal for this class?
2. What kind of progress have you made towards your goal?
3. What do you know now that you did not know before the training?
4. Do you think the course was helpful? Why/why not?
5. What can you do now that you could not do before?
6. Did you have any language or culture-related problems in the workplace recently? If so, how did you handle the problem?
7. What did you think of the books and/or materials used in class?
8. What part of the training was most helpful for your job? In what area would you like more training?
9. How have you benefited from this class?
10. How is this class different from other ESL classes you have taken?

Appendix F

SUPERVISOR INTERVIEW QUESTIONS

Needs Assessment
Purpose: to determine learner needs and expected outcomes

1. What specific communication situations do you encounter in the workplace?
2. What are the two most important outcomes/goals for this class?
3. When would you know classes were successful? How would you like to see success measured?
4. Are there opportunities for workers to advance?
5. Are the skills of all the workers appreciated and used?
6. Is the worker input in decision making valued?

Impact Survey
Purpose: to compare the initial assessment of learner needs and expected outcomes with actual results.

1. What changes have you observed? Give an example of some of the improvements.
2. Has teamwork among employees improved?
3. Have participants been able to participate more effectively in internal and external socializing and networking?
4. What do you think of the instructor's teaching style and ability?
5. Were there sufficient resources to run the program effectively? (i.e., class time, building facility, classroom materials)
6. What did you think of the books and/or materials used in class?
7. How has this training impacted your department/this company?

Appendix G

PARTICIPANT QUESTIONNAIRE: EMPLOYEE DEVELOPMENT EFFECTS

Has this ESL program helped you at work?                          YES   NO   Don't Know   Examples
1. Read job materials better?
2. Write job materials better?
3. Listen/understand on the job better?
4. Speak/communicate on the job better?
5. Work better in teams?
6. Improve your confidence?
7. Reduce waste, scrap, errors, etc.?
8. Know more about company policies, etc.?
9. Feel confident about trying for a promotion?
10. Learn better in company training programs?
11. Improve your morale with the company?

Has this ESL program helped you at home?                          YES   NO   Don't Know   Examples
12. Have you started reading more at home?
13. Do you write more/better at home?
14. Do you communicate in English at home?
15. Do you help your children/grandchildren with homework more?
16. Do you read to your (grand)children more?

Has this program helped you in your community?                    YES   NO   Don't Know   Examples
17. Do you feel more confident about reading in stores, offices, etc.?
18. Do you feel more confident writing in government forms, etc.?
19. Has this program made it easier for you to speak in public?
20. Has the program led you to consider taking more education or training programs?

THANK YOU FOR YOUR TIME!

Appendix H

SUPERVISOR'S GENERAL RATING OF PARTICIPANTS

BUSINESS ____________   SUPERVISOR ____________   DATE ____________

INSTRUCTIONS: Please indicate how much each student has improved since enrolling in the Customized Workplace English training. For each category, enter your rating using the following scale:
a) Participant improved: 3 - Greatly, 2 - Moderately, 1 - Slightly, 0 - No change or worse.
b) If you do not have adequate information, use N/O - Not Observed.
c) If the employee needed no improvement, mark N/A - Not Applicable.

Please write specific comments on individual participant forms. Let us know in what ways you can see an improvement in language performance and whether the learner has been promoted, made job changes, or increased responsibilities because of improved English skill level.

PARTICIPANT   Increased Communication   Improved       Increased    Increased     Improved
NAME          Skills at Work            Productivity   Attendance   Self-esteem   Safety

Appendix I

SUPERVISORS' EVALUATION OF PROGRAM EFFECTS IN THEIR DEPARTMENTS

Supervisor's Name ____________   Department ____________   Date ____________
Total employees in dept. ____________
*How many employees in your department participated in the program? ____________

In your opinion, now that the initial course has been completed, how would you rate its effects on the participants that you supervise? Circle the number that shows how you feel.

PRODUCTION:
5 Greatly increased   4 Somewhat increased   3 Stayed the same   2 Somewhat decreased   1 Greatly decreased

QUALITY OF WORK:
5 Greatly improved   4 Somewhat improved   3 Stayed the same   2 A few more errors   1 Many more errors

TRANSFERABILITY: After completing the program, when new technical equipment or training comes to your department, do you think your employees will be able to handle it:
Better   Same   Worse

ATTITUDE: Regarding the employees in your department who participated in the program, how much improvement in attitudes towards themselves, their jobs, or the company did you observe? (For example: greater cooperation, team-building, etc.)
5 A lot   4 Some   3 Same amount as before program   2 Little   1 None

Since your employees participated in the program, do you feel that your job as a supervisor has become:
5 Much easier   4 Somewhat easier   3 Same as before   2 Somewhat more difficult   1 Much more difficult
Please give an example: ____________

*If your company plans to continue to have employees participate in similar programs in the future, what would you recommend to improve the way the program is run?

*Based on the effect that the program has had on the employees from your department who participated, would you recommend additional employees to the program? Why or why not?

*Of the employees in your department who participated in the program, have any shown progress in potential for advancement?

Appendix J

Table 7
Learning Journal Participants' CASAS Scores and Learning Gain Realized

                        Listening test             Reading test
Company        ID    Pre   Post   Gain/Loss    Pre   Post   Gain/Loss
Alpha          21    216   214       -2        213   210       -3
Manufacturing  23    206   212        6        218   213       -5
               20    219   213       -6        212   222       10
               17    203   207        4        222   216       -6
               24    229   229        0        237   247       10
               25    219   225        6        210   216        6
               26    212   210       -2        220   222        2
Beta Health    47    207   214        7        216   215       -1
North          50    219   210       -9        220   230       10
               49    216   212       -4        209   216        7
               51    219   223        4        229   213      -16
Beta Health    53    225   221       -4        NA    231       NA
South          41    223   235       12        NA    234       NA
               36    214   221        7        226   224       -2
               32    212   221        9        209   NA        NA
               40    221   221        0        229   222       -7
               34    214   221        7        213   220        7
               37    221   225        4        212   224       12
               38    221   216       -5        213   215        2
               39    221   223        2        222   226        4
               35    214   NA        NA        215   229       14

Appendix K

Participants' Perception of Improvement in Speaking

Comment #   Response
1    I feel so happy because I learn to pronounce a few words in the class [I] feel secure and comfortable.
2    I'm feeling happy because today I can to give a direction for a family patient she was lost she need to find the exit.
3    ...Now my supervisor give some address for something she needs. I can ask her if I don't understand.
4    Now I'm using: Hold the elevator please. Where are you going?
5    Last week I went to clean a patient room in the bathroom the toilet it was clogged and I could call my supervisor and explain the problem
6    I learned to answer the patient call lights quickle. Now I used this in my work hours to get to the patient fast
7    When Im working and the people, Patients, Family Patients, or CoWorkers ask me how to get to any area. I Try to give them the more easy way.
8    I learned to speak a litle more clear... I can have more complete sentences.
9    I'm feel more confortable and confident. I'm feel that my English is more understandable, I'm pronouncing a little more better than when I started.
10   My biggest challenge is to be able to talk with my supervisor. The most frustrated thing is when I have to say something to [the supervisor], [the supervisor] never listens, [my supervisor] always tried to finish the word for me or to put words in my mouth that I don't want to say. Now, I can stop [the supervisor] and I am very certain that [the supervisor] can understand better when I talk.
11   I speak more slow in order to pronunciate better.
12   I'm talking a lot better. The best example is: before I talked and then thinked, now I think before I talk and the results are very good. I feel more confident and better with myself.
13   I did a corrective action to one of my staff member today... It was very nice because while I was talking I was thinking: "Oh I'm doing very good," I was nervous but I was very confident with what I was doing. It was my first time doing this but I guess I did good.
     My boss gave me a "wow", she said that I did very good to be my first time.
14   I learned English by talking to the patients at the hospital.
15   I learned this week how give directions within the building and directions in an elevator.
16   I have been using new words that I've learned at work.
17   I can talk more with my pastor's wife, she is American
18   I think that I speak a little more better using the new grammatical forms that I have learned.
19   Last week I reported a toilet clogget or plugged up in 3E room 30.
20   Wen I have telephone calls at house for an apoiment I can ask for directions beter. Wen I go to store I can ask for thing am luking for.
21   Now I takin more with my supervisor an [my team leader] and wi have more comunication.
22   We are learning pronunciation with mirro. First time I was shy but I know that way can help my pronunciate better.
23   I used the new things I learned at work about some sentence polites. For examples: Could you please say that more slowly.

Appendix L

Learner's Responses to Behavioral Questionnaire by Company

                                                          Alpha           Beta North      Beta South
ID   Questions                                          YES NO DK       YES NO DK       YES NO DK
Has this ESL program helped you at work?
 1   Read job materials                                  6               5   1           10
 2   Write job materials                                 6               5   1            8
 3   Listen/understand                                   6               6               10
 4   Speak/communicate                                   5   1           5   1           10
 5   Work better in teams                                5   1           6               10
 6   Improve confidence                                  5   1           5   1           10
 7   Reduce waste, scrap, errors                         5   1           5   1            7   2
 8   Know more about company policies                    5   1           3   2            4   4
 9   In trying for a promotion                           5   1           4   1            2   8   1
10   In company training programs                        5   1           1   5            8
11   Improve morale                                      5   1           1   1   4        8
Has this ESL program helped you at home?
12   Reading more                                        6               5               10
13   Writing more/better                                 6               3   1   1        9
14   Communicate in English                              6               5                2   8   1
15   Helping (grand)children with homework               3   1   2       1   5            5   3
16   Reading to your (grand)children more                3   1   2       1   4            5   3
Has this program helped you in your community?
17   More confidence about reading in stores, offices    6               4   1           10
18   More confident writing in government forms          4   1   1       4   2            7   1
19   Easier to speak in public                           5   1           5   1           10
20   Consider taking more education or training
     programs as a result                                5   1           5   1           10

Note. DK = Don't Know

REFERENCES

Adult Basic and Literacy Education (ABLE). (2005). Ohio workplace education resource guide. Retrieved March 9, 2007, from https://www.owens.edu/workforce_cs/WorkplaceEducationGuide.pdf

Burkhart, J. (1996). Evaluating workplace education program effectiveness. Denver: Colorado State Department of Education. (ERIC Document Reproduction Service No. ED399435)

Burt, M. (1994a). The cafeteria workers' skills enhancement training program: Performance report. Washington, DC: US Department of Education. (ERIC Document Reproduction Service No. ED368957)

Burt, M. (1994b). The skills enhancement training program. Washington, DC: Local 32, Food and Beverage Workers Union. (ERIC Document Reproduction Service No. ED407880)

Burt, M. (1995). Selling workplace ESL instructional programs. Washington, DC: Project in Adult Immigrant Education and National Clearinghouse for ESL Literacy Education. (ERIC Document Reproduction Service No. ED392315)

Burt, M. (1997). Workplace ESL instruction: Interviews from the field. Washington, DC: Project in Adult Immigrant Education/National Clearinghouse for ESL Literacy Education (NCLE)/Center for Applied Linguistics.

Burt, M. (2004). Issues with outcomes in workplace ESL programs. Washington, DC: National Center for ESL Literacy Education (NCLE).

Burt, M., & Keenan, F. (1995). Adult ESL learner assessment: Purposes and tools.
Washington, DC: National Center for ESL Literacy Education (NCLE).

Burt, M., & Saccomano, M. (1995). Evaluating workplace ESL instructional programs. Washington, DC: Project in Adult Immigrant Education/National Clearinghouse for ESL Literacy Education (NCLE)/Center for Applied Linguistics. (ERIC Document Reproduction Service No. ED386961)

Comprehensive Adult Student Assessment System (CASAS). (2007). CASAS 2007 catalogue. Retrieved November 21, 2006, from www.casas.org

Comprehensive Adult Student Assessment System (CASAS), Foundation for Educational Achievement. (2003). Life and work test administration manual. San Diego, CA.

Continuing Education Institute. (1998). Global 2000 National Workplace Literacy Program: Final report. Washington, DC: US Department of Education. (ERIC Document Reproduction Service No. ED427183)

Crocker, J., Sherman, R., Dlott, M., & Tibbetts, J. (2002). An introduction to ESL in the workplace: A professional development packet. Washington, DC: US Department of Education. Retrieved March 9, 2007, from http://www.pro-net2000.org/CM/content_files/89.pdf

Davis, D. C. (1998). Adult education at work. Nashville, TN: Tennessee Department of Labor and Workforce Development, Office of Adult Education. Retrieved March 7, 2007, from http://slincs.coe.utk.edu/pdf/adulted.pdf

Ekkens, K. J. (2006). Investigating test performance: A workplace English case study. Unpublished manuscript, Michigan State University.

Friedenberg, J., Kennedy, D., Lomperis, A., Martin, W., & Westerfield, K. (2003). Effective practices in workplace language training: Guidelines for providers of workplace English language training services. Alexandria, VA: TESOL.

Gardner, D. P. (2000). Learning at work. Knoxville, TN: Center for Literacy Studies/The University of Tennessee. Retrieved March 9, 2007, from http://aeonline.coe.utk.edu/pdf/learnatwrk.pdf

Grognet, A. G. (1994, November). ESL and the employment connection. Presentation at the Office of Refugee Resettlement English Language Training Consultations, Washington, DC. (ERIC Document Reproduction Service No. ED378843)

Grognet, A. G. (1996). Planning, implementing, and evaluating workplace ESL programs. Washington, DC: National Clearinghouse for ESL Literacy Education (NCLE)/Center for Applied Linguistics. (ERIC Document Reproduction Service No. ED406866)

Grognet, A. G. (1997). Performance-based curricula and outcomes: The Mainstream English Language Training project (MELT) updates for the 1990s and beyond. Denver, CO: Spring Institute.

Hyde, P., Clayton, B., & Booth, R. (2004). Exploring assessment in flexible delivery of vocational education and training programs. Adelaide, Australia: National Centre for Vocational Education Research (NCVER). Retrieved March 9, 2007, from http://www.ncver.edu.au/research/proj/nr0007.pdf

Imel, S. (2003). Whatever happened to workplace literacy? Washington, DC: Clearinghouse on Adult, Career, and Vocational Education.

Jurmo, P. (2004). Workplace literacy education: Definitions, purposes, and approaches. Focus on Basics, 7, 22-26.

Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). San Francisco: Berrett-Koehler.

Kormos, J., & Dörnyei, Z. (2004). The interaction of linguistic and motivational variables in second language task performance. ZIF, 9(2). Retrieved June 26, 2007, from http://www.spz.tu-darmstadt.de/projekt_ejournal/jg-09-2/beitrag/kormos2.htm

Lynch, B. K. (2003). Language assessment and programme evaluation.
Edinburgh: Edinburgh University Press.

Mackey, A., & Gass, S. M. (2005). Second language research: Methodology and design. Mahwah, NJ: Lawrence Erlbaum.

Mansoor, I. (1993). REEP Federal Workplace Literacy Project: Performance report. Washington, DC: US Department of Education. (ERIC Document Reproduction Service No. ED363146)

Marshall, B. (2002). Preparing for success: A guide for teaching adult English language learners. Washington, DC: Center for Applied Linguistics.

Martin, W. M., & Lomperis, A. E. (2002). Determining the cost benefit, the return on investment, and the intangible impacts of language programs for development. TESOL Quarterly, 36(3), 399-429.

McGroarty, M., & Scott, S. (1993). Workplace ESL instruction: Varieties and constraints. ERIC Digest. Washington, DC: National Clearinghouse for ESL Literacy Education. (ERIC Document Reproduction Service No. ED367190)

Mikulecky, L. (2000, October). Dr. Larry Mikulecky response to questions from the workplace discussion list. Message posted to http://www.nifl.gov/lincs/discussions/workplace/mikulecky.html

Mikulecky, L., & Lloyd, P. (1994). Handbook of ideas for evaluating workplace literacy programs. Bloomington, IN: Indiana University.

Mikulecky, L., & Lloyd, P. (1996a). Effective workplace literacy programs: A guide for policymakers. Philadelphia, PA: National Center on Adult Literacy.

Mikulecky, L., & Lloyd, P. (1996b). Evaluation of workplace literacy programs: A profile of effective instructional practices. Philadelphia, PA: National Center on Adult Literacy.

Mikulecky, L., & Lloyd, P. (1997). Evaluation of workplace literacy programs: A profile of effective instructional practices. Journal of Literacy Research, 29(4), 555-585.

Mikulecky, L., Lloyd, P., Horwitz, L., Masker, S., & Siemantel, P. (1996). A review of recent workplace literacy programs and a projection of future challenges. Philadelphia, PA: National Center on Adult Literacy.

Mikulecky, L., Lloyd, P., Kirkley, J., & Oelker, J. (1996). Developing and evaluating workplace literacy programs: A handbook for practitioners and trainers. Philadelphia, PA: National Center on Adult Literacy.

Moore, M. T., Myers, D., Silva, T., & Alamprese, J. A. (1998). Addressing literacy needs at work: Implementation and impact of workplace literacy programs. Final report. Washington, DC: Department of Education. (ERIC Document Reproduction Service No. ED426285)

Mosenthal, P. B., & Hinchman, K. A. (1993). Syracuse Labor-Management Consortium Workplace Literacy Skills Improvement Project: Evaluation report. Washington, DC: Office of Vocational and Adult Education. (ERIC Document Reproduction Service No. ED355408)

Nickols, F. (2000). Evaluating training: There is no "cookbook" approach. Retrieved May 9, 2007, from http://home.att.net/~nickols/evaluate.htm

Office of Vocational and Adult Education. (1998). Manager's informational packet. Washington, DC: Project VISIONS/National Workplace Skills Project/Office of Vocational and Adult Education. (ERIC Document Reproduction Service No. ED425316)

Office of Vocational and Adult Education. (2005). Workplace education program profiles in adult education. Washington, DC: Office of Vocational and Adult Education/Institute for Work and the Economy, DTI Associates, Inc.

O'Malley, M., & Pierce, L. V. (1996). Authentic assessment for English language learners: Practical approaches for the teacher. New York, NY: Addison Wesley Publishing Company.

Pierce, L. V., & O'Malley, J. M. (1992). Performance and portfolio assessment for language minority students. NCBE Program Information Guide Series, 9.

Sawyer, & Tondre, B. (2003).
Tennessee ESOL in the workplace: A training manual for ESOL supervisors and instructors. Knoxville, TN: Tennessee Department of Labor and Workforce Development, Office of Adult Education/University of Tennessee Center for Literacy Studies. http://www.cls.utk.edu/esol_workplace.html

Sticht, T. (1999). Testing and accountability in adult literacy education. El Cajon, CA: Applied Behavioral & Cognitive Sciences, Inc.

Sticht, T. G. (1995). The military experience and workplace literacy: A review and synthesis for policy and practice. Philadelphia, PA: National Center on Adult Literacy, University of Pennsylvania.

Sticht, T. G., & Armstrong, W. B. (1994). Adult literacy in the United States. Retrieved February 27, 2007, from http://www.nald.ca/library/research/adlitUS/cover.htm

Tondre-El Zorkani, B. (2007). Charting a course: Responding to the industry-related adult basic education needs of the Texas workforce. Handbook #1: Planning and implementation tips for program planners and administrators. Houston, TX: Texas LEARNS. (For more information, contact Barbara Tondre-El Zorkani, btondre@earthlink.net)

Van Duzer, C., & Berdan, R. (2000). Perspectives on assessment in adult ESOL instruction. In the Annual Review of Adult Learning and Literacy. Cambridge, MA & San Francisco, CA: National Center for the Study of Adult Learning and Literacy & Jossey-Bass Publishers.