This is to certify that the dissertation entitled

Assessing consumer-centered case management programs in Michigan: Development of a measurement model and measures of implementation

presented by David L. Loveland has been accepted towards fulfillment of the requirements for the Ph.D. degree in Psychology.

Major professor

Date

ASSESSING CONSUMER-CENTERED CASE MANAGEMENT PROGRAMS IN MICHIGAN: DEVELOPMENT OF A MEASUREMENT MODEL AND MEASURES OF IMPLEMENTATION

By

David Lynn Loveland

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Psychology

2002

ABSTRACT

ASSESSING CONSUMER-CENTERED CASE MANAGEMENT PROGRAMS IN MICHIGAN: DEVELOPMENT OF A MEASUREMENT MODEL AND MEASURES OF IMPLEMENTATION

By

David Lynn Loveland

The consumer-centered, strengths-based model of mental health service delivery is slowly being disseminated throughout the community mental health system in the United States.
However, despite growing support for consumer-centered, strengths-based services, there has been limited research on the application of this model. Therefore, this study was implemented to develop a methodology and conceptual model for evaluating the effectiveness of consumer-centered, strengths-based case management services that serve adults with serious mental illness (SMI). A consumer perspective was used to guide the development and application of an evaluation framework and specific measures of implementation. The project consisted of interviewing a cross-sectional sample of 56 consumers and 13 case managers in two case management programs in a rural community mental health center (CMHC) in Michigan. Results were mixed between the two case management programs, with most of the predicted relationships occurring in the standard case management program. Results from the standard case management program supported the conceptual model and most of the specific measures. Findings from the study also indicated that the consumer's perspective can be an effective and informative framework for evaluating consumer-centered, community-based services for individuals with serious mental illness. Detailed psychometric analyses are provided for all measures used in the study. Implications of the study's findings for future research are discussed.

To my parents, Lynn and Shirley, and my sisters, Deb and Sally: your love and support gave me the strength and courage to complete my doctorate.

ACKNOWLEDGEMENTS

I am indebted to many people for their assistance in the development and completion of my dissertation. I am deeply indebted to all the individuals receiving services at the Ionia County Community Mental Health Center, especially those who were willing to participate in the study. I thank you for sharing your time, stories, wisdom, and kindness with me. I would also like to thank the entire staff of the Ionia County Community Mental Health Center.
I can only hope that my research will, in part, give back some of the support, knowledge, and guidance that you have given me over the past four years.

To Cris Sullivan and Bill Davidson, I thank you both for your ability to work around some of my unique characteristics. Thank you for serving as role models and providing me with the values of a community psychologist. To Tom Summerfelt and Esther Onaga, I thank you for your wisdom and for your time on my committee. To half of my entire social network and support group, Jen, thank you for being my best friend, editor, entertainment coordinator, academic consultant, dog walker and babysitter, fashion consultant, partner on my trips to debauchery, and the main reason I miss Lansing. To the other half of my entire social network and support group, Dan, thank you also for being my best friend, loser in all our pool games, procrastinating editor, partner in crime and debauchery, and the other reason I miss Lansing. To Jessica, thank you for helping me become a better person and more aware of the world. To God, thank you for giving me Nicole, Pam, Christina, and Angie as my cohort; I made it through this program because of them. To Jose Cuervo, thank you for the cheap therapy. Finally, to China, thank you for being the love of my life.

PREFACE

The consumer-centered, strengths-based model of mental health service delivery is slowly being disseminated throughout the community mental health system in the United States. The model represents an evolution of treatment for individuals with mental illness and reflects the growing influence of the consumer movement over the past 20 years and a paradigm shift in theoretical models of illness and treatment.
Michigan has been one of the leaders in mental health reform and one of the first states to adopt and mandate the consumer-centered, strengths-based model of service delivery for all service providers who receive public, Medicaid, or Medicare mental health dollars. Nevertheless, despite the growing public, academic, and legislative support for this model in Michigan and across the United States, there has been limited research on consumer-centered, strengths-based services for adults with mental illness. Moreover, there is no accepted operational definition, prototype, or conceptual model, and little information on how to assess the efficacy or effectiveness of this intervention in applied settings. In addition, service providers in Michigan were mandated to adapt this model to existing programs that were using the more traditional, clinically- or professionally-based service delivery model prior to changes in Michigan's mental health code in the mid-1990s.

Evaluation research has shown that implementation conditions, such as the lack of empirical research, vague or unspecified guidelines for constructing and operating programs, and program complexity, can lead to inconsistent or incomplete implementation of programs or innovations across sites or treatment systems. In turn, inconsistent or incomplete program implementation can lead to a degradation of the intervention (i.e., lowering the impact of the program). Problems in program implementation can lead to disastrous results when evaluating the effectiveness of the intervention. Therefore, it is important to examine the implementation of an intervention prior to evaluating its impact.
Because no such project has yet been undertaken to assess the implementation of the consumer-centered, strengths-based model in Michigan, this study was instituted to better understand what the model is, how it can be conceptualized, and how it can be measured. Due to the nascent state of research in this area, this study focused on developing a conceptual model of consumer-centered services in Michigan and a measurement model with specific measures or indicators of treatment. Case management for adults with mental illness was chosen as the intervention to examine because it is the most common community-based mental health intervention for individuals with serious mental illnesses and has been extensively evaluated over the past 30 years.

A key feature of this study was the perspective used to guide the development of a measurement model and specific measures of consumer-centered services. Nearly all published assessments of case management programs, including implementation and fidelity evaluations, have used a clinical (i.e., professionally-based) or provider perspective, which subsequently has led to surveying clinicians and administrators and excluding primary consumers from the evaluation process (beyond being evaluated by clinicians or evaluators). In contrast, this study placed an emphasis on assessing case management services through the consumer's perspective. Thus, the measurement model and most of the specific measures that were either created or selected for this study are consumer focused rather than provider or clinician focused. The study consisted of interviewing a cross-sectional sample of 56 consumers receiving case management services in two programs (a standard case management program and an ACT program) at one rural community mental health center.
An important aspect of this study was the process involved in gaining access to, and the cooperation of, a community mental health center, and gaining both access to and the trust of consumers who participated in a pilot study and this study. This process is detailed throughout the manuscript. Analyses of the measurement model and specific measures are presented, along with a discussion of the results and future plans for applying this measurement model to assess the implementation of consumer-centered services in Michigan.

TABLE OF CONTENTS

LIST OF TABLES .......... xi
LIST OF FIGURES .......... xii
INTRODUCTION .......... 1
    History, Systems, and Theories of Mental Health Treatment .......... 4
        The Community Mental Health Movement .......... 4
        Clinical Model of Mental Health Treatment .......... 5
        Ecological Models of Mental Health Treatment .......... 6
        The Community Support Program .......... 7
        Consumerism .......... 8
        The Current System .......... 9
    History and Models of Case Management .......... 10
        Broker Case Management .......... 10
        ACT/ICM Programs .......... 11
        Strengths-Based Case Management .......... 15
    The Consumer-Centered Case Management Model in Michigan .......... 20
        Mental Health Treatment in Michigan .......... 20
        Program Implementation .......... 22
        Assessing Program Implementation .......... 24
        Defining Consumer-Centered, Strengths-Based Services in Michigan .......... 28
        Prototype Model of Case Management Services in Michigan .......... 31
        Proxy Indicators .......... 40
        Expanded Conceptual Model of Case Management Services in Michigan .......... 42
        Measures of Implementation .......... 43
    The Research Project .......... 47
METHODOLOGY .......... 49
    Pilot Study .......... 49
    Research Site and Case Management Program .......... 52
    Participants .......... 53
        Consumer Participants .......... 53
        Case Manager Participants .......... 55
    Measures .......... 56
        Strengths Assessment .......... 58
        Consumer-Centered Service Planning .......... 60
        Community Inclusion-Based Services .......... 64
        Satisfaction .......... 67
        Empowerment .......... 68
        Quality of Life .......... 69
        Demographics .......... 70
    Procedures .......... 70
        Recruitment .......... 70
        Face-to-Face Interviews .......... 74
        Timeline of Interviews .......... 75
        Agency Data .......... 75
RESULTS .......... 77
    Strengths Assessment .......... 78
        Strengths Scale .......... 79
        Difference Score of Strengths Scales .......... 82
        Opinions Scale .......... 83
        Utility of Strengths Measures .......... 84
    Consumer-Centered Service Planning .......... 87
        Congruity of Needs .......... 87
        Congruity of Goals .......... 91
        Treatment Planning and Goal Development .......... 94
        Relationship with Case Managers .......... 96
        Utility of Consumer-Centered Service Planning .......... 98
    Community Inclusion-Based Services .......... 102
        Service Provision .......... 102
        Promoting Independence .......... 103
        Utility of Community Inclusion-Based Services .......... 105
    Proxy Indicators .......... 108
        Empowerment .......... 108
        Quality of Life .......... 110
        Satisfaction .......... 111
        Utility of the Proxy Indicators .......... 113
    Revised Measurement Model .......... 117
        Revised Strengths Assessment .......... 117
        Revised Consumer-Centered Service Planning .......... 118
        Revised Community Inclusion-Based Services .......... 119
        Revised Proxy Indicators .......... 119
        Final Measurement Model .......... 120
        Two Perspectives .......... 121
        Summary of All Measures Examined in the Study .......... 127
DISCUSSION .......... 129
    Future Directions .......... 145
    Lessons Learned .......... 146
APPENDIX A .......... 150
APPENDIX B .......... 154
APPENDIX C .......... 156
APPENDIX D .......... 159
APPENDIX E .......... 160
APPENDIX F .......... 162
APPENDIX G .......... 163
APPENDIX H .......... 165
REFERENCES .......... 169

LIST OF TABLES

Table 1: Sources of Data Collection .......... 57
Table 2: MHSIP Satisfaction Survey Scores .......... 65
Table 3: Group Mean Comparisons of the Strengths Scales .......... 81
Table 4: Internal Validity of the Strengths Dimension .......... 86
Table 5: Program Comparisons of the Needs Assessment .......... 88
Table 6: Congruity Ratings of Needs Assessment Items .......... 90
Table 7: Internal Validity of the C-C, SP Dimension .......... 100
Table 8: External Validity Correlations of C-C, SP Dimension I .......... 101
Table 9: External Validity of Community Inclusion-Based Services I .......... 106
Table 10: External Validity of Community Inclusion-Based Services II .......... 107
Table 11: Correlation of Proxy Indicators .......... 115
Table 12: External Validity of Proxy Indicators I .......... 116
Table 13: External Validity of Proxy Indicators II .......... 117
Table 14: Correlations of Case Manager-Reported Surveys .......... 122
Table 15: Correlations of Case Manager-Reported Surveys II .......... 123
Table 16: Correlations of Consumer-Reported Surveys .......... 125
Table 17: Correlations of Consumer-Reported Surveys II .......... 125
Table 18: Review of Measures .......... 128

LIST OF FIGURES

Figure 1: Dimensions of a Consumer-Centered, Strengths-Based Model in Michigan .......... 36
Figure 2: Conceptual Model .......... 47
Figure 3: Measurement Model .......... 61
Figure 4: Revised Measurement Model .......... 121

INTRODUCTION

Since the early 1990's the Michigan public mental health system has been gradually shifting from a professionally-driven, clinically-based treatment model to a consumer-driven or consumer-centered, strengths-based treatment model (Michigan Department of Community Health [MDCH], 1999). In fact, Michigan is the first state to have instituted in its law the guidelines that all public mental health services must be consumer-centered and strengths-based (PA 194 Section 330.1712, MDCH, 1997). A consumer-centered model of service delivery implies that the consumer, not the clinician, dictates or coordinates the spectrum of mental health and ancillary services. The model also implies an ideological shift from the deficit-based view of the clinical service model, which is coordinated by clinicians and other service providers, to a more strengths-based view, which assumes that consumers have capacities, abilities, and self-defined goals that can be achieved through access to necessary resources.
These changes in Michigan's mental health code and subsequent service provisions are reflective of larger system reform efforts across the United States, which are a result of the growing influence of the consumer movement in the mental health field (Campbell, 1998a, 1998b; Chamberlin, 1990; DHHS, 1999; Frese, 1998; Kaufman, 1999; McCabe & Unzicker, 1995). Since the late 1970's, consumer advocacy has led to a reconceptualization of how services are provided, how consumers of mental health services are viewed and treated, and how treatment goals for consumers are selected (Campbell, 1997; McCabe & Unzicker, 1995).

In addition to state-level initiatives, federal initiatives and guidelines have also supported this shift in service philosophy and treatment ideology. The Center for Mental Health Services (CMHS), a division of the Substance Abuse and Mental Health Services Administration (SAMHSA), now advocates for consumer involvement in all stages of mental health service provision, including program development, service provision, and advocacy and representation at state and federal levels (Henderson et al., 1998). Other national organizations, such as the National Alliance for the Mentally Ill (NAMI) and the National Association of State Mental Health Program Directors (NASMHPD), also support the tenet of consumerism and the involvement of consumers and ex-consumers at all levels of mental health treatment (McCabe & Unzicker, 1995).

Despite the growing influence of the consumer movement and subsequent changes to entire state mental health systems, such as in Michigan, there is no empirical research that has investigated the degree to which the Michigan model is truly consumer-centered or strengths-based. Furthermore, there is limited research on the concept of consumer-centeredness and how it can be measured or assessed.
Therefore, the purpose of this study was to investigate the implementation of a consumer-centered, strengths-based model of service delivery on case management services in Michigan and how the concept can be quantifiably assessed. Case management was selected because it is the most widely used community-based treatment intervention for individuals with serious mental illness (SMI) and has been extensively researched over the past 30 years (e.g., see reviews by Bedell, Cohen, & Sullivan, 2000; Mueser et al., 1998; Phillips et al., 2001). In addition, as noted, case management has been adapted to the strengths-based model of treatment.

The first objective of this study was to examine the underlying principles of consumer-centered, strengths-based services and how these general concepts apply to case management services in Michigan. A second objective was to assess the psychometric properties and general utility of numerous indicators of consumer-centered, strengths-based case management services in Michigan, which were created specifically for this study or were adapted from related research endeavors. A third and final objective of this study was to illuminate the complex and time-intensive process involved in gaining access into a community mental health center and gaining both access to and the trust of consumers who were involved in a pilot study and the full study.

As is explained in detail in this document, the assessment of a consumer-centered program requires extensive involvement from consumers of those services. Consumers in this study were viewed and treated as evaluators of services rather than subjects of the primary investigator's research. This perspective is unconventional in traditional research with consumers of mental health services, yet appropriate for a consumer-centered, strengths-based paradigm. Therefore, the process of engaging consumers in this project was a critical component of the study.
This document first provides a brief historical overview of the community mental health movement and how the system has evolved from a clinical-based, provider-driven system to the current movement towards consumerism and the strengths-based paradigm. Following this general overview, a more specific historical analysis of the types of case management models that have been employed in the community mental health system, including the more recent strengths-based models of case management, is provided. The document then delineates the ideal components of a consumer-centered, strengths-based model of case management in Michigan. Next, results from a study that assessed a fully operational consumer-centered, strengths-based case management program in Michigan are reported. Finally, a detailed discussion of the results is presented with ideas for future research endeavors.

History, Systems, and Theories of Mental Health Treatment

Before proceeding to a discussion of consumer-based case management services in Michigan, a brief overview of the community mental health system is presented to illuminate some of the theoretical and sociopolitical forces that have shaped current policies towards community mental health treatment. There were four major events over the past 40 years that have significantly impacted current community-based models of mental health treatment: the Community Mental Health Centers (CMHC) Act of 1963, the development of ecological theories of mental illness, the Community Support Program (CSP), and the ex-patient/consumer movement.

The Community Mental Health Movement

The current community mental health system was established in 1963 with the passage of President Kennedy's Community Mental Health Centers Act (PL 88-164) (Grob, 1991). The CMHC Act of 1963 represented a profound shift in treatment ideology as well as one of the most significant social reform movements of the twentieth century (Felix, 1967; Rochefort, 1997).
The Act was established in part to provide community-based services to the nation's institutionalized psychiatric populations, many of whom were soon to be released en masse through the process of deinstitutionalization (Grob, 1994; Scheid & Horwitz, 1999). This momentous shift in mental health policy was influenced more by the political zeitgeist of the times than by strong empirical and theoretical evidence supporting its application (Levine, 1981; Rochefort, 1997). The actual application of the CMHC Act lacked structure or coordination and was poorly funded (Grob, 1991; Levine, 1981; Scheid & Horwitz, 1999). In addition, little was known at the time about how to treat individuals with serious mental illness in the community. The only theoretical model available was the institutional or clinical model of mental illness and treatment. As a result, although the physical walls of the psychiatric asylums were removed, the philosophy and service policies of institutional care persevered.

Despite the growth of community mental health centers throughout the first 17 years of the community mental health movement, service delivery continued to be dictated by the institutional or clinical model of mental health treatment. The clinical model is characterized by an emphasis on reducing or eliminating biological and neurological deficits within individuals (Kiesler, 2000). As noted by Talbott (1979) and Mechanic (1999), the institutional/clinical model of mental health care was not eliminated with the passage of the CMHC Act and deinstitutionalization, but rather was transplanted to the public community mental health centers. Talbott (1979) referred to this phenomenon as transinstitutionalization.
Clinical Model of Mental Health Treatment

What was problematic with transinstitutionalization was that the institutional model was ineffective and in conflict with the principles of community mental health, which reflected the practice of community integration and the development of natural community support systems. In contrast, institutional or clinical models of mental health treatment focused more on the delivery of professionally-driven and costly medical-based services in institutional settings (e.g., outpatient clinics, hospital-based clinics, and inpatient settings). Despite the ongoing growth of community mental health centers throughout the 1960's and 70's, hospitalization, incarceration, and nursing home placement rates for individuals with mental illness actually increased yearly after 1963 and continued to do so until the late 1980's (Kiesler & Sibulkin, 1987; Kiesler & Simpkins, 1994; Schlesinger & Gray, 1999).

The institutional model was partly perpetuated by the introduction of the Medicaid and Medicare federal insurance programs for the indigent, disabled, and elderly populations in the late 1960's (Levine, 1981; Mechanic, 1999). Both insurance systems were based on a medical model of service delivery, characterized by acute, reactive, medical-based services (Levine, 1981; Price & Smith, 1983; Mechanic, 1999; Rochefort, 1997; Schlesinger & Gray, 1999). In addition, many researchers have argued that institutional/clinical-based models of mental health treatment promote the institutional dependency, powerlessness, and stigma often associated with individuals who have received treatment from the mental health system (Coursey, Farrell, & Zahniser, 1991; Phelan & Link, 1999; Ridgeway, 1988; Scheff, 1984).
Ecological Models of Mental Health Treatment

In reaction to the omnipotence of the medical-based paradigm for the treatment of mental illness, and in response to the initiation of the community mental health movement, a small group of social scientists invested in improving care for people with mental illness developed a new branch of psychology, community psychology, and a new model of mental illness and treatment, the ecological paradigm. The ecological paradigm is a collection of theories and models that explains human behavior through the interactions of individuals and their environments. In contrast to the unidimensional or genetically-based paradigm of the clinical/medical model, ecological models of mental illness and treatment derive explanations of human behavior from multiple dimensions and factors. Within the framework of community mental health, the ecological model of research and evaluation is concerned with the interdependence among people, their behavior, and their sociophysical environments (Jeger & Slotnick, 1982). An ecological perspective on mental health emphasizes the evaluation of multiple environments (e.g., social systems or the context of psychological health and illness) and views a person's adjustment to these environments in terms of transactional relationships between individuals and their environment (Bronfenbrenner, 1979; Glenwick, Heller, Linney, & Pargament, 1990; Holahan et al., 1979). Ecological theories of mental illness and treatment provided a theoretical bridge between the original principles of the community mental health movement and the growth of the consumer movement and the principles of consumerism (described below).
The Community Support Program

As a result of the early problems and failures (perceived and real) associated with community mental health and deinstitutionalization, such as increasing rates of hospitalization and discharge, homelessness, incarceration, and nursing home placements, and a general incapacity of community mental health centers to meet the needs of individuals with SMI, the National Institute of Mental Health (NIMH) launched the Community Support Program (CSP) in the late 1970's (Turner & TenHoor, 1977). NIMH created the CSP, also known as Community Support Services (Grob, 1991; Rubin, 1987), in an attempt to rectify the structural limitations of the current system and to create a more unified mental health system in communities (Turner & TenHoor, 1978). The premise of this program was that individuals with serious mental illness needed more support in the community through continuity of care, community integration practices, and resource development (Anthony, 1993; Kiesler & Sibulkin, 1987). The CSP was guided by a philosophy of community integration, which was achieved through the development of natural community-based resources (Turner & TenHoor, 1978). In accordance with the principles of the CSP, case management was viewed as the most effective vehicle for mental health service delivery (Rubin, 1987; Solomon, 1998). The CSP established services that were: a) consumer-centered, b) empowering, c) racially and culturally sensitive, d) flexible, e) focused on individuals' strengths rather than deficits, f) normalized and incorporating of natural supports, and g) accommodating of special needs (Anthony, 1993; Hodge & Giesler, 1997). As a result of the CSP, case management became the primary vehicle of community-based mental health services for adults with serious mental illness.
Consumerism

Although the CSP promoted more of an emphasis on treating individuals with SMI, rather than the worried well who had dominated the focus of community mental health centers up until the mid 1970's (Morrissey, 1999; Scheid & Horwitz, 1999), many recipients of mental health services and their families were still extremely dissatisfied with the types of services offered to them and with the way they were perceived and treated. Out of this dissatisfaction, another reactionary movement, this one opposing the omnipotence of the clinical paradigm of mental illness and treatment, was created by ex-patients and current consumers of mental health services (Campbell, 1997; Chamberlin, 1978; Chamberlin & Rogers, 1990). The impetus behind this movement was a desire by individuals with mental illness to counteract the feelings of powerlessness and stigma they had experienced as patients in the mental health system (Chamberlin & Rogers, 1990), to alter how the mental health system functioned (Frese, 1998), and to provide a voice for individuals who had been disenfranchised as a result of their treatment by the mental health system (Chamberlin, 1990). The consumer movement has led to a reconceptualization of how services are to be delivered and what the goals of treatment should be (Campbell, 1997 & 1998a; Chamberlin & Rogers, 1990). In contrast to the paternalistic, hierarchical, and professionally-driven clinical or institutional treatment system, consumerism has led to a more equitable service delivery system, characterized by shared decision making and consumer involvement in all levels of service provision. The consumer movement has had a significant impact on the development and application of community-based mental health services and on national mental health policy (Campbell, 1997; DHHS, 1999; McCabe & Unzicker, 1995).
The Current System

The community mental health movement established the present national community mental health system in the United States. Early problems and failures associated with this momentous but poorly planned system reform movement provided the impetus behind the growth of ecological theories of mental illness and treatment, the CSP, and the rise of the ex-patient/consumer movement. These three forces have greatly impacted the current mental health system in Michigan and across the United States. Current community-based models of treatment for individuals with SMI reflect the principles of community mental health, ecological theories of treatment, the CSP, and consumerism.

History and Models of Case Management

Two comprehensive reviews of case management research, one by Rapp (1996) and the other by Mueser et al. (1998), categorized the different types of case management into three models: broker, PACT/ACT/ICM, and strengths-based/psychosocial rehabilitation models. The goal of this review is to briefly describe the three models and to examine the empirical research on each model.

Broker Case Management

Case management programs for individuals with mental illness were introduced with the first wave of community mental health centers in the mid 1960's, although the actual roots of case management can be traced back to the nascent field of social work at the beginning of the twentieth century (Deutsch, 1949; Grob, 1994; Rubin, 1987). Early models of case management for individuals with mental illness focused more on brokering services and, thus, were referred to as the broker model of case management. These early programs were characterized by large caseloads, often exceeding 50 clients per clinician, and were usually office-based and reactive in nature (e.g., responding after a crisis had occurred).
The goals of these early programs consisted of helping individuals become connected to multiple community-based services and helping them acquire necessary resources, such as SSDI benefits, Medicaid, mental health treatment, and housing (Intagliata, 1982; Mueser et al., 1998). In addition, it was assumed that case managers could reduce or eliminate community barriers that existed in the fragmented community mental health and primary health systems (Intagliata, 1982; Rubin, 1987). The emphasis was on improving continuity of care and linking individuals to mostly clinically-based service providers, such as outpatient mental health centers, medication clinics, sheltered workshops, substance abuse services, and supervised or structured housing. Ironically, although the broker model of case management is probably the oldest and, at least historically, the most commonly employed case management program, there is very little research on its efficacy, except that it has often been used as the control condition for comparison to ACT and ICM programs (described next). Not surprisingly, more intensive models of case management (i.e., smaller caseloads and access to more resources) have usually been found to be more effective than this model at reducing hospitalization, improving consumer satisfaction, and increasing community tenure for high users of hospital-based services (Mueser et al., 1998). Anecdotal reports and limited empirical evidence suggest that the broker case management model is limited in effectiveness (Franklin, Solovitz, Mason, Clemons, & Miller, 1997; Rapp, 1996; Mueser et al., 1998).

ACT/ICM Programs

A more innovative breakthrough in case management services came in the early 1970's with the introduction of Stein and Test's (1980) Program in Assertive Community Treatment (PACT), which is now more commonly referred to as ACT. The PACT model is an intensive case management program specifically designed to reduce the rehospitalization of individuals with SMI.
The premise behind this program was that individuals with severe mental illness could be treated and stabilized in the community rather than in the hospital (Stein & Test, 1980). The primary feature of this model was the delivery of services "in vivo": services are delivered in individuals' communities, where they will need and use them. In addition, the program is intensive, proactive, and comprehensive. For example, caseloads rarely exceed a ratio of 10:1 consumers to case managers; every team has a psychiatrist, nurse, and social worker assigned to it; services are available seven days a week; and crisis services are available 24 hours a day. Furthermore, most programs have access to some type of clinically-supervised housing service, transportation services, and daily monitoring and onsite medication administration (if needed). Finally, a more subtle aspect of the PACT model is that case managers frequently become the primary service provider for therapy, vocational counseling, training in daily living skills, and crisis support. This is in contrast to the broker model, where the case manager refers consumers out to other agencies or other divisions within the community mental health center to provide these services. Stein and Test's (1980) classic study examined the impact of their intensive case management program compared to standard aftercare services in a community mental health center.
Using an experimental design (random assignment), Stein and Test found a significant reduction in hospital readmissions and days hospitalized, a significant increase in independent living compared to supervised residential living, a significant increase in sheltered employment, a significant reduction in symptomatology (7 out of 12 clinical scales), a significant increase in medication compliance (at 8 and 12 month periods), and a significant increase in satisfaction with life and self-esteem for individuals in the PACT program compared to individuals in the standard care condition. In addition, the PACT program demonstrated a significant reduction in cost compared to the standard aftercare condition (Weisbrod, Test, & Stein, 1980). The PACT model provided the impetus for the National Institute of Mental Health's (NIMH) Community Support Program (CSP), as well as representing the flagship program of community mental health. The combination of Stein and Test's (1980) successful PACT model with the Community Support Program led to the widespread proliferation of case management programs across the United States (although only a small proportion were actually PACT programs). The PACT or ACT program was widely disseminated with varying degrees of fidelity to the original model (Deci et al., 1995; McGrew et al., 1994). The ACT model spawned numerous hybrids referred to as intensive case management programs (ICM), continuous treatment teams (CTT), and intensive treatment teams (ITT) (Teague et al., 1995; Bond, 1990). These programs often retained the core components of the PACT model (e.g., small caseloads, 24 hour crisis services, community-based services, and case managers as primary service providers), but would vary in access to other resources (e.g., having a psychiatrist and nurse assigned to the team, or having access to transportation and housing services).
Because it is extremely difficult to discern the difference between ACT and other ICM programs in the literature and in application (Johnsen et al., 1999; Mueser et al., 1998), they are grouped under one model for this review (this grouping method was also used by Mueser et al., 1998, and Rapp, 1996). Although it is not an obvious characteristic of ACT/ICM programs, most programs employ a clinical/professional-based perspective of mental illness and treatment. For example, the original PACT program is community-oriented, but employs a paternalistic, clinical-based model of service delivery (e.g., see Estroff, 1985; Rapp, 1998). One of the problems associated with this type of service delivery is the reliance on clinical-based services rather than attempting to develop natural community support systems. Individuals are provided with resources, but they are usually delivered in a prescriptive and paternalistic fashion (Saleebey, 1997). The result is a lack of democratic involvement from the consumers and a lack of development towards self-determination and independence. Consumers tend to rely on these services rather than developing their own resources. Another problem related to this last point is that, due to the reliance on intensive case management services and other clinical-based services, ACT programs are expensive to operate. In general, research has found that ACT and ICM programs have been successful at reducing hospitalization rates, increasing consumer satisfaction, and improving housing stability, especially for high-end users of hospital-based services, compared to standard, broker-type case management models, standard outpatient mental health services, or no services at all (Bond et al., 1990; Bond, Miller, Krumwied, & Ward, 1988; Borland, McRae, & Lycan, 1989; Burns & Santos, 1995).
Less consistent but still moderately positive results have also been found in improving individuals' quality of life, medication compliance, and vocational outlets (including sheltered employment; Bedell, Cohen, & Sullivan, 2000; Mueser et al., 1998; Phillips, Burns, Edgar, et al., 2001). ACT and ICM programs have been less effective in the reduction of symptomatology, improvements in social functioning, and the reduction of substance abuse and related behaviors (Drake, Mercer-McFadden, Mueser, McHugo, & Bond, 1998; Mueser et al., 1998; Rapp, 1996). Often the impact of ACT and ICM programs on symptomatology, social functioning, and substance abuse was no better than the standard case management condition in controlled studies (Mueser, Bond, & Drake, 2001). In addition, several studies have found that ACT programs are no more expensive to operate than standard case management programs for high-end users of hospital-based services (Essock, Frisman, & Kontos, 1998; Weisbrod, Test, & Stein, 1980; Wolff et al., 1997). However, it is important to note that both programs are expensive to operate for such individuals. The differences in expenses between the two models can be attributed to extensive hospitalization costs for standard programs (i.e., the control conditions) and to intensive case management and residential services for ACT programs. Finally, the PACT and other ICM programs have demonstrated that deinstitutionalization could be sustained if the appropriate resources were made available. By simply providing comprehensive services proactively in individuals' living environments, consumers could be maintained in the community without relying on expensive hospital-based services.

Strengths-Based Case Management

A more recent adaptation of the ICM model is the strengths-based case management program (Rapp & Chamberlain, 1985).
The strengths-based case management program was developed by Rapp and colleagues (Modrcin, Rapp, & Poertner, 1988; Rapp & Chamberlain, 1985; Rapp & Wintersteen, 1989) at the University of Kansas. This model of case management is similar to other intensive case management programs in that caseloads tend to be small (i.e., less than a 20:1 consumer to staff ratio), services are provided proactively in the community, crisis services are available 24 hours a day, the case manager is the primary service provider, and nursing and psychiatry are available to the team (although not necessarily assigned to the team as in the PACT program; Marty, Rapp, & Carlson, 2001). What differentiates this model from other intensive case management services is the philosophy of service delivery and the goals of treatment. As noted above, the PACT model, and probably most other intensive case management programs (e.g., Drake et al., 1998), employ a clinical-based model of service delivery (Bachrach, 1992; Harris & Bergman, 1987). For example, Santos, Henggeler, Burns, Arana, & Meisler (1995) referred to the ACT/ICM program as a hospital without walls. Although one of the primary goals of all case management programs is community integration, ACT/ICM programs achieve this goal through a reliance on intensive clinically-based services and an emphasis on stabilization and symptom reduction (e.g., see review by Mueser et al., 1998). The strengths-based ideology or method of mental health delivery is the antithesis of the pathology and deficit ideology utilized by the clinical/medical profession in mental health treatment. The premise behind the strengths-based ideology is to reduce the level of victim blaming that is inherent in the medical model (Rappaport et al., 1975) and to focus on individual strengths, empowerment, and self-determination (Saleebey, 1997).
The strengths perspective rests on two major principles of human behavior: a) people have strengths and capacities that can be exploited, and b) people can grow and prosper if given access to, and control over, the resources necessary for them to thrive in the community (Rapp, 1993). Additionally, the strengths perspective emphasizes that the community, like individuals, is an oasis of resources waiting to be discovered and used (Kisthardt, 1997; Rapp, 1993). The strengths-based perspective of mental illness and treatment was developed from ecological theories of mental illness and treatment (Rapp, 1998), but received its impetus from the consumer movement of the late 1970's. Consequently, the strengths-based model of case management reflects the integration of the PACT treatment technology, ecological theories of human behavior, and the principles of consumerism. The product of this integration is a program that is intensive in services but driven by the goals, needs, and desires of consumers rather than the clinical expertise of the case manager. The case manager is still the primary service provider but is viewed as a partner with the consumer in attempting to achieve the consumer's self-defined goals (Kisthardt, 1993). Finally, the strengths model views individuals in terms of recovery rather than maintenance (Wilson, 1992). The concept of recovery posits that individuals may continue to experience symptoms related to their psychiatric condition but that they can still live fulfilling lives in spite of these symptoms (Anthony, 1993). This last point is a subtle but important difference between traditional clinical-based models and services for individuals with SMI and the strengths-based perspective and case management program. Because strengths-based case management services do not focus on reducing current psychiatric symptoms, individuals are given the opportunity to grow and recover despite their psychiatric disability.
Helping individuals gain control over their psychiatric disability is still a concern of consumer-centered case managers; however, it is considered a step or process towards achieving consumers' self-defined goals, rather than being a goal in itself. The strengths-based model was first introduced in 1985 in Kansas and has been slowly disseminated to other programs across the United States. Due to the recent introduction of this model, there is very little empirical research on the strengths-based case management model, with only seven studies having examined the impact of the strengths-based program. Research findings indicate that this model of case management is successful in helping individuals achieve their self-defined goals in a relatively short period of time (i.e., less than six months) (Kisthardt, 1993; Rapp & Wintersteen, 1989; Modrcin, Rapp, & Poertner, 1988). These goals spanned numerous domains, including vocational, educational, leisure, social support, financial, and housing (Rapp, 1998). In addition, four out of five studies that examined hospitalization rates found reductions in hospitalization rates compared to standard case management services and over time within the program (although two of the studies were not statistically significant) (Macias et al., 1994; Modrcin et al., 1988; Ryan, Sherman, & Judd, 1994; Rapp & Chamberlain, 1985; Rapp & Wintersteen, 1989). Three studies that assessed quality of life found significant improvements in this domain over a standard control group (standard mental health services or broker case management) and over time within the program (Macias et al., 1994; Modrcin et al., 1988; Stanard, 1999). The two studies that used a randomized control group design also found that individuals who received strengths-based treatment reported significantly lower psychiatric symptoms (e.
g., lower levels of stress, fewer problems with mood, and greater psychological well-being) than the control group conditions (Macias et al., 1994; Modrcin et al., 1988). In addition to the positive outcomes noted above, numerous researchers have also asserted that a consumer-centered or strengths-based perspective leads to a sense of psychological empowerment (Saleebey, 1997; Rappaport, Reischl, & Zimmerman, 1992). Psychological empowerment is a personal sense of control over one's environment and the resources in it (Rappaport, 1981). Rappaport, Reischl, and Zimmerman (1992) asserted that empowerment includes not only access to or control over resources, but also individuals' interactions with the environment that have led to their gaining access and control over these resources. They further asserted that empowerment involves control over one's psychological resources. Therefore, just providing consumers with vocational assistance or residential housing will not necessarily lead to an increase in individuals' sense of empowerment, unless they feel that they have control over these resources. Rapp (1998) and Saleebey (1997) have argued that a strengths orientation fosters a sense of empowerment among individuals attempting to recover from the debilitating effects of mental illness by nurturing and facilitating their own capacities while avoiding prescriptive practices that only promote dependency. In addition, consumers in a strengths-based program are elevated to the level of partner with their case manager. Paternalistic practices are avoided by allowing consumers to collaborate with case managers and direct the treatment planning process (Kisthardt, 1997). Another aspect that differentiates the strengths-based case management model from ACT or ICM models is that in ACT programs, while individuals are given access to resources, there is no fostering of personal empowerment.
In contrast, it can be argued that the strengths-based program both fosters empowerment, through a consumer-centered model of service delivery, and helps consumers gain access to needed resources. Thus, in theory, consumers in the strengths-based case management program are more capable of sustaining any gains they have made or holding onto any resources they have acquired. In addition, psychological empowerment itself can serve as a powerful aid in the process of recovery from mental illness (Rappaport et al., 1992). Although individuals may continue to experience symptoms related to their illness, the improved sense of control over their life and the disease can greatly facilitate individuals' recovery from mental illness. Given the potential of the strengths-based paradigm in general and the consumer-centered/strengths-based case management program in particular, it is not surprising that this model of care has received strong support from consumer organizations across the United States. In fact, in states, such as Michigan, where consumer advocacy is well established, the strengths-based model has been adapted to all existing community-based services, including case management (MDCH, 1999).

The Consumer-Centered Case Management Model in Michigan

The historical review of the community mental health system and case management programs in the United States provides some indication of the direction many states, such as Michigan, have chosen for system reform. Michigan has adopted the strengths-based, consumer-centered perspective for all its community mental health services, including case management. However, it is unclear how the adoption of the consumer-centered model at the state level has been translated into application at the service level.
Community Mental Health Treatment in Michigan

Historically, at least within the last 40 years, Michigan has been considered a leader in community mental health treatment. Michigan was one of the first states to adopt the PACT model, and today has more ACT programs than any other state (Deci et al., 1995; National Association of State Mental Health Program Directors [NASMHPD], 2000). Michigan is also one of the leaders in the development of Fairweather Lodges, psychosocial rehabilitation centers, Clubhouses, and drop-in centers for individuals with mental illness. Following in the trend of progressive mental health innovations, Michigan established into law the requirement that all mental health services must be consumer-centered and strengths-based (Michigan Mental Health Code PA 194 Section 330.1712). Michigan is one of the first states to mandate consumer-centered services as part of its state law. With the passage of this law, all service providers that serve individuals with mental illness must incorporate the principles and procedures of a consumer-centered/strengths-based model of service delivery, including case management services. The implementation of consumer-centered services has been further facilitated by the establishment of behavioral health managed care in October of 1997 (MDCH, 1999). Michigan applied for and received both of the Health Care Financing Administration's (HCFA) 1915(b) and 1115 waivers, allowing the state to eliminate the fee-for-service billing practices dictated by Medicaid insurance. The elimination of these medical-based practices has allowed the state to innovate in all aspects of services, such as billing, treatment planning, and service provision. As a result, the state has moved further towards a consumer-centered model of service delivery.
In turn, service providers now have the flexibility and funding (block grant money provided at the beginning of each fiscal year) to establish consumer-centered, strengths-based case management services. However, unlike the research-based programs assessed by Rapp and colleagues (1998), which were developed from the ground up, case management programs in Michigan were already operating under the more traditional clinical-based model when service providers were asked to transform the service delivery system. Case management programs, as well as all other community-based services that were in existence during these transitions in Michigan, were adapted to the consumer-centered, strengths-based model. In addition, as noted frequently in this document, the consumer-centered, strengths-based model is relatively new and virtually unexamined in research; thus, there were no prototypical models or guidelines available to help guide the architects of the Michigan model. Considering how difficult it is to adopt well-defined model programs from the ground up (Bachrach, 1989; Torrey, 1990), an important question to consider for programs in Michigan is: how effective have service providers in Michigan been at adapting their case management programs to a strengths-based, consumer-centered service delivery model? In other words, to what extent has this service innovation been implemented?

Program Implementation

The questions posited above concern the issue of assessing program implementation, or implementation assessment. Either term refers to assessing whether an intervention was implemented and is being delivered as planned (King, Morris, & Fitz-Gibbon, 1987; Yeaton & Sechrest, 1981). Assessment of program implementation is a valuable yet underutilized evaluation tool (Boruch & Shadish, 1983; King et al., 1987). Implementation assessment or monitoring can provide decision makers (e.
g., legislatures and funding organizations) feedback on whether a policy is being put into operation as planned, or even on the feasibility of the policy (Rossi, Freeman, & Wright, 1979; Patton, 1997). Although most program evaluations in social research (e.g., community mental health research) proceed with the tacit assumption that the treatment intervention was implemented as planned, this is often not the case (Chen, 1990; King et al., 1987; Patton, 1997). In addition, the more complex the intervention is to implement or the greater the difficulty in administering the intervention, the greater the likelihood that its integrity will be compromised (i.e., less than full implementation; Rossi et al., 1979; Sechrest, West, Phillips, Redner, & Yeaton, 1979). The problem with not implementing an innovation as planned, or with any compromise to the model program, is the diluting or degradation of the treatment intervention. Considering that most programs are based on an effective model or at least a theoretical concept, any deviation from that model can diminish the impact of the intervention (Yeaton & Sechrest, 1981). In addition, poor implementation of the program greatly compromises efficacy and effectiveness research (Patton, 1997; Yeaton & Sechrest, 1981). Program evaluations are frequently undermined by the lack of adoption of the intervention (Chen, 1990; Rossi et al., 1979; Patton, 1997). When programs are not implemented or operated as planned, they can appear to be ineffective, when in fact the efficacy or effectiveness of the program could not be accurately assessed (Scheirer & Rezmovic, 1983). Scheirer (1994) refers to this problem as a Type III error: the evaluation of a non-event.
There are numerous cases where potentially effective social interventions were discarded due to perceived program failure; however, in many of these cases a post hoc analysis revealed that the intervention had not been implemented as planned (Chen, 1990; Patton, 1997; Sechrest et al., 1979; Yeaton & Sechrest, 1981). As a result of the complex nature of most social interventions, such as case management for individuals with SMI, most programs are implemented with some degree of variation from the ideal model (Patton, 1997). This is especially true of new social innovations that lack definition or a clear prototype, which is the case with consumer-centered, strengths-based case management programs in Michigan. Although programs in Michigan are loosely modeled after the strengths-based program developed at the University of Kansas (e.g., see Rapp, 1998) and the concepts of consumerism, as noted, service providers in Michigan adapted these concepts to existing case management programs. Instead of being provided with a prototypical model to guide the construction of case management services, mental health service providers in Michigan were guided by state initiatives and legal mandates. These two conditions - broad, non-specific treatment guidelines for the implementation of consumer-centered, strengths-based services and the lack of empirical research regarding the effectiveness of this service model - create a condition for substantial variation in the implementation of this intervention across service providers in Michigan. In turn, the wide variation in implementation of consumer-centered, strengths-based programs can undermine any attempt at evaluating the effectiveness of this model.
Considering the potentially negative impact of assessing programs that have not been implemented as planned, it seems important to first assess how well service providers in Michigan have adapted their case management services to the consumer-centered, strengths-based model before attempting to evaluate its effectiveness.

Assessing Program Implementation

There are multiple ways of assessing the implementation of a program. The methodology chosen is based on both the questions that are asked and the stakeholders involved in asking the questions. The choice of methodology is also influenced by the amount of information and empirical research that is available on the program. For instance, there is an extensive body of research on the PACT/ACT program (e.g., see Bedell, Cohen, & Sullivan, 2000; Mueser et al., 1998; Phillips et al., 2001). This well-developed research base provides evaluators with a clear understanding of the guidelines and the tools needed for assessing PACT programs. In fact, evaluation research on PACT programs has evolved from assessing implementation to a more specified assessment of fidelity using a standardized measure of program fidelity (e.g., McGrew et al., 1994; Deci et al., 1995; Teague et al., 1995, 1998). In contrast, the knowledge base of consumer-centered, strengths-based programs is limited. This limited knowledge base restricts the options for assessing program implementation. Patton (1997) outlined five different methods or techniques for assessing program implementation that are related to different issues or questions asked.
These methods include effort evaluation, which focuses on documenting the quantity and quality of program activity that takes place; monitoring assessment, which focuses on data usually collected through a management information system (MIS); process evaluation, which focuses on the internal dynamics and actual operations of a program; component evaluation, which involves formal assessment of distinct parts of a program; and treatment specification, which involves identifying and measuring precisely what it is about a program that is supposed to have an effect (i.e., assessing the critical aspects of treatment). Assessment of program implementation can incorporate one or a combination of these methods. Again, a major factor influencing the selection of any of these methods is the amount of information (e.g., theoretical formulations, guidelines, expert consensus, and empirical research) that is available to evaluators prior to assessing the program. In addition, a related issue to consider is how information will be collected and from what sources. In other words, what measures of implementation will be employed?

In their comprehensive review of implementation research, Scheirer and Rezmovic (1983) categorized six different types of implementation measures that have been used: technical measures, which are measurements taken directly from a piece of equipment to indicate whether it is operating correctly (e.g., a temperature reading); unobtrusive indicators, which include any measure already collected by an agency or program (e.g., billing records, personnel information, number of clients served); behavioral observations, which are observer-collected data using a prespecified set of categories, each with an operational definition of target behaviors to be recorded; institutional or archival records, a special case of unobtrusive indicators that includes medical records, agency-administered surveys, and staff reports; interviews and questionnaires, which are usually administered by an outside evaluator (e.g., beyond the normal activities of the program or agency); and ethnographic observations, which are unstructured or naturalistic observations that proceed without prespecified observational categories, usually over an extended period of time (e.g., months or years). Other evaluators have suggested similar options for data collection (e.g., King et al., 1987; Rossi et al., 1979). The most common technique employed in implementation research is evaluator-based interviews and questionnaires (Scheirer & Rezmovic, 1983). Scheirer and Rezmovic noted that although interviews and questionnaires were the most common data collection method employed, only 18% of interviews and 6% of surveys were used with the primary recipients of the programs (e.g., consumers of mental health services). Most studies focused on interviewing and surveying staff and administrators of innovative programs (their review was published nearly 18 years before this study and may not reflect current practices). Scheirer and Rezmovic also noted that nearly 75% of all studies that assessed program implementation used multiple data collection methods. Another issue to consider is who should be involved in assessing the implementation of a program. This issue relates to the level of collaboration used between the evaluator and other stakeholders involved in the research.
This level of collaboration can be placed on a continuum from an extreme non-collaborative perspective, wherein consumers are viewed as subjects who will be measured and observed by an expert, to empowerment-based and participatory evaluations, wherein consumers are not only viewed as expert collaborators in the research, but the evaluation process itself becomes an intervention facilitating self-determination and personal growth for participants of the program (Fetterman, 2000; Rogers & Palmer-Erbs, 1994). Finally, and perhaps the most fundamental issue to consider, is the conceptual model of the program. How the program is conceptualized will help to resolve many of the evaluation issues noted above. Therefore, the conceptual model of the program helps to clarify what the program is and how it should be evaluated (Brekke, 1987). Brekke (1987) argued that assessment of a program's implementation begins with an a priori specification of the program (i.e., a conceptual model). Brekke (1987) also argued that instruments used to collect data must be tailored to the needs of the evaluation and be meaningful to stakeholders. Conversely, the needs of particular stakeholders and the questions they ask will also determine how and what types of data are collected (King et al., 1987; Rossi et al., 1979). Due to the nascent state of consumer-centered research, neither a functional conceptual model nor accepted measures of consumer-centered practices are available. Therefore, before proceeding with an assessment of implementation, it is necessary to first develop a framework of a consumer-centered, strengths-based case management program in Michigan and, from there, develop relevant measures of the program. The other issues noted above — methodology, stakeholder focus, and level of collaboration — are readdressed after developing a conceptual model of the program.
Defining Consumer-Centered, Strengths-Based Services in Michigan

In order to develop a conceptual model of consumer-centered case management programs in Michigan, it is necessary to examine the underlying principles of the program and its intended purpose. In evaluation science language, this refers to understanding the normative treatment theory or action theory underlying the intervention and the normative treatment (Chen, 1990; King et al., 1987). Michigan's public community mental health centers employ two variations of case management: Assertive Community Treatment (ACT), modeled on Stein and Test's (1980) program, and a more general model that has many of the same components and goals as the ACT model but has a higher client-to-staff ratio, assigns clients to a case manager rather than to a team, and has access to fewer resources (e.g., a psychiatrist is not assigned specifically to the team). In urban settings, it is common for large community mental health centers to provide both ACT and general case management services. The distinction between these two models becomes less obvious in rural agencies, where service providers have adapted and merged the two models to meet their particular needs and resources. A unique characteristic of Michigan case management services is that the programs are not referred to as case management services but rather as support coordination services. In addition, staff are referred to as support coordinators rather than case managers. This change in program and staff titles reflects Michigan's philosophy that individuals with mental illness are not cases to be managed, but rather people in need of supportive services. Nevertheless, due to the universal application of the terms case management and case managers, the more traditional titles will be used in this document to avoid confusion.
Like most states, Michigan has established guidelines on how community-based mental health services are to be delivered to individuals with SMI, including case management programs. The preamble of the Michigan Mental Health Code (Code 2.0, MDCH, 1999) articulates the principles of person-centered planning, community integration, and strengths-based consumerism. Several key statements within the preamble include:

• The design and delivery of mental health supports and services will support consumer self-determination and independence.
• Consumers and families will have a meaningful and valued role in the design, service delivery and evaluation of the community mental health service provider.
• Efforts to maintain and further expand consumer-operated and controlled alternatives will be pursued.
• Partnerships will be continuously developed in the community with an intention of increasing the community's desire and capacity to support and accommodate people with disabilities and their families.
• Community-based rehabilitation, recovery and inclusion into community life will be promoted.
• ... Resources will be shifted away from high cost, highly structured and regulated service models to more individualized, cost effective services which may include consumer directed or managed services and supports.

MDCH has also established best practice guidelines in all areas or concepts noted in the preamble. The guidelines relevant to case management services for individuals with SMI are detailed below.

Consumer-based services (consumerism). MDCH advocates for the inclusion of primary and secondary consumers and their family members in the development, provision, management, distribution, and evaluation of all public mental health services.
The purpose of this guideline is to ensure that consumers have choices and decision-making roles in public community mental health centers. Involvement in treatment planning is an essential component of this guideline. Several key aspects of this guideline include:

• The focus of all programs is on recovery rather than stabilization. This should be shown by an expressed awareness of recovery by consumers and staff.
• Services will be strengths-based, focusing on abilities and potentials.
• Consumers should be involved in the evaluation of services through the application of satisfaction surveys and other consumer-assessed measures of treatment.

Person-centered planning. MDCH advocates for services that allow the primary consumer to direct the treatment planning process and to focus on what he or she wants and needs. Although this guideline allows for direct input from clinical staff, it specifically notes that the identification of possible services and professionals is based upon the expressed needs and desires of the consumer. Two key points of this guideline are that individuals have the ability and strengths to express preferences and make choices and that those choices will always be considered (if not always granted).

Community inclusion. MDCH advocates for services that promote community inclusion practices. The purpose of this guideline is to promote services that lead consumers away from clinically-based services, such as day treatment, partial hospitalization, sheltered workshops, and CMHC-operated residential facilities, and towards community inclusion, such as supported employment, supported or independent housing in residential neighborhoods, and socialization in community-based organizations.

Expert consensus guidelines. In addition to the Michigan guidelines for the implementation of consumer-centered services, Marty et al.
(2001) have provided the initial framework to create an implementation measure of strengths-based case management programs, guided by Rapp's (1998) model. Marty and colleagues developed a list of critical ingredients of the strengths-based case management model and then had 28 experts in the field of strengths-based case management rate the relevance of each item. The resulting list provides 72 items grouped under four critical dimensions of strengths-based case management: engagement, strengths assessment, personal plan, and resource acquisition. Marty et al.'s research findings provide the first and only attempt at operationalizing strengths-based case management.

Prototype Model of Case Management Services in Michigan

Michigan's practice guidelines and Marty et al.'s (2001) expert consensus guidelines provide the framework for developing a normative treatment model of a consumer-centered, strengths-based case management program. Figure 1 displays the three primary dimensions of a strengths-based, consumer-centered model in Michigan. Three of Marty et al.'s (2001) four dimensions of strengths-based case management are used in the model in Figure 1: strengths assessment, personal plan (consumer-centered service planning), and resource acquisition (community inclusion-based services). The dimension of engagement was dropped because it is difficult to differentiate case management behaviors that are related to engagement from those related to ongoing services. Since this study was designed to assess a program that has been fully operational for many years, it was considered too difficult to separate engagement (individuals will already be engaged) and ongoing services.

Figure 1: Dimensions of a Consumer-Centered, Strengths-Based Model in Michigan [diagram linking three dimensions: strengths assessment, consumer-centered service planning, and community inclusion-based services]
Furthermore, many of the items noted in Marty et al.'s engagement dimension are considered components of community inclusion-based services by MDCH or share aspects with the other two dimensions of strengths-based case management. For example, item 2 of the engagement component - CM uses every opportunity to identify the consumer's interests, talents, abilities, and resources - can be incorporated under strengths assessment. Another example, item 6 - majority of contacts happen out of the office - can be incorporated under community inclusion-based services. Because of the overlap between Marty et al.'s engagement items and MDCH's guidelines for community inclusion-based services or the other two dimensions, the engagement dimension was dropped but the items were retained.

Strengths assessment. The first step in any program that utilizes a consumer-centered, strengths-based perspective is to assess the strengths, resources, and skills that individuals possess. A strengths-based view of individuals suggests that people have capacities that can be enhanced through access to resources and support systems. Thus, in order for a strengths-based model to be implemented, case managers must be aware of individuals' strengths and capacities. Furthermore, individuals' strengths (e.g., capacities, personal skills, work history, interests, and goals) become the building blocks of the service plan. An effective strengths assessment will provide the framework for the consumer-centered service plan. Furthermore, individuals' strengths, which can include access to resources, will influence the intensity and types of case management services that are provided. Individuals with fewer strengths will usually require more supportive services, while individuals with more strengths and access to more resources will usually require less intensive and less frequent case management services.
In order for a consumer-centered, strengths-based program to work, case managers have to believe that consumers have capacities and strengths, and their behaviors should reflect this belief. Thus, a critical aspect of this case management model is case managers' perceptions of their clients' capacities and strengths.

Consumer-centered service planning. Once individuals' strengths have been assessed, the next step is to develop a consumer-centered service plan, which will dictate the future roles of both the case manager and the consumer. The essence of a consumer-centered case management program is the development of treatment goals that are selected and defined by consumers. In a consumer-centered program, the consumer is considered the expert on what he or she needs and wants. The case manager is also considered to be an expert, but in bridge building and resource development, rather than in identifying the needs and goals of consumers. In theory, the consumer and his or her case manager become an interdependent team, combining their expertise to achieve the consumer's self-defined goals (Kisthardt, 1997). This model of case management services is in contrast to more traditional models of case management, including the PACT and ICM models. The traditional, professionally-driven mental health service model has been characterized as being incongruent with the self-defined needs and goals of consumers (Campbell, 1996; Chamberlin, 1978; Comtois et al., 1998; Coursey, Farrell, & Zahniser, 1991; Crane-Ross, Roth, & Lauber, 2000; Dimsdale, Klerman, & Shershow, 1979; Mitchell, Pyle, & Hatsukami, 1983; Ridgway, 1988; Sanfort, Becker, & Diamond, 1996).
In fact, the consumer movement was driven, in part, by the disparity between the goals and needs of consumers and the goals and perceptions of service providers (Campbell, 1997). As noted by Ridgway (1988), "Many mental health professionals have been trained to focus on the individual's pathology and incapacities and their need for specialized interventions, rather than on their strengths and their day to day needs" (p. 18). A consumer-driven service model should, by design, be congruent with consumers' goals. There should be some indication that clinicians and consumers are in agreement about the needs and goals of consumers. The consumer-centered style of service delivery should, in application, close the gap between the self-reported needs and goals of consumers and case managers' perceptions of what consumers need and want. In Michigan, the process of providing strengths-based services begins with an understanding of what consumers need and want in terms of self-defined goals. The first step is the development of a service plan (traditionally referred to as a treatment plan), which provides the methods, steps, and resources required to achieve consumers' self-defined goals. If services are truly person-centered, as described in the MDCH guidelines, there should be relatively high agreement, or congruence, between case managers and consumers on the needs and goals of the consumer. This high level of congruence between case managers and consumers should be reflected in personal assessments and the written action plan. Again, both the MDCH guidelines and Marty et al.'s (2001) consensus guidelines provide a functional and measurable list of items for detecting a strengths-based, consumer-centered service plan. The measurable items indicate that:

• consumers select all treatment goals.
• goals are written in the consumer's language, which also indicates that consumers, and not case managers, select the goals.
• the goals are reasonable and achievable.
• the goals are not clinically-based (e.g., compliance with treatment, medication compliance, or reduction in annoying behaviors).
• each goal involves natural community support systems (e.g., family, friends, competitive employment, or other non-mental health services).
• consumers dictate when and where the treatment planning meeting occurs.
• consumers are able to invite whomever they want to participate in the meeting.

These items provide a checklist of conditions that are necessary for a service plan to be considered consumer-centered and strengths-based.

Community inclusion-based services. The third dimension of consumer-centered, strengths-based case management programs covers service provision. Both Marty et al. (2001) and MDCH strongly advocate for services that are community-based and lead to the acquisition of natural community-based resources. However, one of the major dilemmas of assessing and classifying case management programs is that services that are highly individualized (an important component of consumer-centered interventions) are more difficult to define or standardize. Although services should be community-based, if services are to be individualized, it is the needs of consumers that will dictate the frequency and location of case management services. For example, if a consumer requests individual or group counseling, services will probably be delivered in a clinical setting, yet be consumer-centered. Given the considerable variation among consumers' needs and the types of services they will require, it is difficult to establish an indicator or standard of strengths-based service provision (e.g., a percentage of services that are community-based or a minimal level of service intensity required to be strengths-based).
Nevertheless, there are indicators which can demonstrate that services are strengths-based and are leading to the acquisition of natural community-based resources. Because service provision is based on the needs and goals of consumers, consumers themselves can provide an accurate assessment of whether services are helping them achieve their goals. Both Marty et al. (2001) and MDCH provide a list of specific indicators that can assess whether services are leading to community inclusion and resource acquisition. This list can be converted into an evaluation tool used by consumers. Some of the key aspects of community inclusion-based services include:

• Services are mostly delivered in consumers' living environment (out of the office)
• Services are available at times that are convenient for consumers
• Consumers are supported in engaging in non-segregated activities in the community (e.g., social functions, employment, or leisure activities)
• Consumers are supported in obtaining competitive employment
• When competitive employment is not possible, consumers are encouraged to seek volunteer jobs in the community
• Consumers are encouraged to participate in agency committees involved in the development and provision of mental health services
• Treatment planning occurs when and where consumers desire
• Consumers are supported in using natural community support networks (e.g., friends, partners, family, or clergy) in the treatment planning and service provision process
• Consumers are supported in obtaining and sustaining independent, community-integrated housing
• Consumers receive assistance in obtaining viable transportation
• Services are helping consumers meet their self-defined needs and goals.

Service effectiveness can also be assessed indirectly through consumer satisfaction and dissatisfaction.
Again using the logic that consumers can provide an accurate assessment of the quality of services they have received, another way of detecting the impact of consumer-centered services is to ask consumers how effective services are at helping them. This can be achieved through the application of a satisfaction survey specifically designed to assess consumer-centered, strengths-based mental health services. The application of the consumer satisfaction survey as a performance indicator of mental health services has received near universal support from researchers, funders, and federal agencies (Campbell, 1997, 1998a; Essock & Goldman, 1997; MHSIP Task Force, 1996). As a result of the consumer movement and behavioral health managed care, satisfaction surveys are now commonly employed in most public mental health facilities. In addition, controlled studies of ACT/ICM programs indicate that individuals who received intensive case management services were significantly more satisfied with mental health services than individuals who received either less intensive case management services or standard mental health services (Mueser et al., 1998). A caveat to the findings reviewed by Mueser and colleagues (1998) is that the satisfaction surveys traditionally employed in mental health research are of limited utility for two reasons. First, satisfaction surveys are usually written from the provider perspective and thus tend to assess consumers' satisfaction with the services they received rather than with the services they needed (Campbell, 1998a; Ganju, 1999). Second, satisfaction surveys are subject to ceiling effects: most individuals score near or at the top of the range of scores (Cook & Jonikas, 1996; Elbeck & Fecteau, 1990).
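A ceiling effect of this kind is straightforward to screen for in survey data. The sketch below is purely illustrative and not part of this study's instruments: the function name, the 15% cutoff, and the ratings are hypothetical conveniences. It flags a scale when a large share of respondents sit at the maximum score:

```python
def ceiling_effect(scores, scale_max, threshold=0.15):
    """Return the proportion of respondents scoring at the scale maximum,
    and whether that proportion exceeds a chosen cutoff (15% here is an
    arbitrary illustrative threshold, not a standard from the study)."""
    at_max = sum(1 for s in scores if s == scale_max)
    prop = at_max / len(scores)
    return prop, prop > threshold

# Hypothetical satisfaction ratings on a 1-5 scale
ratings = [5, 5, 4, 5, 5, 3, 5, 5, 4, 5]
prop, flagged = ceiling_effect(ratings, scale_max=5)
print(round(prop, 2), flagged)  # prints: 0.7 True
```

A scale flagged in this way leaves little room to register improvement or to discriminate among programs, which is precisely the concern raised above.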
To avoid these problems, consumers need to be given an opportunity to evaluate services in terms of how well the agency performed in meeting consumers' needs rather than in terms of whether they were satisfied with the services they received. Intuitively, if services are effective at meeting consumers' needs and goals, consumers should be satisfied with those services.

Summary of the prototypical model. These three dimensions provide the blueprint for creating a conceptual model that is sensitive to individual differences within and across consumer-centered, strengths-based case management programs. The three dimensions displayed in Figure 1 provide direction along with indicators of consumer-centered, strengths-based case management. In addition, the non-recursive relationship between service planning and service provision highlights the ongoing and interactive process between planning and service provision. Both treatment planning and service provision are processes rather than discrete events. Service planning and service provision evolve with the changing needs of consumers. The model in Figure 1 displays the interactive relationship of these two dimensions at any given time in the program. An advantage of using the model in Figure 1 is that the impact of case managers can be assessed. The strengths-based, consumer-centered model is based on how services are delivered rather than on structural components, such as whether a psychiatrist is assigned to a team or whether the program includes 24-hour crisis services. As noted by Sullivan (1997) and Kisthardt (1997), the relationship between the case manager and the consumer is the primary and essential component of the helping process. In addition, research has demonstrated that case managers can influence consumers' treatment outcomes above and beyond the impact of the program itself (Neale & Rosenheck, 1995; Ryan, Sherman, & Judd, 1994).
Sullivan (1997) noted that 67% of consumers (of a total sample of 46) who received strengths-based case management services listed the relationship with their case manager as being critical to their successful recovery. The strength of this relationship was the second most critical factor noted by consumers (the use of medication was the most commonly noted factor) conducive to their recovery process. These findings suggest that it is important to examine the individual relationships between case managers and consumers.

Proxy Indicators

In addition to the direct assessment of services, the conceptual program model can be enhanced by examining additional indirect or proxy indicators of consumer-centered, strengths-based services. For example, Bickman and colleagues (1996) examined the satisfaction of parents of children who were involved in the Fort Bragg demonstration project. Although parents' satisfaction with case management services was not a direct indication of program effectiveness, it was argued that the generally high satisfaction with services reported by parents indicated that services were operating as planned. The linkage between satisfaction and program fidelity was conceptualized in Bickman's (1996) theoretical model of the program. Using the same theoretical methodology, indirect or proxy indicators of consumer-centered, strengths-based services can be selected to enhance the assessment of program implementation. As mentioned throughout this document, there is limited empirical research on the consumer-centered, strengths-based model; however, numerous researchers have argued that there is a strong relationship between this model of service delivery and the concepts of satisfaction, empowerment, and quality of life.

Satisfaction with services. The case for assessing service satisfaction has already been made, and as noted above, Bickman et al.
(1996) used satisfaction as one of the critical components for assessing the quality (i.e., fidelity) of their case management program.

Empowerment. It has been argued extensively that consumer-centered and strengths-based services promote a sense of empowerment for individuals with mental illness (Rappaport et al., 1992; Campbell, 1997; Ridgway, 1988; Rapp, 1998; Saleebey, 1997; Rogers, Chamberlin, Ellison, & Crean, 1997; Segal, Silverman, & Temkin, 1995; Corrigan, Faber, Rashid, & Leary, 1999). In contrast, clinically-based services can promote a sense of disempowerment and stigma (Rapp, 1998; Ridgway, 1988). For example, Rogers et al. (1997) found a significant inverse relationship between their consumer-constructed scale of empowerment and the amount of professionally-driven mental health services consumers had received. Corrigan and colleagues (1999), using Rogers et al.'s Consumer Empowerment Scale (CES), examined the relationship of empowerment to multiple factors among a group of 35 consumers of a partial hospitalization program. Their results indicated that personal empowerment was positively correlated with quality of life, social support, and self-esteem. The empowerment scale was also negatively correlated with psychiatric symptoms.

Quality of life. A related dimension to empowerment is quality of life. A primary goal of the CSP is to help individuals achieve a higher quality of life (Baker & Intagliata, 1982; CMHS, 1995), which comes through the attainment of their goals (e.g., buying a house, getting a good education, and obtaining a rewarding job). Therefore, consumers' perception of their quality of life can indirectly reflect the degree to which services are consumer-centered. This concept is reflected in traditional and strengths-based models of case management (e.g., CMHS, 1994; Modrcin et al., 1988; Mueser et al., 1998). Quality of life reflects a growing understanding of the more humanitarian purpose of treatment, which is to enhance individuals' lives (Lehman, 1988). Furthermore, quality of life fits well into the framework of the strengths-based model since its assessment comes from consumers rather than from service providers. Finally, quality of life has been found to be positively correlated with a personal sense of empowerment for individuals with SMI (Corrigan et al., 1999; Rogers et al., 1997; Rosenfield, 1992; Segal et al., 1995).

Expanded Conceptual Model of Case Management Services in Michigan

The inclusion of satisfaction, empowerment, and quality of life as part of the larger assessment of program implementation provides an enhanced capacity to test the validity of the proposed model. Figure 2 displays the model in Figure 1 with the addition of the direct and indirect indicators of consumer-centered, strengths-based case management. The model displayed in Figure 2 can be considered the normative treatment for case management services in Michigan for individuals with SMI. The key element of this model is the capacity to assess whether services are consumer-focused or professionally-focused (i.e., clinically-based). In theory, within the consumer-centered model, a consumer and his or her case manager develop a partnership in order to achieve the consumer's goals. If the partnership is successful, both the consumer and his or her case manager will work together on the same goals. The conceptual model in Figure 2 is structured to assess whether the consumer and case manager are working on the same, consumer-defined goals.
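Purely as an illustration of how such dyad-level goal congruence could be quantified once goal lists are collected from both members of a consumer-case manager pair, the sketch below computes a simple Jaccard overlap. The function name and goal labels are hypothetical; the study's actual measures are developed later in this document:

```python
def goal_congruence(consumer_goals, cm_goals):
    """Jaccard overlap between the goals a consumer reports and the goals
    his or her case manager reports for that consumer.
    1.0 = identical goal lists; 0.0 = no shared goals."""
    a, b = set(consumer_goals), set(cm_goals)
    if not a and not b:
        return 1.0  # two empty lists are trivially congruent
    return len(a & b) / len(a | b)

# Hypothetical goal lists for one consumer-case manager dyad
consumer = {"find a part-time job", "move to own apartment", "join a gym"}
case_mgr = {"find a part-time job", "move to own apartment", "medication compliance"}
print(round(goal_congruence(consumer, case_mgr), 2))  # prints: 0.5
```

A value of 1.0 indicates identical goal lists, while values near 0 would suggest the case manager is working on goals the consumer did not select (e.g., clinically-defined goals such as medication compliance).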
Figure 2: Conceptual Model. [The figure displays the dimensions of strengths assessment, consumer-centered service planning, and community inclusion-based services, grouped under strengths assessment and service planning and provision, linked to the proxy indicators of empowerment, quality of life, and satisfaction.]

As noted, another benefit of this conceptual model is the capability to examine individual relationships between case managers and consumers.

Measures of Implementation

The conceptual or normative treatment model displayed in Figure 2 provides a framework for creating new measures or selecting from established measures of program implementation (Brekke, 1987; Chen, 1990; Patton, 1997). As noted by Brekke (1987), the next step is to develop measures of implementation that reflect the goals of the evaluator, guided by a conceptual model. It is also important at this point to reintroduce the other issues noted previously that should be considered when assessing program implementation. Specifically, who are the stakeholders involved in the process; what method(s) of implementation evaluation should be employed (e.g., process evaluation, component evaluation, or monitoring); what data elements and sources of data should be collected; and what collaboration model should be used (e.g., expert-model/non-collaborative, participatory, or empowering)? These questions are reintroduced at this point because the development and selection of implementation measures are interrelated with these issues. Although measure development precedes an actual evaluation, the goals of stakeholders and the evaluator need to be considered in developing or selecting the measures.

Primary stakeholders. One point to reiterate is that the consumer-centered, strengths-based model was influenced by the consumer/ex-patient movement. The evolution of this paradigm reflects the growing influence of consumers, ex-consumers, and their family members in the development, provision, and evaluation of mental health services.
Thus, it seems appropriate to use a consumer perspective when developing or selecting measures of program implementation, which in turn suggests that consumers are considered the primary stakeholders of the process. Nevertheless, it is also assumed that a consumer perspective, and measures that reflect this perspective, can meet the information needs of decision makers (e.g., legislators and funders) as well as service providers. A consumer perspective does not negate the needs of other stakeholders, but it does reshape information in the view of consumers rather than in the view of service providers.

Data sources. Evaluation experts recommend collecting data from multiple data sources in the evaluation process, such as primary consumers, administrators, medical records, and MIS data (Patton, 1997; Scheirer & Rezmovic, 1983; Rossi & Freeman, 1985). Again, however, due to the selection of a consumer perspective, an emphasis is placed on collecting data directly from consumers, rather than exclusively from their case managers or an outside evaluator, which reflect the more common and traditional data sources used in case management evaluation research (e.g., McGrew et al., 1994; Deci et al., 1995; Teague et al., 1995 & 1998). Rossi, Freeman, and Wright (1979) argued that the collection of participant data is valuable to the evaluation process because providers may not be aware of what is important to participants; it may be the only way to find out what was actually delivered; and participants' understanding of treatment cannot be assumed.

Level of collaboration. By default, a consumer-based perspective argues for high collaboration with primary consumers. The highest or most involved collaboration model in evaluation research is a participatory or empowerment evaluation (Fetterman, 2000; Rogers & Palmer-Erbs, 1994). As noted by Patton (1997, p.
101), "Empowerment evaluation is most appropriate where the goals of the program include helping participants become more self-sufficient and personally effective." Because a primary tenet of the consumer-centered, strengths-based model is that it leads to empowering individuals with mental illness, it seems intuitive that an evaluation of the program should also support this tenet. Accordingly, measures of implementation should provide useful information for consumers, promote a sense of empowerment, and provide opportunities for participation and partnerships between researchers and consumers.

Methods for evaluating implementation. The conceptual model in Figure 2 provides for the analysis of components (i.e., six unique dimensions), while specific indicators of each dimension (described in detail in the methods section) provide for analyses of treatment specification (critical aspects of treatment). In addition, the dynamic and unstructured nature of this intervention requires methods of evaluation that can incorporate this level of complexity. For example, Bickman and colleagues (1996) used the component evaluation method for assessing the implementation and fidelity of the large and comprehensive Fort Bragg Demonstration Project. Moreover, unlike the PACT and related models of intensive case management, the consumer-centered, strengths-based case management program lacks definition in terms of structure or process. As a result, methods such as effort evaluation, monitoring, and process evaluations are less well suited for this type of dynamic community-based intervention. The component method of program evaluation suggests that individual components (a component is the largest homogeneous unit within an intervention) of a complex program, such as case management, can be assessed separately even though the components are interrelated (Bickman, 1985; Heflinger, 1996).
Guided by this perspective, instead of attempting to create an overall indicator of implementation or program fidelity, the component method examines multiple dimensions independently. This technique allows for greater generalizability of findings and cross-program comparisons based on components rather than entire programs (Bickman, 1985). The treatment specification method is a more specific analysis of critical aspects of complex interventions. For example, a critical aspect of the consumer-centered model, as argued in this document, is that there should be an indication of congruity between consumers and their case managers on what consumers need and want. The treatment specification method can be used to examine this critical aspect rather than a broader view of the entire program. These two methods provide a guide to how measures of program implementation of a consumer-centered model can be applied. Consequently, measurement development can focus on dimensions or specific interactions rather than on more global measures of overall program process or performance, such as the measures used for assessing the ACT and CTT programs (McGrew et al., 1994; Deci et al., 1995; Teague et al., 1995 & 1998).

Summary of Measurement Model. Summarizing the points noted above, in order to develop an effective measurement model that will lead to assessing the implementation of a consumer-centered, strengths-based case management program in Michigan, measurement development requires a consumer focus (consumers are given a voice in the evaluation process); a high level of collaboration with consumers; data collection from multiple sources (e.g., consumers, their case managers, medical records, and service data) with an emphasis on directly assessing consumers' opinions; and an implementation evaluation methodology that can assess the complex and dynamic nature of consumer-centered and strengths-based services, such as the component or treatment specification methodologies.
These guidelines provide the framework for creating a useful measurement model. The next step is to apply these guidelines to the construction of a measurement model and specific measures of consumer-centered, strengths-based services in Michigan.

The Research Project

A research study was implemented to test the utility of multiple measures that comprise a measurement model for assessing consumer-centered, strengths-based case management services in Michigan. Because the consumer-centered concept and service model are in early stages of development and research, the focus of this study was on examining the psychometric properties, including reliability and validity, of several measures of program implementation. Scheirer and Rezmovic (1983) outlined five measurement criteria to assess the adequacy of implementation measures: the use of multiple measurement techniques; the presence of an operational definition; the examination of reliability; the assessment of validity; and the use of sampling. Except for the use of sampling (a fairly homogeneous group of individuals with SMI from one agency participated in this study), this study was instituted to test the measurement model and individual measures for assessing the implementation of a consumer-centered, strengths-based program using four of the five measurement criteria outlined by Scheirer and Rezmovic as well as other evaluators (e.g., King et al., 1987). This study was an exploratory investigation using a cross-sectional sample of consumers receiving case management services in one rural community mental health center (CMHC) in Michigan. The goal of this study was to examine if the measures that were selected or created to assess the implementation of a consumer-centered program, and the measurement model (detailed in the methods and results sections), were reliable, valid, useful, and conducive to an empowerment and self-determination ideology for consumers receiving case management services.
Each measure was tested using the same four criteria. Validity of each measure was examined by correlating the scale scores of each measure with other indicators within each dimension (if more than one existed) and with indicators within other dimensions that should be related (see Figure 2 for possible relationships).

METHODOLOGY

The study was a cross-sectional design intended to investigate the utility of a measurement model and individual measures of a consumer-centered case management program in Michigan. This section provides a detailed description of the research process and the development and selection of measures that were used in the study. An important aspect of this study was the process involved in gaining access into a public community mental health agency and gaining the trust and help of consumers of two case management programs. The process that led up to the research project is an essential feature of this study and requires elaboration. The ongoing involvement of staff and consumers was critical for the evolution of the research in both the development of measures and the design of the project.

Pilot Study

Due to the complexity of this study, a pilot study was implemented in February of 2001 to provide preliminary information regarding the feasibility of the primary project. There were several concerns about the study that needed to be addressed before initiation. The first concern was the utility of the survey protocol. Several scales that were initially selected for the study were either constructed specifically for this study, had received minimal empirical research, or had not been used with consumers of case management services. It was extremely important to get feedback from consumers on the usefulness of these measures and to examine any psychometric problems (e.g., ceiling effects of scores or poor reliability) that might arise. A second concern was in regard to gaining the trust and participation of consumers.
Because of the need to have primary consumers participate in this study, it was necessary to first see if individuals were willing to work with an outside evaluator. Another concern was related to the time required to complete the entire survey protocol; that is, would it take too much of the consumer's time to complete the protocol? A fourth concern related to the availability and reliability of agency-level data (e.g., service data, demographic data on consumers, or medical records). Because demographic, diagnostic, and service data were to be collected and analyzed, and because this type of archival data is often incomplete or of low reliability, there were concerns that the community mental health agency data would be insufficient for the research. There was also a concern that the community mental health agency would not cooperate to the extent necessary to complete the proposed study. The concern was that agency staff, including supervisors and administrators, might be verbally supportive but could not provide the actual assistance needed to complete the study as proposed. In order to address these concerns prior to implementing the full study, a pilot study, using a preliminary research protocol (the protocol changed as a result of the pilot study), was implemented with 30 active consumers of the standard case management program. The pilot study consisted of a seven-day window (seven consecutive business days) when case managers could recruit consumers from their caseloads to participate in the study. Case managers decided they needed a one-week notice to recruit consumers. Within that time period case managers were able to approach 39 consumers for participation. The only requirement for participation was that consumers had to have been enrolled in the case management program for at least one consecutive year. Seven consumers refused to participate and two were unavailable during the seven-day period.
In order to facilitate the process, the agency provided round-trip transportation to all consumers who were willing to come in and participate. All participants were paid $30 immediately after completing the survey protocol. In addition, all relevant agency data were collected on all 30 consumers. Agency data were collected over a two-day period following the completion of the participant interviews. Finally, in addition to consumer interviews and agency data, all seven case managers completed one Multnomah Community Ability Scale (Baker, Barron, McFarland, & Bigelow, 1994; Baker, Barron, McFarland, Bigelow, & Carnahan, 1994) on each client on their caseload who participated in the study. For their assistance and participation in the study, case managers were paid $14 for each consumer on their caseload who participated in the pilot study. The pilot study achieved its goal in answering all the issues noted above, especially consumers' willingness to participate. Both data from the surveys and post-survey discussions with consumers revealed important information about the usefulness of the protocol and what participants thought about the different surveys. A common theme among pilot-study participants was that they appreciated being asked to evaluate the services they had received. Another common and related theme was that most of the consumers had never been asked these questions before and hoped that the agency would adopt some of the measures. In addition, information from the pilot study indicated that consumers required an average of 45 minutes to complete the entire protocol. Two consumers required assistance in reading the surveys, but had no problem in understanding or answering the questions. Another finding of the pilot was that staff were extremely cooperative and interested in the study. Although this was not a surprising finding, as will be explained below, it was very encouraging.
Finally, information from the 30 surveys and anecdotal feedback led to refinements in the survey protocol. The finalized protocol used in the primary study is detailed in the measurement section.

Research Site and Case Management Program

The site of the study was a rural community mental health center (CMHC) in west central Michigan that serves one county with approximately 70,000 residents. The CMHC serves approximately 900 adults and children with mental illness and developmental disabilities annually. The CMHC has one standard case management program with an average daily census of 120 adults, of whom approximately 50% are classified as having a mental illness and the other 50% are classified as having a developmental disability. The case management program employs six to seven case managers with an average caseload of 15 to 20 consumers (usually mixed between adults with mental illness and adults with developmental disabilities). The agency also has an ACT program that was established in July of 2001. The ACT program has three case managers, one psychiatric nurse, and one supervisor (a psychiatrist is not assigned specifically to the team). The ACT program received nearly all their referrals directly from the standard case management program in July and August of 2001. By September 25, 2001, the ACT program had an active total caseload of 37 consumers. Several other sites were considered for the study, but they provided less favorable conditions to test the measurement model. The Ionia CMHC was also recommended by MDCH as a model agency in progressive innovation and the application of consumer-centered services. In addition, the two case management programs maintain fairly small caseloads, which is necessary to implement consumer-centered, strengths-based services. Finally, consumers are assigned to specific case managers, which is another necessary condition for a strengths-based model (Rapp, 1998).
In contrast, all other potential programs in other agencies either used a team-caseload model (i.e., the PACT model) or caseloads exceeding a ratio of 40:1. Because the purpose of the study was to test a measurement model of consumer-centered, strengths-based case management services, it was practical to select a program with some indication that the model has been implemented. Therefore, the rural CMHC was selected because there were numerous program and administrative indicators suggesting that the two case management programs were employing a consumer-centered, strengths-based model of service delivery.

Participants

Two groups of research participants were involved in this study: adult consumers of case management services and their case managers.

Consumer Participants

Fifty-four consumers from two case management programs completed the full survey protocol. Fifty-seven consumers participated in the study; however, three individuals did not complete the full survey protocol. In addition, case manager surveys were completed on 56 of the 57 individuals who participated. From case manager reports (case managers coordinated the recruiting process), approximately 10 to 14 additional consumers were asked to participate but declined. As determined by the management information system (MIS), there were approximately 100 individuals between the two case management programs who met the selection criteria for this study when data collection began in late August 2001. In practice, the number of consumers who were receiving case management services was lower. Due to high staff turnover in the case management program (described next) and the introduction of the ACT program in July 2001, it was difficult to determine the exact number of individuals receiving case management services. It is estimated that about 80 consumers between the two programs met the criteria for selection into this study.
Therefore, approximately 88% (70/80) of all eligible consumers were asked to participate, and 81% (57/70) of those who were asked agreed to participate. These figures are similar to those noted in the pilot study. Of the 56 consumers with at least partial data, 55% (31) were from the standard case management program; 59% (33) were female; 96% (54) were classified as white; 82% (46) were classified as having a serious mental illness (SMI; e.g., having a diagnosis of either schizophrenia, bipolar depression, major depression, or a combination of disorders); 50% (27/54) reported never being married (five were currently married and 22 were either divorced, separated, or widowed); 82% (44/54) reported having completed high school, a GED, or higher (18 reported having some college, two reported having a two-year degree, and two reported having a four-year degree); and 21% (12) were currently employed either full time (3) or part time (9). In addition, 55% (31) of consumers reported that they were renting an apartment, house, or trailer; 16% (9) were residing in an adult foster care (AFC) home or nursing home; 13% (7) were living with their parents (their parents owned the home); 11% (6) owned their own home (with or without a partner); and 5% (3) lived with friends. Three individuals who participated in the study were incarcerated in jail at the time of the interview.

Case Manager Participants

Thirteen staff members at the agency participated in the study (i.e., had consumers assigned to their caseload who participated in the study). All staff members who were eligible did participate (i.e., recruited their clients to participate in the study and completed the case manager portion of the survey protocol).
Case manager participants consisted of six case managers from the standard case management program; two employees from other community-based programs within the agency who had continued to carry a small caseload of consumers enrolled in case management services (both employees were case managers who had taken positions in other programs); three case managers from the ACT team; one psychiatric nurse from the ACT team; and the ACT supervisor. The two employees from non-case management programs had one consumer each who was involved in the study; for the rest of the staff noted, the number of consumers involved in the study on each caseload ranged from two to nine. Due to staff turnover in the case management program (four case managers had left the standard program since January 2001) and the development of the new ACT program in July 2001, most of the case managers involved in the study had less than one year of experience as case managers by the start of data collection in mid-August 2001. Although six staff members involved in the study had worked at the agency for at least three years and one of the ACT case managers had worked in a previous ACT program at the agency, 80% (45) of consumers involved in the study were working with case managers with less than one year of experience in the two case management programs (this figure includes all 25 consumers in the ACT program). More detailed information regarding the duration of the relationships between consumers and their case managers is provided in the results section.

Measures

Figure 3 displays the measurement model adapted from the conceptual model in Figure 2, along with the list of indicators for each dimension. Three of the six dimensions of the measurement model (strengths assessment, consumer-centered service planning, and community inclusion-based services) were all measured by multiple indicators collected from multiple methods (e.
g., consumer self-ratings, clinician ratings, clinical records).

Figure 3: Measurement Model. [The figure parallels Figure 2: under strengths assessment and service planning and provision, the dimensions of strengths assessment, consumer-centered service planning, and community inclusion-based services are linked to the proxy indicators of empowerment, quality of life, and satisfaction.]

The proxy indicators of the measurement model (satisfaction, empowerment, and quality of life) were each assessed by one self-report measure. It is helpful to point out that the indicators of each dimension listed in Figure 3 represent the proposed model, which was then tested. Because the purpose of this study was to examine the utility of the proposed measurement model, the initial model is presented here. A revised measurement model is presented at the end of the results section. In addition, the two congruity indicators listed under the dimension of consumer-centered services are composites of two other measures (described next). For clarification, Table 1 presents the full list of measures that were used either as direct indicators, such as the empowerment or quality of life scales, or as composite measures, such as the two versions of a needs assessment measure (staff and consumer) that were used to create the congruity of needs indicator. Finally, it is usually standard practice to report the reliability coefficients for each measure in this section; however, because reliability analyses were used to judge the adequacy of each measure, they are reported in the results section. When available, reliability estimates from previous research are noted.

Table 1: Sources of Data Collection

Strengths Assessment
- Strengths Scale (case manager version); source: case manager
- Strengths Scale (consumer version); source: consumer
- Opinion Scale, a 5-item measure given to case managers; source: case manager
- Consumer Survey of Mental Health Services; only the two questions that specifically ask if the case manager is assessing strengths were used; source: consumer

Consumer-Centered Service Planning
- Personal Needs Assessment, one of the two measures used to create the congruity of needs indicator; source: consumer
- Consumer Needs Assessment, the second measure used to create the congruity of needs indicator; source: case manager
- Goal Question (Consumer), one of the two questions used to create the congruity of goals indicator; source: consumer
- Goal Question (Case Manager), the second of the two questions used to create the congruity of goals indicator; source: case manager
- Relationship with Case Manager scale, a self-report measure that assesses consumers' opinions of their case manager; source: consumer
- Consumer Survey of Mental Health Services: Treatment Planning and Goal Development (CSMHS: TPGD), a subscale of the CSMHS; source: consumer

Community Inclusion-Based Services
- Consumer Survey of Mental Health Services: Service Provision (CSMHS: SP), a subscale of the CSMHS; source: consumer
- Consumer Survey of Mental Health Services: Promoting Independence (CSMHS: PI), a subscale of the CSMHS; source: consumer

Empowerment
- Consumer Empowerment Scale; source: consumer

Quality of Life
- Quality of Life Scale; source: consumer

Satisfaction
- Consumer Survey of Mental Health Services, Outcomes subscale (CSMHS: O), a subscale of the CSMHS; source: consumer

As noted, Table 1 provides a summary of the measures used in this study and the sources of data used to complete each measure. Information was gathered from consumer self-report measures, case managers' opinions and ratings of consumers, and clinical records. In addition to the sources of data noted in Table 1, additional information was collected from clinical records regarding diagnosis, along with service utilization data from the agency's management information system.
The following subsections provide more detail about how the six dimensions in Figure 3 were assessed and what specific measures were used to collect data.

Strengths Assessment

There were two measures of strengths assessment that were used in the study. In addition, a third measure, referred to as the opinion scale, was also included in the analyses of individual strengths. The three measures are described below.

Strengths scale (case manager version). The strengths scale is an expanded version of a 12-item agency version and was administered to case managers as part of the protocol of surveys that they completed in the study. Case managers were asked to complete one strengths scale for each consumer on their caseload who participated in the study. Because the original 12-item agency version appeared to be limited in potential domains of strengths, the agency version of the strengths scale was modified to incorporate a wider range of strengths. The modified version has 28 items or categories of strengths that consumers can possess. A copy of the modified version is provided in Appendix A. In addition, the expanded version employs a Likert scoring system for each item. Instead of either checking off or leaving the item blank, case managers were asked to rate each strength or attribute as it related to the consumer. The Likert scale ranges from 4 (strongly agree) to 1 (strongly disagree), with a fifth option of don't know or not applicable. The fifth option, if selected, was coded as missing for scoring purposes. The scoring system was adapted from the MHSIP Consumer Survey (described below). Examples from the expanded version include:

1. The consumer makes friends easily
8. The consumer is assertive
15. The consumer is creative

Strengths scale (consumer version). The consumer version of the strengths scale uses the same items and method of scoring as the staff version. The only difference is that the consumer version is worded in the first person.
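The Likert scoring described above, with the fifth option coded as missing, can be sketched as follows. This is an illustration only: the code for the fifth option and the use of the mean of the remaining items as the aggregate are assumptions, since the document does not specify how missing items were aggregated.

```python
DONT_KNOW = 5  # assumed code for the fifth option (don't know / not applicable)

def score_likert(responses):
    """Score a set of 4-point Likert items (4 = strongly agree ... 1 = strongly disagree).

    Responses equal to DONT_KNOW are treated as missing, as described
    for the strengths scale; the mean of the remaining items is returned
    (an assumed aggregation rule, for illustration).
    """
    valid = [r for r in responses if r != DONT_KNOW]
    if not valid:
        return None  # every item was marked don't know / not applicable
    return sum(valid) / len(valid)

# Hypothetical case manager ratings on five strengths items,
# one of which was marked don't know
ratings = [4, 3, 5, 2, 4]
score = score_likert(ratings)  # mean of [4, 3, 2, 4] = 3.25
```

The same routine applies unchanged to the consumer version, since it shares the items and scoring method.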
Although the primary purpose of the strengths dimension is to assess if case managers are using a strengths-based perspective, the consumer version was used in this study to compare the two perspectives and to also examine the relationship between the two views.

Opinion scale. In addition to the 28 items that assess general strengths, five additional items were added to the case managers' version of the strengths scale to assess how the case manager feels about the consumer. These five items, which are scored the same way as the first 28 items, were included to examine if the case manager's personal opinion of the consumer can influence treatment planning and provision. Therefore, these five items were used to create an additional measure that examined case managers' opinions of, or bias toward, a particular consumer. A scale score was produced by summing the scores of the five items (it was assumed that case managers would not select the don't know/not applicable category for any of these items). These five items include:

29. The consumer is easy to work with
30. The consumer is undemanding and easygoing
31. The consumer is easy to contact or to meet with
32. The consumer is motivated in treatment
33. The consumer is likable and enjoyable to be around

Consumer-Centered Service Planning

The primary purpose of assessing the implementation of a consumer-centered case management program is to assess the degree to which the program is actually delivering consumer-centered services. Consumer-centered services are driven by the goals and needs of consumers rather than by the opinions of professionals (i.e., clinicians and case managers). In theory, consumer-centered services will be highly individualized (consumers will vary on most categories) and aligned with what consumers need and what their goals are.
Guided by this perspective, measures of consumer-centeredness were designed to assess the level of congruity or alignment between the perceptions of case managers and consumers. There are four aspects or indicators of consumer-centered programs that were measured in the study: congruity of goals, congruity of needs, consumers' opinions of the relationship with their case managers, and consumers' assessment of the treatment planning process. The four aspects of consumer-centered services, with a detailed description of the measures employed to assess each one, are provided below.

Congruity of Treatment Goals. Congruity of treatment goals was assessed by comparing consumers' reported goals with a list of goals reported by their case manager. "Reported goals" consists of asking both the case manager and the consumer to report what the consumer's life goals are. The goal questions posed to each group were:

- Goal Question (Staff): What are the goals that the consumer would like to work on in the next year? In other words, what are some of the things the consumer would like to do or acquire in the future? These can be short-term or long-term goals. Please list these goals below and be as complete as possible.
- Goal Question (Consumer): What are the goals that you would like to work on in the next year? In other words, what are some of the things you would like to do or acquire in the future? These can be short-term or long-term goals.

Case managers were asked to write these goals down. Consumers were asked to report their goals during the face-to-face interview (described below).

Consumer Needs Assessment. The consumer needs assessment is one of two measures that were used to create the congruity of needs indicator. This measure was created specifically for the study as a tool for case managers to assess the needs of consumers.
The scale is comprised of 34 categories or areas of need, with each item scored on a 4-point Likert scale that ranges from 1 (no need) to 4 (high need), plus a fifth category of don't know. The 34 areas of need were derived from several sources, including a shorter agency version of a needs assessment used by case managers and research that has examined consumers' perspectives on what they wanted or needed from community mental health services. Specific areas of need that consumers often want, but that are not always considered by clinical staff, include developing better relationships (family, friends, or partners), financial assistance, education, social activities (community-integrated activities), improving self-confidence, obtaining a job, and living a more normal life (Dimsdale et al., 1970; Coursey et al., 1991; Comtois et al., 1998; Lynch & Kruzich, 1986; Sanfort et al., 1996). Therefore, these areas were included in the needs assessment measure used in this study. Case managers were asked to complete one consumer needs assessment scale for each consumer participant who was assigned to their caseload.

Personal Needs Assessment. A second needs assessment measure, referred to as the personal needs assessment scale, was administered to consumer participants in the study. The personal needs assessment scale is similar (i.e., same list of needs) to the consumer needs assessment scale noted above, except that the questions are worded in the first person rather than the third person.

Congruity of Needs. As with the concept of congruity of goals, in a consumer-centered program there should also be a high degree of congruity or agreement between case managers and consumers on what consumers need. This measure consisted of comparing the two needs assessment scales (staff and consumer versions) described above.
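The study does not fully specify its agreement-scoring rule, but one simple way to operationalize the comparison of the two needs assessments is sketched below. The endorsement threshold (a rating of 3 or higher on the 1-4 need scale), the function name, and the sample ratings are illustrative assumptions, not the study's actual scoring procedure.

```python
# A minimal sketch of one way to score need congruity. It assumes (the text
# does not specify this) that a need counts as endorsed when rated 3 or 4 on
# the 1-4 scale, and that agreement is the share of consumer-endorsed needs
# that the case manager also endorsed. The ratings below are invented.

def need_congruity(consumer_ratings, staff_ratings, threshold=3):
    """Proportion of consumer-endorsed needs also endorsed by the case manager.

    Both arguments map need categories to 1-4 ratings (don't-know omitted).
    The consumer's endorsed needs are treated as the primary set.
    """
    consumer_needs = {k for k, v in consumer_ratings.items() if v >= threshold}
    if not consumer_needs:
        return None  # no endorsed needs to agree on
    matched = sum(1 for k in consumer_needs
                  if staff_ratings.get(k, 0) >= threshold)
    return matched / len(consumer_needs)

consumer = {"housing": 4, "employment": 3, "social activities": 3, "medication": 1}
staff = {"housing": 4, "employment": 2, "social activities": 3, "medication": 4}
print(need_congruity(consumer, staff))  # 2 of the 3 endorsed needs matched
```

The consumer's list serves as the reference set, consistent with the study's decision to treat consumers' reported needs as primary.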
Similar to the analysis of treatment goals, the lists of needs were compared and scored based on the level of agreement between consumers and case managers. Again, consumers' listed needs were viewed as the primary set against which case managers' lists of consumer needs were compared.

CSMHS: Treatment Planning and Goal Development. This measure is a subscale of the consumer survey of mental health services (CSMHS) described in detail in the following section (a copy of the full measure is in Appendix C). The CSMHS was administered to consumer participants as part of the survey protocol they completed in the study. The treatment planning and goal development subscale was used as an additional measure of the consumer-centered, service-planning dimension.

Relationship with Your Case Manager (RYCM) scale. This is a modified version of a scale that was recently developed by Ruth Ralph as part of the research protocol for the multi-site, multi-state Consumer-Operated Services Program (COSP) study, sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA; COSP, 2000). The purpose of this scale is to assess consumers' opinions of their case manager and the quality of the relationship. Modifications of this scale consisted of rewording questions to focus specifically on case managers rather than on any mental health employee. The modified scale consisted of 20 items scored on a 4-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree), with an additional category of not applicable. Examples of questions include:

2. My case manager does not understand me
8. I feel free to complain to my case manager
16. My case manager compliments me when I do something well

(A copy of the full measure is in Appendix D.)

Community Inclusion-Based Services

This dimension was assessed by one measure with two subscales that were employed to assess if the program is providing community inclusion-based services.
The survey is a modified version of the Mental Health Statistics Improvement Project (MHSIP) Consumer Survey. The MHSIP Consumer Survey was developed by the MHSIP task force, which was comprised of researchers, service providers, and consumers of mental health services. The task force developed the survey as an assessment tool to be used by consumers of mental health services. Because of the limitations of generic satisfaction surveys (e.g., ceiling effects), the MHSIP survey was designed to be more comprehensive and relevant to consumers of mental health services. The purpose of the survey is to allow consumers to voice their opinions on how effective mental health services are at meeting their needs, rather than just assessing whether they were satisfied with the services they received (Ganju, 1999).

Because service provision in a consumer-centered model is not based on any standard protocol (e.g., three contacts a week in the community), but rather on the varied needs of each consumer, it is argued that an effective technique for assessing the quality of service provision is to survey consumers' opinions on how effective those services are at meeting their needs. Since community inclusion is based on the perceptions and desires of consumers, consumers should be able to provide the most accurate assessment of how effective services are at helping them achieve their definition of community inclusion. This argument suggests using an instrument that can assess consumers' opinions of what they want and whether services are helping them achieve their goals. In turn, this suggests the application of some type of consumer-report, service satisfaction survey. Due to the potential advantages of the MHSIP Consumer Survey over other, more commonly employed satisfaction surveys, the survey was used in the pilot study.
The survey consisted of 32 items scored on a 5-point Likert scale ranging from 1 (strongly agree) to 5 (strongly disagree), with a sixth category of not applicable. The survey produces an overall score and four subscale scores: general satisfaction, access, appropriateness, and outcomes. The overall score is simply the mean of all items. Low scores on any of the subscales indicate that consumers agree or perceive that services are appropriate, effective, or leading to their self-defined outcomes. Table 2 displays the average, range, and standard deviation of subscale scores, and the reliability of each subscale (N = 30).

Table 2: MHSIP Satisfaction Survey Scores

MHSIP Scale                   Average   Range of      Standard    Alpha
                              Score     Avg Scores    Deviation
General Satisfaction          1.53      1.00 - 4.00   .69         .84
Access to Services            1.71      1.00 - 3.00   .63         .80
Appropriateness of Services   1.83      1.00 - 2.92   .64         .89
Outcomes of Services          2.23      1.00 - 3.80   .83         .89
Overall Score                 1.92      1.00 - 2.87   .60

The information in Table 2 indicates that three of the four subscales suffer from some of the same problems associated with generic satisfaction surveys. Specifically, three of the four subscales displayed a pattern of high agreement with most of the items, as can be seen in the low average scores (low score = high agreement) and the narrow standard deviations. Only the outcomes of services subscale demonstrated a broader range of responses. Consequently, the outcomes subscale correlated (> .3) with more measures collected in the pilot study than the other three subscales or the overall score.

Consumer Survey of Mental Health Services (CSMHS). Due to the narrow distribution of scores among most of the subscales, only the outcome subscale was retained for the primary study.
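The scoring just described (item means for subscales, with lower scores indicating stronger agreement) can be sketched as follows. The subscale layout and responses below are invented for illustration, not the actual MHSIP item assignments.

```python
# Sketch of the MHSIP-style scoring described above: each item is rated 1
# (strongly agree) to 5 (strongly disagree), subscale and overall scores are
# item means, and lower scores indicate stronger agreement that services are
# working. The 6-item layout and responses are invented.

def mhsip_scores(responses, subscales):
    """responses: item number -> rating (not-applicable items omitted).
    subscales: subscale name -> list of item numbers."""
    overall = sum(responses.values()) / len(responses)
    by_scale = {}
    for name, items in subscales.items():
        rated = [responses[i] for i in items if i in responses]
        by_scale[name] = sum(rated) / len(rated) if rated else None
    return overall, by_scale

# Hypothetical 6-item survey with two 3-item subscales:
subscales = {"access": [1, 2, 3], "outcomes": [4, 5, 6]}
responses = {1: 1, 2: 2, 3: 1, 4: 3, 5: 2, 6: 4}
overall, scales = mhsip_scores(responses, subscales)
print(scales["outcomes"])  # -> 3.0
```

Skipping not-applicable items from the `responses` mapping mirrors how such items are excluded from the mean.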
In place of the three subscales of the MHSIP Survey, three subscales were created that incorporated MDCH guidelines for community inclusion and treatment planning and Marty et al.'s (2001) two dimensions of personal plan and resource acquisition: treatment planning and goal development, promoting independence, and service provision. Several items from the MHSIP survey were used in constructing these three subscales. In addition, the language of the outcome subscale was modified to relate specifically to case management services rather than to general mental health services. The Likert scale of the MHSIP survey was retained, but the scoring system was reversed, so that scores ranged from 1 (strongly disagree) to 5 (strongly agree). The not applicable category was also retained. The four subscales are described in more detail below.

• Treatment Planning and Goal Development: This subscale was used as an independent measure for the consumer-centered service planning dimension. Its seven items assess whether consumers were involved in developing treatment goals, whether they helped organize the treatment planning meeting (time and place), and whether they were encouraged and supported in involving natural community support networks (e.g., friends, partners, or family members) in the treatment planning process. Examples of the items include:

1. My case manager is helping me achieve my goals in life
2. I was able to choose who attended my treatment-planning meeting
3. I, not staff, decided my treatment goals.

• Service Provision: This subscale was designed to assess whether services are based on the needs and goals of consumers and are strengths-based. Examples of items include:

1. I have control over when and where my case manager meets with me.
2. Most of the meetings with my case manager occur in my community and not at the mental health center.
3. My case manager helps me to appreciate my personal strengths and capacities.
• Promoting Independence: This subscale was designed to assess whether services are promoting independence from mental health services as well as general independence and self-determination. Examples of items include:

1. My case manager has helped me to find and hold a competitive job in the community.
2. My case manager has helped me to become more independent financially.
3. My case manager has helped me to become more independent of mental health services.

(A copy of the full survey is in Appendix C.)

Satisfaction

CSMHS: Outcomes. Consumer-rated satisfaction with services was measured with the modified outcomes subscale of the MHSIP Consumer Survey (described in detail above). Most of the questions in this subscale were retained, but the wording was altered to refer specifically to case management services. Examples of items include:

1. As a result of case management services, I deal more effectively with daily problems.
2. As a result of case management services, I am better able to deal with crises.
3. As a result of case management services, my housing situation has improved.

(A copy of the full measure is in Appendix C.)

To reduce the impact of method variance (described in more detail in the data analysis section) among the four subscales of the consumer survey of mental health services, the outcome subscale of the survey was separated from the other questions and given at a different point in the survey protocol sequence.

Empowerment

Consumer Empowerment Scale. This dimension was assessed by the Consumer Empowerment Scale ([CES], Rogers et al., 1997). The CES is one of the few empowerment measures that has been used with consumers of mental health services, and it has demonstrated high internal consistency (Cronbach's alpha = .86) and external validity with individuals with SMI (Rogers et al., 1997; Corrigan et al., 1999; Wowra & McCarter, 1999).
The effectiveness of this scale is partially based on the involvement of consumers in the development of the measure, which probably enhances its face validity. The CES is a 28-item self-report measure that is easily administered and easily scored. Each item is scored on a four-point Likert scale that ranges from 1 (strongly agree) to 4 (strongly disagree). Examples of the CES items include:

1. I can pretty much determine what will happen in my life.
9. I see myself as a capable person.
22. I feel powerless most of the time.

(A copy of the full scale is in Appendix E.)

Quality of Life

Quality of Life scale. This dimension was measured using Sullivan's adaptation of Andrews and Withey's (1976) quality of life scale. Although there are numerous quality of life scales available, the advantages of Sullivan's version are that the scale is easily administered, has a high internal consistency (Cronbach's alpha = .88), is sensitive to change over time (a problem with other QOL scales used in this area of research), and has demonstrated a capacity to differentiate individuals with regard to other indicators related to quality of life (e.g., personal safety, threat of physical violence, and access to resources). The scale is scored on a seven-point Likert scale that ranges from 1 (extremely pleased) to 7 (terrible), with a midpoint of 4 (mixed; equally satisfied and dissatisfied; Sullivan et al., 1992; Sullivan & Bybee, 1999). Examples of the QOL items include:

1. First, a very general question. How do you feel about your life overall?
3. How do you feel about your personal safety?
8. How do you feel about your emotional and psychological well-being?

(A copy of the full scale is in Appendix F.)
Pilot data on both the CES and QOL scales indicated that both were easy to complete and displayed moderate to strong correlations with other measures collected in the pilot study.

Demographics

A final measure used in the study, which was also used in the pilot study, is a consumer-completed survey designed to assess standard demographic information (e.g., gender, age, marital status, and level of education) and present employment status (a copy of the survey is in Appendix G).

Procedures

Recruitment

Consumer recruitment consisted of pulling all eligible consumers (i.e., those currently enrolled in case management for at least four months) from the agency database. This list was then presented to the case management team to determine its accuracy (i.e., whether consumers were still enrolled). After confirming the list, case managers were asked to contact and recruit consumers assigned to their caseloads. Case managers then scheduled consumers to come in for the face-to-face interview with me. All interviews were administered at the agency. The agency was also able to provide round-trip transportation during regular business hours for all consumers who required it.

One reason this technique was successful in the pilot and full study was that case managers were very effective in contacting and recruiting consumers assigned to their caseloads. A critical factor that cannot be overstated was the enthusiasm displayed by the case managers during the pilot study. Because of their exposure to the primary investigator over the previous four years, the two case management programs were extremely receptive to the study and very interested in the goals of the project. Finally, both programs were interested in using many of the measures in the future. Nevertheless, the recruitment process proved to be more complicated and time consuming during the full study.
Several factors that were not present during the pilot study contributed to a substantially slower recruiting process in the full study. One major complication was the turnover of staff in the standard case management program and the loss of the psychiatric nurse in the ACT program during data collection. Although two case managers had left the program during the pilot study, both had been promoted within the agency and were available to supervise the new trainees. In fact, the two case managers who left continued to carry a small caseload of individuals in the case management program. Two more case managers left the program in early July (one of them had just been hired in January 2001). However, both of these case managers left the agency before their replacements could be hired. As a result, two new case managers, who were hired in late July, had to establish relationships with consumers assigned to their caseloads without the help of the previous case managers. In addition, approximately half of the eligible pool of research participants in the standard program was assigned to the caseloads of the two newest case managers. Finally, the psychiatric nurse assigned to the ACT program (who, like the other team members, was hired in July) left the program unexpectedly to take a position in another department in the agency. The loss of the psychiatric nurse was more difficult for the ACT program to absorb than the loss of one or even two case managers was for the standard program. The psychiatric nurse position, which is a critical component of an ACT program, remained unfilled through the end of data collection.

Another related complication was the development of an ACT program in early July 2001. Due to a change in state regulations, the agency had to have an ACT program in place and operating before the end of the fiscal year, which was September 30, 2001.
At the time of the pilot study, all consumers receiving case management services were enrolled in one program (i.e., the standard case management program). After the introduction of an ACT program, approximately 37 consumers were transferred from the standard program to the ACT program. Although technically this is not an issue of staff turnover, the net result was the same in that nearly 37 consumers had to re-establish a relationship with new case managers.

Changes had to be made in the recruiting process and data collection procedures because of the significant changes in caseloads that occurred just before data collection began. Because the primary focus of the measurement model was the relationship between a consumer and their case manager, it was necessary for a relationship to have been established before assessing the quality of the bond. Therefore, it was determined that case managers should have at least three months to work with consumers on their caseloads before participating in the study. The three-month range was used more as a general guide than as an absolute cutoff. In many cases, the case manager provided feedback on whether a relationship had been established.

These changes in the recruitment procedure led to a longer data collection period than originally planned. Data collection began in mid-August; unfortunately, using the three-month criterion, only five case managers were eligible to participate in August (i.e., had worked with consumers for at least three months), and only three of them had more than one eligible consumer assigned to their caseload. Seven more case managers, and the consumers assigned to their caseloads, became eligible between October 15 and November 1 (including the entire ACT program on October 15). In addition to staff turnover and the development of the ACT program, several other factors interfered with the data collection process.
For example, one of the two newest case managers in the standard program was out on sick leave for most of the data collection period for which she was eligible to participate (November 1 to December 12). As a result, I interviewed only five consumers on her caseload; it is estimated that she had between 13 and 18 eligible candidates. Furthermore, the agency was preoccupied with the hiring of a new chief executive officer in August; a three-day biannual site visit by the Department of Community Health in mid-October; and the ongoing process of associating (i.e., merging while retaining some autonomy) with two other community mental health agencies as a result of managed care (the association process began in early summer 2001). These agency-level factors created a constant feeling of stress and transition among staff in both case management programs during the data collection process. Moreover, none of these factors existed during the pilot study.

Despite the many barriers and problems that arose during the primary data collection phase, staff from both programs remained supportive and engaged during the study. Staff were extremely flexible in trying to set up appointments for their consumers to participate in the study. Case managers or the agency's transportation department provided free round-trip rides to the agency for all consumers who were willing to participate but did not have access to transportation. Although the final sample size was smaller than had been projected at the start of the study, considering all the problems that occurred, I was able to meet with over 80% of all eligible consumers.

In addition, another critical factor was the familiarity consumers had with the primary investigator. Approximately one third of the consumers who participated in the full study (about half in the pilot study) had met the primary investigator at least once over the previous four years.
This familiarity with the PI proved to be helpful in gaining consumers' trust. In addition, feedback from consumers indicated that word-of-mouth among consumers (e.g., at the drop-in center) helped to spread the news about the research project and what the study was trying to accomplish. Anecdotal feedback from consumer participants suggests that the full protocol of surveys was fairly easy to complete and that many of the participants were pleased to be involved in the process (e.g., being asked their opinions of how effective services were for them). Numerous consumers noted that they would have participated in the study even if the $30.00 had not been offered. Moreover, numerous consumers noted that they would like to see these types of surveys used by the agency in the future.

Face-to-Face Interviews

At the time of administration, all research participants were asked to sign a consent form (see Appendix J). One consumer chose not to sign the consent form or to participate in the study at the point of the interview. After signing a consent form, all participants were asked to complete a series of structured surveys, all of which were self-administered measures. Three individuals required assistance with reading and writing. Consumer participants required approximately 45 to 60 minutes to complete the entire protocol. Upon completion of the survey, individuals were paid $30 for their time. To avoid the error involved in using multiple interviewers in data collection, all participants were surveyed by me. Due to the high potential for missing data with the administration of numerous questions, surveys were scanned for missing data before consumers were paid. Consumers were asked to fill in any missing data, or the primary researcher confirmed the answers with participants. Six consumers completed surveys on their own without coming in for an interview.
Three of these consumers were in jail, and the other three, who were unable to come in to the agency, completed the surveys on their own time and mailed them back to the agency.

In addition to collecting data from consumers, all case managers were asked to fill out a series of surveys. Like consumers, case managers were asked to sign a consent form and then to complete staff versions of the strengths, needs, and goals measures. Case managers were allowed to complete the surveys at their convenience. Upon completion of the surveys, case managers were paid $20 for their time. All 13 case managers who were eligible (i.e., had at least one consumer assigned to their caseload who participated in the study) participated in the study.

Timeline of Interviews

Due to staff turnover prior to initiating the full study and the initiation of the ACT program, data collection was spread out over several months to allow new case managers, in both the standard and ACT programs, to develop a relationship with consumers who were assigned to their caseloads. Data collection began in mid-August 2001 and ended on December 12, 2001.

Agency Data

After all research participants had completed the interviews, information from participants' medical charts and service utilization data from the agency database system was collected. Information on this data, including availability, is reported in the results section.

RESULTS

The results section is organized into five subsections. The first four subsections consist of detailed data analyses representing the first three dimensions of the conceptual model in Figure 2 and a fourth subsection consisting of the three proxy indicators of the consumer-centered model. A fifth subsection consists of a summary of findings and a reconfigured model based on those findings. The four data analysis subsections follow the same format for examining each indicator: its construction (e.g., how goals were classified or how staff and consumer perspectives were compared), description and analysis of the scoring method, distribution of each measure (e.g., curve, skewness, and measures of centrality), and reporting of missing data. In addition, each measure used in this study is examined for adequacy using four of the five criteria outlined by Scheirer and Rezmovic (1983): operational definition, multiple data sources, reliability, and validity. Validity is examined by correlating measures within each dimension and with measures from other dimensions that are assumed to be related. Anecdotal information from consumers and case managers is provided when available to examine the utility of each measure as a tool for assessing consumer-centered services.

An unexpected finding of this study, revealed in the data analyses, was the consistent and often substantial group-level difference between the two case management programs on many of the indicators used in the research. Due to these findings, most of the analyses presented in the results section are broken down by case management program.

Finally, significance tests are not included with the correlational analyses, for two reasons. First, the study is already compromised by a small sample size; therefore, small and medium sized correlations will not be significant based solely on sample size. Second, the Type I error rate is inflated when performing exploratory analyses with multiple correlations or F-tests on the same data set. In place of significance testing, eta², R², or the zero-order correlation alone is used to interpret the findings. When examining potential group differences, the F-test with significance is reported.
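As a check on this reporting convention, eta² for a one-way group comparison can be recovered from the F statistic and its degrees of freedom via eta² = (F · df1) / (F · df1 + df2). A minimal sketch (the helper name is mine):

```python
# Recover eta-squared from a one-way F statistic and its degrees of freedom:
# eta^2 = (F * df1) / (F * df1 + df2). For example, the F(1, 54) = 4.33
# reported for the strengths-scale group comparison corresponds to an
# eta-squared of about .074, matching the value given in the text.

def eta_squared(f, df1, df2):
    """eta-squared implied by a one-way F-test."""
    return (f * df1) / (f * df1 + df2)

print(round(eta_squared(4.33, 1, 54), 3))  # -> 0.074
```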
Correlations and effect sizes are referred to as small, medium, or large based on Lipsey's (1990) standards for effect sizes.

Strengths Assessment

Two scales were used to assess whether case managers were using a strengths-based perspective with consumers assigned to their caseloads. The first scale was the Consumers' Personal Strengths and Assets scale completed by case managers. The second scale was an Opinion Scale that consisted of five additional questions on the strengths scale. In addition, consumer research participants completed a self-report version of the strengths scale, referred to as the Personal Strengths and Assets survey. The consumer version was used in this section to examine the relationship between case managers' perspectives and consumers' own perspectives. In addition, two questions from the Consumer Survey of Mental Health Services (CSMHS) were included in this section because of their relevance to the strengths-based perspective. The two questions are:

9. My case manager helps me to appreciate my strengths and capacities
29. As a result of case management services, I am more aware of my strengths and personal assets

Similar to the consumer version of the strengths scale, the two questions from the CSMHS were employed in this section to examine the relationship between the case managers' perspective and consumers' evaluation of that perspective.

Strengths Scale

Scoring of the strengths scales and the opinion scale consisted of taking the mean score of all the items for which the DK/NA category was not selected. Although there was no missing data, the don't know/not applicable (DK/NA) category was selected on numerous questions by case managers and consumers. The DK/NA category was scored as missing for data analysis.
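The scoring rule just described (the scale score is the mean of the items for which DK/NA was not selected) can be sketched as follows; the example responses are invented.

```python
# Sketch of the strengths-scale scoring rule described above: DK/NA
# responses (represented here as None) are treated as missing, and the
# scale score is the mean of the remaining 1-4 item ratings. The 28-item
# response pattern below is invented for illustration.

def strengths_scale_score(item_responses):
    """Mean of non-missing item ratings (1-4); DK/NA (None) is dropped."""
    answered = [r for r in item_responses if r is not None]
    if not answered:
        return None
    return sum(answered) / len(answered)

# 28 items, two of which were answered DK/NA:
responses = [3] * 20 + [2] * 6 + [None, None]
print(strengths_scale_score(responses))  # mean of the 26 answered items
```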
On the case manager version, DK/NA was selected 34 times for question 28, "The consumer does well in school and other academic settings," and 22 times for question 18, "The consumer is a dedicated employee when he or she is on the job." There were no significant trends for selecting the DK/NA category on the consumer version. The DK/NA category was not selected on any of the five questions of the opinion scale.

Individual item analyses revealed no significantly non-normal distributions or response patterns. The median score on the case manager version was three on 22 items and two on six items. The median score on the consumer version was three for 27 items and two for one item. All items displayed a mild negative skew, which reflects the median score of 3 out of a range of 1 to 4.

The mean score of the case manager version of the strengths scale was 2.63 (median = 2.69) with a standard deviation of .40. The midpoint of the scale is 2.5. A mean score of 2.63 is in the positive range of viewing individuals as having a wide range of strengths. In other words, the case manager agreed, on average, with most of the strengths-based items describing the consumer. Twenty-two out of 56 case manager strengths scales had an average score below the midpoint of 2.5, indicating a negative average view of individuals' strengths; in other words, the case manager disagreed with most of the strengths-based items describing the consumer. There was a small but significant difference between the two case management programs, with the SCM program having higher case manager ratings (mean = 2.72) than the ACT program (mean = 2.51), F(1, 54) = 4.33, p < .05 (eta² = .074), on the case manager version of the strengths scale.

Mean imputation was used to compute a reliability estimate. Because the DK/NA category was selected frequently for questions 18 and 28, these two items were removed prior to calculating internal consistency.
In addition, mean imputation using the mean scale score was employed if a scale had three or fewer DK/NA categories selected after removing items 18 and 28. Using this criterion, 47 surveys were used to calculate internal consistency. Cronbach's alpha suggested a high degree of internal consistency for the 26-item strengths scale (alpha = .91), although the large number of items that comprised the scale inflated this figure. Corrected item-total correlations indicated a spread of correlations, ranging from .23 to .79, with most of the item-total correlations in the medium to large range. This spread of item-total correlations was not surprising considering that the 26 items represented a wide range of strengths, personal opinions, and resources. In fact, the alpha rating and corrected item-total correlations were better than what was expected for a multidimensional measure. The overall score reflected individuals' opinions and perceptions on a wide range of strengths-based categories. There was no assumption that all 28 items were related or that the overall measure was assessing a unified construct.

The mean score of the consumer version of the strengths scale was 2.97 (median = 2.96) with a standard deviation of .39. Four out of 52 consumer scale scores were below the midpoint of 2.5, indicating that these four individuals did not agree with at least half of the strengths- or asset-based items. Cronbach's alpha, computed using the same mean imputation method employed on the case manager version, suggested a moderate to high degree of internal consistency for the 28-item consumer version of the strengths scale (alpha = .87; N = 52). Corrected item-total correlations indicated a wide spread of correlations, ranging from .05 to .67, with 20 of the 28 item-total correlations in the medium to large range. Questions 10, "I am an independent person," and 17, "I have acquired many work related skills," had the smallest corrected item-total correlations.
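The reliability procedure described above can be sketched as follows: a respondent's remaining DK/NA responses (up to three, after dropping the heavy-DK/NA items) are replaced with that respondent's own scale mean, and Cronbach's alpha is computed on the completed data. The toy surveys are invented; this is a sketch of the procedure, not the study's analysis code.

```python
import statistics

# Sketch of the reliability procedure described above. A respondent's DK/NA
# responses (None), up to a maximum of three, are replaced with that
# respondent's own mean over answered items; surveys with more missing
# responses are excluded. Cronbach's alpha is then computed on the completed
# data. The toy surveys below are invented.

def impute_scale_mean(responses, max_missing=3):
    """Replace up to max_missing None values with the respondent's mean."""
    answered = [r for r in responses if r is not None]
    n_missing = len(responses) - len(answered)
    if n_missing > max_missing or not answered:
        return None  # survey excluded from the reliability analysis
    mean = sum(answered) / len(answered)
    return [mean if r is None else r for r in responses]

def cronbach_alpha(rows):
    """rows: list of respondents, each a complete list of item scores."""
    k = len(rows[0])
    item_vars = [statistics.pvariance([row[i] for row in rows]) for i in range(k)]
    total_var = statistics.pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

surveys = [
    [3, 3, 4, None],   # one DK/NA, imputed with this respondent's mean
    [2, 2, 2, 2],
    [4, 4, 3, 4],
]
complete = [s for s in (impute_scale_mean(s) for s in surveys) if s is not None]
alpha = cronbach_alpha(complete)
```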
As noted above, although not all items appeared to be related to the overall mean score, the moderate item-total correlations of most of the items were actually better than expected for a multidimensional measure. The average consumer strengths scale score was similar in the two case management programs (SCM, mean = 2.99, SD = .46; ACT, mean = 2.94, SD = .30). Table 3 displays the mean score comparisons of the two versions of the strengths scale by the two case management programs.

Table 3: Group Mean Comparisons of the Strengths Scales

Case Management   Consumer          Staff             F-test    Eta²
Program           Strengths Scale   Strengths Scale
SCM               2.99 (28)         2.72 (31)          5.31*     .09
ACT               2.94 (24)         2.51 (25)         24.85*     .34
Total Mean        2.97 (52)         2.63 (56)         20.13*     .16

Note: sample size is in parentheses; * = significant at .05

The results displayed in Table 3 indicate that consumers rated themselves, on average, higher on the strengths scale than did case managers in either the SCM or ACT program. In addition, there was a significant group effect: case managers in the SCM program rated consumers on their caseloads as having more strengths, on average, than case managers in the ACT program. Furthermore, although case managers in the SCM program rated consumers on their caseloads lower than consumers rated themselves, there was a medium-size correlation between the two scores, r(28) = .39 (r² = .15), that was not found in the ACT program, r(24) = -.08 (r² = .01).

Difference Score of Strengths Scales

Based on these results, a composite score was created by taking the average difference in scores between the case manager and consumer versions of the strengths scale. A composite score was created by taking the mean of all difference scores that were created by subtracting the case manager score from the consumer score on each item. For ease of interpretation, each composite score was converted by multiplying the score by -1.
From this conversion, scores could range from -3 (fewer strengths viewed by staff than by consumers) through 0 (perfect average agreement between consumers and staff) to 3 (more strengths viewed by staff than by consumers). Scores near zero, whether positive or negative, reflect high average agreement between the two strengths-based ratings (item differences can vary, but the overall score is zero). Positive scores indicate that case managers rated the consumer as having more strengths, on average, than the consumers rated themselves. The composite score reflects an average view from the two perspectives. For instance, on numerous items a case manager could rate items one point higher or one point lower than the consumer. The sum of those differences could add up to zero (i.e., the sum of negative differences cancels out the sum of positive differences), but the absolute value of those differences would be greater than zero. Because the purpose of looking at the two perspectives was to reveal whether case managers were using a strengths-based view, it was not necessary to compute a perfect level of agreement, such as when using Kappa or other measures of interrater agreement. The purpose of computing a composite score was to examine whether, in general, the two views were similar across the entire domain of strengths-based items, allowing for some differences on individual items. It is assumed, based on measurement error alone, that there are subtle differences in the ratings of case managers and consumers. The large number of items should minimize these random differences if, in fact, a case manager and consumer share a similar perception of the consumer's strengths. Results of this composite measure indicate that the mean of the difference scores was -.32 (median = -.34) with a standard deviation of .49 and a range from -1.57 to .54.
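A minimal sketch of the composite difference score, using invented 1-4 ratings: per-item differences are averaged, signed so that positive values mean the case manager saw more strengths. It also reproduces the point made above that opposite one-point differences cancel in the mean even though their absolute values do not.

```python
# Illustrative sketch, not the study's code: composite strengths
# difference score with invented 1-4 item ratings.
def composite_difference(staff, consumer):
    """Mean per-item difference, signed so positive = staff rated higher."""
    diffs = [s - c for s, c in zip(staff, consumer)]
    return sum(diffs) / len(diffs)

# Opposite one-point differences cancel in the mean (score = 0.0)
# even though the mean absolute difference is 1.0.
staff = [3, 2, 4, 1]
consumer = [2, 3, 3, 2]
mean_diff = composite_difference(staff, consumer)
mean_abs = sum(abs(s - c) for s, c in zip(staff, consumer)) / len(staff)
```

Scores from this sketch fall in the -3 to 3 range described in the text, with zero indicating average agreement rather than item-by-item agreement.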
Twelve of the 52 difference scores were positive, indicating that the case manager rated the consumer, on average, higher than the consumer rated him- or herself on the strengths scale. There was a small, non-significant difference between the programs, with the SCM program having a higher agreement score (i.e., a smaller mean difference between the two scales) than the ACT program (SCM = -.24, ACT = -.42; eta² = .04).

Opinion Scale

The mean score of the Opinion Scale was 3.09 (median = 3.00) with a standard deviation of .62. As with the strengths scale, the midpoint of the Opinion Scale is 2.5; therefore, a mean score of 3.09 indicated that case managers, on average, agreed with the five questions regarding how they felt about working with the consumer. Ten out of 56 Opinion Scale scores were below the midpoint of 2.5, indicating that these case managers did not agree with the view that it was easy to work with these ten consumers. There was a large, significant difference between the two case management programs, with the SCM program having higher case manager ratings (mean = 3.35) than the ACT program (mean = 2.75), F(1, 54) = 16.98, p < .05 (eta² = .24), on the Opinion Scale. Cronbach's alpha suggested a high degree of internal consistency for the five-item Opinion Scale (alpha = .90 unstandardized and .91 standardized). Corrected item-total correlations indicated that these five items were highly correlated with the overall mean score (item-total correlations ranged from .72 to .81). There is also the possibility that items on both the Opinion Scale and the case manager version of the strengths scale were linked, at least partially, by method variance. The Opinion Scale score was highly correlated (large effect size) with the case manager version of the strengths scale score, r(56) = .67 (r² = .45).

Utility of Strengths Measures

Operational definition.
Both the strengths scale (case manager version) and the opinion scale were developed as direct indicators of case managers' opinions and views of the consumers they work with. Although both scales lack anchors (or previous empirical research), it was argued that both have an intuitive appeal and good face validity as measures of implementation of a consumer-centered model. It was argued in this manuscript that case managers should possess a view that consumers do have strengths and capacities that can be nurtured in a consumer-centered, strengths-based model, and that simply asking case managers to rate consumers' strengths would provide a direct indicator of that perspective. An indication of a strengths-based view can be assessed by examining the average score of the case manager version of the strengths scale. A score above 2.5 indicates that a case manager, on average, views a consumer as having strengths and capacities. In addition, a comparison of the two strengths scales provides an indication of the agreement between case managers' perspectives and consumers' views of themselves. Scores near or above zero indicate a high level of agreement with consumers. Preliminary results suggest that the opinion scale reflects a unified construct of an overall opinion of the consumer, while the strengths scale may reflect multiple dimensions of strengths and resources. However, the high correlation between the opinion and strengths scale (case manager version) scores may also reflect that both scales are tapping into personal opinions rather than an objective assessment of consumers' strengths and capacities.

Multiple measures. Although the strengths and opinion scales were intended to be used by case managers, two perspectives of the strengths scale (consumer and case manager) were used to assess its utility.
Results from zero-order correlations and F-tests suggest that both versions of the strengths scale were useful and revealed program-level differences. In addition, the composite difference score was created by comparing the two versions of the strengths scale. Correlations between the difference score and other measures are presented in the validity subsection below.

Reliability. Both versions of the strengths scale displayed moderate to high internal consistency; however, corrected item-total correlations suggested that not all the items correlate with the overall scale score. As noted, this was not surprising considering that the strengths scale was not designed to measure one unified construct of strengths and assets. The scale was designed to cover multiple types of strengths and resources. Further research is needed to examine the potential for subscales of the strengths scale that are more related and more suited to reliability analysis. The opinion scale displayed high internal consistency.

Internal validity. Table 4 displays how all indicators of the strengths dimension are correlated. Because group-level differences were found on the case manager version of the strengths and opinion scales, two correlations are provided that represent the relationships within the two programs.

Table 4: Internal Validity of the Strengths Dimension

                          Strengths Scale   Strengths Scale   Opinion Scale   CSMHS #9      CSMHS #29
                          Consumer          Staff
                          SCM    ACT        SCM    ACT        SCM    ACT      SCM    ACT    SCM    ACT
Strengths Scale Staff     .39   -.08
Opinion Scale             .01    .11        .60    .73
CSMHS #9                 -.29   -.60       -.13    .34       -.05    .35
CSMHS #29                -.35   -.54       -.39    .15       -.34    .10      .28    .42
Difference Score of SS     a      a          a      a         .45    .43      .12    .62   -.04   -.43

Note: Bolded correlations indicate significant contrasts in the relationship of those two items between the two groups. a = Correlations were not provided between the two strengths scales and the difference score because of shared data.
Correlations displayed in Table 4 show a mixed pattern of relationships among strengths dimension indicators across the two programs. There was some consistency among the correlations of consumer-reported indicators across the two programs (strengths scale consumer, CSMHS #9, and CSMHS #29) and among staff-reported indicators across the two programs (strengths scale staff and opinion scale). The inverse relationships between questions 9 and 29 and the strengths scales were due to the reversed coding of the CSMHS survey (e.g., a low score means high agreement or high satisfaction). However, there were several contrasts in the correlations between case manager-reported indicators and consumer-reported indicators across the two programs. In nearly all correlations, consumer and staff scales or indicators were in the expected direction for consumer-centered services in the SCM program (some correlations were small or near zero). In contrast, the opposite relationship existed among the same scales and indicators in the ACT program.

External validity. External validity was assessed by comparing the correlations of the four strengths assessment indicators with indicators from the two related dimensions: consumer-centered service planning and community inclusion-based services. Correlations among these three dimensions are presented in the next two results subsections. To avoid redundancy in reporting the results, analyses of the strengths assessment indicators with other dimensions are described in the next two sections.

Consumer-Centered Service Planning

There were four indicators of this dimension: 1) congruity of needs, 2) congruity of treatment goals, 3) treatment planning and goal development, and 4) relationship with case managers.

Congruity of Needs

Congruity of needs was a composite measure that was created by comparing consumer and case manager versions of a needs assessment scale.
A difference or composite score was created by taking the mean of the differences (the absolute value of the differences) between the 34 items of the consumer and case manager needs assessment surveys. This composite score represents the overall or average level of agreement between the two needs assessments. For ease of interpretation, the composite score was converted into a proportion ranging from 0 (perfect non-agreement) to 1 (perfect agreement) by subtracting the overall score from three and then dividing by three. The needs assessment survey was designed to cover multiple dimensions of needs, such as transportation, housing, medical care, psychiatric treatment, companionship, and finding a job. Although a mean survey score was produced, it was not assumed that all 34 items were related or represented one unified construct. The score reflects an average rating of needs across multiple dimensions but does not provide information on specific areas of need. Thus, an internal reliability estimate for the overall survey was not computed for either version. The mean agreement or congruity between the needs assessments of consumers and their case managers was .66 (median = .66) with a standard deviation of .10 and a range from .33 to .89. The mean of the consumers' needs assessment survey was 2.34 (median = 2.37) with a standard deviation of .54. The mean of the case managers' needs assessment survey was 2.15 (median = 2.18) with a standard deviation of .51. Table 5 displays the group means for the two needs assessment measures and the congruity indicator by program.
Table 5: Program Comparisons of the Needs Assessment Measures

                                SCM Program   ACT Program   F-test   Eta²
Consumer Needs Assessment       2.24 (29)     2.46 (25)     ns       .04
Case Manager Needs Assessment   1.84 (29)     2.50 (25)     40.1*    .44
Congruity of Needs               .66 (29)      .66 (25)     ns       .00

Note: sample size is in parentheses; * = significant at .05

Program mean comparisons of the needs assessment scales indicated that case managers in the SCM program, on average, reported that consumers had fewer needs than did consumers themselves. In the ACT program, the mean scores of the two needs assessment surveys were similar. There was also a significant difference between the programs on the case manager version of the needs assessment survey, with case managers in the SCM program reporting that consumers on their caseloads had fewer needs, on average, than did case managers in the ACT program. The average congruity score indicated that there were consistent differences between the reported needs of consumers and their case managers in both programs. These differences in congruity scores were reflected by the small correlations between the two needs assessment scores in the SCM program, r = -.06 (r² = .003), and the ACT program, r = .15 (r² = .022; it is important to note that it is possible to have high agreement and low correlation between two raters, as well as the opposite scenario). Furthermore, although the two case management programs had similar mean congruity scores, there was a substantial difference in the standard deviation of those scores (SCM = .12, ACT = .07), indicating a much wider dispersion of congruity scores in the SCM program. Item-level analyses revealed a mixed pattern of congruity and incongruity between consumer and case manager reports on the 34 survey items.
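The conversion of the mean absolute item difference into a 0-1 congruity proportion, as described above, can be sketched as follows. The ratings are invented, and `needs_congruity` is a hypothetical helper name, not part of the study's materials.

```python
# Illustrative sketch of the needs-congruity computation: the mean
# absolute item difference on a 1-4 scale (maximum possible difference
# of 3) is rescaled to a 0-1 agreement proportion via (3 - mean) / 3.
def needs_congruity(consumer, staff):
    diffs = [abs(c - s) for c, s in zip(consumer, staff)]
    mean_diff = sum(diffs) / len(diffs)
    return (3 - mean_diff) / 3

# Invented six-item excerpt (the study used 34 items).
consumer = [2, 3, 1, 4, 2, 3]
staff = [2, 2, 1, 3, 4, 3]
score = needs_congruity(consumer, staff)
```

Identical rating vectors would give a congruity of 1.0, and maximally discrepant ratings (1 versus 4 on every item) would give 0.0, matching the anchors described in the text.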
Table 6 displays the needs assessment survey questions with mean congruity ratings for each item by group and item correlations (i.e., correlations between consumer and case manager versions of the needs assessment survey) by program. Using an arbitrary cutoff of .7 for high congruity, which reflects the top 25th percentile of the congruity scores, there were four questions that exhibited high congruity across both programs; however, two of these questions were about children. Because most consumers either did not have children or their children were of adult age, the congruity and correlation of these two items are inflated as a result of most consumers and their case managers selecting a one on both questions (i.e., low occurrence of the event).

Table 6: Congruity Ratings of Needs Assessment Items

Ranking of congruity     Needs assessment question*       Congruity   Congruity   Correlation   Correlation
                                                          in SCM      in ACT      in SCM        in ACT
High congruity in        6. Legal assistance                .83         .78         .55           .21
both programs            20. Childcare (respite care)       .97         .85         .29           .37
                         28. Transportation                 .71         .72         .49           .25
                         32. Parenting classes              .94         .87         **            .51
High congruity in        18. Leisure activities             .72         .58         .38          -.20
SCM program only         25. Reduce AOD use                 .82         .63         .54           .43
                         26. Access to entitlements         .70         .60         .44           .18
                         34. Housing (better housing)       .76         .68         .58           .39
High congruity in        3. Vocational training             .61         .75         .15           .38
ACT program only         4. Finding a good job              .64         .75         .46           .53
                         10. Social support                 .49         .74        -.30           .22
                         33. Symptom relief from MI         .63         .71         .26           .34
Average congruity in     2. Education                       .69         .64         .36           .05
both programs            9. Dental care                     .60         .69         .36          -.35
                         11. Daily living skills            .63         .63         .00          -.11
                         14. Personal hygiene               .68         .60         .05          -.33
Low congruity in         5. Spirituality                    .55         .60         .18           .09
both programs            13. Personal safety                .54         .56        -.30           .16
                         21. Counseling/therapy             .57         .64        -.06           .03
                         22. Medication (acquiring it)      .57         .61         .10           .21

Note: * = not all questions are displayed and some questions have been abbreviated for this table.
** = A correlation could not be computed because there was no variance in the case managers' responses.

Four questions displayed high congruity in the SCM program but not in the ACT program, and four other questions displayed the opposite pattern. Finally, 22 questions displayed average (between the 25th and 75th percentiles) to below-average (at or below the lower 25th percentile) congruity ratings and item correlations across the two programs (not all questions are displayed in Table 6). As noted, these categories are arbitrary and not based on statistically significant differences. In fact, only the mean congruity rating of question 4, "Finding a good job," was found to be statistically different between the two programs (only the 2nd and 3rd categories displayed in Table 6 were tested). Significance tests (i.e., alpha levels) were adjusted using a family-wise correction for inflation of Type I error, a common procedure when performing multiple univariate tests of significance on the same data set. The mean differences displayed in Table 6 may reflect only random variation across categories and between programs. Due to the small sample size, more detailed multivariate analyses could not be performed.

Congruity of Goals

Like the assessment of congruity of needs, a measure of congruity of goals was created by comparing a consumer's and case manager's lists of the consumer's goals in life. Because both lists were generated from an open-ended question rather than from fixed categories (as was the case for the strengths and needs surveys), comparisons had to be done individually and subjectively. The level of agreement or congruity was computed by scoring all matched goals as a one and all unmatched goals as a zero. The consumer's goals were considered the primary set, and the case manager's listed goals were matched (i.e., compared) to the consumer's list. If a case manager's goal was matched to a consumer's goal, then the goal was scored as a one.
If the consumer's goal could not be matched with a case manager's goal, then it was scored as a zero. The number of potential matched pairs was based on the total number of goals given by the consumer; thus, if the consumer listed three goals and the case manager provided five goals, the total number of potential matches was three. After the goals were scored as a one or a zero, a total agreement score was computed by taking the proportion of agreement across all goals. For example, if both the case manager and consumer listed only one goal, and the goals were in agreement, the total agreement score was 1.0. If that same scenario occurred except that the goals did not agree, the total agreement score was 0.0. To minimize potential errors in matching goals, three raters were employed to independently compare and score each set of goals. The three ratings were compared, and any differences in scoring were further discussed among the three raters until complete agreement was achieved. Fifty-two consumers provided a total of 172 listed goals; subsequently, there were 172 potential matches between consumers and case managers. Upon initial review, the three raters independently achieved agreement on scoring 89% (153/172) of the consumer goals (i.e., matched or unmatched to a case manager's goals). The scoring of nineteen potential matches required further discussion. Nearly all the disagreements among raters were resolved after more background on the reported goals was provided (e.g., what case managers or consumers were referring to). For both consumers and case managers, the total number of listed goals ranged from 1 to 6. Consumers had a mean of 3.31 goals listed (median and mode = 3) and case managers had a mean of 3.38 goals listed (median and mode = 4). The correlation between the total number of goals on each list was .45 (r² = .20).
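The goal-matching and agreement scoring just described reduces to a short sketch. In the study, matching was done subjectively by three raters on free-text goals; the sketch below substitutes exact string matching on invented goal lists purely to illustrate the arithmetic.

```python
# Illustrative sketch of the goal-congruity score: the consumer's list
# is the primary set, so the denominator is its length; exact string
# matching stands in for the study's subjective rater-based matching.
def goal_congruity(consumer_goals, staff_goals):
    """Proportion of the consumer's goals also listed by the case manager."""
    matched = sum(1 for g in consumer_goals if g in staff_goals)
    return matched / len(consumer_goals)

# Invented goal lists for illustration.
consumer_goals = ["find a job", "own apartment", "see family more"]
staff_goals = ["find a job", "stay sober", "own apartment",
               "budgeting", "exercise"]
score = goal_congruity(consumer_goals, staff_goals)   # 2 of 3 matched
```

Note that extra goals on the case manager's list (here, "stay sober") do not lower the score, mirroring the rule that the number of potential matches is fixed by the consumer's list.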
The number of matched goals ranged from 0 to 5 with a mean of 1.6 (median and mode = 2.0). The mean congruity or agreement between consumers and case managers was .45 (median and mode = .50) with a standard deviation of .28. A score of .45 indicated that case managers, on average, were able to report about half of the same goals reported by consumers assigned to their caseloads. The mean congruity of goals scores were similar in the two case management programs (SCM = .44, ACT = .46), but there was a small, significant difference in the number of goals reported by consumers and case managers (both reported more, on average, in the ACT program). Based on information gathered from consumers who were involved in the study, mostly through conversations with consumers, the congruity of goals measure probably underestimated case managers' knowledge of consumers' goals. For instance, numerous case managers reported goals that consumers had talked about in previous conversations but did not report or indicate during the interview. For example, a case manager reported that a consumer wanted a camera, which the consumer did not indicate during the interview; nevertheless, the consumer had mentioned her desire for a camera on numerous occasions, including discussions that occurred in the pilot study. Another instance was a case manager who reported that a primary goal for the consumer was to get out of jail, yet the consumer did not report this as a goal. Given that the consumer was in jail at the time of the interview, it seemed reasonable to select release from jail as a critical and pressing goal. This may have been an obvious point from the consumer's perspective and, therefore, may not have seemed to require reporting. Another frequent discrepancy was the reporting of abstinence from alcohol and other drugs, which was rarely reported by consumers but frequently reported as a goal by case managers.
There were probably many reasons why this discrepancy in the reporting of abstinence as a goal existed (e.g., participants were uncomfortable discussing substance use and abuse problems with a stranger, denial of the problem, overreaction on the part of case managers, and not interpreting abstinence from alcohol and other drugs as a goal in life); nevertheless, numerous consumers involved in the study had a serious substance use disorder that was impacting their treatment and undermining their recovery from mental illness. Moreover, epidemiological statistics consistently indicate that approximately 50% of individuals with a serious mental illness who are enrolled in mental health services have a co-occurring substance use disorder (Drake et al., 2001; Kessler et al., 1994; Regier et al., 1990). Finally, four consumers struggled to come up with a list of goals for various reasons. For example, one consumer reported feeling very nervous during the interview and found it difficult to answer the open-ended questions. Another consumer reported feeling somewhat depressed (her words) and found it difficult to think about the future. Another consumer was extremely manic during the interview (the consumer was constantly moving and rocking in his seat, getting up and moving around, with fast speech and broken sentences: thinking faster than he could talk) and could not concentrate on answering any of the open-ended questions (he eventually came up with one vague goal). A fourth consumer reported several delusions and hallucinations as goals during the interview. Although all four consumers discussed here provided at least one goal, due to the problems noted, these four consumers may have been unable to list the goals they regularly discuss with their case manager.
Treatment Planning and Goal Development

A third indicator of consumer-centered services used in this study was the Treatment Planning and Goal Development (TPGD) subscale of the Consumer Survey of Mental Health Services (CSMHS). The TPGD is comprised of seven items scored on a 5-point Likert scale ranging from 1 to 5 (low scores reflect high agreement or high satisfaction with services). An additional category of not applicable or don't know (NA/DK) was also used. Although there was no missing data, the NA/DK category was considered missing, if selected, for scoring purposes. The NA/DK category was selected four times on three different questions. Mean imputation using the mean score of the subscale was used to calculate reliability on the two surveys in which the consumer selected the NA/DK category once out of the seven questions (one survey was not used for reliability analysis). Fifty-four completed surveys were used in all other analyses. Item analyses revealed a consistent positive skew among the seven items, with only questions 1 and 7 having a mean score at or above 2.0 (2.02 and 2.08). The overall mean score was 1.9 (median = 1.9 and mode = 1.0) with a standard deviation of .71 and a range from 1.0 to 3.71. A mean score of 1.9 reflected high agreement among most consumers on the seven questions regarding treatment planning and goal development. There was a small, non-significant difference in the mean TPGD score between the programs, with consumers in the SCM program reporting higher agreement on items regarding treatment planning and goal development (SCM = 1.7, ACT = 2.1, eta² = .07). Cronbach's alpha suggested that the subscale has moderate internal consistency (alpha = .79 unstandardized and .80 standardized).
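A sketch of the TPGD scoring rule described above, with invented responses: NA/DK answers are treated as missing when averaging, and a survey with a single NA/DK response is mean-imputed for the reliability analysis. The function names here are hypothetical, not the study's.

```python
# Illustrative sketch of TPGD scoring: None marks an NA/DK response.
def tpgd_mean(responses):
    """Mean of answered items on the 1-5 scale, ignoring NA/DK (None)."""
    answered = [v for v in responses if v is not None]
    return sum(answered) / len(answered)

def impute_for_reliability(responses, max_missing=1):
    """Return a complete row (NA/DK replaced by the respondent's own
    subscale mean) or None if too many items are missing to use the
    survey in the reliability analysis."""
    if responses.count(None) > max_missing:
        return None
    m = tpgd_mean(responses)
    return [m if v is None else v for v in responses]

# Invented seven-item survey with one DK/NA response.
survey = [1, 2, 1, None, 2, 1, 2]
score = tpgd_mean(survey)               # mean of the six answered items
row = impute_for_reliability(survey)    # usable for reliability analysis
```

Because low scores reflect high agreement on this subscale, the invented respondent above would count among those reporting high satisfaction with treatment planning.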
Corrected item-total correlations indicated that six out of seven items had medium to large correlations with the overall score (item-total correlations ranged from .44 to .72), with question five, "I, not my case manager, selected my treatment goals" (item-total correlation = .20), being the only question that had a small item-total correlation with the overall score. Although question five is relevant to this domain, the awkward wording of the question may have confused some readers, which in turn contributed to its lower item-total correlation. If question five is removed, internal reliability increases to .83 (unstandardized), with corrected item-total correlations ranging from .49 to .71. Guided by this information, a new mean score was recalculated without question five. The recalculated mean score was 1.9 (the mean score is the same as before; median = 1.75 and mode = 1.00) with a standard deviation of .77 and a range from 1.00 to 4.17. The recalculated mean score was used in the correlations presented below.

Relationship with Case Managers

The fourth indicator of the consumer-centered service planning dimension was the Relationship with Your Case Manager (RYCM) survey. The scale consists of 20 items scored on a 4-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree). An additional category of not applicable (NA) was also used. The survey produces one overall score. Because this scale was added after six consumers had already been interviewed, there were only 48 surveys available for analyses. One additional survey was removed after it appeared that the consumer did not understand the scoring system. Although there was no missing data on the 47 completed surveys, the NA category was considered missing if it was selected. Before producing a mean scale score, questions 1, 2, 3, 9, 11, 13, 15, and 17, which were all negatively worded questions (e.g., "my case manager does not understand me"), were reverse scored. After reversing these eight questions, an overall mean score was computed. Item analyses revealed a consistent pattern of negatively skewed responses, with all but two questions having a mean above 3 (question 6 = 2.76 and question 9 = 2.93). Questions 6 (6 NA selections) and 9 (6 NA selections) also had the most NA categories selected. The overall mean score for the sample was 3.17 (median = 3.15 and mode = 4.0) with a standard deviation of .47. The overall mean indicates that consumers reported high agreement (i.e., strongly agreed) with questions regarding the quality of the relationship with their case managers. Mean scale scores ranged from 2.15 to 4.00; however, only four scores were below the midpoint of 2.5, indicating that only four out of 47 individuals had a less than agreeable view, on average, of the relationship with their case manager. In addition, 60% of the sample had a mean score above 3.00, indicating high agreement with the 20 items regarding the relationship with their case manager. Mean imputation using the mean scale score was employed to compute internal reliability. Mean imputation was used if the survey had one or two NA categories selected out of 20 questions. Using this criterion, 44 surveys were used to compute an internal reliability estimate (three surveys were excluded). Cronbach's alpha suggested that the survey had high internal consistency (alpha = .92 unstandardized and .93 standardized); however, the reliability estimate may have been partially inflated by the large number of items and the narrow range of responses on 18 out of 20 questions. Corrected item-total correlations indicated that 18 out of the 20 questions were within the medium to large correlation range (.45 to .79). Questions 6 and 9 displayed small item-total correlations (.04 and .23). Removing questions 6 and 9 increased the internal reliability to .94 (unstandardized), which also slightly improved the item-total correlations (range .49 to .80) for the remaining 18 items. Because questions 6, "My case manager asks about and respects my religion or spirituality," and 9, "My case manager is not helpful to me if I disagree with him or her," did not seem to be related to the other 18 items, the mean score was recalculated without these two items. Although it is unclear why question 6 does not relate to the other items or the overall score, except that the question rarely came up between consumers and case managers in this sample, question 9 appears to be poorly worded and may have been confusing to readers. The recalculated mean score is 3.20 (median = 3.17 and mode = 4.00) with a standard deviation of .51. The correlation between the first and second calculated mean scores was .99. The recalculated mean score was used in the correlations presented below.

Utility of Consumer-Centered Service Planning

Operational definition. The operational definitions of the four indicators are as follows:

• Congruity of needs. This measure was defined as the level of agreement between consumers and their case managers on what the consumers' needs are. The measure was created by computing the level of absolute agreement between consumers and case managers on a 34-item needs assessment survey. The congruity of needs score ranges from 0 to 1 with higher scores indicating high agreement. Because of the heterogeneity of items on the needs assessment survey, the overall or mean congruity score may not reflect the true level of agreement or understanding that case managers have regarding consumers' needs.

• Congruity of goals.
This measure was defined as the level of agreement between consumers and their case managers on what the consumers' goals in life are. The measure was created by computing a level of agreement between a consumer and case manager list of consumer-focused goals. The congruity of goals score ranges from 0 to 1, with higher scores indicating high agreement between consumers and case managers.

• Treatment planning and goal development (TPGD). This measure was defined as a consumer-reported evaluation instrument designed to assess consumers' satisfaction or agreement with items related to treatment planning and goal development activities with their case manager. The measure produces a mean score that is created by taking the average of the seven items (only a six-item version was used in correlations). The mean score ranges from 0 to 5, with lower scores indicating higher satisfaction or agreement with the seven items. Like numerous other satisfaction surveys, the TPGD subscale appears to suffer from ceiling effects, with most consumers strongly agreeing with all seven items.

• Relationship with your case manager (RYCM). This measure assessed the consumer's view of the relationship with their case manager. The measure covers multiple dimensions of the relationship, such as respect, trust, physical behaviors of the case manager, and the importance of the relationship. The measure is comprised of 20 items scored on a 4-point Likert scale, with eight items being negatively worded. One overall mean score is produced (the eight negatively worded items are reverse coded for scoring purposes). Mean scores can range from 1 to 4, with higher scores indicating that consumers have a positive view of the relationship with their case managers. Like the TPGD subscale, the RYCM scale suffers from a ceiling effect.

Multiple measures.
The consumer-centered, service planning dimension was assessed with two sources of data (consumer and case manager) and two data collection methods (closed-ended surveys and open-ended, qualitative information). One problem with employing the open-ended, qualitative measure was the increased time in human hours that was needed to code and score the congruity of goals measure. This is an important issue to consider when creating indicators of the consumer-centered model that can be used easily and economically by service providers.

Reliability. Due to the nature of the first two measures — congruity of needs and congruity of goals — a reliability estimate was not computed. Nevertheless, item-level analyses of the congruity of needs measure suggested that the 34 items were not assessing one unified dimension, but rather may have been assessing multiple dimensions. Problems in recalling or simply reporting on goals, at least for some consumers, may have undermined the reliability of the congruity of goals measure. More research is needed to confirm these observations. The six-item version of the TPGD subscale displayed medium to high internal consistency for a small-item measure. The 18-item version of the RYCM scale displayed high internal consistency; however, the alpha rating may have been inflated, at least partially, by the large number of items and the ceiling effect of responses.

Internal Validity. Table 7 provides a correlation matrix of the four indicators of the consumer-centered, service planning dimension.

Table 7: Internal Validity of the C-C, SP Dimension

                     Congruity of Goals   Congruity of Needs     RYCM scale
                       SCM      ACT         SCM      ACT         SCM      ACT
Congruity of Needs     .21      .25
RYCM scale             .14      .15         .73      .26
TPGD subscale          .30      .21         .25      .14         .37      .77

Note: Bolded correlations indicate significant contrasts in the relationship of those two items between the two groups. On average, there are 29 pairs in the SCM program and 25 pairs in the ACT program for all correlations.
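The absolute-agreement scoring used for the congruity indicators can be illustrated in code. The sketch below is hypothetical (the item responses and six-item length are invented for illustration; the actual needs survey has 34 items) and is not the scoring program used in the study:

```python
# Sketch of a congruity score: the proportion of items on which the
# consumer and the case manager gave the same response, ranging 0 to 1.
def congruity_score(consumer_items, manager_items):
    """Proportion of exact (absolute) agreement between two raters."""
    if len(consumer_items) != len(manager_items):
        raise ValueError("both raters must answer the same items")
    matches = sum(c == m for c, m in zip(consumer_items, manager_items))
    return matches / len(consumer_items)

# Hypothetical six-item example: raters agree on five of six items.
consumer = [1, 0, 1, 1, 0, 1]
manager = [1, 0, 0, 1, 0, 1]
score = congruity_score(consumer, manager)  # 5/6, about .83
```

As the text notes, a single overall proportion of this kind can mask disagreement that is concentrated in particular item clusters, which is one reason the 34 heterogeneous need items may not yield one unified dimension.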
Although all correlations were in the expected direction (the inverse relationships are due to the reverse coding order of the TPGD subscale), most correlations were within the small to medium effect size range. The two exceptions to this, and the only substantial program-level differences, were the correlation between the congruity of needs measure and the RYCM scale and the correlation between the RYCM scale and the TPGD subscale. The small correlations of the congruity of goals measure with the three other indicators may reflect, in part, measurement error. The consistently small correlations of the congruity of goals indicator with the other three indicators may also indicate that it did not relate to these other measures.

External Validity. Table 8 provides correlations among the four indicators of the consumer-centered, service planning dimension and four measures used in the strengths assessment dimension.

Table 8: External Validity Correlations of the C-C, SP Dimension

                     Strengths Scale,   Strengths Scale,    Opinion Scale     Strengths Scale
                        Consumer             Staff                              Differences
                       SCM     ACT        SCM     ACT       SCM     ACT        SCM     ACT
Congruity of Needs     .45     .00        .47     .16       .28     .14       -.05     .08
Congruity of Goals     .01    -.27       -.10    -.14      -.10    -.21       -.21     .07
TPGD subscale          .05    -.53        .06    -.08      -.10    -.07       -.05     .33
RYCM scale             .61     .40        .67     .08       .65     .07       -.06    -.26

Note: Bolded correlations indicate significant contrasts in the relationship of those two items between the two groups. On average, there are 29 pairs in the SCM program and 25 pairs in the ACT program for all correlations.

Data displayed in Table 8 indicate that the congruity of goals and strengths scale difference indicators were minimally correlated with the other scales in the matrix. Furthermore, nearly all the scales were minimally correlated in the ACT program.
In contrast, the RYCM scale and the congruity of needs indicator displayed medium to large correlations with the two strengths scales (consumer and case manager versions) and the opinion scale in the SCM program. The TPGD subscale displayed mixed results but generally was not related to the other indicators in Table 8, except for the consumers' strengths scale score and the strengths scale difference score in the ACT program.

Community Inclusion-Based Services

This dimension was assessed with two subscales of the Consumer Survey of Mental Health Services (CSMHS).

Service Provision

The first of two subscales used to assess this dimension is the Service Provision (SP) subscale of the CSMHS. The SP subscale is comprised of five items scored on a 5-point Likert scale ranging from 1 to 5 (low scores reflect high agreement or high satisfaction with services). An additional category of not applicable or don't know (NA/DK) was also used. Although there was no missing data, the NA/DK category was scored as missing, if selected, for scoring purposes. The NA/DK category was selected only once in 54 completed surveys. Mean imputation was used for the one missing data point. Item analyses revealed a consistent positive skew, with three out of the five items — questions 8 (mean = 1.85), 10 (mean = 1.83), and 11 (mean = 1.70) — displaying a substantial positive skew and a narrow distribution of scores; subsequently, the overall mean score was 1.93 (median = 1.9, mode = 1.0) with a standard deviation of .73 and a range from 1.00 to 3.80. Like the TPGD subscale described previously, the SP subscale suffered from the same ceiling effect, with most consumers strongly agreeing with four out of five items. A mean score of 1.93 reflected high agreement among most consumers on the five questions regarding the provision of services.
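The NA/DK scoring rule used for the CSMHS subscales (treat NA/DK as missing, score a respondent by the mean of the answered items, and exclude surveys with too many NA/DK selections) can be sketched as follows. This is an illustrative reconstruction of the rule as described in the text, not the study's actual scoring code, and the example responses are hypothetical:

```python
NA = None  # marker for an NA/DK selection

def subscale_score(responses, max_missing=1):
    """Mean of the answered 1-5 items; None if too many items are NA/DK.

    NA/DK is treated as missing, and per-respondent mean imputation is
    equivalent to averaging only the answered items.
    """
    answered = [r for r in responses if r is not NA]
    if len(responses) - len(answered) > max_missing:
        return None  # survey excluded from scale-level analyses
    return sum(answered) / len(answered)

# One NA/DK out of five items: the score is the mean of the four answered
# items, (2 + 1 + 2 + 3) / 4.
score = subscale_score([2, 1, NA, 2, 3])  # 2.0
```

Returning `None` for surveys that exceed the missing-item threshold mirrors the text's practice of dropping those surveys from reliability estimation while retaining the rest of the sample for other analyses.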
There was a small, non-significant difference on the mean SP score between the programs, with consumers in the SCM program reporting higher agreement on items regarding treatment planning and goal development (SCM = 1.80, ACT = 2.09, eta² = .04). Cronbach's alpha suggested that the subscale has medium to high internal consistency (alpha = .77 unstandardized and .76 standardized). Corrected item-total correlations indicated that four out of five items had medium to large correlations with the overall score (item-total correlations ranged from .44 to .74), with question 11 ("Most of my meetings with my case manager are away from the community mental health agency"; item-total correlation = .26) being the only question that did not correlate well with the other items. Because most meetings take place away from the mental health center, question 11 was substantially skewed towards options 1 and 2 of the Likert scale in this sample, which greatly diminished its variance and covariance with the other items in the scale. Nonetheless, item 11 is considered an important question and may be more relevant for agencies that are more office-based and do not employ telecommuters: case managers based out of their homes rather than the office who, by the very nature of their job, are community-based.

Promoting Independence

The second subscale used to assess this dimension is the Promoting Independence (PI) subscale of the CSMHS. The PI subscale is comprised of five items scored on a 5-point Likert scale ranging from 1 to 5 (low scores reflect high agreement or high satisfaction with services). An additional category of not applicable or don't know (NA/DK) was also used. Although there was no missing data, the NA/DK category was scored as missing, if selected, for scoring purposes. The NA/DK category was selected at least once on all five questions, with items 14 (six NA/DK selected) and 15 (eight NA/DK selected) having the most selected. Mean imputation using the mean of those questions that received a score was used if the NA/DK category was selected once across all five items. Using this criterion, 48 surveys were used to calculate the reliability estimate. Fifty-three surveys were used for all other analyses.

Item analyses revealed a more normal distribution of scores on all five items than on the two previous subscales of the CSMHS. Item means ranged from 2.21 (median and mode = 2.00, SD = .95) to 3.22 (median and mode = 4.00, SD = 1.23). The mean scale score was 2.64 (median = 2.67 and mode = 2.00) with a standard deviation of .85 and a range from 1.00 to 4.20. The mean score of 2.64 was below the midpoint of 3.00 and reflects moderate agreement, on average, among most consumers on the five questions regarding how well case management services lead to or promote consumers' independence in living and from mental health services. Forty-two percent of the sample (22/53) had a mean PI score at or above the midpoint of 3.00, indicating that a substantial portion of consumers in this study did not agree, on average, with the five items regarding services that promote independence. Three of the five items had a median score of 3.00 or 4.00 (questions 15, 16, and 17), indicating that at least half of the consumers who participated in the study did not agree that their case manager had helped them to find a job or to become more independent financially and from mental health services. Mean scores were similar between the two case management programs. Cronbach's alpha suggested that the subscale has medium to high internal consistency (alpha = .79 unstandardized and .80 standardized).
Corrected item-total correlations indicated that four out of five items had large correlations with the overall score (item-total correlations ranged from .60 to .76), with question 15 ("My case manager has helped me to find a job"; item-total correlation = .30) being the only question that did not correlate well with the other items. Although item 15 has a small to medium correlation with the overall scale score, the question is relevant for assessing independence and, therefore, was retained as part of the subscale mean score.

Utility of Community Inclusion-Based Services

Operational definition. Both the Service Provision (SP) and Promoting Independence (PI) subscales of the CSMHS were developed as evaluation tools to be used by consumers; subsequently, both scales employ a consumer perspective for assessing the quality of case management services. The SP subscale consists of five items that assess whether services are strengths-based and consumer-centered. The PI subscale consists of five items that assess whether services are helping consumers become more independent in life and independent from mental health services. Lower scores (i.e., below the midpoint of 3.00) indicate higher agreement or satisfaction with the items regarding service provision and promoting independence.

Multiple measures. The community inclusion-based services dimension was assessed by one data source and with one method of data collection; thus, it did not meet Scheirer and Rezmovic's (1983) standard for employing multiple data sources.

Reliability. Internal consistency of both subscales was medium to high for small-item measures.

Internal Validity. The correlation between the two subscales was .62 (correlations were similar in the two case management programs).
Variations in responses to the two subscales indicated that consumers did differentiate between questions that addressed service provision and those that addressed promoting independence, which indicates that the subscales are assessing different aspects of this dimension; nevertheless, these two subscales are related, as indicated by the large positive correlation.

External Validity. Table 9 displays correlations of the two subscales of this dimension with indicators of the consumer-centered, service planning dimension, broken down by case management program (sample sizes are approximately 29 for SCM and 25 for ACT for all correlations).

Table 9: External Validity of Community Inclusion-Based Services I

                     Service Provision     Promoting Independence
                         subscale                subscale
                       SCM      ACT           SCM      ACT
Congruity of Needs    -.31      .08          -.26      .08
Congruity of Goals    -.36     -.03           .11     -.06
TPGD subscale          .76      .58           .49      .27
RYCM scale            -.54     -.55          -.54     -.19

Note: Bolded items reflect substantial differences in program correlations.

There were two patterns in the correlations displayed in Table 9. Correlations among consumer-reported scales tended to be fairly consistent and positively correlated (controlling for reverse coding) across the two case management programs. The one exception was the correlation between the PI subscale and the RYCM scale in the ACT program, which was substantially smaller than the correlation in the SCM program. Correlations that involved both consumer- and case manager-reported scales were substantially different between the two case management programs. The one exception was the correlation between the PI subscale and the congruity of goals indicator, which was near zero in both programs.
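The program-level contrasts reported throughout these tables come from computing the same Pearson correlation separately within each case management program (SCM versus ACT) rather than over the pooled sample. A minimal sketch, using invented toy data in which the two programs happen to show opposite relationships:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def correlation_by_program(records):
    """records: list of (program, x, y) triples; returns {program: r}."""
    out = {}
    for program in {p for p, _, _ in records}:
        xs = [x for p, x, _ in records if p == program]
        ys = [y for p, _, y in records if p == program]
        out[program] = pearson_r(xs, ys)
    return out

# Hypothetical scores: a perfect positive relationship in one program and
# a perfect negative relationship in the other.
records = [("SCM", 1, 1), ("SCM", 2, 2), ("SCM", 3, 3),
           ("ACT", 1, 3), ("ACT", 2, 2), ("ACT", 3, 1)]
by_program = correlation_by_program(records)  # SCM: 1.0, ACT: -1.0
```

With subgroup sizes of roughly 25 to 29 pairs, as in these tables, individual within-program correlations carry wide sampling error, which is worth keeping in mind when interpreting "substantial" program contrasts.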
Table 10 displays the correlations of the subscales of this dimension with the four indicators of the strengths assessment dimension.

Table 10: External Validity of Community Inclusion-Based Services II

                              Service Provision     Promoting Independence
                                  subscale                subscale
                                SCM      ACT           SCM      ACT
Strengths Scale, staff         -.13      .23          -.11      .42
Strengths Scale, consumer      -.34     -.69          -.39     -.47
Opinion Scale                  -.11      .28          -.02      .38
Difference score of             .13      .62           .18      .59
  strengths scales

Note: Bolded items reflect substantial differences in program correlations.

Again, the same patterns occurred with indicators of the strengths assessment dimension that were noted in Table 9 with indicators of the consumer-centered, service planning dimension. Correlations among the three consumer-reported scales — the consumer strengths scale, the SP subscale, and the PI subscale — were similar and ranged from medium to large in effect size between the two case management programs, although the correlation between the SP subscale and the consumers' strengths scale was substantially larger in the ACT program than in the SCM program. In contrast, there were consistent program differences among correlations that involved both consumer- and staff-reported scales. Moreover, the correlations in the ACT program were in the opposite direction of what would be expected. For example, considering that all four subscales of the CSMHS were reverse coded, the positive correlation between the staff strengths scale and the PI and SP subscales indicated that as consumers' satisfaction with services decreased, staff's perception of their strengths increased. The same pattern was observed with staff's opinion scale and the PI and SP subscales: as staff's positive opinions of consumers increased, consumers' satisfaction with services decreased. Finally, the same pattern was also observed with the difference scores of the strengths scales and the PI and SP subscales: as the difference score increased (staff became more strengths-based compared to the consumers' own perspective), consumers' satisfaction with services decreased.

Proxy Indicators

There were three scales used as proxy indicators of the consumer-centered model: empowerment, quality of life, and satisfaction (a subscale of the CSMHS).

Empowerment

Empowerment was measured with Rogers et al.'s (1997) consumer empowerment survey. The survey consists of 28 items scored on a 4-point Likert scale, with 19 out of 28 items negatively worded (e.g., getting angry about something never helps). Scoring consisted of taking the mean of the 28 items after reverse coding the 19 negatively worded items. There was no missing data on the 54 surveys.

Item analyses, after recoding, revealed that most items had a small to moderate negative skew. Item means ranged from 1.96 (median = 2.00, mode = 1.00) to 3.52 (median and mode = 4.00), and standard deviations ranged from .54 to 1.08. The average score was 2.82 (median = 2.88, mode = 2.89) with a standard deviation of .22 and a range from 2.29 to 3.32. Both the range and standard deviation indicated a narrow range of scale scores clustered around the mean. The average mean score was similar to those of Rogers et al. (1995; mean = 2.94, SD = .32) and Wowra and McCarter (1999; mean = 2.72, SD = .34); however, the standard deviation of scores in this study was only 64% of the size noted in those two studies. Average group scores were similar between the two case management programs. Cronbach's alpha suggested that the 28 items had low internal consistency (alpha = .54 unstandardized and .64 standardized).
The internal consistency of the scale 108 found in this study was much lower than what was reported by Rogers et a1. (1995; alpha = .86) and Wowra and McCarter (1999; alpha = .85). Corrected item-total correlations also confirmed this low internal consistency with correlations that ranged from .00 to .62. Because the small sample size may have influenced these findings, the 30 empowerment surveys fiom the pilot study were added; thus, increasing the sample size to 84. A recalculated alpha increased to .80 (unstandarized); however, item-total correlations still revealed a wide range of correlations from near zero to large in effect size. Although the architects of the empowerment scale used the 28-item mean score for analyses, they did report on a five-factor structure, which suggests that there are multiple dimensions within the scale. These multiple dimensions can help to explain the extreme variations in item-total correlations. Wowra and McCarter (1999) reconfirmed the five-factor structure and created five subscale scores from these structures. The five subscales include self-esteem-self—efficacy, power-powerlessness, community activism, optimism and control, and righteous anger. An exploratory factor analysis on the combined data set of 84 surveys confirmed only the first subscale — self-esteem/self- efficacy, which accounted for 24% of the variance; however, because both Rogers et al. and Wowra and McCarter used nearly 300 surveys each for their analyses, the five subscales were recreated using their recommended item structures. Chronbach’s alpha suggested that only the first subscale - self-esteem/self-efficacy - had what would be considered good internal consistency (alpha = .85 for the dissertation sample and .87 for the combined sample). The four other subscales had internal consistencies ranging from 109 .20 to .54 combined Re which co: however. 
grouping scale sco subscale and a rar l8, l9, 2 similar t M); l (1976) c Scale w on ten 5 selectei Score 0 survey the me life 00‘ Small .20 to .54 (these findings were fairly consistent between the dissertation sample and combined sample). Results of these analyses suggested that the overall mean empowerment score, which consisted of the full 28-item measure, reflected a multidimensional construct; however, only one subscale was found to be internally consistent within its suggested grouping. Two empowerment scores were used for correlational analyses: the full mean scale score and the subscale score of self-esteem/self-efficacy (SE/BF). The SE/EF subscale had a mean of 3.07 (median and mode =3.00) with a standard deviation of .49 and a range from 1.89 to 4.00. The SE/EF subscale consists of questions 5, 6, 9, 12, 14, 18, 19, 24, and 25 of the CES survey. Average group scores on the SE/EF subscale were similar between the two case management programs. Qu_alig of Life Quality of life was measured with Sullivan’s adaptation of Andrew and Withey’s (1976) quality of life measure. The scale consists of nine items scored on a 7-point likert scale with one additional category of not applicable (NA). The NA category was selected on ten surveys for question 5 and one survey for question 7. The NA category, if selected, was considered missing for scoring purposes. Mean imputation, using the mean score of the items that received a score, was used to analyze internal reliability. All 54 surveys were used to compute reliability. An overall scale score was produced by taking the mean of the nine items. Lower scores reflect higher satisfaction with the nine areas life covered in the scale. Item analyses revealed a consistent pattern of normally distributed scores with a small positive skew. Item means ranged fi'om 3.00 (median = 3.00 and mode = 2.00) to NO 3.93 (median and mode = 4) and standard deviations that ranged from 1.61 to 1.87. 
The average mean score was 3.61 (median = 3.71 and mode = 4.11) with a standard deviation of 1.33 and a range from 1.11 to 7.00. Forty-three percent of the sample had a mean score at or above the midpoint of 4.00, indicating that these individuals were less than satisfied, on average, with their quality of life (i.e., scored above 4.00 on most of the nine items). There was a small, non-significant mean score difference between the two case management programs, with the SCM program having a slightly lower group average (slightly higher quality of life ratings) than the ACT program (SCM = 3.38, ACT = 3.87, eta² = .03). Cronbach's alpha suggested that the scale had high internal consistency (alpha = .91 unstandardized and .91 standardized). Corrected item-total correlations indicated that all items were highly correlated with the overall score (item-total correlations ranged from .60 to .81).

Satisfaction

Satisfaction with services was measured with the Outcomes Resulting from Case Management Services subscale (referred to as the outcomes subscale) of the Consumer Survey of Mental Health Services (CSMHS). The outcomes subscale is comprised of 11 items scored on a 5-point Likert scale ranging from 1 to 5 (low scores reflect high agreement or high satisfaction with services). An additional category of not applicable or don't know (NA/DK) was also used. Although there was no missing data, the NA/DK category was scored as missing, if selected, for scoring purposes. The NA/DK category was selected at least once on 10 out of 11 questions, with items 24 (14 NA/DK selected) and 20 (7 NA/DK selected) having the most selected. Mean imputation using the mean of those questions that received a score was used if the NA/DK category was selected on one or two questions across all 11 items. Using this criterion, 47 surveys were used to calculate the reliability estimate. Fifty-four surveys were used for all other analyses.
Item analyses revealed a normal distribution with a small positive skew across the 11 items. Item means ranged from 2.15 (median = 2.00 and mode = 1.00) to 3.19 (median and mode = 4.00), and item standard deviations ranged from 1.06 to 1.34. The overall mean score was 2.49 (median = 2.36, mode = 1.0) with a standard deviation of .86 and a range from 1.00 to 4.45. A mean score of 2.49 reflects moderate agreement among most consumers on the 11 questions regarding how satisfied they were with their treatment outcomes. There was a small, non-significant difference on the mean outcomes score between the programs, with consumers in the SCM program reporting higher agreement on items regarding their treatment outcomes (SCM = 2.28, ACT = 2.59, eta² = .03). Thirty-three percent of the sample had a mean outcomes score at or above the midpoint of 3.0, indicating that these individuals were not satisfied (i.e., did not agree), on average, with the 11 items regarding their treatment outcomes. Cronbach's alpha suggested that the subscale had high internal consistency (alpha = .91 unstandardized and .91 standardized). Corrected item-total correlations indicated that 10 out of 11 items had a medium to large correlation with the overall score (item-total correlations ranged from .53 to .83), with question 20 ("As a result of mental health services, I have not experienced any side effects from the medication that I am taking"; item-total correlation = .39) being the only question that did not correlate well with the other items. Although question 20's item-total correlation was in the medium effect size range, the question was too broad to be reliable. For instance, several consumers noted that they were taking numerous psychotropic medications and that some of these medications were effective and did not have side effects, but that some were not as effective and did cause side effects.
Because many of the individuals who participated in the study were taking more than one psychotropic medication concurrently, question 20 was difficult to answer. The question addresses an important and relevant issue for consumers but will require some rewording to be useful and reliable. Based on these findings, question 20 was removed from the scale, and the outcomes mean score was recomputed based on ten items. The recomputed mean score was 2.42 (median = 2.24 and mode = 1.00) with a standard deviation of .89 and a range from 1.00 to 4.50. Internal consistency increased to .92. The recomputed outcomes score was used in all correlational analyses.

Utility of the Proxy Indicators

Operational definition. Three different surveys were used as proxy indicators of the consumer-centered model.

• Quality of Life: The Sullivan QOL measure provides an overall score that reflects individuals' self-reported appraisal of how pleased they are with their life and where they are in their life at the present time. Lower scores indicate a higher sense of pleasure or satisfaction with one's own life in several areas, such as accomplishing life goals, leisure activities, psychological well-being, and family. All nine items or life domains appear to be highly related and to reflect an overall construct of quality of life.

• Empowerment: Rogers et al. (1995) defined their measure as a multidimensional assessment of the sense of empowerment of individuals with mental illness. Lower scores indicate a higher sense of general empowerment. Information from an exploratory factor analysis and reliability analyses suggested that more than one unified construct was being measured; however, only one of the suggested five dimensions, self-efficacy/self-esteem, had an acceptable internal consistency among the recommended groupings.
The self-efficacy/self-esteem subscale or sub-dimension of the empowerment survey had higher internal consistency, a better distribution of scale scores, and stronger relationships with other related measures (displayed below) than the overall scale score.

• Satisfaction: Satisfaction was assessed with the Outcomes Resulting from Case Management Services subscale of the CSMHS. Unlike most satisfaction surveys, which employ a provider's perspective of satisfaction, the outcomes subscale uses a consumer's view in evaluating how effective services have been at achieving consumer-centered outcomes. The scale was adapted from the MHSIP Consumer Survey, which was also developed as a consumer-viewed satisfaction survey. The 10-item version of the survey provides a mean score that reflects consumers' level of agreement with those 10 items. Lower scores indicate higher agreement or satisfaction with case management services at achieving or moving towards the 10 treatment outcomes.

Multiple measures. Information for all three proxy indicators was collected from the same data source and with the same methodology. Although the indicators conform to the consumer perspective, they do not conform to Scheirer and Rezmovic's (1983) criterion of employing more than one data source for an implementation measure. Future research should focus on collecting additional information, such as direct indicators of treatment outcomes (e.g., improved housing and employment) or reports from family members, friends, and partners on how well consumers are doing.

Reliability. Both the quality of life and outcomes subscale surveys displayed high internal consistency. The empowerment survey displayed low internal consistency and a wide range of corrected item-total correlations. The self-efficacy/self-esteem subscale of the empowerment survey displayed medium to high internal consistency.

Internal Validity.
Table 11 displays the correlations among the four proxy indicators (the self-efficacy/self-esteem subscale is included) broken down by case management program.

Table 11: Correlation of Proxy Indicators

                            Empowerment     Self-Efficacy/        Satisfaction
                                             Self-Esteem      (consumer outcomes)
                            SCM    ACT       SCM    ACT          SCM    ACT
Satisfaction (consumer     -.36   -.06      -.70   -.42
  outcomes)
Quality of Life            -.49   -.54      -.64   -.74          .69    .50

Note: Bolded correlations indicate substantial contrasts in the relationship of those two items between the two groups.

Correlations among the proxy indicators were similar between the case management programs and were within the medium to large effect size range, except for empowerment and satisfaction, which had a medium correlation in the SCM program but a near zero correlation in the ACT program. In direct comparisons, the self-efficacy/self-esteem subscale displayed a stronger relationship with the two other proxy indicators in both programs than the full scale score of the empowerment survey. This was not surprising, considering that the measurement error (using Cronbach's alpha as an indicator of error) of the empowerment scale was larger than that of the self-efficacy/self-esteem subscale. These results (excluding the newer development of the self-efficacy/self-esteem subscale) reflected the findings of the pilot study, which also showed high correlations among the outcomes subscale (MHSIP version), quality of life, and empowerment surveys.

External Validity. Table 12 displays the correlations of the four proxy indicators with indicators of the consumer-centered, service planning dimension.
Table 12: External Validity of Proxy Indicators I

                      Empowerment     Self-Efficacy/    Satisfaction           Quality of Life
                                      Self-Esteem       (consumer outcomes)
                      SCM     ACT     SCM     ACT       SCM      ACT           SCM     ACT
Congruity of Needs    .26     .04     .52     -.12      -.62     .15           -.49    .10
Congruity of Goals    -.03    .10     -.09    -.29      .06      .14           .04     -.02
TPGD subscale         -.13    -.21    -.06    -.15      .13      .65           .33     .36
RYCM scale            .38     .29     .53     .27       -.53     -.49          -.40    -.21

Note: Bolded items reflect substantial differences in program correlations.

There were several different patterns of correlations displayed in Table 12. The congruity of needs indicator was highly correlated with three out of the four proxy indicators in the SCM program. In contrast, the same correlations ranged from zero to small in the ACT program. Correlations among the congruity of goals indicator and the four proxy indicators ranged from near zero to small in both programs. Correlations among the TPGD subscale and the four proxy indicators were mixed. Correlations among the RYCM scale and the four proxy indicators ranged from small to medium in both programs, but overall, the RYCM scale appeared to be related to all four proxy indicators across both programs; however, there was a consistent difference in effect sizes between the two case management programs, with the SCM program displaying larger effect sizes than the ACT program. Finally, effect sizes of the self-efficacy/self-esteem subscale were consistently larger than those of the full empowerment scale score.

Table 13 displays correlations of the four proxy indicators with the two indicators of the community inclusion-based services.
Table 13: External Validity of Proxy Indicators II

                                      Service Provision     Promoting Independence
                                      subscale              subscale
                                      SCM       ACT         SCM       ACT
Empowerment                           -.31      .13         -.33      -.33
Self-efficacy/self-esteem             -.48      -.35        -.32      -.59
Satisfaction (consumer outcomes)      .55       .48         .47       .55
Quality of Life                       .67       .52         .55       .63

Note: Bolded items reflect substantial differences in program correlations.

Except for the correlation between empowerment and the SP subscale in the ACT program, all other correlations were in the medium to large range, in the expected direction, and similar in effect size between the two case management programs. Again, the self-efficacy/self-esteem effect sizes were consistently larger than those of the full empowerment scale across all correlations and in both programs.

Revised Measurement Model

As a result of information gathered from this study, the measurement model presented in Figure 3 on page 56 required some revision. A summary of the findings is presented below with a revised measurement model. Speculation on why the programs produced consistently divergent results is provided in the discussion section.

Revised Strengths Assessment

The case manager version of the strengths scale appeared to be moderately correlated with the consumer version of the strengths scale and highly correlated with the opinion scale (as expected). The medium correlation between the two strengths scales suggests that both versions could be useful in evaluating the extent to which programs are strengths-based. Assessment of external validity suggests that both versions of the strengths scale are moderately to highly correlated with two of the four indicators of the consumer-centered, service planning dimension but that the case manager version is not related to either of the two CSMHS subscales used to assess the community inclusion-based services.
In contrast, the consumer version is moderately to highly correlated with the two CSMHS subscales (an expected outcome considering that they are all subscales of the same survey). The opinion scale displayed mixed results across indicators and between programs but generally is related to staff-reported or partially staff-reported indicators in both dimensions. Finally, the composite strengths difference score displayed mixed results but generally was unrelated to most indicators of the two adjacent dimensions. As a result of these findings, the three surveys are retained but the composite score is not.

Revised Consumer-Centered, Service Planning

Assessment of internal validity suggests that the congruity of needs indicator, RYCM survey, and TPGD subscale are related but that the congruity of goals indicator is minimally related to the other three indicators of this dimension. Both the congruity of needs indicator and the RYCM survey appear to be related to three out of four indicators of the assessment of strengths dimension and the two CSMHS subscales of the community inclusion-based service dimension (correlations for both indicators range from small to large across the two adjacent dimensions). The TPGD subscale of the CSMHS appears to be minimally related to the four indicators of the assessment of strengths dimension (correlations range from near zero to medium, with most correlations in the small effect size range) and, as expected, is highly correlated with two other subscales of the CSMHS (as well as the consumer outcomes subscale of the CSMHS used to measure satisfaction). Finally, the congruity of goals indicator appears to be minimally related to indicators of both adjacent dimensions (correlations range from near zero to medium, with most in the small effect size range).
As a result of these findings, the congruity of needs indicator, the RYCM survey, and the TPGD subscale (with reservations) are retained and the congruity of goals indicator is not.

Revised Community Inclusion-Based Services

As expected, the PI and SP subscales of the CSMHS are highly correlated in both programs. The two subscales displayed mixed and inconsistent results with other indicators in both the consumer-centered, service planning dimension and the strengths assessment dimension, as well as between programs. As expected, the PI and SP subscales of the CSMHS are highly correlated with the TPGD subscale of the CSMHS (method variance may be inflating these high correlations). In general, the PI and SP subscales are related to other consumer-reported scales, such as the strengths scale and RYCM scales, in both programs (again, method variance may be influencing these correlations). Correlations among the PI and SP subscales with other staff-reported or staff-consumer composite measures are mixed across dimensions and between the two case management programs. Although further research is needed, both measures are tentatively retained in the revised measurement model.

Revised Proxy Indicators

Except for the full scale score of the empowerment survey, the other three proxy indicators — satisfaction (consumer outcomes), quality of life, and the self-efficacy/self-esteem subscale — are moderately to highly correlated in both programs. These findings are similar to the results found in the pilot study. Because of poor internal consistency and inconsistent results with adjacent dimensions, the full scale score of the empowerment survey is eliminated from the model and replaced by the self-efficacy/self-esteem subscale, which has higher internal consistency and is moderately to highly correlated with indicators in adjacent dimensions.
The three remaining proxy indicators are moderately to highly correlated with the PI and SP subscales of the community inclusion-based services dimension in both programs; with the RYCM survey of the consumer-centered, service planning dimension in both programs; and with the congruity of needs indicator in the SCM program. Only the quality of life indicator appears to be moderately related to the TPGD subscale in both programs. As a result of these findings, the three remaining proxy indicators are retained in the revised measurement model.

Final Measurement Model

Figure 4 displays the revised and final measurement model based on the results of the study. A second strengths assessment dimension is added to the model that represents the consumers' view of their own strengths. The consumer version of the strengths scale was moderately to highly correlated with indicators in all three adjacent dimensions. The arrow going from the staff version of the strengths assessment dimension to the community inclusion-based dimension has been removed as a result of the research findings. Finally, the proxy indicator of empowerment has been removed and replaced with an assessment of self-efficacy/self-esteem.

Figure 4: Revised Measurement Model [diagram omitted: the staff and consumer versions of the strengths assessment dimension connect to the consumer-centered service planning and community inclusion-based services dimensions, which in turn connect to the proxy indicators of self-efficacy/self-esteem, quality of life, and satisfaction]

Empowerment is still considered to be a useful and meaningful indicator of consumer-centered services but will require a more refined measure or multiple measures, such as the self-efficacy/self-esteem subscale and related domains that reflect the multiple aspects of empowerment.
Two Perspectives

Despite consistent program-level differences in the correlations between consumer-reported and case manager-reported measures, there were also similarities between the programs within consumer-reported surveys and within case manager-reported surveys. For example, Table 14 displays correlations among case manager-reported surveys. An additional measure that has not been introduced previously in this document, but is displayed in Table 14, is the Multnomah Community Ability Scale (MCAS). The MCAS was specifically designed as a rating tool for clinicians (e.g., case managers) who serve individuals with SMI. The MCAS is a brief 17-item scale that assesses consumers' level of functioning in the community. Case managers were asked to complete an MCAS on each consumer on their caseload who was involved in the study. Lower scores indicate that consumers are doing worse in the community (i.e., are displaying less ability or capacity to remain stable in the community). The MCAS score is used here to examine the relationship between case managers' clinical perception and strengths-based perception of consumers.

Table 14: Correlations of Case Manager-Reported Surveys

                                              MCAS             Needs Assessment         Opinion Scale
                                                               (case manager version)
                                              SCM     ACT      SCM      ACT             SCM     ACT
Needs Assessment (case manager version)       -.56    -.60
Opinion Scale                                 .61     .57      -.32     -.43
Strengths Scale (case manager version)        .74     .71      -.37     -.56            .60     .73

In contrast to the numerous program-level differences reported previously, the correlations among case manager-reported surveys are similar in effect size between the two programs. Case managers reported similar perspectives on consumers in both programs. The correlations among the MCAS and the three other case manager-reported scales are in the large effect size range and similar across the two case management programs.
These high correlations suggest that case managers' clinical view of consumers is highly related to their assessment of consumers' needs, their assessment of consumers' strengths, and their opinion of consumers. Again, the high and consistent correlations may also be indicative of method variance. For example, if the variance of the MCAS survey is partialed out of the correlations among the other variables in Table 14, the remaining correlation effect sizes are lowered substantially. Table 15 displays these results.

Table 15: Correlations of Case Manager-Reported Surveys II

                                              Needs Assessment            Opinion Scale
                                              (case manager version)
                                              SCM         ACT             SCM        ACT
Opinion Scale                                 -.32/.03    -.43/-.13
Strengths Scale (case manager version)        -.37/.09    -.56/-.23       .60/.28    .73/.56

Note: the first correlation in each cell is the zero-order correlation and the second is the partial correlation with the MCAS survey taken out.

The consistent lowering of effect sizes among these correlations suggests that there is shared variance among the four scales (possibly more if second- and third-order partial correlations are computed). After controlling for the MCAS variance, correlations among the case manager needs assessment survey, the case manager version of the strengths scale, and the opinion scale are reduced to small effect sizes (near zero in the SCM program) in both programs. The same trend can be found among consumer-reported scales between the two case management programs. Despite program-level differences among correlations between case manager-reported surveys and consumer-reported surveys, correlations among consumer-reported surveys were similar, and usually in the medium to large effect size range, between the two programs. For example, consumers were asked to complete 12 different scales (some were subscales, such as the four subscales of the CSMHS). Two scales that consumers were asked to complete but that have not been discussed previously were two subscales of the Behavior and Symptom Identification Scale (BASIS-32).
Both subscales — depression/anxiety and daily living/role functioning — are self-report measures of how well consumers are functioning in the community. Only two of these 12 scales — the consumer needs assessment and the TPGD subscale — did not consistently correlate with the other ten surveys in both programs (correlations among the other ten usually ranged from medium to large in effect size). To further illuminate these trends, an EFA was done on the 11 consumer survey scores (the full empowerment survey score was not included, but the two BASIS-32 subscales were) and the four case manager survey scores noted above (both programs were combined in the run). The EFA, using a varimax rotation and a sample size of 42, produced a three-factor solution that accounted for 70% of the variance, with the first two factors accounting for 63% of the variance. The first factor consisted of the nine consumer surveys noted above, excluding the consumer needs assessment survey and the TPGD subscale. Factor loadings ranged from .59 to .89. The second factor consisted of the four case manager-reported surveys noted above. Factor loadings ranged from .78 to .87. The last factor, which accounted for 7% of the total variance, consisted of the two consumer surveys — the consumer needs assessment survey and the TPGD subscale — that did not load on the other two factors. The results of the EFA suggest that there are two higher-order factors influencing the two sets of reports (i.e., consumer- and case manager-reported surveys). The high factor loadings also suggest that method variance may be present. Method variance can inflate factor loadings, as well as zero-order correlations, through shared error variance. The covariance of error terms across items is often caused by similar response patterns within participants.
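The varimax rotation used in this EFA maximizes the variance of the squared loadings within each factor, which drives each survey toward loading strongly on a single factor (making the consumer versus case manager split easier to see). A minimal NumPy sketch of the standard orthogonal varimax algorithm (not the specific statistical package used in the study), applied to a hypothetical loading matrix:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion, projected back onto the set of
        # orthogonal matrices via the SVD.
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (1.0 / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1 + tol):
            break  # converged
        criterion = new_criterion
    return loadings @ rotation

# Hypothetical unrotated loadings for four surveys on two factors.
loadings = np.array([[0.8, 0.3], [0.7, 0.4], [0.3, 0.9], [0.2, 0.8]])
rotated = varimax(loadings)
```

Because the rotation matrix is orthogonal, each survey's communality (the row sum of its squared loadings) is unchanged; only the distribution of loading across factors shifts.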
An effective assessment of method variance can be accomplished with a confirmatory factor analysis (CFA); however, due to the limited sample size, a CFA cannot be run on this data set (parameter estimates would be unreliable). As shown previously, however, partial correlation analyses can reveal the presence of shared variance. For example, Table 16 displays zero-order correlations among four of the consumer-reported surveys (three are proxy indicators and one is the RYCM subscale).

Table 16: Correlations of Consumer-Reported Surveys

                                          Quality of Life     Self-Efficacy/     Satisfaction
                                                              Self-Esteem        (consumer outcomes)
                                          SCM      ACT        SCM      ACT       SCM      ACT
Self-Efficacy/Self-Esteem                 -.64     -.74
Satisfaction (consumer outcomes)          .69      .50        -.70     -.42
Relationship with Your Case Manager       -.40     -.21       .53      .27       -.53     -.49

Correlations displayed in Table 16 indicate a fairly consistent pattern between the two programs (the effect sizes are slightly larger in the SCM program). Most of the correlations are in the medium to large effect size range (bolded items indicate the two correlations that are in the small range). If the variance of the quality of life scale is partialed out of the remaining correlations, the effect sizes of these correlations are lowered but still display a relationship. The effect of partialing out the quality of life variance is displayed in Table 17.

Table 17: Correlations of Consumer-Reported Surveys II

                                          Self-Efficacy/          Satisfaction
                                          Self-Esteem             (consumer outcomes)
                                          SCM         ACT         SCM         ACT
Satisfaction (consumer outcomes)          -.70/-.47   -.42/-.08
Relationship with Your Case Manager       .53/.39     .27/.17     -.53/-.38   -.49/-.46

The changes in the correlations displayed in Table 17 suggest that method variance, multicollinearity (two scales assessing the same or similar phenomenon), or both are inflating the correlations among consumer-reported surveys.
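The first-order partial correlations shown in Tables 15 and 17 can be computed directly from the zero-order correlations. A minimal sketch of the standard formula, using the rounded SCM correlations from Table 16 (the -.47 reported in Table 17 was presumably computed from unrounded values, so the rounded inputs reproduce it only approximately):

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# SCM program, Table 16: x = satisfaction, y = self-efficacy/self-esteem,
# z = quality of life (the variable being partialed out).
r = partial_r(-0.70, -0.64, 0.69)  # approximately -0.46
```

When the control variable is uncorrelated with both x and y, the partial correlation equals the zero-order correlation; the more variance z shares with both, the larger the reduction.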
Although partialing out the variance of the quality of life scale reduced the effect sizes from large to medium or from medium to small, the data in Table 17 also suggest that these variables are still related (second- and third-order partial correlations may lower these relationships even further). As noted, it will be necessary in future research endeavors to examine the multivariate relationship among these variables to better understand the overall impact of method variance and multicollinearity.

Summary of All Measures Examined in the Study

Results indicated that most of the measures performed as expected, at least in the Standard Case Management program (SCM), but there were differences in the quality (i.e., reliability and validity, as defined by Scheirer & Rezmovic, 1983) of the measures. Table 18 summarizes all measures examined in the study. Each measure is ranked in order of performance. A rank or level of performance for each measure is based on the scale or indicator's internal consistency among items (i.e., reliability), how well it correlated with other measures in the same and adjacent dimensions (i.e., internal and external validity), and how well it related to both staff- and consumer-reported measures (i.e., another indicator of validity). The performance rankings range from 1 (poor performance or did not perform as predicted) to 4 (good performance or performed as predicted). A detailed description of the four ranks is provided below.

4. The measure had good reliability (alpha is > .90 and good internal consistency), good validity (the measure had medium to large correlations with other measures in the same dimension and with measures of adjacent dimensions), and correlated well with measures from both perspectives (consumer and staff). The measure performed as predicted.

3.
The measure had good reliability (alpha is > .90 and good internal consistency), moderate validity (the measure had small to medium correlations with other measures in the same dimension and with measures of adjacent dimensions), and correlated moderately well with measures from both perspectives. The measure is useful but may need some modifications.

2. The measure had moderate to good reliability (alpha is < .90 but > .80 and moderate internal consistency), mixed validity (the measure had small to medium correlations with other measures in the same dimension and inconsistently correlated with measures of adjacent dimensions), and correlated inconsistently with measures from both perspectives (consumer and staff). The measure displays mixed results and needs modifications, but may still be useful.

1. The measure had poor to moderate reliability (alpha is < .80 and/or low internal consistency among items) and mixed to poor validity (the measure had mostly small correlations with other measures in the same dimension and low correlations with measures of adjacent dimensions). The measure does not work in the measurement model.

Slightly more weight (i.e., a higher rank) is given to a scale's internal and external validity, and to larger correlations with other measures, than to its reliability.
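The reliability thresholds in these rankings refer to Cronbach's alpha, which can be computed from the item variances and the variance of the summed scale score. A minimal sketch with hypothetical ratings (not the study's data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list of scores per item, each list covering the same respondents.
    """
    k = len(items)
    # Total scale score for each respondent.
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Two hypothetical items whose rank orderings of respondents only partly agree:
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])  # 0.75
```

Perfectly parallel items yield an alpha of 1.0; the less the items covary relative to their individual variances, the lower the alpha, which is why a low alpha (as with the empowerment survey) attenuates correlations with other measures.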
Table 18: Review of Measures

Measure                       Reliability    Validity      Correlation with    Overall    Comments
                                                           Consumer &          Rating
                                                           Staff Scales
Quality of Life (QOL)         Good           Good          Good                4
Relationship with your        Good           Good          Good                4          Based on the 18-item version
case manager (RYCM)
Outcomes (Satisfaction)       Good           Good          Good                4
Self-Efficacy/Self-Esteem     Moderate       Good          Good                3.5        Performed better than overall CES scale
Congruity of Needs            Unknown        Good          Moderate            3          Useful measure but takes time to compute
Consumer strengths scale      Moderate       Good          Good                3          Item-total correlations suggest multi-dimensions
Consumer Empowerment          Poor           Moderate      Moderate            2.5        Despite poor reliability, the scale still
Scale (CES)                                                                              correlates with other related measures
Service Provision (SP)        Poor to        Moderate      Poor                2.5        Needs modifications
                              moderate       to good
Promoting Independence (PI)   Moderate       Moderate      Poor                2.5        Needs modifications
Opinion Scale                 Good           Moderate      Poor                2.5        Relates mostly with other staff measures
Staff strengths scale         Good           Moderate      Poor                2.5        Relates mostly with other staff measures
Treatment Planning and Goal   Moderate       Mixed to      Poor                2          Scale suffers from ceiling effects and will
Development (TPGD)                           poor                                        require modifications; better than MHSIP version
Congruity of Goals            Unknown        Poor          Poor                1          May be useful on its own but does not relate
                                                                                         to other measures
Congruity of strengths        Unknown        Poor          Poor                1          Individual strengths scales perform better
(Strengths Difference Score)

DISCUSSION

The primary objective of this study was to develop, test, and explicate specific indicators of a model that can eventually be used to evaluate consumer-centered, strengths-based case management programs. The consumer-centered or strengths-based model of mental health service delivery has received widespread support in Michigan and across the United States.
Nevertheless, despite its growing popularity among consumer advocacy groups, state mental health boards, and federal mental health organizations (Campbell, 1998a & 1998b; Chamberlin, 1990; DHHS, 1999; Frese, 1998; Kaufman, 1999; McCabe & Unzicker, 1995), there is limited empirical research on the efficacy and effectiveness of consumer-centered services for individuals who have a serious mental illness (SMI). Moreover, there is limited information on how to proceed in assessing the effectiveness of consumer-centered services; therefore, this project was initiated to develop the methodology and specific assessment tools for evaluating the model. An additional and related objective of this project was to examine the utility of having consumers of two case management programs evaluate the services they received. It seems intuitive that consumers should be involved in the evaluation of services that are, by design, centered on their needs rather than on the clinical acumen of the service provider. Although consumers have been involved in traditional evaluation research, their role has been that of the patient or subject of research, a passive recipient of services (Rogers & Palmer-Erbs, 1994). Outcomes of treatment have traditionally been selected by administrators, funders, or evaluators and have focused disproportionately on clinical indicators of treatment effectiveness, such as the reduction of pathology and symptomatology, reductions in hospitalizations, and treatment compliance (Campbell, 1998a; Rapp, Shera, & Kisthardt, 1993; Ridgway, 1988). Using a consumer perspective, individuals are given an opportunity to evaluate the quality of the services they received (Campbell, 1997; Rapp et al., 1993; Rogers & Palmer-Erbs, 1994). The consumer-centered, strengths-based model of service delivery is a paradigm shift from the more traditional and extensively evaluated clinically-based service delivery model (Campbell, 1998a).
This paradigm shift suggests that the evaluation of consumer-centered services also requires a conceptually different methodology and selection of assessment tools than has been employed in the assessment of traditional programs and services (Campbell, 1998a). As a result, this project employed a consumer perspective to guide the development or selection of specific measures and indicators of a consumer-centered model. Guided by this view, it was argued that if services are consumer-centered, consumers and their case managers should be aligned, or in agreement, on the strengths, needs, goals, and treatment outcomes of the individuals receiving these services. In other words, consumer-centered services can be identified by the level of congruity between consumers and case managers on multiple consumer dimensions. The higher the congruity or level of agreement, the closer services are to the consumer-centered, strengths-based model. These two aspects — a consumer perspective of services and congruity as an indicator — were used as the framework for developing or selecting specific measures. In addition, several features of the consumer-centered, strengths-based model presented in Figures 1 and 2 of this document were derived from the Michigan Department of Community Health's (MDCH) guidelines for developing and operating community-based mental health services in Michigan (MDCH, 1999) and Marty, Rapp, and Carlson's (2001) expert consensus guidelines for developing and operating strengths-based case management programs. Results of the study suggested that consumers were interested in participating and invested in evaluating the services that they were receiving. Anecdotal reports generated from the participant interviews indicated that consumers were pleasantly surprised by the format of the surveys and encouraged by the possibility of using some of the surveys in future practice.
Many of the consumer participants also appreciated being asked for their opinions. Particular questions in the surveys frequently generated further discussion among participants. Consumers often wanted to discuss answers in more detail or felt that they had more to say than could be expressed in the surveys. The interview format appeared to encourage individuals to discuss their experiences and opinions about being a consumer of mental health services. In addition, response patterns from staff suggested that case managers were comfortable answering questions about their opinions and perceptions of consumers. Findings from analyses on the different measures were mixed between the two case management programs. Most of the expected or predicted relationships were found in the standard case management (SCM) program but not in the Assertive Community Treatment (ACT) program. Correlations among consumer-reported scales and correlations among case manager-reported scales were similar between the two programs; however, consistent program-level differences were found among correlations between consumer and case manager surveys. These analyses revealed two patterns in the data. The first was that there were two distinct views of treatment that mostly transcended the two programs: consumer and case manager. The second pattern was that the congruity of views and perceptions between consumers and case managers diverged between the two programs. For example, consumers' opinions of the relationship with their case manager (i.e., the RYCM survey) and case managers' opinions of how easy it is to work with the consumer (i.e., the opinion scale) were highly correlated in the SCM program but were not correlated in the ACT program. In another example, the RYCM survey and case managers' perception of consumers' strengths (i.e., the case manager version of the strengths scale) were again highly correlated in the SCM program but were not correlated in the ACT program.
Results from the SCM program indicated that among the three indicators of congruity between consumers and case managers — strengths, needs, and goals — congruity of needs was the only indicator that consistently correlated with measures within its dimension and among adjacent dimensions. The average congruity of needs score, which was identical in both programs, did not initially appear to reflect high agreement between the two raters; however, in comparison to previous studies that have examined both staff and consumer views on similar topics (Crane-Ross et al., 2000; Comtois et al., 1998; Sainfort et al., 1996), a mean concordance rating of .66 was above average for staff and consumers in a mental health facility. Crane-Ross and associates (2000) found significant disparities and low concordance in a sample of 418 consumer-case manager dyads tracked over three time periods on what the consumers' needs were, whether consumers were receiving the help (i.e., services) they needed, and whether the consumers' needs had been met. Although Crane-Ross' team did not report a simple level or measure of agreement, they did find negative to low correlations and significant disparities between consumer and staff ranks on 15 items that addressed different areas of need. Sainfort et al. (1996) found an average agreement rating of .54, with scores ranging from .42 to .66, on five items that assessed consumers' quality of life in a sample of 37 consumer-staff dyads. Agreement was highest on assessments of clinical functioning (symptoms = .64 and functioning = .66) and lowest on social and vocational functioning (social support = .42 and occupation = .44).
Finally, Comtois and colleagues (1998) found an average agreement rate of .71, with a range from .53 to .94, on 14 items that assessed problem areas in living (e.g., personal hygiene, shopping, and cooking meals) for a sample of 47 consumer-staff dyads from a multidisciplinary rehabilitation team (three staff members rated all 47 consumers). Although this agreement rate is higher on average than what was found in the present project, Comtois et al.'s study compared two different needs assessment surveys and scored agreement as a simple check (i.e., selected it or did not select it) for each category rather than comparing ranks or levels of need. Moreover, Comtois et al. noted that consumers tended to rank most of the problem areas of living lower than staff (i.e., to view them as less of a problem), which suggests that lower levels of agreement would have been found if ranks, rather than simple checks, had been compared. Comparisons among these studies are limited due to the heterogeneity of measures and methodologies employed; nevertheless, previous research findings indicate a consistent and substantial disparity between the views of consumers and clinical staff on the needs, skills, and resources of consumers, and the level of agreement between these two perspectives in the present study appears to be above average. This above-average concordance between consumers and staff on the consumers' needs may help to explain why the congruity of needs indicator correlated well and as predicted with other consumer- and staff-reported measures. In addition, the technique of averaging a comprehensive selection of 34 areas of need may have provided a more reliable and meaningful measure of congruity than has been used in previous research.
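The congruity-of-needs computation described above, which averages agreement over a fixed list of need areas, can be sketched as follows. The dissertation's exact scoring rule is not reproduced here; this hypothetical version counts an area as concordant when both raters report the same level of need:

```python
def mean_concordance(consumer, case_manager):
    """Proportion of need areas on which the two raters give the same rating."""
    matches = sum(c == m for c, m in zip(consumer, case_manager))
    return matches / len(consumer)

# One consumer-case manager dyad rated on six hypothetical need areas
# (the study averaged over 34 areas; here 1 = need present, 0 = need absent).
consumer_ratings = [1, 0, 1, 1, 0, 1]
manager_ratings = [1, 0, 0, 1, 1, 1]
dyad_score = mean_concordance(consumer_ratings, manager_ratings)  # 4/6, about .67
```

A program-level figure such as the mean concordance of .66 reported here would then be the average of these dyad-level scores across all consumer-case manager pairs.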
The findings of this research indicate that the staff of the two case management programs were aware of consumers' needs and provide support for the application of the congruity of needs indicator as a tool for assessing consumer-centered services. Nonetheless, item-level analyses revealed disparities in some areas, such as the need for spirituality, personal hygiene, personal safety, counseling, social support, and medication. Thus, differences in perceptions still existed on specific items or domains of need. The strengths difference score did not improve upon the information that was already provided by the two strengths scales (consumer and case manager) and correlated poorly with other measures in the model. The consumer strengths scale was highly correlated with other consumer-reported surveys, and the case manager version of the strengths scale and the opinion scale were both highly correlated with other staff-reported surveys. There was also a medium correlation between the two strengths scales. The congruity of goals indicator displayed mixed results among related indicators and measures in both programs but usually was only minimally correlated with other measures. The average congruity of goals was much lower than the average agreement on needs and had a larger dispersion of scores in both programs, which may have contributed to its low correlation with other measures in the study. Moreover, the closed-ended structure of the needs assessment survey may have minimized measurement error (i.e., increased reliability), whereas the open-ended format of the goals indicator may have increased measurement error due to problems in recalling goals. Despite its poor performance with other measures in the study, the goals indicator may still provide useful information for assessing the quality of consumer-centered services.
For instance, although the average level of agreement was slightly below 50%, the indicator still reflected a level of agreement that exceeded chance. Case managers and consumers had to create a list of goals independently of each other and without guidance or the use of a predefined list; therefore, any matches may have reflected, at least in part, shared knowledge. Although chance agreement could have occurred if case managers attempted to guess consumers' goals, information from their open-ended responses suggests that they did not try to guess. For instance, several case managers noted that a primary goal for several consumers was to abstain from or reduce the use of alcohol and other drugs. This goal is usually not acknowledged or accepted by consumers of mental health or addiction treatment programs who are in the early phases of recovery from a substance use disorder. In fact, only one consumer noted abstinence from alcohol and other drugs as a goal, even though several consumers who participated in the study had an active substance use disorder (abuse or addiction). These findings suggest that case managers were honestly reporting what they considered to be the consumers' goals and that they were not attempting to guess to inflate agreement. There were also indications of shared knowledge that further support the assertion that case managers were not guessing, but rather were recalling conversations that had occurred with consumers. These indications of shared knowledge came from the unique, highly individualized, non-clinical, non-treatment-oriented nature of some of the goals that were reported by case managers and matched by consumers.
Examples of these goals reported by both case managers and consumers included:

- continue writing;
- make home improvements;
- return to college or at least take some courses;
- get a license and a car;
- get a camera and take photography classes;
- visit with brother up north;
- find a male partner;
- have business succeed;
- move out of the AFC home and into an independent apartment; and
- build models for the Lego Company (reported by case manager).

This sample of matched goals (the last one was not matched by the consumer) indicates that case managers were cognizant of many of the goals reported by consumers. The uniqueness of these goals provides evidence of communication between consumers and case managers which, in turn, suggests that the goals indicator yields useful information about shared decision-making. Furthermore, when case manager goals did reflect a clinical or treatment orientation, such as the client will take medications as ordered, attend appointments as scheduled, stay out of the hospital or jail, or remain stable in the community, consumers rarely agreed with these goals. The lack of agreement on clinical goals indicates disparity or disagreement, but also that consumers were not conditioned to provide institutional responses. In other words, consumers felt free to list goals that were meaningful to them, regardless of their involvement in mental health treatment. These qualitative assessments of both sets of reported goals suggest that the congruity of goals indicator can provide useful information; however, problems in recalling relevant goals discussed between consumers and case managers may have lowered the reliability of the indicator and, in turn, its relationship with other measures in the study. Finally, the congruity of goals indicator required an extensive amount of time to compute and may not be a practical or economical measure for service providers.
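Given how time-consuming the goal-matching step was, one could imagine partially automating it. The sketch below is purely illustrative and is not the procedure used in the study: the token-overlap (Jaccard) similarity rule and the 0.5 threshold are arbitrary assumptions standing in for the qualitative judgment the researcher actually applied.

```python
# Hypothetical sketch of scoring the congruity-of-goals indicator
# automatically. Goal lists written independently by the consumer and the
# case manager were matched by hand in the study; here a crude token-overlap
# rule (Jaccard similarity over lowercased words) stands in for that
# judgment. The 0.5 threshold is an arbitrary assumption.

def tokens(goal):
    return set(goal.lower().replace(",", "").split())

def jaccard(a, b):
    return len(a & b) / len(a | b)

def matched_goals(consumer_goals, staff_goals, threshold=0.5):
    """Count consumer goals that overlap enough with any staff-listed goal."""
    matches = 0
    for cg in consumer_goals:
        if any(jaccard(tokens(cg), tokens(sg)) >= threshold
               for sg in staff_goals):
            matches += 1
    return matches
```

Dividing the match count by the number of consumer-reported goals would give a per-dyad congruity proportion; any real use would need a validated similarity rule rather than this toy one.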
The 18-item version of the Relationship with Your Case Manager (RYCM) survey, which was one of four indicators of the consumer-centered, service-planning dimension, displayed high internal consistency, good internal and external validity, and, as noted, correlated with both case manager and consumer reported measures in the SCM program. The four subscales of the consumer survey of mental health services (CSMHS) also displayed mixed results, but generally correlated with other consumer reported scales in both programs. The first two subscales, Treatment Planning and Goal Development (TPGD) and Service Provision (SP), elicited high agreement response patterns, although the mean score and range of both scales had better distributions (i.e., lower ceiling effect and wider range) than three of the four MHSIP subscales examined in the pilot study. The Promoting Independence (PI) subscale displayed a wider range of responses and a higher mean score (lower agreement) than the first two subscales. The Outcomes subscale, which was used as a consumer-viewed indicator of satisfaction with services under the proxy indicator dimension, displayed a normal distribution of mean scores, high internal consistency (alpha = .92), and good internal and external validity. Similar results were found in the pilot study with the original MHSIP version of the Outcomes subscale. The first three subscales will require some modifications (e.g., altering the wording of some questions); nevertheless, initial results suggested that all four subscales were useful and informative measures and, based on pilot and dissertation data, were an improvement on the original MHSIP consumer survey. In addition, feedback from consumers suggests the creation of a fifth subscale that focuses on aspects of pharmacotherapy, such as side effects, education (e.g., are consumers being informed about their medication), self-determination (e.g., are consumers given choices and options), and feedback (e.g., are consumers given an opportunity to talk with the nurse and psychiatrist about their medication). Three of four proxy indicators — quality of life, satisfaction, and the self-efficacy/self-esteem subscale of the empowerment survey — displayed moderate to high internal consistency and good internal and external validity in both programs. The Quality of Life scale (Sullivan et al., 1992; Sullivan & Bybee, 1999) displayed high internal consistency and, as expected, was highly and consistently related to other adjacent measures. Similar results for the Quality of Life scale were found in the pilot study. The only problematic indicator was the full-scale Consumer Empowerment Survey (CES; Rogers et al., 1997), which displayed low internal consistency, a wide range of corrected item-total correlations, and mixed results within and between programs. Because the self-efficacy/self-esteem subscale displayed higher internal consistency and higher correlations with other scales within the proxy indicators dimension and with adjacent dimensions than the full CES, the subscale score was used in place of the full-scale score. The multidimensional nature of the Rogers et al. (1997) CES scale may have contributed to low internal consistency among the 28 items. Although they used only the full-scale score in statistical analyses, both Rogers et al. and Wowra and McCarter (1999) extracted five unique factors using an exploratory factor analysis. Although these five factors are at least partially related and reflect the multidimensional nature of empowerment, the full-scale score may not be a reliable measure of the empowerment construct. Another empowerment measure developed by Segal, Silverman, and Temkin (1995) produced three scores reflecting different aspects of empowerment - personal, organizational, and extra-organizational - which were all positively correlated, but were not redundant measures of one attribute.
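The two reliability statistics repeatedly cited in this section, internal consistency (Cronbach's alpha) and the corrected item-total correlation, can be computed as follows. This is a generic, self-contained sketch for readers, not code from the study.

```python
# Generic sketch of two reliability statistics discussed in the text:
# Cronbach's alpha for a set of scale items, and the corrected item-total
# correlation (each item against the sum of the *remaining* items).
# rows = respondents; each row holds one respondent's item scores.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(rows):
    k = len(rows[0])                   # number of items in the scale
    items = list(zip(*rows))           # columns = items across respondents
    totals = [sum(r) for r in rows]    # each respondent's total score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items)
                            / variance(totals))

def corrected_item_total(rows, item_index):
    item = [r[item_index] for r in rows]
    rest = [sum(r) - r[item_index] for r in rows]  # total minus the item
    return pearson_r(item, rest)
```

A low alpha together with a wide spread of corrected item-total correlations, as reported for the full CES, is the usual signature of a multidimensional item pool being scored as if it were one scale.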
Each dimension included some of the items in Rogers et al.'s scale; however, Segal et al.'s measure did not assess self-efficacy or self-esteem. These conflicting findings between the present and previous studies suggest the need for more research to better understand the construct of empowerment and how it can be reliably measured; nevertheless, the concept of personal empowerment is still considered an important and relevant indicator of consumer-centered services. The findings from the SCM program provide preliminary support for the application of many of the measures used in the study. It will be necessary in the future to examine the revised measurement and conceptual model in Figure 4 with more sophisticated multivariate statistical analyses to better understand the overall model and to examine a number of other issues, such as the construct validity of the first three dimensions, the factor structures of the large item measures, the influence of method variance, and multicollinearity and other forms of redundancy. In addition, although there were differences between the two case management programs for many of the correlations between consumer and case manager measures, due to the nature of the study and the methodology employed, it was difficult to discern the factors that contributed to these differences. There were numerous program-level differences that could have contributed to the results. For instance, the ACT program, by design, serves individuals who have recently (less than one year) been either hospitalized or incarcerated, use crisis services extensively, use case management services extensively (beyond crisis services), have an active comorbid substance use disorder, or continue to struggle with their mental illness. In contrast, the SCM program transferred all individuals with these issues or conditions to the ACT program.
As a result, there was a selection confound between these two programs; individuals enrolled in the ACT program were different from individuals in the SCM program. These actual differences in the behaviors of consumers may have also perpetuated differences in staff's perceptions of consumers in the two programs. Staff in the ACT program may have viewed their clients as being more disabled psychiatrically and less capable of benefiting from a consumer-centered orientation, at least in the early stages of treatment. Another salient difference between the two programs was the duration of operation; the SCM program has been in existence since the early 1980s, while the ACT program became operational one month prior to the start of the study. There were also numerous mismatches in the ACT program between the case manager whom consumers were evaluating and the case manager who completed the survey protocol. For example, one consumer in the ACT program had worked with one of the ACT case managers for three years in an outpatient setting before both of them came to the ACT program, and she referred to this case manager when she completed the consumer protocol of surveys. However, the psychiatric nurse completed the case manager protocol on this particular consumer, even though she had only worked with the consumer for three months. In another instance, the ACT supervisor completed three case manager protocols for consumers who were probably not referring to her when completing the consumer protocol of surveys. These mismatches occurred because the ACT team, rather than the consumers, decided who would complete the case manager survey protocol. In future endeavors, the primary relationship (i.e., the case manager to whom the consumer is referring) should be identified prior to data collection in case management programs that use a team concept rather than assignment to specific staff.
Another problem with the ACT program was that four of the five staff completed their part of the survey protocol on the last day of data collection and only under pressure from the supervisor. This last-minute rush to complete a large number of surveys may have led to hurried responses and, consequently, increased error in reporting. Further research with more established ACT programs is necessary to discern factors associated with ACT programs from those generally associated with new programs that may impact the results of consumer-centered measures. Due to these problems in data collection, caution should be used in generalizing the findings to future evaluations of consumer-centered services. Furthermore, the sample size is considered small for the purpose of assessing the psychometric qualities of experimental measures. Consequently, data analyses were restricted to univariate statistics that lack the capacity to examine the multivariate relationships of multiple items within scales, multiple scales within each dimension, and the relationships among the six dimensions. As noted throughout the results section, there were indications that zero-order correlations were inflated as a result of method variance, multicollinearity, or both. Staff turnover and the recent start-up of the ACT program also hindered data collection and reduced the number of established consumer-case manager relationships that could be assessed. Finally, although staff persevered through numerous agency-level changes that occurred throughout the duration of the study, these events nevertheless distracted them during their participation in the study. Despite these limitations, the two main objectives of the study - testing a conceptual model and specific indicators of that model - were achieved. This study introduced a functional framework and specific measures that can eventually be used to evaluate consumer-centered, strengths-based case management services.
Another strength of this study was the inclusion of two perspectives - consumer and case manager. Traditionally, evaluation research on mental health case management services has relied almost entirely on the views of clinicians, administrators, or evaluators in assessing the effectiveness of services (Mueser et al., 1998; Teague et al., 1995, 1998; McGrew et al., 1994). As results from this study revealed, the clinician's view is not always reflective of the consumer's perspective, a finding that has been reported in numerous other studies that have compared clinician and consumer perspectives (Comtois et al., 1998; Coursey, Farrell, & Zahniser, 1991; Dimsdale, Klerman, & Shershow, 1979; Mitchell & Pyle, 1983; Ridgway, 1988; Sanfort, Becker, & Diamond, 1996). By relying heavily on clinicians' views of services, evaluators have only a narrow and incomplete view of treatment and its effectiveness with primary consumers. Finally, the study provided consumers with a positive experience of participating in evaluation research. The development or selection of specific measures was guided by an assumption, articulated by Fetterman and others (Fetterman, 2000; Patton, 1997; Rogers & Palmer-Erbs, 1994), that evaluation research should be empowering for participants as well as the evaluator. The structure and application of the consumer reported measures used in the study provided an opportunity for extensive consumer feedback and evaluation. An evaluation of consumer-centered, strengths-based services should promote the same principles of self-determination and personal empowerment that are fostered by the services themselves. It is hoped that these measures can help to facilitate that process in future program evaluations. The results of this study provide preliminary support for the model and most of the measures that were assessed.
The next step is to examine the fit of the model and selected measures with other consumer-centered, strengths-based case management programs. Other, more established ACT programs should be included in future projects because of the problems associated with the newly established ACT program in this study. In addition, it will be necessary to assess the suitability of these measures for individuals from various minority groups. Only two of the 56 consumer participants involved in the study were classified as belonging to a minority group. Although the small percentage of minority participants involved in the study mirrored the county census, it is obviously not reflective of national demographics. Sensitivity to individual cultural and ethnic differences is an essential component of both service provision and the evaluation of those services (DHHS, 1999; Patton, 1997); therefore, future research should include a larger sample of individuals from minority groups in order to examine the efficacy of these measures with different minority groups. The effect of time on the measures tested in this study is another issue that needs to be addressed in future research. Program evaluation and, specifically, the assessment of program implementation is a dynamic process that is usually accomplished over time; therefore, it will be helpful to understand the temporal properties of these measures. For example, do scale scores change over time in response to changes in people's lives? Is there a relationship between the recovery process and the three proxy indicators over time? Do staff opinions and behaviors change in response to changes in consumers? Future research will need to explore these issues in more detail.
Finally, although the measurement model developed in this study was designed for assessing the implementation of consumer-centered services, a long-term goal after establishing the fidelity of the model is to examine the effectiveness of these services in facilitating the process of recovery from mental illness or substance use disorders. Therefore, a natural evolution in the development of this evaluation framework is assessment of the impact of consumer-centered services on consumers' lives. As noted by Anthony (1991), "Recovery is what people with disabilities do. Treatment, case management, and rehabilitation are what helpers do to facilitate recovery." The ultimate goal of this and future research endeavors is not only to ensure that services are empowering and consumer-centered, but that they are also helping people recover from mental illness.

Future Directions

Due to the limitations of this study, more research is needed to further examine the validity of the measurement model and select measures of consumer-centered services. The next step is to expand the study to multiple case management programs in a large community mental health center located in a medium-sized city in Illinois. This will include testing the measurement model in four different ACT programs and four SCM programs. The ACT or ACT-type programs include a traditional ACT model that has been operating for nearly 10 years; a new forensic Continuous Treatment Team (an ACT model for individuals with a co-occurring substance use disorder involved in the criminal justice system) that was instituted in February 2002; another new ACT program, which will be modeled after Rapp's (1998) strengths model of case management and will be instituted in the summer of 2002; and an intensive case management program that was not modeled after the ACT program and has been operating for nearly 10 years.
The four SCM programs are in the early phases of adapting a consumer-centered, strengths-based model of service delivery, have been operating since the mid-1980s, currently employ a traditional, office-based, clinical model, and have large caseloads (50:1 average ratio of consumers to staff). Funding for this project is being pursued through NIMH's B/START new-researcher grant program in addition to funding from the CMHC and grant dollars remaining from the present study. The long-term goal of this research is to use the measurement model to evaluate the effectiveness of community-based services for individuals with mental illness. As noted throughout this document, there is limited empirical research on consumer-centered services. It is hoped that the measurement model can be linked to treatment outcomes in order to examine the impact of consumer-centered services.

Lessons Learned

This study was the culmination of my four-year relationship with the Ionia Community Mental Health Center. In addition to the findings reported in this study, I learned many valuable lessons about doing applied research with individuals who have a serious mental illness and the staff who treat them. For example, an essential component of this study was the unwavering cooperation of the community mental health agency and, specifically, the staff of the two case management programs. There was a high and constant degree of turmoil in both case management programs throughout the duration of the study. Beginning in the summer of 2001, the standard case management program experienced staff turnover and the transferring of some of their clients to the newly established ACT program. The ACT program was new and had problems, such as the early and unexpected departure of the psychiatric nurse after only three months.
In addition, the agency experienced several profound changes during the study, including the appointment of a new CEO in July 2001, an ongoing merger with two other public community mental health agencies in Michigan, and numerous changes in state regulations that included the need to implement an ACT program. The constant flow of agency-level changes fostered an atmosphere of stress that permeated all programs. Despite the persistent feelings of stress associated with all these changes, staff in both programs remained invested in the project and continued to commit some of their limited time to recruiting consumers on their caseloads and completing case manager survey protocols. I attribute some of this commitment to my relationship with the community mental health center. I was extremely familiar with both case management supervisors and most of the staff. The importance of these long-term relationships cannot be overstated, as they contributed substantially to the ongoing support of staff. Without this support, the project would have failed before being initiated. In addition, my familiarity with the agency also facilitated the high recruitment rates of consumers for both the pilot study and the full study. Many of the consumers who participated in both the pilot and full study knew me prior to the interviews, which contributed to the nearly 80% response rate. Another lesson learned through observing the evolution of case management services in Ionia over the past four years is that providing consumer-centered services and traditional mental health services simultaneously is a complex and often conflicting task. Consumer-centered services and the term consumerism imply equality of power and choice. In contrast to the term, individuals who receive mental health services in the public system usually do not have a choice of services and do not share power with their providers (Chamberlin, 1990).
Numerous advocates of the ex-patient/survivor movement disapprove of the term consumer because it is misleading and minimizes the lack of control individuals with serious mental illnesses have had over their own lives and within the public mental health system (Chamberlin, 1990; McLean, 1995). Individuals who receive services in the public mental health system are rarely given the option to choose their service providers, psychiatrists, programs, or case managers. In addition, forced treatment, such as mandatory hospitalization, case management, and pharmacotherapy, is still used in certain cases in all 50 states, including Michigan. Furthermore, public service providers are frequently the payee for individuals' disability benefits and, in some states, such as Michigan, are the gatekeepers (e.g., HMO gatekeeper) for primary healthcare services. These components of the traditional mental health system conflict with the concepts of consumerism and self-determination. In application, the case management programs examined in this study delivered consumer-centered services within the structure of a traditional mental health system. Consumers enrolled in both case management programs were given limited options for treatment or other services, and some individuals were court ordered to receive mental health services. Moreover, many of the consumers who participated in the study were living in Adult Foster Care (AFC) homes because they either could not afford better and more independent housing or they had been placed there years ago by the CMHC and found it difficult to leave. Although MDCH is pushing community mental health providers to help consumers relocate out of AFC homes, non-licensed boarding homes, and nursing homes, there are limited options available, besides homelessness, for individuals who have few resources (e.g., income or family) and are often discriminated against by landlords.
Similarly, there are few employment options, especially in rural communities such as Ionia County, for individuals with serious mental illnesses. Case managers are also constrained by the structure of the system and can only provide a limited array of services to consumers. The ability of case managers to deliver consumer-centered services is constrained by the size of their caseloads, the hours they work, and billing requirements for service provision established by state guidelines and Medicaid (e.g., allowable billable activities of service provision). All of these factors undermine the application of a true consumer-centered model and, therefore, should be considered when evaluating the impact and effectiveness of this model in application. It may be that a fully operational, consumer-centered program cannot exist within the present mental health system, at least not until changes have been made at the system level. Nevertheless, the measurement model and measures employed in this study can be used to assess the degree to which services are evolving towards a consumer-centered, strengths-based program. Despite the persistent and pervasive problems associated with public mental health treatment, service providers can adopt or adapt aspects of the consumer-centered model to existing services. Moreover, the measurement model developed in this project can help administrators to construct new services or improve existing services, which in turn can lead to more effective implementation and adherence to the principles of consumerism and the strengths-based paradigm. The public mental health system in Michigan is slowly evolving towards consumerism; it is hoped that the measurement model developed in this study can help facilitate that process.

APPENDIX A

Strengths Scale (Case Manager Version)

The following list of statements reflects personal strengths or personal assets that the consumer may possess.
Please read each statement and rate the degree to which you believe this statement is reflective of the consumer. In other words, rate how much you agree with each statement as an accurate description of the consumer. For example, if you strongly agree that the consumer makes friends easily then circle a 4 (strongly agree) for that statement. In contrast, if you feel that the consumer does not at all possess this attribute or strength, then you would circle a 1 (strongly disagree). If you are not sure about a particular attribute or strength then select the DK/N A category (Don't Know, Not Applicable). Strengths and Assets Strongly Agree Disagree Strongly DK/NA Agree Disagree * l. The consumer makes fiiends easily 4 3 2 l 9 2. The consumer is sociable 4 3 2 l 9 3. The consumer works well with other 4 3 2 1 9 people 4. The consumer is responsible and 4 3 2 1 9 dependable 5. The consumer has many goals in life that 4 3 2 1 9 he or she would like to achieve 6. The consumers has a good sense of 4 3 2 1 9 humor 7. The consumer is resourceful 4 3 2 1 9 8. The consumer is assertive 4 3 2 l 9 9. The consumer believes in him or herself 4 3 2 l 9 10. The consumer is an independent person 4 3 2 1 9 l 1. The consumer has the ability to 4 3 2 l 9 rebound from any crisis 12. The consumer is emotionally strong 4 3 2 l 9 13. The consumer is physically strong 4 3 2 l 9 14. The consumer is healthy physically 4 3 2 l 9 15. The consumer is creative 4 3 2 1 9 16. The consumer can accomplish his or 4 3 2 1 9 her goals if given a chance 17. The consumer has acquired many work 4 3 2 1 9 related skills 18. The consumer is a dedicated employee 4 3 2 l 9 when he or she is on the job 19. The consumer is a quick learner 4 3 2 l 9 150 Strengths and Assets Strongly Agree Disagree Strongly DK/NA Agree Drsagr: * 20. The consumer has a good support 4 3 2 1 9 system 21. The consumer's family is supportive 4 3 2 9 22. The consumer has supportive fiiends 4 3 2 9 that she or he can rely on 23. 
The consumer has financial stability 4 3 2 9 24. The consumer has had competitive job 4 3 2 9 experiences in the past that will help him/her in the future 25. The consumer demonstrates good 4 3 2 1 9 ADL/life skills 26. The consumer has good leisure skills 4 3 2 9 27. The consumer is spiritually strong 4 3 2 9 28. The consumer does well in school and 4 3 2 9 other academic settings 29. The consumer is easy to work with 4 3 2 1 9 30. The consumer is undemanding and easy 4 3 2 1 9 gomg 31. The consumer is easy to contact or to 4 3 2 1 9 meet to with 32. The consumer is motivated in treatment 4 3 2 l 9 33. The consumer is likable and enjoyable 4 3 2 1 9 to be around 151 Strengths Scale (consumer version) The following list of statements reflect personal strengths or personal assets that you may posses. Please read each statement and rate the degree to which you believe this statement is reflective of you. In other words, rate how much you agree with each statement as an accurate description of yourself. For example, if you strongly agree that you make fi'iends easily then circle a 4 (strongly agree) for that statement. In contrast, if you feel that you do not at all possess this attribute or strength, then you would circle a 1 (strongly disagree). If you are not sure about a particular attribute or strength then select the DK/NA category (Don't Know, Not Applicable). Please read and answer all statements. Strengths and Assets Strongly Agree Disagree Strongly DK/NA Agree Disagree * 1. I make fiiends easily 4 3 2 l 9 2. I am sociable 4 3 2 1 9 3. I work well with other people 4 3 2 1 9 4. I am responsible and dependable 4 3 2 1 9 5. I have many goals in life that I would 4 3 2 l 9 like to achieve 6. I have a good sense of humor 4 3 2 1 9 7. I am resourceful 4 3 2 1 9 8. I am assertive 4 3 2 l 9 9. I believe in my myself to succeed 4 3 2 l 9 10. I am an independent person 4 3 2 1 9 11. I have the ability to rebound from any 4 3 2 1 9 crrsrs 12. 
I am emotionally strong 4 3 2 1 9 13. I am physically strong 4 3 2 1 9 14. I am healthy physically 4 3 2 1 9 15. I am creative 4 3 2 l 9 16. I can accomplish my goals if given a 4 3 2 1 9 chance 17. I have acquired many work related 4 3 2 1 9 skills 18. I am a dedicated employee when I am 4 3 2 l 9 on the job 19. I am a quick learner 4 3 2 l 9 152 Strengths and Assets Strongly Agree Disagree S®ngly 910'“ A ee Disagree "‘ 20. I have a good support system 4 3 2 9 21. My family is supportive of me 4 3 2 9 22. I have supportive friends that I can rely 4 3 2 9 on 23. I have financial stability 4 3 2 9 24. I have had competitive job experiences 4 3 2 9 in the past that will help me in the future 25. I have good ADL/life skills 4 3 2 1 9 26. I have good leisure skills 4 3 2 l 9 27. I am spiritually strong 4 3 2 1 9 28. I do well in school and other academic 4 3 2 1 9 settings 153 APPENDIX B Consumer Needs Assessment The following list of items represent categories of need or areas in consumers' lives that need to be addressed. Please read and rank each item in terms of need for the consumer. Please place a rank in the box next to the category. For example, if transportation is currently a serious issue or area of need for the consumer then you would probably rank this item as a high need (place a 4 in the box). If an item is not at all important or not related to the consmner then rank that item as no need (place a 1 in the box). Please be sure to rank all items, even items that are not related to the consumer. High Need ............. 4 Moderate Need . . .. 3 Low Need ............... 2 No Need ................ 1 Don't know ............. 0 Area of Need or Help Area of Need or Help 1. Acquiring Independent Housing 18. Leisure activities 2. Education (e.g., going to school) 19. Financial support 3. Vocational Training 20. Child care (e.g., respite care) 4. Finding a good job 21. Counseling/Therapy (individual and/or family) 5. Spirituality 22. 
Medication (e.g., acquiring expensive medication)
6. Legal assistance (e.g., getting a lawyer) | 23. Medication management
7. Medical care and treatment | 24. Reduce or eliminate hospitalizations
8. Physical health | 25. Reduce or eliminate alcohol or drug abuse
9. Dental care | 26. Access to entitlements (e.g., SSDI or Medicaid)
10. Social support | 27. Money management
11. Daily living skills | 28. Transportation
12. Daily living items (e.g., food, clothing) | 29. Becoming more independent
13. Personal safety | 30. Social Functioning
14. Personal hygiene | 31. Control or reduce psychiatric symptoms
15. Relationships (e.g., finding a partner/spouse) | 32. Parenting classes
16. Developing better insight into his/her illness | 33. Symptom relief from mental illness
17. Companionship (e.g., finding friends) | 34. Housing (e.g., moving to a better apartment)

Personal Needs Assessment

The following list of items represents categories of need or areas in your life that you want to address or need help in. Please read and rank each item in terms of need in your life. Please place a rank in the box next to the category. For example, if transportation is currently a serious issue or area of need in your life, then you would probably rank this item as a high need (place a 4 in the box). If an item is not at all important or not related to you, then rank that item as no need (place a 1 in the box). Please be sure to rank all items, even items that are not related to you or your life.

High Need ............. 4
Moderate Need ......... 3
Low Need .............. 2
No Need ............... 1

Area of Need or Help | Area of Need or Help
1. Acquiring Independent Housing | 18. Leisure activities
2. Education (e.g., going to school) | 19. Financial support
3. Vocational Training | 20. Child care (e.g., respite care)
4. Finding a good job | 21. Counseling/Therapy (individual and/or family)
5. Spirituality | 22. Medication (e.g., acquiring expensive medication)
6. Legal assistance (e.g., getting a lawyer) | 23. Medication management
7. Medical care and treatment | 24. Reduce or eliminate hospitalizations
8. Physical health | 25. Reduce or eliminate alcohol or drug abuse
9. Dental care | 26. Access to entitlements (e.g., SSDI or Medicaid)
10. Social support | 27. Money management
11. Daily living skills | 28. Transportation
12. Daily living items (e.g., food, clothing) | 29. Becoming more independent
13. Personal safety | 30. Social Functioning
14. Personal hygiene | 31. Control or reduce psychiatric symptoms
15. Relationships (e.g., finding a partner/spouse) | 32. Parenting classes
16. Developing better insight into your illness | 33. Symptom relief from mental illness
17. Companionship (e.g., finding friends) | 34. Housing (e.g., moving to a better apartment)

APPENDIX C

Consumer Survey of Mental Health Services

Please indicate your agreement with each of the following statements by circling the number that best represents your opinion. If the question is about something you have not experienced, circle the number 9 to indicate that this item is not applicable (NA) to you or you don't know.

Treatment Planning and Goal Development
1. My case manager is helping me achieve my goals in life.    1 2 3 4 5 9
2. My case manager is very aware of my needs and goals in life.    1 2 3 4 5 9
3. My case manager understands my needs and concerns.    1 2 3 4 5 9
4. I was able to choose when and where my treatment-planning meeting occurred.    1 2 3 4 5 9
5. I, not my case manager, selected my treatment goals.    1 2 3 4 5 9
6. I was able to choose who attended my treatment-planning meeting.    1 2 3 4 5 9
7. I was encouraged to bring family, friends, and other community members into my treatment planning process.    1 2 3 4 5 9

Service Provision
8. My case manager provides me with a lot of encouragement and support to achieve my goals.    1 2 3 4 5 9
9. My case manager helps me to appreciate my strengths and capacities.    1 2 3 4 5 9
10. I have control over when and where my case manager meets with me.    1 2 3 4 5 9
11. Most of my meetings with my case manager are away from the community mental health agency.    1 2 3 4 5 9
12. My case manager is available when and where I need him or her.    1 2 3 4 5 9

Promoting Independence
13. My case manager has helped me to become more independent in living.    1 2 3 4 5 9
14. My case manager has helped me to find reliable transportation.    1 2 3 4 5 9
15. My case manager has helped me to find a job.    1 2 3 4 5 9
16. My case manager has helped me to become more independent financially.    1 2 3 4 5 9
17. My case manager has helped me to become more independent of mental health services.    1 2 3 4 5 9

Consumer Survey of Mental Health Services: Outcomes

Please indicate your agreement with each of the following statements by circling the number that best represents your opinion. If the question is about something you have not experienced, circle the number 9 to indicate that this item is not applicable (NA) to you or you don't know.

Outcomes Resulting from Case Management Services
19. As a result of case management services, I am better able to control my life.    1 2 3 4 5 9
20. As a result of mental health services, I have not experienced any side effects from the medication that I am taking.    1 2 3 4 5 9
21. As a result of case management services, I am better able to deal with crisis.    1 2 3 4 5 9
22. As a result of case management services, I am getting along better with my family.    1 2 3 4 5 9
23. As a result of case management services, I do better in social situations.    1 2 3 4 5 9
24. As a result of case management services, I do better in work.    1 2 3 4 5 9
25. As a result of case management services, I am doing better with my leisure time.    1 2 3 4 5 9
26. As a result of case management services, my housing situation has improved.    1 2 3 4 5 9
27. As a result of case management services, my symptoms are not bothering me as much.    1 2 3 4 5 9
28. The medication I have been prescribed has been helpful in controlling my symptoms.    1 2 3 4 5 9
29. As a result of case management services, I am more aware of my strengths and personal assets.    1 2 3 4 5 9

APPENDIX D

Relationship with Your Case Manager

This next set of questions is designed to assess your opinions of your case manager. For each item, please select if you strongly disagree, disagree, agree, or strongly agree.

1. My case manager turns away from me when I talk to her/him    1 2 3 4 8
2. My case manager does not understand me    1 2 3 4 8
3. It is hard to get my case manager to listen to me    1 2 3 4 8
4. My case manager encourages my independent thinking    1 2 3 4 8
5. My case manager believes what I say    1 2 3 4 8
6. My case manager asks about and respects my religion or spirituality    1 2 3 4 8
8. My case manager understands my feelings of anger and helps me deal with them    1 2 3 4 8
9. I feel free to complain to my case manager    1 2 3 4 8
10. My case manager is not helpful to me if I disagree with him or her    1 2 3 4 8
11. My case manager recognizes my abilities    1 2 3 4 8
12. My case manager does not respect me and other consumers as people    1 2 3 4 8
13. My case manager gives me hope about my future    1 2 3 4 8
14. My case manager is afraid of me    1 2 3 4 8
15. My case manager gives me confidence to make my own decisions    1 2 3 4 8
16. I do not trust my case manager to keep what I say confidential    1 2 3 4 8
17. My case manager compliments me when I do something well    1 2 3 4 8
18. My case manager walks into my apartment/room/home without being invited    1 2 3 4 8
19. My case manager has a clear idea of what my goals are    1 2 3 4 8
20. I feel sure that my case manager is able to help me    1 2 3 4 8
21. My relationship with my case manager is very important to me    1 2 3 4 8

(Ruth Ralph, 2002; COSP study)

APPENDIX E

CONSUMER EMPOWERMENT SCALE (CES)

Please read and answer the following 28 statements relating to your perspective on life and having to make decisions.
For each of the following statements, please indicate whether you strongly agree (1), agree (2), disagree (3), or strongly disagree (4). Indicate how you feel now. First impressions are usually best. Do not spend a lot of time on any one question. Please be honest with yourself so that your answers reflect your true feelings.

Strongly Agree (1)  Agree (2)  Disagree (3)  Strongly Disagree (4)

1. I can pretty much determine what will happen in my life.    1 2 3 4
2. People are only limited by what they think is possible.    1 2 3 4
3. People have more power if they join together as a group.    1 2 3 4
4. Getting angry about something never helps.    1 2 3 4
5. I have a positive attitude toward myself. *    1 2 3 4
6. I am usually confident about the decisions I make. *    1 2 3 4
7. People have no right to get angry just because they don't like something.    1 2 3 4
8. Most of the misfortunes in my life were due to bad luck.    1 2 3 4
9. I see myself as a capable person. *    1 2 3 4
10. Making waves never gets you anywhere.    1 2 3 4
11. People working together can have an effect on their community.    1 2 3 4
12. I am often able to overcome barriers. *    1 2 3 4
13. I am generally optimistic about the future.    1 2 3 4
14. When I make plans, I am almost certain to make them work. *    1 2 3 4
15. Getting angry about something is often the first step toward changing it.    1 2 3 4
16. Usually I feel alone.    1 2 3 4
17. Experts are in the best position to decide what people should do or learn.    1 2 3 4
18. I am able to do things as well as most other people. *    1 2 3 4
19. I generally accomplish what I set out to do. *    1 2 3 4
20. People should try to live their lives the way they want to.    1 2 3 4
21. You can't fight city hall.    1 2 3 4
22. I feel powerless most of the time.    1 2 3 4
23. When I am unsure about something, I usually go along with the rest of the group.    1 2 3 4
24. I feel I am a person of worth, at least on an equal basis with others. *    1 2 3 4
25. People have the right to make their own decisions, even if they are bad ones.    1 2 3 4
26. I feel I have a number of good qualities. *    1 2 3 4
27. Very often a problem can be solved by taking action.    1 2 3 4
28. Working with others in my community can help to change things for the better.    1 2 3 4

* Items that were included in the self-esteem/self-efficacy subscale (Rogers, Chamberlin, Ellison, & Crean, 1997).

APPENDIX F

Quality of Life Scale (Sullivan and Bybee, 1999)

This next set of questions is designed to assess how you feel about various parts of your life. After you read each question, please write down the number that corresponds to the most appropriate feeling toward each question. For example, if you feel "PLEASED" about the first question, then you would write down the number 2 next to the question. The list of feelings is provided below (only use items from this list to answer the questions). If you feel that a question doesn't apply to you, just select the not applicable category (select the number 8).

EXTREMELY PLEASED .................................... 1
PLEASED .............................................. 2
MOSTLY SATISFIED ..................................... 3
MIXED (EQUALLY SATISFIED & DISSATISFIED) ............. 4
MOSTLY DISSATISFIED .................................. 5
UNHAPPY .............................................. 6
TERRIBLE ............................................. 7
(Not applicable) ..................................... 8

1. First, a very general question. How do you feel about your life overall? _____
2. In general, how do you feel about yourself? _____
3. How do you feel about your personal safety? _____
4. How do you feel about the amount of fun and enjoyment you have? _____
5. How do you feel about the responsibilities you have for members of your family?
_____
6. How do you feel about what you are accomplishing in your life? _____
7. How do you feel about your independence or freedom, that is, how free you feel to live the kind of life you want? _____
8. How do you feel about your emotional and psychological well-being? _____
9. How do you feel about the way you spend your spare time? _____

APPENDIX G

Demographics Survey

Please answer the following questions to the best of your ability. It is important that you answer all the questions. Please feel free to ask about any of the questions for clarification.

What is your age? (please provide a number in years)

What is your sex?
a. Female
b. Male

What is your marital status?
a. Never married
b. Married
c. Separated/divorced/widowed

What is the highest level of education that you have achieved?
a. Ph.D./M.D. or other advanced degree
b. Masters/MSW or other similar degree
c. B.A. or B.S.
d. Associates degree
e. Some college
f. High school diploma or GED
g. Some high school

How long have you been working with your current case manager (support coordinator)?

How often do you meet with your case manager?

How much time do you spend with your case manager each week?

The next set of questions will focus on your living situation.

1) How long have you been living at your current residence?
2) How many times have you moved in the past year?
3) How would you describe your current living situation? (please check only one box)
a. You own your own home or jointly own
b. You reside with a family member(s) who owns the home you reside in
d. You are currently renting
e. You live with a significant other or friend who is currently renting
f. You live in an Adult Foster Care (AFC) home or similar residence
i. You live in a nursing home or similar residence
j. None of the above apply to me

The next set of questions is designed to assess your work experience. Please check all categories that apply to you.
a. Currently employed full time
b. Currently employed part time
c. Currently volunteering
d. Currently involved in vocational training
e. Currently involved in education to enhance my job skills
f. Currently unemployed but seeking a job
g. Currently unemployed and not seeking a job
h. Currently involved in participating on a committee, community action organization, state-level committees, church organizations, and/or board of directors.

APPENDIX H

ID#

Participant Consent Form and Study Description

Michigan State University is currently conducting a project to examine the effects of support coordination (case management) services on people who receive these services.

1. You have been asked to participate in this study because you are an adult who is currently receiving support coordination (case management) services from Ionia County Community Mental Health. This study was developed by David Loveland at Michigan State University and supervised by Cris Sullivan, Ph.D., associate professor of psychology at Michigan State University. The goal of this research is to evaluate the services that you receive from Ionia County Community Mental Health. Our intentions are to evaluate the services you received or are receiving. In doing so, we hope to learn what is going well and what needs improvement in the future in order to provide better services. We want to find out how effective support coordination services are at meeting your needs in the community. To accomplish this task, the researchers of this project need to do two things.
The first is to ask you some questions about your satisfaction with the services you have received from Ionia County Community Mental Health, the effects these services have had on your life, how you are currently feeling, and what resources you have acquired in the community as a result of case management services. The second task is to access your service and clinical data from Ionia County Community Mental Health. The reason we need data from the agency is to find out if there is a relationship between the types and intensity of services that you have received and your quality of life. The data we want to access from the agency are the amount and types of services you have received, how long you have been receiving these services, and the cost of these services. Although we are accessing data from the agency, we are not going to share the information we collect from you in these surveys with any staff members of the agency. Your confidentiality will be protected to the maximum extent allowable by law (your information will not be released to anyone except David Loveland). After we have collected the data, we intend to share the research findings with all research participants and anyone else in the community who is interested in what we have found. The data will be presented in grouped summaries; therefore, no identifying information will be released. We will also use this information in the future to improve the quality of services provided by Ionia County Community Mental Health. If you would like to know more about this research project, you can call the numbers provided to you on this consent form and on the flyer sent to you. If you have any questions or concerns regarding your rights as a research participant, you can contact David E. Wright, University Committee on Research Involving Human Subjects (UCRIHS), at (517) 355-2180. If you are unable to make calls where you live, you can contact your case manager, who will provide you with access to a telephone to make the call.

2. Your participation in this study will consist of answering a series of surveys in person (face to face). You will be asked questions regarding your access to community resources, quality of life, your sense of control over your life and the environment around you, your level of satisfaction with the agency's services (current or past services), and questions regarding how you are currently feeling. This interview will take approximately 90 minutes and will be conducted at a location most convenient to you. In addition, your agreement to participate in the study means that it is okay with you for David Loveland to review your current and past clinical records and service records from Ionia County Community Mental Health.

3. Upon completion of the surveys you will be paid $30.00 for your time. It is important to stress that payment will only be made after all the surveys have been completed.

4. Your confidentiality will be protected to the maximum extent allowable by law. Only David Loveland will have access to the data provided in these surveys.

5. Nothing that you say will be attributed to you directly in any research report. Your participation will remain confidential in any report of research findings. Your participation in this research will have no effect on your current or future services with Ionia County Community Mental Health.

6. Any questions about this study may be asked at any time by contacting:

David Loveland, M.A., Research Coordinator, Michigan State University, (517) 887-1964

Cris Sullivan, Ph.D., Associate Professor of Psychology, 129 Psychology Research Bldg., Michigan State University, East Lansing, MI 48824-1117, (517) 353-5015

After you have read the six items above and you would like to participate in this study, please sign and date the consent form on the line below. By signing this consent form, you are agreeing to participate in this study. If you sign this form, you may still discontinue your participation at any time before, during, or after the surveys have been administered.

Name ____________________ Date ____________

ID#

Participant Consent Form and Study Description

Michigan State University is currently conducting a project to examine the effects of support coordination (case management) services on people who receive these services.

1. You have been asked to participate in this study because consumers of support coordination services who are assigned to your caseload have been selected for voluntary participation in a program evaluation study. This study was developed by David Loveland at Michigan State University and supervised by Cris Sullivan, Ph.D., associate professor of psychology at Michigan State University. The goal of this research is to evaluate the services that consumers of support coordination receive from Ionia County Community Mental Health. Our intentions are to evaluate the extent to which case management services in Michigan follow the consumer-centered, strengths-based model of service delivery mandated by the Michigan Department of Community Health (MDCH). Your participation in this study is needed to help us acquire a clinician's perspective on how well consumers assigned to your caseload are doing. If you would like to know more about this research project, you can call David Loveland, the research coordinator of this project, at (517) 887-1964. If you have any questions or concerns regarding your rights as a research participant, you can contact David E. Wright, University Committee on Research Involving Human Subjects (UCRIHS), at (517) 355-2180.

2. Your participation in this study will consist of recruiting and scheduling eligible consumers assigned to your caseload for the study.
After the research coordinator has interviewed consumers, you will be asked to fill out a Multnomah Community Ability Scale (MCAS), a Consumers' Strengths and Assets Scale, a Consumer Needs Assessment Scale, and a list of treatment goals for each consumer assigned to your caseload who is involved in this study. The number of clients selected will be based on the number of clients currently assigned to your caseload who are willing to participate voluntarily and who have been receiving support coordination services for at least four continuous months prior to their interview. The survey protocol will take approximately 30 minutes to complete for each consumer involved in the study. Your total time will depend on how many survey protocols you are requested to fill out. Your participation in this study is completely voluntary. Your choosing to participate or not to participate will have no effect on your job.

3. Upon completion of each survey protocol you will receive $20.00 for your time. Thus, if you complete six survey protocols you will receive $120.00.

4. Your confidentiality will be protected to the maximum extent allowable by law. Only David Loveland will have access to the data collected from the survey protocol.

5. Nothing that you say will be attributed to you directly in any research report. Your participation will remain confidential in any report of research findings. Your participation in this research will have no effect on your job with Ionia County Community Mental Health.

6. Any questions about this study may be asked at any time by contacting:

David Loveland, M.A., Research Coordinator, Michigan State University, (517) 887-1964

Cris Sullivan, Ph.D., Associate Professor of Psychology, 129 Psychology Research Bldg., Michigan State University, East Lansing, MI 48824-1117, (517) 353-5015

After you have read the six items above and you would like to participate in this study, please sign and date the consent form on the line below. By signing this consent form, you are agreeing to participate in this study. If you sign this form, you may still discontinue your participation at any time before, during, or after the survey protocol has been administered.

Name ____________________ Date ____________

REFERENCES

American Psychiatric Association (APA) (1994). Diagnostic and Statistical Manual of Mental Disorders: Fourth Edition. Washington, DC: American Psychiatric Association.

Andrews, F.M. & Withey, S.B. (1976). Social Indicators of Well-Being: Americans' Perceptions of Life Quality. New York: Plenum Press.

Anthony, W.A. (1991). Recovery from mental illness: The new vision of services researchers. Innovations and Research, 1, 13-14.

Anthony, W.A. (1993). Recovery from mental illness: The guiding vision of the mental health service system in the 1990s. Psychosocial Rehabilitation Journal, 16, 11-23.

Anthony, W.A. & Blanch, A. (1989). Research on community support services: What have we learned? Psychosocial Rehabilitation Journal, 12, 55-81.

Bachrach, L.L. (1993). Continuity of care: A context for case management. In Harris, M. & Bergman, H.D. (Eds.), Case management for mentally ill patients: Theory and practice. New York: Harwood Academic Publishers.

Baker, S., Barron, N., McFarland, B.H., & Bigelow, D.A. (1994). A community ability scale for chronically mentally ill consumers: Part I. Reliability and validity. Community Mental Health Journal, 30, 363-382.

Baker, S., Barron, N., McFarland, B.H., Bigelow, D.A., & Carnahan, T. (1994). A community ability scale for chronically mentally ill consumers: Part II. Applications. Community Mental Health Journal, 30, 459-472.

Baker, F. & Intagliata, J. (1982). Quality of life in the evaluation of community support systems. Evaluation and Program Planning, 5, 69-79.

Bedell, J.R., Cohen, N.L., & Sullivan, A. (2000). Case management: The current best practices and the next generation of innovation. Community Mental Health Journal, 36, 179-194.

Bickman, L. (1985). Improving established statewide programs: A component theory of evaluation. Evaluation Review, 9, 189-208.

Bickman, L. (1996). A continuum of care: More is not always better. American Psychologist, 51, 689-701.

Bickman, L. (1996). The evaluation of a children's mental health managed care demonstration. The Journal of Mental Health Administration, 23, 7-15.

Bickman, L., Summerfelt, T., & Bryant, D. (1996). The quality of services in a children's mental health managed care demonstration. The Journal of Mental Health Administration, 23, 30-39.

Bigelow, D.A., Brodsky, G., Stewart, L., & Olson, M. (1982). The concept and measurement of quality of life as a dependent variable in evaluation of mental health services. In Stahler, G.J. & Tash, W.R. (Eds.), Innovative Approaches to Mental Health Evaluation (pp. 345-366). New York: Academic Press.

Bigelow, D.A., McFarland, B.H., Gareau, M.J., & Young, D.J. (1991). Implementation and effectiveness of a bed reduction project. Community Mental Health Journal, 27, 125-133.

Bond, G.R., Witheridge, T.F., Dincin, J., Wasmer, D., Webb, J., & Graaf-Kaser, R. (1990). Assertive community treatment for frequent users of psychiatric hospitals in a large city: A controlled study. American Journal of Community Psychology, 18, 865-891.

Bond, G.R., Miller, L.D., Krumwied, R.D., & Ward, R.S. (1988). Assertive case management in three CMHCs: A controlled study. Hospital and Community Psychiatry, 39, 411-417.

Bond, G.R., Witheridge, T.F., Wasmer, D., Dincin, J., McRae, S.A., Mayes, J., & Ward, R.S. (1989).
A comparison of two crisis housing alternatives to psychiatric hospitalization. Hospital and Community Psychiatry, 40, 177-183.

Bond, G.R., McDonel, E.C., Miller, L.D., & Pensec, M. (1991). Assertive community treatment and reference groups: An evaluation of their effectiveness for young adults with serious mental illness and substance abuse problems. Psychosocial Rehabilitation Journal, 15, 31-43.

Bond, G.R., Pensec, M., Dietzen, L., McCafferty, D., Giemza, R., & Sipple, H.W. (1991). Intensive case management for frequent users of psychiatric hospitals in a large city: A comparison of team and individual caseloads. Psychosocial Rehabilitation Journal, 15, 90-98.

Borland, A., McRae, J., & Lycan, C. (1989). Outcomes of five years of continuous intensive case management. Hospital and Community Psychiatry, 40, 369-376.

Brekke, J. (1987). The model-guided method for monitoring program implementation. Evaluation Review, 11, 281-299.

Brekke, J. & Test, M. (1992). A model for measuring the implementation of community support programs: Results from three sites. Community Mental Health Journal, 28, 227-247.

Burns, B.J. & Santos, A.B. (1995). Assertive community treatment: An update of randomized trials. Psychiatric Services, 46, 669-675.

Campbell, D. & Stanley, J. (1966). Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin Company.

Campbell, J. (1996). Toward collaborative mental health outcomes systems. New Directions for Mental Health Services, 71, 69-78.

Campbell, J. (1997). How consumers/survivors are evaluating the quality of psychiatric care. Evaluation Review, 21, 357-363.

Campbell, J. (1998a). Consumerism, outcomes, and satisfaction: A review of the literature. In Manderscheid, R. & Henderson, M. (Eds.), Center for Mental Health Services. Mental Health, United States, 1998 (pp. 11-28). DHHS Pub. No. (SMA) 99-3285. Washington, DC: Supt. of Docs., U.S. Government Printing Office.

Campbell, J. (1998b). The technical assistance needs of consumer/survivor and family stakeholder groups within state mental health agencies. St. Louis, MO: Missouri Institute of Mental Health.

Center for Mental Health Services (CMHS) (1994). Making a difference: Interim status report of the McKinney Research Demonstration Program for Homeless Mentally Ill Adults. Rockville, MD: Center for Mental Health Services.

Center for Mental Health Services (CMHS) (1995). Evaluating quality of life for persons with severe mental illness. Rockville, MD: Center for Mental Health Services Research.

Center for Mental Health Services (CMHS) (1996). The MHSIP consumer-oriented mental health report card: The final report of the Mental Health Statistics Improvement Program (MHSIP) task force on a consumer-oriented mental health report card. Rockville, MD: SAMHSA.

Chamberlin, J. (1978). On our own: Patient-controlled alternatives to the mental health system. New York: McGraw-Hill Book Company.

Chamberlin, J. (1990). The ex-patients' movement: Where we've been and where we're going. The Journal of Mind and Behavior, 11, 323-336.

Chamberlin, J. & Rogers, J. (1990). Planning a community-based mental health system. American Psychologist, 45, 1241-1244.

Chen, H. (1990). Theory-Driven Evaluations. Newbury Park, CA: Sage Publications.

Comtois, G., Morin, C., Lesage, A., Lalonde, P., Likavcanova, E., & L'Ecuyer, G. (1998). Patients versus rehabilitation practitioners: A comparison of assessments of needs for care. The Canadian Journal of Psychiatry, 43, 159-165.

Consumer-Operated Services Program (COSP) (2000). Home page. http://www.cstprogram.org/cosp/index.html.

Cook, J.A. & Jonikas, J.A. (1996). Outcomes of psychiatric rehabilitation service delivery. In Steinwachs, D.M. & Flynn, L.M. (Eds.), Using client outcomes information to improve mental health and substance abuse treatment. New Directions for Mental Health Services, No. 71 (pp. 33-47). San Francisco, CA: Jossey-Bass.
Conigan, P.W., Faber, D., Rashid, F., & Leary, M. (1999). The construct validity of empowerment among consumers of mental health services. Schizophrenia Research. 38, 77-84. Coursey, R.D., Farrell, E.W., & Zahniser, J .H. (1991). Consumers’ attitudes toward psychotherapy, hospitalization, and aftercare. Health and Social Work, 16, 155- 161. Crane-Ross, D., Roth, D., & Lauber, B.G. (2000). Consumers’ and case managers’ perceptions of mental health and community support service needs. Communig Mental Health Journal, 36, 161-178. Curtis, J .L., Millrnan, E.J., Struening, E., & D'Ercole, A. (1992). Effect of case management on rehospitalization and utilization of ambulatory care services. Hospital and Community Psychiatg, fl, 895-899. Coursey, R., Farrell, E., & Zahniser, J. (1991). Consumers' attitudes toward psychotherapy, hospialization, and aftercare. Health and Social Work, M, 155-161. 172 Cutler, D.L. (1992). A historical overview of community mental health centers in the United States. In Cooper, S. & Lentrrer, T.H. (Eds.). Innovations in Commqu Mental Health. Sarasota, FL: Professional Resource Press, (pp. 1-22). Deci, A., Santos, A., Hiott, W., Schoenwald, S., & Dias, J. (1995). Dissemination of assertive community treatment programs. Psychiatric Services, 46, 676-678 Department of Health and Human Services (DHHS) (1999). Mental health: A report of the Surgeon General. Rockville, MD: US. Department of Health and Human Services. Dimsdale, J ., Klerrnan, G., & Shershow, C. (1979). Conflict in treatment goals between patients and staff. Social Psychiafl, 151, 1-4. Drake, R. E. & Burns, B. J. (1995). Special section on assertive community treatment: An introduction. Paychiatric Servicea, fl, 667-668. Drake, R.E., Essock, S.M., Shaner, A., Carey, K.B., Minkoff, K., Kola, L., Lynde, D., Osher, F.C., Clark, R.E., & Rickards, L. (2001). Implementing dual diagnosis services for clients with severe mental illness. Psychiatric Services. 52. 469-476. 
Drake, R.E., Mercer-McFadden, C., Mueser, K.T., McHugo, G.J., & Bond, G.R. (1998). A review of integrated mental health and substance abuse treatment for patients with dual disorders. Schizophrenia Bulletin, 24, 589-608.
Eisen, S., Wilcox, M., Leff, H., Schaefer, E., & Culhane, M. (1999). Assessing behavioral health outcomes in outpatient programs: Reliability and validity of the BASIS-32. Journal of Behavioral Health Services Research, 26, 5-17.
Eisen, S.V., Wilcox, M., Schaefer, E., Culhane, M., & Leff, S. (1997). Use of BASIS-32 for outcome assessment of recipients of outpatient mental health services. Technical report prepared for the Evaluation Center at the Human Services Research Institute. Health Services Research Institute.
Elbeck, M. & Fecteau, M. (1990). Improving the validity of measures of patient satisfaction with psychiatric care and treatment. Hospital and Community Psychiatry, 41, 998-1001.
Ellison, M.L., Rogers, E.S., Sciarappa, K., Cohen, M., & Forbess, R. (1995). Characteristics of mental health case management: Results of a national survey. The Journal of Mental Health Administration, 22, 101-112.
Essock, S.M., Frisman, L.K., & Kontos, N.J. (1998). Cost-effectiveness of assertive community treatment teams. American Journal of Orthopsychiatry, 68, 179-190.
Essock, S. & Goldman, H. (1997). Outcomes and evaluation: System, program and clinician level measures. In Minkoff, K. & Pollack, D. (Eds.), Managed Mental Health Care in the Public Sector: A Survival Manual (pp. 295-308). Amsterdam, The Netherlands: Harwood Academic Publishers.
Fairweather, G.W. & Onaga-Fergus, E. (1993). Empowering the mentally ill. Austin, TX: Fairweather Publishing.
Felton, C.J., Stastny, P., Shern, D.L., Blanch, A., Donahue, S.A., Knight, E., & Brown, C. (1995). Consumers as peer specialists on intensive case management teams: Impact on client outcomes. Psychiatric Services, 46, 1037-1044.
Felix, R.H. (1967). Mental illness: Progress and prospects.
New York: Columbia University Press.
Fetterman, D. (2000). Foundations of Empowerment Evaluation. Newbury Park, CA: Sage.
Frese, F.J. (1998). Advocacy, recovery, and the challenges of consumerism for schizophrenia. Psychiatric Clinics of North America, 21, 233-249.
Ganju, V. (1999). Draft for review: The MHSIP Consumer Survey. www.mhsip.org/documents/MHSIPConsumerSurvey.pdf.
Goering, P.N., Wasylenki, D.A., Farkas, M., Lancee, W.J., & Ballantyne, R. (1988). What difference does case management make? Hospital and Community Psychiatry, 39, 272-276.
Goldman, H.H. & Morrissey, J.P. (1997). A conceptual framework for evaluating the intersystem impacts of managed behavioral health care: Report on a roundtable discussion. Rockville, MD: Substance Abuse and Mental Health Services Administration.
Grob, G.N. (1991). From Asylum to Community: Mental Health Policy in Modern America. Princeton: Princeton University Press.
Grob, G.N. (1994). The Mad Among Us: A History of the Care of America's Mentally Ill. New York: The Free Press.
Harris, M. & Bergman, H.C. (1987). Case management with the chronically mentally ill: A clinical perspective. American Journal of Orthopsychiatry, 57, 296-301.
Hasenfeld, Y. (1985). Community mental health centers as human service organizations. American Behavioral Scientist, 28, 655-668.
Heflinger, C. (1996). Implementing a system of care: Findings from the Fort Bragg Evaluation Project. Journal of Mental Health Administration, 23, 16-29.
Henderson, M., Minden, S., Foster, S., & Manderscheid, R. (1998). Service analysis for transition to health care reform. In Manderscheid, R. & Henderson, M. (Eds.), Center for Mental Health Services, Mental Health, United States, 1998. DHHS Pub. No. (SMA) 99-3285. Washington, DC: Supt. of Docs., U.S. Government Printing Office. (pp. 1-10).
Hodge, M. & Knisley, M. (1997). Emerging questions for case management in behavioral health managed care systems. In Giesler, L.J. (Ed.), Case Management for Behavioral Managed Care.
Cincinnati: National Association of Case Managers (NACM), (pp. 74-96).
Hodge, M. & Giesler, L. (1997). Case management practice guidelines for adults with severe and persistent mental illness. Ocean Ridge, FL: National Association of Case Managers (NACM).
Holloway, F. & Carson, J. (1999). Subjective quality of life, psychopathology, satisfaction with care and insight: An exploratory study. The International Journal of Social Psychiatry, 45, 259-267.
Holloway, F., Oliver, N., Collins, E., & Carson, J. (1995). Case management: A critical review of the outcome literature. European Psychiatry, 10, 113-128.
Intagliata, J. (1982). Improving the quality of community care for the chronically mentally disabled: The role of case management. Schizophrenia Bulletin, 8, 655-672.
International Association of Psychosocial Rehabilitation Services (IAPSRS) (1995). Measuring psychosocial rehabilitation outcomes. Human Services Research Institute (HSRI) Toolkit. http://www.hsri.org/cgi/hsri.cgi.
Jeger, A.M., & Slotnick, R.S. (1982). Community mental health and behavioral-ecology: A handbook of theory, research, and practice. New York: Plenum Press.
Johnsen, M., Samberg, L., Calsyn, R., Blasinsky, M., Landow, W., & Goldman, H. (1999). Case management models for persons who are homeless and mentally ill: The ACCESS demonstration project. Community Mental Health Journal, 35, 325-346.
Kanter, J. (1989). Clinical case management: Definition, principles, components. Hospital and Community Psychiatry, 40, 361-367.
Kaufmann, C.L. (1999). An introduction to the mental health consumer movement. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 493-507).
Kessler, R., McGonagle, K., Zhao, S., Nelson, C., Hughes, M., Eshleman, S., Wittchen, H., & Kendler, K. (1994). Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States. Archives of General Psychiatry,
51, 8-19.
Kiesler, C. & Sibulkin, A. (1987). Mental Hospitalization: Myths and Facts about a National Crisis. Newbury Park, CA: Sage Publications.
Kiesler, C. & Simpkins, C. (1994). The unnoticed majority in psychiatric inpatient care. New York: Plenum Press.
Kiesler, D. (2000). Beyond the disease model of mental disorders. Westport, CT: Praeger.
King, J.A., Morris, L.L., & Fitz-Gibbon, C.T. (1987). How to Assess Program Implementation. Newbury Park, CA: Sage.
Kisthardt, W. & Rapp, C.A. (1992). Bridging the gap between principles and practice: Implementing a strengths perspective. In Rose, S.M. (Ed.), Case Management & Social Work Practice. White Plains, NY: Longman, (pp. 112-125).
Kisthardt, W. (1993). An empowerment agenda for case management research: Evaluating the strengths model from the consumers' perspective. In Harris, M. & Bergman, H.C. (Eds.), Case management for mentally ill patients: Theory and practice. Harwood Academic Publishers, (pp. 165-181).
Kisthardt, W. (1997). The strengths model of case management: Principles and helping functions. In Saleebey, D. (Ed.), The Strengths Perspective in Social Work Practice: Second Edition. New York: Longman, (pp. 97-114).
Lehman, A.F. (1988). A quality of life interview for the chronically mentally ill. Evaluation and Program Planning, 11, 51-62.
Lehman, A.F., Steinwachs, D.M., et al. (1998). At issue: Translating research into practice: The schizophrenia patient outcomes research team (PORT) treatment recommendations. Schizophrenia Bulletin, 24, 1-10.
Levine, M. (1981). The history and politics of community mental health. New York: Oxford University Press.
Levine, M. & Perkins, D.V. (1997). Principles of community psychology: Perspectives and applications (2nd edition). New York: Oxford University Press.
Libassi, M.F. (1988). The chronically mentally ill: A practice approach. Social Casework: The Journal of Contemporary Social Work, 88-96.
Lipsey, M., Crosse, S., Dunkle, J., Pollard, J., & Stobart, G. (1985). Evaluation: The state of the art and the sorry state of the science. In Cordray, D.S. (Ed.), Utilizing Prior Research in Evaluation Planning: New Directions for Program Evaluation, No. 27. San Francisco: Jossey-Bass, (pp. 7-28).
Lipsey, M.W. & Pollard, J.A. (1989). Driving toward theory in program evaluation: More models to choose from. Evaluation and Program Planning, 12, 317-328.
Lynch, M. & Kruzich, J. (1986). Needs assessment of the chronically mentally ill: Practitioner and client perspectives. Administration in Mental Health, 13, 237-248.
Lyons, J.S., Howard, K.I., O'Mahoney, M.T., & Lish, J.D. (1997). The measurement & management of clinical outcomes in mental health. New York: John Wiley & Sons, Inc.
Macias, C., Kinney, R., Farley, O.W., Jackson, R., & Vos, B. (1994). The role of case management within a community support system: Partnership with psychosocial rehabilitation. Community Mental Health Journal, 30, 323-339.
Manderscheid, R.W., Henderson, M.J., Witkin, M.J., & Atay, J.E. (1999). Contemporary mental health systems and managed care: Definitions and perspectives. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 412-426).
Mark, T., McKusick, D., King, E., Harwood, H., & Genuardi, J. (1998). National Expenditures for Mental Health, Alcohol and Other Drug Abuse Treatment, 1996. SAMHSA Document, DHHS Publication No. (SMA) 98-3255. http://www.mentalhealth.org.
Marty, D., Rapp, C., & Carlson, L. (2001). The experts speak: The critical ingredients of strengths model of case management. Psychiatric Rehabilitation Journal, 24, 214-221.
McCabe, S., & Unzicker, R. (1995). Changing roles of consumer/survivors in mature mental health systems. In Stein, L. & Hollingsworth, E. (Eds.), Maturing Mental Health Systems: New Challenges and Opportunities.
San Francisco: Jossey-Bass Publishers, (pp. 61-74).
McGrew, J., Bond, G., Dietzen, L., & Salyers, M. (1994). Measuring the fidelity of implementation of a mental health program model. Journal of Consulting and Clinical Psychology, 62, 670-678.
McLean, A. (1999). Empowerment and the psychiatric consumer/ex-patient movement in the United States: Contradictions, crisis and change. Social Science and Medicine, 40, 1053-1071.
Mechanic, D., Schlesinger, M., & McAlpine, D.D. (1995). Management of mental health and substance abuse services: State of the art and early results. The Milbank Quarterly, 73, 19-55.
Mechanic, D. (1999). Mental health and mental illness: Definitions and perspectives. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 12-28).
Mechanic, D. (1999). Mental Health and Social Policy: The Emergence of Managed Care (4th edition). Boston: Allyn and Bacon.
Mental Health Statistics Improvement Project (MHSIP) (1996). Consumer-oriented mental health care: The final report of the mental health statistics improvement project (MHSIP) task force on a consumer-oriented mental health report card. Rockville, MD: Center for Mental Health Services.
Mental Health Statistics Improvement Project (MHSIP) Task Force (1998). Performance indicators for mental health services: Values, accountability, evaluation, and decision support: Final report of the task force on the design of performance indicators derived from the MHSIP content. http://www.mhsip.org/documents/perfind.htm.
Michigan Department of Community Health (MDCH) (2000a). Community Mental Health in Michigan: Background. MDCH: www.mdch.state.mi.us/BH/.
Michigan Department of Community Health (MDCH) (2000b). Final Revised Plan for Procurement - Full Version Sent to HCFA - September 2000. MDCH: http://www.mdch.
state.mi.us/BH/procurement.htm.
Michigan Department of Community Health (MDCH) (1999a). MDCH: Community Mental Health Services Program: Managed Specialty Supports and Services Contract: October 1, 1998 - September 30, 2000 (Revised May 1, 1999). Lansing, MI: MDCH.
Michigan Department of Community Health (MDCH) (1999b). Competition for Management of Publicly-Funded Specialty Services: Supporting Consumer-Directed Services. Lansing, MI: MDCH.
Michigan Mental Health Code (PA 194, Section 330.1712) (1997). Mental Health Code (excerpt), Act 258 of 1974, 330.1712: Individualized written plan of services. http://www.michiganlegislature.org/law/GetObject.asp?objName=330-1712.
Mitchell, J.E., Pyle, R.L., & Hatsukami, D. (1983). A comparative analysis of psychiatric problems listed by patients and physicians. Hospital and Community Psychiatry, 34, 848-849.
Modrcin, M., Rapp, C.A., & Poertner, J. (1988). The evaluation of case management services with the chronically mentally ill. Evaluation and Program Planning, 11, 307-314.
Moncher, F. & Prinz, R. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11, 247-266.
Morrissey, J.P. (1999). Integrating service delivery systems for persons with a severe mental illness: Definitions and perspectives. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press.
Morrissey, J.P. & Goldman, H. (1984). Cycles of reform in the care of the chronically mentally ill. Hospital and Community Psychiatry, 35, 785-793.
Mowbray, C., Rusilowski-Clover, G., Arnold, J., Allen, C., Harris, S., McCrohan, N., & Greenfield, A. (1994). Project WINS: Integrating vocational services on mental health case management teams. Community Mental Health Journal, 30, 347-362.
Mueser, K., Bond, G., Drake, R., & Resnick, S. (1998). Models of community care for severe mental illness: A review of research on case management. Schizophrenia Bulletin, 24, 37-74.
Mueser, K., Bond, G., & Drake, R. (2001). Community-based treatment of schizophrenia and other severe mental disorders: Treatment outcomes? Medscape Mental Health, 6(1). www.medscape.com/Medscape/psychiatry/journal/2001/v06.n01/mh3418.mues01.html.
National Association of State Mental Health Program Directors (NASMHPD) (2000). 1999 State Mental Health Agency Profiling System Report. http://www.nasmhpd.org/nri/profiles.cfm.
Neale, M.S. & Rosenheck, R.A. (1995). Therapeutic alliance and outcome in a VA intensive case management program. Psychiatric Services, 46, 719-721.
Olfson, M. (1990). Assertive community treatment: An evaluation of the experimental evidence. Hospital and Community Psychiatry, 41, 634-641.
Patton, M.Q. (1997). Utilization-Focused Evaluation: The New Century Text (3rd edition). Thousand Oaks, CA: Sage.
Phelan, J. & Link, B. (1999). The labeling theory of mental disorder (I): The role of social contingencies in the application of psychiatric labels. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 139-151).
Phillips, S.D., Burns, B.J., Edgar, E.R., Mueser, K.T., Linkins, K.W., Rosenheck, R.A., Drake, R.E., & McDonel Herr, E.C. (2001). Moving assertive community treatment into standard practice. Psychiatric Services, 52, 771-779.
Pickett, S.A., Cook, J.A., & Razzano, L. (1999). Psychiatric rehabilitation services and outcomes: An overview: Definitions and perspectives. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 484-492).
Price, R.H. & Smith, S.S. (1983). Two decades of reform in the mental health system (1963-1983). In Seidman, E. (Ed.), Handbook of Social Intervention. Beverly Hills: Sage Publications, (pp. 408-437).
Proctor, E.K. & Stiffman, A.R. (1998). Background of services and treatment research.
In Williams, J.B. & Ell, K. (Eds.), Advances in mental health research: Implications for practice. Washington, DC: NASW Press, (pp. 259-286).
Quinlivan, R., Hough, R., Crowell, A., Beach, C., Hofstetter, R., & Kenworthy, K. (1995). Service utilization and costs of care for severely mentally ill clients in an intensive case management program. Psychiatric Services, 46, 365-371.
Rapp, C.A. (1993). Theory, principles, and methods of the strengths model of case management. In Harris, M. & Bergman, H.C. (Eds.), Case Management for Mentally Ill Patients: Theory and Practice. Harwood Academic Publishers, (pp. 143-163).
Rapp, C.A. (1998). The strengths model: Case management with people suffering from severe and persistent mental illness. New York: Oxford University Press.
Rapp, C.A. (1996). The active ingredients of effective case management: A research synthesis. In Giesler, L.J. (Ed.), Case Management for Behavioral Managed Care. Cincinnati: National Association of Case Managers (NACM), (pp. 5-51).
Rapp, C.A., Gowdy, E., Sullivan, W.P., & Wintersteen, R. (1988). Client outcome reporting: The status method. Community Mental Health Journal, 24, 118-133.
Rapp, C.A., Shera, W., & Kisthardt, W. (1993). Research strategies for consumer empowerment of people with severe mental illness. Social Work, 38, 727-735.
Rapp, R.C., Siegal, H.A., & Fisher, J.H. (1992). A strengths-based model of case management/advocacy: Adapting a mental health model to practice work with persons who have substance abuse problems. In Ashery, R.S. (Ed.), Progress and Issues in Case Management. National Institute on Drug Abuse (NIDA) Research Monograph Series. Rockville, MD: National Institute on Drug Abuse, (pp. 34-53).
Rapp, C.A. & Wintersteen, R. (1989). The strengths model of case management: Results from twelve demonstrations. Psychosocial Rehabilitation Journal, 13, 23-32.
Rappaport, J., Reischl, T.M., & Zimmerman, M.A. (1992). Mutual help mechanisms in the empowerment of former mental patients.
In Saleebey, D. (Ed.), The strengths perspective in social work practice. New York: Longman, (pp. 84-97).
Rappaport, J. & Chinsky, J.M. (1974). Models for delivery of service from a historical and conceptual perspective. Professional Psychology, 5, 42-50.
Rappaport, J., Davidson, W.S., Wilson, M.N., & Mitchell, A. (1975). Alternatives to blaming the victim or the environment: Our places to stand have not moved the earth. American Psychologist, 525-528.
Rappaport, J. (1981). In praise of paradox: A social policy of empowerment over prevention. American Journal of Community Psychology, 9, 1-25.
Rappaport, J. (1990). Defining excellence criteria in community research. In Tolan, P., Keys, C., Chertok, F., & Jason, L. (Eds.), Researching community psychology: Issues of theory and methods. Washington, DC: American Psychological Association, (pp. 51-63).
Regier, D., Farmer, M., Rae, D., Locke, B., Keith, S., Judd, L., & Goodwin, F. (1990). Comorbidity of mental disorders with alcohol and other drug abuse. Journal of the American Medical Association, 264, 2511-2518.
Ridgeway, P. (1988). The voice of consumers in mental health systems: A call for change. Burlington, VT: Center for Community Change.
Rochefort, D. (1997). From poorhouses to homelessness: Policy analysis and mental health care (2nd edition). Westport, CT: Auburn House.
Rogers, E.S., Chamberlin, J., Ellison, M.L., & Crean, T. (1997). A consumer-constructed scale to measure empowerment among users of mental health services. Psychiatric Services, 48, 1042-1047.
Rogers, E.S. & Palmer-Erbs, V. (1994). Participatory action research: Implications for research and evaluation in psychiatric rehabilitation. Psychosocial Rehabilitation Journal, 18, 3-12.
Rose, S.M. (1992). Case management: An advocacy/empowerment design. In Rose, S.M. (Ed.), Case Management & Social Work Practice. White Plains, NY: Longman, (pp. 271-297).
Rosenfield, S. (1992).
Factors contributing to the subjective quality of life of the chronic mentally ill. Journal of Health and Social Behavior, 33, 299-315.
Rossi, P.H., Freeman, H.E., & Wright, S.R. (1979). Evaluation: A Systematic Approach. Beverly Hills, CA: Sage.
Rossi, P.H. & Freeman, H.E. (1985). Evaluation: A Systematic Approach. Beverly Hills, CA: Sage.
Rubin, A. (1987). Case management. Social Work, 28, 49-54.
Ryan, C., Sherman, P., & Judd, C. (1994). Accounting for case manager effects in the evaluation of mental health services. Journal of Consulting and Clinical Psychology, 62, 965-974.
Ryan, C., Sherman, P., & Bogart, L. (1997). Patterns of services and consumer outcome in an intensive case management program. Journal of Consulting and Clinical Psychology, 65, 485-493.
Saleebey, D. (1997). Introduction: Power in the people. In Saleebey, D. (Ed.), The strengths perspective in social work practice. White Plains, NY: Longman, (pp. 3-20).
Sainfort, F., Becker, M., & Diamond, R. (1996). Judgments of quality of life of individuals with severe mental disorders: Patient self-report versus provider perspectives. American Journal of Psychiatry, 153, 497-502.
Santos, A.B., Henggeler, S.W., Burns, B.J., Arana, G.W., & Meisler, N. (1995). Research on field-based services: Models for reform in the delivery of mental health care to populations with complex clinical problems. American Journal of Psychiatry, 152, 1111-1123.
Scheff, T.J. (1984). Being mentally ill: A sociological theory (2nd edition). New York: Aldine.
Scheid, T.L. & Horwitz, A. (1999). Mental health systems and policy. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 377-391).
Scheirer, M.A. (1994). Designing and using process evaluation. In Wholey, J.S., Hatry, H.P., & Newcomer, K.E. (Eds.), Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass Publishers, (pp. 40-68).
Scheirer, M.A.
& Rezmovic, E.L. (1983). Measuring the degree of program implementation: A methodological review. Evaluation Review, 7, 599-633.
Schlesinger, M. & Gray, B. (1999). Institutional change and its consequences for the delivery of mental health services: Definitions and perspectives. In Horwitz, A. & Scheid, T. (Eds.), A handbook for the study of mental health: Social contexts, theories, and systems. Cambridge: Cambridge University Press, (pp. 427-448).
Scott, J. & Dixon, L. (1995). Assertive community treatment and case management for schizophrenia. Schizophrenia Bulletin, 21, 657-667.
Sechrest, L., West, S., Phillips, M., Redner, R., & Yeaton, W. (1979). Some neglected problems in evaluation research: Strengths and integrity of treatments. In Sechrest, L., West, S., Phillips, M., Redner, R., & Yeaton, W. (Eds.), Evaluation Studies Review Annual, Volume 4. Beverly Hills: Sage Publications, (pp. 15-35).
Segal, S.P., Silverman, C., & Temkin, T. (1995). Measuring empowerment in client-run self-help agencies. Community Mental Health Journal, 31, 227.
Shern, D.L., Wilson, N.Z., Coen, A.S., Patrick, D.C., Foster, M., Bartsch, D.A., & Demmler, J. (1994). Client outcomes II: Longitudinal client data from the Colorado treatment outcome study. The Milbank Quarterly, 72, 123-148.
Solomon, P. (1998). The conceptual and empirical base of case management for adults with severe mental illness. In Williams, J.B. & Ell, K. (Eds.), Advances in mental health research: Implications for practice. Washington, DC: NASW Press, (pp. 482-497).
Stein, L.I. & Test, M.A. (1980). Alternative to mental hospital treatment I. Conceptual model, treatment program, and clinical evaluation. Archives of General Psychiatry, 37, 392-397.
Stroup, T.S. & Dorwart, R. (1997). Overview of public sector managed mental health care. In Minkoff, K. & Pollack, D. (Eds.), Managed Mental Health Care in the Public Sector: A Survival Manual. Amsterdam, The Netherlands: Harwood Academic Publishers, (pp. 1-12).
Substance Abuse and Mental Health Services Administration (SAMHSA) (2000). State Profiles, 1999, On Public Sector Managed Behavioral Health Care. Washington, DC: SAMHSA Document DHHS Publication No. (SMA) 00-3432.
Sullivan, C.M., Tan, C., Basta, J., Rumptz, M., & Davidson, W.S. (1992). An advocacy intervention program for women with abusive partners: Initial evaluation. American Journal of Community Psychology, 20, 309-322.
Sullivan, C. & Bybee, D. (1999). Reducing violence using community-based advocacy for women with abusive partners. Journal of Consulting and Clinical Psychology, 67, 43-53.
Sullivan, W.P. (1992). Reclaiming the community: The strengths perspective and deinstitutionalization. Social Work, 37, 204-209.
Sullivan, W.P. (1997). The strengths model of case management. In Saleebey, D. (Ed.), The strengths perspective in social work practice. New York: Longman, (pp. 183-197).
Talbott, J.A. (1979). Deinstitutionalization: Avoiding the disasters of the past. Hospital and Community Psychiatry, 30, 621-624.
Taube, C.A., Morlock, L., Burns, B.J., & Santos, A.B. (1990). New directions in research on assertive community treatment. Hospital and Community Psychiatry, 41, 642-647.
Teague, G., Bond, G., & Drake, R. (1998). Program fidelity in assertive community treatment: Development and use of a measure. American Journal of Orthopsychiatry, 68, 216-232.
Teague, G., Drake, R., & Ackerson, T. (1995). Evaluating use of continuous treatment teams for persons with mental illness and substance abuse. Psychiatric Services, 46, 689-695.
Test, M.A. (1998). Community-based treatment models for adults with severe and persistent mental illness. In Williams, J.B. & Ell, K. (Eds.), Advances in mental health research: Implications for practice. Washington, DC: NASW Press, (pp. 420-436).
Thompson, B. (1984). Canonical Correlation Analysis: Uses and Interpretation. Series 47: Quantitative Applications in the Social Sciences. Newbury Park: Sage.
Torrey, E.F. (1990).
Economic barriers to widespread implementation of model programs for the seriously mentally ill. Hospital and Community Psychiatry, 41, 526-531.
Turner, J.C. & TenHoor, W.J. (1978). The NIMH community support program: Pilot approach to a needed social reform. Schizophrenia Bulletin, 4, 319-344.
Weisbrod, B.A., Test, M.A., & Stein, L.I. (1980). Alternative to mental hospital treatment II. Economic benefit-cost analysis. Archives of General Psychiatry, 37, 400-405.
Wholey, J.S. (1994). Assessing the feasibility and likely usefulness. In Wholey, J.S., Hatry, H.P., & Newcomer, K.E. (Eds.), Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass Publishers, (pp. 15-39).
Wilson, S.F. (1992). Community support and community integration: New directions for client outcome research. In Rose, S.M. (Ed.), Case Management & Social Work Practice. White Plains, NY: Longman, (pp. 245-257).
Wolff, N., Helminiak, T.W., Morse, G.A., Calsyn, R.J., Klinkenberg, W.D., & Trusty, M.L. (1997). Cost-effectiveness evaluation of three approaches to case management for homeless mentally ill clients. American Journal of Psychiatry, 154, 341-348.
Wowra, S.A. & McCarter, R. (1999). Validation of the empowerment scale with an outpatient mental health population. Psychiatric Services, 50, 959-961.
Yeaton, W. & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49, 156-167.
Zani, B., McFarland, B., Wachal, M., Barker, S., & Barron, N. (1999). Statewide replication of predictive validation for the Multnomah Community Ability Scale. Community Mental Health Journal, 35, 223-229.