This is to certify that the dissertation entitled SCIENTIST-PRACTITIONER INTERESTS, RESEARCH SELF-EFFICACY, PERCEPTIONS OF THE RESEARCH TRAINING ENVIRONMENT AND THEIR RELATION TO DISSERTATION PROGRESS, presented by CAROL CATHERINE GEISLER, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Counseling Psychology.

SCIENTIST-PRACTITIONER INTERESTS, RESEARCH SELF-EFFICACY, PERCEPTIONS OF THE RESEARCH TRAINING ENVIRONMENT AND THEIR RELATION TO DISSERTATION PROGRESS

By

Carol Catherine Geisler

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Counseling, Educational Psychology and Special Education

1995

ABSTRACT

SCIENTIST-PRACTITIONER INTERESTS, RESEARCH SELF-EFFICACY, PERCEPTIONS OF THE RESEARCH TRAINING ENVIRONMENT AND THEIR RELATION TO DISSERTATION PROGRESS

By Carol Catherine Geisler

This study was designed to explore scientist interests, practitioner interests, research self-efficacy, perceptions of the research training environment, and their relation to dissertation progress in doctoral students who entered APA-approved Counseling Psychology programs between the years 1987 and 1991. Surveys containing the following instruments were mailed to 23 programs: the Scientist-Practitioner Inventory, the Self-Efficacy Research Measure, the Research Training Environment Scale, and a Demographic and Dissertation Questionnaire. A total of 255 subjects returned usable surveys. The major findings of this study, at the .05 level of significance, were that: (a) research self-efficacy was positively related to dissertation progress; (b) perceptions of the research training environment were not significantly related to dissertation progress; (c) scientist interest and research self-efficacy were positively interrelated; (d) perceptions of the research training environment varied across programs; and (e) in a discriminant function analysis, research self-efficacy was the most influential predictor of dissertation progress (the other variables did not make a significant contribution). Implications of these findings for the training of counseling psychologists are discussed.

Copyright by CAROL CATHERINE GEISLER 1995

Dedicated to Eugene and Lucile Geisler

ACKNOWLEDGMENTS

Even though my name stands alone on the title page, this dissertation was not written in isolation and would not be a reality had it not been for the love, support, and assistance of many people. I would like to acknowledge and thank those who have helped me. I am very grateful to my parents, who have given me the support and encouragement to succeed in my dream of becoming a psychologist. Without their money, wisdom, and persistence, I would not be where I am today. I am very appreciative of their guidance and assistance.
I would like to thank my life partner, Jeff, for his emotional support, his computer knowledge, for stuffing hundreds of envelopes, and for doing more than his share of household tasks while I sat at the computer. I would like to thank the many friends who have supported me throughout graduate school and many of life's difficult and joyous moments: Co, Mary, Andrea, and Dana. It is rare to find true friends with whom one can share their soul; I have been truly blessed. I would like to thank my committee (Dr. Lee June, Dr. Betsy Becker, Dr. Gloria Smith, Dr. Janet Bokemeier) and the faculty who have helped in this process. Specifically, I'd like to acknowledge the following people: Dr. Lee June, for his patience; Dr. Betsy Becker, for her encouragement; Dr. Linda Forrest, for being a magnificent role model; and Dr. Frances Harris, for her unending wisdom. And finally, I would like to thank all of the graduate students in counseling psychology who took the time to fill out my survey.

TABLE OF CONTENTS

LIST OF TABLES ............................................ ix
LIST OF FIGURES ........................................... xi
INTRODUCTION ............................................... 1
REVIEW OF THE LITERATURE ................................... 4
    Problem Statement ...................................... 9
METHOD .................................................... 10
    Design ................................................ 10
    Sample ................................................ 10
    Instruments ........................................... 17
    Procedure ............................................. 20
    Primary Study ......................................... 21
    Research Questions and Hypotheses ..................... 24
    Data Analysis ......................................... 25
RESULTS ................................................... 27
    Counterbalancing of Instruments ....................... 27
    Relations Between Key Variables ....................... 28
    Perceptions of the Research Training Environment ...... 33
    Dissertation Progress ................................. 34
    Holland Codes ......................................... 56
    Research Experience ................................... 58
    Employment Preferences ................................ 60
DISCUSSION ................................................ 62
    Overview of Findings .................................. 62
    Limitations of This Study ............................. 66
    Implications and Applications of Findings ............. 69
    Future Research ....................................... 70
APPENDICES ................................................ 74
    Scientist-Practitioner Inventory ...................... 74
    Research Training Environment Scale ................... 77
    Self-Efficacy Research Measure ........................ 82
    Demographic and Dissertation Questionnaire ............ 84
    Letter to Participants ................................ 90
    Figures ............................................... 91
REFERENCES ................................................ 95

LIST OF TABLES

1.  Distribution and Return of Surveys from Each Program ................ 13
2.  Percentages of Sample Demographics .................................. 16
3.  Descriptive Statistics for SPI, SERM, and RTES ...................... 29
4.  Correlation Matrix for Scientist Interest, Practitioner Interest, SERM, RTES, and Dissertation Progress ... 31
5.  Percentages of Dissertation Progress by Demographics ................ 38
6.  Percentages of Dissertation Progress by Program ..................... 39
7.  Percentages of Dissertation Progress by Dissertation Type ........... 42
8.  Reported Research Experience ........................................ 43
9.  Standardized Discriminant Coefficients: Entire Sample ............... 46
10. Actual and Predicted Group Membership for Entire Sample ............. 47
11. Standardized Discriminant Coefficients: Cross Validation for Entire Sample ... 48
12. Actual and Predicted Group Membership: Cross Validation for Entire Sample ... 49
13. Standardized Discriminant Coefficients: Form A ...................... 51
14. Standardized Discriminant Coefficients: Cross Validation for Form A . 52
15. Actual and Predicted Group Membership: Cross Validation for Form A .. 53
16. Standardized Discriminant Coefficients: Form B ...................... 54
17. Standardized Discriminant Coefficients: Cross Validation for Form B . 55
18. Actual and Predicted Group Membership: Cross Validation for Form B .. 56
19. Self-Reporting of Holland Codes ..................................... 57
20. Percentages of Dissertation Progress by Holland Codes ............... 58
21. Preferred Place of Employment After Graduation ...................... 61

LIST OF FIGURES

1. Predicted Dissertation Progress Membership from the Obtained Discriminant Function Scores for All Participants ... 91
2. Actual Dissertation Progress by Obtained Discriminant Function Scores for All Participants ... 91
3. Predicted Dissertation Progress Membership from the Obtained Discriminant Function Scores for Form A ... 92
4. Actual Dissertation Progress by Obtained Discriminant Function Scores for Form A ... 92
5. Predicted Dissertation Progress Membership from the Obtained Discriminant Function Scores for Form B ... 93
6. Actual Dissertation Progress by Obtained Discriminant Function Scores for Form B ... 93
7. Histogram of Research Experience ..................................... 94

INTRODUCTION

Counseling Psychology has traditionally espoused the scientist-practitioner model of education and training, first discussed at the Boulder Conference in 1949 and more recently at the Georgia Conference in 1987. An underlying assumption of this model is a balance of training in research and practice, with the aspiration that counseling psychologists will value and use both professionally. However, Counseling Psychology has traditionally placed more importance on the practitioner side of the model (Magoon & Holland, 1984). In a survey of APA-approved counseling psychology training directors, Galassi, Brooks, Stoltz, and Trexler (1986) found that programs indeed perceive themselves as placing more emphasis on the practitioner dimension of training. Watkins, Lopez, Campbell, and Himmell (1986) surveyed 716 APA members who were either counseling psychologists or clinical psychologists, and found that both professions primarily identify with the practitioner role.
Mike Patton chose as the theme of his year as President of APA Division 17 (1990-1991) "Science into Practice, and Practice into Science." In a Division 17 Newsletter (1990), he stated that the basic premise of the scientist-practitioner model is the underlying theme that unites all counseling psychologists, regardless of their theoretical orientation or interests. He strongly suggested that building a more effective interface between science and practice may be the field's key to the future. In his final letter as President (Division 17 Newsletter, 1991), Patton outlined the top 10 problems and challenges he saw facing counseling psychologists. Included in the list was:

...education in the scientist-practitioner tradition. Too many doctoral programs do not integrate science and practice throughout the curriculum, and too many internship centers ignore science altogether. The danger for us is producing highly credentialed graduates who can function only as skilled technicians, not as thoughtful scientist/practitioners (p. 1).

A task force, the Project to Integrate Practice and Science (PIPS), was established in 1991 to examine the scientist-practitioner model of training. One goal included promoting the integration of science and practice in both research and practice settings; a second was conducting descriptive studies of scientist-practitioners to better understand how they think and function. Examining graduate students as they are developing into scientist-practitioners, as this dissertation does, can provide useful information. The dissertation typically plays a key role in the development of the scientific dimension of training for counseling psychologists and may affect future research productivity. Therefore, it is worth examining how scientist interests and practitioner interests influence dissertation progress in counseling psychology graduate students.

REVIEW OF THE LITERATURE

Garcia, Malott, and Brethower (1988) report that "of all students entering graduate school, as many as one half drop out before obtaining the M.A. or Ph.D. degrees" (p. 186). In addition, of those who do drop out, 25% do so after completing their courses but before completing their dissertations. Porter, Chubin, Rossini, Boeckmann, and Connolly (1982) found that if a person with a doctorate did not publish an article within two years after graduation, that person was not ever likely to do so. Gelso (1979) suggested that students enter a counseling psychology program with some ambivalence about research, and that the positive end of this ambivalence is often not adequately tapped and nurtured in research training environments. He suggested 10 key ingredients for positively influencing students' attitudes toward research. They include: 1) faculty modeling appropriate scientific behavior; 2) faculty and the environment reinforcing scientific activity among students; 3) early and minimally threatening involvement in research; 4) separating research and research design from statistics; 5) "looking inward" for research questions and ideas; 6) teaching students how science can be partly a social experience as well as an isolated, private one; 7) making students aware that all experiments are flawed and limited; 8) teaching and modeling a variety of research styles; 9) encouraging students to do research that applies directly to their clinical experience; and 10) focusing training on how research gets done in agencies.
Royalty, Gelso, Mallinckrodt, and Garrett (1986) found, in a survey of 358 students from 10 APA-approved Counseling Psychology programs, that training programs varied both in their impact on research attitudes and in students' perceptions of the research training environment. Holland (1986) proposed that the lack of research in the field of Counseling Psychology arises from the personality types that are drawn to counseling psychology: Social types may be less prone to doing research than Investigative types. Studies have shown that counseling psychology, on average, tends to attract more "Social" types than "Investigative" types on the Holland codes (Holland, 1986). Students who are in graduate programs may be doing research out of necessity rather than desire (Betz, 1986). DeMuse's (1987) study of 39 Industrial/Organizational psychology programs suggested a strong relation between students' opinions of program quality and research productivity. In at least two studies (Gelso, 1979; Watkins, Lopez, Campbell & Himmell, 1986), the modal number of annual publications for counseling psychologists was found to be zero. Royalty et al. (1986) examined ten Counseling Psychology programs and found that programs had differing impacts on students' attitudes toward research. Thus, perceptions of research training environments seem to be an important factor in dissertation progress.

Shemberg, Keeley, and Blum (1989) surveyed 62 directors of clinical training and found that non-traditional research designs (e.g., phenomenological studies, surveys, library research, and case histories) were not as accepted or valued as traditional methods of research for completing dissertations. Thus, most graduate students who complete research use traditional empirical designs. "Many continue to define science narrowly in very traditional terms" (Shemberg, Keeley & Blum, 1989, p. 192). Many practicing clinical psychologists do not contribute to or even read the empirical literature (Barlow et al., 1984; Haynes, Lemsky, & Sexton-Radek, 1987).

The Georgia Conference Research Work Group proposed that research self-efficacy in graduate training be examined (Gelso, Betz, Friedlander, Helms, Hill, Patton, Super, & Wampold, 1988). Bandura (1977) first introduced the concept of self-efficacy, which represents one's beliefs about one's ability to perform a certain task or behavior. Bandura (1986) suggested four sources of information from which one derives self-efficacy: personal performance, or accomplishments; vicarious experience, that is, learning through another person's experience; social persuasion, or verbal persuasion; and finally, emotional arousal, or one's physiological state. Bandura suggested that these sources of information continually interact to determine one's behavior. In addition to the sources of self-efficacy, Bandura defined three dimensions of self-efficacy: level, or the degree of difficulty of a behavior that one feels capable of performing; strength, or the confidence a person has to complete a given task; and generality, or the span of circumstances in which a person considers him/herself to be successful.

Phillips (1991) examined the relationship of research self-efficacy and perceptions of the research training environment to research productivity for graduate students in counseling psychology. Research self-efficacy was significantly related to both research productivity and perceptions of the research training environment.
Mallinckrodt, Gelso, and Royalty (1990) found that training environments, persons, and person-environment interactions all influence research interest. More specifically, they found that positive change in research interest was related to

(a) conveying to students that all experiments are flawed and that a particular study need not make a great contribution to knowledge to be worth doing and (b) 'wedding science to practice' - that is, teaching students to use their clinical experiences as a source of research ideas (p. 30).

They suggested that changes in the research training environment may increase levels of student research productivity.

For many graduate students, the dissertation may be their first experience with research, and this may profoundly influence their subsequent thoughts and attitudes about research. By examining some of the internal factors, such as research self-efficacy, and external factors, such as the nature of the research training environment, this study may be helpful in discovering which factors predict a successful first research experience. Such information could prove valuable in re-designing the scientist dimension of training programs in counseling psychology.

A comprehensive literature review produced no articles on dissertation progress in Counseling Psychology. Muszynski and Akamatsu (1991) examined delay in completion of doctoral dissertations in clinical psychology and found that "cognitive and affective factors related to procrastination are predictive of delay in completion of dissertations among clinical psychology students" (p. 122). However, several articles related to dissertation progress were found in other fields (e.g., Jacks, Chubin, Porter, & Connolly, 1983; Monsour & Corman, 1991). These studies examined a wide range of variables such as the advisor/advisee relationship, the student's relationship with the dissertation committee, finances, family obligations, procrastination, job offers, and internal vs. external locus of control. In general, there has been little consistency in the variables examined across studies.

The doctorate is the standard entry-level accomplishment for counseling psychology, and a dissertation is required in all APA-approved Counseling Psychology programs. Yet a review of the literature suggests that little is known about what predicts successful completion of a dissertation in Counseling Psychology. I intend to examine this issue. In addition, this study may provide some insight as to why the modal rate of annual publications in Counseling Psychology is zero (Gelso, 1979; Watkins, Lopez, Campbell, & Himmell, 1986), in spite of the fact that most training programs espouse a scientist-practitioner model of training.

Problem Statement

There is a need to examine variables that predict successful dissertation progress in APA-approved Counseling Psychology programs. The purpose of this study is to expand the empirical base of information regarding the training of counseling psychologists in the area of research. Specifically, this study will examine scientist-practitioner interests, perceptions of the research training environment, and research self-efficacy and their relation to dissertation progress.

METHOD

Design

This exploratory study was a descriptive field study with a passive design using cluster sampling and survey analysis.
Heppner, Kivlighan, and Wampold (1992) describe descriptive designs as

...research strategies that enable the investigator to describe the occurrence of variables, the underlying dimensions in a set of variables, or the relationship between or among variables (p. 194).

The study was designed to garner more information about variables that may be related to dissertation and program completion. Participants were not assigned to groups and the independent variables were not intentionally manipulated; therefore, the design is best described as a passive design. Doctoral programs in Counseling Psychology were used as clusters for the cluster sample, and individuals within the programs were surveyed for the analyses. Scheaffer, Mendenhall, and Ott (1986) describe a cluster sample as "a simple random sample in which each sampling unit is a collection, or cluster, of elements" (p. 197).

Sample

Computations for a one-stage cluster-sampling design were used to determine the number of APA-approved Counseling Psychology programs necessary for this study. There were fifty-eight fully accredited APA-approved doctoral programs in counseling psychology (APA, 1993); thus, the total number of clusters in the population was 58. Provisionally approved programs were not considered for this study. Because all programs met APA standards, clusters were assumed to be fairly similar. The average size of a cluster was determined by calculating the average number of students admitted each year to all APA-approved Counseling Psychology programs. The following variables were taken into consideration as elements in the cluster that may be related to dissertation progress: (a) the number of full-time and part-time faculty; (b) whether or not the program accepted only those students who already had a master's degree; (c) the number of degrees awarded per year; (d) the number of applications received per year; (e) the number of students enrolled on a full-time and part-time basis; (f) the percentage of students who completed a program once they were accepted; (g) the number of hours needed to complete a doctoral degree; (h) whether or not pre-dissertation research was required; and (i) self-identification as a scientist-practitioner program. Taking the aforementioned variables into account, a cluster sample size of twenty-four programs was calculated.

Twenty-four APA-approved Counseling Psychology programs were then randomly selected to participate in the study. Twenty of the original twenty-four programs that were selected agreed to participate, two declined to participate due to policy reasons, and two did not respond to phone and letter inquiries. Therefore, four more programs were randomly selected; of these, three participated and one did not. The final number of programs participating was twenty-three. Two programs were located at the same university, but in different departments.

Approximately 870 surveys were sent to the twenty-three selected programs. Three hundred thirty-one subjects completed usable surveys. However, only 255 surveys were used in the data analyses because these represented students who entered their doctoral programs during the years 1987-1991 (see Table 1). These years were pre-selected because they should comprise those students who have recently completed their dissertation or have been in the final stages of dissertation completion, which was the topic of interest in this study.
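For reference, the cluster-size computation described above follows the general one-stage cluster-sampling formula given in Scheaffer, Mendenhall, and Ott (1986) for estimating a population mean. The symbols below are the textbook's conventional ones rather than values reported in this study, and the specific inputs used by the author are not reproduced here:

\[
n = \frac{N \sigma_c^{2}}{N D + \sigma_c^{2}}, \qquad D = \frac{B^{2}\,\bar{M}^{2}}{4},
\]

where $N$ is the number of clusters in the population (here, 58 programs), $\sigma_c^{2}$ is the variance among cluster totals (estimated from prior information), $\bar{M}$ is the average cluster size, and $B$ is the desired bound on the error of estimation.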
There was a fairly even distribution of subjects from the years of interest (1987 = 38, 1988 = 53, 1989 = 54, 1990 = 52, 1991 = 58). The lower number for 1987 may be attributed to those students having been out of their programs for a longer period of time and therefore being more difficult to locate. The initial count of 870 surveys was based upon an estimate of the number of students admitted during the years of interest. This number represented an overestimate, as some requests by programs for surveys may have been exaggerated given the actual number of students enrolled. For example, APA's Graduate Study in Psychology (1993) reports one program as having approximately 45 full-time students, but that program requested 65 surveys. In addition, several incomplete or unusable surveys were returned to the researcher. Examples include: the participant was not admitted during the years of interest; the participant was not in the counseling psychology program, but in a different program in the same department; and the participant was no longer at the current address. The overall response rate based on initial counts requested by programs varied from 10% to 50%, with an average of 30%.

Table 1. Distribution and Return of Surveys from Each Program.

                      Number    Number     Survey Sent Directly to    Return
    Program           Mailed    Returned   Subject by Researcher      Rate
    Programs 1 & 2      60        15               No                  25%
    Program 3           46        17               No                  37%
    Program 4           44        22               Yes                 50%
    Program 5           41        13               No                  32%
    Program 6           42         9               No                  21%
    Program 7           25         6               Yes                 24%
    Program 8           53         7               Yes                 13%
    Program 9           52        15               Yes                 29%
    Program 10          25         8               Yes                 32%
    Program 11          13         3               Yes                 23%
    Program 12          54        22               Yes                 41%
    Program 13          18         5               Yes                 28%
    Program 14          31         8               Yes                 26%
    Program 15          86         9               Yes                 10%
    Program 16          24         8               Yes                 33%
    Program 17          20         9               Yes                 45%
    Program 18          26        11               Yes                 42%
    Program 19          20         5               No                  25%
    Program 20          65        25               No                  38%
    Program 21          30         7               Yes                 23%
    Program 22          35         9               No                  26%
    Program 23          60        22               No                  37%
    Total              870       255                                   30%

Table 2 contains demographic information about the sample. The sample consisted of 171 females (67%) and 84 males (33%). Their ages ranged from twenty-five to fifty-six (M = 36, SD = 6.834). The majority (85%) of the subjects self-identified as Caucasian, followed by Chicano/Latino (4%), African American (4%), Other (3%), Asian American (2%), and Native American (2%). In terms of partner status, most subjects were married (57%), followed by those who were single (22%), living with a partner (9%), in a committed relationship but not living together (5%), divorced (4%), widowed (2%), and separated (1%). Psychology was the most frequently reported undergraduate major (64%), with the other 39 reported majors varying from nursing to mechanics.

Demographic information about graduate students in Counseling Psychology programs was not readily available. However, Gelso et al. (1986) surveyed 358 graduate students in ten counseling psychology programs. Their sample included 167 males (45%) and 190 females. Ethnicity and partner status were not reported. The age range for Gelso's sample was 22-52, with a mean of 31 years. The current sample includes more females, which should be expected given the nine-year time difference. The age range is similar to that in Gelso's study.

Table 2. Percentages of Sample Demographics.
                                                     Percentage
    Gender
        Male                                            33%
        Female                                          67%
    Age
        25-30                                           24%
        31-35                                           26%
        36-40                                           22%
        41-45                                           14%
        46-50                                           11%
        51-56                                            3%
    Ethnicity
        Caucasian                                       85%
        Chicano/Latino                                   4%
        African American                                 4%
        Other                                            3%
        Asian American                                   2%
        Native American                                  2%
    Partner Status
        Married                                         57%
        Single                                          22%
        Living with Partner                              9%
        Committed Relationship but not Living Together   5%
        Divorced                                         4%
        Widowed                                          2%
        Separated                                        1%

Instruments

The Scientist-Practitioner Inventory for Psychology. The Scientist-Practitioner Inventory for Psychology (SPI; Leong & Zachar, 1991; Appendix A) is designed to measure the scientist and practitioner interests of psychology students. Research indicates that "graduate students in psychology tend to be interested in either science or practice, but rarely both" (Leong & Zachar, 1991, p. 340). The inventory consists of 42 items. Twenty-one items measure scientist interests and 21 items measure practitioner interests, using a four-point Likert scale. Test reliability was demonstrated by the authors to be .85 for the scientist scale and .93 for the practitioner scale. The scientist scale has been positively correlated with Holland's VPI Investigative occupational interests, and the practitioner scale positively correlated with Holland's VPI Social occupational interests, demonstrating initial construct validity for the instrument (Leong & Zachar, 1991). The seven subscales of the SPI were not used for this study because of the low reliability rates reported by the authors.

The Research Training Environment Scale. The Research Training Environment Scale (RTES; Royalty et al., 1986; Appendix B) is a forty-five-item inventory based on nine of the ten "ingredients" Gelso (1979) proposed to enhance research training environments: faculty modeling of scientific behavior, reinforcement of student research, early involvement in research, untying of statistics and research, facilitating students' looking inward for research ideas, science as a partly social experience, teaching that all experiments are flawed and limited, focusing on varied styles of investigation, and the wedding of science and clinical practice. The overall reliability for the instrument is good (alpha = .92), as is the test-retest reliability, .83. In addition to a total score, there are nine subscales which vary in reliability (r = .57 to r = .84). These subscales were not used in this study, but may be used in further analysis of the data.

The Self-Efficacy Research Measure. The Self-Efficacy Research Measure (SERM; Appendix C) was developed by Phillips (1991) using a procedure suggested by Bandura (1977) to measure self-efficacy within a specific area. Phillips used factor analysis to evaluate four areas: research design; practical research skills; quantitative and computer skills; and writing skills. Graduate students were asked to generate a list of tasks related to the four areas. The tasks were then randomly ordered, and participants were asked to respond to each task using a 10-point Likert-type scale. The total score is the mean strength of overall research self-efficacy. The instrument consists of thirty-three items, and internal consistency reliability was reported at alpha = .96.

The Demographic and Dissertation Questionnaire. The Demographic and Dissertation Questionnaire (DDQ; Appendix D) was developed for this study and pilot tested on graduate students in the APA-approved Counseling Psychology program at the University of Utah.
The first section of the DDQ asks several demographic questions, such as age, gender, ethnicity, year in program, marital status, number of children, and partner status. Several items related to program progress, including thesis status, comprehensive exam status, internship status, and dissertation status, are asked in the second section. The third section asks participants to self-identify their Holland codes. Tasks necessary to complete a dissertation are listed in the fourth section, and participants are asked to check all that apply. A similar system is used in obtaining information about previous research experience. Subjects are then asked to rank order their preferred place of employment after graduation. The final section includes several qualitative questions about their graduate school experience.

Procedure

Pilot study. In March of 1994, a cover letter and a copy of the most current DDQ were sent to the twenty-five students who entered the doctoral program in Counseling Psychology at the University of Utah between the years 1989 and 1992. The participants were asked to fill out the survey and provide feedback on how to improve it. Sixteen students responded, which resulted in a return rate of sixty-four percent. Of those who responded, eighty percent were female and twenty percent male, and the mean age was thirty-six (SD = 6.834). The majority were Caucasian (86%), followed by Asian (7%) and Asian American (7%). Forty percent were single, forty percent married, thirteen percent divorced, and seven percent living with a partner. In terms of progress in the program, forty-seven percent had passed comprehensive exams and fifty-three percent had completed all of the required coursework. Although no one had completed internship, one third of the students were currently at their internship site. None of the participants in the pilot study had completed their dissertation. About half of the students had selected their chairperson and had picked a topic. Forty-four percent had started writing their proposal, and approximately one-third of the students had defended their proposals. Participants were asked when they expected to complete the dissertation, and the responses ranged over five years, from 1994 to 1998. Sixty percent of the students planned to do a qualitative dissertation. Overall, students in the pilot study reported a high rate of involvement with research prior to, or concurrent with, their dissertation. Seventy-five percent had worked with a faculty member on a research project, forty-four percent reported having a research mentor, and all of the students had collected data. Feedback from the pilot study was used to make minor revisions to the Demographic and Dissertation Questionnaire. For example, the format of several questions was changed to make the survey easier to read, and the wording on the types of dissertation was clarified.

Primary Study

The secretary, the administrative assistant, and/or the training director at each of the selected schools was contacted, and he or she assisted in facilitating the distribution and collection of survey materials. Complete mailing lists were requested from each school for students who met the criteria for the study. Fifteen schools were willing to cooperate and sent names and addresses of students with whom they maintained contact. There were several problems with this approach, however. Few schools keep track of students who have graduated from their programs and therefore did not have current addresses.
For those schools which could provide names of students who had met the selection criteria but for whom current addresses were not available, the researcher worked with the alumni office or other students in that cohort to attempt to locate participants. This process resulted in a minimal degree of success. Eight schools had policies which prohibited providing names and/or addresses of current and/or past students. Surveys were mailed to these schools, and a contact person was then responsible for either putting the surveys in students' mailboxes or addressing pre-paid envelopes and mailing the surveys.

The survey packet included a cover letter (Appendix E), the SPI (Leong & Zachar, 1991), the RTES (Royalty et al., 1986), the SERM (Phillips, 1991), the DDQ, a self-addressed stamped envelope, and a self-addressed stamped postcard. The order of the instruments in the packet was as follows: the SPI was first, the SERM and the RTES were counterbalanced, and the DDQ was at the end. Heppner, Kivlighan, and Wampold (1992) recommend counterbalancing materials to help control for threats to internal validity. The researcher used the following rationale for the order of materials. The SPI is the most general inventory because it measures both scientist and practitioner interests, so it was first in the packet. The SERM and the RTES both specifically address research and therefore might have influenced how a subject would have answered the SPI if they had come earlier. Thus, the SERM and the RTES followed the SPI and were counterbalanced to assure that the order did not create a bias. The DDQ was last because it asks specific questions about dissertation progress, the outcome variable. Each program was mailed equal numbers of the two sets of counterbalanced materials.

Data were collected from May, 1994 through January, 1995. To increase the return rate, a postcard was sent to selected programs for whom the researcher had addresses. These seven programs were selected because they were the programs to which surveys had been sent during the months of July through December, 1994. Sending postcards to subjects who had received the original survey more than six months earlier did not seem to be advantageous. Participants were asked to return the postcard separately if they wanted a summary of the results sent to them. A brief summary of results will be sent to the 161 people who requested that information.

Research Questions and Hypotheses

The main research question examined in this study was: Which variables related to research training best predict successful dissertation completion for graduate students in APA-approved Counseling Psychology programs? In addition, the following questions were explored: 1) What percentage of graduate students drop out of APA-approved Counseling Psychology programs before completing their dissertation? and 2) What percentage of graduate students complete the dissertation before going on internship?

Specific hypotheses were as follows:

1. Scientist interest, research self-efficacy, and perceptions of the research training environment will significantly predict dissertation progress of graduate students and recent graduates of APA-approved Counseling Psychology programs.

2. High research self-efficacy will be positively related to dissertation progress.

3. Positive perceptions of the research training environment will be positively related to dissertation progress.

4. Scientist interest and research self-efficacy will be positively related.
5. Perceptions of the research training environment will vary across training programs.

Data Analysis

Data were analyzed in several ways, using descriptive statistics, correlations, ANOVA, and discriminant analysis. First, descriptive statistics were computed to compile information about the participants who answered the survey. Descriptive statistics were used because they describe and assist in defining the population. Second, ANOVA was conducted to see if the key variables differed based on the order of the instruments in the survey packet. ANOVA was used because it compares the variability of the key variables (scientist interests, practitioner interests, perceptions of the research training environment, research self-efficacy, dissertation progress) within groups with the variability between groups (form A, form B). Third, correlations were tested to see which key variables were significantly related. Correlations measure the strength of the relations between variables, but are not predictive in nature. Finally, discriminant analysis was used to discriminate between those who had not started their dissertation, those who were in the process of doing their dissertation, and those who had completed their dissertation. Discriminant analysis is used for classifying subjects into groups using several variables simultaneously. It calculates which variables are most useful in determining group membership and combines the variables into a mathematical equation which predicts the most likely group outcome (Klecka, 1980). A significant amount of qualitative data was also collected from the DDQ and will be analyzed in a future study.

RESULTS

Data analyses were performed using SPSS/PC version 5.0 for descriptive statistics, correlations, and ANOVAs. Discriminant analyses were conducted using Systat version 5.2.1 (Wilkinson, Hill, & Vang, 1992). Analyses were completed using the 255 surveys of students who entered doctoral-level graduate programs in counseling psychology between 1987 and 1991. An alpha level of .05 was used as the minimal criterion for statistical significance.

Counterbalancing of Instruments

As mentioned in the methodology section, Heppner, Kivlighan, and Wampold (1992) suggest that counterbalancing materials is one way to control threats to internal validity. Therefore, analyses were performed to determine if the order of administration would result in any effects. One hundred thirty-six subjects returned form A, with materials presented in the following order: SPI, SERM, RTES, and DDQ, whereas 119 subjects returned form B, with the SERM and the RTES reversed. The ANOVA revealed no significant differences between those subjects returning form A and those returning form B on dissertation progress, practitioner interests, and scientist interests. However, there were significant differences between those returning form A and form B on research self-efficacy (F(1, 252) = 4.338, p < .038) and perceptions of the research training environment (F(1, 239) = 10.971, p < .001), the variables measured by the two counterbalanced instruments. The mean scores for research self-efficacy and perceptions of the research training environment were significantly higher for subjects receiving the research training environment instrument first (M = 217.03, SD = 39.04 for research self-efficacy; M = 143.06, SD = 11.22 for perceptions of the research training environment) than for subjects who received the research self-efficacy instrument first (M = 205.65, SD = 46.70 for research self-efficacy; M = 138.18, SD = 11.48 for perceptions of the research training environment). Therefore, successive statistical analyses considering SERM and/or RTES scores will indicate form (i.e., order of presentation). Because program was positively correlated with perceptions of the research training environment and research self-efficacy, form differences on these two variables may have reflected an uneven return rate. Further analysis revealed that no program returned significantly more of one form than the other. Each school received equal numbers of form A and form B.
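The order-effect check described above can be illustrated with a minimal sketch. The file name and column names below are hypothetical, and the sketch is offered only to show the form of a one-way ANOVA comparing the two presentation orders on each counterbalanced scale; the actual analyses in this study were run in SPSS/PC 5.0 and Systat 5.2.1.

    import pandas as pd
    from scipy import stats

    # Hypothetical file and column names; "form" codes the order of presentation (A or B).
    data = pd.read_csv("survey_responses.csv")

    for scale in ["serm_total", "rtes_total"]:
        form_a = data.loc[data["form"] == "A", scale].dropna()
        form_b = data.loc[data["form"] == "B", scale].dropna()
        # One-way ANOVA across the two presentation orders for this scale.
        f_stat, p_value = stats.f_oneway(form_a, form_b)
        print(f"{scale}: F = {f_stat:.3f}, p = {p_value:.3f}")

With two groups, this F test is equivalent to an independent-samples t test; the same check could be run either way.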
The mean scores for research self-efficacy and perceptions of the research training environment were significantly higher for subjects receiving the perceptions of the research training environment first (h = 217.03, SQ = 39.04 for research self- efficacy; M = 143.06, SQ = 11.22 for perceptions of the research training environment), compared to subjects who received research self-efficacy first (h = 205.65, S2 = 46.70 for research self-efficacy; M = 138.18, S2 = 11.48 for perceptions of the research training environment). Therefore, successive statistical analyses considering SERM and/or RTES scores will indicate form (i.e., order of presentation). Because program was positively correlated with perceptions of the research training environment and research self-efficacy, form differences on these two variables may have reflected an uneven return rate. Further analysis revealed that no programs returned significantly more of one form than the other. Each school received equal numbers of form A and form B. Relations Between Key Variables Means and standard deviations for Scientist Interest, Practitioner Interest, Research Self-Efficacy, and Perceptions of the Research Training Environment scales are presented in Table 3. The maximum possible score on the 29 Table 3. Qescriptive Statistics f0; SPIl SERMI and RTES. Instrument Subscale Mean SD Range N Scientist- Practitioner Scientist Inventory Interest 52.24 12.26 28-82 251 Scientist- Practitioner Practitioner Inventory Interest 67.45 7.29 38-83 253 SERM Total 210.91 43.61 77-293 253 RTES Total 140.39 11.60 100-166 240 30 scientist interest and the practitioner interest scales is 84. As Table 3 shows, subjects in this sample tended to have stronger interests in practitioner related tasks (h = 67.447, SQ = 7.289) than scientist related tasks (M = 52.243, SQ = 12.263). The mean total score on the research self-efficacy scale was 210.91 out of a total possible score of 297 (SQ = 43.613). Finally, the mean total score for perceptions of the research training environment was 140.39 out of a total possible score of 225 (SQ = 11.597). Researchers (Gelso et al, 1986; Royalty, 1990) who have previously used the RTES have not reported the overall mean for the RTES, therefore it is unknown if the mean for this sample differs from the norming sample. Some restriction of range was obtained on all scales; however, perceptions of the research training environment revealed the most restriction of range with 63%, followed by PRACTOT at 29%, RSE at 27%, and SCITOT at 14%. Correlations among Scientist Interests, Practitioner Interest, Research Self-Efficacy, and Perceptions of the Research Training Environment are shown in Table 4. As hypothesized, there was a significant positive relationship between Scientist Interest and Research Self-Efficacy (; = .480, p < .000). When correlations were computed for groups receiving either form A or B, correlations changed (form A, a = .456, p < .001; form B, r = .498, p < .001), with those 31 Table 4. Coppaiatioh hatrix for Scientist Interest, Practitioner Interest, SERMl RTESl and Dissertation Progress. Practit- Disserta- Scientist ioner tion Correlation Interest Interest SERM RTES Progress Scientist Interest 1.0000 Practitioner Interest -.091 1.000 SERM .480** -.005 1.000 RTES .217** -.069 .292** 1.000 Dissertation Progress .110 -.158* .249** .1045 1.000 hate. u = 25“; *p < .05. **p < .001. receiving form A the correlation decreased and those receiving form B, the correlation increased slightly. 
A significant positive relationship between scientist interests and perceptions of the research training environment was found (p = .217, p < .001). When 32 correlations were calculated for groups of participants receiving either form A or B, the value of the correlation was slightly attenuated for those receiving form A and a slight larger for those receiving form B (form A, p = .175, p < .048; form 8, p = .231, p < .017). Practitioner interests were not significantly related to research self-efficacy or to perceptions of the research training environment. When correlations were performed separately by form, practitioner interests and research self-efficacy'remained non-significant. Practitioner interests and perceptions of the research training environment were not significantly correlated. However, when computed for responses to each form, the correlations between practitioner interests and perceptions of the research training environment were noticeably different (form A, p = -.0089, p > .920; form B, p = -.1757, p > .069); however neither correlation was significantly different from zero. There was a significant positive relationship between research self-efficacy and perceptions of the research training environment (a = .292, p < .000). When correlations were performed separately, in groups receiving either form A or B, correlations did not noticeably change (form A, p = .299, p < .001; form B, p = .232, p < .016). 33 t' s t e ese ch a' i v'ronmen As noted above, ANOVA revealed a significant difference across all programs in terms of perceptions of the research training environment (E(21,218) = 5.388, p < .000). Using a Scheffe test with a significance level of .05, one program was significantly lower in perceptions of the research training environment than two of the twenty-four programs. These three programs were further examined further. Program 5 had the lowest mean on the RTES. Eighty percent of the participants from program 5 were in the process of completing their dissertations. The majority of the participants from Program 5 had completed or were working on quantitative dissertations (64%), followed by a combination of qualitative and quantitative (20%), and qualitative (16%). Program 5 also had the lowest program mean on research self efficacy (h = 182.8). Program 10 had the highest RTES mean, and 41% of the participants had completed their dissertations. The majority of participants from Program 10 had completed or were working on quantitative dissertations (88%), followed by qualitative (6%), and a combination of quantitative and qualitative (6%). Program 10's mean scores on research self-efficacy (M = 217.18), scientist interests (h = 52.63), and practitioner interests (h = 73.19) were all near the overall sample mean for each scale. Program 18 had the next highest mean on RTES (h = 149.67), and 67% of those participants were working on their 34 dissertations. Program 18 had an even spread of type of dissertation: quantitative (33%), qualitative (33%), and combination (33%). Program 18's mean score for research self-efficacy was high (h = 230.67), with the overall sample mean at 210.91. To further explore differences among programs, each program's perception of the research training environment was compared to the overall sample mean. 
Given the fact that the sample in this study represents more than one-third of the total population of APA-approved doctoral programs in counseling psychology and that programs were randomly selected, and assuming random samples of respondents, the mean of the RTES represents an accurate estimate of the population mean. Therefore, any school that is significantly different from the sample mean would be perceived by its students to have either a more positive or less favorable research training environment. Z-tests comparing each school to the sample mean revealed no significant differences. hisseptatiph Erpgzess Students who were admitted to their programs between 1989 and 1991 should have been nearing completion of their dissertation at the time of data collection in 1994. Sixty- four percent reported they were currently working on their dissertation and 36% had completed their dissertation. 35 Dissertation progress was defined by three categories: (a) not having started the dissertation, (b) working on the dissertation (which varies from talking to an advisor about a topic to making final revisions), and (c) having completed the dissertation. However, everyone in the sample either fit into the second or third category, therefore, only two levels of dissertation progress were used for analysis. Given the wording of the survey, it was somewhat difficult to determine in what order subjects had completed basic program requirements such as comprehensive exams, internship, and dissertation, because they were only asked to give the year of completion of each item. One hundred twenty-six subjects reported having completed both their dissertation and internship at the time of the survey. Of those subjects, 44 (28%) reported completing their internship and dissertation in the same calendar year. Only nineteen of those subjects (8%) completed their dissertation before internship; sixty-three subjects (40%) completed their internship before their dissertation. I was not able to determine how many students dropped out of their program before completing their dissertation for two reasons. First, survey questions did not indicate if the respondent had dropped out of the program, or was planning to drop out. Secondly, programs either no longer tracked or maintained records of those students who had dropped out of the program or were not at liberty to share 36 that information with the researcher. Although one program provided the name of a student who was no longer matriculated and gave the reason for her departure, sufficient sample size was not obtained to draw any substantive conclusions about drop cut rates or drop out characteristics. As predicted in the second hypothesis, research self- efficacy and dissertation progress were significantly positively related (a = .249, p < .001) across all subjects. When correlations were performed separately, in groups receiving form A and form B, there was an increase for those with form A (p = .279, p < .001) and an attenuation in the value of the correlation for those with form B (a = .214, p < .021). Analyses did not support the third hypothesis, that is, a significant positive relation between perceptions of the research training environment and dissertation progress (; = .105, p < .106). When correlations were performed separately, in groups receiving form A and B, correlations did not noticeably change and continued to be non- significant (form A, p = .119, p < .177; form B, p = .088, p < .364). 
Analysis using all subjects, regardless of form, was then performed on research training environment and dissertation progress by year of admission. No significant differences were found. There was no significant relation 37 between year of dissertation completion or expected year of dissertation completion and perceptions of the research training environment. Scientist interests and practitioner interests were not significantly related to dissertation progress (r = .110, p < .083; g = -.158, p < .012, respectively). Table 5 lists the percentages of dissertations in progress and dissertations completed by gender, ethnicity, partner status, and requirement of a Master’s thesis. Table 6 lists percentages of dissertation progress by program. 38 Table 5. Earcahtagea pf pisseppatipp Proggess by Qemogpaphic Variabies. Dissertation Dissertation in Progress Completed Gender Male 56% 44% Female 68% 32% Partner Partnered 68% 32% Status Unpartnered 55% 45% Ethnicity African 78% 22% American Asian American 67% 33% Caucasian 62% 38% Chicano/Latino 80% 20% Native 75% 25% American Other 63% 37% Master's Required 61% 39% Thesis Not Required 66% 34% 39 Table 6. hisseptatiph Progress by Eppgzah. Program # Dissertation in Dissertation Progress Completed 1 67% 33% 2 100% 0% 3 50% ' 50% 4 89% 11% 5 80% 20% 6 0% 100% 7 67% 33% 8 100% 0% 9 67% 33% 10 59% 41% 11 60% 40% 12 91% 9% 13 77% 23% 14 100% 0% 15 25% 75% 16 23% 77% 40 Table 6 Continued 17 58% 42% 18 67% 33% 19 86% 14% 20 15% 85% 21 11% 89% 23 88% 12% Post-hoc exploratory analyses using correlation coefficients revealed dissertation progress as defined by two categories (working on it, or completed) was, as expected: negatively related to year admitted to doctoral program (a = -.351, p < .000); negatively related to year of graduation (; = -.652, p < .000); positively related to comprehensive exam status (a = .278, p < .000); positively related to status of course work (a = .280, p < .000); and positively related to status of internship (; = .534, p < .000). In addition, dissertation progress was significantly related to partner status (;=-.127, p=.044). Although ANOVA revealed no significant differences in dissertation progress between those subjects whose programs required a master's degree for admission and those programs that didn’t, the analysis approached significance (£(1,254) 41 = 3.70, p < .055). There were no significant differences in dissertation progress between those participants who were completing their Ph.D. in the same program as their M. A. and those who were in different program. There were no significant differences in dissertation progress by gender, ethnicity, undergraduate gpa, or graduate gpa. Two categories were created for partner status: partnered and unpartnered. Partnered status (72%) included those who were married, living with a partner, or in a committed relationship but not living together. Unpartnered status (28%) included those who reported being single, separated, divorced, or widowed. ANOVA revealed no significant differences on dissertation progress between those who were partnered or unpartnered (E(1,253) = 3.470, p < .064). The majority of the subjects reported doing a quantitative dissertation (67%), followed by a combination of quantitative and qualitative (15%), qualitative (14%), theoretical (2%) and other (2%). 
There was a significant main effect for type of dissertation on dissertation progress (£(2,209) = 3.26, p < .040), however, simple comparisons using Scheffe post hoc procedure revealed that no two groups were significantly different from each other on dissertation progress. Table 7 indicates dissertation progress by dissertation type. 42 Table 7. erce es 's e tat'o re hissephation Type. Dissertation in Dissertation Progress Completed Quantitative 58% 42% Qualitative 78% 22% Theoretical 40% 60% Combination 82% 18% Other 75% 25% A composite variable representing total research experience was generated by summing across the 18 individual research items (Table 8). Research experience was then correlated with dissertation progress, resulting in a significant correlation between these two variables (a = .28, p < .000). Discriminant function analyses were performed on the following dependent variables in order to predict dissertation progress: scientist interests, practitioner interests, research self-efficacy, and perceptions of the research training environment. Of the 255 subjects 21 subjects were eliminated because of incomplete scores on the 43 Table 8. Repppted Research Experiences % who Research Experience said yes 1. I have been a member of a research team 70% 2. I have worked with a faculty member on a research project 69% 3. I have a research mentor 46% 4. I have made a presentation at a professional conference 49% 5. I have submitted a research article for publication in a refereed journal 44% 6. I have co-authored a research article in a refereed journal 42% 7. I have been the first author of a research publication in a refereed journal 20% 8. I have co-authored an article in a non- refereed journal 15% 44 Table 8 Continued 9. I have been the first author of an article in a non-refereed journal 11% 10. I have been an invited author in a journal 5% 11. I have been the first author of a chapter in a book 5% 12. I have been the co-author of a book chapter 9% 13. I have collected data 91% 14. I have entered data into a computer 82% 15. I have constructed a code book 35% 16. I have done statistical analysis of data (other than in class) 72% 17. I have analyzed qualitative data 43% 18. I have presented a poster session at a conference 38% variables of interest, leaving a total of 234 subjects for the discriminant analysis. The aforementioned sample size is considered acceptable in that it exceeds a 20 to 1 45 subject per variable ratio (Stevens, 1992). No significant outliers were found. Only one discriminant function can be derived when a grouping variable has two levels (Tobachnick & Fidell, 1989). The first discriminant function analysis (ignoring form) allowed for the derivation of a significant discriminant function (gua,a;234) = 28.008, p < .000). The squared canonical correlation was .11, indicating that 11% of the variance in the discriminant function was explained by the groups. A strong relationship between group variation and the discriminant function suggests that the discriminant function is a valid predictor of group membership. Table 9 lists the standardized discriminant coefficients for deriving the discriminant function scores from standardized predictor scores. Standardized discriminant function coefficients indicate the reiahive importance of a variable in deriving the discriminant function, (which in turn, predicts group membership.) The predictors that separated dissertation progress were scientist interest, practitioner interest, and research self-efficacy. 
Table 9. Standardized Discriminant Coefficients: Entire Sample.

Dependent Variable                                      Coefficient
Scientist Interest                                         0.310
Practitioner Interest                                      0.587
Research Self-Efficacy                                    -0.982
Perceptions of the Research Training Environment           0.022

As can be seen in Table 9, research self-efficacy clearly stands out as the most influential predictor; that is, it contributed most to the development of the discriminant function. The negative sign associated with research self-efficacy and the positive sign associated with practitioner interests can be interpreted as follows: higher research self-efficacy and lower practitioner interest were the best relative predictors of dissertation completion. This point is emphasized by the significant univariate F test for research self-efficacy across dissertation progress (F(1,232) = 18.88, p < .000; M = 200.30 for dissertation progress level one and M = 225.26 for dissertation progress level two). The univariate F test for practitioner interest was significant as well (F(1,232) = 7.787, p < .006; M = 68.41 for dissertation progress level one and M = 65.64 for dissertation progress level two), but did not remain so in the cross-validation analysis. Table 10 presents the actual group membership and that predicted by the discriminant function analysis. Sixty percent correct for each group is better than chance prediction; thus the discriminant function provides a useful model.

Table 10. Actual and Predicted Group Membership for Entire Sample.

                  Group 1      Group 2                 Percentage
                  Predicted    Predicted    Total      Correct
Group 1 Actual       88           58         146         60%
Group 2 Actual       35           53          88         60%
Total               123          111         234

Table 11 lists the standardized discriminant coefficients obtained from the cross-validation analysis (Tabachnick & Fidell, 1989). In order to complete the cross-validation analysis, the computer randomly selected half of the scores from the sample and computed the cross-validation on the remaining half of the sample.

Table 11. Standardized Discriminant Coefficients: Cross-Validation for Entire Sample.

Dependent Variable                                      Coefficient
Scientist Interest                                         0.389
Practitioner Interest                                      0.412
Research Self-Efficacy                                    -1.137
Perceptions of the Research Training Environment           0.103

The latter clearly shows that research self-efficacy is the most reliable determiner of dissertation progress. As was mentioned above, the univariate F test for practitioner interest was no longer significant, whereas the univariate F test for research self-efficacy remained significant (F(1,115) = 6.558, p < .012). The frequencies of predicted and actual group assignment agreement, as well as correct percentages, are shown in Table 12.

Table 12. Actual and Predicted Group Membership: Cross-Validation for Entire Sample.

                  Group 1      Group 2                 Percentage
                  Predicted    Predicted    Total      Correct
Group 1 Actual       32           26          58         55%
Group 2 Actual       24           35          59         59%
Total                56           61         117

Figures 1 and 2, illustrating the discriminant function analysis results, are located in Appendix F. Figure 1 (PREDICT by FACTOR) plots the predicted dissertation progress membership against the obtained discriminant function scores. The graph may be interpreted as follows: the discriminant function scores used to determine predicted group membership cluster within the -1 to 1 range. Hence, the distance between where dissertation progress groups 1 and 2 begin is minute, despite there being no overlap. This indicates that although the obtained discriminant function significantly determined group membership, the determination is only somewhat visually distinct.
(What would be visually distinct, for example, would be if group 1 predicted scores ended at -1 and group 2 predicted scores began at 1 or 2, and/or if clustering occurred toward the outer tails of the plot.) Figure 2 (GROUP by FACTOR) plots the actual dissertation progress against the obtained discriminant function scores. The main idea here is that there is extreme overlap between the two dissertation progress levels, with indistinct clustering, which illustrates the misclassification of dissertation progress seen in Table 10. If classification were perfect, the graph would look more similar to the PREDICT by FACTOR graph. As was mentioned previously, discriminant function analyses were also performed for each form presentation order. The first discriminant function analysis, for form A, allowed for the derivation of a significant discriminant function (χ²(4, N = 129) = 15.776, p < .000). The squared canonical correlation was .12, indicating that 12% of the variance in the discriminant function was explained by the groups. Table 13 lists the standardized discriminant coefficients for deriving the discriminant function scores from standardized predictor scores. The predictors that separated dissertation progress were scientist interest, practitioner interest, and research self-efficacy. As before, research self-efficacy clearly stands out as the most influential predictor. This point is again emphasized by the significant univariate F test for dissertation progress within research self-efficacy (F(1,127) = 12.661, p < .001; M = 193.16 for dissertation progress level one and M = 221.85 for dissertation progress level two). Unlike in the initial analysis, the univariate F test for practitioner interest was not significant, nor was it significant in the cross-validation analysis. This may be a result of a smaller n and therefore less statistical power.

Table 13. Standardized Discriminant Coefficients: Form A.

Dependent Variable                                      Coefficient
Scientist Interest                                         0.368
Practitioner Interest                                      0.421
Research Self-Efficacy                                    -1.044
Perceptions of the Research Training Environment          -0.022

Table 14 lists the standardized discriminant coefficients obtained from the cross-validation analysis (Tabachnick & Fidell, 1989). The latter clearly shows that research self-efficacy is still the most reliable determiner of dissertation progress, and the univariate F test was significant (F(1,63) = 8.805, p < .004).

Table 14. Standardized Discriminant Coefficients: Cross-Validation for Form A.

Dependent Variable                                      Coefficient
Scientist Interest                                         0.515
Practitioner Interest                                     -0.044
Research Self-Efficacy                                    -1.186
Perceptions of the Research Training Environment           0.078

The frequencies of predicted and actual group assignment agreement are shown in Table 15. This table was developed by applying the discriminant coefficients obtained from half of the form A sample to the remaining half of the form A sample (Tabachnick & Fidell, 1989). Figures 3 and 4, illustrating the discriminant function analysis results for form A, are located in Appendix G and are interpreted as before. The second discriminant function analysis, for form B, allowed for the derivation of a significant discriminant function (χ²(4, N = 105) = 12.624, p < .013). The squared canonical correlation was .12, indicating that 12% of the variance in the discriminant function was explained by the groups.
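The split-half cross-validations reported in this chapter follow a simple recipe: randomly divide the sample in half, derive the discriminant function on one half, and use it to classify the held-out half. A minimal sketch of that procedure, again with scikit-learn standing in for the original software and with hypothetical column names, produces an actual-versus-predicted table of the kind shown in Tables 12, 15, and 18:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    predictors = ["scientist_interest", "practitioner_interest",
                  "research_self_efficacy", "rte_perceptions"]     # hypothetical names
    df = pd.read_csv("survey.csv").dropna(subset=predictors + ["progress"])

    # Randomly split the sample in half, fit on one half, classify the other
    half_a, half_b = train_test_split(df, test_size=0.5, random_state=0)
    lda = LinearDiscriminantAnalysis().fit(half_a[predictors], half_a["progress"])
    predicted = lda.predict(half_b[predictors])

    print(pd.crosstab(half_b["progress"], predicted,
                      rownames=["actual"], colnames=["predicted"]))
    accuracy = lda.score(half_b[predictors], half_b["progress"])
    print(f"percent correctly classified: {100 * accuracy:.0f}%")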
Table 15. Actual and Predicted Group Membership: Cross-Validation for Form A.

                  Group 1      Group 2                 Percent Correctly
                  Predicted    Predicted    Total      Predicted
Group 1 Actual       26           21          47         55%
Group 2 Actual        4           13          17         76%
Total                30           34          64

Table 16 lists the standardized discriminant coefficients for deriving the discriminant function scores from standardized predictor scores. The predictors that separated dissertation progress were scientist interest, practitioner interest, and research self-efficacy. As before, research self-efficacy clearly stands out as the most influential predictor. This point is again emphasized by the significant univariate F test for dissertation progress within research self-efficacy (F(1,103) = 6.434, p < .013; M = 209.20 for dissertation progress level one and M = 229.33 for dissertation progress level two). As in the initial analysis, the univariate F test for practitioner interest was significant (F(1,103) = 6.045, p < .016).

Table 16. Standardized Discriminant Coefficients: Form B.

Dependent Variable                                      Coefficient
Scientist Interest                                         0.236
Practitioner Interest                                      0.754
Research Self-Efficacy                                    -0.859
Perceptions of the Research Training Environment           0.096

Table 17 lists the standardized discriminant coefficients obtained from the cross-validation analysis (Tabachnick & Fidell, 1989). This table was developed by applying the discriminant coefficients obtained from half of the form B sample to the remaining half of the form B sample. Table 17 clearly shows that research self-efficacy is still the most reliable determiner of dissertation progress. The univariate F test for research self-efficacy was not significant (F(1,40) = 0.148, p > .702); in cross-validation, however, replicating significance is not as important as maintaining correct prediction percentages.

Table 17. Standardized Discriminant Coefficients: Cross-Validation for Form B.

Dependent Variable                                      Coefficient
Scientist Interest                                        -0.036
Practitioner Interest                                      0.976
Research Self-Efficacy                                    -0.473
Perceptions of the Research Training Environment           0.234

The frequencies of predicted and actual group assignment agreement are shown in Table 18. This cross-validation reflects an increase in the number of correctly predicted participants who are in the process of working on their dissertation. Figures 5 and 6, illustrating the discriminant function analysis results for form B, are located in Appendix H and are interpreted as before. The results are similar to those for the entire sample as well as for those receiving form A.

Table 18. Actual and Predicted Group Membership: Cross-Validation for Form B.

                  Group 1      Group 2                 Percent Correctly
                  Predicted    Predicted    Total      Predicted
Group 1 Actual       27           12          39         69%
Group 2 Actual       11           13          24         54%
Total                38           25          63

Holland Codes

Participants were given descriptions of the six Holland Codes (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional; Holland, 1986) and asked to rank them in order of how well each description characterized them. As is typical of counseling psychologists (Holland, 1986), the majority picked Social (62%) as best describing them, followed by Investigative (20%) and Artistic (10%), for the first letter of the code (Table 19). For the term second best describing them, 35% chose Artistic, followed by Investigative (26%) and Social (20%). For the term third best describing them, Investigative ranked highest (29%), followed by Artistic (24%) and Enterprising (18%).
Mallinckrodt et al. (1990) found a similar distribution of Holland Codes among Counseling Psychology graduate students. Results indicated no significant difference on scientist or practitioner interests for subjects whose first-letter codes were either Social or Investigative, which is contrary to Zachar and Leong's (1992) research.

Table 19. Self-Reporting of Holland Codes.

Holland Code       First Code    Second Code    Third Code
Realistic               2%            4%             8%
Investigative          20%           26%            29%
Artistic               10%           35%            24%
Social                 62%           20%            12%
Enterprising            5%           12%            18%
Conventional            1%            3%             9%

Table 20 lists Holland Codes by dissertation progress. As would be expected, those who self-identified as Investigative had the highest percentage of completed dissertations (45%). Those participants who self-identified as Enterprising had the lowest rate of completed dissertations (15%). Subsequent analysis using Chi-Square revealed no significant differences in dissertation progress or completion by Holland Code type.

Table 20. Percentages of Dissertation Progress by Holland Codes.

                   Dissertation in Progress    Dissertation Completed
Realistic                   75%                        25%
Investigative               55%                        45%
Artistic                    60%                        40%
Social                      66%                        34%
Enterprising                85%                        15%
Conventional                50%                        50%

Research Experience

In addition to asking questions about dissertation progress, information about research experience was requested. Several items were adapted from Phillips' (1991) research productivity scale. Items included experience on a research team, working with faculty, submitting articles for publication and presentation, and tasks associated with research such as data collection. The descriptive results are shown in Table 8 and in a histogram in Figure 7. A large number of students reported having collected data (91%), having entered data into a computer (82%), and having done statistical analysis (72%). However, it was not clear from the questionnaire whether participants had done these tasks as part of a class, as part of their thesis or dissertation, or as part of other research experience, except for "have done statistical analysis of data", which specified "other than class". Excluding the research-task items, the areas where participants reported the most research experience were: "have been a member of a research team" (70%), "have worked with a faculty member on a research project" (69%), and "have made a presentation at a professional conference" (49%). "Having been an invited author in a journal" (5%), "having been the first author of a chapter in a book" (5%), and "having been the co-author of a book chapter" (9%) were the least reported types of research experience. The items that ranked the lowest are not surprising, since being an invited author or a first author often does not happen until one's career and reputation are well established. To further examine research experience, a composite research experience variable was created as the sum of the individual research experience items. The composite research experience score ranged from 0 to 17, with a mean of 7.459 and a standard deviation of 3.757. Research experience was significantly positively correlated with dissertation progress (r = 0.228, p < .000). However, as with perceptions of the research training environment, Program 5's research experience mean was the lowest group mean (M = 4.80) and Program 18's (M = 11.33) was the highest. Further analysis using Chi-Square revealed no significant differences in research experience by program.
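As a concrete illustration of the composite described above, the research-experience score and its correlation with the dichotomous progress variable could be computed as follows; the item and column names are hypothetical, and the point-biserial correlation is one reasonable choice for a dichotomous-continuous pair, not necessarily the routine used in the original analysis.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey.csv")                                   # hypothetical file
    experience_items = [f"research_exp_{i}" for i in range(1, 19)]   # 18 yes/no items, coded 0/1

    # Composite score: number of research experiences endorsed
    df["research_experience"] = df[experience_items].sum(axis=1)
    print(df["research_experience"].agg(["mean", "std", "min", "max"]))

    # Correlation with dichotomous dissertation progress (1 = in progress, 2 = completed)
    completed = (df["progress"] == 2).astype(int)
    r, p = stats.pointbiserialr(completed, df["research_experience"])
    print(f"r = {r:.3f}, p = {p:.4f}")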
Employment Preferences

Subjects were asked to rank their top three places of employment after graduation (Table 21). The highest ranking for first choice of employment was Counseling Center (21%), followed by Private Practice (20%) and tenure-track academic position (16%). The least preferred first-choice place of employment was a non-tenure-track academic position (3%). For the second most preferred place of employment, private practice ranked first (18%), followed by Counseling Center (16%), Community Mental Health (12%), and Hospital Out-patient (12%). For the third most preferred place of employment, private practice (19%) again ranked first, followed by non-tenure-track academic position (14%) and Counseling Center (13%). The overall least preferred place of employment was post-doctoral training.

Table 21. Preferred Place of Employment After Graduation.

Place of Employment                       First Choice    Second Choice    Third Choice
Academic Position (tenure track)               16%              9%              7%
Academic Position (non-tenure track)            3%              9%             14%
Community Mental Health                         7%             11%             11%
Consulting                                      5%              9%             10%
Counseling Center                              21%             16%             12%
Hospital In-patient                             7%              7%              5%
Hospital Out-patient                            8%             11%              8%
Post-doc Training                               7%              5%              7%
Private Practice                               20%             18%             20%
Other                                           6%              2%              2%
Missing                                         6%              3%              4%

DISCUSSION

This study's analysis of doctoral students' dissertation progress has provided pertinent information for training programs and has implications for educating counseling psychologists. The purpose of this study was to expand the empirical base of information regarding the training of counseling psychologists in the area of research. This study is unique in its contribution to the literature because the variables that potentially predict dissertation progress had not been previously examined for this population. The findings of this study provide counseling psychology students and their educators with information that may be helpful in planning and developing curriculum and educational experiences. The first section of this discussion recounts the main findings from the study. The second section outlines limitations of this study. The third section addresses some of the implications and potential applications of the findings. The fourth section suggests future research. And finally, the fifth section outlines the importance of these findings for training counseling psychologists.

Overview of Findings

In revisiting the stated hypotheses, the following conclusions can be made. The hypotheses about relationships were supported with one exception: perceptions of the research training environment were not significantly related to dissertation progress, regardless of the order of the instruments in the research packet. Research self-efficacy was significantly positively related to dissertation progress. This relationship was significant when the data were analyzed for all participants, as well as when the data were analyzed separately for students who received instruments ordered differently. Scientist interest and research self-efficacy were significantly positively related when the data were analyzed for all participants, as well as when the data were analyzed separately by order group. Results replicated Phillips' (1991) research in that research self-efficacy and perceptions of the research training environment were positively related, regardless of form. Research experience was significantly positively related to both dissertation progress and research self-efficacy. As predicted, perceptions of the research training environment varied across programs.
One program was significantly lower in perceptions of the research training environment when compared to two others. However, comparing each school to the overall sample mean revealed no significant differences. Interestingly, perceptions of the research training environment appeared to have no significant relationship to dissertation progress. This finding is related to Phillips' (1991) research, in which perceptions of the research training environment were not significantly related to research productivity. Based on the correlations, it is not surprising that research self-efficacy contributed significantly to further analyses. This finding is consistent with previous literature reporting that self-efficacy is a significant predictor of academic achievement and persistence (Phillips, 1991; Lent, Brown, & Larkin, 1984, 1986, 1987; Multon, Brown, & Lent, 1991). Discriminant function analysis was used to examine the relative influence of each of the variables (scientist interests, practitioner interests, research self-efficacy, perceptions of the research training environment) on dissertation progress. Research self-efficacy was the most influential predictor of dissertation progress; the other variables did not make as great a contribution. These findings suggest that high research self-efficacy is a key factor in dissertation progress. Betz's (1986) and Gelso et al.'s (1988) recommendation of applying self-efficacy theory to the area of research training was supported by the current study. As Bandura (1986) suggests, increased self-efficacy is derived from four sources of information: personal performance, or accomplishments; vicarious experience; social or verbal persuasion; and emotional arousal. Therefore, it could be hypothesized that students with more personal performance and accomplishments related to research, such as previous research experience or membership on a research team, would have higher research self-efficacy and therefore be more likely to complete their dissertation in a timely manner. In addition, students who learned vicariously from working with a faculty member on a research project might also increase their research self-efficacy. Results indicated that research experience and research self-efficacy were positively related. Subjects were asked to self-identify the first, second, and third Holland code that best described them. The majority of subjects picked Social as the first code best describing them, followed by Artistic as the second and Investigative as the third. Previous research has suggested that low research productivity after graduate school in counseling psychology is a function of the predominance of people with Social interests rather than Investigative interests in the field (Holland, 1986; Osipow, 1979). There was no significant difference on scientist interests and practitioner interests for subjects whose first-letter codes were either Social or Investigative. Furthermore, as suggested by Leong and Zachar (1991), scientist interests and practitioner interests were not significantly correlated.

Limitations of This Study

This study has several limitations that suggest a cautious interpretation of the results. In descriptive field studies, external validity is generally high and internal validity low. The design and instrumentation pose several threats to internal validity. A well-defined population was used in order to increase internal validity. However, the sampling method had several limitations.
In general, it was difficult to locate subjects, particularly those who had entered the program in 1987. Several schools did not have address lists for the students currently enrolled in their programs, which made it extremely difficult to get a representative sample from those schools. Many schools did not keep track of students who were on internship or who had already graduated. It is not clear if students in counseling psychology who responded to the survey are in any way different from students who did not choose to reply. It is possible that the latter students could be very high or very low on the variables of interest and thereby influence the results. Instrumentation posed further threats to internal validity. All of the instruments are relatively new and have not been widely used. Self-report measures raise the question of participant honesty. In addition, there was no measure to see if participants' perceived attitudes, interests, and self-efficacy are congruent with their actual behaviors and/or abilities. It is important to remember that the SPI measures interests, not abilities or competencies in those areas. On the same note, the RTES has acceptable levels of reliability for the full-scale score, but it again measures the students' perceptions of the environment, not the environment itself. The RTES also does not account for faculty perceptions of the training environment. Two of the instruments, the SERM and the DDQ, have limited information about their validity and reliability. Both instruments were constructed for dissertation research and have not been widely used or studied, and therefore the results must be interpreted with caution. Because no instruments measuring dissertation progress were found, it was necessary to develop and pilot test the DDQ. There were several problems with the DDQ. The wording was geared towards those still in the program, and it is possible that people who had already graduated did not return the survey because they did not think it pertained to them. In fact, several surveys were returned with a note to that effect. The researcher was not able to obtain any addresses of people who had dropped out or had left their programs. The wording of many items on the DDQ made it difficult to analyze the data. For example, describing levels of dissertation progress and then having subjects check the appropriate level may have improved response accuracy. It was impossible to obtain an exact count of students who finished their dissertations before going on internship because of the wording of the questions. Participants were asked in what year they completed each task, so if they completed their internship and dissertation in the same year, it was impossible to determine which event came first. This was not noticed in the pilot study because none of the pilot participants had completed their internship and dissertation in the same year. The question regarding number of children was eliminated from the analysis because participants did not follow instructions and responded incorrectly. Again, due to the wording of the Holland Codes item, it was impossible to do correlation analyses with other variables. Finally, a shortened version of the DDQ might have increased the response rate. The overall return rate of 30%, although within acceptable statistical standards, raises some concern considering the population surveyed.
Counseling Psychology graduate students are taught to understand and value research within their profession, and yet they did not demonstrate support by returning surveys. Those who answered the survey may be significantly different from those who did not return the survey. The return rate might have been improved by sending follow-up postcards to the entire sample and by shortening the survey materials. As mentioned in the Method section, the response rate represents an underestimate because more surveys were sent out than were actually needed. Given the correlational design of this study, the researcher cannot derive conclusions regarding causal relationships among the variables. Thus, it is not clear if more dissertation progress increases research self-efficacy or if higher self-efficacy promotes dissertation progress. Overall, generalizations are limited to APA-approved counseling psychology doctoral students.

Implications and Applications of Findings

In general, programs in this study did not have a systematic method for keeping track of students in their program, or of students who had completed their program. It seems it would be in the best interest of programs to monitor their students, particularly those who have not finished their dissertations, so as to ensure dissertation and program completion. It may also prove beneficial to keep track of students who have completed the program; they may serve as a valuable resource to those still in the program. Research self-efficacy was clearly the best predictor of dissertation progress. It would be important for training programs to provide early positive experiences with research through both "hands-on" work and vicarious experiences. It may even be beneficial for programs to require research experience during the first or second year of the program and/or to require membership on a research team. In addition, coordinating statistical coursework with "hands-on" experience may also be beneficial in raising research self-efficacy. Having faculty who are interested in research and actively pursuing research may provide positive role models and positive vicarious experiences for doctoral students in counseling psychology programs. Programs could implement a role model or mentoring program in the area of research for incoming students. Although this study did not directly examine research support groups, dissertation support groups may be beneficial as students grapple with conducting research projects. If programs are not willing to provide training experiences in research skills, this should be reflected in their admission standards by accepting only those students who already have research experience and high research self-efficacy. Few of the students in this study completed their dissertation before going on internship. Perhaps requiring dissertation completion before internship would assist students in getting their dissertations done and would prevent the "all but dissertation" status that appears to be somewhat common. The latter status may also exacerbate diminished research self-efficacy.

Future Research

Further investigation of Counseling Psychology graduate students' experiences, perceptions, interests, and self-efficacy as they relate to dissertation completion would extend the current findings in important ways. With regard to replication of this study, consideration of other factors such as school size, reputation, faculty research productivity, ratio of faculty to students, pre-dissertation research, etc.,
may provide broader information about the environmental influences on dissertation progress. In addition, conducting an event analysis would help determine how many people complete the dissertation before internship and what personal characteristics they possess. Future studies might also survey faculty at APA-approved Counseling Psychology programs, thereby obtaining information about their perceptions of the research training environment, their research productivity, and their perceptions of their students' dissertation progress -- which could then be compared to students' responses. This type of study might provide insight into discrepancies of perceptions between faculty and students and give guidance as to how to better train students in the area of research. Analyzing qualitative data similar to that collected in this study, as well as gathering further qualitative data about what helps and what hinders students in completing their dissertation, may provide useful insights. Contributing to the latter would be an examination of students who completed their dissertation in a timely manner and are now in the work force; their rates of productivity and why they have chosen particular employment settings may assist in explaining why Counseling Psychologists have such a low modal rate of publications (Gelso, 1979; Watkins et al., 1986). Conducting a longitudinal study using event analysis and collecting information about interests, perceptions, ability, and dissertation outcomes could prove to be very fruitful. Ability to do research is a construct that has not been previously examined in relation to dissertation progress. If students were contacted as soon as they entered their program, the researcher could track them throughout their program and into their early professional career. Information could also be garnered about what causes students to leave programs. Examining perceptions of the research training environment across programs, and then using that information to explore which factors help create a positive perception of the research training environment, would be useful for educators. If one could pinpoint what types of educational experiences might help increase research self-efficacy, educators could incorporate this information into curriculum planning for graduate students. In addition, future research may explore the causal nature of research self-efficacy and dissertation progress. In summary, the present study helped expand the empirical data base about research training and dissertation progress in APA-approved doctoral counseling psychology programs. As counseling psychology continues to grow and develop as a profession, conducting good research is vital to the profession's livelihood. Further exploration of graduate students' research experience and its impact on professional development is needed.

APPENDIX A

The following questions ask about interest in activities often performed by psychologists. Please put your answer in the blank next to the question. The response categories are as follows:

      1              2              3              4
  Very low          Low           Medium          High
  interest        interest       interest       interest

 1. Writing an article commenting on research findings.
 2. Conducting a psychotherapy session with an individual client.
 3. Analyzing data from an experiment you have conducted.
 4. Conducting a diagnostic interview with a client.
 5. Presenting research findings at a conference.
 6. Planning a behavior modification program for a client.
 7. Formulating a theory of a psychological process.
 8.
Designing a new treatment method for a mental health agency. 9. Designing an experiment to study a psychological process. 10. Administering a psychological test to a client. 11. Writing a scientific book for psychologists. 12. Conducting couples and family therapy. 13. Supervising students' research projects. 14 . Consulting with school personnel about a new prevention program. 15 . Collecting data on a research project you designed. 74 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 136. 75 (nganizing a treatment program in a mental hospital. Reviewing journal articles . Presenting a report during a case conference. Applying for research grants. Supervising practicum students in clinical and counseling psychology. Writing research papers for publication. Reading about new approaches to psychotherapy. Reviewing the literature on an issue in psychology. Giving advice about psychological problems on a radio talk show. Working for a funded research institute. Interpreting a test battery for a client. Serving as an editor for a scientific journal. Helping a client get in touch with feelings. Learning new strategies for dealing with psychological problems. Writing a statistical program. Reading a book on innovative research designs. Going through therapy to make yourself a better person. Learning about a new statistical procedure. Attending a conference on psychotherapeutic ‘techniques. Brainstorming about possible research with colleagues. Consulting with other psychologists about a particular client’ s concern. 76 37. Helping a colleague understand confusing statistical findings. 38. Reviewing an agency’ s intake form for a new client. 39. Developing new explanations of well accepted empirical studies. 40. Reading a book written by a famous psychotherapist . 41. Conducting group psychotherapy sessions. 42. Serving on a thesis or dissertation committee. Appendix B Research Training Environment Scale Below is a series of statements concerning research training. Please note: We define research broadly. "Research" when used:h1this survey includes the following types of activities: designing and executing research projects, preparing manuscripts of a theoretical nature of a critical review of the literature, conducting program evaluations or needs assessments, making presentations at professional conferences, participating as a member of a research team engaged in any of the above activities, and advising the research projects of others. Please respond to the following statements in terms of the doctoral program in which you are currently receiving your training. (Note: If you are currently on internship, please rate the program in which you were previously Consider each statement using the following trained.) scale: 1 2 3 4 5 disagree somewhat neutral somewhat agree agree disagree PLACE YOUR RATING IN THE BLANK TO THE LEFT OF EACH ITEM. In my graduate training program there are 1. opportunities to be a part of research teams. 2. II was encouraged to get involved in some aspects of research early in my graduate training. :3. Our faculty seems interested in understanding and teaching how research can be related to counseling practice. 4. inhis training environment seems to promote the idea of science as a lonely and socially isolating experience . Some of the faculty teach students that during a phase of the research process, it is important for 77 10. 11. 12. 13. 14. 15. 16. 78 the researchers to "look inward" for interesting research ideas. 
Students in our program feel that their personal research ideas are squashed during the process of collaborating with faculty members, so that the finished project no longer resembles the student's original idea. Many of our faculty do not seem to be very interested in doing research. Choosing an advisor in this program also determines the methodology of one’s study (e.g., field, laboratory, or survey), since faculty members are largely unwilling to consider alternatives to their preferred methodology. In my research training, the focus has been on understanding the logic of research design and not just statistics. I feel that I need to choose a research topic of interest to my advisor. I have gotten the impression in my graduate training that my research work has to be of great value in the field to be worth anything. In general, my relationship with my advisor if both intellectually stimulating and interpersonally rewarding. (If you advisor has been newly assigned or chosen, respond in terms of what you expect the relationship to be.) Most faculty do not seem to really care if students are genuinely interested in research. The faculty does what it can to make research requirements such as the thesis and dissertation as rewarding as possible. Faculty members often invite graduate students to Ibe responsible collaborators in the faculty member's own research projects. When first or second year students collaborate twith faculty or advanced students in research, they seem to end up doing much of the "dirty work" .in the project. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 79 My advisor is able to oscillate between the roles of thoughtful critic, on the one hand, and consultant/colleague who allows appropriate autonomy on the other. Often it seems that our faculty does research mainly because it is a requirement for promotion, tenure, and/or pay raises at the University. Faculty members in our counseling psychology program are willing to let students know about their struggles and failures in research and publication. The faculty here only seem to notice a few selected students in terms of reinforcing scholarly achievements. Many different research styles (e.g. filed vs. laboratory) are acceptable in my graduate program. There seems to be a general attitude that there is one best way to do research. In my program the faculty members believe that we must be highly knowledgeable about statistics in order to do research. My graduate training program has enabled me to see the relevance of research ideas to clinical service. There is informal sharing of research ideas and feelings about research ideas in my program. My graduate program has a formal way of recognizing the scholarly achievements of the students (e.g., in program meetings, in program newsletter). The faculty does not seem to value clinical experience as a source of ideas for research. It is unusual for first year students in this program to collaborate with advanced students or faculty in research projects. The faculty members here are quite open in sharing their research with students. 30. 31. 32. 33. 34. 35. 36. 370 38. 39. 40. 41. 42. 80 Students in the program who are "go getters" in terms of research are not very well-liked by their peers. A fairly clear message in my doctoral training environment is that every piece of research originates from hypotheses derived from existing theory (as opposed to personal experience). I feel that my advisor expects too much from my research project. 
The faculty members of my graduate program show excitement about research and scholarly activities. There is a general impression around here that research and statistics are almost synonymous. The faculty members of my graduate program enjoy discussing ideas. The faculty members of my graduate program encourage me to pursue the research question in which I am interested. Much of the research we become involved in prior to the thesis is organized in a way that is highly anxiety provoking to students. Much of the research we become involved in prior to the thesis is intellectually challenging and stimulating. My graduate program provides concrete support for graduate student research (e.g., typing manuscripts, travel money for making presentations, or free postage for mailing surveys). Students are given the impression in my program that the "cookbook" use of statistics is inappropriate. The faculty in my graduate training program is involved in the conduct and publication of high quality research (or theory). 'The general view in my training program is that Iknowledge is best advanced through programmatic research . 43. 81 Faculty members here teach students that any single experiment is inevitably flawed and limited. 44. My graduate program rarely acknowledges the 45. scholarly achievements of students. Students generally feel here that they are able to follow their own methodological preferences in designing research (provided that their preferences fit the question being asked). APPENDIX C The following items are tasks related to research. Please indicate your degree of confidence in your ability to successfully accomplish each of the following tasks on a scale from 0 - 9 with 0 representing no confidence and 9 representing total confidence. 0 ----- 1 ----- 2 ----- 3 ----- 4 ----- 5 ----- 6 ----- 7 ----- 8 ----- 9 no total confidence confidence 1. Selecting a suitable topic for study 2. Knowing which statistics to use 3. Getting an adequate number of subjects 4. Writing a research presentation for a conference 5. Writing the method and results sections for a research paper for publication 6. Manipulating data to get it onto a computer system 7. Writing a discussion section for a thesis or dissertation 8. Keeping records during a research project 9. Collecting data 10. Designing an experiment using non-traditional :methods e.g., ethnographic, cybernetic, phenom- ological approaches 11. Designing an experiment using traditional methods e.g. , experimental, quasi-experimental designs 12. Making time for research 13. Writing the introduction and literature review for a dissertation 14. Reviewing the literature in an area of research interest 82 o ----- 1 ----- 2 ----- 3 ----- 4 ----- 5 ----- 6 ----- 7 ----- 8 ----- 9 no total confidence confidence 15. Writing the introduction and discussion sections for a research paper for publication 16. Contacting researchers currently working in an area of research interest 17. Avoiding the violation of statistical assumptions 18. Writing the method and results sections of a dissertation 19. Using simple statistics e.g., t-tests, anova, correlation, etc. 20. Writing the introduction and literature review for a thesis 21. Controlling for threats to validity 22. Formulating hypotheses 23. Writing the method and results sections of a thesis 24. Utilizing resources for needed help 25. Understanding computer printouts 26. Defending a thesis or dissertation 27. Using multivariate statistics e.g., multiple regression, factor analysis, etc. 28. 
Using statistical packages e.g., SPSS-X, SAS, etc. 29. Selecting a sample of subjects from a given population 30. Selecting reliable and valid instruments 31. Writing statistical computer programs 32. Getting money to help pay for research 33. Operationalizing variables of interest APPENDIX D DEMOGRAPHIC AND DISSERTATION QUESTIONNAIRE Please check the appropriate blank or fill in the blank. 1. Gender: Female Male 2. Year of Birth: I 3. Partner Status: Single Married Separated Divorced Widowed Living with partner 4. Ethnicity: African American Asian American Caucasian Hispanic Native American other (please specify) 5. Please indicate the number of children in each age group currently living in your household: 0 - 5 5 - 10 10 - 15 15 - 20 20 - 25 over 2 5 6. What was your undergraduate GPA: 7. What is your graduate GPA: 8. Does your doctoral program require a master's degree prior to admission? 84 10. 11. 12. 13. 14. 15. 85 Yes No In what year did you begin working on your Master’s Degree? 19 Was a thesis required for completion of your Master's degree? Yes No In what year did you complete your Master's Degree? 19 In what year were you admitted to your doctoral program? 19 Have you or will you receive your Ph.D. from the same department/college as your Master's degree? Yes No In what year will you graduate from your doctoral program? 19 What is the current status of your doctoral level comprehensive/qualifying/preliminary exams? I I I I I I have not taken them am in the process of taking them have taken them and do not know the results yet have taken them and have not passed am in the process of retaking them have taken them and have passed If you have successfully completed these exams, in what year did you do so? 19 86 If you have not yet completed these exams, in what year do you plan to do so? 19 16. What is the current status of your required doctoral course work? I have completed less than 1/2 of required courses I have completed more than 1/2 of required courses, but have not completed all courses I have completed all required courses If you have successfully completed all required course work, in what year did you do so? 19 If you have not yet completed all required course work, in what year do you plan to do so? 19 17. What is the current status of your internship? have not applied am in the process of applying for internship am in the process of reapplying for internship am in the process of completing internship have completed internship HHHHH If you are in the process of completing or have completed internship, in what year did you do so? 19 If you are not currently doing an internship, in what year do you plan to do so? 19 18. What is the current status of your dissertation? 
Please check all that apply: have not started working on dissertation have picked a topic have talked to my advisor about the topic have completed a literature review HHHH 87 have selected a dissertation chair person have selected a dissertation committee have started writing the proposal have completed the introduction section have successfully defended my proposal have collected the data have analyzed the data have written the methods section have written the results section have written the discussion section have written the entire dissertation but have not defended it yet have successfully defended the dissertation have turned in the final copy of the dissertation to the University HHHHHHHHHHH HH I have received a dissertation certificate of completion I have published the results from the dissertation If you have turned in the final copy of the dissertation, in what year did you do so? 19 If you have not yet completed the dissertation, in what year do you plan to do so? 19 My dissertation will be or was: quantitative qualitative theoretical combination of above other (please explain) Please check all of the items that apply: I have been a member of a research team I have worked with a faculty member on a research project I have a research mentor I have made a presentation at a professional conference I have submitted a research article for 21. 22. 23. 24. 25. 26. 88 publication in a scholarly journal I have co-authored a research article in a scholarly journal I have been the first author of a research publication in a scholarly journal Please rank your top three places of employment (in order) after graduation: academic position (tenure track) academic position (non-tenure track) community mental health consulting counseling center hospital in-patient hospital out-patient post-doc training private practice other (please explain) What experiences have helped your overall progress in the program? What experiences have hindered your overall progress in the program? In what ways do you think your training has adequately prepared you to be a scientist-practitioner? In what ways do you think your training has not prepared you to be a scientist-practitioner? What experiences have helped your growth as a scientist? 27. 28. 29. Any 89 What experiences have hindered your growth as a scientist? What experiences have helped your growth as a practitioner? What experiences have hindered your growth as a practitioner? other comments you would like to make.................. APPENDIX E Carol Geisler, M.A. 3035 SO. Connor Salt Lake City, UT 84109 801-486-9357 Dear Colleague: I am conducting a study on the interests, preferences, perceptions, and attitudes towards the scientist- practitioner model of training for graduate students in APA approved Counseling Psychology programs. I would greatly appreciate your cooperation in this manner! The materials take approximately 30 minutes to complete. Please DO NOT put your name on the materials. All responses are completely anonymous. A self-addressed, stamped envelope is enclosed for return of the materials to me. If you are interested in receiving a summary of the results of the study, please write your name and address on the enclosed postcard and send it to me separately from the rest of the materials. Throughout my years as a graduate student I have believed in "research karma". That is, I have faithfully answered surveys that I received in the mail in hopes that others would do the same when it came time for my research. 
I am hoping that this strategy will pay off. I know life as a graduate student can be hectic, so I'd like to thank you in advance for your cooperation!

Sincerely,

Carol Geisler

Figure 1. Predicted Dissertation Progress Membership from the Obtained Discriminant Function Scores for All Participants. [plot not reproduced]

Figure 2. Actual Dissertation Progress by Obtained Discriminant Function Scores for All Participants. [plot not reproduced]

Figure 3. Predicted Dissertation Progress Membership from the Obtained Discriminant Function Scores for Form A. [plot not reproduced]

Figure 4. Actual Dissertation Progress by Obtained Discriminant Function Scores for Form A. [plot not reproduced]

Figure 5. Predicted Dissertation Progress Membership from the Obtained Discriminant Function Scores for Form B. [plot not reproduced]

Figure 6. Actual Dissertation Progress by Obtained Discriminant Function Scores for Form B. [plot not reproduced]

Figure 7. Histogram of Research Experience. [plot not reproduced]

REFERENCES

APA-accredited doctoral programs in professional psychology: 1993. (1993). American Psychologist, 48(12), 1260-1270.

American Psychological Association. (1993). Graduate study in psychology. Washington, DC: APA.

Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191-215.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Barlow, D. H., Hayes, S. C., & Nelson, R. O. (1984). The scientist practitioner: Research and accountability in clinical and educational settings. New York: Pergamon.

Betz, N. E. (1986). Research training in counseling psychology: Have we addressed the real issues? The Counseling Psychologist, 14(1), 107-113.

DeMuse, K. P. (1987). The relationship between research productivity and perceptions of doctoral program quality. Professional Psychology: Research and Practice, 18(1), 81-83.

Galassi, J. P., Brooks, L., Stoltz, R. F., & Trexler, K. A. (1986). Research training environments and student productivity: An exploratory study. The Counseling Psychologist, 14(1), 31-36.

Garcia, M. E., Malott, R. W., & Brethower, D. (1988). A system of thesis and dissertation supervision: Helping graduate students succeed. Teaching of Psychology, 15(4), 186-191.

Gelso, C. (1979). Research in counseling: Methodological and professional issues. The Counseling Psychologist, 8(3), 7-35.

Gelso, C. J., Betz, N. E., Friedlander, M. L., Helms, J. E., Hill, C. E., Patton, M. J., Super, D. E., & Wampold, B. E. (1988). Research in counseling psychology: Prospects and recommendations. The Counseling Psychologist, 16, 385-406.
Haynes, S. N., Lemsky, C., & Sexton-Radek, K. (1987). Why clinicians infrequently do research. Professional Psychology: Research and Practice, 18, 515-519.

Heppner, P. P., Kivlighan, D. M., & Wampold, B. E. (1992). Research design in counseling. Pacific Grove, CA: Brooks/Cole.

Holland, J. L. (1986). Student selection, training, and research performance. The Counseling Psychologist, 14(1), 121-125.

Jacks, P., Chubin, D. E., Porter, A. L., & Connolly, T. (1983). The ABC's of ABD's: A study of incomplete doctorates. Journal of College and University Teaching, 31(1), 74-81.

Klecka, W. R. (1980). Discriminant analysis. Newbury Park, CA: Sage Publications.

Lent, R. W., Brown, S. D., & Larkin, K. C. (1984). Relation of self-efficacy expectations to academic achievement and persistence. Journal of Counseling Psychology, 31(3), 356-362.

Lent, R. W., Brown, S. D., & Larkin, K. C. (1986). Self-efficacy in the prediction of academic performance and perceived career options. Journal of Counseling Psychology, 33(3), 265-269.

Lent, R. W., Brown, S. D., & Larkin, K. C. (1987). Comparison of three theoretically derived variables in predicting career and academic behavior: Self-efficacy, interest congruence, and consequence thinking. Journal of Counseling Psychology, 34(3), 293-298.

Leong, F. T., & Zachar, P. (1991). Development and validation of the Scientist-Practitioner Inventory for psychology. Journal of Counseling Psychology, 38(3), 331-341.

Magoon, T. M., & Holland, J. L. (1984). Research training and supervision. In S. D. Brown & R. W. Lent (Eds.), Handbook of counseling psychology. New York: John Wiley & Sons.

Mallinckrodt, B., Gelso, C. J., & Royalty, G. M. (1990). Impact of the research training environment and counseling psychology students' Holland personality type on interest in research. Professional Psychology: Research and Practice, 21(1), 26-32.

Monsour, M., & Corman, S. (1991). Social and task functions of the dissertation partner: One way of avoiding terminal ABD status. Communication Education, 40(2), 180-185.

Multon, K. D., Brown, S. D., & Lent, R. W. (1991). Relation of self-efficacy beliefs to academic outcomes: A meta-analytic investigation. Journal of Counseling Psychology, 38(1), 30-38.

Muszynski, S., & Akamatsu, T. (1991). Delay in completion of doctoral dissertations in clinical psychology. Professional Psychology: Research and Practice, 22(2), 119-123.

Osipow, S. H. (1979). Counseling researchers: Why they perish. The Counseling Psychologist, 8(3), 39-41.

Phillips, J. C. (1991). Research self-efficacy and the research training environment in counseling psychology. Unpublished doctoral dissertation, The Ohio State University, Columbus, OH.

Porter, G. L., Chubin, D. E., Rossini, F. A., Boeckmann, M. E., & Connolly, T. (1982). The role of the dissertation in scientific careers. American Scientist, 70(5), 475-481.

Royalty, G. M., Gelso, C. J., Mallinckrodt, B., & Garrett, K. (1986). The environment and the student in counseling psychology: Does the research training environment influence graduate students' attitudes toward research? The Counseling Psychologist, 14(1), 9-30.

Scheaffer, R. L., Mendenhall, W., & Ott, L. (1986). Elementary survey sampling. Boston: Duxbury Press.

Schunk, D. H. (1991). Self-efficacy and academic motivation. Educational Psychologist, 26(3 & 4), 207-231.

Shemberg, K., Keeley, S. M., & Blum, M. (1989). Attitudes toward traditional and nontraditional dissertation research: Survey of directors of clinical training.
Professional Psychology: Research and Practice, 20(3), 190-192.

Stevens, J. (1992). Applied multivariate statistics for the social sciences. Cincinnati: Lawrence Erlbaum Associates.

Stevens, J. (1986). Applied multivariate statistics for the social sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Tabachnick, B. G., & Fidell, L. S. (1989). Using multivariate statistics. New York, NY: HarperCollins.

Watkins, C. E., Lopez, F. G., Campbell, V., & Himmell, C. (1986). Contemporary counseling psychology: Results of a national survey. Journal of Counseling Psychology, 33(3), 301-309.

Wilkinson, L., Hill, M., & Vang, E. (1992). SYSTAT version 5.2.1 [Computer program]. Evanston, IL: Systat Inc.