This is to certify that the thesis entitled "A Comparison of Computer-Assisted Interviewing and Paper-and-Pencil Interviewing on Responses to Open-Ended Questions," presented by Judith M. Berkowitz, has been accepted towards fulfillment of the requirements for the M.A. degree in Communication.

Major professor
Date: April 7, 1995

A COMPARISON OF COMPUTER-ASSISTED PERSONAL INTERVIEWING AND PAPER-AND-PENCIL INTERVIEWING ON RESPONSES TO OPEN-ENDED QUESTIONS

By

Judith M. Berkowitz

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF ARTS

Department of Communication

1995

ABSTRACT

A COMPARISON OF COMPUTER-ASSISTED PERSONAL INTERVIEWING AND PAPER-AND-PENCIL INTERVIEWING ON RESPONSES TO OPEN-ENDED QUESTIONS

By

Judith M. Berkowitz

As the use of computers grows in the business sector, the personal computer offers several advantages to research organizations relative to other data collection procedures. If computerized data collection procedures may ultimately replace traditional paper-and-pencil methods, then systematic investigations of the quality of data produced using the new method are warranted. This thesis compares computer-assisted personal interviewing and paper-and-pencil interviewing on responses to open-ended questions during a face-to-face personal interview. In the present investigation, three split ballot tests were conducted with a total of 1,972 respondents. Participants were assigned to either the computer interviewing condition or the paper-and-pencil interviewing condition. For one of the three tests, responses to open-ended questions were recorded on paper first, then entered into the computer. The total number of thought units given in response to two open-ended questions was compared using two-tailed t-tests. In addition, a three-way Analysis of Variance was run separately for each of the three split ballot tests to determine if the total number of thought units varied by mode. Although a significant main effect was found for mode (CAPI vs. PAPI) for one of the tests, the effect size was small, suggesting the difference was due to the large sample size. Results indicate that the total number of thought units was not dependent upon condition.

PREFACE

Interests developed over the course of time are produced by multiple forces. My interest in research methods was shaped and refined during my tenure as a project director at a market research firm. The BASES Group provided me with the opportunity to experience all aspects of the research process, from design to data collection and analysis. My employment began as the organization was introducing computers to the face-to-face interview setting. Having had the opportunity to observe interviews and to program questionnaires, I was intrigued by how the introduction of computers into an interpersonal interaction might influence responses. I am deeply indebted to BASES for their generous support and assistance, without which this thesis would not have been possible. My gratitude goes to the project directors, client service managers, analysts, coders, field directors, report processors, and data entry personnel.
I would like to offer special appreciation to Joe Mllke, Ken Knipmeyer, Steve Froehle, Mark Goertemiller, J ano Edmonds, Howard Lemonick, Sanserrae Frazier, Chris Adams, Dan Glassmeyer, and Margie Ruddick for their assistance. To Robin Stahl, I thank you for your encouragement and knowledge and wish you health and happiness. To Jim Dearing, as my advisor, your patience, support, ideas, and spinach pizza are deeply appreciated. Additional recognition is owed to Patrice Buzzanell for her help in clarifying my thoughts, and to Franklin Boster for professional and statistical advice. My appreciation to Dr. Mick Couper and Dr. Tom Smith in locating relevant resources. Finally, thanks to Mom, Dad, and J efi‘ -- for both the tangible and intangible. iii TABLE OF CONTENTS Page Number List of Tables .............................................................................................. v List of Figures ............................................................................................ vi Chapter 1. Introduction .............................................................................. 1 Chapter 2. Literature Review ..................................................................... 8 Chapter 3. Methods ................................................................................... 24 Chapter 4. Results ..................................................................................... 32 Chapter 5. Summary and Conclusions ........................................................ 37 List Of References ....................................................................................... 40 iv Table 1. Table 2. Table 3. Table 4. Table 5. Table 6. Table 7. LIST OF TABLES Page Number Number of Interviews by City for Each of the Split Ballot Tests ......... 25 Number of Respondents by Condition for Each Of the Split Ballot Tests ................................................................................ 26 Frequency of Total Thought Units in Response to the Question “What is there you think you would like about this produc .” by Condition for each Split Ballot Test ................................................ 30 Frequency of Total Thought Units in Response to the Question “What is there you think you would dislike about this product?” by Condition for each Split Ballot Test ................................................ 31 Analysis of Variance Results for Test 1 .............................................. 36 Analysis of Variance Results for Test 2 .............................................. 36 Analysis of Variance Results for Test 3 .............................................. 36 LIST OF FIGURES Page Number Figure 1. Three Components Of the Survey Interview as a Microsocial System: Interviewer, Respondent, and Task, and their Relationship to the Response Elicited. Based on Sudman & Bradburn’s Model (1974) ................................................................. 12 CHAPTER 1 INTRODUCTION Researchers pay careful attention to how aspects of research designs impact the amount and quality of data. Yet, there are times when we may overlook certain variables and their impact on data quality and quantity. One such variable is the mode of administration of survey interviews. A mode is the means through which information may be collected fi'om a respondent. For survey research, modes may be classified in various ways, including: interviewer or self-administered; telephone, face-to-face, or mail; and computer or paper-and-pencil methods. 
Since respondents can be sensitive to many aspects of the survey Situation, responses to the same questions collected using alternate modes may differ. When a new technology is introduced, we must undertake systematic investigations of its impact on the data we may obtain. Survey research is the systematic, standardized collection of information from a sample of individuals, households, or larger organized entities through a series of questions and responses in order to draw conclusions about a larger population (Rossi, Wright, & Anderson, 1983). Unlike ethnography, survey research does not require the researcher to participate in the daily lives of the research participants (Churchill, 1991). A researcher obtains survey data by asking research participants questions. These questions may be either closed-ended or Open-ended. Closed-ended questions provide respondents with prespecifred responses from which to choose. By prespecifying the form and language responses must take, the researcher is able to code the data quickly and inexpensively. However, specifying the form in which respondents may answer a question 1 2 limits the insights the data may provide to the preconceived notions the researcher has about the phenomenon. In some instances, researchers gain greater insight into a phenomenon under investigation through the use of open-ended questions. Open-ended questions, unlike closed-ended questions, allow the respondent the freedom to respond to the question using words and phrases the respondent believes are most appropriate. For the most part, the length, clarity, and organization of the responses to open-ended questions are decisions left to the respondent. This thesis examines one potential source of bias in the survey interview situation: the impact of the use of computer-assisted interviewing on responses to Open-ended questions. For this endeavor, three split ballot tests were conducted, designed to assess the impact, if any, the use of a computer to record responses in a face-to-face interview has on responses compared to traditional paper-and-pencil methods of recording. It is presumed that responses to Open-ended, rather than closed-ended questions, may be most susceptible to influence from the use of computers in the interview situation. Therefore, this thesis compares responses to open-ended questions across the two modes. This chapter provides background about the ways in which computers have been used by survey organizations in general as well as in the interviewing situation. The advantages and disadvantages of computer-assisted interviewing versus paper-and-pencil methods are discussed. Computers in Interviewing The use Of computers in survey research has revolutionized the way in which survey organizations do business. After their introduction, computers were used to assist in sample design and selection, data entry, editing, coding, tabulation, and data analysis (Churchill, 1991; Jones & Polak, 1993; Karweit & Meyers, 1983; Weeks, 1992). Yet, most of the actual interviews were conducted using traditional paper-and-pencil methodology in order to record responses. In the past two decades, survey organizations have begun to use computers in order to support data collection over the telephone, and more recently, with the advent of afi‘ordable laptop computers, some organizations have begun to use computers for self-administered and interviewer-conducted face-to-face interviews. 
By entering the responses to the survey into the computer directly, firms have been able to lower costs, such as data entry and field monitoring, better monitor progression of studies as they are being fielded, as well as to reduce the time it takes to process the data upon completion of the job. While there are a variety of Computer-Assisted Survey Information Collection (CASIC) systems currently in use or under development (Weeks, 1992), this thesis examines only Computer-Assisted Personal Interviewing (CAPI). CAPI, an interviewer- administered mode, is the CASIC system most similar to traditional Paper-And-Pencil Interviewing (PAPI). As organizations' desire for quicker access to survey results increases, they may begin to utilize CAPI methodology more often. In a CAPI interview, the interviewer sits in front of the computer screen and reads the questions to the 4 respondent. In some situations, the respondent is allowed to read the questions on the computer screen as they appear; in other situations, the screen is positioned such that only the interviewer may see the questions. Responses to both closed-ended and open-ended questions are entered directly into the computer by the interviewer. In some cases, the interviewer may be permitted to record responses to open-ended questions on a separate form to be entered into the computer after the conclusion of the interview. Advantages of Computer-Assisted Interviewing CAPI has several advantages over PAPI (Baker, 1992; Churchill, 1991; Costigan & Thomas, 1992; Jones & Polak, 1993; Karweit & Meyers, 1983; Martin, O'Muircheartaigh, & Curtice, 1993; Olsen, 1992; Snijkers, 1992; Weeks, 1992). TO begin, survey organizations may capitalize on the computer's time keeping capabilities. The internal clock of the computer may be programmed to keep an exact time length of the interview, thus allowing the survey organization to monitor production and costs more eficiently. Second, it has been suggested that CAPI reduces costs associated with photocopying questionnaires, shipping, data entry, data cleaning, and questionnaire storage. Although some of the costs at the end of the data collection process may be reduced, there may be an increase in costs in questionnaire preparation related to programming and interviewer training. In addition, costs associated with the purchase of computer hardware and software in order to conduct interviews may result in an increased 5 cost per interview as interviewing agencies try to recoup some of their initial investment (Baker, 1992; Simpkins, 1992). The third main advantage of CAP] is a quicker turnaround time for data processing. Since data entry is, in efi‘ect, done on-line, analysis of the closed-ended questions may begin as soon as interviewing is completed. Open-ended questions would still need to be coded and processed through a separate data entry procedure; however, there remains the possibility of reading and coding the open-ended responses using computerized methods. Fourth, CAPI enables the questionnaire to be tailored to each respondent's situation. For example, question wording may be tailored to include responses from previous questions. For an interviewer-based interaction, this may provide an additional feeling of closeness. In addition to the computer's capacity to insert personalized text into the question wording, the computer may also provide probes to clarify responses. 
This ability would increase the likelihood of obtaining fully probed answers since the computer controls the information being sought, thus reducing the likelihood of human error. In many respects use of a computerized interview may reduce the likelihood of human error, especially with respect to missed questions or improperly followed skip patterns. Provided the questionnaire is programmed correctly, the computer controls which questions are asked, eliminating the need for the interviewer to make these decisions. Thus, the number of incorrectly answered questions or missing data would be reduced. Finally, the computer has the capability to provide on-line editing of responses for consistency. Because the computer is able to perform quickly and accurately calculations 6 and comparisons of key responses to ensure consistency of responses, any inconsistencies or discrepancies may be flagged and clarified by the respondent. Ultimately, this may result in improved data quality. Yet, CAPI is constrained in at least two ways as compared to PAPI. First, the length of the questionnaire may be limited by the nature of the software or hardware used to conduct the interview. Some programs are unable to handle extremely long questionnaires, whereas with PAPI instruments may be as long as desired. Similarly, some CAPI software limits the length Of responses to Open-ended questions. Whereas with a PAPI interview, responses may be continued on separate sheets of paper as needed, with CAPI, the length may be restricted to a maximum number of characters. To this point, the discussion has focused on how survey organizations have used computers and the benefits the use of computers Ofl‘er the organizations. However, the discussion has not addressed how the computer may influence the responses of survey respondents. Rather than examining the computer’s impact on the survey organization, this thesis focuses on how the use of the computer may influence responses of interviewees. This chapter examined the ways in which the computer has been utilized in the survey process and previous research on the impact Of computer-assisted personal interviewing on data quality. The next chapter examines: (a) the ways in which response bias may enter into the survey interview; (b) conversational norms; and (c) the ways in which the survey interview violates these norms. Chapter 3 will describe the procedures used to conduct the split ballot tests and the coding of the Open-ended responses in the 7 present investigation. Chapter 4 presents the results of the analyses used to compare the mean number of codable responses between conditions. Finally, Chapter 5 presents conclusions and suggestions for further research on mode efi‘ects for open-ended questions which may arise in the computer-assisted personal interview context. CHAPTER 2 LITERATURE REVIEW The goal of survey research is to collect information from a sample Of individuals in a systematic, standardized manner by asking questions about pre-specified topics in order to draw conclusions about a specific population (Rossi, Wright, & Anderson, 1983). Based on the sample's responses to the questions, estimates for the population as a whole may be Obtained using statistical procedures. While this process provides quantifiable results, the researcher is charged with providing a meaningful interpretation of the numbers. The survey has been used as a tool of social scientific inquiry for approximately 100 years (Bulrner, Bales, & Sklar, 1991; Herbst, 1993; Rossi, et al., 1983). 
Approximately 90 percent of all social scientific investigations, including experiments, utilize survey methodology. These estimates vary by field (Bradbum & Sudman, 1988; Briggs, 1986; Rossi, et al., 1983). For example, approximately 87 percent Of research articles published in public opinion utilize survey methodology while only 12 percent of the articles published in social psychology utilize survey methodology (Rossi, et al., 1983). Survey results have been used for a variety of purposes, including the generation, evaluation and refinement of policy (Bradbum & Sudman, 1988; Price, 1992). Because organizations, institutions, businesses, and government may or may not implement policy and procedures based on survey results, an examination of the factors which influence question responses in survey research is warranted. To the extent that responses do not reflect the underlying “true” value, subsequent conclusions and actions may be affected. 8 9 An underlying assumption of survey research is that an accurate picture of reality may be obtained by asking questions designed to reveal the respondent's behaviors, intentions, perceptions, attitudes and beliefs about a given topic. This implies that there is an empirical reality, resulting in a true answer to a question, and respondents not only know this true answer, but are able and willing to share that information with the researcher. The survey process is believed to be a vehicle through which the researcher may access this true value; however, this does not mean that the observed value will equal the “true” value in all cases. It is possible for the observed values to difl‘er in either a non- systematic or a systematic manner across all respondents. The extent to which the observed response to a question differs from the “true” response in a systematic manner is termed bias (Sudman & Bradbum, 1974). Bias may arise due to how the sample was executed or to other factors such as interviewer characteristics, respondent characteristics, task characteristics, and the interview context itself. “fithout an awareness of the various ways in which data may deviate systematically fi'om this underlying true score, the subsequent meaning ascribed to the results may be distorted. There has been considerable interest in investigating the variables which affect data quality. This chapter describes the survey interview situation in order to illustrate why mode efi‘ects are plausible. To do this, it (1) outlines and discusses the components of the survey interview - the interviewer role, the respondent role, and the task characteristics - and the survey interview context; (2) examines how bias may enter the survey situation; and (3) focuses on how computer-assisted interviewing may influence responses in a survey interview. 10 The Survey Interview as a Microsocial System One way in which survey data may be collected is through interviews. A survey interview is a ”process of dyadic, relational communication, with a predetermined and serious purpose designed to interchange behavior and involving the asking and answering of questions" (Stewart & Cash, 1991, p. 3). Interviews are dynamic interactions where the members of the dyad have an interpersonal connection within the constraints of the intentional interactive situation. Stewart and Cash's definition implies a two-way exchange of information. Yet, at least one person, the interviewer, brings a specific goal to the interaction, thus creating a task-oriented situation. 
In survey research, the interview may be considered a somewhat scripted conversation, where the words and actions of the interviewer are prescribed by an (often absent) third party, the researcher. As a task-oriented scripted situation, the direction and flow of exchange is controlled by the interviewer based on the prespecifications of the researcher, thus altering the social context of the interaction. Surveys may be either self- or interviewer-administered; the respondents' experience and the nature of the interaction differ depending on the mode of administration.

To the researcher, the survey interview is an instrument; it is a means through which relevant information may be gathered. Yet, the implementation of a survey interview instrument is a dynamic process. During an interview, the interviewer and respondent respond to each other in a fashion which is unlike that in a self-administered interview. Thus, the interview process creates a situation that is unique from a self-administered survey, reflected in the task-oriented, somewhat scripted conversation. In a self-administered survey, the respondent and interviewer (if one is present) do not necessarily interact in a fashion which may directly shape responses. The survey interview, on the other hand, depends upon the interaction of interviewer and respondent, and allows responses to be shaped by the interaction. Thus, the interview may be seen as a microsocial system, a collection of interdependent social actors acting within the confines of a temporally and spatially limited environment.

Sudman and Bradburn (1974) conceptualized the survey interview as a microsocial system in which responses are influenced by the interaction of three components: the role of respondent; the role of interviewer; and the task (see Figure 1). Within this system, the task is the completion of the survey through the elicitation of the relevant information. The interviewer's role is to obtain the information within the constraints designated by the researcher in terms of behaviors, mannerisms, question wording, and probing. The respondent's role is to provide the desired information in the form desired by the researcher. The interviewer and the respondent are linked in pursuance of a common objective: the completion of the survey. It is the task that bonds the two interactants and determines the nature of the interaction. Responses do not depend solely upon the respondent; rather, the interrelationship of the three components shapes the responses to the questions posed.

[Figure 1. Three Components of the Survey Interview as a Microsocial System: Interviewer, Respondent, and Task, and their Relationship to the Response Elicited. Based on Sudman & Bradburn's Model (1974). The figure depicts the Interviewer Role (to obtain information from the respondent in the form designated in the survey script; influenced by role demands and demographic characteristics), the Respondent Role (to provide information to the interviewer; influenced by role demands, demographic characteristics, and desire to cooperate), and the Task (survey characteristics, such as question wording and order, mode of administration, subject matter, etc.), which jointly shape the responses to questions.]

This interrelationship influences which response to a closed-ended question is chosen, how responses to open-ended questions are phrased, and the amount of detail provided in response to open-ended questions. Thus, Sudman and Bradburn believe the task is central in understanding the responses in the survey interview and data quality.
Although the survey interview is based upon the elements of conversation, its dynamic difl‘ers from that of ordinary conversation in that it strives for standardization across many conversations. Thus, interactions within this system differ from those in other contexts and may influence responses to deviate from the “true” response. While Sudman and Bradbum consider the task central to data quality and recognize the role demands of the interactants, their model overlooks the properties unique to the context of the interview. As Suchman and Jordan (1990) point out, the social context of the interviewing situation is quite difl‘erent than that of ordinary conversation. ”[T]he survey interview suppresses those interactional resources that routinely mediate uncertainties of relevance and interpretation" (p. 232). The interviewer's job is to inquire, not to validate responses. If the interviewer validates responses, s/he may introduce additional variables that would compromise the standardization sought by the survey situation. In addition, the situation is further constrained due to the fact that the topic is predetermined by an often absent third party who attempts to "control not only what gets talked about in the interview, but precisely how topics get talked about as well" (Suchman & Jordan, 1990, p. 233). Suchman and Jordan suggest that "local control over the conversation is what sustains participants' interest in talking” (p. 233). Whereas the l4 interviewer has an external reward for completing the scripted interview (i.e., a wage), the respondent may not have a substantial external reward (i.e., an incentive payment) and may only have the intrinsic satisfaction of completing the task of providing information. As the norms of ordinary conversation are violated and the respondent realizes that they are being violated without recourse, the respondent may change his/her behavioral responses to reflect the circumstances. Responses may include boredom, physical withdrawal, impatience, abbreviated responses or response sets. While these violations of conversational norms are present within both computer- assisted (CAPI) and paper-and-pencil (PAPI) interviews, the question of the extent to which these violations difi‘er across the mode of administration remains. It is possible that the new mode of data collection offers a more extreme violation of conversational norms. Whereas people are accustomed to conversational partners taking notes on paper, they may not be accustomed to these partners taking notes on a computer. Therefore, it is plausible for mode efl‘ects to exist within a face-to-face interview based on the data collection procedures used. As mentioned previously, the extent to which the observed response to a question differs from the “true” response in a systematic manner across all respondents is termed bias and may arise due to either (1) how the sample was executed, or (2) other factors. These other factors include the components of the survey situation as described above: interviewer characteristics; respondent characteristics; task characteristics; and the interview context (Bradbum 1983; Briggs, 1986; Churchill, 1991; Suchman & Jordan, 15 1990; Sudman & Bradbum, 1974). The next section examines how bias may enter the survey interview process. Bias in the Survey Interview The survey interview may be described as a series of questions and answers designed to elicit behavioral, attitudinal, or afl‘ective information. 
Both closed-ended and open-ended questions may be used to derive such information. Bias, the deviation of a response from its underlying “true” score in a systematic manner, is a constant threat to the derivation of behavioral, attitudinal, and affective information and our interpretation of it. Behavioral information may be validated empirically; that is, the “true” answer may be determined through alternative methods of inquiry or validation. By this comparison, the extent of the bias may be assessed; for behavioral questions, then, bias refers to the degree of accuracy (Sudman & Bradbum, 1974). Responses to attitudinal or afl‘ective questions, on the other hand, may not be verified through alternative methods. Thus, responses may not be verified and checked for accuracy in the same sense as behavioral questions. Sudman and Bradbum ( 1974) suggest that responses to attitudinal and affective questions may be examined for consistency relative to an external criterion. Responses to attitudinal or affective questions may be examined for their consistency across methods or within attitudes. While the relevant external criterion is subject to debate, for a study of mode efl‘ects, the criterion should be an existing mode similar to the new mode. The present investigation examines one way by which computer-assisted personal interviewing (CAPI) 16 may afl‘ect responses to open-ended questions during a face-to-face interview. The criterion by which CAPI should be compared in this context is paper-and-pencil interviewing during a face-to-face interview, since this is the mode to which it is most similar. The model of the interview situation posited by Sudman & Bradbum (1974) considers only the interviewer, the respondent, and the task as potential sources of bias and how they contribute to distortions of responses. According to the authors, a considerable amount of research has examined interviewer efl‘ects on responses to survey questions. Typically this research has focused on interviewers’ characteristics (e.g., demographics, interviewing experience, etc.) which may lead to difl‘erential responses from respondents. However, this is not the only way in which the interviewer role may serve as a source of bias; the manner in which the person chooses to fulfill the role of interviewer may also serve this fiinction. Bradbum (1983) noted that interviewer efl‘ects are relatively small compared to efl’ects which fall under the rubric of “task characteristics.” Similar to the ways in which the interviewers’ characteristics may influence responses, so too may the characteristics of the respondent. The demographic characteristics of the respondent in comparison to the interviewer may serve to elicit systematically different responses for sensitive tOpics (e.g., race relations, sexual activity, etc.). More important, however, is the respondents’ willingness to cooperate and to provide the desired information. To the extent that the respondent may tire of the task or I? desire to provide the socially desirable response, responses obtained may difi‘er from the “true” response. Given the survey as a task-oriented social interaction, responses may be more heavily dependent on the demand characteristics of the situation (Ome, 1969). The task structure itself may provide a considerable source of bias, especially considering its influence on the interviewers’ and respondents’ roles. 
Sudman and Bradbum (1974) “consider the task to be the central concept and the task variables to be the most important sources of response efi‘ects [or bias]” (p. 18). The task structure includes: question wording and order; questionnaire length; mode of administration; survey topic; and the saliency of the information requested, etc. In their examination of the literature on response efi‘ects, they concluded that mode of administration (face-to-face, telephone, and mail surveys) does impact response effects. When a new mode of data collection is introduced, it is important to assess what impact the new methodology may have on responses to questions. Ifthe new methodology influences responses, then it may be thought that there has been a change in behavior or attitudes, when in fact that relationship is spurious (Olsen, 1992). As computers become incorporated in the data collection procedure, investigations as to their impact, if any, on data quality is warranted. The next section focuses on the previous research on the use of computers in the interview setting. 18 Previous Research on the Use of Computers in the Interview Setting Since the introduction of CAPI, there have been few systematic studies investigating its effect on responses and those articles which have appeared have been descriptive in nature (Martin et al., 1993). For example, Jones and Polak (1993) discuss the advantages of a computer-based personal interviewing protocol. Similarly, Costigan and Thomson (1992) cite the advantages CAPI has over PAPI and address the pragmatic issues relevant to practitioners designing a study using CAPI. Sirnpkins (1992) also addresses pragmatic issues regarding the tradeofl‘s between conducting a study using CAPI or PAPI and provides a checklist as to when it might be appropriate to use CAPI. In summarizing the extant research on CAPI versus PAPI, Baker (1992) explored cost difl'erentials between the two modes of administration, interviewer and respondent acceptance of CAPI, and CAPI's impact on data quality. Citing studies conducted by the National Opinion Research Center at the University of Chicago (N ORC), Baker noted that interviewers were capable of using the new methodology without dificulty. Similarly, he noted in the NORC studies respondents were accepting of the new technology as well, showing either indifference or enthusiasm about the new technology. With respect to data quality, the research to which Baker referred was comprised of closed-ended questions. Mth the exception of the number of illegal sldpsl , no differences were found in the number of questions with missing data, refusals, and don't know responses between the ' With a CAPI questionnaire, it is not possible for the interviewer to follow a skip pattern incorrectly since the questionnaire is pre-programmed to follow the skip correctly. l 9 two modes. Yet, Baker notes that in one NORC study, "there is some evidence that respondents are more likely to report what they view as negative or embarrassing behavior (such as excessive drinking or use of contraceptives) in a CAPI interview than in a standard pencil and paper interview" (p. 152). Although no reason is reported in the study, Baker suggested that this may be due to the interviewer's ability to see the adjacent questions in a PAPI interview rather than in isolation as is the case in a CAPI interview. 
Baker also noted that there was no difl‘erence in the recording of responses to open-ended questions, with the exception of typographical errors, between CAPI and PAPI. However, it is not noted as to either the existence of a systematic investigation of the quality of the responses to the open-ended questions themselves or the results of such an inquiry. Weeks' (1992) summary of Computer-Assisted Survey Information Collection (CASIC) methodologies draws on the same research studies, and thus provides similar conclusions to those noted by Baker (1992). Most studies conducted to compare the two modes of administration have utilized longitudinal data sets and have focused exclusively on closed-ended questions (see Martin et al., 1993, or Olsen 1992). With respect to data quality, Martin, O'Muircheartaigh, and Curtice (1993) focused on response patterns to attitude questions and stability of responses over time. They conducted three split sample comparisons of the two modes of data collection. Two of these samples used a panel of respondents; one used a separate sample. They found no difl‘erence in responses to Likert-type attitude questions administered using CAPI and that the mode of administration did not affect the stability of responses over time. 20 Olsen (1992), like Martin and his colleagues, compared responses to closed-ended questions for interviews using CAPI or PAPI from a longitudinal data set. Olsen assessed data quality based on item non-response, including the number of refirsals, don't know responses, and incorrectly followed Skip patterns. As discussed previously, with the exception of the number of incorrectly followed skip pattems, no mode efl‘ect was found for these measures. However, Olsen did find evidence of a mode efi‘ect for revealing sensitive behaviors (i.e., alcohol consumption). While respondents in the CAPI condition were more likely to reveal their alcohol use than were those in the PAPI condition, it is important to note that interviewers, not respondents, were randomized between mode of interview administration. Thus, the mode efl‘ects may be confounded with interviewer effects. While it is important to understand the efl‘ects of the new methodology by utilizing longitudinal data sets, Martin and colleagues (1993) and Olsen (1992) introduce another potential source of bias in that the respondents are used to being interviewed at regular intervals. Thus, they may not respond difl‘erently due to the new methodology since (1) the interview is no longer a novel task and (2) the topic of the interview is familiar. To counter these concerns, it is important to investigate the impact of the new methodology using cross-sectional studies. Although studies using self-administered methodology is not fiIlly comparable, the results may prove useful in understanding the effects of the use of a computer in an interview setting. Kiesler and Sproull (1986) used a computer-based questionnaire and a traditional paper and pencil questionnaire to examine individuals health attitudes, 21 behaviors, and personal traits. They sampled individuals who had used an electronic mail account at a university. The questionnaire was administered using traditional paper and pencil methodology via the mail for half of the sample, while the other halfwas administered the questionnaire using a computer program to which the respondent had access from his/her own computer terminal. 
They found that those who were in the computer condition were less likely to respond with socially desirable responses than were those who responded via traditional methodology. In addition, they found that for open- ended questions, there was no difference between conditions for the total number of words, pronominal use, or descriptions provided. In a subsequent interview with a subsample of the original respondents, the researchers administered an Open-ended instrument. Respondents were asked to complete this instrument using the mode which they had not used in the initial study. Kiesler and Sproull found that those who responded via the electronic survey were more likely to provide longer responses, use first-person pronouns, and use more self-descriptive terminology. It is important to note that in the follow-up study, the text-editing capabilities of the program were amended such that it was easier to edit responses. It is proposed here that the responses to open-ended questions may be most susceptible to influence from the use of a computer in the interview situation. The survey interview creates a unique social environment regardless of mode. However, the social situation created within the survey interview may difi‘er by mode. Meyrowitz (1985) notes that individuals’ behaviors differ across various social situations and suggests that the use of electronic media changes those behaviors. Traditionally, social situations were bounded 22 physically and limited to those actors located within that physical locale. However, the advent of electronic media allows social actors in other geographic locations access to social situations to which they do not have direct physical access. The introduction of computers into the survey interview situation changes the boundaries of the situation. The researcher is provided greater physical access; simultaneously, the respondent’s access is restricted in that access to the researcher flows through the interviewer, excluding the respondent fi'om participation in interactions between researcher and interviewer. This may heighten the respondent’s awareness of the scripted nature of the conversation and influence responses. Smith (1988) notes that as compared to closed-ended questions, open-ended questions require a greater level of efl‘ort to answer. In the computer-assisted interview situation, respondents may provide less detailed information in response to open-ended questions due to inability to be included directly in the interaction between researcher and interviewer. Previous research has not directly addressed the question of what efl‘ects, if any, computers have in survey interview situations. Research is also lacking which systematically examines the impact of CAPI on open-ended questions. Since it is the responses to open-ended questions which CAPI might be expected to most impact, this thesis focuses on the answers to open-ended questions. In general, research on open- ended questions has been conducted on their merits relative to close-ended questions (Bradbum, 1983; Converse, 1984). The main advantage of open-ended questions is their ability to provide the researcher with contextual. insight into the phenomenon under investigation. To the extent that a mode of interview administration may impede this 23 function, by limiting the either the stmcture or the amount of information elicited, corrective measures should be taken. 
Therefore, the present investigation addresses the research question:

RQ: What is the impact, if any, of computer-assisted personal interviewing on responses to open-ended questions compared to those in a paper-and-pencil face-to-face survey interview?

CHAPTER 3
METHODS

A major U.S. market research firm conducted three split ballot tests in the spring of 1990 in order to assess how the use of computer-assisted interviewing might affect data quality in a face-to-face interview setting. Each test consisted of independent samples recruited to participate in either the computer-assisted (CAPI) or paper-and-pencil (PAPI) interviewing condition, and the interview focused on the respondent's opinions regarding a new consumer product.

For each of the split ballot tests, interviews were subcontracted to and conducted by agencies specializing in interviewing. For each test, interviews were conducted in cities geographically dispersed throughout the United States. Approximately the same number of interviews was conducted in each city (see Table 1). Interviewers were required to have had at least six months of prior interviewing experience. Prior to conducting interviews, each interviewer was required to attend a briefing session on the procedures for the study and to participate in a practice interview. Interviewers conducted interviews in both the CAPI and PAPI conditions each time they worked. Interviewers were randomly assigned to work half of the shift on CAPI interviews and the other half on PAPI interviews.

Table 1. Number of Interviews Per City for Each of the Split Ballot Tests

                      Test 1        Test 2        Test 3
City                CAPI  PAPI    CAPI  PAPI    CAPI  PAPI
Atlanta               -     -       -     -      19    24
Baltimore             -     -       -     -      10    23
Boston               40    50       -     -      23    24
Buffalo               -     -       -     -      23    19
Charleston           39    50       -     -       -     -
Chicago               -     -       -     -      19    29
Columbus              -     -      33    34       -     -
Dallas               40    50      39    41      23    24
Denver                -     -      17    40       -     -
Greensboro            -     -       -     -      24    24
Indianapolis         39    50       -     -       -     -
Jacksonville         40    20      33    34       -     -
Los Angeles          39    50       -     -      19    20
Milwaukee             -     -       -     -      22    27
Minneapolis          40    50      33    34       -     -
New York City         -     -      25    32       -     -
Omaha                30    50       -     -       -     -
Orlando               -     -       -     -      23    25
Philadelphia          -     -      32    46       -     -
San Francisco         -     -      12    41       -     -
San Jose              -     -       -     -      11    24
Seattle               -     -      33     -       -     -
Spokane               -     -       -     -      16    26
St. Louis             -     -       -     -      15    24
Syracuse             40    50       -     -       -     -

Note: A "-" signifies that an agency in that city was not contracted to conduct interviews for that split ballot test.

Participants

A convenience sample of 1,972 respondents was recruited using mall intercept interview techniques. The distribution of participants to condition by test is presented in Table 2. Participants had to meet the following criteria: women at least 18 years of age; primary residence within 100 miles of the mall; principal grocery shopper for their family; and employed by firms not specializing in market research, advertising, or consumer goods manufacturing. Those willing to participate were escorted to the agency's offices, located in the mall, in order to complete the interview.

Table 2. Number of Respondents by Condition for Each of the Split Ballot Tests

            CAPI     PAPI    Total N
Test 1       347      469      816
Test 2       257      336      593
Test 3       250      313      563
Total        854    1,118    1,972

The Interview

Interviews were conducted in semi-private areas in the agency's facilities in the mall. The interviewer sat across from the respondent. Only the interviewer was able to see the interview script. In the CAPI condition, the interviewer was instructed to have the screen of the desktop or portable computer angled such that the respondent was unable to view the screen. The interviewer read the interview script to the respondent as it appeared on the screen. Responses to closed-ended questions were recorded directly into the computer. For open-ended questions, the interviewer had the option to enter the verbatim responses as the respondent was speaking (simultaneously) or after the interview had been concluded. If the interviewer chose the latter option, then the responses were recorded onto a separate piece of paper to be typed after the respondent had left the facility. In the first split ballot test, this option was not provided; responses were recorded on paper and entered into the computer at the end of the interview. For the other two tests this option was provided. Approximately 95 percent of the interviews were recorded directly into the computer.

Participants reviewed a mock advertisement for a new consumer product. Questionnaire wording was the same in both the CAPI and PAPI conditions. After reading the mock advertisement, the respondent was asked her likelihood of purchasing the new product. If the respondent expressed a positive purchase intent, then she was asked the question, "What is there that you think you would like about this new product?" followed by "What is there that you think you would dislike about this new product?" If the respondent expressed a neutral or negative purchase intent, then she was asked these two questions in the reverse order. Responses were probed and clarified a maximum of three times. To ensure that procedures were followed properly, field auditors were assigned to assess the agency's adherence to company standards.
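The interview flow just described (a purchase-intent question, followed by the like and dislike questions in an order that depends on the answer, with a capped number of probes) is the kind of branching a CAPI script encodes so that the interviewer never follows a skip pattern by hand. The sketch below is a minimal, hypothetical illustration of that flow in Python; it is not the firm's actual interviewing software, and the function and variable names are invented for the example.

```python
# Minimal, hypothetical sketch of the CAPI question flow described above.
# Not the market research firm's actual software; all names are illustrative.

MAX_PROBES = 3  # responses were probed/clarified at most three times

PURCHASE_INTENT_SCALE = [
    "Definitely Would Buy", "Probably Would Buy",
    "Might or Might Not Buy", "Probably Would Not Buy",
    "Definitely Would Not Buy",
]

def ask(prompt: str) -> str:
    """The interviewer reads the prompt aloud and types the verbatim answer."""
    return input(prompt + "\n> ").strip()

def ask_open_ended(prompt: str):
    """Record an open-ended answer plus up to MAX_PROBES clarifications."""
    answers = [ask(prompt)]
    for _ in range(MAX_PROBES):
        probe = ask("What else? (press Enter if nothing further)")
        if not probe:
            break
        answers.append(probe)
    return answers

def run_interview() -> dict:
    record = {}
    record["purchase_intent"] = ask(
        "After reading the advertisement, how likely would you be to buy "
        "this product? " + " / ".join(PURCHASE_INTENT_SCALE)
    )
    positive = record["purchase_intent"] in PURCHASE_INTENT_SCALE[:2]
    like_q = "What is there that you think you would like about this new product?"
    dislike_q = "What is there that you think you would dislike about this new product?"
    # Question order depends on expressed purchase intent, as in the split ballot tests.
    order = [like_q, dislike_q] if positive else [dislike_q, like_q]
    record["open_ended"] = {q: ask_open_ended(q) for q in order}
    return record

if __name__ == "__main__":
    print(run_interview())
```

Because the branching lives in the script rather than in the interviewer's head, the same logic could drive either the CAPI screen or a printed PAPI routing instruction; only the recording medium differs.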
The interviewer read the interview script to the respondent as it appeared on the screen. Responses were recorded directly into the computer for closed-ended questions. For open-ended questions, the interviewer had the option to enter the verbatim responses as the respondent was speaking (Simultaneously) or after the interview had been concluded. Ifthe interviewer chose the latter option, then the responses were recorded onto a separate piece of paper to be typed alter the respondent had left the facility. In the first split ballot test, this option was not provided; responses were recorded on paper and entered into the computer at the end of the interview. For the other two tests this option was provided. Approximately 95 percent of the interviews were recorded directly into the computer. Participants reviewed a mock advertisement for a new consumer product. Questionnaire wording was the same in both the CAPI and PAPI condition. After reading the mock advertisement, the respondent was asked her likelihood of purchasing the new product. If the respondent expressed a positive purchase intent, then she was asked the question, "What is there that you think you would like about this new product?” followed by ”What is there that you think you would dislike about this new product?" Ifthe respondent expressed a neutral or negative purchase intent, then she was asked these two questions in the reverse order. Responses were probed and clarified a maximum of three times. To ensure that procedures were followed properly, field auditors were assigned to assess the agency's adherence to company standards. 28 Coding the Open-Ended Responses Professional coders associated with the market research firm read and coded the responses to the same question in both the CAPI and PAPI conditions.2 Statements were coded into appropriate categories based on the most detailed statement given by the respondent. In other words, if the interviewer probed a response, the clarified response was coded for subsequent analysis. Responses were parsed and coded based on thought units. A thought unit is a distinct, detailed idea conveyed by a respondent. For example, the size of the product is distinct from its price; the price in general (e. g., “it’s inexpensive”) is distinct from the price of the product in comparison to other comparable products on the market (e. g., “it’s cheaper than Brand X”). The coder examined each distinct thought contained in the response. Each thought was coded only once, into positive or favorable (i.e., "likes") or negative or unfavorable (i.e., "dislikes”) responses regarding the product based on the fully probed response. If a negative thought was given in response to the ”likes” question, it was coded as a "dislikes". Total thought units per respondent were calculated for both open-ended questions (i.e., “What is there that you think you would like . . . ?” and “What is there that you think you would dislike . . . ?” ). This measure was calculated by summing the total number of 2 An exact intercoder reliability estimate is not calculable given the coding procedures. Training procedures for coders include a period of double coding until the trainee reaches a high degree of agreement with the trainer. At that juncture, the need for double checking of work is eliminated. Most of the studies center on consumer food or health and beauty aid products and responses to open-ended questions tend to be similar across tests. 
Tables 3 and 4 contain the frequency of total thought units and descriptive statistics for favorable and unfavorable comments, respectively, by condition for each of the three split ballot tests.

Table 3. Frequency of Total Thought Units in Response to the Question "What is there that you think you would like about this product?" by Condition for Each Split Ballot Test

Number of Total        Test 1          Test 2          Test 3
Thought Units        CAPI   PAPI     CAPI   PAPI     CAPI   PAPI
 0                     20     27       17     25       43     47
 1                      0      5        1      0        2      2
 2                     46     62       30     49       16     12
 3                     65     90       32     69       44     49
 4                     75     91       19     28       10     13
 5                     55     81       57     60       59     55
 6                     48     59       30     47       31     54
 7                     19     36       29     29       16     43
 8                     14     10       19     13       18     29
 9                      3      3       13     12        7      6
10                      1      3        5      1        4      2
11                      0      2        3      3        0      1
12                      1      0        1      0        0      0
13                      0      0        1      0        0      0
Mean                 4.16   4.12     4.97   4.34     4.18   4.61
Mode                 4.00   4.00     5.00   3.00     5.00   5.00
Median               4.00   4.00     5.00   4.00     5.00   5.00
Std. Dev.            2.00   2.01     2.60   2.34     2.64   2.62
N                     347    469      257    336      250    313

Table 4. Frequency of Total Thought Units in Response to the Question "What is there that you think you would dislike about this product?" by Condition for Each Split Ballot Test

Number of Total        Test 1          Test 2          Test 3
Thought Units        CAPI   PAPI     CAPI   PAPI     CAPI   PAPI
 0                    248    359      161    203      176    221
 1                      3      9       25     36        0      0
 2                     81     82        4      3       24     28
 3                     12     13       44     66       34     40
 4                      3      4       13     23        4      5
 5                      0      2        8      5        8     16
 6                      0      0        2      0        2      0
 7                      0      0        0      0        2      3
Mean                 0.61   0.51     1.05   1.06     0.93   0.95
Mode                 0.00   0.00     0.00   0.00     0.00   0.00
Median               0.00   0.00     0.00   0.00     0.00   0.00
Std. Dev.            1.01   0.98     1.58   1.50     1.58   1.62
N                     347    469      257    336      250    313

CHAPTER 4
RESULTS

Three split ballot tests were conducted in order to compare computer-assisted personal interviewing (CAPI) and paper-and-pencil interviewing (PAPI) on responses to open-ended questions during a face-to-face interview. Each split ballot test contained two questions regarding the respondent's liking or disliking of a new consumer product. In Test 1, responses to the open-ended questions were entered into the computer after the completion of the interview. In Tests 2 and 3, 95 percent of the interviewers entered responses directly into the computer. The total number of favorable and unfavorable thoughts was calculated for each respondent. This chapter (1) compares favorable comments about the product by condition; (2) compares unfavorable comments about the product by condition; and (3) examines the effect of question order and liking of the product on responses by condition.

Distinction Between Liking the Product by Condition

To test the impact of CAPI on responses to the open-ended questions, group means of total thought units for favorable and unfavorable comments were compared using two-tailed t-tests. In Test 1, there was no difference in the total favorable thought units overall between the CAPI (M = 4.16) and PAPI (M = 4.12) conditions, t(814) = -.28, p > .05. In Test 2, significant differences were found in the total favorable thought units overall between the CAPI (M = 4.97) and PAPI (M = 4.34) conditions, t(591) = -3.13, p = .002. In Test 3, the difference in the total favorable thought units overall between the CAPI (M = 4.18) and PAPI (M = 4.61) conditions was non-significant, t(561) = 1.90, p > .05.
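For readers who want to reproduce this kind of mode comparison, the sketch below shows how a two-tailed, pooled-variance t-test on per-respondent thought-unit totals could be run in Python with SciPy. The arrays are invented placeholders, not the thesis data; only the Test 1 sample sizes are mirrored.

```python
# Hypothetical sketch of the mode comparison described above, using SciPy.
# `capi_units` and `papi_units` stand in for per-respondent totals of favorable
# thought units; the values here are simulated placeholders, not the thesis data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
capi_units = rng.poisson(lam=4.2, size=347)   # placeholder for the 347 CAPI totals
papi_units = rng.poisson(lam=4.1, size=469)   # placeholder for the 469 PAPI totals

# Two-tailed independent-samples t-test with pooled variance, matching the
# reported degrees of freedom (n1 + n2 - 2 = 814 in Test 1).
t_stat, p_value = stats.ttest_ind(capi_units, papi_units, equal_var=True)
df = len(capi_units) + len(papi_units) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```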
Distinctions Between Disliking the Product by Condition

For Test 1, no significant difference was found in the overall number of unfavorable thought units between the CAPI (M = .61) and PAPI (M = .51) conditions, t(814) = -1.52, p > .05. For Test 2, no significant difference was found in the overall number of unfavorable thought units between the CAPI (M = 1.04) and PAPI (M = 1.06) conditions, t(591) = .12, p > .05. Similarly, for Test 3 no significant difference was found in the overall number of unfavorable thought units between the CAPI (M = .93) and PAPI (M = .95) conditions, t(561), p > .05.

Examinations were undertaken to determine if the CAPI condition led individuals to provide either more or less positive or negative thoughts as compared to PAPI. Again, group means of total thought units for liking and disliking the product were compared for each of the following three groupings: (1) those expressing positive purchase intent ("Definitely Would Buy" or "Probably Would Buy"); (2) those expressing neutral purchase intent ("Might or Might Not Buy"); and (3) those expressing unfavorable purchase intent ("Probably Would Not Buy" or "Definitely Would Not Buy"). No significant differences were found for these comparisons, except in Test 2 for those who expressed a favorable purchase intent (CAPI M = 5.68 and PAPI M = 4.98, t(319) = -2.66, p = .01).

Examination of Interview Question Order by Condition

In addition, to account for any effects due to question order, a three-way Analysis of Variance with repeated measures was used for each test. The factors were mode (CAPI or PAPI), affect toward the product (positive or neutral/negative purchase intent), and question order (asking the liking question before asking what the respondent disliked about the product, or vice versa). ANOVA results show significant main effects for question order and affect toward the product for all three tests. Question order and affect toward the product impact the total number of favorable or unfavorable thought units. This similarity in results would be expected since question order was determined by affect toward the product. Intuitively, it would also be expected that those with higher affect toward the product mention more favorable codable comments (i.e., liking) than those with lower affect toward the product. Similarly, those with lower affect toward the product would be expected to mention more unfavorable codable comments (i.e., disliking) than those with higher affect toward the product. ANOVA results for each of the split ballot tests appear in Tables 5, 6, and 7.

For Test 1, in which interviewers recorded open-ended responses on paper first and then entered them into the computer, no main effect for mode was found for either open-ended question. For the other two tests, in which responses for the majority of respondents were entered directly into the computer, a main effect for mode was found only in Test 2, F(1, 589) = 7.15, p < .01. Although the difference in means is significant, the effect size for mode associated with the difference of means (M CAPI = 4.97, M PAPI = 4.34) is small (r = 0.055). This suggests that the result of this significance test may be attributable to the large sample size (n = 593) rather than a true effect.
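Before turning to the full ANOVA tables (Tables 5 through 7), the sketch below shows one way a three-factor analysis of this kind could be set up in Python with statsmodels. It is a simplified, hypothetical illustration on invented data: unlike the thesis analysis, it treats every factor as between-subjects and therefore omits the repeated-measures error terms that appear in the tables, and all variable names are placeholders.

```python
# Simplified, hypothetical sketch of a three-factor ANOVA (mode x affect x order).
# The DataFrame is invented placeholder data; this sketch treats all factors as
# between-subjects and does not reproduce the repeated-measures design of the thesis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 816  # Test 1 sample size, used here only to size the fake data
df = pd.DataFrame({
    "mode": rng.choice(["CAPI", "PAPI"], size=n),
    "affect": rng.choice(["positive", "neutral_negative"], size=n),
    "order": rng.choice(["like_first", "dislike_first"], size=n),
    "thought_units": rng.poisson(lam=4, size=n),
})

# Full factorial model with all main effects and interactions.
model = smf.ols("thought_units ~ C(mode) * C(affect) * C(order)", data=df).fit()
print(anova_lm(model, typ=2))   # F-tests for main effects and interactions
```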
Table 5. Analysis of Variance Results for Test 1

Source of Variation            SS      DF        MS         F    p value
Subjects                  1526.62     812      1.88
Mode                         1.73       1      1.73       .92      .337
Order                       13.26       1     13.26      7.05      .008
Mode by Order                2.36       1      2.36      1.26      .262
Subjects                  2094.57     812      2.58
Affect                    4562.57       1   4561.31   1768.28      .000
Mode by Affect                .18       1       .18       .07      .794
Order by Affect            430.89       1    430.89    167.04      .000
Mode by Order by Affect      6.78       1      6.78      2.63      .105

Table 6. Analysis of Variance Results for Test 2

Source of Variation            SS      DF        MS         F    p value
Subjects                  2006.74     589      3.41
Mode                        24.37       1     24.67      7.15      .008
Order                        5.46       1      5.46      1.60      .206
Mode by Order                1.14       1      1.14       .34      .562
Subjects                  2393.94     589      4.06
Affect                    3413.51       1   3413.51    839.85      .000
Mode by Affect              10.36       1     10.36      2.55      .111
Order by Affect            539.33       1    539.33    132.70      .000
Mode by Order by Affect      6.38       1      6.38      1.57      .211

Table 7. Analysis of Variance Results for Test 3

Source of Variation            SS      DF        MS         F    p value
Subjects                  2400.42     559      4.29
Mode                         9.32       1      9.32      2.17      .141
Order                      103.03       1    103.03     23.99      .000
Mode by Order                 .01       1       .01       .00      .956
Subjects                  2323.63     559      4.16
Affect                    3324.90       1   3324.90    799.88      .000
Mode by Affect               3.86       1      3.86       .93      .336
Order by Affect            482.56       1    482.56    116.09      .000
Mode by Order by Affect       .62       1       .62       .15      .700

CHAPTER 5
SUMMARY AND CONCLUSIONS

The use of the personal computer in the interview situation is a task variable through which bias may enter the survey interview situation. Prior research has examined the impact of computer-assisted personal interviewing (CAPI) on responses to closed-ended questions. This study examined the impact of the use of computers in a personal interview situation on responses to open-ended questions. Results indicate that the total number of favorable and unfavorable thought units did not differ by interview mode. While two significant differences were found and one difference approached significance, with the number of statistical tests conducted, it would be expected that these results would occur due to chance alone, especially given the large sample size. Moreover, across the three split ballot tests, neither mode consistently elicited more thoughts per respondent.

It is important to note that the topics utilized in this investigation (i.e., new consumer food products) may not have been highly salient for the participants. Since inexpensive consumer goods may not be considered a risky investment by consumers, it is plausible that responses reflected a low level of perceived salience, or importance, regarding the product. The low level of salience, or personal relevance, may have contributed to the findings. It is possible that for more highly salient topics, mode effects for open-ended questions between CAPI and PAPI may exist. Should mode effects occur, additional interviewer training or alternative probing strategies may prove fruitful in guarding against a mode effect.

The estimated time needed to respond to these two open-ended questions is approximately one to two minutes apiece. These questions were asked early in the interview. These factors may not provide the opportunity for mode effects to appear because (1) the interchange duration is short and (2) the opportunity for violations of conversational norms to be recognized by the participant may not have been fully manifest.
To address these concerns before concluding that no mode effect exists, it is suggested that future research consider two additional variables: (1) the length of time needed to respond to the open-ended questions; and (2) the relative position of the questions in the questionnaire (e.g., beginning, middle, or end).

It may be insightful to consider whether the number of words needed to express each thought unit, or pronominal use, differs by mode (see the illustrative sketch at the end of this chapter). Strategic use of pronouns (e.g., a lack of “I” statements) may indicate a more impersonal survey situation or may be a function of the data collection mode. While consideration should be given to the number and type of words used to express the same thought unit, the thought unit is most often the unit of analysis for examining open-ended questions. Thus, future studies investigating mode effects for open-ended questions should continue to focus on the thought unit.

It also may be insightful to consider whether the number of probing questions asked, and their phrasing, differ by mode. Probing questions are used to clarify respondents’ thoughts. To obtain the same amount of information, interviewers may ask either different or a greater number of probing questions within the situation created by each mode. Examining these variables may suggest that alternative probing strategies are more effective for a given mode.

Unlike prior research on computer-assisted interviewing, this study did not use longitudinal data sets. Using cross-sectional data counters the concern that the interview task and the topic are no longer novel to the respondents. This is not to say that participants in this study had never been interviewed before; rather, their previous experience with this survey organization and topic may be more limited. In turn, their commitment to the success of the project may be less than that of participants in long-term studies.

Does the use of computer-assisted personal interviewing affect the number of open-ended responses in the microsocial system of an interview situation? The present results suggest that the new mode of computer-assisted data collection does not significantly alter the interview situation with respect to responses to open-ended questions. However, before concluding that CAPI and PAPI are interchangeable, investigation of the other variables mentioned here (e.g., length of question response, relative position in the questionnaire, salience of the topic) is warranted.
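As a purely illustrative sketch of how the word-count and pronoun measures suggested above might be operationalized in a follow-up study, the fragment below assumes a hypothetical set of coded thought units per respondent; the variable names and sample records are invented for illustration and are not part of the present study's coding procedure.

```python
# Illustrative sketch (hypothetical data and field names): words per thought
# unit and the share of first-person "I" statements, computed by interview mode.
from statistics import mean

# Each record holds one respondent's interview mode and coded thought units.
responses = [
    {"mode": "CAPI", "units": ["I like the taste", "convenient package"]},
    {"mode": "PAPI", "units": ["too expensive", "I would not buy it often"]},
]

def words_per_unit(units):
    """Average number of words used to express each thought unit."""
    return mean(len(u.split()) for u in units)

def i_statement_rate(units):
    """Proportion of thought units that begin as first-person 'I' statements."""
    return mean(u.lower().split()[0] == "i" for u in units)

for mode in ("CAPI", "PAPI"):
    units = [u for r in responses if r["mode"] == mode for u in r["units"]]
    print(mode, round(words_per_unit(units), 2), round(i_statement_rate(units), 2))
```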