This is to certify that the thesis entitled "An Adaptation of Programmed Instruction for Selected Telecommunication Audience Analysis Concepts: Development and Evaluation" presented by Erik Linn Fitzpatrick has been accepted towards fulfillment of the requirements for the Ph.D. degree in Higher Education.

Major professor

ABSTRACT

AN ADAPTATION OF PROGRAMMED INSTRUCTION FOR SELECTED TELECOMMUNICATION AUDIENCE ANALYSIS CONCEPTS: DEVELOPMENT AND EVALUATION

By Erik Linn Fitzpatrick

The purpose of this study was to provide a viable aid in the teaching and learning of selected telecommunication audience analysis fundamental concepts. Toward that end, the specific purposes of the study were:

1) To develop and evaluate for retention and application effectiveness, in relation to traditional classroom lecture-recitation instruction, a series of short, branching programmed instruction units dealing with broadcast audience studies fundamentals.

2) To evaluate the developed programmed units in regard to their relative teaching efficiency; that is, learner time expended in grasping the programmed content as compared to learner time consumed in conventional classroom instruction.

3) To assess student differences in attitudes toward the developed audience analysis programmed instruction units and conventional classroom lecture instruction.

It was suggested that programmed instruction was a promising method by which to improve the learning process at the undergraduate fundamental level and that a systematic investigation should be made of this instructional procedure.

Forty undergraduate Television and Radio students enrolled in a Broadcast Management course at Michigan State University were randomly assigned to one of two instructional conditions: non-directed branching programmed instruction, and conventional classroom lecture instruction. An objective post-test over the content of three prepared instructional units for both experimental groups and a Likert scale questionnaire dealing with student attitudes were administered to each of the subjects. Mean learner time spent with the programmed instruction treatment was compared with the known mean time taken for conventional lecture instruction. Cognitive retention/application post-test student scores, total subject attitudinal scores and mean learner times were analyzed for statistically significant differences between experimental groups using a portion of the Finn Univariate and Multivariate Analysis of Variance, Covariance and Regression computer program for the post-test-only control group research design.

The results of this study (all statistically significant differences are at the .05 level of confidence) can be summarized as follows:

a) Students taught via programmed instruction scored significantly higher than students receiving conventional classroom lecture instruction on the same objective post-test, which assessed retention and limited application of telecommunication audience analysis fundamental information.

b) Programmed instruction subjects spent significantly less time than traditional classroom lecture subjects in learning identical material by means of non-directed programmed instruction.
c) Students taught selected audience analysis fundamentals through programmed instruction possessed significantly more favorable attitudes toward their mode of instruction than did students taught by conventional classroom lecture means.

In brief, experimental subjects taught selected telecommunication audience analysis concepts by programmed instruction retained significantly more information, consumed significantly less time in instruction and reported significantly more favorable attitudes toward their mode of instruction than did students taught via conventional classroom lecture presentation.

AN ADAPTATION OF PROGRAMMED INSTRUCTION FOR SELECTED TELECOMMUNICATION AUDIENCE ANALYSIS CONCEPTS: DEVELOPMENT AND EVALUATION

By Erik Linn Fitzpatrick

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

Department of Administration-Higher Education

1974

ACKNOWLEDGMENTS

With sincere appreciation I wish to thank Dr. William Sweetland for his curricular guidance, professional counsel and constant encouragement. Special thanks are extended to Dr. John Abel for his unselfish willingness to direct the experimental research and his perceptive service in the compilation and editing of the final manuscript. I am also most grateful to the other members of my Guidance Committee, Drs. Norman Bell and Richard Featherstone, without whose respective research interests and faculty efficiency concerns the present study might never have reached fruition.

Next, I would like to express my indebtedness to Mr. Robert Carr, who assisted in processing the collected data, and Mrs. Virginia Foster, who assisted with the typing and reproduction of the programmed instructional units. Appreciation is also extended to Dr. Leroy Olson for his statistical aid in implementing the item analyses for the cognitive post-test instrument.

To Deborah, I wish to express my unreserved gratitude for her seemingly infinite patience, unfailing inspiration and otherwise indefatigable personal support.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

Chapter
I. THE PROBLEM, RATIONALE, AND RELATED RESEARCH
   Need
   Purpose
   Theory and Related Research
      Programmed instruction-related
      Attitude-related
      Mass communication instruction-related
   Hypotheses
   Overview

II. EXPERIMENTAL DESIGN AND METHODOLOGY
   Subjects
   Instructor
   Treatments
   Procedures
   Instruments
   Testable Hypotheses
   Design and Statistical Analysis
   Summary

III. ANALYSIS OF RESULTS
   Dependent Variable: Cognitive Achievement
   Dependent Variable: Time Consumed
   Dependent Variable: Attitudes
   Summary

IV. SUMMARY AND DISCUSSION
   Summary
   Discussion
   Suggestions for Further Study

APPENDICES
A. PROGRAMMED INSTRUCTION FOR SELECTED AUDIENCE ANALYSIS CONCEPTS
B. BEHAVIORAL OBJECTIVES FOR AUDIENCE ANALYSIS INSTRUCTION UNITS
C. DIRECTIVE TO CONVENTIONAL LECTURE SUBJECTS
D. DIRECTIVE TO PROGRAMMED INSTRUCTION SUBJECTS
E. CONVENTIONAL LECTURE ATTITUDINAL INSTRUMENT
F. PROGRAMMED INSTRUCTION ATTITUDINAL INSTRUMENT
G. COGNITIVE TEST FOR AUDIENCE ANALYSIS CONCEPTS
H. ITEM ANALYSIS DATA FOR COGNITIVE POST-TEST
I. CROSS TABULATION OF PAIRED BIPOLAR ATTITUDINAL QUESTIONNAIRE ITEMS
J. TIMES CONSUMED BY CONVENTIONAL LECTURE AND PROGRAMMED INSTRUCTION SUBJECTS

LIST OF REFERENCES

LIST OF TABLES

Table
3.1 Univariate ANOVA - cognitive achievement
3.2 Cell means and standard deviations - achievement
3.3 Univariate ANOVA - mean and adjusted mean time
3.4 Cell means and standard deviations - time consumed
3.5 Univariate ANOVA - attitude scores
3.6 Cell means and standard deviations - attitudes

APPENDIX H ITEM ANALYSIS DATA FOR COGNITIVE POST-TEST
APPENDIX I CROSS TABULATION OF PAIRED BIPOLAR ATTITUDINAL QUESTIONNAIRE ITEMS
APPENDIX J TIMES CONSUMED BY CONVENTIONAL LECTURE AND PROGRAMMED INSTRUCTION SUBJECTS

LIST OF FIGURES

Figure
1. Graphic representation of the post-test-only control group research design
2. Fourfold representation of possible subject attitudes toward instruction

CHAPTER I

THE PROBLEM, RATIONALE, AND RELATED RESEARCH

Need

Virtually all national higher education commissions have warned that the years ahead for American colleges and universities will be characterized by general parsimony in funding and intense pressures for faculty teaching efficiency. Academic departments of telecommunication can claim no exemption from the future's financial realities. As the demand for ever more faculty productivity intensifies (many faculty members are currently over-extended in the diverse areas of teaching, research, administrative internal duties and community service activities), telecommunication faculty members must discover more efficient means than conventional lecture-recitation by which to convey the fundamentals of their undergraduate subjects, so as to reserve their valuable "free time" and occasional classroom hours for the kinds of teacher-student interactions which are totally inimitable.

Certainly programmed instruction cannot and should not supplant face-to-face "classroom" discussion, but it can perhaps provide the basic concepts with which potential telecommunication practitioners and scholars must be equipped, while releasing faculty personnel from the unproductive drudgery of continually presenting the somewhat mundane, but nonetheless essential, principles of broadcast studies to each new succession of undergraduate students.

Traditionally content to subject their students to a seemingly endless chain of in-classroom "courses," telecommunication divisions have been remiss in exploring newer ways by which students might be taught--a reluctance which in the future could lead to a loss of student, industry and general public trust. A mode or modes of instruction must be developed--this study suggests branching programmed instruction--by which mature students of telecommunication audience analysis can gain a rudimentary understanding of the underlying principles within their subject area without physically attending scheduled "classes" day after day, month after month. If broadcast faculty members are to approach maximum productivity, given available resources, alternatives to traditional undergraduate classroom lecture-recitation for instruction in fundamentals must be actively sought.

Purpose

The purpose of this study is to provide a viable aid in the teaching and learning of telecommunication audience analysis fundamentals. Toward that end, the specific purposes of the study are:

1) To develop and evaluate for retention and application effectiveness, in relation to traditional classroom lecture-recitation instruction, a series of short, branching programmed instruction units dealing with broadcast audience studies fundamentals.
2) To evaluate the developed programmed units in regard to their relative teaching efficiency; that is, learner time expended in grasping the programmed content as compared to learner time consumed in conventional classroom instruction.

3) To assess student differences in attitudes toward the developed audience analysis programmed instruction units and conventional classroom lecture instruction.

Theory and Related Research

This study's components draw from three diverse areas of knowledge: the extremely broad subject of programmed instruction, the indefinite domain of attitudes and the yet emerging field of mass communication-related instruction. Reviewed literature relating to these three spheres is cited.

Programmed instruction-related

The relatively brief history of programmed instruction began with the research of Pressey (1926, 1927, 1932). Initially, he was interested in the scoring of tests and built a machine that did not move to the next question until the student correctly answered the question he or she was currently studying. Thus, the student knew immediately whether or not he or she was correct; the student received immediate feedback. Pressey found that students using this machine and getting feedback actually learned the material on which they were being tested. But Pressey had no real influence on educational theory or practice at the time he published his results. His work was generally ignored by educational institutions for a number of years.

Programmed instruction did not reach general attention and influence until the publication of Skinner's article in 1954, "The Science of Learning and the Art of Teaching." According to Skinner, a child performed learning tasks in order to escape aversive stimuli. He suggested that principles of learning theory be employed in a method of teaching. Later, Skinner (1954, 1958) described a machine in which the important features were immediate reinforcement for the right answer and individually self-paced learning. The material is presented in sequential order and the student responds overtly to the material. There is an interaction between the student and the material to be learned. On the basis of reinforcement theory, the student is rewarded immediately for each correct response. Such a program usually consists of small steps or frames, each containing 20 to 30 words. This is a stimulus-response pattern with immediate feedback of results. Skinnerian programs are called linear programs due to their step-by-step presentation. Skinner's ideas were accepted by many who were interested in improving practical aspects of education.

Crowder (1959, 1960) developed what is termed branching programmed instruction. He asserted that people in daily life are confronted with situations in which they must make decisions and that in these circumstances they make mistakes. When a learner makes a mistake, Crowder stated, he or she should be branched. The term "branched" means that the material to be learned is explained to the learner in a new way or the learner is directed to a remedial sub-routine. Thus, a branching frame usually contains more material than a linear frame. Answers to the frames determine how the student is branched through the different parts of the program and how many frames he or she will study. Some of the characteristics of programmed learning are common to all methods.
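Before turning to those common characteristics, Crowder's routing logic can be made concrete with a brief sketch. The Python fragment below is a minimal illustration of a branching frame structure, not a reproduction of the study's actual units; the frame text, answer options, and routing table are all hypothetical.

```python
# Minimal sketch of Crowder-style branching (hypothetical frames).
# Each frame carries its material plus a routing table: the learner's
# answer selects the next frame, so an incorrect answer can branch to
# a remedial sub-routine rather than simply advancing.

frames = {
    "F1": {
        "text": "A rating of 20 means 20% of TV homes tuned in. (true/false)",
        "routes": {"true": "F2",      # correct: advance to the next frame
                   "false": "F1R"},   # incorrect: branch to remediation
    },
    "F1R": {
        "text": "Remedial frame: a rating is a percent of ALL TV homes. (ok)",
        "routes": {"ok": "F1"},       # return to the original frame
    },
    "F2": {"text": "End of sequence.", "routes": {}},
}

def run(start="F1"):
    frame = start
    while frames[frame]["routes"]:
        print(frames[frame]["text"])
        answer = input("> ").strip().lower()
        # An unrecognized answer simply re-presents the current frame.
        frame = frames[frame]["routes"].get(answer, frame)
    print(frames[frame]["text"])

if __name__ == "__main__":
    run()
```

Because each answer carries its own destination, a wrong response lengthens the learner's path through the program, which is precisely why the number of frames studied (and the time consumed) varies from student to student under branching instruction.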
Macdonald-Ross (1969) summarized the research on programmed instruction in the 1960s and listed several common characteristics of programmed learning. First of all, programmed learning requires clearly stated objectives. They are necessary in order to define precisely what is going to be taught (what the student is to know at the completion of instruction) and to enable the development of ways of measuring whether learning has in fact taken place (criterion test). The objectives should be behavioral objectives, written in observable and measurable terms.

Another characteristic is the feedback control made available in programmed instruction; the immediate checking of the right answer serves as a reinforcer. Anderson, Kulhavy and Andre (1971) reported a study of 356 subjects who completed a computer-based instructional lesson which ensured that the subject responded before he or she received knowledge of the correct response (KCR). Students who received KCR after every frame performed better on the criterion test than students who were tested on the same criterion without receiving KCR. It was suggested that students attempting to read the KCR before responding performed poorly on the criterion test.

An additional common characteristic of programmed instruction is the fact that it is essentially an individualized method of instruction. Each individual progresses at his or her own pace in making responses and receiving feedback, or in the case of branching programs, follows a unique path through the learning process according to individual responses.

To this point the theory and assumptions underlying linear and branching programmed instruction have been presented. Another vital question is: Do students learn from programmed instruction? Schramm (1964) in his review of the research on programmed instruction mentioned 36 experiments comparing different types of programs with conventional classroom instruction. Sixteen of these experiments were done in colleges, four in secondary schools, five in primary schools, 10 with adults and one with retarded children. Of the 36 experiments, 18 showed no significant differences when the two groups were measured on the same criterion test. Seventeen showed a significant increase for classroom students working with programmed instruction, and only one showed a favorable significant increase for students taught in a conventional manner. In addition, a number of studies have shown that programmed material is more effective or efficient for retention of information, learner time spent and application of learned material than are prose-text and conventional modes of instruction (Pressey, 1963; Hough, 1962b).

Roe (1962) reported research conducted with 189 freshman engineering students. The program consisted of an introduction to probability concepts written in linear and branching forms. Significant learning occurred with both programs, but when considering programs separately, there were no significant differences between any of the simple branching methods and the linear program when measured by the amount of learning that took place. However, it was determined that a logically sequenced program resulted in more learning over time than did a randomly sequenced program.

Conversely, Shull (1969) reported a difference between branching and linear programmed instruction. One hundred and twenty students enrolled in an industrial arts course were assigned to one of three groups: control, linear and branching. The control group received only a post-test.
The linear group received linear programmed instruction and the post-test, and the branching group received branching programmed instruction and the post-test. The findings indicated a significant difference between the three experimental groups at the .05 level of significance in favor of the branching group.

Attitude-related

An additional issue relevant to this study is the attitude of a given student exposed to programmed instruction. Dawes (1972) has described the nebulous nature of attitudes in the following manner:

In 1935 Gordon Allport observed that "attitudes are measured more successfully than they are defined." This statement is still true today (p. 2).

Although the term "attitude" is elusive, Osgood (1956) stated that attitudes are (1) learned, (2) implicit and (3) the basis for an evaluative response. Thus, attitudes are referred to as "tendencies of approach or avoidance" or as "favorable or unfavorable." Zimbardo and Ebbesen (1969) summarized the general evaluative nature of attitudes:

Attitudes have generally been regarded as either mental readinesses or implicit predispositions which exert some general and consistent influence on a fairly large class of evaluative responses (p. 34).

Katz and Stotland (1959) considered that attitudes have three major components. The information component forms the foundation on which the attitude is built. The affective component involves feelings, and it is this segment which attitude scales attempt to measure. The third component, the action component, represents the extent to which the attitude has associated with it any habits of action.

Dealing specifically with attitudes toward programmed instruction, O'Toole (1964) stated that there is evidence that attitudes will be a strong factor in the successful implementation of programmed instruction in a school setting. Nauman (1962) described the performance reactions of approximately 40 college students who worked the first third of the Holland-Skinner psychology program written in 1961. Seventy-five percent of the subjects indicated that without a teaching machine they would have learned much less from the course. The use of self-instructional programs was favored by 64 percent of the group. Other reactions of students reported in this study include: (1) about one-fourth of the students felt that at some point during the course they were treated like experimental organisms, and a small minority felt that the use of teaching machines reflected adversely upon their dignity as human beings. (2) About two-thirds of the group stated that they thought the instructor was trying to teach as much as possible within the limits of basic reality consideration. (3) The final point was a statement about the possible lack of opportunities to reflect on the material learned and to consider its implication.

Tobias (1969) assessed the attitudes of students in educational psychology classes as they pertained to non-traditional media and conventional instruction. Student attitudes accounted for a substantial percentage of achievement variance. Subjects with attitudes favorable to conventional instruction tended not to learn as well from unfamiliar media such as programmed instruction. Scherman (1973) found student sex category to be a determinant of favorable or unfavorable attitudes to programmed instruction.
Male students expressed generally positive attitudes to programmed instruction, while females were equally split between favorable attitudes to programmed instruction and conventional instruction.

Mass communication instruction-related

There is a dearth of programmed instruction research (or any other form of instruction research) in the general area of mass communication course-work content, the reason for which is not altogether clear. Directing his remarks to America's educational broadcasters, Eurich (1968) stated that although educational systems generally lag behind other societal institutions in the adoption of innovations, higher education is perhaps the most reluctant of all to change. Eurich's assertion finds support in the literature. Three research studies are reported, all of which deal with programmed instruction of selected journalism rudiments; two of the three studies are unpublished doctoral dissertations. No research is reported in the area of this study--broadcast audience analysis and programmed instruction.

Francois (1967) compared programmed instruction in beginning news writing to conventional instruction in the subject. He found no significant difference in news writing ability between the two experimental groups, but he reported that programmed instruction proved to be more efficient than conventional instruction, with learner time used as the measure of efficiency. Boredom with the linear programmed instruction and a lack of concrete, real-life experiences were the salient perceived weaknesses.

Prejean (1968) constructed a linear programmed unit on journalistic style with frames logically grouped and compared learning gain through programmed and conventional instruction. She found that programmed self-teaching outside of the classroom was as effective as conventional lecture in terms of retention and application, and that programmed instruction usage provided extra class hours for material requiring teacher-student clarification.

In measuring the effectiveness of programmed instruction as a method for teaching basics of elementary photography (film exposure and flash), Griffith (1969) found that in eight comparisons of programmed and conventional instruction, three showed no significant differences and five comparisons showed a significantly higher level of achievement for students receiving programmed instruction. The Griffith findings are consistent with those reported in the general programmed instruction literature, in which either no significant differences were found between programmed and conventional instruction in terms of retention and application, or when differences were found, they were generally in favor of programmed instruction. Griffith also concluded on the basis of his comparisons that non-directed programmed instruction was the most efficient form to utilize.

As aforementioned, programmed instruction research in this study's area of telecommunication audience analysis is nonexistent, but Meeske (1972) has alluded to the implementation of measurable behavioral objectives (the prime constituents of programmed instruction) in broadcast course-work instruction and the seemingly universal reluctance of broadcast faculty members to innovate:

It is possible to determine the skills a (broadcast) course is designed to impart, to devise tests to see if the students can demonstrate those skills, and to apply the tests. The simple fact is that little effort has been made to do so (p. 221).
Hypotheses

To explore the questions raised by the investigator, based upon the findings of the literature and general field observations, the following hypotheses were developed.

1. Students receiving branching programmed instruction will score significantly higher than students receiving conventional classroom lecture instruction on the same objective post-test that assesses retention and application of broadcast audience analysis fundamental information.

2. Students taught by means of branching programmed instruction will consume significantly less time than students taught via conventional classroom lecture instruction in learning identical telecommunication audience analysis fundamental concepts.

3. Students taught broadcast audience analysis fundamentals through branching programmed instruction will possess significantly more favorable attitudes toward their mode of instruction than will students taught by traditional classroom lecture means.

Overview

In Chapter I the problem was introduced, theory and related research were reviewed and hypotheses were stated. In Chapter II the subjects, treatments, procedures and instruments will be described; the research design identified; and the tested hypotheses, statistical analysis and level of significance presented. In Chapter III the results of the data analysis will be reported. Chapter IV will conclude the study with a discussion of the results and some suggestions for further research.

CHAPTER II

EXPERIMENTAL DESIGN AND METHODOLOGY

With the related literature reviewed, the hypotheses stated, and the purposes of the investigation described in the previous chapter, an experimental study was planned and subsequently conducted.

Subjects

Students enrolled during the Spring term of 1974 in Television and Radio 401 (Broadcast Management) at Michigan State University served as treatment subjects for the study. Of the total enrolled students available, approximately one-third were excluded from treatment because of their prior knowledge gained in Television and Radio 335 (Audience Studies); that is, any students reporting that they had experienced the audience studies course-work were excluded from any treatment for the purposes of the study.

The Television and Radio undergraduate subject population was preponderantly male--nearly 80 percent of the students were males. The studied subjects, with few exceptions, fell into an age range of 20 to 23 years. Without exception, all of the subjects were either Juniors or Seniors, and all were Television and Radio majors. After accounting for students excluded from the study due to prior knowledge and other miscellaneous exemptions (such as the inevitable few students withdrawing from the management course), the studied population consisted, quite conveniently, of precisely 40 students; 20 served as programmed instruction subjects and 20 as conventional lecture subjects.

Instructor

No formal instructor was anticipated or necessary for the non-directed programmed instruction component of the study. The conventional lecture component required an in-classroom instructor, and that instructor was the scheduled Television and Radio faculty member assigned to the broadcast management undergraduate class.

Treatments

A short course consisting of three units in selected telecommunication audience analysis fundamentals was designed by the investigator.
The designed fundamentals course was then written into two forms: branching programmed instruction (Appendix A), and paragraphed text to be used in conventional lecture. These instructional materials served as the treatment conditions for the study. The short audience analysis fundamentals course was built upon conventional, Mager-style behavioral objectives for discriminative forms of behavior (Appendix B).

Procedures

Within the studied Television and Radio Management class (excepting those students who were automatically excluded for the previously stated reasons), each individual student was randomly assigned to one of the two treatment groups--conventional lecture or programmed instruction. During the first class meeting of the term, students to be excluded from the study were identified through self-report. Approximately two weeks later--allowing ample time for class student numbers to stabilize due to the customary "drops" and "adds"--the random assignment of students to treatment groups was accomplished.

In actuality, students to be "excluded" from the study results did receive the conventional lecture presentation, but their cognitive scores and attitude indices were not calculated for the data analysis purposes of the study.

Students assigned to conventional lecture treatment were required by their instructor to attend two succeeding class periods (Appendix C), while subjects assigned to programmed instruction treatment were gathered at one predetermined time in a separate, reserved classroom to receive the developed non-directed programmed units (Appendix D). In short, subjects were given an instructional treatment corresponding to the experimental group to which they had been randomly assigned. The instruction was in one of two forms: conventional lecture, or branching non-directed programmed instruction.

Conventional lecture subjects were admonished to take clear, reasonably detailed class notes and to review those notes at their leisure. Programmed instruction students were advised to complete their individual units at one study session; that is, without undue interruption, just as one might receive a one-hour lecture-recitation. Programmed instruction subjects were also asked to review their completed instructional units at their leisure, and one important additional requirement was that each programmed instruction subject indicate on the front page of his or her instructional unit how much time in minutes was spent in completing the unit. Absolutely no "help" (instructor clarification of terms, intended meanings of phrases, etc.) was afforded the non-directed programmed instruction student, while conventional lecture subjects were allowed the usual classroom interactions characteristic of traditional instruction.

Instruments

Following the scheduled lecture and programmed instruction treatments, the two instructional treatment groups were assembled at the same hour in their usual classroom to evaluate the audience analysis "course" they had experienced through a 16-item Likert scale attitudinal questionnaire (Appendices E, F). Then, a 30-question application and retention post-test constructed from representative areas of the instructional treatment units was administered (Appendix G). Both of the evaluative instruments--the attitudinal questionnaire and the cognitive post-test--were developed by the investigator.
Item difficulty for the cognitive achievement post-test was determined via a standard item analysis using extreme criterion groups (Appendix H). During development of the post-test, the scores of tested subjects (Television and Radio majors who had experienced audience studies course-work) were selected at the upper 27 percent and lower 27 percent ends of the rating continuum, as suggested by Borg and Gall (1971), and a determination made as to whether different proportions of subjects in each extreme group answered a given test item correctly.

Attitudinal questions dealt with the clarity and organization of presented materials, time spent in receiving instructional presentations, the degree to which instruction motivated a given student to seek additional information in the subject area, overall stimulation from the course and other general affective "feelings" concerning the treatments. An attempt was made to check the consistency of subjects' responses to the attitudinal items by cross tabulating individual responses to three pairs of items considered to be bipolar in meaning; students reporting negative attitudes to one item might be expected to express positive attitudes toward another item with identical, but positively stated, content (Appendix I).

Testable Hypotheses

In order to assess the difference between the two treatments employed as represented by the measures used, and on the basis of the hypotheses developed and described in Chapter I, the following testable hypotheses were formulated:

1. H0: MP (cog. achiev.) = ML (cog. achiev.)
   H1: MP (cog. achiev.) ≠ ML (cog. achiev.)

   There will be no difference in post-test mean scores between the two treatment groups when the dependent variable is cognitive achievement.

2. H0: MP (time) = ML (time)
   H1: MP (time) ≠ ML (time)

   There will be no difference in minutes consumed for instruction between the two treatment groups when the dependent variable is time.

3. H0: MP (attitudes) = ML (attitudes)
   H1: MP (attitudes) ≠ ML (attitudes)

   There will be no difference in mean scores between the two treatment groups when the dependent variable is attitudes.

Key: H0 - null hypothesis; H1 - alternate hypothesis; M - mean of group; L - lecture; P - programmed instruction

Design and Statistical Analysis

The post-test-only control group design, using the notation of Campbell and Stanley (1966), can be graphically represented as shown in Figure 1.

Programmed Instruction    R    X    O1
Conventional Lecture      R         O2

Figure 1. Graphic representation of the post-test-only control group research design.
Key: R - random assignment of subjects to groups; O - post-test; X - treatment

According to Campbell and Stanley, the design is well suited to "avoiding an experimenter-introduced pre-test session, and in avoiding the 'giveaway' repetition of identical or highly similar unusual content, as in attitude change studies" (p. 26). The design may be considered as the last two groups of the Solomon four-group design.

An analysis of variance test was made to determine whether the between-groups variance was significantly greater than the within-groups variance. It was determined prior to undertaking the study that the .05 level of confidence would be used in interpreting "F" ratios and "p" values. The study design assumed the two experimental groups to be basically alike in such characteristics as aptitude scores, sex distribution, years of television and radio formal education and major field of study.
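To make this analysis plan concrete, the sketch below carries out the random assignment (R in Figure 1) and a between-groups analysis of variance at the .05 level on simulated post-test scores. The scores are invented stand-ins, and SciPy's `f_oneway` merely substitutes here for the Finn program the study actually used.

```python
# Sketch of the analysis plan: random assignment of 40 subjects to two
# groups (R), a post-test (O), and a one-way ANOVA at the .05 level.
# All scores below are simulated; the study itself used the Finn program.
import random
from scipy import stats

random.seed(1)
subjects = list(range(40))
random.shuffle(subjects)
programmed, lecture = subjects[:20], subjects[20:]  # R: random assignment

# Hypothetical 30-point post-test scores for each treatment group.
pi_scores = [min(30, random.gauss(28, 1.5)) for _ in programmed]
cl_scores = [min(30, random.gauss(25, 2.5)) for _ in lecture]

f_ratio, p_value = stats.f_oneway(pi_scores, cl_scores)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```

With one degree of freedom between two groups, this F test is equivalent to a two-sample t test; the ANOVA framing simply matches the Finn program's output.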
The mean learner time spent using programmed instruction was compared with the known mean time consumed by the conventional lecture treatment group (Appendix J).

As the collected attitudinal data regarding programmed instruction and traditional lecture resembled that depicted in Figure 2, the mean group attitudinal scores were analyzed as any other dependent variable.

                Attitudes
          Favorable    Unfavorable
C.L.        ## %%        ## %%
P.I.        ## %%        ## %%

Figure 2. Fourfold representation of possible subject attitudes toward instruction.

The five possible alternatives for each question on the Likert attitudinal instrument were ordered so that a weight of "one" was given the most negative expressed attitude and a weight of "five" was ascribed to the most positive attitude. A total index or "score" of attitudes (which were assumed to be unidimensional) was calculated and tested for differences between experimental treatments. All of the investigated dependent variables (cognitive achievement, time, and attitudes) were analyzed using a portion of the Finn Univariate and Multivariate Analysis of Variance, Covariance and Regression computer program.

Summary

Undergraduate Television and Radio students enrolled in a Broadcast Management course at Michigan State University were randomly assigned to one of two instructional treatments:

1. Non-directed programmed instruction.
2. Conventional classroom lecture.

An objective post-test over the content of the three prepared instructional units for both treatment groups and a Likert scale questionnaire dealing with student attitudes were administered to each of the subjects. Mean learner time spent with the programmed instruction treatment was compared with the known mean time taken for conventional lecture instruction. Cognitive retention/application post-test student scores, total subject attitudinal scores and mean learner times were analyzed for statistical significance between treatment groups using an analysis of variance test for the post-test-only control group experimental design.

CHAPTER III

ANALYSIS OF RESULTS

The Control Data 6500 computer system at Michigan State University was employed in calculating the statistical analyses. Use of the Michigan State University computing facilities was made possible through support, in part, from the National Science Foundation. The level of confidence for all of the significance testing was .05.

Dependent Variable: Cognitive Achievement

A Finn univariate analysis of variance program was used to test for differences between experimental treatment (programmed instruction) and control (conventional lecture instruction). The primary ANOVA components of the test for cognitive achievement differences are presented in Table 3.1. Observed cell means and standard deviations comprise Table 3.2. Inspecting Table 3.1, it appears that there was a significant difference in performance between the experimental groups on the 30-question objective post-test for retention and limited application of fundamental audience analysis concepts (as measured by mean test scores).

Table 3.1. Univariate ANOVA - cognitive achievement.

Source     df    MS         F         P less than
Between     1    140.625    41.634    .0001
Within     38      3.377
Total      39

Key: df - degrees of freedom; MS - mean squares; F - F ratio; P - P value: the probability of obtaining an F ratio as large or larger than the calculated F ratio if the null hypothesis is true.
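The "P less than" entry is simply the upper-tail area of the F distribution with 1 and 38 degrees of freedom at the observed ratio, and it can be checked directly; here is a one-line verification using SciPy (obviously not the 1974 tooling):

```python
from scipy import stats

# Upper-tail probability of F(1, 38) at the ratio reported in Table 3.1.
p = stats.f.sf(41.634, dfn=1, dfd=38)
print(f"p = {p:.1e}")  # on the order of 1e-7, well below .0001
```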
Realizing a "p" value of less than .0001, the next step in analyzing the data was to consult the calculated cell means presented in Table 3.2.

Table 3.2. Cell means and standard deviations - achievement.

Group    N     Cell Means    Standard Deviation
P.I.     20    28.850         .988
C.L.     20    25.100        2.403

On the basis of the data expressed in Table 3.2, a conclusion can be drawn that the significant difference found through ANOVA was in favor of the treatment group--programmed instruction.

It should be noted that a ceiling effect is evident in the results; few students scored below 25 correct on the 30-item post-test, and a perusal of Appendix H reveals that the test was not a particularly good discriminating instrument. The nature of the test itself (Appendix G) predicts a simple chance score of 12 and a perfect score of 30. For purposes of least- and most-knowledgeable discrimination, an ideal overall mean score would be approximately 21. The actual mean score for the post-test obtained by the two experimental groups (treatment and control combined) approached 27 correct. Nevertheless, the "p" value contained in Table 3.1 is small, and the sufficiently small standard deviation figures suggest that the difference between treatment and control was statistically significant. Table 3.2 reports that the significant difference is in favor of the programmed instruction treatment group.

Dependent Variable: Time Consumed

The statistical computer analysis of the data for the dependent variable time spent with instruction proved to be rather a formality and foregone conclusion. First, there could be no within-group variance of total time spent for the conventional lecture control group because all lecture subjects consumed the same amount of time. Secondly, and more importantly, the two most deliberate programmed instruction subjects used a total of only 35 minutes apiece to complete their three short units of instruction and thus complete the developed "course" for fundamentals. In comparison, the conventional lecture control group used 94 minutes (close to three times more minutes than the slowest-paced P.I. subjects) in receiving the audience analysis course content. Such a wide discrepancy in time consumed by the two experimental groups indeed constitutes a statistically significant difference in favor of the programmed instruction treatment group. Table 3.3 details the univariate ANOVA results for group-mean/adjusted-mean times.

Table 3.3. Univariate ANOVA - mean and adjusted mean time.

Source     df    MS             F           P less than
Between     1    105,125.000    2122.608    .0001
Within     19         49.526
Total      20

A study of the cell means in Table 3.4 points out that the ANOVA significance testing utilized adjusted mean time for conventional lecture subjects (due to the fact that there could be no within-group variance) and actual or "real" mean time for the programmed instruction group (within-group variance did exist). The total time figures for the two experimental groups (Appendix J) show that the real time taken for completion of the three units of instruction was 94 minutes for the conventional lecture control group and a mean of 21.5 minutes for the programmed treatment group.

Table 3.4. Cell means and standard deviations - time consumed.

Group    N     Mean Time (minutes)    Adjusted Mean Time    Standard Deviation
P.I.     20    21.5                   ----                  7.037
C.L.     20    ----                   -72.5                 7.037

Adjusted mean time for the lecture control group was arrived at by adding the mean time of the programmed instruction group (21.5) to -94 minutes.
The negative number was necessary to test the null hypothesis that mean time minus 94 minutes is equal to zero. It is eminently clear (at first blush and upon ANOVA statistical analysis) that a significant difference existed between the two experimental groups when the dependent variable was time taken to complete three units of audience analysis instruction. The known uniform time taken by conventional lecture was 94 minutes vs. 21.5 minutes mean time consumed by programmed instruction subjects--a minutes ratio of approximately four to one.

Dependent Variable: Attitudes

Once again, a univariate ANOVA computer program was used to analyze the data, and again it is apparent that a significant difference existed in total attitudinal scores between the experimental groups. Table 3.5 presents the results of the ANOVA significance test for differences with attitudes as the dependent variable.

Table 3.5. Univariate ANOVA - attitude scores.

Source     df    MS         F        P less than
Between     1    366.025    4.154    .0486
Within     38     88.103
Total      39

Having established that a significant difference obtained between groups, the investigator next considered the cell means contained in Table 3.6.

Table 3.6. Cell means and standard deviations - attitudes.

Group    N     Cell Mean    Standard Deviation
P.I.     20    58.500        7.104
C.L.     20    52.450       11.213

The analyzed data point to a significant difference between experimental group attitudinal scores in favor of the treatment programmed instruction subjects. Due to the weighting of the attitudinal responses, higher group mean scores are to be interpreted as more positive attitudes toward a given mode of instruction; in this case, the programmed group students expressed significantly more positive attitudes toward their mode of instruction.

As established in Chapter II, an attempt was made to determine how consistent subjects were in their attitudinal responses (the attitude instrument itself was considered to be unidimensional). The complete cross tabulations are encapsulated in Appendix I, but it may be generally stated that subjects' responses to the attitudinal items were rather consistent. If the cognitive achievement instrument can be characterized as a somewhat poor discriminator, then the attitudinal instrument proved to be its contrast--it apparently possessed considerable discriminating power.

Summary

Three investigated dependent variables were analyzed for differences between experimental treatment and control groups using a Finn univariate ANOVA computer program. The dependent variables and the results were as follows:

1. Cognitive Achievement--a significant difference between groups was found in favor of the programmed instruction treatment group (obtained significantly higher post-test scores).

2. Time Consumed in Instruction--a significant difference between groups was found in favor of programmed instruction (used significantly less time to complete three units of course content).

3. Attitudes--a significant difference between groups was found in favor of programmed instruction (subjects expressed significantly more favorable attitudes toward their mode of instruction).

The level of confidence for all of the significance testing was .05. A discussion of the above statistical results will be presented in the following chapter.
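Before turning to that discussion, it is worth noting that all three F ratios can be reconstructed from nothing more than the cell means and standard deviations reported in Tables 3.2, 3.4 and 3.6, which provides a quick arithmetic check on the tables. The sketch below assumes two equal groups of 20 and, for the time variable, follows the text's one-sample formulation against the fixed 94-minute lecture time (where F equals the square of the t statistic).

```python
import math

n = 20  # subjects per group

def f_two_groups(m1, sd1, m2, sd2):
    """One-way ANOVA F for two equal-sized groups from summary statistics."""
    grand = (m1 + m2) / 2
    ms_between = n * ((m1 - grand) ** 2 + (m2 - grand) ** 2)  # df = 1
    ms_within = (sd1 ** 2 + sd2 ** 2) / 2                     # pooled, df = 38
    return ms_between / ms_within

# Cognitive achievement (Table 3.2): about 41.6, matching Table 3.1.
print(f_two_groups(28.850, 0.988, 25.100, 2.403))

# Attitudes (Table 3.6): about 4.15, matching Table 3.5.
print(f_two_groups(58.500, 7.104, 52.450, 11.213))

# Time (Table 3.4): one-sample test of the mean P.I. time against the
# fixed 94-minute lecture time; F = t**2, about 2122.6 (Table 3.3).
t = (21.5 - 94) / (7.037 / math.sqrt(n))
print(t ** 2)
```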
CHAPTER IV

SUMMARY AND DISCUSSION

Summary

Developing three branching units of non-directed programmed instruction for selected telecommunication audience analysis concepts and testing their effectiveness relative to conventional classroom lecture instruction was the general purpose of this study. Toward that overall end, the more specific purposes of this study were: 1) to evaluate the programmed units in the light of relative cognitive achievement of experimental subjects; 2) to evaluate the programmed units in regard to their relative teaching efficiency, with time consumed in instruction as the efficiency measure; and 3) to evaluate expressed student attitudes toward the programmed units in relation to traditional classroom lecture instruction. It was suggested that programmed instruction might be a viable means by which to improve teaching productivity for audience analysis fundamentals and that a systematic study should be undertaken of this instructional procedure.

The investigator developed three short programmed units touching upon rudimentary principles of audience studies such as a definition of a rating, an outline of the most popular audience measurement method and the primary reason for conducting audience research. The units were subsequently tested at Michigan State University. The hypotheses advanced were: 1) students receiving programmed instruction will achieve significantly greater post-test scores than will conventional lecture subjects; 2) a significant difference between groups in favor of programmed instruction will be found in time consumed by students learning identical information; and 3) programmed students will possess significantly more favorable attitudes toward their mode of instruction than those taught by conventional means.

Forty undergraduate Television and Radio students were randomly assigned to one of two instructional conditions: 1) non-directed branching programmed instruction; or 2) conventional classroom lecture instruction. A post-test for cognitive achievement and an attitudinal questionnaire were administered to both of the experimental groups. Programmed instruction subjects were additionally required to report the time(s) taken in completing their instructional units. The three dependent variables of the study--cognitive achievement, time consumed in instruction, and expressed attitudes toward mode of instruction--were tested for significant differences between treatment (programmed instruction) and control (conventional lecture) using a Finn univariate analysis of variance computer program.

Discussion

The results of the present study (all statistically significant differences are at the .05 level of confidence) can be summarized as follows:

a) Students taught via programmed instruction scored significantly higher than students receiving conventional classroom lecture instruction on the same objective post-test which assessed retention and limited application of telecommunication audience analysis fundamental information.

b) Programmed instruction subjects spent significantly less time than traditional classroom lecture subjects in learning identical material by means of non-directed programmed instruction.

c) Students taught audience analysis fundamentals through programmed instruction possessed significantly more favorable attitudes toward their mode of instruction than did students taught by conventional classroom lecture means.
In brief, experimental subjects taught by programmed instruction retained significantly more information, took significantly less time in instruction, and generally felt better about their method of instruction than did students taught by conventional classroom lecture.

In regard to the above statistically significant differences, the question might logically be posed: "Are these differences socially as well as statistically significant?" A reasoned answer may be: "Yes, with some reservations." First, a conscious effort was made to ensure that investigator-introduced biases were held to a minimum. For example, the programmed units were developed by the investigator, but the conventional lecture presentations were prepared by the regularly scheduled classroom instructor (a Television and Radio faculty member), not the investigator. Also, the investigator attempted to meet with the experimental subjects as little as possible. But, most importantly, the programmed instruction units were completely non-directed; no questions from programmed subjects were fielded and no instructional help offered. Meanwhile, conventional lecture subjects were allowed the usual classroom interactions such as clarification of misunderstood concepts, multiple questions (although due to the fundamental nature of the information, few questions were actually posed) and the sorts of otherwise general human contacts that should have provided a wider stimulus array and, thus, a learning advantage.

It must be stated, however, that even though efforts were made to reduce possible biases in the research, the experimental subjects did vaguely know that they were participating in "some sort of study," and the programmed instruction treatment students may have fallen under the hazy glow of a "halo" effect due to their unusual instructional condition. Also, conventional lecture subjects were "required" to attend class (Appendix C) but no attendance was taken (to duplicate a true daily lecture setting). Hence, a few lecture subjects came to and left from the classroom at erratic intervals, as they normally did during their usual Broadcast Management lecture presentations, but most subjects (perhaps due to simple courtesy) did not leave the classroom until the completion of the respective unit lecture periods--some of these students, left to their own devices, might have actually grasped or "learned" the audience information in a shorter period of time. Programmed students were all at least physically presented with the prepared instructional materials, but it must be recalled that whether or not programmed instruction subjects actually chose to give their units anything more than cursory attention was left entirely to their discretion (to duplicate a true non-directed programmed setting). It is not known if these differences in attendance vs. attention produced anything other than randomly distributed biasing effects; however, some comfort can be found in the knowledge that each instructional method was employed according to its inherent behavioral properties.

The literature concerning programmed instruction suggests that programmed instruction enjoys the potential to be more effective than other traditional modes of instruction (most notably, conventional lecture), but even programming's enthusiasts recognize limitations. None of the most ardent see programmed instruction replacing the instructor himself;
None of the most ardent see programmed instruction replacing the instructor himself; 37 they realize that individualized conceptual learning must ‘ of necessity be so varied that no self-instruction can supplant the interactions between human beings. Filep (1962) emphasized twelve years ago that programmed instruction should become "a part, not the sole teaching method, a measure, not an end," and that the use of self instruction should not "absolve the instructor of his responsibilities" (p. 175). It is essential to note that the programmed units evaluated in the present study dealt with telecommunication audience analysis fundamentals for undergraduate students. Had the programmed units incorporated more advanced material (e.g., the phi1050phical implications of collecting personal mass media behavior data from supposedly "anonymous" respondents), the significant differences found in favor of the programmed instructional treatment might easily have been swayed in the direction of conventional classroom lecture. The results of this study, then, can be said to suggest promising new directions in the effective, efficient teaching of telecommunication audience analysis fundamentals. The present study results do not constitute a statistical diatribe aimed at traditional lecture-recitation instruction or those who practice such instruction. Indeed, Gagne (1970) has observed that probably the finest teaching available recognizes and uses the intrinsic differences of dissimilar modes of instruction to full advantage, each instructional method 38 complementing the other: It may be that the most striking effects of instruc- tional planning are to be sought in various combina- tions of media, where each may perform a particular function best (p. 61). Suggestions for Further Study The sheer bulk of telecommunication audience analysis fundamentals available for instruction is considerable. Perhaps a second and third series of units could deal with more advanced concepts for undergraduate students. In the interest of physical dimensions, these succeeding units could omit the principles detailed in the "beginner's" units. The electronic media have become pervasive tools in American higher education, and the value of programmed audience analysis units could be enhanced by structuring them for electronic media such as video tape, or most easily, the interactive computer. The investigator has written an intro- ductory APL computer program dealing with audience analysis problem-solving instruction and the future of similar applications is promising. Another variation of programming the audience analysis units, through either the medium of print or one of the electronic media, would be to compile the information into Skinner's linear form explained in Chapter I. These suggestions obviously place much emphasis on the 39 deve10ped programmed units themselves, but audience research is to practitioners and scholars of telecommunication as the periodic table of elements is to the chemist—-all telecommuni- cation observation, theory and investigation hinges upon a systematic knowledge of differing audiences. The recommendations thus far have been in the interest of the student and instructor. There are studies of equal importance in research where quantitative data would be of assistance. Examination of the data from this experimental study suggests further statistical investigation. 
Pre- and post-tests could be administered to determine actual gain in learning, but such experimentation should allow considerable time lapse between the two measures so as to minimize pre-test information effects. Also, the relationship of significantly greater cognitive achievement to intelligence test scores or grade point averages warrants experimental study, using undergraduate telecommunication majors. One of the more interesting postulates is the effect, if any, of programmed instruction on long-term learning retention of audience analysis information. Since mastery of telecommunication audience analysis fundamentals is an accepted quality of professionalism, and since the audience principles receive constant use, a series of programmed audience analysis units should provide ideal treatment material for longitudinal study of long-term retention.

Finally, much additional experimentation is required to validate the present study's audience analysis programmed units, the cognitive achievement post-test and the Likert scale attitudinal instrument. Such validation evolves only through an empirical research and development cycle of testing, analyzing and revising of programmed instructional units, cognitive achievement instruments and attitudinal questionnaires--a cycle which demands extraordinary patience and even greater quantities of time.

APPENDICES

APPENDIX A

PROGRAMMED INSTRUCTION FOR SELECTED AUDIENCE ANALYSIS CONCEPTS

Three Non-Directed Branching Units for Students of Telecommunication

Compiled by Erik L. Fitzpatrick
April, 1974

Not for distribution--contains copyrighted material incorporated for limited research purposes under provision of academic fair use.

Time Spent Completing this unit:
Your Name:
Student Number:

UNIT ONE

WHO NEEDS BROADCAST "RATINGS"? AND WHAT ARE THEY?

Module I. -- The Main Reason for Audience Measurement

Advertising agencies who plan and place broadcast campaigns must be able to choose the "best" vehicle for their selling purposes, and that means measuring audiences to determine not only the numbers of persons reached by messages but their ages, income levels and so forth. Other less important reasons for conducting audience research include station programming decisions, evaluation of on-air personnel (particularly in radio), and long-range station policy planning.

Module II. -- What the Ratings Really Mean

What's a rating? Let's talk in terms of television. An audience rating is a statistical estimate (not a guess) of the number of homes viewing a program as a percent of all homes owning a television set. For example, a rating of 20 for a network television program indicates that 20% of U.S. television homes tuned in the program. All audience ratings are based upon samplings.

Why use sampling? Sampling is clearly the best way. No one could afford a complete "census" of the television-viewing population and, even if they could, the "census of viewing" would have to be conducted far too often in order to detect changing audience trends. Besides, the U.S. Census Bureau checks the validity of its data with a national sample. Without sampling, how could a firecracker manufacturer test the quality of his product and still have a product left to sell?

Are 1,165 homes enough? The A.C. Nielsen Co. samples about that number of homes nationally to determine network TV viewing. Okay, so sampling works (anyone who has had a blood test can agree), but isn't this stretching things a bit? No, statisticians state that a sample of 1,000
No, statisticians state that a sample of 1,000 households provides ratings of about the same accuracy for a nation of over 50 million households as for a city of only 100,000. (But it is far more difficult to select a quality 1,000-home sample for the U.S. than for a city of 100,000.)

Can sampling measure not only firecrackers but people? The idea that sampling is no match for man may be ego-building, but it's incorrect. The failure of the Truman vs. Dewey political polling is often used to show how sampling can't deal with people, but the major source of error in political polling stems from predicting and not from sampling. A pollster is trying to predict what people will do on election day by what they tell him on a given day prior to the election; he is measuring future behavior and that's very risky. What are you going to do next week? You get the idea. Ratings aren't predictions! They report only what people did after the fact, not before. Nielsen, for example, does not report what people plan to watch or "expect" to watch on television.

How accurate are ratings? As we have seen, ratings are arrived at by sampling and they are statistical estimates. This means that although ratings are expressed as numbers (Johnny Carson pulled a 10 rating), they don't have the precision we usually associate with a number. Statisticians refer to the indefinite nature of ratings numbers as "sampling error." As a general rule, the larger the sample, the smaller the sampling error. The idea to remember is that just because a rating of 10 is reported for a given segment of the "Tonight Show," that does not mean that exactly 10% of U.S. television homes watched the program--the truth is somewhere "around" 10 (it could easily be 9 or 11). To repeat, ratings points are not exact numbers; they are estimates.
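An aside for the modern reader: the "around 10" range can be made concrete with the standard error of a proportion. A minimal sketch, assuming a simple random sample of the 1,165 homes quoted earlier; actual ratings samples are clustered, so treat the resulting margin as illustrative:

```python
# Hedged sketch: approximate 95% range for a reported rating of 10,
# assuming a simple random sample of 1,165 homes (real samples are
# clustered, so the true error margin would differ somewhat).
import math

rating = 10.0          # reported rating, in percent
n = 1165               # homes in the national sample

p = rating / 100.0
standard_error = math.sqrt(p * (1 - p) / n)   # SE of a proportion
margin = 1.96 * standard_error * 100          # 95% margin, in rating points

print(f"Rating {rating:.0f} is roughly {rating - margin:.1f} to {rating + margin:.1f}")
# Prints a range of about 8.3 to 11.7 -- "it could easily be 9 or 11."
```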
How should ratings be used? A first rule of carpentry is: "only use a tool the way it was designed to be used." It applies equally to the use of ratings. Keep the limitations in mind. Remember that a rating is not a precise number and that it shouldn't be interpreted as such. If you are a station manager and the "competition" is "beating" you by a ratings point or two (say 30 for you and 31 for them), don't get too excited and fire the entire staff from janitor on up. On the other hand, if "they" are whipping you by 20 ratings points, you might try appointing the janitor as manager!

(NOW, TURN THE PAGE AND WORK PROGRESS CHECK 1)

PROGRESS CHECK 1

1. The main reason for measuring broadcast audiences is:
(a) because it makes for good reading.
(b) advertisers are interested in the data.
(c) chambers of commerce often require it.
(d) the "seal of good practice" depends upon the results.

2. A rating of 20 for a network TV program indicates
(a) an advertiser would have to pay 20 dollars to advertise on it.
(b) the NAB has evaluated the program's worth with a score of 20.
(c) of the U.S. homes with television sets, 20% were watching.
(d) 20 critics thought that the program would ease a fertilizer shortage.

3. Sampling has come to receive a bad name. Why?
(a) Most of the problems have been with prediction of human behavior.
(b) The problems have been in describing past behavior after the fact.

4. Ratings companies frequently report what people will watch in the future.
(a) true
(b) false

5. In terms of ratings accuracy, what is the important point to recall?
(a) ratings points are precise numbers.
(b) ratings are statistical estimates with ranges of error.
(c) ratings are program evaluations by columnists.
(d) ratings are to be taken at their face number value without hesitation.

(NOW TURN THE PAGE AND SCORE YOUR PROGRESS CHECK)

ANSWER KEY: PROGRESS CHECK 1

1. b   2. c   3. a   4. b   5. b

(IF YOU MISSED MORE THAN ONE QUESTION, TURN THE PAGE AND READ THE "VERBOSE EXPLANATION" FOR UNIT ONE. IF YOU MISSED NONE OR ONLY ONE, YOU ARE READY TO WORK THROUGH UNIT TWO AT YOUR CONVENIENCE.)

UNIT ONE: VERBOSE EXPLANATION

Module I. -- The Main Reason for Audience Measurement

Every manufacturer of goods or purveyor of services using broadcasting to any degree, and every advertising agency concerned with planning and placing broadcast campaigns, stands to benefit from audience research. Knowing and being able to select the audiences most likely to purchase a given firm's products (older persons for denture cleansers, teenagers for acne preparations, for example) can result in a very real competitive advantage. Audience measurement supplies this vital data to advertisers to help them peddle their wares. Remember, it is not only the big-volume advertiser who has a stake in efficient use of broadcasting. Indeed, the smaller advertiser with a limited budget must be especially aware of the audience(s) he is attempting to reach.

Module II. -- What the Ratings Really Mean

What's a rating? Let's confine our discussion to television, although it's the same idea for radio. In television, an audience rating is a statistical estimate of the number of homes viewing a program as a percent of all homes having a television set. A rating of 20 for a network television program indicates that 20% of U.S. TV homes tuned in the program.

Ratings of this sort aren't confined to television. Radio, magazines and newspapers all use ratings to report statistical estimates of the sizes of their audiences. These ratings are similar to batting averages, with one important difference: all audience ratings are based upon samplings. Because of sampling, statisticians call audience ratings estimates. This is their way of saying that a rating is subject to a margin of statistical error because it is based upon information obtained from a sample and not from the entire population.

Why use sampling? Is there a better way than sampling to obtain TV ratings? People have suggested that the best way to get accurate information about television viewing is to check everybody, so as to avoid the use of sampling. But a little reflection shows this simply is not practical. A complete census of television viewing would be prohibitively expensive. A census costs many millions of dollars. And a "census" of television viewers would have to be taken many times per year--almost every week--so that sponsors and broadcasters would know public reaction to their continuing changes in program offerings.

Sampling is used in television research just as sampling is used throughout industry, agriculture, medicine and government (to mention only a few areas of application) as the best way of obtaining information. The Campbell Soup Company knows that if they keep stirring, one can of tomato soup tastes just as good and exactly the same as the rest of the kettle. The quality of wheat, milk and cotton are all checked by examining small samples. City water systems are checked daily for impurities by samples of a few gallons out of millions. For a blood test, even the least mathematical doctor takes only a few drops. But can sampling measure people?
Many have implied that to use sampling we must reduce the richness of our experience to the simplest common denominator, that sampling is no match for man. The argument may be ego-building, but it is incorrect. The talented writer, Goodman Ace, once quipped: "Polls are fascinating. They are read by everyone from the farmer in the field all the way up to Tom Dewey, President of the U.S." The famous failure of political polling in the 1948 Presidential election is often cited as an example of the inability of sampling to cope with people. But was the predicted election of Tom Dewey an indictment of the sampling process? Not at all. The major source of error in political polling comes from predicting and not from sampling. A pollster who had questioned every registered voter in the country about his voting intention before an election would be in substantially no better a position to predict the outcome of the election than his rival who had used proper sampling techniques. This is true because some voters are undecided about how they are going to vote, others change their minds after they are asked and still others don't make it to the polls at all. In a close election--such as in 1948--these people can make the difference.

But TV ratings aren't predictions. They report only what people did after the fact, not before. Nielsen, for example, does not report what programs people plan to watch or expect to watch on television. Of course the behavior of people is more difficult to measure than the characteristics of an inanimate object. A family's choice of TV programs will change from one week to another. But this doesn't mean that a family's TV viewing behavior can't be accurately measured; it means only that TV viewing behavior must be measured more often.

How accurate are ratings? How should ratings be used? Re-read these sections in the initial presentation of UNIT ONE. THEN, turn this page and work PROGRESS CHECK 2.

PROGRESS CHECK 2

1. It is essential to measure the commercial broadcast audience. Why?
(a) Because it's there.
(b) Advertisers want the data.
(c) It builds character.
(d) It isn't essential at all.

2. The 6:00 p.m. News receives a rating of 50. What does it mean?
(a) 50 stories were presented.
(b) 50 people work for the station, probably in the research department.
(c) Of the 100 TV homes in the market, 50 are tuned to the 6:00 News.
(d) All 50 of the community's leaders have evaluated the program.

3. Which statement best characterizes ratings?
(a) Ratings are accurate predictors of future viewing behavior.
(b) Ratings report only what people did after the fact.

4. The first rule of carpentry applies to ratings, too. What is it?
(a) Always get your fee in advance.
(b) Use a tool only as it was designed to be used.
(c) Use a tool any way you can; get the most from it.
(d) Never work any harder than is necessary.

5. If you were a rating, how would you describe yourself?
(a) a precise number
(b) beautiful
(c) an estimate
(d) a phalarope

(NOW TURN THE PAGE AND SCORE YOUR PROGRESS CHECK)

ANSWER KEY: PROGRESS CHECK 2

1. b   2. c   3. b   4. b   5. c

(IF YOU MISSED MORE THAN ONE QUESTION, BETTER RE-READ THE SECTIONS THAT GAVE YOU DIFFICULTY; THEN MOVE ON TO UNIT TWO AT YOUR DISCRETION. IF YOU MISSED NONE OR ONLY ONE, YOU ARE READY TO WORK THROUGH UNIT TWO AT YOUR CONVENIENCE.)

Time Spent Completing this Unit:
Your Name:
Student Number:

UNIT TWO
ONE METHOD FOR GATHERING AUDIENCE DATA AND SOME DIFFICULTIES IN MEASUREMENT

Module I. -- The Most Popular Measurement Method: The Diary
Diaries are by far the broadcast industry's primary source of audience information. More commercial time is bought and sold on the basis of diary-produced estimates than by any other method. The actual diaries used, placement and sampling procedures vary between audience research companies, but the principle is the same--people are asked to keep a written record of their listening or viewing behavior.

In addition to basic viewing (speaking in terms of television), diaries usually furnish information about the viewing audience. Diaries supply detailed instructions for recording separately, for each day of the week, time viewed, station, program and audience composition. Some of the advantages of the diary method: it provides a record of who is viewing rather than merely set operation; it is adaptable to any size geographic area; it can cover all broadcast hours; and it can provide a wide variety of "other" information very economically. This "other" information can include the occupation of the male head of household, which commercial during the week was considered "best" by family vote, and like topics.

Each week during a survey the sampled households are changed--the reasons are: 1) the mail delivery of diaries makes them easy to change from house to house, neighborhood to neighborhood, and 2) studies have shown that families will usually record one week's viewing faithfully and that the viewing record becomes less satisfactory if a family is retained in the sample for a longer period of time.

Diary keeping does require an effort on the part of the families, but studies conducted by the American Research Bureau have found that, when properly approached and informed as to the purpose of the survey, most families are quite willing to cooperate. In most cases (contrary to popular belief), no money is given for their efforts, although token payments have proven useful in chronic low-cooperation areas (inner city residents are notoriously reluctant to participate).

Problem areas in audience measurement are discussed later in Module II, but a few issues are particularly associated with the diary. For example, we know that some diary entries are made days after the actual viewing has occurred, on the basis of hazy memories. Coding respondent entries to make them digestible for computers can be an invitation to human and mechanical error. And, there is the question of possible viewing differences between those who keep a diary and those who refuse--they may be different "types" of people.

Module II. -- Problem Areas in Audience Measurement

Differences in data: When two ratings services don't turn up with the same numbers, it is not because of sampling theory but rather because of sampling practice. We have already seen that sampling is the only logical solution to today's audience research needs. However, in implementing sampling theory, differences will occur. Some of the factors which can contribute to these differences are: individual corporation measurement techniques (phone interview, diary, etc.); differing geographical areas measured; and differing survey dates.

Misuses of data: Audience estimates are to be used only in the environment in which they were obtained. National ratings cannot be directly applied to local areas; they were not obtained in any one "local" area but rather in a series of areas that by themselves mean nothing statistically. The use of ratings in a vacuum is another misuse.
Ratings alone cannot and should not carry the entire burden when appraising a station as an advertising vehicle . . . there are important differences in types of audiences as well as sizes.

Standard deviation: We repeat that a rating is an estimate, an approximation. When a program is rated at 10 (remember, that's 10% of the potential viewing homes--those with TV sets), it means about 10, not exactly 10. The plus or minus range of the rating will depend upon the standard deviation. The important thing to remember is that this "range" can be measured mathematically.

The non-response factor: Audience measurement firms are constantly employing new techniques or "gimmicks" to improve their cooperation rate. But, in the real world, there will always be those who will not participate in any form of survey. Once one household refuses, the carefully preselected sample is no longer really "pure." The research companies have attempted to identify and describe these non-respondents, but characterizing them is very difficult because--what else?--they won't cooperate.

(NOW TURN THE PAGE AND WORK PROGRESS CHECK 1)

PROGRESS CHECK 1

1. By far the most popular audience measurement method is
(a) the telephone coincidental
(b) the mechanical device
(c) the personal interview
(d) the diary

2. Each week during a typical diary survey, the homes are changed. Why?
(a) The audience research firms run out of money.
(b) Most families will only keep accurate viewing records for a week.
(c) It is a custom established by the post office to sell stamps.
(d) Who could watch TV for seven straight days?

3. Two rating services have reported different data for the same program.
(a) They probably just trumped up the figures for kicks.
(b) Each one may have used its own individual measurement technique.
(c) No one trusts them anyway, so why worry?
(d) The trouble is that liars can figure, and these firms can, too.

4. Ratings alone should decide which station's commercial time is purchased.
(a) True--they're fair and objective.
(b) False--they can't tell us enough.

5. What is probably the worst part of the non-response factor?
(a) We cannot tell if these persons' behaviors might be different.
(b) Such people slam doors and break too many noses.
(c) There is no place for a recluse in modern America.
(d) It bothers those characters in academia, but no one else.

(NOW TURN THE PAGE AND SCORE YOUR PROGRESS CHECK)

ANSWER KEY: PROGRESS CHECK 1

1. d   2. b   3. b   4. b   5. a

(IF YOU MISSED MORE THAN ONE QUESTION, TURN THE PAGE AND READ THE "VERBOSE EXPLANATION" FOR UNIT TWO. IF YOU MISSED NONE OR ONLY ONE, YOU ARE READY TO WORK THROUGH UNIT THREE AT YOUR CONVENIENCE.)

UNIT TWO: VERBOSE EXPLANATION

Module I. -- The Most Popular Measurement Method: The Diary

We have seen that diaries are the primary source of audience information for practically every audience measurement firm. More television time is bought and sold on the basis of estimates produced by the diary than by any other method. The principle is always the same--a sample of people are asked to keep a written record of a particular media activity, such as television viewing or radio listening. The chief advantage of the diary method is that it can be designed to obtain a wide variety of information on the sample, in addition to specific media activity.
Also of importance is that the diary can be adaptable to any size geographical area; it covers all broadcast hours; it is economical; and, in the case of television and radio, it can report individual activity, not merely set operation (as with mechanical devices).

Diaries furnish detailed instructions for recording separately, for each day of the week, time viewed, station, program and audience composition. In addition, the diary gathers information on the age, sex and number of school years completed by each member of the household; whether or not the lady of the house works outside the home more than 35 hours a week; occupation of male head of household and type of employment; number of TV sets being used in the home and so on. Although this requires no small effort on the part of diary keepers, research firms have found that, when properly approached and instructed as to the purpose of the survey, families are usually quite willing to cooperate. In most cases, no direct compensation is given for their efforts, although a token payment may be made in low-cooperation areas, such as inner city dwellings.

Each sample used for diary surveys is completely changed and composed of new and different families every week during a survey and from survey to survey. Most families will keep one week's record of their viewing and, after that, the record becomes less satisfactory if the family is retained in the sample for a longer period.

To review briefly the advantages of the diary method--it provides a record of individuals viewing rather than merely set operation; it is adaptable to any size area; it can cover all broadcast hours; and it can provide a wide variety of information economically. But these advantages lose their meaning if the original source--the diary itself--is lacking in validity or is inaccurate in any way. We know, for example, that some diary entries may be made on the basis of hearsay or the "guesstimate" of the diary keeper. As with any written questionnaire, the transcribing of information from the point of its inception to the final report is subject to human and mechanical error. And, there is the question of possible differences in viewing habits between those who keep a diary and those who do not. Of course, many of the errors do tend to at least partially cancel out in the end results.

A number of tests have been conducted which compare the results of a diary study with the results of other techniques of audience measurement, such as personal and telephone interviews conducted in the same areas at the same time. As a general conclusion, it can be stated that for practical business decisions, the diary is the most valuable technique in providing audience estimates for broadcast industry use. To revisit a point: the diary is the most popular measurement device currently used in broadcast audience research. It is cheap, reasonably reliable and quite versatile.

Module II. -- Problem Areas in Audience Measurement

Differences in data: Differences in data between one research firm and another do not occur because of sampling theory but rather because of sampling practice. There is undeniable evidence all around us that sampling is, indeed, the only solution to the audience research needs of today. However, it is in implementing the sampling theory that differences in data will occur.
Some of the factors which can contribute to these differences are: individual corporation measurement techniques (phone, personal interview, diary, etc.); differing survey dates; and differing geographical areas measured.

Misuses of data: There are obviously cases when estimates are misused within the broadcasting and advertising industries--these further compound the problems of audience research. For instance, difficulties arise when audience estimates are used out of the environment in which they were obtained, such as applying a local rating nationally or applying it to another local area. Local ratings apply only to the area in which they were obtained, and at the same time national ratings cannot be directly applied to various local areas. Before any survey begins, research companies have a clear understanding of the area to be measured and, when that area is so defined for the user, it is the only area for which the estimates are applicable.

The use of ratings in a vacuum is another instance of misuse. Ratings alone cannot and should not carry the entire burden when appraising a station as an advertising medium. Some of the best television campaigns, in terms of sale of merchandise, have been the lowest-rated in a market. Probably the best examples of such programs are the daytime women's shows. The secret here seems to be that the products are all extremely well-matched to reach and sell a big part of a comparatively small, loyal audience of housewives. Today's successful broadcaster recognizes the importance of audience types as well as audience size. He realizes that the lowest cost per prospect is not necessarily found among the higher rated programs. Also, station "image" has become important not only as a measure of responsibility to the viewing public, but also as a saleable value to prospective clients.

Standard deviation: To repeat, a rating is an approximation. When a program is rated at 10 (remember, that's 10% of the potential viewing homes), that means about 10, not exactly 10. The plus or minus range around the rating which makes up this "about" depends upon the standard deviation. The important thing is that this range in a "good" sample can be measured mathematically. All audience estimates, no matter who produces them, have a standard deviation, as does any measurement based upon sampling.

The non-response factor: Ideally, each home selected should participate in the survey in order to create, or maintain, a pure probability or "good" sample. But, practically speaking, there are obviously some people who will never participate in surveys. Once one person refuses, the sample is no longer really pure. The important consideration, however, is what effect a non-cooperator has on the overall validity of the resulting estimates. The American Research Bureau's studies of cooperating families vs. non-cooperating homes have stated that these households are essentially the same, at least in terms of the amount of time spent viewing television.

(NOW TURN THE PAGE AND WORK PROGRESS CHECK 2)

PROGRESS CHECK 2

1. The absolute most popular audience measurement method is
(a) personal interview
(b) mechanical device
(c) diary
(d) telephone coincidental

2. The same program has been rated by two services and their data disagree.
(a) One may have used personal interview while one used diary.
(b) No one knows why these things happen.
(c) The figures are probably just fabricated anyway.
(d) What, me worry?

3. Should ratings alone decide which station's commercial time is purchased?
(a) Sure, they're black and white and what could be fairer?
(b) No way. Often audience types may be more important than size.

4. The homes are changed each week during a typical diary survey. Why?
(a) No one could stand to watch TV for over seven days.
(b) It is a custom established by the post office to hustle stamps.
(c) Most families will only keep accurate viewing records for a week.
(d) The rating firms need time to duck out of town.

5. When we say that a rating of 8 doesn't mean exactly 8, what do we mean?
(a) It could easily be a bit more or less than 8.
(b) We are just being evasive.
(c) Ratings have no meanings to advertisers.
(d) So, what's this "we" stuff?

(NOW TURN THE PAGE AND SCORE YOUR PROGRESS CHECK.)

ANSWER KEY: PROGRESS CHECK 2

1. c   2. a   3. b   4. c   5. a

(IF YOU MISSED MORE THAN ONE QUESTION, BETTER RE-READ THE SECTIONS THAT GAVE YOU DIFFICULTY; THEN MOVE ON TO UNIT THREE AT YOUR DISCRETION. IF YOU MISSED NONE OR ONLY ONE, YOU ARE READY TO WORK THROUGH UNIT THREE AT YOUR CONVENIENCE.)

Time Spent Completing This Unit:
Your Name:
Student Number:

UNIT THREE
SOME TELEVISION VIEWER BEHAVIOR PATTERNS AND MEASUREMENT SELF-REGULATION

Module I. -- A Few Essential Facts and Figures

First of all, virtually every American household owns at least one television set--about 96%, to get specific. Forty percent have two sets or more. Of those households owning a TV set, the majority have at least one color TV set lurking under their roofs--about 55% of those households. TV usage tends to be higher in homes with color sets--during the weekend ball games, the plus margin for color-equipped homes is 21%.

The TV audience builds in size throughout the broadcast day, and TV viewing reaches its peak (understandably) in mid-evening (between 8 and 9).

Audience composition changes by season of the year--viewing in general is highest in the Winter, but there are also shifts in the composition of the audience itself. For example, teens watch more late night TV in the Summer than they do in the Winter.

TV households average a whopping 49 hours of TV usage per week--an average of 7 hours every day. Women view almost 30 hours of TV weekly--4 hours more than men. Of course, most of the extra female viewing can be attributed to housewives pulling up the average.

Walk into an average American home, and over half of its evening time will be devoted to television--and it doesn't make much difference as to income levels; the affluent like their tube-watching, too.

As might be expected, different types of programs vary in audience appeal--young adults watch more feature films and general drama, but older adults prefer variety shows and westerns. Sunday night still attracts the biggest adult TV audience--Friday continues as the night with the least amount of viewing.

Module II. -- The BRC: Referee for Audience Research

The Broadcast Rating Council (BRC), located in New York City, oversees the activities of the various rating services (A. C. Nielsen, ARB, and a host of others) to ensure that they meet certain standards of audience research and integrity. To those research firms meeting its standards, the BRC grants a check-mark of approval. It's very much like the National Association of Broadcasters' "Seal of Good Practice"--strictly voluntary compliance on the part of participating research organizations. To repeat, the Broadcast Rating Council has no regulatory power at all.
If a research firm should happen to continually violate the BRC standards of performance, about the most drastic action that the BRC could take would be to send a guy down to the firm's offices to scrape their "seal of approval" off the front door with a razor blade. Nevertheless, the fact remains that most reputable research firms engaged in broadcast audience measurement do adhere to the BRC's minimum standards for ratings reporting and methodology.

Just for good measure--no pun intended--the BRC conducts periodic audits of the various research firms. During the course of these audits, the BRC examines a firm's compliance with the following sorts of standards:

ETHICAL AND OPERATIONAL STANDARDS

The anonymity of all interviewers, research supervisors and field servicemen is to be protected.

If a respondent (such as a family keeping a diary) has been assured that his anonymity will be protected, his name, address or other information is not to be disclosed.

Any sample chosen for an audience study must reasonably represent that population which it was chosen to measure (don't choose homes with no males present if you're trying to measure viewing in "typical" homes across the country).

Appropriate quality control steps must be taken in regard to data editing, keypunching, printing and other operations which might affect the final study results.

The field work of interviewers should be verified by the use of spot checks to certify that interviews were indeed completed.

All field personnel shall be thoroughly trained in their work. They should know the duties of their positions, adhere to interview instructions, and avoid attempts to bias information obtained from respondents.

DISCLOSURE STANDARDS

Each ratings report shall mention any omissions, errors or biases known to the rating service which could influence the report findings.

If the rating organization has deviated from its standard operating procedures, the rating publication must report the deviation(s).

The geographic areas surveyed, and the reasons for their selection, should be clearly defined in each ratings report.

Each rating service should indicate the normal sample return for each survey (for example, how many managed to mail back their diaries) and, if the return is below normal, the ratings publication must explain the fact in a prominent position.

If a station has engaged in special, non-regular promotional activity during a survey period in order to build an inordinately large audience (called "hypoing" the ratings), this activity must be published along with the rating report.

If a rating service has knowledge of possible rating-distorting influences such as pre-emptions, station failures, etc., the rating report must disclose that such conditions occurred during the survey period.

(NOW TURN THE PAGE AND WORK PROGRESS CHECK 1)

PROGRESS CHECK 1

1. Of all American homes, how many own at least one television set?
(a) 47%
(b) 68%
(c) 32%
(d) 96%

2. The night of the week which attracts the largest adult TV audience is
(a) Wednesday
(b) Sunday
(c) Monday
(d) Thursday

3. Most reputable audience research firms are accredited by the BRC.
(a) true
(b) false

4. Any sample chosen for an audience study must reasonably represent
(a) anything the rating firm can get away with.
(b) the number of persons who purchased television sets last fiscal year.
(c) that population which it was chosen to measure.
(d) only married couples.

5. Special, non-regular station promotion during a survey period is called
(a) doing what comes naturally.
(b) the name of the game in broadcasting.
(c) hypoing the ratings.
(d) insuring the sampled audience.

(NOW TURN THE PAGE AND SCORE YOUR PROGRESS CHECK)

ANSWER KEY: PROGRESS CHECK 1

1. d   2. b   3. a   4. c   5. c

(IF YOU MISSED MORE THAN ONE QUESTION, TURN THE PAGE AND READ THE "VERBOSE EXPLANATION" FOR UNIT THREE. IF YOU MISSED NONE OR ONLY ONE, YOU HAVE COMPLETED YOUR ASSIGNED PROGRAMMED UNITS. DON'T FORGET TO REVIEW ALL THREE OF YOUR UNITS AT WHAT YOU CONSIDER TO BE THE APPROPRIATE TIME.)

UNIT THREE: VERBOSE EXPLANATION

Module I. -- Some of the Necessary Facts and Figures

Ninety-six percent of U.S. households own a TV set: Television ownership continues to inch up closer to the point where almost every household owns a TV set. Only 9% of the households owned TV sets in 1950, compared to 87% in 1960 and 96% today.

Majority of households own color sets: Estimated color set penetration is about 55% of all TV households. The influx of small solid-state sets has contributed to this growth figure.

TV usage higher in color set households: People who own color TV sets tend to watch more television than those owning only a black and white set. The plus margin among color-equipped homes ranges from 9% during late night to 21% weekend daytime.

TV viewing reaches peak in mid-evening: The television audience builds in size throughout the day as housewives, school age children and then the working force get into the action. Between 8 and 9 p.m., estimates show that two-thirds of all Eastern and Central time zone TV households have their sets on. Although viewing follows the same overall pattern in both of these time zones, it is interesting to note the lunchtime rise and the higher mid to late night usage in the Central zone.

Audience composition changes by season: It's no surprise that viewing, in general, is at its highest in the Winter, but there are some interesting changes in the composition of the audience by seasons of the year. For example, non-adults watch more morning and late night television in the Summer than they do in the Winter. Overall, women make up the largest share of the audience, except during Summer mornings and early evening in the Winter, when they are out-viewed by non-adults. (The term "non-adults" includes teens as well as children. It is the teens who account for most late night non-adult viewers.) The only daypart when total viewing goes down in the Winter is late night, despite a substantial increase in TV usage by men.

TV households average 49 hours of TV per week: According to Nielsen estimates, television households average almost 49 hours of TV usage a week, or almost 7 hours a day. Highest viewing levels, averaging over 62 hours a week, were registered among households with five or more people. Also well above average were $10,000+ households with non-adults.

Women view TV over 4 hours more than men: During December, women viewed nearly 30 hours of television a week, over four more hours than men. The group with the lowest rate were men in households with $15,000+ income, but even they viewed over 21 hours a week.

Over half of evening time devoted to TV: Household TV usage averages 216 minutes out of the total 420 minutes available each evening (6 p.m.-1 a.m.). The rate is highest in households with incomes of $10,000-$14,999--232 minutes an evening. Households with incomes of $15,000+ view television 213 minutes per evening--just three minutes below the composite average.
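As a quick arithmetic check of the figures just quoted (a minimal sketch using only the minute and hour values reported above; no new data):

```python
# Sanity-check the viewing figures quoted above.
evening_minutes = 420          # 6 p.m. to 1 a.m.
average_usage = 216            # composite average, minutes per evening

share = average_usage / evening_minutes
print(f"Share of evening devoted to TV: {share:.1%}")   # about 51% -- "over half"

weekly_hours = 49
print(f"Daily usage: {weekly_hours / 7:.0f} hours")     # almost 7 hours a day

top_income_usage = 213
print(f"$15,000+ homes vs. average: {top_income_usage - average_usage} minutes")  # -3
```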
Types of programs vary in audience appeal: Each category of TV programming varies in its appeal to different segments of the audience. Younger adults watch more feature films and general drama. Older adults prefer variety shows and westerns. Situation comedies, along with westerns, also attract the largest number of non-adults.

Sunday night attracts biggest adult audience: Sunday night is by far the most popular TV night of the week among adults. Tuesday has replaced Monday as the second heaviest viewed night. Friday still continues as the night with the least amount of viewing.

Module II. -- The Broadcast Rating Council and Standards

The Broadcast Rating Council, Inc. believes that adherence to a set of minimum standards is necessary to meet the basic objective of valid, reliable and effective broadcast audience measurement. Acceptance of these minimum standards by a rating service is one of the conditions of accreditation by the BRC. Most reputable audience research organizations do comply with the BRC's standards of methodology and reporting, but bear in mind that it is a voluntary compliance--the BRC has no power to punish; it can only withdraw its accreditation which, in the audience research game, can mean a lot to image-conscious research firms. During its periodic audits of approved rating companies, the BRC checks for compliance with minimum standards such as the following:

ETHICAL AND OPERATIONAL STANDARDS

The anonymity of all interviewers, supervisors and servicemen should be preserved. The audit firm, however, would have the right to check with these and any other appropriate persons as part of the auditing process.

If a respondent has been led to believe, directly or indirectly, that he is participating in an audience measurement survey and that his anonymity will be protected, his name, address or such other identifying information shall not be made known to anyone outside the rating service organization, with the following exceptions: (1) the audit firm of the Broadcast Rating Council, Inc., in the performance of an audit, or such disclosures as required in a hearing before the Broadcast Rating Council, Inc.; (2) the broadcast rating service at its discretion may permit other reputable research organizations to reinterview respondents in the conduct of special research studies.

The sample design for each rating report should be so constructed as to represent, to a reasonable degree, the universe being measured (households, individuals, sets, etc.). Where significant deviations are considered, by the rating service, to be desirable and/or unavoidable, such deviations will be described clearly in each rating report.

Appropriate quality control measures shall be maintained with respect to all internal and external operations which may influence the final results. Quality control shall be applied, but not necessarily limited, to data collection, editing, collating, tabulating and printing.

All field personnel (including supervisors) shall be thoroughly trained in their work. Such training should provide assurance that: they know the responsibilities of their positions; they understand all instructions governing their work, that they will deviate from such instructions only when apparently justified by unusual conditions and that such deviations will be reported in writing; they recognize and will avoid any act which might tend to prejudge, condition, misrepresent or bias the information obtained from respondents.
The field work of each rating service should be verified by spot checks or other procedures appropriate to the techniques used to verify or inspect the work of interviewers, supervisors and other field personnel.

DISCLOSURE STANDARDS

Each report shall mention all omissions, errors and biases known to the rating service which may exert an effect on the findings shown in the report. Each rating report should point out known deviations from standard operating procedures of the rating service which could also affect the reported results.

Geographic areas surveyed should be clearly defined in each rating report. In each case, the criteria and/or source used in the selection of the survey area shall be given. (Thus, if the area surveyed is the Metro area as defined by the U.S. Census, it should be so recorded in the report.)

Each rating service shall indicate the normal sample return for each survey, and shall indicate in a prominent position when the return is below normal but not below the minimum required for issuance of a report.

If a rating service has established that any station has employed special non-regular promotional techniques that may distort or "hypo" ratings, then said rating service will publish in the appropriate report a notice of this effect.

If a rating service has knowledge of apparent rating-distorting influences such as unusual weather, catastrophes, political or social events, pre-emptions such as World Series, elections, congressional hearings, station failures, etc., the rating service will indicate in its reports the existence of such conditions during the survey period.

(NOW TURN THE PAGE AND WORK PROGRESS CHECK 2)

PROGRESS CHECK 2

1. On the average, who watches more TV: a black & white or color set family?
(a) black & white--it's cheaper to operate.
(b) color--it's more fun to watch.

2. In general, TV viewing is at its highest in
(a) Summer.
(b) Fall.
(c) Winter.
(d) Spring.

3. It is mandatory that research firms be accredited by the BRC.
(a) true
(b) false

4. What action can the BRC take against those who violate its standards?
(a) fine them heavily and impose 5-year prison sentences.
(b) withdraw their seal of approval check-mark.
(c) kick them square in the pants.
(d) help them beat the system.

5. If a station is detected "hypoing" its ratings, what happens?
(a) It is reported in the rating publication for that survey period.
(b) The station receives the NAB medal of honor.
(c) It is overlooked as the way of the world.
(d) No one cares, so nothing happens at all.

(NOW TURN THE PAGE AND SCORE YOUR PROGRESS CHECK.)

ANSWER KEY: PROGRESS CHECK 2

1. b   2. c   3. b   4. b   5. a

(IF YOU MISSED MORE THAN ONE QUESTION, BETTER RE-READ THE SECTIONS THAT GAVE YOU DIFFICULTY. YOU HAVE NOW COMPLETED YOUR ASSIGNED PROGRAMMED UNITS. DON'T FORGET TO REVIEW ALL THREE OF YOUR UNITS AT WHAT YOU CONSIDER TO BE THE APPROPRIATE TIME.)

APPENDIX B

BEHAVIORAL OBJECTIVES FOR AUDIENCE ANALYSIS INSTRUCTION UNITS

Primary Objective: Upon completion of the branching programmed instruction materials, given a fixed-alternative cognitive test of achievement, the student will identify correct responses to questions relating to representative areas of the instructional treatment units (minimum acceptable level of performance not established, pending further revision).

Sub-Objectives:

The student will select from among alternative responses the chief purpose of conducting broadcast audience analysis.
The student will discriminate between the nature of broadcast rating figures and the precision associated with whole numbers.

The student will distinguish the advantages and disadvantages characteristic of diary audience measurement methodology.

The student will select appropriate general problem areas in broadcast audience analysis.

The student will isolate essential facts and figures pertaining to American television viewer behavior patterns.

The student will indicate ethical, operational and disclosure standards necessary to achieve valid, reliable and objective broadcast audience measurement.

APPENDIX C

DIRECTIVE TO CONVENTIONAL LECTURE SUBJECTS

YOU HAVE BEEN RANDOMLY SELECTED to receive a few units of Broadcast Audience Studies concepts via short classroom lectures from your instructor.

IT IS ESSENTIAL (AND MANDATORY) that you attend the class meetings of: Wednesday -- April 17 and Friday -- April 19.

YOU WILL EARN COURSE CREDIT for your participation, so be on time and please don't "cut" either of the above class meetings.

THANKS for your cooperation; your help is greatly appreciated!

APPENDIX D

DIRECTIVE TO PROGRAMMED INSTRUCTION SUBJECTS

YOU HAVE BEEN RANDOMLY SELECTED to receive a few units of Broadcast Audience Studies concepts via 3 short programmed instruction packages.

IT IS ESSENTIAL (AND MANDATORY) that you attend the class meetings of: Wednesday -- April 17 and Friday -- April 19.

YOU WILL EARN COURSE CREDIT for your participation, so be at class promptly on Wednesday, April 17. You will receive further instructions at that time.

THANKS for your cooperation; your help is greatly appreciated!

APPENDIX E

CONVENTIONAL LECTURE ATTITUDINAL INSTRUMENT

Affective Questionnaire for Audience Analysis Concepts

Name:
Student Number:
Sex:

1. The classroom lecture units dealing with Audience Analysis concepts were too long.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

2. The classroom lectures held my attention very well; I was not easily distracted from the concepts being presented.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

3. I found that I was generally turned off by the lecture presentation of the audience information.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

4. If I had a friend who was looking for some worthwhile instruction, I would suggest the presented Audience Analysis units.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

5. The repetition within the lectures was very objectionable.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

6. Presenting the Audience Analysis concepts via classroom lecture proved to be a very efficient use of my time.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

7. The lecture units tried to present too much material for the average undergraduate student.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

8. I probably would not recommend the presented Audience Analysis units to other T-R students.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

9. I was stimulated by the audience concepts presented to the point that I would like to take other audience-related courses and/or expand my knowledge by self reading.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

10. I find the subject of broadcast Audience Studies very interesting.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

11. I frequently found myself thinking about the material presented, after listening to the classroom lectures.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

12. The lecture units presented qualify as some of the most unabsorbing and otherwise dull experiences I've had in college.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

13. Overall, taking everything into consideration, I would rate the content organization of the presented audience units as excellent.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

14. The redundant nature of the lecture units probably served as a learning aid for me.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

15. If the presented Audience Analysis concepts were to be expanded into an elective "course," I would like to take the course.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

16. I frequently found my mind wandering as I sat listening to the classroom lectures.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

APPENDIX F

PROGRAMMED INSTRUCTION ATTITUDINAL INSTRUMENT

Affective Questionnaire for Audience Analysis Concepts

Name:
Student Number:
Sex:

1. The programmed instruction units dealing with Audience Analysis concepts were too long.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

2. The programmed instruction units held my attention very well; I was not easily distracted from the concepts being presented.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

3. I found that I was generally turned off by the programmed presentation of the audience information.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

4. If I had a friend who was looking for some worthwhile instruction, I would suggest the presented Audience Analysis units.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

5. The repetition within the programmed units was very objectionable.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

6. Presenting the Audience Analysis concepts via programmed instruction proved to be a very efficient use of my time.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

7. The programmed units tried to present too much material for the average undergraduate student.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

8. I probably would not recommend the presented Audience Analysis units to other T-R students.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

9. I was stimulated by the audience concepts presented to the point that I would like to take other audience-related courses and/or expand my knowledge by self reading.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

10. I find the subject of Broadcast Audience Studies very interesting.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

11. I frequently found myself thinking about the material presented, after reading the programmed instruction units.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

12. The programmed instruction units presented qualify as some of the most unabsorbing and otherwise dull experiences I've had in college.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

13. Overall, taking everything into consideration, I would rate the content organization of the presented audience units as excellent.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

14. The redundant nature of the programmed units probably served as a learning aid for me.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

15. If the presented Audience Analysis concepts were to be expanded into an elective "course," I would like to take the course.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

16. I frequently found my mind wandering as I sat reading the programmed instruction units.
1) strongly agree  2) agree  3) neutral  4) disagree  5) strongly disagree

APPENDIX G

COGNITIVE POST-TEST FOR AUDIENCE CONCEPTS

Cognitive Test for Audience Analysis Concepts

Name:
Student Number:
Sex:

1. Television viewing levels tend to change by season of the year. When is viewing generally the highest?
1) Spring
2) Fall
3) Winter

2. Are most American families willing to cooperate in audience surveys?
1) yes
2) no

3. All things considered, is there a better way than sampling to gather audience information?
1) yes
2) no

4. Ratings reports speculate as to what people will probably be watching in the near future.
1) true
2) false

5. During an average evening in an average American household, about how much of the night will be devoted to television?
1) 70%
2) 50%
3) 30%

6. All audience research firms operating in the U.S. must be accredited by the Broadcast Rating Council.
1) true
2) false

7. Any special, non-regular station promotion during a survey period is called:
1) insuring the sampled broadcast audience
2) adversarial ratings methodology
3) hypoing the broadcast ratings

8. About how many American homes own at least one television set?
1) 96%
2) 87%
3) 78%

9. A national television rating for a particular segment of "Bonanza" is 25. What does that mean?
1) an audit by the Broadcast Rating Council will result in an independent variable of 25 dollars per second of advertising
2) of the households in the U.S. owning television sets, about 25% were watching the program
3) the United National Church Services have given the program their wholesomeness evaluation number of 25

10. Ratings alone should probably decide which station's commercial time is to be purchased.
1) true
2) false

11. About how many households are sampled nationally by the A. C. Nielsen Co. to determine network television viewing?
1) 1,000 homes
2) 1,000,000 homes
3) 100,000 homes

12. You are a station manager and your local competition is beating you in the latest audience report by one full rating point. Should you be unusually worried?
1) yes
2) no

13. On the average, which household watches more television: a black & white or color set home?
1) black & white
2) color

14. National ratings cannot be directly applied to local areas.
1) true
2) false

15. When two ratings services do not report the same figures for identical time periods, what is probably the difficulty?
1) sampling theory
2) sampling initiation
3) sampling practice

16. The most important reason for measuring the broadcast audience is:
1) it is required by the NAB Board of Station Practices for representation
2) broadcast advertisers will want to use audience data for selling purposes
3) local citizen pressure groups can now demand the quantitative information

17. You are explaining the accuracy of ratings to a friend.
What is the most important point to stress?
1) ratings are program evaluations by leading TV columnists
2) ratings are statistical estimates with ranges of error
3) ratings are precise numbers that may be interpreted as such

18. During a survey period a station is caught conducting special, non-regular promotion. What happens?
1) the station receives a fine from the Broadcast Rating Council
2) the act is reported in the ratings publication for that period
3) the promotion department of the offending station is indicted

19. The "plus" or "minus" interpretation of a given audience estimate will depend most directly upon:
1) the quality of research personnel involved
2) the general climate for broadcast evaluation
3) the reported standard deviation

20. On the average, who watches more television per week: a typical male or female?
1) male
2) female

21. American households average how many hours of television usage per day?
1) 11 hours
2) 9 hours
3) 7 hours

22. The major source of error in political polling stems from:
1) predicting future behavior
2) describing past behavior
3) acquiring unbiased respondents

23. What exactly is a "rating"?
1) an evaluation of a given program by a panel of BRC broadcasters
2) the published number of available advertiser dollars
3) a percent of all homes owning a television set

24. One audience measurement method is by far the most popular. Which is it?
1) the diary
2) the personal interview
3) the telephone coincidental

25. There is a distinct possibility of different program preferences between those who participate in audience surveys and those who do not.
1) true
2) false

26. Most reputable audience research firms adhere to the Broadcast Rating Council's minimum reporting and methodology standards.
1) true
2) false

27. One night of the week attracts the least amount of television viewing. Which is it?
1) Wednesday
2) Friday
3) Monday

28. A sample chosen for an audience study must reasonably represent:
1) married couples owning one or more receiving sets in working order
2) that population which it was chosen to measure
3) homes purchasing receiving sets in the last survey period

29. Your rich Uncle is a broadcast rating. What word would best characterize him?
1) a phalarope
2) a precise number
3) an estimate

30. Each week during a typical local-market survey period the sampled households are changed. Why?
1) families keep accurate viewing records for only a week
2) postal departments have encouraged the switching techniques
3) field teams can run short of research funds in local markets

APPENDIX H

ITEM ANALYSIS DATA FOR COGNITIVE POST-TEST

Number of subjects: 40
Number of test items: 30

RAW SCORE DISTRIBUTIONS

Raw Score   Frequency   Cumulative Frequency   Percentile Rank   Standard Score
   30           6                6                   92              61.5
   29          10               16                   72              57.7
   28           5               21                   53              53.9
   27           3               24                   43              50.1
   26           6               30                   32              46.2
   25           2               32                   22              42.4
   24           3               35                   16              38.6
   23           2               37                    9              34.8
   22           1               38                    6              31.0
   21           2               40                    2              27.2

Mean = 26.97    Standard Deviation = 2.62    Variance = 6.90

Standard Score has a Mean of 50 and a Standard Deviation of 10. (Perfect score = 30; Chance score = 12; Ideal discriminating score = 21)

Mean Item Difficulty = 10. Difficulty is the percentage of the total group marking a wrong answer.

Mean Item Discrimination = 21. Discrimination is the difference between the percentage of the upper 27% group marking the right answer and the percentage of the lower 27% group marking the right answer.

Kuder-Richardson Reliability #20 = .6635. K-R Reliability measures the internal consistency of the test through analysis of the individual test items.
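The distribution above is complete enough to verify the reported summary statistics; a minimal sketch follows (the KR-20 coefficient itself cannot be recomputed here, since it requires the item-level responses, which are not reproduced in this appendix):

```python
# Recompute the post-test mean and standard deviation from the
# frequency distribution reported above.
distribution = {30: 6, 29: 10, 28: 5, 27: 3, 26: 6,
                25: 2, 24: 3, 23: 2, 22: 1, 21: 2}

n = sum(distribution.values())                                   # 40 subjects
mean = sum(score * f for score, f in distribution.items()) / n   # 26.975
variance = sum(f * (score - mean) ** 2
               for score, f in distribution.items()) / (n - 1)   # sample variance
sd = variance ** 0.5

print(f"N = {n}  Mean = {mean:.3f}  Variance = {variance:.2f}  SD = {sd:.2f}")
# Reproduces the reported Mean = 26.97, Variance = 6.90, SD = 2.62,
# confirming that the n-1 (sample) formula was used.
```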
APPENDIX I

CROSS TABULATION OF PAIRED BIPOLAR ATTITUDINAL QUESTIONNAIRE ITEMS

The following cross tabulation tables can be perceived as one might interpret a simple correlation matrix--in this case, the more nearly subjects' numerical responses tend to approach the drawn diagonal lines, the more "consistent" their responses to bipolar attitudinal items can be said to be. For example, subjects providing a response of "2" (agree) to the positively-stated item #2 could be expected to provide a response of "4" (disagree) to the negatively-stated item #16, which possesses essentially the same attitudinal content.

[Three cross tabulation tables followed here: the first pairs attitudinal item #2 with item #16, the second begins with attitudinal item #4, and the third table's item labels did not reproduce. The table grids themselves are not legible in this copy; the surviving column totals, each over 40 subjects, were 2, 6, 7, 20, 5 for the first table; 2, 9, 17, 8 for the second; and 2, 22, 13, 3, 0 for the third.]

APPENDIX J

TIMES CONSUMED BY CONVENTIONAL LECTURE AND PROGRAMMED INSTRUCTION SUBJECTS

Conventional Lecture Times

Time Spent Presenting Unit ONE:    25 minutes
Time Spent Presenting Unit TWO:    34 minutes
Time Spent Presenting Unit THREE:  35 minutes
                           Total:  94 minutes

Programmed Instruction Times (Minutes)

Subject #   Unit ONE   Unit TWO   Unit THREE   Total Time
    1          4          5.5        5.5          15
    2          7          5          5            17
    3          7          8          7            22
    4          4.5        3.5        4            12
    5         13          9         13            35
    6          5          5.5        7.5          18
    7          5          6          6            17
    8          4          5          5            14
    9          4.5        5          4.5          14
   10          9          6          7            22
   11         12         10         10            32
   12         10         10         15            35
   13          8         10          7            25
   14          5          5          7            17
   15          8          9          6            23
   16          6          7         10            23
   17          6.5        5.5        5.5          17.5
   18          8          6          5            19
   19          9         11         12            32
   20          6          7          7            20

Average Total Time for Group: 21.5
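The teaching-efficiency comparison reported in the study follows directly from these figures; a minimal sketch using only the times tabled above:

```python
# Compare mean learner time: programmed instruction vs. the fixed
# 94-minute conventional lecture sequence (Appendix J figures).
pi_total_times = [15, 17, 22, 12, 35, 18, 17, 14, 14, 22,
                  32, 35, 25, 17, 23, 23, 17.5, 19, 32, 20]

lecture_minutes = 25 + 34 + 35          # 94 minutes for the three units
mean_pi = sum(pi_total_times) / len(pi_total_times)

print(f"Mean programmed instruction time: {mean_pi:.1f} minutes")   # 21.5
print(f"Conventional lecture time:        {lecture_minutes} minutes")
print(f"Time saved on average:            {lecture_minutes - mean_pi:.1f} minutes")
```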
"A Preliminary Statement to a Theory of Attitudinal Structure and Change." Psychology: A Study of a Science. Edited by S. Koch. New York: McGraw Hill, 1959. Macdonald-Ross, M. "Programmed Learning: A Decade of Deve- lopment." International Journal of Man-Machine Studies, Vol. 1 (1969), pp. 73-100. Meeske, M. D. "Teaching Radio-Television in a Department of Communication." Educational Broadcasting Review, Vol. 6 (1972), pp. 219-223. Nauman, T. F. "A Laboratory Experience in Programmed Learning ikn' Students in Educational Psychology." Journal of Programmed Instruction, Vol. 1 (1962), pp.’9-181 Osgood, C. E.; Suci, G. J.; and Tannenbaum, P. H. The Measurement of Meaning. Urbana: University of IIlinois Press, 1957. O'Toole, J. F. "Teachers' and Principals' Attitudes Towards Programmed Instruction in the Secondary School." AV Communication Review, Vol. 12 (1964), pp. 429-433. Prejean, B. G. "Programmed Instruction in Journalism: An Experimental Study." Unpublished Ph.D. dissertation. Dissertation Abstracts, Vol. 29 (1969), 3569a. Pressey, S. L. "A Simple Apparatus Which Gives Tests and Scores-~and Teaches." School Sociology, Vol. 23 (1926), pp. 373-6. . "A Machine for Automatic Teaching of Drill Material." School Sociology, Vol. 25 (1927), pp. 549-52. "A Third and Fourth Contribution Towards the Coming 'lndustrial Revolution' in Education." School Sociology, Vol. 36 (1932), pp. 668-72. "A Puncture of the Huge 'Programming Boom'?" Edited by W. Scramm. The Research on Programmed Instruction, Vol. 35 (1964), p. 87. Roe, A. "A Comparison of Branching Methods for Programmed Learning." Journal of Educational_Research, Vol. 55 (1962), pp. 407-16? 101 Scherman, A. "Free-choice, Final Performance and Attitudes Toward Different Types of Programmed Instruction." Journal of Education, Vol. 155 (1973), pp. 56-63. Shull, H. I. "Programmed Instruction: A Comparison of Learning and Retention of Information Learned Through the Use of Small Step (Linear) Programmed Instruction and Large Step (Branching) Programmed Instruction." Unpublished Ph.D. dissertation. Dissertation Abstracts International, Vol. 30 (1970), p. 5266a. Skinner, B. F. "The Science of Learning and the Art of Teaching." Harvard Educational Review, Vol. 24 (1954), pp. 86-97. . "Teaching Machines." Science, Vol. 128 (1958), pp. 969-77. Tobias, 8. "The Effect of Attitudes to Programmed Instruc- tion and Other Media on Achievement from Programmed Materials." AV Communication Review, Vol. 17 (1969), pp. 299-306. Zimbardo, P., and Ebbesen, E. B. Influencing Attitudes and Changing Behavior. Menlo Park: Addison-Wesley, I969. ml A“. RI! B" UT " V” ”All S” R“ E" V” N” U" T” u H "I I" H" 3 1293 03056 5125