TH! EVALUATWE ASSERTECIR ANALYSIS AS A METHOD OF CONTENT ANALYSIS mm {m fin Wu 0&3 M. A. MCWGAH $T’A‘E‘E UNWERSETY Gigi E. Fasks €960 "m w...-.m. - .4. |II|IIIIIII||II|IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII I L. 31293 10442 8010 L I B R A R Y Michigan Stan University I “1 yr a; “J; ‘ 7 I, I ' “in: I. M? I” 7" ’~: 4 -' i . géfiEP if” " AFSTRACP TF3 EVALUATIVE A33 RTI ceivtair1 "CCNM Dlzlflc&ulifi;f' te‘rns ~11ic:1 vuill nw»3 n 1A1: sagma thirq; t »:AIl Innsrs (3f sqrvlitfii. The results uf hnth reliaaility checks gin wed that the Evaluative Assertion Analysis is not a relia- ble rese LLZ‘CII 3-11:5tPI-1£Z'l€-':I’.'.b. The first test 51:..ea-.-<;—;-d. that different re: ults will be obtained when different ‘ .3 E n -‘ “' ' 3‘ '\ 1” ‘fif' "‘ I l. ‘ 1'1— - ’ J ‘7 A ‘1" PBSUHPCLBPS use tne instrugent, and tne second oneti trument is U) basic assumptium of the in CD y. .I '— C} (T) CL P - J re :3- ‘3 not valid. The Evaluative Assertion Analysis is not an * 3t at (D objective research instrument been se an a tt earcher's bias dces not succeed. The ('0 reducing the re lack of reliability also affects its ocjectivity since a criterion of objectivity is reproducibility and this is impossible when different researchers obtain different results. The Evaluative Assertion Analysis does not have utility. It is a laborious, cumbersome, time-consuming process which does not produce reliable inf(r:ation. rfiTV-ra 7‘11 .3 v ‘r? r. r!" . V'! t. flHTfifiT P‘IV‘ 1" ,-. 1r.f1 g In; AJALUAIIVL moonllltu nHALio u «H ‘m Y'V'N'jv“ .v-fi n/“T‘ffi “\Y A3 A ntTnoD or Ubulbfl A THESIS Submitted to Kichigan State University in partial fulfillment of the requirements for the degree of FASTER CF ARTS Science Department of Political 1960 ACEZNC LDDC‘JZ f‘ I would like to record my sincere gratitude to Dr. Frtnk A. Pinher, Director of the Bureau of Social nnr Political Research, for the patience, understanding, and forbearance with which he directed the writing of this thesis. A note of appreciation goes to Dr. Hideya Kumata, of the Communications Research Center, Dr. Edward J. Weidner, Director of the Institute for Research on Cverseas Programs, and Dr. Frank B. Cliffe, of the Political Science Department, for reading this thesis and for the many helpful suggestions they made. mvrv‘rrx .' r1 :r-‘nrm AUiXL‘vL/JLUJDGi “Lgd T3 0 o o o o o o o o o o o o o I. THE FIELD CF CCN' _EHT A”ALYSIS . . . . Content Analysis II. THE THEC 'R.E FICAL E EICPIEHT CF TH.E . EVAIU ATIVE ASSEEIIC~ HALYSIS The Semantic Differential The Evaluative Assertion Analysis III. US THE EVALUATIVE ‘7” J'J 11; YSIS IE A RESEARC J. :' SEJL“. J. I L 7LT o o 0 L] 1: loterial, Concepts and Sample Prediction Cf heaninrs Procedure IV. ELIYZGLLYSIS ‘LI‘a D EIJfL U:L.LICH o o o o o o o Attitude Scores of Concepts teliicildirr Checks Evaluation of the Fethod BIBIJIKZ'H "UT: u/‘LPIY‘H o o o o o o o o o o o o o o o 0 iii. +¢m Ho: 12 (D able Cl.i'f'.’111€(2tOI‘ Distribution . . . Common leaning Term Ch ) m i t)? C t O 1" Va r i a 1’1 C 6! Common 3 Test Tarn Distribution 0 O O O Vazr'i ance iv. 4 (:3 Figure 1. \\ O C\ .q Keiiation Process Switchbward . Two-dimensional Semantic Three—dimensional Seaantic Serce Scale Score Semantic Space . . . The Bernsmtic Space Che<:1«:‘er‘r~..3,rd Relative Score Scale , , , . , . Profile Covariation . . . . . . Common Keaning Term Clusters . . Connector Clusters . . . . . . . Test Gale and Dimensions . . . U} V'. Pave I\\) _.s UT CO 3) 0 '70 m -4 «J INTRODUCTION In the past two decades the analysis of propa- ganda has received increasing attention from both acad- emicians and government agencies, especially those concerned with foreign policy. There have been studies made of every phase of prOpaganda: the source, the content, and the audience. The classic statement of fields of communication, "339 says EQQE to Ehgm_and with what effect" has been the subject of analytical studies attempting to answer a variety of questions. A wide range of techniques have been used, and conclusions have been reached that at times are in agreement and at other times are divergent. Usually the types of conclusions reached are dependent upon the purpose of the study, the reliability of the research techniques used, and the skill of the researcher in using these techniques and analyzing the results. In propaganda analysis, the emphasis has usually been on analyzing the content and the effect of the communication process. The impetus given to the tech- nique of content analysis by Harold D. Lasswell in the 1950's served to stimulate interest in the EQQE: or content, of propaganda, and subsequent studies by him and other scholars, mainly Bernard Berelson, Paul F. 1 Lazarsfeld, Natan C. Leites, Ithiel de Sola Pool, and Morris Janowitz, have refined content analysis to such a degree of SOphistication that more and more scholars are using the technique for a wide variety of studies both in the field of propaganda analysis and in other fields.1 Studies of the effect of communications have become numerous only in recent years. Significant studies of audience reaction and the effect of prOpaganda on audien- ces were made in the early lQAO's by Berelson, Lazarsfeld, and Robert K. Merton. With the advent of the cold war ‘I and the increasing interest in social research durine the last decade, more scholars in the fields of political science, sociology, and psychology have turned to the effects of propaganda on audiences as a field of fruitful research. Illustrations of this increased activity may be found in the rapidly growing number of studies such as the stereotype studies of Howard V. Perlmutter and David Shapiro, Harold R. Issacs, W. Buchanan and Hadley Cantril, and Frederick T. Davis; the general studies of effect by Wilbur Schramm, Carl Hovland, and S. N. Eisen- stadt; and in numerous studies made by the Bureau of Applied Social Research of Columbia University, the International Broadcasting Division of the De; per ment of . “:1 :relsor , CV‘HItF—‘l‘lt anal T.ns Researcn (C lricoc, Ill.: F "D- -‘l H [In (L 0.“ *3 (D U) (0 Q b] State, and International Public Opinion Research, Incorporated. The relationship of the elements of the "Iho, what, whom, what effect" communications model have been studied from a variety of different viewpoints but, as noted above the emphasis has been on the what and 9 .. ._____.._ what effect elements and their relationship. Of course, any study of propaganda would necessarily include at least a mention of the whg and whgg elements, and often these elements are an important part of the study because the content and its effect has little meaning when not associated with the communicator and communicates. It is in this area, of relating the content and its effect to the communicator and the communicates, that the least amount of research has been done. The research reported here is an attempt to investigate the usefulness and reliability of a research tool, the Evaluative Assertion Analysis, designed to measure communicator attitudes from assertions made in the content of his prepaganda. A research method that can reliably measure communicator attitudes and relate them to audience attitudes would be verv useful in this area of research. J THE FIELD CF CIT??? AIALYSIS Proprganda has been defined in many ways. In some parts of the world propaganda is considered an unethical technique, tahing the form of a secret dissemination of ideas, information or gossip for the purpose of helping or injuring a person, group of persons, or a cause. In other parts of the world it is considered an educational process and is an acceptable form of disseminating infor- mation. In this study, propaganda is broadly defined as a process which attempts to persuade and convince persons to believe and act according to the desire of the propa- gandist. This definition does not distinguish between the techniques or ultimate goals of the prOpagandists. Anyone who attempts to manipulate people's activities and beliefs, whether he is an educator, advertiser, public relations agent or newspaper editor is a propagandist. From this definition of propaganda, it is evident that one of the basic objectives of propaganda is to influence the attitudes of an audience, either by chang- ing existing attitudes or by influencing the formation of attitudes where they do not exist, in such a way as to suit the objectives of the propagandist. The effec- tiveness of this particular objective of propaganda is the degree of change in attitude brought about by the propa- gandist, assuming that the formation of a new attitude is a change from no attitude to the new attitude. If we use the degree of change in attitude as the measure of effectiveness, we must know the attitudes of the audience both before and after the propaganda has been introduced. The direction of this change in attitude must be tow'rd the attitudes the prepagandist wishes the audience to have. For example, the audience has a slightly favorable attitude toward the idea, or nation, or whatever the case may be, toward which the propagandist wishes to create a strongly favorable attitude. If, after exposure to the propaganda, the attitude is a strongly unfavorable one, there has been, on an effective or non-effective basis, a negative effect. If we are interested in measuring the effective- ness of our own propaganda we will, or should, know the attitudes we wish the audience to have. If we are interested in measuring the effectiveness of some other propagandist's activity, we must find out what attitudes he wants the audience to have. In most such situations we will not be able to go directly to the source and get this information, so we must get it indirectly. The most logical way to do this is to analyze the content of his propaganda. We cannot be sure this will tell us what attitudes the propagandist wants to create in the audience, but if we can find out the attitudes of the propagandist from this material, we can measure the effect- iveness of his propaganda in creating attitudes similar to his own. In this case we must, in the absence of supporting or contrary evidence, assume that the propa- gandist's objective is to create his own attitudes in the audience. Analyzing propaganda content in this way is an attempt to relate the attitudes of the 339, or source, to the attitudes of the whgm, or audience. In order to measure the kind of effectiveness discussed above, two different types of attitudes must be measured: source attitudes and audience attitudes. Each of these types of attitudes requires a different kind of measuring instrument: source attitudes should be measured from the content of the propaganda output, while audience attitudes should be measured by going to the audience. For comparative purposes, the data for each type of attitude should be collected by similar, if not identical, procedures. An instrument is needed that can be applied to both content and audience with the least modification of procedure. The most obvious and widely used method for determining the nature of the content is content analysis. Content Analysis Content analysis, as defined by Serelson, is a "research technique for the objective, systematic, and quantitative description of the manifest content of- communication".2 This method, then, has four fundamental characteristics: the requirement of objectivity, a system of analysis that excludes data not relevant to the problem or hypothesis, quantitative expression, and description of "manifest content". Also implied are three assumptions: 1. That inferences about the relationship between content and intent or between content and effect can validly be made, or the actual relationships established; 2. That the study of the manifest content is meaningful; and 5. That the quantitative description of communication content is meaningful.3 In view of these requirements and assumptions, is the method of content analysis applicable to a study of the relationship between communicator attitudes and audience attitudes? Let us examine each with this criterion in mind. The first three requirements are highly desirable for this type of study. Objectivity, or the requirement that the procedure of analysis be such that different analysts will get the same results from the same content, is a basic requirement of any scientific study. A system is essential to any study that seeks to establish {U H g C24 0 e O C“ \JJ CO scientific propositions so that data extraneous to these propositions will be weeded out, leaving only relevant data to be considered. Quantitative expression is especially desirable in the present study because of the need to compare the results of two different kinds of measurement procedures. Cne requirement, that of a description of mani- fest content, and one assumption, that the study of the manifest content is meaningful, i. e., capable of detecting the intended meaning of the content, are not applicable to the type of study we are interested in. The requirement of a description of manifest content, or the "syntactic-and-semantic requirement", is made to limit the analysis only to what is said, not to the motives behind what is said or the expected reactions of the audience to the content. Since attitudes are not always "manifest" in the content, but often have to be derived from the assertions made by the propagandist, the syntactic-and—semantic requirements would exclude these attitudes from the area of legitimate content that analysts are interested in when using the content' analysis procedure. For future reference, I will list the three reasons given by Berelson for this limitation to manifest content: 1. The low validity of the analysis, since there can be little or no assurance that the assigned intentions and responses actually occurred; 2. The low reliability of such analysis, since different coders are unlikely to assign the material to the same categories of intentions and responses with sufficient agreement; and 1 The possible circularity involved in establishing relationships between intent and effect, on the one hand, and content, on the other, when the latter is analvzed in terms applying to the former. The assumption that the study of manifest content is meaningful presents another facet of the same problem. I do not take issue with the assumption, but the reason- ing behind it makes it even more evident that standard content analysis methods are not applicable to the type of analysis we are concerned with. The reasoning behind the assumption is that the meaning assigned to the content corresponds to the meaning intended by the communicator, or understood by the audience; that there is a common universe of discourse among the relevant parties so that the manifest content can be taken as a valid unit of study.5 Berelson suggests that there are various kinds and levels of communication content, and that analysis of the mani- fest content for meanings can apply to some and not to others. Suppose we have a continuum along which different communications are placed depending upon the degree to which different individuals get the same meaning from 41b1d., ’d 4 O 7. 51bid., *d lO them. At one end we have the communications which do not depend upon individual interpretation for their meaning, such as a simple news story, and at the other end we have communications which depend almost entirely upon indivi- dual interpretations for their meaning, such as an ab- stract modern poem. At some point along this continuum the diversity of understanding is too great for reliable analysis. The content that can be analyzed would be that at the end of the continuum where understanding is simple and direct and not at the other end, where the understanding depends primarily upon the individual's interpretation of the communication. Because analysis of the manifest con- tent assumes a certain uniforimty of comprehension and understanding, it must deal with relatively denotative communication materials and not with relatively connotat— ive materials.6 Attitudes, however, are connotative meanings, not denotative; since content analysis is restricted to denotative materials, it is not a suitable method for attitude measurement in its standard form. What is needed is a modified form of content analysis that attempts to measure connotative meaning. The Evaluative Assertion Analysis develOped by C. E. ngood, Sol Saporta, and Jum Nunnally attempts to measure evaluations of particular concepts in the propa- ganda material by a process of quantifying attitudes and 11 so seems appropriate for this type of study. The measurement of audience attitudes presents few problems where the selection of a tool is concerned. There are a variety of techniques designed for this purpose, ranging from simple questionnaires to elaborate scaling methods. However, since a method is needed which uses the same procedures and expresses the results in the same terms as the Evaluative Assertion Analysis, we need only go to the research tool from which it wa developed, the Semantic Differential. Also developed by Osgood and others, the Semantic Differential is a method of measuring meaning, and may be applied to attitude measurement with no modification of procedure. These two research tools will be explained fully in Chapter II. ChAPTEi II TNT THELTETICAL DBVELCPKENT CF THE EVALUATIVE ASSERTICN ANALYSIS This study is primarily concerned with the relia- i bility and applicability of a tool to :easure communicator L— attitudes through the analysis of the content of the messages he disseminates. The discussion of the theory of this tool, the Evaluative Assertion Analysis, is intro- duced through a general diacussion of another research tool, the Semantic Differential. This discussion of the Semantic Differential may at first seen only tangential to the problem under consideration, but it is appropriate for two reasons: 1. The Evaluative Assertion Analysis is primarily a modification of the method of the Semantic Differential which attempts to apply the theory of semantic differen- tiation to printed materials. The Semantic Differential is a tool to be used with respondents in an interview or testing situation and requires a particular' ype of ques- tion. _The Evaluative Assertion Analysis modifies this method by attempting to determine the “espouses a communi- cator would make to the questions of the Semantic Differ- ential on the basis of assertions made in the messages he J‘.‘ J disseminates. A general discussion oi the theory of J... semant c differentiation is essential to an understanding of the Evaluative Assertion Analysis. 12 2. In Chapter III, a See intic Differential is used as a test of the validity of an assumption of the Evaluative Assertion Analysis. A discussion of its methodology here eliminates the need of presen tin~ such a discussion w: en the vs lility test methodology is pre- sented. An explanation of the method of semantic differ- entiation is more easily understood if it is discussed in the context of the theory on which it is based. Th e Sofia Q1310 Di. ffe rent ial r7 ! 1.1. The Se m.nt ic ff tr enti.l is a measuring tool ‘3: which attempts to measure conrotative meaning. The theory of connotative meaning as it is used here is a modified application of learning theory, based on Pavlov's "con- ditioned response" theory, to the use of language as a response to linguistic stimulation. Pavlov trained, or "conditioned" dogs to salivate when a bell was sounded by giving them food and sounding a bell at the same time. After repeating this procedure a number of times, he found that the dogs would salivate when the bell sounded even though there was no food present. By conditioning the dogs to accept the bell as 7All ma terial used in this description is from Charles E. ngo ood, George J. Suci, and rercy H. Tannen-- baum, The leasx rcuent of heaninc, (Urbana: The University of Illinois Mr ss,13?:7, or from lectures given ‘ ,, oy Dr. Hideya Kumata at Ewichi an State University. Cnly direct quotations are spe ccifically noted. 12. a symbol of food, he was able to elicit the same response from the symbol as would be elicited from the food itself. This diagram shows the relationship between the stimulus and the symbol: S (food) - - - - - - R (salivation) s (symbol) - - - - - — R (salivation) ngood maintains that the single—stage condition- ing theory does not go far enough in explaining linguistic behavior; reactions made to symbols are seldom the same as those made to the objects symbolized. Instead he inserts a second stage in the conditioning process which he calls a "representational mediation process". To understand this process, two terms must be defined: significatc and sicn. A sipnificate is any stimulus which, in a given situation, regularly and reliably pro- duces a predictable pattern of behavior. A sign is a symbol of the significate if it evokes a part of the behavior elicited by the significate. In Pavlov's experi- ment, the food would be the significate and the bell would be the sign of the significate "food". Osgood's process is called "representational" because it is part of the same behavior produced by the significate, and "mediational" because it produces a self- stimulation which may be associated with different acts related to the total pattern of behavior. This modifi- cation of the S - R pattern may be illustrated as follows: S. —-—-—-—-—--——-- ------- R; // / S is the significate which produces the total pattern of behavior Rt: SX is the sign of the significate which evokes the representational mediation process rm------sm which in turn produces an E portion of the behavior pattern, RX. Formally stated in psychOIOgical terms, this is Osgood's theory: "A pattern of stimulation (SK) which is not the significate (S) is a sign of that sig— nificate if it evokes in the person a mediating process, (rm-----sm), this process (1) being some fractional part of the total behavior elicited by the significate and (2) producing responses (Rx) which would not occur with- out the previous contiguity of non-significate and sig- nificate patterns of stimulation”.8 Words (signs) represent things (significates) because they produce a mediation process which is a part of the actual behavior toward these things. The meaning of a word, according to this theory, may be measured in terms of the responses (RX) which are pro- duced when the word is used as a stimulus. Consider the two-stage process as a model which includes the "little black box" representing the nervous system of the individual: Q , 14' UOsgood, heasurement of heanin", D. 7. . SX --—-----oqb- rm ——-0 S —— ————— -—- RX ‘ J O Ieaning is a variable of human behavior which, according to ngood's theory, is identified with the r.fl ----- -sm process that ta we place within the box. To measure this process, some ooservable outp ut from it may be used as an index. Since RX is the output of the process, some characteristic or sampling of Rx may be used as a means of inferring what is happening in the rm ----- sm process. If we want to know what something means to an individual, we ask him to tell us. If a quantitative index can be devised to measure the responses he gives us (RX), we may have an index of the meaning of the word we have used as a stimulus (SX). To use linguistic responses as this index of meaiing, we need: I. A carefully devised sample of alternative verW:tl recnous s whic? can be sta;11fi d a01oss suzmj ct b8; 2. These alternatives to be elicited from .he suoiects rather tlan e.it ted so that en coding fluency (ability to express meaning in verbal terms) is eliminated as a var able; and 3. TLlOSU alternatives to be representative of the major ways in which meanings vary.9 Osgood maintains that selection among successive pairs of common verbal Opposites (good-bad, strong-weak, hard-soft, etc.) should gradually isolate the meaning of the word. If a scale is devised on which the subject is asked to judge the meaning of the word (or "concept" as I w" it will be used hereafter) as to whether it is 'gOOQ or "bad", and if the scale has successive steps to indicate how good or hag bad he thinks the concept is, then the scale will be a measure of the direction of the judgment (good gg: bad) and the intensity of the judgment, (slightly good, extremely good, slightly bad, etc.). The scale that Osgood uses has seven steps, GCCD DAD with the number of the steps corresponding to these verbal expressions: 1. extremely good 2. quite good 3. slightly good 4. neither good nor bad 5. slightly bad 6. quite bad 7. extremely bad 18 The instructions for scoring the scales include these verbal descriptions of the scale steps. If a series of scales is devised which includes all possible pairs of verbal Opposites that can be used to describe the concept, then the subject's responses on this series of scales will be the total meaning of that concept to him. Each of these scales may be considered a separate representational mediation process. The individual's total mediation process may be schematically illustrated as a kind of switchboard, with each scale represented by a row of possible circuits from which one may be chosen, and each mediation process as the individual (the operator) selecting the proper circuit and "plugging" a Jack into the board to complete the circuit. (Figure 1.) l . ,~~. 2 ’C‘ _£:> ~ 0 \\ ’— I / s ff #6 .OKK.‘/ \\\\~ \ 50\\ O? e.-=,L_. 7 Fig. l--Eediation Process Switchboard 19 Only five scales are used in this switchboard to make the illustration simple. Actually, a switchboard would consist Of many scales, as noted below. The circuits would each have a number, corres- ponding to the step on the scale into which the Jack was plugged. The response (Rx) which is used as an index would be analogous to the "number" of the concept. In the above illustration, the concept "politicial act" (3x) would have a number of 24216, which could be ex- pressed verbally as quite good, neither fair nor unfair, quite kind, extremely active, and quite weak. If we had a row of circuits for each Of all possible verbal Opposites which could be used to des- cribe the concept, there could conceivably be a switch- board consisting Of several hundred rows Of circuits, with a resulting "number" Of the equivalent number of digits. This would be quite unmanageable and undesir- able in a measuring instrument. If a sample Of pairs Of verbal Opposites could be selected which would be as representative as possible Of all the ways in which meaningful judgments can vary, and still be small enough to be handled efficiently in its application, then this Operational defect would be eliminated. Factor analysis is the legical tool to use to find such a sample. Factor analysis is a technique which attempts to cluster the correlation coefficients of a group of tests 20 to find the least number of such tests that will most completely describe the individuals who took the tests with the most accuracy. If a large number of mental ability tests are given to a large number Of people, every individual and every test will differ from each other. To completely and accurately describe an individ- ual by the process of giving him every test that could be found to measure any portion of mental ability would involve a highly complicated and extremely clumsy pro- cedure of testing and scoring. If, however, these tests could be collected into groups which measure similar traits, then by selecting a small number of tests from all possible groups, the complete range of mental ability could be measured with a small and manageable number Of tests. Factor analysis is a method by which a large number of tests can be grouped together into these groups or "factors" which measure the same traits. Each of the scales of the Semantic Differential is considered a "test" in the terminology Of factor analysis. The series Of scales will make up a battery of tests, and a matrix of correlation coefficients can be calculated from the scale scores. If there are a number of pairs Of verbal Opposites which measure the same kind of meaning, they will cluster together and will be highly correlated with each other. This will result in a common factor, because as was noted in the 21 discussion of factor analysis methods, the Objective of factor analysis is to group tOgether those tests which measure similar traits on the basis Of their correlation with each other, so that the least number of tests can be found that will most completely describe what is needed in the Semantic Differential: the least number of pairs of verbal Opposites that will most completely cover the range of possible meaning a concept will have for an individual. ngood used Thurstone's Centroid Method Of factor analysis on an experimental group of scale scores and found that the scales did cluster together into several common factors. In the first analysis extraction of factors was stopped after the fourth factor because this factor took out less than two per cent of the total variance and the pattern of scales having high loadings on it had no semantic meaning. These four factors and their characteristics (after rotation) were: 1. Factor I accounted for 33.8 per cent of the total variance and 68.6 per cent of the common variance (the common variance is that portion Of the reliable variance which correlates with other variables). This factor was named the evaluative factor because the scales which had the highest loadings on it were composed of those adjectives which are usually used in making evalu- ative judgments, such as good-bad, beautiful-ugly, R) 11) sweet-sour, clean-dirty, and kind-cruel. All these scales had loadings Of .75 or higher. These scales are almost purely evaluative because the variance which they extract from the total common variance is due almost entirely to the first factor. In other words, these scales had a high loading on Factor I and very small loadings on the other three common factors. 2. Factor II accounted for 7.6 percent of the total variance and 15.5 per cent of the common variance. This factor was named the potencv factor, since it con- sisted of scales such as large-small, weak-strong, heavy-light, and thick-thin. Many Of the scales in this factor also had high loadings on the evaluative factor, although their highest loadings were on the potency factor. For example, the scales hard-soft, brave-cowardly, rough-smooth, and loud-soft had very high loadings on the evaluative factor. 3. Factor III accounted for 6.3 per cent of the total variance and 12.7 per cent of the common variance. This factor was named the activity factor and was charac- terized by such scales as fast-slow, active-passive, and hot-cold. Several scales had high loadings on this factor but had as high or higher loadings on the evalu- ative factor: red-green, young-Old, and tense-relaxed. 4. Factor IV accounted for 1.5 per cent of the total variance and 3.1 per cent of the common variance. These percentages were too low to consider this a reliable factor, and the scales, as noted above, made no sense semantically. It appears that there is one dominant factor, the evaluative, which accounts for a very high part of the common variance (69 per cent), with the next two factors together accounting for slightly more than one-fourth of the common variance (15.5 Per cent and 12.7 per cent). This pattern was consistent in two later factor studies by Osgood and also in studies by Solomon and Tucker.10 Other factors appeared in later analyses, especially when different methods of rotation were used, but theSe accounted for only a small percentage of the common variance and were not considered significate in the present studies. With a different sample of scales and more complete analyses, ngood hopes to be able to iden- tify other factors, but his research thus far has not reliably done this. It appears likely that other major factors will appear, especially when it is noted that the present three factors account for only 47.6 per cent of the total variance. It is also possible that the remaining variance may not be explained in terms of a few major factors, but in terms of a large number of relatively small specific semantic factors, each taking 10Ibid., pp. 66-8. 24 out a very small portion of the total variance. Which of these possibilities will be true, if either, must be shown by further research. The three factors discussed may, with reasonable qualifications, be corsidered at H leaet grin r; zactors of semantic meaning. To devise a test of the least number of pairs of verbal Opposites needed to measure this primary linguistic meaning, we need only select the scale with the highest factor loading in each factor, giving us a test composed of only three scales. This seems to give us the ideal test in terms of manageability, but what about representativeness? Three scales would make the test representative of the number of primary factors, but one test from a factor does not completely represent that factor, nor does it give a preciseness of measure-» ment needed to distinguish one concept from another. As an illustration, consider the concepts SUCCESS and FREEDOM, judged on a test of only three scales, good-bad (I), strong-weak (II), and active-passive (III). An individual could score each of these concepts the same, for example good 3, stronm 2, and active 2. This would mean that the concepts mean exactly the same thing to this individual, but ask him if SUCCESS and FREEDOM are the same thing and chances are he will say no. We evidently need a series of scales which will give a richer meaning than will three, a series that will I“) U1 precisely locate the meaning of the concept in semantic Semantic space can best be understood if an illustration is used from factor analysis. The tests are located in two dimensional space by drawing perpendiculars from two orthogonal axes, representing two factors. The distance of the base of the perpendiculars from the point where the two axes meet corresponds to the factor loadings of each test on the two factors, measured along the refer—— ence axes from 0 (where the axes meet) to 1.0, the high-~ est possible loading on a factor. The location of the test is the point in space where the two perpendiculars meet: 7' I it Fig. 2--Two Dimensional Semantic Space If we add a third reference axis to this diagram, the test may be located in three dimensional space by measuring the distance along the third reference axis (a third factor), drawing two perpendiculars at right angles D.) O\ to each other, and drawing a three-dimensional box to locate the test: Fig. 3——Three Dimensional Semantic Space We can use the same illustration to locate a point in semantic space which corresponds to a concept by using scale scores instead of factor loadings to determine the point along the reference axes which locates the bases of the perpendiculars. The scale itself will be the reference axis instead of the factor, and one scale will be the reference axis for two quadrants. We cross the scales at their mid-point; if the scale is good-bad, all "good" judgments will be in one quadrant and all "bad" judgments will be in another quadrant. There are three factors in the Semantic Differential, so a diagram of three-dimensions composed of one scale in each of the three factors, (which will be called dimensions) 27 will illustrate this construction: 7 o MSSWE Fig. 4--Sca1e Score Semantic Space There may be as many dimensions as there are scales, so the semantic space has many possible dimen- sions (is multidimensional). Another diagram which illustrates the location of a concept in semantic space, and which is more useful in analyzing the results of semantic measurement, is a sort of checkerboard: (41,491.) (“3" L ll j we". 3 -2 I4 .00 . g .21 Sm”: [ I I ; m /- L 1.1 n 0-4 (’L;’~I.S’ -) Fig. 5--The Semantic Space Checkerboard 28 A concept can be defined by its location in a multidimensional semantic space. It is difficult to imagine more than three dimensions, because in reality we are famaliar with only three-dimensional space, but mathematically we can conceive of as many rectangular axes as there are scales in a space of as many dimensions. The semantic space is thus a region of unknown dimensionality, Euclidian in character, with each scale of verbally opposing adjectives representing a dimension (which is a straight line) passing through the origin of the space. To define the space with maximum efficiency, the smallest number of orthogonal axes (dimensions) is needed which will completely exhaust the dimensionality of the space.11 By factor analysis, this number has tentatively been identified as three; these do not ex- haust the ways in which meanings may vary (the dimen- sionality of the semantic space), but the existence of a large number of additional dimensions in the total semantic space does not mean that meaning may not be measured with scales taken from these three dimensions because the additional dimensions account for relatively little of the total variance as compared to that variance taken out by the three dominant dimensions. According to this rationale, a measuring instru- ment can devised, composed of scales with heavy loadings 11Ibid., p.25. 29 on the evaluative, potency, and activity factors, that will measure a dominant portion of the meaning that a concept has for an individual if the scales are carefully selected to be representative of gggh factor and, of course, relevant to the concepts being measured. (A scale with the adjectives dry-wet, for example, would hardly be relevant in judging a concept such as SCCIALISH.) Osgood suggests that the minimum number of scales should be three from each factor, making a total of nine. The Evaluative Assertion Analysis The Evaluative Assertion Analysis12 is a method of studying the semantic content of printed materials in an attempt to determine the evaluations being made of significant concepts by the writer of the material. It attempts to eliminate the possibility of cedar bias from entering into the analysis, and to provide a con- sistent method of quantifying evaluations that will be reliable when used by coders with a minimum of training in the method. It is an outgrowth of the Semantic Dif- ferential, and attempts to apply the 10310 of semantic differentiation to printed materials by modifying the technique of ashing the individual what a concept means 12Charles E. Osgood, Sol Saporta, and Jum C. Nunnally, Evaluative Assertion Analysis, (Urbana: The University of Illinois Institute of Communications Research, 1954). 30 to that of finding out what the concept means by analyzing the assertions made by him (in this case, the communicator of the message) about the concept. In other words, instead of having a series of scales designed to force the subject to make particular judgments, the judgments that are made by the subject are analyzed in an attempt to find out how the subject would have scored the concept on the series of scales. This analysis is based on the coder's judgment as to the score on the seven-step scale that the assert- ion should be given. For example, if the assertion is made in the material that "X is extremely bad", the coder would assign a score of —3 (or 1, since the steps may be scored from -3 to +3 or from 1 to 7) on a good-bad scale as the attitude toward concept "X". Notice that the coder is not trying to measure the meaning of concept "X", but only the attitude toward it. Since the evaluative dimen- sion of the Semantic Differential is a reliable index of attitude15, the use of a scale (good-bad or favorable- unfavorable) which has a high factor loading on the evaluative dimension can be used to measure attitude. There are four basic assumptions made in the logic of the “valuative Assertion Analysis; that sophis- ticated users of English 1. can distinguish attitude objects from common ‘5Ib1d., p. 1 ff. meaning_materials. "Attitude objects" are those concepts whose meanings result from individual differences in variables such as past experience, education, and social attitudes. "Common meaning" terms are those terms upon whose meaning all users of English must agree before they can communicate. An example of an attitude object would be "compulsory military training"; and an example of a common meaning term would be "fair". 2. can make valid and reliable judgments as to when two alternative constructions are equivalent If? non-oguivalent in meaninm. This assumes that a coder can detect two complete assertions in a single state- ment and translate them into separate assertions with- out losing any of their evaluative significance. For example, the statement "People of good will denounce these Communist aggresrtrs" could be translated into these two statements: "Communist aggressors are de- nounced by peopb of good will" and "Communists are aggressors". 3. can arree to a satisfactory decree on the J LA — direction and intensity of assertions. This assumes "Communists that coders will agree that the statement are denounced by peOple of good will is a dissociative assertion of strong intensity, while the statement "Communists may have been avgressors" is an associative L) \N F0 assertion of weak intensity. A "dissociative assertion" is a statement in which the verbal connector separates, or emphasizes, the difference between the attitude object "associative asser~ and the common meaning term; and an tion" is a statement which identifies, or emphasizes the similarity of, the attitude object with the common mean- ing term. 4. can agree n tne direction and degree of evaluativeness f common leaninm terms. This assumes H. t that coders will agree that terms such as 'people of good will" are extremel' positive in evaluation "dele- v ’ "aggressors" and gates"eue:neutral in evaluation, and "violators" are quite negative in evaluation.14 The validity of these assumptions, especially number four, will be discussed in Chapter III. There are four steps in the procedure of the Evaluative Assertion Analysis. They will be discussed fully in Chapter III, so I will only outline the object- ives of each step here: Stage I: attitude objects are identified in the material and nonsense syllables (XC, RU, etc.) are substituted for them each time they appear; the mater- ial is then transcribed, so that the coder working on subsequent stages will not know what concept is being 14Ibid., pp. 1-3. 33 evaluated. This is to prevent the bias of the coder from entering into his evaluation of the assertion. Stage II: the transcribed material is translated into an exhaustive series of evaluative assertions made about the attitude objects. Stage III: the assertions and common meaning evaluative terms are assigned directions and intensity values. Stage IV: the assertions made about each attitude object are collected and the values assigned to them are averaged to obtain a final numerical value denoting the attitude toward each particular attitude object. The attitude objects are then placed on an evaluative scale, ranging from +3 to -3, according to this final numerical value. CHAPTER III USING THE EVALUATIVE ASSERTION ANALYSIS IN A RESEARCH PROBLEM In order to properly evaluate the Evaluative Assertion Analysis method, I used it in an actual research problem, designed specifically for this purpose. The research reported here was conceived as one phase of a three-phase study of pr0paganda effects and techniques in a country bound by cultural, historical, and language ties, but separated by political and administrative di- chotomies. The country, Vietnam, is a small country in Southeast Asia which has been divided by international agreement into two states, each having separate govern- ments although inhabited by a homogenuous populace which, until the separation five years ago, was strongly nationalistic and unified. The separation is made sharper by the difference in political ideOIOgies. The North, with a population of thirteen million, is under Communist control and was the center of government and education before the separation. The South, with a population Of twelve million, is non-Communist and composed of most of its original pOpulation plus almost one million peeple who moved from the North when the 34 Communist regime took over. Also important is the fact that for ninety-three years before the separation Vietnam had been a colony of France, along with Cambodia and Laos, and for the ten years immediately preceeding the separation had been fighting for independence from France. The sep- aration of the nation and Vietnam's independence were the results of this fight. Ho Chi Minh, a dedicated Communist, led the fight for independence and became the leader of the NOrth after the separation. One of the provisions of the international agree- ment (the Geneva Convention of 1954) which partitioned Vietnam was that free elections were to be held to decide which of the two governments was to be the official government of the country. These two competing govern- ments were those of Ho Chi Minh, leader of the Communist- supported North, and Ngo Dinh Diem, leader of South and supported predominantly by the United States. The impending elections caused a considerable amount of propaganda activity on the part of the two competing governments in addition to the amount of Communist and non-Communist prepaganda activity existing before the separation. The propaganda analyzed in this study is a portion of that disseminated by the NOrth Vietnamese government in an attempt to sway the peOple of South Vietnam (and those of the North who were not already convinced) to follow the leadership of Ho Chi Minh and U1 0\ the Communist-controlled North Vietnamese government. Material, Concepts and Sample The first step in the study was to select (1) the material to be analyzed, (2) the concepts to be studied, and (3) the sample. The nature of the material selected necessarily limits the scepe of the study. The total propaganda output of North Vietnam could hardly be obtained in any form, and even if it could be, would be so voluminous that a satisfactory analysis could not be undertaken, even through sampling, within the sCOpe of this study. Arbitrary limits were set to make the analysis to a certain extent manageable. The first of these limits was that the material to be analyzed should consist 9: radio broadcasts from the radio station at Hanoi, the capital of North Vietnam. The best sources for this material are the daily reports of a monitoring station which transcribes output from various radio stations throughout the world. Because of the nature of the research tool used and the objectives of the study, the second limitation was that only material which contained the complete text ,\ ‘0 - n." v 1 m .1n L A V. w 1 - ' I\ 1 V7 “ 4 _4 u‘ ‘ oi th? messages monitored would be indiyzfid. Because the context of he message is important in determining attitude intensity, it is obvious that excerpts or summaries would not provide suitable material to be analyzed by the Evaluative Assertion Analysis method. Also because of the nature of the tool, a third 1.. limitation was that only texts in the English language would pg analyzed. The basic assumptions of the Evalua- tive Assertion Analysis require that the material to be :orked with be in English, for the reason that no work“ a...__-—- .-—.— ‘_,_- .—.——— has been done Dalia? anslisasility 0f the tool 19-9thsn. glanguages; The texts of the propaganda analyzed in this study were originally in English, since translations would modify the context of the message to the extent that reliable results would not be obtained. The concepts selected for this study were those which (1) were frequently used, or (2) should show some extreme variations in meaning from source to audience, (3) would be expected to show relative similarities in meaning from source to audience, and/or (3) would give information in analyzing policy lines of the source. With these criteria in mind, the concepts selected were: America (United States), capitalists, China, Cambodia, colonialists, democracy, Diem, Diem administration, Dulles, Eisenhower doctrine, French government, French peOple, Formosa, Geneva Convention of 1954, imperialists, Japan, Laos, Southeast Asia Treaty Organization, South Vietnam, South Vietnamese people, South Vietnamese administration, United Nations, 38 United Nations agencies, and Soviet Russia. Each of these concepts will satisfy at least one of the given criteria, some will satisfy more than one, but none will satisfy all four, criteria (2) and (3) being logically incompatible. Criteria (1) and (4) would be most nearly satisfied by all the concepts selected. A period of one year, from January 1, 1957 to January 1, 1958 was considered sufficient for the time period to be covered in the analysis. As no output was transcribed on Sundays, this time interval genera- teci a population of three hundred thirteen days output, from which a sample of sixty-five days was drawn. A probability sample was drawn without replacement by means of a table of random numbers. The days selected for analysis were distributed fairly evenly over the year, with three from January, six from February, eight from March, nine from April, five from May, eight from June, eight from July, five from August, five from Sep- tember, three from October, three from November, and two from December. Ten of the days' output contained no references to any of the concepts being studied (March, one; April, one; July, one; August, three; September, one; Hovember, one; and December, two.) 59 Prediction of Meanings of Concepts Twelve of the concepts were put into six "similar groups", composed of concepts which would have similar scores. Similarity in this case was determined by the relative attitude score the concepts were assigned on 'the good-bad scale in the final stage of the analysis. These groups and the reasons for their expected similarity were: 1. United States and Qigg_, During the year (1957) in which the broadcasts were made, the United States was giving considerable support to South Vietnam in the form of technical assistance and military aid, and much of this aid was given to President Ngo Dinh Diem as he was trying to consolidate his position as leader of South Vietnam. The assumption was made that the Nérth Viet- namese would attack both Diem and the U. S. and criticize the U. S. for interfering in the internal affairs of Vietnam, while trying to build the picture of President Diem as a "puppet" of the U. S. ’2. Diem administration, Diem, and South Viet- namese administration. These concepts were almost synonymous, and were expected to be interchangeable, with the choice of the term to be used depending upon whether a charge was to be made against Diem personally or just against his government. 4O Unite States, imperialists, and capitalists. kw! o The charge that the U. S. is a capitalist or imperialist nation has been a favorite charge of Comminist propaganda throughout the world. This grouping had two primary purposes: (1) to see if the north Vietnamese consistently made this charge, and (2) to see how closely the attitude "capitalists" and imperialists" correspond with toward the attitude toward the U. S. and if there were any other countries with which the term imperialists was used as consistently. 4. France and colonialists. Because Vietnam had been a colony of the French for a long period of time before its separation, it seemed obvious that these two concepts would be closely similar in score. 5. Soviet Russia and China. These two countries are the accepted leaders of the Communist cause in the West and the East, respectively, and it was assumed that for ideological reasons both would receive approximately c+ the same atti ude score. 6. Cambodia and Laos. Emile North and South Vietnam had been under the colonial rule of France as one nation, they had been joined with Cambodia and Laos ad French Indo-China. These two states border South Vietnam on the west and Border North Vietnam on the as- I southwest. goth. U) tate; have large Communist parties. JJ 1 North Vietnamese Communists would be expected to express 41 favorable attitudes toward both states in an effort to win then over to their side as allies a ainst South Vietnam. These states are not large, as are China and Soviet Russia, but their alliance w th North Vietnam would have a strata ic value to any attempt to take over South Vietnam. The attitudes toward them would' be expected to be as favorable as those toward China or Soviet Russia. Cf the twenty-four concepts, they were expected to fall into two general categories, those toward which North Vietnam would express a favorable attitude, and those that would have an unfavorable attitude expressed toward them. Those expected to have a favorable attitude expressed toward them were China, Cambodia, democ:acy, Geneva Agreement of 1954, Laos, South Vietnamese people, Soviet Russia, United Nations, and United Hations agencies. Those expected to have an unfavorable attitude expressed toward them were the United States, capitalists, colonialists, Diem, Diem administration, Dulles, Eisenhower doctrine, French government, French people, Formosa, imperialists, Japan, SEATC, South Vietnam, and South Vietnamese administration. Procedure The procedure of the analysis was that of the 42 Evaluative Assertion Analysis, consisting of four stages: 1. The isolation of attitude objects and the substitution of nonsense syllables for them, and the transcription of the material. 2. The translation of the masked materials into an exhaustive set of evaluative assertions about each attitude object. 3. The assigning of direction and weight to the assertions and common meaning terms. 4. The collecting of all assertions about each attitude object and averaging the scores of the total assertions made about them. The attitude object is then aSsigned to its position, by average score, on an evaluative scale ranging from +3 to -3. Stage I The first stage of the analysis method was done by a coder other than myself in order to reduce the bias assumed to enter into the evaluation when the evaluative coder knows the identity of the concept he is evaluating.15 For example, if the coder has a very unfavorable attitude toward China and an assertion is made that "China performs aggressive acts", the coder's bias might color his judg- ment of the term "aggressive acts". This masking procedure is meant-t0 guarantee some degree of objectivity to the r 13A word of thanks to Miss Pat Terrill of the Bureau of Social and Political Research for the capable and efficient way in which she performed this task. 43 method by reducing the coder bias. The concepts were assigned nonsense syllables and the material, totaling one hundred fifty two pages, was transcribed by the coder in Stage I. The identity of the concepts was unknown to me until stages three and four were completed. Stage II The transcribed material is translated into an exhaustive series of evaluative assertions made about the attitude objects. The transformation of the material into an exhaustive set of evaluative assertions which have the same grammatical form without changing the meaning of the assertions in their original form involves a complicated procedure of grammatical manipulation. A common form is needed so that the assertions relating to each attitude object may be cumulated and compared. The form used in the Evaluative Assertion Analysis is the actor-action—complement form, with the actor, usually a noun, representing the attitude object, the action, a verb or verb phrase, representing the connector, and the complement, a noun, adverb, or ad- jective, representing the common meaning term (or another attitude object). The association of this construction with the assertions evaluated in the analysis is given by this definition: An assertion is a linguistic 44 construction in which an actor is associated with or dissociated from a complement via a verbal connector.16 The difficulty in transforming the material into assertions of this form lies in deciding which of the parts of speech in.a statement is essential to the evalua- tive meaning of an assertion, and when it is, how it will be translated into the common form. ngood's definition of a common meaning term helps guide the decision as to whether a phrase is essential or not: A common meaning element in a message is evaluative when it is clearly closer in meaning to one pole of the evaluative dimen- sion than the other.17 In other words, a common mean- ing term is evaluative when it can be classified as either "good" or "bad". For example, "following the 1 H ' r; as is clearly good, "stealing' is clearly had, and is neither good nor bad and would not be a common meaning term. g In deciding how a statement is to oe translated into the common form, the Evaluative Assertion Analysis has an outline of most of the grammatical constructions in the English language and the way they may be cast into the actor-action-complement form. I will cite only a few examples of these illustrations to show the general / 1bOsgood, Saporta, and Nunnally, Evaluative Assertion Analysis, p. 22. 17Ibid., p. 20. 45 procedure for translating into the common form. When the attitude object is a noun evaluated by a common meaning term of one or more adjectives: adj1 plus adjg plus AC (attitude object) = / AO / verb "to be" / adj1 and / A0 / verb "to be" / ad32 aggressive, strong AO :-/AC/is/a53ressive and : /AO/is/strong When the A0 is a noun in‘the possessive case and is evaluated by a noun: AO's noun : / AG / verb "to be" / noun AC's good qualities : /AO/has/good qualities When the complement is another AC: AC1 plus verb plus A02 : /AC1/V8Pb/AOQ X joins Z = /X/joins/Z Constructions involving coordinators (and, but, etc.) between two common meaning terms: AO plus verb plus cm1 (and, etc.) cmg : / AG / verb / cm1 and /A0 / verb / 0mg AC loves truth and justice _ /AO/loves/truth and : /AC/loves/justice When the material has been transformed into the exhaustive set of evaluative assertions, they are written into an Assertion chart containing six columns: (1) (2) (3) (30) (4) (40) S A01 connector cm or A02 Jd committed 158 violations Kr provokes fishermen Cq terrorized inhabitants Xi is an aggressor Column (1) is the source of the assertion, in this case the radio station at Hanoi. Stage III In this stage, weights and directions are assigned to the connectors and evaluative content in the assertion chart. This consists of making two Judgments: (l) the intensity of the association or dissociation of the connectors, and (2) the intensity of the favorable- unfavorable evaluation of the common meaning terms. The Connector If the connector associates the material in column (4) with the A0 in column (2), it will be given a + sign; if it dissociates the material from the A0, it will be given a - sign. For example, in the first assertion in the sample assertion chart, Jd / committed / 158 violations, the connector committed indicates that the A0 Jd is associated with the common meaning phrase 47 158 violations, so it would be given a + sign. If the statement were Jd / did not commit / 158 violations, the connector did not commit would be dissociative, and would be given a - sign. Other examples of associative connectors are: love, accept, has, is, are, and permits. The negative forms of these connectors, such as did not love, has not, did not accept, would be dissociative. Other dissociative connectors would be: denounce, be against, repudiate, hinders, and injures. The nega- tives of these would, of course, be associative. The connector will vary in the degree to which the material in column (4) is associated or dissociated with the AG in column (2). The rules governing inten— sity are as follows: 1. Strong intensity of connection: value, +3 or -3. Connectors which imply either complete iden- tification or complete separation of the AC from the common meaning material are classified as +3 or -3 respectively. The most direct example is the use of the verb "to be". To say that X / is / a drunkard completely identifies X with the drunkard class, and to say that X / is not / a drunkard completely separates X from the drunkard class. Most simple, unqualified verbs are at the strongest intensity level whether in present or past tense, e. 5., love, hate, be devoted to, denounced, confused, commits, committed, etc. Also adverbs such as entirely, absolutely, or forcefully are at the strongest intensity level. 2. Moderate intensity of connection: value, +2 or -2. Connectors which imply probable, partial, increasing, etc. association or separation are classi- fied as +2 or -2 respectively. An example would be the use of compound verbs like try to, plan to, and the like. To say the X / tried to / protect the nation implies a definite ten ency toward association of X with the nation, but not complete identification. Other forms: most verb constructions involving the use of auxiliary verbs implying change in status over time, such as evaded, has been seen; unqualified verbs, such as be like, favor; and adverb modifiers such as naturally, reasonably, and usually. eax intensity of cannection: value, +1 or -1. Connectors which imply only possible or hypo- thetical relation or separation are classified as +1 ‘ ‘- IQ‘F" “ rr‘ “1 " 0 .—~ I“ '3- ’ .4 -~ A -‘/ a q or «1 PeSldCULVCl‘. oxamjies. may he, might, presents, ,3 , ~ ,.. on -- ,, ,. 0-3.4. p, P --a ,. advero mxdiiie‘s such as slitntly, Cascally, possible, A judgment about the connector in each assertion is made and tie value, from +3 to -3, for each connector is written in column 30. If the connector neither associates nor dissociates the so and tie common rtjnjng _ 0‘ S. ‘ I . . '-. , -. / " material, the value is O. x / was flyinb to / the airport, for exanple, shows no association. ’a 1“ “Y a, r' r The Common heanin Evaluation he judgment as to the evaluation of the material in column (4) is made on a seven step scele of flood-bad or favorable-unfavorable. A + sign is given good evaluations, and a minus (-) sign is iven bad evalua- a.“ tions. The terms extremely, moderately, and slightly \ are given the weights 3, 2 and 1 respectively; if the term is considered extremely good, it will have --7 a the value +;; if it is extremely bad, it will have -v p. the value -,; slightly good, +2, and so on. The values 5 are then written in column (4c). If the term is neither good nor bad, it will have the value 0. Stage IV After the weights and directions are assigned to the connectors and common meaning terms in Stage III, they are transferred to an Evaluation Computation chart: AG cm evaluation AC evaluation Connector on Product connector AC eval. product ill (’9) (3) (1+) (5) it?) i?) (8) Jd +3 -2 -6 +3 Xu -2.4 “7.2 " +2 +1 +2 +2 CZ “101 “'20?- " +? -3 ~0 "'"'"“".I _ P! ._._.Q I -O 1! O. ‘L 2 ‘109 I {—4 ) l [U \ ‘r H H bl TI O H C ' 3 ..'l VF; -L.’ - The steps in this process are: 1. Use an Evaluation Computation Chart for each AG in the Assertion Chart. 2. For every assertion about an A0, put the A0 in column (1); in column (2), the connector value and direction from column (3c) of the Assertion Chart; in column (3), the common meaning value and direction from column (40) of the Assertion Chart; and in column (4), place the product obtained from multiplying column (2) by column (3), following alge- braic rules for multiplying the signs. J. Columns (5), (6), (7), and (8) are for those assertions where one attitude object, A01, is evaluated by an assertion which uses another attitude object, A02, as the evaluative meaning term. For example, the statement "Rc supports Cz" uses Cz as the A02 to make an evaluative assertion about RC. These columns are filled in step 5. 4. Add the values in column (2),_re5ardless of sign, to get the connector total. Add the values in column (4), noting sion, to get the product total. Divide the product total hr the connector total to get the sub-concept-value, which is entered near the bottom of column (1). This value has the sign of the product total. Repeat this step for each AC, until there is a sub-concept-value for each AC. 51 5. Isolate every assertion in the material that uses an A02 for an evaluator. In the computation chart for the A01 in the assertion insert the value and direction of the connector in column (5); in column (6), the A02; in column (7), the "sub-concept- value" from the Evaluation Computation Chart for that AC; multiply column (5) by column (7) to get the pro- duct, column (8). Add the values in column (5), regard- less of sign, to get the connector total. Add the values in column (8), noting sign, to get the product total. Divide the product total by the connector total to get the "second sub-concept-value", which has the sign of the product total. These two "sub-concept- values" should be in the same direction and of approxi- mately the same value. 6. Add the product total from column (4) to the product total from column (8), and divide by the absolute value of the sum of the connector total in columns (2) and (5). The product, which is given the sign of the grand product total, is the final evaluative score for the concept and is placed at the bottom of column (1). 7. Cn an evaluative scale of seven steps, from +3 to -3, place the concepts according to their final evaluative score. CHAPTER IV ANALYSIS AN EVALUATICN Attitude Scores of the Concepts Six of the concepts were discarded before the final attitude scores were computed because they did not meet an arbitrarily set minimum usage test: there had to be at least ten references to the concept or it was not considered worthwhile to compute its score. The concepts discarded were: capitalists, Dulles, Eisen- hower doctrine, French people, Formosa, and the United Nations. The remaining eighteen concepts, their frequency of occurrence, and their attitude scores were: China (59 +1.34 Cambodia (36) +2.58 Colonialists (51) -2.17 Democracy (13) +2.75 Diem (191) -?.39 Diem administration (21) -2.39 French government (131) -l.59 Geneva Agreement of 1954 (16) +2.46 Imperialists (157) -?.48 Japan (18) -C.96 Laos (17) +1.71 FA 3 Li} W‘- ( {‘3 \JJ South Vietnam ( r—l ‘ )1 U1 l‘) t’ ) &>uth Viet 9 €: se people ( ) ) ) South Vietnamese administration(fi 9) —2.47 ) ) 0) Soviet Russia (164 +2.46 United Nations Agencies (32 +1.33 Uzfi ted States (430 -2.3o Figure 6 illustrates the relative position of each of those concepts on the +3 to -5 scale. An interesting sidelight is the frequency with which secondary ACs are used as evaluative material about negative concepts as compared to the infrequent usage of such terms when assertions are made about posi— tive concepts. Of the four hundred eL ghty assertions made about the United States, one hundred seventy-three used other ACs as evaluative material; of the one hundred sixty-four assertions made about Soviet Russia, only seven used other ACs. This evidently does not affect the final attitude scores for the two concepts, however, because the "sub-concept-value" (obtained from assertions using common meaning material instead of ACs) for the two concepts is almost the same as the final attitude score: for the United States, -2.37 as compared to the final attitude score of -2. 6; for Soviet Russia, +2.50 as compared to the final attitude score of +2.46. U1 ———Iemocracy (+2.75) Cambodia (+ Soviet Russia (+2.46) +2 South Vietnam People (+2.4 988 Q U \ Geneva Agreement (+2.46) China (+1.84) Laos (+1.71) -—-”"”flflflflfi—_ -——-U. N. Agencies (+1.33) +1 0. _1 Japan (-O.96) C) I South Vietnam (-1.3 )‘—“\\_§r “" French government — Colonialists (-2.17) United States (-2.36) Diem administration (-2.723// S. Vietnamese adm. (-247) _ F1” Q. 6r,,+J mperialists (-2.48) 6--Relative Score Scale Comparison with Predicted In Chapter III, 55 six groups of concepts were grouped into "similar groups", and it was predicted that these concepts would have scores. in semantic space, ing. concepts are as follows: 1. W o 4 Diem administration Diem Imperialists Capitalists French government Colonialists Soviet Russia China Cambodia Laos In the first three groups, similar final attitude If these concepts were located near each other they would show a similarity in mean- The groups and the final attitude scores of the l R) O \ )1 ON I I0 0 \‘J \\(> No score 4.59 -2.17 +2.46 +1.84 +2.58 +1.71 the concepts had very similar attitude scores, with the exception of the concept "capitalists", which was not used frequently enough to be given a significant score. Group four showed less association, group five even less, and group six least of all. The direction is the same in each group, but the intensity of the attitude differs, especially when the concepts are nations instead of individuals or poli- cies. From this brief attempt at prediction, it appears that similarity of attitude scores toward the concepts of nations is not as predictable as attitude scores toward individuals or policies. Reliability_Checks Two reliability checks were made, one as a check on Stage III of the Evaluative Assertion Analysis, the other as a specific check on the evaluative meaning of O 9 some "common meaning' terms. Check Number 1 In the check of Stage III, nineteen undergraduate students were asked to make judgments of the direction and intensity of thirty assertions, each consisting of an AC, a connector, and the common meaning material. These assertions were taken from the masked material that was being analyzed in this study, and were mimeo— graphed in the following form: (2) (3) (3c) (4) (4c) AC Cnpnector common meaning 1. Jd committed 158 violations 2. Qv stands for peace The students were asked to put the direCtionfi (+ or -) and intensity~(0 to 3) of the association or dissociation of the connector in column (3c), and to put the direction and intensity of the evaluativeness of the common meaning material in column (4c). Two pages of instructions were given as to how this was to be done, and an oral explanation of fifteen minutes was given to clarify the procedure and to answer questions. Each student worked independently under the supervision of an instructor in a classroom situation. The average time required to make the thirty judgments was approxi- mately forty minutes. The data collected consisted of 1,140 judgments (570 about connectors and 570 about common meaning materials) with a value of between +3 and -3. Each student's score for each judgment was recorded on a card along with the following information: name, major field of study, class, marital status, sex, age, grade point average, and the grade made on three terms of freshman English. These students were in no sense a random sample, nor were they selected with any specific criteria in mind. They were used simply for the sake of expediency, since they were members of a class in introductory research methodolo;y taught by an instructor who was ' Y . .. .‘. a A A . ,. .iis',‘ willing to 'donate" their time for one claSs period. m to The following tahle shows the range and diversity of the respondents with respect to the above-mentioned cateMories V ’ D {ajor lield of study: J. 13 0105108.]. 3018-106 0 o o o o 1 O C O O O O O O —b Forestry . . U} c l' 5 2;.) H‘ H. r:' C, O O 0 —§ \3 Felice Admini 1 Social Work . . . . . . . . D Idtrital status: ”infle . . . . . . . . . . . 9 :IwIWTlrfil o o . o . . . . . . 13 *3. w {6 .—J t . . . . . . . . . . . \fl C ) H do CD 1"!) fl ‘ PI". ' fi if 11%;: blarien o a o o o o o o o o 2 '7‘~\.'\' ,-‘ '23:: .341 .LU' LUrb u o o o o o o o o o 7 JLULiG 1‘s 0 o o o o o o o o o 5 Huwi‘“r 2 Dufi “I .3 o o 0 o o o o 0 0 O ,’ r1 . _'- ,i ,. '-.TI‘<9.'.lLUa. tot} S o o o o o o o o o 1 Arte : V4 ~r~s _ , f) w .7 1:11.10“! C. .L o o o o o o o o o o .3 Grade Point Average: 1.53 to 1.99 . . . . . . . . . ?.00 to 2.49 . . . . . . . . . 2.30 to 2.9” . . . . . . . . . 3.th to 5.99 . . . . . . . . . Grade iii.freslmun1 E1Qlish; A B First term . . . . . . 2 ” Second term . . . . . . 1 4 ’Thirfl terw1 . . . . . . C) 3 The first step in the analysis was distribution chart of responses to the two m J. ‘ [\j (1 41 ‘KD thirty questions. device. ;t3-S 'S L very lefiiite rrnu«s on little doubt that there and little, if any, required. As tables 1 tributions showed high variability, there was a high degree tsted a statistical analysis to determine the ficance of the disasree The selection 0 to analyze these data presented som For instance, a t test not be used because the and 2 s intended to be W“ .M ‘ ply DJ -PCJ \J L“ h \I C hone O 3 1 13 1 1? to make a SctS of Wrinarily a L the responses clustered into each Question, was hirn 1 now, S O 1 - - ' ~ ‘ “ ‘ fl Oi Glha;rebmcht. n -.- a statistical too (D there could ho wever , m J. l difficu be degree of agreement statistical analysis would be the dis— indicating that 1'_ .. .l in NBC 1" CO 8 S S igni- with which 1 A. t fpoblems. could of difference between means sampling distribution for various 3 ‘1 TM CCKNECTCR DISTRIBUTION Range of Answers: Questions 2.24! 012/0 230/046 99.4.8 IJCJPDA. 21,22 167.5(54n08 O... 1 1 235960576022754665/0466r04659574 2.4.43012352552252214266665658.3214 001.00531121110.4222120011000220 bl 0505201224112321OR¢23221«104.129.11 25021000130020122110¢3300~301000 Oiooud.0000210110000010100000001 1 2 54 56 7.8 90 1. 2 .54 56 7.8 90 2 1 3.4 56 7.00 90 29242222223 TABLE G TERM DISTRIBUTICN v LV COMCC N -433 Al? I +2 +3 +1 Questions 4001033442432120485124 98644587. 121.4 51021932 2.4 2.32 1232 5.4 QQSBSPDBB/OB 02 ~33312 1.4 7J1 27)?“ r3120r371f+ nd 2.4 113,722), 064.49._1.423321103453133611112112 165222112231421114320011245212 2Z/652345222J25540120013000273001 1OOOrDOO531%.4/02/OACO10020012111101 2 3.4 56 7.8 90 1 9.514. 56 78 90123.4 22/0 7.00 90 1 1 1nd2nd9_ncndn/_2ncnd_3 1111111 1| 62 items would be skewed whenever the mean was not exactly at the midpoint of the +3 to -3 scale. The probability of every possible answer on the scale being used is exactly the same. here can therefore be no valid reason to assume that scores at the midpoint will be more frequent than at either end. A normal distribu- tion is thus impossible, and tests depending upon a normal distribution must be ruled out. The same prob- lem is encountered in many studies where there is a limit to the scale, such as an eleven-point scale ranging from "strongly agree" to "strongly disagree", but in this case the scale is usually corstructed so as to have a normal distribution and answers on either extreme are to be expected only rarely. The Pearson product-moment correlation test was not applicable to the data since a correlation coefficient becomes meaningless when there is a pro- file covariation but discrepancies between the means of the profiles. This is illustrated in Figure 7 where the profile of a, while being completely different in score and mean, covaries perfectly with profile 2. A correlation test would indicate that A and 3 agree completely when their actual scores show that they 1 , I) C disagree very strongly. The "D", or generalized distance test discussed 1COsgood, et al, Measurement of Veanins, p. 90. 01 CO I?» 63 IUJ —2 -1 0 +1 Fig. 7--Profile Covariation +| m 64 by ngood measures the linear distance between points in the semantic space, and is useful in indexing the similarity between concepts Judged by an individual or a group as well as in comparing the perception of a concept by two or more individuals or groups. The "D" could not be used, however, because of the difficulty of determining when one "D" is significantly larger than another. ‘he sampliig distribution of "D" is not known, and is probably not normal, so normal curve statistics cannot be used. We cannot be sure that the "D"s obtained are significantly different. In compar- ing group responses, several non—parametric tests such as Wilcoxon's matched pairs signed test of the Mann— Nhitney "U" tes may be used, but there are none that may be used in comparing individual responses.19 The test finally selected was an analysis of variance test for a two-way classification. This analysis of difference between means is a separation of the variance of all observations into parts, each part measuring variability attributable to some specific source, e. 3., to internal variation of each population, to variations from one population to another, etc. In this test, I am interested in measurinv the t‘. V variability in judgments attributable to the source, ‘9Ib1d., pp. 101-3- the connectors being considered one population, the _gm§ one population, and the coders another population. I will have two analyses: one of ctmmon meaning terms with coders and one of connectors with coders. In each analysis, I will be testing the validity of three hypotheses: 1. The variance attributable to the common meaning terms or to the connectors (column effects) is not significantly larger than error variance. he test of this hypothesis is made independent of the coder variance. 2. The variance attributable to the coders (row effects) is not significantly larger than error variance. The test of this hypothesis is made inde- pendent of the column effects. 3. The variance attributable to the common meaning terns is not significantly different (larger or smaller) than the variance attributable to the coders. f ngood's assumption that "reasonably sophis- H (4' 1..» O 93 d. (T Q: C sers of English can agree on the direction an: degree of evaluativenoss of common meaning terms" is true, al- variance should be mainly due to concept variance and a small amount due to error variance. If the coders agree as to the meaning of the concepts, there should be high inter-coder correlation and little, (h C“ \ if a y, coder variance. Tables 3 and A show the final computations in the analysis of variance. For hypothesis (1), E is the ratio of the mean square for column means (connector or common weaning term variance) to the residual mean square; for hypo- thesis (2), E is the ratio of the mean square for row means (coder variance) to the residual mean square. At the .05 level of significance, the critical region for hypothesis (1) is 3 larger than F 95(29, 511), which is equal to 1.47; for hypothesis (2), 3 larger than F 95(19, 511) which is equal to 1.59. Referring to Table 3 (for connectors), we see that hypothesis (1) must be rejected, since 5 : #.51. Hypothesis (2) must also be rejected, since 3 : 7.72. Referring to Table 4 ( for common meaning terms), we see that hypothesis (1) must be rejected, since 2:4.98. Hypothesis (2) must also be rejected, since 3:2.65. For hypothesis (3), we see that in both tables the ratio of concept variance to coder variance is 1"} not larger than the critical £ (1.95), so we must accept the hypothesis that there is no significant difference between concept variance and coder variance. By means of the 3 statistic, we are testing whether concept variance or coder variance is larger than error variance and if there is a significant C(wVAY‘V-r-a r" 11.;JJKJ TOR VARIA \? Arc-1 1N U41; Sum of "I -‘.\ Sraares _L 1'1/f at ,_ £188.11 A \“ II‘.‘ I 1 f‘, L) _a q 4_ l_ ‘. I" '3 F Ratio Concept means (Columns) |\) 0\ v1 1‘0 29 Coder means (Rows Remainder (Error) 551 Total 599 TABLE 4 iHI”N MEANING VARIIICE Sum of Kean Squares d/f Square F Ratio Probability oncont C .1“ r- ., “01.70, Q ._ .. means 030,) 29 21,14 p:-7_-—:a.9o 2 :1.47 r-r-x ..-- . "If (Columns) ( pj.ot %.:Q (29,551) (”\er- JlJ v -_ “1 A. A A a, 1:. "5 fr" '1" rpm neans aOo.o 1y 11.33 F:---‘- .o; r 21.35 f‘ '7. ’ 7? r-r- (Rows) ( a3.ay *oJO (13,331) 1" _' I. . “GHQLUJEP quq‘ C m, w /' r- I 7C {\r‘ m‘ 35‘ p q (LI‘I‘OI‘) ('40,).0 351 4.1)) F: 1.3!? VIN 1 c; t’L’1 .121.€7 ( 11 (1-.. ,) Con:-:~r-11.-,:6 J. an, a Lu 'uiii- \)l ‘1} KL) '5 CD difference. If concept variance is larger than error variance, it is not chance error but is ascribable to characteristics of the concept. The same is true for coder variance. Since the E test rejects both hypo- theses in the two tables, it is clear that coder variance is-significantly different from error variance for both common meaning terms and connectors. Since the percen- tage of total variance attributable to coder variance, or internal variation in the coder population, is so great in both tables, it must be concluded that there is a low degree of agreement among the coders, since the variance is the difference between the means of the coders. In View of this significant variance due to coders, ngood's assumption (III) cannot be true since, as stated before, all variance should be due to connec- tor or common meaning variance. It is interesting to note that there is more disagreement among coders on connector: than on common meaning terms. This indicates that those terms that one would expect to have thy most common meaning actually have less conron meaning than value- laden terms which have no grammatical demands of continuity of meaning. The high degree of disagreewert between coders, as shown in the analysis of variance, compared to ngood's low degree of disa;reement sugjests that there must he some variables which influence these judgments. ngood, using a t test, found a very high degree of agreement among the coders used in the development o? ;he Evalua- tive Assertion Analysis. This strong relationship was not borne out in my stud‘. I suggest three variables which may eyplain a part, but not all, of the difference between ry test results and tsgood's: 1. Trainin". ngood's coders were well trained in the technique of making the required judgments, and had made many judgments of this nature prior to C‘vood's V U‘ test for agreement; my subjects had only a limited introduction to these techniques. 2. Association. ngood's coders had worked together for an extended period of time; n were not always even acquainted with each other, since they had been classmates for only four weehs. when a small rnmumncrnf people vnnfi< together clxmnilr for a ‘ d period of time, they may tend to make the same judg- ments. This may have happened with ngood's coders. 3. Psychological. Some people, when operating independently and not as a member of a trained group, will seldom make an extreme judgment; others will seldom make a conservative judgment. This example, plus other similar psychological factors, could explain a lack of agreement, especially as the group size increases and more personality typeS'are used as coders. If factors of general intelli3ence had anything to do with performance as a coder, we would expect persons wi oh similar nualifications to perform similarly on the test. For instance, verbal ability as refe ed in En3- lish lang ua3e trainin3 mi3ht af: fect a coder' rfo ro anze. yd In order to see if any of the coders clustered t03ether on the basis of their correlations and then to t1 r had si1iilar traits if coders who clustered to 3e 1 (I) CD 86‘ (wi' hi n the bounds of tie personal information I had about the coders), I made a chiitir hlfuentary Linlia e Analysis of the two correlation matrices. This analysis is a method of clusterin; people, or items, whic1 have distinctive cluster- on'1ucteriotics ,3?) in M? er to find any possible typal structure that may exist. A typal structure is one in which eVery member of a type is more like some other we mher of that type (with respect to the data analyzed) than he is like any member of any other type. In terms of correlation coefficients, every oerson in a tyne would have a hi 3her correlation with some other person in the type than he would with anyone not in the type. 13asical this Jr ! method is similar to Thurstonian factor analysis except that factor enalfsis is les i ned to isolate simple structure, wheres s 111 ka3e an.alysis is desi3ned to isolate typal structure. Linkage analysis has the 71 advantages of being simple, objective, rapid, and appro- priate for matrices of all re sontble sizes. After a m .n type has been found, it is possible to define a prototype, which is some composite of the characteristics possessed 20 by the members of the type. In the common meaning term matrix, I found the following two clusters: 9 15 ‘2 {16 i 12 17" i 18 , L '\N3 =>6 ~74 1-—+131E:;19 / 2m 11 14 7 f T 8 10 Type I Type II Fig. 8--Common Meaning Clusters In the connector matrix I found four clusters: 5 2 ‘12 1“—_-;.1J+‘_-;,--6’K \15—43219-9—17 Type I Type II 16\* 13\$ 7—91124 9—->181—_,.—1o<——8 Type III Type IV Fig. 9--Connector clusters 2OLIc-‘éauitty, Louis L. "Elementary Linkage Analysis for Isolating orthogonal and Oblique Types and Typal Rele- vancies", Educational and Psychological Keasurement, Vol. 17, No. 2, summer, 1957, pp. 207—213. -q [‘0 In analyzing these results, the first thing I looked for was similarity of clusters from one matrix to the other. If a cluster or clusters were repeated to any extent in both matrices this would not be a chance pheno- menon, since the errors would be systematic and related to the types of coders. However, there was no similarity of clusters because there was only one small cluster inde- pendent of the group cluster in the common meaning term matrix. The only repetition was with coders thirteen and eighteen in the Type II cluster who also appeared in the Type IV connector cluster. The next thing I looked for was a variable that was common for all coders in any cluster. I had seven variables in my data: major field of study, marital status, sex, class, age, grade point average, and grade in freshman English courses. In no cluster, however, were any of these variables or a combination of them present to a degree that could be called a causal relationship. There were absolutely no repetitions of identical variables in all coders of a cluster. For example, in the Type II cluster in the common meaning term matrix, the coders had the following characteristics: Coder: 1 13 18 19 Major Field: Bio. Science Police Police Police. Adm. Adm. Adm. Marital Status: Single Farrie' Lu (J) [—1. :1" LjJ (I) IV i F "S *‘5 H‘ (‘D Q.» Sex: Female Kale Class: Junior Senior Age: 20 40 Grade Point Average: 2.30 3.00 English Grade: Jaived Yale Val Junior Boy 23 23 1.5 Kai e homore ved In the Tyge I cluster in the connector matrix, the coders had the following characteristics: Coder: 1 2 Kajor Field: Eio. Police Science Adm. I-Iarital Status: Sinrle Lanried Sex: Female Yale Class: Junior Soph. Age: 20 23 Grade Point Average: 2.30 1.79 English Grade: C, B, T Waived do not explain the clusters, nor jud Hunts rose by the coders. T of the variables tested here go either twees. This does not mea - a L -, - _ ,_ - ..‘. - l. &“e LUU individual \sfcaol: iCal ‘1 ,... ---’,~ at. . ~J -. l‘ ,L -\ '- q one Luxnio Ul. Juo;}muuu8 l vie ill -’- 'v w “:1 a! 4‘1, ‘ "Nr': «- ¢ u- :1 Analysis. rtcther study using u r L) Police Adm. harried Iietlxé I31 0, C, C see vari do the; irfluenc here ar- types, b into th prototyp n, houever, th.t . 0 fl... “‘fi iiierwurt Veffixtola 7“” A' {-1 wit?“ :3 A *‘n c the ut none A I by K j s we do exist. Some intrrestiog V idfflflhjs to study would be eading comprehensiGN, intelligence, anti lorit:>m r19 ni gm, and :flVH‘ ”i tical shill. V, ~ A u;i‘rfU L1.r11'..‘)e’° :.’.' L ‘ ‘x -“ ‘- ‘ ‘r ‘ ‘ " ’. l c 1‘ “. ’- 'I "" t‘ 7" " '1 ‘ ‘L " ~‘ J'- Tm: bcCHnnl (3;i*;i‘c.“.. r! 1.8 1.1m? 0th WI“ -‘1.-J L 1 (1f a. portals (1’: L 1, feeling of doubt, as the analysis was beinv carried out, K.) 1 '1_ 'l-‘o_, _ _o _. “. A__""' o 1". LHBL LLB ulfiectldfl and d1-: ‘r'ee of HVlEUuLlVeUbe JQH;- ments bei n; 2'1'-.;;_e were no t the types of judb;~ :ents that would be made ry all users of Ergl sh, but were judgments that were peculiar to the researcher. I felt that not only would these judgments Vary between individuals with similar cultural bw,.FlHu11 but t! at the variation would lmr ease to a considerable extent if coders from a diffemrert cult re made the judgwents. This is, in feet, a hypothesis that a large portion, if not all, 4- r- ...... LJILe CU'ILILUI to C. :5 E3 (T) :0 ‘1 ,4 }_. '3 3 terms were not connon at all, but reliant upon the same variables that attitude ob ects relied norm for their meaning, 1. e., part, exhaerience, education, and social attitudes. This ssu ption of COMR'Q n meaning terms is fundamental to the Validity of the Evaluative Assertion Analysis, so in the interest of clearing away doubt as to the validity of this assum- E; p. ption, a test was de vi to determine if common mcn nin: terns did have "comrmn" meanir‘lg. Tris test was in tse form of a Semantic Differential, consisting of sixteen scales on which fifteen of the common meaning terms most frequently used in the material under analysis were placed as the concepts to be tested. he scales of the Semantic Differential were selected to include five sets of polar adjectives with the highest factor loadings in each of the three dimen- sions, evaluative, potency, and activity. These sets of adjectives were selected from a number of such sets factor analyzed by Osgood, and were the five sets which were consistently as relevant as possible to the wide range of terms being tested. hese scales, and the dimension to which they belong, are listed in Figure 10. In the evaluation and interpretation of the answers to this test, only the scales on the evaluative dimension were considered, as this is the major dimension used in making attitude judgments. The common meaning terms placed on the Semantic Differential did not meet any rigid criterion of fre- quency, but were selected at random from the material with only the stipulation that they must have been used at least twenty-five times. The terms selected were: aggressors, clique, coexistence, crime, fascist acts, henchmen, invaders, monOpolists, peace, plots, propa- ganda machine, repression, sabotage, scheme, and war. This Differential was then given to twenty—four .oondents. Eleven of the respondents were middle-aged ietnamese students doing advance work in social science, + DJ ol {J REFUTABLE 7E5IYUuIIf; __ 1.1. (7'; § Dimension: V'.“ T'N TT hELrFUL CCNARDLY XII» T UNSUCC 1:430 FLTL FQCLISH STRQKG UNDESIRAELE CAUTICUS IIBFF I CIE NT rCCD UI‘IFAIR 7-“ 1.7m IV”? D;£‘.L14L\io 1.0 U1: K I I? D FRCCAESSIVE 10--Test Scales and Dimensions (III) ( I) ( II) ( II) (III) ( II) ( II) (III) (III) ( II) (III) 77 and the remaining thirteen were American graduate students in Political Science. he object of this procedure was to determine whether other persons would rate the concepts in the same way I did. If this were the case, there should be no statistical difference between average ratings of others and my own ratings. Therefore, I ad- ministered the Semantic Differential to myself and com- pared my scores with the average of the twenty-four respondents. Mv hypothesis was: The twenty-four respondeits who took this Semantic Differential will not agree with me on the meaning of the fifteen concepts. In this case, "direction and degree of evaluativeness" in the ' in the original assumption is synonmous with "meaning' hypothesis because meanilg 3 expressed in direction and intensity (1. e., degree of evaluativeness) on the Semantic Differential. To test the hypothesis, I applied the t test to see if my score differed significantly from the mean of all the scores. In order to use this statistic, the mean of all respondents (including mrself) was considered the sample mean, and my score was considered the theo- s thertel test of‘tflra SD retical mean score. The t test w modified 3 hypothesis: Sample mean is equal to a Specified constant (my score). In order to facil‘tate comnutations, the five scale SCCT‘G evaluative SO reoio V than scores far each concept in the evalua th scores in for each concept population meal The level of s1” V ere were n would then 10711 far 8, t.",-_TQ-taile d in each :fl C'ctln‘liu; twenty—four deyrees of 3 smaller than test. v“ yr . "d H lI‘E—Z‘e cf- H. < C\ evaluative selected was -10711 an“; T""3' ‘- L.'k’t_1's‘1. .10. "Y , '1‘. .Lllt} aggressors '1‘qu Ciigie fl 1.... .. - ‘ux’GXiP-I LIUILCC Invaders chopolists Peace ‘ "‘\ - W‘- Freyagnnoa Seicn f1 3‘ _- 501"). we “a? 1 5'0. 0"‘4 73 Jo; ('I‘ “3 ~Y—— Because the judgments made in the evaluative Assertion Analysis are attitude judgments, only the results in the evaluative dimension are significant for the acceptance or rejection of my hypothesis. The modi- fied t hypothesis was rejected for twelve of the fifteen concepts in this dimension. This is 80.0 per cent of the concepts, a sufficiently large percentage of rejec- tions to justify the accep ance of my hypothesis that the twenty-four respondents did not agree with me in the meaning of the concepts. Although this is a small sample, the hifih percentage of disagreehent shows that the assumption is not true when precise agreement is required. Evaluation f the The results of the reliability tests discussed above indicate a serious shortcoming in the Evaluative Assertion Analysis method. If different individuals will make different judgments about the assertions in the material being analyzed, the final results will not depend upon the method itself but upon each individual's application of the method. This shortcoming in the method led me to examine the method with reference to standard criteria for systematic research tools. The standards used are those given by ngood rs aopropriate. 21Osgood, Measurement of Fnaninr, p. 11. 1. Validitv: the tool should actually measure what it purports to measure. The Evaluative Assertion Analysis purports to measure communicators' attitudes; since it actually depends upon the existence of common meaning terms for its validity it cannot be valid if there are none. Ky tests make me doubt the existence of common meaning terms, so the validity of the Evalu- ative Assertion Analysis is doubtful. 2. Reliability: the tool should yield the same results (within an acceptable margin of error) when different researchers apply it to the same material. The two reliability checks discussed above show that the reliability of the Evaluative Assertion Analysis is very low, since two of the assumptions fundamental to it are not valid when tested by empirical methods. This could be corrected by having the values of the connec- tors and common meaning terms arbitrarily assigned by the person directing the research, and the coders given instructions to assign these values every tima a par- ticular connector or common meaning term is used. There would be high reliability among the coders of a parti— cular study, but the problem would still exist when a different research director set up the values for the connectors and common meaning terms: there would be low reliability between research directors. Since the values of the cemmon meaning terms in particular are assumed to be those of the communicator, only he would be qualified to assign values to them. Any attempt by a person other than the communicator would only be an educated guess. 3. ijectivity: the data obtained by the use of the tool should be verifiable, reproducible, and independent of the researcher's attitudes, opinions, and bias. The lack of objectivity in the Evaluative ssertion Analysis is most evident in the masking pro- cedure. If a person is famaliar with the general sub- ject of the material being analyzed, the masking will revent him from knowing the concept represented ‘- ‘ not by the nonsense syllable. In the present study, the material under analysis was Horth Vietnamese propaganda; I was famaliar with the general political situation in Vietnam, so it was not difficult to identify the non- sense syllables after analyzing ten or twelve pages of the material. In fact, it was difficult not to identify them. The reliability also affects the objectivity of the tool. If the tool is unreliable, then it is diff - cult to obtain reproducible data and data independent of the researcher's personal idiosyncrasies. 4. Comparability: the data obtained from the use of the tool should lend itself to comparisons with ,3, . , 1.. . .. .3- a» ”a » A.“ .. . 2.3 udt'l 15071 0131151“ “asearbil tum-.3 5.111% t1-) CU...paPibQflE-Jv '.~.I. c!- p...‘ y.) data obtained in other studies usin the tool. In this In —-\ 1,» a ‘I —. __r~—\- -.Q z-‘n .D j x . ‘2 . -- — - - J-j- 7.; “espect, the nuderical eipression 0; tie uatd ffufl ULL Evaluative Assertion Analysis makes comparahil ty a simole matter when the results of Evaluative Assertion Analysis studies are ff“;31( , or when the results of other research tools are expressed in substantially equivalent numerical values. There are many re searcn to«ls which do not utilize numerical earpre ssicn of data, so the comparability of the Evaluative Assertion Analysis is limited in this respect. Quantitative expression of data does not insure comparability; units of measurement may differ from one research tool to another, and compare aoilit y still be mainmtl 1ed.Gor- relation anal” is, for emairle, does not require the same units. Quantitative expression is not really iwn(r ta.nt in evaluating the comparability of a tool. U] . Utilitvz the collection of data should not involve sue: a laborious pro0ess as to make it inefficient, i. e., the information ;ained should be ., worth the effort expended to gather it. Ti Evaluative 1 *3 (i) \ assertion Analysis is a cumbersime procedure which .2 J. n- volves a large amount of time to perform when compared to similar analysis techniques (a thematic content analysis, for example), an<3 the additional infor:°t1on gained (the quantitative evorcsslon of dati) is not worth the extra effort. ( _)\ . :n it in '2 normal distinctions in meaninr V made in communication should be adequately reflected 1r ma, the data obtained from the tool. The Evalua ive Assertion Analysis, with its emphasis on detecting differences in k—O A *2: H (+- f[, ’1 |._u C) :— H D‘ H: {.0 (2‘ d‘ u 8 meaning adequately satisfies tn. the Evaluative Assertion Analysis reflects these distinc- 'tions to a.finer deQree then usually made in normal communications. People generally cannot make the fine distinctions required, so the Evaluative Assertion Anal— ysis measures a finer distinction between coders than actually exists. The Evaluative Assertion Analysis satisfies only one of these six criteria. It is my conclusion that the method is not worth using because of its inadecuacy in satisfying important criteria for evaluating systematic research methods. 31212133123: Uo'Hs and F‘fiphlets Ettbi", Ifillifiwn. Ei‘fiernfi PLfiYIic C 1i‘iffli. ITWV Yur“1t KcGraw-Hill Book Co., Inc., 1955. Berelson, Bernhird Content An'1 :i3 in Conmnrioations Research. Glencoe: Free Ir e33, 1951. Bush, Chilton R. and Carter, Roy E. Jr. 3;;3r1nents in Pre-Tosting Printed 131er1313' g Rencrt to the U. 35. Ill P'P'”VHLH:J 1-1”€ M"’. PEJJ) Altfi. ° Ills tltfllte V for Journelis1ic Studies, St3nford University, 1394. Dau;nerty, H. 2. and Janow t7,01ris. é P3303010"ieal Werfare Casetook. Ba ltiznor e: Johns Honkirs Iress, 193;). Division of Communications, Illinois A330013+1orel Code for University of Illinois. he Content Analysis. Urbuna: niversity of Illinois, 1955. Dixon, W. J. and Massey, Frank J. Jr. Introda tion to Stetistioal An913913.3ew York : “91P0W-n11] Book Co., Inc” 1957. George, Alexander L. The I;_m11 "enee V3lue of Content finelysis. Santa Run on: TEe RAND Co1o., _l9= . Note on Content Analysis of Soviflt Tess Communication". Santa Monica: The RAED Coro., 1931 goode, 'Jilliam J. a.nfi Hett, Paul K. Vetnods in Social Research.1?ew York: ICIPZLW-Hill Bocli Co., Inc., 1952. Humata, Hireya. Attitude Change Through 7333 COW'Wll- cations. Urbsns: Institute of Communicat ons Reseaich, University of Illinois, 1954. . Nedie of Communios1ion a tne Free World 33 Seen by Czeohos1ovak, Ulflkdrfi a 1d volish Refu36-e,. flew York: International Public o1;inion ' 1: l [—T 116:;6ct‘t.0~.1, I110., 19),). liillmir. Fo1r3 Urbans: _ - 7‘1 9 -\ , and nonramm, Propagania TheorV. cations Research, “Orr 1L3 Institite of Cuernl- University of Illinois, 1‘; '4 "61"8 0'11 1:755! Lado, R. Lingnistics Across Cultures. Ann Arbor: The University of Lichigsn Press, 1937. Lasswe11,Harold D., Lerner D., and FJOl, I. T1e Cum— 1scs+ive Stndv of S‘nools. Series C, Sysxcols, g1, Eoover Institute Studies. Stanford: Stanfori University Press, 1952. LazarSfPld, P3 and Stanton, F. CO 33‘» Izmir; git-11.0118 R9393r0h 49-13 Few York: Harper, 1943. Lieberman, J. Ben and floodcock, Robert L. The Conrnni— cation An1rosch to Trchnicsl Assistence. Stanford: Stunikaxi Resecl‘c _Instiftmx3,11 1956. Niefield, S. J. Kev Words in American and Free World Progggsnds. neshin ton: fiuresu of Social bcience Research, Inc., The American University, 1952. ngood, C. 3., Saporta, 301, an nd Nunnally, Jum. Evs1us- tive Assertion inslvsis. U1“D.mana: Institute of Communications Research, University of Illinois, 1954. CSCOOd. C. 3., Suci, George J., and Tannenbaum, Percy H, The F