A FACTOR ANALYTIC STUDY OF THE EVALUATION OF COLLEGE DEBATES

Thesis for the Degree of M.A.
MICHIGAN STATE UNIVERSITY
Thomas Martin Steinfatt
1966

A FACTOR ANALYTIC STUDY OF THE EVALUATION OF COLLEGE DEBATES

By Thomas Martin Steinfatt

A THESIS
Submitted to Michigan State University in partial fulfillment of the requirements for the degree of MASTER OF ARTS
Department of Speech
1966

Copyright by THOMAS MARTIN STEINFATT, 1966

ACKNOWLEDGMENTS

As for my indebtedness to the persons who have assisted in the preparation of this thesis, I would first gratefully acknowledge Dr. Murray A. Hewgill for his most generous help with all stages of the research and for serving as the major professor. To Dr. Gordon L. Thomas and Dr. Jerry M. Anderson, the other members of my committee, I express my sincere appreciation for reviewing the manuscript and offering helpful suggestions. Special acknowledgment is also given to Dr. Anderson for providing the opportunity to conduct parts of this study in an experimental debate tournament which he conceived and for which he obtained a Michigan State University All-University Research Grant. Dr. Anderson also granted permission to use the 1965 Michigan Intercollegiate Speech League General Debate Tournament for this study.

To Dr. Kenneth G. Hance, who also reviewed the manuscript and offered helpful suggestions, I express sincere thanks.

Acknowledgment is extended to Michigan State University for financial support of the experimental debate tournament and for encouraging research in debate.

TABLE OF CONTENTS

INTRODUCTION

Chapter
I. PURPOSE OF THE PRESENT STUDY AND REVIEW OF THE LITERATURE
   Purpose and Merit
   Review of the Literature
   Distinctiveness of the Present Study
   Summary
II. METHOD
   The Questionnaire
   The Experimental Ballot
   The Experimental Tournaments
   Analysis of the Data
   Summary
III. RESULTS
   Results of Control Analyses
   Overall Analysis
   Preliminary Examination
   Analysis of Affirmative Speeches
   Analysis of Negative Speeches
   Analysis of Constructive Speeches
   Analysis of Rebuttal Speeches
   Analysis of First Affirmative Constructive Speeches
   Analysis of First Negative Constructive Speeches
   Analysis of Second Affirmative Constructive Speeches
   Analysis of Second Negative Constructive Speeches
   Analysis of First Negative Rebuttal Speeches
   Analysis of First Affirmative Rebuttal Speeches
   Analysis of Second Negative Rebuttal Speeches
   Analysis of Second Affirmative Rebuttal Speeches
   Summary
IV. DISCUSSION OF THE RESULTS AND CONCLUSIONS OF THE STUDY
   Discussion of the Results
   Suggestions for the Construction of Future Debate Ballots
   Suggestions for Further Research
   Summary

BIBLIOGRAPHY

APPENDIX
LIST OF APPENDICES

Appendix
A. QUESTIONNAIRE AND PERSONS RETURNING IT
B. TWO TYPES OF FORMS OF EXPERIMENTAL BALLOT
C. TWO THROUGH SEVEN FACTOR ROTATIONS OF OVERALL ANALYSIS
D. PROPOSED BALLOT

INTRODUCTION

Almost since the beginning of educational debate there have been controversies over how debates should be judged.1 One of these controversies concerns the relationship between the decision and the debate ballot. On one hand, the judge may listen to the debate as it progresses, perhaps taking notes or possibly just sitting and listening to the arguments. After it is over, he may make his decision and fill out the ballot on the basis of the decision made. On the other hand, the judge may listen to the debate, again with the option of taking notes or of simply listening; but in this case he may fill out the ballot first, in whatever form it may be, and then make the decision on the basis of the number of points each side has amassed on the ballot.

One argument in favor of the first method is that the judge's job is to judge the totality of the interaction which comprises a debate and that he must do this before he can know how effective the participants were in the individual aspects of their debating. Directors of debate tournaments are occasionally confronted with this very argument as a judge explains why he gave the losing team more points than the winners. The judge will state that an item such as Analysis or Organization is much more important than the other items which he was asked to rate. Thus he feels quite justified in giving the losing team more points on the "unimportant" items.

1 Hugh Neal Wells, "Judging Debates," Quarterly Journal of Speech, III (October, 1917), 337; and J. M. O'Neill, "The Juryman's Vote in Debate," Quarterly Journal of Speech, III (October, 1917), 351.

A point in favor of the second method is that there is much less chance for an error in judgment to occur if the judge must make several lower-level decisions rather than a single overall decision as to what has occurred in the debate.

It is the intent of the present study to investigate the nature of judges' evaluations of college debaters on certain lower-level items. This investigation uses factor analysis as a principal technique. The results of the factor analyses are employed in making recommendations concerning the construction of future ballots which will be used to evaluate college debate.

Chapter I contains a more detailed discussion of the purposes of the present study and a review of the relevant literature. Chapter II concerns the methods employed by the present study in obtaining and analyzing data on the evaluation of college debates. Chapter III contains the results of the factor analyses performed on these data. Chapter IV is a discussion of the results and their application to the construction of future college debate ballots.

CHAPTER I
PURPOSE OF THE PRESENT STUDY AND REVIEW OF THE LITERATURE

Purpose and Merit

By one method of conceptualization, the evaluation of debate may be divided into two categories: the overall type of evaluation, and the evaluation of lower-level items. This study will not be concerned with arguing the merits of either method. Rather, it will be concerned with investigating several questions which must be answered by proponents of the second method: (1) What are the lower-level items which judges use in an overall evaluation of a debate? (2) Do judges use these items in the same way in evaluating affirmative and negative teams?
(3) Do judges use these items in the same way in evaluating constructive and rebuttal speeches? (4) Do judges use these items in the same way in evaluating each speech in the debate? (5) Can these items be reduced to a manageable size and still retain their psychological meaningfulness? (6) What are the implications of the answers to the first five questions for the construction of a debate ballot?

The intrinsic merit of a study of judging in educational debate rests on the assumed value of that form of debate. Given that debating performs a valuable educational service and that the decision in debate is important as a motivating factor and as an educational device, it is justifiable to consider the methods by which decisions are made and the nature of the bases for these decisions. Once a list of items which are related to debating is obtained, the list must be examined to discover which items are important to judges' decisions in particular circumstances. Since such lists may be rather long, they must then be reduced to a manageable size while still retaining their meaningfulness to the debate. Only after the important items have been discovered and reduced to a manageable size can the methods by which decisions are made, and the bases for those decisions, be considered.

Review of the Literature

Though there have been many controversies over judging in debate, very little quantitative research of any form has been done in the evaluation of debate. Scott (1951) pointed out that "up to 1951, there exists no objective study on debate judging and little written opinion."1 His study consisted primarily of checking the number of affirmative and negative decisions which certain judges gave. In a somewhat similar study, Halstead (1940) investigated the influence of such factors as side in the debate, home school, and size of school on the judge's decision.2 He did not employ a significance test, but reported percentages of wins and losses.

1 Robert Scott, "A Study of the Objectivity of Debate Judges" (unpublished Master's thesis, Department of Speech, University of Nebraska, 1951), p. 4.

2 William P. Halstead, "Who Wins Debates?" Quarterly Journal of Speech, XXVI (April, 1940), 213.

Giffin (1959) employed a questionnaire in attempting to study the opinions of judges regarding the factors involved in decisions they had made.3 He did not attempt to study their actual behavior during the debate, but only the factors they reported using. These factors, in order of decreasing importance, were: case, evidence, refutation, analysis, delivery, organization, and language.

3 Giffin, "A Study of the Criteria Employed by Tournament Debate Judges," Speech Monographs, XXVI (March, 1959), 69.

Roever (1958), using a questionnaire, also studied the opinions of judges in an attempt to determine if "academically defensible" criteria were employed in debate tournaments.4 Roever departed slightly from the Giffin study in that he made an attempt to study the reliability of the judges' stated opinions.
He compared the number of times the judges actually employed the various criteria on their completed debate ballots with the number of times these criteria were mentioned by the judges on the questionnaire, and found positive correlations of over .90 for both 1957 and 1958 data. While not a direct measure of judge behavior, the attempt to study reliability readily distinguishes the Roever study from the Giffin study.

4 J. E. Roever, "A Study to Determine to What Extent Academically Defensible Criteria are Employed in the Judging of Tournament Debates" (unpublished Master's thesis, Department of Speech, University of Kansas, 1958).

Williams and Webb (1964) conducted a pilot study to determine the possibilities of applying factor analysis to the evaluation of high school debate.5 They selected from debate texts, journal articles, and debate ballots thirty-seven terms which were associated with the evaluation of debate. In the Williams and Webb study, during a single high school demonstration debate, twenty experienced judges were asked to rate each speaker on a seven-point scale running from "good" to "bad" which was associated with each item. A separate sheet was used for each speaker, plus a fifth sheet which was used for a general ranking of the speakers in terms of their overall performance. These data were factor analyzed on the CDC-1604 computer at the University of Wisconsin using factor analysis program BIMD 17, which employs a Varimax rotation of a principal axes solution. Unities were placed in the principal diagonal of the correlation matrix, and what appears to be the Kaiser criterion for "when to stop factoring" was employed.6 Using this criterion, the computer extracted six factors.

5 Fredrick Williams and Sally Ann Webb, "Factors in Debate Evaluation: A Pilot Study," Central States Speech Journal, XV (May, 1964), 126-128.

6 Harry H. Harman, Modern Factor Analysis (Chicago: The University of Chicago Press, 1960), cites the Kaiser criterion, p. 363. Kaiser used "extensive application of electronic computers" to find the number of common factors that are "necessary, reliable, and meaningful for the explanation of correlations among the variables." His recommendation is that with unities in the diagonal "the number of common factors should be equal to the number of eigenvalues greater than one of the correlation matrix." This is usually from one-sixth to one-third of the number of items. Harman does not cite the exact procedures of Kaiser's work, and the paper in which it appears is unpublished. Thus, the acceptance of this criterion must, for the present time, be based upon trust in Kaiser and Harman.

Williams and Webb state that in undertaking an interpretation of the rotated factor matrix only scales loading above .60 were considered, since loadings less than .60 on given factors tended to load appreciably higher on other factors.7 They interpreted the six factors in the following manner: (1) Argument, (2) Vocal Correctness, (3) Overall Delivery, (4) Apparent Character, (5) Uninterpretable, and (6) Uninterpretable. In the text and in a footnote they reported that the percentage of the total variance accounted for by each factor was (1) 36%, (2) 14%, (3) 20%, (4) 11%, (5) 14%, and (6) 6%, totaling essentially 100% when rounding error is considered.

7 Williams and Webb, op. cit., pp. 126-7.

By adding the scores of each speaker on each item which loaded over .60 on a given factor, it was possible to rank the speakers on each factor by comparing the sums. These rankings were then compared with the overall ranking given to each speaker by the twenty judges. Factor (1) appeared to be the best predictor of overall ranking, although no significance test was employed.
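Since the stopping rule described in the footnote above recurs in each of the studies reviewed here, it may help to restate it operationally. The sketch below is only an illustration of the general rule in present-day notation; it is not the BIMD 17 routine, and the ratings array it expects is hypothetical.

```python
import numpy as np

def kaiser_number_of_factors(ratings):
    """Count the eigenvalues of the item correlation matrix that exceed one.

    `ratings` is a (judgments x items) array of scale scores; the return
    value is the number of factors the Kaiser rule would retain.
    """
    corr = np.corrcoef(ratings, rowvar=False)  # item-by-item correlations
    eigenvalues = np.linalg.eigvalsh(corr)     # real eigenvalues of the symmetric matrix
    return int(np.sum(eigenvalues > 1.0))
```

For thirty-seven items a count of six falls near the one-sixth proportion Harman mentions, so the six factors reported by Williams and Webb are at least consistent with this rule.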
As a follow-up to this pilot study, Webb used factor analysis in a more extensive study of high school debate.8 She made extensive use of the findings and techniques of Williams and Webb, with several major and minor differences. She collapsed the thirty-seven scales of Williams and Webb into thirty scales, and the four-page experimental ballot into one page. She asked the judges to write a number from one to seven opposite each item, rather than making a check on a seven-point scale. The criterion for "dropping" scales was failure of the scale to load above .60 on any of the four factors reported in Williams and Webb. Webb was not consistent on this criterion, as she included General Effectiveness, Clarity, and Word Choice in her study though their highest loadings were .57, .49, and .53, respectively. The extent of this inconsistency may be seen by the observation that six of the seven scales she dropped for failing to have a high loading loaded higher than Clarity, which she included.

8 Sally Ann Webb, "Factors of Judgment in Evaluation of High School Debate" (unpublished Master's thesis, Department of Speech, University of Wisconsin, 1964).

Webb gathered her data by asking judges to volunteer to fill out the ballot in three high school practice debate tournaments. Out of a possible 621 debates, 138 had judges who returned a completed ballot. There were 552 debaters who took part in these 138 debates.

The Webb study was intended to answer two major questions. These were: (1) What factors of evaluation occur when analyzing the results from all debates as a group; (2) What factors of evaluation occur when analyzing the results from (a) individual tournaments, (b) different types of judges, (c) judges with x years of experience, and (d) each side in the debate separately.

In order to answer the two major questions stated above, Webb used the University of Wisconsin's CDC 1604 computer to perform Varimax rotations on a principal axes solution,9 feeding the data to the computer in seventeen different forms. These forms corresponded to the categories necessary to make the comparisons suggested in question two.

9 Webb apparently used the same BIMD 17 program as Williams and Webb.

BIMD 17 apparently called for unities in the principal diagonal of the correlation matrix,10 though Webb did not state this. A further departure from the pilot study is that Webb chose .68 as her cutoff point for high loadings, as opposed to .60 for Williams and Webb.

10 Williams and Webb, op. cit., p. 128.

The computer, using the Kaiser criterion for when to stop factoring, produced a four-factor solution in most of Webb's seventeen analyses. These factors were labeled Argument, Apparent Character, Delivery, and Vocal Correctness. The only exceptions to these four factors in the seventeen analyses occurred with judges who were not debate coaches and with judges of differing experience: the Vocal Correctness factor dropped out with the former, and a fifth "uninterpretable" factor was found in several of the latter cases. The answers to her two major questions were, in general, the four factors mentioned above.

There are several points of both the Williams and Webb study and the Webb study which are somewhat confusing. The first of these involves the choice of cutoff points, the naming of factors, and the use of Varimax rotation. Though the choice of cutoff points is somewhat arbitrary, it would seem that some rationale for a particular choice should be stated.
Williams and Webb, in a statement quoted in this paper, chose .60 for their cutoff point because items loading less than this tended to load higher on other factors. There are two primary methods for rotating factors obtained in the orthogonal case of a principal axes solution. One of these is the Varimax method employed by Williams and Webb, and the other is the Quartimax method. The Quartimax method was first developed in 1952 by Carroll and soon after, independently, by several other researchers.11 Kaiser first reported on the Varimax method in his unpublished Ph.D. dissertation in 1956.12 The goal of both methods is close to the intuitive concept which Thurstone referred to as "Simple Structure."13 Both methods attempt to obtain the most parsimonious solution in terms of number of factors and number of items loading high on particular factors. To be more specific, the goal of the Quartimax method is to have each item load high on one and only one factor, and low on the other factors.

11 John B. Carroll, "An Analytical Solution for Approximating Simple Structure in Factor Analysis," Psychometrika, XVIII (1953), 23-38.

12 Henry F. Kaiser, "The Varimax Method of Factor Analysis" (unpublished Ph.D. dissertation, Department of Psychology, University of California, 1956). Much of this was later published: Kaiser, "The Varimax Criterion for Analytic Rotation in Factor Analysis," Psychometrika, XXIII (1958), 187-200.

13 L. L. Thurstone, "An Analytic Method for Simple Structure," Psychometrika, XIX (1954), 173-182.

Williams and Webb listed Harman as a reference work on factor analysis. On the Quartimax method of rotation of a principal axes solution Harman states:

The ultimate objective would be a unifactor solution, in which each variable would be of complexity one, i.e., involve only a single common factor. If a unifactor solution were possible, the variance of each variable would result from but one factor loading; and a reasonable approach to this ideal would seem to require the maximum inequality in the distribution of the variance among the several factors for each variable in the factor pattern.14

This may be contrasted with the goal of the Varimax solution, which is to simplify the description of each factor by reducing the number of high-loading items on each factor. Again quoting Harman:

The emphasis in the Quartimax method is on simplification of the description of each row, or variable, of the factor matrix. In contradistinction, Kaiser15 places more emphasis on simplifying the columns, or factors of the factor matrix in an attempt to meet the requirements for simple structure.16

14 Harman, op. cit., p. 294.

15 Kaiser, op. cit.

16 Harman, op. cit., p. 301.

The important point here is that not only are the goals of the two methods different, but a choice must be made between them; one cannot employ both methods simultaneously. Harman says:

Thus, while simplicity of each variable may be attained concurrent with a large loading on the same factor, such a general factor is precluded by the simplicity constraint on each factor.17

17 Ibid.

With this in mind, it is unclear why Williams and Webb should have used the rationale that items which loaded lower than .60 on a given factor and which "tended to load appreciably higher on other factors" should not be included in the interpretation of the rotated factor matrix. Had they used a Quartimax rotation, this rationale would be acceptable. Since they chose to employ a Varimax rotation, such a rationale is questionable.
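Before turning to a concrete illustration, the contrast Harman draws can be stated compactly. Writing $b_{jp}$ for the rotated loading of item $j$ (of $n$ items) on factor $p$ (of $m$ factors), the two criteria, in their raw and unnormalized forms, are approximately:

$$ Q = \sum_{j=1}^{n}\sum_{p=1}^{m} b_{jp}^{4} \qquad \text{(Quartimax)} $$

$$ V = \sum_{p=1}^{m}\left[ \frac{1}{n}\sum_{j=1}^{n} b_{jp}^{4} - \left( \frac{1}{n}\sum_{j=1}^{n} b_{jp}^{2} \right)^{2} \right] \qquad \text{(Varimax)} $$

$Q$ is largest when each row of the loading matrix is concentrated on a single factor, that is, when each variable is simple; $V$ is the sum over factors of the variance of the squared loadings within a column, and is largest when each factor has a few high loadings and many near zero. These expressions are given only to make the row-versus-column distinction concrete; Harman and Kaiser also present normalized variants.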
To clarify the effect of this point on the analysis, one may consider the interpretation or naming of Factor I in the Williams and Webb study. They reported Factor I as having high loadings on Supporting Material, Logic, Analysis, Reasoning, and several other scales which may reasonably be construed to be aspects of Argument.18 Each of these loaded between .83 and .65 on Factor I. Yet Enthusiasm loaded .55 on Factor I, Ethics .56, Intelligibility .55, Sincerity .49, Clarity .49, Use of Figurative Language .46, and Word Choice .40.19 If a Quartimax rotation had been employed, then the fact that items such as Ethics, Intelligibility, and Use of Figurative Language load higher on factors other than Factor I could reasonably be used as a rationale for not including them in the interpretation of Factor I. Since a Varimax rotation was used, such a rationale is difficult to understand. This point was illustrated by reference to Williams and Webb, but the same difficulty is encountered in each of the seventeen analyses of the Webb study.

18 Williams and Webb, op. cit., p. 127.

19 These figures are taken from Appendix B of Webb, which contains the rotated factor matrix of the Williams and Webb study.

The second point which is difficult to understand involves the percentage of variance accounted for by the factors in the Webb and the Williams and Webb studies. As mentioned previously, Williams and Webb state that their six factors accounted for essentially 100% of the total variance of the items, and that they used unities in the principal diagonal of the correlation matrix.20 However, the values which are placed in this principal diagonal determine the percentage of the variance of the items which will be factored out. Referring to the principal diagonal, Harman states that "the portions of the variances to be factored are determined by these diagonal elements."21 When unities are used, generally as many common factors are needed to account for the total variance of the items as there are items. That is, thirty-seven items normally require thirty-seven common factors to account for 100% of the variance, with unities in the diagonal. Harman states that "when unities are employed, the resulting descriptions of the p variables are in terms of p (rarely fewer) common factors."22 Thus it is difficult to understand how only six factors can account for the total variance of thirty-seven items.

20 Williams and Webb, op. cit., pp. 127-128.

21 Harman, op. cit., p. 69.

22 Ibid. A similar statement occurs on p. 159.

Webb also claimed to have accounted for 100% of the total variance in each of her analyses. While she did not state the nature of the quantities which she used in the principal diagonal of the correlation matrix, she did use the same computer routine as did Williams and Webb, which probably employed unities as communality values. But even if values less than unity had been used for communalities, the total variance of p items still could not, in all probability, have been accounted for by fewer than p factors. The use of values less than one for communalities in no way changes the total variance. It simply ascribes a part of the variance of each item to a factor unique to that item and analyzes the remaining variance in terms of common factors.
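The role of the diagonal entries can be made explicit. In a standard rendering of the common-factor model (the notation here is not a quotation from Harman), a standardized item $z_j$ has unit variance partitioned into a common portion, the communality, and a unique portion:

$$ 1 = h_j^{2} + u_j^{2}, \qquad h_j^{2} = \sum_{p=1}^{m} a_{jp}^{2} . $$

With unities in the principal diagonal the total variance to be factored is therefore the number of items $n$, and the proportion of that total attributed to factor $p$ is $\frac{1}{n}\sum_{j=1}^{n} a_{jp}^{2}$, the sum of squared loadings on that factor divided by $n$. Retained factors can account for 100 per cent of the total variance only if their sums of squared loadings exhaust all $n$ units, which ordinarily requires about as many factors as items; this is the arithmetic behind the objection raised above.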
17 A third difficulty is that it is not immediately obvious that Webb's decision to exclude from her study all items of Williams and Webb which failed to load above .60 on any factor is well founded. Considering the fact that a Varimax rotation does not aim for each item's loading high on only one factor, it might be eXpected that several items would have their highest loading below .60 and still be rel— evant to the evaluation of debate. When it is considered that the pilot study involved only a single debate, it is difficult to understand how a decision could be reached that items which in one Special case were not relevant to the decision should be excluded from consideration in all other debates. As a fourth and final point a question must be raised concerning the number of factors which the Varimax rotation produced in the different analyses of the Webb study. Williams and Webb used the Kaiser criterion for when to stop factoring. While Webb did not state the criterion used for when to stop factoring, it is safe to assume that it was the Kaiser criterion since Webb employed the same com- puter routine as Williams and Webb. It was mentioned that the status of this criterion is somewhat uncertain since Kaiser's paper concerning it is unpublished and Harman does not explain exactly what Kaiser did to arrive at it. But aside from this, it must still be remembered that any single factor rotation, be it equal in number to the number of 18 eigenvalues greater than one or otherwise, is still only a single way of ordering the relationships among the original data. There is nothing magical or inflexible in the number of factors used to account for the variance among the items. Kaiser’s method is but one way which meets certain Specified standards that may or may not be relevant to the intent of a given study. In the Webb study there is little basis for implying that one set of judges structured the item Specif- ically into four factors while another set structured them Specifically into five. Webb fails to point this out. Distinctiveness of the Present Study Because of the Similarities between them, it is appropriate to discuss the distinctions between the present study and the Webb study. The first distinction concerns the origin of the items used. Whereas Williams and Webb took their items from debate texts and debate ballots in current use, the present study began with items from these sources but added to them by soliciting items from author— ities on educational debate. Appendix A contains the names of these authorities. The importance of the list of orig— inal items may be Seen from the fact that a factor analysis gives out only the data which are put in, but in another form. If the original list of items contains many terms concerning a single aSpect of debate, for example, evidence, then it becomes much more likely that a factor will appear 19 in the analysis that could reasonably be labeled Evidence. Thus, it is important to be as careful as possible in con- structing the original list of items not to load it in a particular direction. The items used in the Webb study were Selected from those used in Williams and Webb. An attempt was made by Webb to use a somewhat objective criterion in this Selection process by using the loading of .60 as a cutoff point, although she did not use it consistently. As stated, the present investigator asked authorities in debate to complete a list of items Such that the completed list would tend to cover the field of evaluation of debate. 
A Second distinction concerns the judges used in the two studies. Webb asked for volunteers to use her ballot in three tournaments. She got several different “types“ of judges as a result, among these being many students and otherwise inexperienced judges. The present study was con» cerned with obtaining only expert judgments of debate in order that the findings would be applicable to the construc— tion of a debate ballot which might be used by such expert judges in evaluation of college debate. This brings us to the third distinction. Webb eval_ uated high school debate while the present study was con- cerned with college debates. 20 The fourth distinction concerns the type of compar- ison which was undertaken in the present study as opposed to the Webb study. Webb made comparisons of affirmative team and negative team ratings, comparisons of each Speaker in the debate, comparisons by type of judge, and comparisons by experience of judge. Since the present study used only col- lege debate coaches with considerable debate experience as judges, a comparison as to type of judge was not considered meaningful. The two studies are similar in that the present study also made comparisons of affirmative and negative Speakers. The principal difference in the type of comparison made is that Webb asked judges to rate each Speaker and the present investigator asked judges to rate each Speech. At the beginning of this chapter it was pointed out that one way to conceptualize the evaluation of debate is to contrast the evaluation of the totality of the interac- tion with the evaluation of several lower-level items. The reason for considering the lower-level items as a means of debate evaluation is that, while it may be argued that eval- uating the total interaction of a debate should be the'final goal of the judge, this evaluation asks a great deal of one man or a group of men. To make Such a complex evaluation is an exceedingly difficult task. Thus it may be expected that some error will be almost unavoidable in a total evaluation. 21 If we may reduce the complexity of the evaluation the judge must make, however, while continuing to evaluate the total- ity of the debate, the probability of Such error may be reduced. One of the goals of the present study is to make Such a reduction possible. Webb was interested primarily in obtaining informa— tion from the judges in a form suitable for use in a factor analysis. Thus her study does not Suffer from instructing the judge to rate each Speaker. The present study is con- cerned not only with obtaining data suitable for a factor analysis, but also with obtaining data from the judge in such a manner that the judge need make only lowest-level decisions. In this way he need not attempt to average two or more decisions or evaluations. For example, on any given item such as Rate, in the present study the judge was asked to make one judgment for the first affirmative constructive Speech, and a second distinct judgment for the first affirma_ tive rebuttal Speech. In the Webb study he was asked, in essence, mentally to combine these two judgments and to come up with a single judgment for the first affirmative Speaker as to Rate. It must be understood, however, that these two distinct evaluations of Rate could be broken down still further. It is in no sense contended that the evaluation of Rate for one Speech is a true, lowestalevel item. The eval- uation of each Speech is the lowest convenient level. 
Thus 22 the probability of errors being introduced through a summa— tion process which occurs within the judge is considerably reduced. The present study also considers (l) constructive and rebuttal Speeches, and (2) an overall analysis of debates. A fifth distinction is that the present study used a Separate page for each Speech and an actual seven—point Scale which was printed out for each item. Williams and Webb used these techniques, but Webb used only a Single page and asked judges simply to write down a number from one to seven, as opposed to checking a Space or circling a number on a line segment. Sixth, Webb obtained her data from three high School practice debate tournaments using volunteer judges. Thus, there is no way to Specify the range of ability of the debaters involved. The data for the present study were obtained during a tournament at which only top level varsity college debaters were present and during a tournament con— taining varsity college debaters of lesser experience. The results of the present study, therefore, Should be applica- ble to college debate on the varsity level. Seventh, in the present study, two judges heard each debate as a reliability check on each other, while only one judge heard each debate in the Webb study. 23 One final distinction between the two studies may Sum up part of the reason for many of the previously listed distinctions between them. The goal of the Webb study was to investigate judges‘ behavior by means of factor analysis. The goal of the present study was to determine which items judges use together and to draw implications from these find— ings for the construction of future debate ballots. Summary To Summarize, a judge may evaluate the entire inter— action of debate, or he may evaluate several lower level items in an attempt to reach a decision. Little quantita- tive research has been done on the nature of the items which judges actually use to evaluate debates and the relation- ships of these items to each other. Halstead, Scott, Giffin, and Roever each investigated some aSpect of the evaluation of debate with a type of quantitative technique. Both of the Webb studies employed factor analysis as a tool for in- vestigating the evaluation of debate. These two latter studies are a major departure from the previous quantitative work which has been done in this area. While Halstead, Scott, Giffin, and Roever employed procedures Such as after- the-fact questionnaires, and counting numbers of decisions, in an initial attempt to quantify the evaluation of debate, their studies were intended primarily to describe judges' 24 opinions concerning what they had used to evaluate debates, to investigate the hypothesis that judges‘ prior opinions do not Significantly bias them in their judgments of debat- ing, or to describe the relation of variables such as “Size of school" to decision in debate. In contrast to this, Webb, and Williams and Webb, used factor analytic techniques to investigate the actual behavior of judges in their evalua- tions of different aSpects of high school debate. There are several features of these latter studies which are confusing, and several distinctions in methodology may be noted between them and the present study. One reason for some of these distinctions may be the differing intent of the studies. CHAPTER II METHOD Chapter I posed several questions concerning the nature of the evaluation of debate. This chapter will describe the methods used by the present study in attempting to answer those questions. 
The description is divided into four sections: (1) the questionnaire, (2) the experimental ballot, (3) the experimental tournaments, and (4) the analy— sis of the data. The Questionnaire In order to answer the questions posed in Chapter I, an initial list of items which are relevant to the evalua- tion of debate was compiled from Form C1 of the debate bal- lot published by the American Forensic Association and from a current debate text.2 This initial list was put into the form of a questionnaire and was sent to thirty-six persons recommended to the author as being highly competent author— ities in the field of college debate by members of the 1Form C is available from the secretary of the American Forensic Association. 2Austin Freeley, Argumentation and Debate (San Francisco: Wadsworth Publishing Company, 1961). 25 26 Speech Department faculty at Michigan State University. Twenty-eight of the thirty-six questionnaires were returned. Appendix A contains the questionnaire and the names of those returning it. The questionnaire asked the reSpondent to check each of the items as being either ”possibly relevant” or ”irrele— vant" to the evaluation of debate, and then to add to the list any additional items which, in the reSpondent’s opinion, would be necessary to allow him to evaluate completely any college debate. For any one person to compile such a list would be no small task. But by combining the additional items suggested by each of the respondents into the initial list, it was believed that a list which covered the field of possible debate items would be obtained. Thus, each of the items Suggested by the reSpondents was incorporated into the final list of items. The only exceptions to this were cases of items which were duplicated or very nearly duplicated. For example, if the reSpondent suggested the item Evidence, it was not included as a separate item since four different items suggesting four different dimensions of evidence were either on the questionnaire or suggested by respondents. These were Amount of Evidence, Quality of Evidence, Relation of Evidence to Conclusions, and Refutation of Opponent's Evidence. As mentioned in Chapter I, it was considered important to cover the field but equally important not to load the items toward a particular factor. 27 Since many reSpondents mentioned that a number of the items on the original list were in part or in whole covered by the items which they suggested, most of the items from the original list were changed in one form or another in transposing them to the final list. To give the two extreme examples, Poise and Eye Contact were marked Irrele— vant by over two—thirds of the respondents, and several others questioned their inclusion. They were drOpped from the final ballot. However, General Organization of the Speech, became Simply Overall Organization of the Speech, and Internal Organization of Individual Arguments was used in that form. The use of a questionnaire to obtain information on experts' current Opinions on the items necessary to debate evaluation was regarded as superior to obtaining a list of items solely from debate ballots and argumentation texts. The questionnaire allows not only a more current sampling of opinion from a larger group of authorities on debate, but also the sampling of the opinions of those who may actually be involved in using the items suggested. It is in no way contended that the twenty—eight individuals who returned the questionnaire constitute any form of random sample of any population. 
It is contended, however, that this group is large enough to represent a use». ful portion of the current thought on evaluation in debate. 28 The Experimental Ballot The final list of items obtained from the question— naires included thirty-one items. Thirty of these pertained to individual speakers or Speeches, in general, and one per- tained to the team as a whole. The consensus of the reSpon- dents was that Total Effect as a judgment on an individual Speech or Speaker was not generally relevant to the evalua- tion of a debate, but that the Total Effect of the team as a unit through the entire debate might be a very significant item. Thus the judge was asked to give a Total Effect team rating on a sheet which was separated from the other thirty items. Unity of Team Presentation was also seen as a team item, but one which could vary from Speech to Speech with significant influence on the outcome of the debate. Thus it was included in the evaluation of each speech. The thirty items were typed on eight and one-half inch by fourteen inch paper. A seven point scale running from ”performed very poorly” (one) to ”performed very well” (seven) was associated with each of the items. A separate sheet of items was used for the evaluation of each Speech. These sheets were stapled together and labeled in order, First Affirmative Constructive, First Negative Constructive, Second Affirmative Constructive, Second Negative Construc- tive, First Negative Rebuttal, First Affirmative Rebuttal, Second Negative Rebuttal, and Second Affirmative Rebuttal. 29 An additional sheet containing identification of debate and judge, rating of teams, decision, and other desired informa- tion, was used in conjunction with the eight—Sheet ballot. The combination of the eight-sheet ballot and the identifica— tion sheet will be referred to as the experimental ballot. Appendix B contains the two types of sheets used in the experimental ballot. The Experimental Tournaments The experimental ballot was employed in two inter- collegiate debate tournaments held at Michigan State Univer— sity. These were the Michigan State University Varsity Invitational Experimental Debate Tournament on February 26 and 27, 1965 and the Michigan Intercollegiate Speech League (M.I.S.L.) Debate Tournament on March 18, 1965. The Experi— mental Tournament was the prime source of data for the pres- ent study. Fourteen teams of experienced varsity college debaters from twelve schools participated.3 These teams were invited in large part due to their reputation for top Teams were present from the following colleges and universities: Albion College, Clarion College, Hiram Col— lege, Michigan State University (2), Northern Illinois University, Purdue University, University of Detroit, Univer- sity of Illinois at Navy Pier, Wayne State University (2), West Virginia University, Western Michigan University (men's team), and Wooster College. 30 quality debating as perceived by the forensic coaches of Michigan State University. Judging was done by the coaches of the teams entered in the tournament. Since one coach was present for every team, it was possible to have two judges listen to each debate. This served as a partial reliability check on the findings of this study. Each of the judges filled out both an experimental ballot and a Form C debate ballot of the American Forensic Association in each debate and in that order. The purpose of this was to minimize the possible effect of filling out Form C on the filling out of the experimental ballot. 
The Experimental Tournament consisted of Six pre- liminary rounds of debate plus a semi-final and champion~ ship round. The experimental ballot was used by two judges in each debate in the six preliminary rounds, but was not used in the two final rounds. Teams switched Sides on alternate rounds. The proposition for debate was the National Intercollegiate Debate Proposition for 1964-65, Resolved: That the Federal Government should establish a national program of public work for the unemployed. The range of experience of participating judges was from four to thirty—five years with a median of ten years. Upon preliminary investigation of the data from the Experimental Tournament, it was determined from the judges' team ratings that the range of ability of the debaters 31 participating was from fairly good to superior. In order to extend the applicability of the present study to poor as well as superior debating, experimental ballots were given to three members of the forensics coaching staff of Michigan State University for use in the M.I.S.L. varsity debate tournament. One of the teams which these judges heard had participated in the Experimental Tournament. This allowed for a comparison of judges' ratings of teams in the two tournaments. By examination of these three judges' ratings of teams, it was determined that at least one of the two teams in three of the debates conformed to the ”poor” category. Thus, the ballots from these three debates were included in the analysis. Procedures for filling out the experimental ballot in the M.I.S.L. Tournament were identical to those used at the Experimental Tournament. Judges rated each speech on each of thirty items on a seven-point scale, as well as giving an overall team rating for each team. Judges were allowed as much time after each Speech as was necessary in order to fill out the ballot. This usually was less than two minutes. 32 Analysis of the Data The data were transposed to Univac cards for use in the Control Data 3600 computer of the Michigan State Univer— sity Computer Laboratory and were analyzed using factor analysis program Fanod 3. Certain items on the experimen- tal ballot did not apply to the evaluation of the first affirmative constructive Speech and the first negative constructive speech. For purposes of analysis, a middle rating of four was assigned to each such item on these Speeches.4 It was believed that such items should not be eliminated from the experimental ballot simply because they could not be rated in each speech. Rather, it was felt that they should be included for that very reason since this would tend to point up any differences involved in evaluat— ing different Speeches. 4A separate analysis of the first affirmative con— structive Speech and the last seven speeches of the debate was conducted to determine the influence of this procedure on the variance being analyzed. A comparison of the results of these analyses indicates that the inclusion of these items did not influence the factors of the different analyses of this study to a noticeable degree. A comparison of the results of an analysis which did not include these items with the results of those which did also indicates that their inclusion did not noticeably affect the factors. 33 Factor analysis program Fanod 3 was used in the analysis of all data.5 Fanod 3 allows the user the Option of various types of analyses. 
The particular choices made for the present data were: (1) a Varimax rotation of a principal axes solution, (2) use of data in normal form, (3) unities in the principal diagonal of the original corre- lation matrix, (4) a complete rotational analysis, and (5) use of the Kiel—Wrigley criterion set at three for number of factors to be rotated.6 The Varimax method was chosen over the Quartimax method because the simplification of the hypo— thetical factors is more important to the interpretation of the results of this study than is the simplification of the description of any individual item. The choice of unities for communalities was made since there is no good way of knowing what portions of the variance of the individual items were due to unique and to common factors. As Harman states: 5A description of Fanod 3 is contained in the Mich- igan State University Computer Institute for Social Science Research Technical Report 2, September 22, 1964. 6The procedure of the Kiel—Wrigley criterion is to rotate successively larger numbers of factors with largest eigenvalues, beginning with two factors and adding a factor for each rotation until the solution yields a factor with fewer than the Specified number of items which have their highest loading on that factor. Three is the conventional number of items. 34 Under the assumed composition of the variables, the communalities are the basic quantities to be analyzed. Herein lies the trouble-—there is no a priori knowledge of the values of the communalities.7 Summary To summarize the method of this study, a question— naire containing an initial list of items relevant to the evaluation of debate was Sent to thirty-six qualified per- sons, who were requested to add items to the list so that it could be used to evaluate any college debate. Twenty-eight of these persons reSponded, and essentially each of the items which they Suggested was incorporated into the final list of items. This final list is assumed to cover the field of items necessary to evaluate a college debate. It was printed into ballot form and used by experienced debate judges in the evaluation of debates which ranged from low to high in qual- ity in two college tournaments. The data so obtained were factor analyzed by a computer to determine which items tended to be grouped together in their employment by judges. 7Harman, op. cit., p. 69. CHAPTER III RESULTS Chapter I poses six questions which concern problems relevant to the construction of a debate ballot. Chapter II discusses the formation of an experimental ballot and its use to gather data in an attempt to answer these questions. The present chapter is concerned with the results of the control analyses and the results and interpretation of the factors of the overall analysis, the analysis of affirmative speeches, negative speeches, constructive speeches, rebuttal Speeches, and the analyses of the eight individual Speeches of a debate. Results of Control Analyses An attempt was made in this study to check for cer- tain types of error by comparing the results of factor anal- yses of particular groupings of the data. There is no sta- tistical procedure for comparing the results of one analysis with the results of another. 
If upon inspection the struc— ture of the factors of one analysis is Seen to be different from the structure of the factors of a second analysis with respect to item inclusion and item strength, the only conclu— sion which may be drawn is that the data analyzed in the two 35 36 cases are probably different but might be the same. If, on the other hand, the structure of the factors of two analyses is almost identical with respect to the items loading high on a given factor and their relative position in terms of load- ings, it would seem intuitively correct to treat the data on which each of the analyses were based as if they had come from_the same population of data. It is on this intuitive basis that the results of several analyses concerning con- trol were compared. There are two important distinctions with respect to judges between the present study and the Webb study. The present study used only judges of known competence, and per~ formed a check on the reliability of those judges' ratings. This check was based on the assumption that persons with eleven or more years of experience in working with college debate are competent judges. When the data of the present study were grouped according to judges of eleven or more years of experience and judges of ten or less years of expe- rience, a comparison of the four—factor solutions of the two analyses Showed that the two groups behaved almost identical— ly on all but one or two items on each of the four factors. (The method of choosing the four—factor solution as opposed to another solution is covered in some detail under the over- all analysis.) This strong similarity would tend to indi— cate that the judges with ten years of experience or less 37 were behaving in essentially the same manner as the judges with more than ten years of experience. A partial check on the reliability of the judges' ratings was provided by the presence of two judges in each debate. The two ballots from each debate were randomly . assigned to two groups labeled A and B reSpectively. A fac- tor analysis was then performed on each group, and the five- factor solutions of these analyses were compared. The five- factor solutions are almost identical in every respect with the noteworthy exception of the three items pertaining to ethics. The items Trustworthiness, Proper Identification of Sources, and Behaved Ethically and Fairly, did not follow the same pattern in the A and B groups. This would tend to indicate that the judges' use of all but these three items was consistent throughout the debates. Overall Analysis The analysis of the data and the interpretation of the analyses will be the principal concern of the remainder of this chapter. An analysis over all of the judges, speeches, and debates was run to answer the first question of this study: What are the lower—level items judges use to evaluate a debate? Using the Kiel-Wrigley criterion set at three items, the computer produced Six matrices of rotated factor loadings for two—factor through seven-factor solu- tions. These matrices may be found in Appendix C. Five of 38 the roots of the characteristic equation of the correlation matrix (eigenvalues) were greater than one. Thus Kaiser would suggest the use of the five-factor rotation of the principal axes solution. Preliminary Examination The solutions will be examined in order from smallest to greatest number of factors. No attempt will be made to interpret a particular rotated solution in detail until each of the rotations has undergone a preliminary examination. 
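Before those examinations are taken up, it may help to restate in outline the procedure that generated the solutions. The sketch below is a hypothetical reconstruction in present-day notation of a principal axes extraction with unities in the diagonal, a Varimax rotation, and a Kiel-Wrigley style stopping rule set at three items; it is not the Fanod 3 routine itself, and the function names and the ratings array are illustrative only.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a loading matrix (Kaiser's 1958 criterion)."""
    n_items, n_factors = loadings.shape
    rotation = np.eye(n_factors)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / n_items))
        rotation = u @ vt
        if s.sum() < criterion * (1.0 + tol):   # no further improvement
            break
        criterion = s.sum()
    return loadings @ rotation

def principal_axes_loadings(ratings, n_factors):
    """Loadings of a principal axes solution with unities in the diagonal."""
    corr = np.corrcoef(ratings, rowvar=False)
    values, vectors = np.linalg.eigh(corr)        # eigenvalues in ascending order
    keep = np.argsort(values)[::-1][:n_factors]   # largest eigenvalues first
    return vectors[:, keep] * np.sqrt(values[keep])

def rotated_solutions(ratings, min_items=3, max_factors=7):
    """Rotate 2, 3, ... factors, stopping after a solution yields a factor
    on which fewer than `min_items` items have their highest loading."""
    solutions = []
    for m in range(2, max_factors + 1):
        rotated = varimax(principal_axes_loadings(ratings, m))
        solutions.append(rotated)
        strongest = np.argmax(np.abs(rotated), axis=1)   # each item's strongest factor
        if np.bincount(strongest, minlength=m).min() < min_items:
            break
    return solutions
```

Each successive entry of `solutions` corresponds in structure to one of the rotated matrices examined below.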
The rotated factor matrices for the two-factor through seven— factor solutions of the overall analysis may be found in Appendix C. Two-factor Solution The first factor extracted in the two-factor solu— tion has loadings of .50 or above on nineteen of the thirty- one items, and accounts for 33% of the total variance.1 The two-factor solution accounts for 54% of the total variance. The first factor has relatively high loadings (as operation- ally defined at .50 or above) on items concerning organiza- tion, evidence, reasoning, adaptation, refutation, analysis lAll correlations of items and factors reported in the text are rounded to two decimal places. The interpreta- 'tion of the seven-factor solution of the overall analysis gives a full explanation of the use of .50 as a cutoff point for considering items on a given factor. 39 of the prOposition, and analysis of issues.2 It correlates .63 with overall team ratings. The second factor has rela— tively high loadings on items referring to aspects of delivery and ethical behavior, and it correlates .27 with overall team ratings. From 22% to 76% of the variance of particular items is accounted for by the two-factor solution. Four items have less than 40% of their variance accounted for by this rotation. Three-factor Solution The three-factor rotation accounts for 60% of the total variance of the items, with the factors accounting for 31%, 13%, and 16%, respectively, of this variance. The first factor has eighteen items which load .50 or higher. They are aspects of organization, evidence, reasoning, adaptation, refutation, analysis of proposition, and analy~ sis of opposing arguments. This factor correlates .62 with overall team ratings. The second factor has only five items loading .50 hr above, and these are all aspects of delivery. This factor correlates .17 with overall team ratings. The third factor has seven items with relatively high loadings. These were aSpectS of persuasive techniques, evidence, and ethical behavior; and they correlated .25 with overall team 2This list of terms is not an attempt to name the factor. It is Simply a statement using conventional debate concepts which indicate the general content of the factor. 40 ratings. From 41% to 76% of the variance of the individual items is accounted for by the three-factor solution. Four-factor polution The four—factor rotation accounts for 65% of the total variance. Eleven items load .50 or above on the first factor. These are aspects of reasoning, adaptation, refuta— tion, and analysis. Factor I correlates .46 with overall team ratings and accounts for 22% of the total variance. Four items concerned with persuasive techniques and ethical behavior load above .50 on the second factor. This factor correlates .19 with overall team ratings and accounts for 14% of the total variance. Factor III contains five items with relatively high loadings. These are aspects of deliv— ery. Factor III accounts for 11% of the total variance and correlates .11 with overall team ratings.3 Factor IV con- tains nine items with loadings of .50 or higher. These items are aspects of organization, evidence, reasoning, and analysis of the proposition. This factor correlates .47 with overall team ratings and accounts for 17% of the total variance. Between 42% and 78% of the variance of individ~ ual items is accounted for by the four—factor rotation. 3Negative correlations of items and factors are mathematically important but have no practical nor statis— tical significance. 
No minus signs are reported in the text for this reason. 41 Five-factor Solution The five-factor rotation accounts for 68% of the total variance. The first factor accounts for 23% of this total variance and correlates .47 with overall team ratings. Twelve items load .50 or above on Factor I. These items are aspects of reasoning, adaptation, refutation, analysis of opposing arguments, and analysis of the pr0position. The second factor loads high on only three items which are aSpects of delivery. Factor 11 accounts for 8% of the total variance and correlates .10 with overall team ratings. Factor III loads high on seven items which are aSpects of persuasive techniques and ethical behavior. This factor correlates .20 with overall team ratings and accounts for 15% of the total variance. Factor IV loads high on four items. These are concerned with evidence, clarity, and organization. Factor IV accounts for 15% of the variance and correlates .47 with overall team ratings. The fifth factor again loads high on only three items. These are different aspects of delivery from the items loading high on Factor II. Factor V correlates .05 with overall team ratings and accounts for 7% of the total variance. From 42% to 80% of the variance of the individual items is accounted for by this rotation. 42 Six-factor Solution The six—factor solution accounts for 71% of the total variance and from 52% to 83% of the variance of the individual items. Factor I loads high on eleven items which are aspects of reasoning, adaptation, refutation, and analysis. This factor correlates .47 with the overall team ratings and accounts for 21% of the total variance. Factor 11 accounts for 8% of this variance and correlates .08 with overall team ratings. Three items, all of which are aspects of delivery, load above .50 on this factor. The third fac- tor accounts for 11% of the variance and correlates .19 with overall team ratings. This factor has four items which load above .50. Those items are concerned with motive appeals and ethical behavior. The fourth factor correlates .45 with overall team ratings and accounts for 10% of the variance. Three items load high on Factor IV. These are concerned with organization and clarity. Factor V also has only three high loadings. These are aSpects of delivery separate from those items loading high on Factor II. Factor V accounts for 7% of the total variance and correlates .06 with overall team ratings. Factor VI has seven items which load .50 or above. These are aspects of the quality of preparation for the debate. This factor accounted for 14% of the total variance and correlated .23 with overall team ratings. 43 Seven-factor Solution The seven-factor solution accounts for 74% of the total variance and from 53% to 85% of the variance of the individual items. Factor I contains eleven items which load .50 and above. These are concerned with reasoning, refutation, analysis, and adaptation. Factor I correlates .46 with overall team ratings and accounts for 21% of the variance. The second factor has three items loading high. These are aSpects of delivery. It correlates .07 with overall team ratings and accounts for 7% of the variance. Factor III contains four items with high loadings. These items are concerned with motive appeals and ethical behavior. This factor correlates .20 with overall team ratings and accounts for 10% of the total variance. Factor IV accounts for 11% of the variance and correlates .47 with overall team ratings. There are three items which load .50 or above on this factor. 
They are aSpectS of organization and clarity. The fifth factor accounts for 7% of the variance while corre- lating .00 with overall team ratings. There are also three items which load high on Factor V. These items are aSpects of delivery. Factor VI contains four items which load above .50. They are concerned with aSpects of evidence. This fac- tor correlates .14 with overall team ratings and accounts for 11% of the total variance. The seventh factor contains two items which load above .50. These are aSpects of the 44 interestingness of the Speaker. Factor VII correlates .19 with overall team ratings and accounts for 7% of the total variance. This ends the preliminary examination of the six rotated factor solutions of the overall analysis. The prob— lem is to decide which of the solutions best fit the data and yields the most information concerning the relationships of the items as they were employed by the judges. Criteria for Choosing a Solution The best solution that might be hoped for under the given conditions of the type of Varimax rotation which this study employs would be one which (1) accounts for approxi- mately 70% of the total variance, (2) gives a relatively specific breakdown of the hypothesized conceptualization of the items by the judges, and (3) accounts for a relatively high and even percentage of the variance of the individual items. If much more than 70% of the variance is accounted for, the number of factors will approach the number of items, thus defeating the purpose of the factor analysis. If much less than 70% is accounted for, the number of factors will be small, giving only a very general picture of the judges‘ categorization of the items. Also, the picture given by a solution accounting for much less than 70% of the total vari— ance will not be an accurate representation of the use of 45 the items by the judges. This is the meaning of the phrase "percentage of variance accounted for.” If too general a breakdown of the conceptualization of the items is given, information is lost. This works in conjunction with the percentage of variance accounted for. A high and relatively even percentage of the variance of individual items should be accounted for in order to bias the solution as little as possible. In a Varimax rotation of a principal axes solution, each of these three criteria is positively related to the others. The factor analysis program used in the present study, Fanod 3, employs the Kiel— Wrigley criterion for when to stop factoring, set at three items. This means that every factor of each of the rotated solutions produced by the computer will contain at least three items which have their highest loading on that factor. This, in turn, means that normally the rotated solution pro- duced by the computer which contains the largest or next to the largest number of factors will best fit the above three criteria. Use of Criteria in Choosing a Solution The two-factor solution is unacceptable for several reasons. First, it divides the items into such large and general categories that they are relatively uninterpretable and useless in providing information concerning the similar use of items by judges. Second, only 54% of the total 46 variance is accounted for by the two factors. To account for 100% of the variance of N items, N factors are needed. 
While N factors would provide little information on the method of categorization which the judges employed, two factors are so general and, relatively, account for So little of the total variance that a two-factor solution is not Significantly better. A third reason for rejecting the two-factor solution is the large inequity in the amount of variance which indi- vidual items contributed to this solution. The percentage of variance accounted for by individual items ranges from 22% to 76%. Four items have less than 40% of their variance accounted for by this solution. This means that the two- factor solution is biased in the direction of the remaining twenty-seven items. The two—factor solution is unacceptable for these reasons. The three—factor solution is an improvement on the two-factor solution in reSpect to each of the three criteria. It accounts for 60% of the variance as opposed to 54% for the two-factor solution. It has three factors with eighteen, five, and seven items, reSpectively, loading above .50 on each of its factors, as opposed to nineteen and nine, reSpec— tively, for the two-factor solution. The three-factor solu- tion accounts for from 41% to 76% of the variance of partic- ular items. It is still biased, as all Solutions must be to some extent, but not so much as the two—factor solution. 47 The three-factor solution is better than the two- factor solution, but still is defective in light of these standards. The four-factor solution accounts for 65% of the variance, the five-factor solution for 68% of the variance, the Six-factor solution for 71% of the variance, and the seven-factor solution for 74% of the variance. A relatively specific breakdown of the item usage is provided by each of the solutions with four or more factors. The four—factor solution accounts for between 42% and 78% of the variance of individual items. The five— factor solution accounts for between 42% and 80% of this individual item variance, the Six—factor solution accounts for from 52% to 83%, and the seven-factor solution for from 53% to 85% of this variance. Neither the four—factor solu- tion nor the five-factor solution is satisfactory in account- ing for individual item variance since the range is large and the percentage of variance accounted for is relatively small. The range of the percentage of individual item vari— ance accounted for is still fairly large with both the six- factor and seven—factor solutions. Since the range is easily affected by extreme values, it may be useful to con— sider the range if the lowest value of each of these solu- tions is neglected. The six—factor solution, then, has a range of twenty-eight percentage points, while the seven- factor solution has a range of twenty percentage points. 48 The more even percentage of individual item variance accounted for and the greater percentage of total variance also accounted for determine the seven—factor solution as the solution which best meets the stated criteria for choice of solutions.4 This completes the choice of solutions of the overall analysis. Naming of Factors Rationale for Choice of Cutoff Point In order to name the factors, a choice must be made regarding a criterion for determining when an item loads high enough on a particular factor to be given consideration in the naming of that factor. Usually this criterion is in the form of a cutoff point, with any item loading below this cutoff value not considered in the naming of the factor. 
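The solution-choice criteria of the preceding section can be checked mechanically for any candidate rotated solution. The sketch below is illustrative only and is expressed in Python with the NumPy library purely for convenience; the loading values are hypothetical stand-ins for the rotated matrices produced by the Fanod 3 program, which are not reproduced here. Because the rotation is orthogonal, squared loadings partition each item's variance, so the script recovers the two quantities compared above: the percentage of total variance accounted for and the range of individual item variance (communality) accounted for.

    import numpy as np

    # Hypothetical rotated loading matrix: one row per ballot item, one
    # column per factor.  Illustrative values only; the thesis's matrices
    # came from the Fanod 3 program and are not reproduced here.
    loadings = np.array([
        [0.81, 0.10, 0.05],
        [0.55, 0.20, 0.31],
        [0.12, 0.85, 0.07],
        [0.08, 0.63, 0.22],
        [0.30, 0.15, 0.76],
    ])
    n_items = loadings.shape[0]

    # With an orthogonal (Varimax) rotation, squared loadings partition variance.
    squared = loadings ** 2

    # Communality: share of each item's variance accounted for by the solution.
    communalities = squared.sum(axis=1)

    # Standardized items each contribute one unit of variance, so the total
    # variance is n_items; the column sums give each factor's share.
    pct_per_factor = 100 * squared.sum(axis=0) / n_items

    print("Percent of total variance per factor:", np.round(pct_per_factor, 1))
    print("Percent of total variance, all factors:", round(pct_per_factor.sum(), 1))
    print("Range of item variance accounted for: "
          f"{100 * communalities.min():.0f}% to {100 * communalities.max():.0f}%")

The percentages printed by this sketch are the quantities compared in the preceding paragraphs; the cutoff discussion which follows rests on the same fact that a squared loading is the share of an item's variance associated with a given factor.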
Chapter I lists the difficulties encountered, given a Varimax rotation, in defending a choice of cutoff points on the basis of items loading higher on other factors. Yet some cutoff point must be used since the purpose of a factor analysis is to determine which items were used in a similar manner. The analysis is not useful unless it yields a difference in the item structure of different factors. For the purposes of the present study it is assumed that items which contribute less than 25% of their variance to a given factor will probably have little meaning for recommendations concerning the construction of debate ballots. This assumption is somewhat arbitrary and there are several fractions other than 25% which could reasonably be defended.

4 It should be noted that the Kaiser criterion is not employed in the present study.

For the sake of ease in the naming of factors and ease in referring to the relative strength of items, those items which load above .50 on a given factor will be referred to as contributors to that factor; and items loading above .70 will be referred to as principal contributors. These correlations correspond to 25% and 49%, respectively, of an item's variance and were chosen primarily because of their close correspondence to the relatively familiar fractions of one-fourth and one-half.

It should be noted that the factors of each of the analyses of the present study are named according to two general rules. First, names are composed of the essential feature or features of every contributor to a given factor. Second, these features are arranged in order in the name, from highest loading to lowest loading, insofar as this is possible. The names which result from the application of these two rules are often fairly long and complex. This disadvantage, however, is more than offset by the utility of the name, since the reader knows at a glance the essential item structure of the factor. At this point in the analysis, it would be a mistake to assume that the judges necessarily structured their ratings into concepts which can be named in one or two words.

Use of Cutoff Point in Naming of Factors

The characteristic equation of the overall analysis contains five roots which are greater than one. The seven-factor solution of the overall analysis was chosen as the one which best meets the requirements of the stated criteria. This solution accounts for 74% of the total variance and from 53% to 85% of the variance of individual items.

Factor I accounts for 21% of the total variance and correlates .46 with overall team ratings. There are eleven contributors to Factor I. Principal contributors are the items: Recognition of Main Issues of the Debate (.81), and Ability to Follow the Main Issues through the Rebuttal Period (.80). The nine other contributors are the items: Refutation of Opponent's Reasoning (.79), Adaptation to Opponent's Case (.78), Unity of Team Presentation (.77), Ability to Find Fallacies (.68), Analysis of the Proposition and Selection of Arguments (.61), Fulfillment of Responsibilities of Speaker's Position in the Debate (.60), Refutation of Opponent's Evidence (.55), Depth and Scope of Knowledge of Topic (.53), and Quality of Reasoning (.52).

In light of the two rules for naming factors, Factor I may be called the Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well-Reasoned Attacks on the Opposition's Reasoning, Case, and Evidence.
Factor II accounts for 7% of the total variance and correlates .07 with overall team ratings. There are three contributors to Factor II. There is only one principal contributor, which is Grammar (.85). The two other contributors are Articulation (.63) and Word Choice (.57). Factor II may be named Fundamental Verbal Facility with respect to Grammar, Articulation, and Choice of Words.

Factor III accounts for 10% of the total variance and correlates .20 with overall team ratings. There are four contributors to Factor III. The principal contributors are Use of Motive Appeals other than Humor (.81), Use of Humor (.76), and Behaved Ethically and Fairly (.75). The only remaining contributor is Trustworthiness (.67). Factor III may be named Use of Persuasive Techniques in an Ethical Manner.5

5 The terms persuasive and persuasion as used in this paper are always intended to be understood as referring to extra-logical persuasion.

Factor IV accounts for 11% of the total variance and correlates .47 with overall team ratings. There are three contributors to Factor IV. Principal contributors to Factor IV are Overall Organization of the Speech (.85) and Internal Organization of Individual Arguments (.79). The other contributor is Clarity of Statements (.64). Factor IV may be named Overall and Internal Organization and Clarity of the Speech.

Factor V accounts for 7% of the total variance and correlates .00 with overall team ratings. There are three contributors to Factor V. Principal contributors are Rate (.88), and Delivery in General (.75). The other contributor is Articulation (.54).6 Factor V may be named General Delivery with Emphasis on Rate and Consideration of Articulation.

6 It should be noted that Articulation was a contributor to Factor II but was not eliminated from consideration on Factor V. See the discussion in Chapter I of the distinction between Quartimax and Varimax rotations of a principal axes solution.

Factor VI accounts for 11% of the total variance and correlates .14 with overall team ratings. There are four contributors to Factor VI. The only principal contributor is Proper Identification of Sources (.76). The three other contributors are Quality of Evidence (.68), Amount of Evidence (.68), and Relation of Evidence to Conclusions (.62). Factor VI may be named the Proper Use, Quality, and Amount of Evidence.

Factor VII accounts for 7% of the total variance and correlates .19 with overall team ratings. There are two contributors to Factor VII, and the only principal contributor is Dynamism (.75). The other contributor is Interestingness (.59). Factor VII may be named Dynamism and Interestingness.

This concludes the naming of the seven-factor solution of the overall analysis. In each of the remaining analyses the procedures used in reporting the results of the overall analysis were repeated step-by-step. Since a description of the preliminary examination and the choice of solutions of each of these analyses would involve unnecessary repetition, the particular choice of solutions is simply stated in each case. The naming of the factors involves the use of the same rules as in the overall analysis.

Having completed the overall analysis, the remaining twelve analyses may now be discussed. One of the questions posed in Chapter I concerns the possible difference between judges' ratings of affirmative and negative teams. An analysis of affirmative speeches and an analysis of negative speeches were completed in order to answer this question.
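Before turning to the separate analyses, the naming rules above can be made concrete with a short script. The sketch below is illustrative only: the item labels and loading values are hypothetical stand-ins for the Fanod 3 output, and the .50 and .70 cutoffs are the contributor and principal-contributor thresholds defined in the preceding section. Within each factor the listing runs from highest to lowest loading, as the second naming rule requires.

    import numpy as np

    # Hypothetical item labels and rotated loadings (illustrative only).
    item_names = ["Recognition of Main Issues", "Quality of Reasoning",
                  "Grammar", "Articulation", "Rate"]
    loadings = np.array([
        [0.81, 0.10, 0.05],
        [0.52, 0.20, 0.31],
        [0.12, 0.85, 0.07],
        [0.08, 0.63, 0.54],
        [0.30, 0.15, 0.88],
    ])

    CONTRIBUTOR = 0.50   # roughly 25% of an item's variance
    PRINCIPAL = 0.70     # roughly 49% of an item's variance

    for f in range(loadings.shape[1]):
        column = loadings[:, f]
        # List contributors from highest to lowest loading (second naming rule).
        order = np.argsort(column)[::-1]
        contributors = [(item_names[i], column[i]) for i in order
                        if column[i] >= CONTRIBUTOR]
        principals = [name for name, value in contributors if value >= PRINCIPAL]
        print(f"Factor {f + 1}")
        print("  principal contributors:", principals)
        print("  all contributors:",
              [f"{name} ({value:.2f})" for name, value in contributors])

Note that an item may qualify as a contributor to more than one factor, as the item Articulation does in the seven-factor solution reported above; a Varimax rotation does not rule this out.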
54 Analysis of Affirmative Speeches The six-factor solution of the affirmative speeches factor analysis was chosen in accordance with the stated criteria. Five of the eight eigenvalues of the affirmative Speeches analysis are greater than unity. The Six—factor solution accounts for 66% of the total variance and from 47% a to 79% of the variance of individual items. Factor I accounts for 18% of the total variance and correlates .35 with overall team ratings. There are seven contributors to Factor 1. Of these, the principal contrib— utors are Refutation of Opponent‘s Reasoning (.84), Adapta— tion to Opponent's Case (.80), Recognition of Main Issues of the Debate (.79), Unity of Team Presentation (.78), Ability to Follow the Main Issues Through the Rebuttal Period (.76), and Ability to Find Fallacies (.73). The one other contrib— utor is Refutation of Opponent‘s Evidence (.59). Factor I may be named the Continuous Analysis and Attack of the Prin— cipal Issues of the Debate by the Use of Consistent Attacks on the Negative's Reasoning, Case, and Evidence. Factor II accounts for 15% of the total variance and correlates .22 with overall team ratings. There are six contributors to Factor 11. Principal contributors are Use of Motive Appeals other than Humor (.79), Use of Humor (.76), Behaved Ethically and Fairly (.75), and Trustworthiness (.72). The other two contributors are Quality of Evidence 55 (.52) and Amount of Evidence (.50). Factor 11 may be named Use of Persuasive Techniques and of Evidence in an Ethical Manner. Factor III accounts for 8% of the total variance and correlates .12 with the overall team ratings. There are three contributors to Factor III. The principal contributors are Grammar (.87) and Articulation (.76). The one other con— tributor is Word Choice (.60). Factor III may be named Fundamental Verbal Facility with respect to Grammar, Articu- lation, and Choice of Words. Factor IV accounts for 17% of the total variance and correlates .41 with overall team ratings. There are eleven contributors to Factor IV. Principal contributors are Over— all Organization of the Speech (.79) and Internal Organiza- tion of Individual Arguments (.75). The nine other contrib- utors are Clarity of Statements (.65), Relation of Evidence to Conclusions (.64), Amount of Evidence (.58), Quality of Evidence (.57), Fulfillment of Responsibilities of Speaker's Position in the Debate (.56), Quality of Reasoning (.54), Proper Identification of Sources (.53), Analysis of the Proposition and Selection of Arguments (.52), and Depth and Scope of Knowledge of Topic (.51). Factor IV may be named Overall and Internal Organization and Clarity; Use, Quality, and Amount of Evidence; Reasoning, Analysis, and Depth of Knowledge of Topic. 56 Factor V accounts for 7% of the total variance and correlates .02 with the overall team ratings. There are three contributors to Factor V. Principal contributors are Rate (.85) and Delivery in General (.80). The one other con— tributor is Interestingness (.54). Factor V may be named Delivery in General with Emphasis on Rate and Considering Interestingness. Factor VI accounts for 7% of the total variance and correlates .27 with the overall team ratings. The principal contributor is Dynamism (.76) and the other contributor is Interestingness (.58). Factor VI may be named Dynamism and Interestingness. This concludes the naming of the Six- factor solution of the affirmative speeches analysis. 
Analysis of Negative Speeches Five of the eigenvalues of the negative speeches analysis were greater than one, and the four-factor solution of this analysis was chosen in accordance with the stated criteria. This solution accounts for 66% of the total vari- ance and from 47% to 79% of the variance of individual items. Factor I accounts for 35% of the total variance and correlates .67 with overall team ratings. There are twenty contributors to Factor 1. Principal contributors are Refuta— tion of Opponent's Reasoning (.85), Adaptation to Opponent's Case (.84), Analysis of Proposition and Selection of Argu- ments (.82), Recognition of Main Issues of the Debate (.81), 57 Quality of Reasoning (.81), Fulfillment of Responsibilities of Speaking Position in the Debate (.81), Depth and Scope of Knowledge of Topic (.80), Relation of Evidence to Conclu— sions (.74), Ability to Follow the Main Issues Through the Rebuttal Period (.72), Quality of Evidence (.71), and Unity of Team Presentation (.70). The nine other contributors are Competence (.69), Ability to Find Fallacies (.67), Refuta- tion of Opponent's Evidence (.60), Internal Organization of Individual Arguments (.58), Amount of Evidence (.56), Over- all Organization of the Speech (.54), Clarity of Statements (.54), and Dynamism (.52). Every item on the experimental ballot which is not a measure of delivery or of ethical behavior is a contributor to this factor. This is reflected in the name of Factor I which is Refutation, Adaptation, Analysis, Recognition of Issues, Reasoning, Knowledge of Topic, Evidence, Organization, and Clarity. Factor 11 correlates .25 with overall team ratings and accounts for 7% of the total variance. There are four contributors to Factor 11, although it has no items loading above .70. The four contributors are Delivery in General (.57), Internal Organization of Individual Arguments (.53), Overall Organization of the Speech (.52), and Clarity of Statements (.51). Factor 11 may be named Delivery with Emphasis on Organization and Clarity. 58 The third factor of the analysis of negative Speeches correlates .19 with overall team ratings and accounts for 14% of the total variance. It has five contributors, of which two are principal contributors. These principal contributors are Use of Motive Appeals other than Humor (.83) and Use of Humor (.75). Other contributors to Factor III are Behaved Ethically and Fairly (.68), Trustworthiness (.59), and Proper Identification of Sources (.57). Factor 111 may be named Use of Persuasive Techniques in an Ethical Manner. 1‘ Factor IV has four contributors, correlates .08 with overall team ratings and accounts for 10% of the total vari- ance. Articulation (.81) and Grammar (.74) are principal contributors to Factor IV, while Rate (.64) and Word Choice (.57) are the other contributors. Factor IV may be named Fundamental Verbal Facility, Rate, and Articulation. This concludes the analysis of the negative Speeches. Another of the questions asked in Chapter I concerns the possible differences in judges' ratings of constructive and rebuttal Speeches. An analysis of all constructive speeches and an analysis of all rebuttal speeches were com- pleted in order to answer this question. 59 Analysis of Constructive Speeches The six—factor rotation of the analysis of construc- tive speeches was chosen in accordance with the stated cri— teria. Five of the eigenvalues of the constructive Speeches analysis are greater than unity. 
The six-factor solution accounts for 72% of the total variance and from 56% to 85% of the variance of individual items.

Factor I accounts for 14% of the total variance and correlates .29 with the overall team ratings. There are six contributors to Factor I. Principal contributors are Ability to Follow the Main Issues Through the Rebuttal Period (.78), Recognition of Main Issues of Debate (.77), Unity of Team Presentation (.77), Adaptation to Opponent's Case (.72), and Refutation of Opponent's Reasoning (.72). The one other contributor is Ability to Find Fallacies (.53). Factor I may be named The Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well-Reasoned Attacks on the Opposition's Case and Reasoning.

Factor II accounts for 14% of the total variance and correlates .18 with the overall team ratings. There are four contributors to Factor II. Principal contributors are Use of Motive Appeals other than Humor (.83), Use of Humor (.81), and Behaved Ethically and Fairly (.73). The one other contributor is Trustworthiness (.65). Factor II may be named Use of Persuasive Techniques in an Ethical Manner.

Factor III accounts for 7% of the total variance and correlates .07 with the overall team ratings. There are three contributors to Factor III. The principal contributor is Grammar (.81). The two other contributors are Articulation (.59) and Word Choice (.54). Factor III may be named Fundamental Verbal Facility with respect to Grammar, Articulation, and Choice of Words.

Factor IV accounts for 15% of the total variance and correlates .54 with the overall team ratings. There are five contributors to Factor IV. Principal contributors are Overall Organization of the Speech (.88), Internal Organization of Individual Arguments (.82), and Clarity of Statements (.72). The two other contributors are Fulfillment of Responsibilities of Speaker's Position in the Debate (.57) and Overall Team Rating (.54). Factor IV may be named Overall and Internal Organization and Clarity of the Speech and Responsibilities of Speaker's Position.

Factor V accounts for 8% of the total variance and correlates .04 with the overall team ratings. There are four contributors to Factor V. Principal contributors are Rate (.82), and Delivery in General (.78). The two other contributors are Articulation (.57) and Interestingness (.55). Factor V may be named General Delivery with Emphasis on Rate and Consideration of Articulation and Interestingness.

Factor VI accounts for 11% of the total variance and correlates .30 with the overall team ratings. There are five contributors to Factor VI. The principal contributor is Dynamism (.79). The four other contributors are Interestingness (.56), Analysis of the Proposition and Selection of Arguments (.53), Quality of Reasoning (.52), and Quality of Evidence (.50). Factor VI may be named Dynamism, Interestingness, Analysis of the Proposition, and Quality of Reasoning and Evidence. This concludes the naming of the six-factor solution of the analysis of constructive speeches.

Analysis of Rebuttal Speeches

Five of the eigenvalues of the characteristic equation of the analysis of rebuttal speeches were greater than one.7 The six-factor solution of this analysis was chosen in accordance with the stated criteria. This solution accounts for 74% of the total variance and from 66% to 87% of the variance of the individual items.
Factor I accounts for 31% of the total variance and correlates .67 with overall team ratings. There are seven— teen contributors to Factor I. Principal contributors are Refutation of Opponent‘s Reasoning (.85), Fulfillment of 7The roots of the characteristic equation are called eigenvalues. A brief discussion of the characteristic equa- tion may be found in Harman, op. cit., pp. 156-7. The char- acteristic equation is essentially the equation which repre— sents the expanded determinant of the coefficient of the uniqueness terms. 62 Responsibilities of Speaker's Position in the Debate (.83), Recognition of Main Issues of the Debate (.82), Adaptation to the Opponent's Case (.81), Ability to Follow the Main Issues Through the Rebuttal Period (.80), Analysis of the PrOposition and Selection of Arguments (.79), Ability to Find Fallacies (.76), and Unity of Team Presentation (.75). The nine other contributors are Depth and Scope of Knowledge of Topic (.69), Overall Team Rating (.67), Quality of Reason— ing (.65), Overall Organization of the Speech (.64), Compe— tence (.60), Refutation of Opponent's Evidence (.60), Internal Organization of Individual Arguments (.59), Clarity of Statements (.55), and Relation of Evidence to Conclusions (.51). Factor I may be named the Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well—Reasoned Attacks on the Opposition‘s Reasoning, Case, and Evidence; Overall and Internal Organi- zation and Clarity. Factor 11 accounts for 8% of the total variance and correlates .18 with overall team ratings. There are three contributors to Factor II. Principal contributors are Grammar (.84), and Articulation (.73). The other contrib- utor is Word Choice (.56). Factor 11 may be named Fundamen- tal Verbal Facility with respect to Grammar, Articulation, and Choice of Words. 63 Factor III accounts for 11% of the total variance and correlates .26 with overall team ratings. There are four contributors to Factor III. Principal contributors are Use of Motive Appeals other than Humor (.82), Use of Humor (.76), and Behaved Ethically and Fairly (.73). The other contributor is Trustworthiness (.66). Factor III may be named Use of Persuasive Techniques in an Ethical Manner. Factor IV accounts for 6% of the total variance and correlates .03 with overall team ratings. There are two contributors to Factor IV. The one principal contributor is Rate (.86). The other contributor is Delivery in General (.70). Factor IV may be named General Delivery with Empha- sis on Rate. Factor V accounts for 9% of the total variance and correlates .06 with overall team ratings. There are four contributors to Factor V. The only principal contributor is Proper Identification of Sources (.71). The three other con- tributors are Quality of Evidence (.63), Amount of Evidence (.63), and Relation of Evidence to Conclusions (.61). Fac- tor V may be named the Proper Use, Quality, and Amount of Evidence. Factor VI accounts for 8% of the total variance and correlates .12 with over-all team ratings. There are two contributors to Factor VI. There are no principal contrib— utors. The two contributors are Dynamism and Interestingness. 64 This concludes the naming of the six—factor solution of the analysis of rebuttal speeches. Having completed the interpretation of the affirma— tive, negative, constructive, and rebuttal Speeches, the eight individual speeches of a debate may be discussed. 
An analysis was conducted of each of these eight speeches in order to answer the question posed in Chapter I concerning individual speeches.

Analysis of First Affirmative Constructive Speeches

The first speech of any debate is inherently different from the other speeches in that the speaker has no one preceding him whom he must refute. Because of this there are certain items which pertain to the last seven speeches of the debate which do not apply to the first affirmative constructive. The items which do not apply were dropped from the analysis of this speech only.8

8 The items which were not used in the analysis of the first affirmative constructive speech are Adaptation to Opponent's Case, Unity of Team Presentation, Refutation of Opponent's Reasoning, Refutation of Opponent's Evidence, Recognition of Main Issues of the Debate, and Ability to Follow the Main Issues Through the Rebuttal Period. In order to provide a check on the effect of the exclusion of these items, a regular thirty-one item analysis was conducted on the first affirmative constructive speech. The five-factor rotation was chosen as best fitting the stated criteria for choice of a solution. In the five-factor rotation the six excluded items are the only contributors of one of the factors. This would be expected since these items were each assigned the same rating (four) in most of the debates, i.e., in all those debates in which the judge did not mark these items. Thus, these items tend to isolate themselves from the other factors even without a separate analysis. The item structure of the remaining four factors is slightly altered from the item structure of the twenty-five item factor analysis due to the influence of the six excluded items. For this reason the twenty-five item analysis is reported in the text.

Six of the eigenvalues of the characteristic equation of the analysis of first affirmative constructive speeches are greater than one, while the four-factor rotated solution of this analysis was chosen in accordance with the stated criteria. This solution accounts for 65% of the total variance and for from 47% to 80% of the variance of individual items.

Factor I accounts for 25% of the total variance and correlates .70 with overall team ratings. There are thirteen contributors to Factor I. Principal contributors are Analysis of the Proposition and Selection of Arguments (.81), Quality of Reasoning (.75), Depth and Scope of Knowledge of Topic (.74), and Overall Team Rating (.70). The nine other contributors are Internal Organization of Individual Arguments (.68), Relation of Evidence to Conclusions (.64), Overall Organization of the Speech (.64), Quality of Evidence (.64), Fulfillment of Responsibilities of Speaker's Position in the Debate (.61), Competence (.59), Amount of Evidence (.57), Clarity of Statements (.56), and Dynamism (.52). Factor I may be named Depth of Knowledge of Topic as Evidenced by Analysis of Topic and Selection of Arguments, Reasoning, Internal and Overall Organization and Clarity, and Proper Use, Quality, and Amount of Evidence.

Factor II accounts for 15% of the total variance and correlates .07 with overall team ratings. There are two contributors to Factor II, both of which are principal contributors. They are Delivery in General (.82), and Rate (.81). Factor II may be named General Delivery with Emphasis on Rate.

Factor III accounts for 16% of the total variance and correlates .12 with overall team ratings. There are five contributors to Factor III.
Principal contributors are Use of Motive Appeals other than Humor (.75), Use of Humor (.74), Trustworthiness (.70), and Behaved Ethically and Fairly (.70). The other contributor is Interestingness (.55). Factor III may be named Use of Persuasive Techniques in an Ethical Manner, and Interestingness.

Factor IV accounts for 12% of the total variance and correlates .08 with overall team ratings. There are five contributors to Factor IV. There is only one principal contributor, which is Grammar (.76). The four other contributors are Articulation (.69), Ability to Find Fallacies (.59), Word Choice (.59), and Proper Identification of Sources (.53). Factor IV may be named Fundamental Verbal Facility with respect to Grammar, Articulation, and Choice of Words, Ability to Find Fallacies, and Proper Identification of Evidence. This concludes the naming of the four-factor rotation of the analysis of first affirmative constructive speeches.

Analysis of First Negative Constructive Speeches

The item Unity of Team Presentation does not apply to the first negative constructive speech since the second speaker has not spoken yet, and it was not included in the analysis for this reason. The three-factor solution of the analysis of the first negative constructive speech was chosen in accordance with the stated criteria. Six of the eigenvalues of the characteristic equation were greater than one. The three-factor solution accounts for 68% of the total variance and from 60% to 82% of the variance of individual items.

Factor I accounts for 40% of the total variance and correlates .73 with overall team ratings. There are eighteen contributors to Factor I. Principal contributors are Analysis of the Proposition and Selection of Arguments (.86), Depth and Scope of Knowledge of Topic (.86), Quality of Reasoning (.86), Fulfillment of Responsibilities of Speaker's Position in the Debate (.86), Refutation of Opponent's Reasoning (.85), Recognition of Main Issues of the Debate (.82), Clarity of Statements (.82), Adaptation to Opponent's Case (.82), Relation of Evidence to Conclusions (.78), Competence (.75), Overall Organization of the Speech (.75), Overall Team Rating (.73), and Internal Organization of Individual Arguments (.72). The five other contributors are Quality of Evidence (.68), Dynamism (.65), Ability to Find Fallacies (.63), Word Choice (.61), and Amount of Evidence (.56). Factor I may be named The Analysis and Attack of the Principal Issues of the Debate by the Use of Well Reasoned, Clear, Well Organized, and Well-Worded Attacks on the Opposition's Reasoning and Case, Using Sufficient and Good Evidence Properly Related to the Conclusions Drawn, and Delivered in a Dynamic Manner.

Factor II accounts for 17% of the total variance and correlates .23 with overall team ratings. There are five contributors to Factor II. Principal contributors are Use of Motive Appeals other than Humor (.86), Use of Humor (.84), and Behaved Ethically and Fairly (.73). The two other contributors are Trustworthiness (.57) and Refutation of Opponent's Evidence (.56). Factor II may be named Use of Persuasive Techniques and Refutation of Evidence in an Ethical Manner.

Factor III accounts for 11% of the total variance and correlates .13 with overall team ratings. There are four contributors to Factor III. Principal contributors are Articulation (.86), Rate (.77), and Delivery in General (.71). The one other contributor is Interestingness (.57).
Factor III may be named General Delivery with Emphasis on Articulation and Rate and Consideration of Interestingness. This concludes the naming of the three-factor rotation of the analysis of first negative constructive speeches.

Analysis of Second Affirmative Constructive Speeches

The characteristic equation of the factor analysis of second affirmative constructive speeches has six roots which are greater than one. The five-factor solution was chosen in accordance with the stated criteria. This solution accounts for 72% of the total variance and from 57% to 89% of the variance of individual items.

Factor I accounts for 26% of the total variance and correlates .51 with overall team ratings. There are fifteen contributors to Factor I. Principal contributors are Refutation of Opponent's Reasoning (.80), Analysis of the Proposition and Selection of Arguments (.76), Recognition of Main Issues of Debate (.74), Unity of Team Presentation (.74), Ability to Follow the Main Issues Through the Rebuttal Period (.73), and Overall Organization of the Speech (.73). The nine other contributors are Adaptation to Opponent's Case (.69), Internal Organization of Individual Arguments (.69), Fulfillment of Responsibilities of Speaker's Position in the Debate (.64), Clarity of Statements (.61), Quality of Reasoning (.56), Ability to Find Fallacies (.56), Depth and Scope of Knowledge of Topic (.54), Refutation of Opponent's Evidence (.52), and Overall Team Rating (.51). Factor I may be named The Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well Reasoned, Organized, and Clear Attacks on the Opposition's Reasoning, Case, and Evidence.

Factor II accounts for 21% of the total variance and correlates .35 with overall team ratings. There are twelve contributors to Factor II. Principal contributors are Use of Motive Appeals other than Humor (.83), Use of Humor (.81), Behaved Ethically and Fairly (.74), and Trustworthiness (.71). The eight other contributors are Quality of Reasoning (.68), Quality of Evidence (.68), Relation of Evidence to Conclusions (.62), Ability to Find Fallacies (.57), Competence (.55), Refutation of Opponent's Evidence (.55), Proper Identification of Sources (.55), and Amount of Evidence (.52). Factor II may be named Use of Persuasive Techniques, Reasoning, and Evidence in an Ethical Manner.

Factor III accounts for 9% of the total variance and correlates .15 with the overall team ratings. There are four contributors to Factor III. Principal contributors are Rate (.84), and Delivery in General (.77). The two other contributors are Interestingness (.55) and Articulation (.52). Factor III may be named General Delivery with Emphasis on Rate and Consideration of Interestingness and Articulation.

Factor IV accounts for 9% of the total variance and correlates .11 with overall team ratings. There are two contributors to Factor IV, neither of which is a principal contributor. These are Dynamism (.65) and Interestingness (.64). Factor IV may be named Dynamism and Interestingness.

Factor V accounts for 8% of the total variance and correlates .00 with overall team ratings. There are three contributors to Factor V. The principal contributor is Grammar (.81). The two other contributors are Articulation (.63) and Word Choice (.62). Factor V may be named Fundamental Verbal Facility with respect to Grammar, Articulation, and Choice of Words. This concludes the naming of the five-factor solution of the analysis of second affirmative constructive speeches.
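Each analysis in this chapter reports how many roots of the characteristic equation, that is, how many eigenvalues of the item intercorrelation matrix, exceed one. As noted earlier, that count is reported for reference rather than used as the stopping rule. A minimal sketch of the computation, in Python with NumPy and assuming a hypothetical correlation matrix of the judges' ratings, follows.

    import numpy as np

    # Hypothetical intercorrelation matrix of ballot items (symmetric, with
    # ones on the diagonal); the thesis computed these from judges' ratings.
    R = np.array([
        [1.00, 0.62, 0.15, 0.10],
        [0.62, 1.00, 0.20, 0.05],
        [0.15, 0.20, 1.00, 0.48],
        [0.10, 0.05, 0.48, 1.00],
    ])

    # Roots of the characteristic equation |R - lambda I| = 0.
    eigenvalues = np.linalg.eigvalsh(R)

    print("Eigenvalues:", np.round(np.sort(eigenvalues)[::-1], 3))
    print("Number greater than one:", int(np.sum(eigenvalues > 1.0)))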
Analyeis of Second Negative Constructive Speeches Six of the eigenvalues of the characteristic equa- tion of the analysis of second negative constructive speeches are greater than one. The six-factor solution was chosen in accordance with the stated criteria. This solution accounts for 74% of the total variance and for from 64% to 85% of the variance of individual items. Factor I accounts for 24% of the total variance and correlates .52 with overall team ratings. There are thirteen 72 contributors to Factor 1. Principal contributors are Recog- nition of Main Issues of the Debate (.84), Ability to Follow the Main Issues Through the Rebuttal Period (.82), Adapta- tion to Opponent‘s Case (.76), Refutation of Opponent‘s Reasoning (.74), and Fulfillment of Responsibilities of Speaker's Position in the Debate (.72). The eight other con— tributors are Analysis of the Proposition and Selection of Arguments (.67), Quality of Reasoning (.67), Unity of Team Presentation (.67), Depth and Scope of Knowledge of Topic (.65), Quality of Evidence (.65), Relation of Evidence to Conclusions (.56), Ability to Find Fallacies (.53), and Overall Team Rating (.52). Factor I may be named The Con- tinuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well—Reasoned Attacks on .the Opposition‘s Case and.Reasoning, and by the Proper Use of Qualitative Evidence. Factor II accounts for 9% of the total variance and correlates .10 with overall team ratings. There are three contributors to Factor 11. The principal contributors are Grammar (.83) and Articulation (.79). The one other con- tributor is Word Choice (.60). Factor 11 may be named Fundamental Verbal Facility with reSpect to Grammar, Artic- ulation, and Choice of Words. 73 Factor III accounts for 8% of the total variance and correlates .11 with overall team ratings. There are four contributors to Factor III. The one principal contributor is Rate (.84). The three other contributors are Delivery in General (.67), Behaved Ethically and Fairly (.52), and Trustworthiness (.52). Factor III may be named General Delivery with Emphasis on Rate and Consideration of Ethical Behavior. Factor IV accounts for 12% of the total variance and correlates .55 with overall team ratings. There are five contributors to Factor IV. Principal contributors are Over— all Organization of the Speech (.82) and Internal Organiza— tion of Individual Arguments (.81). The other three contrib— utors are Clarity of Statements (.70), Amount of Evidence (.55), and Overall Team Rating (.55). Factor IV may be named Overall and Internal Organization and Clarity and Amount of Evidence. Factor V accounts for 12% of the total variance and correlates .03 with overall team ratings. There are five contributors to Factor V. Principal contributors are Use of Motive Appeals other than Humor (.82) and Use of Humor (.70). The three other contributors are Behaved Ethically and Fairly (.62), Proper Identification of Sources (.59), and Trust- worthiness (.56). Factor V may be named Use of Persuasive Techniques and of Evidence in an Ethical Manner. 74 Factor VI accounts for 9% of the total variance and correlates .16 with overall team ratings. There are three contributors to Factor VI. The one principal contributor is Dynamism (.73). The two other contributors are Interesting— ness (.58) and Quality of Reasoning (.51). Factor VI may be named Dynamism, Interestingness, and Quality of Reasoning. 
This concludes the naming of the Six factor solution of the analysis of second negative constructive speeches. Analysis of First Negative Rebuttal Speeches Five eigenvalues of the characteristic equation of the analysis of first negative rebuttal speeches were great— er than one. The six-factor solution was chosen in accor- dance with the stated criteria. This solution accounts for 77% of the total variance and for from 67% to 89% of the variance of individual items. Factor I accounts for 32% of the total variance and correlates .72 with overall team ratings. There are sixteen contributors to Factor 1. Principal contributors are Inter- nal Organization of Individual Arguments (.84), Overall Organization of the Speech (.83), Analysis of the Proposi- tion and Selection of Arguments (.79), Ability to Follow the Main Issues Through the Rebuttal Period (.79), Recognition of Main Issues of the Debate (.78), Fulfillment of Responsi- bilities of Speaker's Position in the Debate (.75), 75 Adaptation to Opponent's Case (.73), Overall Team Rating (.72), Depth and Scope of Knowledge of Topic (.72), and Refutation of Opponent's Reasoning (.72). The Six other contributors are Competence (.67), Unity of Team Presenta— tion (.66), Clarity of Statements (.64), Ability to Find Fallacies (.63), Quality of Reasoning (.62), and Dynamism (.51). Factor I may be named The Well-Organized, Clear and Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well-Reasoned Attacks on the Opposition’s Case and Reasoning, in a Dynamic Manner. Factor 11 accounts for 7% of the total variance and correlates .21 with overall team ratings. There are two contributors to Factor II, both of which are principal con- tributors. They are Use of Humor (.80) and Use of Motive Appeals other-than Humor (.76). Factor 11 may be named Use of Persuasive Techniques. Factor III accounts for 8% of the total variance and correlates .08 with overall team ratings. There are three contributors to Factor III. The principal contributors are Rate (.88) and Articulation (.74). The one other contrib- utor is Delivery in General (.58). Factor III may be named General Delivery with Emphasis on Rate and Articulation. Factor IV accounts for 17% of the total variance and correlates .27 with overall team ratings. There are seven contributors to Factor IV. The principal contributors are 76 Proper Identification of Sources (.78), Quality of Evidence (.75), and Relation of Evidence to Conclusions (.74). The four other contributors are Amount of Evidence (.68), Refuta- tion of Opponent‘s Evidence (.57), Quality of Reasoning (.56) and Depth and Scope of Knowledge of the Topic (.50). Factor IV may be named The Proper Use, Quality and Amount of Evi- dence, Refutation of Evidence, Quality of Reasoning, and Knowledge of Topic. Factor V accounts for 9% of the total variance and correlates .13 with overall team ratings. There are two contributors to Factor V, both of which are principal con— tributors. These are Trustworthiness (.84) and Behaved Ethically and Fairly (.80). Factor V may be named Ethical Behavior. Factor VI accounts for 5% of the total variance and correlates .04 with overall team ratings. There is only one contributor to Factor VI, which is Interestingness (.66). Factor VI may be named Interestingness. This concludes the naming of the six-factor rotation of the analysis of first »negative rebuttal Speeches. 
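Each factor in these analyses is also described by its correlation with the overall team rating. The computation used by Fanod 3 to obtain those correlations is not reproduced here; the sketch below shows one simple approximation under stated assumptions, representing a factor by the unit-weighted mean of its contributor items and correlating that composite with the overall rating across speeches. The data, column indices, and rating scale are hypothetical.

    import numpy as np

    # Hypothetical ratings: one row per rated speech, one column per ballot item.
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 8, size=(40, 5)).astype(float)
    overall_team_rating = ratings.mean(axis=1) + rng.normal(0.0, 0.5, size=40)

    # Columns of the items that contribute to the factor of interest
    # (hypothetical indices; in practice these come from the loading matrix).
    contributor_columns = [0, 1, 2]

    # Unit-weighted composite used here as a rough stand-in for a factor score.
    factor_composite = ratings[:, contributor_columns].mean(axis=1)

    r = np.corrcoef(factor_composite, overall_team_rating)[0, 1]
    print(f"Correlation of the composite with the overall team rating: {r:.2f}")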
Analysis of First Affirmative Rebuttal Speeches

Six eigenvalues of the characteristic equation of the factor analysis of first affirmative rebuttal speeches are greater than one. The five-factor solution was chosen in accordance with the stated criteria. This solution accounts for 71% of the total variance and for from 56% to 89% of the variance of individual items.

Factor I accounts for 23% of the total variance and correlates .36 with overall team ratings. There are eleven contributors to Factor I. Principal contributors are Recognition of Main Issues of the Debate (.81), Ability to Follow the Main Issues Through the Rebuttal Period (.81), Fulfillment of Responsibilities of Speaker's Position in the Debate (.81), Adaptation to Opponent's Case (.79), Refutation of Opponent's Reasoning (.78), Unity of Team Presentation (.74), and Analysis of the Proposition and Selection of Arguments (.73). The four other contributors are Ability to Find Fallacies (.66), Depth and Scope of Knowledge of Topic (.54), Overall Organization of the Speech (.52), and Quality of Reasoning (.51). Factor I may be named The Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well-Reasoned, and Organized Attacks on the Opposition's Case and Reasoning.

Factor II accounts for 11% of the total variance and correlates .27 with overall team ratings. There are five contributors to Factor II. Principal contributors are Grammar (.82) and Articulation (.79). The three other contributors are Internal Organization of Individual Arguments (.68), Clarity of Statements (.58), and Word Choice (.53). Factor II may be named Fundamental Verbal Facility with respect to Grammar, Articulation, and Choice of Words, and Internal Organization and Clarity of Arguments.

Factor III accounts for 9% of the total variance and correlates .14 with overall team ratings. There are two contributors to Factor III, both of which are principal contributors. They are Rate (.82) and Delivery in General (.73). Factor III may be named General Delivery with Emphasis on Rate.

Factor IV accounts for 20% of the total variance and correlates .43 with overall team ratings. There are twelve contributors to Factor IV. Principal contributors are Amount of Evidence (.71) and Quality of Evidence (.70). The ten other contributors are Refutation of Opponent's Evidence (.69), Trustworthiness (.69), Behaved Ethically and Fairly (.68), Relation of Evidence to Conclusions (.63), Use of Motive Appeals other than Humor (.61), Quality of Reasoning (.60), Competence (.59), Use of Humor (.59), Depth and Scope of Knowledge of Topic (.56), and Proper Identification of Sources (.55). Factor IV may be named Use of Evidence and Other Persuasive Techniques in a Proper and Ethical Manner, Quality of Reasoning, and Knowledge of Topic.

Factor V accounts for 8% of the total variance and correlates .17 with overall team ratings. There are two contributors to Factor V. The principal contributor is Dynamism (.80), and the other contributor is Interestingness (.60). Factor V may be named Dynamism and Interestingness. This concludes the naming of the five-factor rotated solution of the analysis of first affirmative rebuttal speeches.

Analysis of Second Negative Rebuttal Speeches

The five-factor rotated solution of the analysis of second negative rebuttal speeches was chosen in accordance with the stated criteria. Five of the eigenvalues of the characteristic equation of this analysis were greater than unity.
The five-factor rotation accounts for 74% of the total variance and for from 63% to 86% of the variance of individual items.

Factor I accounts for 36% of the total variance and correlates .67 with overall team ratings. There are nineteen contributors to Factor I. Principal contributors are Adaptation to Opponent's Case (.89), Refutation of Opponent's Reasoning (.89), Analysis of the Proposition and Selection of Arguments (.86), Unity of Team Presentation (.85), Fulfillment of Responsibilities of Speaker's Position in the Debate (.85), Recognition of Main Issues of the Debate (.83), Quality of Reasoning (.82), Ability to Follow the Main Issues Through the Rebuttal Period (.80), Depth and Scope of Knowledge of Topic (.77), Ability to Find Fallacies (.75), Refutation of Opponent's Evidence (.72), and Relation of Evidence to Conclusions (.72). The other seven contributors are Overall Team Rating (.67), Quality of Evidence (.66), Competence (.66), Overall Organization of the Speech (.62), Internal Organization of the Individual Arguments (.59), Clarity of Statements (.56), and Amount of Evidence (.51). Factor I may be named The Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well-Reasoned, Organized, and Clear Attacks on the Opponent's Case, Reasoning, and Evidence, and by the Proper Use of Evidence.

Factor II accounts for 9% of the total variance and correlates .24 with overall team ratings. There are three contributors to Factor II. Principal contributors are Grammar (.85) and Articulation (.79). The other contributor is Word Choice (.50). Factor II may be named Fundamental Verbal Facility with respect to Grammar, Articulation, and Choice of Words.

Factor III accounts for 12% of the total variance and correlates .21 with overall team ratings. There are five contributors to Factor III. Principal contributors are Use of Motive Appeals other than Humor (.78), and Use of Humor (.71). The three other contributors are Proper Identification of Sources (.67), Amount of Evidence (.61), and Behaved Ethically and Fairly (.52). Factor III may be named Use of Persuasive Techniques and of Evidence in a Proper and Ethical Manner.

Factor IV accounts for 10% of the total variance and correlates .07 with overall team ratings. There are four contributors to Factor IV. The only principal contributor is Rate (.71). The other three contributors are Behaved Ethically and Fairly (.67), Trustworthiness (.65), and Delivery in General (.53). Factor IV may be named General Delivery with Emphasis on Rate and Consideration of Ethical Behavior.

Factor V accounts for 7% of the total variance and correlates .01 with overall team ratings. There are two contributors to Factor V. The only principal contributor is Dynamism (.79), and the other contributor is Interestingness (.63). Factor V may be named Dynamism and Interestingness. This concludes the naming of the five-factor rotation of the analysis of second negative rebuttal speeches.

Analysis of Second Affirmative Rebuttal Speeches

The five-factor solution of the analysis of second affirmative rebuttal speeches was chosen in accordance with the stated criteria. Five of the eigenvalues of the characteristic equation of this analysis were greater than one. The five-factor solution accounts for 74% of the total variance and for from 60% to 85% of the variance of the individual items.

Factor I accounts for 27% of the total variance and correlates .63 with overall team ratings.
There are thirteen contributors to Factor I. Principal contributors are Fulfillment of Responsibilities of Speaker's Position in the Debate (.88), Recognition of Main Issues of the Debate (.85), Refutation of Opponent's Reasoning (.85), Ability to Follow the Main Issues Through the Rebuttal Period (.82), Analysis of the Proposition and Selection of Arguments (.76), Adaptation to Opponent's Case (.75), and Ability to Find Fallacies (.75). The six other contributors are Unity of Team Presentation (.67), Depth and Scope of Knowledge of Topic (.65), Refutation of Opponent's Evidence (.63), Overall Team Rating (.63), Competence (.52), and Quality of Reasoning (.51). Factor I may be named The Continuous Analysis and Attack of the Principal Issues of the Debate by the Use of Consistent, Well-Reasoned Attacks on the Opposition's Case, Reasoning, and Evidence.

Factor II accounts for 10% of the total variance and correlates .30 with overall team ratings. There are three contributors to Factor II. Principal contributors are Use of Humor (.84) and Use of Motive Appeals other than Humor (.79). The other contributor is Behaved Ethically and Fairly (.55). Factor II may be named Use of Persuasive Techniques in an Ethical Manner.

Factor III accounts for 8% of the total variance and correlates .30 with overall team ratings. There are two contributors to Factor III, both of which are principal contributors. They are Grammar (.83) and Articulation (.77). Factor III may be named Fundamental Verbal Facility with respect to Grammar and Articulation.

Factor IV accounts for 16% of the total variance and correlates .14 with overall team ratings. There are nine contributors to Factor IV. The only principal contributor is Relation of Evidence to Conclusions (.74). The eight other contributors are Quality of Evidence (.68), Internal Organization of Individual Arguments (.68), Amount of Evidence (.64), Proper Identification of Sources (.57), Quality of Reasoning (.56), Clarity of Statements (.54), Dynamism (.51), and Word Choice (.50). Factor IV may be named Proper Use, Quality, and Amount of Evidence and Its Organized, Clear Relationship to Reasoning; Dynamism and Choice of Words.

Factor V accounts for 13% of the total variance and correlates .05 with overall team ratings. There are five contributors to Factor V. Principal contributors are Rate (.85) and Delivery in General (.77). The other three contributors are Interestingness (.63), Trustworthiness (.57), and Behaved Ethically and Fairly (.55). Factor V may be named General Delivery with Emphasis on Rate and Considering Interestingness, Trustworthiness, and Ethical Behavior. This concludes the naming of the five-factor solution of the analysis of second affirmative rebuttal speeches.

Summary

Chapter III is concerned with the results of two types of factor analyses. The first type includes the control analyses which were performed to act as a check upon certain procedures used in the present study. The second type includes the analyses performed in terms of the judges' ratings of particular speeches or groups of speeches, in order to obtain information concerning the judges' hypothetical structuring of the items.

The results of the control analyses indicate that the judges used in the present study who had ten years of experience or less rated the debates in essentially the same manner as did judges with more than ten years of experience in debate.
Thus if the experienced judges are highly 85 competent in rendering evaluations of debate, the group of judges employed in the present study are of the same degree of competency. The control analyses also indicate that the judges‘ use of all items on the experimental ballot was relatively consistent throughout the debate with the excep— tion of the items Trustworthiness, Proper Identification of Sources, and Behaved Ethically and Fairly. The reliability of the judges‘ ratings of these three items is less than might be considered desirable. The analysis of the data from all of the speeches, the choice of solutions of this analysis, and the naming of the factors of the chosen solution are used as detailed examples of the procedure followed for each of the twelve remaining analyses. The choice of a particular rotated solution for each of the twelve analyses is stated, and the factors of this solution are named. The twelve group- ings of speeches used are (l) affirmative, (2) negative, (3) constructive, (4) rebuttal, (5) first affirmative con- structive, (6) first negative constructive, (7) second affirmative constructive, (8) second negative constructive, (9) first negative rebuttal, (10) first affirmative rebuttal, (11) second negative rebuttal, and (12) second affirmative rebuttal. CHAPTER IV DISCUSSION OF THE RESULTS AND CONCLUSIONS OF THE STUDY In Chapter I, Six questions were raised concerning the nature of items relevant to the evaluation of debate. In an attempt to answer these questions several factor anal— yses were conducted of judges‘ ratings of intercollegiate debates. The results of these analyses are reported in Chapter III. This chapter will examine and discuss the results and will utilize them in making recommendations for the construction of debate ballots. Discussion of the Results The criteria used for choosing the solution which is the best representation of a particular factor analysis are stated in the discussion of the overall analysis in Chapter 111. It should be recognized that any set of criteria used for this purpose would be somewhat arbitrary. The chosen criteria are used because the present study requires a fac- tor solution which is (l) neither so narrow that it does not account for much of the variance, (2) nor so broad that it requires a relatively large number of factors to account for 86 87 the variance, and (3) not so constituted that an uneven per— centage of the variance of each of the individual items is accounted for. Thus, the study requires (1) that a rela— tively high percentage of the total variance (at least 65%) be accounted for, (2) that the number of factors necessary to account for this variance not approach the number of items, and (3) that a relatively even percentage of the vari- ance of individual items be accounted for by the chosen solu— tion. Since the rotated solutions chosen in each of the thirteen analyses of the present study do meet these crite— ria, it is possible to compare the solutions with that basis in mind, even though the number of factors of each solution is not necessarily the same. Thus, the Similarities and differences among the analyses performed in this study may be discussed. The analysis of all of the Speeches taken as a group indicates those areas of concern which were of major importance to the participating judges. The seven-factor rotated solution of the overall analysis was chosen as the best representation of the hypothetical structuring of the items of this analysis by the judges. 
The remaining twelve analyses furnish more Specific information on the structuring of the factors of the overall analysis. In order to perform these twelve analyses, the data were sorted in three different ways: (1) construc- tive Speeches and rebuttal Speeches were grouped, (2) affir— mative Speeches and negative Speeches were grouped, and 88 (3) each of the eight individual Speeches was taken by itself. This made it possible to contrast constructive Speeches with rebuttal Speeches, affirmative Speeches with negative Speeches, and to contrast each of the eight individ- ual Speeches of the debate with each other. It is necessary to compare the judges‘ differential use of the items in these different circumstances in order to answer the questions which were posed in Chapter I. Perhaps the most striking feature of the results of the thirteen different analyses is the similarity in the item structure of some of the factors. This is particularly the case with the first factor of each of the analyses, with the lone exception of the analysis of the first affirmative con- structive speeches. With this one exception, each of the first factors accounts for a major portion of the variance of its particular Solution and is primarily concerned with analysis, reasoning, and case. Most of the remaining twelve analyses have several other factors which are Similar or identical to the other six factors of the overall analysis, but only one of these, the factor concerned with delivery, occurs consistently in each of the analyses. Two other factors appear in almost all of the anal— yses. These are the Fundamental Verbal Facility factor and the Use of Persuasive Techniques in an Ethical Manner factor. These two factors appear in all analyses with the exception 89 of the first negative rebuttal Speeches analysis. In that analysis, two new factors appear which are named Use of Persuasive Techniques, and Ethical Behavior, reSpectively. The factors concerning organization and clarity, evidence, and interestingness and dynamism, occur haphazardly through— out the analyses. The factor Dynamism and Interestingness is the most frequently occurring of these three. The behavior of certain items is also of interest. The items Amount of Evidence and Quality of Evidence often do not appear as contributors of the same factor. This sug- gests that there may be some merit in treating them separate— ly when faced with the task of constructing a debate ballot. The items referring to evidence seldom cluster on the same factor. Often they are correlated with the factors concern— ing organization, or motive appeals, or analysis and case. This suggests that the judges found it difficult to evaluate evidence as a unitary concept and that evidence may have more than one dimension. These dimensions appear to correSpond to the particular function of evidence in the Speech, so that evidence may correlate with organization in one Speech (or part of a Speech) and with analysis and case in another. The items concerning motive appeals are often highly correlated with the same factor as are the items concerning ethical and trustworthy behavior. This may indicate that the judges saw a relationship between the use of persuasive 90 appeals and the ethical behavior of the debaters. (The fac- tor, Use of Persuasive Techniques in an Ethical Manner was named with this assumption in mind.) 
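The similarity of factor structures noted above is judged in this study by inspection of the contributing items. One way such similarity could be quantified, though no such index is employed in the present study, is Tucker's coefficient of congruence between two columns of loadings computed over the same items. The sketch below is illustrative only; the loading values in the example are hypothetical and are not taken from any of the thirteen analyses.

```python
from math import sqrt

def congruence(loadings_a, loadings_b):
    """Tucker's coefficient of congruence between two factors,
    computed over the same items listed in the same order."""
    num = sum(a * b for a, b in zip(loadings_a, loadings_b))
    den = sqrt(sum(a * a for a in loadings_a) * sum(b * b for b in loadings_b))
    return num / den

# Hypothetical first-factor loadings from two different analyses
# (illustrative values only; not taken from the study).
first_factor_analysis_one = [0.85, 0.82, 0.76, 0.67, 0.52]
first_factor_analysis_two = [0.80, 0.78, 0.70, 0.60, 0.55]

# Values near 1 indicate closely similar factor structure.
print(round(congruence(first_factor_analysis_one, first_factor_analysis_two), 3))
```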
Alternatively, it may indicate that the judges saw the items concerning motive appeals and ethical behavior as similarly unimportant to the outcome of the debate and tended to rate these items in the same manner for that reason. The control analyses reported in Chapter III indicate that the judges‘ use of the three , items concerning ethical behavior may not have been entirely consistent. The informational analyses indicate that the two items Behaved Ethically and Fairly, and Trustworthiness, con- sistently correlated highly with the same factor, Use of Per- suasive Techniques in an Ethical Manner. The third item, Proper Identification of Sources, is usually a contributor of the factor which has most of the evidence items as con- tributors. In some cases, notably in the negative Speeches, this third item correlates highly with the factor Use of Persuasive Techniques in an Ethical Manner. Since the items in question frequently appear on this factor, its importance and meaning may not be as high as the percentage of variance accounted for might indicate. There are differences as well as similarities among the thirteen analyses. The seven factors of the overall analysis do not all appear in the remaining twelve analyses. The affirmative Speeches analysis does not Show a separate 91 factor concerned with evidence. The negative Speeches anal- ysis combines two of the factors of the overall analysis, namely, those factors of the overall analysis which concern delivery and organization. The negative Speeches analysis does not contain a separate factor concerned either with evidence of with dynamism. The evidence factor is also miss- ing from the constructive Speeches analysis but it reappears in the rebuttal Speeches analysis. The items concerning ‘ 4m reasoning and evidence are correlated with the Interesting- .LJ.‘v ness and Dynamism factor in the constructive Speeches. Organization correlates highly with Factor I of the rebuttal Speeches analysis and does not appear as a separate factor in that analysis. These are the most apparent differences from the pattern of the overall analysis which may be found in the analysis of the affirmative, negative, constructive, and rebuttal Speeches. Since it is not the purpose of this paper to discuss in detail the many other minor features of the results, the further examination of the analyses will focus attention on the applicability of the results to suggestions for the con- struction of future debate ballots. 92 Suggestions for the Construction of Future Debate Ballots There are several types of debate ballots which it is possible to construct. One type which is in general use today assumes that the same lower—level items, or factors, may be used to evaluate each Speaker in a debate. A second type might assume that there are differences between the affirmative Speakers and the negative speakers, or between the constructive Speeches and the rebuttal Speeches, and in— struct the judges to evaluate the debaters along these lines. A third type of ballot is the principal subject of this sec— tion. This ballot assumes that each of the eight speeches in a debate may be Significantly different from each of the other Speeches. Form C of the American Forensic Association is an example of the first type of debate ballot. 
This ballot seems to make four assumptions: (1) that the factors Analysis, Reasoning, Evidence, Organization, Refutation, and Delivery are both necessary and sufficient to the evaluation of a college debate, (2) that these factors are each of equal importance, (3) that these factors may be used to evaluate each speaker in a debate, and (4) (implicitly) that these factors may be used to evaluate each speech in a debate. The four assumptions made by the type of ballot which Form C represents raise several empirical questions 93 which may be discussed in view of the findings of the pres- ent study. The names of the factors of the overall analysis indicate that the Six factors employed by Form C may not necessarily be the best choices. The fact that each of the factors of the overall analysis accounts for a different percentage of the variance may indicate that equal weight Should not be given to these factors. An examination of the analyses of constructive, rebuttal, affirmative, and negative Speeches shows that the judges evaluated these types of Speeches using substantially different factors. This may indicate that the same factors should not be used to evaluate each speaker in the debate. Furthermore, the results of the analyses of the eight individual Speeches of the debate may indicate that the method of evaluation should be structured in such a manner that each Speech, as opposed to each Speaker, could be evaluated separately. An example of the second type of ballot might be found by employing the results of the Webb study in the con- struction of a new ballot, although Webb did not intend her study to be used in this manner. Such a ballot would prob— ably not employ the factors used in Form C, though it might assume equal weighting of the factors, and would assume that the Speakers and not Speeches are the units to be evaluated. 1355.. ‘r. —v-—-... _..’ . . . a 94 An example of the third type of ballot is the ballot resulting from this study. This ballot makes several assump— tions which are different from those made by the first two types of ballots. These assumptions derive their support from the results of this study. The assumptions are (1) that each of the eight Speeches in a traditional debate should be evaluated separately, (2) that each of these eight Speeches should be evaluated on the basis of the factors which are important to that speech, (3) that the factors . “as“ " ‘ 5 resulting from the present study are close approximations of the factors which are important to each Speech, and (4) that these factors are not necessarily of equal importance to the evaluation of that Speech. A fifth assumption which seems probable is that some speeches may be of greater importance to the debate than other speeches. This last assumption is not directly checked by this study and for that reason is not included in the proposed ballot. Translating these assumptions into a practical ballot is easy with some assumptions and rather complex with others. Translating the first assumption into practice involves pro- viding enough Space to evaluate each of the eight Speeches. The second assumption means that enough Space must be pro- vided such that there is room for the names of the different factors for the various Speeches. The third assumption means that the names of the factors of the rotated solution 95 chosen to represent each of the individual Speeches are the ones which will be used on the proposed ballot. 
In the pro— cess of incorporating these factor names into a ballot, the names would not have to be used in the form in which they appear in the present study. Since the names are rather long, a simplification of the name could be substituted for it and the entire name could be printed on a supplemental sheet to be used by the judge. This is the procedure that the pro— posed ballot will incorporate, though the problem could also 1 be solved by the use of larger ballots or smaller print. Translating the fourth assumption into a practical part of a ballot is a more complex task than was the case with the first three assumptions. This assumption means that the factors of each of the eight speeches must be weighted by some system such that the weighted scores of a given affirmative or negative Speech may be added to produce a final score for each Speech which, when added to the scores of the other affirmative or negative Speeches, yields a total which is greater than the total for the Oppositions Speeches if the affirmative or negative has won the debate, and less than this total if the affirmative or negative has lost. To do this is not an easy task. If the problem were only one of determining the weights of the individual items with reSpect to the factors, the factor score co-efficients could be calculated with the help of a computer, thereby 96 providing the item weights. The problem is one of determin- ing the relative weights of the factors with reSpect to each other, however, and there is no ideal way to do this. There are two primary methods by which factors may be weighted. Each of these has its inherent advantages and disadvantages. The first method is simply to employ intu- itive weights: to examine the factors and to determine how much weight each factor should receive on the basis of its WW1... E - . merits. That is, to make value judgments and to assign weights to the factors on the basis of these value judgments. This method has all the advantages and all of the faults common to any value judgment, and is rejected by this inves- tigator because of its extreme subjectivity. The second method is to employ one or more statis— tical indices of the quantitative or qualitative importance of the factor and to use these indices in a formula for weighting the factors. Two such indices are available in the form of the percentage of variance accounted for and the correlation of the factor with the overall team rating. The percentage of the variance which is accounted for by a fac- tor is a measure of the statistical importance of that factor to the rotated factor solution. Statistical importance, how- ever, is not necessarily related to the actual importance of these factors as seen by the judges. It is quite possible that the judges saw several items as being similarly 97 unimportant to the evaluation of debate and rated them accordingly. These items would then load high on the same factor and could account for a relatively high percentage of the variance while being relatively unimportant to the evaluation of the debate as the judges saw it. To the extent that the original items can be said to cover the field of possible items relevant to judging a 1 college debate, and only cover the field, the percentage of variance accounted for is also a measure of the qualitative importance of the factor. Even though the items were gath— ered such that they should cover the field of relevant items, no independent check has been made to determine if this actually is the case. 
And even if, by some chance, the items should cover the field, there can be no guarantee that some of the items do not overlap. Thus, the percentage of variance accounted for is a good measure of the statistical importance of a factor, but is not necessarily a good mea- sure of the practical importance of a factor. The correlation of the factor, however, with the overall team rating is a measure of the practical importance of the factor to the overall performance of a team. That is, if the correlation is high for a factor, and the team received a high score on this factor, we could predict that the team‘s performance taken as a whole should be very good. 98 Since the percentage of the variance accounted for by a factor is a measure of its quantitative importance, and the correlation of the factor with the overall team rating is a measure of the qualitative importance of the factor, it would be ideal if these two measures could be combined into a formula for weighting the various factors. Unfortunately, there are two problems that such a combination of measures must face, both of which concern the measurement of overall performance. The first problem with this rating is that the judge has been asked to evaluate both of the Speakers on a team over all possible categories of judgment which he might choose to use, in each of their four Speeches, as Opposed to evaluating a single Speaker on each of thirty items in a single Speech. The rating of one Speaker on one item in a single Speech is not necessarily comparable to the rating of two different Speakers on all possible items in four differ- ent Speeches. The correlation of the factor with the over- all team rating assumes that these two types of judgments are comparable. The second problem is closely related to the first and concerns the precision of the overall team ratings. In assigning an overall team rating, the judge must mentally combine all of the items necessary to evaluate all of the Speeches and must arrive at a single rating. As is pointed 99 out in Chapter I, to make such a complex evaluation, is an exceedingly difficult task. Thus we might expect the over- all type of rating to be relatively imprecise when compared with the rating of a lower—level item. To sum up the two problems, it is questionable whether a judge can make a major overall judgment with precision, and assuming that this were possible, it is questionable whether such a rating would be I directly comparable to the lower-level judgments made by the 5 judges in the present study. i Let us consider the problem of comparability first. While it is true that a different level of judgment is involved in rating two Speakers on their overall performance as a unit, as opposed to rating a single speaker on a single item, say Interestingness, there is no immediately apparent reason why the two types of judgment are not comparable. That is, it is not immediately obvious that the two types of ratings either are or are not comparable. They may be or they may not be, but the important point is that there is no obvious reason why they should definitely not be compara— ble. The answer to the first problem, therefore, is that while the assumption of comparability mey not be valid, it is an assumption that can be accepted until it is shown to be invalid and until the harm from making the assumption is pointed out. 100 The second problem is one of precision. To state the crux of this problem as a truism, it is much easier to judge something Simple than it is to judge something complex. 
Much of the present study is based on this truism. The judges were asked to rate the Speeches on the basis of lower— level items rather than on the basis of overall judgments. The factors were derived from these lower-level items, but *rq the overall team ratings were derived from overall judgments. The problem, stated simply, is: ”Can we rely on the overall I judgments to the same degree as we can rely on the lower- level items?” If we can, then we need not have done the study. If we can‘t, can the overall team rating be used? The answer is that we can rely on the overall judgments but to a much lower degree of precision than we can rely on the lower-level judgments. Thus the study has increased the precision of measuring the Speeches. Even though its pre- cision is low, the overall judgment can still be employed as a weighting device, so long as this device does not assume precision of judgment. One device which does not make such an assumption is a step—function. To sum up, the first problem can be lived with, and the second problem can be solved by not assuming precision when employing an overall rating or a correlation based on an overall rating. Thus the percentage of the variance accounted for and the corre- lation of the factor with the overall team rating, can be 101 used as independent variables in an equation for weighting the various factors. The remaining problems are (l) to Specify the nature of the weighting function and (2) to substitute the apprOpriate values of the variables, in order to determine the actual weights. The first problem is to Specify the weighting func- tion. This function will be composed of two parts called the P function and the C function, for ”percentage" and ”correlation," respectively. Specifying the P function is relatively simple and involves no calculations. The value of P is the value of the percentage of variance accounted for by the factor in question. Specifying the C function is more difficult. The correlation of a factor with the overall team rating is based in part on overall judgments made by the judges, which may not be as precise as the judgments of lower-level items. In order to avoid the assumption of unwarranted precision, a step-function will be utilized in Specifying the nature of C. The seven point scale of the overall team rating will allow the use of a step function containing from one to seven steps. Five has been chosen as the most convenient number of steps. The choice of cutoff points for this function is arbitrary, so points in relatively common usage will be employed. The exact values of the C function are not uniquely determined. The values zero, one-fourth, one-half, three-fourths, and 102 one have been chosen but a similar set of values Such as zero, one—third, oneehalf, two-thirds, and one, could be defended. It is clear that the two extreme values of the function will be zero and one, and the only arbitrary choices of values involve the internal points. For a factor with more positive than negative correlations, the C function becomes: zero for correlations greater than minus infinity but less than .10, one-fourth for correlations greater than or equal to .10 but less than .30, one-half for correlations greater than or equal to .30 but less than .50, three—fourths for values greater than or equal to .50 but less than .70, and one for correlations greater than or equal to .70. For factors with more negative than positive correlations the Signs of the correlations are reversed. 
The cut-off points correspond approximately to 1%, 9%, 25%, and 50% of the total variance of an item, respectively. Symbolically, where r denotes the correlation of the factor with the overall team rating,

    C = 0    for r < .10
        1/4  for .10 <= r < .30
        1/2  for .30 <= r < .50
        3/4  for .50 <= r < .70
        1    for r >= .70

The weighting function is a combination of C and P such that C operates as a constraint on P and the sum of the factor weights is equal to one. The weight of Factor i is the C value for i multiplied by the P value for i, divided by the sum of the products of the C and P values for each of the n factors of the particular rotated solution in question. Symbolically,

    weight_i = (C_i x P_i) / [(C_1 x P_1) + (C_2 x P_2) + ... + (C_n x P_n)]

The effect of dividing the product of C and P by the sum of the C-P products is to insure that the factor weights will sum to unity. In terms of factor analysis, this means that a hypothetical miscellaneous factor which would account for the remainder of the total variance (which is not accounted for by the rotated factor solution) is not postulated.

Now that the nature of the weighting function has been specified, it remains for us to substitute the appropriate values of the variables into this function in order to determine the actual weights. Rather than detailing the calculations for the factors of each of the analyses of the eight constructive and rebuttal speeches, the application of the function to the four-factor solution of the first affirmative constructive speech will serve as a sufficient example of the process.

The values of P for the four factors are .25, .12, .16, and .12. The correlations of the four factors with the overall team ratings are .70, -.10, .12, and .10, respectively. (The majority of the correlations of Factor II are negative.) The resultant values of C are 1, 1/4, 1/4, and 1/4, respectively. The weight of each factor is determined by substituting these values into the weighting function. The weight of Factor I is (1 x .25) divided by [(1 x .25) + (1/4 x .12) + (1/4 x .16) + (1/4 x .12)], that is, .25 divided by .35, which equals .71. The weight of Factor II is (1/4 x .12)/.35, which equals .09. The weight of Factor III is (1/4 x .16)/.35, which equals .11. The weight of Factor IV is (1/4 x .12)/.35, which is the same as the weight of Factor II, or .09. It should be noted that .71 + .09 + .11 + .09 = 1.00. The weights for the factors of the remaining constructive and rebuttal speeches may be found on the proposed ballot which is contained in Appendix D.

This ballot is the practical result of the four assumptions of the third type of ballot. Since various interpretations of some of the implications of the assumptions are possible, this ballot is only one of many possible ballots which could be formed by using these assumptions. The interpretations used in the proposed ballot are those which are best supported by the research of the present study.

When putting the ballot into practice, it is usually necessary to arrive at a final score for each speaker and each team. The proposed ballot makes it possible to obtain these scores and the score for each speech as well. Consider the example of the first affirmative constructive speech. The factor weights are .71, .09, .11, and .09, respectively. (It would also be possible to allow the individual who is judging a given debate to assign his own weights to the factors if this is seen to be desirable.)
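The weighting function just specified can be stated compactly in code. The Python sketch below is an illustration added to this text rather than part of the original study; it implements the C step function and the weight formula given above and reproduces the weights calculated for the four-factor solution of the first affirmative constructive speech. The P values and correlations are those stated in the text; the function and variable names are assumptions of the sketch.

```python
def c_value(r):
    """The step-function C specified above, applied to a correlation r."""
    if r < 0.10:
        return 0.0
    if r < 0.30:
        return 0.25
    if r < 0.50:
        return 0.5
    if r < 0.70:
        return 0.75
    return 1.0


def factor_weights(p_values, correlations, mostly_negative=None):
    """weight_i = (C_i * P_i) divided by the sum over k of (C_k * P_k)."""
    if mostly_negative is None:
        mostly_negative = [False] * len(p_values)
    # For a factor whose correlations are predominantly negative,
    # the sign of its correlation is reversed before C is applied.
    cs = [c_value(-r if neg else r)
          for r, neg in zip(correlations, mostly_negative)]
    products = [c * p for c, p in zip(cs, p_values)]
    total = sum(products)
    return [cp / total for cp in products]


# Four-factor solution of the first affirmative constructive speech (values from the text):
weights = factor_weights(
    p_values=[0.25, 0.12, 0.16, 0.12],
    correlations=[0.70, -0.10, 0.12, 0.10],
    mostly_negative=[False, True, False, False],
)
print([round(w, 2) for w in weights])  # [0.71, 0.09, 0.11, 0.09]
```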
Given a five-point scale and assuming that the first affirmative constructive speech received ratings of 4, 4, 3, and 3, respectively, on the four factors, the total score for that speech would be (.71 x 4) + (.09 x 4) + (.11 x 3) + (.09 x 3), or 3.8 of a possible five points. In order to obtain the total points for the first affirmative speaker, the points for his rebuttal speech should first be calculated just as was done for his constructive speech. Let us assume that his total score for the rebuttal speech was 3.4. These two scores may then be added to produce the final speaker score of 7.2 out of ten possible points. This step assumes the equal weighting of the constructive speech and the rebuttal speech for a given speaker. As mentioned above, this assumption is not investigated by the present study.

If it is desired to have the speaker points in a more familiar form, the final speaker score can be multiplied by three in order to change the total possible points from ten to thirty. In this case, the first affirmative speaker would receive 7.2 x 3, or 21.6, out of a possible 30 speaker points. Another way to arrive at the thirty-point total would be to employ a fifteen-point scale originally in place of the five-point scale. Since most judges are probably more familiar with the use of the five-point scale, it was chosen for use in the proposed ballot over the fifteen-point scale.

The team score may be found by adding the total points of the first speaker to the total points of the second speaker. The team with the higher number of points may then be declared the winner. If desired, the proposed ballot may be employed only to obtain speaker points, with the decision itself being reached by other means.

Suggestions for Further Research

In the preceding section several recommendations have been made concerning the nature of ballots for the evaluation of debate. The proposed ballot, which follows these recommendations, should be used in tournament competition and checked for reliability against the experimental ballot. This was done at the Second Annual Spartan Invitational Experimental Debate Tournament which was held at Michigan State University on February 25 and 26, 1966. The data are being processed at the time of this writing.

A study should also be conducted to determine if each speech in a debate is of equal importance to the outcome of the debate. This would give a good indication of the risk involved in assuming equal weighting for each speech.

In addition, research is needed in several areas of factor analytic theory. A reliable statistical test for the differences between two factor analyses would be a most useful research tool. Another area of need which has long been recognized by authorities in factor analytic technique is the development of a procedure analogous to factor analysis which is not dependent on the assumption of linear relations between each pair of variables.

Summary

Chapter IV discusses and compares the interpretations of the results of the present study as presented in Chapter III, and shows some of the similarities and differences among the analyses of different speeches. These analyses are representative of the respective speeches and can be compared on the basis of their similarity with respect to the criteria used to choose a rotated factor solution. Neither the chosen rotated solution nor the analyses themselves, however, are the only possible structuring of the items.
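For reference, the scoring procedure for the proposed ballot described earlier in this chapter reduces to a weighted sum of factor ratings for each speech, followed by simple addition across speeches and speakers. The Python sketch below is an added illustration, not part of the study; the weights are those derived above for the first affirmative constructive speech, and the 3.4 rebuttal score is the value assumed in the worked example.

```python
FIRST_AFF_CONSTRUCTIVE_WEIGHTS = [0.71, 0.09, 0.11, 0.09]  # from the proposed ballot

def speech_score(factor_ratings, weights):
    """Weighted sum of 1-to-5 factor ratings for a single speech."""
    return sum(r * w for r, w in zip(factor_ratings, weights))

# The worked example for the first affirmative speaker:
constructive = speech_score([4, 4, 3, 3], FIRST_AFF_CONSTRUCTIVE_WEIGHTS)
print(round(constructive, 1))            # 3.8 of a possible 5 points

rebuttal = 3.4                           # assumed value, as in the text
speaker_total = constructive + rebuttal  # equal weighting of the two speeches
print(round(speaker_total, 1))           # 7.2 of a possible 10 points
print(round(speaker_total * 3, 1))       # 21.6 of a possible 30 speaker points
```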
108 It is suggested that at least three types of ballots may be used in judging debates,according to the assumptions made by these ballots. The four assumptions of the proposed ballot are the suggestions which are derived from the results of this study. These are (1) that each of the eight Speeches in a traditional debate Should be evaluated separately, (2) that each of the eight Speeches should be evaluated on the basis of the factors which are important to that Speech, (3) that the factors of the present study are close approx— imations of the factors which are important to each Speech, and (4) that these factors are not necessarily of equal importance to the evaluation of that Speech. These four assumptions are transformed into practical form by means of the proposed ballot which appears in Appendix D. In conclusion, the present study is concerned with answering six questions which are posed in Chapter I. They are: (1) What are the lower—level items which judges use in an overall evaluation of debate? (2) Do judges use such items in the same way in evaluating affirmative and negative teams? (3) Do judges use these items in the same way in evaluating constructive and rebuttal Speeches? (4) Do judges use these items in the same way in evaluating each Speech in a debate? (5) Will these items be psychologically meaningful? (6) What are the implications of the answers to the first five questions for the construction of a debate ballot? 109 The results of the overall analysis form the answer to the first question. The factors of this analysis are the lower-level items which judges use in an overall evaluation of debate. The results of the affirmative and negative Speeches analyses form the answer to the second question, and the results of the constructive and rebuttal speeches analyses form the answer to the third question. A compari— son of the affirmative and negative Speeches, and also of the constructive and rebuttal Speeches, shows that there are substantial differences in the factors and their relative weights between these pairs of analyses. The results of the analyses of the eight individual speeches form the answer to the fourth question. A compari- son of each Of these eight analyses with the remaining seven analyses shows that there is a substantial amount of unique- ness in the importance of the factors of each of these anal- yses, and also in the factors themselves. The fifth question concerns the psychological mean- ingfulness of the lower-level items or factors. The results of the control analyses indicate that although the number of units to be handled is reduced by the process of factor anal— ysis the psychological meaningfulness of the items should still be represented by the factors. Question six concerns the implications of the answers to the five previous questions for the construction of future debate ballots. 110 Chapter IV has considered three types of ballots, each of which makes certain assumptions about the evaluation of debates. Some of these assumptions are more tenable than others. Specifically, the third type of ballot, represented by the ballot proposed in this study, makes assumptions which are considerably more plausable, on the whole, than are the assumptions made by either of the first two types of ballots I 1 Since these assumptions are supported by the research which 1 answers the first five questions of this study. i Thus the proposed ballot forms a practical answer to question six. Three areas for future research are suggested. 
The first of these involves checking the reliability of the proposed ballot by comparing it with a replication of the debates in which the EXperimental Ballot was used to gather data. The second area for further research concerns the assumption of equal importance of the eight individual Speeches in their effect on the outcome of the debate. The final suggestion for further research is in the area of expanding factor analytic theory and techniques. BIBLIOGRAPHY Books Freeley, Austin J. Argumentation and Debate. San Francisco: Wadsworth Publishing Company, 1961. Harman, Harry H. Modern Factor Analysis. Chicago: University of Chicago Press, 1960. Periodicals Carroll, John B. “An Analytical Solution for Approximating Simple Structure in Factor Analysis," Psychometrika, XVIII (1953), 23-38. Giffin, Kim. ”A Study of the Criteria Employed by Tournament Debate Judges,” Speech Monographs, XXVI (March, 1959), 69-71. Halstead, William P. ”Who Wins Debates?" Quarterly Journal of Speech, XXVI (April, 1940), 213—221. Kaiser, Henry F. "The Varimax Criterion for Analytic Rotation in Factor Analysis,” Psychometrika, XXIII (1958), 187-200. O‘Neill, J. M. ”The Juryman‘s Vote in Debate," Quarterly Journal of Speech, III (October, 1917), 346-355. Thurstone, L. L. "An Analytical Method for Simple Structure,” Psychometrika, XIX (1954), 173-182. Wells, Hugh Neal. "Judging Debates,” Quarterly Journal of Speech, 111 (October, 1917), 336—345, Williams, Frederick, and Webb, Sally Ann. “Factors in Debate Evaluation: A Pilot Study," Central States Speech Journal, XV (May, 1964), 126-128. 111 112 Unpublished Material DeJonge, James J., and Sim, F. M. ”Factor Analysis Programs: Roever, Fanod 3 and Fanim 3.” Technical Report 2 (Revised). East Lansing, Michigan: Computer Institute for Social Science Research, Michigan State University, 1964. (Mimeographed.) J. E. "A Study to Determine to What Extent Academically Defensible Criteria Are Employed in the Judging of Tournament Debates.” Unpublished M.A. Thesis, University of Kansas, 1958. Scott, Robert L. ”A Study of the Objectivity of Debate Judges. Unpublished M.A. Thesis, University of Nebraska, 1951. Webb, Sally Ann. "Factors of Judgment in Evaluation of High School Debate.” Unpublished M.S. Thesis, University of Wisconsin, 1964. APPENDIX A QUESTIONNAIRE AND PERSONS RETURNING IT Dear Sir: I am involved in the project of Obtaining as many items as ,1 possible which different debate judges consider relevant to g the job of judging debate. The questionnaire below consists of a list which has already been suggested to me. Since it is important that the list include all of the possible items which you as a judge might consider in evaluating debate performance, regardless of whether such performance is of high or Of low help me add to response after other items as quality, I would appreciate it if you will this list by (l) checking the appropriate each of the items and (2) listing as many are necessary, in your Opinion, to allow you to evaluate completely any college debate. Thank you for your cooperation. 
Sincerely yours, Tom Steinfatt Department of Speech Michigan State University East Lansing, Michigan 113 114 Possibly Relevant Irrelevant Analysis of Proposition Recognition of Opposition‘s Assumptions Use of Supporting Evidence Refutation of Evidence Refutation of Reasoning General Organization of Speech Internal Organization Of Individual Arguments Use Of Internal Summary Speaking Ability Poise Sincerity Enthusiasm Eye Contact Total Effect (Space for listing other items) 115 Names of Persons Returning Completed Questionnaires Dr. Roger Hufford Clarion State College Dr. Jerry Anderson Michigan State University Dr. Paul Crawford Northern Illinois University Dr. Stanley Rives Illinois State University Mrs. Deldee Herman Western Michigan University Mr. John Monsma Northern Michigan University Dr. Maxine Schnitzer Ferris Michigan State University Dr. Kenneth Hance Michigan State University Dr. Robert Huber University of Vermont Mr. John Madsen Wisconsin State University Whitewater, Wisconsin Dr. Austin Freeley John Carrol University Dr. Joseph Wenzel University of Illinois Dr. Otto Bauer Bowling Green University Mr. Howard Rebach Michigan State University Dr. Fred Alexander Michigan State University Dr. Ted Jackson University of Illinois-Chicago Dr. Kenneth Anderson University of Michigan Dr. George Ziegelmueller Wayne State University Dr. Otis Aggertt Indiana State College Dr. Gordon Thomas Michigan State University Dr. John Boaz Wayne State University Dr. Raymond Tucker Western Illinois University Dr. Wofford Gardner University of Maine Dr. John Oostendorp Wisconsin State University ‘River Falls, Wisconsin Dr. Neil Claussen Bradley University Dr. Nicholas Cripe Butler University Mr. Marvin Kleinau Southern Illinois University Mr. Harold Cook University of Missouri Kansas City, Missouri APPENDIX B TWO TYPES OF FORMS OF EXPERIMENTAL BALLOT Performed Performed Very Poorly Very Well Rate ----------------------- : l : 2 : 3 : 4 : 5 : 6 : 7 Articulation --------------- : 1 : 2 : 3 : 4 : 5 : 6 : 7 Use Of Humor --------------- : l : 2 : 3 : 4 : 5 : 6 : 7 Use of Motive Appeals Other Than Humor --------------- : l 2 3 : 4 5 6 7 Grammar ———————————————————— : l 2 3 : 4 5 6 7 Word Choice ---------------- : 1 2 : 3 4 : 5 6 7 Delivery in General ———————— : 1 2 : 3 ' 4 . 5 . 6 7 Interestingness ———————————— : l 2 : 3 4 : 5 : 6 7 Competence ----------------- : l 2 : 3 4 : 5 6 : 7 Trustworthiness ------------ : l 2 : 3 4 : 5 6 7 Dynamism ------------------- : l 2 ° 3 4 : 5 6 7 Overall Organization of the Speech ------------------- : 1 2 3 : 4 5 6 7 Internal Organization of Individual Arguments ----- : 1 2 : 3 4 5 6 7 Clarity Of Statements ------ : 1 2 3 4 5 6 . 
7 Amount of Evidence --------- : l : 2 : 3 : 4 5 6 : 7 Quality Of Evidence -------- .: 1 : 2 : 3 : 4 5 6 : 7 Performed Performed Very Poorly T Very Well 116 117 Performed Performed Very Poorly Very Well Quality Of Reasoning ------- : 1 : 2 : 3 : 4 : 5 : 6 : 7 Relation of Evidence to Conclusions —————————————— : l : 2 : 3 : 4 : 5 : 6 : 7 Proper Identification Of Sources ------------------ : 1 : 2 : 3 : 4 : 5 : 6 : 7 Adaptation to Opponent‘s Case ————————————————————— : l : 2 : 3 : 4 : 5 : 6 : 7 Unity of Team Presentation — : l : 2 : 3 : 4 : 5 : 6 : 7 Refutation of Opponent‘s Reasoning ---------------- : l : 2 : 3 : 4 : 5 : 6 : 7 Refutation of Opponent‘s Evidence ————————————————— : 1 : 2 : 3 : 4 : 5 : 6 : 7 Ability to Find Fallacies -— : 1 : 2 : 3 : 4 : 5 : 6 : 7 Fulfillment of ReSponsibil— ities Of Speaker‘s Posi— tion in the Debate ------- : l : 2 : 3 : 4 : 5 : 6 : 7 Recognition of Main Issues Of the Debate ------------ : 1 : 2 : 3 : 4 : 5 : 6 : 7 Ability to Follow the Main Issues Through the Rebuttal Period —————————— : l : 2 : 3 : 4 : 5 : 6 : 7 Analysis of the Proposition and Selection of Arguments ---------------- : l : 2 : 3 : 4 : 5 : 6 : 7 Depth and Scope of Knowledge of Topic ----------------- : l : 2 : 3 : 4 : 5 : 6 : 7 Behaved Ethically and Fairly ------------------- : l : 2 : 3 : 4 : 5 : 6 : 7 Performed Performed Very Poorly Very Well 118 Division Round Room Date Judge Affirmative Negative In this debate: (check one) I kept quite a complete flow Sheet I took some notes I listened, taking very few notes In comparison with other college debate teams I have heard this year, I rate the: Affirmative Team : 1 : 2 : 3 : 4 : 5 : 6 : 7 : Very Very Poor Negative Team 1 : 2 : 3 : 4 : 5 : 6 : 7 In my Opinion, the team did the better (Aff. or Neg.) job Of debating. Judge School Total number of years of debate experience (as a debater, coach, or judge). 
APPENDIX C TWO THROUGH SEVEN FACTOR ROTATIONS OF OVERALL ANALYSIS Varimax Rotation Analysis Proportion Of Variance 1 .3291 2 .2123 Highest Loadings 1 .8561 2 .7273 Rotated Factor Loadings Communalities l 2 l —.ll73 .7013 .5055 2 .1147 .5584 .3249 3 .2285 .4725 .2754 4 .1901 .6274 .4297 5 .2061 .4235 .2218 6 .2718 .6922 .5531 7 .2445 .7273 .5888 8 .4021 .6023 .5245 9 .6631 .5685 .7628 10 .2563 .7122 .5729 11 .4707 .4375 .4129 12 .5530 .3211 .4089 13 .5709 .4169 .4997 14 .5510 .4559 .5114 15 .5173 .4551 .4747 16 .6326 .4682 .6193 17 .7441 .3815 .6993 18 .6600 .4116 .6050 19 .4015 .4694 .3815 20 .8261 .1487 .7046 21 .6643 .0301 .4421 22 .8561 .1364 .7516 23 .6452 .2610 .4844 24 .7275 .2032 .5705 25 .7595 .2959 .6644 26 .8190 .1769 .7020 ’27 .7431 .0920 .5606 28 .8013 .3067 .7362 29 .7637 .4171 .7573 30 .2094 .7233 .5670 31 .6314 .2680 .4704 119 120 Varimax Rotation Analysis Proportions Of Variance 1 .3136 2 .1283 3 .1622 Highest Loadings l .8465 2 .7963 3 .8145 Rotated Factor Loadings Communalities l 2 3 l —.1075 .7143 .2613 .5900 2 .1603 .7963 .0195 .6601 3 .1463 .0616 .7590 .6012 4 .1103 .0977 .8145 .6851 5 .2496 .6571 .0645 .4982 6 .2722 .6439 .3411 .6051 7 .2450 .6786 .3543 .6461 8 .3780 .4102 .4620 .5246 9 .6367 .3624 .4760 .7633 10 .2072 .3621 .6639 .6148 11 .4582 .3267 .3143 .4155 12 .5531 .2940 .1843 .4263 13 .5752 .4083 .2041 .5393 14 .5525 .4281 .2389 .5456 15 .4669 .1127 .5658 .5509 16 .5921 .1836 .5163 .6509 17 .7200 .2006 .3788 .7021 18 .6312 .2012 .4175 .6132 19 .3607 .1850 .5052 .4195 20 .8158 .0668 .1861 .7046 21 .6825 .1317 .0613 .4869 22 .8465 .0591 .1779 .7517 23 .6046 .0103 .4202 .5422 24 .6965 .0065 .3367 .5986 25 .7530 .2280 .2271 .6706 26 .8208 .1670 .1219 .7164 27 .7585 .1712 .0092 .6047 28 .7875 .1932 .2812 .7366 29 .7443 .2622 .3668 .7573 30 .1535 .3320 .7090 .6365 31 .6187 .1661 .2451 .4705 121 Varimax Rotation Analysis Proportions of Variance 1 .2209 2 .1426 3 .1130 4 .1691 Highest Loadings 1 .8155 2 .8170 3 -.7921 4 -.7629 Rotated Factor Loadings Communalities 1 2 3 4 1 .0370 .2889 .7533 .0651 .6565 2 .1402 —.0293 .7921 —.1515 .6709 3 .1476 .7701 .0451 -.0848 .6240 4 .0947 .8170 .1080 —.1242 .7035 5 .1583 -.1035 .6202 —.2831 .5007 6 .1418 .2891 .5975 -.3855 .6092 7 .2257 .3508 .6824 -.1879 .6750 8 .3813 .4738 .4265 -.1653 .5790 9 .4886 .4267 .3131 —.4950 .7638 10 .1556 .6498 .3555 -.2267 .6243 11 .3806 .2911 .3040 -.3096 .4179 12 .2241 .0507 .1561 —.7629 .6592 13 .2668 .0798 .2809 -.7448 .7112 14 .2983 .1383 .3257 -.6482 .6344 15 .2465 .4829 .0296 -.5602 .6087 16 .4039 .4496 .1157 —.5395 .6698 17 .5635 .3276 .1458 —.5090 .7052 18 .4167 .3390 .1200 -.5904 .6516 19 .1649 .4300 .1110 -.4916 .4661 20 .7892 .1933 .0673 —.2798‘ .7430 21 .7483 -.0187 .1681 -.0712 .5936 22 .8032 .1784 .0522 -.3157 .7794 23 .5327 .4054 .0283 -.3135 .5472 24 .6748 .3453 .0025 -.2439 .6341 25 .6035 .1781 .1738 -.4973 .6734 26 .8155 .1375 .1764 —.2486 °7768 27 .8012 .0251 .1992 —.l390 .7016 28 .6447 .2366 .1433 —.4951 .7373 29 .5639 .3051 .1969 —.5623 .7661 30 .1076 .6969 .3281 -.2033 .6462 31 .4602 .1897 .1065 -.4732 .4831 122 Varimax Rotation Analysis Proportions of Variance l .2310 2 .0801 3 .1490 4 .1468 5 .0744 Highest Loadings 1 .8207 2 .7907 3 .7935 4 —.8l30 5 .8347 Rotated Factor Loadings Communalities 1 2 3 4 5 1 -.0604 .7907 .1805 .0374 .2478 .7243 2 .1370 .4677 .0070 .1178 .6683 .6980 3 .1661 .1463 .7345 .0544 —.l902 .6276 4 .1156 .2283 .7935 .0771 -.0500 .7036 5 .1786 .0971 .0237 .1269 .8347 .7546 6 .1600 .3016 .3521 .2948 .5748 .6578 7 .2070 .7624 .2450 .2993 .1691 .8024 8 .3712 
.5887 .3714 .2499 -.0063 .6849 9 .5052 .2488 .4414 .4572 .2079 .7643 10 .1796 .2607 .6738 .1346 .2818 .6518 11 .3865 .2755 .2802 .3051 .1587 .4221 12 .2293 .1338 .0739 .8130 .0497 .7393 13 .2723 .2109 .1040 .7822 .1565 .7657 14 .3023 .2697 .1478 .6839 .1664 .6813 15 .2735 .0100 .5299 .5009 .0459 .6087 16 .4369 .0229 .5257 .4266 .2239 .6998 17 .5848 .0639 .3671 .4469 .1636 .7073 18 .4482 .0509 .4249 .4817 .2538 .6804 19 .2071 .1454 .5551 .3193 .3587 .6028 20 .8000 .0543 .1943 .2424 .0623 .7433 21 .7513 .0728 .0178 .0285 .1924 .6079 22 .8127 .0579 .1749 .2907 .0316 .7800 23 .5503 .0261 .4092 .2725 -.0472 .5475 24 .6885 .0506 .3370 .2073 -.0326 .6342 25 .6146 .1197 .1951 .4793 .1307 .6769 26 .8207 .1454 .1261 .2268 .1214 .7768 27 .8013 .1503 .0076 .1234 .1486 .7020 28 .6583 .1037 .2535 .4683 .1081 .7394 29 .5881 .0551 .3629 .4825 .2497 .7757 30 .1313 .2681 .7128 .1180 .2335 .6657 31 .4702 .0960 .2012 .4695 .0520 .4939 Varimax Rotation Analysis Proportions of Variance 1 .2128 2 .0852 3 Highest Loadings 1 .8089 .7735 3 Rotated Factor Loadings 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 l —.0602 .1319 .1708 .1141 .1840 .1395 .1932 .3209 .4679 .1632 .3178 .2548 .2868 .2951 .2055 .3560 .5298 .3756 .1441 .7770 .7699 .7965 .5228 .6769 .6059 .8089 .7990 .6226 .5466 .1270 .4714 2 .7681 .4534 .0782 .1644 .0449 .2981 .7735 .6825 .2948 .2294 .4132 .0734 .1693 .2752 .1302 .1188 .1600 .0788 .0571 .0896 .0177 .0804 .0591 .0497 .1212 .1571 .1472 .1661 .1151 .2061 .0768 3 .2202 -.0169 .7805 .8213 .0264 .2833 .2251 .2366 .3164 .6423 .0606 .1078 .1084 .0939 .2928 .2496 .1753 .1668 .3347 .1186 .0487 .1180 .3211 .3052 .1490 .0897 .0054 .1226 .2104 .7235 .1865 123 .1052 .8213 4 .0622 .1049 .0930 .1014 .1363 .2349 .2642 .1073 .3337 .1065 .0941 .8424 .7831 .6290 .2860 .1742 .2630 .2461 .1291 .1538 .0691 .2185 .1810 .1618 .4245 .1715 .0981 .3352 .3362 .1293 .4468 4 .0998 5 .0706 6 .1380 4 -.8424 5 .8483 6 .7285 Communalities 5 6 .2817 -.0460 .7274 .6778 .0669 .6982 -.l504 .1378 .6948 -.0137 .1865 .7597 .8483 .0988 .7846 .5679 .3054 .6595 .1812 .1469 .8104 -.0451 .3631 .7702 .1765 .4665 .7661 .2940 .2844 .6705 .0799 .5156 .5565 .0788 .1737 .8279 .1753 .2264 .8179 .1606 .3048 .6860 —.O354 .6569 .6595 .1263 .7285 .7801 .0987 .5637 .7337 .1619 .6908 .7391 .2849 .6212 .6198 .0389 .3041 .7434 .2236 —.0202 .6506 .0160 .2801 .7812 —.0712 .3602 .5475 -.0347 .2509 .6441 .1247 .2962 .6875 .1145 .2209 .7784 .1524 .1111 .7052 .0664 .4443 .7443 .2029 .5172 .7780 .2643 .2179 .7162 .0585 .2298 .5188 124 Varimax Rotation Analysis Proportions of Variance l .2100 2 .0677 3 .1048 4 .1073 5 .0707 6 .1051 7 .0747 Highest Loadings 1 .8057 2 .8475 3 .8148 4 —.8469 5 —.8776 6 .7557 7 -.7475 Rotated Factor Loadings Communal- ities 1 2 3 4 5 6 7 1 .0369 .1918 .1867 .0455 -.8776 .0639 .0134 .8495 2 .1411 .6286 .0329 .0987 -.5398 .1072 .0540 .7318 3 .1794 —.1690 .7603 .0880 -.0933 .2076 .0271 .6992 4 .1165 -.O286 .8148 .1028 -.1469 .2110 .0704 .7599 5 .1774 .8475 .0323 .1377 —.0924 .0872 .0320 .7869 6 .1221 .5742 .3149 .2570 -.2l31 .1766 .3278 .6938 7 .1981 .1247 .2251 .2726 -.7524 .1134 .2678 .8305 8 .3038 -.0506 .2821 .1475 -.4802 .1421 .5917 .7971 9 .4501 .1915 .3464 .3673 -.1478 .2950 .4398 .7966 10 .1470 .3052 .6738 .1282 —.1215 .1723 .3006 .7200 11 .2808 .1253 .1392 .1534 —.1144 .1670 .7475 .7372 12 .2481 .0775 .1053 .8469 -.0688 .1603 .0402 .8280 13 .2782 .1723 .1119 .7927 -.1455 .1832 .1236 .8180 14 .2894 .1503 .0951 .6420 -.2379 .2459 .2045 .6865 15 .2203 -.0598 .2468 .2909 —.1401 .6761 .1895 
.7102 16 .3625 ~.1183 .2210 .1906 -.0821 .6797 .3076 .7940 17 .5192 .1143 .1865 .2934 —.0405 .4203 .3985 .7406 18 .3769 .1634 .1477 .2653 -.0280 .6191 .3127 .7429 19 .1715 .2477 .2583 .1151 -.0723 .7557 .0290 .7479 20 .7765 .0411 .1131 .1704 -.0487 .2527 .1802 .7452 21 .7721 .2194 .0421 .0717 -.0514 .0010 .0347 .6549 22 .7935 .0221 .1171 .2364 -.0301 .2197 .1805 .7815 23 .5450 -.1053 .2649 .1737 —.1383 .4640 .0286 .6437 24 .6818 -.0406 .2884 .1695 -.0434 .2560 .0781 .6519 25 .6044 .1211 .1414 .4364 -.1015 .2634 .1414 .6901 26 .8057 .1151 .0926 .1889 -.1135 .1595 .1836 .7786 27 .7956 .1527 .0114 .1130 —.1161 .0578 .1388 .7054 28 .6096 .0838 .1402 .3653 —.0463 .3003 .3612 .7545 29 .5337 .2219 .2243 .3648 —.0079 .3822 .3558 .7903 30 .1143 .2694 .7479 .1445 —.1268 .1459 .2185 .7510 31 .4587 .0745 .2034 .4657 .0004 .1412 .1907 .5305 APPENDIX D PROPOSED BALLOT Division _Round ”Room Date Judge Affirmative Negative Please rate each Speech on the factors listed beneath it using the following scale: 1, poor; 2, fair; 3, average; 4, excellent; 5, superior. Place the number (from 1 to 5) in the blank next to the factor. Decimal numbers represent relative factor weights. Full names of the factors may be found on the supplementary sheet. FIRST AFFIRMATIVE CONSTRUCTIVE I. Depth of Knowledge of Topic . (.71) II. General Delivery . . . . . (.09) 111. Use of Persuasive Techniques (.11) IV. Fundamental Verbal Facility (.09) FIRST NEGATIVE CONSTRUCTIVE 1. Analysis and Attack of the Principal Issues of the Debate . . . . (.78) 11. Use of Persuasive Techniques (.13) 111. General Delivery (.09) SECOND AFFIRMATIVE CONSTRUCTIVE I. The Continuous Analysis and Attack of the Principal Issues of Debate (.56) 11. Use of Persuasive Techniques (.31) 111. General Delivery (.06) IV. Dynamism and Interestingness (.07) V. Fundamental Verbal Facility (.00) 125 II. I_[I. IV. VI. II. III. IV. v1: II. III. IV. II. III. IV. 126 SECOND NEGATIVE CONSTRUCTIVE The Continuous Analysis and Attack of the Principal Issues of Debate Fundamental Verbal Facility General Delivery Overall and Internal Organization and Clarity . . . . Use of Persuasive Techniques . . . Dynamism and Interestingness FIRST NEGATIVE REBUTTAL Well-organized, Clear, and Continuous Analysis and Attack of the Principal Issues of Debate . . . . . . . Use of Persuasive Techniques General Delivery . Proper Use, Quality, and Amount of Evidence . . . . . Ethical Behavior Interestingness FIRST AFFIRMATIVE REBUTTAL Continuous Analysis and Attack of the Principal Issues of Debate Fundamental Verbal Facility General Delivery . Use of Evidence and Other Persuasive Techniques Dynamism and Interesting SECOND NEGATIVE REBUTTAL Continuous Analysis and Attack of the Principal Issues of Debate . Fundamental Verbal Facility . . . Use of Persuasive Techniques General Delivery . Dynamism and Interesting AAA AAA AAA/\A .53) .06) .06) .28) .00) .07) .80) .04) .00) .11) .05) .00) .43) .10) .00) .39) .08) .84) .07) .09) .00) .00) 127 SECOND AFFIRMATIVE REBUTTAL 1. Continuous Analysis and Attack of the Principal Issues of Debate . . . . . . . (.60) 11. Use of Persuasive Techniques . . . . . . . (.16) III. Fundamental Verbal Facility . . (.12) IV. PrOper Use, Quality, and Amount of Evidence . . . . (.12) V. General Delivery (.00) Basis for Decision and other Comments: In my opinion the better debating was done by the Affirmative or Negative Judge's Signature School 128 Supplementary Sheet for PrOposed Ballot lst Aff. Construc. 
Depth of Knowledge of TOpic as evidenced by analysis of topic, selection of argu- ments, reasoning, internal and overall organization and clarity, and proper use, quality and amount of with emphasis on rate Use of Persuasive Techniques in an ethical manner, and interestingness Fundamental Verbal Facility with reSpect to grammar, articulation, and choice of words, ability to find falacies, and prOper identifica- Analysis and Attack of the Principal Issues of the by the use of well reasoned, clear, well organized and well worded attacks on the oppositions reasoning and case using sufficient good evidence which is properly related to the conclusions drawn and delivered in a Use of Persuasive Techniques and refutation of evidence in an ethical manner with emphasis on articulation and rate, and considera- tion of interestingness. The Continuous Analysis and Attack of the Principal by the use of consistent, well reasoned and clear attacks on the opposition‘s reasoning, case, and Use of Persuasive Techniques reasoning and evidence, in an ethical manner. with emphasis on rate and consideration of interesting- ness and articulation. Dynamism and Interestingness I. evidence. II. General Delivery 111. IV. tion of evidence. lst Neg. Construc. I. Debate dynamic manner. II. 111. General Delivery 2nd. Aff. Construc. 1. Issues of the Debate evidence. II. III. General Delivery IV. V. Fundamental Verbal Facility with respect to grammar, articulation and choice of words. 129 The Continuous Analysisgand Attack of the Principal Issues of the Debate by the use of’consistant, well reasoned attacks on the oppositions case and reasoning, and by the prOper use of qualitative evidence. Fundamental Verbal Facility with reSpect to grammar, articulation and choice of with emphasis on rate and consideration of ethical Overall and Internal Organization and Clarity and amount offevidence. Use of Persuasive Techniques and of Evidence in an ethical manner. Dynamism, Interestingness and Quality of Reasoning The Well Organized Clear and Continuous Analysis and Attack of the Principal Issues ofIthe Debate by the use of consistent, well-reasoned attacks on the Opposition‘s case and reasoning in a dynamic manner. Use of Persuasive Techniques with emphasis on rate and articulation PrOper Use, Quality, and Amount of Evidence refutation of evidence, quality of reasoning and knowledge of topic. The Continuous Analysis and Attack of the Principal Issues of the Debate by the use of’consistent, well-reasoned and organized attacks on the Opposition's case and reasoning. Fundamental Verbal Facility with reSpect to grammar, articulation, and choice of words, and internal organization and clarity of argu- with emphasis on rate Use of Evidence and Other Persuasive Techniques in a prOper and ethical manner, quality of reasoning and knowledge of topic. 2nd. Neg. Construc. I. II. words. 111. General Delivery behavior IV. V. VI. lst Neg. Rebuttal I. II. III. General Delivery IV. V. Ethical Behavior VI Interestinggess lst Aff. Rebuttal I. II. ments. 111. General Delivery IV. V. Dynamism and Interestiggness 130 The Continuous Analysis and Attack of the Principal by the use Of consistent, well-reasoned, organized, clear attacks on the opposition‘s case, reasoning, and evidence, and by the prOper use of evidence. Fundamental Verbal Facility reSpect to grammar, articulation, and choice of words. Use of Persuasive Techniques and evidence in a proper and ethical manner. with emphasis on rate and consideration of ethical Dynamism and Interestingness. 
The Continuous Analysis and Attack of the Principal by the use of consistent, well-reasoned attacks on the Opposition‘s case, reasoning, and evidence. Use of Persuasive Techniques Fundamental Verbal Facility With reSpect to grammar and articulation Proper Use, Quality and Amount of Evidence and its organized, clear, relationship to reasoning; also, dynamism and choice of words. 2nd. Neg. Rebuttal 1. Issues of the Debate 11. III. IV. General Delivery behavior V. 2nd. Aff. Rebuttal I. Issues of the Debate II. in an ethical manner III. IV. V. General Delivegy with emphasis on rate and considering interestingness, trustworthiness, and ethical behavior. muumiuuum umuInning:WM 30