METHODOLOGY FOR EVALUATING ECONOMIC THEORIES

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
Darrel Harvey Plaunt
1965

This is to certify that the thesis entitled METHODOLOGY FOR EVALUATING ECONOMIC THEORIES presented by Darrel Harvey Plaunt has been accepted towards fulfillment of the requirements for the Ph.D. degree in Ag. Economics.

ABSTRACT

METHODOLOGY FOR EVALUATING ECONOMIC THEORIES

by Darrel Harvey Plaunt

The purpose of this study is to examine alternative procedures of theory evaluation with a view to identifying some of the more promising techniques and synthesizing these techniques into a more comprehensive framework of analysis. The need for such a framework of analysis arises out of the attempts of researchers to explain and to predict the phenomena of reality. Efforts to improve the accuracy and reliability of explanation and prediction involve choosing among alternative theories and developing more powerful sets of hypotheses. As one distinguished philosopher has said: "All scientific activity amounts to the invention of and the choice among systems of hypotheses."

The development of this framework of analysis begins with an examination of the nature of theories and their role in scientific explanation and prediction. This examination provides a set of criteria for the logical and the empirical adequacy of theories. These criteria provide the broad outlines of the framework of analysis required for the examination of alternative techniques of theory evaluation. The use of these techniques is illustrated by their application to the evaluation of the classical theory of economic growth. This theory was chosen because it seemed sufficiently recalcitrant to evaluation to facilitate the illustration of many of the concepts involved.
The investigation of the logical adequacy of theories begins with the concept of an underlying structure or calculus, the concept of deductive relatedness, and the concept of a model. This investigation is continued with the deduction of a number of apparently inconsistent theorems from the original formulation of the classical axioms. Proof of such inconsistency would be extremely damaging to any putative theory. Such proof is not easily forthcoming, however. An inconsistency may arise either from the nature of the axioms employed or from the vagueness and imprecision with which they are expressed. Since this vagueness and imprecision is characteristic of many social science theories, it is often necessary to formalize the theory more fully before unequivocal results can be obtained.

This process of formalization is illustrated in two stages in this study. The first stage involves the interpretation of the classical axioms in terms of first differences. This interpretation provides a model of the classical axioms from which an inconsistency is deduced. This deduction would likewise have been extremely damaging if it had been possible to demonstrate that this formalization represents an adequate interpretation of the theory under examination. The second stage in this process of formalization involved the specification of the axioms in general equation form.

The problems of assessing the logical adequacy of theories expressed in equation form require the use of more powerful analytic techniques. One such technique, applicable to linear systems, is the Test of Determinants. Another technique, applicable to both linear and non-linear systems, is developed in detail in Chapter V. This technique is called the Method of Numerical Interpretation. It evolves out of the classification system developed in that chapter. It provides a direct means of proving the consistency of a set of axioms once a sufficient level of formalization has been attained.
As such, it may usefully complement the other analytic techniques more commonly employed in the evaluation of the logical adequacy of theories.

The investigation of the empirical as opposed to the logical adequacy of theories, on the other hand, comes to focus on the truth of the empirical propositions employed. Many of the problems of ascertaining the truth of these propositions involve questions of whether or not the variables can be measured and of the accuracy and reliability of these measures. Other problems involve the accuracy and reliability with which the parameters can be estimated once the values of the variables are known. The concepts required for the investigation of the measurability of the variables are drawn from analytic philosophy. The concepts required for assessing the accuracy and reliability with which the parameters could be estimated are drawn from mathematics and statistics. These sets of concepts, together with the logical concepts employed in the examination of the consistency of theories, all come to focus within the general framework provided by the concepts of the logical and empirical adequacy of theories. Further progress in the development of this framework for the a priori analysis of theories is likely to go hand in hand with the refinement of the concepts of analytic philosophy and the development of more powerful mathematical and statistical techniques.

METHODOLOGY FOR EVALUATING ECONOMIC THEORIES

By

Darrel Harvey Plaunt

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Agricultural Economics

1965

ACKNOWLEDGEMENTS

The author wishes to express his appreciation:

to Michigan State University for financing this study.

to Dr. H. R. Jensen, University of Minnesota, who kindled the author's first sparks of interest in the problems of research methodology in economics.

to Richard S.
Rudner, University of Saint Louis, who introduced the author to the discipline of analytic philosophy and who guided the initial phases of the project.

to Dr. Robert Barrett, University of Saint Louis, who contributed a major portion of the analytical framework employed.

to Dr. Gerald J. Massey, Michigan State University, who guided and directed the final stages of the analysis.

to Dr. Glenn L. Johnson who stimulated the author's interest in the interrelationships between analytic philosophy, mathematics and statistics and their application to economic theory, who guided and directed all phases of the study, and who made the whole graduate program a genuinely stimulating experience.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS .............................................. ii
TABLE OF CONTENTS ............................................ iii

Chapter                                                       Page

I     INTRODUCTION TO PROBLEMS OF THEORY EVALUATION ............ 1
          The Problem ........................................... 1
          The Procedure ......................................... 6

II    THE STRUCTURE OF SCIENTIFIC INQUIRY WITH SPECIAL EMPHASIS
      ON THE NATURE AND ROLE OF ECONOMIC THEORIES ............. 15
          Introduction ......................................... 15
          An Examination of the Language of Science ............ 18
          Laws in the Structure of Science ..................... 23
          Theories in the Structure of Science ................. 35
          Models in the Structure of Science ................... 49
          Summary .............................................. 55

III   EXAMINATION OF THE CLASSICAL THEORY OF ECONOMIC GROWTH
      WITH SPECIAL EMPHASIS ON THE TECHNIQUES OF LOGICAL
      ANALYSIS ................................................ 58
          A Summary of the Classical Theory of Economic
          Growth ............................................... 58
          Evaluation of the Classical Theory of Economic
          Growth ............................................... 64
          Conclusions .......................................... 83

IV    TECHNIQUES OF PARTIAL FORMALIZATION AND THE NATURE OF
      THE CLASSICAL AXIOMS .................................... 85
          Techniques of Partial Formalization .................. 85
          The Nature of the Classical Axioms ................... 90
V     METHODS OF ANALYSIS OF PROPOSITIONAL FUNCTIONS AS A MEANS
      OF EVALUATING THE EXPLANATORY AND PREDICTIVE POTENTIAL
      OF THEORIES ............................................. 95
          Introduction ......................................... 95
          The Structure of the Propositions of Science ......... 97
          The Import of the Structure of Propositions in the
          Structure of Science ................................ 114
          Summary ............................................. 127

VI    APPLICATION OF THE ANALYSIS OF PROPOSITIONS TO THE
      EVALUATION OF THE LOGICAL ADEQUACY OF THEORIES ......... 129
          Introduction ........................................ 129
          The Analysis of Propositions Applied to Equation
          Forms ............................................... 133
          The Criteria of Logical and Empirical Adequacy ...... 137
          Summary ............................................. 168

VII   PROBLEMS OF THE EMPIRICAL ADEQUACY OF THEORIES ......... 173
          Introduction ........................................ 173
          Problems of Estimating Parameters ................... 176
          Problems of Measuring the Variables ................. 191

VIII  SUMMARY ................................................ 209
          Purpose of the Study ................................ 209
          Method of Analysis .................................. 210

APPENDIX A ................................................... 226
APPENDIX B ................................................... 233
APPENDIX C ................................................... 241
BIBLIOGRAPHY ................................................. 245

CHAPTER I

INTRODUCTION TO THE PROBLEMS OF THEORY EVALUATION

THE PROBLEM

The purpose of this study is to develop an approach to the problems of evaluating the explanatory and predictive potential of theories of economics. The need for such an approach arises out of the necessity for choosing among alternative theories, and for the development of more powerful sets of hypotheses.
As Professor Goodman has put it, "All scientific activity amounts to the invention of and choice among systems of hypotheses."1 It would seem, therefore, that the problems of theory evaluation are central to a major part of scientific inquiry. The significance of this assertion becomes apparent upon examination of the nature of theories and their role in science. The nature and the role of theories will be presented in capsule form below and in more detail in Chapter II.

1Nelson Goodman, "The Test of Simplicity," Science, CXXVIII (1958), p. 1064.

A theory is defined as "a systematically related set of statements, including some lawlike generalizations, which is empirically testable."2 The term "lawlike" here means, among other things, that the statement is of the universal conditional form. The statement "If investment increases then output will increase," for example, is of this form. It is universal in the sense that it means for all cases, not just for some cases. It is conditional in the sense that it assumes the "if...then" form. A theory is systematically related in the sense that each statement in it is related to one or more other statements in the set in such a way that it functions either as an axiom or as a theorem within the set. Hence a theory is made up of two sets of statements--the axioms, and the theorems deduced therefrom.

A simple illustration of the nature of theorems may be constructed out of a set of statements drawn from one of the more widely accepted theories of employment. This set might include the following types of statements. The first might say that investors will continue to adjust investment toward the level at which the marginal efficiency of capital equals the interest rate. The second might say that investment is a negative function of the interest rate. The third would say that income is a positive function of consumption plus the level of investment.
Finally, the fourth would claim that employment is a positive function of income. If these statements are treated as the axioms of a system, then one of the theorems that follows is that employment is a negative function of the interest rate.

2Richard S. Rudner, "On the Structure of Economic Theories," unpublished paper presented before the Joint Economics - Agricultural Economics Seminar, Michigan State University, East Lansing, Michigan, May 26, 1958.

One possible use of this theory might be to predict the level of employment to be expected in the future or to explain the level of employment attained in the past. The first of these tasks would require the conjunction of the relevant antecedent conditions, a decline in the interest rate, for example, with the theorems presented above, and with some assumption to the effect that other variables, not in the system, would remain constant. This latter assumption is required only if it is believed that there are other variables that may have an effect on the equilibrium of the system as a whole. Given the antecedent conditions and the theory, it should be possible to predict the consequent, an increase in employment, for example. Alternatively, given the consequent and the theory, it should be possible to adduce the antecedent conditions, that interest rates fell, for example. Since the antecedents and the consequents are generally singular statements, the problem of ascertaining their truth is mainly one of observation and measurement. The problems associated with the theory, on the other hand, include these problems of observation and measurement and a large set of other problems as well.3 It is this whole group of problems associated with theories and hence central to the process of explanation and prediction that is the main concern of scientific endeavor.

3The types of problems included in this group will be developed in detail as the study progresses.
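The deduction in the employment illustration above can be written out schematically. The functional forms and sign conventions below are illustrative assumptions supplied here; the theory as stated in the text gives only the verbal sign claims:

```latex
% Axioms (2)-(4) of the employment illustration, with signs
% assumed from the verbal statements:
\begin{align*}
  I &= I(r), \qquad \frac{dI}{dr} < 0
      && \text{(investment falls as the interest rate $r$ rises)} \\
  Y &= C + I
      && \text{(income is consumption plus investment)} \\
  E &= E(Y), \qquad \frac{dE}{dY} > 0
      && \text{(employment rises with income)}
\end{align*}
% Treating consumption C as fixed, composition of the three
% relations gives
\[
  \frac{dE}{dr}
    = \frac{dE}{dY}\,\frac{\partial Y}{\partial I}\,\frac{dI}{dr}
    = (+)(1)(-) < 0 ,
\]
% which is the theorem deduced in the text: employment is a
% negative function of the interest rate.
```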
The progress of science, then, depends, to a large extent, upon the ability of scientists to "choose among systems of hypotheses." This choice depends upon their estimation of the explanatory and predictive potential of the alternative systems involved. This estimate, in turn, depends upon the group of problems mentioned above and the researcher's ability to understand and cope with them. If these problems were easily specified and solved, then man's ability to explain, predict, and hence control his environment would be much better than his record would indicate. These problems are not easily solved, however. Professor Rudner has summarized the state of the art with respect to man's ability to handle these problems of choice as follows: "Whatever may be the case for the serenity or unselfconsciousness with which practicing scientists go about the business of accepting or rejecting theories, it will surely not be denied that the problem of constructing an adequate philosophical rationale for such practice remains in its perennial state of crisis."4

Even though the rationale for accepting or rejecting theories, and hence for selecting among them, may be in its "perennial state of crisis," this problem of developing an adequate rationale is attracting the attention of some of the ablest minds in economics.

4Richard S. Rudner, "An Introduction to Simplicity," Philosophy of Science, Vol. 28, No. 2 (April, 1961), p. 109.
The 1962 Proceedings issue of the American Economic Review, for example, reports a session in which Machlup, Papandreou, Nagel, Krupp, Archibald, Simon and Samuelson bend their efforts to the analysis of problems of the evaluation of theories.5 Their main point of focus is Professor Milton Friedman's position that the adequacy of a theory must not be judged by the "realism of its assumptions" but by examining the concordance of the theory's logical consequences with the phenomena the theory is designed to explain.6 Nor is this kind of interest limited to the field of general economics. At the meetings of the International Association of Agricultural Economics in Mexico in 1961 a substantial portion of the deliberations of the discussion groups was devoted to the process of theory evaluation, particularly as it applies to theories of economic growth. This interest is not limited to discussions at the annual meetings of professional associations, however. It is part and parcel of the everyday work of researchers in economics.

This evidence of both interest and involvement in the problems of evaluating theories merely points up the fact that these problems are both important and unsolved. The fact that they have hindered the development of economics ever since the inception of the science, and the fact that they have attracted the attention of some of its ablest minds over the preceding generations, suggest that they are not amenable to quick and easy identification and solution.

5See the American Economic Review, "Problems of Methodology," Proceedings Issue, May, 1963, Vol. LIII, No. 2, pp. 204-236.

6Milton Friedman, "The Methodology of Positive Economics," Essays in Positive Economics (University of Chicago Press, 1953).
All that can be hoped in this study is that it will be possible to identify one or more of the connecting threads of thought that are relevant to theory evaluation, and to recognize and develop the tools required to bring that thesis to bear on the problems of choosing among these systems of hypotheses.

THE PROCEDURE

TRADITIONAL PROCEDURES

There are several approaches to the evaluation of theories that might be attempted here. One of these involves simply testing the accuracy with which the theory predicts. This generally involves quantifying the relationships employed, substituting in values for the exogenous variables, solving for the values of the endogenous variables, and comparing these results with past observations on the values of the variables to be predicted or explained. In this approach, the better its record of prediction, the better the theory is presumed to be. This procedure for the testing of theories is probably the most commonly used in economics. This is especially likely to be true in agricultural economics, where, if one permits the broad Rudnerian definition of theory to include many of those systems commonly, and mistakenly, referred to as models, a relatively large proportion of research time is devoted to the development and testing of models (i.e., theories) in the manner described above.

This type of testing is an important method of procedure. It is particularly appropriate in circumstances in which the data is readily available or can be made available at a modest cost, and in which the statistical fitting procedures are not likely to be particularly expensive. For many theories this is not the case, however. In the first place, the tasks of observing, measuring, collecting and processing the data may become extremely expensive before sufficient reliability is attained to permit its useful application. In these cases it becomes important to obtain an understanding of a theory's explanatory and predictive potential a priori.
Such an understanding would enable the research group to avoid a substantial investment in quantification and testing of theories whose explanatory and predictive potential is particularly weak, and for which this conclusion could have been foreseen by adequate a priori analysis.

In addition, sole reliance on testing, in contrast to other techniques of evaluation, is not likely to afford sufficient understanding of the causes of the adequacy or the inadequacy of the theory under examination. These causes may lie either in its logical structure or in its empirical content. Testing alone does not necessarily help to identify either of these sets of causes. An understanding of both the logical structure and the empirical content is required both in the evaluation of extant theories and in the development of more powerful sets of hypotheses. Hence, a major portion of this study is devoted to a consideration of the logical and empirical adequacy of theories. These considerations apply both to theories in general and to economic theories in particular.

THE PROCEDURES EMPLOYED

Concepts Drawn from Analytic Philosophy

In view of the importance of developing an understanding of the logical and empirical conditions required for adequate explanation and prediction, the first phase of this study concentrates on a selection of concepts drawn from analytic philosophy. These concepts constitute the major ideas employed in the framework of analysis to be developed below. An initial selection of these concepts is presented in Chapter II. These concepts include a group of notions about the logical structure of science, starting with the concept of a logical term and ending with a description of the structure of theories in general. They explain the manner in which the logical terms combine with descriptive terms in order to provide the empirical assertions of science. In
In 9 addition, they explain the manner in which these sentences are combined to form theories whose function it is to pro- vide the lawlike statements used in explanation and prediction. This leads to a consideration of the role of these laws or lawlike statements in the structure of explanation and prediction. Examination of this structure provides an initial set of criteria for the logical and empirical adequacy of theories in general. These criteria focus attention on the concepts of deductive relatedness or deductive subsumption. This concept is explained in terms of the concepts of form- alization. These concepts, in turn, provide the bases for a consideration of techniques of partial formalization and the use of models in theory construction and evaluation. Out of this investigation of the logical structure of science evolves the thesis that guides the development of this study. This thesis asserts that the explanatory and predictive potential of any given theory depends on both the truth of its empirical propositions and the logical adequacy of its structure. Hence, the techniques designed to facilitate the evaluation of extant theories and the concepts required to guide the development of more powerful sets of hypotheses must perform a dual role. They must facilitate the recog- nition of the truth of the propositions employed. In addition, they must facilitate the recognition of the logical adequacy not only of individual propositions but also of sets 10 of propositions as they function in science. This thesis, drawn from the field of analytic philosophy, provides the bases for an initial assault on the w' problems of theory evaluation. These problems are illustrated by examination of the classical theory of economic growth. 
Since there are a number of interpretations of this particular aspect of economic theory, it was decided to concentrate on one particular formulation and in so doing to avoid a number of questions more closely related to the history of economic thought than to the problems of choosing among alternative sets of hypotheses. The formulation chosen for examination is the Higgins summary of the classical theory.7 This formulation was chosen because it has already attained a minimum level of formalization and because it seems to be sufficiently recalcitrant to evaluation to permit the illustration of most of the techniques developed below.

7Benjamin Higgins, Economic Development: Principles, Problems, and Policies (New York: W. W. Norton and Co., Inc., 1959), pp. 85-106.

This summary of the classical theory is presented in Chapter III. It is followed by an examination of some of the logical consequences of the theory. This examination is carried to the point where evidence of inconsistency begins to arise, and where it becomes apparent that more rigorous techniques of analysis are required.

In the initial search for these techniques it was decided to limit the actual techniques employed to those with which researchers in economics could readily become familiar. In this stage a number of abortive attempts were made at ascertaining the logical consistency of the system using traditional and symbolic logic on the level of the calculus of propositions. Direct approaches to questions of consistency and inconsistency through the use of both economic and mathematical models were also attempted. None of these approaches produced unequivocal results. The reason for this may be largely attributable to the vagueness and imprecision with which the propositions of the theory were stated.
Since these problems of vagueness and imprecision are characteristic of a large number of theories in economics, it was decided to allocate an important portion of this study to the examination and application of a variety of techniques of partial formalization. The first stage in this procedure will be the formalization of the classical axioms in terms of first differences so that the direction, if not the extent, of each of these changes can be ascertained. This new formalization will then be analyzed in terms of its logical structure. If this analysis provides unequivocal results, then it may be fruitful to proceed directly to a consideration of the application of selected techniques of mathematics and statistics. If the results are not unequivocal, then it will become important to investigate more thoroughly both the techniques of formalization and the techniques of evaluation before the mathematical and statistical approaches are attempted. The development and application of the techniques of formalization and the techniques of analysis generally complement each other. The formalization of a theory generally permits the application of more rigorous techniques of analysis, and the application of these techniques generally facilitates more adequate formalization.

The first step in this formalization would be to exhibit each of the classical axioms in terms of general equation forms, making certain that these forms adequately reflect the intent of these axioms. Once this is accomplished, attention can be returned to a consideration of analytic philosophy in order to develop a set of techniques which are more powerful than those available from the initial selection of concepts presented in Chapter II. In this initial selection, the unit of investigation was the concept of a theory as a whole.
The difficulty in finding or developing a set of techniques of sufficient logical power and analytic precision to adequately evaluate these sets of statements suggests the more rigorous analysis of the individual statements that comprise the theory. This type of analysis may be termed the analysis of propositions and may provide the basis for the evaluation of the logical structure of theories and their role in explanation and prediction.

Concepts Drawn from Mathematics and Statistics

Up to this point attention has been focused on the logical structure of theories and on a selection of concepts available from various areas of analytic philosophy. There is some question, however, whether this discipline can provide sufficiently precise tools for the analysis of the problems at hand. Most theories of economics require concepts of metricization. These concepts cannot be handled adequately within the framework of formal logic without recourse at least to general quantification theory. The use of general quantification theory usually requires the skills of a professional logician. In order to avoid this arduous type of analysis and yet to employ a system of sufficient logical power, it would seem appropriate to turn to informal applications of mathematics and statistics.

After a sufficient level of formalization has been attained, it may be possible to bring certain mathematical techniques to bear on the problem of whether, for a given theory, it is possible to uniquely determine the values of the variables given the estimates of the parameters. In the same way, it may be possible to determine, a priori, whether it is possible to consistently estimate the values of the parameters, given the values of the variables involved. Finally, if the parameters can be estimated, and if the equations can be solved, it becomes important to assess the potential accuracy and reliability with which these parameters can be estimated and the predictions made.
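The first of these questions, whether a formalized theory uniquely determines the values of its variables once the parameters are fixed, can be illustrated with a small numerical sketch. For a linear system it reduces to asking whether the coefficient matrix has a nonzero determinant, the idea behind a determinant test for linear systems. The two-equation system, its coefficients, and the function names below are hypothetical, invented only to show the kind of check involved:

```python
# Hypothetical illustration: does a formalized linear theory
# determine unique values for its variables?  For a linear
# system A x = b this is exactly the question det(A) != 0.

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def solve2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f by Cramer's rule,
    or return None when the system is not uniquely solvable."""
    D = det2(a, b, c, d)
    if D == 0:
        return None  # the variables are not uniquely determined
    return (det2(e, b, f, d) / D, det2(a, e, c, f) / D)

# Hypothetical parameterization of Y = C + I with C = 20 + 0.5*Y
# and I = 10, rewritten as  1*Y - 1*C = 10  and  -0.5*Y + 1*C = 20:
print(solve2(1, -1, -0.5, 1, 10, 20))   # -> (60.0, 50.0)

# A degenerate parameterization: the second equation merely
# repeats the first, so the theory fails to determine Y and C.
print(solve2(1, -1, 2, -2, 10, 20))     # -> None
```

Exactly the same question recurs in reverse when asking whether the parameters can be consistently estimated from observed values of the variables.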
These last three phases of the analysis, together with the rigorous analysis of a theory's logical structure, suggest that a comprehensive investigation of the techniques required to assess the explanatory and predictive potential of theories is likely to involve forays into a variety of areas in analytic philosophy, mathematics, and statistics. These forays will comprise the major investigations conducted below.

CHAPTER II

THE STRUCTURE OF SCIENTIFIC INQUIRY WITH SPECIAL EMPHASIS ON THE NATURE AND ROLE OF ECONOMIC THEORIES

INTRODUCTION

The purpose of this chapter is to examine some of the key concepts of analytic philosophy with a view to developing a thesis, or a set of concepts, which is powerful enough to guide the a priori analysis of theories of economics.1 Since there is no well established set of criteria to guide the choice of these concepts, the initial selection depends, in large part, upon the insight and imagination that can be mobilized in the initial stages of such an inquiry. Once this initial selection is thoroughly examined, however, it is expected that a set of concepts and criteria can then be developed to the point where they can be used to guide and direct the investigation of the problems of theory evaluation.

1The term a priori is used here to refer to the broad scope of scientific inquiry which is prior to systematic data collection and empirical analysis. The need for this type of analysis was established in Chapter I.

At this stage of the inquiry it would seem appropriate to initiate an appraisal of the explanatory and predictive potential of theories with an examination of the language used to express the statements involved. This examination will concentrate on the terms and sentences employed in scientific discourse and set the stage for the examination of concepts, facts, laws, and theories. This will pave the way for a more detailed examination of laws or lawlike statements and their role in the structure of science.
This examination comes to focus on the structure of explanation and prediction and provides an initial set of criteria for the evaluation of the role of theories in science.

This overall view of the structure of science provides the background for a more detailed study of the nature of theories. This study comes to focus on the concepts of deductive subsumption or systematic relatedness. These concepts can be explained in terms of the concept of a calculus. This concept of a calculus leads to a consideration of the broader field of formalization and eventually to a study of certain proposed techniques of partial formalization as a means of clarifying the nature and structure of theories, and rendering them amenable to logical analysis. This phase of the investigation culminates in a consideration of the method of models as a means of testing the explanatory and predictive potential of theories a priori.

In summary, then, this chapter progresses from an examination of individual terms to an examination of statements and finally to an examination of sets of statements. The examination of individual terms concentrates on logical
They constitute a very small sample of the concepts available from the broad scope of analytic philosophy. Other concepts will be drawn from analytic philosophy and from mathematics and statistics as the needs of the analysis arise. These needs continue to arise as these concepts are brought to bear on the problems under examination.2

2 This chapter represents, in some parts, a selection of some of the key points in Richard S. Rudner's paper entitled "On the Structure of Economic Theories," presented to the Joint Economics-Agricultural Economics Seminar, May 26, 1958, at Michigan State University, East Lansing, Michigan, which presents ideas on theory construction which have become common in the literature. In addition, it draws on: (1) Carl G. Hempel and Paul Oppenheim, "The Logic of Explanation," Readings in the Philosophy of Science, ed. Herbert Feigl and May Brodbeck (New York: Appleton-Century-Crofts, Inc., 1953), pp. 319-331; and (2) May Brodbeck, "Models, Meaning and Theories," Symposium on Sociological Theory, ed. Llewellyn Gross (New York: Row, Peterson & Co., 1959), pp. 373-401. These works present the standard treatments of some fairly well agreed upon concepts of concept formation, the structure of theories, and the logic of explanation and prediction. The problems of partial formalization have not been given extensive treatment in the literature. The considerations on partial formalization presented below are drawn from Rudner, op. cit.

AN EXAMINATION OF THE LANGUAGE OF SCIENCE

Before any exposition of the underlying structure of science is attempted, it is desirable to explain some of the central terms and concepts employed. This section will be concerned with this explanation.

TERMS AND SENTENCES

The immediate product of any science is a set of declarative sentences--a set of linguistic entities. In thinking about science, it is useful to distinguish between the product and the process of science. The product itself contains only declarative sentences, yet the meta-language--the language used in discussions of science--may contain any or all of the sentence forms of a natural language. These declarative sentences of science are made up of two types of terms: descriptive terms and logical terms. The descriptive terms or predicates may be usefully classified into several levels. The first level of predicates expresses properties, characteristics, or attributes of individual things or events. The second level of predicates denotes properties of these properties. The third level of predicates expresses properties of properties of properties, and so on for higher level predicates. These descriptive terms are connected together to form statements of fact. These statements in turn may be joined to each other to form compound statements of fact. It is the logical terms mentioned above that do this connecting. These include: "and," "or," "if ... then," and "if and only if." They do not denote anything; they, and other logical terms, function only to give the language its form by connecting those terms that do denote something, and by expressing the connections which obtain among the facts expressed. If a sentence is stripped of its meaning by replacing its constituent statements with symbols, it still retains the form given to it by the logical terms employed.3

Thus far it has been pointed out that words can be classified into two types: logical words and descriptive words. The sentences of science can also be classified into two
types: empirical or synthetic statements and analytic statements.

3 It should be pointed out in passing that the family of logical terms alluded to above includes more than just those connectives mentioned. They include: all (x), some (∃x), and is a member of (x ∈ a)--the terms of quantification and class membership--and groups of other terms depending upon the formulation chosen. But in its most sophisticated forms, all of logic and mathematics can be constructed in terms of five primitives: (1) the Sheffer stroke, which is a connective in terms of which "or," "and," and "not" can all be defined; (2) the quantifier (x) or (∃x); (3) the notion of class membership or identity; (4) variables; and (5) grouping indicators.

These are distinguished from each other on the basis of whether or not their truth or falsity can be determined from their form and form alone. Statements of the form "If X then X" or "Either X or not X," for example, are judged to be true no matter what statement X stands for, as long as it always stands for the same statement. Similarly, statements of the form "X and not X" can be judged false without recourse to the meaning of the terms employed. Statements like these, whose truth or falsity does not depend upon the meaning of the descriptive terms employed, but depends on form and form alone, are called analytic statements. The other group of sentences, those whose truth or falsity depends upon their descriptive terms as well as their form, are called empirical, contingent, or synthetic statements. It will become apparent in the ensuing development that each of these broad classes of statements plays quite different roles in the process of empirical investigation.

CONCEPTS, FACTS, LAWS, AND THEORIES4

In addition to distinguishing between two different kinds of terms and two different kinds of statements, it is also useful to distinguish between concepts, statements of fact, laws, and theories.

4 Brodbeck, op.
cit., pp. 373-401.

The word "concept" usually refers to a property or relation. A fact, on the other hand, is a particular event--that the general level of net new investment is such and such a value, for example. Part of the significance of factual statements in science arises from the way several such statements may lead to the formulation of generalizations or laws. A lawlike statement may be construed as a statement that says, given certain conditions, that wherever and whenever there is an instance of one kind of fact, there is also an instance of another kind of fact. It should be noted in passing that the occurrence of this second fact may or may not coincide in point of time with the occurrence of the first fact. It should be noted also that these laws are always empirical generalizations. They say, as Say's law does for example, that whenever aggregate supply expands, then aggregate demand also expands. This is an ordinally quantified law. A cardinally quantified law, on the other hand, would say how much aggregate demand expanded when aggregate supply expanded by one unit. This type of law would likely be expressed in equation form. But it should be noted that even though it is in equation form, it is still an empirical or synthetic assertion whose truth or falsity depends upon its descriptive content; it is not analytic. It should also be pointed out, however, that these universal empirical assertions are not the only type of lawlike statement. As shall become apparent in a subsequent section, universal analytic assertions may also play an important role in science.

Thus far the language of science has been built up by combining terms to form statements, statements by means of connectives to form compound statements, and finally, by attaching operators to sentence forms. Some of these statements are lawlike. Now some laws are related to one another in such a way as to constitute a theory.
A theory is a deductively related set of statements, at least one of which or perhaps all of which are lawlike. But the set of statements comprising a theory is also divided into two subsets: the axioms and the theorems--the latter being deduced from the former. The axioms are generally empirical laws which are, for the moment at least, assumed to be true in order to determine what other statements, the theorems, would also be true if the axioms are in fact true. Axioms need not be self-evident or otherwise privileged. They are simply empirical laws that may at the same time function as axioms of one system, theorems of another, and hypotheses in a third. They are empirical lawlike statements which are the most general statements used--most general in the sense of being those from which all other statements of a theory, the theorems, are derived.

It should be noted in passing that some empirical laws are inductive generalizations or hypotheses. The term "hypothetico-deductive system" is often used to refer to such empirical axiom systems or theories. The laws contained in these theories are usually expressed in the universal conditional form or as equations. But there are many equation forms that might be used; hence theories differ not only in their content (or descriptive terms) but also in their form. For example, laws may take the form of quantified linear equations expressing the correlation between the values of variables, or they may take the form of the Cobb-Douglas, Carter-Halter, or Spillman function, etc.

LAWS IN THE STRUCTURE OF SCIENCE

Laws or lawlike statements are central to all scientific theorizing and to the structure of explanation and prediction. It is important, therefore, to examine the structure and meaning of laws in more detail than in the previous section. This examination will concentrate on two of the outstanding characteristics of laws.
The first has to do with the truth of lawlike statements. The second concerns the characteristic of lawlikeness itself.

In order for a statement to be a law it must be a true statement. Hempel and Oppenheim argue that the requirement of high confirmation instead of truth is not sufficient.5

5 Carl G. Hempel and Paul Oppenheim, "The Logic of Explanation," Readings in the Philosophy of Science, ed. Herbert Feigl and May Brodbeck (New York: Appleton-Century-Crofts, Inc., 1953), p. 322.

If a statement were considered to be a law relative to one set of evidence and not a law relative to another set of evidence, then some serious consequences would ensue. In the event that a lawlike statement was highly confirmed at an earlier stage in science and becomes highly disconfirmed as the result of more recent empirical evidence, this relativized concept of law would force the conclusion that it was a law in the earlier stages of the discipline but that it ceased to be a law as more evidence was acquired. This relativized concept of law does not accord with common usage of the term. The requirement that a statement must be true in order to be a law, on the other hand, would permit the more usual conclusion that the limited original evidence had given a high probability to the hypothesis that the statement was a law, but that more recent evidence had reduced the probability of its being true, and hence of its being a law. In order to avoid the problems associated with the relativized concept of law, the requirement of truth will be taken as a necessary condition for a law throughout this study. Statements which have all the other necessary characteristics of scientific laws, with the possible exception of truth, are called lawlike.

The second outstanding characteristic of a law is the criterion of lawlikeness. The central notions in the concept of lawlikeness are the notions of the forms of statements and the scope of the variables employed.
The concept of lawlikeness was introduced in a previous section. It will be examined in more detail below.

One of the major characteristics of lawlike statements is that they are of universal form. They may say, for example, that "All firms tend to maximize profits," or "Every time supply decreases, with demand unchanged, prices rise." In addition, lawlike statements are usually of the "if ... then ..." form. This is referred to as the conditional form. This combined universal conditional form is expressed symbolically as (x)(Cx ⊃ Ex), which says that if any object is C it is also E. It should be noted in passing that these universal conditionals can be equivalently translated into other logical forms, so that the requirement of lawlikeness is not that laws must be stated in the universal conditional form but that they must be at least translatable into the universal conditional form.

This concept of the universal conditional statement form takes on additional clarity when it is discussed in contrast with the existential conditional and singular forms. The existential conditional is expressed symbolically as (∃x)(Cx ⊃ Ex). This says, for example, that there is at least one case in which if supply decreases, with demand unchanged, then prices rise. The comparable singular statement form would yield a statement to the effect that if supply decreased with demand unchanged, then prices fell. Both of these last two statements are quite different in their meaning from the universal conditional. Neither of these last two types of statements is lawlike in that neither type has universal range, although they are both of quantified conditional form.

There are a number of other types of statement forms whose predicates are of sufficient scope to make them universal conditionals. The two that should be distinguished here are the analytic universal conditionals and the empirical universal conditionals.
This distinction is drawn at this point in order to expose the commonly held misconception that all laws are tautologies. These types of statement forms are analyzed in detail in Chapter V and the analysis extended to cover equation forms in Chapter VI. Hence only an intuitive notion of tautologies and empirical statement forms need be presented here. It was pointed out in the foregoing section that the truth value of analytic statements (including tautologies) does not depend on the meanings of the descriptive terms employed. It depends upon their form and form alone. In the case of empirical statements, on the other hand, their truth values depend upon both their form and the descriptive terms employed. Laws may be expressed in either of these two types of statement forms. All the laws of mathematics, for example, are analytic statements. They contain no symbols referring to descriptive properties or relations. They say nothing in and of themselves about the real world.6

6 Brodbeck, op. cit., p. 377.

In the empirical assertions, on the other hand, the variables must be given meaning before their truth or falsity can be established.7 For example, Galileo's law of falling bodies is stated d = 16t². This is an empirical law; it is not a tautology. The letters "d" and "t" must be given meaning as distance and time before its truth value can be ascertained. Its truth value cannot be established (as in the case of tautologies) from examination of its form and form alone. The statements "Y = aX₁^b₁ X₂^b₂ X₃^b₃"8 and "D = a - bP," drawn from production economics and from price theory respectively, can also be considered as empirical generalizations of universal conditional form whose truth values cannot be ascertained without examination of both form and content. It will be demonstrated in Chapter V that it is these empirical, contingent, or synthetic statements that perform the key role in expressing the empirical content of science.
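The dependence of these empirical laws on the interpretation of their descriptive terms can be illustrated with a short computation. The sketch below, in Python, evaluates the three equations just cited once their letters have been given numerical interpretations; the coefficient values supplied are purely hypothetical and are used only to make the point.

```python
# Illustrative evaluation of the three empirical laws cited above.
# The coefficient and input values are hypothetical.

def falling_body_distance(t):
    """Galileo's law d = 16t^2, with d in feet and t in seconds."""
    return 16 * t ** 2

def cobb_douglas(a, b1, b2, b3, x1, x2, x3):
    """Production function Y = a * X1^b1 * X2^b2 * X3^b3."""
    return a * x1 ** b1 * x2 ** b2 * x3 ** b3

def linear_demand(a, b, p):
    """Demand function D = a - bP."""
    return a - b * p

# Until "d" and "t" are interpreted as distance and time, d = 16t^2 is
# an uninterpreted form; once interpreted, observation can test it.
print(falling_body_distance(2))                    # 64
print(linear_demand(100, 2, 10))                   # 80
print(cobb_douglas(1.0, 0.5, 0.3, 0.2, 100, 50, 25))
```

Nothing in the computation itself certifies these laws; whether each equation is true depends on observation once its letters designate measurable quantities, which is exactly the contrast with tautologies drawn above.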
This does not deny a role to the tautological group of analytic statements, however. In fact, it will also be demonstrated that tautological statements like "If A > B and B > C, then A > C" and "(x+y)² = x² + 2xy + y²," for example, play a key role in the logical structure of science.9

7 Brodbeck, op. cit., p. 378.

8 Brodbeck, loc. cit.

9 Tautology is used here very broadly as a synonym for analyticity, i.e., truth in virtue of meaning alone.

The intuitive distinction between tautological and empirical assertions presented above, together with their more rigorous analysis in Chapter V, should be sufficient to dispel the commonly held misconception that all laws in science are tautologies.

Having briefly examined the forms of lawlike statements and concluded that they must be universal conditionals, or translatable into universal conditionals, and either empirical or tautological in form, attention will now be turned to an examination of the scope of the statements employed. One of the basic requirements of lawlikeness is that of unlimited scope. For example, the statement, "Every time supply increases with demand unchanged, prices fall," would be considered lawlike. On the other hand, the statement, "Every time supply increased during the past fiscal year, with demand unchanged, prices fell," would not be lawlike. In the first instance the scope of the sentence is unlimited; in the second case it is limited to the finite number of changes occurring within a given year, and this limitation on scope is apparent from the meaning of the terms employed. The second may therefore be simply an accidental generalization even though it may be fully confirmed by observation of all cases. This is perhaps the most important distinction between lawlike and non-lawlike statements.
The first is accepted as true while many cases of it remain to be determined, the further unexamined cases being predicted to conform with it.10 The second is accepted as a description of certain facts after the determination of all cases has been accomplished. No prediction of any of its instances is based on it. A lawlike statement, if it is true, is therefore something that can be used in predicting other cases. It is lawlike only if it is acceptable prior to the determination of all its instances.

10 See Nelson Goodman, Fact, Fiction and Forecast, Cambridge University Press, p. 26.

Finally, in order for a sentence to be lawlike, its acceptance must not depend upon the determination of any given instance; that is, it must contain no essential occurrences of designations of particular objects. This final requirement of lawlikeness poses some serious problems. Defined terms which appear to have no essential occurrences of references to particular things often turn out, upon examination, to be defined in terms referring to specific things. Hence certain restrictions have to be placed on the predicates that occur in fundamental lawlike sentences. The required restriction is that the predicates employed be purely qualitative--that is, that a statement of their meaning does not require reference to any one particular thing or spatio-temporal location. For example, the terms "elasticity of production," "consumption function," and "marginal efficiency of capital" are acceptable terms or phrases within lawlike statements, while "rates of growth in excess of those experienced in the developed areas," "the rates of capital accumulation in the underdeveloped countries," or "prices higher than in the 1930's" are not acceptable in lawlike statements. If this requirement of purely qualitative predicates is fulfilled within the fundamental laws, then it is likewise fulfilled within the derivative laws that follow from them.
At the same time, if this requirement is generally fulfilled, the requirement of non-limited scope is also fulfilled, because sentences which violate the condition of non-limited scope generally do so by making explicit references to specific things. A more rigorous analysis of lawlikeness, and hence of laws, is available in the literature of analytic philosophy.11 Sufficient discussion has been presented, however, to provide an intuitive concept of lawlikeness and hence of laws. In summary, then, laws are true statements. They are usually stated in universal conditional form or translatable into universal conditional form.12 Finally, the statements must be non-limited in scope or be derivable from more fundamental laws which are non-limited in scope. It is these lawlike statements that are required in the structure of explanation and prediction.

11 See Hempel, op. cit., and the footnotes and bibliography contained therein.

12 It should be reemphasized, at this point, that while laws are generally universal conditionals or translatable into universal conditionals, it by no means follows that all universal conditionals are laws. In fact, Craig's Theorem suggests that universal conditionals can be translated into single statements of equal predictive potential but no explanatory significance.

THE ROLE OF LAWS IN THE STRUCTURE OF EXPLANATION AND PREDICTION

EXPLANATION

The processes of explanation and prediction employ the laws or lawlike statements discussed above, together with one or more singular statements, in the manner described below. Given the set of relevant lawlike statements L1 ... Ln, and adducing the appropriate singular antecedent statements A1 ... An (statements which indicate certain conditions which occurred prior to, at the same time as, or even later than the phenomenon in question), one can explain the event E by deducing E from the A's and L's.
Hence, an event is explained by subsuming it under general laws, that is, by demonstrating that it occurred in accordance with those general laws, as a result of the occurrence of those particular antecedent conditions. To explain a phenomenon, then, is to answer the question "Why does the phenomenon occur?" and this is the same as answering the question "According to what general laws, and following what specific antecedent conditions, does the phenomenon occur?" It should be noted that "why" is used here in the sense of "how come," not in the sense of "for what motive."

The foregoing structure is used in the explanation of particular events. The explanation of general laws (statements of commonly observed regularities) is also accomplished by the same kind of procedure. For example, "Why does marginal cost eventually rise as output continues to expand?"--i.e., "Why do costs conform to the law of eventually increasing costs?" The answer is that this law is a consequence of the law of diminishing returns. Hence, the explanation of a general regularity involves subsuming it under another, more comprehensive regularity, i.e., under a more general law. Events are explained by subsuming them under laws; laws are explained by subsuming them under more general laws.

After the foregoing introduction to the structure of explanation, it becomes appropriate to examine this structure in more detail. Its outlines can be summarized as follows:

    C1, ..., Cn   Statements of antecedent conditions   )
                                                        ) Explanans
    L1, ..., Lm   General laws                          )
    -----------------------------------  Logical deduction
    E             Statement describing the empirical      Explanandum
                  phenomenon to be explained13

An explanation is made up of two sets of statements, an explanans and an explanandum. The explanandum is usually a singular statement describing the phenomenon to be explained. The explanans, on the other hand, is made up of those two sets of statements which are adduced to account for (i.e., to explain or imply) the explanandum.
The first set is made up of statements of specific antecedent conditions, the second of statements of general laws. In addition to having the foregoing structure, the following conditions of logical and empirical adequacy must be fulfilled in order for an explanation to be adequate.

LOGICAL ADEQUACY:

1) The explanans must imply the explanandum.

2) The explanans must contain at least one lawlike statement. It may be made up entirely of lawlike statements. One or more of these lawlike statements must be required in the deduction of the explanandum.

3) The explanans must have empirical content. It must be amenable to test by observation or experiment. Otherwise it could not imply E, which is a description of the phenomenon in question.

EMPIRICAL ADEQUACY:

1) The sentences asserting the explanans must be true. The reasons for this are twofold. The first is that to permit high confirmation instead of truth would lead, as in the case of laws, to a relativized concept of explanation which is not in accordance with informed usage. The second is that adequate deduction assures the truth of the conclusion provided the premises are true. It does not, however, assure the truth of the conclusions if the premises should happen to be false.

PREDICTION

Both the structure and the necessary conditions of logical and empirical adequacy are the same for prediction as they are for explanation. Hence, if a structure can explain it can predict, and if it can predict it can explain. The difference is only one of temporal orientation. If the event described by E is known to have occurred, and if the statements of the antecedents and the general laws are adduced after the event, then the event E has been explained. If, on the other hand, the statements of the antecedent conditions and the general laws are given, and E is deduced from these prior to the occurrence of the phenomenon it describes, then E has been predicted.
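The explanans-explanandum schema and its temporal symmetry can be given a minimal illustration. In the sketch below (Python), the general law is the illustrative one used in the text--when supply increases with demand unchanged, price falls--and the condition names are hypothetical stand-ins; E is deduced from the C's and the L's.

```python
# A minimal sketch of the explanation/prediction schema above.
# The general law L is represented as a conditional function; the
# antecedent conditions C1 ... Cn as recorded facts. The condition
# names are illustrative, not drawn from any actual theory.

def law_supply_price(antecedent):
    """L: if supply increases and demand is unchanged, then price falls."""
    if antecedent["supply_increased"] and antecedent["demand_unchanged"]:
        return "price falls"
    return None  # the law licenses no deduction from these conditions

# C1 ... Cn: statements of antecedent conditions.
conditions = {"supply_increased": True, "demand_unchanged": True}

# Logical deduction of the explanandum E from the C's and L's.
explanandum = law_supply_price(conditions)
print(explanandum)  # price falls
```

The same deduction counts as a prediction when the conditions and the law are given before the event, and as an explanation when they are adduced after it; only the temporal orientation differs, as the text notes.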
Hence, an explanation is not fully adequate unless the explanans which has been adduced to explain the occurrence of the phenomenon E could have been used, with time taken properly into account, to predict the phenomenon in question. This potentially predictive force is, after all, the reason why scientists are interested in explanation. Scientists explain so that they can predict, and hence provide a basis for control of the environment. More specifically, economists are interested in explaining the process of economic growth so that they can more accurately predict the values of the variables involved, so that society can control the values of some of these variables in order to be able to increase the value of another variable--the real per capita income of the economy, for example.

THEORIES IN THE STRUCTURE OF SCIENCE

Up to this point the structure of explanation and prediction and the central role of laws in this structure have been discussed, and the nature of laws investigated. The next task, then, is to show how these laws are combined to form theories, and how these theories function in providing the laws required to explain events, and to explain the lower level laws themselves.

THE NATURE OF THEORIES

It would be unwise to enter a discussion of the nature and function of theories without pausing for a moment to gain perspective. A theory is a deductively related set of laws. Laws are true hypotheses. Hypotheses are general statements about regularities. They are propositions asserting a universal connection between properties or relations. They state that everything of a certain kind either has a certain property or stands in a certain relation to other things or events that have certain properties--all of which is to say that hypotheses are of the general form "everything which has the property A has the property B," where A and B are sufficiently complex properties or relations.
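The general form "everything which has the property A has the property B" can be made concrete by checking it against a finite record of cases. The following Python sketch is purely illustrative--the firms and their properties are invented--and, as the scope requirement discussed earlier makes clear, passing such a check can confirm but never establish a lawlike statement.

```python
# A sketch of testing a hypothesis of the form
# "everything which has the property A has the property B"
# against a finite record of cases. The data are invented.

firms = [
    {"name": "F1", "competitive": True,  "maximizes_profit": True},
    {"name": "F2", "competitive": True,  "maximizes_profit": True},
    {"name": "F3", "competitive": False, "maximizes_profit": False},
]

def universal(cases, A, B):
    """(x)(Ax -> Bx): no observed case has A without B."""
    return all(B(c) for c in cases if A(c))

def existential(cases, A, B):
    """(Ex)(Ax & Bx): at least one observed case has both A and B."""
    return any(A(c) and B(c) for c in cases)

is_comp = lambda c: c["competitive"]
maxes = lambda c: c["maximizes_profit"]

print(universal(firms, is_comp, maxes))    # True for these cases
print(existential(firms, is_comp, maxes))  # True
```

The universal check holds for every recorded case, while a lawlike statement claims unlimited scope; the finite record can therefore confirm the hypothesis without exhausting its instances.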
To assert that a theory is a deductively related set of laws is to assert that from some of the hypotheses taken as premises (axioms) all the other hypotheses (the theorems) logically follow. An intuitive grasp of deductive relatedness can be obtained by considering the order in which a deductively related set of statements is usually presented in written form. Deductive systems are usually written so that the spatial relationships among the statements on the page correspond to the logical relationships among these statements. The logical relationship is one of deductive subsumption. The highest level hypotheses (the premises) are recorded first. These are followed by the intermediate level hypotheses, which are deduced directly from the higher level hypotheses. These are followed by the lowest level hypotheses (the conclusions of the system), which are deduced from the premises, from the intermediate hypotheses, or from some combination of the two. The highest level hypotheses or premises constitute the axiom set. The intermediate level hypotheses function as conclusions from the highest level hypotheses and as premises for the lowest level hypotheses. The lowest level hypotheses are conclusions derived from either of the previous levels. These intermediate and lowest level hypotheses constitute the theorems of the system.

THE NATURE OF CALCULI

One of the most useful concepts in understanding the nature of theories and of deductive relatedness is the concept of an underlying calculus or structure. This concept plays a key role in the process of theory evaluation. During the last century it was discovered that it is possible to choose a set of symbols to substitute for the propositions of a system, to codify a set of rules for the manipulation of these symbols, and to manipulate them and translate them back into the natural language in order to obtain a set of statements which are the deductive consequences of the original propositions.
This representation of a deductive system, so that there is a rule of symbolic manipulation for each principle of deduction, is called a calculus. The practical advantages that derive from the use of a calculus include the fact that it enables deductions to be made by the manipulation of symbols alone and the further fact that the correctness of these deductions can be checked by inspection of the relationships between these symbols. In addition, once a deduction is proved in a calculus it is simultaneously proved for every interpretation of that calculus. This concept of a calculus or an underlying structure is basic to many of those aspects of formalization concerned with the structure, as opposed to the content, of the individual statements and the sets of statements of which theories are constructed.

THE NATURE OF DEDUCTIVE RELATEDNESS AND FORMALIZATION

The definition of theory as a deductively related set of laws can now be reformulated in order to point up some of the essential concepts involved: "A theory is a systematically related set of statements, at least one of which is lawlike and which is empirically testable." Of the three pivotal terms in this definition, the term 'lawlike' has already been discussed. The discussion of the term 'empirically testable' will be deferred until the discussion of quantification. The concept requiring further elaboration, at this point, is the notion of 'systematic relatedness.'

'Systematically related,' as used here, means deductively related or, more precisely, the relationship of deductive subsumption. This is the relationship embodied in the concept of a calculus. The process of fully articulating the deductive system that constitutes a theory (which involves exhibiting its underlying calculus) is called the process of formalization.
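The advantage claimed for a calculus--that deductions can be made and checked by manipulating symbols alone, and that a result proved in the calculus holds under every interpretation--can be sketched in miniature with truth tables. The Python fragment below treats formulas as functions of truth values; no meaning whatever is assigned to X and Y.

```python
from itertools import product

# A propositional calculus in miniature: a formula is a function of
# truth values, and its validity is checked by running through every
# assignment of truth values, i.e., by symbolic manipulation alone.

def tautology(formula, nvars):
    """True if the formula holds under every assignment of truth values."""
    return all(formula(*vals) for vals in product([True, False], repeat=nvars))

# "Either X or not X" is true on form alone:
print(tautology(lambda x: x or not x, 1))          # True

# "X and not X" is false on form alone (a contradiction):
print(tautology(lambda x: x and not x, 1))         # False

# Modus ponens is a valid principle: ((X -> Y) and X) -> Y
implies = lambda p, q: (not p) or q
print(tautology(lambda x, y: implies(implies(x, y) and x, y), 2))  # True
```

Because the check never consults the meaning of X or Y, anything it establishes is established at once for every interpretation of the calculus, which is the point made above.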
This process of formalization can be carried out to a whole series of different levels, from the lowest level, where just one deduction has been made, to the highest level, where the entire framework has been articulated. Actually, very few theories have ever been fully formalized, and for good reason. In many areas of economics, for example, knowledge is tentative. Time spent on formalization of these theories might often be better spent on clearing up the meaning and centrality of the concepts employed. Only a very few areas--where substantial agreement exists and where most of the significant assertions about the subject have already been made, or where a purely formal system which constitutes the underlying calculus of the theory already exists--are ripe for easy full formalization.

FULL FORMALIZATION

The structure and import of full formalization become readily apparent when its relationship to the calculus of propositions is considered. An understanding of the concept of formalization can also be acquired by studying the procedure by which a language might be generated. As will soon become apparent, a language usually contains a deductive system and hence provides an appropriate place to begin an exposition of deductive systems in general. In broad outline, the steps that would be followed in the generation of a language, English for example, are as follows: First, all the symbols of the language would be listed. Then, all the grammatical rules of the language (the entire syntax of English, for example) would be codified. Finally, all the sentences of the language would be generated by applying the rules to the symbols. The fact that a natural language is continually in a state of flux and hence does not have a finite set of terms or rules need not detract from the central point made here. The scientist usually works with an artificial, not a natural, language, and hence has control over the terms and rules that he admits into the system.
This does not necessarily limit the number of terms and rules in an artificial language, but it does mean that, conceptually at least, these large numbers do not create a problem.

Each of these three elements--the terms, the rules, and the sentences--is divided into sub-parts. The class of terms is divided into a set of primitive or undefined terms and a set of defined terms. The latter are defined in terms of the former, but do not enter in an essential (i.e., uneliminable) fashion, because they could always be replaced, in any statement, by the primitives in terms of which they are defined. The fact that there must be a class of terms that function as primitives is apparent from the further fact that if all terms were to be defined they would have to be defined in terms of each other. This would involve vicious circularity of definition. The need for a set of defined terms, on the other hand, is apparent from the need to introduce new terms into the language of a theory, and the added notational convenience that definition provides. The rules are also divided into two sub-classes--rules of formation and rules of transformation.

From these elements of the system a group of all possible combinations of terms could be generated by some procedure designed to form all possible combinations and permutations of these terms. The formation rules are applied to this set of all possible combinations and permutations in order to select the permissible (or grammatically correct) statements from the group. The deductive system within the language will not contain all of these permissible statements. It will contain only a subset of statements which will become the axioms or underived statements of the deductive system. The rules of transformation are then applied to this subset containing the axioms in order to generate all possible statements that can be deduced from these axioms.
Hence, the rules of formation correspond, in the case of the generation of the English language, to the syntax of English. The rules of transformation, on the other hand, are composed of the rules of deductive logic. The class of sentences on which attention is focused in the process of formalization is also divided into two sub-classes -- the axioms and the theorems. These two sets of sentences together constitute the deductive system. In summary, then, a deductive system, or a deductively related set of statements, is generated by the application of two sets of rules on two sets of terms. In the foregoing example, the generation of the English language, terms are combined to form statements, and statements are combined to form more complex statements, by the application of the rules of formation. The simple and complex statements are derived from other statements by the application of the rules of transformation, i.e., the rules of deductive logic. The same procedure applies in the generation of an artificial system or a theory. The first task is to identify the relevant variables (terms) of the system and to generate a set of statements connecting these variables in such a way as to describe the phenomenon to be explained or predicted. These statements, or a selection from them, become the axioms of the system. Then the theorems or the implications of these axioms are deduced with the aid of the transformation rules -- the rules of deductive logic. It is these axioms and theorems together that constitute the deductive formulation of a theory -- the part of the language system with which science is so vitally concerned. It is the codification of these axioms and theorems that constitutes the formalization of a theory. Some of these axioms or theorems are then used as inputs into the structure of explanation or prediction and in this way are brought to bear on the problems with which science is concerned.
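The generative procedure just described, formation rules selecting the well-formed statements and transformation rules deriving theorems from axioms, can be sketched as a small program. The miniature language below (three atoms, one conditional connective, and modus ponens as the sole transformation rule) is the editor's illustrative construction, not a system discussed in this study.

```python
# Primitive terms (atoms) and one connective, the conditional "->".
ATOMS = {"p", "q", "r"}

def well_formed(s):
    """Formation rule: a statement is an atom, or (A -> B) where A and B
    are themselves well-formed."""
    if s in ATOMS:
        return True
    if isinstance(s, tuple) and len(s) == 3 and s[1] == "->":
        return well_formed(s[0]) and well_formed(s[2])
    return False

def deductive_closure(axioms):
    """Transformation rule (modus ponens): from A and (A -> B), infer B.
    Repeated application generates every theorem derivable from the axioms."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(theorems):
            if isinstance(s, tuple) and s[1] == "->" and s[0] in theorems:
                if s[2] not in theorems:
                    theorems.add(s[2])
                    changed = True
    return theorems

axioms = {"p", ("p", "->", "q"), ("q", "->", "r")}
assert all(well_formed(a) for a in axioms)
print(deductive_closure(axioms))  # contains "q" and "r" as derived theorems
```

The axioms and their derived theorems together constitute the deductive system; the formation rule plays the role of the syntax, the transformation rule that of deductive logic.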
The power of formalization rests in part on the researcher's ability to elaborate the deductive consequences of the axioms chosen. These deductive consequences, as well as the axioms themselves, become the source of hypotheses for empirical investigation, and hence inputs into the structure of explanation and prediction. The more completely the axioms and the theorems can be codified, the more knowledge is obtained about the deductive consequences of these axioms and hence about the axioms themselves. The examination of such a system is usually both logical and empirical. The deduction of contradictory theorems supplies logical evidence of inconsistency. Empirical evidence for or against any one of the theorems is also evidence for or against the axioms employed.

Interpretation of Fully Formalized Systems

Such a formalized system may be interpreted or it may be purely formal, with no recourse to meanings at all. If it is purely formal, then in order to make an interpretation of it, i.e., in order to make it refer to something or to take on some meaning, it is necessary to formulate a series of rules of interpretation. These rules of interpretation, or semantical rules, are also divided into two subsets -- the rules of designation and the rules of truth. Each one of these sets of rules applies to only one of the two sub-groups of terms that exist within the class of primitives. The rules of designation specify the meaning to be ascribed to each of the descriptive or empirically referential terms. The rules of truth specify the interpretations of the logical terms of the system, i.e., they determine a class of truth conditions for each type of logical connective employed. The rules of designation apply to two subsets within the set of descriptive terms. The first comprises names which designate individuals or things. The second comprises a set of predicates which designate properties of individuals or things.
It is the role of the rules of designation to specify for each of these names or predicates some thing or property which it is taken to designate. The rules of truth, on the other hand, specify that, for example, an expression of the form "A is B" is true if and only if the individual or thing designated by the name in position 'A' actually has the property designated by the predicate in position 'B'. Other rules of truth specify the truth conditions of the other logical terms. These rules of truth of the formalized system known as the calculus of propositions constitute the key concepts involved in Chapter V. In that chapter they are employed in the development of a classification system which provides a means of recognizing certain methodologically important characteristics of the axioms and theorems of the theory under examination. Recognition of these characteristics becomes extremely important in the a priori evaluation of theories in general. A complete codification of these rules of designation and of truth constitutes an interpretation of a formal system. A deductive system which has been thus formalized and interpreted and which contains among its axioms some empirically testable statements is what is meant by a fully formalized empirical theory. Some appreciation of the power of such formal systems can be gained from a realization that when a theorem is proved in a formal system, like the lower functional calculus or some branch of mathematics, all of its alternative interpretations in formalized empirical systems are simultaneously proved, provided these interpretations satisfy the axioms of this calculus. Some appreciation of the applicability of such formal systems or calculi to the problems of theory evaluation will be attained from consideration of the concept of a model discussed in this chapter and applied in Chapter VI.
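A minimal sketch may make the division of semantical rules clearer. The names, predicates, and designata below are hypothetical; the point is only the shape of the rule of truth for atomic sentences of the form "A is B".

```python
# Rules of designation: each name designates an individual; each
# predicate designates a class of individuals (its extension).
# The particular designata here are invented for illustration.
designation_names = {"Britain": "economy_1", "France": "economy_2"}
designation_predicates = {"industrialized": {"economy_1"}}

def is_true(name, predicate):
    """Rule of truth for atomic sentences: 'A is B' is true if and only
    if the individual designated by the name A has the property
    designated by the predicate B."""
    return designation_names[name] in designation_predicates[predicate]

print(is_true("Britain", "industrialized"))  # True
print(is_true("France", "industrialized"))   # False
```

A complete codification of both dictionaries, together with truth rules for each logical connective, is what constitutes an interpretation of the formal system.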
PARTIAL FORMALIZATION

The foregoing examination of the nature of full formalization indicates that many theorists will likely have to settle for something short of full formalization for some time to come. But awareness of the power of full formalization in theory construction suggests that partial formalization may also be a fruitful method of inquiry. The next task, therefore, is to examine possible methods of partial formalization.

Systematic Presupposition in Partially Formalized Theories

It is commonly observed that partially formalized theories often presuppose large segments of theories from other disciplines. Price theory often presupposes segments of mathematics, for example. Theories of economic growth often presuppose concepts embodied in putative sociological theory. The terms referring to those concepts borrowed from other disciplines are generally not indigenous to the theory in question. These terms, like "first derivative" as it occurs in economic theories, remain unexplained in most of these theories. But unlike the groups of unexplained terms that are indigenous to the theory and function as relative primitives, this group of unexplained terms does not ordinarily have the same set of functions. The importance of recognizing this difference rests on the fact that this recognition is often sufficient to lead the researcher to an awareness that some statements from other disciplines are being presupposed as premises in the theory in question or as transformation rules. Current theorizing is sometimes carried out without making these deductions explicit, and without codification of the non-indigenous concepts which are actually being presupposed. The process of making these deductions and presuppositions explicit is part of the process of partial formalization.
This aspect of formalization is just as important when the presupposed concepts come from the researcher's intuition or common sense as when they come from other scientific disciplines. Systematic presupposition is an important aspect of partial formalization in that it helps to clarify the meaning of the concepts presupposed, and in so doing helps to render the otherwise unrecognized presuppositions amenable to logical and empirical examination and test.

Quasi Deduction in Partially Formalized Theories

Another important technique of partial formalization is that of quasi deduction. Quasi deduction is defined as inference which fails to meet the rigorous requirements of deduction by failure to make all the premises and rules and steps explicit. This includes the indigenous as well as the non-indigenous premises referred to above. Here again explicit codification figures importantly in the process of partial formalization in that it subjects new theorems to the weight of empirical evidence. Evidence for or against these theorems constitutes evidence for or against the theory as a whole.

Concept Formation in Partially Formalized Theories

There are three general techniques for introducing concepts into partially formalized theories. The first is the technique of explicit definition. This technique is the same in partial formalization as it is in full formalization, except that in the former there is no unique set of primitives in favor of which all the defined terms may be eliminated. The second technique is that of specifying a sufficient condition for the use of these concepts.
Consider the statement, "if a market is characterized by such a large number of small buyers and sellers that no single buyer or seller can influence the price, then that market is perfectly competitive." This statement specifies a sufficient condition for the occurrence of the property of being perfectly competitive, and affords a sufficient condition for the application of the term 'perfect competition', providing the statement itself is a true statement. It should be noted, however, that this is not a definition, for the same theory may yield a number of different sufficient conditions, none of which is a necessary condition for the term's use. These sufficient conditions may provide an entree for the process of evaluation, however, in that they help clarify the concepts employed. In addition, if these concepts are couched in observation terms, they permit the researcher to submit these axioms and theorems to the weight of empirical evidence. Finally, partially formalized theories often contain terms which cannot be introduced in either of the foregoing ways. These remaining terms generally function as relative primitives in the system. They are relative (or tentative) in the sense that terms once of primary importance in a theory may play only minor roles when the subject matter is more thoroughly understood. Hence terms may be primitive in one context and assume a non-primitive role in a more fully elaborated context. Another of the techniques of partial formalization, then, involves the identification of what is and what is not primitive in the system, and in this way affords a clearer concept of the meaning of the terms employed. This differs from the technique of explicit definition in that the former makes the definition explicit while the latter simply recognizes which concepts cannot be explicitly defined and hence must be treated as relative primitives.
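The logical behavior of such a sufficient condition can be sketched as follows. The numerical thresholds below are invented for illustration; the essential point is that satisfaction of the condition licenses applying the term, while failure of the condition settles nothing.

```python
def perfectly_competitive_by_sufficient_condition(n_buyers, n_sellers,
                                                  max_price_influence):
    """Applies the sufficient condition quoted above: many small buyers
    and sellers, none able to influence price. Returns True when the
    condition licenses applying the term 'perfectly competitive'; returns
    None (not False) otherwise, because a sufficient condition that fails
    to hold proves nothing. The thresholds are purely illustrative."""
    if n_buyers > 1000 and n_sellers > 1000 and max_price_influence == 0:
        return True
    return None  # the term may still apply on some other sufficient condition

print(perfectly_competitive_by_sufficient_condition(5000, 5000, 0))  # True
print(perfectly_competitive_by_sufficient_condition(3, 2, 0.4))      # None
```

That the function never returns False is precisely what distinguishes a sufficient condition from a definition, which would license both application and denial of the term.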
All three of these techniques for introducing concepts into partially formalized theories suggest ways of recognizing the concepts employed in any theory under examination. Recognition is a prerequisite to ascertaining their testability. Assessment of their testability is in turn a useful first step in ascertaining the truth values of some of the axioms and theorems in the theory under examination. The truth values of these statements are of primary importance in the evaluation of theories in economics.13

MODELS IN THE STRUCTURE OF SCIENCE

This examination of the specially selected set of logical concepts required for the a priori evaluation of economic theories will conclude with a consideration of the nature and role of models in science. The concept of a model is an outgrowth of the concepts of calculi and formalization developed above and is basic to the particular application of the method of models presented in Chapter VI. As May Brodbeck has pointed out, not only has the term "model" appeared with increasing frequency in recent social science literature, it has taken on a decided halo effect.14 According to current fashion, models are regarded as good things and, needless to say, "mathematical models" are regarded as even better. In non-technical use, the term 'model' usually refers to a norm or a replica. The non-technical use of 'model' to refer to a replica provides a point of departure from which to explain the technical meaning of model. A model train is a model of the real thing by virtue of the isomorphism, or sameness of structure, that exists between the two.

13 Geoffrey P.E. Clarkson, The Theory of Consumer Demand: A Critical Appraisal (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963), p. 11.

14 May Brodbeck, "Models, Meaning and Theory," Symposium on Sociological Theory, Llewellyn Gross, ed. (Evanston, Illinois: Row, Peterson and Company), p. 373.
The isomorphism requires two conditions: (1) that there be a one-to-one correspondence between the elements of the model and the elements of the original, and (2) that certain relations, like scale in this example, be preserved. The foregoing notions can be extended to theories. For any two theories, then, if the laws of one have the same form as the laws of the other, they are isomorphic with each other, and one is a model of the other. This implies, of course, that there exists a one-to-one correspondence between statements in the two theories, and that certain relations are preserved upon translation.15 The next problem is to determine whether or not one theory is in fact a model of another. The following steps will illustrate the process. First, the underlying structure of the original theory may be developed by replacing each of its descriptive terms by letter variables and the logical operators by the appropriate symbols. This leaves only the variables and the logical operators, as they would appear, for example, in the sentential calculus. Then, the model is generated by systematic substitution of the terms of the new theory for the variables in the underlying calculus. This is what is meant when it is claimed that a model is an alternative interpretation of an underlying calculus. In each of the laws, the concepts of the original have been replaced by the concepts of the new discipline. If empirical test shows that the resulting laws or hypotheses are true, then the laws of both the new and the old have the same form. This means that the two areas are isomorphic. An excellent example of isomorphism obtains between the theories of production economics and consumption economics.

15 For a thorough treatment of the concept of isomorphism see Gerald J. Massey, "The Philosophy of Space and Time" (unpublished Ph.D. dissertation, Department of Philosophy, Princeton University, 1963), p. 123.
If the underlying structure of production economics were developed by uniform substitution of letter variables for the descriptive terms, and symbols for the logical operators, and if uniform substitution of the concepts of consumption economics were effected, and if the laws of both production and consumption economics are true, then isomorphism exists between these two areas, and production economics would provide a model of consumption economics.16 There is nothing absolute about the relationship between a theory and a model, however. Consumption economics could just as well be thought of as a model of production economics as vice versa. Usually, however, the better known area is taken as a model for the area about which less is known, the better known area providing the form of the laws for the area under investigation. In current writings on theories of economic growth there are many putative models in vogue. Economic growth may be likened to the growth of a plant or an animal, for example, with life cycle and all. The ability to determine whether one such theory is a model of the other would be extremely useful in assessing the explanatory and predictive potential of certain recent theories of economic growth. Two ideas drawn from the above discussion are likely to be particularly useful in this regard. First, it must be possible to state what it is that is in one-to-one correspondence with what. Second, it must be possible to determine what formal similarities exist. Not only must the terms correspond; at least some of the laws connecting the concepts must have the same form in order for a putative model to be of any use in the explanation or prediction of the phenomena in question.

16 John R. Hicks established this isomorphism between production economics and the new consumption economics, which permitted only ordinal utility measurement, by showing that everything in the new consumption economics was also in production economics.
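The two conditions for one theory being a model of another can be sketched as a mechanical check. The schematic "laws" below are toy constructions, not the actual laws of production or consumption economics; only the shape of the test is of interest.

```python
def is_model_of(laws_original, laws_candidate, correspondence):
    """Checks the two conditions for isomorphism sketched above:
    (1) a one-to-one correspondence between descriptive terms, and
    (2) preservation of the form of the laws under translation.
    Laws are given as tuples of tokens in a shared schematic notation."""
    # Condition (1): the correspondence must be one-to-one.
    if len(set(correspondence.values())) != len(correspondence):
        return False
    # Condition (2): translating each law of the original must yield
    # exactly the laws of the candidate, i.e. the laws have the same form.
    def translate(law):
        return tuple(correspondence.get(tok, tok) for tok in law)
    return {translate(law) for law in laws_original} == set(laws_candidate)

# A toy illustration with invented schematic laws:
production = [("output", "=", "f", "(", "inputs", ")")]
consumption = [("utility", "=", "f", "(", "goods", ")")]
mapping = {"output": "utility", "inputs": "goods"}
print(is_model_of(production, consumption, mapping))  # True
```

The dictionary plays the role of the rules of substitution; the set comparison is the demand that the translated laws have the same form as those of the putative model.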
These conditions, (1) one-to-one correspondence and (2) the preservation of the relationships that must obtain in order for a putative model to work on the same principles as the theory in the new discipline, suggest ways of analyzing putative models. One of these ways is applied in Chapter VI. This particular approach differs from the usual use of models in science. The general use of models is in theory development rather than in theory evaluation. It is generally hoped that it will be possible to obtain new insights about the area under investigation, like consumption economics, for example, by finding similarities between the new area and some more fully developed discipline, like production economics. The use of the concept of a model as a diagnostic device is quite different from the approach specified above. One such use is in testing for the consistency of a theory or set of axioms. The general approach to this examination is as follows. The first step is to exhibit the underlying calculus of which the theory in question constitutes one interpretation. The second step is to give this calculus an interpretation which is different from the one represented by the classical axioms and which is analytic. Such interpretations include a wide variety of mathematical interpretations as well as those of Boolean algebra, the higher functional calculi, and other systems of formal logic. This analytic interpretation, then, constitutes a model of the original axiom set. Further, it constitutes a model in which every statement is demonstrably true. As pointed out above, when the statements in the model are all true, the isomorphism is complete and the laws of the theory and the model have the same form. The fact that the statements in the model are all true has another extremely important consequence: that the statements in the theory are mutually consistent.
The fact that a true model of a theory implies that the theory is consistent follows as a corollary of the fact that if the theory were inconsistent then no true interpretation of the underlying calculus would be possible. Hence, if a set of statements which are an interpretation of the calculus are demonstrably true, then the theory is proved to be consistent.17 The reason for translating the underlying calculus into an analytic interpretation rests on the fact that every statement in an analytic system is demonstrably true, while statements in the empirical theory are at best only highly confirmed.18 If the underlying calculus has at least one true interpretation then that calculus is consistent. If the underlying calculus is consistent then, providing all the axioms of the theory are true, the original theory itself, and every other interpretation of it, insofar as it is an instance of this calculus, is also consistent. The fact that the calculus is consistent simply means that it is not self-denying. It may be true or it may be false depending upon the empirical interpretations employed, but it is not self-denying or inconsistent. The general use of the concept of a model as a means of theory evaluation in economics is fraught with a number of problems. In the first place, only a very small percentage of extant economic theories have attained sufficient formalization to permit the identification of their underlying calculus. In the second place, it is often difficult to find an adequate interpretation which is also analytically true. Besides being difficult, any number of failures to find such an interpretation does not prove inconsistency; it only leaves the problem of consistency open to question.

17 This conclusion is rigorously demonstrated in Chapter V.

18 In light of Gödel's incompleteness theorem this statement must be qualified, but it is considered to be sufficiently precise for the purposes at hand.
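A sketch in the spirit of the numerical approach developed in Chapter V may be helpful here. The axioms below are toy examples, and the brute-force search over a finite numerical domain is the editor's simplification; but it exhibits the logic just stated: a true numerical interpretation proves consistency, while failure to find one proves nothing.

```python
from itertools import product

def find_numerical_interpretation(axioms, variables, domain):
    """Searches a finite numerical domain for an assignment of values to
    the variables under which every axiom comes out true. Finding one
    proves the axiom set consistent; exhausting the domain without
    success proves nothing, and merely leaves consistency open."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(axiom(assignment) for axiom in axioms):
            return assignment  # a true model, hence a consistency proof
    return None

# A toy axiom set: profits exceed wages, and output is their sum.
axioms = [
    lambda v: v["R"] > v["W"],
    lambda v: v["O"] == v["R"] + v["W"],
]
model = find_numerical_interpretation(axioms, ["O", "R", "W"], range(5))
print(model)  # a satisfying assignment exists, so the axioms are consistent
```

Note the asymmetry the text insists upon: a returned assignment is a proof, but `None` is not a disproof, since an inconsistency might only be revealed by a larger domain or a sharper analysis.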
In view of the difficulties mentioned above, the problem at hand is to develop a procedure which will permit the researcher to avoid some of these difficulties and still obtain an unequivocal answer to the question of the consistency of the theory under evaluation. This is accomplished in part by the combined use of the concepts of model construction and the concepts derived from the analysis of propositions developed in Chapter V. The concepts will then be applied, through the Method of Numerical Interpretation, to the evaluation of the classical theory conducted in Chapter VI.

SUMMARY

Up to this point the nature and structure of science have been examined. This examination started with a consideration of terms and sentences used in science. Then it progressed to a study of laws and lawlike statements. This study was continued with an examination of the role of laws, or lawlike statements, in scientific explanation and prediction. This set the stage for an examination of sets of laws, or theories, as they function in providing the laws used in science. The study of theories concentrated first on the nature of calculi or the logical structures of theories. This led to a consideration of deductive relatedness and formalization of theories. This consideration of formalization was divided into an examination of the concept of full formalization and a consideration of the techniques of partial formalization as a means of theory evaluation. These concepts of formalization and formal systems finally led to a consideration of the role of models in theory construction and evaluation. Out of this initial selection of concepts from the field of analytic philosophy the thesis of this study evolves. This thesis asserts that the value of a theory depends upon its ability to provide the lawlike statements required for the explanation and prediction of the phenomenon in question.
This ability depends upon the logical adequacy of its structure and the truth of the propositions employed. Having examined this initial selection of concepts, the next step is to investigate their application to the problems of theory evaluation. The theory chosen for investigation is the classical theory of economic growth. The outline of this theory is presented in the following chapter. This theory will be employed, throughout the remainder of this investigation, to illustrate the application of the techniques of theory evaluation which are developed as this study progresses.

CHAPTER III

EXAMINATION OF THE CLASSICAL THEORY OF ECONOMIC GROWTH WITH SPECIAL EMPHASIS ON THE TECHNIQUES OF LOGICAL ANALYSIS

A SUMMARY OF THE CLASSICAL THEORY OF ECONOMIC GROWTH

Higgins has summarized what he takes to be the main thesis of the classical theory of growth.1 This summary no doubt does some injustice to most of the members of the classical school. It is a summary of their major points of agreement, and ignores, to a large extent, the differences and the polemics of that period's economics. It concentrates on the works of Smith, Mill, and Malthus and abstracts what he believes to be the bare bones of their theory of growth, translating them into modern terminology and stating them as functional relationships, taking care to close up the system in such a way that it appears, at least on the surface, to be determinate.2 His presentation will be summarized below.3

1 Benjamin Higgins, Economic Development: Principles, Problems and Policies (New York: W.W. Norton and Company, Inc., 1959), pp. 85-106.

2 The concept of determinateness will be defined and discussed in a later section.

3 In this context the term theory of economic growth usually refers to both economic growth and decline. This type of theory is often applied to both the developed and the underdeveloped economies and is to be distinguished from theories of development.
Development theories generally apply only to underdeveloped economies. They may include aspects of social and political change as well as economic change and development.

Then the theory will be examined and an attempt made at evaluation of its explanatory and predictive potential.

Proposition 1: The Production Function

The classical concept of the aggregate production function has prevailed in many theories of growth down to modern times. It claimed that output, O, was a function of the size of the labor force, L, the supply of known resources, K, the stock of capital, Q, the level of technique, T, and the way in which these factors are combined.

O = f1(L, K, Q, T)   (1)

They thought that this function would probably exhibit constant returns to scale, but that, since the supply of known resources (principally land in this context) was fixed, the most important function was that subfunction which treated the level of land as a constant and traced the relationship between output and population at the existing level of technology. Both increases in the supply of known resources and of technology would shift this subfunction upwards. It is also fairly clear that they had in mind the traditional three-stage production function and that they thought that Europe was, in their time, well into stage two, with APP falling but MPP still positive.

Proposition 2: Technological Progress Depends on Capital Accumulation

Another proposition that has stood the test of time is that technological progress, T, is a function of investment, I.

T = f2(I)   (2)

They assumed that there was always a plentiful supply of better techniques and new products to introduce (except in agriculture), but that the rate of introduction was limited by the rate of capital formation. Hence, they stressed the need for savings and capital accumulation rather than for technological advance.
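The classical picture of stage two, with APP falling but MPP still positive, can be illustrated numerically. The functional form O = 12L^2 - L^3, with land held fixed, is an invented example, not a form proposed by the classicists or by Higgins; it is chosen only because it exhibits the traditional three stages.

```python
def output(labor):
    """A hypothetical three-stage aggregate production function with land
    held fixed (illustrative functional form): O = 12*L**2 - L**3."""
    return 12 * labor**2 - labor**3

def app(labor):
    """Average physical product, O / L."""
    return output(labor) / labor

def mpp(labor, h=1e-6):
    """Marginal physical product, dO/dL, by central difference."""
    return (output(labor + h) - output(labor - h)) / (2 * h)

# Stage two, where the classicists placed the Europe of their day:
# APP is falling but MPP is still positive.
for L in [6.5, 7.0, 7.5]:
    print(L, round(app(L), 2), round(mpp(L), 2))
assert app(7.0) < app(6.5) and mpp(7.0) > 0
```

For this form APP peaks at L = 6 and MPP reaches zero at L = 8, so the interval between them is the stage the text describes; shifting the curve upward would represent improvements in technology or in the supply of known resources.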
Proposition 3: Investment is a Function of Profits

A third proposition that has played a central role in the classical theory, and ever since, is that investment, I, is a function of profits, R.4

I = f3(R) = dQ   (3)

Here investment refers to net new investment, or the net addition to the capital stock, dQ, and R means profits in the sense of returns to the fixed factors, or returns above variable costs.

Proposition 4: Profits Depend on the Labor Supply and the Level of Technique

Unlike some of the more recent theories, the classical theory held that profits were a function of the labor supply and the level of technique.

R = f4(L, T)   (4)

The classicists looked upon the whole process of growth as a race between technical progress and population growth. It was through the rate of profits, and hence of capital accumulation, that the outcome of this race was to have its effect. Improvement in technique tended to increase profits. But the growth of the labor force was associated with the employment of more workers on the same land, lowering APP, raising AVC, and thus lowering profits for any given index of product prices. This ignored the response of product prices to increased aggregate output on one hand and increased consumer demand due to both increased output and increased population on the other. The effect of increased population on profits is an empirical question, depending for its answer on the shape of the aggregate production function and the shifts in aggregate demand. The classicists seemed to ignore these effects, but realized that the final outcome of what they assumed to be the depressing effect of population growth on profits and the presumed increasing effects of technology on profits could not be answered a priori.

4 The author has taken the liberty, at this point, of replacing the variety of symbols used by Higgins for "is a function of" with the more common "fn" nomenclature.
Mill, for example, thought that technology was winning, in the short run at least. This will be discussed more fully in one of the succeeding sections.

Proposition 5: The Size of the Labor Force Depends on the Size of the Wage Bill

Another part of their theory was the 'wages fund doctrine', i.e., that the rate of population growth depends on how much money is available to pay wages. The labor force was thought to vary in proportion to the growth in population,

L = f5(W)   (5)

where L is the total labor force and W represents the wage bill. Their contention was that there were no checks on the size of labor-class families except the wages available to them and the number of children that could subsist on a given level of real wages. Hence, as wages rose, population rose and the real wage per capita was driven back down to the subsistence level. This assumption was probably quite appropriate to eighteenth century Europe and remains so for many underdeveloped countries today.

Proposition 6: The Wage Bill Depends on the Level of Investment

The classicists thought of capital as consisting partly of a "wages fund", W, which was made up of savings and put to work through investment. Hence,

W = f6(I)   (6)

The foregoing six equations constitute the basic functional relationships involved in the classical theory of growth. But this results, according to Higgins, in seven endogenous variables and only six equations. Hence, to close the system, i.e., to make it determinate, he adds equation seven, which says that total output is equal to profits plus wages.

O = R + W   (7)

This means, in effect, that the national income is the total cost of all goods and services produced, and that this amount is divided between workers and others.
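The sign pattern of these behavioural equations can be encoded and checked with illustrative functional forms. The coefficients below are hypothetical choices by the editor; only the monotonicity properties, positive throughout except for the labor term in equation four, are taken from the text.

```python
# Illustrative functional forms for the six behavioural equations.
# The coefficients are invented; only the signs matter here.
f1 = lambda L, K, Q, T: (L * K * Q * T) ** 0.25   # (1) output
f2 = lambda I: 1.0 + 0.5 * I                      # (2) technology
f3 = lambda R: 0.2 * R                            # (3) investment = dQ
f4 = lambda T, L: 10.0 * T - 2.0 * L              # (4) profits
f5 = lambda W: 3.0 * W                            # (5) labor force
f6 = lambda I: 0.8 * I                            # (6) wage bill

# Check the sign pattern: each relation positive monotonic, except that
# (4) is increasing in technology but decreasing in the labor force.
eps = 1e-6
assert f2(2 + eps) > f2(2) and f3(2 + eps) > f3(2)
assert f5(2 + eps) > f5(2) and f6(2 + eps) > f6(2)
assert f4(2 + eps, 1) > f4(2, 1)   # dR/dT > 0
assert f4(2, 1 + eps) < f4(2, 1)   # dR/dL < 0
print("sign pattern matches the text")
```

Together with the accounting identity O = R + W and the equilibrium condition W = wL, these forms give one concrete instance of the closed system summarized below.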
Then Higgins added an eighth variable, w, to represent the minimum wage rate, and an eighth equation to indicate the long run equilibrium condition,

W = wL   (8)

Summary of the Classical System

In summary, then, the "classical system", as interpreted, contains the following equations:

(1) O = f1(L, K, Q, T)
(2) T = f2(I)
(3) I = f3(R) = dQ
(4) R = f4(T, L)
(5) L = f5(W)
(6) W = f6(I)
(7) O = R + W

and in equilibrium in the long run

(8) W = wL

While Higgins discusses these relationships in a general way in terms of one variable as a function of one or more other variables, it is apparent from the text that he means that one variable is a positive monotonic function of the others. This is true for equations one through six except for number four. Equation four, on the other hand, says that profits are a positive function of the rate of application of new technology and a negative function of the size of the labor force.

EVALUATION OF THE CLASSICAL THEORY OF ECONOMIC GROWTH

IMPLICATIONS OF THE CLASSICAL THEORY

One of the first steps in the evaluation of this particular theory is to see what it implies with respect to economic growth in general.

VICIOUS CIRCLES AND GROWTH SPIRALS

It would appear that the circularity of causal connections is among the most important phenomena that the classicists had in mind. This idea will be discussed briefly below. The writer has previously defined the concept of vicious circles and growth spirals as follows: "When certain conditions or processes exist, they tend to set up a chain of causes and effects which act and react upon each other so as to maintain the state or direction of those conditions or processes."5 It should be pointed out that this concept is quite general. It may be useful in explaining why poor countries remain poor,
Nelson's "low-level equilibrium trap,"6 why poor countries sometimes become poorer, how poor countries can be made to grow, and how rich countries can become richer, remain at a high level of production or decline. Implicit in this idea is the concept of the circularity of the causal relations, that is, that an exogenous change in the magnitude and direction of one of the variables will set off a chain of actions and reactions which will bring about further changes in the same direction, while a change in the opposite direction will cause a circular chain reaction in that direction. It should be noted in passing that this adaptive or feedback concept of circularity bears no relation to the concepts of circularity of definition or of proof as discussed in the previous chapter. The present concept involves a purely empirical as opposed to a logical concept of circularity. These adaptive ideas of circularity appear to be central to the classical theory. The classicists were looking at the possibility of sustained high level production rather than low level stagnation, but the general circularity of causes and effects is similar. If the theory were examined at equation three, for example, the following type of circularity may be seen:

[Diagram: circular chains of first differences — dR leads to dI (Equ. 3), dI to dK (Equ. 3), dK to dT (Equ. 2), and dT back to dR (Equ. 4); at the same time dI leads to dW (Equ. 6), dW to dL (Equ. 5), and dL back to dR (Equ. 4); changes in dO follow through Equ. 1.]

If some exogenous force changes (increases) profits (the opening of a new export market for example) then investment will increase, equation three; this will lead to an increase in the stock of capital, also equation three, and hence to an increase in technology, equation two, which in turn causes profits to rise still further, equation four.

5Darrel Plaunt and Lawrence Witt, "Recent Theories of Economic Development," unpublished paper prepared for discussion at the Interregional Marketing Committee meeting in Lexington, Kentucky, October, 1959.
6R.R. Nelson, "A Theory of the Low-Level Equilibrium Trap," American Economic Review, December, 1959, pp. 894-908.
If nothing happens to stop this chain reaction the increase in profits will cause further increases in investment, the stock of capital, and the level of technology and hence further increases in profits. If other things remain constant, the continually increasing stock of capital and level of technique may cause a steady rise in aggregate output, which now becomes cumulative over time. But other things do not remain constant. It is inherent in this very growth spiral, according to the classical theory, that this increase in profits and hence in investment causes an increase in the wage fund, equation six, which in turn causes an increase in the labor force, equation five, and this increased labor force applied to a fixed stock of land and capital encounters diminishing returns to labor causing labor costs to rise and profits to fall, equation four. If this vicious circle were to continue, investment, the wage fund, and the labor force would continue to rise in a cumulative fashion and to force profits downward, finally choking off the very process of growth itself. This means of choking off the growth process was implicit in their thesis of stagnation at a high level of output. It also helps explain what they meant by saying that the continuity of economic growth depended on the outcome of the race between technology and population growth. They assumed that the development of technology was steady and that there was always a plentiful supply of new products and processes to be exploited--their application being largely controlled by the rate of new investment. They also thought that population growth, on the other hand, would soon bring about a sharp rise in wage costs as a result of sharply decreasing returns in agriculture. The outcome for them depended upon the relative strengths of these two opposing forces. Perhaps these postulates are not too far from the situation of certain underdeveloped countries today.
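The race between these two opposing chains can be made concrete in a small numerical sketch. The functional forms, coefficient names, and values below are illustrative assumptions only; the classical axioms as summarized fix nothing beyond the signs of the relationships.

```python
# A numerical sketch of the two opposing chains described above, worked in
# first differences. All functional forms and coefficient values are
# illustrative assumptions; the classical axioms fix only the signs.

def simulate(periods, b5=0.5, b6=0.3, b7=1.2, b8=0.9, bw=0.6, bl=0.8):
    R, T, L, K = 10.0, 1.0, 1.0, 5.0   # profits, technology, labor, capital
    path = []
    for _ in range(periods):
        I = b6 * R                # eq. 3: investment responds to profits
        dK = I                    # eq. 3: investment augments the capital stock
        dT = b5 * I               # eq. 2: new technology rides on new investment
        dW = bw * I               # eq. 6: the wage fund grows with investment
        dL = bl * dW              # eq. 5: the labor force grows with the wage fund
        dR = b7 * dT - b8 * dL    # eq. 4: technology raises profits, labor lowers them
        K, T, L, R = K + dK, T + dT, L + dL, R + dR
        path.append(R)
    return path

spiral = simulate(10)             # technology wins: profits rise cumulatively
circle = simulate(10, b8=2.0)     # population wins: profits are choked off
```

Under these assumptions the per-period change in profits is I(b7·b5 − b8·bw·bl), so the sign of that expression decides whether the system spirals upward or winds down — a numerical form of the "race" between technology and population growth described above.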
In areas of extreme population pressure where the man-land ratio is very high and where the rate of creation and application of new technology is quite slow (for sociological, religious or whatever reasons) it can still be expected that the rising prices of wage goods will make severe inroads into the general level of profits. Here, again, it might be argued that it is only to the extent that investment and the application of new technology proceeds at a rapid enough rate that profits can be maintained at high enough levels to induce the private sector to institute or to maintain a sustained and cumulative rate of economic growth. Indeed, this is the presupposition behind W.A. Lewis' arguments about the nature of economic growth with unlimited supplies of labor, and the difficulty of maintaining growth when the costs of labor inevitably rise due to whatever the cause may be, whether it is trade union activity, the application of social legislation from developed economies to underdeveloped economies, diminishing returns in the agricultural sector, or any one of the several other possible sets of causes.7

OTHER IMPLICATIONS OF THE CLASSICAL THEORY

All of this argument appears to have a high level of prima facie plausibility. Furthermore, it appears to be consistent with both a larger body of growth theories and with some of the evidence available from the underdeveloped areas. But some curious consequences ensue. These consequences become apparent upon further examination of the nature of the equations presented. Each of these equations, except equation number four, is assumed to express a positive monotonic relationship between the variables involved. Equation four, on the other hand, says two things. It says that an increase in technology causes an increase in profits, and at the same time, an increase in the labor force causes a decrease in profits.

7W.A. Lewis, "Economic Development with Unlimited Supplies of Labor," Manchester School, May, 1954.
On the other hand, the general level of technology and the size of the labor force are both considered to be positive monotonic functions of the rate of investment. The next step in the evaluation of this theory might be to determine what these equations imply with respect to the direction of change in each of the variables in the system and hence the direction of change in output. It was argued in the previous section, for example, that an increase in profits, acting through equations three, two, and four, causes a further increase in profits. It was also argued, at the same time, that this increase in profits, acting through equations three, six, five, and four, causes a decrease in profits. In short, these axioms imply that an increase in profits causes both an increase and a decrease in profits. After further analysis, it was found that the system also implies the same kind of conclusion for any of its other endogenous variables except output. An increase in the variable causes both an increase and a decrease in that same variable. In addition, an increase in any one of the endogenous variables, except output, leads to both an increase and a decrease in any other endogenous variable. Hence it may lead to both an increase and a decrease in output. As the theory is currently formulated, it is extremely difficult to predict the direction of the change in any one of the endogenous variables given the change in any of the other variables. This would suggest that this particular formulation of the classical theory of economic growth is not particularly useful for explanatory or predictive purposes. It would appear that there may be two reasons for the inadequacy of the present formulation. The first may be that this formulation does not provide enough information on the nature of the physical and economic relationships involved to render it useful for explanation and prediction.
The restatement of the vague relationships discussed by Higgins in terms of almost completely unspecified functional forms does not add sufficient information to render the system capable of prediction or to render it readily amenable to further analysis. The second reason for the inadequacy of the present formulation may lie in its logical structure. The foregoing analysis suggests that the system would imply a simultaneous increase and decrease in any of the endogenous variables. This suggests that the system may be inconsistent. Since the question of consistency of the axiom set is altogether crucial to the evaluation of the explanatory and predictive potential of any given formulation of a theory, this question will be examined in some detail below. This examination is limited by the imprecision of the current formulation of the theory, on the one hand, and the applicability of the logical apparatus examined thus far, on the other. Both the theory and the logic are developed with additional precision as the study progresses, but certain methodologically important considerations in the evaluation of theories can be developed with the information already available at this stage of the inquiry.

EVALUATION IN TERMS OF THE LOGICAL STRUCTURE

The logical structure of a theory includes the axioms and the theorems deduced therefrom. The axioms are, for the present at least, assumed to be true in order to determine what theorems are true if the axioms are in fact true. In order for the axioms to be true they must, among other things, be consistent. Hence, one of the criteria for the adequacy of the logical structure of a theory is that the axioms be consistent. One of the functions of a theory is to provide true theorems which function as the laws or lawlike statements required for the explanation or prediction of the events in question.
The discussion of the Hempel and Oppenheim paradigm of explanation and prediction, presented in Chapter II, listed four criteria for the logical and empirical adequacy of the structure of explanation and prediction. It will be recalled that these criteria include the logical requirements that the explanans imply the explanandum and that the explanans contain at least one lawlike statement and have empirical content. They also include the empirical requirement that the statements in the explanans be true. Since one of the functions of a theory is to provide the lawlike statements required for an adequate structure of explanation or prediction these criteria also function as criteria for the evaluation of theories in general. A large proportion of the remainder of this study is devoted to an examination, refinement and application of these criteria. This examination proceeds in several stages. Some of the analytical concepts have already been examined. The classical theory, used to illustrate their application, has already been presented. Conclusive analysis requires both the refinement of the tools of analysis and the clarification and specification of the theory under examination. The immediately succeeding analysis employs the information and the tools already available. Subsequent analyses require the refinements of the theory that will be developed in Chapter IV and the refinements in the analytical technique to be examined in Chapter V. The application of these refinements is made in Chapter VI and Chapter VII of this study. In the initial approach to the evaluation of the classical axioms, it has already been demonstrated that the requirement that the explanans imply the explanandum has not been adequately fulfilled by this particular formulation of the theory. The other criteria of logical adequacy, that the explanans must contain at least one lawlike statement, and that it must have empirical content would appear to be fulfilled by this theory.
Hence, a more general discussion of these criteria can be deferred until a later chapter. This clears the way, then, for an initial assault on the question of consistency in general and the question of the consistency of the classical axioms in particular.

THE CRITERION OF CONSISTENCY

The requirement of consistency is perhaps the most important requirement of the logical structure of a theory and is thought to be one of the most difficult to prove. Its importance is such that the existence of an inconsistency completely destroys the explanatory and predictive potential of that particular formulation of the theory. It will be rigorously demonstrated in Chapter V that the presence of an inconsistency permits the theory to imply anything at all about anything. Hence, it permits it to imply many things which are false as well as many things which are true, and in so doing, destroys the truth preserving characteristics so important to adequate explanation or prediction. A set of axioms is said to be inconsistent if the negation of any member of the set can be deduced from this set. Alternatively, a set of statements is inconsistent if a contravalid statement, that is one that denies itself, for example, "p . ~p", can be deduced from this set. The logical structure of inconsistency will be further clarified and rigorously defined in Chapter V. Techniques for testing for inconsistencies will be developed in Chapter VI. It will be sufficient for the study of the present formulation of the classical axioms, however, to point out two general approaches to testing for consistency and to apply the one appropriate to the level of formalization currently available.

METHODS OF TESTING FOR CONSISTENCY

The method selected for testing for the consistency of a set of axioms depends in part upon the type of theory in question and the degree of specification or formalization attained.
The methods discussed below represent two extremes in their requirements of specification and formalization.

The Method of Deduction

The method of deduction is examined in detail in Chapter VI. Only its broad outlines will be described here. It involves deducing the negation of one of the axioms or a contravalid statement from the axiom set. Success in making such a deduction proves inconsistency. But any number of failures to make such a deduction does not prove consistency. It is always possible that these failures may result from the inability of the researcher to make a difficult deduction when such a deduction is in fact possible. Direct application of the method of deduction may be made at any level of formalization from an almost completely unspecified system to one that is completely formalized. The major difference is in the level of formal logic appropriate to the analysis. Most economic theories that have attained a relatively high degree of formalization employ concepts of metricization. Adequate translation of these concepts into a logical calculus usually requires the use of one of the higher functional calculi of formal logic. The use of such calculi generally requires the skills of a professional logician. Fortunately, as will be demonstrated in Chapter VI, there is a way of avoiding this kind of arduous analysis by employing a direct proof of consistency. This procedure is based on a classification system developed out of the analysis of propositions and the method of models. The analysis of propositions is developed in Chapter V and the method of models elaborated in Chapter VI. This latter method was introduced in Chapter II. Its direct applicability to this initial formulation of the classical axioms will be considered below. The method of deduction, discussed above, provides a proof of inconsistency. The method of models, on the other hand, provides a proof of consistency.
As in the case of the method of deduction, just discussed, the converse is not true. The failure to derive such a proof of consistency does not prove inconsistency. As explained in Chapter II, the first step in the method of models is to adequately exhibit an underlying calculus of the axiom set. The axioms of the classical system constitute only one of many possible interpretations of their underlying calculus. The second step is to give this underlying calculus an interpretation which is different from its current interpretation and which is analytic. An analytic interpretation is one whose truth values can be ascertained by examination of its form and form alone. Examples of such interpretations include a wide variety of mathematical interpretations as well as those, for example, of Boolean Algebra and other analytic systems. It was argued in Chapter II and it will be rigorously demonstrated in Chapter V that success in finding such an analytically true interpretation of an underlying calculus proves the consistency of that calculus. The problem with the application of this approach to the current formulation of the classical axioms is that it would be extremely difficult to adequately exhibit the underlying calculus of such a system. This difficulty stems from the imprecision with which the axioms are expressed. The application of the method of models will have to await the further formalization of these axioms. This formalization is conducted in Chapter IV.

REFORMULATION OF THE CLASSICAL AXIOMS

Up to this point in the analysis of the classical theory it has been concluded that the current formulation of the theory renders it inadequate for explanatory and predictive purposes. It has also become apparent that it is doubtful whether the current formulation is amenable to fruitful analysis by either the method of deduction or the method of models.
The next step, then, is to reformulate these axioms in such a way as to render them amenable to such analysis. The investigation required to specify the axioms with sufficient precision to render them amenable to analysis by the method of models is beyond the scope of this chapter. This investigation will be deferred until Chapter IV. The axioms can be reformulated, however, on the basis of the information already available in the Higgins summary of the theory.8 This reformulation can be effected with sufficient precision to permit the direct application of the method of deduction. This affords an initial test for the consistency of the system. Perhaps the most effective way to formulate the information presented by Higgins is in terms of first differences. Let t1 refer to the base period and t2 refer to any subsequent period with the restriction that t1 and t2 maintain the same time interval for all variables in the system. The direction of change of each of the variables can then be indicated by the appropriate "greater than" or "less than" signs between subscripted variables. The expression [I(t2) > I(t1)], for example, can be used to represent the statement that investment in t2 is greater than investment in the base period. If investment can be measured in each of these time periods then its correspondence with reality can be tested and its truth value confirmed. These statements can then be combined to form compound statements by the use of the statement connective "⊃" to mean "if...then...", the symbol "." to mean "and", and the symbol "∨" to mean "or" as these symbols are employed in the calculus of propositions.9 Equation two of the axiom set can then be stated as follows:

[I(t2) > I(t1)] ⊃ [T(t2) > T(t1)]

This says that if investment increases between t1 and t2 then the rate of application of new technology increases between t1 and t2.

8Higgins, loc. cit., pp. 85-99.
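The mechanics of such a first-difference test can be sketched in a few lines of code. Each axiom becomes an implication between atomic direction-of-change statements (written here as, e.g., "I>" for an increase in investment), and repeated modus ponens traces out everything a given antecedent condition implies. The rule list is the writer's transcription of equations two through six, encoded in the same pattern as the example just given; it is an interpretation, not Higgins' own text.

```python
# A sketch of the first-difference axioms as implications over atomic
# direction-of-change statements ("I>" = investment rose, "R<" = profits
# fell). Equation four is split into its positive and negative parts.

RULES = [
    ("I>", "T>"),   # eq. 2: investment up implies technology up
    ("R>", "I>"),   # eq. 3: profits up implies investment up
    ("T>", "R>"),   # eq. 4(a): technology up implies profits up
    ("L>", "R<"),   # eq. 4(b): labor force up implies profits down
    ("W>", "L>"),   # eq. 5: wage fund up implies labor force up
    ("I>", "W>"),   # eq. 6: investment up implies wage fund up
]

def closure(asserted):
    """Forward-chain by modus ponens until nothing new can be derived."""
    derived = set(asserted)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in RULES:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

consequences = closure({"I>"})
```

Asserting an increase in investment yields both "R>" and "R<" in the closure, i.e., the joint increase and decrease in profits on which the inconsistency argument turns.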
If the propositions of the theory were clearly and precisely expressed then there would be little difficulty expressing them in terms of first differences. The problems arise in trying to ascertain the meaning of the propositions from the vague and imprecise manner in which they are stated. All that the researcher can do under these circumstances is endeavor to insure that his reformulation is consistent with the spirit and intent of the author. In the discussion of equation four, for example, it becomes apparent that, for the classicists, the development of capitalist economies was a race between technological progress and population growth.10 As population grew, diminishing returns would be encountered in agriculture, raising labor costs and reducing profits. But offsetting this tendency was historically increasing returns, especially in industry, through improvement in technique.11 Higgins' expression for these phenomena is R = R(T, L) which says only that "the level of profits depend on the level of technique and the size of the labor force."12 This expression is so general that it appears to be always true. By itself it says little more than that profits either rise, fall, or stay constant as a result of changes in the level of technology and the size of the labor force. The statement in this form is a tautology and, hence, is not amenable to empirical test. It is thought that what Higgins is really claiming, however, is that the classicists asserted that profits are a positive function of technology and at the same time profits are a negative function of the labor force.

9These terms were introduced in Chapter II and will be rigorously defined in Chapter V.
10Higgins, op. cit., p. 87.
11Higgins, op. cit., p. 91.
12Higgins, op. cit., p. 91.
Hence, expression four can be written in two statements as follows:

4(a) [T(t2) > T(t1)] ⊃ [R(t2) > R(t1)]
4(b) [L(t2) > L(t1)] ⊃ [R(t2) < R(t1)]

If these formulations adequately express the meaning of the classical axioms then the classical system can be summarized as follows:

1. [L(t2) > L(t1)] . [K(t2) > K(t1)] . [Q(t2) > Q(t1)] . [T(t2) > T(t1)] ⊃ [O(t2) > O(t1)]
2. [I(t2) > I(t1)] ⊃ [T(t2) > T(t1)]
3. [R(t2) > R(t1)] ⊃ [I(t2) > I(t1)]
4(a). [T(t2) > T(t1)] ⊃ [R(t2) > R(t1)]
4(b). [L(t2) > L(t1)] ⊃ [R(t2) < R(t1)]
5. [W(t2) > W(t1)] ⊃ [L(t2) > L(t1)]
6. [I(t2) > I(t1)] ⊃ [W(t2) > W(t1)]
7. O = R + W

This summary should now be amenable to analysis with the use of the calculus of propositions.

PROOF OF INCONSISTENCY

The method of deduction discussed above indicates that a set of axioms is inconsistent if a statement and its negation can be deduced from the same set of axioms. In order to deduce a single proposition from a set of conditionals, it is often useful to assert an antecedent condition. If one were to assert that investment increased from t1 to t2 then it follows from this antecedent condition and statements 2 and 4(a) that profits will increase.

2. [I(t2) > I(t1)] ⊃ [T(t2) > T(t1)]    C1: [T(t2) > T(t1)]
4(a). [T(t2) > T(t1)] ⊃ [R(t2) > R(t1)]    C2: [R(t2) > R(t1)]

At the same time, an increase in investment in conjunction with statements 6, 5, and 4(b) implies a decrease in profits.

6. [I(t2) > I(t1)] ⊃ [W(t2) > W(t1)]    C3: [W(t2) > W(t1)]
5. [W(t2) > W(t1)] ⊃ [L(t2) > L(t1)]    C4: [L(t2) > L(t1)]
4(b). [L(t2) > L(t1)] ⊃ [R(t2) < R(t1)]    C5: [R(t2) < R(t1)]

In the event of an increase in investment, the classical axioms imply both an increase in profits, C2, and a decrease in profits, C5, over the same time period, t1 to t2. Hence, this interpretation of Higgins' statement of the theory would appear to be inconsistent.

CONCLUSIONS

The analysis conducted thus far reveals that this particular formulation of the classical axioms seems to be inconsistent. This renders it useless, in its present form, for explanation or prediction. Finding an inconsistency, however, does not mean that the whole set of axioms must be discarded. Identification and removal, or amendment, of one of the offending statements might be all that is required to render the theory useful for explanation and prediction.
Alternatively, in the case that the inconsistency arises from a contravalid statement the problem could be solved by its removal. Finally, the inconsistency might be removed by changing some of the functions or by adding restrictions which would change the offending relationships in such a way as to make the whole set mutually consistent. At least two more steps must be taken before much progress can be made in identifying the offending statements and in taking the appropriate remedial action. The first is to specify the relationships in such a way as to determine the nature of each of the equations involved. This specification involves further a priori analysis in terms of economic theory and in terms of empirical evidence. This process is closely allied to the types of things econometricians do in choosing the forms of the functions for statistical analysis. It employs some of the tools of partial formalization examined in Chapter II. This inquiry will be pursued in the subsequent chapter. The second step is to develop the analytic apparatus required to facilitate the recognition of those characteristics of statements and arguments which render them either acceptable or unacceptable for purposes of explanation and prediction. This inquiry will be pursued in Chapter V. Both of these steps involve the development and use of a number of concepts having widespread significance for research methodology in economics.

CHAPTER IV

TECHNIQUES OF PARTIAL FORMALIZATION AND THE NATURE OF THE CLASSICAL AXIOMS

TECHNIQUES OF PARTIAL FORMALIZATION

The purpose of this chapter is to introduce certain aspects of the application of the techniques of partial formalization with a view to illustrating ways of clarifying a theory in order to render it amenable to a priori analysis. This inquiry is introduced at this juncture to clarify the nature of the functional relationships contained in classical theory.
In Chapter III it was concluded that it would be necessary to clarify the nature of the relationships involved before further attempts could be made to evaluate the theory in question. In Chapter II it was pointed out that this clarity is generally acquired as a theory develops toward full formalization. Formalization involves the explicit development of a theory as a completely elaborated deductive system. As Rudner points out, this goal is seldom attained.1 In Chapter III it was also concluded that there was simply not enough information available on the nature of the relationships employed to render the classical theory amenable to conclusive analysis. Little more can be accomplished without additional knowledge about the nature of the equations employed. One approach to the clarification and specification of these equations, and of similar statements in the social sciences in general, has been proposed by Professor Rudner in the work cited in Chapter II. He proposed the formalization of theories in the manner accomplished for Classical Mechanics. He points out the often observed notion that the ideal of science is not to heap together disconnected bits of information about the universe but to synthesize these into generalizations about reality. The next step is to fit these statements together in the relation of subsumption in order to construct the theories employed in the explanation or prediction of the phenomenon in question. The relationship among the statements of the theory will then approximate, to varying degrees, a complete deductive system. The full formalization of such a theory involves its complete articulation as a deductive system. Such a system has the advantages that all the axioms and the theorems are made explicit and the rules of logic specified. The explicit statement of the axioms and theorems renders these statements amenable to logical analysis and empirical test.

1Rudner, op. cit., p. 5.
Evidence for or against any of these statements constitutes evidence for or against the theory as a whole. Making the theorems and the axioms of the theory explicit is an extremely important step in the corroboration of a theory. It should be pointed out, however, that the overwhelming majority of extant scientific theories and especially theories in the social sciences are not at present susceptible to easy or fruitful full formalization. Nevertheless, there may be substantial gains from the partial formalization of social science theories. The classical theory of economic growth is a case in point. The Higgins summary already represents such a partial, though extremely elementary, formalization of the classical system. Such formalizations, it should be noted, range all the way from the extreme of almost negligible formalization to an opposite extreme of almost complete elaboration as a deductive system. This range of formalization provides substantial leeway in the extent to which it may be fruitful to formalize. The extent to which it may be fruitful to formalize depends upon the cost in terms of research time and effort, and the benefits to be derived. At this particular juncture in this study it is fruitful to formalize the classical system to a point where it becomes more readily amenable to the application of the logical tests to be developed in Chapter V and applied in Chapter VI. After these tests are applied it will probably become fruitful to again carry the process of formalization to a more advanced level in order to make some assessment of the empirical significance of the classical theory. Some of the techniques of partial formalization were introduced in Chapter II. These include the techniques of: (1) systematic presupposition, (2) quasi deduction and, (3) concept formation. The key concepts in each of these techniques will be presented below.
The key concepts in the technique of systematic presupposition include the notion that many social science theories presuppose large segments of other scientific disciplines or other prescientific areas of "common knowledge" or common sense lore. The presupposition of any such concept or set of concepts assumes that these concepts are true and that therefore deductions based on them are likewise true. It is only after these presuppositions are made explicit that the weight of evidence or of logical analysis can be brought to bear on their evaluation. Evidence for or against these presuppositions becomes evidence for or against the theory itself. It is a relatively short step from Rudner's approach of systematic presupposition to the slightly more general procedure of adducing explanations for the propositions of the theory in question. The lawlike statements and the antecedents that explain the propositions may themselves presuppose major bodies of concepts. The investigation of these presuppositions may be carried back to any other level of investigation required in order to obtain the evidence necessary for the corroboration of the presuppositions of the theory at hand. In the technique of quasi deduction, on the other hand, the central idea is that it may be possible to deduce revealing conclusions from the theory even though these deductions fail to make explicit all of the statements requisite as premises and all the rules of logic employed. The function of these deductions is again to facilitate corroboration of the theory. Several such deductions have already been made in the discussion of the classical system presented in Chapter III. A third approach to partial formalization involves an examination of the concepts employed.
As pointed out in Chapter II, concepts are generally introduced into a theory in any one of three ways: (1) through explicit definition, (2) through specification of a sufficient condition for their use, or finally, (3) as relative primitives. Careful examination of the concepts of a theory may enable the researcher to clarify these concepts, to eliminate unnecessary concepts and to ascertain the measurability of the variables and the empirical significance of the relationships involved. This examination of the measurability of these concepts will be deferred until the later part of Chapter VII when these questions, as they relate to the classical axioms, will be studied in some detail. The techniques of systematic presupposition and quasi deduction will be applied, in an informal way, to the formalization conducted below. The purpose of the remainder of this chapter is to carry the formalization of the classical system to the point where the probable nature of the functions employed become sufficiently clear that they can, at least for illustrative purposes, be subjected to the additional tests of consistency developed in the ensuing chapter.

THE NATURE OF THE CLASSICAL AXIOMS

In order to clarify the nature of the classical axioms, each of the six basic functional relationships will be discussed below. These relationships are presented in Chapter III. They are intended to represent the Higgins summary of the classical system. The Higgins summary is taken as given for the purposes of this discussion with little attention being paid to questions of whether it adequately or accurately represents the hard core of classical thought. Interesting as these questions may be, their consideration at this point would detract from the main stream of inquiry.
The questions at hand concern both the nature of these particular relationships and the general techniques available for ascertaining the nature of the relationships presented in any theory in which the level of formalization is less than desired.

Higgins makes it clear in his discussion of the aggregate production function that he believes that the classicists thought of this function as exhibiting constant returns to scale and diminishing returns to each of the variable inputs. He also makes it clear that, according to the classicists, the relevant range of this function is that stage in which both the marginal physical product and the average physical product are falling but still positive. Perhaps the simplest type of function to represent these characteristics is a Cobb-Douglas production function in which the sum of the elasticities is equal to one. Hence, the equation O = a1 L^b1 K^b2 Q^b3 T^b4 will be used to represent the aggregate production function of the classical system.

The Higgins summary is not as clear, however, on the nature of each of the other axioms employed. The second proposition says that the rate of application of new technology is a positive function of the rate of new investment. It is not clear, however, just what type of function this might be. Two of the presuppositions of this relationship are that new technology is capital absorbing and that reinvestment of depreciation reserves is not enough to take full advantage of the steady flow of new technology. This implies that the classicists thought that the y-intercept of this relationship would likely be small or even zero. Considering the slope of the function, there is very little evidence to suggest that they thought that the rate of application of new technology would increase either at an increasing or a decreasing rate as investment increased.
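The constant-returns and diminishing-returns properties attributed to this Cobb-Douglas form can be checked directly. The sketch below uses purely hypothetical coefficient values (the discussion fixes only the requirement that the exponents sum to one) and verifies that doubling all inputs doubles output while the marginal physical product of labour falls but remains positive:

```python
# Illustrative check of a Cobb-Douglas function whose elasticities sum
# to one: constant returns to scale, diminishing marginal returns.
# The coefficient values are hypothetical assumptions.

def output(L, K, Q, T, a1=1.0, b=(0.4, 0.3, 0.2, 0.1)):
    b1, b2, b3, b4 = b              # b1 + b2 + b3 + b4 = 1
    return a1 * L**b1 * K**b2 * Q**b3 * T**b4

base = output(10, 10, 10, 10)
doubled = output(20, 20, 20, 20)
print(round(doubled / base, 6))     # constant returns: ratio is 2.0

# Marginal physical product of labour falls as L increases but stays positive:
h = 1e-6
def mpl(L):
    return (output(L + h, 10, 10, 10) - output(L, 10, 10, 10)) / h

print(mpl(10) > mpl(20) > 0)        # True
```

The same check could be repeated for each of the other inputs; any exponent vector summing to one behaves the same way.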
In addition, it was assumed that there was always a plentiful supply of new techniques to be introduced with each new increase in investment. It might therefore be thought that the function would be linear. The general equation for this function would then be T = a2 + b5 I.

The third basic relationship in the classical system is that investment is a positive function of profits. Among its many presuppositions this relationship assumes that capitalists make investments because they expect to earn profits. Here profit is defined as the return to the fixed factors, or returns above variable costs. This implies that if they expected profits to be zero, they would refrain from investing. At the same time, it is to be expected that depreciation would continue. This implies that the combined effects of depreciation together with the absence of new investments could result in a net disinvestment when profits were zero. This suggests that the y-intercept may be negative. At the same time, there is little suggestion that the classicists thought that investment increased either at an increasing or a decreasing rate as profits increased. It may be convenient to conclude, therefore, that the relationship could be expressed in general form as follows: I = a3 + b6 R.

The fourth basic relationship in the classical scheme is that profits are a positive function of the rate of application of new technology and a negative function of the labour force. This relationship was discussed and treated as two separate equations in Chapter III. If a4 is permitted to take on any value, then this relationship may well be summarized in the following equation: R = a4 + b7 T - b8 L.

The fifth axiom in the classical system said that the labour force was a function of the wage fund. This presupposes the concept of an economic limit on family size.
It assumes that there are no checks on the size of working-class families except the amount of wages available to them and the number of children that can subsist on those wages. If the wage fund were very low for a given economy, it would be expected that the labour force would be small even though a sizeable population might be supported in the subsistence sector. If the subsistence wage is assumed to be constant, then the equation L = a5 + b9 W may be used to represent this iron law of wages.

The final axiom in the classical scheme held that the wage fund was a positive function of the level of investment. This assumes that the wage fund is some fraction of the amounts of money being invested. Since there is little evidence that this fraction increased or decreased as investment increased, it would appear reasonable to assume that an equation of the form W = a6 + b10 I might adequately express the wage fund doctrine.

These six equations, together with equation seven, represent a higher level of formalization than the original Higgins presentation. This formalization will form the basis of the inquiry to be developed in Chapter VI. It should be noted, however, that this summary represents not only a relatively elementary level of formalization of the system, but also one that is highly tentative. There is simply not enough information presented in the Higgins summary of the classical system to render it amenable to definitive formalization even at the level presented above. The same charge can be made against a wide variety of putative theories of economic growth, and against a wide variety of social science theories in general. Nevertheless, some additional clarification of these concepts can often be obtained by adducing some of the premises from which these axioms may have been deduced, and deducing some of the theorems that these axioms imply.
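One use of such a formalization can be shown with a small numerical sketch. If a single assignment of numbers satisfies all seven equations at once, that formalization of the axioms is consistent; this is the intuition behind the numerical tests of consistency developed later in the study. All parameter values below are hypothetical, chosen only so that a solution exists:

```python
# A minimal sketch of testing a formalized axiom set for consistency by
# exhibiting one numerical assignment that satisfies every equation.
# All parameter values are hypothetical illustrations.

def residuals(O, T, I, R, L, W, K=10.0, Q=10.0):
    a1, (b1, b2, b3, b4) = 1.0, (0.4, 0.3, 0.2, 0.1)
    a2, b5 = 5.0, 1.0             # L2: T = a2 + b5*I
    a3, b6 = 1.0, 1.0             # L3: I = a3 + b6*R
    a4, b7, b8 = 4.0, 1.0, 1.0    # L4: R = a4 + b7*T - b8*L
    a5, b9 = 4.0, 1.0             # L5: L = a5 + b9*W
    a6, b10 = 1.0, 1.0            # L6: W = a6 + b10*I
    return [
        O - a1 * L**b1 * K**b2 * Q**b3 * T**b4,   # L1: production function
        T - (a2 + b5 * I),
        I - (a3 + b6 * R),
        R - (a4 + b7 * T - b8 * L),
        L - (a5 + b9 * W),
        W - (a6 + b10 * I),
        O - (R + W),                               # L7: output identity
    ]

# The assignment below makes every residual vanish, so this particular
# numerical interpretation of the axioms is consistent:
print(all(abs(r) < 1e-9 for r in residuals(O=10, T=10, I=5, R=4, L=10, W=6)))
```

A proof of consistency for the classical axioms in general would of course require more than one contrived parameterization; the sketch only shows the mechanics of the test.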
Evidence for and against these premises and theorems provides additional knowledge of the shapes of the functions employed. These problems are sufficiently important that they will be dealt with in some detail below.

PROBLEMS OF ESTIMATING PARAMETERS

Paramount among the problems of estimating the parameters of a system are the problems of identification.

PROBLEMS OF IDENTIFICATION

The question raised at this point is not whether the coefficients can be estimated, because they generally can. The question is whether they can be estimated reliably. If they cannot be estimated by any of the modern statistical procedures, then the question becomes one of how else they might be estimated. It is hoped that they can be estimated so that they can provide statements about reality which are reliable enough to be regarded as true. Then the researcher will be able to ascertain what else will be true about reality if these statements are in fact true.

SIMULTANEOUS EQUATION METHODS

The classical theory of economic growth is usually thought to be a general system in which all of the relationships are taken to hold simultaneously and to act and react upon each other in such a way as to mutually determine their values. It would be appropriate, therefore, if these equations could be fitted by simultaneous equation procedures. The basic theory underlying the simultaneous equations approach was developed about 1943. The methods based on this theory have come into wide use over the past twenty years.1 One of the advantages of these methods over traditional least squares methods of estimating the parameters is that they take account of the simultaneous determination of the variables. They were developed for use with systems of linear equations. Their use can readily be extended to systems which become linear in logs. The classical system, as interpreted in this study, is taken

1 This general approach to systems of equations can be found in such publications as U.S.D.A. Agricultural Handbook No.
94, "Computational Methods for Handling Systems of Equations Simultaneously," by Joan Friedman and Richard J. Foote, and in Chapter 10, by Chernoff and Divinsky, of Studies in Econometric Method, Cowles Commission for Research in Economics, Monograph 14, p. 236.

to contain a production function of the Cobb-Douglas type, and six linear equations. Leaving aside the questions of the best fitting structure for the time being, the researcher may fruitfully ask if the system would be identifiable if it were treated as linear in logs.

Identifiability is a mathematical property of an equation that indicates whether the structural coefficients can be estimated by statistical means. The degree of identifiability of an equation may be described as just identified, overidentified, or underidentified. A just identified equation permits unique determination of its coefficients by the usual statistical means. An overidentified equation, on the other hand, has the mathematical property that a number of alternative estimates of its structural coefficients can be obtained. In this case, the limited information or some other procedure must be employed in order to estimate its coefficients uniquely. An underidentified equation does not permit estimation of its structural coefficients by these means. The criteria referred to above rest on the rank and order conditions of the set of equations of which each structural equation is a member. Rank and order both refer to the matrix of coefficients of such a system. The order refers to the number of columns in the matrix, and the rank refers to the order of the minor of highest order whose determinant is non-zero.

These rank and order criteria are variously stated in the literature in terms of rules of thumb usually called "counting rules."2 These counting rules are generally limited to a statement of a necessary but not a sufficient condition for identifiability, in that the more exact rank criteria are often omitted.
An equation is just identified if the number of variables in the system (endogenous and exogenous) minus the number of variables in the particular equation is just equal to the number of endogenous variables in the system minus one. It is overidentified if the number of variables in the system minus the number in the equation is greater than the number of endogenous variables in the system minus one. It is underidentified if the number of variables in the system minus the number of variables in the equation is less than the number of endogenous variables in the system minus one. The same conditions are more succinctly summarized by Koopmans and Hood in the following way.3 If K** represents the number of exogenous variables in the system but not in the equation in question, and GΔ

2 A.M.S. Agricultural Handbook No. 146, Analytical Tools for Studying Demand and Price Structures, (U.S.D.A., Washington, D.C.), p. 62.

3 Tjalling C. Koopmans and William C. Hood, "The Estimation of Simultaneous Linear Economic Relationships," Chapter VI, Studies in Econometric Method, Cowles Commission for Research in Economics, Monograph 14, (Chapman & Hall, Limited, London, 1953), p. 138.

represents the number of endogenous variables in the equation, then the condition for being just identified is as follows: K** = GΔ - 1. If the equation is overidentified, K** > GΔ - 1, and if underidentified, K** < GΔ - 1.
If K and Q in the classical system are exogenous, the degree of identification of each equation, on the assumption that it can be treated linearly, is presented in the following table:

Test of Identifiability

Equation   K**   GΔ   GΔ - 1   Degree of Identification
L1          0     3      2     Not identifiable
L2          2     2      1     Overidentified
L3          2     2      1     Overidentified
L4          2     3      2     Just identified
L5          2     2      1     Overidentified
L6          2     2      1     Overidentified
L7          2     3      2     Just identified

In the case in which the capital stock Q is treated as endogenous, and the system again treated as linear in logs, the results would also be something less than encouraging. In this case the production function would still be underidentified and all six of the other equations would be overidentified.

Other attempts to make it identifiable would include treating two variables not in the production function as exogenous. This might be accomplished by using lagged variables for I, R and W, but this is likely to involve some problems because of the pseudo-recursiveness of the system. All three of these variables both determine and are determined by other variables in the system. Another attempt to make it identifiable would be to enlarge the system by adding variables; but for growth problems, for which annual data are likely to be the best available, the sample size is almost certain to be small. It is not likely, therefore, that this approach would be particularly rewarding.

Failure to establish the identifiability of the system suggests that the simultaneous equations techniques are not applicable to the system as a whole, though they may be applicable to sub-sets of equations within the system.4 This does not necessarily mean that the parameters cannot be estimated; it just means that it would likely be difficult to estimate them in such a way as to take adequate account of the simultaneous relationships involved.

4 Foote, op. cit., p. 61.
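The counting rule just described is mechanical enough to be written down directly. The sketch below applies only the order condition (the exact rank criteria are omitted, as in the counting rules themselves) and reproduces the table above from the lists of variables appearing in each equation:

```python
# A sketch of the counting rule for identifiability, on the assumption
# that the system can be treated as linear (in logs).  For each equation,
# K** is the number of exogenous variables in the system but not in the
# equation, and G-delta the number of endogenous variables in the equation.

ENDOGENOUS = {"O", "T", "I", "R", "L", "W"}
EXOGENOUS = {"K", "Q"}

# Variables appearing in each equation of the formalized classical system.
EQUATIONS = {
    "L1": {"O", "L", "K", "Q", "T"},   # production function
    "L2": {"T", "I"},
    "L3": {"I", "R"},
    "L4": {"R", "T", "L"},
    "L5": {"L", "W"},
    "L6": {"W", "I"},
    "L7": {"O", "R", "W"},             # output identity
}

def degree(eq_vars):
    k_star2 = len(EXOGENOUS - eq_vars)       # K**
    g_delta = len(eq_vars & ENDOGENOUS)      # G-delta
    if k_star2 == g_delta - 1:
        return "just identified"
    return "overidentified" if k_star2 > g_delta - 1 else "not identifiable"

for name, vars_ in EQUATIONS.items():
    print(name, degree(vars_))
```

The output matches the table: L1 is not identifiable, L4 and L7 are just identified, and the remaining equations are overidentified.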
Having reasonably good evidence that the classical system is not identifiable as presented, and having failed to find a way of making it identifiable, perhaps the next step would be to investigate estimation procedures other than the methods of simultaneous equations.

REDUCED FORM METHODS

The next approach to estimation of the parameters of the classical axioms might be directly through least squares procedures. Foote sounds a warning on this point too, however, noting that direct least squares estimates of the coefficients of an equation belonging to a simultaneous system are in general statistically biased. The following example is presented for purely illustrative purposes, with no claims being made about its application to reality.

Suppose a system were postulated in which output (O) was a function of investment (I), with the level of known resources (K) treated as an exogenous variable. Suppose in addition that investment is a function of output, with the stock of capital available in the previous period (Q) treated as exogenous. These equations could be written as follows:

(L'1)  I = b21 O + b22 Q   (Investment function)
(L'2)  O = b11 I + b12 K   (Production function)

These equations constitute a system of two linear equations in two endogenous variables. Solving them for the endogenous variables in terms of the predetermined variables K and Q yields the reduced form equations:

(L'3)  O = [b12 / (1 - b11 b21)] K + [b11 b22 / (1 - b11 b21)] Q
(L'4)  I = [b21 b12 / (1 - b11 b21)] K + [b22 / (1 - b11 b21)] Q

If these reduced form equations are fitted by least squares procedures, the estimates obtained for their parameters may be used to derive the parameters of the structural equations. Writing pij for the reduced form coefficients, so that O = p11 K + p12 Q and I = p21 K + p22 Q, dividing p12 by p22 provides b11. Dividing p21 by p11 provides b21. Substituting b11 and b21 into p11 yields b12, and substituting them into p22 yields b22. These estimates of the structural coefficients will be statistically consistent if the predetermined variables are known without error and if the unexplained residual terms each have a probability distribution whose average is zero and whose variance is independent of the independent variables.5

5 Foote, op. cit., p. 58.

The foregoing example illustrates the use of the method of reduced forms for a system that is just identified.
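The algebra of recovering the structural coefficients from the reduced form can be verified numerically. In the sketch below the "true" structural values are arbitrary assumptions; the point is that, because the system is just identified, the divisions described above recover them exactly:

```python
# Illustrative check of the reduced-form algebra for the just-identified
# two-equation example: O = b11*I + b12*K, I = b21*O + b22*Q.
# The "true" structural values below are arbitrary assumptions.

b11, b12, b21, b22 = 0.8, 2.0, 0.5, 1.5
d = 1.0 - b11 * b21

# Reduced-form coefficients: O = p11*K + p12*Q and I = p21*K + p22*Q.
p11, p12 = b12 / d, b11 * b22 / d
p21, p22 = b21 * b12 / d, b22 / d

# Recover the structural coefficients from the reduced form:
b11_hat = p12 / p22                       # (b11*b22) / b22
b21_hat = p21 / p11                       # (b21*b12) / b12
b12_hat = p11 * (1 - b11_hat * b21_hat)
b22_hat = p22 * (1 - b11_hat * b21_hat)

print([round(v, 10) for v in (b11_hat, b12_hat, b21_hat, b22_hat)])
```

Each recovered value agrees with the assumed structural value, confirming that the mapping from reduced form to structure is unique in the just-identified case.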
For such systems it is possible to determine the structural coefficients uniquely from the parameters of the reduced form equations. For overidentified equations, on the other hand, at least two values of the structural coefficients are obtained. Neither value is statistically consistent, and there is no direct way of deciding which answer to use.

This problem can be illustrated briefly by assuming that L'1 now contains a second predetermined variable, say technology (T), so that I = b21 O + b22 Q + b23 T. The investment function would still be just identified, but the production function would now be overidentified. The reduced form equations would now be:

(L"1)  O = [b11 b22 / (1 - b11 b21)] Q + [b11 b23 / (1 - b11 b21)] T + [b12 / (1 - b11 b21)] K

(L"2)  I = [b22 / (1 - b11 b21)] Q + [b23 / (1 - b11 b21)] T + [b21 b12 / (1 - b11 b21)] K

The structural coefficient b11 could now be obtained either by dividing the coefficient of Q in (L"1) by the coefficient of Q in (L"2), or by dividing the coefficient of T in (L"1) by the coefficient of T in (L"2). This problem occasioned by the overidentification of an equation could be handled directly by one of the maximum likelihood approaches. Its relevance at this point, however, is with respect to the applicability of reduced form procedures in estimating the structural parameters of the classical system.

APPLICABILITY OF REDUCED FORM METHODS TO THE CLASSICAL SYSTEM

Two problems arise with respect to the applicability of reduced form procedures to the estimation of the structural coefficients of the classical system. The first has to do with the identifiability of the system; the second with the form of the functions.
Problems of Identifiability

It will be recalled that the application of the counting rules to the classical axioms, on the assumption that they could be treated linearly, revealed that one equation was underidentified, two were just identified, and four were overidentified. In order to obtain an equation for any of the endogenous variables in terms of all the exogenous variables (K and Q), it is necessary to substitute equations for the desired endogenous variables into the production function. To solve for I, for example, in terms of K and Q, equation L7 was substituted for O to obtain R + W = a1 L^b1 K^b2 Q^b3 T^b4. The resulting expression proved difficult to solve for I in such a way that it would be possible to determine the values of the structural coefficients from the parameters of the reduced form thus obtained. After a number of attempts at solution, the same type of expression was obtained for each of the other five endogenous variables, and the same type of conclusion follows. It would be extremely difficult to estimate the values of the parameters of the structural equations of the classical system by the method of reduced form.

THE USE OF LAGGED VARIABLES

Before leaving the questions of whether the structural coefficients can be estimated by modern statistical methods, and which methods might be most appropriate for any given system of equations, it would be well to consider the use of lagged variables as a means of making a model identifiable. This consideration has been avoided up to this time, in the treatment of the classical axioms, in order that considerations with respect to the simultaneous equations and reduced forms procedures might be illustrated first. The use of lagged variables, in the classical system, may be considered as one means of adding variables.
It might be argued, for example, in equation L5, that the labor force is a function of the wage fund in some past period rather than the wage fund in the current year. If it is argued that population expands as a result of increased real incomes to working-class families, and hence increased ability to support larger families, then it follows that this increased population will not have its effect on the labor market until the children of this era have attained working age -- perhaps eighteen to twenty years after the increase in the wage fund. Similarly, it might be argued that entrepreneurs make their decisions to invest, not on the basis of profits in the current production period, but on the basis of some accustomed rate of profits. This accustomed rate might be a moving average of the past three years, for example. If it were thought that the use of these lagged variables were likely to account for a significant part of the unexplained residual, then R(t-1) could be substituted for R in equation L3, and W(t-1) for W in equation L5, and the system would then be:

L1  O = a1 L^b1 K^b2 Q^b3 T^b4
L2  T = a2 + b5 I
L3  I = a3 + b6 R(t-1)
L4  R = a4 + b7 T - b8 L
L5  L = a5 + b9 W(t-1)
L6  W = a6 + b10 I
L7  O = R + W

Application of the counting rule now provides the following results:

Equation   K**   GΔ   GΔ - 1   Degree of Identification
L1          2     3      2     Just identified
L2          4     2      1     Overidentified
L3          3     1      0     Overidentified
L4          4     3      2     Overidentified
L5          3     1      0     Overidentified
L6          4     2      1     Overidentified
L7          4     3      2     Overidentified

The production function would now be just identified and the other equations overidentified. On the assumption that the system can be treated linearly, the addition of these two lagged variables would permit estimation of the coefficients by simultaneous equations techniques.

If the addition of these two lagged variables permits the estimation of the coefficients, then the next major question is whether these estimates are likely to be reliable enough to permit useful prediction and explanation. The answer to this question depends, in part, upon the accuracy and reliability with which the variables can be measured.
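The revised table can be checked with the same counting rule, now with the lagged values R(t-1) and W(t-1) included among the predetermined variables. As before, this sketch applies only the order condition, and the variable lists are those of the formalization above:

```python
# Counting rule recomputed with R(t-1) and W(t-1) treated as predetermined.
# Only the order condition is applied; the rank criteria are omitted.

exog = {"K", "Q", "R(t-1)", "W(t-1)"}
endog = {"O", "T", "I", "R", "L", "W"}

eqs = {
    "L1": {"O", "L", "K", "Q", "T"},   # production function
    "L2": {"T", "I"},
    "L3": {"I", "R(t-1)"},             # investment now lags profits
    "L4": {"R", "T", "L"},
    "L5": {"L", "W(t-1)"},             # labour force now lags the wage fund
    "L6": {"W", "I"},
    "L7": {"O", "R", "W"},
}

results = {}
for name, v in sorted(eqs.items()):
    k2, gd = len(exog - v), len(endog & v)   # K** and G-delta
    label = ("just identified" if k2 == gd - 1
             else "overidentified" if k2 > gd - 1 else "not identifiable")
    results[name] = (k2, gd, gd - 1, label)
    print(name, k2, gd, gd - 1, label)
```

Only the production function is just identified; every other equation is overidentified, in agreement with the table.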
Attention will be turned at this point to a brief consideration of these problems.

PROBLEMS OF MEASURING THE VARIABLES6

It was pointed out at the first of this treatment of the question of the empirical adequacy of a theory that the statements in the explanans must all be true. It was also indicated that both tautologies and contingencies were permitted in the explanans. In addition it was argued that at least one of the statements in the explanans must be lawlike. Paramount among the problems of deciding whether these statements can be accepted as true,7 and hence among the problems of assessing the empirical adequacy of the theory, is the accuracy and reliability with which the variables can be measured.

The problems of measuring the variables are essentially different from the problems of estimating the parameters. In many cases, however, the unexplained residual associated with the parameters may be due as much to errors in measurement of the variables as to the accuracy and reliability with which the parameters are estimated. Hence questions of the measurability of the variables, and of the accuracy and reliability with which they can be measured, are of primary importance in determining the truth of the propositions in question. The first requirement is that they must be measurable in principle; the second is that they must be measurable with sufficient accuracy to permit reliable estimation of the parameters.

7 This acceptance as true is likely to be conditional on compliance with some kind of decision rule. This rule in turn is likely to be couched in terms of the extra costs of additional accuracy and the consequences of inaccuracy.

In any a priori analysis, then, one of the first questions one might ask is whether the variables are measurable. If the answer is negative for a particular variable, then the statements containing it are not readily amenable to corroboration. These statements cannot be significantly denied and hence cannot be meaningfully asserted.
It follows that to show that a given variable is not measurable is to show something particularly damaging about the current status of the theory's explanatory and predictive potential, provided that a lawlike statement containing this variable is required in the structure of explanation appropriate to the problem at hand. This is not to say anything particularly conclusive, however. Variables that are classified as not measurable at one point in an analysis may well become measurable at some later date. It is only after it is recognized that a certain variable is likely to be important that any research effort is devoted to the development of the techniques of measurement. It is likely to be more fruitful, therefore, to think of the values of the variables as being either measurable or not currently measurable, that is, non-measurable, rather than not measurable in any absolute sense.

The effect of non-measurability becomes more apparent when it is recalled that most economic theories that are ready for empirical application are couched in terms of equation forms. Their explanatory and predictive performance requires that they determine the values or the direction of change in the endogenous variables given the values or the direction of change in the exogenous variables. If the values of these variables are not measurable, at least in ordinal if not cardinal terms, then this function cannot be performed and their empirical usefulness is seriously curtailed. It would seem to follow that unless the variables in a system are currently or potentially measurable, that system can provide very little information about reality. It would be useful, therefore, if a set of criteria or techniques could be developed which would enable the researcher to ascertain whether a given variable is measurable.
If it is measurable, then it becomes important to determine whether it can be measured with sufficient accuracy and reliability to permit useful explanation and prediction, and whether the tasks of measurement can be performed at less cost than the value of the information obtained.

MEASURABILITY AND THE EMPIRICIST THESIS

One possible approach to the question of the measurability of a given variable would be to determine whether it is observable. If it is observable, then it might be argued that even though the value of a given variable is not currently measurable, it may become measurable. On the other hand, it might conceivably be thought that if it is not observable it is not likely to be measurable. The connection, or lack of connection, between observability and measurability has led to a number of important questions in both economics and analytic philosophy.

Few economists would argue, for example, that utility is observable, but many have argued that it is measurable. Orthodox members of the cardinalist persuasion have argued that utility is in fact measurable. Their more modern colleagues have argued that it is at least measurable in principle. The ordinalists, on the other hand, have argued that it is not measurable either in fact or in principle. The existence of such polemics suggests that methodologically important questions of measurement are not likely to be amenable to quick and easy solution in terms of the observability of the variables.

Problems with respect to the relationship between observability and measurability have also had a long history in analytic philosophy. The earlier forms of positivism or empiricism held that: "Any term in the vocabulary of empirical science is definable by means of observation terms; i.e., it is possible to carry out a rational reconstruction of the language of science in such a way that all primitive terms are observation terms and all other terms are defined by means of them."8 Hempel refers

8 Carl G.
Hempel, "Fundamentals of Concept Formation in Empirical Science," International Encyclopedia of Unified Science, Vol. 2, No. 7, (Toronto: University of Toronto Press, 1952), p. 23.

to this notion as the "narrower thesis of empiricism." The thesis is fraught with several difficulties, however. These center, according to Scheffler, around the use of theoretical terms and dispositional terms. Scheffler claims that the heart of more modern empiricism has been its doctrine of empirical meaning, with its sharp line between the verifiable and the unverifiable and its rejection of non-analytic, non-experiential statements as nonsense. He goes on to point out that this approach to ferreting out the nonsense propositions presents some difficulties. The problem is to ascertain whether a given statement about a particular variable expresses a genuine proposition about a matter of fact. The analytic question behind this problem is whether any of the current or proposed definitions of empirical significance or meaningfulness fulfill the criteria of meaningfulness in such a way as to permit the researcher to judge the significance of any candidate statement. While this battle rages, and until its outcome is known, the problems of recognizing those variables which are and which are not meaningful, and which are or are not measurable, remain to plague the practitioner.

Hempel's analysis of this thesis comes to bear at two points: the concept of dispositional terms, and the concept of quantitative terms. Both of these concepts will be examined below.

A dispositional term designates, not a directly observable characteristic, but rather a disposition, on the part of some physical object or entity, to display some specific reaction under certain specifiable circumstances. The 'marginal propensity to consume' is a case in point.
It is not a directly observable characteristic but a tendency on the part of consumers to allocate their income in a given pattern under different given levels of income. The concepts of 'liquidity preference' and 'economic rationality' fall into the same category. Hempel points out the problems associated with attempts to define dispositional predicates, like those mentioned above, by means of observation terms. These problems hinge on the use of the conditional statement form in the sense of material implication.9 These problems direct attention to the pathbreaking work of Carnap in his introduction of the concept of 'reduction sentences' as a means of avoiding these difficulties.10 The problem with the use of reduction sentences, however, is that they do not offer a complete definition. They offer only a partial, or conditional, determination of a term's meaning under specified test conditions. This indeterminacy of meaning can be reduced by the use of additional reduction sentences, but in general a set of reduction sentences only partially determines the empirical meaning of a dispositional predicate.

9 Hempel, op. cit., p. 25.

10 Rudolf Carnap, "Testability and Meaning," Philosophy of Science, III (1936), pp. 419-471.

The other major problem area associated with the idea of solving questions of measurability by defining these concepts with the exclusive use of observation terms involves the treatment of the metrical terms. These terms are used to represent measurable qualities like length, mass, temperature, value, etc. These measurements may be expressed by any positive real numbers. Hempel argues that it is not possible to fully define these concepts purely in terms of observables. Here again Carnap's concept of a reduction sentence, or more properly the concept of an inductive chain of reduction sentences, has been adduced to fill the gap.
In view of the problems associated with the use of dispositional terms and metrical terms, it would appear that the 'narrower thesis of empiricism' may have to be abandoned in favour of what Hempel calls the 'liberalized thesis of empiricism.' This thesis holds that every term of empirical science can be introduced, on the basis of observation terms, by means of a suitable set of reduction sentences.11 The problem with this "liberalized" thesis is that it may perhaps be too 'liberal' to rigorously bridge the gap between the concepts employed in science and the requirement of the measurability of these concepts. The central point in this concern is that these sets of reduction sentences are not definitions. They provide both a necessary and a sufficient condition for the use of terms, but these two conditions do not necessarily coincide as they do in the case of definition. It follows that the use of reduction sentences as a means of definition does not satisfy all the requirements of definition.

11 Hempel, op. cit., p. 31.

In view of the failure to define dispositionals and metricals adequately in observation terms or by the use of reduction sentences, and in view of the vast body of literature dealing with these and related problems of concept formation in empirical science, it would appear reasonable to conclude that an adequate criterion for recognizing those terms which are empirically meaningful has not been conclusively established.12 Since the solution to these questions of meaningfulness and observability appears to be, in some senses at least, prior to the solution to questions of measurability, it would seem to follow that definitive solutions to questions of measurability will have to await the fruits of more comprehensive analyses of the problems the empiricists confront.

12 See Hempel, op. cit., bibliography.
It would also appear, however, that there are a number of concepts of measurement which may permit a more direct approach to the problems of measurement than that provided by the traditional empiricist methods. Evidence for the need of such an approach is readily forthcoming from even the most cursory examination of the polemics associated with the measurability or non-measurability of 'utility', 'technological change', and 'management' as these variables are treated in agricultural economics. A selection of these concepts will be examined below.

CONCEPTS OF MEASUREMENT

Any science, like economics, in which mathematics is applied involves both counting and measurement. Counting involves discrete objects and the establishment of a one-to-one correspondence between these objects or entities and numbers. The population of a country, for example, is counted, not measured. Measurements, on the other hand, are always concerned with a continuously variable property of objects or processes, such as weight, length, duration, rate of return, etc. The fact that measurement makes sense only with respect to variable properties can be seen by comparing one variable like 'income' with another like 'national image.' It does not make sense to say of one country that it has more national image than another. It does, however, make sense to say that one country has more income than another. That is to say, properties like the size of the national income can be ordered by a comparative relation, whereas properties like national image cannot. It should be pointed out in passing that such things as the national image do have measurable dimensions, such as the per capita income, the general level of education, etc., but then it is these properties, not the concept of the national image itself, that can be arranged in order.
For a property to be measurable, then, there must be some determinable characteristic which appears in determinate form and which can be ordered in terms of 'more' or 'less', like more weight or less weight, for example. The term 'quality', for example, generally refers to a determinable property of a determinate form but is not measurable because its determinate form cannot be ordered in terms of 'more than' or 'less than'.13 Quantity, on the other hand, is usually a determinable property of a determinate form that is measurable in that different degrees of this determinate form can be ordered. Finally, certain properties like 'expectations' are not measurable in that there is no determinable property of which it might be said that one was more or less than another. In summary, in order for a property to be measurable, it must be determinable, it must be determinate, and it must be possible to put determinate forms in some kind of order.

These concepts of metricization are defined more rigorously by Hempel in his sections on 'Fundamental Measurement' and 'Derived Measurement'.14 An intuitive notion of these concepts, starting with the concept of order, is presented below.

13See Arthur Pap, An Introduction to the Philosophy of Science (New York: The Free Press of Glencoe, 1962), pp. 125-135.

14Hempel, op. cit., pp. 62-74.

A number of the standard treatments of measurement in analytic philosophy deal with two types of measurement, fundamental measurement and derived measurement. One of the basic concepts in fundamental measurement is the concept of quasi-serial ordering. This is an array that is serial except that several elements may occupy the same place in it. The basic notion of quasi-serial ordering of properties is that elements can be put in order of size, weight, or whatever characteristic is being measured.
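The steps from bare comparison to number assignment can be pictured in a toy program. This is a modern illustration added here, not part of the original discussion; the numeric masses are hidden bookkeeping that simulates what a physical balance would report, and all names are hypothetical.

```python
# Toy balance-scale model: the ordering itself uses only the comparison
# operations 'balances' and 'precedes'; the mass table stands in for the
# physical objects a real balance would compare.
mass = {"a": 2.0, "b": 3.0, "c": 2.0}

def balances(x, y):
    """Criterion of coincidence: the scale balances."""
    return mass[x] == mass[y]

def precedes(x, y):
    """Criterion of precedence: y outweighs x."""
    return mass[x] < mass[y]

def combine(x, y):
    """Place two objects together in one pan, forming a new element."""
    new = x + "+" + y
    mass[new] = mass[x] + mass[y]
    return new

# Quasi-serial order: 'a' and 'c' occupy the same place; both precede 'b'.
print(balances("a", "c"), precedes("a", "b"))   # True True

# Metricization: stipulate a unit, then require that a combined element
# receive the arithmetic sum of its components' numbers.
number = {"a": 1.0, "c": 1.0, "b": 1.5}
ab = combine("a", "b")
number[ab] = number["a"] + number["b"]          # 2.5 units

# The assignment preserves the order the balance reports:
print(precedes("b", ab), number["b"] < number[ab])   # True True
```

The additive stipulation in the last step is what turns a mere ordering into measurement proper.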
This ordering requires the specification of criteria of 'coincidence' and 'precedence.'15 In operational terms, these criteria mean, in the case of mass, for example, that if for any two elements the first outweighs the second, then the second has precedence over the first. If, on the other hand, the first just balances the second, then the first is coincident with the second. These two conditions satisfy the requirements of quasi-serial ordering, but they do not result in measurement. The next step is to metricize the quasi-serial order by assigning one real number to each element in the set in such a way that all elements of the same size will have the same number, and, for all pairs of elements, the larger element of each pair has the larger number. One of the crucial phases in determining the values to be assigned in this manner is in selecting some specific way of combining any two elements in the set into a new element and stipulating that the number assigned to the new element is the arithmetic sum of the numbers assigned to its component elements.

15See Pap, op. cit., p. 131.

Derived measurement, on the other hand, involves the determination of a metrical scale by means of criteria which presuppose at least one previous scale of measurement. Derived measurement may be considered in two separate phases -- derived measurement by stipulation, and derived measurement by law. Derived measurement by stipulation consists of defining some "new" quantity by means of others which are already available. For example, the capital-output ratio used in theories of economic development may be defined in terms of the capital stock of the nation and its gross national product. Derived measurement by law, on the other hand, generally introduces an alternative method of measuring some quantity that has already been introduced. This generally consists of discovering some law which represents this quantity as a mathematical function of some other quantity for which methods of measurement are already known.16 Some of the best examples of this type of measurement are found among the physical sciences. The measurement of temperature by means of a thermocouple, or of altitude by the use of a barometer, should suffice to illustrate the concept.

16Hempel, op. cit., p. 70.

The foregoing concepts of measurement, together with a knowledge of the empiricist thesis, provide some of the background required for an examination of questions of measurability. This background is supplemented with the considerations of partial formalization presented in Chapter II and with empirical knowledge of the variables to be employed. Incomplete as this information may be, it provides an illustration of some of the concepts required in this particular phase of a priori analysis.

MEASURABILITY OF THE CLASSICAL VARIABLES

Examination of the classical variables provides an illustration of several of the problems raised above. This brief examination will be considered in two phases -- the first will consider the problems of measuring technological change and the second will consider the other variables as a group.

Technological progress is generally thought to include such things as changes in organization and in ways of producing products and services. It often includes the introduction of new products and of new inputs. Sometimes it results in the production of more of a given product, or a different kind of product, from the same input, or from different combinations of the same or different inputs.

In view of the discussions of partial formalization initiated in Chapter II and continued in Chapter IV, it becomes apparent that this complex concept of technological advance is introduced into the classical theory without explicit definition and without a set of statements specifying a sufficient condition for its use.
It would seem to follow, therefore, that the concept is introduced as a relative primitive in the classical system. This type of introduction into a system affords little evidence as to the measurability of the variable introduced.

One approach to ascertaining the measurability of such a complex variable is to ascertain which of its several characteristics are determinable, and determinate, and whether they can be put in order. If a quasi-serial order can not be established then it is not possible to measure the characteristics. In any attempt to establish a quasi-serial order it is necessary to have a technique for deciding, for any pair of determinate forms, whether they coincide or whether one takes precedence over the other. In the case of the general level of technology, it would seem apparent that it is clearly possible to recognize different levels of technology applied at the same time in different countries and at different times within the same country. To this extent it could be argued that, taken as a whole, the concept is determinable. The next question is whether there exists a determinate form of one or more of the characteristics of the general level of technology such that it is possible to ascertain whether this characteristic coincides with or takes precedence over the others. The answer to this question again seems to be positive. It is to be expected that it could be possible to construct a labor-capital ratio, for example, to represent one aspect of differences in technology. In the same way, it would be possible to construct other ratios to facilitate comparison of other aspects of technological progress. The next question becomes one of whether it is possible to weight these indicators and to aggregate them in such a way as to adequately reflect differences in technology.
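The weighting-and-aggregation step in question might be pictured as follows. This is a hypothetical sketch with invented indicators, weights, and data; the arbitrariness of the stipulated weights is precisely the unresolved problem the text describes.

```python
# Hypothetical technology indicators for two countries (invented data):
indicators = {
    "X": {"labor_capital_ratio": 4.0, "fertilizer_per_acre": 0.5},
    "Y": {"labor_capital_ratio": 1.5, "fertilizer_per_acre": 2.0},
}

# Stipulated weights: a lower labor-capital ratio and heavier fertilizer
# use are here taken to indicate more advanced technology.
weights = {"labor_capital_ratio": -0.5, "fertilizer_per_acre": 0.5}

def tech_index(country):
    """Weighted aggregate of the indicators; larger means more advanced."""
    return sum(weights[k] * v for k, v in indicators[country].items())

# Under these weights, Y ranks above X; a different stipulation of weights
# could reverse the ordering, which is why the aggregation problem matters.
print(tech_index("X") < tech_index("Y"))   # True
```

The code makes the comparative ordering depend entirely on the stipulated weights, which is the point at issue.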
If it is, then one would expect it would be possible to eventually carry out the other steps required for measurement.

There is some real question, however, whether these tasks of weighting and aggregating are likely to be accomplished in the foreseeable future. Johnson, for example, attempts to avoid these problems by taking advantage of the notion that the so-called upward shift in the production function resulting from new technology is generally attributable to a new input or to the use of a new combination of old inputs. He argues that, in these cases at least, it becomes important to identify the new factor or new combination of factors and to treat it directly in the analysis.17

17Glen L. Johnson, "A Note on Nonconventional Inputs and Conventional Production Functions," Agriculture in ...

... these concepts were introduced as relative primitives in the system. Unlike technological change, on the other hand, they are all amenable to some form of measurement. The statistics on gross national product, for example, can be used as a measure of output. The labor statistics provide a measure of the labor force. In the same way, other sources of information provide indices for each of the other variables. For each of these variables, however, the questions of accuracy and reliability present a whole series of problems. Each of these variables is an aggregate of a complex set of inputs or products. Each is an index with all the problems of index numbers and time series data. The problems that attend the observation, collection and aggregation of these data are serious enough in the advanced countries; they are often more serious, however, in the underdeveloped areas in which this particular theory is to be applied. These problems include the difficulty of obtaining and training skilled and reliable enumerators, processors and analysts.
These problems of data collection and analysis, together with the relatively short periods for which these data are available in most underdeveloped areas, make it extremely difficult to obtain reliable statistics. They also make it extremely costly to improve the accuracy and reliability of the data already available. This extreme difficulty and high cost of obtaining reliable data places a premium on careful a priori analysis of putative theories, prior to committing the resources required to obtain reliable measures.

CHAPTER VIII

SUMMARY

PURPOSE OF THE STUDY

The purpose of this study was to investigate ways and means of evaluating the explanatory and predictive potential of theories. Attention was focused on economic theories in general, with special reference to theories of economic growth. The main focus of the study was on the processes of evaluation, rather than on the techniques of testing commonly applied to theories. This process of evaluation includes the broad scope of inquiry that may be conducted prior to the collection of data and the empirical testing of hypotheses. It concentrates on the logical structure of theories, on the one hand, and on the empirical adequacy of sets of hypotheses, on the other.

It was recognized, at the beginning of this inquiry, that the process of a priori evaluation of theories has been carried out ever since the inception of science and that therefore it might be extremely difficult to say anything really new in this area. It was also recognized that the concepts that are likely to be most useful in the a priori evaluation of theories are drawn from a wide cross section of mathematics, statistics and analytic philosophy. It was further apparent that few of these concepts have ever been drawn together into a unified approach to the problems of theory evaluation.
It was hoped, at the same time, that each of these sets of concepts, drawn from different disciplines, could be clarified by their simultaneous application to these problems. Successful selection of these concepts, and their synthesis into a workable framework of analysis, would constitute an important contribution to research methodology in economics. It was therefore hoped that, as a minimum, some of the more relevant questions of research methodology might be raised and some of the more promising approaches suggested.

METHOD OF ANALYSIS

The general method of analysis, employed in this study, involved forays into analytic philosophy, mathematics, and concepts of partial formalization. This investigation culminated in an examination of the concept of a model and its use in theory construction and in theory evaluation.

The next phase of this study involved an illustration of the application of these concepts to the problems of theory evaluation. The theory chosen for examination was the classical theory of economic growth. This theory was chosen because it seemed to be sufficiently recalcitrant to facilitate the illustration of a number of aspects of the framework of analysis being developed. The basic concepts of the theory were presented in Chapter III. This presentation was followed by a brief examination of some of the logical consequences of this statement of the theory. This examination provided a set of deductions which seemed to agree with the empirical evidence available from some of the underdeveloped areas and with a larger body of theories of economic growth. Further investigation suggested, however, that the theory contained an inconsistency. It revealed, for example, that one set of statements implied a decrease in output and, at the same time, an increase in output. Further investigation also revealed that the same types of implications hold for each of the variables in the system.
This discovery of an apparent inconsistency led to an investigation and attempted application of a number of techniques for demonstrating inconsistency. These included an attempt at rigorous deduction of an inconsistency through the use of the calculus of propositions. It also included an attempted application of the method of models. None of these initial attempts at evaluation were definitive. They did, however, point up the necessity of clarifying the statements of the theory before any more conclusive analysis could be accomplished.

The first phase of this process of clarification was conducted in the latter part of Chapter III. It involved the restatement of the classical axioms in terms of first differences. This more rigorous statement of the classical axioms permitted the direct application of the tools of formal logic presented in the calculus of propositions. This illustrated the point that, in the process of theory evaluation, the clarification and refinement of the theory itself goes hand in hand with the development and application of more powerful techniques of analysis. In addition, it demonstrated that this particular formulation of the theory is inconsistent. This result is not entirely conclusive, however. Whenever a theory is stated with the vagueness and imprecision which characterizes the Higgins version of the classical theory of economic growth, it becomes extremely difficult to ascertain whether any given reformulation of the theory adequately expresses the meaning intended by the author. If it does not, then that restatement involves the specification of a different theory. It is different, even though it is about the same set of phenomena. This uncertainty about the adequacy of this particular reformulation of the classical axioms leads to the conclusion that unequivocal results could not be obtained until the theory had been reformulated. This reformulation had to be in terms that are more precise than those obtained by specification in terms of first differences. It is only after such additional precision is accomplished that more powerful sets of analytic techniques can be brought to bear on the problems of theory evaluation. The remainder of this study is therefore devoted to the development and application of the techniques of partial formalization, on the one hand, and of a more powerful framework of analysis, on the other.

A number of the techniques of partial formalization introduced in Chapter II are employed in the clarification and specification of the classical theory of economic growth conducted in Chapter IV. The main focus of this chapter is on the nature of the functional relationships involved. Starting with the information available in the Higgins version of the theory, supplementing this with information drawn from other sources, and employing the techniques of systematic presupposition and quasi-deduction, an attempt was made at specifying the shape of each of the functional relationships employed. In view of the paucity of information available, this reformulation is at best only highly tentative. It does, however, provide an illustration of some of the approaches to partial formalization. In addition, it provides an example which can be used to illustrate the application of the framework of analysis to be developed in Chapter V.

Unsuccessful attempts at the direct application of the method of models, to the formulation of the theory presented in Chapter III, suggested that the individual proposition itself, rather than the whole set of propositions that constitute a theory, might be a more appropriate unit of investigation. Attention was therefore turned, in Chapter V, to a set of techniques which the author has called 'the analysis of propositions'.
This analysis concentrates on the individual proposition as the unit of investigation, and on the calculus of propositions as the technique of analysis. There are a number of more powerful calculi which might have been used for this purpose, but the calculus of propositions was chosen in order to keep the logical apparatus at a minimum and at the same time to provide the classification system developed in Chapter V.

The key concepts developed in Chapter V are built around two pairs of dichotomies. These dichotomies are based on the truth value characteristics of statement forms. Every statement form yields a statement which is either valid or non-valid, but not both. Every statement form is either consistent or inconsistent, but not both. Furthermore, every statement form has exactly two of these four characteristics. It follows that every statement form is one or the other of the following three types: A statement form may be valid and consistent, in which case every substitution instance yields a true statement. These types of statement forms are called tautologies. Alternatively, a statement form may be non-valid and consistent, in which case some substitution instances may be true and some substitution instances may be false. These types of statement forms are called logically indeterminate, or contingent. Finally, a statement form may be non-valid and inconsistent, in which case every substitution instance yields a false statement. These types of statement forms are called inconsistencies. The final combination, valid and inconsistent, would mean both always true and always false. This combination is logically incompatible. Hence, the three pairs of characteristics listed above provide a mutually exclusive and exhaustive set of characteristics of statement forms.
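For statement forms in the calculus of propositions, this three-way classification can be checked mechanically by enumerating every system of truth values. The following Python fragment is a present-day sketch of the idea, not part of the original study.

```python
from itertools import product

def classify(form, n):
    """Classify an n-variable statement form by exhaustive enumeration
    of all 2**n systems of truth values."""
    values = [form(*combo) for combo in product([True, False], repeat=n)]
    if all(values):
        return "tautology"        # valid and consistent: always true
    if not any(values):
        return "inconsistency"    # non-valid and inconsistent: always false
    return "contingency"          # non-valid and consistent: sometimes each

# '.' read as 'and', 'v' as 'or', '~' as 'not':
print(classify(lambda p, q: (p and q) or not (p and q), 2))   # tautology
print(classify(lambda p, q: p or q, 2))                       # contingency
print(classify(lambda p, q: (p and q) and not (p and q), 2))  # inconsistency
```

Exhaustive enumeration is feasible only for modest numbers of variables, which echoes the text's later remark that other methods become attractive as the variable count grows.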
Each of these three types of statement forms plays a different role and has a unique effect on the logical structure and on the empirical assertions of science. These roles and effects are discussed in Chapter V. They are explained in more detail in Appendices A, B and C. These discussions are summarized below. It was demonstrated in Chapter V that the presence of an inconsistency permitted the theory to imply anything at all, true or false, sensible or nonsensical, about the phenomenon in question. Hence the presence of a single inconsistency renders the theory useless for explanatory and predictive purposes. The role of the valid statement forms, on the other hand, is mainly in the provision of the rules of deductive logic. The third type of statement form, the contingency, may yield statements that are true and statements that are false. Substitution instances of this latter type of statement form comprise the empirical assertions of science.

In view of the roles and the effects of each of these types of statement forms in science, it would be useful to be able to recognize any given type of statement form. The criteria of logical and empirical adequacy require, among other things, that the explanans imply the explanandum. They also require that the explanans must have empirical content. This means that at least one of the statements in the explanans must be contingent or synthetic. That is, they cannot all be analytic even though the relationship between the explanans and the explanandum must be that of logical implication. Finally, each of the statements, taken individually, must be consistent, and the whole group, taken as a unit, must be consistent.
The asymmetry of the truth value characteristics of statement forms described above provides the key to the development of methods of recognizing each of these types of statement forms and hence of assessing the explanatory and predictive potential of a theory. It will be recalled from this classification that all tautologies are true statements. Hence, if one finds a substitution instance which is not true, then that statement form is not tautologous. All that is required to prove that it is not tautologous is to find one substitution instance which is not true. Any number of true substitution instances, on the other hand, will not prove it tautologous. In the case of inconsistencies, every substitution instance is false. If one succeeds in finding one substitution instance which is true, then that statement form is not inconsistent. Since it must be either consistent or inconsistent, it therefore follows that it is consistent. Hence, finding a single substitution instance which is true is enough to prove consistency. Finally, in the case of contingent statement forms, if one can find one substitution instance which makes it true, and one substitution instance which makes it false, then it is contingent.

In the case of economic theories, however, it is often difficult to analyze the statements in such a way as to adequately exhibit the statement forms involved. As a result, it is difficult to ascertain the truth value characteristics of the individual statements employed. These difficulties are compounded when attention is turned to whole sets of statements as they function in theories. The concepts summarized above provide a basis for the types of techniques developed for use in the evaluation of such theories.

Most theories of economics require the use of metric concepts. The logical analysis of statements employing these concepts usually requires the use of quantification theory.
This requires the skill of a professional logician. The same kind of results can sometimes be obtained, however, without this arduous type of analysis. They may be obtained by the use of the concepts presented above together with a minimum set of concepts drawn from mathematics.

It was argued above, for example, that any statement or set of statements is either consistent or inconsistent. In addition, any statement or set of statements which is inconsistent is always false. In terms of equations, this means that every equation or set of equations which is inconsistent is false for every possible substitution of values for its free variables. If, in examining the consistency or inconsistency of any equation or set of equations, one finds a set of values for the free variables such that all equations in the set are true simultaneously, then that equation or set of equations is not inconsistent. If it is not inconsistent, it must necessarily be consistent. This is the basis for the approach employed in the Method of Numerical Interpretation as presented in Chapter VI. The statements of the classical axioms were presented in equation form with values of the parameters assumed. Then an attempt was made to find a set of values for the variables such that all equations would be true simultaneously. Success in finding such an assignment of values is sufficient to prove the consistency of the system. Where the parameters are not known, however, it is desirable to obtain some a priori assessment of the likelihood of their taking on a set of consistent values. One approach which appears particularly promising for linear systems may be called the Test of Determinants.

This test employs the notion that the expansion of the determinant of a set of linear equations must be non-zero in order for that set of equations to be consistent. The first step in applying this concept to the problem of assessing the consistency of a set of general functions is to solve for the determinant of the system.
The next step is to write an inequality which specifies that the expansion of the determinant is not equal to zero. The final step is to examine this expression to determine the likelihood of the parameters taking on a set of values which would satisfy this inequality.

This technique is illustrated by its application to the formulation of the classical axioms presented in Chapter VI. It is applied first to the linear sub-system comprised of equations two through seven. Examination of this sub-system revealed that there are a very large number of possible values and combinations of values for which the determinant would be non-zero, that is, for which these equations would be consistent.

The next step in the examination of the classical axioms was to assess the implications of the expansion of the determinant of the sub-set for the classical system as a whole. This was accomplished by substituting functions of the parameters for the endogenous variables in the aggregate production function. In view of the fact that the values of all the endogenous variables in the system are determined by the sub-system independent of the production function, it would be only by chance that the values of the parameters would satisfy the production function. It seemed unlikely, therefore, that the classical system would be consistent.

This application of the Method of Determinants to the classical system illustrates its possible role in the process of theory evaluation in general. One of the primary requirements of the logical adequacy of a system is that it be consistent. The traditional methods, employed in economics, for treating consistency concentrated on efforts to deduce a statement and its negation or to deduce a contravalid statement. The Method of Numerical Interpretation, on the other hand, concentrates on giving the theory an interpretation which is analytic, in fact, one which is analytically true.
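The two consistency checks can be sketched together on a deliberately simple, invented linear system (not the thesis's classical axioms): the Method of Numerical Interpretation exhibits one set of values making every equation true at once, while the Test of Determinants checks that the determinant of the coefficient matrix is non-zero. All equations and parameter values below are hypothetical.

```python
# Invented two-equation illustration:
#   Y = C + I            (output identity)
#   C = 20 + 0.8 * Y     (consumption function, parameters assumed)

def interpretation_holds(Y, C, I, tol=1e-9):
    """Numerical Interpretation: do these values satisfy every equation?"""
    return abs(Y - (C + I)) < tol and abs(C - (20 + 0.8 * Y)) < tol

# One true interpretation suffices to prove the set consistent:
print(interpretation_holds(150.0, 140.0, 10.0))   # True

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum(((-1) ** j) * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Test of Determinants: the same system rewritten as
#   1*Y - 1*C = I  and  -0.8*Y + 1*C = 20, in the unknowns (Y, C).
A = [[1.0, -1.0],
     [-0.8, 1.0]]
print(det(A) != 0)   # True: non-zero, so the system is consistent
```

The contrast matches the text: the first check needs fully specified parameter values, while the determinant can be examined with the parameters left general (here it would expand to 1 - b for a marginal propensity to consume b).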
The fact that such a theory is consistent follows as a corollary of the concept that, if it were inconsistent, no true interpretation would be possible. This method has limited a priori value, however. Its application requires knowledge of, or at least the assumption of, given values for the parameters. The Method of Determinants, by contrast, states a necessary condition for consistency. This condition is stated in terms of an inequality comprising a general or unspecified set of parameters. Examination of this inequality gives some concept of the likelihood of these parameters taking on a set of values that will satisfy this inequality and hence render the system consistent. This method, while most generally applicable to linear systems, can be extended to cover monotonic functions as well. This extension increases the applicability of the Method of Determinants to a substantial number of extant theories in economics.

The final phase of this study is developed in Chapter VII. It is concerned with the elaboration and application of techniques which can assist the researcher in assessing the empirical, in contrast to the logical, adequacy of theories. These techniques centre around the problems of estimating the parameters of a theory, and of measuring the variables involved.

Consideration of the problems of estimating the parameters, given the values of the variables, comes to focus on the problems of identification, and on tests for recognizing the identifiability of sets of equations. Failure to meet the rigorous requirements of identifiability turns attention toward methods of modifying the structure in order to obtain identifiability. Failure to make satisfactory adjustments in the system ultimately directs attention to alternative types of estimation procedures.
As in the case of the method of determinants applied to the problems of logical adequacy, these techniques have never been fully developed for systems of non-linear equations, except for ...

... The problems of measuring the variables, on the other hand, are likewise fraught with a variety of problems, both logical and empirical. These problems seem to centre around the empiricist criteria of meaningfulness and ...

... The Method of Numerical Interpretation is flexible enough to be applicable to a wide variety of functions, including many non-linear functions, as long as they are virtually completely specified.

A second method of examining the consistency of a system involves the use of the Method of Determinants. This method is applicable to sets of statements which are stated in general form and is therefore more applicable than the Method of Numerical Interpretation in the early phases of analysis. It is currently applicable to linear functions and to certain monotonic functions. Its extension to other systems of non-linear equations will have to await the development of this phase of mathematics.

The question of empirical adequacy, on the other hand, depends in large part upon whether or not it is possible to measure the variables. The question of whether it is possible to estimate the parameters, like the question of consistency, is a matter of logical structure. The rank and order conditions required for the identifiability of a set of equations are another aspect of the problems of consistency. The determinant of the system must be non-zero in both cases. While this can be determined for linear systems, and for certain monotonic functions, neither the criteria, nor the techniques of analysis, are available to handle these questions for systems of non-linear, non-monotonic, equations.
Progress in the a priori evaluation of theories is likely to go hand in hand with the development of both analytic philosophy and mathematics -- particularly those branches of mathematics required to handle the complicated systems of non-linear, non-monotonic, functions required in the development of more powerful theories.

APPENDIX A

SENTENCE CONNECTIVES, TRUTH TABLE TECHNIQUES, AND A CLASSIFICATION OF STATEMENT FORMS

SENTENCE CONNECTIVES

The discussion in Chapter V defining and explaining the meaning of this group of sentence connectives is summarized in the following table. The left hand section of the table presents all the different possible combinations of truth values of the constituent propositions. These are referred to as the truth conditions. The right hand portion presents the truth value characteristics of each of the different statement forms under each of the different sets of truth conditions. The whole table is referred to as a truth table.

p  q    ~p   p.q   pvq   p⊃q
T  T     F    T     T     T
T  F     F    F     T     F
F  T     T    F     T     T
F  F     T    F     F     T

The foregoing truth table was presented in order to define these sentence connectives; it is not, however, the only method used for this purpose. The method of negation of Boolean expansion, for example, has come into use since the development of the Boolean algebra. This latter procedure has some marked advantages over the truth table procedures for statement forms having a larger number of variables than can easily be handled by the truth table methods. The truth table concepts are generally simpler, however. Hence, in order to keep the logical apparatus at a minimum, the truth table procedure will be employed here. The essential concepts of the truth table procedures are presented below.
Limiting the scope of the discussion to the statement forms of relatively simple compound statements containing only two variables, it becomes apparent that the first component statement might turn out, on investigation, to be true, and the second might also be true. Alternatively, the first might be false and the second true. Similarly, the first might be true and the second false, or the first might be false and the second also false. This exhausts all the combinations of truth values that two individual component statements may acquire. When there are three distinct variables in a given statement form, there are eight possible combinations of truth values. In general, there are 2^n possible combinations, where n is the number of distinct variables in the statement form. All these combinations, called systems of values of the variables, are generally recorded under the list of variables presented at the left hand side of a truth table.

The truth value of an individual statement depends on the truth values of its component statements and the particular sentence connectives employed. Hence, the definitions of the sentence connectives employed uniquely determine the truth value of the statement under examination for any given combination of truth values of its individual components. The truth value of a given statement form is thus uniquely determined for every possible combination of truth values. Each of the truth value characteristics which a given statement form is uniquely determined to have is represented by a T or an F under the logical operator which has the widest scope in that particular statement form. In this way the truth value characteristics of the statement form can be exhibited. Hence, any statement which can be adequately represented by one particular statement form in the calculus of propositions may be analyzed in the foregoing manner. Its truth value can then be unequivocally ascertained.
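The enumeration just described can be mechanized. The following sketch (an editorial illustration; the function name is my own) generates the 2^n systems of values and evaluates the four basic connectives under each of them, reproducing the entries of the truth table above (row order aside):

```python
from itertools import product

# Generate the 2**n systems of values for n distinct variables.
def systems_of_values(n):
    return list(product([True, False], repeat=n))

assert len(systems_of_values(2)) == 4   # two variables: four combinations
assert len(systems_of_values(3)) == 8   # three variables: eight

# Evaluate the four basic connectives under every system of values.
for p, q in systems_of_values(2):
    print(p, q,
          not p,           # ~p
          p and q,         # p.q
          p or q,          # pvq
          (not p) or q)    # p⊃q, i.e. material implication
```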
A CLASSIFICATION OF STATEMENT FORMS

The next step is to apply the truth table techniques to the analysis of alternative types of statement forms. The relatively simple forms "pvq", "p.q", "(p.q)v~(p.q)", and "(p.q).~(p.q)" will be sufficient for illustrative purposes. Their truth tabular analysis is presented below:

p q   pvq   p.q   (p.q)v~(p.q)   (p.q).~(p.q)
T T    T     T         T              F
F T    T     F         T              F
T F    T     F         T              F
F F    F     F         T              F

Examination of the truth value characteristics of the first statement form reveals that any given statement which is a substitution instance of this particular statement form will be a true statement if both of its component statements are in fact true or if at least one of them is true. It will be false if both are false. Whether it is true or false depends not only on the nature of the statement form but also on the truth values of the constituent components. For example, the statement, "Investment rose or the labour force declined" is true if either or both of its constituent statements is true, and false if both are false.

The second statement form might take as a substitution instance, "Investment rose and output increased." This statement is only true if both of the constituent propositions are true, and false otherwise. Sentences which are substitution instances of either of these statement forms may be sometimes true and sometimes false; that is, they may take on either truth value, but not both at the same time. Their truth values are contingent upon the truth and falsity of the constituent propositions. These types of statement forms are called contingent, synthetic or logically indeterminate.

The truth tabular analysis also indicates that statements which are substitution instances of the third statement form presented above are always true.
Not only is it true when one or all of the constituent statements is true, it is still a true statement when all (in this case two) of its constituent statements are false. For example, the statement, "Investment rose and output increased, or it is not the case that investment rose and output increased" is a true statement regardless of states of affairs in the real world. Its truth can be ascertained without any empirical investigation. It can be ascertained on the basis of an examination of the form and the form alone. This type of statement form is called tautologous.

The fourth statement form analyzed in the foregoing table yields statements which are always false. Statements which are substitution instances of this type of statement form are not only false when one or more of its constituent statements is false; they are false even when all the constituent statements (two in this case) are true. This is intuitively clear for substitution instances like: "Investment rose and output increased, and it is not the case that investment rose and output increased," which is false for this substitution instance and is likewise false for any other substitution instance of the same statement form. Substitution instances of this statement form are false regardless of states of affairs in the real world, and their falsity can be ascertained on the basis of form and form alone. The terms 'self-inconsistent' or 'inconsistent' and the terms 'self-contradictory' or 'contradictory' are commonly used to describe this type of statement form.

Further application of the truth table procedures would suggest that any statement which can be adequately translated into the calculus of propositions can be analyzed in such a way as to determine its truth value characteristics. If the number of constituent statements in
any compound statement exceeds the number that can be readily analyzed with truth table procedures, then other techniques like the method of Boolean Expansion can provide the decision rule. Hence it would seem to be possible to determine whether any such statement form is tautologous, contingent or inconsistent. This provides the beginning of a classification system which is central to the analysis of propositions developed in Chapter V.

APPENDIX B

THE ROLE OF VALID STATEMENT FORMS IN THE LOGICAL STRUCTURE OF SCIENCE

This section on the role of the valid statement forms is devoted to explaining how these valid statement forms provide rules of inference employed in the deductive processes of science. This explanation can be initiated by examination of the following very simple tautologies: "p⊃p", "pv~p", "~(p.~p)". The form "p⊃p" might take as a substitution instance, "If investment rises then investment rises," which is a true statement regardless of whether or not investment actually rises. Any other substitution instance of this form, like "If investment falls then investment falls," is likewise true regardless of the case in reality. A little further thought reveals that all the above statement forms yield statements which are true regardless of what happens in reality. Truth, it will be recalled, is a predicate of statements, and to say that the statement itself is true or to say that it is false is not necessarily to say anything at all about reality.1 An examination of the following truth tabular analyses of these statement forms reveals that they cannot possibly take on substitution instances which yield statements which are false.

p   p⊃p   pv~p   ~(p.~p)
T    T     T        T
F    T     T        T

It can be seen from the above truth table that if the constituent statement "p" is true, the resulting statement is true.

1 Alice Ambrose, op. cit., p. 18.
If, on the other hand, the constituent proposition is false, the resulting statement is still true. No matter whether the statements substituted in these statement forms are true or false, they yield statements which cannot be anything but true. To reason otherwise would be to reason fallaciously. If one were to reason, for example, as if the statement "Investment rose or investment did not rise" were false, then one would not be reasoning logically. Similarly, if one were to reason as if the sentence, "It is false that output increased and output did not increase," were false, then one would not be reasoning logically. Further illustration would reveal a large number of substitution instances of valid statement forms, all of which would yield true statements. The truth table analysis reveals that it could not possibly be otherwise. Hence, one can take it as a rule of logic that statements which are substitution instances of valid statement forms are true -- their truth is conditioned by the forms of the statements alone, regardless of states of affairs in reality.

Other more complicated statements, whose tautologous truth characteristics are not so readily apparent but which are nevertheless instances of valid statement forms, are likewise laws of logic. Consider those statements analyzed in the following truth table, for example.

p q   (p.q)⊃p   (p⊃q)⊃(~q⊃~p)   (p.q)⊃~(~pv~q)
T T      T            T               T
F T      T            T               T
T F      T            T               T
F F      T            T               T

Every statement which is a substitution instance of one of these forms is true. It is a true statement regardless of the truth or falsity of its components and hence, regardless of the case in reality. Reasoning that ran counter to these forms would not be reasoning logically.

Thus far attention has been focused on the nature and the role of individual statements in science.
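The classification into tautologous, contingent and inconsistent forms, and the verification of the less obvious tautologies above, can be sketched in a few lines of Python (an editorial illustration; the function names are my own):

```python
from itertools import product

def implies(a, b):
    # Material implication: a ⊃ b.
    return (not a) or b

# Classify a two-variable statement form by its column of truth values.
def classify(form):
    values = [form(p, q) for p, q in product([True, False], repeat=2)]
    if all(values):
        return "tautologous"
    if not any(values):
        return "inconsistent"
    return "contingent"

print(classify(lambda p, q: p or q))                       # contingent
print(classify(lambda p, q: (p and q) or not (p and q)))   # tautologous
print(classify(lambda p, q: (p and q) and not (p and q)))  # inconsistent

# The "less obvious" valid forms from the table above:
print(classify(lambda p, q: implies(p and q, p)))                            # tautologous
print(classify(lambda p, q: implies(implies(p, q), implies(not q, not p))))  # tautologous
```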
The next step is to consider groups of statements, particularly those groups that make up theories, and to use the same logical apparatus used to study single statements to bridge the gap to the study of groups of statements.

Chains of reasoning are made up of groups of statements which are usually expressed in the "since...therefore..." framework. They assert that since the premises are true, therefore the conclusion is true. That is, they assert not only that the premises are true but also that the conclusion is true and that the premises imply it. This framework implicitly employs the Principle of Inference.2 The principle of inference applies only to cases where antecedents are asserted to be true, not just taken as hypotheses. It permits one to assert the conclusion by itself, independent of its antecedents, and hence to use the conclusion by itself as a premise in establishing further theorems. In comparison, the antecedents in the valid statement form "p.(p⊃q)⊃q" are taken as hypotheses and not asserted as true. If they are true then q is true, but they are not asserted to be true. The difference is that the Principle of Inference says that since the premises are true, therefore the conclusion is true. The valid statement forms say only that if the premises are true then the conclusion is true. It should be clear, therefore, that the Principle of Inference is not a tautology but a rule of inference used along with the tautologies to form other rules of logic.

These chains of reasoning employing the Principle of Inference are always open to questions, however -- questions of whether or not the antecedents actually do imply the

2 This is the term used by Ambrose, op. cit., p. 156, to denote the rule: "Given that A is a postulate or a proved theorem, and that A⊃B, we may infer B."
Ambrose argues that without this rule no tautology which is a logical consequent in a deductive chain leading back to the postulates could ever be stated separately as a theorem. It is for this reason that she calls it the Principle of Inference.

conclusions. These questions can be tested by the "if...then..." form of implication. The antecedents can be treated as a single conjunction. Then this conjunction is taken as a substitution instance for "p" and the conclusion as a substitution instance for "q" in the simple statement form "p⊃q". The argument or chain of reasoning is now expressed as a single statement -- a substitution instance of a single statement form in which both the antecedent and the conclusion can be statements of any degree of complexity. This statement form can then be tested by any one of several procedures designed for testing tautologies.3 In order to remain within the framework of analysis already employed, the truth table procedure will be used below.

If, upon analysis, the implicative function turns out to be true for all possible combinations of truth values of its constituent propositions, then the function is an instance of a valid statement form. As such it is tautologous and can be employed as a valid form of inference or a rule of logic. It is in this way that the valid statement forms provide a large number of the rules of deductive logic.

It should be noted, however, that the implicative function is significantly weaker than the Principle of Inference as employed in chains of reasoning, and that, as

3 See Ambrose, op. cit., pp. 96-97, pp. 155-156. The reason for restricting the examination to tautological implication, at this juncture, is that this section is only concerned with the role of tautologies in the logical structure of science.
Ambrose puts it, "It is important to see clearly that the formal validity of an argument is entirely unaffected by the actual truth-values of the premises and conclusion."4 Hence, because one or more of the premises may be false, it is entirely possible to deduce a conclusion which is false, even though the reasoning is correct, that is, even though it is an instance of a valid form of inference. This establishes the necessity of the criteria of empirical adequacy of explanation and prediction. The statements in the explanations must be true; otherwise, regardless of the logical rigor of the deduction, the conclusion may be false.

The final step in the development of the role of the valid statement forms in providing valid forms of inference is to apply the procedures specified above to the evaluation of several chains of reasoning in order to ascertain which of the forms is, in fact, a valid form of inference. Consider the following simple chains of reasoning:

p⊃q        p⊃q        pvq        pvq
~q          q         ~p         ~p
∴ ~p       ∴ p        ∴ ~q       ∴ q

The truth characteristics of these are analyzed in the table below.

p q   (p⊃q).~q⊃~p   (p⊃q).q⊃p   (pvq).~p⊃~q   (pvq).~p⊃q
T T        T             T            T             T
F T        T             F            F             T
T F        T             T            T             T
F F        T             T            T             T

According to this analysis, the first and the last sets of statements analyzed are true for all possible combinations of truth values of their component statements. These two statement forms can, therefore, be employed in conjunction with the Principle of Inference as valid forms of inference or rules of logic. This is not true for the other two chains of reasoning, however. They are not tautologous, and their use as rules of inference would, sometimes at least, lead to errors in reasoning.

4 Ambrose, op. cit., p. 122.
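The procedure just applied — conjoining the premises and asking whether (premises) ⊃ (conclusion) is tautologous — can be sketched as follows (an editorial illustration; the function names are my own):

```python
from itertools import product

def implies(a, b):
    # Material implication: a ⊃ b.
    return (not a) or b

# A chain of reasoning is valid iff (conjunction of premises) ⊃ conclusion
# is true under every system of truth values.
def valid(premises, conclusion):
    return all(
        implies(all(f(p, q) for f in premises), conclusion(p, q))
        for p, q in product([True, False], repeat=2)
    )

p_implies_q = lambda p, q: implies(p, q)
p_or_q      = lambda p, q: p or q

# (p⊃q).~q ⊃ ~p -- modus tollens, valid
print(valid([p_implies_q, lambda p, q: not q], lambda p, q: not p))  # True
# (p⊃q).q ⊃ p -- affirming the consequent, invalid
print(valid([p_implies_q, lambda p, q: q], lambda p, q: p))          # False
# (pvq).~p ⊃ ~q -- invalid
print(valid([p_or_q, lambda p, q: not p], lambda p, q: not q))       # False
# (pvq).~p ⊃ q -- disjunctive syllogism, valid
print(valid([p_or_q, lambda p, q: not p], lambda p, q: q))           # True
```

The first and fourth chains come out valid and the middle two invalid, matching the table above.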
Any chain of reasoning that would sometimes lead to faulty conclusions, even though the antecedents are true, could never function as a rule of inference. The truth table, within its scope of application, provides an unequivocal means of ascertaining whether or not a given statement form is valid and hence whether or not it can be used as a rule of inference. These tools apply to a wide variety of statement forms, but not to all statement forms. There is, in fact, no general test for implication. There are other sources of rules of inference, including definitions and equivalences, but these will not be developed here.

Sufficient knowledge of the logical apparatus of science has been presented to demonstrate the indispensable role of the valid statement forms or tautologies in the logical structure of science. The next task is to show their vacuous effects in the empirical content of science.

APPENDIX C

THE EFFECT OF VALID STATEMENT FORMS IN THE EMPIRICAL ASSERTIONS OF SCIENCE

Further understanding of valid statement forms can be obtained by examination of the following pair of statement forms: "pvq" and "pv~p". The first of these is a contingent statement form and the second is a valid statement form. A substitution instance of the first statement form might say, in explaining an increase in the general level of profits, that "Technology improved or the labour force declined." Then, if one were to find that the labour force had not declined, one could conclude that the general level of technology had improved. This is new knowledge. One could ascertain that the general level of technology had improved without ever having observed it directly. All that is required is knowledge of the truth of the first two propositions. The counterpart valid form, on the other hand, could not be used to make this kind of assertion.
All it could imply is that "technology did not improve." That is, if one knew that it was not the case that technology improved, i.e., the denial of one of the disjuncts, then one could deduce only the other disjunct, that technology did not improve. This can be written as follows: "(pv~p).~p⊃~p", and tested with truth table procedures, like any other implication within the calculus of propositions. The important point here is that when the valid form is employed in an attempt to make an empirical assertion, the results are trivial. Even though the proposition is known to be true, and the truth value of its antecedent or consequent can be asserted independently, and the reasoning is demonstrably valid, the conclusion about reality is necessarily trivial. It merely asserts what had to be asserted in the first place in order to make any deduction from the valid statement form that was employed. Hence, the use of valid statement forms as vehicles for the communication of empirical information contributes nothing at all to the empirical progress of science.

A brief examination of some of the more common valid statement forms will help to further substantiate this point. Consider the form "~(p.~p)". This might say of investment that it is not true that investment rises and investment does not rise at any point in time. Or consider "(p.q)⊃p", which might take as a substitution instance, "If technology increases and the labour force expands then technology increases." The Principle of Tautology asserts "(pvp)⊃p", which might say, "If output increases or output increases then output increases." The Principle of Association, "pv(qvr)⊃qv(pvr)", is equally illuminating. It might say, "If investment increases or the labour force or technology increases, then the labour force increases or investment or technology increases."
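The contrast drawn in this appendix — a contingent disjunction yields new knowledge when one disjunct is denied, while the valid form pv~p yields only what was already assumed — can be checked exhaustively (an editorial sketch; the propositional readings are the illustrative ones used in the text):

```python
# p = "technology improved", q = "the labour force declined"
# (illustrative readings only).

# Contingent premise pvq: given pvq and ~q, p follows -- new knowledge.
for p in (True, False):
    for q in (True, False):
        if (p or q) and not q:
            assert p  # the only surviving case: technology improved

# Valid premise pv~p: given pv~p and ~p, only ~p follows -- which is
# exactly what was assumed, so nothing new about reality is learned.
for p in (True, False):
    if (p or not p) and not p:
        assert not p  # trivial

print("checked: the valid form yields no new empirical information")
```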
None of the statements which are substitution instances of these statement forms conveys any empirical information which is not already contained in the antecedent with which it is employed. All of these statement forms are tautologies. They are all true statements -- true by virtue of their form and form alone. They are true regardless of the truth values of their constituents. The truth values of their constituents have no effect on the truth value of a tautologous proposition as a whole. Hence, these propositions could not be significantly denied. It would provide no new information to say, for example, that it is not true that "If there is an increase in output then there is an increase in output." This is patently true, as is every other tautology. By the same token, the same proposition could not be significantly asserted. In fact, there is an adage, whose intellectual roots reach far back into the philosophy of science, that statements which cannot be meaningfully denied cannot be meaningfully asserted.

This is the case in any attempted use of valid statement forms in the empirical content of a theory. If, for example, all of the six axioms expressing the empirical content of the classical theory of economic growth turned out, upon examination, to be tautologous, then the theory would clearly not be useful for explanation and prediction. It might contain only one empirical, non-valid and consistent statement form, however, and even if the rest were all valid forms it might still have valuable explanatory and predictive potential.

BIBLIOGRAPHY

Ambrose, Alice and Lazerowitz, Morris. Fundamentals of Symbolic Logic, (New York: Holt, Rinehart and Winston, Inc., 1962), p. 18.

American Economic Review. "Problems of Methodology," (Proceedings Issue, May, 1963, Vol. LIII, No. 2).

A.M.S. Agricultural Handbook. No. 146, Analytical Tools for Studying Demand and Price Structures, (U.S.D.A., Washington, D.C.), p. 62.

Brodbeck, May.
"Models, Meaning and Theories," Symposium on Sociological Theory, ed. Llewellyn Gross (New York: Row, Peterson & Co., 1959), pp. 373-401.

Carnap, Rudolf. "Testability and Meaning," Philosophy of Science, III (1936), pp. 419-471.

Clarkson, Geoffrey P.E. The Theory of Consumer Demand: A Critical Appraisal, (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963), p. 11.

Friedman, Milton. "The Methodology of Positive Economics," Essays in Positive Economics, (University of Chicago Press, 1953).

Goodman, Nelson. "The Test of Simplicity," Science, CXXVIII (1958), p. 1064.

________. Fact, Fiction and Forecast, Cambridge University Press, p. 26.

Hempel, Carl G. and Oppenheim, Paul. "The Logic of Explanation," Readings in the Philosophy of Science, ed. Herbert Feigl and May Brodbeck (New York: Appleton-Century-Crofts, Inc., 1953), pp. 319-331.

________. "Fundamentals of Concept Formation in Empirical Science," International Encyclopedia of Unified Science, Vol. 2, No. 7, (Toronto: University of Toronto Press, 1952), p. 23.

Higgins, Benjamin. Economic Development, Principles, Problems, and Policies, (New York: W.W. Norton and Co., Inc., 1959), pp. 95-106.

Johnson, Glenn L. "A Note on Nonconventional Inputs and Conventional Production Functions," Agriculture in Economic Development, ed. Carl K. Eicher, Lawrence W. Witt, (New York: McGraw-Hill, 1964), pp. 121-122.

Koopmans, Tjalling C., and Hood, William C. "The Estimation of Simultaneous Linear Economic Relationships," Chapter VI, Studies in Econometric Method, Cowles Commission for Research in Economics, Mono. 14, (Chapman and Hall, Limited, London, 1953), p. 138.

Lewis, W.A. "Economic Development with Unlimited Supplies of Labour," Manchester School, May, 1954.

Massey, Gerald J. "The Philosophy of Space and Time," (unpublished Ph.D. dissertation, Department of Philosophy, Princeton University, 1963), p. 123.

Nelson, R.R. "A Theory of the Low-Level Equilibrium Trap," American Economic Review, December, 1959, pp.
894-908.

Pap, Arthur. An Introduction to the Philosophy of Science, (New York: The Free Press of Glencoe, 1962), pp. 125-135.

Plaunt, Darrel and Witt, Lawrence. "Recent Theories of Economic Development," unpublished paper prepared for discussion at the Interregional Marketing Committee meeting in Lexington, Kentucky, October, 1959.

Rudner, Richard S. "On the Structure of Economic Theories," unpublished paper presented before the Joint Economics Agricultural Economics Seminar, Michigan State University, East Lansing, Michigan, May 26, 1958.

________. "An Introduction to Simplicity," Philosophy of Science, Vol. 28, No. 2, (April, 1961), p. 109.

U.S.D.A. Agricultural Handbook. No. 94. "Computational Methods for Handling Systems of Equations Simultaneously," by Joan Friedman and Richard J. Foote.