This is to certify that the dissertation entitled A DEFENSE OF A PRAGMATIST VIEW OF SCIENTIFIC LAWS presented by Roger Vajda has been accepted towards fulfillment of the requirements for the Ph.D. degree in Philosophy.

Major professor

Date

A DEFENSE OF A PRAGMATIST VIEW OF SCIENTIFIC LAWS

By

Roger Vajda

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Philosophy Department

2000

ABSTRACT

A DEFENSE OF A PRAGMATIST VIEW OF SCIENTIFIC LAWS

By Roger Vajda

Laws of nature have, since Descartes, been seen as a central feature of scientific inquiry and philosophical analysis. The standard empiricist analysis of laws has come under challenge by realists, like Armstrong, who believe the analysis leaves out important features we intuitively ascribe to laws (e.g., necessity, explanatory power), and by antirealists like van Fraassen who believe the very notion ensnares us in metaphysics. In my dissertation, I defend the view, against both the realists and van Fraassen, that an empirically respectable notion of scientific law is available within the framework of pragmatism, which relativizes lawhood to the choice of a language and a set of experimental manipulations. In chapter one, I argue that the philosophical problem of scientific laws has a long history, and that van Fraassen has picked out some tensions between views in that history. In chapter two, I analyze and criticize three realists' views and show that they have a flawed common strategy based on incompatible presuppositions. In chapter three, I argue for my own view, a pragmatist view that builds on the work of Dewey, Nagel, Goodman and Skyrms. I argue that replicability is the source of our confidence that regularities are projectible (in Goodman's sense) and resilient (in Skyrms's sense), and that this is the source of our confident assertion of counterfactuals, our belief in the necessary connection between events, and our belief that laws are empirically accessible. In chapter four, I answer objections from the realists, from van Fraassen and from those constructivists who find too much residual realism in my view. I there defend the view that one can have objectivity without being a realist, and relativity without being a relativist.

DEDICATION

To Louis and Joseph

ACKNOWLEDGEMENTS

I would like to thank the following people for the help they have given.
Martin Benjamin helped me see the central themes of pragmatism as relevant to the philosophy of science. Kay Rout and Kitty Geisler helped with the stylistic revisions of chapters one and two. Herb Garelick commented on the entire manuscript. Jo Anne Sorlie edited an earlier draft and gave helpful advice on the final draft. I am especially indebted to my chair, Joseph Hanna, for his patient reading and thoughtful discussion of all the drafts of this dissertation. I would also like to thank my wife and two sons for the sacrifices they made so that I could complete the work.

SUMMARY TABLE OF CONTENTS

CHAPTER ONE: THE PROBLEM OF SCIENTIFIC LAWS
CHAPTER TWO: THREE REALIST POSITIONS
CHAPTER THREE: A PRAGMATIST POSITION
CHAPTER FOUR: THREE SOLUTIONS IN SEARCH OF A PROBLEM
CHAPTER FIVE: CONCLUSION OF THESIS
BIBLIOGRAPHY

DETAILED TABLE OF CONTENTS

CHAPTER ONE: THE PROBLEM OF SCIENTIFIC LAWS
1.1 The Problem according to van Fraassen
1.2 Structure of the Chapter and Thesis
1.3 Origins of van Fraassen's Challenge
1.3.1 Van Fraassen's List
1.3.2 Sources of the Philosophical Problem
1.3.2.1 Ancient Greek Reflections on Change
1.3.2.2 Medieval Reflections on God's Rule over Nature
1.3.2.3 Modern Reflections on Newtonian Mechanics
1.3.3 The Standard Empiricist Model of Laws
1.3.4 The Realist Challenge to the Standard Model
1.3.5 Van Fraassen's Antirealist Counterchallenge to the Realists
1.4 Thesis: An empiricist can meet the burden of saying what additional conditions a regularity must meet to be a law without going realist.

CHAPTER TWO: THREE REALIST POSITIONS
2.1 Realism's Stake in Science
2.2 Three Realist Positions: a) Armstrong, b) McCall and c) Harré
2.2.1 Laws as Relations between Universals (Armstrong)
2.2.2 Laws as Relations in Branched Space-Time (McCall)
2.2.3 Laws as Relations in a Natural Kind Hierarchy (Harré)
2.3 An Analysis and Criticism of the Core Realist Assumptions
2.3.1 Internal Critique of the Realist Assumptions
2.3.2 External Critique of Realist Assumptions
2.4 Conclusion

CHAPTER THREE: A PRAGMATIST POSITION
3.1 Laws as Constructions
3.1.1 Construction Relative to Language
3.1.2 The Role of Mathematics
3.2 Practical Conditions of Constructions
3.2.1 The Connection between Thought and Action
3.2.2 The Connection between Words and Action
3.2.3 The Social Character of Practice
3.2.4 The Practical Character of Science
3.2.5 The Continuum of the Practices of Science
3.2.6 The Test of Practice: Replicability
3.2.7 Laws as Constructed Relations between Measurable Variables
3.3 Success Conditions for Constructions
3.3.1 Empirical Adequacy Insufficient
3.3.2 Consensus Insufficient
3.3.3 Reliable Prediction Required
3.3.4 Reliable Prediction from Experimental Replicability
3.3.5 Reliability and Decision-Making
3.3.6 Laws and Action-Plans
3.3.7 Three Potential Problems
3.4 Answers to Important Parts of van Fraassen's List
3.5 The Remainder of List Mistaken
3.6 Summary

CHAPTER FOUR: THREE SOLUTIONS IN SEARCH OF A PROBLEM
4.1 Laws not Representations, but What Representations are About
4.1.1 The Logical Problem from the Extension of 'Law'
4.1.2 An Objection from the Conceptual Role of Law
4.1.3 Response: Law-statements can be about the world without being about laws
4.2 Laws as Socially Negotiated Rules
4.2.1 Replication as Social Achievement
4.2.2 First Response: Algorithms are methodologically useful
4.2.3 Second Response: Consensus is prior to dissensus
4.2.4 Third Response: Science is constructed at every level without being constructions all the way down!
4.3 Laws as Doppelgänger for Subjective Probabilities
4.3.1 Van Fraassen's Dual Strategy
4.3.2 The Problem with Induction
4.3.3 The Problem with Inference to the Best Explanation
4.3.4 The Problem with Pragmatism
4.3.5 Van Fraassen's Position
4.3.6 My Response to the Objection and the Position
4.3.7 Van Fraassen's Realism
4.4 Conclusion

CHAPTER FIVE: CONCLUSION OF THESIS

BIBLIOGRAPHY

Chapter One: The Problem of Scientific Laws

1.1 The Problem according to van Fraassen

In science, there are certain relations that have been called laws. These include Newton's laws of motion, the law of gravitation, Ohm's law, Snell's law, Hooke's law, Mendel's law, etc. There is nothing scientifically problematic about these laws. Scientists understand both the conditions under which these generalizations hold up and the conditions under which they break down. There are, however, philosophical problems involved in understanding the nature of these laws.

The long history of philosophical reflection on scientific laws need not be described in detail here. An interesting twist that has occurred recently is worth understanding, and serves as the occasion for this thesis. Bas van Fraassen, a respected philosopher of science, claims that the whole philosophical problem of laws rests on a mistake. There are no such things as 'laws' as philosophers have described them. All attempted accounts of 'laws' run up against contrary demands. It would seem nothing could do all the work expected of laws. But, van Fraassen reassures us, this result is not bad. We do not need laws to be able to revise our empirical beliefs rationally, nor does science need them to constrain theory choice. Rather, we can revise our empirical beliefs in the light of new experience by updating the probabilities we assign to them by Bayes's Theorem. Also, we can reason from the symmetries of theoretical models in science to some of the traditional conservation laws that have impressed philosophers more than they should. Science does not need laws, but gets along nicely with symmetries.

1.2 The Structure of the Chapter and Thesis

It is the thesis of this dissertation that van Fraassen's challenge can be met. I argue in chapter three that there are laws in the required sense. Before I get there, however, I will show that van Fraassen's challenge needs to be met (chapter one) and that the realists have not met it (chapter two). After I argue for my position, I will need to address at least a few of the remaining objections, lest the reader wonder whether I have failed to meet the challenge also. That will be the business of chapter four.

In chapter one, I defend the claim that van Fraassen has identified the philosophical problem of laws. He does so by drawing up a list of properties philosophers claimed for laws, and by showing the internal tensions in that list. As he reflects on that list, he identifies two problems that any successful analysis will have to solve, claims that both problems cannot be solved at the same time, and concludes that there are no such things as laws.
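(For concreteness, the Bayesian updating that van Fraassen appeals to in section 1.1 in place of laws is the standard rule of conditionalization; the formulation below is a textbook statement of that rule, not a quotation from van Fraassen. On learning evidence E, one's new degree of belief in a hypothesis H becomes

\[
P_{\text{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},
\]

so that rational belief revision is carried by the probability calculus alone, with no appeal to lawhood.)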
So the problem of laws, as I call it, depends, for van Fraassen, on these two other problems that arise from reflection on the list of properties laws must have to do the jobs they do. My problem is related to his two problems in the following way. My problem looks at the whole list and asks if anything can be all those things; van Fraassen breaks the list into two sub-lists and associates a distinct problem with each of the two sub-lists of properties laws are expected to have. He argues that things that do well in meeting the requirements on one sub-list do poorly in meeting the requirements on the other. So one or the other associated problem remains unsolved. The two problems have independent interest, but are a way of motivating the claim that the entire list is important. In short, if one or the other of van Fraassen's two problems remains unsolved, my problem remains unsolved. Van Fraassen maintains that the problems are jointly insoluble. I argue that they are not only soluble, but also solved.

Van Fraassen argues that any account of laws that does not solve his two problems fails to satisfy the criteria for laws that philosophers have considered important. The argument, to have any bite, requires that there be substantial agreement about items in the list of properties. If there is no such agreement, then the problems van Fraassen sees with laws may not be that serious. I will show in this chapter that the problem, in fact, is of long standing and that the list grew out of a long history of philosophical reflection on natural regularities. There has been such reflection since the presocratics' search for the invariant element(s) in the variable. That it has long been seen as an important problem is evident from the knots Leibniz and Locke tie themselves in trying to give a satisfactory account of Newton's laws. They all were working with a list of properties, perhaps implicit, but in some cases (e.g., Aristotle, Berkeley) explicit. I will show that in the twentieth century, something very much like van Fraassen's list was presupposed by both the logical empiricist Hempel and his realist adversaries Armstrong, McCall and Harré. Thus I will show that van Fraassen's challenge is not one that can be simply ignored.

Hence, in chapter one, I first give van Fraassen's list and its historical roots. Then, I go on to give the standard empiricist analysis of Hempel, the realist criticism of Hempel, and the realist alternatives to his view. After I have given the historical context of van Fraassen's challenge, I summarize the most important of van Fraassen's criticisms of the realist alternatives. I conclude chapter one with the claim, which I defend in chapter three, that a pragmatist position remains possible, even after one has eliminated the logical empiricist, realist and antirealist views.

In chapter two, I will give an exposition, analysis and criticism of the realist alternatives offered by Armstrong, McCall and Harré. These are not the only realist positions being defended, but they are varied enough to make any common elements they have interesting and worth commenting upon. My goal in that chapter will not be to criticize the particular details of the views, but to look for the larger core assumptions realist views like these share.

In chapter three, I will develop a pragmatist analysis of laws that draws on the work of Dewey, Nagel, Goodman, Quine and Skyrms.
I will argue that we arrive at the truth-conditions of the conditional claims embodied in laws insofar as we are able to manipulate the conditions expressed in the conditional claims. This view immediately connects up truth and knowability, projectibility and reliability, and, finally, measurement and mathematical models by making linguistic construction subject to empirical test. To state my position briefly, I claim that we construct the world so that its behavior is regular in particular ways, but that because our constructions point forward to events which have not happened yet, we may find that the world fails to behave the way we expect. We then conclude that we have failed to order the world in that particular way. Our construction has failed to generate a coherent practice because there is no consistent context of use. For example, 'like reproduces like,' a central claim of Aristotelian biology, fails in the light of genetic mutation and evolution. Our attempt to find orderliness in the biological realm by Aristotle's construction fails, because the expectations it sets up in us are not met in experience.

Our confidence that we can rely on the regularities as constructed, and hence predict the future on their basis (and not simply explain, after the fact, why what happened did happen) points to the methodological processes by which the constructions were made, a method which ensures that what is constructed in the laboratory will have applicability outside the laboratory as well. The methodological maxim that covers this might be put thus: if it is going to fail, make it fail here where we can see it. Laws are those regularities that do not fail under such conditions.

Chapter four looks at an array of objections that might be raised by realists, constructivists and van Fraassen himself to my position. At their core, each objection finds my position unstable, and thus indefensible, but the point of instability is found in different places, and the destabilizing forces are characterized in different ways. I cannot hope to anticipate all the objections anyone might raise to any part of my analysis, but I hope the objections I have chosen give a fair test of my position and a chance to demonstrate the resources my position possesses that enable it to maintain itself.

In chapter five, I give a brief summary of the arguments for my thesis.

1.3 The Origins of van Fraassen's Challenge

In this section, I give van Fraassen's list, with a brief explanation of the items (section 1.3.1). Then, I give a reconstruction of the historical origins of the list, up to the nineteenth century (section 1.3.2). Then, in successive sections, I give three twentieth century responses to the problem: the logical empiricist view of Hempel, the realist views of Armstrong, McCall and Harré, and the antirealist view of van Fraassen.

1.3.1 Van Fraassen's List

Central to van Fraassen's argument against the existence of laws is his list of characteristics anything would need to meet in order to be a law. The list is a philosopher's list, but it is not purely the product of speculative imagination. Philosophers constructed the list after reflecting on scientific practice and the items scientists have called 'laws.' I will here give the list, with a brief gloss on the meaning of each requirement. The list van Fraassen gives (Laws 25-38) is:

1) Universality: The law must hold under all relevantly similar circumstances. All F's must be G's if it is a law that F's are G's.
2) Relations to Necessity: Laws involve necessity in the following ways:

2a) Inference: One must be able to infer the current observable state of affairs from the law and current conditions if this is a case governed by this law.

2b) Intensionality: One cannot simply obtain the truth of the observable state of affairs by detaching it from the law-operator, because the state of affairs is located in an intensional context. The logic involved in the inference in 2a) is not truth-functional.

2c) Necessity Bestowed: Any necessary connection between natural events derives its necessity from a law.

2d) Necessity Inherited: The view (which van Fraassen calls a minority view) that for laws to confer necessity on their instances, they must themselves be necessary truths.

3) Explanation: Laws must explain why their instances occurred.

4) Prediction and Confirmation: Laws must be connected to observable states of affairs so that our belief in the law can be increased by viewing positive instances, and our belief that the state of affairs will occur is increased by our belief in the law.

5) Counterfactuals and Objectivity: Our belief in the connection between events related by law must be such that we would expect it to hold in unobserved instances, and even in impossible-to-observe instances as well. This is part of what we mean by the 'mind-independent' status of laws, i.e., their objectivity.

5a) This includes context-independence. Laws are not strongly theory dependent or language dependent.

5b) Laws are objective: They are discovered, not constructed.

6) Relation to Science: Laws are a central (perhaps the central) feature of science. Without laws, science could not perform the intellectual role it does play for us.

I will show next that the list is neither idiosyncratic (section 1.3.2) nor innocent (sections 1.3.3, 1.3.4 and 1.3.5).

1.3.2 Sources of the Philosophical Problem

The list is van Fraassen's own. It sets up the pincer movement of his dialectic against realism, where he shows that important realist views find it impossible to solve both of two problems that emerge from the list, what van Fraassen calls the inference problem and the identification problem. The identification problem is the problem of empirical access. How do we identify laws in experience? This is a problem for realists who do not identify laws with observable regularities. The inference problem is the problem of necessary connection. Why must something which satisfies the antecedent conditions stated in a law also satisfy the consequent conditions? We need an account of laws that enables us to see how laws help us to predict and explain the sequence of events that we find in experience. The problem seems to be that one can set up any number of analyses that solve the inference problem but leave us unclear how to solve the identification problem. It is the very point of the metaphysics deployed by the realists to explain the necessary connection between antecedent and consequent conditions in laws. But it is the very same metaphysics that makes identifying laws in experience problematic for the realists. I think van Fraassen is right, against the realists, on this matter, and will give in chapter two my diagnosis of the weakness in realism that van Fraassen rightly criticizes. However, before I do so, I need to show that the list is not the invention of either the realists or van Fraassen, but forms the historical backdrop against which philosophers of the present work. Where did this list come from?
Why should we take it seriously? Why is it so difficult to find an analysis that successfully meets all the conditions?

1.3.2.1 Ancient Greek Reflections on the Nature of Change

Edgar Zilsel, in his historical article "The Genesis of the Concept of Physical Law,"1 has noted that the term 'law' was not clearly and systematically used to describe universal empirical regularities before Descartes.2

1 This section of the chapter relies primarily on Zilsel, Needham and Ruby.

2 The qualification 'systematically' is important: Jane Ruby finds uses in the modern sense in the fourteenth and fifteenth centuries by Roger Bacon in optics, and Regiomontanus in astronomy and mathematics.

Descartes' usage became standard only because Newton used it similarly in his enormously influential work, Principia Mathematica. While I do not challenge Zilsel's history of the term, I start my own historical investigation earlier for two reasons. First, the concept is clearly older (e.g., Galileo has the concept but not the term). Second, even where the concept is absent, there are discussions among the ancient Greek philosophers that are both relevant to our concept and influential in its construction.

There is a long tradition, stretching back to the presocratic philosophers, that discusses natural progressions and natural processes. Although they do not call empirical regularities 'laws', what they say about such regularities is sufficiently analogous to what we would call laws to make what was claimed relevant to our discussion. Furthermore, I want to claim that the modern concept of scientific law was influenced by these Greek philosophers. By the time we get to the mature work of Plato and Aristotle, we are discussing views that had historical influence on the modern discussions in Descartes and Newton. In fact, I want to claim that most of the contours of our present philosophical problem of laws can be found in the work of Aristotle, even though the term 'law' is not used.

Gregory Vlastos, in his book, Plato's Universe, finds examples of invariant natural uniformities in the work of both the pre-Socratic physiologoi (nature-philosophers such as Thales, Anaximander and Empedocles) and Plato himself. Although they differed among themselves whether the reason for the order was to be sought within nature, or outside nature, in some transcendent Being or deity, on this issue they were agreed: the regularities were inviolable and inviolate. So the path of the sun was fixed, for Heraclitus, in a manner which included an entirely rational explanation for eclipses. And for Plato, the behavior of the sun over the course of the year could be entirely explained in terms of two different circular motions that explained the daily circuit of the sun in the plane of the ecliptic (the Same) and the yearly movement from the solstice north of the ecliptic to the solstice south of the ecliptic, and back again (the Different). Vlastos also claims that Plato had a theory of the behavior of material (sublunary) things based on an account of the material of which they were made (the four elements) and the forms. While Vlastos shows the important, empirical, non-speculative elements in Plato's otherwise poetic and speculative account, Plato's approach seems remote from Newton's and Descartes'. Much more congenial seems to be the comprehensive, detailed philosophy of nature we find in Aristotle. His work contains a great deal that is of interest for our study.
But, before we can appreciate the similarities of Aristotle's project to ours (and identify the historical continuities that led modern scientists and philosophers to apply his criteria to what we call laws), it is important to set out, briefly, some of the historical context that set the problems Aristotle was attempting to solve.

The speculations of the nature philosophers focused on what one thing could account for the variety of things we see in nature. Water, air, fire and earth, which began as rival principles in competitor accounts of matter, were later combined in the so-called four element theory of matter. Parmenides, and his disciple Zeno, argued that the whole explanatory enterprise rested on a mistake. While their arguments have more often been ignored than refuted, those arguments compelled both Plato and Aristotle to give careful justifications of the explanatory enterprise. Parmenides argues that change is incoherent because the very idea involves the notion of non-being, which is incoherent as well. The real either is or it isn't. But non-being cannot even be thought. So everything real is. Change seems to involve the notion of non-being. If something were to change, it would not be what it once was, and would not have been what it is now. But if so, change seems to involve the idea of being and not being at the same time, which would be incoherent (indeed, contradictory). So change is impossible. And since motion is a species of change, motion is impossible also. Zeno's paradoxes are intended as proofs of the impossibility of motion by showing the absurd conclusions to which the assumption of motion leads.

The idea that the intelligible is the changeless becomes explicit with Parmenides's challenge, and both Plato and Aristotle respond to that challenge by partially agreeing with him. Change is problematic, unless it is anchored in something that does not change. Explanation will require intelligible principles, not inert material elements. Both Plato's theory of ideal forms and Aristotle's theory of substantial forms are accounts of change that meet Parmenides's challenge by accepting at least some of his assumptions. Aristotle seeks to find the principle of invariance within the phenomena themselves.

Aristotle, in his Physics, gives an account of change that identifies the invariant elements of bodies in motion as the so-called four causes: material, formal (essence), teleological and efficient. A rock may grow hot or cold, but the material remains the same throughout and supports the qualities of hot and cold. A rock falls naturally because it is primarily made of earth (formal cause supervening on the material of prime matter), and earth 'wants' to be near the center of the Earth (teleological cause). A rock may experience violent motion if hurled (efficient cause). Explanations of biological changes are more complex, but the same explanatory strategy prevails: seek the principle of change in something that does not change through the process. Organisms come into being because matter is formed in various ways that mark out the organism as a member of a distinct kind with distinct activities. Plants grow by adding matter. They mature by reaching the goal set by the kind of plant they are. Animals move because they have the sorts of souls (form) that can initiate motion. Humans think because their forms include rationality, and it is the peculiar and proper function of humans to think.
These are not what we would call laws, but neither are they irrelevant to the discussion of laws. Many of our laws are conservation laws, laws that specify some quantity that remains invariant through a process of change. Conservation of charge, momentum, angular momentum, mass (in chemical equations), energy, mass-energy (in relativity theory) are all direct descendants of the Greek project to locate the invariant in the variable.

Aristotle seems to be the primary historical source for the following parts of the problem of laws (cf. van Fraassen's list above):

1) Universality: explanatory principles must make contact with the essential characteristics or the definition of the thing. An explanation that latches onto accidental characteristics will fail to be truly explanatory of the characteristic behavior of the species (whether rocks, plants, animals or planets), because it will not have picked out the features that are true of all or most of the species.

2) Relations of necessity: principles must be necessarily true, and the relations between subject and predicate in the conclusion of a correct theoretical syllogism must hold of necessity.

2a) The relation between the characteristic properties and behavior of the things and its essence is deductive. From the definition of the essence and the claim that one has a member, one can infer the appropriate properties.

2b) The major premise of a theoretical syllogism is grounded in the essence, and therefore necessarily true. Being always or for the most part true is necessary, but not sufficient, for a proposition to be part of a scientific explanation of a natural event. Both the conclusion and the major premise are typically universally true, but one cannot interchange them and still have an acceptable scientific explanation for Aristotle. The difference is the role essences play in each. The sorts of changes a substance can undergo, Aristotle maintains, are determined by the essential features of the substance, and it is only in virtue of the essential features that one has a rational explanation for the change. In brief, the connection between the subject term (or universal) and the predicate term (or universal) in the conclusion requires a real, necessary connection between the subject term and the predicate term in the major premise that appeals to the essence of the thing designated by the subject. Since the major premise functions like a law in Aristotelian science, this is equivalent to the claim that a law requires a connection that is not merely universal but necessarily universal.

2c) Anything deduced from necessary principles is itself necessary.

2d) One incontestable case of necessary connection between subject and predicate indicates a necessary principle that lies behind and explains the connection.

3) Principles must be invoked in any correct explanation for why phenomena occur or why things have the properties or display the behaviors they do.

4) The features of prediction and confirmation are not covered as such, and this point will be important later in the chapter. However, unlike Plato, who found the phenomenal distracting, Aristotle believes that the road to principles starts with the phenomena. This leads him to the following important epistemic claims:

i) That which is most knowable in itself is not necessarily what is most knowable to us. We need to come to principles through the senses, abstracting the features embodied in the principles.

ii) There are no transcendent forms or essences.
All forms or essences are features of spatio-temporal substances that govern actual processes of change and movement.

5) Counterfactuals are not covered by Aristotle, because the analysis of the conditional awaited the work of Stoic logicians like Chrysippus. However, again, Aristotle does have an analysis of modal propositions that will prove important in the late nineteenth century and early twentieth century developments. Objectivity is not discussed as such, but is present in such realist locutions as 'knowable in itself' and 'clear in itself'. The essences are objective in the sense of residing in the object, and only present in the subject after being abstracted from perception. But,

5a) Principles are context-independent in that their truth is necessary independent of the truth of principles governing other areas of being. For example, principles governing the behavior of plants are independent of those governing human beings, except of course where human behavior is plant-like, in growth, nutrition and reproduction. Since they are language independent (that is, true whether anyone formulated them in language or not), principles are context independent in not being relative to a choice of a language. Essences determine the one and only correct language.

5b) Principles are objective in that we discover them rather than construct them. They are not relative to any beliefs we may hold. A fortiori, they are not relative to any theories we hold.

6) Aristotle places these principles right in the heart of science. Science just is the discovery and deductive arrangement of natural principles.

In conclusion, a substantial portion of the problem of laws emerged early in the reflection on natural regularities, even before the modern notion of laws emerged. In the light of those reflections, the modern notion of laws was formulated. This is not an especially odd result. While other Greek scientists were known in the Middle Ages, Aristotle towered over all the rest. It would be a pardonable exaggeration if we were to claim that Aristotle just was science for his medieval counterparts, whether Islamic, Jewish or Christian (cf. Singer 150, 152). Aristotle set the terms of the debate even for those attempting to break free of his manner of conceiving the sciences of physics, chemistry, astronomy and biology. An important element was added during the Middle Ages, to which we now turn.

1.3.2.2 Medieval Reflections on God's Rule over Nature

If Vlastos is right, the Greeks originated the notion of 'cosmos', our idea of an ordered universe. This is not identical to our concept of a law-governed universe. But given the content of Plato's Timaeus, the medieval thinkers who reflected on the ancient Greek corpus of scientific and philosophical writings were not wrong to find analogies between the Judeo-Christian God, who created the universe, and the Platonic craftsman demiurge, who fashioned the world in accordance with the ideal forms. Still, Zilsel, Needham and Ruby all agree that among the ancient Greeks, 'law' was simply not used to describe natural regularities until the Stoics, and even there it was primarily with respect to the moral character of the universe. The notion of 'natural law' came in during the Middle Ages as an expression of the law written by God on men's hearts, as opposed to the laws of custom or positive law. Nevertheless, in the Bible are hints of the notion of law as describing God's providential reign over nature as well as his moral rule over humanity.
Although Augustine had coined the phrase 'eternal law' centuries before to cover God's rule over non-rational as well as rational beings, Aquinas still viewed 'law' in this context as metaphorical.3 The medieval view combines Christian faith in God's rule with the ancient Greek conception of order.

3 Aquinas, Summa Theologica I-II, qu. 90, 91, 93, 95, qtd. in Zilsel 256-258; cf. Ruby 341. "Even creatures without reason share eternal reason in their own way, like reasoning creatures; but because reasoning creatures do it with reason and understanding, their sharing in the eternal law is itself law in the true sense, for law is a product of reason, as we have said. Non-reasoning creatures share it in a non-reasoning way, and so in them it is a law only metaphorically."

Thus the notion of law here is teleological as in Aristotle, rather than descriptive as in the modern view. Nevertheless, we do, in this period, according to J. Ruby, get some of the first uses of the word 'law' in the modern sense to cover the laws of geometry, astronomy and optics. It would become natural to describe the contrasts between the old views of physics and astronomy and the newly emerging views of the coming centuries in terms of the laws governing those phenomena in each view.

1.3.2.3 Modern Reflections on Newtonian Mechanics

While the history of the idea of 'law' is interesting and well told by Zilsel, Needham, and Ruby, we are concerned here with what philosophers have made of the idea. The early history of the concept of 'law' was complicated for a number of reasons. The concept was present without the word. The word was present without the concept. Although the Greeks lacked our concept of law, they developed analogous concepts such as 'order' and 'principle.' Properties of 'order' and 'principle' were later transferred to the concept of law when it was formulated in the Middle Ages, as we have just seen. Our attempt to separate out a philosophical problem of laws, as opposed to a scientific problem of laws, was complicated by the considerable overlap between science and philosophy before the modern era. We could not cleanly separate the scientific development of the concept from the philosophical development because many of the same people were involved in both (e.g., the physiologoi, Plato and Aristotle). Now, the two histories cannot be neatly separated in the early modern period either. Descartes and Newton were creators in both science and philosophy. The distinction becomes clearer after science and natural philosophy separate in the nineteenth century. A further complication arises in the modern period. Modern philosophers are reacting to the concept of 'law' as much as articulating it. Interpretation and explication go hand in hand, for example, in the eighteenth century reflections on law. Still, philosophers would work out the implications of the Newtonian 'mechanization of the world picture' over the next couple centuries (Dijksterhuis 3-5). Twentieth century philosophers would continue to work on the implications of the notion of 'law' in the light of quantum mechanics and relativity theory. Philosophers from Hempel to van Fraassen would work out the explicit, detailed formulation of those implications captured in van Fraassen's list.

The scientific revolution was very complex, and summary historical descriptions of it are controversial. However, since our purpose here is to examine historically important reflections on laws, the risk involved in simplification is worth taking.
Buchdahl, in Metaphysics and the Philosophy of Science, a study of historically important metaphysical reflections of early modern philosophers, gives his summary of the features of the scientific revolution considered salient by those philosophers. Buchdahl claims that during the scientific revolution, the following occurred: i) the reduction of the anthropomorphic character of the world (e.g., the embrace of heliocentrism); ii) the rejection of transcendental perspectives and the growth of materialism (e.g., the revival of Epicureanism and atomism; Descartes relegating biology, physics, astronomy to the realm of matter; Galileo's refusal to consult the Bible rather than experiment on matters of physics); iii) the removal of teleology from the world, and the embrace of mechanistic explanations (e.g., Descartes' explanation of the motions of the world, the behavior of animals and the functions of the human body in mechanistic terms); iv) the new emphasis on experiment; v) mathematical formulations replacing qualitative explanations in terms of substantial forms and occult properties (cf. the work of Kepler, Newton, Galileo and Descartes) (Buchdahl 51-78).

If such an assessment is accurate, then we have substantial agreement during the scientific revolution on the items on van Fraassen's list. Since there is some shift in the content ascribed to those properties required of laws, it is worth looking at the list once again to see the state of the discussion during that period:

1) Universality: Both the heavens and earth are governed by a common set of laws. Both humanity and nonhumanity are governed by the same set of laws. Laws are the same in the past and present and future.

2) Necessity: Instead of seeking necessity in a connection between what happens and God's eternal will, modern natural philosophers now locate necessity in the invariable concomitance and succession of events (determinism). Kepler is interested in the mathematical relation governing the periodicity of the planetary motions, not the Prime Mover.

3) Explanation: What theologians of the Middle Ages called the secondary cause is now the only cause mentioned in science. Newton's dynamics explains Kepler's law of planetary motion by appeal to central forces. This, rather than the perfection of the circle, explains the approximately circular motion of the planets around the sun. Good explanations mention the mechanism or mundane cause. Explanations are secular and in terms of efficient cause rather than in terms of teleology or design.

4) Prediction and confirmation: Attention is paid to experiment and exact mathematical formulation. For example, Boyle shows concern to control relevant variables in experiments and experimental design. Also, measurement comes to assume a central place with the work of Tycho Brahe and Galileo.

5) Objectivity: The rejection of anthropomorphism and its replacement by materialism and atomism were part of a new notion of objectivity that made matter mind-independent, and physical models unlike social or psychological models. This also led to the notion that truths about the world were context-independent in the sense of not being strongly conditioned by language, culture, history or social structure. Francis Bacon, for example, defends the view that the world is the way it is, no matter what the church, state or philosophical school say it is.

6) Relation to Science.
While many disagreed over the structure and methods of science, all agreed that science aimed to uncover the laws of nature: while Newton, Descartes, Boyle and Francis Bacon agreed on little else, they agreed on that.

However, our having said that all this was implicit in claims made during the scientific revolution does not make our examination of the major philosophers of the next few centuries superfluous. After all, at least part of the problem may be that philosophers misunderstood or even rejected the exact notion of laws that modern science was working with. Buchdahl goes so far as to suggest that Locke, Berkeley, Hume, Descartes, Leibniz and Kant had significant reflections on laws of nature in their philosophies as part of their reflection on the 'nature of the propositional link' between subject and predicate in judgment (Buchdahl 51-61). For Locke, we have no knowledge of what things are really like (their real essence, in Locke's terms), only how things appear to be (i.e., their nominal essence). We locate empirical laws, for Locke, in the connection between the subjects of appearance and their regular properties.4 For Berkeley, perceptible things just were the set of their appearances, and the laws were simply the set of successive appearances.5 The cause was God. For Hume, cause itself gets analyzed in terms of Berkeleian regularities, which we project into the future because we have acquired the habit of expecting the sequence to hold up.6 There is no natural necessity -- only psychological associations. Some psychological associations are better than others, however. Because the possibility of error is everywhere, we need to apply some canons to make sure the regularities we project are worthy of projection.7 This anticipates Mill's later reflections on method.8 For Descartes, the laws were contingent choices of God's that led to the regular order in the world. Since God also sustained that order, it was as immutable as God.9 Such order was not an empirical one, but an intelligible one. Its laws could be grasped only by close, attentive, methodical analysis. On the basis of such a methodical analysis, Descartes believed that he had a priori proof of the laws of motion and optics. Leibniz needs to make the laws secondary, since all that exists are the monads, which neither change nor interact. Monads have all their predicates by creation. For Leibniz, perhaps even more than for Descartes, error was persistent and difficult to remove. We think the monads inert, for example, because our thoughts are confused. We have taken appearance for reality. In reality, all monads are self-moving agents. Nevertheless, Leibniz did not believe all appearances deceive us. Some phenomena -- space, time and matter -- qualify as well-founded, even if not metaphysically basic. Laws are such. Nevertheless, laws do describe the rational order we find in the world for us, because we cannot grasp everything in their particularity as God can.10 Kant goes one step further: lawfulness is a condition of the possibility of experience for finite cognizers like ourselves, a condition we impose because we are rational.

4 As Buchdahl shows, Locke's case, far from being the easiest, may well be the most difficult, because in places he seems to have held two different views on the matter. One view, the necessitarian view, maintains that there are real necessary connections in nature, and it is only our ignorance of the real essences of things on a microlevel that prevents us from learning of them. The other view maintains that there is no necessity to know, but rather only the connection we find in phenomena (with the nominal essence of things). But this added complexity only reinforces my point: necessity was viewed by these philosophers as puzzling and mysterious. Relevant passages are found in Locke 527; bk. 4, ch. 1-3.

5 Berkeley, Principles 78-80; pt. 1, sect. 30-32: "The ideas of sense are more strong and lively, and distinct than those of imagination; they have likewise a steadiness, order, and coherence, and are not excited at random, as those which are the effects of human wills often are, but in a regular train or series--the admirable connexion whereof sufficiently testifies the wisdom and benevolence of its Author. Now the set of rules or established methods wherein the Mind we depend on excites in us the ideas of sense, are called the laws of nature; and these we learn by experience, which teaches us that such and such ideas are attended with such and such other ideas, in the ordinary course of things... all this we know not by discovering any necessary connexion between our ideas, but only by observation of the settled laws of nature, without which we should be all in uncertainty and confusion, and a grown man no more know how to manage himself in the affairs of life than an infant just born." He goes on to worry that those who reify such laws 'wander after second causes' rather than the Creator who is the First Cause. Cf. Berkeley, De Motu.

6 Hume, Treatise 10-13; bk. 1, pt. 1, ch. 4; Hume, Treatise 155; bk. 1, pt. 3, sect. 14 "Of the Idea of Necessary Connexion."

7 Hume, Treatise 173-176; bk. 1, pt. 3, sect. 15 "Rules by which to judge of causes and effects": "1. The cause and effect must be contiguous in space and time. 2. The cause must be prior to the effect. 3. There must be a constant union between the cause and effect. 4. The same cause always produces the same effect, and the same effect never arises but from the same cause... 5... that where several different objects produce the same effect, it must be by means of some quality, which we discover to be common amongst them... 6. The difference in the effects of two resembling objects must proceed from that particular, in which they differ... 7. When any object encreases or diminishes with the encrease or diminution of its cause, 'tis to be regarded as a compounded effect, derived from the union of several different effects, which arise from different parts of the cause... 8. an object which exists for any time in its full perfection without any effect, is not the sole cause of that effect, but requires to be assisted by some other principle, which may forward its influence and operation."

8 Mill's canons are much clearer and his illustrations better informed by the science of his day, but the principles are very similar: "what is called the uniformity of the course of nature... is itself a complex fact, compounded of all the separate uniformities which exist in respect to single phenomena. These phenomena when ascertained by what is regarded as a sufficient induction, we call, in common parlance, Laws of Nature. Scientifically speaking, that title is employed in a more restricted sense to designate uniformities when reduced to their simplest expression" (Mill, Logic, bk. 3, ch. 4 "Laws of Nature").

9 Descartes 1: 92-8; The World, ch. 7 "The laws of nature of the new world." For example, Descartes says "...God continues to preserve [matter] in the same way that he created it. For it follows of necessity, from the mere fact that he continues thus to preserve it, that there must be many changes in its parts which cannot, it seems to me, properly be attributed to the action of God (because that action never changes), and which therefore I attribute to nature. The rules by which these changes take place I call the 'laws of nature.'" And further down the page: "That is to say, with God always acting in the same way and consequently always producing the same effect... I shall set out two or three principal rules according to which it must be thought that God causes the nature of this new world to operate." And thence, an exposition of his three laws of motion in Descartes 1: 92-3. Cf. Descartes 1: 240-3; "Principles of Philosophy", pt. 2, sec. 36, 37, 39, 40, 42; Descartes also says: "From God's immutability we can also know certain rules or laws of nature, which are secondary and particular causes of the various motions we see in particular bodies." (Descartes 1: 240)

10 There are difficulties in Leibniz's view of laws. Cf. Buchdahl 461-464.

What does this thumbnail sketch of modern pronouncements on law contribute to our understanding of the problem of laws? The answer is complex. A great variety of ontological and epistemological statuses are ascribed to laws by the modern philosophers we have just discussed. What is the source of their disagreement? It seems some of this disagreement stems from disagreement over how to accommodate elements of the new and old views of the place of God and humanity in the universe, the nature of the order found in the world, and the possibilities of human knowledge of that order. Locke and Leibniz, for example, seem dubious about whether laws belong in an ultimate accounting of the furniture of the universe. Berkeley, with his project of defeating the skeptic, simply identifies laws with empirical regularities. Descartes agrees that the skeptic must be defeated, but locates the possibility in pure thought. Laws for both are near the foundations of empirical knowledge. Insofar as they all solve the problem by making our immediate awareness to be of our ideas, they solve the knowability problem by putting its objectivity at risk. Each of them tries to secure objectivity for laws in his own way: Descartes by indubitability of clear and distinct ideas, Hume by method, and Berkeley by sequences of ideas that are not subject to our wills.

If we read what they say, however, with van Fraassen's list in mind, we find that they do indeed seem to take something like that list for granted. In fact, they have re-interpreted some things we already found, in our discussion of Aristotle, into what we now would call the standard interpretation. Let us look at the list again and see how these early modern philosophers have advanced the discussion:

1. Universality is taken for granted as a legitimate demand for principles. However, in the light of skeptical objections, it is problematic. The rationalist Descartes claims that laws are known a priori. The empiricist Berkeley claims that laws are known by immediate perception. But Leibniz and Hume, who agree with Descartes and Berkeley on so much, disagree with them here. The laws, if knowable, are not known with certainty precisely because it is hard to know if they are universally true, Hume would argue. Furthermore, even if known, what is known lacks the universality of a true metaphysics, Leibniz would argue, because it is not true of what basically exists, the monads, but only how they must appear to beings such as ourselves.

2. The necessity ascribed to laws, owing to their inviolability, is problematic. Aristotle had seen the need for distinguishing physical necessity from other types, but with the repudiation of Aristotelian substance and the notion of essences, the nature of physical necessity becomes mysterious. While Descartes and Leibniz locate the necessity in God, Hume makes necessity a creature of the mind. Whether necessity can be bestowed from God to creature, or whether the laws are necessary truths or not, is already matter for debate.

3. Inference becomes mathematical at this stage. Galileo talks about the book of nature being written in the language of mathematics, but it is far from clear what the justification is for applying mathematics and mathematical reasoning to natural phenomena. An unanalyzed Pythagoreanism seems to be the root of the mathematization of physics that occurs in this period.

4. The problems of intensional contexts and opacity of reference have not been developed yet. The assumption is that any idea or perception can be had in common by anybody, and that language is simply the expression of those ideas. However, all is not clear here. Francis Bacon and, following him, Locke, already warned about the distorting influence of language. Leibniz denies that our ideas can always be made clear to others or to us.

5. That laws explain occurrences in the natural world does seem taken for granted by all parties that accord the laws ontological ultimacy. For Hume and Leibniz, however, the explanatoriness of laws is an issue.

6. Prediction and Confirmation are rendered problematic by Hume, who questions the justification of induction.

7. Counterfactuals are raised in unclear form with Descartes's 'hypothesis' about the origin of the world from chaos and Leibniz's notion of possible worlds. In both cases, the issue is the explanation of the current order of the world or the current set of laws. Descartes argues that the laws are contingent on God's choice, but necessary after he chooses them. Given this set of laws, our world will inevitably evolve into its current state, regardless of its original state. Laws have a hypothetical necessity. For Leibniz, however, there could only be one set of laws. God had to choose the set of laws that was the best. To have chosen differently would make God arbitrary rather than the all-wise being he is. But this means that what the laws are is not a subjective matter even for God. This may be the closest Leibniz came to a formulation of the objectivity of laws. For Berkeley, the objectivity of the world order depends on the fact that we cannot change, by an act of our will, the ideas we have while witnessing natural events. He contrasts this with imaginary ideas, which we can change by an act of the will.

8. Context independence appears of several sorts. Thoughts and ideas are independent of language and culture. We see with our own eyes and think with our own minds. This makes the individual independent of his or her history. Next, thoughts are independent of each other. So, for example, the monads of Leibniz do not really influence each other, Descartes' clear and distinct ideas do not require each other, and Hume's associated events are independent.
This 'atomicity' of analytical units in the different philosophies is not altogether unlike the behavior of individual physical atoms in revived atomism. The behavior of one 'atom' has nothing to do with the behavior of another, unless they collide. The laws, apparently, need have no connection with each other, except, possibly, in the mind of God.

9. The assumption they all do make is that science concerns itself with laws. Some, like Hume, want to formulate similar laws for moral philosophy based on the psychological laws of association.

1.3.3 The Standard Empiricist Model of Laws

The previous section showed, I think, that matters were far from tidy among philosophers reflecting on the nature of scientific laws. This should hardly be surprising. Both empiricists and rationalists were trying to solve problems of knowledge, and had set the standard at certainty. It is far from clear, however, even with the tremendous achievement of Newton, that the methods of experiment lead to certainty. Kant was not mistaken to make that point. The emphasis on method in both traditions served to demonstrate that it is easy to lose one's way in the world without systematic procedures.

The twentieth century brought with it a number of scientific revolutions. Darwin's revolution finally won some major battles in the universities. Sociology and psychology claimed that human social and psychic life was fair game for scientific exploration. These revolutions pushed the universality claim rather far, much farther than Descartes or Newton would have liked, since they believed the mind was special. But, after genetics and biochemistry discovered that there was nothing special about the chemicals involved in living tissues, there were attempts to progressively reduce the laws of psychology to the laws of physiology, then to the laws of chemistry, and finally, to those of physics. Reductionism, although not universally embraced, testifies to the belief that the laws of physics are indeed universally true.

The revolutions that brought about renewed philosophical reflection in the first half of the century were the revolutions in mathematics and physics: the rise of quantum mechanics, relativity theory, and mathematical logic. These led to the work of the logical empiricists, who gave a formal treatment of the empiricist view of laws we found in Berkeley and Mill.

It is a feature of positivism, especially the positivism of Mach and Duhem, that it is the role of science to provide economic abbreviations of experience. Since mathematical laws are brief in statement, but have a potential infinity of instances, mathematical laws are paradigms of scientific knowledge. When Hempel and Oppenheim gave their initial analysis of laws in 1948, it was part of a larger analysis of scientific explanation. Laws were important because explanations were seen, as in Aristotle, as deductions of empirical states of affairs from general principles. To be able to do that, laws needed to have the right formal structure and empirical content, and the project Hempel and Oppenheim undertook was to state clearly the formal and material conditions for lawhood. The problems this project encountered are well known to those familiar with the troubled career of the Verifiability Criterion of Meaning. Nevertheless, since the analysis Hempel and Oppenheim gave explicitly formulates properties of laws that are implicit in earlier philosophical discussions of laws, the analysis is still valuable.
And since, for a time, it represented the standard view, both the reasons for which it was held and the reasons for which it was rejected hold interest.

The analysis consisted of two parts. First, one needed to get the formal structure right. One gave the syntax for some formalized language that would enable one to pick out the well-formed formulas and the correct transformations of those formulas. Next, one gave the symbols in that language semantic content. This enabled one to distinguish the function of terms. Some did logical work, others designated empirical objects or their properties. This sufficed to give one a lawlike statement, a statement that could be either true or false. It is formal because one stipulates in advance how one is going to use the symbols, and one can use them correctly even if one does not know what those symbols mean. The second condition is not formal, but material. The lawlike statement must be true. That cannot be known apart from experience.

So one would first set up some model language L which allowed inferences in the lower functional calculus without identity. After introducing both the logical symbols and the extralogical ones (constants, variables, predicates of arbitrary degree), one then defined the atomic statements. From these, sentences of arbitrary complexity could be built up by means of truth-functional connectives ('and', 'or', 'if...then', 'not') and quantifiers ('all', 'some'). Once the syntax was in place, one would go on to define important semantic concepts, such as 'formal truth in L', and give the semantic constraints on substitution instances for the constants (e.g., any physical object) and predicates (purely qualitative). Having set up the syntax and semantics of the language in this manner, Hempel defines a lawlike sentence in L as follows: S is a fundamental lawlike sentence in L iff S is purely universal. Similarly, one defines a law in L as follows: S is a fundamental law in L iff S is purely universal and true. Hempel suggests that (x)[P(x) ⊃ Q(x)] would be the form that a law would take in the formal language L.

Later, Hempel came to have some reservations about the confident pronouncements about the form of a law, since laws were not often expressed in formalized languages. 'This object is soluble in water,' he said, is singular in form, although it expresses a law, and may also be expressed in a generalized form.11 Nothing important hangs on this worry for our purposes, since all parties to the dispute grant that many of the interesting laws of science are expressed in mathematical form and wear their generality on their face, so to speak. What is interesting for our purposes is what parts of the traditional characteristics of laws are preserved, and what parts are given up in this style of analysis.
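To fix ideas, the formal skeleton of the analysis can be displayed compactly. The schematic predicates here are my own illustration, not Hempel and Oppenheim's example:

    Lawlike sentence in L:  (x)[P(x) ⊃ Q(x)]   (purely universal in form)
    Law in L:               the same sentence, provided it is also true

An explanation or prediction of a singular fact is then a deduction from the law together with a statement of initial conditions: from (x)[P(x) ⊃ Q(x)] and P(a), one infers Q(a) by universal instantiation and modus ponens.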
1. Universality: The law is given a formal treatment as a universally quantified statement in some formal language. If the statement is true, its universality is guaranteed by its form.

2. Necessity: Relations to necessity are more problematic. The necessity admitted is a purely syntactic one, that of the formal deductive relationship between the general claim and its instances, or between the explanation (the law and initial conditions) and the event to be explained. Any form of physical necessity, in addition to this logical necessity, is denied. Intensionality is denied as well: the formal language is set up with unambiguous and precise semantics (for both the logical symbols and the extra-logical ones), so that the truth-conditions of law statements are purely extensional.

3. Explanation: Explanatoriness is placed center stage in this view. It is precisely laws which make scientific explanations distinctly trustworthy. Again, syntax is central, since the relation between the explanation and the events explained is conceived as a deductive relationship between the sentences that express them.

4. Prediction and confirmation: It is the logical empiricist analysis which makes both of these quantitative. Past empiricist views, including early logical empiricist formulations, had trouble here, but thanks especially to Popper's analysis of empirical strength, the logical empiricists moved away from concerns for strict verification and falsification towards more sophisticated views of partial interpretation, confirmation, and statistical theories of data.

5. Counterfactuals: Nelson Goodman is the logical empiricist who first discussed, explicitly, the connection between counterfactuals and laws: it was a problem that required one first see why material conditionals were not sufficient to capture all the content of laws. Too many conditionals would qualify as laws, since it suffices for the truth of a conditional that its antecedent is false. So, for example, if no diamond was ever immersed in butter, the conditional 'if a diamond is immersed in butter, then it dissolves' is vacuously true, but few of us would want to call it a law because of that. Some stronger connection seems to be required than the truth-functional one (see the schematic illustration at the end of this list). We want to say that even if a diamond never is immersed in butter, it is still false to say that if it had been so immersed, it would have dissolved.

6. Objectivity was the point of formalizing logic and cutting it loose from the psychologistic form it had acquired as the so-called laws of thought. Formalism is objective because, once the syntax is clear, decision procedures can be set up for significant fragments of the language that are purely mechanical in their operation. No opinion is involved in the inference 'p ⊃ q & p, therefore q'. The inference is decidable mechanically. Verification was seen as guaranteeing intersubjectivity, since observation is publicly checkable and challengeable. However much bias might operate in the context of discovery and in the context of justification, its presence did not, by itself, undermine the justifiedness of a law. The only relation that counted was the logical relation between the claim expressing the law and the claims expressing the evidence. Laws stood or fell only in relation to the evidence, not in relation to the psychological states involved in their formulation. Furthermore, what laws there were was not a matter of logic either. Laws were part of the world, among the 'furniture of the universe', as contemporary metaphysicians might say. Laws were discovered, not invented. Truth-conditions for sentences were such that the truth of sentences varied independently (so long as there were no syntactic relations between the sentences). This is context-independence in van Fraassen's sense: the truth or probability we would assign to law claims is viewed as independent of other beliefs we hold, including theoretical commitments and more general background beliefs.12 It is a view that would be abandoned as Carnap and Quine moved in the direction of more holistic views of language and sentences.

12 Since for van Fraassen both explanation and counterfactuals require pragmatic conditions (that is, reference to what the speaker and hearer know, don't know and believe about the situation) as conditions of their acceptability, and hence require contextual considerations in their evaluation, laws, which are intimately tied to counterfactuals and explanations, will require context for their evaluation too. No theoretical model holds of the world without some 'theoretical hypothesis' concerning the intended interpretation of the model.
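The schematic illustration promised under item 5: let D(x) abbreviate 'x is a diamond that is immersed in butter' and S(x) abbreviate 'x dissolves' (the abbreviations are mine). On the purely extensional reading, the generalization

    (x)[D(x) ⊃ S(x)]

is true so long as nothing satisfies D(x), since a material conditional with a false antecedent is true. The world can thus make the sentence true by default while leaving entirely open what would happen were the antecedent ever realized, which is precisely the gap between material and counterfactual conditionals that Goodman identified.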
1.3.4 The Realist Challenge to the Standard Model

A protest arose among some philosophers that the standard empiricist analysis of laws would not do. While this protest was apparently of long standing,13 the protest became more focused after Hempel and Oppenheim and N. Goodman published their papers. Kneale protested that the analysis missed the necessity of the regularities governed by laws.14 Chisholm, criticizing Goodman's analysis of counterfactuals, suggested that perhaps the atomism of the regularity analysis is at fault. Perhaps there is a connection after all between antecedent and consequent in counterfactual conditionals.

13 W.E. Johnson, aiming at Hume, in 1924 protested that a regularity view of laws missed law's 'compulsiveness'; cf. Johnson, pt. 3.

14 Kneale, "Natural"; "Universality"; Chisholm, "Contrary-To-Fact."

It is in the late 1970's, however, that some rival analyses of laws are offered. Three of these will be discussed in chapter two. I would like here to sketch their complaints against the empiricist view, the nature of their alternative analyses, and the reason they believe their views better describe what has been identified in the tradition as 'laws of nature.'

Perhaps the most detailed critique came from D.M. Armstrong. For the sake of brevity, I will focus here only on his critique. In the first five chapters of his book, What Is a Law of Nature?, Armstrong criticizes, by turns, first, what he calls the 'naive regularity theory' and, second, its attempted sophistications. The 'naive' view identifies laws with exceptionless empirical generalizations. It is naive because it makes no attempt to distinguish accidental regularities from those we would regard as laws. Such a view gets both the extensions and intensions of the term 'law' wrong. An analysis gets the extensions wrong if something can be an exceptionless regularity (he calls it a 'Humean regularity', ascribing such a view to Hume) without being a law, or can be a law without being an exceptionless regularity. That is, the analysis does not pick out all and only instances of law, and consequently gives neither a sufficient nor a necessary set of conditions for lawhood. For examples of actual regularities that do not correspond to laws, he shows how to construct Humean regularities that apply to exactly one thing, or at most a finite number of things.
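The construction, which footnote 15 below spells out informally, can be put schematically (the formalization is mine, intended only to display the form of Armstrong's point). Let a be any object, let F1, ..., Fn be properties that are jointly true of a and of nothing else, and let G1, ..., Gm be the rest of a's properties. Then

    (x)[(F1(x) & ... & Fn(x)) ⊃ (G1(x) & ... & Gm(x))]

is a true, exceptionless universal generalization, since only a satisfies its antecedent; yet intuitively it is no law.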
For examples of (logically possible) laws that are not Humean regularities, he produces thought experiments of laws that are strictly local, laws that have no instances (because their conditions are never fulfilled), and laws with instances missing (because they are represented by a continuous function, with an uncountable infinity of instances, and our samples are all finite).15

15 The details are unimportant, but to give a flavor of the arguments, here is how one constructs a trivial Humean regularity that intuitively is not a law. First, take any object you please, and give an exhaustive list of its properties. Some subset of that set will probably suffice to distinguish that thing from every other thing in the universe. Second, make the conjunction of those properties the antecedent, and the conjunction of the remaining properties the consequent. One now has a true universal generalization (the antecedent could after all describe more than one thing, although in fact it does not) that is only true of one thing, and no one would want to call this a law.

In arguing that the regularity view gets the intension (or the meaning) of 'law' wrong, Armstrong deploys concepts that remind us of items on van Fraassen's list. Here are the subheadings of chapter 4 of Armstrong's book, with my gloss showing where each item fits into van Fraassen's list:

1. Lack of Inner Connection: This corresponds to item 2, "relations of necessity", especially 2c, 'necessity bestowed'. Armstrong denies this element of laws is captured in the regularity analysis at all. (Since Armstrong believes laws are contingently true, he rejects 2d.) 2b, 'Intensionality', is captured in the whole chapter.

2. Laws of Nature as Principles of Explanation: Item 3 ("Explanation") of van Fraassen's list.

3. Paradoxes of Confirmation: Item 4 here. Armstrong thinks the regularity analysis is incomplete because it allows irrelevant information to confirm a law.

4. The Problem of Counterfactuals: Item 5 on the list ("Counterfactuals and Objectivity"). Armstrong thinks the regularity view cannot allow counterfactuals ever to be true. If laws supervene on factual states of affairs, and would not be the same if the constituent states of affairs were different, then any state of affairs that is not the case is ruled out as physically impossible. If so, laws do not support counterfactuals, but rule them out.

5. The Problem of Induction: Item 2a ("Inference") and item 4 ("Prediction and Confirmation"). He thinks that the Humean view founders on Hume's Problem. Experienced instances can never license inferences to unexperienced instances.

The items on van Fraassen's list that are missing from Armstrong's are universality and relations to science. Armstrong grants universality to the regularity theory because he agrees that universality is a property of laws (although he thinks the regularity theorist is not well placed to defend the claim that laws must be universal truths). Armstrong grants to the regularity theorist as well that laws are an important part of science.

What does Armstrong offer in place of the regularity view of laws? Armstrong maintains that laws are contingent relations of necessitation between universals. This view focuses on the objective truth-makers of scientific law claims and overcomes the subjectivity of sophisticated regularity views that relativize laws to conditions of knowing them. How, according to Armstrong, does this analysis succeed where the regularity view fails?
First, since the relationship is between the universal properties of things, and not the particular things themselves, universality of the law is guaranteed. Next, it is in virtue of this relationship between universals that what is governed by laws happens necessarily. 'Necessitation' guarantees that. The relationship is intensional. The antecedent gives the reason for the consequent, not simply that both happen to be true. This reason grounds the instances, and explains them. This account solves the induction problem because it supplies a necessary connection between natural events. Also, because universals are intelligible, they provide criteria for picking out and identifying events as relevantly similar. With natural necessity and criteria of identity, both prediction and relevant confirmation are possible. We can now understand, Armstrong maintains, how laws can support counterfactuals. Universals can have more instances than they do. If some universal had additional instances, and if a law bound that universal to a second universal, then the additional instances of the first universal would also be instances of the second universal. Laws are thoroughly objective, since they are relations between parts of the world, not our language or thought. And laws are context-independent in all the relevant senses, since they do not depend on theory, language or culture. Laws are even independent of each other (except when we are dealing with hierarchical relations between universals). Finally, Armstrong claims that science discovers such laws, and that they constitute an important part of science.

1.3.5 Van Fraassen's Antirealist Counterchallenge to the Realists

Van Fraassen disagrees. As always, what would solve our problems if true does not solve our problems tout court: our problems may in fact be insoluble. Van Fraassen claims that the realists have not solved the problem of laws at all, because what they call laws do not have all the properties we expect laws to have. To see this, van Fraassen argues, we need to see that the requirements these properties place on would-be laws do not all pull in the same direction. In fact, these properties fall into two groups, and the groups of properties set requirements that pull in opposite directions. He labels these requirements 'the problem of identification' and 'the problem of inference.' Of these he says, "An easy solution to either spells serious trouble from the other" (van Fraassen, Laws 38). The simple explanation of the problems is as follows:

1. "The problem of inference is simply this: that it is a law that A should imply that A, on any acceptable account of laws." (38-9)

2. "To answer these queries one must identify the relevant sort of fact about the world that gives 'law' its sense; that is the problem of identification." (39)

The inference problem is a problem for views, like Armstrong's, which emphasize the intensional character of the law-operator 'it is a law that...'. The problem is: how does one go from 'it is a law that A' to the truth of A, since the latter only gives information about events and sequences of events in the world? To explain what happens, to ground predictions, one needs to use laws as inference tickets. On the older empiricist views, this would confront one with the problem of induction, because all the evidence for a law would be old, and the predictions would involve inferences beyond one's evidence set. The realists come in at this point and say 'we can ground the inference and solve the problem of induction by a more adequate theory of laws.' Van Fraassen agrees: most realist views solve the problem by construing 'it is a law that...' as 'it is necessary that...' and use modal logic to infer 'A' from 'necessarily A' (van Fraassen, Laws 39).
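In standard modal notation (the gloss is mine, not van Fraassen's own formulation), the step relies on the characteristic T axiom, valid on any reflexive accessibility relation:

    □A ⊃ A

Reading 'it is a law that A' as '□A' then delivers A immediately, which is just what the problem of inference demands.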
Armstrong, however, has a problem here, because his view that laws are contingent relations of necessitation between universals leaves untouched the problem of the relation between their instances, and it is the latter relation that needs to be clear if we are going to be able to explain or predict why some event occurred in terms of its correlation with antecedent conditions (Laws 96-99). After all, except in the case of so-called singular causation, there are always antecedent conditions of importance picked out in causal laws (and, a fortiori, in non-causal laws like the universal gas law and Ohm's law). The problem would be solved completely if the relationship between antecedent and consequent were one of extensional inclusion: that just is what makes the class logic of the Aristotelian syllogism work: if all humans are mortal, and all university professors are human, then all university professors are mortal (Laws 97). But Armstrong claims that getting the extensions right is not enough. Even if all F's were G's, it does not follow that it is a law that F's are G's. Something more, and van Fraassen insists, something different, is required. But it is hard to know what different kind of relationship could do the same job as extensional inclusion.

The problem of inference, therefore, requires one to say enough about the relationship to explain how it can license inferences. Van Fraassen thinks that possible world semantics, similar to the version created by D. Lewis, almost solves that problem by modal logic.16 D. Armstrong, however, does not solve that problem. Calling a relationship one of necessitation may make it sound like one has explicated 'necessity'. But without more, this is simply a case of similar sounding words, and one solves nothing by coining a word. (Remember the 'dormitive powers' invoked in Moliere's Bourgeois Gentleman as a supposed explanation for why opium put one to sleep.) To make the relationship primitive (that is, incapable of analysis or definition in terms that do not involve universals or modality) is to leave the problem of inference unsolved, because the unpersuaded reader is left in the dark as to how the relations between universals can have anything to do with the relations between empirically accessible events, since these two sets of relations are on different levels, ontologically speaking.

16 Van Fraassen, Laws 45, 64: "As we have found, on Lewis's account, the assertion that it is a law that A entails that it is physically necessary that A. This meets one of our main criteria," and "The inference problem is thus successfully handled." Cf. van Fraassen's discussion of Pargetter and McCall in van Fraassen, Laws 65-93.

The problem of identification moves in the opposite direction, from phenomena to laws (Laws 72-73).17 The question here is how we can identify the laws empirically. Presumably, for the realists Pargetter and Lewis, once we have the laws, the problem of inference can be solved. But the problem for all realists, including Armstrong, is that laws are not to be identified with anything empirical, certainly not with empirical regularities.

17 For discussion of the version of the problem for McCall's branched space-time model, see van Fraassen, Laws 74-79.
So now the problem for the realists is similar to the problem for theoreticians whose theories always outrun the empirical evidence: how does one pick out, out of all the mutually incompatible views that are indistinguishable empirically, which view is the correct view? For possible world theorists, we can explain the necessity of laws if we can say that laws are true in all the physically possible worlds. But to be able to say that, one has to define the accessibility relationship 'physically possible relative to world X'. However, there are an infinite number of relationships between possible worlds. How do we identify the one we are looking for? By description? That would not work, because any description will leave a very large class of isomorphic relations. But, for realists, the real relationship must be unique. How then shall we pick it out? By pointing? But again, because there are no causal connections between possible worlds, we cannot point to entities in any world but our own.

Van Fraassen believes that Armstrong has his own version of this problem. He asks how, on Armstrong's view, one determines which relationship is the relationship of necessitation between universals. One cannot solve the problem by simple stipulation, by claiming there is only one relationship and that is the one we are talking about. That would be to "baptize the relationship in absentia," he says, which is not useful for those who are unpersuaded of the existence of such and want to be convinced.

Van Fraassen believes that exactly similar difficulties arise for realist views of objective probability that are constructed to handle probabilistic laws (Laws 76-86; 151-214, esp. 160-170). The questions remain: which of the empirically indistinguishable models contains the relationship of objective chance (identification problem), and how does objective chance help us set our subjective expectations so that the probability we assign to occurrences matches what we empirically find to be the case? The strategy of inference to the best explanation does not help here. In fact, that strategy leads to incoherence in the set of personal probabilities we assign to beliefs, as we systematically overweigh or underweigh evidence in a way that makes us subject to Dutch Book problems (160-170, esp. 166-169).18

18 The problem comes, van Fraassen claims, when one gives bonus points to hypotheses for explanatory success beyond what they would derive from Bayesian conditionalizing of belief on evidence.

The upshot of van Fraassen's brief against realist theories of laws can be stated briefly. Just because the philosopher can show the world behaves as if laws were relations of necessitation between universals, or relations between sets of possible worlds, doesn't mean these really exist. For us to claim that laws exist will require that we have clear criteria for telling when we are confronted with a universal, for distinguishing one universal from another, and for picking out the relationship between universals that licenses the inferences we want. It is not enough to consistently describe the relationship.

Van Fraassen goes on in Laws and Symmetry to argue that not only do we not have what realists would call laws, but we don't need them either for rational belief or to do science. First, van Fraassen explains how we can have rational belief revision without laws.
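The contrast drawn in footnote 18 can be made explicit (the notation is standard Bayesian usage, not van Fraassen's own symbolism). A conditionalizer who learns evidence E updates her degree of belief in hypothesis H by

    P_new(H) = P_old(H | E) = P_old(E | H) P_old(H) / P_old(E)

whereas the rule van Fraassen criticizes adds a further 'bonus' to P_new(H) when H is judged explanatorily superior. It is this departure from pure conditionalization that exposes the agent to a Dutch Book.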
Realism leads to irrational beliefs about objective probabilities because the beliefs are inconsistent. Conversely, biting the bullet and being explicitly subjectivist about the assignment of probabilities allows one to calibrate belief by conditionalizing on experience (129-214). Second, he shows that the great conservation laws can be deduced from symmetry considerations, and that laws therefore are entirely dispensable in science: science can and now does simply work with the symmetries. Laws, in the classical sense of inviolable rules of the possibility and impossibility of events, are nowhere in evidence in quantum mechanics or relativity theory (215-348).19

19 Part three of van Fraassen's book develops the notion of symmetry and its use in theory constraint for geometry, mechanics and relativity theory; part four develops the notion of symmetry relevant to probability theory.

1.4 Thesis

My thesis is that an empiricist can meet the burden of saying what additional conditions a regularity must meet to be a law without embracing realism. It is the task of this thesis to show that van Fraassen exaggerates. We can and do have in science laws that are not simply derivable from symmetries of problem situations. It is important that we do, because purely formal laws give us no information about the world. To make my case, therefore, I need to register a different diagnosis of where the realist views go wrong and show how the challenge of the realists and van Fraassen can be met in broadly empiricist terms. Part of my work, therefore, is critical. In the next chapter, I discuss and criticize the central presuppositions of three realist views. In chapter three, I develop my alternative, a pragmatist view building on the work of those in the pragmatist tradition who have analyzed scientific laws. In chapter four I will discuss objections to my view that realists, constructivists, and van Fraassen might raise about the stability of the position I defend.

Chapter Two: Three Realist Positions

2.1 Realism's Stake in Science

Realism seems to many readers to be an alternative preferable to the verificationist empiricist view of science, because the realist claims that empiricists make science to be about us and our states of mind, whether sensations or beliefs, when in fact science is about the world. Realists claim to understand what science is about. I intend to show in this chapter that realism is an unattractive view that cannot make good on its promises because of the very assumptions on which it most insists. A fortiori, its view of laws is unacceptable and should be rejected. Another view of laws is to be preferred, one that squares better with our widely shared assumption of the continuity and interactions between natural systems, including the systems that we are. Such a view will be developed in chapter three.

2.2 Three Realist Positions: D. Armstrong, S. McCall and R. Harre

2.2.1 Laws as Relations between Universals (Armstrong)

David Armstrong defends the view that laws are relations between universals. He sets up the analysis by showing that there are certain problems insoluble on the regularity view.20 The most significant problems are these: 1) The regularity view gets the extensions of laws wrong; there can be regularities that are not laws and laws that are not regularities. 2) The regularity view gets the intensions wrong: even if every law had a
corresponding regularity, regularities do not handle the modality of laws, i.e., the necessary connection between items related by law. 3) Attempts to sophisticate the regularity account add an irrelevant subjective element: laws may exist even if no conscious beings existed.21

20 This is similar to his strategy in arguing for universals. He first shows that there are insuperable problems with all versions of nominalism, except possibly moderate nominalism. He then argues for a realist view as having fewer problems and as having more resources to handle the problems than nominalism could.

21 This again parallels Armstrong's argument against nominalist views of universals. The predicate nominalist, the concept nominalist, the class nominalist, and the mereological nominalist all tie universals to something external, but his intuitions tell him these can all exist or fail to exist independently of each other. Armstrong believes that the Platonist, who holds to transcendent universals, falls afoul of the same problem. Armstrong's view of immanent universals, embedded in concrete states of affairs, is the only view that characterizes universals by something intrinsic. He believes that an account that gets the intensions right automatically gets the extensions right.

What is at stake for Armstrong in an account of laws is their objectivity. He wants an account that gives necessary and sufficient conditions for lawhood and that furthermore gets the meaning of 'law' right. Armstrong believes that to get laws right, one must do ontology. He wants to find a real truth-maker, in virtue of which claims about laws are correct. So, to him it is very important to make and maintain a firm distinction between laws and law expressions. The phrase 'law expressions' seems ambiguous between linguistic expressions (i.e., the laws in science books) and manifestations of laws in instances.22 However, the ambiguity doesn't matter, because Armstrong believes each kind of law expression varies independently of laws, and that both undermine the regularity view. We will take up both in turn later, but the importance of the distinction is to drive a wedge between the ontology or metaphysics of law, on the one hand, and the epistemology of law, on the other hand.

22 See his discussion of iron and oaken laws, and the problem of infinitely qualified laws, in Armstrong, What 147-150; 27-29.

The solution, for Armstrong, is to claim that laws are relations between universals. What are universals? What kinds of relations between universals count? How will claiming universals have these kinds of relations solve the problem of laws?

Universals are ways in which existing particulars might be. Fundamental for Armstrong's ontology are states of affairs. States of affairs comprise particulars and their associated properties. Each state of affairs is located in some particular place in space-time, and collectively they constitute the objectively existing truth-makers for claims made about things in the world. The universals that exist all have instances. Universals are the abstractable parts of a state of affairs. They are either properties of particulars or relations between particulars. Unlike particulars, universals are repeatable and can be realized in new instances.

Why suppose universals exist? For Armstrong, the decisive argument is that two objects that can truly receive the same predicate must have some ontological ground for the predication. That is, there must be some sense in which we can say two nonidentical things are identical before we can correctly attribute the same properties to them.
Universals must be fully determinate. If some object has a weight, there is only one weight that it has exactly. If it has a color (say red), there is one shade of red it has, not another.

Universals can be related in more than one way. Clearly, they can be related by subsumption or inclusion. More general concepts can embrace less general ones. All mammals are animals. But this analytic or a priori relationship of inclusion does not interest Armstrong. It may even be problematic for him if all universals are determinate and none are determinable.23 But the important relationships are contingent relations of necessitation. He doesn't simply want it to be the case that two universals share all their instances. A nominalist about universals could endorse that view and use it in a version of the regularity theory of laws. No, what Armstrong is interested in is an intensional relation that is nevertheless discovered a posteriori by science. He wants, for cases where it is a law that F's are G's, that everything that is an F is also a G in virtue of being an F. It is this 'in virtue of' which makes the account intensional and makes the universals essential to the account.

23 Nevertheless, he believes universals may be analyzable in terms of other universals, and that this may be true of every universal. There may be no primitive unanalyzable universals.

How does such an account solve the problem of laws? According to Armstrong, it makes clear why the counterfactuals associated with laws are true. He believes that mere regularities, even sophisticated ones, cannot handle counterfactuality. A mere regularity suffices for lawhood, even if we have reasons to think the correlation accidental. Absence of instances means we have to regard an occurrence as nomically impossible. Both of these conclusions are counterintuitive, Armstrong believes. Functional laws have missing instances that we believe are nomically possible even if not actual. Probabilistic laws tell us what might happen, even if it does not. The presence of favorable instances does not tell us whether we are confronted with a deterministic law, probabilistic law, or coincidence. Besides instances we need the reasons they are instances, reasons that can sort out various kinds of law (e.g., stochastic from deterministic) and laws from non-laws.

So, necessitation explains the necessity we associate with connections between events made by laws. Necessitation is a contingent relationship. Nomic necessity is not logical necessity, because another set of laws might have been our laws. It is a primitive relationship in that there are no more basic terms to which it might be reduced by analysis. Now, 'necessitation' may sound like a relationship that guarantees that if one has an F, one also has a G, that is, 'a is an F' entails 'a is a G'. However, for Armstrong, this is not the case. The law might be a probabilistic one, for example, and only give a probability that, say, half the atoms of radium in this sample will decay in 10 seconds. For those laws, one only has the probability of necessitation. Furthermore, there are some laws that say what will occur whatever else may be the case, and other laws that say what will happen if conditions are right or if the operation of other laws does not override or block the expression of this law. The former, the 'iron laws', as well as the latter, 'oaken laws', both involve necessitation, even though they differ in their expression.
'Necessitation' is an objective fact about the world, not to be confused with the semantic relationship of predication. Similarly, Armstrong claims that universals are objective constituents of the world, not to be confused with predicates, which are constituents of a language. Universals and predicates do not map onto each other in any simple fashion. There may be universals without predicates. We only learn of a universal through its concrete instances. Universals that have not yet been discovered may nevertheless exist but lack their corresponding predicates. For example, the universal 'radium' existed before we discovered it and before we coined the corresponding predicate 'radium'. Radium samples made discovery of the universal 'radium' and the coining of the predicate 'radium' possible. There are predicates with no corresponding universals. Negative predicates and disjunctive predicates, for example, do not have corresponding universals.

So the question naturally arises: how can one tell when empirical propositions are true, if their truth depends on the existence of these objective truth-makers, and there is no guarantee that our language can pick those out? Armstrong's answer, uniformly throughout all his work, seems to be that whatever exists has causal efficacy.24 So, science is the appropriate vehicle for separating the predicates that map onto universals from those that do not. Our best science tells us what the furniture of the universe is.

24 Hence, mental states exist if they are causally connected with the environment and appropriate behaviors. Beliefs are dispositions to generate behavior. Knowledge is belief which is nomically connected with the state of affairs that makes it a true belief. Universals cause particulars to be a certain way. Causal language appears throughout Armstrong's works.

2.2.2 Laws as Relations in Branched Space-Time (McCall)

Storrs McCall, in A Model of the Universe, defends the view that the universe is best depicted as a forward branching tree in space-time. The forward direction is the future. The branches are physically possible events or physically possible ways the world could go under the conditions at the junction point. The unused branches are pruned off as the present moves past the junction point along one branch or another. The intuition behind this view is that at any given time there are different ways the history of the world might go, but the past is settled.

In McCall's view, laws are the constant concomitance of universal-instantiated event types. Thus, his branching structure has nodes from which branches arise. If every A-type branch has only B-type branches above it, then it is a law that all A's are B's. If every A-branch has B branches and non-B branches above it, then it will be a probabilistic law that all A's are followed by B's with a probability equal to the ratio of B branches to total branches above A.25 If that probability is rational, the branches are finite. McCall claims that for real-valued probability, one has a decenary tree, with each branch being subdivided into ten sub-branches, and each sub-branch subdivided again into ten sub-branches, and so on, to give the equivalent of the decimal expansion of a real number. However, branches are neither diagrams nor propositions, but are real 'ways' in which the world might have gone.

25 This, of course, assumes that there is a fixed ratio that holds for this type-identical event.
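McCall's two cases can be put schematically (the notation is mine, not McCall's). Where n is the number of branches above an A-type node and n_B is the number of those that are B-branches, the deterministic case is the one in which n_B = n, yielding the law that all A's are B's, while the probabilistic case yields

    P(B | A) = n_B / n

with the decenary construction of nested tenfold subdivisions supplying, in the limit, branch ratios equal to any real-valued probability.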
After the fact, however, there is only one way the world could have gone, and uninstantiated branches are lopped off. Nothing 'decides' which way the world will go. The world is indeterministic, and each branch is equiprobable. Models of the world, therefore, are isomorphic to some portion of the world tree. Law-statements are made true by the world tree, and are equivalent to any other law-statement that has the same models (in the above sense), even if they differ in wording. Laws are objective, he claims, and do not fall prey to the worrisome subjectivity he finds in David Lewis, whose view of laws counts on personal judgments of simplicity, strength, and the optimal balance to determine the laws.26 Rather, laws are not statements at all, but structures that possibly change through time. Hence, McCall is willing to accept the possibility that laws evolve through time, and that we might have more laws now than before, a view reminiscent of Peirce, as Joe Hanna has reminded me.

26 Lewis says, "A contingent generalization is a law of nature if and only if it appears as a theorem (or axiom) in each of the true deductive systems that achieves the best combination of simplicity and strength," in Lewis, Counterfactuals 73. This is a variation on a theory of laws given by Ramsey.

2.2.3 Laws as Relations in a Natural Kind Hierarchy (Harre)

Harre describes his own view as a variation on the view that laws are relations between universals, but his view is an interesting variation that makes some new points. In his book, Laws of Nature, Harre claims that the following problems arise with respect to laws. Laws are general in form and seem to say what will always and necessarily take place. So the first problem is about the content of laws. What are laws about? Actual events and processes? Dispositions? About relations between universals, or properties? Or are they redescriptions, in scientific language, of facts, and true by convention? Each of these, Harre believes, has an aspect of the truth, and there are historical examples that fit each. In his attempt to capture what is true in each of these views, he looks at the problematic character of universality, necessity and confirmation for laws.

Universality (or generality, as he prefers) admits of three dimensions: substance-generality, experimental-conditions generality, and generality of time and space. So, for an example of the first dimension, whereas Newton's laws of motion apply to every physical object, Ohm's law applies only to conductive metals and solutions. For an example of the second dimension, Boyle's law relating the pressure and volume of a gas is pretty general, whereas Reynolds' law relating aerodynamic drag to velocity is restricted to conditions of laminar flow at low velocities. For an example of the third dimension, laws of mammalian behavior would be restricted to times and places where mammals exist.

Harre believes that one reason actualism, the view associated with Mach that laws are summaries of experience, fails is that such a view is too simple in the advice it gives us when our laws turn out to have counterexamples: reject the law as false and replace it by one that is true. He claims that scientists properly treat counterexamples differently depending on differences in the theoretical models explaining the regularity. If the law is taxonomic, for example 'all mammals bear their young live,' we can treat the characteristic of 'bearing live young' as either definitional or nondefinitional of the species 'mammal'.
The difference shows up in how we handle counterexamples to a law. Sometimes we allow contrary evidence to refute the law and sometimes we allow a counterexample to revise the law. If the characteristic is definitional, and we have one clear counterexample, then the law is refuted. If the characteristic is nondefinitional, then counterexamples lead us to restrict the range of validity of the law. Some characteristics named in the law can change their status on contrary evidence. So, if we thought bearing live young part of the definition of 'mammal', then we have a problem when confronted by the duck-billed platypus, which, in other respects, is clearly mammalian. Such an instance refutes the law if the characteristic is taken as definitional--or forces us to reject the duck-billed platypus as a mammal. Or, we could, as in fact we did in this case, hedge our original claim and demote the status of the characteristic 'bearing live young' to a nondefinitional one. We would now say that most mammals bear their young live, and that other characteristics are definitional of 'mammal.'

Alternatively, we could start with a characteristic that was not definitional, and handle counterexamples by restricting the scope until one has a characteristic that becomes definitional of the class of objects within its scope. So, we handle deviations from Newton's laws of motion by restricting the scope of those laws to point masses, rigid bodies, harmonic oscillators and conservative systems in general, where there are no losses due to friction. Paradoxically, restricting the range of valid conditions in one way leads to an expansion of the range in another. No longer restricted to planets and projectiles, Newton's laws of motion now apply to all material substances. One makes Newton's laws definitional of what it is to be a material body. This comes close to what Harre believes Poincare was doing in making laws true by convention.

Non-taxonomic laws, like functional ones, can turn out to be disguised identities or not. So heat and the average kinetic energy of molecules are not simply correlated; they are identical. Nontaxonomic nonidentities would include the functional laws, like Boyle's law. Contrary evidence here is handled by restricting the scope of the experimental conditions under which the law holds true: not at very low temperature or very high pressure (we should add, not at extremely high temperature either!). Harre also considers the possibility that some of our laws may not have held in the first thousandths of a second after the big bang. This possibility, if true, would be a restriction of the scope along the third dimension of generality.

Modality is another matter. Harre rightly points out that Hume believed there were no natural necessities, that necessity was a psychological matter describing our expectation that the future would resemble the past. Since Harre wants to defend a notion of natural necessity, distinct from logical necessity and empirical regularity, he wants an ontology that grounds the 'must' in the claim that laws describe what must happen. He believes this is revealed in the sort of explanations scientists give for why the regularity is a law. Mechanisms, for nontaxonomic laws, and stable natures, for taxonomic laws, are set forth in theories. Theories are both models and the hierarchy of natural kinds in which the model is embedded. The idea is that the proposed mechanism is suggested by the network of natural kinds to which an object belongs.
For example, atoms are ascribed properties like momentum and charge, based on their participation in the natural kind 'physical body' and the fact that other physical bodies have these properties. The hierarchy determines which analogies are natural and which are not; the analogies, in turn, determine which models are plausible and which are not. Models, of course, are on our side of the ontological divide, along with language, mathematics and formal systems. But they all have their ground in what is in the world, i.e., objects with their properties and relations, including causal relations in various mechanisms.

Belief that there is a mechanism at work in producing the regularity is how we separate accidental regularities from laws, Harre says. This is natural necessity. Hierarchies of natural kinds, furthermore, solve the two great paradoxes of induction: the grue paradox and the paradox of the ravens, Harre claims. The first is solved because we do not ascribe properties until we are persuaded we have ascribed stable natures to things. Harre contends that for an emerald to be grue would require it to be green when we observe it and blue when we do not, but that requirement has some odd consequences: a grue emerald that is viewed at some times and not others would be switching back and forth between green and blue, something we do not believe happens or would only believe if we had some explanation for the switching.27 One wants to ascribe stable natures to things, which only change properties when caused to do so. "Such a scheme [as the realist seeks] is elaborated until as many as possible of the observed empirical irregularities and discontinuities in some field of interest are accounted for in terms of theoretically defined stable natures" (Harre 106). The paradox of the ravens is handled similarly. Contraposition does not preserve lawhood, because there are no natural kinds picked out by the complementary class. The universality of laws is not strict universality covering everything in the universe, because there are presuppositions restricting the universe of discourse. The claim lacks truth value if the presupposition is false, e.g., if there are no ravens.

27 Harre here gives a common reading of Goodman's new problem of induction, but one that I believe is mistaken. Grue emeralds do not switch back and forth; they remain grue unless something causes them to change. Expectations are set in an exactly similar fashion to the way our color words set out expectations. The point is that both are confirmed to exactly the same degree by past evidence, yet both set different expectations for the future. In this regard, the problem of grue is exactly the same as the problem 'which regularities do we project into the future as laws'.
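Goodman's definition, put schematically, supports the point of footnote 27 (the rendering is the standard one, not Harre's). For some chosen time t:

    x is grue iff (x is examined before t and x is green) or (x is not examined before t and x is blue)

Nothing in the definition requires any object to change color at t or when unobserved; an emerald examined before t and found green satisfies 'grue' once and for all. That is why all past evidence confirms 'all emeralds are green' and 'all emeralds are grue' to exactly the same degree, even though the two hypotheses project differently.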
2.3 An Analysis and a Criticism of the Core Realist Presuppositions

Although only three out of the many various realist positions have been set forth here, the variety is sufficient to make one wonder. What makes these views all realist views? Is there anything like a set of core assumptions held by all or most realists on the topic of laws? I think I can find three:

1. An ontological claim: The world is independent of the mind.

2. A semantic claim: Language has meaning because terms pick out objects in the world, and because sentences pick out states of affairs in the world. Truth and reference are independent of verification or indeed of verifiability. Some realists, but not all, would say the nonlogical terms in the theories of mature science refer.

3. An epistemological claim: we know what kinds of things there are, and what their properties are, by the normal practices of science. Some realists, but not all, use the success of science in prediction and technological application as an argument that science gets the structure of the real world right, approximately right, or right in the long run.

A plausible fourth assumption, shared by many realists, if not most, is a historical claim: scientific progress increasingly approximates the truth. I do not list it here because I do not think it is really independent of claim three. Rather, it simply tells us that the isomorphism between language and reality is achieved only in the long run.28 Are these three core assumptions held by these realists? Let us look at some of the claims they make at various points in their works.

28 My view does not need such a notion, since the focus is on the coherence of experience under a set of descriptions and manipulations. I will discuss my more pragmatist notion of truth in chapter three.

In What Is a Law of Nature?, Armstrong says the following:

First, I assume the truth of a Realistic account of laws of nature. That is to say, I assume that they exist independently of the minds which attempt to grasp them. (7)

Laws of nature must therefore be sharply distinguished from law-statements. Law-statements may be true or (much more likely) false. If they are true, then what makes them true is a law. (8)

It is clearly possible that there should be a universe governed by laws, ...but one in which there were no minds. (63)

Our problem is now before us. There are no secure paradigms of laws of nature. Consider contemporary natural science. It is perfectly possible, epistemically possible, that we do not know a single law of nature...the scientific theories that we now work with are obviously a reasonable approximation to at least some of the real laws of nature...even the rough-and-ready generalizations of pre-scientific practical wisdom represent a reasonable degree of approximation to genuine laws. (6)

In volume 2 of Universals and Scientific Realism, Armstrong says the following:

Philosophers are familiar with the idea that science attempts to discover the laws of nature... Further, philosophers are familiar with the idea that it is a weariness and a labor to establish in any degree what these law-like patterns are. But philosophers have tended to assume that there is no particular difficulty in identifying the universals themselves....What has to be realized, instead, is that determining what universals there are is as much a matter for laborious enquiry as determining how universals are linked in laws. (The two enterprises are, of course, bound up with each other). (8-9, parenthetic remarks in the original)

What properties or relations there are in the world is to be decided by total science, that is, the sum total of all enquiries into the nature of things. (Philosophy is part of total science, but a mere part and not the most important part). (8; parenthetic remarks in the original)

Storrs McCall, in A Model of the Universe, says the following:

The universe tree can be regarded as a huge cosmic entity, depending neither for its existence nor for its nature upon being cognized by a conscious intelligence. (7)

Given a language, an assignment function can be found which links items in this language to items in the universe model.
(41)

[T]he truth of a law depends on the way universals are instantiated in the branched model. (73)

On the basis of the branched model, how could such a law come to be known? [R]epeated experimentation can reveal the mode of distribution of a universal or repeatable B on a fan of branches above A, even though on every experimental occasion only one of the branches is actually inspected. (64)

Rom Harre says the following in Laws of Nature:

Those who, like myself, wish to defend the idea that there is a genuine natural necessity and that it is a feature of the physical world, are obliged to find some natural property or process, over and above observed regularities of adjacent events, but bound up with them, which embodies that natural necessity. (82)

[S]cientific Realism. In the form in which I wish to defend this idea and make use of it in this book, I shall express the relationship between theories and reality in terms of the concept of verisimilitude. The verisimilitude of a theory is the degree to which the model a theory describes resembles the reality it represents. (93)

The question of what the concepts of verisimilitude of models, the plausibility of theories and the truth of scientific laws mean must be distinguished from the question of how we know that one theory is nearer the truth than another, one model more verisimilitudinous than another, and so on. The former or semantic questions can be answered independently of and prior to attempts to answer the latter or epistemic questions. (94)

[T]he theoretical context of any research programme depends on some scheme of interlocking type-hierarchies. Such a scheme is elaborated until as many as possible of the observed empirical irregularities and discontinuities in some field of interest are accounted for in terms of theoretically defined stable natures. (106)

In the reconstructive process through which the method of production of the phenomenon is depersonalized and simplified, the complexity of the relation between an item of knowledge and its source is systematically concealed....Why is there this difference in the presentation of a law [from the actual complexities of the experiment]? It is the role of the guiding hand of the actual experimenter which is written out of the successive accounts. The effect of this is to display the phenomenon as a natural occurrence existing independently of Faraday, Boyle, Snell, Mendel or anyone else. (37)

In Realism Rescued, R. Harre and co-authors claim:

We agree that many of the problems associated with fending off these recent anti-realist attacks result from a failure to separate metaphysical, semantical and epistemological issues, and that attempts to refine and make more precise the true nature of the realist doctrine are certainly justified. (2)

If the terms of a theory do not refer to mind-independent entities, how does the theory reach out to reality? For example, what parts or aspects of the world is a particular theory about? (3)

It seems pretty clear that these realists do hold the three assumptions I have ascribed to them. They want to hold that mind, language and the world are independent of each other in metaphysics, but come together in epistemology, especially through the work of science. The independence assumptions come through in their claims of the separability of metaphysical, semantic and epistemological questions, and in the claim that some of those questions can be settled without settling others.
The connection assumption comes through in their appeal to science and scientific method to sort out which items exist and which do not. Needing to be made explicit are:

1. What notion of independence is at work in assumptions one and two?

2. What notion of connection is at work in assumption three, i.e., how can science bring into isomorphic relation ontologically independent structures like mind, language and the external world? Can science give us a picture of the world that is independent of the processes which brought that picture about?

In clarifying these notions of independence and connection, I shall go through the list of core realist assumptions twice. The first time, in 2.3.1, I will examine the internal consistency of the assumptions. The second time, in 2.3.2, I will examine the list for the intrinsic plausibility of the assumptions, given our commitment to modern science and naturalism.

2.3.1 Internal Critique of the Realist Assumptions

Since in general it seems that all scientific realists are working with something like these assumptions, we will, for the sake of compactness, focus on Armstrong's answer to these questions, and then discuss the views of McCall and Harre where they differ from Armstrong's, to see if their views are free from the problems I found in his view. My criticism of these assumptions will not turn on any differences that may exist among these three, or, so far as I can tell, any of the other scientific realists who are realists on the topic of laws (some of whom will be addressed in footnotes).

Armstrong has identified his assumptions for the reader in several works. These are:

1) Realism about Laws: Laws are parts of the world.

2) Realism about Universals: Universals are features of the world.

3) Actualism: There are no uninstantiated laws or universals.

4) Naturalism: The totality of entities is nothing more than a space-time system.

5) Physicalism: The only particulars that the space-time system contains are physical entities governed by nothing more than the laws of physics (in a completed physics).

6) Factualism: The hypothesis that all that there is, is a world of states of affairs, a so-called Tractatus world. (What 5, 7-9; World 6)

These assumptions are not all independent of each other. Numbers three and six are related: universals just are properties of some particular in some state of affairs. How, if at all, does Armstrong's list of these six assumptions hook up with my earlier list of three core realist assumptions? What light, if any, do they shed on realism in general and on realism about laws in particular? Let us compare the lists.

We will ignore Armstrong's numbers one and two for the moment: clearly if he holds to realism about universals and realism about laws, then he is a realist! However, since one can be a realist with respect to different classes of entities, for the sake of clarity it is good that Armstrong tells us what kind of realist he is. Still, describing himself as a realist leaves us none the wiser about any assumptions on which his realism depends. Some of the other assumptions in this list are relevant only to his epistemology and irrelevant to his metaphysical realism. Numbers four and five are specialized versions of assumption number three on my list because they specify physics as the science of choice for getting our ontology right. Number six sheds some light on why he holds a correspondence theory of truth, which forges the link between language and the world left independent in my second assumption.
Assumption number three on Armstrong's list links up reality and experience, left independent in my first assumption. His assumption number four, naturalism, which locates everything that exists in space-time, probably is best seen as a rationale for my assumption three, the belief that science produces ontology. Since states of affairs just are the propositional content of perception, assumption six adds to my assumption three that the truth is knowable in principle, even if coming to know it takes a lot of hard, scientific work. So, Armstrong distances himself from other ways of explaining the independence of mind and world (e.g., any mind-body or mind-matter dualism, as in Descartes), or the independence of language and reality (e.g., positing an independent realm of meanings, as in Plato or Meinong), or the connection mind and language come to have with reality (e.g., rational dialectic in Plato). And in light of the success of modern science in explaining the world, Armstrong's assumptions may seem entirely reasonable. The question I want to ask is whether they combine into a coherent position. Then I would like to generalize the problems found in Armstrong's view to show there are good reasons not to embrace the three assumptions held in common by the realists. Finally, I would like to show how these weaknesses in the realists' position wreak havoc on their account of laws.

The first point is this. It is understandable why a dualist, who resists reduction of mind to matter, might want to insist on the ontological independence of mind and world, but why would a materialist want to do so? In particular, why would one committed to the theory of evolution want to make such distinctions central to an analysis of scientific knowledge? To be sure, what the world is like is not something we automatically know, and one needs some way to describe the difference between what we think is the case and what is actually the case. So, some form of epistemic realism might be given some plausibility. But even here, natural selection seems to place some constraints on what may successfully be believed, and so it seems that it is precisely the non-independence, and even homogeneity, of mind and matter that must be given some credit for the success of humans in knowing the world. At least, so some of Armstrong's fellow-naturalists have argued.

The second point is this. It is understandable that someone who holds assumption one would also hold assumption two. Language, after all, is a historical structure, created by conventions and practices of people who may have entertained incorrect views of the world, and hence created predicates that do not pick out genuine natural classes.29 But, if that is so, assumption three becomes problematic. How can science, an institution that emerges from the same human social milieu as language, succeed in describing the world as it is where millennia of human attempts to do so have failed? If human history were simply a tissue of errors, the prospects for science would be dim. Armstrong, of course, believes no such thing. He agrees that even common sense gets things approximately right.30 He likes the picture that W. Sellars gives of the relationship between the manifest image of the world available in common sense, and the scientific image, which gradually clarifies and replaces the manifest image--but is he entitled to that picture?

29 However, it is certainly not a requirement that realists go this way. Socrates defended a doctrine of natural names against the conventionalist views of the sophists in the Cratylus.

30 Cf. Armstrong: "They are real features of things, real joints in reality, some of which are grasped in approximate fashion by everyday perception and others of which are uncovered a posteriori by deep scientific investigation" in Armstrong, Mind 30.
One needs some account of how we can get things right at all before we can grant him that we get things right either pre-scientifically or scientifically. The independence notions in assumptions one and two force a divide between the world and ourselves not easily bridged.

The third point is this. One needs more in an account of science than simple faith in the experimental method to weed out falsehoods. It is difficult to see what help Armstrong expects from science, since, for all the voluminousness of his philosophical publications, his references to scientific practice are brief and underdeveloped. His strategy is to establish by metaphysical arguments that laws exist, and then leave it up to science to discover which laws there are, since which laws there are is an empirical matter. But it is important to see, first, that settling the question of which laws there are can be an empirical matter only because Armstrong stipulates in his assumptions that every law that exists must have instances. Second, it is important to see that merely saying science is competent to the task is hand-waving without some account of how science can pull this off. From my reading of various realists, this seems to be a problem none of them have even adequately formulated, much less solved.31 Popper could talk this way because the items compared were the same: statements with empirical content.32 But the realist is interested in comparing very different sorts of things: predicates and universals. Simple appeal to the practice of science won't do, because the practice of science, while public and 'out in the world' so to speak, is linguistically mediated and related to all sorts of theoretical beliefs held by the scientists, whether viewed individually or communally. So, the practice of science is irretrievably bound up with precisely the items -- mind and language -- that assumptions one and two tell us are independent of reality! How is it going to be able, therefore, to build a bridge between them?

31 Dretske, for example, tentatively suggested that observations entail general statements as well as particular ones, and some realists, like F. Suppe, thus argue that science is not really inductive! But Dretske qualified his claim severely to say that we have solved the induction problem only for the very special case where 1) we already have general beliefs among our background beliefs that permit the inference or 2) we believe a conditional where we take the antecedent for granted (Dretske, Seeing 220-234, 233). Dretske, in "Laws of Nature", suggests that laws are confirmed faster than their corresponding universal generalization, but does not say how this occurs. Abduction does not help, because either it implies some special intuition or capacity for getting things right, in which case it is not obvious that we have any such capacity, or it rides coat-tails on perceptual abilities we already have, in which case the problem of independence looms large. As we will see shortly, even the realist who has the most to say about scientific method, Harre, has little to say about this problem beyond what a coherentist like myself would say, and that will not solve the epistemological problem generated by the ontological independence assumptions either. The problem is deep: the realists are trying to conjoin older metaphysical views with modern scientific views, and the clash does not become apparent until one asks, as I am asking here, about the ontological status realists would assign to parts of science.

32 Even Popper has problems, however, making a connection between this dialectical view of the epistemology implicit in scientific method, on the one hand, and his realist metaphysical program involving verisimilitude and his platonist three-world ontology, on the other hand. His failure is instructive for those who would back realist metaphysics in philosophy of science.

The fourth point is this. The bare existence of natural classes would not help us in science if we did not know which classes were natural. We could not use them as a guide in theory selection, for example. All we are really in a position to do in science is to say that the classifications we are using have held up reasonably well in experience. Such judgments are retrospective, revisable, and never guarantee that the classes we converge upon are natural in the sense the realist requires. It could not be otherwise, since we are never in a position to inspect the relationship between our representation of reality and reality itself, by virtue of the independence claim in assumption one. The only way to overcome this would be to make the classes cause our beliefs in them. Armstrong does speak this way in a crucial discussion of the epistemology of general propositions.33

How might Armstrong answer the objections? While apparently willing to concede an implicit skepticism in his view, he goes on to say it is a mitigated skepticism like Hume's. When he goes on to criticize the skeptic, he does so in terms that show he does not recognize himself in the position he criticizes. He claims that the skeptic, if sensible, is selective, and if consistent, holds a position so unattractive as to pose little serious threat. Positively, he argues that belief entails knowledge. One cannot believe something unless one also has the capacity to know, since beliefs are candidates for knowledge (Belief 190-192, 217-219). Since Armstrong believes there are no unrealized possibilities in the world, the possibility of knowledge implies the existence of some actual knowledge. How good are these replies? They beg the question. Let us leave aside the question of how he could know when he actually gets the laws right: Armstrong will say that knowing when we are right would involve knowledge of knowledge, and is not required for simple knowledge (we can know without even believing that we know). Furthermore, he would go on, second-order knowledge, i.e., knowledge of knowledge, raises no new problem, since it has the same structure as first-order knowledge.

33 Armstrong: "The nature of arsenic itself ensures that it is poisonous....In practice, this will mean that the belief [i.e., the belief that all arsenic is poisonous], to be knowledge, must have been brought into existence by the action of individual samples of arsenic" (Armstrong, Belief 203; emphasis original).
That is, the state of affairs which the second-order belief picks out causes the second-order belief and is nomically connected with it. Lest we spend too much time on Armstrong's replies, let us see that the skeptical problem has been side-stepped, not solved. He believes one can have conclusive reasons for beliefs that close any gaps into which the skeptic could stick wedges. Armstrong does not take the skeptic seriously because he fails to see how deeply mired in skepticism his own position is. One could not support the claim that knowledge involves a relationship to a mind-independent world unless one already had the independence assumption. And one should not allow Armstrong to slip in the word 'reliable', as he does in explaining how general belief can be transformed into general knowledge, without some account of how we can reliably form beliefs about a mind-independent world.

Now, Armstrong does talk at times as though the causal efficacy of properties caused beliefs in us of various sorts--and that once we have these beliefs, we are in a position to notice resemblance (or, as he prefers to call it, partial identity) of instances which we collect together in natural classes. This turns out to be crucial to his account of how we acquire the general (universal) notion of mass, length and color.34 Lest the reader be taken in by the nominalist-sounding program of building up resemblance classes from particular instances, one should keep in mind that for Armstrong, resemblances are objective features of the world, there to be discovered. They have nothing to do with language--or even minds! The appeal to causation, if it worked, would buy us perceptual access to reality, but would not explain why science is especially qualified to carve nature at the joints. In fact, one needs to be very careful how one phrases one's causal theories lest one acquire insuperable problems in explaining how error is possible. And at any rate, Armstrong wants something broader than a causal theory of knowledge because of well-known problems with the simple versions, and this is why he talks about nomic connections and reliability. We will come back to this appeal to causation later, because it is important to see what problems such an appeal raises if one accepts the realist assumptions.

34 And also the meter and the kilogram! See Armstrong, World 47.

I would like to point out that Armstrong also reverses the direction: science is successful, and since his account gives the best explanation of its success, it is likely to be true. This is the 'inference to the best explanation' strategy that has been widely criticized by Cartwright and van Fraassen (Cartwright 44-53, 87-89; van Fraassen, The Scientific Image 19-40). How precisely does Armstrong's ontology of states of affairs explain science's success? The following story about how science works is consistent with things Armstrong does say about science, but is nowhere explicitly set forth by Armstrong: Science seeks to get the geography and history of the world right, i.e., the kinds of things that exist and their properties. Science does so by isolating bits of the world and studying them closely. By manipulating some of the conditions, it establishes which states of affairs are causally linked with which others, and thus establishes the causal laws. By seeing which properties are invariably linked together, it establishes the universal laws. By seeing which properties are sometimes linked, it establishes the probabilistic laws.
The best explanation for how science is able to do this is to posit laws as relations between universals, by virtue of which these regularities hold up. Similarly, judgments of perceptual similarity require that two items be the same in a certain respect, and that is all Armstrong intends by a universal. Universals combine with particulars in states of affairs, and states of affairs combine to give the world. Science converges on the right set of distinctions over time by the accumulation of careful observations and precise measurements that allow one to make a distinction only when there is a difference. That seems a plausible realist story about how science works, and there is enough of recognizable scientific method here to make it attractive. Indeed, it resembles the story I will tell about science in the next chapter--with some crucial differences. What could be wrong with this as an account of how we know universals through science?

To answer that question, we need to see what we have done in constructing such a story. We have made an inferential leap over the ontological divide by claiming that the universals are not empirical items but are empirically accessible, as is the case with the relations between universals. So the perceptual means of science are up to the task of establishing the laws of nature. But what does this amount to but canceling one stipulation by another? What form of independence is left after we have granted that all universals and their relations are instantiated and perceptible? What form of independence is left when I stipulate that science is successful now, and don't hold out for some predicates (I know not which) that correspond exactly to reality (I know not what) in some future science? Adding the word 'approximation' doesn't help; it simply reminds us that the realist doesn't want the independence to get out of hand, but wants to keep reality nearby to actual science.

Armstrong, in A World of States of Affairs, attempts to address the charge of van Fraassen and D. Lewis that the notion of necessitation between universals is not coherent, by describing a thought experiment that he claims leads to his view. He starts out with perceptible regularities of properties, goes on to perception of singular causation, then continues by abstraction to the universally quantified material conditional, and finishes with the second-order relation between universals as the best hypothesis for why the regularity holds up. Look at how much this response assumes: 1) that we perceive instances of universals, 2) that we perceive causes, and 3) that we can perceive second-order relations between abstract entities. If he is using this in the extensional or transparent sense of 'perceive' (Dretske's non-epistemic seeing, where one can see an X without seeing that it is an X), then the truth of these assumptions rests on the truth of his metaphysical theory, and therefore does not support it, on pain of circularity. If he is using 'perceive' in a nontransparent or opaque context (i.e., intensionally), then, by his own hypothesis that beliefs and reality are separate matters, his argument does not go through. What we have for evidence is on the subject side of the subject-object divide. What we have is the way things appear; what we need is the way things really are. Only if 'perceive' is treated as equivalent to 'know'--and given his account of non-inferential knowledge, that is probably the way he intended it to be taken--does his argument work. However, then the premises become dubious.
Do we know there are universals, or causes, or second-order relations between abstract entities?

Are matters in any better repair in Harre's view? Unlike Armstrong, Harre has written books on scientific method. One might expect, therefore, a better account of how realistically construed laws are captured best by the methods of science, but such expectations are disappointed. In spite of the much greater attention to how hypotheses are formed and tested, one is still none the wiser about the confidence Harre has that there is a natural stopping point for revision, that the world really does have joints at which it can and should be properly cut. Unlike Armstrong, Harre is not a logical atomist but a holist of sorts. But unlike Quine, whose holism is one of beliefs (his web of belief notion), Harre is a holist with regard to natural kinds in the world. Mind and language turn out to be two subsystems of a larger natural hierarchical system of kinds. This holism is supposed to explain the use of models in science, which are drawn from natural analogies based on location in the hierarchy of natural kinds. And Harre also seeks an isomorphism between the representing system and the represented system that leads to a family of theories in a sequence of increasing verisimilitude until the last theory, which is the truth. Where does independence come in here, in this version of naturalism? Both thought and language are already embedded in this web of natural kinds and in interaction with it. And exactly the same problem occurs here in the construction of natural kind classes. How does Harre's version of the inference to the best explanation show, in a way that does not beg the question at issue, that stable consensus indicates verisimilitude, or that powerful technologies indicate truth? The argument, borrowed from Hacking, that manipulation is a good argument for the reality of the manipulated cannot prove what they would like it to prove, that is, that manipulation reveals a thing in itself, with non-relative properties.

As for McCall, his view is odd enough in several respects that it requires separate comment. His 'reality', an evolving, dynamic space-time tree, is not unchangeable like natural kinds or universals. Now, I want to be careful here. Universals and natural kinds are created and destroyed for both Armstrong and Harre, since evolution occurs. But while the natural kind or universal exists (i.e., has instances or examples), it remains changelessly the same. And the evolution of the tree is a random matter; which way the 'world decides to go' when faced with a fork in the tree is indeterminate. So the tree does not, strictly speaking, cause our beliefs, and we are left with experience to tell us how to reconstruct accurately the state of the tree at any given moment in the history of the universe. McCall, more than Harre or Armstrong, relies heavily on the inference to the best explanation. He claims that if his space-time tree existed, it could solve many of our philosophical puzzles, including questions of free will, laws, decision, counterfactuals, quantum mechanics, identity, and essence. Whether scientists should be impressed with this list is not discussed. The explanation, however, is entirely retrospective. The indeterminism of the process of evolution makes prediction impossible, and he admits his view still has Hume's problem. So McCall not only has van Fraassen's identification problem (as have his realist colleagues) but also has the inference problem as well.
McCall seems to have the worst of both worlds.

The general problem seems to be that realists are committed to the idea that the world is what it is, regardless of our limitations or modes of access to it. We want to say that it is, without saying what it is, or how we know that it is or what it is. So, an ontological gap must be kept between the world and the knower. But then the realists want to go on to bridge that gap and say what the world is like (i.e., describe it in language) and give force to that language (by way of claiming it is true, or approximately true, and that we know it, partially know it, justifiably believe it, etc.). To bridge the ontological gap, realists claim that science is able to put us in touch with the real world. But that claim poses a problem for them, because all the means at science's disposal are precisely the ones on our side of the ontological divide. This is precisely a problem because realism of all forms, thanks to Parmenides, has the legacy of a strong distinction between appearance and reality that threatens realist epistemology with deep skeptical worries, e.g., Cartesian demons. Once one has this 'divide' between subject and object of knowledge, the relation between them is problematic, and with that relation, the possibility of knowledge. It is for this reason that realists like Armstrong and Dretske, when they define knowledge, focus on 'conclusive reasons' that eliminate the very possibility of error.

One further point deserves comment: the use of the notions 'cause' and 'causation' in realist accounts. Philosophers who are realists about laws tend to be realists about cause as well. Pargetter and Tooley, for example, develop detailed accounts of causation, because they must challenge the Humean account of cause that makes a regularity theory of laws attractive (Tooley; Bigelow and Pargetter 263-294). To be sure, not all laws are causal, and some realists entertain the possibility of singular causation (with or without singular laws). For realists, causation bridges the ontological divide between minds and the external world. In a passage that shows this assumption at work, Pargetter addresses the problem of mathematical knowledge thus:

Part of the complaint about mathematical objects lies in the allegation that they are not observable. The complaint traces back to the deeper allegation: that numbers lack causal powers. They do not cause anything... Our strategy, however, is to deny that mathematical entities are idle. In fact, we believe that some mathematical entities are observable after all. But we do not need to establish observability here...Of more central concern for us here is the more central question of causation. (380)

What lies behind this preoccupation with causation for the realist? One motivation is certainly epistemic. As Armstrong points out in his sketch of his ontology, A World of States of Affairs, while it is important that properties not be identified with powers, still "we think of [properties] as bestowing powers upon the particulars that have them. A property that bestows no power will not be easy to detect!" (69). Causation bridges the ontological divide between minds and the external world by guaranteeing that whatever really exists will have observable instances. But does this view work? Not if causation is considered part of the external world and separate from minds and language. If causation is separate from mind and language, then we have again the appearance/reality problem that bedevils realism.
The effect and the cause are viewed as standing on opposite sides of the ontological divide, and it is unclear what one can infer from the effect about the cause. Some realists, like Dretske, try to finesse this point by talk about information flowing from the world to the knowing mind, but this is simply an empty metaphor if one cannot explain how such a transfer between such radically different items can take place. The issue, again, is not whether realists make any sense when they talk of one system causing another system to be in a particular state. Clearly such talk makes good sense. The only question is whether, given the realist's commitment to the independence assumptions, it is good sense to which the realist is entitled. Causal talk makes sense if it links items of experience, but it transcends the bounds of sense when it tries to relate experience to something that transcends experience.

I can now briefly state why realism cannot solve the problem of laws. Laws are part of the structure of the universe for realists, not items of experience or belief, which are subjective matters. Regularities are on our side of the ontological divide. Assigning laws a causal or explanatory role for regularities leaves one none the wiser about how such divided things can be related. Since this criticism depends on nothing more than the core assumptions the realists share, it is a criticism internal to realism.

2.3.2 External Critique of Realist Assumptions

I would like in this section to address the following questions: 1) What, from my perspective, has gone wrong? 2) What should we make of their three core assumptions in the light of modern science and naturalism? 3) What makes realism seem plausible, and can I account for the part of realism that seems to make practical good sense?

On question number one, I think what the realists fail to realize is that they are simply constructing a language and failing to take into account the role they are both individually and collectively playing in making such a construction. They are doing so with materials they have drawn from ordinary experience, the language-games we play in practical life that have well-recognized public criteria of application. As Kantians (including the phenomenologist Husserl) would put the matter, they have failed to recognize the extent to which they have constituted the objects of knowledge (through objectification), or how subjects and objects of knowledge are abstracted from an original relation in consciousness (or, as I would prefer, in social practice). What they are doing is translating statements from the scientific idiom into their preferred realist language and mistakenly taking the translation for an explanation. This point is not terribly new. Nietzsche complained that Plato had merely doubled the world, and that philosophers have been two-worlders ever since. Duhem complained that appeal to explanation is metaphysical, because it attempts to pierce through the veil of appearances to underlying reality.

The realists have, I think, adopted a language that makes sense for certain sorts of projects. It makes sense to insist on the transcendence of experience by reality if one wants to rescue reality from change and corruption as Plato did, or to rescue God from being confused with the forces of nature or the designs of humanity, as Augustine and Aquinas did. It makes sense for a dualist to insist on a divide between mind and body.
These distinctions allow one to deploy a particular system of values, engage in some actions and refrain from others in ways some traditional societies have found satisfying. But such distinctions come with a price tag. Once one opens a gap, one has a problem how to relate the items again, as shown by the ancient problem of knowledge of Being, the medieval discussions of the incomprehensibility of God, and the modern discussions of the existence of the external world and other minds. I can throw in one more related problem, the problem of deriving an 'ought' from an 'is', sometimes called 'the problem of relating facts and norms'.

To answer question number two, the three core realist assumptions are unattractive and should be rejected because they do not comport well with either modern science or naturalism. Furthermore, this is for reasons that the realists should themselves admit, since they endorse modern science and naturalism. Modern science views its objects as relative to a particular framework, whether a system of measurement (as in quantum mechanics), or an inertial coordinate system (as in relativity theory). Fundamental particles are events that occur in accelerators or other apparatus. Genes are relative to assays, and animal populations are relative to schemes of classification. So identity is modulo processes of identification, and invariance is modulo processes of variation. Observer and observed are related before we abstract out what belongs to the objects. So, objectivity is modulo processes of objectification. The relativity of objects to frameworks of thought and action is even more important in the social sciences, where the observer and the observed interact in ways far from trivial. The assumption that minds and the world are independent, core assumption number one for the realists, is not in keeping with the spirit of these shifts in scientific thinking.

Naturalism places the human observer in the context of natural systems as a naturally selected organism, related to other naturally selected organisms, and in constant interchange with its environment. This continuity of the human and nonhuman is exactly the point of the naturalistic stance. The importance of this point will be explored in the next chapter, where I develop the pragmatist view of laws. Here I only intend to explore its implications for the realist assumptions. Let us go through the list of core realist assumptions I put forth in section 2.3, and subjected to an internal critique in section 2.3.1, so that we may now subject it to an external critique.

Assumption number one is unattractive. Without minds, there could be no choice of an appropriate frame of reference, and hence no measurement or detection. Indeed, without concepts there could be no evidence, because relevance is established only relative to a set of criteria. Without a world to provide landmarks, the mind could not find its way about. Minds are fundamentally connected with the world from the outset, making sense of it from various points of view, expressed in language. Science simply continues these sense-making activities in a more precise and self-conscious manner. Within naturalism, the mind, language and the objects of the same just are parts of a complicated system of relations. A world free of all connection with human minds would be a world with which we could have nothing to do. As Kant put it, without concepts, the world in itself is an X, unknown and unknowable.

Assumption number two is also unattractive.
A language containing true utterances we may never verify is a language we cannot use. It is a language detached from the moorings of sensible employment and hence, language without meaning. The realist position, since it is formulated in language, simply does not make sense! We do not know where to attach the predicate 'real' when reality is both mind-independent and language-independent. (Our ordinary use of the term is neither mind-independent nor language-independent, but part of an evidence-essential language-game where we have successfully uncovered the bogus.) This is not simply a point about the single word 'real.' It is science's objective to formulate claims that have some experiential import. Quarks are not explanatorily on a par with Aristotelian occult properties. Otherwise, there would be no point to building expensive and politically controversial superconducting supercolliders. Even the most theoretical of theoretical entities demand some empirical tether. Otherwise, we are not playing the scientific language-game, but doing something else. For naturalism, there is no reason to treat language any differently than other naturally selected systems to which it is related. Distinctions need to be made, but they do not amount to wholesale ontological divides.

Assumptions one and two combine to underwrite a distinction between nature and culture, a distinction that divides the sciences into the natural sciences and the social sciences. Not only naturalists, but everyone should have problems with this way of dividing the world and science. As a division of scientific labor, this is unproblematic. One cannot do everything. The problems arise, however, because once the division is made, there is a tendency to forget that divisions of labor are matters of convenience. With that come reductionist social theories that draw the causal arrows in only one direction. A case in point is in biological determinist arguments about intelligence, as discussed in Gould's The Mismeasure of Man. Being realist about intelligence led researchers C. Murray and R. Herrnstein in The Bell Curve to consider intelligence a thing, to look to biology and genetics for its cause, and to insist that politics and a particular view of educational and social institutions had nothing to do with the results. Left unexplored are questions of how different ways of measuring intelligence may have led to different kinds of correlations than the ones Murray and Herrnstein discussed, and how value judgments may have entered into choosing certain measures as the most important measure. Similar points can be raised with discussions of the 'real rate of inflation', and claims that public transfers indexed to the rate of inflation have been based on estimates that are too high. If policy formation (soft science) is going to defer to biology and genetics (hard science), the public is entitled to know how soft hard science is and how hard soft science is. A naturalist perspective will put into question the evaluative overlay which the distinction between hard and soft science inherits from an earlier dualism between natural and moral science.

Finally, assumption number three is unattractive as well. Not only are the realist accounts of how science supports realism underdescribed, those accounts fail to address the issue of motivation. If the objects of science are as they claim, mind-independent and language-independent, what could have motivated naturally selected organisms like ourselves to engage in their pursuit?
To say, as Aristotle does, that "All men by nature desire to know" will not do, because it leaves untouched the problem of why anyone should have thought the procedures of science yield knowledge. On the other hand, if inquiry starts with problems an organism has in satisfying desires, or learning what to expect from its environment in order to better defend itself, for example, then one has an account of the organism in relation to a given environment that has some chance of showing how inquiry evolved into experimental science. To start with the concrete is not to bar any claim to the abstract. However, to start with the abstract, and, what is worse, to reify it, may block the road home to the concrete. Such a view loses sight of our stake in the sciences, and the technical exploitability of its results. For all of these reasons, the assumptions of realism and the realist view of laws should be rejected. To paraphrase Gorgias, the realist position does not make sense, and even if it did, we could never know it, and even if we could know it, we couldn't communicate it in a language-dependent social institution like science. I have not proved that realism is false. If realism is nonsense, neither truth nor falsehood appropriately applies to it. But I do claim that the position is so unattractive that it is not worth anyone's time to try to articulate a sensible version of it.

Let us now answer the third question with which I began this section. What part of the realist project can be salvaged from the debris? Is it all loss? I think not. I intend to show in chapter three that we can make all the valuable distinctions a realist would want, without making the realist assumptions. Here, I would like to highlight a couple. One can salvage the worry about the fallibility of belief within a non-realist naturalist framework. Some of our attempts to relate to the world fail to succeed. Some languages may be more useful than others. Next, objectivity may be salvaged as an attempt to construct a world picture that is free from particular biases and idiosyncrasies and that has intersubjective validity. The connections to interests and needs may attenuate, and the results may transcend and modify the culture from which they arose. Next, a sort of realism, relative to a language, seems defensible. We can, after all, within experience, make distinctions between objects that we treat separately. We can isolate systems of interest, at least partially, from other systems in which they are embedded or to which they are related. And invariance under sets of manipulations may qualify something as real relative to the language-game in which such manipulations have point. So, some form of relative or qualified independence of subjects and objects is defensible. We are not, as anti-realists, in the night in which all cows are black (to quote Hegel's blast against Schelling). In particular, I will show in chapter three that one can be a holist and still have a place for a distinction between laws and accidental correlations.

2.4 Conclusion

Realism about laws came as an answer to defects seen in the regularity analysis offered by Hempel and Oppenheim. But realism seems to be saddled with insuperable problems of its own that make the position unattractive for those holding a naturalistic worldview. In particular, the claims of ontological distance seem at war with claims of epistemic closeness. Such claims open a gap between the subject and object of knowledge that becomes problematic to epistemic bridge-makers.
Appeal to science to bridge the gap seems a prayer of faith, given that all the resources available to science lie on one side of the divide and the goals of science on the other side of the divide. A view that respects the continuity of the natural systems we are with the natural systems we study looks more promising, and will be explored in the next chapter.

Chapter Three: A Pragmatist Position

In the light of the discussion in chapters one and two, it seems that what is required for a solution to the problem of laws is a way of drawing the distinction between laws and accidental regularities or correlations that does not raise insuperable epistemic problems. What I intend to do in this chapter is what both the realists and van Fraassen claim cannot be done: show how laws can be empirically accessible and still license inference to future experience. In other words, I will solve simultaneously what van Fraassen calls the identification problem and the inference problem.

The view to be defended here has been worked out within the framework provided by American pragmatism.35 The central claim is that laws are constructions that are part of our construction of a comprehensive world-picture. While these include qualitative laws and taxonomic laws, which in their own way are useful enough, I shall concern myself in this thesis with an important subset of scientific laws, the quantitative, experimental laws. Such laws are repeatable relationships between measured quantities that are determined by our manipulations of the conditions under which they are observed to hold. Why pay especial attention to these? I shall pay especial attention to these because science tries to replace the qualitative and the classificatory laws by quantitative ones when it can. I shall later argue that the reason for this is that the drive for more informative claims leads to the demand for greater precision in formulation, and quantification allows that. We seek to measure, when we can, and to control the processes of interest when we can, in order to develop more testable claims. On this point, the pragmatist is in agreement with Popper. So, while not dismissing the importance to science of other types of law, from this point on I shall talk only about the quantitative ones, especially the ones that are subject to experimental control.

35 The works relevant to my work here are those of Dewey, Nagel, Goodman, Quine, and Skyrms.

Laws arise within the context of a set of scientific practices which clarify and replace vague, imprecise articulations of regularities found in common sense. We construct laws in ways that make the replacing regularity more reliable as a basis of practice than the replaced regularity. Indeed, the central feature that science adds is the practical understanding of the conditions under which regularities hold up or break down--precisely what is needed to provide a defensible distinction between laws and accidental correlations. To be in a position to say what laws are, I will need to make some general claims about language, minds and the world that will set my position off from realist claims about the same. The structure of each subsection in this chapter will be a movement from the general to the specific. I will first develop general claims about language, practice and success, then specific claims about them. In this chapter I defend the view:

1. That laws are constructions embodied in some language, especially mathematical language (section 3.1).
2. That the language is applied within a set of practices that measure and monitor the relations between variables of interest (section 3.2).
3. That the constructs are placed in an environment in which they may fail (section 3.3).
4. That the test scientific constructs need to pass is that of reliable prediction (section 3.4).
5. That this view solves the problem of laws as we have developed it in chapters one and two (section 3.5).

3.1 Laws as Constructions

3.1.1 Constructions Relative to Language36

In this section, I argue that scientific laws are designed for a particular kind of use, and as such need to be explicitly embodied in some representation that we can grasp and use. This need not be language as we usually think of it; language-like objects (e.g., charts and diagrams) may be used as well. The language-relativity of laws is significant: some laws may even lack a translated version in other languages (e.g., 'grue' and laws stating regularities of emeralds), because there is no change that requires a law for its description. (Goodman's 'grue' applies to things examined before some time t just in case they are green, and to other things just in case they are blue.)

In calling laws constructs, I seem to be making an elementary blunder: there is language and then there is the world, that to which it refers or which it describes. But this is not a blunder, because one cannot talk about anything except within some set of distinctions that describe the properties of that something and the contrasting properties lacked by that same something. Meaning is not a function of isolated bits of language uttered in a vacuum, but is rather within the framework provided by a meaning-space mapped out by a language. One cannot, for example, tell what one means by 'cat' without some space of meaning that includes 'physical object' and 'living being'. The language must allow concrete particulars of the right sort. In addition, one needs further words to mark out distinctive properties, so shape-words, color-words, and texture-words are required to even introduce a word like 'cat' into a language that lacks such a word. The semantic space mapped out by a language has to be the right shape, not only to allow definitions of undefined terms, but to be able to characterize the class of objects at all. To be sure, some crucial distinctions may be lacking, and one may be able to coin new words or expand the meaning of some others by analogy and metaphor to talk about unexpected events. Still, under such circumstances, one is deploying or re-deploying linguistic resources one already has. Science never starts from a blank slate, but within the range of a meaning-space constructed within ordinary experience. The language as a whole is at work in the semantic reach of its parts. So, linguistic atomism is untenable. One must work with some version or other of holism.

Still, while one cannot talk about things that cannot be talked about, may they nevertheless exist? Cannot one make a meaningful and serviceable distinction between the meaning and the truth of claims? And if one can, might laws hold 'behind our backs', so to speak, where we neither see nor think about them? Might it not be of enormous practical importance that there be laws, even if no one is prepared to notice them or explicitly express them in language? All of these questions have the same point: to drive a wedge between verification conditions and meaning conditions.

36 See Dewey, chapter 3, "The Existential Matrix of Inquiry: Cultural", and Quine, Word.
This is, as I said in chapter two, part of the realist strategy: to make the truth of a claim independent of ways of finding out whether the claim is true. A claim can be true even though no one knows it to be true, and this applies to all claims whatsoever. The point I would like to raise in reply is that, the appearance of making sense notwithstanding, the realist strategy produces nonsense.

The joke about the earnest German biologist and the flea comes to mind. Noticing that the flea jumped for shorter and shorter distances as it progressively lost one limb after another, the biologist was dumbfounded by the fact that the flea suddenly refused to jump at all upon the removal of its final appendage. Since the flea had been compliant up to this point, and zero was not on the line formed by the other data points, the scientist concluded that the flea had inexplicably gone deaf! The point of the joke is the same point being made here: a question that makes sense in one context may utterly lack it in other contexts. To ask what an utterance means apart from any individual's experience may make sense. To ask what someone might mean by the utterance apart from any possible experience does not. One has deprived such an utterance of any legs with which to jump.

Language marks out distinctions within experience, and makes sensible applications within experience. To raise a general question of whether all our standards of correct application might not after all be in error is to use language in a way that denies the conditions of sensible utterance. This is a version of the claim Kant made about the fundamental categories of our thought: they make experience of objects possible; they cannot be used to extend beyond experience to objects that transcend experience. Wittgenstein's point that language requires publicly assessable criteria for correct application holds for scientific utterance as well. One simply cannot make sense of theoretical discourse apart from numerous connections with talk about phenomena. The phenomena must be appropriately labeled, described, measured and investigated. The language weaves experience into a structure. This point does not turn on our ability to reduce the meaning of individual terms or sentences to some so-called pure or neutral observation language. The point is rather that if the language, be it theoretical, observational or mixed, does not relate bits of experience with each other in some coherent way, then we are certainly not dealing with the language of science, and may not be dealing with a language at all. Our view of language and linguistic meaning must be broadly verificationist.

Language helps us come to grips with the world cognitively. We construct it to direct our noticings in different ways: perceptual contrasts, monitoring the effects of our interventions and actions in the world, placing an evaluative overlay on objects from the environment labeled friend or foe, and so forth. We simply are not in a position to act intelligently in our environment until we have constructed a space of meanings and values that map out relations of relevance, appropriateness and importance. In science in particular, we have additional reasons for demanding explicitness and precision of statement. Laws in science are cognitive devices constructed to enable us to predict the behavior of interesting systems in the world of experience. To do so, we need to isolate the system of interest.
To do that we need to be clear about what belongs to the system and what does not, and about which conditions are relevant to the behavior and which are not. Do we know these things a priori? No, we do not. But generalizations from ordinary experience form reasonable starting points for science, and science goes on to make these generalizations more precise (more on that later). It is important that science render explicit normally tacit habits of usage in order to make claims and terms precise. Attention to a habit already in place puts one in a position to notice that the habit is not working very well, and to construct a new habit that better serves in navigating the world of experience. The metaphor of a 'map by which we steer', used by Armstrong and originating in Ramsey, is an appropriate one for belief, but I think Armstrong missed the pragmatic point: we are in the business of constructing maps or representations of the world because we have practical goals to meet. For example, the qualitative regularity that air gets 'thinner' and air pressure goes down with increasing altitude may be serviceable enough for mountain climbers wondering why they tire more easily at higher altitudes, but it is not as useful as Boyle's law, which, with the addition of some laws of respiratory physiology, will tell airplane designers precisely at what altitude the pressurized cabin needs supplemental oxygen in order to prevent passengers from passing out.

The position I am defending here may buy us a realism of a sort, but it is an internal realism, to borrow H. Putnam's phrase. For the internal realist, we can only sensibly make existence claims within the semantic space mapped out by our language or some reasonable extension of our language. Any plausible existence claim must cohere with the framework of sense that we construct by the collection of our accepted theoretical and factual beliefs. So construction of laws within a language involves recognizing that our language is our world -- all the hard-won beliefs we have about the way the world is are formulated within our language and modify our language. This is to deny, of course, a central tenet of realism: that minds (and their contents), language (and sentences formed in that language), and the (external) world are separate metaphysically but connected semantically. Mind and language cannot be 'about' the world unless they are intimately involved with each other from the outset. But such intimacy belies the claim that there is some large metaphysical distinction between the three to defend.

Laws in particular connect up bits of experience by explicitly stating the general relation between specific observable quantities in mathematical form. Even qualitative laws need to specify the relation in question to be taken seriously. The reason is at least partially to determine the area in which confirmation or disconfirmation is to be sought. This explicitness excludes many ways the world could be. It sets both a higher standard for success and a lower standard for failure for the claimed generalizations. Why this would be desirable needs explanation, but here we simply observe that this is the effect of greater explicitness in formulation.
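To make the gain in explicitness concrete, consider a rough worked version of the cabin-pressure example (the figures below are illustrative assumptions, not drawn from any aeronautical source). Strictly speaking, relating pressure to altitude takes more than Boyle's law alone: combining the ideal gas law with the assumption of hydrostatic equilibrium at a constant temperature yields the isothermal barometric formula

$$P(h) = P_0 \, e^{-Mgh/RT}.$$

Taking $P_0 \approx 101.3$ kPa at sea level, $M \approx 0.029$ kg/mol for air, $g \approx 9.8$ m/s$^2$, $R \approx 8.314$ J/(mol·K), and $T \approx 288$ K, an altitude of $h = 8000$ m gives

$$P(8000\ \text{m}) \approx 101.3 \times e^{-0.95} \approx 39\ \text{kPa}.$$

A figure like this can be checked against barometric measurement and compared with whatever thresholds the laws of respiratory physiology supply. The qualitative regularity that air 'gets thinner' licenses no such calculation; the quantitative law does, and it can fail observably if the computed pressures disagree with the measured ones. That is the sense in which explicitness raises the standard for success and lowers the standard for failure.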
3.1.2 The Role of Mathematics

No constructivist account of laws can succeed without some account of the role mathematics plays in the articulation of laws. After all, if laws require mathematics, and if mathematics requires realism, then, as Pargetter argued, one needs to explain why laws should not also require a realist analysis, if one claims they do not. The answer is brief: even if numbers were real and their relations were real, it would not follow that the empirical relations modeled by the numerical relations are also real. The empirical relations may be constructs even if the numerical relations are not. I think we need not even grant Pargetter that much. There is no compelling reason to treat mathematics as other than a language with its own set of rules for correct manipulation. There is a great deal of historical evidence that mathematics incorporates, in abstract form, relations from extramathematical systems: geometry from the measurement of physical spatial relations, groups from transformations of various sorts, arithmetic from practices of counting, etc. It doesn't matter that there may be mathematical objects for which there are no experiential or physical correlates, or that the natural model may not even be correct. For example, physical geometry may not be Euclidean, even though the latter was designed with the former as the intended model: once one interprets the formal term 'line' as the 'least-time curve light traces out in a vacuum', there is a fact of the matter about whether such lines behave like Euclidean lines or like Riemannian lines. The connections with experience are numerous enough to make Platonism an unattractive account of mathematical truth. To be sure, mathematical abstractions do take on a life of their own, with unexpected consequences. But the point here is that realism is no more required in an account of mathematical relations than it is for the empirical laws they model.

3.2 Practical Conditions of Constructions

Wittgenstein described language-games as the interweaving of language and action, and however apt a description that may be of language and language subsystems in general, it certainly applies to the language of science. One does not simply have distinctions or terms, but a range of activities that involve determining the correct applications for specific cases, with the values to be assigned. To anticipate points to be raised in the next section, one wants to have not only the words, but the criteria for successfully applying those words to experience. Later, I will give a theory of measurement that accounts for the mathematical form and the experimental verification of laws.

3.2.1 The Connection between Thought and Action

A point made by all pragmatists that is especially important here is that all thought arises in a practical context, in connection with problems a thinking organism has in achieving particular ends. The problem may be more or less organic in character, where for example the organism is hungry, in pain, or up against a physical barrier of some sort that needs to be surmounted, and there may be no direct way to satisfy the needs the organism has. Thinking is an indirect way of satisfying the needs or solving the problems the organism has, by looking for items in experience which have been reliably associated with satisfactions in the past, e.g., a door or a ladder to surmount the physical barrier of a wall. The problem may be of a distinctly social sort, for example, looking for a coordination of activities of a group when its members are in disagreement over how to proceed.
The problem may be an intellectual one, such as puzzling out the application of mathematical resources worked out in different divisions of number theory to come up with a solution to Fermat's last theorem. The point here is that so long as the problem is a familiar one, for which we have the resources, habits already acquired are sufficient, and no thinking occurs. Insofar as the problem has novel aspects, however, thinking may be required for successful solution. Explicit attention, as I said in the previous section, becomes required when old habitual ways of proceeding do not work. In science, one goes out of one's way to test habitual ways of proceeding to see where they break down, in order to replace those with habits that are more successful in the problematic situation. The indirect route of thought may become very indirect indeed, as we will see when we come to the pragmatist account of mathematics and logic, but the point remains that thought is rooted in the practical and naturally finds its way back to the practical in the long run. The applicability of scientific knowledge is what one would expect, on a pragmatic reading of what science is about.

A number of points need to be made here. First, a thinker is an embodied organism that constantly interacts with its environment. Action is the part of the interaction for which the organism is the initiator. Second, the interaction is for the sake of maintaining the organism in that environment: survival and flourishing are the point of the actions. Third, any organism that has survived has a repertoire of actions that seem to work reasonably well in the environments it finds itself in. Insofar as they are performed roughly the same way in roughly the same contexts, they are deemed habitual. Fourth, any habit may fail to achieve its ends in a context that superficially resembles the ones in which it has succeeded in the past. Fifth, rational organisms, through the medium of representation, can turn the habit itself into an object for scrutiny, in order to differentiate the contexts and manner of its observance to improve its rate of success. Thought is precisely the process by which the deviation between ends and results is observed and corrected. In organisms without brains, similar results are achieved, if at all, by random modification of the behavior until something works. With organisms with brains, the trials can be internalized, so to speak, in inferential chains tracing out the conditions and consequences of actions.

3.2.2 The Connection between Words and Action

The next point is that this thinking is conducted in the medium of language for language-using organisms like ourselves. This point goes beyond the point raised earlier, that language is for the sake of experience and requires empirical applicability. Here the point is that language is part of a way of being in the world, part of a way of life. The set of distinctions that exists in our language is there for the actions it enables and the set of social practices it sustains. The language needs to be at least rich enough in distinctions to enable us to notice when the conditions are right to engage in a particular practice. Those distinctions need to be available to speakers of the language so that they can police its rules. The language need not be rich enough to formulate those rules explicitly -- although natural languages like English are -- but it must be rich enough to enable speakers to distinguish
The language need not be rich enough to formulate those rules explicitly -- although natural languages like English are -- but it must be rich enough to enable speakers to distinguish superficially similar perceptual occasions where the distinction marks the difference between success and failure (whether complete or partial) of the enterprise. The last point is that actions create the perceptual events that lead to linguistic utterance as well as the other way around: action elicits utterance and utterance elicits further action.

3.2.3 The Social Character of Practice

The next point to be made about practice is that practice is engaged in by the group. Since language is a part of a practice, this implies that language too is essentially social in character. Both points are important. Practices are performed by groups, who train others to do the same things in the same circumstances. As Wittgenstein has argued, the rules of a way of life are caught as much as taught. Imitation is more effective, and more frequent, than explicit drill in formulated rules. Nevertheless, the individual performance is positively or negatively reinforced depending on whether it conforms to or deviates from the group standards. Language, as a specific subset of practices, accompanies other practices and is similarly positively or negatively reinforced by the group. The group tries to sustain itself in a given environment, by means of group habits of action and speech. While individuals interact with environments to sustain themselves, part of the environment is the practice of the social group in which the individual finds itself. For this reason, survival tends to be at the level of the group. Individuals tend to survive or succeed if their group does. One can put this point in terms of institutions and speak in functionalist terms: institutions that meet needs in the society in which they are created tend to survive. The culture of the institution maintains itself. But its survival depends on what other institutions and non-institutions are doing: if the work is redundant, it may not be able to sustain itself. Furthermore, if the institution is in a society where some fundamental needs are not being met, the institution may die because the society does. It is hard to find uncontroversial examples of each, but here are some suggestions: the electoral college may be an example of an institution that dies for the first reason; the monarchy, an example of the second; and the hunter-gatherer ways of Native Americans, an example of the third. In each case, significant changes in the political and social environment -- the growth of an educated electorate; the growth of strong parliaments and a wealthy middle class; the invasion of hunting grounds by numerous well-armed and determined European settlers -- led to the demise of institutions that succeeded well enough in a former environment. Science, as an institution, faces pressure from similar shifts in political, social and economic environments, and maintains itself because the social practices of the group are able to 'deliver the goods' of reliable predictions and control.

3.2.4 The Practical Character of Science

The next point is to see that science is itself a set of practices that tries to maintain itself in a particular environment. Everything will hinge on how we characterize that environment, so much more needs to be said.
Nevertheless, for one committed to naturalism, the point should be clear: science, as a sublanguage, is part of a natural system embedded in a set of other natural systems, like minds and societies and ecosystems, and like them, it tries to maintain itself in that environment. Distinguishing my view from that of other constructivists who might be inclined to say the same thing with very different import will need to await first a discussion of the practices of science (3.2.5) and then some attention to the conditions of success for constructive activity (3.3). Here, however, I simply want to make some general observations. In its internal relations, the science community polices the rules of its practices. It sets standards for education and credentialing, for publication and awards. It sets the benchmark for quality of research and the uniformity of nomenclature, all to ensure that its members do basically the same things the same way. In its external relations with the rest of culture and society, whether in seeking outside funding, or in doing public relations work with the government or the community at large, the scientific community seeks to ensure the continuity of its work and its institutions. It succeeds in both because there is general agreement on the value of its products, a subject to which we will return in section 3.3.

3.2.5 The Continuum of the Practices of Science37

When I claim that science is a language-using set of practices created by embodied cognizers in interaction with the world, I am making some general claims about the representations that occur in science: none of them are autonomous, but all must sustain themselves in an environment created by this constant interchange between cognizer and cognized. To make the point explicit and raise its plausibility, I will now look at a range of activities that occur within the sciences to show how they involve the manipulation of the environment by the organism, and the modification of the organism to fit the environment.

37 The following list of scientific activities is my own, but it is by no means idiosyncratic. To compare, Lenzen in "Procedures of Empirical Science" gives the following list: 1) observation (under which he subsumes a. perception, b. counting, c. measurement and d. instrumental detection); 2) systematization, which includes classification, correlation, successive approximation, successive definition, and statistical analysis (Lenzen, "Procedures" in Neurath, et al. 1: 279-339). Ian Hacking, more recently and more provocatively, has suggested that the activities in the laboratory simply put into play and try to maintain a stable configuration of the following heterogeneous elements: 1. ideas, including questions, background knowledge, systematic theory, topical hypotheses, modeling of the apparatus; 2. things, including targets, sources of modification, detectors, tools; and 3. marks and the manipulations of marks, including data, data reduction, data analysis, interpretation (Hacking, "Self-Vindication" in Pickering (ed.), Science 29-65). I think Gooding's criticism of both is appropriate: the agency of the experimenter is not given its due in either view. The former focuses on logical relations between representations; the latter, on equilibrium of elements; neither focuses on the activity and the learning associated with active manipulation of the world.
On each activity, I wish to show a) that any cognitive yield from the activity comes only as a result of some interaction between the cognitive agent and the system of interest (i.e., there is no such thing as a cognitive free lunch) and b) that the product of such activity takes on a life of its own and can only retain its worth if it can maintain itself in the various environments into which it is placed within the context of scientific practices.

(i) Observation

It is practically a banal truism that science involves the observation of natural systems of various sorts. The point ceases to be banal, and becomes controversial, when the point is made that there is no observation without change, often change induced by the observer. Least controversial is that the change involves a change in the mental state of the observer. But observation involves physical changes as well. The pigment in the rods and cones must undergo change. The lens must focus, and the head must rotate, to bring things into view. Some such physical changes must take place in us, the subjects of knowledge, if we are to see anything at all. But in addition, some physical changes in the object may be necessary. It may, for example, need to be dyed, chipped, put into a culture, on a slide, under a lens, or into a bubble chamber to be 'observed' at all. Even if the object is already visible as is, it must, at least, change state to scatter photons. All observation involves selection and distinction. We need to distinguish object from background, and distinguish changes relevant to the behavior of the object from those irrelevant. Observation is value-laden in that there are preferences and evaluative judgments involved in what we choose to observe and what we choose to ignore. Observation is relative to us. We constitute the physical frame of reference. Our sensory modalities select what is perceptible, and our way of life places an evaluative overlay over all that we perceive. To put that out of play, in order to make disinterested observation, is an achievement, one that requires effort and attention. Perception involves (at least rudimentary) conception. We have already begun to think about perceptual objects when we have marked them off from the surrounding phenomenal field--even before we have described or classified them. Observation involves action. Observation is far from a passive affair. Some observations cannot be made unless we produce the phenomena in a suitable device, for example, a particle accelerator or an electron microscope. Similarly, we must prepare the specimen in suitable ways, for example, bombard it with a particle beam, or dye it with Gram stain. Even naked-eye observation is no straightforward matter, but requires training. It requires skill to do well, a skill that the scientific community inculcates in new members. One must distinguish actual observations from instrumental artifacts or optical illusions. Observations occupy a semantic niche in which they must maintain themselves against competitors. Plato claimed to have observations showing the heavenly bodies were perfect spheres, and most ancient Greek astronomers agreed with those observations, but those observations had competition.
Observations that the heavenly bodies have features inconsistent with smooth spheres--mountains on the Moon and spots on the Sun--observations recorded by Galileo in Sidereus Nuncius, competed for the same semantic niche and dislodged the observations of the ancients, and with those observations, the ancient theory was dislodged as well. In fact, where the ancient theory was absent, as in China, observations of sunspots were made at a much earlier date. Observations are placed in a context where replicability is important. The ideal would be to have a machine do it. And many chemical and physical experiments do have electronic devices as detectors, which feed the information directly to computers which record and process the data. Short of that ideal, however, one wants to have the procedures replicable by any suitably trained person. Ultimately, one needs to describe the conditions of observing phenomena in such a way that any other researcher could make the same observations. Once one can do so, they become detached from the individual who first made them and take on a life of their own: they now have to sustain relations of compatibility or incompatibility with other observations and the observational consequences of other beliefs. Whether they will succeed in so maintaining themselves is up for grabs. As the Galileo example above shows, rival observations may be made in the field that show the researcher failed to see what he or she claimed to see. It may be that one notices differences that are irrelevant to the behavior of the system, or fails to notice differences observed by others that are relevant to the behavior of systems.

(ii) Description

Skilled observation already involves language, and the two reciprocally modify each other. However, description goes further in systematically characterizing some object or system, at least at the level of phenomena. Description involves construction at several levels. Like observation, description involves interaction with objects. For example, one may need to turn a rock over, push it, crush it, or dissolve it to generate a list of characteristics sufficient to identify some stone as being a particular mineral. Even in field studies, description requires a special sort of attention, a looking disciplined by the phenomenal features of the object. Descriptions place objects in relevance classes of interest, which suggest interesting properties the object may also have. Description also involves interaction within language. As Dewey usefully points out, once one has a linguistic designation for something, one can trace out not only relations between events, but also inferential relations between terms. This may mean embedding the term in a new environment. To use Dewey's example, once we have the word 'cloud' we not only have the customary connection with rain, but also the possibility of association with terms like 'temperature', 'pressure' and 'volume' (Dewey 58-9). The possibility of abstracting an item from one context and placing it in another is a form of manipulation built on an analogy with transplanting plants. The inferential component lies in the fact that, as represented, the item can now be thought about and its properties and behavior simulated in thought. The notion of simulation has turned out to be very powerful indeed. The Einstein-Podolsky-Rosen thought experiment and Monte Carlo simulations of particle detector behavior have proved very important to modern theoretical and experimental physics respectively.38

38 For the former, see Kuhn, "A Function for Thought Experiments" in Kuhn, Essential 240-265; for the latter, see Knorr Cetina, Epistemic 46-78.
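Since Monte Carlo simulation is invoked here, a minimal sketch may help fix ideas. It is my own illustration, not drawn from the sources cited: the detector and both probabilities are hypothetical, and the point is only that the fraction of interest is estimated by simulating many events rather than derived analytically.

```python
# A minimal sketch (my illustration) of Monte Carlo simulation of detector
# behavior: estimate what fraction of particles a hypothetical detector
# registers, given assumed hit and response probabilities.
import random

random.seed(42)
P_ENTERS_ACCEPTANCE = 0.6   # assumed chance a particle crosses the detector
P_REGISTERS = 0.9           # assumed chance the detector fires on a crossing

trials = 100_000
detected = 0
for _ in range(trials):
    if random.random() < P_ENTERS_ACCEPTANCE and random.random() < P_REGISTERS:
        detected += 1

print(detected / trials)    # converges on 0.6 * 0.9 = 0.54 as trials grow
```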
If a term proves useful, it may enable scientists to forge connections within their theoretical web of belief to make the web a simpler, more coherent whole. It may enable us to explain more, to understand more. But if so, it will do so by putting us in a position to predict successfully how a system will behave. That leads to the next point. Descriptions also involve predictions. When one describes something as inanimate, a set of expectations is in place that prepares one to be surprised if they are not met. Rival descriptions of systems of interest set expectations in different ways, and different behavior may lead one set of descriptions to displace another. What one is prepared to notice is a function of the discriminations available in one's language. Those discriminations exist alongside of practices that make use of them. So, the distinction between 'Gram-negative' and 'Gram-positive' enables one to distinguish bacteria based on how those bacteria absorb the Gram stain we use to make visible cellular structures which are normally invisible under an optical microscope. This distinction turns out to be useful, however, because the different bacteria are differently related to particular kinds of disease, and so this procedure of identifying bacteria turns out to be diagnostically useful.

(iii) Systematic Classifying

As I argued above, classifying is already involved in description. Preparatory to constructing theories in a domain is the belief that some things or items of experience 'belong together' in some way. We construct classes for which we later construct accounts explaining the unifying principle of the classification. As taxonomic disputes in evolutionary theory indicate, classifying is not an innocent matter. Any classification admits of rivals and needs to maintain itself in the field by considerations of fit with some set of theoretical and empirical commitments already held in the scientific community. Classifying objects or systems in a particular way indicates ways of treating the object that are appropriate to the kind of thing it is. Thus, if the O-H group in a particular molecule is labeled an alcoholic O-H, this will lead us to treat the substance differently than if we classify it as an alkaline O-H, as anyone who mistook sodium hydroxide for rubbing alcohol would recognize! A classification goes beyond mere description in the following ways. i) It places this item in conjunction with other items not present, with which we claim it has some similarity. The similarity grounds inferences to other structures, properties and behaviors this item may be expected to have in common with other items in the class. Those inferences may not have been in anybody's mind when the classification was first suggested, but may now change the expectations of those who use it in ways that either strengthen or weaken its value as a system of categories. It takes on a life of its own once articulated and in use, in that it suggests theoretical moves and empirical tests that may not pan out. Thus a whale, notwithstanding its superficial resemblance to fish, is classified by us as a mammal because it bears its young alive and suckles them. But this classification leads us to expect other common features, even if we were not already aware of them: whales are air-breathers and warm-blooded.
ii) A classification that has any claim to 'naturalness'39 will involve some theoretical analysis of the object that will sort out which items in a description are relevant to the behavior in question and which are irrelevant. This point, too, is present in the example: it seems 'natural' to us to regard whales as mammals precisely because we do believe lungs and ovaries are features that only arose late in evolutionary history.

39 We can still use such phrases as 'natural class' as long as we recognize what we are doing. We are simply saying of one set of constructions that we are committed, in some way, to saying the world is this way rather than another way, and that our willingness to commit that way is at least partially a function of the theories we accept. This again is a realism of a sort, but an internal realism constructed within a particular language with its associated set of practices.

As should be pretty clear, we can classify items in a way that does not hold up. We may assign a rock to one geological stratum, and then find ourselves confused over the fossils it contains. We can assign an event in a particle accelerator to a particular particle, and then find out it has the wrong mass or charge or spin for that particle. We may find that not all members of the class share the identifier property, and then need to give up that property as the criterion of class membership. It may turn out that the class has no set of properties widely shared by its members, and therefore lacks the cohesiveness of a useful classification. Classifications ought to be treated as hypotheses that may or may not prove useful. We may find one or more properties shared by all members of a class without concluding that the class is a natural class. We do not, at any point, find classifications that are ready-made. The natural classes, which realists assume, do not exist. All classes are constructed by us, and we judge the degree of naturalness a particular collection of objects has by theoretical considerations governing the connection between the objects and the identifier property. 'All the objects in this room' may score low or high on the degree-of-naturalness scale depending on our theory of how things came to be in this room.

(iv) Measurement

All measurement involves interaction between systems to determine the behavior of the empirical variable of interest. We need this information before we can construct an appropriate numerical model for the empirical system that will assign numbers to physical magnitudes and predict the states of an empirical system by manipulating the numbers representing the state variables (Suppes and Zinnes 4-8; 17-19). So, for example, we learn that weight is such that if we weigh on a pan balance two different collections of objects and they balance, then any third set of objects that balances either of the first two sets will also balance the other. So transitivity holds for the notion of equal weight. It also holds for unequal weight: some third set of objects heavier (or lighter) than either of the first two will go down (or up) if either of those two sets of objects is put in the opposite pan of a pan balance. Finally, there is the important property that for any two sets of unequal weights, some object can be found that will make the lighter set balance the heavier set. Those properties behave the same way as a numerical model which is a subset of the real numbers under the operation of addition.
So the empirical system involving physical objects, an operation called 'weighing' and an operation of concatenating sets of physical objects (by placing them on the same pan of a pan balance) behaves like a set of real numbers, ordered by the less-than-or-equals relation and the operation of addition. Furthermore, one can show that any other numerical representation of this empirical system will be isomorphic with our first one. They will be equivalent up to an equivalence transformation, in this case, multiplication by a constant. So one can construct a scale for weight that will allow one to use arithmetic on measurements. Nor is weighing restricted to pan balances. If spring balances order the same objects as equal to, less than, or greater than one another just as the pan balance does, including for concatenations of objects, then these two ways of measuring weight are equivalent. So, as one can have equivalent scales with a different choice of units, so too one can have equivalent empirical systems. Here too, we construct. We construct the numerical relational system. We construct the measuring devices. We engage in activities like calibration, testing and refining measuring devices, which solve the problem of nonequivalence. A simple example may suffice. Pan balances do not exhibit complete transitivity because there is a smallest discernible difference for any empirical balance. This presents the following problem: for a set of physical objects a, b and c, it may be the case that a is indistinguishable from b in weight on a given pan balance, and b from c in weight on the same pan balance, while a and c are distinguishable because their difference exceeds the smallest discernible difference of that pan balance. But if this is so, then the numerical relational model and the empirical system of objects and pan balance are not isomorphic. We try to construct ever more sensitive devices so that we can avoid the inconsistencies obtained from less fine-grained measurement. 'Weighs the same as' should be an equivalence relation.
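The nontransitivity just described can be made concrete in a few lines. The following sketch is my own illustration, with hypothetical weights and thresholds, not anything from the measurement literature cited above.

```python
# A minimal sketch of the transitivity failure described above: comparison on
# a balance with a smallest discernible difference yields an
# indistinguishability relation that is not transitive.

def indistinguishable(w1, w2, threshold):
    """True when a balance with the given smallest discernible
    difference cannot tell the two weights apart."""
    return abs(w1 - w2) <= threshold

# Hypothetical weights (in grams) chosen so that a ~ b and b ~ c but not a ~ c.
a, b, c = 10.00, 10.04, 10.08
coarse = 0.05   # smallest discernible difference of a coarse balance

print(indistinguishable(a, b, coarse))  # True
print(indistinguishable(b, c, coarse))  # True
print(indistinguishable(a, c, coarse))  # False -- transitivity fails

# A more sensitive device shrinks the threshold, so fewer triples violate
# transitivity and the empirical system approaches the numerical model, in
# which 'weighs the same as' is a genuine equivalence relation.
fine = 0.001
print(indistinguishable(a, b, fine))    # False -- the difference is now detected
```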
(v) Experiment

Less needs to be said here: experiments are very obviously constructed. The equipment, the design and the procedures used are all human inventions. Experiments all involve manipulating some system by manipulating variable conditions internal or external to it and then monitoring the changes such manipulations produce in the system. For example, we may produce genetic mutations either directly by irradiating the cell with x-rays, or indirectly by putting dioxin, a known mutagen, in the water surrounding the cell. Then the interest lies in how such mutations affect the behavior of the cell. So, in experiments, we are manipulating the world and watching how it responds to our manipulations. What is the epistemic pay-off for such manipulations? What do experiments add to what we already have in observation and measurement? While it is true that the value of any measurable variable cannot be determined without manipulation of the system in question, there are manipulations which intentionally change and control the value of some variables while observing the impact on other variables. One may change the chemical environment of a bacterial culture and observe the impact on growth. Similarly, one may change the rate of interest, and observe the impact on consumer spending. Some of this variation is done to develop understanding of what control of the variable means (Gooding 24-27). Some is done because one does not know the shape of the relation of covariation, or even what all the variables are.

(vi) Data Analysis

Data are not simply given but are constructed. Often, in the course of an experiment, we sit down and decide whether we believe the data or not. This may happen after the run is over, or in the course of the run, where the readings may simply not make sense. We may check the calibration of the detectors, see if operational protocols have been followed, and test the instruments for defects, wear, alignment, etc. Why on earth would one do this? Because we have expectations of what should happen, given our theories about the behavior of the system in question. When those expectations let us down, we have a problem that needs to be solved. Did we do the right experiment? Did we perform it the right way? Were the instruments the appropriate ones? Are they working properly? Each of these questions sets us an investigative task. We cannot simply throw out the points we don't like, because we need some justification for discarding data. If the detector drifted off calibration, then one could argue that systematic error in the detector invalidates the set of data points. Data analysis may involve charting, graphing or trying to fit the values into some sort of geometry of the relevant quality space: this involves constructing a model for the data, which interacts with theoretical models of the system. These constructions may fail: the data may simply not conform to the model we choose, and the prediction based on the model may not work.

(vii) Modeling

Modeling covers an array of activities. We may build physical models by analogy with simple systems, e.g., billiard balls attached to springs as a model of the thermal behavior of a solid. Or the models may be very abstract, as with superstring theory. To revisit a point already raised in the discussion of measurement, one needs to be able to show that the mathematical model is isomorphic or homomorphic with the empirical system modeled, before one accepts it as a model. In order to do that for measurement, one needs to construct a numerical relational model and an empirical device with an associated set of procedures for measurement, and then show that they have the same relational properties. The same considerations hold for more theoretical models invoking theoretical entities. One still needs to 'get at' or identify (to use van Fraassen's language) the relevant behavior that makes the systems relevantly similar.

(viii) Statistical Analysis

Data analysis requires statistical analysis because not all errors are systematic. Some errors are random. A simple illustration of this is that one may measure the length of a chair carefully ten times and get ten different (but very close) results. The same holds true for measuring the temperature of a glass of water with a mercury thermometer. This may seem odd, but our conception of the universe is that there is a great deal of complexity to the systems we are studying and their interactions with us, our devices and each other. Thus, we occasionally find that our predictions of where the next data point will be, based on our last one, fail. In such cases it sometimes is possible to identify the reason for the failure in some feature of a system's environment which is creating 'noise' in the detector. What we do is construct a model of the environment that tells us why the data diverge from the prediction generated by the model for the system of interest. Where we cannot identify the source of the noise, we simply give a model of the data that predicts the next data point will fall within a given range with a given probability. Clearly, the claims are sensitive to which statistical model we construct and the actual empirical behavior of the system in question in its particular environment. Statistical models can fail just like other models.
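What such a model of the data looks like in the simplest case can be sketched as follows. The readings are hypothetical and the rough normality assumption is mine; the point is only that the model issues a defeasible prediction of where the next data point should fall.

```python
# A minimal sketch (my illustration) of the statistical model described above:
# repeated measurements are summarized by a range into which the next data
# point should fall with a given probability.
import statistics

# Hypothetical repeated measurements of one chair's length, in centimeters.
readings = [81.2, 81.4, 81.3, 81.5, 81.2, 81.3, 81.4, 81.3, 81.6, 81.3]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)

# Under a rough normality assumption, about 95% of future readings made the
# same way should fall within two standard deviations of the mean.
low, high = mean - 2 * sd, mean + 2 * sd
print(f"predict next reading in [{low:.2f}, {high:.2f}] cm with ~95% probability")

# The model is defeasible: if new readings repeatedly fall outside the range,
# the model of the data (or of the measuring process) has failed the test.
```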
(ix) Communication and Verification by the Community

Last, but not least, in our (incomplete) survey of the practices of science come communication and verification. All scientists publish their results at some point or other. This was as true for Aristotle and Galileo as it is today. And the point of publishing is to have the results publicly available and publicly checked. Some advocates of the social studies of science (e.g., Gilbert and Mulkay) have focused on the form of the scientific report, and its divergence from the informal spoken discourse of scientists. Although these same authors have addressed the rhetorical features at the expense of the logical features of scientific writing, it seems clear from the way these reports are actually used in the scientific community that their primary function is to enable quality control by the community over its results. Results are never simply used, nor are results simply disputed, as an emphasis on either consensus formation and alliances, or dissensus formation and controversy, might suggest. Rather, a process of public checking is initiated that involves examining procedure, instrumentation, methods, conditions and materials used, as well as the 'reputations' of the research team and lab for good work, and so on. The sociologists have a point: a Nobel laureate is taken more seriously than some other researcher, no matter how good the latter may be. But, at the end of the day, not even Nobel prizewinners can pass off just any project. Linus Pauling won a Nobel prize for chemistry, but the respect for his work on protein structure did not carry over to his claims for vitamin C. Notwithstanding the stature of Einstein, his claims against quantum mechanics are not taken seriously in the physics community today. This amounts to a distinguishing mark of scientific practice: scientists check up on the work of their peers. This is not to say the system of peer review always works. It does not work, for example, when, owing to prejudice for or against the conclusions or the researchers, the accepted canons of good experimental work are not used to evaluate the experiment. But, while the system of peer review does not always work, it is nevertheless important for the scientific community to subject its work to public display and review. This is not the case with art, for example, where one writer or painter need not be aware of the work of, or criticisms of, another artist. There is, of course, peer review in the arts: juries, foundations for the arts, etc., do employ artists to evaluate the work of fellow artists. Nevertheless, no artist need alter their work to line it up with the recommendations of such review--in fact, the cult of authenticity, where it is in force, would accuse an artist who did change his or her work of lacking integrity. In the sciences, however, it is another matter. One is encouraged to stick to one's guns there only if one anticipates that further research will answer the worries expressed by other researchers in the field.
Vindication comes, if it comes at all, in the form of experimental data that passes all the tests constructed by the research community. The crucial test is replicability: can anyone else, with similar materials and similar equipment, repeat the experiment and get the same results?

3.2.6 The Test of Practice: Replicability

Why is this a reasonable test? What is at stake is whether one can replicate the results of some piece of research. Repetition is the only sufficient answer to the objection that a result was an artifact of an instrumental set-up, or of a researcher's expectations. Both of these points bear on the objectivity of the results. Replicability is not a condition of success of, for example, symphonic performance. One may appreciate, indeed seek, different interpretations of the same piece by Beethoven, for example. Nothing further hangs on 'replication' of an artistic performance, except of course that it ceases to be a performance of a particular piece if the minimal conditions of the performance (right notes in the right order, by the right instruments, for approximately the right length of time, etc.) are not observed. Still, one may positively value a performance played slightly faster or slightly slower, or transcribed in a new key. In replicating experiments, however, variation serves only one end: to test the limits of validity of previous experimental results. 'Replication' cannot mean exact repetition in any event, since, at minimum, one is doing runs at different times. But replication marks out a similarity class that overlaps with experiments which extend the range or increase the precision of the original experiment. One systematically varies the conditions to see if the same curve captures the data. The law states the relevant variables. Now, the point is to see how the outcomes vary with the variables. If the covariation ceases, then one has not found the functional relation one is seeking. If Boyle's law is checked at higher and lower temperatures, higher and lower pressures, with the same or different apparatus, it is all with a mind to checking the conditions under which the relationship between pressure and volume holds. Has one captured all the interesting variables? Does one know which variables are relevant (e.g., temperature, pressure, volume, the amount of gas) and which irrelevant (e.g., the type of gas), and whether, under some conditions, the irrelevant ones become relevant? For example, at the critical point, at a different temperature and pressure for each type of gas, Boyle's law breaks down. The scientific community, wanting to eliminate certain types of errors, will minimally demand that the researcher repeat the experiment. Successful repeatability, however, buys one another form of objectivity: the results become exportable, not only from a particular lab but from labs in general. This is valuable because, whereas replicability is a matter of the scientific community coming to agreement that an experiment was properly performed and the results credible, the repeatability of the results gives scientific results a passport to other communities as well, indeed, to culture as a whole. I will say more on this point later.
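A toy version of this checking procedure, my own illustration with invented numbers rather than Boyle's data, shows what holding up under replication amounts to: one asks whether the quantity the law declares invariant stays stable across deliberately varied runs.

```python
# A minimal sketch of replication as a test: Boyle's law says
# pressure * volume is constant for a fixed amount of gas at fixed
# temperature, so replications at varied conditions check whether the
# product stays stable.

def boyle_constant(pressure_kpa, volume_l):
    """The quantity Boyle's law predicts is invariant within a run."""
    return pressure_kpa * volume_l

# Hypothetical runs: (pressure in kPa, volume in L) at the same temperature.
run_1 = [(100.0, 2.40), (150.0, 1.60), (200.0, 1.20)]
run_2 = [(120.0, 2.01), (160.0, 1.49), (240.0, 1.00)]  # a replication

for label, run in [("run 1", run_1), ("run 2", run_2)]:
    constants = [boyle_constant(p, v) for p, v in run]
    spread = max(constants) - min(constants)
    print(label, constants, "spread:", round(spread, 2))

# The regularity counts as lawlike only if the constant stays stable as we
# deliberately vary pressure, apparatus, and type of gas; near the critical
# point the product drifts, marking a boundary condition of the law.
```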
3.2.7 Laws as Constructed Relations between Measurable Variables

In the limited sense that is central to this dissertation, a law is a defeasible relation of covariation between measurable variables that models, in mathematical terms, relations in some system of interest. It is a relation of covariation: we cannot even determine it unless something in the world changes, usually by our initiation. But it is a stable relation of covariation, by which I mean it is invariant with respect to a particular set of manipulations that both creates instances and fails to create counterinstances even though we might have expected it to. I will have more to say about creating counterexamples in the next section. The positive role of constructing instances makes the manipulations a 'recipe' for creating ephemeral phenomena (e.g., decay events) or stable products (e.g., polymer plastics). The defeasible relationship between variables is one we can bring about repeatedly. Replicability is an important part of the second-order invariance of first-order relations of covariation that distinguishes an accidental correlation from a law. To say that laws are constructed within experience needs clarification. There are views of experience that make it the subjective contents in someone's head, or the pure present (the future and past only available in representations of various sorts), or all laid out (tenselessly, as observations past, present and future), or untamable flux. None of these views as they stand is particularly helpful. Experience has to include more than the self or it will be imprisoned in the self. It must include more than the present, or it will be imprisoned in the present. It must include more than some static, eternal structure of observations past, present and future, or one will never grasp the temporal character of experience. Experience must include more than untamable flux or else one will never be able to construct notions that make the movement of experience intelligible--or to set expectations of what will happen that stand some chance of being useful. All of these notions of experience are old, and caught up with the problematics of the static and the changing. If formulated in these terms, the problem we have set for ourselves cannot be solved, for the stability of dynamic patterns of experience does not start with according priority to either the static or the changing. Rather, the stability that is sought is stability relative to a particular set of conditions as we have ascertained them in a given environment. We set the formulation up as we interact with systems of interest, and note under which conditions the system behaves as expected according to the formulation. Then we put the formulation under pressure by altering the conditions to see if the formulation holds up. We call 'laws' only those constructions that seem to hold up pretty well under systematic variation of their conditions of validity. So, for example, we are more confident in the claim that a particular virus causes a disease when we can create the disease in a healthy specimen by injecting the virus. We are more confident still when we can cause the same disease in other species. The conditions of the construction of laws account for some of the features of laws. i) These conditions of construction account for the use of simple mathematical relations where the conditions of validity are suppressed. Boyle's law, as we saw, is valid for atmospheric gases over the normal range of temperatures for atmospheric gases. Although the variation of temperature is great from the tropics to the poles, the variation remains within the scope of the law.
Since such variation in temperature does not affect the validity of the law, temperature can be left as an unstated parameter in the formulation of the law. Similarly, interpolation and extrapolation are 'covered' in the sense that we have reasons to think the law continues to hold in places where we have not checked it. (The details of the justification for this confidence must await 3.3 on the justification of laws.) ii) These conditions of construction account for the functional form of many laws, because the laws were arrived at, in many cases, by varying one condition while monitoring others. Hence differential equations frequently occur in formulations of laws. iii) These conditions of construction account for the fact that laws often do not specify the units. For example, Boyle's law does not specify whether volume is in cubic centimeters, liters, or cubic feet. One can prove this does not matter, as Suppes and Zinnes show, because any suitable scale will be related by a transformation function, as the sketch following this list illustrates. A similar argument can be given about the equivalence of different suitable measurement processes for the same variable (Suppes and Zinnes 8-10; 19-20). iv) These conditions of construction account for the universality of the law. This point may seem redundant in the light of point i, but it is worth looking at the implications. First, replicability accounts for universality with respect to time and place. Replicability involves a denial that when and where an experiment took place is a condition of the law holding. Absolute time and place do not occur in the equations; only relative time and place do. Second, it accounts for what Harre called in our previous chapter 'substance universality', his paradigm example being Newton's laws of motion, which apply to physical bodies generally, not just the moon or falling apples. Replicability may, by semantic ascent or descent, widen or narrow the domain of a law. Boyle's law originally applied only to air, but we find that other gases behave similarly. In fact, it has become part of how we define a gas, so it combines with Charles's law into something we call 'the universal gas law.' Similar extensions occur in biology, where observations valid at the individual level are extended to the species, the genus, the family, class, phylum or kingdom. In the reverse direction, the laws of planetary motion are applied to newly discovered asteroids and satellites to determine their trajectory. The laws of their ballistic behavior are the same as the laws of planetary motion. So, the domain of objects to which a law properly applies is fixed by replication, and the universality is relative to that domain. v) Finally, these conditions of construction account for the inferential role played by laws. The mathematical structure enabling deduction applies to the empirical system because we set up the construction to make it so.
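Here is a minimal sketch, my own, of the unit-invariance point in item iii. The only outside fact assumed is the standard conversion factor between liters and cubic feet; everything else is hypothetical.

```python
# A minimal sketch (my illustration) of point iii: a change of units is
# multiplication by a positive constant, so the form of a law is
# indifferent to the units chosen.

L_PER_CUBIC_FOOT = 28.3168  # standard conversion factor, liters per cubic foot

def to_liters(volume_cubic_feet):
    return volume_cubic_feet * L_PER_CUBIC_FOOT

# Boyle's law in two unit systems: the constant changes by the same factor,
# but the relation pressure * volume = constant is preserved.
pressure = 100.0            # kPa
volume_ft3 = 0.08           # cubic feet
k_ft3 = pressure * volume_ft3
k_liters = pressure * to_liters(volume_ft3)

print(k_liters / k_ft3)     # equals the conversion factor, 28.3168
# Ratios of volumes, and hence every prediction the law licenses, are the
# same on either scale; this is why the law need not specify its units.
```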
3.3 Success Conditions for Constructions

I will argue in this section that the justification of law claims lies in their usefulness. While no construction is guaranteed success, successful constructs are very useful in performing the important roles of explanation, prediction and control. In the previous sections, I commented on how laws are constructed and how they maintain themselves in a particular environment. Before I go on to say how these constructs are justified, it is appropriate to say something general about this notion of a construct 'maintaining itself in a particular environment.' What does this metaphor mean? Similarly, what can it mean to say a construct 'takes on a life of its own'? What I mean in this context by 'maintaining itself' is that any new term or claim will only be able to establish a consistent practice if it fits into a web of distinctions or claims already in place. In ecology, one would say 'finds its niche.' A distinction can do so if there are no competing distinctions: the phenomena may be new and require new vocabulary to describe them. The competing distinctions may be vague and lump together distinct phenomena. 'Heat', for example, in Bacon's usage, was too vague to support any interesting theory of heat as we now understand it. Color words may need to become more precise and definite to handle spectral analysis or qualitative chemistry. Competitor descriptions that need to be dislodged may be entrenched with other terms, concepts and claims that would need to be revised or replaced if a new term were going to establish itself. For example, 'sunrise' as a description of the movement of the sun needs to be reduced in theoretical status to mere appearance before one can simply say 'the earth moves around the sun, not vice versa.' 'Taking on a life of its own' involves the following features of contingent group practice in the use of terms: 1) A term, once objectified, may come to have a very different significance than the one the individual author or even the original group of practitioners had in mind. The term may be altered to fit in with contemporary practices and beliefs. 2) The term, as part of a process of inquiry, may come to have different and, perhaps, more definite content as inquiry proceeds. The term may turn out unusable as nature resists being constructed that way. Finally, and most importantly on my view, 3) a term may be successfully embedded in new contexts. It may be independently combined and employed with other terms and claims in another language game. The term may do useful work in contexts other than the one in which it originated. I have in mind here how laboratory phenomena can be articulated in ways that make them useful in technological application. For example, one may baptize a term such as 'atom' to describe chemical events, only to find that not only do the conception and practices of chemistry change, but the proper description of atoms changes as well. Dalton launched the term and the conceptual change, but the evolution of the significance of the term was no longer in his hands once the term was launched. Both of these metaphors point to the complex and sometimes very indirect ways experience causes us to alter our practices. 'Maintaining itself' is a metaphor that captures my holism, because every term requires a context. The phrase 'life of its own' captures the objectivity of terms, because once articulated in group practices, these terms can resist idiosyncratic use, or can change values in the light of experiential consequences. Both of these points differ from realist positions on meaning. For realists, it is the world, not the group, that determines correct usage. For the realists, it is the atomicity of the world, not the articulation of the sign, that guarantees independent combination.
Still, it is important to say these things to show 1) that there is an alternative to the view that makes meaning depend on reference, one that does not make meaning 'something in the head.' The use theory of meaning places meaning in group practice, with all the historical contingency that involves. 2) It is also important that holism need not commit us to deny the obvious: that words and sentences can do roughly similar work in different linguistic wholes. Both points are important to both the construction and the testing of constructs. A term or claim must make sense to (fit into some practice of) some group to be viable, but to be successful, it will need to be able to fit into the practice of other groups as well. With some allowance for technicalisms that are properly creatures of technical language games only, some terms or paraphrases will have to lend themselves to common sense use. If the terms or claims cannot help us in the way we think about the world, or change the way we live, they have not succeeded in the sense of success important to this thesis. I will explain what I mean by that by explaining how laws succeed under testing. Since laws are constructed under a set of conditions, there is already testing occurring during construction, so the separation we make here is somewhat artificial. Nevertheless, it is worthwhile for analytical purposes to give construction and testing separate treatments, even if the same processes are involved in both cases. We are looking at the same processes from different perspectives when we ask alternately how laws are created and sustain themselves in a given environment, on the one hand, and how their stability contributes to their usefulness, on the other hand. So, in this section, I want to go on to show how constructs in general, and laws in particular, are tested. This falls short of justification in the sense of demonstrating the truth of such claims, at least if truth is construed as being acceptable beyond any possibility of future revision, but it is still grounds for rational acceptance. In particular, I want to argue the following:

1. Empirical adequacy is not enough (3.3.1).
2. Consensus is not enough (3.3.2).
3. Reliable prediction is required (3.3.3).
4. Reliable prediction comes of replicable experimental results (3.3.4).
5. Reliability forms the basis for decision-making at both the level of individual action and broader technological policy (3.3.5).
6. Laws are constructs on which we can build plans of action and technical products that have some assurance of success (3.3.6).

3.3.1. Empirical Adequacy Insufficient

We live and act in time. We act based on experience, but we must decide what to do in the light of events that have not happened yet. This is the problem for the empiricist. The empiricist is right to insist that the world is too complex to be known from the armchair: we must study the way objects behave before we can be in any position to frame the laws governing their behavior. But the future, which hasn't happened yet, is relevant to the truth of claims about those laws. One way to deal with the problem is by the concept of 'empirical adequacy'. Construct the set of all observations, past, present and future, that bear on the truth of the claim. Van Fraassen calls a theoretical claim that is consistent with such a set 'empirically adequate.' Van Fraassen claims that logical consistency and empirical adequacy are the most that an empiricist can reasonably demand of a construct.
I shall now argue that the pragmatist should ask for more than this, and that this additional demand is reasonable given the cognitive role we expect laws to play. Let us grant that, in some sense, truth is atemporal. We would agree that if, in the future, events turned out differently than expected, we would claim that the law we used to make the prediction was false and always had been false. If we do not concede this, then prediction affords no real test of laws. Still, there are two problems with van Fraassen's formulation. The first problem is that it seems we could not know whether a claim is empirically adequate, because we never have access to the set of observations as van Fraassen describes it. The second problem is that 'consistency with observation' is too weak a demand, given the requirement of informativeness. The first problem is that van Fraassen at times talks as if this atemporal class existed, whether anyone ever makes these observations or not. While at times he carefully relativizes observability to the technical means available at the time, at other times he speaks as though these confirmers and defeaters simply existed, whether they ever become part of experience or not. This, I think, is problematic for the same reasons the realist view of truth is: once a wedge is driven between truth and verifiability, meaning becomes problematic. Since we are temporally and spatially located beings, it is important that we be careful how we describe the relevance of future evidence we do not yet possess. Van Fraassen's set of observations is an atemporal set, and all our empirical means are temporal. We have a problem of empirical access here. The second problem is that consistency with observation can be bought too cheaply. One simply weakens the theory. Instead of predicting rain tomorrow afternoon, I predict it will rain some time this month. I save the hypothesis, if I do, at the cost of content. I can weaken the hypothesis into a tautology: either it will rain tomorrow or it will not. I now have a hypothesis that is consistent with all the empirical data--and perfectly useless as a law (Hanna, "Empirical Adequacy"). The law no longer tells us anything definite about the world.
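The trade-off between consistency and content can be made vivid with a toy calculation. The 30% daily chance of rain below is an assumed figure of my own, not anything from Hanna; the point is only that weakening a hypothesis buys survival at the price of informativeness.

```python
# A minimal sketch (my illustration) of why consistency is too cheap:
# weakening a hypothesis raises its probability of surviving the evidence
# while draining its content. Assume, hypothetically, an independent 30%
# chance of rain on any given day.

p_rain_per_day = 0.30

p_specific = p_rain_per_day                      # "rain tomorrow afternoon"
p_weak = 1 - (1 - p_rain_per_day) ** 30          # "rain some time this month"
p_tautology = 1.0                                # "rain tomorrow or not"

for claim, p in [("specific", p_specific), ("weakened", p_weak),
                 ("tautology", p_tautology)]:
    # Informativeness moves inversely to probability: the tautology is
    # certain to survive the evidence and tells us nothing at all.
    print(f"{claim:9s} P(consistent with outcome) = {p:.3f}")
```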
Now, it is one thing to say that we need empirical access to relevant evidence, another to say how we can have it. If the future is relevant to the truth of our claims, but we do not live in the future but the present, we have two historically important philosophical problems we need to address. The first is Hume's problem, that is, that past experience, as logically distinct from future experience, cannot be the ground of a sound inference to the latter. The second is Goodman's problem of 'grue': regularities are where you find them, and you can find them anywhere. The first problem is acknowledged by empiricists and is the reason why they insist on the relevance of future evidence in the first place. It is well known that accidental regularities, especially stochastic ones, like a run of heads in coin tossing, may fall apart in time. One doesn't always know when sufficient time has elapsed, and the negative induction from past claims about laws has made us all fallibilists. We have learned over time that our best formulated laws are often special, holding only under limited circumstances within particular boundary conditions. The universal gas law, for example, holds only for gases above their critical point. Empiricists have attempted to solve Hume's problem by formulating inductive logics, and by theories of statistical sampling and decision theory, but the problem of justifying such procedures remains. The second problem, however, is a problem even if we have a satisfactory way of handling the first problem. Even if we were confident that some regularities do hold up in the future, we would still have the problem of which regularities will hold up in the future. If an nth-order curve will go through all the data, there always exist curves of order greater than n that also go through those points. The problem here is not that too few regularities will hold up, but that too many will. Both problems are problems because they put in question our ability to make predictions. The first problem puts in question whether such a move can be an inference, i.e., a justified conclusion, based on the evidence. The second takes for granted that there can be justified inferences of this sort and goes on to ask: which ones?
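A few lines of code make the point concrete. This is my own illustration, using hypothetical data; nothing here is from Goodman. Any four data points are fit exactly by a cubic as well as by the obvious line, yet the two fits can disagree about cases not yet observed.

```python
# A minimal sketch of underdetermination: many curves pass through the same
# past data, yet disagree about the future. Pure consistency with past data
# cannot pick out the projectible regularity.
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0                    # data generated by a linear regularity

line = np.polyfit(xs, ys, 1)           # degree-1 fit
cubic = np.polyfit(xs, ys, 3)          # degree-3 fit also passes through all points

x_next = 5.0                           # a condition we have not yet observed
print(np.polyval(line, xs), np.polyval(cubic, xs))     # both match the data
print(np.polyval(line, x_next), np.polyval(cubic, x_next))

# Here the two fits happen to agree at x = 5 because the data are exactly
# linear; perturb one observation and the higher-degree curve still fits the
# past perfectly while diverging in its predictions.
ys_noisy = ys.copy(); ys_noisy[2] += 0.3
cubic_noisy = np.polyfit(xs, ys_noisy, 3)
print(np.polyval(cubic_noisy, xs))      # still reproduces the (noisy) data
print(np.polyval(cubic_noisy, x_next))  # but now disagrees about the future
```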
In my view the solution comes from two tests: 1) consistency with the linguistic practice of the scientific community, that is, with the terms, claims, laws, theories--and their logical implications, paraphrases and assertibility conditions, and 2) the test of experimental practice, which involves the manipulation of the conditions under which the claims are assertible. The first test comes from Goodman. Goodman solves what I have called above the second problem, and what he calls the 'new riddle of induction', by giving the conditions of projectibility. A regularity must contain terms entrenched in our language (that is, terms with a history of successful use), have examples, have no known counterexamples, and fit in with the structure of already accepted laws and theories. The regularities we would consider laws must pass the test of coherence thus broadly conceived. This includes a condition he calls cotenability. Two sentences are cotenable if they are capable of being simultaneously true. Some consistent pairs of sentences fail this test. For example, 'Joe is in London' and 'Joe is in Paris' are logically consistent, even logically compatible, but cannot both be asserted of the same person at the same time for extralogical reasons: no person can be at two such widely separated places at the same time. Goodman's coherence test for lawhood is essentially backward-looking. Past success, as evidenced by entrenchment, is the best predictor of future success. Skyrms defines a second test regularities must pass to be laws. This test looks forward. Skyrms, in Causal Necessity, develops a notion he calls 'resilience', which qualifies the regularities we should call laws by additional tests. Regularities involve relations of covariation between observable variables, observed under specific conditions. Any regularity that holds up under the variation of some set of conditions is said to be resilient with respect to that set of conditions.40

40 Actually, the technical definition involves the change in the conditional probability of the truth of the generalization, conditional on variations of the variables in the set of variables tested. Relative to that set, the regularity is resilient if the conditional probability does not change at all, or changes only very slightly (with the standard of how slightly specified in advance).

How do these views combine in my view? First, the question of coherence with past expectations is important, since on the pragmatist view, we always start in the middle. But the past is not simply observational but theoretical as well. We need to be able to pick out the relevant features to formulate a regularity, and that requires the use of language, a language which ought to already have some success in dealing with experience. But more is required. One needs to put pressure on the regularities in ways that will make them fail, if they are going to fail, sooner rather than later. And for this, one has at one's disposal a repertoire of ways that regularities of this type have failed in the past. This repertoire gets worked into experimental methodology as self-checks that make sure the data and the models are not flawed in certain specifiable ways. They may be flawed in other ways not yet determined, but minimally, a model should at least not fail in some of the ways past candidates have failed. Thus, one does things to the apparatus to make sure it has held its calibration and doesn't have peculiar flaws invalidating the results. One checks the sample for purity. One checks other samples for comparable results. Learning to separate the relevant from irrelevant variables is a process that may take a variety of procedures and experimental set-ups, as anyone reading the series of experiments Boyle actually performed can see. How does my approach help with either Hume's problem or Goodman's problem? How does reference to the logical and experimental practices of the scientific community license inferences from limited data to universal generalizations? How does such reference handle the problem of underdetermination to pick out the function that best predicts the future behavior of interesting systems? My approach dissolves Hume's problem. There are no serious skeptical doubts about whether induction is rational for the pragmatist, and if we do not have such a doubt, we do not need to eliminate it. The problem is not whether inductive procedures are rational. We project regularities and have no reason to be dissatisfied with those projections so long as they continue to work. This just is what we mean by rational inference, based on the success of our best practices. The regularities we succeed in projecting are laws. Now, 'success' here should not be glossed, as the realists might, by 'approximately true' but rather as 'true.' This is the only truth we know, that which enables us to navigate through experience. Successful belief sets expectations that are not frustrated, and we make these judgments from the point of view of the present looking into the uncertain future, not from the present looking over the certain past. So the first test, the test of coherence with past practice, is different from a test of coherence based merely on belief, insofar as practice already involves successful realization of the belief. Practice thus undermines at least one of the possible ways in which belief can go wrong, unrealizability. This point is important to make because oftentimes the terms or claims are formulated after the phenomenon has been observed, and deployed to explain the past. But an explanation that yields no predictions is justifiably suspect. The second test, however, goes beyond the first insofar as we explicitly try to upset the regularity by playing with the conditions under which it is observed. Informally, in everyday experience, we do this when walking on ice or unstable structures.
We send 'test loads' by shifting our weight in various ways, or by placing only some of our weight on a spot, rather than risk our whole bodies prematurely. The common-sense notion behind all this is: any number of times I have been wrong about the soundness of these structures, and better that they fail now, when the risk is small, than later. In this fashion we can localize the trouble, if trouble there be. Similarly, regularities can fail in any number of ways, and specifying the conditions of failure actually renders the regularity itself more precise. Hence, testing regularities and attempting to make them fail is also a way of picking out, from the class of regularities allowable by the data so far, the regularity we find the most projectible. Playing with the conditions may allow us to see the character of the functional dependence (more on this later), and to see whether the best curve is, for example, a linear or a quadratic function of the variable. The most informative function will be the one that restricts the outcomes the most, but it will also be the one most vulnerable to upset when we further determine data points.

An objection might be raised at this point: by calling this 'success', am I not begging the question? Isn't the success of past practices precisely what is at issue, and what is threatened by future evidence we do not have yet? My response is: realizability is a real test, and continual realizability is all that temporal beings like ourselves, without crystal balls, can reasonably ask. To see why this is a good response, to see why the pragmatist's rule 'don't worry about beliefs that are not causing you problems right now' is not just ostrich-epistemology in the face of insoluble problems, let us see the connection between this answer and Reichenbach's pragmatic dissolution of the problem of induction. That response goes as follows: if there are no laws, if, for example, the relative frequency is not going to converge on some definite value, then induction will fail, but so will every other attempt to find a regularity. If there are regularities, if the series of coin tossings converges on a definite result, then if any other method can find the regularity, so can induction. If crystal-ball gazing reliably predicted the weather, and sophisticated computer models did not, the inductivist would notice the regularity between the prediction of the crystal-ball gazing and the event predicted and form a law connecting the two. If the crystal ball connected two sets of conditions reliably, the inductivist could formulate the law sans crystal ball.

How does appeal to past practical success hook up with this response? Past success implies that the language we have adopted and the practices we employ had survival value. That is why we continue to use them. But ultimately, the question of survival can only be settled at the end of time. So here is the dilemma. It may be that we are massively in error now, and will be extinct by the year 3000, if local conditions have merely been benign to our error over the past 100,000 years. All our success may be apparent. Here is the practical version of Reichenbach's response. Perhaps none of our practices succeed, in which case no one will formulate regularities that hold up and we will all fail. But if any practice succeeds, scientific practice will succeed as well, because it builds on past successes.
The worry is unreal, furthermore, because the fact that we are still here, living in a variety of environments over a long stretch of time, has to count as success if anything does. What could the contrasting term to apparent success be if, ex hypothesi, there is no such thing as real success? So long as our laws and theories continue to assign high likelihood to whatever happens, we have all the success we could reasonably demand of them. We can use them to navigate through experience because we can base actions on their predictions. They inform us, in non-trivial ways, about what will happen next.

The practice of scientists, therefore, in requiring accepted terminology and procedures for scientific work, goes beyond empirical adequacy in at least two ways. First, it is not enough for a term or a claim that it can retrospectively account for all that has happened. It must be put into practical use and prove its usefulness by continuing to make good sense of experience over a period of time. If a distinction is idle, or proves unusable because its conditions of application are unclear, it will fail to embed itself in any long-lived practice. Terms already in the language have already proven themselves to some extent. The conservative impulse in science, remarked by Kuhn in his notion of 'normal science', has this pragmatic justification: although there is no sense in talking, as a fallibilist, of absolute success, since we may be wrong in any of our claims, it still makes sense to talk of relative success. We can demand that scientific claims be at least as successful as common-sense claims, because common-sense claims have proved their survival value. Current science can demand that its successor be at least as successful as it currently is, even if definite trade-offs are involved in the switch. Similarly, terms can be replaced, but only by terms that, with their associated distinctions and claims, do a better job. The group judges whether to adopt the new term or not based on what the new term will enable the group to do. It may not be immediately apparent, but the demand for practical usableness, as a common-sense measure of theoretical adequacy, amounts to a demand for minimal informativeness. One needs to know the empirical conditions under which to correctly deploy terms and claims. Second, an interest in self-consciously improving empirical beliefs led scientists to develop the informal common-sense procedures for checking beliefs into formal scientific methodologies of great power. Those scientific methodologies, in turn, transform the vague hypotheses found in common-sense theories into precise, testable scientific theories. This quest for improving belief leads to the search for more informative hypotheses and precise predictions, the very same object sought in the quest for improved practice. After all, we are better able to act when our plans are based on reliable forecasts of the behavior of the systems with which we interact. Mariners are better able to navigate when they have more informative maps of the ocean.
The claim is true; I may nevertheless revise that claim later. Second, mere consistency with observation is not enough. Laws, as guides to practice, require the most informative claims about the world we can sustain. Empirical adequacy is not enough because even when it is forward-looking and predictive, and not simply assuming a view from nowhere (or nowhen), it fails to give any reason why more informative laws are preferable to less informative ones. If simple consistency were enough, then tautologies would be fine, because they are consistent with anything that happens, no matter what happens. One cannot pull any definite predictions out of vague or vacuously true claims, so they are of no practical use. The very reason we frame laws is to enable us to navigate successfully through experience. So, in addition to empirical adequacy, one requires some judgment by the scientific community that the regularity is realizable and informative.

3.3.2. Consensus Insufficient

To bring in the judgments of the scientific community may seem to give conservatives veto power over new formulations and approaches. Even if standards evolve (as they surely do), the standards are set by the most established group, which retains the power to silence dissent. Some social theorists of science (Latour, for example) seem to believe this is true. They view science as the construction of networks of alliances in which the strongest rhetoric wins. The point I want to make here is that constructions may fail even if scientists, with all the good will such consensus requires, agree on the language and tests employed. The analogy with engineering is perfectly appropriate here: even bridges constructed with the approval and careful work of engineers sometimes fail. We find conditions or combinations of conditions we had not considered frustrating the generalization we employed. It may turn out that some simplifying assumption or idealization we used has a more limited domain than we thought. We cannot conjure up phenomena simply because such phenomena would fit our theory. We may never produce either the phenomena that we would call 'superstrings' or those that we might call Higgs particles, just as we were unable to produce phenomena we would call ether drift, cold fusion or polywater. Such terms may turn out not to be useful because we cannot construct a consistent context of use when there simply are no phenomena that consistently behave as we would expect these to behave, no matter how much we play with the conditions of observation. While there can be theory-motivated reasons for saying our theory must be wrong, and while the point at which we give up may be socially negotiable, it is simply false to say all is negotiation.

3.3.3. Reliable Prediction Required

Thus, I believe the test of importance, to which we subject scientific formulations of theory and law, is the forward-looking test of reliable prediction. This may disappoint those who expected something more than Popper's criterion of falsifiability, but it is important to see why this is a reasonable test having direct bearing on the usefulness of laws in grounding practice. We have already pointed out that empirical adequacy without informativeness fails here, because if any formulation is empirically adequate, so are all weaker versions. By 'going vague', we can save laws from empirical falsification, but then they no longer support definite predictions.
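A schematic illustration, using Boyle's law as a toy case of my own (the weakenings are illustrative only): compare the precise claim

\[
PV = k
\]

with the weakened claim

\[
k - \delta \;\le\; PV \;\le\; k + \delta
\]

and with the vacuous claim that P and V are somehow related. The first fixes a unique volume for each pressure; the second allows only an interval of volumes, an interval that widens as the tolerance grows; the third excludes nothing at all. Each weakening preserves empirical adequacy, since whatever is consistent with the first is consistent with the others, but each strips away predictive content, until in the limit of vagueness we reach a claim that is adequate come what may and useless as a guide to action.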
The point is that the functional relationship expressed in a law can be characterized both positively and negatively. Positively, to assert it is simply to say that there are instances of the functional relationship that we have been able to produce, and that the relationship holds over a broad range of environmental variation. In other words, it is simply to say that the relationship is a reliable basis for action. Negatively, we are saying that there is no condition or set of conditions we have been able to produce showing that the functional relationship does not hold. It is precisely our inability to produce conditions under which the variables vary independently of each other that shows us we are dealing with a law rather than an accidental correlation. We believe that for accidental correlations there is some producible condition or set of conditions under which the functional relationship would disappear.41 The collective experience of the scientific community furnishes a repertoire of techniques for unmasking accidental correlations, i.e., a set of experimentally controllable conditions and a set of techniques or manipulations for controlling them. We want to say that if a generalization holds with respect to the potential defeaters producible by the relevant set of techniques, that generalization has a prima facie, defeasible claim to lawhood.

41 One doesn't want to say "under which the functional relationship would change", since the latter includes situations where one finds out that the law requires a more complex formulation. The law governing aerodynamic drag is a case in point: at low velocity, aerodynamic drag is approximately a linear function of velocity; at higher velocity, second- and third-order terms turn out to have non-negligible contributions. One wants to say here not that the correlation picked out in Reynolds' law does not exist or is accidental; rather, one wants to say that the simplifying assumptions do not hold when the air flow becomes turbulent. The boundary conditions for the simple law signal the need for the more complicated formulation when the boundaries are transgressed.

So, for example, we know how to produce a mass of gold and a mass of U-235. We could bring together masses of a magnitude showing that there is a law that masses of U-235 above a certain critical limit disintegrate, whereas masses of gold can be increased without limit (up to the available gold, of course). Thus, we can manipulatively produce a counterexample to one supposed law, breaking the symmetry that existed when considering the two only on syntactic or semantic grounds. Thus, van Fraassen's objection from syntax and semantics, that laws are indistinguishable from accidental regularities, can be handled in a straightforward way on my account. The pragmatics of laws differs from that of accidental regularities.

3.3.4. Reliable Prediction from Experimental Replicability

That is the negative side. Failure to defeat a law claim gives a set of conditions irrelevant to the holding of the regularity. We can say, at least, that the generality is invariant with respect to variations within that set. On the positive side, one wants to be able to reliably bring about the holding of the regularity by producing the set of conditions relevant to its holding. To do this, one needs a recipe for producing the phenomena in the laboratory. This is what Gooding, in Experiment and the Making of Meaning, calls "making skills disappear".
The ingenuity, skill and learning of a particular researcher need to be incorporated into a device or instrument, with instructions for its proper use that can be followed by anyone to yield consistent results. Gooding's example is Michael Faraday and his production of an electric dynamo. The point is that skill is relative to individuals, but the phenomena need to be publicly available or publicly producible before they can be candidates for natural processes. Since I believe skill is part of a group practice, I am not as concerned about the problem of individualism as Gooding, but I do wonder how scientific products can be exported to the non-scientific public. The first step, I think, is the one Gooding has mentioned: one needs to embody the knowledge in devices and routines for their regular production. The next step is crucial. These recipes need to be able to be followed outside the laboratory. The device must be dispensable, or usable outside the laboratory. The usefulness of the result will depend on whether the same (or similar) conditions produced in the laboratory by device cum procedure can be produced outside the laboratory. This can, of course, be true vacuously if the conditions cannot be produced in the laboratory. So we need to restrict ourselves to phenomena that can be produced, and leave to one side phenomena that we think might be producible but are not within the limits of our current theories or technology. Minimally, we want phenomena whose production is feasible under current theory and technological development. We want results exportable from the lab: results produced artificially, but under conditions in which we think the phenomena might arise naturally, given our theoretical beliefs about the set of laws we believe hold42 and about how environmental conditions normally vary.

42 The apparent circularity in the appeal to the set of laws in the settling of some law is simply a restatement of the holism implicit in the pragmatic view: any law needs to fit in with the set of laws we already accept, and modifications need to be made to make the whole set consistent when additions, subtractions and revisions take place.

Thus, the devices used by Boyle to determine the functional covariation of the pressure and volume of a gas at constant temperature are unnecessary for the regularity to hold outside the laboratory, because we have independent ways to measure the temperature, and we have both empirical and theoretical reasons for thinking that the correlation holds up under normal temperature variation in the atmosphere. The regularity's holding independently of a specific set of manipulations, however, does not mean it is independent of all manipulations. Some manipulation or other, perhaps suggested by the common experience of mountain climbers, or by mariners observing temperature shifts and wind patterns, accounts for the intuitions that led to the scientific formulations of empirical and theoretical beliefs about the normal variation in the atmosphere. In some cases, the practical application of a regularity precedes its theoretical articulation. For example, water pumps preceded the experiments of Torricelli, Perrier and Boyle. Once we had the theoretical explanations, we could understand the limitations of the water pump. We now understand why a different device, working on different principles, is required to pump water from a depth below 32 feet. Methods of starting fires preceded our understanding of the chemical nature of combustion.
But once we have Lavoisier's experiments, we come to have other methods for making fires and for putting fires out. In other cases, practical application has followed theoretical articulation. For example, the lightning rod, a practical way to prevent fires caused by lightning, awaited an understanding of the electrical nature of lightning, and Franklin used the latter to construct the former. In both cases, the basic point is the same: we want lab results to have relevance to events outside the laboratory. We want, in other words, to be able to claim that if something consistently works in the laboratory ('in vitro'), one has good grounds for believing it will hold outside the laboratory ('in vivo') as well.

Bare replication may seem like a poor candidate for a reason why experimental results apply to extra-experimental situations, and usually we do want more than simple repetition: we want extension of conditions, variation of apparatus and test material, more precise measurements, etc. But even simple repetition may constitute a severe test for an anomalous result. It was enough to settle the questions of polywater and cold fusion, to mention two recent controversial cases. The demand for replicability captures our intuition that if we all do things the same way, we should get the same results. Replicability weeds out the idiosyncratic and replaces it with routine procedure. Ideally, we would like the procedures to be executable by a machine. Data reproducibility is the scientist's criterion of intersubjective validity, the form objectivity takes in science. The test of replicability is not always decisive. There are times when we trust the irreproducible results of a reputable scientist over the inability of his or her less reputable colleagues to duplicate the results. But for the most part the rule is: the more people who can get the same results, the likelier that the results are trustworthy.

3.3.5. Reliability and Decision-Making

Reliability, in the sense of 'reliable basis for decisions at the level of individual action and broader technological policy', is a natural extension of the requirement of replicability: both are rooted in the intersubjectively assessable 'world' we construct, and the same 'recipes' work in both the internal scientific context and the external social context. Such objective results are the useful bases of actions and policies on both the individual and the social level. Once Faraday constructed his dynamo, we could use it to generate electricity for personal and business use in motor cars and offices. The science of electronics puts us in a position to produce, modify, and block electrical and magnetic phenomena for various uses. The science of chemistry puts us in a position to construct materials with various useful properties. The laws in these disciplines describe the behavior of physical and chemical systems under modified conditions in their environment. Thus, we can melt metals and alloy them with various substances to improve strength, conductivity, malleability, ductility and other useful properties. We can produce a pressure gradient that forces fluid through a tube, or a potential difference that produces current in an electric circuit. We can follow the course of some bacterium or some macromolecule in a living system by tagging it with a radioactive component in a radioimmunoassay.
Using the laws of chemistry, we can exchange radioactive atoms for stable atoms in a cell's structure, and using the laws of physics we can make a detector sensitive enough to follow the cell with the radioactive tags.

3.3.6. Laws and Action-Plans

The upshot of my view is this: we construct laws under conditions observed while we interact with systems of interest. Those conditions allow us to reproduce results when we have come up with a useful construction. Reproducible results are useful because we can reproduce them whenever we want. We can exploit such results to devise systems and products that enhance our quality of life. Laws are constructs on which we can build plans of action and technical products that have some assurance of success.

3.3.7. Three Potential Problems

Are there any special problems with this view? Let us see how this view handles the following three problems:

I. How do I justify the use of continuous functions?
II. How do I justify the selection of one curve from an infinite class of data-equivalent ones?
III. How do I justify the use of idealizations?

I. How do I justify the use of continuous functions? Any data set is finite. A law, however, is often expressed in mathematical form by differential or algebraic equations that have an infinite set of potential instances. In fact, some equations have an uncountably infinite set, since they have real-valued functions for solutions. How can we justify using continuous functions on experiential grounds? We can and do by arranging that the conditions we sample 'cover' the curve we think best captures the law. We sample at different points within a given run, between runs, and between replications. As we test for replicability, we enlarge the sample space to check in between and beyond points checked before. We can systematically vary the sampling to approximate the rationals: by deliberately checking in between points checked in the past, we can produce an approximation, in a series, to a dense ordering. Now, we are not restricted to measurements with the same precision as previous ones. Clearly, scientific measurement has become progressively more accurate over the past three centuries, and if we were to hypothesize that we could continue to refine measurements over infinite time, we could come up, for any given data point, with a series of measurements that would give a decimal expansion of the reals. This would be the operational equivalent of a Cauchy sequence. All one needs to do, if one is an empiricist, is to say that there is no empirical bar to thinking one could continue to improve measurements in this way until there was no practical difference between what one has empirically and what one has in the real-valued function. Nested sequences of measurements, with progressively smaller widths of error relative to the same level of confidence, can make such an induction rational, if not risk-free. In fact, if one assumes the series is completable, continuous functions are the only ones that will do, and the conclusion is forced (Hanna, "Empirical Adequacy").

II. How do I justify the selection of one curve from an infinite class of data-equivalent ones? Even if we are justified in using continuous functions, we still have a problem. There are simply too many real-valued functions that will do the job, infinitely many in fact.
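One way to make this vivid (a standard construction, not tied to any particular author): suppose a curve f passes through all n data points (x1, y1), ..., (xn, yn). Then for every real number ε, the curve

\[
f_\varepsilon(x) \;=\; f(x) \;+\; \varepsilon \prod_{i=1}^{n} (x - x_i)
\]

also passes exactly through every data point, since the added product vanishes at each x_i. Distinct values of ε yield distinct continuous curves, so the class of data-equivalent curves is infinite.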
This is not the case, of course, if we can complete the series mentioned in the previous section: in that case we are left with exactly one function that will do.43 In principle, the justification is complete for such convergent series. But short of that, for any finite sequence of measurements, even a convergent one, we are left with a nontrivial width of values at any confidence level.44 How do we justify the choice of any one curve in particular? The answer a pragmatist will want to make is that so long as the curve chosen continues to give good predictions, that is, so long as all the data turn out to be, within experimental error, on the curve, we have done as well as we could want. If it makes no difference to the expectations set which curve is chosen, then the choice is one of indifference. We need to keep in mind what I said earlier about the need for the most informative curve justifiable by the data, since the precision of prediction, and, equivalently, its falsifiability, hinges on that. Nevertheless, we are still left with a very large class of highly informative curves, and so long as the one chosen continues to assign high likelihood to whatever occurs, we have no reason to complain that perhaps we have chosen the wrong curve after all. Such a worry does not arise. Of course, it ceases to be a matter of indifference once we improve the accuracy of measurement, and, as I said earlier, the desire in science for maximally informative claims about the world drives the push for more accurate measurements. What happens here is that the interval within which we look as the most probable place for the next measured data point to be found, relative to a given confidence level, progressively shrinks as accuracy improves. So long as the maximally informative curve continues to lie within the interval allowed by our most accurate measurement, the curve is resilient in Skyrms's sense, projectible in Goodman's sense, and justified in my view.

43 Actually, even here we have an equivalence class of curves that differ from each other on at most a countable set of points.

44 The confidence level is, as mentioned before, the chance that the next measurement will fall inside the interval of error for a given method of measurement with a given device; its complement, the significance level, is the chance that it will fall outside. If one insists on certainty, the width of the interval is infinite. Dropping to 99% confidence gives one a manageable width, but more typical are 95% or 90% confidence levels.

III. How do I justify the use of idealizations? How do we justify the use of laws that invoke entities that cannot exist, for example, point masses, frictionless pulleys, infinite potential wells, and perfectly rigid bodies? The answer again is: because we can construct a series of physical systems in which we progressively control the bothersome variables and minimize them. I can reduce friction, and watch how the system behaves under conditions of reduced friction. I can observe trends in the behavior that I can extrapolate to the fictional case where friction is eliminated altogether. The last term is a construct, never observed, but it may turn out to be a useful approximation to the behavior of unideal systems. Indeed, it often turns out that real systems, under a range of conditions, behave very much like the ideal systems we talk about in our laws. Real gases, in many cases, under normal pressures, temperatures and constructible volumes, behave very much like the ideal gases in the ideal gas law. Ball bearings rolling down smoothly planed inclined planes behave like frictionless systems, and bumper cars floating on carbon dioxide gas, such as those used in classroom demonstrations by Feynman, behave even more so (Feynman 10-5).
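To make the gas example concrete (a standard textbook comparison, offered here only as an illustration): the ideal gas law,

\[
PV = nRT,
\]

treats molecules as point masses with no mutual attraction. A real gas is better described by, for example, van der Waals's corrected equation,

\[
\Bigl(P + a\,\frac{n^2}{V^2}\Bigr)(V - nb) = nRT,
\]

where a measures intermolecular attraction and b the finite volume of the molecules. At moderate pressures and temperatures the correction terms are small, which is just to say that the real system behaves very much like the ideal one.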
So long as I can control the variables, I can make the real systems behave as closely like the ideal systems as I wish, and that is sufficient justification, I think, for constructing such idealized, simplified systems in our notation.

3.4 Answers to Important Parts of van Fraassen's List

It is time to see whether we have solved the original problem or not. Let us look again at van Fraassen's list and see how my view fares. The list van Fraassen gives (again, with my explanations of the terms):

1) Universality: The law must hold under all relevantly similar circumstances. All F's must be G's if it is a law that F's are G's.

2) Relations to Necessity: Laws involve necessity in the following ways:

2a) Inference: One must be able to infer the current observable state of affairs from the law and current conditions if this is a case governed by this law.

2b) Intensionality: One cannot simply obtain the truth of the observable state of affairs by detaching it from the law-operator, because the state of affairs is located in an intensional context. The logic involved in the inference in 2a) is not truth-functional.

2c) Necessity Bestowed: Any necessary connection between natural events derives its necessity from a law.

2d) Necessity Inherited: The view (which van Fraassen calls a minority view) that for laws to confer necessity on their instances, they must themselves be necessary truths.

3) Explanation: Laws must explain why their instances occurred.

4) Prediction and Confirmation: Laws must be connected to observable states of affairs so that our belief in the law is increased by viewing positive instances, and so that our belief that the state of affairs will occur is increased by our belief in the law.

5) Counterfactuals and Objectivity: Our belief in the connection between events related by law must be such that we would expect it to hold in unobserved instances and even impossible-to-observe instances. This is part of what we mean by the 'mind-independent' status of laws: their objectivity.

5a) This includes context-independence: laws are not strongly theory-dependent or language-dependent.

5b) Laws are objective: they are discovered, not constructed.

6) Relation to Science: Laws are a central (perhaps the central) feature of science. Without laws, science could not perform the intellectual role it does play for us.

1) Universality: Universality is relativized to a set of experimentally produced conditions and conditional variations. As Skyrms would put the matter, the regularity is resilient with respect to a given set of conditions. But this universality is nevertheless quite useful. We have well-motivated reasons for specifying the domain of objects, systems, events and phenomena for which the law holds, based on our inability to eliminate the functional covariance by manipulating the objects in various ways. This includes reasons for thinking that location in space and time will always be irrelevant for any given law.

2) Relations to Necessity: With Goodman, I want to say that if we can break into the circle of concepts that includes law, necessity, disposition, possibility and counterfactuals anywhere, we can define the others in terms of the one clearly understood concept.
Projectible regularity explains and determines the truth of counterfactuals, and those in turn give the following analysis of necessity: invariable succession, not only factual but counterfactual. The necessary is contrasted with the accidental, on my view, and we pry apart accidental correlations by creating a condition or combination of conditions where the functional relationship disappears. When we cannot do that, we assent to the counterfactual and maintain the necessity of the occurrence. But then all we mean by necessity is that an event is to be expected based on the set of laws in conjunction with the set of conditions prevailing at the time.

2a) Inference: The heart of the matter is that we believe we can infer further (future) facts from facts we already have. In particular, we believe that the states and behavior of systems do not vary randomly, but in a manner that we can determine by test and measurement. We can construct models from which we can pull, by appropriate mathematical procedures, information about the world verifiable by measurement. If I am right, we necessarily solve both of van Fraassen's problems at the same time. We identify laws with the construct because the conditions of construction include identifying empirical states of affairs through observation, measurement, and manipulation of the system. The inference problem is solved when we have satisfied ourselves, through appropriate tests, that the construct is stable, and therefore usable as the basis for action.

2b) Intensionality: Laws remain intensional, since co-extensive scientific terms may differ in the procedures by which we identify instances, and claims may differ in the tests they have sustained. To use Dretske's example, it might be a law that all diamonds have a refractive index of 2.419 and not be a law that all minerals mined in kimberlite have a refractive index of 2.419, even though all diamonds are mined in kimberlite and vice versa (Dretske, "Laws of Nature", 250). This would be because we are more inclined to identify diamonds by their structure, and refractive indices test structure, whereas we think kimberlite should be identified by its location in a geological stratum, and we think kimberlite's history relevant to the appropriateness of the designation. Indeed, while it makes sense to think we can create the conditions under which artificial diamonds might be made, by, say, compressing coal, we would think we could create kimberlite artificially only if we could reproduce some of the historical geological conditions that brought kimberlite into existence in the first place.

2c) Necessity Bestowed: Goodman maintained that once we have broken into the semantic circle of law, possibility, necessity, disposition and counterfactual, we can explain the remaining terms in terms of the clarified term. We broke into the circle by means of the two notions of projectibility and resilience. Projectibility might here be glossed in a way that gives us a handle on necessity: we confidently believe this regularity cannot be broken, and that the consequent always will follow on the antecedent. Resilience adds to this: our confidence is born of failed efforts to produce a counterexample.

2d) Necessity Inherited: If this criterion maintains that we have necessary laws, it is simply mistaken. The problem is that such laws would hold in all possible worlds, even worlds where the antecedent conditions do not exist and could not be brought about by any set of manipulations.
Assuming we can make sense of this notion of possible worlds, the idea of a law that holds even though there are never any instances of it does seem exceedingly odd, but some realists seem willing to contemplate such laws. The best sense I can make of the notion is that, conceivably, the world is such that there is only one set of possible laws, and that, try as we might to manipulate the conditions that obtained at the big bang, our current set of laws would have been the ones that resulted. If this is the claim, it seems to me unlikely to be true. Had conditions differed but slightly, life would not have evolved, and none of the laws governing biological things would have existed. Had conditions been but slightly different, large bodies such as stars and planets would not have existed, and the laws of planetary motion would not have existed either. If, in other words, some laws emerge only when entities of the requisite complexity exist, no such law would exist were the entity non-existent. Since laws are the laws of behavior of some system or other, it is very hard to make sense of a notion of behavior without a behaver. So, if my view fails to meet this criterion, that is no defect in my account, since the notion of a necessary law (outside logic or mathematics) seems incoherent. A notion of law relative to a language and a set of manipulations delivers necessity where it counts: firm expectations of what will happen in the world. Such a notion of law does not confer necessity on the law itself in a way that would make the law hold independent of a choice of a language or a set of manipulated conditions. A judgment of invariance under systematic manipulation comes a posteriori, not a priori.

3) Explanation: One puzzle for realists is how an extensionalist view can explain anything. If a law just is a regularity, then it does not explain the regularity but comprises it. The point here is: insofar as we may legitimately infer new instances from a law, we at some level explain their occurrence. For a pragmatist, we are simply embedding an event in a coherent explanatory context, and this is as legitimately an explanation as anything the realists might put forth. There is a deeper point, however, one that involves rehabilitating a principle now held in disrepute, the symmetry thesis, which claims that all explanations are also predictions and all rational predictions involve explanations. The point I want to raise here is that it is too easy to rationalize after the fact on the basis of some theory constructed just for that purpose. That, I maintain, is something of the origin of the realist view of universals: they are shadows of real things in the world taken for explanations of why things are the way they are. The real test of an explanation, something that bears on its falsifiability, is its ability to assign a high likelihood to every event that happens, including those to which, but for holding the explanation, we would have assigned a low likelihood. The point of the symmetry thesis is that an explanation, to be any good, must be informative. If it is to yield any genuine understanding, it must tell us where to direct our attention and where to direct our efforts, if we are interested in a particular outcome. My view makes sense of the demand for simplicity and the ban on the ad hoc in explanations.
The simplest explanation has the highest information content and is the most falsifiable: if it works, and works consistently, it rules out a lot of ways the world could be, and also rules out a lot of possible actions as ineffective. Complicated ad hoc explanations tend to be made after the fact, to repair some failed explanatory attempt. Simple explanations are preferred because they give us a small class of factors to manipulate in testing the law. The goodness of an explanation lies in its ability to predict, consistently, whatever is going to happen. An after-the-fact explanation may be no good for prediction precisely because it is vague: it gives us nothing to do, and it fits such a large class of outcomes that its information content is low. On my account, science is driven by the need for maximally informative accounts, since those are of the most practical value. Increasingly accurate and precise measurements give increasingly narrow windows for success: the law that passes such tests is more useful than one not so tested.

4) Prediction and Confirmation: Van Fraassen, together with empiricists generally, tends to treat prediction and confirmation on a par: theory is a black box with empirical inputs and outputs. Prediction is the empirical output, which gets compared with direct empirical observations of the system: if theory and observation agree, the theory is confirmed by observation. This is overly simple, and not all agree with it. Popper accepted the importance of prediction but rejected any notion of confirmation. Some, like Carnap and Hempel, tried to have both. Here, in the context of a discussion of laws, the important point is this: we have observed regularities, which, as I have already said, are themselves constructions, and then one has calculation-enabling models, some concrete, some formal and abstract, most mathematical, and we may compare an observed course of events with a predicted one to see how well they agree. I take it as a virtue of my view that it tells how the use of mathematical models is consistent with an emphasis on empirical checking of the behavior of modeled systems. There is a continuum of manipulations, from the experimental, through the quasi-experimental (i.e., linear regression methods), to the non-experimental, formal logical and mathematical manipulations of symbols, that involves no radical breaks. That one naturally selected system can model another need not cause any surprise, and naturalized epistemology need not entangle itself in the problems of defending some sharp distinction between observation and theory. On this view, prediction and confirmation are cognitive extensions of our continuous interaction with our environment, which at this level involves transfer of information.
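One minimal formal gloss on how viewing positive instances can raise belief in a law (a Bayesian sketch of my own; nothing in the argument turns on the particular formalism): by Bayes's theorem,

\[
P(L \mid e) \;=\; \frac{P(e \mid L)\,P(L)}{P(e)},
\]

so conditioning on the evidence e raises the probability of the law L exactly when P(e | L) > P(e), that is, when the law assigns the evidence a higher likelihood than we would otherwise expect. This is one way of cashing out the recurring demand that good laws assign high likelihood to whatever happens.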
5) Counterfactuals and Objectivity: Counterfactuals are claims about what might or would have been the case if some aspects of the situation had been other than they were. They have proved notoriously difficult to analyze. How can we extend the scope of laws to govern not only the observed, but the unobserved and even the impossible to observe? One way is to notice the expectations laws set up. For example, this book on my desk is not now falling. Have I refuted Newton's law of universal gravitation? No. At present, the force of gravity is balanced by the upward force of the desk (which, in turn, is based on the electrostatic forces of attraction and repulsion between molecules of the desk, the book, the floor, the earth, etc.). But if the forces were not balanced, if the book were unsupported, then the book would be falling toward the center of the earth, at an average velocity equal to the acceleration of gravity times the time fallen, divided by two. An astounding claim! Why do I believe it? On my view, we assent to counterfactuals of this sort because we can bring about the conditions under which their truth is manifest: we can move the book so that it is no longer supported, and observe that it does indeed fall. So counterfactual truth rides parasitic on factual truth, and the bridge is made by our constructive activity in altering conditions. This seems a bit too simple. Could Einstein really travel at the speed of light to observe the beam of light? No, he could not. But thought-experiments are internalized and abstract versions of our concrete manipulations of real systems. And the test of the validity of a thought-experiment is often whether there is some possible experimental realization of the results arrived at by thought. Insofar as we cannot do this, the intuitions embodied in the thought-experiment are often shaky, and the truth of the counterfactual claims is often indeterminate.

5a) Context-independence: I am inclined to think that this requirement ought to be given up. No construct is context-independent in any absolute sense. All wind up embedded in various contexts, whether linguistic, institutional or experiential. Nevertheless, there is some remaining sense of context-independence that is important and worth commenting upon. This is the sense in which a construct can be successfully embedded within a context other than the context in which it was constructed. We want inoculations to work not only for laboratory rats but for the people living outside the laboratory who are prone to the disease. Science would be a pretty poor affair if it were created only for the consumption of scientists. And some work in the social studies of science seems to suggest that science is just that. With my notion of reliability, we have a notion of scientific validity that allows scientific results to lodge themselves in practical contexts outside of science. Engineers can count on the work of physicists when building suspension bridges. Inspectors can count on the standards of workmanship created by engineers. Insurance agents can count on the work of qualified inspectors. Politicians and citizens can count on the work of physicists, engineers, inspectors, insurance agents and underwriters when deciding to build and use bridges. There is a web of practices, united by overlap of expertise, that mediates the formulation, testing and application of scientific laws, and these multiple frames assure a limited sort of context-independence that can be quite far-reaching in its effects: it can allow a certain degree of transcendence of culture, so that western science can be appropriated by non-western cultures. Given my claims about how laws are language-dependent and embedded in a set of practices, how is this possible? I think the short answer is that strings of symbols can be subject to different interpretive contexts imposed by the different sets of practices employed in different environments. So "F=ma", as a schema, gets construed differently in the practice of pure physics than in applied physics, even though the same physical models may have led to the formulation and testing of the law as to the application of it.
And this is simply because engineers follow different practices than physicists, although there is some overlap. The engineer has to be concerned with whether the devices will actually work in the environments for which they are designed, and with how to modify them if they do not. The physicist takes for granted that some devices work, and tries to elucidate the principles of their operation. The language each uses differs in ways appropriate to the aims, methods and standards of their respective sets of practices, and some translation is required for them to communicate with each other. But communicate they do, and strings of symbols like "F=ma" are accorded different uses in the process of translation. The brief answer, then, is that bits of language can embed themselves in new linguistic contexts and do useful work there. Insofar as this is possible for various scientific laws, we have a form of context-independence for laws.

5b) Objectivity: This, I believe, is the crucial condition. Making laws items of experience, even uniform experience, may sound like making them creatures of subjectivity. Allowing that they are constructs may reinforce that suspicion, since the act of construction seems a matter of arbitrary will. It is up to us, is it not, to establish the conventions of language? This fear, I believe, is unfounded. We are everywhere confronted with constraints on choices, constraints that rarely eliminate all elements of arbitrariness, but certainly prevent 'anything goes' from being a serious description of scientific activity. I have interpreted 'objectivity' as intersubjective validity, and that calls for some comment. The traditional realist notion of objectivity is of 'something that exists apart from minds, apart from language, and would exist even if the latter did not.' To call laws constructs seems to confuse the concept with that to which the concept applies, and to confuse the epistemic with the semantic or the metaphysical. I argued in chapter two that this notion of objectivity has serious problems: it is, if not self-contradictory, incoherent. To bring the notion down to intersubjectivity raises the question of what coherence and consensus have to do with truth. Here is my answer: while experience does not select the one and only correct representation of the world (because there is no such thing), it does resist being construed in certain ways. One can adopt a way of talking about the world and try to live consistently within it, but one may not be able to get away with it. To paraphrase Rorty: truth is what experience will let you get away with (a test that may prove much stricter than what your colleagues let you get away with!). To use James: any belief must run the gauntlet of experience. Objectivity is found at the level of the group: if the group tries to construct the world in a certain way and fails, the group may cease to exist. Since group survival is at stake, the group has an interest in ensuring uniformity of practice. This may involve getting everyone to use the word 'quark' the way the experts do, or it may involve getting everyone to use the word 'rights' or 'democracy' the way they do. The pay-off, in both cases, will be that doing so makes experience consistent, in the sense that things behave the way we expect them to. If people do not use the word the same way, or engage in practices inconsistent with the word or the system to which it belongs, then we can say the construction failed.
'Democracy', taken one way, may fail to take hold, or a society trying to maintain it may perish. The expectations of regular behavior based on the language game employing 'democracy' in that sense (call it 'democracy1') may be frustrated, and we simply say: 'democracy1' does not work. Similarly, the language game involving 'phlogiston', the putative agent of combustion in the eighteenth-century majority view of chemistry, failed to maintain itself. This is objectivity robust enough to explain theory change in terms not entirely social or psychological, but not realist in the sense of theory- and language-independent either. While laws still must find some niche within experience, in experiential terms, the environmental forces that bring selective pressure to bear on laws are sufficient to unmask wishful thinking and to prevent gerrymandered regularities from maintaining themselves.

6) Relation to Science: That there are laws in this sense in science, I take to be obvious. No scientific textbook could be written that did not include mathematical formulae, models of interesting systems, and general descriptions, in qualitative terms, of recurring and therefore predictable behavior. Certainly in fundamental physics, to pick van Fraassen's area for examples, laws are the presupposition of both quantum mechanics and relativistic mechanics. Maxwell's laws were the source of Einstein's insight in relativity, and the Wien-Stefan law of radiation was the source of Planck's. And the quantum model of the atom articulated by Bohr would have been inconceivable without the laws of spectroscopy embodied in the Balmer series, the Lyman series, and others. And, not to put too fine a point on it in this post-Kuhnian age, the notion of a mechanics ingredient in both quantum and relativity theory assumes the work of Kepler, Galileo and Newton embodied in the laws of motion and gravitation. Van Fraassen's point, that symmetry arguments are sufficient for the formulation of conservation laws, is correct but beside the point. Not all laws are conservation laws. Many lower-level empirical laws are crucial in setting our expectations about the behavior of specific systems we encounter. Even the conservation laws require experience in the construction of the language in which (possibly some) laws seem to fall out a priori as symmetries of the problem-situation.

3.5 The Remainder of List Mistaken

A possible response to my claim to have solved the problem van Fraassen lays out is that I have cheated: some of the central terms in the conditions have been given non-standard meanings, and the problem I have 'solved' is not a problem anyone has. To make this objection more precise, let us look at which conditions have been met straightforwardly, and which required some 'tweaking.' The empirical virtues of confirmation, prediction, and possibly universality and relation to science, may perhaps be conceded by the realist opponents. But the logical and metaphysical virtues, involving necessity, intensionality, inference, explanation, strong objectivity (that is, complete independence of any conditioning by subjectivity) and stronger notions of universality (Kant's 'strict universality'), are left begging in my account, unless one accepts the impoverished replacements I give as 'analyses' of the original notions. Although I shall address objections in the next chapter, this one is too important to leave unanswered here, for it strikes at the core of my claim to have solved the problem in any but a Pickwickian sense.
First, let me grant the substance of the objection. My analyses of these central terms do not contain all the content traditionally ascribed to the notions. 'Necessity', for example, conceived as some compulsion or force making events turn out as they do, is absent from my notion. My notion reaches as far as, and no farther than, that of a regularity that supports counterfactual claims. There is no mysterious 'physical necessity', lying between logical necessity and cosmic accident, that requires an account. We are simply accounting for our expectation that certain regularities will hold up in the future and others will not. The case is similar with possibility. We can characterize possible outcomes as those which are not determined either way by the set of laws we accept, but may, under some conditions, turn out one way, and may, under other conditions, turn out the other way. So understanding what a law is helps us understand what possibility is. Now, this answer may not satisfy advocates of the traditional notions of necessity or possibility, but what of it? Any analysis of central disputed terms will require some account of why certain parts of traditional usage are regarded as more important than others, and some argument why successful analysis requires no more than that part. The strongest possible argument for the latter is that the original notion is incoherent, and the defense of that claim has already been given in chapter two. Necessity, objectivity and universality, as the realists construe them, involve contradiction at the level of presuppositions. So, if we are inclined to insist that terms be used as the tradition or traditions have used them, then I insist the terms cannot be so used. And if the problem involves such a commitment, then I admit that neither I nor anyone else has solved the problem, because the problem so conceived is insoluble, indeed cannot be coherently stated, much less solved. But if that is so, then this is a problem no one actually has. The real problem, if I may speak thus, turns out to be one of having a language that one cannot use, because the conditions of correct use do not exist. The realists have tried to construct the world in a certain way, and have failed. So the solution to their problem would be to abandon that way of speaking and look for a form of speech which, at least prima facie, is free of that problem. So, if the translation of central terms into the idiom of pragmatism is disallowed for some items on the list, I claim that those items are mistaken.

Nevertheless, I do not want to leave matters here. First, because I think there is a real problem of understanding what laws are and how they are able to fulfill their role as predictors of the future behavior of systems we encounter in experience, a problem that does not disappear with the disappearance of realism. Second, I believe that these terms have perfectly good uses that are not marred by contamination with realist assumptions, and that they are not terms which belong to realism per se. Realists do not have a monopoly on the use of 'necessity' and 'possibility': these are terms of ordinary speech. And these uses may figure in an account of how laws can perform the role they do. Third, I think it still remains a task for the defender of laws to show that one can give a coherent account, and it is not beside the point to show that the central terms in an account can be construed in a manner which makes the whole account coherent and consistent.
As a matter of fact, it might have turned out that the central terms do not have any uses that together make up a consistent semantic niche for a term like 'laws'. 'Phlogiston' turned out to be such a term of scientific art: there simply was no way to construct a coherent account that covered the work such a notion would have to perform without, for example, countenancing notions like 'negative weight', work we now believe the addition of oxygen in some processes, or the release of hydrogen in others, to do in chemical theory. But that is fine. We can make perfectly good sense of combustion without it. It might have turned out similarly for the notion of 'laws', but I suggest here that it did not turn out that way. Law-talk marks a distinction in our treatment of constructed regularities, a distinction that does useful work in science. It is a distinction that needs to be made clear, and my account is an attempt to do so. If my account has succeeded, then it has solved a problem we actually have: coming to terms with how naturally selected organisms like ourselves, with limited cognitive access to our environment, can successfully predict the future. My answer, in brief, has been that we interact with our environment in ways that establish regularities and systematically try to make them fail, by employing a repertoire of actions designed to construct defeaters for the regularity at just those points where defeaters have been found for similar regularities. This is a good test for a regularity we are willing to project into the future, since we believe that many of the conditions within our experimental control, if not connected by a law, vary independently of each other. No expectation of covariation arises from such conditions of independence. So systematic interaction pries loose covariation that is simply accidental, because this, in the normal state of affairs, is precisely how such accidental covariation is discovered to be accidental. So the acceptability-conditions of counterfactuals are constructed through this process of attempted failure of laws. The failure to fail does not amount to a proof of truth, but proof is not required. That which proves unproblematic, for the pragmatist, retains the status of 'innocent until proved guilty.' The claim that maintains itself is not treated as a rascal we haven't unmasked yet: that way lies madness and skeptical regress. Rather, the claim is treated, prima facie and defeasibly, as a claim on which we can base practical plans of action. I have not in this chapter addressed all the objections and worries that may have arisen for the reader, nor will I in the next chapter. But I do hope to answer at least some central objections to my presuppositions, and may thereby prevent a much greater body of objections, which I will not address directly, from arising.

3.6 Summary

I believe a satisfactory account of laws requires taking seriously the naturalistic viewpoint, which sees all natural systems as maintaining themselves in an environment, including those natural systems constituted by our language and minds. The naturalistic viewpoint raises the question: how can limited cognitive agents with our resources be capable of our achievements? The answer given by pragmatists to this question is: by our constructive activity, subject to our control over the environment of the constructs. If the analogy with building in general is taken seriously, as I suggest it should be, then we have the following points of analogy to notice.
1. Buildings do fail, with catastrophic results. Since they are designed for use by human beings, we have an interest in studying the ways buildings fail and incorporating the results in a checklist.

2. Buildings do succeed, and prove, by their usefulness, worth the risks of building them. We record the ways that have worked in a list as well. The two lists become our building code, the set of standards we employ to check the worthiness of structures built for human use.

3. Neither success nor failure makes any sense apart from the interests, purposes and actions of agents.

4. However, interests, purposes and actions do not guarantee success, because the environment has a say. Buildings appropriate for one environment may utterly fail in another, and some buildings may be unconstructable dreams of the planner.

Laws, too, are constructs, designed to enable us to form reliable expectations of the behavior of systems of interest. The behavior must be of interest, but being of interest is no guarantee that we will succeed in capturing it in a formula that enables reliable prediction. Some attempts to construct experience fail, while others succeed. We build up cognitive repertoires by noticing the conditions of success or failure when we can discern them. The task of science is to systematically construct formulas that enable us to successfully predict and control the behavior of natural systems, including ourselves. Laws, so far from playing the subordinate role they have come to have with respect to theory, actually provide the crucial criteria of practical importance that theory must meet. Our speculative interests, whatever they may be, are embedded in an overall practical framework of survival and flourishing. This, I suggest, solves the problem of laws, or at least a real problem of laws. Maintaining itself in certain environments does not remove a law-claim from any possibility of future revision or replacement. But a coherent fit between construct and environment is a positive if fallible indicator of reliability, and reliable predictive capacity is all that we can reasonably ask of laws.

Chapter Four: Three Solutions in Search of a Problem

Laws are predictors on everybody's books. That is a given. How prediction is possible, or even whether it is possible, is a point on which views sharply divide. And the divide is related to the ways these views handle the problem of induction. On this question we find a remarkable concordance between views as disparate as those of Armstrong, Collins and van Fraassen, which we will analyze in this chapter. Why those three? Armstrong's inclusion should be obvious. One can safely assume he would not be persuaded by the arguments I gave against him in chapter two, or by the arguments I gave for my view in chapter three. So here I voice an objection I believe he might make after having read what I have written so far. Collins represents the social studies of science view. Since I have identified myself as a constructivist of some sort, it is important to make clear what kind of constructivist I am. Collins' rival conception of constructivism, which is close to mine in many respects, allows me a chance to say where I believe the differences lie between my view and views based on more sociological models. Van Fraassen's inclusion should also be obvious: this thesis is written as a response to his rejection of scientific laws. We are largely agreed on the flaws of the realist position. How do we come to disagree?
I will address that question by responding to an objection I believe van Fraassen would make to my account.

In this chapter, I will examine three fundamental objections that three rival views of laws might raise against my account. The objection from the realists comes first: laws cannot be representations, but must be what those representations are about. The objection from the constructivists comes next: expectations are uniform because they are subject to social negotiation, and laws are no different in this regard than social rules. The objection from van Fraassen comes last: expectations may be rational although lawless. He argues that any attempt to say what laws would add, epistemically, leads us directly into incoherence. The point I will argue in this chapter is that laws may be reliable representations without begging any important questions in semantics, scientific practice or epistemology.

4.1 Laws not Representations, but What Representations are About

Armstrong argues that there must be objective grounds for correct predications and truthful assertions. He lodges these in his objective states of affairs and the relations between them. Laws, if they are anything at all that science discovers, must be part of the world. He has specific arguments for why laws must not be identified with regularities. How might he adjust those arguments (as surely he would) to show that laws are not constructions? He would, I think, invoke a semantic argument from the nature of language as a representation of the objective world. Laws cannot be statements, but must be what those statements are about. If laws were statements, they could not tie together events as we believe laws do. This argument would, I think, be supplemented by a separate logical argument: laws and regularities, even exceptionless ones (and, indirectly, the constructions that formulate such regularities), cannot be identified, because they covary independently. We have exceptionless regularities where there is no law, and we have laws where there is no corresponding exceptionless regularity. This second argument, although important, is, I think, the less important of the two. Certainly, if it works, it undermines my position. But since I believe it is more easily answered than the first argument, I will take the second argument first. The second argument amounts to the claim that my view gets the extension of the term 'law' wrong when I make laws out to be constructed regularities. The first argument amounts to the claim that I mistake a real relation for a formal one.

4.1.1 The Logical Problem from the Extension of 'Law'

While the first argument marks out an ontological difference between regularities and laws, the second argument marks out a logical difference: regularities, even exceptionless ones, are logically distinct from laws because they have different classes of instances. There can be laws that do not have regularities as their expressions: probabilistic laws, for example, do not. There can be regularities that are always true because their conditions are never met, or are met only a finite number of times while attended by their consequents. A list of all the regularities that hold up at every time and every place ("Humean" regularities, in Armstrong's terminology) would contain types and instances absent from a complete list of laws. Furthermore, it is possible that the list of laws contains types and instances missing from the list of Humean regularities.
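The shape of this logical argument can be displayed compactly. The set notation is mine, not Armstrong's, and is meant only to fix the form of the objection. Let \(\mathcal{H}\) be the class of Humean regularities (those holding at every time and place) and \(\mathcal{L}\) the class of laws. The realist claims that both inclusions fail:

\[ \mathcal{L} \not\subseteq \mathcal{H} \quad (\text{e.g., probabilistic laws}), \qquad \mathcal{H} \not\subseteq \mathcal{L} \quad (\text{e.g., vacuously or accidentally exceptionless regularities}). \]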
But if so, the reduction of laws to regularities fails, and one cannot analyze laws as simply omnitemporally and omnispatially true regularities.

This objection is problematic in that it treats regularities as true in the past, present and future, assuming that we have the whole class of instances, including members that do not yet exist. This is to treat regularities realistically, but to do so begs the very issue under debate. The pragmatist wants to insist that it matters that we occupy a position in time and do not have the future available to us except in the form of projections by laws. Next, Armstrong refers to the view he has refuted as the naive regularity view, and it is appropriate to respond by saying that I do not hold the naive version: it is important to my view that we hold not just exceptionless regularities, but tested regularities. So it is appropriate to ask whether Armstrong would have some further objection to my sophistication of the naive regularity theory. He does. His complaint, specifically directed at Skyrms, is that laws might not be resilient in the form in which I described resilience in chapter three. Recall that the conditional probability of the regularity on the testing conditions varies at most by a wiggle within a particular interval, specified in advance. My gloss is that we can count on the regularity holding up under reasonable variation of environmental variables. What does Armstrong mean by "the laws might not be resilient"? He claims that for probabilistic laws, the true probability is consistent with instances or runs of arbitrary relative frequency. What is my response to this? Again, the notion of 'true probability', I think, begs the crucial question: how do we know that a law is a probabilistic one? We need not be committed to any claim about how fast the mean of a given sample of instances will converge on some limiting frequency, so long as we are prepared to say under which conditions we would give up the claim that there is a definite limiting probability involved. And it is important to see why we should insist on this: the ability to make predictions, even with probabilistic laws, depends crucially on our ability to determine the probability that the consequent conditions will follow on the antecedent conditions. If we cannot do that, it makes no sense to talk about a 'law' at all.
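Skyrms's notion of resilience, glossed informally above, can be stated schematically. The formulation below is my paraphrase of that gloss, not Skyrms's own definition. Fix a small tolerance \(\varepsilon\) in advance, together with a repertoire \(\mathcal{C}\) of admissible test manipulations; then the regularity 'A's are followed by B's' is resilient over \(\mathcal{C}\) just in case

\[ \bigl|\, P(B \mid A) - P(B \mid A \wedge C) \,\bigr| \le \varepsilon \qquad \text{for every } C \in \mathcal{C}. \]

The conditional probability may wiggle under reasonable variation of the environmental variables we can control, but only within the interval specified beforehand. Armstrong's worry is that no 'true probability' guarantees this for finite runs; my reply, just given, is that a true probability detached from any testing repertoire is precisely the question-begging notion.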
For my next point, let us look at the counterexamples Armstrong constructs to show that Humean regularities are not laws, and vice versa, and at the problems that lurk in those counterexamples. Armstrong constructs Humean regularities that no one would want to call laws, because they have only a single case and lack the repeatability we expect of laws. The recipe for cooking up such a regularity involves the following steps:

1. Single out a set of properties sufficient to pick out exactly one object.
2. Collect the other properties of that object into a second set.
3. Make the conjunction of the first set the antecedent and the conjunction of the second set the consequent.

One now has a conditional that is general in form (no names or spatial or temporal locations), and is therefore in principle reproducible, but that in fact has had, has, and will have exactly one instance. These, however, will not be counterinstances to my tested regularities, because it is built into my notion of 'test' that there must be at least two instances (the original and the replication), and there will usually be many more than two. The reason for insisting on this lies in the purpose or point of formulating laws: regularities, if that is the right name for them, that are exhausted by their observed instances (see the discussion of Goodman in chapter three) are of no use to us for predicting future events.

What about counterinstances the other way, i.e., laws that are not Humean regularities? I have already addressed the supposed inability of the regularity view to handle probabilistic laws. What about the more general problem, say, of laws that turn out to be local laws and not laws that hold at all times and places? Tooley's example of a garden that grows only apples, no matter what is planted, but contains no special properties that set it apart from other gardens, does support a generalization--'everything planted in garden x grows apples'--but it lacks the proper spatial and temporal generality (because of the definite description 'garden x') to be a Humean regularity. My response would be: it is not a law either. It is built into our notion of replicability that it should not matter where or when an experiment is performed, provided nothing of relevance to the results is changed. The same experiment, performed under the same conditions, should give the same results. If it does not, it is not projectible. Perhaps such a strange phenomenon as 'apples always grow in garden x' can be treated as a degenerate case, since it has temporal generality, and spatial generality of a limited sort: anywhere in garden x will work. Still, it is a funny sort of law, and certainly not a paradigm case that should set the criteria for lawhood. Regularities whose scope is less than all time and all space may still be useful, so long as they are universal within the scope of their applicability. But without such universality, they would be useless as inference tickets and powerless to predict.

This brings me to the larger issue. If the role of laws is to guide actions with respect to effectiveness, then what is the problem with even the untested Humean regularities, provided they are truly exceptionless? Waiving the legitimate complaint we might make that in acting on a regularity we are testing it after a fashion, even an untested regularity will work if the consequent always follows on the antecedent, we desire the consequent, and we can bring about the antecedent. Everything we could want for effective action is present. The reason we insist on testing is that untested regularities have often proved unreliable in the past. We have learned how they failed, and we have some idea how to make them fail in advance--before we commit life and resources to their use. But the problem is not in the regularity as such; it is in the conditions of its observance. We have sampling errors, observational errors, errors of inference, and errors of formulation that we need to check before the putative law is acceptable. We believe that there is a regularity if we can produce instances of it. We believe it is an exceptionless regularity if we cannot produce counterinstances of it. But if we were just given an exceptionless regularity (never mind how) and knew it was exceptionless (again, never mind how)--where is the problem that needs to be solved? The gap that Armstrong and the realists want to open between laws and Humean regularities strikes the pragmatist as unreal. If the grounds for such a distinction are the intuitions of the realists, the pragmatist would like to suggest that those intuitions are mistaken.
I, at least, have no such intuitions. The distinction I make between accidental generalities and laws is made entirely within experience, and it makes them disjoint subsets that partition the set of experiential regularities. We get the extension of 'law' right by revising the laws we accept in the light of experience, both in common life and in the laboratory. But at the end of the day, we are left with regularities--tested regularities, it is true, but regularities nevertheless. The pragmatist does not see a logical problem here that needs to be solved.

4.1.2 An Objection from the Conceptual Role of Laws

The serious worry is that law-statements do not do the same conceptual work that laws do. Here we get at something very basic to the realist view. Laws are real relations. They are part of the furniture of the universe. They cannot be bits of language, because language does not make event B follow on event A in causal laws, and causal laws are the fundamental laws. Law-statements, if they are about anything, are about laws, which make law-statements true if they are true. Or so claims Armstrong. This is part of his claim to have solved the problem of induction by the appeal to realism. It is part of what he means when he claims that the regularity view does not get the intensions of laws right, because such views miss the inner connection between the items connected. How does his solution to the problem of induction work?[45] The argument is this:

1. One cannot be rational in one's expectations unless one can give a reason for them.
2. A reason would be the same as an explanation of the events.
3. Regularities do not explain events, since they are simply sets of events, but laws do explain events.
4. We don't know laws by induction, on pain of circularity (i.e., a solution to the problem of induction may not assume induction in its proof).
5. We don't know laws by deduction from evidence, since they do not follow from that evidence without further premises. Those further premises would raise the same problem ('how do we know such general claims?') as laws, leading to regresses.
6. Therefore, if we know laws, we know them non-inferentially.
7. The only non-inferential knowledge we have is perceptual or analytic.
8. Knowledge of laws is not perceptual.
9. Whatever better explains constitutes a good reason for believing something, so belief in the best explanation is rational by definition. That is, the principle is analytic, and therefore necessary and a priori. (This is the 'inference to the best explanation' rule.)
10. Laws better explain events than any rival hypothesis.
11. Therefore, we are rational to believe in laws without needing induction for their justification.
12. But if we have laws, then the problem is solved: we have good reason to expect some regularities to hold in the future as they have in the past, although which ones those are is still an empirical matter.

Van Fraassen believes the hinge of the argument for laws and for induction is the 'inference to the best explanation' rule, my premise nine. Accordingly, he attacks the argument there.

[45] I am relying on the analysis van Fraassen gives of Armstrong's argument in van Fraassen, Laws 138-142, in the section "Armstrong's Justification of Induction." I have run together what he calls two sub-arguments, and I have taken some liberties in how I parse out the premises, but the basic moves here were identified by van Fraassen. For the passages in Armstrong where these arguments occur, see Armstrong, What 54-59.
Since I am concerned only with the objection this argument raises for my view of laws, I do not need to go into a detailed criticism of the strategy, although I think I largely agree with what van Fraassen says against it.[46]

The fundamental objection to the regularity view coming out of the realist argument for laws goes as follows: it is not that the regularity view counts too many things as laws that intuitively are not laws, but that regularities are the wrong sort of thing. We have supporting evidence in the fact that, while laws are explanations, regularities are not; and, a fortiori, expressions of regularities are even further removed, for bits of language at best express explanations. What lies behind this realist objection? I think the following expresses the realist intuition here. One must distinguish the manifestation of something from the thing itself. Language points to realities and should not be mistaken for that to which it points. Regularities manifest laws, but are distinct from them. Linguistic expressions may manifest laws, but they too are distinct. Laws explain the regularities and give the truth-conditions for law-statements. The explanation arrow does not go from regularities to laws. The regularity theorist errs in identifying laws with regularities, because the regularity just is the set of events we would count as manifesting laws. But those events cannot explain themselves. The connections within one set of events cannot explain themselves, still less can they explain the connections within other sets of events. Harvest may always have followed planting, but that fact does not explain itself, and it certainly does not explain any connection between future plantings and harvests. Something different is needed. In this example, we need an explanation drawn from the laws of biology to explain the connection between planting and harvest. The regularity theorist seems to think we can argue from known events to unknown events, a non sequitur, or argue from the set of known plus unknown events to unknown events, which is either trivially true or equivalent to the previously mentioned non sequitur. In either case, regularities cannot play the role of explanation, so they are not the right sort of thing to be counted as laws. A fortiori, whatever logical connections there may be between parts of language, that cannot be what anyone means by 'necessary connection' in this context. To suggest, as some have, that the only necessity is verbal or logical necessity is to attribute to philosophers an inability to distinguish between a sign and what it stands for.

I am not inclined to quarrel, as van Fraassen does, with the premise that rational expectations involve being able to give reasons or explanations for why one expects the past pattern to be repeated in the future. I think some suitably weakened form of that premise is correct. We may not know the reason why, or may not be aware that we are projecting a regularity, but when aware and challenged, we will usually give a reason that includes or implies some statement of law as an explanation of why we expect the regularity to hold up. My point of disagreement is with the claim that regularities are not explanatory.[47] Class membership may provide important clues about properties all members of the class share, and it may license inferences based on analogy even if nothing like Armstrong's universals exists.

[46] See the discussion of van Fraassen's criticism of the argument below.
[47] Van Fraassen has a very sharp challenge to my claim in his criticism of induction, which I will need to address when we come to it below.
Analogical reasoning does not involve appeal to truth-makers, because it does not presuppose some set of properties that all members of the set share. Rather, it involves learning which properties members of the set share and which they do not. The members are not viewed as fully determinate metaphysical realities, but as epistemically fuzzy objects whose nature is being determined during a process of inquiry. The determination is carried out in some language that may carve up the terrain differently than other languages we might have used. We do not simply list the members and read off their properties, because neither the set nor the properties are givens; both must be constructed. Our way of constructing the classes involves sorting out relevant from irrelevant properties by manipulating conditions. We learn how these objects work simultaneously with our classifying of them. The regularities we wind up with are not the regularities we started out with: they have become more determinate, more precise, and have already been challenged to fail in various ways. For example, we learn a lot about nuclei and nuclear processes while we sort out which materials are radioactive and which are not, and the concept of radioactivity is adjusted and modified while the list of members is determined. Most concepts in physical science display this open texture, as H. Putnam has noticed.

The problem is generated by Armstrong's atomism. He believes the world is the facts, that is, states of affairs that are logically independent of each other. So for him, an extensionalist would have to view laws as sets of ordered pairs of events, which allows him to ask: what do these ordered pairs have to do with each other if they do not share universals? Now, for me, individuation is relative to language. Are laws about events or about processes? Or neither? There is no canonical minimum vocabulary in which we have to state what laws govern. That is important. How we describe the constituents of laws is part of our overall enterprise of deciding how to describe the world. That we have some languages that treat laws as something other than constructs, and that maintain there is separate work for laws to do, should be regarded as a historical comment on the languages we have developed in the past. But before we accord them normative force, we need to ask what we can do with such a distinction. The problem I have continually come up against in trying to understand the realists is that we have no use for unformulated laws. All our explanations occur in language. Where else could they occur? We are trying to make sense of experiential patterns.

4.1.3 Response: Law-statements can be about the world without being about laws

The upshot of my response to the realists is this: law-statements can be about the world without being about laws. Law-statements pick out experiential episodes in a particular way that constitutes them both as regularities and as laws--but this constitution is relative to a language. There simply is no way around this relativity. But for all that, constructions can and do guide expectations, just as constructed bridges and roads guide our actions.
Just as with bridges and roads, we do not invent the material of the constructs ex nihilo: we are constructing experience, and we find ourselves up against the resistance of the world when we try to construct it one way rather than another. Still, we need to overcome the tendency to treat successful constructs as 'having been there all along, waiting to be revealed', because the constructional activity could have taken a different direction. Roads can guide without revealing some deep world-path that would have been there even if humans and their designs had not existed.

4.2 Laws as Socially Negotiated Rules

Social constructivists in the social studies of science and technology typically do not talk about laws, and when they do, they almost invariably mean laws as part of some legal system. This lexical fact is interesting and deserves study. But here I want to note that although the word 'law' may be absent, there are discussions in the social constructivist literature that are relevant to the view I have put forth. Since H. M. Collins has put out an entire book of essays in which he explicitly and self-consciously addresses the issues of replication and induction in scientific practice, it will be instructive to see where we agree and where we part ways. We are both social constructivists of some sort. The question is: where might we disagree over the account I have given of laws, and how do I respond on the point of disagreement? I will argue in this section that we disagree over the degree to which constructive closure is a matter of social negotiation, and over the extent to which the advance of science requires the replacement of tacit and inexplicit appeals to skill and group membership by standardized and mechanically applicable procedures that can be explicitly set forth in algorithms.

4.2.1 Replication as Social Achievement

After admitting that perception and the stability of perception amount to the same thing, and claiming that order requires our ability to re-identify something as the same thing we perceived before, Collins goes on to ask where our expectation of perceptual stability and order comes from in science. This is the problem of induction, but since the focus is on predictive success, it lends itself to a discussion of scientific laws.[48] This discussion of Collins has direct bearing on our discussion of laws, because what he calls the problem of order just is what I have called the problem of laws. After looking at Wittgenstein's and Goodman's work, he settles on the entrenchment of central terms in a network of terms as giving the taken-for-granted background for such agreement, although he claims to have gotten this insight from M. Hesse rather than from Quine, and he wants the coherence lodged in social institutions, not in probabilistic relations between beliefs. Next, he comes to puzzle over how such a background of agreed-upon meanings ever gets challenged and changed (as it surely does in science) and again arrives at the same conclusion I arrived at: "Press scientists and in the last resort they will defend the validity of their claims by reference to the repeatability of their observations or the replicability of their experiments...Repeatability, or replicability (I will use the terms interchangeably), is the touchstone of common sense philosophy of science" (Collins 19). With such substantial agreement, the reader may well expect the differences between Collins and me to be trivial.

[48] Since in this context Collins discusses Hume's view of cause and laws, this interpretation of Collins is one I believe he would accept.
However, it is at just this point that Collins makes a critical break: "Replicability, the vanguard of common sense theories of science turns out to be as much of a philosophical and sociological puzzle as the problem of induction rather than a simple straightforward test of certain knowledge. It is crucial to separate the simple idea of replicability from the complexities of its practical accomplishment." For "[o]nly in exceptional circumstances is there any reward to be gained from repeating another's work. Science reserves its highest honors for those who do things first, and a confirmation of another's work merely confirms that the other is prizeworthy. A confirmation, if it is to be worth anything in its own right, must be done in an elegant new way or in a manner that will noticeably advance the state of the art." Collins discusses in this book the difficulties associated with replicating advanced-design lasers from one laboratory to the next, with detecting gravity waves, and with detecting paranormal events. Collins uses these case studies to establish the following six propositions, which constitute the core of his 'empirical programme of relativism' [EPOR]:

Proposition One: Transfer of skill-like knowledge is capricious.

Proposition Two: Skill-like knowledge travels best (or only) through accomplished practitioners.

Proposition Three: Experimental ability has the character of a skill that can be acquired and developed with practice. Like a skill, it cannot be fully explicated or absolutely established.

Proposition Four: Experimental ability is invisible in its passage and in those who possess it.

Proposition Five: Proper working of the apparatus, parts of the apparatus and the experimenter are defined by the ability to take part in producing the proper experimental outcome. Other indicators cannot be found.

Proposition Six: Scientists and others tend to believe in the responsiveness of nature to manipulations directed by sets of algorithm-like instructions. This gives the impression that carrying out experiments is, literally, a formality. This belief, though it may occasionally be suspended at times of difficulty, re-crystallizes catastrophically upon the successful completion of an experiment. (Collins 73, 74, 76)

In the postscript to his book, Collins sharply distinguishes two models of science, the algorithmic view and the enculturation view. The algorithmic view has science driven by a set of explicit, formal procedural rules. The enculturation view has science driven by the acquisition and transmission of skills among qualified experts. In the subsection "understanding the deconstruction of facts," Collins maintains: "the perception of certainty is a matter of distance from the scene of the crystallization, both in time and 'social space.' Certainty increases because the details of the social processes that went into the creation of certainty become invisible....Thenceforward certainty is maintained by continued representations of the data in the style of facts.... The mere act of describing an experiment as a piece of ordinary life reduces its power to convince" (162). The central idea of the enculturation model is that scientific taken-for-granted beliefs begin as controversial, and come to be accepted only after the debate is closed.
But the debate gets closed only through a series of decisions about which practitioners are expert, which experiments were correctly done, and which experiments count as replications. There are no algorithms that could be executed by some machine to sort out the scientists from the non-scientists, skilled practitioners from unskilled, successful experiments from unsuccessful, and significant results from insignificant ones.[49] Drawing heavily on the notion of 'tacit knowledge' from Polanyi, and the notion of 'paradigm' from Kuhn, Collins makes the construction of scientific fact turn on the authority of the scientific community.

Now, up to a point, there is little to disagree with here. The scientific community identifies its skilled members and is responsible for what Ravetz has called 'quality control' on research. The standards of proper technique are constructed and enforced by the community. The community establishes when an experiment has been replicated or not. It also decides when the weight of evidence is sufficient to close a particular debate. However, what is missing here is what I have been calling 'the resistance of the world to being constructed in certain ways.' I conceive of this resistance as negative, because it does not convey positive information. The world does not suggest particular forms to us that correspond to its joints. I agree with Collins that the realists are wrong when they insist it is our job to discover pre-existing natural classes that are part of the furniture of the world. Nevertheless, there are constraints on our constructions that are not simply social. A team of Nobel laureates could bring it about that the scientific community universally agreed on a particular set of constructs, and that set of constructs could fail nevertheless. This is the point I made in chapter three: consensus is not enough.

Many scholars working in the social studies of science (Collins is not alone here) seem to believe that all agreement is achieved by negotiation between social actors. For example, Gilbert and Mulkay, writing from the perspective of discourse analysis, also argue that scientific facts are constructed by consensus, or by the illusion of consensus, since agreement may be achievable only by ignoring real differences of interpretation, community membership, or ascribed competence (Gilbert and Mulkay 112-140). Some scholars are willing to allow natural objects some say in how the negotiations go,[50] but others are deeply troubled by this metaphor.[51] The point here, again, is that the function of laws is to tell us how systems will behave in the future, and this cannot be an entirely social matter. Even after we grant the importance of language in shaping our perceptions in everyday life and in science, not all conceptual frameworks are created equal, and consequently some groups find their way of life under pressure from events that fail to turn out as expected. Healings may not heal, agricultural practices may produce deserts, methods of diplomacy may lead to ruinous wars, and methods of distributing goods may lead to shortages and starvation. Some ways of doing things simply do not work. I can say this without being ethnocentric. Our own culture is certainly not immune to the charge of having institutions that do not work the way they are supposed to work! Nor is it immediately clear, when a problem arises, what the solution will be.

[49] See "The Idea of Replication" in Collins 29-49, esp. 39-46.
[50] See Latour and Callon's actor-network theory and their somewhat broad notion of an 'actant' in Latour, Science, and Callon, "Four Models of Dynamics in Science," in Jasanoff et al., 29-63.
[51] See Collins and Yearley, "Epistemological Chicken," in Pickering (ed.), Science 301-326; Latour's reply, "Don't Throw the Baby Out with the Bath School," and Collins and Yearley's response, "Journey into Space," follow. Especially important here is the section "Networks Revisited: Ants and Ors," where Collins and Yearley discuss their reservations about actANTs rather than actORS. Pickering, Science 372-376.
None of the available alternatives may be any better, or, if better, practicable in that particular context. The problem may be hard to diagnose: it may be not an isolated practice that is at fault, but its combination with other practices. The Quine-Duhem problem has its counterpart in practice, and so it should, if practice and belief are hooked together as I have claimed in chapter three.

Now suppose Collins were inclined to grant that my point applied to technology and engineering, but not to concepts. It is unlikely that he would, since his examples of replication failure include examples of constructing devices that simply do not work (even though theory says they should), like the TEA laser Harrison was trying to duplicate. And since the Bath school of social studies of science seems committed to focusing on agency and practice, and on the contingency of scientific choices in general, I find it unattractive to suggest a difference between theory and practice at this point. Nevertheless, let us create an opponent, called pseudo-Collins, who would want to make the following move: we have complete control over the descriptions we give of an activity and its results, so we can make the world safe for our versions by deploying appropriate rhetorical devices. If I am reading them correctly, Gilbert and Mulkay are making moves of precisely this sort. Scientists as social actors deploy the empiricist repertoire to explain how they are just following the observational facts, and the contingent repertoire to explain how their opponents, though bright and hardworking, got things wrong through social and psychological distorting influences. They can also, when necessary, use the rhetorical device of 'the truth will out' to explain why the scientific community still hasn't backed their theoretical explanation of the experimental events. In our opponent's move, we can, following Quine and Duhem, make whatever adjustments we need in the rest of our beliefs in order to save our version of the events.

This version misses something important: the empirical character of science. What we want is more than having our views accepted by others. We also want our beliefs to track experience and register appropriate sensitivity to changes in our environment. The realists are not wrong to suggest that empirical belief involves a relation to the world, and this is the point that is missed when all agreement comes down to consensus. Once again, such a view must, by its nature, be entirely retrospective: we cannot explain away recalcitrant data until we have already been disappointed in our expectations. The trick, of course, is to get it right in advance of the data, to predict what is going to be the case. And for this, we need to stop the negotiation process and make some claim that risks being wrong.
On this matter, I side with Popper: if we never risk being wrong, keeping our options in play and avoiding any firm commitments, we simply are not doing science. To be sure, this does not mean we never revisit our commitments or revise our beliefs. This too occurs. But if we never expose our beliefs to falsification, we never really know how solid our views are, and it becomes treacherous to base any practical plans on such untested beliefs. I am sympathetic to the claims of those in the social studies of science that we do not simply read off of nature (or off our tables of data) how nature is organized, and I agree that the actual story is more complicated than that. Still, if science is not sensitive to experience in the appropriate way, it simply becomes an elaborate game or ritual, and not the source of reliable belief I maintain it is. And if there are some in the social studies of science who believe science is just an elaborate game, then I maintain they are missing something basic in their views, namely, how science can fund technology that works.

4.2.2 First Response: Algorithms are Methodologically Useful

I would like to give a modest rehabilitation of the algorithmic view Collins opposes. Collins opposes the algorithmic view of scientific decision-making by appeal to L. Wittgenstein, M. Polanyi, and T. Kuhn, with their respective notions of 'following a rule', 'tacit knowledge' and 'paradigm'. The role of skill in reproducing results, and the difficulty of identifying skilled members apart from their ability to produce the same results as other recognizably skilled practitioners, are seen as undermining the older formalist views of a logic of science held by the logical empiricists R. Carnap and K. Popper.[52]

[52] Popper's claim to having always been an opponent of the logical positivists should be taken seriously, but usually is not. See Hacking, Representing 43-44, for an explanation of why Popper is commonly lumped together with the logical positivists.

I, too, oppose this view. Scientific judgment is not algorithmic, but involves multiple criteria, vague terms, and plenty of scope for disagreement on the weighting and application of criteria in scientific conflict. The case studies found in the social studies of science literature provide abundant evidence of the correctness of Kuhn's case against the algorithmic view, so I agree with Collins because I agree with Kuhn here. Nevertheless, there is something important that both Kuhn and his followers are missing, and here I must part company with them. The formation of explicit procedures, executable in the limit by a computer or a machine, is an important part of the progress of science towards greater objectivity and publicity. The reason for replacing qualitative descriptions with comparative and quantitative ones is that the latter are more informative: there are more ways such claims can turn out false, so they provide more information about the world. Again, this is a point raised by Popper. Why do we want claims that do not give much latitude for interpretation? Because to the extent that we eliminate possible interpretations, we narrow the range of possible disagreement when different persons agree. Most of us agree there are a lot of trees on campus at Michigan State University. Fewer of us agree on the exact number. Since nothing practical hinges on that, we are content to leave matters at the level of vague agreement.
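The Popperian point about informativeness admits a standard formal gloss; the content measure used here is one conventional choice, not something my account is committed to. If one hypothesis logically entails another, every falsifier of the weaker is a falsifier of the stronger, so the stronger claim excludes at least as many possibilities. On the usual probabilistic rendering,

\[ Ct(h) = 1 - p(h), \qquad h_1 \vdash h_2 \ \Rightarrow\ p(h_1) \le p(h_2) \ \Rightarrow\ Ct(h_1) \ge Ct(h_2), \]

so the logically stronger claim carries at least as much content and runs at least as much risk. An exact count of the trees on campus says more, and can fail in more ways, than 'there are a lot of trees on campus.'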
But what if the matter were one of medical diagnosis, and different diagnoses led to quite different, incompatible therapies? Both doctors and patient have a stake in determining the preferred description of the disease. In the interest of successful therapy, neither can afford to leave matters vague: both will want the most informative description justified by the evidence. That the scientific community is looking for algorithms, even imperfect ones, is shown by the replacement of human observers by electronic detectors, human recorders by computers, and human hands by robots. The rationale is a good one: we do not want the outcome to turn on the idiosyncrasies of individual scientists. We want the experimental procedure to be performed the exact same way wherever possible. To be sure, this is an ideal even when we talk of robots. Lubrication changes as a function of time over a given run, signal strength may vary with random noise in the circuit, replacement parts may not be identical, and so on. Still, we are talking here, as always, about controlling variables that affect the outcome of experiments, and with that, the reliability of results. The formulation of laws turns on the ability to control the conditions under which the uniformity holds. This means that the trend towards explicitness is a trend that favors the formulation of laws.

This leads to another, related point. Insofar as we can replace judgment with explicit procedure, we replace difficult-to-control variables by ones we can better control. Insofar as we depend on the community of science to identify skilled performers and skilled performances, we are taking matters on trust. Now, since science is a human endeavor worked out under conditions of ignorance, we cannot eliminate the need for trust altogether. But it is important to see that the judgments within this system of credit constitute promissory notes that may be difficult to fill. We trust the competence of well-trained scientists from good institutions, and we trust the output of instruments constructed by reputable firms. But it would be better to actually watch the scientist do the experiment, and it would be better to have comparative studies of the performance characteristics of various instruments under a variety of conditions. To some extent, the game really is what D. Gooding suggested: to embody skill in the construction of a device that will behave the same way for everyone. However, this requires us to be more explicit about the conditions involved in the procedures, and this, as I said above, involves specifying the laws relevant to the operations and their outcomes. Insofar as the social studies of science unmask consensus as illusory, they point out a problem that needs to be solved by the scientific community. Intersubjective validity, if it is going to cut across group membership and give non-scientists results they can use, requires reflection on the methods used, in an attempt to be as explicit as possible about what needs to be done and how. This reflection is necessary to minimize differences due to idiosyncrasy and incompetence. Nonstandard views and practices may generate fruitful new ideas, but they can also spell disaster for common enterprises that require real agreement among the actors.
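Before leaving this response, the contrast between tacit judgment and explicit procedure can be made concrete with a deliberately toy sketch. Everything in it--the function name, the settings, the tolerance--is invented for illustration, and nothing in my argument depends on the details; the point is only that whatever can be written down this explicitly can be executed, audited, and replicated without appeal to the skill of a particular experimenter.

import statistics

def run_condition(measure, settings, repeats=5, tolerance=0.02):
    """Carry out one experimental condition by explicit procedure:
    fix every controllable variable in advance, repeat the measurement,
    and flag the run if its spread exceeds a bound set before the data arrive."""
    readings = [measure(**settings) for _ in range(repeats)]
    spread = statistics.stdev(readings)
    return {
        "settings": dict(settings),     # recorded so that anyone can replicate the run
        "mean": statistics.mean(readings),
        "spread": spread,
        "stable": spread <= tolerance,  # criterion fixed beforehand, not negotiated post hoc
    }

Real experiments do not reduce to a dozen lines, of course. The claim is only that each variable moved out of tacit skill and into such a procedure is a variable brought under the kind of control that the formulation of laws requires.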
4.2.3 Second Response: Consensus is Prior to Dissensus

I believe that the focus on scientific controversy has led some (like Gilbert and Mulkay, and T. Pinch) to exaggerate how much dissensus there really is, and to miss the deep agreements that link even those scientists who disagree with each other. Not only must scientists agree widely about the nature of the discipline and its procedures, but certain widely shared agreements between scientists and non-scientists form the background for the very formulation of controversial issues within the discipline. To put this in the language of laws: if we did not have shared expectations of how scientists and scientific devices behave, we would not be in a position to notice when they don't behave the way we expected them to, i.e., when we have views that disagree or data that do not fit our theories. In this respect, I think they have missed something basic to Kuhn: the possibility, even under conditions of normal science, of anomalies. The more explicit we are in the formulation of laws and procedures, the more we are in control of outcomes, because becoming explicit involves sorting out relevant from irrelevant conditions. The problem of order turns out to include the scientist, who is able to make his or her behavior more controlled as he or she makes his or her procedures more explicit.

4.2.4 Third Response: Science is Constructed at Every Level without being Constructions All the Way Down

In sum, my response to constructivists who reject any independent control by the world over belief is that science is constructed at every level, but is not constructs all the way down. That is, there is no isolable aspect of science that is free from constructional activity, as I argued in chapter two. There is no 'given' content that can somehow be freed from all human dress, so as to be viewed naked and unadorned. Every fact is a constructed fact. Still, not every construct works. All are of the nature of hypotheses tentatively floated, tested, and tried out in various environments. Since we work out the details of empirical processes under conditions of imperfect information, we may find ourselves under pressure to revise constructs when they fail to track those processes. Firm belief does not eliminate anomalies. It is only with firm belief that anomalies are possible.

The lesson for laws should be clear. Laws cannot be social rules, like the rule determining which side of the road we drive upon, because the problems we face in the world are not simple coordination problems (to steal a phrase from game theory used with good effect by D. Lewis in Convention). We are not the only ones who police our constructs. The world does so as well. At the very least, the world holds veto power. However difficult it is to explain the point without lapsing into metaphor, the point remains important: there must be more involved in the formulation of laws than passing the inspection of journal referees. Truth cannot simply be whatever our colleagues will let us get away with. It must also be, at least partially, a matter of what the world will let us get away with.

4.3 Laws as Doppelgänger for Subjective Probabilities

Van Fraassen and I agree at many points about the realists' weaknesses. The question is: why does he give laws over to the realists and say good riddance? Why does he believe laws dispensable? The answer will turn out to be very complicated and, like the others we have discussed, turns on his views of induction and semantics. I will start with his conclusion, which turns out to have two parts, only one of which I have addressed so far.
I will then work through his arguments for the unanswered part of his conclusion, because those arguments contain an important objection to my view. His conclusion is that laws are impossible and superfluous. They are impossible because nothing could have all the properties ascribed to laws by philosophers. I have already addressed and answered that part of van Fraassen's conclusion in chapter three: the pragmatist can supply items that will meet the appropriate philosophical demands. The part of his conclusion I have not addressed is the second part. Laws are superfluous because they duplicate the work already done by subjective probabilities and arguments from the symmetries of theoretical models. But like the doppelgänger of literature, this is not a twin but an evil one, which tempts us into metaphysics and incoherence. Laws are not simply useless; they are impediments to a clear understanding of good empirical scientific practice. It is irrational to assign an event any higher probability than a strict Bayesian would, but laws tempt us to do just that. If van Fraassen is right, then I have a choice between being a rational subjectivist and an incoherent objectivist. The pragmatist should repent of fence-sitting, give up Truth and Laws as the realist dreams of them, and follow the honest path of experience and reason.

How does van Fraassen argue for the claim that laws are irrational and superfluous? In brief, he regards induction as impossible (and unnecessary) and believes the meaning of scientific language is to be found in language-independent models. Combined, these lead to a rejection of laws: we cannot know what the future will be like, even though we can (and must!) talk as though we do. Those who claim we have knowledge of the future based on laws are mistaken. The best we can do is project past relative frequencies into the future. Van Fraassen arrives at these views in articulating a semantic theory of theories and a permissibility view of rationality. Combined, they amount to the claim that rational beliefs are not knowledge but are merely probable on the evidence. But this, it will turn out, is simply a disguised version of the realist independence assumptions that I criticized in chapter two. Because laws are not part of language, much less parts of minds or mental states, they can play no role in guiding belief-formation or actions. That is the claim in a nutshell.

The objection arises that any role playable by laws is already performed by conditionalizing beliefs on evidence. If we attempt to do better than the conditional probability by giving laws weight in the probabilities we assign to events, we actually do worse, because we violate the axioms of probability and embrace incoherence. However, so long as we are consistent in our assignments, we can assign whatever probabilities we like to events, and then adjust our beliefs in the light of how well they predict what happens. Van Fraassen is arguing against the view that without laws we can have no rational expectations for the future. Since this is my belief, he is arguing against my view. So it is important for us to look at the arguments he gives, to see whether they directly address the claims I have made, or can be modified to do so.

4.3.1 Van Fraassen's Dual Strategy

In chapter 6 of Laws and Symmetry, van Fraassen argues against the views of both logical empiricists and realists (131-150). The logical empiricists claim that the data are enough to license a move from particular samples to general conclusions.
The realists claim that although the data are not enough, without laws, to yield universal conclusions, we have laws on the basis of inference to the best explanation. Logical empiricists try to solve the problem of induction by the construction of definite confirmation functions that act as rules telling us which generalizations to accept. The realists claim we employ abduction, rather than induction, to get laws, and that laws constrain beliefs about the future. Van Fraassen has criticisms of both. The logical empiricists use an internal approach: the data themselves, without external constraint, point to the conclusion. The realists use an external approach: laws, existing separately from us, from language and from minds, can control our beliefs (on one interpretation of the realist view)[53] and set our expectations so that we can draw particular conclusions (i.e., make predictions) from sets of data. Both critiques given by van Fraassen contain potential objections to my view, even though I advocate neither an inductive logic nor inference to the best explanation. So after I give some of the details of van Fraassen's argument, I will need to tease out particular objections I think he might raise to my account of laws. I believe that for van Fraassen to make a solid objection to pragmatism, he must share one or more of the fundamental assumptions of the realists; but if he does so, he becomes heir to all the headaches those assumptions bring.

[53] Van Fraassen considers this causalist view as one of three possible interpretations to be assigned to Dretske's claim about the remarkable confirmation of laws, but goes on to deny that this is the interpretation Dretske had in mind.

4.3.2 The Problem with Induction

In his critique of induction (132-134), van Fraassen criticizes efforts to construct a formal inductive logic with rules setting specific confidence levels for scientific propositions based on the amount of support they receive from observational propositions. He claims that any choice we make for the rule will turn out either arbitrary or uninformative. He singles out the straight-line rule as an example. The rule states that we should take the relative frequency of an event as the probability of its future occurrence. If the rule were good, it should hold in all situations. However, suppose we apply it to the proposition 'this coin will turn up heads next time I toss it' after we have already tossed the coin once, with it coming up heads. By the straight-line rule, we have a relative frequency of 1 (since one out of one tosses turned up heads). Should we therefore assign a probability of 1 to the proposition that the coin will turn up heads on toss number two? Of course not, because we have experience of other coin tosses that is relevant. If the coin is fair, it probably will not turn up only heads for, say, the next ten tosses. But using the straight-line rule, we have no reason to expect that our success in getting heads will be broken any time in the future. To avoid this problem, we could go to a second-order rule that judges the confidence we attach to our first-order claim about probabilities. But the same problem occurs there. The alternative strategy is to avoid setting the probability exactly, and to say that, as the sample size increases, the probability of the relative frequency matching the true probability approaches one. But this makes the rule more sensible at the expense of its informativeness.
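The two rules just contrasted can be displayed side by side; the formulas are standard and add nothing beyond what the text says. The straight-line rule identifies the probability of the next occurrence with the observed relative frequency,

\[ \hat{p} = \frac{k}{n}, \]

so a single toss landing heads (\(k = n = 1\)) yields \(\hat{p} = 1\). The weakened alternative claims only convergence:

\[ \lim_{n \to \infty} P\!\left( \left| \frac{k}{n} - p \right| \ge \varepsilon \right) = 0 \quad \text{for every } \varepsilon > 0, \]

which is sensible but says nothing about what to believe on any finite sample--exactly the loss of informativeness just noted.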
If, for the sake of definiteness, I try to increase precision by specifying narrower and narrower confidence levels, I gain informativeness, but at the expense of rational motivation. I cannot justify the choice of a confidence interval by appeal to scientific practice, because there is no universally accepted confidence interval that holds for all disciplines. So no such rule using only the data can guide our selection of probabilities to assign to propositions.

Do I get caught in this dilemma? The problematic assumption is that there are objective chances to be discovered. But events do not have probabilities associated with them apart from the way we describe them, classify them or measure them. Probability is relative to a practice. Within the practice we have a set of distinctions that enable us to distinguish initial and final conditions, and we also have a set of manipulations that allow us to bring about the connection between the initial and final conditions expressed in a law. It makes sense that probability is sensitive to the construction of a reference class--even a realist like Salmon recognizes that--but the construction of a reference class is itself relative to the procedures for determining its membership. There are no objective reference classes out there waiting for us to discover them. The relativity of probability to a set of procedures can be seen even in the simple example of tossing a coin. It makes no sense to speak of the 'probability of a coin coming up heads' without specifying the procedure for coin tossing, because, quantum effects aside, the ballistic behavior of a tossed coin is deterministic and can be made as close to certainty as we like with the appropriate control of conditions. Series of coin tossings based on different procedures can converge on very different probabilities, and the straight-line rule would yield different probabilities under these different conditions. Furthermore, if we give up the assumption of determinism (under which all events have a probability of one or zero), and assume that natural processes and events are irreducibly stochastic, then probability evolves over time, and the probability of the coin coming up heads may be very different depending on whether we ask the question a minute before the toss or a nanosecond before.

The matter of probability is tied to the notion of replicability. Any stable probability we would assign to an event requires that we can identify what would count as a replication of the experiment. As social constructivists rightly point out, there is a large social element in the criteria of identity--or of relevant similarity, to weaken the notion a bit for experiments that cover a different range of conditions. Some questions, which I paraphrase from Collins, must be addressed:

1) Did the same experiment contain the replication in successive runs?
2) Was different apparatus used?
3) Was the experiment performed by equally competent researchers?
4) Was the experiment performed by friend or foe?
5) Was the experiment performed in a related or dubious field?
6) Was the data 'close' to the expected curve? (Collins 39-44)

All of these criteria need to be negotiated, as the social constructivists correctly claim. We weigh the trade-offs, but we can only be vindicated in retrospect. If the beliefs work, we made the right choices. If not, back to the drawing board! Now, we could stop here as far as the pragmatist is concerned. The question I have is: would van Fraassen be content to let matters rest here?
If so, enough has been said: his objection was not directed against me and has been sufficiently answered. If not, however, then we need to go on. What more would he require? The question has some bite, because van Fraassen does talk about the difference between acceptance and belief: we may accept claims as empirically adequate that we do not believe to be true. This distinction between truth and experience enables van Fraassen to enter into dialogue with the realists, who in their independence assumptions mark similar distinctions between reality and mind, between reality and linguistic claims, and between truth and meaning. So the questions that need to be addressed are: does van Fraassen allow 'objective chance' an existence independent of 'subjective probability', and does he believe that it makes sense to claim objective probability exists even if its existence cannot be experientially verified? If so, his permissive view of rationality is too permissive, and at war with his empiricism. I will give a fuller exposition of these points in 4.3.4 and my response to them in 4.3.5.

4.3.3 The Problem with Inference to the Best Explanation

Van Fraassen's criticism of the realist strategy, 'inference to the best explanation', takes the following form (142-149; but also 160-170). He puzzles over what the strategy might mean and comes up with two possibilities:

1) Privilege: We are disposed to form true beliefs.
2) Force Majeure: We are forced to choose among historically available hypotheses.

He then considers two revisions of the principle that might make it more palatable to its critics: make it simply a matter of raising the probability of truth, or make it simply a rule of the rationality (and not the truth) of the belief. He then criticizes each of these possibilities in turn. Privilege suggests we have a knack for coming up with the right hypothesis, so that the best explanation could be among the rivals considered. But, he goes on to say, neither rationalism (a la Spinoza) nor naturalism (a la Darwin) can really justify this claim. Against force majeure, he argues that being forced to choose does not mean one is forced to accept the result as good. One may have chosen the best of a bad lot. Some choices are simply ones that enable us to get on with the work, without judging the merits beyond that of the chosen means.

The sophisticated versions of the 'inference to the best explanation' strategy sound close to what he attributes to Dretske and Armstrong. Dretske claims that laws are confirmed faster than the corresponding universal regularity, and that this suggests something other than induction is going on. Van Fraassen finds this puzzling, but locates the onus on some notion of intrinsic explanatoriness. Otherwise, either the probability of the regularity is raised at exactly the same rate as that of the corresponding law, or the result is incoherence, because bonus points given to laws that depart from the probabilities assigned by a conditionalizer lead to Dutch book problems. Other versions of what Dretske might mean fail even more quickly. It is unclear how the mere existence of laws could raise the probabilities we assign to our formulation of those laws. Nor are matters improved if we raise probabilities based on the belief that it is a law, because then the law is assigned a probability of one. Once we have assigned it a probability of one, there is no longer any question of evidence confirming the law faster than its corresponding regularity, since the law is already certain.
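A small calculation makes the point vivid. This is my own illustration, not anything in van Fraassen's text; it assumes only that the law entails the corresponding regularity, and the priors and likelihoods are hypothetical numbers of my own choosing:

    p_law, p_reg = 0.2, 0.5               # priors; law-worlds are a subset of regularity-worlds
    p_e_given_law, p_e_given_reg = 0.9, 0.7
    p_e = 0.5                             # prior probability of the evidence

    # Bayes's Theorem: P(H | E) = P(H) * P(E | H) / P(E)
    post_law = p_law * p_e_given_law / p_e   # 0.36
    post_reg = p_reg * p_e_given_reg / p_e   # 0.70

    # Since every law-world is a regularity-world, P(Law & E) <= P(Reg & E),
    # so conditionalizing can never lift the law above its regularity:
    assert post_law <= post_reg

    # Awarding the law a 'bonus' above its conditionalized value breaks
    # coherence: anyone posting the inflated price can be traded against at
    # the two prices for a sure loss -- the Dutch book problem noted above.
    bonus_post_law = post_law + 0.1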
At any rate, the confirming power of evidence is linked to the idea that something can be intrinsically explanatory. As one might expect, given the contextual character of the pragmatics of explanation that he endorses, van Fraassen finds this a puzzling notion.

Better is Armstrong's attempt to make 'induction is rational' an analytic truth (Laws 138-142). Problematic about this move is the notion of rational compulsion that seems to be required if inference to the best explanation is going to qualify as a normative rule for the formation of beliefs and expectations about the future. While van Fraassen agrees it might be rational to believe something on the basis of such a rule, he denies that rationality requires one to follow the rule. Rationality is permissive, not compulsive, for van Fraassen. Like William James, van Fraassen believes we are rational in our belief formation so long as we do nothing that conflicts with rational constraints. But this differs from the claim that we must believe what is rationally required, because nothing is rationally required absolutely, but only relative to some set of beliefs we hold. It is rational to consult the set of beliefs we already hold in constructing other beliefs, but unaided rationality cannot supply some initial set of beliefs, as Descartes, Spinoza and some other rationalists supposed. So, even assuming we could solve the problem of what counts as 'best' in the best explanation, it still would be inappropriate to suggest that this supplies foundation principles for knowledge.

Now I heartily endorse these criticisms. Any version of pragmatism maintains that we always start in the middle of things. All versions deny that there are any absolute beginnings. Some versions question absolute endings as well, even in the long run. Rationality works on the stuff we supply from the space of ordinary life and common experience. Science refines the process and makes it self-conscious, critical and self-modifying. There is no Baron von Münchhausen here, lifting himself out of the swamp of human contingency and historical situatedness by his own hair. We do construct explanations, including laws, to guide us through experience and to fund our practical projects. Still, this cannot properly be described as inference, and laws are still very much with us on the plane of experience.

Would van Fraassen be satisfied with this, or would he still have problems with my point of view? What more does he require? Once again, empirical adequacy is an issue. Not only do we never get to absolute beginnings, i.e., truth, and hence laws in the realist sense, but empirical adequacy places demands on constructs that rob us of laws even in the sense of empirical regularities. What we have are regularities that have held up in the past, with no guarantees that they will hold up in the future. It still makes sense, van Fraassen thinks, to go with the regularities that have held in the past, since simple conditionalizing gives us that much. But all we ever really get is belief in the probability of the regularity holding, not in the truth of the regularity. So long as the regularity is empirically adequate, we have no reason to complain. But still there is something missing (i.e., future data) that makes us withhold belief in the truth of the claim. Instead, we accept the claim without believing it. This view, that we accept scientific claims instead of believing them, has problems, and I will discuss those problems below in 4.3.6.
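The gap van Fraassen points to--confidence in the next instance without belief in the universal regularity--can be put in numbers. The following sketch is my own illustration under an assumed uniform prior, not anything from his text:

    # A minimal sketch: assume a uniform prior over an unknown chance p that
    # the regularity holds in any given instance, and conditionalize on n
    # straight successes.

    def p_next(n):
        """Posterior probability the regularity holds in the next instance
        (Laplace's rule of succession): (n + 1) / (n + 2)."""
        return (n + 1) / (n + 2)

    def p_next_m(n, m):
        """Posterior probability the regularity holds in all of the next m
        instances: (n + 1) / (n + m + 1)."""
        return (n + 1) / (n + m + 1)

    print(p_next(1000))           # ~0.999: near-certainty about the next case
    print(p_next_m(1000, 10**6))  # ~0.001: almost no confidence in a long run

    # However large n grows, the probability that the regularity holds in all
    # future cases stays near zero under this prior: conditionalizing funds
    # expectation of the next instance, never belief in the universal truth.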
4.3.4 The Problem with Pragmatism

Since I have already tipped my hand, the reader knows that van Fraassen is working out some of the implications of a view that draws on William James. That makes him a pragmatist of some sort. I am a pragmatist of some sort. The reader may be beginning to wonder what there is left to discuss: if van Fraassen and I agree that pragmatism is the way to go, where's the beef? Re-reading van Fraassen, I am tempted to ask the same question. Still, certain claims he makes leave the nagging suspicion that we are not in much agreement at all. While van Fraassen invokes the William James of 'The Will to Believe' in justification of his claims about rationality, he does not quote the William James of 'Pragmatism' who identifies truth with what works and who asks about the cash value (in terms of experience) of abstract terms. The puzzle of omission continues with the absence of any discussion of the view of E. Nagel54 and a somewhat cursory discussion of N. Goodman (Laws 34-35). Here are two missed opportunities to show his endorsement of pragmatism by favoring some of the pragmatist views of laws already available.

However, van Fraassen seems unwilling to entertain a use theory of laws, of which my view is one version. Why? I suggest the answer lies in commitments of van Fraassen's that make the pragmatic features of laws irrelevant to their truth. Van Fraassen develops a very different theory of truth in his semantic conception of theories. His view, I will suggest later, shares some common presuppositions with the realist positions he rejects. Evidence that van Fraassen would say these things comes from the following passages. In The Scientific Image, van Fraassen distinguishes the realist view from two sorts of antirealism: the first antirealist maintains that theories are not literally true or false. This view he identifies as instrumentalist. The second antirealist view maintains that theories are literally true or false, but we do not know which. All we know is whether things are as if those theories are true, that is, whether they are empirically adequate. This second antirealist view is van Fraassen's own. The realist is like the theist, who believes scientific theories make literal existence claims that aim at truth. The first type of antirealist is like the atheist, who holds that scientific theories, when not simple summaries of observations, have at best a metaphorical truth. Finally, the second type of antirealist is like the agnostic, who maintains that scientific theories aim at literal truth, but we have no empirical means to know whether they have reached such truth.

Whom does van Fraassen include in his instrumentalist category, the atheists? Does the category contain only the logical empiricists, whose suspicion of theoretical entities leads them to seek creative interpretations of central terms in physical theory? That seems true, given other things he says. He says, for example, '[l]ogical positivism, even if one is quite charitable about what counts as a development rather than a change of position, had a rather spectacular crash' (The Scientific Image 2).

54 Unless one counts the following dismissal as discussion: "The culmination of Carnap's and Reichenbach's, and Hempel's attempts, which is found in Ernest Nagel's The Structure of Science, was still strangely inconclusive. In retrospect they were dealing with modalities which they could not reduce, saw no way to finesse, could not accept unreduced, and could not banish." Laws 38.
In another context, he talks about how far Stegmüller has gone on the road to "logical positivism and instrumentalism" (Laws 191). He explains 'instrumentalism' as a view that refuses to take a doxastic attitude toward theories. It seems to me that this term throws together some very different approaches. Surely the view that attempts to reduce theoretical terms to observational ones differs from the view that replaces one set of theoretical terms (so-called folk theory) by another set, a strategy employed by eliminativists, who seem most nearly like the atheist of whom van Fraassen speaks. These views in turn differ dramatically from the views of instrumentalists like Dewey and Nagel. They all agree in denying literal truth to theories, but they affirm very different things. However, the choice of the term 'instrumentalist', a term of choice for the pragmatist, suggests that pragmatism is included in the views van Fraassen intends to criticize.

What, then, is the criticism van Fraassen has of pragmatists, including those who make truth a matter of what works in practice? The criticism would be that they one and all have the wrong notion of truth. Features of utterances important to the pragmatist (e.g., historical and social context and the context of other beliefs held by the utterer or the listener), along with features associated with the uses of the utterance (e.g., fruitfulness, simplicity, and coherence with other accepted theories), are irrelevant to the truth of the claim. Truth is to be identified with an isomorphism between one of the models of the theory and the system it purports to represent. This, it should be noted, is exactly the formulation the realists would give, a point to which we will return later.

Why should a use theory of laws not count? Again, based on evidence from the book, pragmatists are not correctly playing the language game associated with laws. Laws have been taken to be principles, or axioms, of science and knowledge. This the pragmatist can concede: rationalist philosophers have, in the past, mistaken the aim of science to be the articulation of a system of axioms from which the entirety of knowledge can be deduced. And van Fraassen is right again when he says that axioms are hard to come by and do not eliminate unintended models. But unless one is a realist, there is no reason to insist on a unique set of axioms, and the pragmatist is not interested in a structure that deductively yields all knowledge. We can make useful predictions with rough analog models, which are considerably easier to formulate than a set of axioms. Multiple, incompatible models may be employed on the same phenomena for different purposes. The pragmatist finds the insistence on axioms a hindrance if it prohibits such sound scientific practices as these. So while van Fraassen's critique of axiomatic approaches is interesting, it addresses logical empiricist projects no longer pursued, and it does not really raise a difficulty for the pragmatist.

The real battle, it would seem, lies in the rivalry between the pragmatists' account of meaning and truth and van Fraassen's. I will now turn to the account van Fraassen gives and how it influences his rejection of laws, and then go on to say why I am not persuaded to give up laws with him.

4.3.5 Van Fraassen's Position

Let me summarize what van Fraassen has said so far, then go on to develop his view, contained in his chapter 'Toward a New Epistemology' (Laws 151-182, esp. 151-155 and 170-176).
First, van Fraassen rejects inductive logic. There is no rule that leads from data to universal generalizations, much less laws, supposing the two differ. The data do not determine the theory. Next, there are no external constraints, laws or otherwise, constraining our belief, even in the limit, to force one choice upon the scientist as the rational one. These two points combine to say that rationality does not compel belief. Rationality is permissive: so long as we are not in direct conflict with other firmly held beliefs, observationally based or otherwise, we are permitted to believe what we like. Construction is like free enterprise; we can construct whatever the marketplace of ideas will bear. Any models we construct of the data will outrun the data. The added parts are either true or false in the claims about the world they support. But it is not the business of science to settle such matters, so long as the models give the correct empirical results. In technical terms, as long as we can embed all the data into some empirical sub-model of the theory, we have no grounds to complain about the non-empirical sub-models of the theory. A theory is not to be identified with sentences and their satisfaction conditions, but with classes of models defining some model space to represent the data. In van Fraassen's view, we face two constraints: logical consistency and empirical adequacy. The way science proceeds, van Fraassen claims, is by noticing when the model spaces we constructed are not wide enough to accommodate the data, by widening the model space through the construction of new models, and then by narrowing the space once again through empirical testing by experimentation. Laws turn out to be formal matters; they are symmetries of the models that reproduce symmetries of the problem. Symmetry arguments from mechanical problems suggest the conservation laws of physics (239-243). Symmetry considerations lead to the axioms of probability theory (331-337). It is precisely in noticing asymmetries that Dutch book problems arise, and these asymmetries are removed by holding to the axioms of probability (338-348). For this reason van Fraassen believes that even though there are no rules of ampliative inference, should we wish to adopt a rule anyway, it ought to be Bayes's Theorem, which conforms to the axioms of probability--indeed, is a deductive consequence of them--as no rule not equivalent to it will be.

4.3.6 My Response to the Objection and the Position

Van Fraassen's view of model construction and experimental model elimination is interesting, but I think it is wrong in implying that all we ever do is theorize, and in defining experimentation as theory carried on by other means. At this point the pragmatist must part ways with the antirealist, because action is not simply an extension of thought. On the contrary, thought is a form of action. All concepts are operationally defined, because concepts are constructed through interaction with the environment. The connection between thinking and doing is far more intimate than allowed by the realist or, apparently, the antirealist. In spite of van Fraassen's constructivism and his appeal to the pragmatism of William James, his distinction between truth-conditions and truth-irrelevant pragmatic conditions is inimical to the pragmatist enterprise. This distinction is equivalent to the distinctions the realist makes between truth-conditions and verification or assertibility conditions.
Because the problem lies in what appears to be a natural interpretation of the semantic conception of theories, our best course is to take things slowly. The semantic conception identifies theories with their classes of models, and not with the language in which those theories are expressed. Models are language-transcendent, and theories, which are composed of models, are language-transcendent as well. Consider what the relationship between models and language truly is. In what sense, for van Fraassen, do we construct models? My guess is that we posit an object to which we assign different properties. In symbolic form, we construct a set of properties and claim that any object satisfying those properties will be a model of the theory. No further constraints are placed on admissible models. Van Fraassen's semantic conception of theories assumes that there are objects, along with their properties and relations, as parts of the world, and that some of those objects-in-relation are candidate models. So far, this is simply the extensionalist way of talking about predicates and their meanings.

Still, an important move being made here should be noticed. While language enters by supplying the predicates, it plays no further role in securing the objects. Hence, satisfaction conditions are viewed from the perspective of a representational theory of truth, rather than some other theory, such as a coherence theory of truth. This is because nothing further is said about satisfaction conditions. The existence of a model is sufficient for the meaningfulness of the set of predicates, even if no one is ever in a position to verify that a model exists by producing one.55 There may be a model without it being the case that anyone has a model. While there are antirealist construals of this claim, it is a favorite rallying point for realists of all sorts. In fact, a crucial but very small step from here leads directly to the realist independence claims. The additional move that a model may exist without being verifiable even in principle, when added to the semantic conception of theories, is equivalent to the realist independence claims I identified in chapter two.

The wedge between verification conditions and truth-conditions was already driven by van Fraassen in The Scientific Image, when he claimed there might be models that are true but not verifiable even in principle. He then goes on to say that, as scientists, we are only concerned with the models we can verify. This is problematic. Language has meaning because of public conditions for correct use. If language generates these models, where does the surplus of meaning, over and above the publicly verifiable conditions of correct use, come from? One can say, as van Fraassen does, that language is always already outrunning experience.56 But this will not do. All of these outrunnings ought to be accorded the character of risky projections that may need to be revised in the light of future experience. They are taken on credit, but their credit-worthiness has not been certified.

55 While Quine eliminates logical subjects by bound variables and hence puts predicates in control, it remains clear that he is offering an analysis of the substantive terms of a language and of the boundaries of sense marked by a language. We are not given unlimited license to supply substitutional instances, or posits, for which no corresponding stimulus could be named.
The point for the pragmatist, however, is that all the ways in which language outruns experience can be seen in the semantic relations between terms and the inferential consequences of those relations. Under the pressure of experience, we revise our language, not in a piecemeal fashion but in a fashion that acknowledges the ways in which our central terms are related. In other words, the relationships, though part of a language and not verifiable as such, are empirically revisable, because the indirect confrontation with experience may reveal the web as not making useful distinctions. The language may be too impoverished to enable us to account for the phenomena. The pragmatist would want to insist that while terms may not be directly verifiable, or even verifiable in practice, we still must retain the claim that terms are empirically revisable in principle. So the wedge van Fraassen wants to drive between truth and verification requires a contrast between language and models that cannot be made. Models outrun data only if language does. Models are revisable under the pressure of empirical data only if language is. It would seem that language generates models in a way that renders doubtful the language-independent status van Fraassen accords to models.

This should have been clear from reflection on laws. Laws do not exist only at the level of fully developed mathematical models like Boyle's law or the Schroedinger equation. One may prefer, as I have for much of this thesis, to preserve the term 'law' for science. If so, the terminology can be rendered consistent again by properly qualifying extended uses: 'common-sense law', 'qualitative law', 'lawlike regularity'. The point, of course, is that these play a cognitive role similar to that of scientific laws (though not as effectively): the prediction of future occurrences. This is not simply analogy: it is in keeping with Dewey's continuity principle to call them both 'laws'. Laws are present in our posits in language, as Popper showed, even in our claims about ordinary glasses of water: firm expectations of future behavior are part of the meaning of the attribution of class properties to a perceptual object, indeed, even of the naming of them. The difference is in the degree of precision involved in the predictions of future behavior.

4.3.7 Van Fraassen's Realism

It is appropriate that our final remarks should come back full circle to our first remarks in chapter one. Van Fraassen is right to say that there is a philosophical problem, not a scientific problem, of laws. He is also right to identify the source of the trouble as the fondness for speculative metaphysics present in philosophical reflections on the nature of science throughout the history of Western philosophy. Where he is wrong, I think, is in identifying 'law' as a creature of this speculative tendency in philosophy, rather than as a genuine constituent of science needing clarification. His willingness to make this identification, I think, betrays common ground van Fraassen shares with the realists, ground that enables him to have a dialogue with them but that also weakens his critique.

56 Quine does too, with his distinction between posits, which are rich, dense objects, and sensory stimuli, which underdetermine posits. The difference is that Quine nowhere suggests that there is any question of the truth of the semantic surplus, only of its usefulness for economy of thought and expression.
The assumptions, I suggest, lurk in a particular understanding of semantics that pulls him in the direction of realism, and a particular understanding of rationality that frees him from that pull, but leaves an opening for the realists to enter.

In this section I am going to argue the following: insofar as van Fraassen is consistently empiricist, he offers no arguments for abandoning anything other than realist versions of 'scientific laws'. I wholeheartedly endorse this rejection. Insofar as van Fraassen offers a strong argument against pragmatist construals of law, however, he must do so by partially embracing realist assumptions. Therefore he has offered no additional reasons for dropping the project of constructing empiricist construals of scientific laws. Either way, nothing new has been offered by van Fraassen, despite the sophistication of his presentation, to undermine pragmatist efforts to elucidate the concept of scientific law.

The empiricist van Fraassen asks how we can identify the entities invoked by realists. This, I have argued, is a good question--in fact, the right question to pose. The empiricist van Fraassen asks how something objective, like a law or objective chance, could affect our expectations, or the personal probabilities we assign to events. I think this question is unanswerable, given the realists' strong commitment to the independence of the world from both language and mind. To bring in causality is to bring in too little too late, and, indeed, it is the wrong sort of solution, given the ontological divide between cause, as part of the world, on one side, and language and minds on the other. The realist has reduced the options to dogmatism or skepticism.

The realist van Fraassen wants to distinguish belief from acceptance and empirical adequacy from truth. These distinctions per se are not objectionable: we can have good empirical reasons for making such distinctions. But the way van Fraassen makes these distinctions shows that he has already conceded some crucial ground to the realists. Both truth and belief are construed as the realist would construe them: truth requires isomorphism of model and reality, and belief is belief that the model is isomorphic with reality. These are precisely the ways the Wittgenstein of the Tractatus would characterize these terms. This is the way the realists talk when they define the conditions under which a theory is to be counted as true and should be believed. It funds the talk about verisimilitude we find in Popper and Newton-Smith. It assumes it makes sense to say that there is a model without any consideration of how anyone could ever know that to be the case.

How does it happen that van Fraassen and the realist are in agreement here? The analogy with religion may be at fault. In religion, the difference between realism and antirealism is framed in terms of belief in a transcendent Being, God. Both agree the term is meaningful. They disagree on the possibility of knowing whether such a being does exist. Both differ from the atheist, who often wants to claim that the question does not make sense. However appropriate such a stance may be for theology, we can't help but wonder whether it is at all appropriate for science.

Given this agreement between van Fraassen and the realists on truth and belief, how does van Fraassen come to be in disagreement with the realists? Van Fraassen weakens the concept of rational constraint to the point that we are not forced to believe what our best theories claim.
Rationality as permissive, rather than coercive, stands center stage in the constructive part of van Fraassen's constructive empiricism. And it comes down to this: any internally consistent model is rationally permissible if it is also consistent with the empirical data. This, I believe, leaves the gate open for the wolves. It frees us from scruples that every empiricist should want to maintain. We can invoke ridiculous entities in our theories, so long as logical consistency and empirical adequacy are respected. But as a claim about the plausibility of hypotheses, this simply will not do. Analogy needs to be respected. Common-sense ontologies are given a role in van Fraassen's model of theory construction that ignores the important transactions in the opposite direction. Science modifies common sense--and should! The pragmatist view of reflective equilibrium acknowledges van Fraassen's point about lack of privilege, without giving up the continuity of mutual modification of our entire set of beliefs. As William James said, any new belief must run the gauntlet of our other beliefs to pass muster. Any new belief may be rejected--or may lead to the rejection or far-reaching modification of our other beliefs. The test ultimately lies in the predictive success of the set of beliefs as revised. Since laws are the predictors par excellence, the devices we explicitly construct for the sake of predictive success, laws must continue to play a role in any epistemology endorsed by a pragmatist.

Van Fraassen's permissive rationality is too permissive. It places no bars on realist projects beyond logical consistency and empirical adequacy, and these bars can be surmounted by weakening the content of theories. The predictive failure--indeed, the absence of any predictive element--in the realist proposals is a serious fault that van Fraassen's critique must leave untouched. Indeed, his view shares this weakness: experiment is simply seen as a cognitive activity that requires machinery. Experiment is not seen as part of our active probing of the world, a probing that produces defeaters and confirmers that would not even exist without it. Van Fraassen's models are not constructed in the important, relevant sense. Just as for Popper, the temporal introduction afforded by theoretical construction does not affect the Platonic existence of the class of confirmers and defeaters. Empirical adequacy, defined as consistency with all observable events past, present and future, invokes a set with as objective an existence as any the realist would want. The pragmatist wants construction in a more robust sense, where there are no defeaters or confirmers until we construct either them or the conditions for their existence. We construct the initial conditions and the possible falsifying conditions, and we control the conditions under which the instances unfold. We adjust beliefs in response to changes in the world, and we adjust the world in response to changes in belief.

The point about laws is as follows: laws are not objective items trying to break into our subjective consciousness, as the realists suppose. In one sense, there is nothing more to laws than van Fraassen's personal probabilities, provided, of course, that we make these the property of a group rather than an individual. This is our best guess about how events will go in the future based on evidence of the past. I have called this the backward-looking argument, and it is a point on which van Fraassen and I are in agreement.
Nevertheless, van Fraassen relegates to the realm of pragmatics the practical testing and projecting that shows we do indeed believe these claims, and he claims pragmatics are epistemically irrelevant. For the pragmatist, however, our willingness to lay everything on the line, so to speak, and to stick out our epistemic necks to see if we can reproduce the results and apply them in practical situations--these are quite relevant to whether we have true beliefs or not. This is the forward-looking argument. It is absent from van Fraassen as well as the realists. The reason it is absent, I suspect, is that the constraints either philosophical school is willing to place on models are too weak. If some parts of a model are not verifiable even in principle, we have little motivation to try to deliberately revise all parts of our theoretical vocabulary in the light of experience. We can declare some parts of our language permanently off-limits to empirical scrutiny. Space-time trees seem to be of this sort. Or we can come up with duplicate predicates that are verified whenever their ordinary double is. Universals strike me as being in this category. Either way, we have distinctions that do not even in principle make a difference in experience. The central drama we see is the Hume world: events happen in a regular way without announcing the regularity they exhibit. The metaphysical scaffolding that is supposed to make the events in the Hume world possible is all hidden behind an impenetrable curtain; or perhaps we do have metaphysical agents who come from behind the curtain only to appear on stage disguised as actors we have already seen before.

The pragmatist theory of meaning--that a distinction worth marking must make a difference--preserves the connection with experience that makes the forward-looking argument appropriate. It is for the sake of making our lives go as well as possible that we attend to whether our ways of interacting with the world are working or not. Indeed, we deliberately try to defeat our own claims when a lot hangs on their truth practically.57 Van Fraassen lacks this element, because his timeless class of empirical evidence makes the temporal character of human existence recede from view, and with it the reasons why we engage in science. But the motives of predicting the future and taming uncertainty drive scientific progress and prevent science from being an idle intellectual game, as some argue metaphysics tends to be. Insofar as van Fraassen shares the realist premises, he shares their weaknesses.

57 Metaphors are no substitute for arguments, but someone may wonder how I would describe the scene in terms of the stage metaphor I just used. Do we see behind the curtain by 'projection'? Are we the authors of the play? No; neither will work. One claims we see the invisible, which is untrue. The other is simply idealism, which also is false. The reality is more complicated. It is more a collaborative enterprise, like film-making, where screenwriter, director, and actors all have input into how the film goes, and the original direction given by the script may be challenged successfully when director or actor appeals to the need for coherencies or consistencies of various sorts, e.g., keeping a character in type by adding a typical action or omitting an atypical one. But at times the screenwriter will win, claiming the director or actor failed to understand the complexity of the character or the situation.
The ontological divide poses problems if we take it seriously, even for those who go on to say science should not concern itself with it. Real classes of defeaters or confirmers raise the same problematics for van Fraassen as objective chance raises for the realists. He is not in a stable or strong position to dislodge other views from that ground, and he has, I think, nothing new to offer against empiricist views other than the traditional problems raised against the logical empiricist views. But, I have argued, pragmatism, certainly in the form it appears in Quine and Skyrms, is free of most, if not all, of these problems. In the absence of new objections based on unproblematic assumptions, I believe the pragmatist view holds the field.

For all these reasons, I believe laws should be preserved. Realist laws, even if it made sense to claim that they existed, would leave us none the wiser about how to live. Van Fraassen is right about that. But van Fraassen, by weakening his commitment to empiricism with realist moves in semantics and epistemology, has deprived himself of a strong reply to the realists. His opting for the subjectivist view of probability, as opposed to the objective one of the realists, simply leaves him enmeshed in the same problematics. It misses a perfectly good sense of objectivity to which he is entitled, and which I have developed, that preserves laws as objective guides for expectations of the future. It also misses what such objectivity adds to Bayes's Theorem without leading to evaluative irrationality: the need to realize, in practical actions, the lawful regularities we believe hold. Replication and reliable application do not change the probabilities we assign to events, as Dretske might claim. Rather, they simply make clear which probabilities we believe, and which we merely entertain with some weaker doxastic attitude, perhaps acceptance, in van Fraassen's terms. We believe what we act upon, provided we do not act tentatively or with qualifications and hedges, as we do in the initial stages of formulating a law. Once we are willing to export a scientific law for general use, we are declaring it defeasibly true--since we may withdraw or revise the claim at a later date--but true nevertheless.

Van Fraassen's sophisticated probabilism makes sense in extended theoretical construction, and in risky policy discussions, where the trade-offs need to be evaluated carefully. But for general living and the application of tested technologies, it makes no sense to quarrel with the claim that people treat taken-for-granted belief as true and treat taken-for-granted technology as working in the intended way. If we were not able to treat some expectations as secure, we would not be in a position to treat other expectations as less than secure. This is the same point I raised against Collins, who starts with dissensus rather than consensus. That is, by the way, a very odd thing for a meta-theorist of science to do; such a theorist would agree, I should think, that we must have common ground even to be in a position to disagree about anything. Perhaps we go too far if we call such expectations 'certainties', although we would assign them personal probabilities very close, if not identical, to one. They are not certainties because, in the light of past experience, we know that some of them, perhaps all of them, will need revising in the future.
Still, they are as close to certainties as anything we ever have in our lives, because we are willing to act on them without question or hesitation. They become the basis of our habits, without which we could not live.

Ultimately, my quarrel with van Fraassen lies here. Theoretical revision takes place in the context of taken-for-granted beliefs and habits. Laws are sharp formulations of beliefs that become the basis of sharp formulations of practice, whether technical, as in the design of devices like televisions and computers and in technical practices like medicine and engineering, or non-technical, as in the hygiene habits we practice without thinking about their justification in medical theory and practice. Laws are habits of thought we acquire through transactions with the systems under study, habits of thought that give rise to habits of practice if truly believed. We cannot understand the connection between theory and practice if, like van Fraassen, we withhold the connection of belief that unites them. If 'belief' and 'firm expectation' are seen as equivalent, then the problem of laws comes down to the problem of belief, and this problem cannot be formulated, much less solved, in van Fraassen's terms. So van Fraassen banished laws from science when he banished belief. His commitment to realist premises made the problem of laws insoluble.

Thus, I believe it is his partial commitment to realist assumptions that has left van Fraassen bereft of good reasons for continuing the search for an empiricist or pragmatist account of scientific laws. But this commitment is inconsistent with his empiricism. Insofar as he is an empiricist, I think we agree. Where he departs from empiricism, he acquires the weaknesses and incoherencies of the very views he opposes. Realism gives him a platform from which to criticize my view, but it is a view, or family of views, that has troubles in the foundations. If realism's assumptions are mutually inconsistent, as I have argued, no coherent objection can be made based on that set of assumptions. Hence, as I read him, either we agree, in which case there is no reason to forgo laws, or we disagree. Because he is not in a position to state coherently the nature of our disagreement, I still have no reason to forgo laws. Thus I defend pragmatism in its view of scientific laws.

4.4 Conclusion

All three sets of objections seek to show I cannot have objectivity and relativity at the same time. The realist seeks to show that the pragmatist is really a relativist and subjectivist who cannot do justice to the causal structure of the world. The social constructivist seeks to show that I am really an inconsistent realist who realizes that science is social and involves human choices, but cannot give up the last remnant of realism: I still want nature to teach us, although the lessons are negative rather than positive. Van Fraassen seeks to show that I have not yet learned that science is not about truth, but rational belief. I have argued that the only world we know is a world for us, a world with which we interact and which we construct. All three of my objectors--Armstrong, Collins and van Fraassen--miss the importance of dynamic interaction with the world.

In response to the realists, I say that our conceptual framework is a product of our interaction with the world. Our concepts and actions work together to create a livable world. How they create such a world is as much a function of our interests and aims as it is of the satisfactions the world offers.
When we forget how much we have contributed to the picture of the world we have, we open ourselves to becoming ideological dupes of a particular social agendum, as, I fear, is the case with the biodeterminists. We construct scientific laws for our own well-being.

In response to the social constructivists, I say the world does resist us, and our constructions sometimes fail. No species is an island. We all live in an environment that offers us no guarantees of success. Consensus is not necessarily secured by conquest, rhetorical or political. Consensus, like any other construction, may fail. Laws, like social conventions, are important because we need taken-for-granted regularities for cooperative enterprises. Laws, unlike social conventions, are important because they are taken-for-granted regularities that are not endlessly negotiable. They mark out the boundary of variation, and they guide our actions as much by what they rule out as by what they rule in.

In response to van Fraassen, I say that there is nothing to be gained by distinguishing rational belief and knowledge, or acceptance and belief, if the only point in making such a distinction is to avoid epistemic risk. Scientific methodology reduces, but does not eliminate, the risk of error. Commitment and willingness to act are the litmus test of belief for the pragmatist, and being taken for granted is the sign of knowledge and truth for both ordinary life and science. I believe that not only is my view stable, but it is also the only view of those considered that keeps related things together.

Chapter Five: Conclusion of Thesis

I have argued in this thesis that scientific laws are of great importance to finite beings like ourselves, who occupy a limited stretch of space and time and, with limited resources, need to figure out what set of practices will conduce to our well-being. We have a need not only for generalizations that work locally, in the environment in which we happen to find ourselves, but for ones that work globally as well. Laws can do this because they pick out the features that covary and distinguish them from the features that vary independently. The problem of laws has been to figure out how laws can do that. How can laws pick out stable regularities from unstable ones?

The realist seeks the answer in metaphysics. Universals, space-time trees, and natural kind hierarchies have a stability lacking in their instances, and having the right metaphysics gives us an explanation for the stable regularities. Aristotle favored an answer of this sort, and modern realists, like Armstrong, have worked out sophisticated versions of Aristotle's views. The problem of laws arises because we have come to accept Aristotle's claim that there is a natural necessity, distinct from logical necessity, that we need to grasp if we are to understand laws. There's the rub. Modern realists reject Aristotle's epistemology in favor of modern empirical science. Information has lost its metaphorical status for us, since we no longer believe we abstract the form and have our consciousness shaped by it--in-formed by the form of the object. We have a distinction, between mind and material object, that grew out of the religious and ethical convictions of a Socrates, who believed the inner person was more important than the outer one. And we have modern naturalistic science, which makes distinctions between different natural systems for convenience's sake, but is committed to the fundamental similarity of all physical things.
How do we bring Socratic distinction together with naturalistic similarity? Can we? I have argued that realism cannot save scientific laws, because the items in its accounts cannot do the work they are supposed to do. Universals and causes are on the wrong side of the ontological divide between minds and the world to do us any epistemic good.

I have argued that it is best to start our reflections in the middle of our inquiries, rather than seeking a mythic beginning in some ur-principles that have been guiding us all along. Perhaps our paths are guarded by angels, but that doesn't help us epistemically unless they speak. Our inquiries are the efforts of finite, embodied cognitive beings. We muddle along as best we can, given our limitations in time, energy and experience. We construct a world useful to our needs by casting about for useful descriptions that get us somewhere. There may have been other distinctions we could have made that would have done just as well. That doesn't matter. What matters is whether the distinctions we have made bring order to experience. If we can discern regular, repeatable patterns in experience, we are in a position to take advantage of them for our practical purposes. The trick is to pick the right regularities. We accomplish the trick by trial and error, until we see the ways in which some regularities fail. Experimental methodology grows out of the common-sense experience of discovering our mistakes. The demand for replication in the laboratory, in a controlled environment, and for reliable application outside the laboratory, in a less controlled environment, is the reasonable demand of those deceived in the past. Regularities become stable in the process of making sure they hold up. A stable regularity is really all one needs to infer the future from the past. A stable regularity is just the sort of thing one can see, because it only becomes stable as we closely attend to its conditions of validity. So my view, I think, solves both the identification problem and the inference problem as van Fraassen describes them. I can tell you which states of affairs pick out a law: a covariance among measurable variables that we cannot upset in any of the standard ways is a law. I can also tell you why such covariances are inference tickets to future events: such covariances as pass the test are unlikely to be accidental.

If there are no guarantees, then of course I cannot guarantee my position is stable. Nevertheless, I am unpersuaded by at least some of the objections that might be raised to my position by realists, antirealists and social constructivists. I am not persuaded by those who want to say that either nature dictates to us or we to nature, but not both. Realism versus relativism strikes me as part of the shop-worn problematics of an ancient dualism. Nature or culture? That was the debate among the sophists in Athens. We work out the distinction between the social and the natural within experience, and we should not treat complementary perspectives as though they were antithetical alternatives. Nor am I impressed with van Fraassen's assumption that verificationism was simply a bad idea created by the logical empiricists. They may have constructed a version which failed, but the demand for verifiability seems to me to be part of empiricism. That demand should only be abandoned if one is prepared to suggest how else, besides through experience, we are to come to know about the world.
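The criterion offered above--a law as a covariance among measurable variables that survives the standard ways of trying to upset it--can be given a schematic form. The sketch below is a minimal illustration of mine; the data, manipulations and threshold are wholly hypothetical and form no part of the argument itself:

    import random, statistics

    def correlation(xs, ys):
        """Pearson correlation between two measured variables."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def run_experiment(noise_level):
        """Stand-in for one replication under a given manipulation: here, a
        linear dependence plus noise whose level the manipulation sets."""
        xs = [random.uniform(0, 10) for _ in range(200)]
        ys = [2 * x + random.gauss(0, noise_level) for x in xs]
        return xs, ys

    # The 'standard ways' of trying to upset the covariance: vary the
    # conditions (noise level, apparatus, day, operator...) and see
    # whether the covariance still shows up in each replication.
    manipulations = [0.5, 1.0, 2.0]
    results = [correlation(*run_experiment(m)) for m in manipulations]

    # Treat the covariance as lawlike only if it holds up across all of them.
    is_lawlike = all(r > 0.9 for r in results)   # threshold is illustrative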
Given the choice between the slow muddle of scientific inquiry and the jet-flight to Platonic heaven offered by the inference to the best explanation, I prefer the muddle. At least the muddle has some chance of getting us somewhere, whereas the speculative jet-flight is never going to get off the ground. So, while I am grateful for the realist offer to solve my problems, I think I am better off trying to solve them myself.

Laws are human constructions. When they work, they can be used by anyone, anywhere, at any time. Scientific laws are universal. Since there are very few human institutions for which that is true, scientific laws are remarkable testimony to the value of science for humanity. If human history is viewed as one big coordination problem, we can see in laws a way forward through ideological impasses. If that is so, then van Fraassen's demand that we give laws up is one from which we should demur.

BIBLIOGRAPHY

1. Armstrong, D[avid] M[alet]. Belief, Truth and Knowledge. Cambridge: Cambridge UP, 1973.
2. ---. Universals and Scientific Realism. Cambridge: Cambridge UP, 1978.
3. ---. What Is a Law of Nature? Cambridge: Cambridge UP, 1983.
4. ---. A World of States of Affairs. Cambridge: Cambridge UP, 1997.
5. Aronson, Jerrold, Rom Harré, and Eileen Cornell Way. Realism Rescued: How Scientific Progress Is Possible. LaSalle, Ill.: Open Court, 1995.
6. Barnes, Jonathan, ed. The Complete Works of Aristotle. Princeton: Princeton UP, 1984.
7. Berkeley, George. De Motu and The Analyst. Ed. and trans. Douglas M. Jesseph. Dordrecht: Kluwer, 1992.
8. ---. Principles of Human Knowledge. LaSalle, Ill.: Open Court, 1986.
9. Bigelow, John, and Robert Pargetter. Science and Necessity. Cambridge: Cambridge UP, 1990.
10. Bronshtein, I. N., and K. A. Semendyayev. Handbook of Mathematics. New York: Van Nostrand Reinhold, 1985.
11. Buchdahl, Gerd. Metaphysics and the Philosophy of Science. Cambridge, Mass.: MIT, 1969.
12. Burks, Arthur W. Chance, Cause, Reason. Chicago: U of Chicago P, 1977.
13. Burtt, Edwin A. The Metaphysical Foundations of Modern Physical Science. Garden City, N.Y.: Doubleday, 1954.
14. Carnap, Rudolf. Meaning and Necessity. Chicago: U of Chicago P, 1956.
15. ---. Philosophical Foundations of Physics. New York: Basic, 1966.
16. Chisholm, Roderick. "The Contrary-to-Fact Conditional." Mind 55 (1946): 289-307.
17. ---. "Law Statements and Counterfactual Inference." Analysis 15 (1955): 97-105.
18. Cochrane, R. Measures for Progress: A History of the National Bureau of Standards. Washington, D.C.: National Bureau of Standards, 1966.
19. Collins, Harry. Changing Order: Replication and Induction in Scientific Practice. London: Sage, 1985.
20. Conant, James Bryant. "Robert Boyle's Experiments in Pneumatics." Harvard Case Histories in Experimental Science. Eds. James Bryant Conant and Leonard K. Nash. Vol. 1. Cambridge: Harvard UP, 1957. 1-64.
21. Coyne, Gary. The Laboratory Handbook of Materials, Equipment, and Technique. Englewood Cliffs, N.J.: Prentice Hall, 1992.
22. Descartes, R. The Philosophical Writings of Descartes. Eds. J. Cottingham, R. Stoothoff, and D. Murdoch. Vol. 1. Cambridge: Cambridge UP, 1984. 177-291.
23. ---. "The World." The Philosophical Writings of Descartes 1: 79-99.
24. Dewey, John. Logic: The Theory of Inquiry. Carbondale, Ill.: Southern Illinois UP, 1991.
25. Dijksterhuis, E. J. The Mechanization of the World Picture. Princeton: Princeton UP, 1986.
26. Dretske, Fred I. "Laws of Nature." Philosophy of Science 44.2 (1977): 248-268.
27. ---. Seeing and Knowing. Chicago: U of Chicago P, 1969.
28. Ellis, Brian. Basic Concepts of Measurement. Cambridge: Cambridge UP, 1968.
29. Feynman, Richard. The Character of Physical Law. Cambridge: MIT, 1967.
30. ---. The Feynman Lectures on Physics. Vol. 1. Reading, Mass.: Addison-Wesley, 1963.
31. Gooding, David. Experiment and the Making of Meaning. Dordrecht: Kluwer, 1990.
32. Goodman, Nelson. Fact, Fiction, and Forecast. 4th ed.
Cambridge, Mass.: Harvard UP, 1983.
33. Hanna, Joseph. "Empirical Adequacy." Philosophy of Science 50 (1983): 1-34.
34. ---. "Objective Homogeneity Relativized." Philosophy of Science 53 (1986): 422-431.
35. Harré, Rom. Laws of Nature. London: Duckworth, 1994.
36. ---. Varieties of Realism. Oxford: Blackwell, 1986.
37. Hempel, Carl. Aspects of Scientific Explanation. New York: Macmillan, 1965.
38. ---. "Empiricist Criteria of Cognitive Significance." Hempel, Aspects. 101-122.
39. Hesse, Mary. "Laws and Theories." Encyclopedia of Philosophy. Ed. Paul Edwards. Vol. 4. New York: Macmillan, 1967. 404-410.
40. Hume, David. A Treatise of Human Nature. Ed. P. H. Nidditch. 2nd ed. Oxford: Clarendon, 1978.
41. Jackson, Frank, ed. Conditionals. Oxford: Oxford UP, 1991.
42. Jasanoff, Sheila, Gerald E. Markle, James C. Petersen, et al., eds. Handbook of Science and Technology Studies. Thousand Oaks: Sage, 1995.
43. Kneale, William. "Natural Laws and Contrary-to-Fact Conditionals." Analysis 10 (1950): 121-125.
44. ---. "Universality and Necessity." British Journal for the Philosophy of Science 12 (1961): 89-102.
45. Knorr-Cetina, Karin. The Manufacture of Knowledge. Oxford: Pergamon, 1980.
46. ---. Epistemic Cultures: How the Sciences Make Knowledge. Cambridge: Harvard UP, 1999.
47. Latour, B. Science in Action. Cambridge: Harvard UP, 1987.
48. Leibniz, Gottfried Wilhelm. Philosophical Papers and Letters. Ed. Leroy E. Loemker. Dordrecht: Kluwer, 1989.
49. Lenzen, Victor. "Procedures of Empirical Science." Neurath 1: 279-339.
50. Lewis, David. Counterfactuals. Cambridge: Harvard UP, 1973.
51. ---. "New Work for a Theory of Universals." Papers in Metaphysics and Epistemology. Cambridge: Cambridge UP, 1999. 8-55.
52. ---. On the Plurality of Worlds. Oxford: Blackwell, 1986.
53. Mackie, J. L. The Cement of the Universe: A Study of Causation. Oxford: Clarendon, 1974.
54. McCall, Storrs. A Model of the Universe. Oxford: Clarendon, 1996.
55. Mill, John Stuart. A System of Logic. New York: Harper, 1881.
56. Nagel, Ernest. The Structure of Science. Indianapolis: Hackett, 1979.
57. Needham, Joseph. "Human Laws and Laws of Nature in China and the West." Journal of the History of Ideas 12 (1951): 3-30; 194-230.
58. Neurath, Otto, Rudolf Carnap, and Charles Morris, eds. International Encyclopedia of Unified Science. 2 vols. Chicago: U of Chicago P, 1955.
59. Pickering, Andrew. The Mangle of Practice. Chicago: U of Chicago P, 1995.
60. ---, ed. Science as Practice and Culture. Chicago: U of Chicago P, 1992.
61. Plato. Complete Works. Ed. John M. Cooper. Indianapolis: Hackett, 1997.
62. Popper, Karl. The Logic of Scientific Discovery. New York: Harper, 1959.
63. Quine, W[illard] V[an] O[rman]. "Ontological Relativity." Ontological Relativity and Other Essays. New York: Columbia UP, 1969.
64. ---. Word and Object. Cambridge: MIT P, 1960.
65. Reichenbach, Hans. Laws, Modalities and Counterfactuals. Berkeley: U of California P, 1976.
66. Ruby, Jane. "The Origins of Scientific 'Law'." Journal of the History of Ideas 47.3 (July 1986): 341-359.
67. Schwedt, Georg. The Essential Guide to Analytical Chemistry. New York: Wiley, 1997.
68. Shugar, Gershon, and John A. Dean. The Chemist's Ready Reference Handbook. New York: McGraw, 1990.
69. Singer, Charles. A Short History of Science to the Nineteenth Century. Mineola: Dover, 1997.
70. Skyrms, Brian. Causal Necessity: A Pragmatic Investigation of the Necessity of Laws. New Haven: Yale UP, 1980.
71. Suppes, Patrick, and Joseph Zinnes. "Basic Measurement Theory." Handbook of Mathematical Psychology. Eds. R. D. Luce, R. R. Bush, and E. H. Galanter. Vol. 1. New York: Wiley, 1963. 3-76.
72. Thomas Aquinas. Summa Theologiae: Latin Text and English Translation. Cambridge: Blackfriars, 1964.
73. Tooley, Michael. Causation: A Realist Approach. Oxford: Clarendon, 1987.
74. van Fraassen, Bas. Laws and Symmetry. Oxford: Clarendon, 1989.
75. ---. The Scientific Image. Oxford: Clarendon, 1980.
76. Vlastos, Gregory.
Plato's Universe. Seattle: U of Washington P, 1975.
77. Walters, R. S. "Laws of Science and Lawlike Statements." Encyclopedia of Philosophy. Ed. Paul Edwards. Vol. 4. New York: Macmillan, 1967. 410-414.
78. Wartofsky, Marx W. Conceptual Foundations of Scientific Thought: An Introduction to the Philosophy of Science. New York: Macmillan, 1968.
79. Williams, Frederick. Reasoning with Statistics: How to Read Quantitative Research. Fort Worth: Harcourt, 1992.
80. Wilson, E. Bright, Jr. An Introduction to Scientific Research. New York: Macmillan, 1952.
81. Wittgenstein, Ludwig. Philosophical Investigations. Trans. G. E. M. Anscombe. 3rd ed. New York: Macmillan, 1953.
82. Zilsel, Edgar. "The Genesis of the Concept of Physical Law." Philosophical Review 51 (1942): 245-279.