OPERATIONAL TECHNIQUES FOR APPLIED DECISION ANALYSIS UNDER UNCERTAINTY

By

Robert P. King

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Agricultural Economics

1979

ABSTRACT

OPERATIONAL TECHNIQUES FOR APPLIED DECISION ANALYSIS UNDER UNCERTAINTY

By Robert P. King

The techniques developed in this study are designed for use during four phases of an applied decision analysis: problem formulation, the determination of subjective probability distributions, the measurement of decision maker preferences, and the identification of preferred choices. When considered together, they represent an integrated set of techniques which facilitate the application of decision theory based on the expected utility hypothesis.

Problem formulation is an important first step in any applied decision analysis. Two important considerations related to problem formulation are emphasized in this study. First, the need to identify and classify the factors which have an important impact on the outcome of the decision to be made is noted, and a classificatory scheme based on system identification is presented. Second, the need to give careful attention to the specification of what is to be decided is stressed. The desirability of flexible decision strategies in many situations is noted, and the use of feedback control rules to introduce flexibility into a strategy is described.

Direct probability assessments of exogenous stochastic factors and the modelling of more complex stochastic processes are combined in the procedure presented in this study for the determination of the distribution of outcomes associated with any choice. Under this approach, the decision maker's expectations concerning future levels of critical environmental variables are elicited directly. Monte Carlo simulation techniques are then used to determine the effect of these factors on the distribution of outcomes associated with any particular strategy. The value of this approach is greatly enhanced by the generalized procedure for the generation of sample vectors from multivariate distributions with non-normal marginals, which was developed as part of this study.

With regard to the measurement of decision maker preferences, shortcomings of both single-valued utility functions and commonly used efficiency criteria such as first and second degree stochastic dominance are identified, and a new approach to the measurement of decision maker preferences is presented. This new procedure permits the construction of interval measurements of a decision maker's absolute risk aversion. Unlike other preference measurement procedures, it allows the direct specification of the degree of precision with which preferences are measured, since the absolute risk aversion interval can be of any desired width. Interval measurements of this sort can be used in conjunction with the evaluative criterion of stochastic dominance with respect to a function to order alternative choices.

The final methodological contribution of this study is the formulation of a generalized risk efficient Monte Carlo programming model, which combines random search procedures, Monte Carlo simulation, and evaluation by the criterion of stochastic dominance with respect to a function within a single analytical framework for the identification of preferred choices.
This model is flexible and computationally efficient, and it is well-suited for use in the analysis of a wide range of practical decision problems.

The methodological tools developed in this study are applied to the analysis of two related problems. The first is concerned with land rental and crop production decisions on a small cash grain farm under conditions of uncertainty with respect to prices, yields, and time available for fieldwork. In the second problem analyzed, these same decisions are considered in conjunction with the selection of a flexible marketing strategy which evaluates forward contracting strategies over a seven-month period.

For Jane

ACKNOWLEDGEMENTS

For their guidance and assistance, I wish to thank the members of my thesis committee: Roy Black, Ralph Hepp, Glenn Johnson, and Lindon Robison. I am especially grateful to Lindon Robison, my thesis supervisor, whose questions prompted this study and whose willingness to explore new ideas was a continuing source of help. I also want to give particular thanks to Warren Vincent, who was my major professor throughout most of my graduate career. Without the freedom and the good advice he gave me this study might not have been undertaken.

Funding for this study was provided by the Michigan Agricultural Experiment Station. I gratefully acknowledge their support.

Finally, I thank my wife, Jane, who shared all the good and bad days that come with an undertaking like this one. Her encouragement and sense of perspective were a constant source of help.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

Chapter

I. INTRODUCTION
    1.1 Background and Need for the Study
    1.2 Problem Statement
    1.3 Objectives of the Study
    1.4 Plan for the Remainder of the Study

II. PROBLEM FORMULATION IN THE ANALYSIS OF DECISIONS MADE UNDER UNCERTAINTY
    2.1 Introduction
    2.2 The Identification and Classification of Factors Relevant to the Decision
    2.3 The Specification of the Decision to be Made
        2.3.1 The Specification of Feedback Control Rules
        2.3.2 Plans, Decisions, and Actions
    2.4 A Formal Statement of the Decision Problem
    2.5 An Application

III. THE DETERMINATION OF SUBJECTIVE PROBABILITIES
    3.1 Introduction
    3.2 Probability Encoding Procedures
    3.3 Modelling Stochastic Processes
    3.4 An Application

IV. THE MEASUREMENT OF DECISION MAKER PREFERENCES
    4.1 Introduction
    4.2 The Use of Single Valued Utility Functions to Represent Decision Maker Preferences
    4.3 Efficiency Criteria and the Representation of Decision Maker Preferences
    4.4 Stochastic Dominance with Respect to a Function
    4.5 An Interval Approach to the Measurement of Decision Maker Preferences
    4.6 Implementation of the Procedure
    4.7 An Empirical Test
    4.8 An Application

V. COMPUTATIONAL PROCEDURES FOR THE IDENTIFICATION OF PREFERRED CHOICES
    5.1 Introduction
    5.2 A Review of Existing Computational Procedures
    5.3 A Generalized Procedure for the Identification of Preferred Choices Under Uncertainty
        5.3.1 Generation of a Feasible Management Strategy
        5.3.2 Determination of the Distribution of System Output Levels
        5.3.3 The Evaluation of Alternative Strategies
        5.3.4 General Comments on the GREMP Model
    5.4 An Application

VI. COMBINED PRODUCTION AND MARKETING DECISIONS BY CASH GRAIN FARMERS: AN EXTENDED APPLICATION
    6.1 Introduction
    6.2 Problem Formulation
    6.3 The Determination of Subjective Probability Distributions
    6.4 Decision Maker Preferences
    6.5 The Identification of Preferred Choices
    6.6 Further Discussion of the Results
    6.7 Implications for Further Research

VII. SUMMARY AND CONCLUSIONS
    7.1 A Review of the Methodological Tools Developed in this Study
    7.2 Empirical Findings
    7.3 Implications for Future Research

APPENDIX A. A GENERALIZED MULTIVARIATE PROCESS GENERATOR
APPENDIX B. IMPLEMENTATION OF THE INTERVAL APPROACH TO THE MEASUREMENT OF DECISION MAKER PREFERENCES
APPENDIX C. IMPLEMENTATION OF THE GREMP MODEL

LIST OF TABLES

Standard Crop Enterprise Budgets
Planting and Harvest Periods and Possible Crops for Each Combination
Average Corn Yield and Moisture Content by Planting and Harvest Date
Average Soybean Yield by Planting and Harvest Date
Percent of Time Available for Fieldwork by Calendar Period for Well Drained Sandy Loam Soils in the Lenawee, Monroe, Livingston County Area
Beta Distribution Parameters for Time Available for Fieldwork by Calendar Period
Beta Distribution Parameters for Corn and Soybean Price Distributions
Two Possible Management Strategies
System Performance under Strategy 1
System Performance under Strategy 2
Probability Distribution Associated with Two Alternative Action Strategies
Sample Distribution from a Normal Distribution with μ = 3000 and σ = 1000
Boundary Intervals for Pairs of Sample Distributions
Performance Indicators for Alternative Preference Measures
Efficient Strategies for Decision Maker A
Efficient Strategies for Decision Maker B
Efficient Strategies for Decision Maker C
System Performance under a Sample Management Strategy
Representative Strategies from the Efficient Sets of Four Decision Makers
System Performance under the Preferred Strategy of a Risk Neutral Decision Maker
System Performance under the Representative Strategy for Decision Maker A
System Performance under the Representative Strategy for Decision Maker B
System Performance under the Representative Strategy for Decision Maker C
Parameters of Marginal Beta Distributions
Specified Correlation Matrix
Sample Correlation Matrix for Multivariate Normal Data
Sample Correlation Matrix for Multivariate Beta Data
An Example of Preference Data Input for Program UFUNC

LIST OF FIGURES

2.1 A General Scheme for System Identification
3.1 A Cumulative Distribution Function
3.2 A Cumulative Distribution Function Based on Five Sample Observations
3.3 Cumulative Distribution Functions for Two Alternative Strategies
4.1 Illustrations of First and Second Degree Stochastic Dominance
4.2 Cumulative Distribution Functions F(y) and G(y)
4.3 A Graph of the Function [G(y)-F(y)]
4.4 A Sequence of Interval Preference Measurements
4.5 Upper and Lower Bound Absolute Risk Aversion Functions Based on Three Interval Measurements
4.6 An Absolute Risk Aversion Measurement Scale
4.7 A Sample Questionnaire
4.8 Interval Measurements of Absolute Risk Aversion for Three Decision Makers
5.1 A Flow Chart of the GREMP Model
5.2 Sample Observations from the Distribution of Outcomes Associated with Strategy One
5.3 Sample Observations from the Distribution of Outcomes Associated with Strategy Two
5.4 Sample Observations from the Distribution of Outcomes Associated with Strategy Three
6.1 Dates for the Application of Forward Contracting Rules
6.2 Price Forecast Line and Observed Control Price Levels for One State of Nature
6.3 The Expected Net Return Maximizing Strategy

Appendix A figures:
Generation of Random Variates by the Inverse Transformation Method
A Program for Generating Exponential Random Variates
A Program for Generating Gamma Random Variates
A Program for Generating Beta Random Variates
An Approximate Representation of a Cumulative Distribution Function Based on Six Known Points
A Table Look-up Function Subprogram
A Generalized Univariate Process Generator
A Program for Generating Multivariate Normal Random Variates
A Generalized Multivariate Process Generator
A Multivariate Beta Process Generator

Appendix B figures:
Sequence of Choices for a Measurement of Absolute Risk Aversion Based on Three Questions
A Suggested Absolute Risk Aversion Measurement Scale
A Listing of Program NORGEN
Sample Output from Program NORGEN
A Listing of Program INTID
An Eight Element Measurement Scale
Sample Output for Program INTID
A Three-Stage Hierarchy of Questions
Sample Questionnaire
A Listing of Program UFUNC
Representation of a Utility Function by Interpolation Between Known Points
An Interval Preference Measurement
Utility Functions Associated with Upper and Lower Bound Absolute Risk Aversion Functions
A Listing of Program NSTDO
A Sample Output from Program NSTDO

Appendix C figures:
General Flow Chart of Program GREMP
Statements in GREMP Which Generate Feasible Strategies
A Simple Version of Subroutine CHECK
A Simple Version of Subroutine DISGEN
CHAPTER I

INTRODUCTION

1.1 Background and Need for the Study

Uncertainty is that state of knowledge in which the consequences of actions being considered cannot be specified exactly; it is that condition in which knowledge is to some degree imperfect or incomplete.1 As such, the presence of uncertainty is a basic fact in nearly all decision situations. Though almost always present to some degree, uncertainty is not a factor which must be considered explicitly in every instance. Often the consequences of ignoring imperfections in knowledge are judged to be minimal, and decisions can be made as though all relevant factors were known with certainty. In many other instances, however, when the outcome of an important choice is highly uncertain, the failure to consider uncertainty explicitly may not be justifiable.

1 Knight's (1921) distinction between risk and uncertainty is not made in this study. General acceptance of Ramsey's (1931) observation that knowledge of the true probability distribution of a random variable is not possible obviates the need for this distinction. It is important to recognize, however, that degrees of uncertainty can vary and that a decision maker's state of knowledge has an important impact on his actions (Wald, 1947; Johnson and Lard, 1961).

Uncertainty can have an important impact both on the process by which decisions are made and on the character of decisions themselves. Learning, which has no value when knowledge is perfect, becomes a potentially worthwhile activity under uncertainty. As a result, the decision maker's attention may be focused primarily on the decision of whether or not to continue learning rather than on the actual choice of an action to be undertaken.1 When learning stops and a choice of actions is made, the character of that choice may also differ from that of one made under certainty. For example, decisions made under uncertainty often take the form of flexible strategies which make forthcoming actions contingent upon future events that the decision maker can observe but cannot control.2 Such strategies would be of little value if knowledge were perfect and all future occurrences could be known in advance.

1 The role of learning in the decision process is discussed in Bradford and Johnson (1953) and in Johnson and Lard (1961). More recently, the work of Aoki (1975), MacRae (1975), and others has dealt with the "dual-control" problem of learning and setting policies simultaneously.

2 Massé (1962), Cocks (1953), Rae (1971), and Day (1975) have all stressed the importance of such adaptive decision strategies.

The presence of uncertainty also affects the character of the decision rules used to identify a preferred choice. Decision rules which give explicit consideration to uncertainty generally require more specific information about decision maker preferences than is required under certainty, and they must permit the synthesis of this normative information with probabilistic information about the possible outcomes of any choice being considered.

Despite these and other impacts of uncertainty in the decision process, uncertainty is often not considered explicitly in the analysis of decisions upon which it may have a profound effect. In many instances the failure to give proper attention to such an important factor is not due to a lack of recognition of the impacts of uncertainty or to inadequacies in the theory of decision making under uncertainty.
Rather, this failure can often be attributed to a lack of workable analytical techniques which permit the explicit consideration of uncertainty in the analysis of practical decision problems, techniques which are flexible enough to allow the application of powerful theoretical results in a wide range of complex situations.

This study is concerned with the development of an integrated set of techniques which facilitate the incorporation of explicit considerations of uncertainty into a decision analysis. Particular emphasis is placed on the development of methodological tools which make the application of decision theory based on the expected utility hypothesis more feasible in a wide range of practical problem solving contexts. The expected utility hypothesis has been the basis for much of the body of theory concerned with decision making under uncertainty and has been used to explain a diverse range of behavioral patterns.1 It is also a potentially powerful tool for the analysis of decision problems in a practical context. But for a few notable exceptions such as Grayson (1960), Howard, Matheson, and North (1972), and Keeney (1973), this body of theory has rarely been applied successfully in the solution of practical decision problems.

1 Notable among the theoretical applications of the expected utility hypothesis are the important early articles by Friedman and Savage (1948), the more recent work of Samuelson (1967) and Ehrlich and Becker (1972), and the extensive literature concerned with portfolio selection based on the work of Tobin (1958), Markowitz (1959), and Baumol (1970), among others.

A number of difficulties have limited the usefulness of decision theory based on the expected utility hypothesis. The expectations and preferences of decision makers have proved to be difficult to determine and represent accurately, and, as Zadeh (1973) and Watson, Weiss, and Donnell (1979) note, it is somewhat paradoxical that extremely precise decision rules based on the expected utility hypothesis are applied in situations where relevant information is highly imprecise. Computational problems associated with the implementation of expected utility maximizing decision rules are also a source of difficulty. Often they force the imposition of restrictive assumptions on the way expectations and preferences are represented and so further limit the theory's usefulness in an applied context (Anderson, 1975). Finally, as Johnson (1976), Day (1964), Cyert and March (1963), and others have observed, much more than the application of a decision rule is involved in the choice process. Even when other methodological difficulties can be resolved, decision theory based on the expected utility hypothesis can be successfully applied only if a better understanding of problems and the process by which they are resolved is attained.

These difficulties are serious ones, but they stem from problems with the way decision theory has been applied rather than from the theory itself. They point to a need to develop methodological tools which can be used to make decision theory based on the expected utility hypothesis truly operational in a practical context.
1.2 Problem Statement

In response to this need, this study focuses on the problem of formulating an integrated set of operational techniques for the analysis of decision making under uncertainty, techniques which are consistent with theory based on the expected utility hypothesis and which overcome a number of the problems encountered in earlier attempts to apply that theory. Within this broad problem, four more specific areas of difficulty can be identified: problem formulation, the determination and representation of expectations, the measurement of decision maker preferences, and the identification of preferred choices.

A problem is said to exist when "a condition, situation, or thing is not as good or is worse (more bad) than it could be" (Johnson, 1976, p. 270). Before the information needed to resolve a problem can be collected and analyzed, before a course of action can be selected and implemented, the problem itself must be clearly defined. Problem formulation is the process by which such a problem definition is developed. It requires that performance criteria, choice variables, and relevant factors in the decision situation which are beyond the control of the decision maker be defined and that the nature of the decision to be made be clearly specified. Despite its importance, problem formulation is often given relatively little attention. Frequently, for example, the definition of a problem under consideration is dictated by the computational tools to be used as aids in its resolution. As a result, important sources of uncertainty may not be considered and the special character of decisions made under uncertainty may be ignored. Despite insights provided by Johnson (1961a), Cyert and March (1963), Churchman (1968), Day (1971, 1975), and others, then, problem formulation continues to be a problem in the analysis of decisions made under uncertainty.

The process of selecting a course of action which will best resolve a particular problem requires the synthesis of two types of knowledge: (1) positive knowledge, which pertains to beliefs about what is, what will be, and what can be done; and (2) normative knowledge, which pertains to beliefs about the goodness or badness of particular conditions, situations, and things. The product of such a synthesis is prescriptive knowledge, which allows the decision maker to prescribe or specify the right strategy or set of actions.1

1 The distinction between "right" and "wrong" and "good" and "bad" made by Lewis (1955) is an important one. The adjectives "right" and "wrong" refer to the nature of an act, while "good" and "bad" refer to the conditions existing prior to an act or to its consequences. To say a condition or consequence is good is the statement of a normative belief. To say that an act is right is the statement of a prescriptive belief which is founded both on positive information concerning the consequences of the act and on normative information pertaining to the quality or goodness of the consequences.

Expectations concerning the relative likelihood of alternative events occurring in the future represent an important part of the positive information required in any applied decision analysis. A number of methodological problems arise in connection with the determination and representation of expectations. Frequently, they are not well formulated in the mind of the decision maker, who may not be accustomed to thinking in probabilistic terms or may simply not be familiar with a particular factor which may have a significant impact on the consequences of his choice. Even when the decision maker's expectations are well formulated, problems may arise because he is unable to express them in a form which is useful analytically. Other difficulties may stem from the fact that the process by which the outcomes of particular actions are determined may be so complex that it cannot be comprehended as a unified whole. Given these problems, there is a need for procedures which help the decision maker structure his own thoughts and help him to use information from more expert sources, a need for techniques which allow the decision maker to break down complex processes into more comprehensible sub-processes about which expectations can be more easily formulated and then reintegrate that information for use in the decision analysis.
Given these problems, there is a need for proce- dures which help the decision maker structure his own thoughts and help him to use information from more expert sources, a need for tech- niques which allow the decision maker to break down complex processes 1The distinctions between "right" and "wrong" and "good" and “bad" made by Lewis (1955), is an important one. The adjectives "right" and "wrong" refer to the nature of an act, while "good“ and "bad" refer to the conditions existing prior to an act or to its consequences. To say a condition or consequence is good is the statement of a normative belief. To say that an act is right is the statement of a prescriptive belief which is founded both on positive information concerning the con- sequences of the act and on normative information pertaining to the quality of goodness of the consequences. _. A 7 into more comprehensible sub-processes about which expectations can be more easily formulated and then reintegrate that information for use in the decision analysis. Information on decision maker preferences is the primary normative input in any decision analysis. Problems associated with the measure- ment and representation of preferences also cause serious difficulties in the analysis of decisions made under uncertainty. Currently avail- able measurement techniques are used to construct single valued utility functions, which are precise but often inaccurate representations of preference, and many place little faith in them. Efficiency criteria based on stochastic dominance, on the other hand, require little specific information about the decision maker's preferences, but they often fail to order choices and may not eliminate enough alternatives when a large number must be evaluated. These difficulties indicate that there is a need for preference measurement techniques which are more reliable and easier to use in an applied context. The fourth major area of difficulty is that of identifying a pre— ferred choice or set of choices from what may be an infinitely large number of alternatives. This requires the synthesis of both positive and normative information, the simultaneous consideration of both expectations and preferences. Mathematical programing techniques are commonly used as computational aids in the solution of complex quanti- tative problems. They are best suited, however, for use in situations where uncertainty is not a major factor. As Anderson (1975) notes, the use of mathematical programming in the analysis of decisions made under uncertainty often requires that rather severe restrictions be placed on the manner in which a decision problem is posed and in the way 8 information on expectations and preferences is represented. These difficulties point to the need for more flexible computational procedures. 1.3 Objectives of the Study The problems which motivate this study are primarily methodological. They have important implications, however, for the analysis of decision problems in a practical context. In response to them, the objectives of the study are: 1. To present a framework for problem formulation which can serve as a guide in the identification and structuring of information required for the analysis of decisions made under uncertainty. To review procedures used in the elicitation of information on decision maker preferences and to refine techniques for the determination of probability distribution for outcomes which are the result of complex processes affected by a variety of stochastic factors. 
3. To develop and test a technique for the measurement of decision maker preferences which is well suited for use in an applied context and which overcomes some of the difficulties associated with other preference measurement procedures.

4. To formulate and make operational a computational procedure for the identification of preferred choices which is flexible enough to be used in the analysis of a wide range of practical problems and which imposes few restrictions on problem formulation or on the representation of decision maker expectations and preferences.

The fulfillment of each of these four objectives contributes to the primary purpose of this study, which is to develop an integrated set of techniques for use in the analysis of decisions made under uncertainty. Emphasis should be placed on the fact that the usefulness of the techniques presented below is greatly enhanced by their having been combined within a single methodological framework.

1.4 Plan for the Remainder of the Study

The principal objectives of the study are addressed in the next four chapters. Problem formulation is the subject of Chapter II. Particular emphasis is given to the usefulness of system identification (Manetsch and Park, 1977a) as an aid in structuring information in a practical decision context. The need to recognize the importance of opportunities for learning and adaptive behavior and the impact such opportunities have on the character of decisions is also stressed.

Techniques for determining decision maker expectations are described in Chapter III. Procedures for eliciting information on subjective probability distributions are reviewed, and the use of Monte Carlo simulation techniques to model the performance of complex stochastic systems for which outcome distributions cannot be determined analytically is discussed. The value of this approach is greatly enhanced by the generalized multivariate process generator developed as part of this study. Described in detail in Appendix A, this analytical tool can be used to model multivariate probability distributions defined by marginals of any form and by any positive definite correlation matrix.
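Appendix A implements the generator in FORTRAN. For readers who want the flavor of the approach, the sketch below shows, in modern Python, one standard construction for generators of this kind: draw correlated standard normal deviates, convert each coordinate to a uniform through the normal cumulative distribution function, and then apply the inverse cumulative distribution function of the desired marginal. The function name and the use of scipy are assumptions of this illustration; no claim is made that the appendix follows this construction in every detail.

```python
import numpy as np
from scipy import stats

def multivariate_nonnormal_sample(corr, marginals, n, seed=None):
    """Draw n sample vectors whose coordinates follow the given marginal
    distributions, with the correlation matrix imposed on the underlying
    standard normal deviates.

    corr      -- (k, k) positive definite correlation matrix
    marginals -- list of k frozen scipy.stats distributions, one per coordinate
    """
    rng = np.random.default_rng(seed)
    k = corr.shape[0]
    chol = np.linalg.cholesky(corr)           # factor the correlation matrix
    z = rng.standard_normal((n, k)) @ chol.T  # correlated standard normals
    u = stats.norm.cdf(z)                     # coordinate-wise uniforms
    # Inverse CDF (percent-point function) of each target marginal.
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# Example: two correlated beta variates; the parameter values here are
# arbitrary illustrations, not values from the study.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
sample = multivariate_nonnormal_sample(corr,
                                       [stats.beta(2, 5), stats.beta(3, 3)],
                                       n=1000)
```

One caveat worth noting: under this construction the specified correlation matrix governs the underlying normals, so the sample correlations of the transformed variates will generally differ somewhat from those specified--which appears to be why Appendix A reports both a specified and a sample correlation matrix.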
The measurement of decision maker preferences is the subject of Chapter IV.1 Procedures used to derive single valued utility functions are first reviewed, and several commonly used efficiency criteria are discussed as possible alternatives to the use of single valued utility functions in applied decision analyses.2 A more general and more powerful efficiency criterion, stochastic dominance with respect to a function (Meyer, 1977a), is then described and a new approach to the measurement of decision maker preferences, developed as part of this study for use in conjunction with this criterion, is introduced. This new approach allows the analyst to specify the degree of precision with which decision maker preferences are measured. Results of an experimental test of this technique are also presented. They demonstrate its flexibility and its predictive power.

1 Though the importance of multiple objectives in many decision situations is recognized, attention in this study focuses entirely on preference relationships which depend only on the level of a single performance criterion.

2 Computer programs used in the implementation of this technique are listed in Appendix B.
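The evaluation machinery itself is presented in Chapter IV and Appendix B (program NSTDO). As a rough, simplified illustration of how an interval measurement of absolute risk aversion can be used to compare two empirical outcome distributions, the sketch below screens a pair of samples against every constant absolute risk aversion utility on a grid spanning the measured interval. Passing this screen is only a necessary condition for dominance under stochastic dominance with respect to a function, which also admits utilities whose risk aversion varies within the interval; the function names, the gridding shortcut, and the restriction to constant risk aversion are simplifications of this example, not the study's procedure.

```python
import numpy as np

def cara(y, r):
    """Utility with constant absolute risk aversion r; linear when r is 0."""
    return y if r == 0 else (1.0 - np.exp(-r * y)) / r

def preferred_for_all_cara(f_sample, g_sample, r_low, r_high, n_grid=50):
    """True if sample F has expected utility >= sample G for every constant
    absolute risk aversion level r on a grid over [r_low, r_high].
    This is necessary, not sufficient, for dominance with respect to a
    function over the same interval."""
    f = np.asarray(f_sample, dtype=float)
    g = np.asarray(g_sample, dtype=float)
    return all(np.mean(cara(f, r)) >= np.mean(cara(g, r))
               for r in np.linspace(r_low, r_high, n_grid))

# Hypothetical net-income samples, screened for a decision maker whose
# measured absolute risk aversion interval is [0.0, 0.0005].
rng = np.random.default_rng(1)
f = rng.normal(3200, 800, size=500)    # higher mean, lower spread
g = rng.normal(3000, 1200, size=500)
print(preferred_for_all_cara(f, g, 0.0, 0.0005))
```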
Computational procedures for the identification of preferred choices are the subject of Chapter V. Uses of mathematical programming techniques in the analysis of decisions made under uncertainty are reviewed first, and the major shortcomings of these techniques are identified. A new procedure for the identification of preferred choices which combines random search methods, simulation techniques, and evaluation by the criterion of stochastic dominance with respect to a function is introduced and described in detail.1 This computational tool is remarkably flexible, placing few restrictions on the way probabilities and preferences are represented or on the general form of the problem to be solved.

1 The computer program which implements this decision model is presented in Appendix C.

Two related examples are used to illustrate the techniques developed in this study. Both are concerned with decisions affecting the operation of a southeastern Michigan cash grain farm. The simpler of the two examples, which focuses on land rental and crop mix decisions under price and yield uncertainty, is discussed at the end of the four methodological chapters. It is used to demonstrate how techniques for problem formulation, the determination of expectations, the measurement of preferences, and the identification of preferred choices can actually be applied. The second example, which considers the selection of a marketing strategy in conjunction with production and land rental decisions, is the subject of Chapter VI. Again prices and yields are uncertain. The marketing strategies considered may include cash sales at harvest, forward contracting, or any combination of these. Emphasis is placed on the adaptive nature of such strategies and on the impact of preferences on the combined production-marketing strategy selected.

Finally, in Chapter VII the strengths and weaknesses of the integrated set of techniques developed in this study will be discussed. Particular attention will be given to an evaluation of the range of applications for which these techniques can be of use and to the identification of areas where further methodological improvements are needed.

CHAPTER II

PROBLEM FORMULATION IN THE ANALYSIS OF DECISIONS MADE UNDER UNCERTAINTY

2.1 Introduction

A problem exists "when an indeterminate situation, present or projected, is regarded as unsatisfactory and a more satisfactory alternative situation is sought" (Johnson and Zerby, 1973, p. 3). Management is the process by which problems of a practical nature are resolved.1 In describing the managerial process, Johnson (1976) has identified six major types of activities: problem definition, observation, analysis, decision, execution, and responsibility bearing. This study is concerned primarily with the development of analytical tools which can aid the decision maker during the analysis and decision phases of the management process--tools which facilitate the determination of distributions of outcomes associated with alternative actions, the measurement of decision maker preferences, and the application of decision rules used to identify preferred choices. These tools can be of little use in a practical context, however, if the problems to which they are applied have not been clearly and correctly specified.

1 As Johnson and Zerby (1973) note, problems can be practical or theoretical in nature. Practical problems are those which are related to the choice of an action and so demand some form of resolution. Theoretical problems, on the other hand, are not linked to a definite action which can be fixed in space and time and so may never be fully resolved.

Problem formulation is the process by which a problem is defined and structured for analysis. It is the process by which features of the problematic situation judged to have an important impact on the choice to be made are identified and classified and the process by which the nature of the decision to be made is specified. The product of this process should serve as a guide in the collection of additional information, should provide a framework for the organization of that information, and should help structure the analysis which leads to a decision. As such, problem formulation is a critical activity within any applied decision analysis. Major expenditures of resources for information gathering and analysis may be required before a choice can be made in some decision situations. These resources can be used effectively only if the problem under consideration is clearly and correctly defined. A carefully determined solution to an irrelevant problem is of little use.

Problem formulation is the subject of this chapter. The discussion in subsequent sections focuses on two important aspects of the process of problem formulation: the identification and classification of variables relevant to the analysis of a particular problem and the actual specification of what is to be decided. With respect to both of these activities, emphasis is placed on the need to recognize the dynamic character of many decision situations and the impact it has on the choice process. The need to recognize the role of learning and the effect it has both on the way problems are formulated and the manner in which decisions are made is also stressed. The purpose of this discussion is not to introduce new concepts or to develop a comprehensive procedure for problem formulation. Rather, it is to restate some valuable observations made by others and to present a general view of decision problems and a working vocabulary for the discussion of them that can provide insights into the process of problem formulation.

2.2 The Identification and Classification of Factors Relevant to the Decision

At the outset of a decision analysis the problem under consideration may be only vaguely defined in the mind of the decision maker. To better understand the nature of the problem, one of the first tasks usually undertaken is the identification of factors judged to have an important impact on the choice to be made. Not only does this help to clarify the problem, but it also establishes a set of variables which can be the focus of observations and analysis. As more is learned about the problem, this set of relevant factors is, of course, repeatedly revised.

Efforts to identify the important factors in a particular decision situation are facilitated by the presence of a general classificatory framework which suggests broad types of variables that should be considered. Such a framework is presented by Manetsch and Park (1977a) in their discussion of system identification. A system can be defined as a collection of objects or processes which interact to perform a given function or set of functions. System identification is a generalized scheme for structuring information about the characteristics of particular systems.
It is a particularly valuable classificatory framework in a practical problem solving context because it is well suited for the description of static as well as dynamic decision problems and because it encourages the explicit consideration of the sources of uncertainty in any particular situation.

Manetsch and Park identify five broad classes of variables which should be considered in any decision situation: system outputs, controllable system inputs, exogenous system inputs, system state variables, and system design parameters. These are the major elements in Figure 2.1.

[Figure 2.1. A General Scheme for System Identification]

Before discussing each of these categories it should be noted that a "system" is a concept rather than an actual entity. The definition of a system--the specification of its functions and component processes--depends on the purposes for which it is being considered. In the analysis of a particular decision, the function of the system considered should be to determine or at least affect the situation or condition which is judged to be problematic, and the system's component processes should include all those which have an important impact on that situation. For example, the problem facing a farm family may be that its standard of living is unacceptably low. In trying to improve this unsatisfactory situation they will want to consider the system whose function it is to provide them with the resources for obtaining food, clothing, shelter, and other necessities. The component processes within that system would be that set of processes by which such resources are generated--a set which might include farm production and marketing, off-farm work, and public assistance.

System identification begins with the specification of system output variables. System outputs are the products of the processes which comprise a particular system; and system output variables, which measure levels of system outputs, should serve as indicators of the degree to which the system under consideration performs its designated functions. They should be the basis for a reliable representation of all relevant features of system performance, providing information about both the desired and undesired impacts of any choice being considered. For this reason, considerable care should be taken in identifying system output variables. In some situations all relevant information may be conveyed by a single output measure, but in other instances more than one system output variable is needed to adequately represent system performance. For the family in our example, net annual household income may be an adequate measure of system performance; but it may be necessary to specify other system output variables if, for example, there are important costs associated with the acceptance of public assistance which are not reflected by the level of net annual income.

System output levels are determined by inputs to the system and by its structure. System inputs are factors or stimuli emanating from outside the system which affect its performance. They can be classified as controllable or exogenous. Controllable system inputs are those for which a level can, to some degree, be specified by the decision maker.
The level of a controllable system input may represent an amount of some physical factor of production flowing into one of the processes within the system or it may specify a level of some well defined activity. In our example the set of controllable system input variables might include designations of levels for each farm production enterprise and for hours of off-farm work activity and a binary variable indicating participation or non-participation in public assistance programs.

The levels of exogenous system inputs cannot be determined by the decision maker. Rather, they are determined by the system environment, a set of processes which affect system performance but are not, in turn, significantly affected by the system's behavior. The set of exogenous system inputs in our example might include farm product prices, levels of rainfall, wage rates, and levels of public assistance. All of these have a potentially important impact on the family's standard of living but are beyond its control. The distinction between the system and its environment is not always evident, nor is it necessarily fixed. It depends on the problem under consideration and on the power of the decision maker. The distinction is an important one, however, especially in the analysis of decisions made under uncertainty, since stochastic factors in the environment can be viewed as the primary source of uncertainty in most decision situations.

System structure determines the relationship between system inputs and system outputs. The structure of a system is described by system state variables and by system design parameters. State variables are descriptors of the state or condition of a system at any point in time. In general system outputs can be viewed as functions of the system's state at some specified time or as functions of the system's state through time. In addition to determining system output levels, the state of the system may also affect the range of allowable levels for controllable system inputs. Therefore, it is important to give careful consideration to the specification of system state variables. In our example the set of system state variables related to crop production processes might include current levels of acreage planted to each crop grown, current amount of each crop harvested, current crop production expenses incurred, and current crop sales receipts.

System design parameters define the relationship between inputs to the system and its resultant state. As such they describe the processes which comprise the system. With regard to the crop production processes in our example, the set of system design parameters could include variable production costs per acre of each crop grown, time required to plant or harvest an acre of each crop grown, and parameters indicating tillage practices and the standard order of operations for the planting and harvesting of each crop. System design parameters have an important impact on system performance, and in some instances they can be altered by the decision maker. For example, a change in tillage practices may significantly affect both timeliness and crop yields for the farm in our example. Such a change, however, may be costly. As a result, alterations in system design are usually undertaken only in response to serious problems which cannot be resolved by other means.
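Collected in one place, the farm family example can be summarized under the five Manetsch-Park categories. The small data structure below is purely illustrative--the study prescribes no such representation, and the field entries simply restate the variables named in the text.

```python
from dataclasses import dataclass

@dataclass
class SystemDescription:
    """The five classes of variables identified by Manetsch and Park."""
    outputs: list              # indicators of system performance
    controllable_inputs: list  # levels the decision maker can set
    exogenous_inputs: list     # determined by the system environment
    state_variables: list      # condition of the system at a point in time
    design_parameters: list    # describe component processes; costly to alter

farm_household = SystemDescription(
    outputs=["net annual household income"],
    controllable_inputs=["level of each farm production enterprise",
                         "hours of off-farm work",
                         "public assistance participation (0 or 1)"],
    exogenous_inputs=["farm product prices", "rainfall", "wage rates",
                      "levels of public assistance"],
    state_variables=["acreage planted to each crop",
                     "amount of each crop harvested",
                     "crop production expenses incurred",
                     "crop sales receipts"],
    design_parameters=["variable production cost per acre",
                       "planting and harvest time per acre",
                       "tillage practices and order of operations"],
)
```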
The analysis itself focuses on the specification of a strategy to be undertaken to resolve the problem being considered. A strategy is defined by desired levels for controllable system input variables and by the new values of any system design parameters that are to be changed. Manetsch and Park (1977a) define “management" or "control" as the process by which desired levels for controllable system input levels and "design" or "planning" as the process by which specifi- cations for system structure are made.1 In general there are limits on the range of strategies that can be undertaken in any particular situa- tion. As indicated earlier in Figure 2.1, the state of the system, 1This concept of management is a much more narrow one than that used in this study. The management of a system is but one activity within the broader managerial process. 20 system design parameters, and factors in the environment all affect the control process. They impose constraints which restrict the allowable range of values for controllable system input variables. Though not shown explicitly in Figure 2.1, similar constraints also limit design changes. As the process of system identification proceeds, then, it is also important to identify those factors which restrict the range of available choices. To this point in the discussion systems have been viewed as essentially unidirectional processes which convert inputs to outputs. This view is limited because it fails to recognize the fact that as strategies are implemented there may be opportunities for learning and for revising chosen plans of action on the basis of newly acquired information. Introduction of the concept of feedback into our view of system identification helps to overcome this limitation. Feedback is the return flow of information (both positive and normative) on the s tate of the system and into environment to the central process unit. Recognition of the feedback loop between the set of system state Va riables and the control unit changes our conceptualization of the process by which inputs are transformed to outputs from one which is es sentially unidirectional and disjointed to one in which this process '3 S viewed as a continuous cycle or closed loop. This is a more r‘ea listic way of representing the context in which decisions are made, especially in situations where the impact of uncertainty is important. “081: decisions are not made in isolation, nor are they implemented instantaneously. Rather, they are made sequentially, and the outcome 01: One decision affects the opportunity set from which future choices can be made. Furthermore, because decisions are implemented over a 21 period of time, there are often opportunities to revise them. In such a context, learning is of considerable importance, and feedback in the medium through which learning takes place. The recognition of such informational flows can have an important impact on the specification of what is to be decided. Therefore, there is a need to consider such factors during the process of problem formulation. Finally, it should be noted that the identification and classifica- tion of the important factors in a particular decision situation is, in itself, a learning process. A decision maker's view of a problem and of the set of processes by which that problem can be resolved is repeatedly revised and made more complete. 
This kind of learning continues until further efforts are judged not to be worthwhile or until the decision maker is forced to take an action.1 The degree of detail included in the description of a particular system, then, depends on the usefulness of that detail in helping the decision maker determine his preferred course of action.

1 Johnson and Lard (1961) relate the decision of whether or not to continue learning to five more formally defined knowledge situations: learning, forced learning, forced action, inaction, and risk.

2.3 The Specification of the Decision to be Made

As was noted in Chapter I, the presence of uncertainty can have an important impact on the character of decisions. All situations involving uncertainty are, in a sense, dynamic, since they are characterized by changes in the decision maker's knowledge through time. One makes a choice and begins to act in the present, but only later do the consequences of one's actions come to be known. Often the outcomes associated with a particular strategy unfold over an extended period of time, and there are opportunities for learning as the strategy is implemented. The existence of such opportunities may make it desirable to introduce flexibility into the specification of a decision strategy.1 The strategy becomes a conditional plan--a set of contingency rules which direct actions on the basis of currently available information. In this way the importance of future opportunities for learning is recognized explicitly when a choice is made. Though not all decisions made under uncertainty take this form, many do. The discussion in this section will focus, in part, on the specification of flexible strategies based on feedback control rules.

1 Dreyfus (1968) demonstrates that flexible strategies are superior to inflexible ones in multistage decision problems under uncertainty.

Another important consideration which affects the character of decisions is the length of the planning horizon. Because current choices have an impact on future opportunities, it is often necessary to formulate strategies which extend into the future. When knowledge is perfect, it is possible to specify future actions extending over an infinite planning horizon. When knowledge is not perfect and reliable information about future events can be attained only at considerable cost, on the other hand, the time horizon for which it is worthwhile to formulate a plan may be shortened considerably (Modigliani and Cohen, 1961; Kleindorfer and Kunreuther, 1978). Specification of the relevant planning horizon and the distinctions among a plan, a decision, and an action, then, will also be considered in this section.

Before beginning the discussion it should be noted that the specification of what is to be decided, like the identification and classification of the important factors in a decision situation, is a process which continues throughout a decision analysis. As more is learned about the problem at hand, as observation and analysis continue, one's conceptualization of what is to be decided is repeatedly revised, and refinements in this aspect of problem formulation continue until further changes are not worthwhile.

2.3.1 The Specification of Feedback Control Rules

A feedback control rule is a rule for processing current information on the state of a system and its environment in order to repeatedly update desired controllable system input levels. Feedback control rules can take a variety of forms.
They can be as simple as the statement, ' JIf the forward contract price of corn is below $2.00 on May 1, I'll comply with the federal set aside program requirements; if it's above $2-00, I won't participate." Alternatively, they can be complex ‘f’c.llnctions of several state variables. In specifying a feedback control Pu 1e to determine the level of some controllable system input variable 31‘12» any point in time, one must be concerned with the identification of State variables which can be expected to have a significant impact on the desired level of the controllable system input being considered, “‘3 th the form of the rule, and with the actual parameters of the rule. The simple feedback control rule stated above determines the level 01" a binary controllable system input variable which has a value of 0 if the operator chooses not to participate and 1 if he chooses to par- t1C‘1 pate. The only state variable affecting the choice of a level for 24 this variable is the forward contract price of corn. The form of the rule is that of an "if-then” statement, and the parameters of the rule are a forward contract price of corn—-$2.00--and a date when a decision will be made--May 1. It should be noted that the effectiveness of the rule is affected by the variables considered, by its form, and by its parameters. All affect its impact on system performance, and in selecting a preferred management one may be concerned with the specifi- cation of all three factors. It should also be noted that this rule is but one component of a management strategy which might include other feedback control rules and direct specifications of some controllable system inputs. The specification of feedback control rules can be a difficult 'task in more complex situations. In some special cases optimal control rrnethods can be used to derive feedback control rules which optimize .ssjrstem performance, but the presence of uncertainty greatly complicates the application of these analytical tools.2 Often, then, it is necessary to specify a general form of a control rule and perform experiments to determine its parameters. A simple example related to forward contracting 551‘t2rrategies by cash grain farmers should help to explain how a reasonable 1":<>rm for a feedback control rule can be determined. Let v(t) be a controllable system input which specifies the number (3‘7: bushels of corn which, at time t, the operator contracts to deliver -———_._2 1In reality, of course, other variables may affect this decision. 2Optimal control techniques are discussed in detail in Aoki (1967), Karreman (1968), Sage (1968), and Kirk (1970). The first two are con- CeY‘ned with optimal control decisions under uncertainty. 25 at harvest. Let x(t) be the total number of bushels contracted prior to time t--i.e. x(t) = :5: v(t)--and let d(t) be the total number of bushels the operator desires to have contracted at time t. The level of v(t) is defined by the following expression: v(t) = d(t)-x(t) if d(t)>x(t) 2.1 0 otherwise This is a feedback control rule which specifies the level of v(t) at any point in time. That level is equal to the difference between desired and actual contracting levels to date. Since a contract, once Inade. cannot be dissolved, however, levels of v(t) are restricted to rwon-negative values. 
Actual contracting levels can be observed, but desired contracting ‘1 evels cannot be.1 In order to implement this rule, then, a more complete specification is needed, a specification which defines desired ¢::<311tracting levels as a function of observable variables. The projected size of the operator's corn harvest, h(t), is one f‘a ctor upon which the desired level of contracting is expected to depend. Initially, then, d(t) might be defined by the expression d(t) = chit) 2.2 Nb ‘3 ch implies that the desired level of contracting is some specified fEr‘actionus, of the projected corn harvest. This is not a very satis- Fa(:‘lzory specification, however, because the operator's estimate of how ma ny bushels of corn he expects to harvest may not change much over the per“Tod during which the rule is to be applied. Furthermore, this Specification ignores the impact of prices on desired contracting levels. X 1If the operator knew his desired contracting level, he would have “0 need for this rule. 26 An additional factor which should be considered, then, is the sign and magnitude of the difference between the current forward contract price, c(t), and the farmer's current estimate of the expected cash price at harvest, e(t).1 The more the contract price is above the expected cash price,the more the farmer will wish to have contracted; the more it is below the expected cash price,the less he will wish to have contracted. Therefore, a revised specification of d(t) could be: d(t) = h(t)[a(c(t)-e(t))] 2.3 where a is a positive constant. The desired level of contracting may also depend on movements of the contract price. If it is rising rapidly the farmer may wish to cielay the commitment of an additional portion of his crop to a forward contract. On the other hand if the contract price is falling he may wish to lock in a relatively high price. To reflect this, the specifi- cation of d(t) can be further revised so that d(t) = h(t)[a(c(t)-e(t))+s “—3911 2.4 Where dc(t)/dt is the rate of change in the contract price and B is a negative constant. The interaction between the two terms in brackets Should be noted. If the contract price is falling but is less than the expected cash price, the first term should override the second, and no new contracting will be desired. On the other hand, if the contract pr-i ce is both above the expected cash price and falling, the two terms rei nforce each other to raise the desired contract level. \ 1The expected cash price at harvest, e(t), can be determined by 3“ EXpectations model, the complexity of which can be determined by t e requirements of the particular decision situation. 27 A third factor which could influence a forward contracting strategy is the percentage of desired corn acreage actually planted at time t, p(t). Fearing unusually bad weather which could delay or prevent the planting of some of the specified acreage, some operators may hesitate to contract much of their projected harvest until planting is nearly complete. Similarly, many farm operators, fearing the consequences of a sharp downturn in prices, desire toihave some of their crop contracted in nearly all situations. Therefore, the specification of d(t) can be revised once again to become d(t) = h(t)[a(c(t)-e(t))+8 d—gitfl mun 2.5 vvhere y is a positive constant. One other restriction on d(t) should be noted. In some situations 'tzhe desired contracting level implied by the specification above may be unacceptably high either for the farm operator or for the manager of the local elevator. 
Therefore, it may be advisable to establish an upper bound on d(t). This can be considered to be a prespecified constraint or it can be treated as a parameter. In this case the upper bound on d(t) will be set at 1.5h(t), which implies that a maximum of 150 percent of the projected crop can be contracted.1

By substituting the right hand side of equation 2.5 for d(t) in equation 2.1, the feedback control rule can be expressed in a form which contains only observable variables. This rule has been presented only as a relatively simple example of the types of rules that can be specified. In some situations it may be desirable to consider more factors and to experiment further with functional form. The power of such a rule should be evident, however. Once a functional form has been specified and values for the three parameters have been selected, an adaptive marketing strategy for an entire year has been established, a strategy which requires only information readily available to any farm operator and which allows the decision maker to take advantage of opportunities to learn. In many decision situations such a rule may be preferable to a management strategy which specifies an inflexible marketing strategy prior to the planting season.

________
1 Due to the form of the feedback control rule no lower bound on d(t) is needed. Such a lower bound could be specified, however, if necessary.

2.3.2 Plans, Decisions, and Actions

The preceding discussion has shown how the incorporation of feedback control rules into the specification of an action strategy introduces flexibility into the concept of what is to be decided in a particular situation. In attempting to gain a better understanding of the general characteristics of decision problems involving uncertainty and in giving further consideration to the basic question of what constitutes an action choice, it is also important to draw clear distinctions among the three related concepts of a plan, a decision, and an action. The distinctions made here parallel those made by Modigliani and Cohen (1961) and by Day (1971).

A plan is a strategy for controlling system performance through management or design which extends into an uncertain future--a strategy based on information about the current situation and on expectations concerning future events. In general, plans can be altered. Such alterations may be costly, however, because resources must be expended to gather and analyze new information and to reformulate the plan itself, and because actions undertaken to implement the initial phases of a plan may restrict the opportunities open to a decision maker.1

A decision, on the other hand, is a choice which is essentially irreversible. It is that part of a plan which is to be implemented before further planning is undertaken. As such, decisions are the desired output of the analytical process which is of primary concern in this study.

Finally, an action is the realization of a decision. In a world free of uncertainty, decisions and actions would be effectively identical. In most instances, however, events that cannot be known with certainty at the time when decisions are made affect the extent to which they can be implemented and, of course, the outcomes associated with them. What is realized may not be what was chosen. It is necessary, then, to distinguish between decisions and actions.

________
1 This second point is demonstrated by Johnson (1961b) and Johnson and Quance (1972) in their discussions of asset fixity.
There is a crucial interplay among these three activities, an interplay which must be recognized during the formulation of decision problems. Though the primary focus of a decision analysis is on the choice of actions to be undertaken, one must be aware that the outcomes of current decisions affect the opportunity set which circumscribes future decisions. Therefore, it is often necessary to formulate plans for periods extending beyond the immediate period in which decisions are to be implemented. In the multiperiod model of decision making under conditions of perfect knowledge developed by Hicks in Value and Capital (1946), the planning horizon is of infinite length, since an event at any time can have an effect on the total flow of system outputs. In most cases, however, information is not perfect, and Modigliani and Cohen (1961) have observed that in an uncertain context forecasts are subject to error and reliable information about future events can be attained only at a cost--a cost which is directly related to the degree of uncertainty and inversely related to the time proximity of the future event. In such situations it is not worthwhile to formulate plans over an infinite horizon. Rather, the planning horizon should be extended only to the point where current decisions cease to be affected.1 Furthermore, actions at the end of the planning horizon need not be planned in as great detail as those at the beginning. In formulating decision problems, then, it is necessary to determine the appropriate length of the planning horizon.

The act of planning, which involves the collection and analysis of information, is expensive, however, and it is also important to consider how often plans should be reformulated. If the costs of planning are high or if the benefits from it are comparatively low, frequent revision of plans may not be worthwhile. Therefore, when formulating decision problems, it is also necessary to consider the length of what might be termed the decision horizon--the length of time over which decisions apply and replanning does not take place.

________
1 Kleindorfer and Kunreuther (1978) note that the cost of planning and the length of the planning horizon depend on the costs of forecasting future events, the degree of uncertainty, and the computational costs associated with the determination of an optimal plan.

The length of the decision horizon has an important impact on the character of decisions themselves. When it is short, the importance of learning through feedback may be minimal, and a valid decision strategy would specify levels for all controllable system inputs over the entire decision horizon. When the decision horizon is long, however, implementation of a decision strategy may involve a series of actions which are affected by factors that cannot be known with certainty at the time when decisions are made. In such instances opportunities for learning may exist, but the reformulation of plans on the basis of new information is not worthwhile. As a result, the decision may be more concerned with the choice of adaptive decision rules to be followed over the entire decision horizon than with the determination of desired levels for all choice variables over that period.
Relating this to the scheme of system identification, the management process unit is viewed as a controller which directs system performance through the application of feedback control rules. In formulating rules of this sort, one must also consider what can be termed an action horizon--the length of time between successive reassessments of the current situation and expectations for the future and reapplications of the adaptive decision rule. The action horizon corresponds to the time increment embodied in a feedback loop. Its desired length will depend on the cost of monitoring the state of the system and the environment and on the relative costs and returns of applying the feedback control rule. As such, the length of the action horizon also has an impact on the nature of decisions, and attention should be given to its specification in the process of problem formulation.

The following example should help to clarify the distinctions among plans, decisions, and actions. It should also help to demonstrate the importance of making these distinctions when formulating decision problems. Production and investment decisions made by farmers in any given year have an impact on their operation for a number of subsequent years. They affect future cropping patterns, levels of available resources, and cash flow requirements, though their exact impact cannot be known in advance due to uncertainties with respect to a number of environmental factors beyond the control of the individual decision maker. Farmers find it desirable to formulate production and investment plans, then, but the high degree of uncertainty they face may cause them to limit their planning horizon to, perhaps, three years. Once a farmer has begun to implement his plan--once he has purchased seed, fertilizer, and other inputs needed to grow the first year's crops and, perhaps, new land or machinery--a decision has been made, and in most cases extensive revision of his plan will not be worthwhile until the end of the crop year. It can be said, then, that the decision horizon is approximately one year. The farmer's decision should not be considered to be a rigid strategy which defines his actions for each day of that year, however. Rather, it is a set of specified levels for major controllable system inputs and a set of adaptive decision rules which structure future efforts to collect and analyze information and direct his actions in response to changes in the state of his operation and the environment. For example, a management strategy could be comprised in part of a set of desired acreage levels for each cropping activity and a set of adaptive rules which automatically adjust the crop plan in response to information on current acreage planted and changes in relative prices. A simple adaptive rule might be to shift unplanted corn acreage to soybeans if the forward contract price for corn is below $2.10 and the contract price for beans is higher than $6.50. If such a rule were applied weekly, the action horizon would be said to be one week.

It is important to recognize the impact of each of these activities--planning, decision making, and action--on the conceptualization of the decision problem. Failure to recognize the need to plan may lead to decisions which, while beneficial in the short run, have a harmful long-run impact on the system. Similarly, it must also be recognized that planning itself is expensive and that in most cases a period of time exists over which extensive plan revision is not worthwhile.
A portion of any plan, then, can be viewed as a decision which will not be altered. Finally, recognition of the fact that uncertain aspects of the environment are likely to affect the implementation of any decision and that opportunities for learning and adaptive behavior exist when the decision horizon is relatively long leads to the conclusion that in many instances decision makers should choose flexible rather than inflexible strategies.

2.4 A Formal Statement of the Decision Problem

In the analysis of complex decision problems it is often desirable to make a formal statement of the problem to be resolved. When mathematical decision aids such as those developed in this study are used in the decision analysis, this may be a necessity. In the most general terms, the basic decision problem under uncertainty can be stated in the following manner: identify a feasible action strategy which results in system performance over a specified time horizon that can be considered optimal according to some relevant evaluative criterion. System performance is measured by system output variables, at least some of which are stochastic. An action strategy is defined by a set of controllable system input levels and/or by a set of control rules which determine the levels.1 This is, in essence, the statement of a stochastic optimal control problem. Though it may be impossible or prohibitively expensive to find the truly optimal solution to such a problem in most decision situations, this is a generalized formal statement of the problems decision makers seek to resolve. It is a problem which requires information on the physical, human, and institutional realities of the context in which choices are made, assessments of probabilities associated with stochastic events beyond the control of the decision maker, and an understanding of the decision maker's normative beliefs if it is to be resolved successfully. As the process of problem formulation continues, a more comprehensive understanding of the problem being analyzed should be gained. This understanding of the problem should serve as a guide throughout the decision analysis.

________
1 Symbolically, the problem is

        max u = h(y(T),T) + ∫₀ᵀ g(y(t),t)dt

        s.t.  y(t) = f(x(t),t)
              ẋ(t) = a(x(t),v(t),e(t),t)
              r(x(t),ẋ(t),t) ≤ s
              s(v(t),t) ≤ v̄

where y(t) is a vector of system output variables, x(t) is a vector of system state variables, v(t) is a vector of controllable system inputs, e(t) is a vector of exogenous system inputs, and a is a function comprised of system design parameters. The final two constraints limit the set of allowable states and the set of allowable controllable input levels. The elements of the vector v(t) are the choice variables in this problem.

2.5 An Application

In this final section, the concepts related to problem formulation presented earlier are applied to a more concrete decision problem. The example introduced here will also be used to illustrate the techniques developed in subsequent chapters. This is not an actual case example. Rather, it might be termed a synthetic case study, since it synthesizes circumstances and concerns common to many individuals' situations.

The decision maker in our example is the operator of a relatively small cash grain farm in southeastern Michigan. He owns 240 acres of tillable land on which he grows corn and soybeans. He is heavily in debt, with interest and principal repayment commitments on long and intermediate term debts of $35,000 per year.
Except for approximately $6,000 income from off-farm work by the operator and his wife, all of the family's income is derived from the farming operation. If the family's level of income is insufficient to meet debt repayment commitments and family living expenses, they face the prospect of refinancing some loans or of being forced out of farming altogether. In 1978 their income was low, and they relied in part on savings to cover expenses.

The operator views his current situation as uncertain and unsatisfactory. He feels a strong need for a higher, less uncertain level of income in the year to come. Though he has other needs and desires, this is his primary concern. The problem to be analyzed, then, is that of identifying a strategy which best provides a level of income adequate to meet debt repayment obligations and family living expenses. The operator believes this problem is a serious one, and he is willing to expend the resources required to undertake a careful analysis of his alternatives and their consequences.

Having identified the decision maker and the problem which motivates our analysis, it is necessary to look more carefully at the kinds of choices the farm operator can make and at the factors which will affect the outcome of these choices. One might begin by considering what processes affect the family's level of income--by defining the system that will be the focus of the analysis. In this case the system is comprised of the set of production and marketing processes which constitute the farm operation and the set of processes associated with engaging in off-farm work. In order to simplify this example, it is assumed that neither the operator nor his wife is willing to take a permanent off-farm job. As a result, opportunities for affecting the pattern of off-farm earnings are limited, and the level of off-farm income will be assumed given.

The output of this system is measured by a single variable, y, which is defined as annual net cash income available for family living expenses, income tax, and investment after all debt repayment commitments and other business expenses have been met.1 The level of income realized depends on the structure of the system, on exogenous inputs to the system, and on the choices made by the operator.

The structure of the system defines more exactly the set of processes by which the system output, net cash income, is generated. In this example the conceptualization of the system will be kept as simple as possible. Standard crop budgets, which are given in Table 2.1, define the basic production processes for corn and soybeans.

________
1 Other performance measures could be identified, but the cost of considering them is deemed excessive in this case, since it greatly complicates the analysis.

Table 2.1 Standard Crop Enterprise Budgets

                                           Corn              Soybeans
    Seed, bu.                          (.23)   9.70      (.83)   8.30
    Fertilizer
      Nitrogen, lb.                    (120)  16.80      (10)    1.40
      Phosphorous (P2O5), lb.          (75)   13.50      (50)    9.00
      Potassium (K2O), lb.             (100)   9.00      (25)    2.20
      Lime                                      .80              1.10
    Herbicide, other chemicals                10.80             13.00
    Fuel and repair                           14.40              9.60
    Utilities                                  2.00              2.10
    Miscellaneous                              2.20              2.20
    Total Selected Cash Expenses              79.20             48.90
    Drying cost, per point per bu.              .01              0
    Hauling cost, per bu.                       .10               .14
    Time required for
      Planting, hours per acre                  .757              .757
      Harvest, hours per acre                   .418              .502

Source: Nott, et al. (1977).
They specify not only physical inputs such as seed, fertilizer, herbicides, and fuel, but also the time required for planting and harvesting, the two critical fieldwork operations. These budgets alone are not considered sufficient to adequately represent crop production, however. The timeliness of planting and harvesting also affects crop yields and, ultimately, the level of income realized. Therefore, the acreage planted in each crop is classified according to when it is planted and harvested. Six planting periods and five harvest periods are defined in Table 2.2, and possible planting-harvest combinations are specified for each crop. A final characteristic of the crop production process which should be noted is the rule of thumb that all corn is planted before soybeans are planted and that all soybeans are harvested before corn is harvested.

Table 2.2 Planting and Harvest Periods and Possible Crops for Each Planting-Harvest Combination

[The body of this table is not legible in the source scan. Its entries indicate, for each combination of the six planting periods and five harvest periods, whether corn, soybeans, both, or neither may be grown.]

Source: Black, et al. (no date).

Other relevant system design features include the fact that the farmer does all the fieldwork himself, though he gets some help from his wife, who hauls grain at harvest. Again to simplify the example, it is assumed that all production is sold at harvest on the cash market. Marketing alternatives such as forward contracting, hedging, storage, and participation in government set aside programs will not be considered.

These system design characteristics define the processes by which income is generated. System state variables are also useful in understanding the system's structure, since they serve as descriptors which represent fully the system's performance through time. In this example the set of state variables includes an indicator of current net cash income which is repeatedly updated as costs are incurred and crop sale receipts taken in, an indicator of the total acreage of each crop planted and harvested to date, indicators of the number of acres of each crop which remain unplanted or unharvested, and indicators of the number of bushels of each crop harvested to date. In addition, time itself is monitored, as are the number of acres of each crop planted in each of the six planting periods and the number of acres of each crop planted in a particular planting period which are harvested in each of the five harvest periods.

Inputs to the system, as well as its structure, affect the level of income realized by the farmer. A number of exogenous system inputs can be identified in this example. Those considered to be stochastic include the price at harvest of each crop, the number of days available for fieldwork in any particular planting or harvest period, and the yield of each crop for each planting-harvest combination. Subjective assessments of the probability distributions for these variables will be required for the analysis.

Relatively few controllable system inputs will be considered in this example.
Since the analysis focuses on decisions related to the farming operation, those of primary concern are the number of acres rented (land rental opportunities do exist) and the number of available acres planted in each of the two crops grown, corn and soybeans. Several factors limit the range of possible values for the three controllable system inputs of primary concern. Due to the characteristics of the local land market, the number of acres rented, v1, can be assumed to take only five values: 0, 80, 160, 240, 320. Limits on available land imply that total crop acreage must be less than or equal to that which is owned plus that which is rented. If v2 is acreage planted in corn and v3 is acreage planted in soybeans, then the following relationship must hold:

    v2 + v3 ≤ 240 + v1                                           2.6

The farm operator not only has control over the several system inputs discussed above, but he also may be able to affect the performance of the system through design changes such as the purchase of new machinery or the alteration of cultural practices. Given the operator's rather precarious financial position and the fact that his crop yields have not been notably low, however, it seems best to focus the analysis on the specification of controllable system input levels. Therefore, design changes will not be considered.

What is the relevant planning horizon in this example? If herbicide carry-over problems are not considered to be of major importance, and if long term leases are not required for rental of any of the 80 acre tracts, then the choices to be made in this example have an impact on system performance and on the opportunities open to the farmer only in the year they are made. Therefore the relevant planning horizon is a single year.1 Because the costs of the analysis to be undertaken are not insignificant, the farm operator does not wish to consider major changes in his strategy unless conditions change so dramatically that this is deemed worthwhile. The decision horizon, then, is also one year. It is recognized, however, that there will be opportunities to learn over the course of the year and that new knowledge may lead to a desire for some minor revisions in the strategy. Of particular concern are possible losses in yields due to a lack of timeliness in planting corn. Therefore, a simple feedback control rule will also be considered in the analysis. The rule takes the following form: "Regardless of specified crop acreage levels, soybeans will be planted on all unplanted acreage after v4 (a parameter indicating a specific date)." Soybeans will not be planted before May 19, nor will corn be planted after June 3, and the operator wishes to check the feedback control rule at the end of each planting period between these dates. Given these restrictions, the possible values for the parameter, v4, are May 18, May 26, and June 3. The action horizon for this rule, then, is eight days during the period when it is operative.

________
1 If major design changes such as investment in new machinery were being considered, the planning horizon would need to be longer.

The management strategy in this example is defined by values of the three controllable input variables and by the single feedback control rule parameter. Our problem is to find the strategy, v*, which best satisfies the farm operator's need for a higher and more stable level of income.
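Because v1 is restricted to five discrete values and v4 to three dates, the strategy space can be enumerated directly once crop acreages are discretized. The sketch below does this in Python; the 40-acre grid on v2 and v3 is a hypothetical discretization introduced only for illustration and is not part of the problem statement.

    # Enumerate candidate strategies (v1, v2, v3, v4) satisfying
    # constraint 2.6: v2 + v3 <= 240 + v1. The 40-acre grid on crop
    # acreages is hypothetical.

    RENTAL_LEVELS = [0, 80, 160, 240, 320]       # possible values of v1
    RULE_DATES = ["May 18", "May 26", "June 3"]  # possible values of v4

    def candidate_strategies(grid=40):
        for v1 in RENTAL_LEVELS:
            land = 240 + v1                      # owned plus rented acres
            for v2 in range(0, land + 1, grid):            # corn acreage
                for v3 in range(0, land - v2 + 1, grid):   # soybean acreage
                    for v4 in RULE_DATES:
                        yield (v1, v2, v3, v4)

    print(sum(1 for s in candidate_strategies()))  # size of the choice set

Even on this coarse hypothetical grid the enumeration yields 1,050 candidate strategies, which suggests why exhaustive evaluation quickly becomes burdensome as finer acreage increments are allowed.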
Constraints on allowable control variable levels and the structural characteristics of the system which determine the relationship between system inputs and outputs must be considered when the choice is made. The fact that the choice must be made under conditions of uncertainty with respect to product prices, crop yields, and time available for fieldwork must also be considered, since this means that the outcome of any strategy can be specified only in probabilistic terms. To make such a choice requires the integration of positive knowledge of what is and what may be with normative knowledge concerning the goodness or badness of particular outcomes. The expected utility hypothesis, which states that the preferred choice of a decision maker is that which has the highest expected utility, will serve as the basis for this integration. Our problem, then, is to identify the management strategy v* for which the associated distribution of realized net cash income, f(y), maximizes the expected utility of the decision maker.

CHAPTER III

THE DETERMINATION OF SUBJECTIVE PROBABILITIES

3.1 Introduction

Choices made under uncertainty are affected by a decision maker's preferences for alternative outcomes and by his expectations concerning the likelihood of each possible outcome associated with the action strategies under consideration. Both of these factors are subjective and vary from decision maker to decision maker, and information on both is a critical input in the analysis of any decision problem in which the impact of uncertainty is of major importance. In this chapter, techniques for eliciting information on expectations and methods of structuring that information for use in a formal decision analysis will be examined.

Expectations are reflected in a decision maker's beliefs about the probabilities of different events occurring. These beliefs may be based in part on logical deductions, on inferences drawn from empirical observations, on intuition, or on a combination of all three types of information. In general, however, probabilities must be viewed as personal or subjective and, as such, cannot be judged to be correct or incorrect.1 The problem in a decision analysis is one of representing subjective probabilities in a manner which is consistent with the decision maker's actual beliefs and in a form which facilitates the use of this information in the evaluation of alternative choices. In many situations, this already difficult problem is made more so by the fact that the decision maker's beliefs may be poorly defined and may be based on quite limited information. As Hogarth (1975, p. 273) notes:

    . . . man is a selective, stepwise information processing system with limited capacity, and, as I shall argue, he is ill-equipped for assessing subjective probability distributions. Furthermore, man frequently just ignores uncertainty.

The assessment of subjective probabilities, then, can be a difficult, complex task. Elicitation procedures should be designed to help the decision maker think in probabilistic terms. Furthermore, they should serve as an aid in structuring information from a wide range of sources, including that provided by experts more knowledgeable than the decision maker himself.

________
1 This personalistic view of probabilities whereby they are considered to be "degrees of belief" rather than objective facts has its origins in Ramsey's (1931) discussion of probability in the essay "Truth and Probability."
Of primary concern in the analysis of choices made under uncertainty are the subjective probability distributions of the outcomes associated with each management strategy under consideration. In general, such distributions cannot be assessed directly by the decision maker, however, since they are usually dependent both on specified levels of system control inputs and on a number of stochastic and non-stochastic environmental factors. Rather, their assessment requires both the encoding, or direct elicitation, of subjective probability distributions for important stochastic exogenous system inputs and the modelling of the relationships between system inputs--controllable and exogenous--and system outputs. The combined use of encoding and modelling allows the decision maker to break down what may be a complex stochastic process into more manageable sub-units about which he can formulate expectations directly. It also encourages the decision maker to think explicitly about how controllable and uncontrollable factors interact to determine the outcome of any choice. The value of careful system identification should be evident here, since it is the process by which controllable and exogenous system inputs having an important impact on system performance are identified.

The number of stochastic factors considered and the complexity of the model used to represent their effect on the outcome of a particular choice should be determined by the nature of the problem being analyzed. In many instances the stochastic process under consideration will be modelled more than once, with new exogenous system inputs and a more refined view of the system itself being considered at each stage. As Spetzler and Stael von Holstein (1975, p. 341-2) note:

    Modelling efforts tend to be most effective and most economical if they begin with a gross model that is successively refined. A model should be refined only as long as the cost of each additional refinement provides at least comparable improvement in information. The criterion for how much information is needed depends on how significantly the information bears on the decision at hand.

The elicitation and structuring of information on subjective probability distributions, then, can be viewed as a learning process which extends the knowledge gained during system identification. It is a process which should lead to an improved understanding both of the system under consideration in a decision analysis and of the decision problem itself.

In the remainder of this chapter, procedures for encoding subjective probabilities will first be reviewed. Methods for modelling stochastic processes will then be examined, with particular attention being given to the use of Monte Carlo methods and simulation techniques to represent the performance of complex systems. Finally, these techniques are applied to the analysis of the decision problem introduced in the concluding section of the preceding chapter.

3.2 Probability Encoding Procedures

During the system identification phase of problem formulation, stochastic exogenous system inputs which are expected to have an important impact on system performance are specified. In many instances, preliminary models may be constructed and sensitivity tests performed in order to better determine which environmental factors affect system performance most significantly. Once this has been done, the encoding of subjective probabilities associated with important exogenous variables can begin.
Encoding is the process by which a decision maker's beliefs about the relative likelihood of different events are elicited and used to represent a subjective probability distribution. It is one means by which information from a range of sources is structured for use in a decision analysis. It should be emphasized once again that a single, correct subjective probability distribution does not exist. Furthermore, it should be noted that the decision maker may not even think in probabilistic terms. As Winkler (1967, p. 778) notes:

    . . . there is no 'true' prior distribution. Rather, the assessor has certain prior knowledge which is not easy to express quantitatively without careful thought. An elicitation technique used by the statistician does not elicit a 'true' prior distribution, but in a sense helps to draw out an assessment of a prior distribution from the prior knowledge.

Finally, it should be noted that the encoding process may also involve the consideration of information from outside expert sources or the use of historical data. This may be particularly important when the decision maker's own knowledge about a particular factor is limited. In such instances he may be willing to accept the assessments of others or to base his expectations for the future entirely on past occurrences.

Before actual encoding begins, the stochastic variable under consideration should be clearly defined, and its importance in the decision analysis should be recognized by the decision maker. The variable should be viewed as truly exogenous to the system under consideration. If its level will be affected by the decision maker's actions, it cannot be considered to be exogenous. Finally, the variable's relationship to other random factors should be considered carefully. If its level is conditional upon that of other exogenous variables, this should be recognized and dealt with explicitly during the encoding process.

Possible sources of bias should also be considered before encoding begins. Biases are said to exist when an encoded subjective probability distribution does not conform with the decision maker's actual beliefs. As such, they are impossible to measure, since the elicited information is the only available indicator of the decision maker's beliefs. Evidence from controlled experiments in which subjects use sample observations to assess probability distributions known to the experimenter, however, indicates that interview procedures can have an impact on responses (Hogarth, 1975). Furthermore, experiments designed to reveal how subjects assess probabilities rather than how well they assess them indicate that several common perceptual heuristics can cause problems in the assessment of probabilities.1 The introduction of biases into the encoding process can be minimized through the careful design of interview procedures.

________
1 Of these, the most notable are representativeness (Kahneman and Tversky, 1972), availability (Tversky and Kahneman, 1973), and anchoring (Tversky and Kahneman, 1974).

At the outset of the encoding interview, efforts should be made to gather and make note of all available information which may help the decision maker formulate his expectations. If a substantial data base of past values of the random variable being considered exists, and if the subject believes these data accurately reflect his expectations concerning future events, he may choose to let the historical data define his subjective probability distribution. This may be a reasonable procedure, for example, in the case of rainfall patterns.
In other instances, this cataloging of available information may reveal that the decision maker knows little about the variable being encoded. A decision must be made, then, as to whether more information should be sought from expert sources or whether encoding should proceed. This decision will depend on the cost of new information and on the sensitivity of system performance to fluctuations in the variable under consideration.

The purpose of an encoding interview is to construct a quantitative representation of the decision maker's subjective probability distribution for a particular variable. In general, this representation takes the form of a cumulative distribution function, such as that in Figure 3.1.1 Moments are sometimes used to describe subjective distributions, but most subjects find it difficult to translate probabilistic beliefs directly into statements of the moments of a particular distribution.

________
1 For any value x* of the random variable X, the corresponding value of the cumulative distribution function, F(x*), is the probability that a sample observation of X will have a value less than or equal to x*.

Figure 3.1 A Cumulative Distribution Function

Several types of questions can be used to elicit information on a decision maker's expectations. Spetzler and Stael von Holstein (1975) classify elicitation techniques according to encoding method and response mode. They identify three encoding methods:

1. Those which require the assessor to specify a probability level while the value of the random variable is fixed--i.e., the respondent indicates the probability that the variable X will fall below x*.

2. Those which require the assessor to specify a value of the random variable while the probability level is fixed--i.e., the respondent indicates a value of the variable X, x*, such that the probability X will fall below x* is equal to a specified probability level.

3. Those in which the assessor specifies both a value of the random variable and a probability value associated with it. In effect, this is done when historical data are said to reflect the decision maker's beliefs concerning future events.

Two response modes are identified: direct and indirect. Under the direct response mode the assessor is asked to explicitly specify values of the random variable or probability levels which define points on the cumulative distribution function. Under the indirect mode the assessor is asked to indicate which of two or more bets he prefers. One of the bets serves as a reference which allows a probability assessment to be inferred from the assessor's response.

Several types of elicitation techniques may be used in a single encoding interview. Questioning might begin with direct response mode questioning to determine extreme values of the variable to be encoded. Indirect response mode questioning can then be used to determine a number of points on the cumulative distribution function. Finally, direct response mode questions can be asked to determine probability quartiles of the distribution which can be used as a consistency check.

At the completion of the encoding process a number of points on the cumulative distribution function of the decision maker's subjective probability distribution for the variable under consideration have been identified.
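One simple way to turn such a set of points into a usable distribution is to interpolate linearly between them and sample by inverse transformation. The sketch below does this in Python with hypothetical elicited points; it is offered as an illustration of the mechanics, not as the procedure of any particular encoding protocol.

    import bisect
    import random

    # A piecewise-linear CDF through hypothetical elicited points
    # (value, cumulative probability), sampled by inverse transformation.
    points = [(60.0, 0.0), (90.0, 0.25), (110.0, 0.50),
              (130.0, 0.75), (170.0, 1.0)]        # (x, F(x)) pairs

    def inverse_cdf(u):
        # Linear interpolation of the elicited CDF at probability u.
        probs = [p for _, p in points]
        i = bisect.bisect_left(probs, u)
        if i == 0:
            return points[0][0]
        (x0, p0), (x1, p1) = points[i - 1], points[i]
        return x0 + (x1 - x0) * (u - p0) / (p1 - p0)

    sample = [inverse_cdf(random.random()) for _ in range(20)]
    print(sorted(sample))

A random sample of this kind can also be shown to the decision maker as part of the verification step discussed below.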
Similarly, if historical data are used in lieu of subjective assessments to define the distribution, each observation can be assigned a position on a cumulative distribution function according to the following rule: "If a sample of n observations is drawn from some distribution and arranged in order of size, the kth observation is a reasonable estimate of the k/(n + 1) fractile of the distribution" (Schlaiffer, 1959, p. 104).

The cumulative distribution function must be defined for all possible values of the random variable, however, not simply at selected points. This is usually accomplished by sketching a smooth curve through the observed points, though if there are good reasons to believe the distribution of a random variable is from a particular family of distributions with cumulatives of a known form, regression techniques can be used. In either case the reliability of such a representation may be questioned, especially when only a few data points have been identified. Anderson (1974b) has investigated the estimational reliability of hand-sketched smoothed cumulative distribution functions based on sparse empirical data. The questions he poses are also relevant in connection with the construction of smoothed cumulative distribution functions from a small number of subjectively assessed points. His results indicate that, as expected, estimational reliability increases when a large number of observations are available, but they also show that in a surprisingly large number of cases a fairly good estimate of an underlying distribution can be made on the basis of only three to five observations. These results are somewhat encouraging, but they should also serve as a warning of the need in some cases to consider explicitly the inexactness of assessed probability distributions, as is done by Watson, Weiss and Donnell (1979).

The encoding interview should end with the decision maker's verification of the quantitative representation of his beliefs. This can be done by asking him to examine either the cumulative distribution function which has been constructed or a random sample drawn from the distribution it defines. More formal verification procedures involve the use of scoring rules (Winkler, 1969; Stael von Holstein, 1970; Savage, 1971), but as Hogarth (1975) notes, the usefulness of such rules is questionable in many situations.1

________
1 A scoring rule is a payoff function with the vector of stated probabilities for each of a set of mutually exclusive and exhaustive events and a vector of probabilities representing the decision maker's true beliefs being the arguments. If a scoring rule is strictly proper, it will be maximized when the stated probabilities coincide directly with the assessor's true beliefs.

Finally, it should be noted that the encoding procedures discussed above are designed for the determination of subjective probability distributions of random variables judged to be statistically independent of all other random factors under consideration. In many instances, however, random factors are not independent. On a particular farm, for example, yields for two crops such as corn and soybeans are likely to be highly correlated. Similarly, prices received for these two crops would not be expected to be statistically independent. In such a situation one of two alternative procedures can be followed: the process by which the correlated random variables are determined can be modelled back to the point where all stochastic exogenous factors can be assumed to be independent, or the decision maker's joint probability distribution for the correlated variables can be assessed directly. Neither alternative is particularly attractive. Modelling can be costly, and as the model becomes more complex the number of variables to be encoded may increase rapidly.
Furthermore, while the decision maker may have well formulated expectations about many of the non-independent random variables, he may have little or no knowledge of the statistically independent underlying variables in the more extensive model. Encoding of joint probability functions, on the other hand, is a difficult, time-consuming process. Experimental results indicate that many subjects are ill-equipped for the assessment of correlations between random variables (Chapman, 1967; Tversky and Kahneman, 1973). Therefore, elicited distributions may simply reflect poorly formulated expectations.

When joint specification rather than more extensive modelling is the preferred course of action, it is advisable to rely on historical data whenever possible. For example, if distributions for rainfall and daylight hours without cloud cover over a particular two-week period are to be assessed, past weather data could probably be used to represent most decision makers' expectations. Similarly, yield data for several crops over an extended time period, if corrected for time trends and other identifiable factors, may provide adequate information to construct a marginal distribution for each crop and to estimate correlation coefficients between crops. Even in the case of crop prices, for which past experience may not be relevant in the formulation of each marginal distribution, it may be possible to use historical data to estimate correlation coefficients which could be used in conjunction with marginal distributions determined by other methods.

In cases where historical data are not available or are considered to be irrelevant, joint specification of bivariate subjective probability distributions can be accomplished by encoding one of the marginal distributions and then encoding conditional distributions for the second variable at several values of the first. Anderson, Dillon, and Hardaker (1977) describe this procedure in some detail and explain how it can be extended to cases where more than two correlated variables are to be encoded.

3.3 Modelling Stochastic Processes

The encoding techniques reviewed in the preceding section are used to elicit direct assessments of subjective probability distributions. In most decision situations, however, such direct probability assessments can be made only for the distributions of exogenous system input variables. They usually cannot be made for the distributions of system output variables, the distributions of primary concern in the evaluation of alternative choices. Rather, these distributions, which depend on complex interactions among a number of factors, must be determined indirectly by modelling system performance. A model is a deterministic mathematical representation of the set of processes by which controllable and exogenous system inputs determine system output levels. As will be demonstrated below, given information about the levels of controllable system inputs which define a particular strategy and information about the probability distributions of exogenous system inputs, a model can be used to determine the associated probability distribution of system output levels.
Even in situations where the distributions of system output variables associated with alternative strategies can be assessed directly, a model of the system under consideration can be useful for several reasons. First, it can increase the decision maker's understanding of the set of processes which determine the outcome of any strategy, since modelling can be viewed as a learning activity. Second, by providing a logical representation of the processes which comprise a system, a model allows the decision maker to focus his attention on the formulation and representation of expectations about future levels of individual exogenous system input variables. He need not consider all such factors and their interactions with other determinants of system performance simultaneously. Finally, if the number of alternatives being considered is large relative to the number of exogenous system input variables, the use of a model to determine system output variable distributions can significantly reduce the number of probability distributions which must be encoded, since only the distributions of exogenous system input variables must be assessed.1

The exact nature and complexity of the model used in any particular decision analysis will depend on the characteristics of the problem under consideration. In some instances the appropriate model may be quite simple. For example, if the set of controllable inputs is defined by v, a column vector of acreage levels for each of several crops; if the set of exogenous system inputs is defined by e, a column vector of net returns per acre for each crop activity; and if the total net return for all crop activities, y, is the only system output variable of concern, the appropriate model of this system may simply be the following linear equation:

    y = e'v                                                      3.1

In other cases, much more complex models may be required to adequately represent the relationships among system inputs and system outputs. This is true, especially, when the system under consideration is dynamic and when strategies are defined by feedback control rules as well as by fixed specifications of controllable system input levels.2 Specific conceptual and quantitative tools used in systems modelling will not be discussed in this study. Excellent discussions of such techniques can be found in Forrester (1961), Manetsch and Park (1977a), and Manetsch (1978).

________
1 The same probability assessments for these variables are used in the determination of system output distributions for each strategy considered. Therefore, exogenous system inputs must be truly exogenous--i.e., their levels must not be significantly affected by system performance.
2 See Forrester (1961), Cyert and March (1963), and Dent and Anderson (1971) for examples of more complex models.
a system model can be the basis for the analytical determination of the distribution of system outputs associated with any strategy being considered. Returning to the simple linear model specified in equation 3.1, for example, if each random factor in the vector of exogenous system inputs, e, is normally distributed, the distribution of total net revenue, y, is also normal with mean, u, and variance, 02, defined by the following expressions: 1See Forrester (1961), Cyert and March (1963), and Dent and Anderson (1971) for examples of more complex models. 59 p = m'v 02 = v'av 3.2 where m is a column vector of the expected net revenues for each crop activity and Q is the variance-covariance matrix for net returns. Anderson (1975) has shown that the distribution of y can also be determined analytically when each element of e has a Beta distribution. In situations where subjective probability distributions for exo- genous system inputs are not all members of the same family of distri- butions or where a more complex model is required to represent system performance, it may not be possible to analytically derive distributions for system outputs. In fact, when a model is particularly complex, it may not even be possible to calculate system outputs analytically for the special case when levels for all controllable and exogenous system inputs are known with certainty. In such instances, numerical simula- tion techniques and Monte Carlo methods are required to determine sys- tem output distributions. Manetsch and Park (1977b, p. 8-1) define simulation as "a tech- nique for obtaining particular time solutions of a mathematical model corresponding to specific assumptions regarding model inputs and values assigned to parameters." The model specified in equation 3.1 can provide the basis for a simple example of simulation. Consider the case in which 200 acres of land are to be planted and only two crop activities are possible. Let corn be crop 1 and soybeans be crap 2. If the controllable and exogenous system input vectors, v and e respectively, are defined as follows, _ 150 _ 95.00 V ‘E50 9 'Eioopo 3'3 60 then simulation of system performance for this particular case requires only that the following matrix multiplication be carried out: y = [ 95.00 100.00 J ['38 3.4 In this case, with 150 acres of corn and 50 acres of soybeans planted, total net revenue, y, is equal to $19,250. Systems of concern in most practical decision situations are larger and more complex than this one, and their simulation is generally more involved. Frequently numerical solution techniques are required. and many simulation models are computerized. One of the distinct advan- tages of simulation, however, is that it is a remarkably flexible pro- cedure which allows complex processes to be represented realistically. Naylor, et al. (1966), Schmidt and Taylor (1970), and Manetsch and Park (1977b) all provide excellent discussions of simulation techniques. Monte Carlo methods are commonly used in combination with simula- tion to model the performance of complex stochastic systems. Under this approach, numerical procedures are employed to generate sample observa- tions from the decision maker's subjective probability distributions for exogenous system input variables. Each sample vector, e*, specifies a level for each exogenous system input and, as such, defines a state of the system's environment. 
Monte Carlo methods are commonly used in combination with simulation to model the performance of complex stochastic systems. Under this approach, numerical procedures are employed to generate sample observations from the decision maker's subjective probability distributions for exogenous system input variables. Each sample vector, e*, specifies a level for each exogenous system input and, as such, defines a state of the system's environment. By constructing a large number of sample states of the environment and simulating the system performance associated with a particular strategy for each of these environmental states, a set of sample observations from the distribution of system outputs associated with that strategy is generated. These observations can be used to define a cumulative distribution function according to the procedures described in the discussion of encoding.1

________
1 In addition to Schlaiffer (1959), Mood and Graybill (1953) and Barnett (1975) also discuss the validity of the rule which states that, when N observations of a random variable are arrayed in increasing order, the Kth observation can be used as an estimate of the K/(N+1) fractile.

Returning once again to the simple example discussed above, let the joint subjective probability distribution of the two elements of the exogenous system input vector, e, be defined in the following manner. The marginal distribution of net revenues for corn is normal with mean $115.00 and standard deviation $35.00. That for soybeans is a member of the gamma family of distributions with a mean of $135.00, a standard deviation of $40.00, and a minimum value of $55.00. The correlation coefficient for the two net revenues is .75. Using Monte Carlo techniques described in detail in Appendix A, the following five sample vectors from this joint probability distribution were generated:

    e1 = [107.87]   e2 = [155.77]   e3 = [ 80.08]
         [107.43]        [128.99]        [ 92.23]

    e4 = [158.18]   e5 = [152.05]                                3.5
         [147.11]        [210.51]

They represent five sample states of the environment. Simulating system performance under the strategy defined by the controllable system input vector v' = [150  50], five sample observations from the distribution of net income levels associated with this strategy are determined: y1 = $21,552.00, y2 = $29,955.00, y3 = $15,523.50, y4 = $31,082.50, and y5 = $33,334.50. These five sample observations were used to construct the cumulative distribution function shown in Figure 3.2. In general, at least 20 sample environmental states should be simulated, and for many problems it may be desirable to simulate as many as 50 to 100 sample states.

Figure 3.2 A Cumulative Distribution Function Based on Five Sample Observations

Even this simple example demonstrates the power of this approach. Because the marginal distributions of the two net return variables are not of the same family, the distribution of net income levels cannot be derived analytically. The combination of Monte Carlo sampling techniques and simulation, however, permits the representation of the distribution. This same approach can be easily extended for use in the analysis of much more complex systems in which the interactions between controllable and exogenous system inputs are not so straightforward.2 It should also be noted that this method imposes no restrictions on the nature of the system inputs or system outputs. Subjective probability distributions for environmental factors can take a form which most closely reflects the decision maker's encoded beliefs. System output distributions are determined by the structure of the model, by the subjective probability distributions for exogenous system inputs, and by the management control strategy. When used in a decision analysis, these distributions can be described by their moments or by their cumulative distribution functions.

________
2 The example in the final section of this chapter demonstrates how this approach can be applied in the analysis of a more complex system.
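The construction behind Figure 3.2 can be reproduced in a few lines. The sketch below simulates net income for the five sample states of equation 3.5 and places the ordered observations at the k/(n + 1) fractiles cited earlier; only the Python standard library is assumed.

    # Simulate y = e'v for the five sample states of equation 3.5 and
    # assign each ordered observation the k/(n + 1) cumulative
    # probability, giving the points plotted in Figure 3.2.

    v = (150.0, 50.0)                  # corn and soybean acreage
    states = [(107.87, 107.43), (155.77, 128.99), (80.08, 92.23),
              (158.18, 147.11), (152.05, 210.51)]

    incomes = sorted(v[0] * corn + v[1] * soy for corn, soy in states)
    n = len(incomes)
    for k, y in enumerate(incomes, start=1):
        print(f"F({y:.2f}) = {k}/{n + 1}")   # fractiles 1/6 through 5/6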
One serious criticism of this approach to modelling stochastic processes is that statistical dependence between random environmental factors is often ignored (Anderson, 1974a). This may be due, in part, to difficulties associated with the joint specification of subjective probability distributions. Even when statistical dependence has been recognized in the encoding process, however, it is often ignored in stochastic system models due to a lack of available techniques for generating sample observations from multivariate probability distributions. Procedures have been developed for the generation of random variates from a wide range of univariate probability distributions (Naylor, et al., 1966; Schmidt and Taylor, 1970). Process generators have also been formulated for several multivariate distributions, most notably the multivariate normal and Wishart distributions (Naylor, et al., 1966; Newman and Odell, 1971). More recently, Coleman and Saipe (1977) have developed a procedure for generating serially correlated lognormal variates, which is a special case of more general procedures for modelling bivariate random variables with prescribed marginals and correlations (Coleman and Saipe, 1976). A need remains, however, for a generalized multivariate process generator which permits greater flexibility concerning the specification of marginal distributions and which has the capacity to be easily extended beyond the bivariate case.

Such a procedure has been developed as part of this study and is described in detail in Appendix A.1 This generalized multivariate process generator can be used to generate sample observations from multivariate distributions comprised of up to fifty random variables.2 The marginals of the distribution modelled can be of any form, and they need not all belong to the same family of distributions. It is assumed only that enough information is available on each marginal to construct its cumulative distribution function and that the correlation coefficient between each pair of variates within the distribution can be specified. The only restriction placed on the matrix of correlation coefficients is that it be positive-definite and symmetrical, a condition required for feasibility and internal consistency.

The existence of such a procedure greatly enhances the power of the approach to the modelling of stochastic processes described in this section. It permits greater realism in the representation of underlying probability distributions without requiring that the stochastic dependence between some random factors, which has an important impact on choices in many decision situations, be ignored.

________
1 Procedures used to generate sample observations from univariate distributions are also reviewed.
2 The program can easily be expanded to model processes with still more individual variates.
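The procedure itself is documented in Appendix A. As a rough indication of how a generator of this general kind can be built, the sketch below uses one widely known construction: draw correlated standard normal variates, transform them to uniforms through the normal cumulative distribution function, and invert each prescribed marginal. It is an illustration of the idea under assumed marginals taken from the example of the preceding section, not a restatement of the Appendix A algorithm, and the induced product-moment correlation only approximates the target after the nonlinear marginal transformations.

    import numpy as np
    from scipy import stats

    # Sketch of a multivariate generator with prescribed marginals:
    # correlated N(0,1) draws -> uniforms -> inverse marginal CDFs.
    rng = np.random.default_rng(1979)
    corr = np.array([[1.0, 0.75],
                     [0.75, 1.0]])     # must be symmetric positive-definite
    L = np.linalg.cholesky(corr)

    def sample_states(n):
        z = rng.standard_normal((n, 2)) @ L.T   # correlated normals
        u = stats.norm.cdf(z)                   # uniforms on (0, 1)
        corn = stats.norm.ppf(u[:, 0], loc=115.0, scale=35.0)
        # Soybeans: gamma with mean 135, s.d. 40, minimum 55. Above the
        # shift of 55 the mean is 80, so shape = (80/40)**2 = 4 and
        # scale = 40**2 / 80 = 20.
        soy = 55.0 + stats.gamma.ppf(u[:, 1], a=4.0, scale=20.0)
        return np.column_stack([corn, soy])

    e = sample_states(50)              # fifty sample states of the environment
    print(e.mean(axis=0), np.corrcoef(e, rowvar=False)[0, 1])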
3.4 An Application

The techniques introduced in this chapter can be applied to the cash grain farm example formulated in Chapter II. Crop yields for each planting-harvest period combination, product prices, and time available for fieldwork have been identified as stochastic factors which have an important impact on the level of income realized by the farm operator. In this section the specification of subjective probability distributions for each of these exogenous system inputs and the use of simulation to determine the impact of these random factors on the outcomes associated with particular management strategies are discussed.

Because the case farm used in this example is a synthetic one, no actual decision maker has been identified. Therefore, experts in the Department of Agricultural Economics at Michigan State University were relied on for the assessment of probability distributions for the stochastic factors. This is not altogether unrealistic, since in many cases actual decision makers choose to rely heavily on the opinions of experts in the formulation of their expectations.

The assessment of probability distributions for yields was based in part on historical data and in part on more subjective information. Estimates of expected corn and soybean yields for each planting-harvest combination are given in Tables 3.1 and 3.2.1 These estimates are based on figures used in Telplan Program #18, a commonly used decision aid which focuses on choices similar to those being analyzed in this example. They are the product of a group assessment of experimental data and the personal observations of experts. No estimates of variances or other features of these probability distributions were made by this group; nor did they assess the degree of correlation between yields for different crops and different planting-harvest combinations. Therefore, the following subjective assessments were made. All yield distributions were assumed to be normal, having means equal to those specified in Tables 3.1 and 3.2. Specification of the variances of these distributions was based on the assumption that the coefficient of variation for all corn yields is 11 percent and that for all soybean yields is 12.5 percent.2 Concerning yield correlations, the correlation coefficient between any two corn yield distributions or any two soybean yield distributions was set equal to .90, while that between any pair of distributions comprised of one corn yield distribution and one soybean yield distribution was set at .80.3 These parameters--means, coefficients of variation, and correlation coefficients--define a multivariate normal distribution. Since there are thirty-four individual yield distributions, eighteen for corn and sixteen for soybeans, this multivariate distribution is comprised of thirty-four random variables. One additional characteristic was specified for this set of distributions.

[Table 3.1, Average Corn Yield and Moisture Content by Planting and Harvest Date, and Table 3.2, Average Soybean Yield by Planting and Harvest Date (source: Black, et al., no date), are not recoverable from the scan.]

1Base yields of 100 bu/acre for corn and 33 bu/acre for soybeans are assumed.

2The coefficient of variation, C, is defined by the expression C = σ/μ, where σ is the standard deviation of a distribution and μ is its mean. If the mean and coefficient of variation are known, the variance of that distribution, σ², is defined by the expression σ² = (Cμ)².

3The correlation coefficient for any two random variables x and y is defined by the expression r = σxy/(σx σy), where σxy is the covariance between x and y and σx and σy are the respective standard deviations of x and y.
It was felt that the multivariate normal distribution, as specified, did not adequately account for the possibility of extremely low yields due to serious drought. Such conditions occur in southeastern Michigan about one year in twenty. Therefore it was specified that in any year there is a .05 probability that drought conditions will prevail and that corn yields will be one-half and soybean yields two-thirds of what they would have been under more normal conditions.

Assessments of probability distributions of time available for fieldwork in each planting or harvest period were based on the information presented in Table 3.3, which is also based on figures used in Telplan Program #18. All distributions of time available for fieldwork were assumed to be members of the Beta family of distributions. This assumption was made because of the flexibility of Beta distributions and because, like the amount of time available for fieldwork, Beta distributions are bounded from above and below. The choice of parameters for each distribution was based solely on the information in Table 3.3, which is adequate to determine upper and lower bounds and one intermediate point on the cumulative distribution function. With the aid of tables of twentiles of the standard Beta distribution given in Pratt, Raiffa, and Schlaifer (1965), parameters were selected for each period according to the simple criterion that the cumulative distribution function should pass as close as possible to the single observed data point. These parameters are given in Table 3.4. No information on correlations between time available for fieldwork in different periods was available, but it was felt that correlations do exist between levels observed in adjacent or nearly adjacent time periods. Therefore the following assumptions were made. The correlation coefficient for the time available in any two adjacent periods was set at .5; that for periods separated by a single period was set at .3; and that for periods separated by two periods was set at .1. All other correlation coefficients were set at 0, including those between any planting period and harvest period.

Encoding procedures such as those outlined in Section 3.2 above were used to elicit the author's own subjective probability distributions for corn and soybean prices.

Table 3.3 Percent of Time Available for Fieldwork by Calendar Period for Well Drained Sandy Loam Soils in the Lenawee, Monroe, Livingston County Area

    Period                    Calendar Days    Percentage^a
    April 25-May 10                16              50
    May 11-18                       8              37
    May 19-26                       8              65
    May 27-June 3                   8              70
    June 4-11                       8              70
    June 12-19                      8              70
    September 27-October 3          7              53
    October 4-10                    7              53
    October 11-17                   7              53
    October 18-November 7          21              33
    November 8-28                  21              14

    ^a The probability that the percentage of days available for fieldwork will be less than this value is .3.
    Source: Black, et al. (no date).
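The selection criterion just described can be made concrete with a short Python check, in which a direct evaluation of the Beta cumulative distribution function stands in for the twentile tables of Pratt, Raiffa, and Schlaifer. The period used is May 11-18, whose observed point comes from Table 3.3 above and whose fitted parameters appear in Table 3.4 below. A single observed point does not determine the parameters uniquely; the criterion only screens candidate pairs.

    from scipy import stats

    def distance_to_point(a, b, x, p=0.3):
        # Gap between the standard Beta(a, b) CDF at the standardized
        # observed value x and the target cumulative probability p.
        return abs(stats.beta.cdf(x, a, b) - p)

    # May 11-18: 37 percent of the time available at the .3 fractile, so
    # the standardized data point is (0.37, 0.3); Table 3.4 reports
    # alpha = 7, beta = 9 for this period.
    print(distance_to_point(7, 9, x=0.37))   # small: the CDF nearly passes through the point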
Table 3.4 Beta Distribution Parameters^a for Time Available for Fieldwork by Calendar Period

    Period             Lower    Upper                       Standard
                       Bound    Bound    α     β    Mean    Deviation
    April 25-May 10      0       192     9     7    108       23.1
    May 11-18            0        96     7     9     42       11.5
    May 19-26            0        96    14     6     67        9.6
    May 27-June 3        0        96    12     4     72       10.1
    June 4-11            0        96    12     4     72       10.1
    June 12-19           0        96    12     4     72       10.1
    Sept. 27-Oct. 3      0        84    10     7     49        9.7
    October 4-10         0        84    10     7     49        9.7
    October 11-17        0        84    10     7     49        9.7
    Oct. 18-Nov. 7       0       252     6     9    101       30.9
    November 8-28        0       252     3    12     50       25.2

    ^a The density function of the standard Beta distribution is given by the expression

        f(x) = [Γ(α+β)/(Γ(α)Γ(β))] x^(α-1) (1-x)^(β-1)    for 0 ≤ x ≤ 1.

[The discussion of the elicited corn and soybean price distributions, together with Table 3.5, Beta Distribution Parameters for the Corn and Soybean Price Distributions, is not recoverable from the scan.]

Two final points should be made concerning the assessment of subjective probabilities for the analysis of decisions made under uncertainty. First, we note that there is a lack of available material which can serve as an aid in the assessment of probability distributions. The need for such information is generally not considered when agronomic experiments are designed or when outlook information is reported. When one considers the degree of uncertainty experienced by agricultural producers, however, and the impact of this uncertainty on the decision process, the need for more expert assessments of probability distributions is evident. Second, it should be noted that decisions must be made even in the absence of reliable information upon which to base probability assessments. The costs of obtaining additional information must be weighed against the possible benefits. Further refinements should not be made in subjective probability assessments beyond the point at which the decision maker and the analyst believe the farmer's expectations are reasonably well represented.

To this point, probability distributions for exogenous system inputs judged to have a significant impact on system performance, as measured by net cash income, have been specified. The task of actually determining the effect these factors have on the distribution of net cash income levels remains. This requires the application of the Monte Carlo simulation techniques outlined above in Section 3.3. For any management strategy under consideration, net cash income levels are determined for a number of randomly selected states of nature. These levels are viewed as sample observations from the distribution of net cash income levels under the particular management strategy. When arrayed in order of increasing magnitude they serve as the basis for the construction of the cumulative distribution function of that distribution.

In this example, it will be recalled, a management strategy is defined by levels of control variables indicating acreage rented, acreage to be planted in corn, and acreage to be planted in soybeans and by a single feedback control rule parameter indicating the date after which all unplanted acreage is to be planted in soybeans. A state of the environment is defined by a set of specified values for all exogenous system inputs--by values for all non-stochastic environmental factors and by one sample observation from the multivariate probability distribution comprised of crop yields, time available for fieldwork, and product prices.

A simple simulation model was specified to determine the level of net income realized under any particular managerial strategy in a given state of the environment. The simulation begins with the computation of charges for land rental, if any.
Subject to time available for fieldwork, the model then simulates the planting of corn until the specified corn acreage is attained or until the date after which all remaining acreage is to be planted in soybeans. Planting of soybeans then proceeds until all acreage is planted or until June 19, the final day of the last planting period. Throughout the planting process, system state variables indicating the acreage of each crop planted in each planting period are repeatedly updated. There is no assurance that all available acres will be planted in a particular state of nature; this depends on levels of time available for fieldwork. Costs for seed, fertilizer, herbicides, and fuel are incurred for each acre actually planted.

Harvesting is simulated in a similar manner. Subject to time available, soybeans are harvested as quickly as possible, with acreage planted first being harvested first. This continues until all planted soybean acreage is harvested or until November 8, the date after which unharvested acreage is judged to be a total loss. The harvest of corn then begins, again with acreage planted first being harvested first. This process continues until all planted corn acreage is harvested or until November 28, the last day of the final harvest period. Again, there is no assurance that all acres planted will be harvested.1 All harvested acreage is classified according to crop, planting period, and harvest period, and system state variables indicating the number of acres in each category are repeatedly updated. The values of these variables are multiplied by corresponding crop yields for each planting-harvest combination to determine the total number of bushels of each crop harvested. Drying and hauling costs are assessed for each bushel harvested. Finally, receipts from crop sales are determined by multiplying the number of bushels of each crop harvested by the relevant price, and net cash income is computed by subtracting costs incurred and debt repayment commitments from the sum of crop receipts and off-farm income.

This model was used to determine net cash income levels realized in twenty randomly selected states of nature for each of the two management strategies defined in Table 3.6. Levels for stochastic factors in each state of nature were generated using the Monte Carlo procedures described in Appendix A. In effect each state can be viewed as a sample observation from the combined multivariate probability distribution of prices, yields, and days available for fieldwork. Net income levels realized under Strategy 1 are given in the first column of Table 3.7.

1Nothing is recovered from unharvested soybean acreage, but yields equal to one-half those realized for corn planted in the fourth planting period and harvested in the final harvest period are assumed to be recoverable on unharvested corn acreage.

Table 3.6 Two Possible Management Strategies

    Control                            Strategy
    Variable                        1           2
    Land Rental                     0         240
    Corn Acreage                    0         180
    Soybean Acreage               240         300
    Stopping Date for          June 3      May 26
      Corn Planting

[Table 3.7, System Performance Under Strategy 1, reporting for each of the twenty sample states of nature the net cash income together with prices, average yields, and acres planted and harvested for each crop, is not recoverable from the scan.]
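Before turning to the results, the planting logic just described can be sketched in Python. The hours-per-acre figure and the use of each period's mean hours are hypothetical stand-ins for the model's actual coefficients and sampled values, and the function name is illustrative; the feedback control rule--switch all remaining acreage to soybeans after the stopping date--is the one defined for the strategies of Table 3.6.

    def plant(hours_by_period, corn_target, total_acres, stop_after_period,
              hours_per_acre=0.4):
        # Plant corn until its target acreage is reached or the stopping
        # period has passed; plant soybeans with any remaining capacity.
        corn = beans = 0.0
        for period, hours in enumerate(hours_by_period, start=1):
            capacity = hours / hours_per_acre
            if period <= stop_after_period and corn < corn_target:
                acres = min(capacity, corn_target - corn)
                corn += acres
                capacity -= acres
            beans += min(capacity, total_acres - corn - beans)
        return corn, beans

    # Strategy 2: 180 acres of corn, 480 acres in all, corn planting stopped
    # after May 26 (the third of the six planting periods).
    mean_hours = [108, 42, 67, 72, 72, 72]   # the Table 3.4 planting-period means
    print(plant(mean_hours, corn_target=180, total_acres=480, stop_after_period=3))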
Information on soybean prices and average yields per acre is also given. If this strategy is selected by the operator of the farm in our example, the expected level of net cash income is rather low--only $3,816. The probability is .3 that net cash income will be negative, but there is only a probability of .05 that losses will exceed $10,000. On the other hand, there is a probability of .25 that net cash income will exceed $10,000 and a probability of .05 that it will exceed $20,000.

Net income levels realized under Strategy 2 are given in Table 3.8. Under this strategy, which calls for the rental of 240 acres and a more balanced crop mix, the expected level of net cash income is much higher--$10,798. The probability is .6 that net cash income will exceed $10,000 under this strategy and .3 that it will exceed $20,000. The probability of realizing a negative net income level is .25, which is less than under Strategy 1. When losses occur, however, they tend to be substantial, and there is a .15 probability that net cash income will be less than -$10,000. The other information in Table 3.8 demonstrates how the management strategy is revised by the feedback control rule. In ten of the twenty states of nature less than 180 acres of corn are planted because of the stipulation that all unplanted acreage be planted in soybeans after May 26. The information in this table also demonstrates that when the number of acres cultivated reaches this high a level, there is no assurance that all available acreage will be planted or that all planted acres will be harvested.

The figures given in Tables 3.7 and 3.8 can be used to construct cumulative distribution functions of net cash income levels associated with each of the two strategies. These are shown in Figure 3.3.

[Table 3.8, System Performance Under Strategy 2, reporting the same information as Table 3.7 for the second strategy, is not recoverable from the scan.]
[Figure 3.3, Cumulative Distribution Functions for the Two Alternative Strategies, plotting the two empirical distributions of net cash income, is not recoverable from the scan.]

The procedure for ordering alternative choices which is introduced in the next chapter requires that such a function be constructed for each strategy considered.

CHAPTER IV

THE MEASUREMENT OF DECISION MAKER PREFERENCES

4.1 Introduction

Choices made under uncertainty are affected by decision maker preferences for alternative outcomes as well as by subjective assessments of probability distributions of system outputs. When confronted with the choice between participation in two uncertain activities for which all possible outcomes and their probabilities are specified exactly, one decision maker may choose the first alternative while another may choose the second. This divergence in behavior cannot be attributed to a difference in subjective probabilities, since all relevant probabilities are specified prior to the time when a choice must be made. Rather, it must be attributed to a difference in the preferences of the two decision makers. Preferences, like assessments of probabilities in situations less highly structured than this example, are personal in nature, and some determination must be made of them in any applied decision analysis. This chapter examines procedures for eliciting information on decision maker preferences and techniques for combining this information with subjective probability assessments to identify preferred choices.

A decision maker's preferences can be represented quantitatively by a utility function,

    U = u(y)                                                4.1

This is simply a relationship between the outcome of a choice as represented by a vector of system output levels, y, and an index of its desirability, U. It is a relationship which assigns values to alternative situations or conditions. When combined with a decision rule, such as utility maximization, a utility function becomes the basis for the identification of a preferred course of action.

If the system output levels associated with a particular strategy can be known with certainty, and if the decision maker's utility function is also known, calculation of the utility level of this choice is a relatively simple, direct matter. In uncertain decision situations, however, levels of system outputs realized under a specified strategy cannot be known exactly at the time when a choice is made, and the associated level of utility cannot be determined directly. In such situations the expected utility hypothesis provides a way of assigning assessments of value to alternative choices.
First proposed in the eighteenth century by the mathematician Daniel Bernoulli to explain the gambling behavior of some decision makers in uncertain situations, and derived more formally nearly 200 years later by Ramsey (1931) and by von Neumann and Morgenstern (1944), the expected utility hypothesis states that for any decision maker whose preferences are complete, transitive, continuous, and independent:

(1) An ordinal utility function, u(y), can be constructed such that u(y) is defined for all system output levels and u(y1) > u(y2) if the outcome defined by y1 is preferred to that defined by y2.

(2) The utility of an uncertain prospect is equal to the expected utility of its possible outcomes.

(3) The scale of u(y) is arbitrary up to a positive linear transformation--i.e., rankings according to the utility function v(y) = a + bu(y) are identical to those according to u(y) if a and b are constants and b > 0.1

The second result is the key one, since it is the basis for the commonly used decision rule which states that a decision maker's preferred choice is that which maximizes his expected utility.

If the expected utility hypothesis is to be applied in a practical context, the decision maker's utility function must be represented accurately enough to serve as a reliable aid in the identification of a preferred course of action. As is true in the determination of subjective probability distributions, however, the degree of accuracy sought in the measurement of preferences depends largely upon the characteristics of the decision problem under consideration. In some instances a very precise measure of preferences may be required, while in others nearly all feasible alternatives can be eliminated from consideration on the basis of only an approximate measure of preferences.

In the remaining sections of this chapter several alternative approaches to the measurement and representation of decision maker preferences will be examined along with the evaluative criteria used in conjunction with each type of measurement to order action choices.2 Each of the approaches considered is based on the expected utility hypothesis, but each requires a different level of precision in the measurement of preferences. Techniques for the representation of decision maker preferences and for the identification of preferred choices based on the use of single valued utility functions are reviewed first. The concept of an efficiency criterion, which allows the partial ordering of possible alternatives on the basis of relatively unrestrictive assumptions about decision maker preferences, is then introduced and several commonly used criteria are examined.

1See Fishburn (1970) and Hirshleifer (1970) for more extensive discussions of the derivation of the expected utility hypothesis and for a complete explanation of the axioms which underlie it.

2Though the importance of preference measures based on more than one system output variable is recognized, the difficulty and cost of determining such measures preclude their use in most practical decision situations. The discussion below focuses entirely, then, on the measurement of preferences for outcomes which are adequately described by a single system output variable. See Keeney and Raiffa (1977) for an excellent discussion of the measurement of multidimensional preference relationships.
A more recently developed efficiency criterion, stochastic dominance with respect to a function (Meyer, 1977a), is then described, and a procedure for determining the interval measurements of decision maker preferences required for the application of this criterion is presented. This new measurement technique, developed as part of this study, permits the construction of a representation of decision maker preferences which is only as precise as the decision problem under consideration requires. Results of an empirical test of this procedure, which are also presented below, demonstrate that it is both accurate and flexible. In the final section of the chapter, the incorporation of information on preferences into the sample problem discussed in the preceding two chapters is examined.

4.2 The Use of Single Valued Utility Functions to Represent Decision Maker Preferences

Perhaps the most direct approach to the measurement of preferences is to actually derive the decision maker's utility function. This requires that a number of points on the utility function be determined by direct elicitation. A curve is then fitted through these points, and that curve is said to be the decision maker's utility function.

A utility function is a highly structured representation of a decision maker's preferences. Like probabilities, however, preferences are often not clearly formulated in the mind of the decision maker. Therefore, the interview procedures used to elicit information on preferences should be designed both to clarify and to structure the decision maker's assessments of value.

Several procedures have been developed for the elicitation of information on preferences. The most commonly used are reviewed in Officer and Halter (1968) and Anderson, Dillon, and Hardaker (1977). Each procedure requires that a series of choices be made between pairs of uncertain alternatives or between certain and uncertain alternatives. If these choices are properly structured, each should reveal enough information about the decision maker's preferences to determine one point on his utility function.

Once a set of data points has been elicited, a curve is fitted through them to obtain an explicit relationship between levels of utility and all relevant levels of the system output variable. In choosing a functional form, a number of factors should be considered. Goodness of fit is, of course, important, since the estimated utility function should conform as closely as possible to the information obtained in the elicitation interview. Ease of estimation and the tractability of a function in the calculation of expected utilities should also be considered. Polynomial specifications are, perhaps, the most commonly used in empirical work, but a number of other alternative forms have also been proposed (Lin and Chang, 1978).

Having estimated a decision maker's utility function, action choices can be ordered by calculating the expected utility of each. If the outcome associated with a particular management strategy, v*, is described by a single discrete system output variable, y, the expected utility of the strategy, EU(y|v*), is given by the expression:

    EU(y|v*) = Σ (i=1 to n) f(yi|v*)u(yi)                   4.2

where f(yi|v*) is the probability of the ith possible outcome under strategy v* and u(yi) is the utility of that outcome. When the system output variable, y, is continuous, the expected utility is defined by the expression:

    EU(y|v*) = ∫ f(y|v*)u(y)dy                              4.3

where f(y|v*) is the probability density function of y under strategy v* and the integral is taken over the entire range of y.
Consider, for example, the case in which the decision maker's utility function is of the form

    u(y) = ln(y)                                            4.4

and the probability distributions associated with two strategies, A and B, are as specified in Table 4.1. Since the system output variable is discrete, equation 4.2 can be used to calculate the expected utilities for these two alternatives. That for strategy A is

    EUA = .5 ln(500) + .1 ln(1000) + .1 ln(1500) + .3 ln(2000)    4.5
        = 6.81

while that for strategy B is

    EUB = .2 ln(500) + .6 ln(1000) + .1 ln(1500) + .1 ln(2000)    4.6
        = 6.88

For this decision maker, then, strategy B is preferred to strategy A.

Table 4.1 Probability Distributions Associated with Two Alternative Action Strategies

    System Output                 Probability
    Level                  Strategy A     Strategy B
      500                      .5             .2
     1000                      .1             .6
     1500                      .1             .1
     2000                      .3             .1

Representation of a decision maker's preferences with a single-valued utility function has several serious shortcomings in an applied decision analysis. With regard to preference measurement procedures, the hypothetical choices posed in the elicitation interviews are, in general, less complex and less interesting than those actually facing a decision maker. As a result, it may be difficult to hold the full attention of the respondent through a series of similar questions. Furthermore, it can be argued that, because the types of choices made during the elicitation interview bear little resemblance to those made in real life, the value of the interview itself as a learning process whereby the respondent can gain a better understanding of how he makes decisions is limited. Other problems encountered in the elicitation of the information required to construct single valued utility functions are discussed in Officer and Halter (1968).

Still more serious problems arise as a result of the way empirically estimated utility functions are generally used in a decision analysis. Once a set of data points from a decision maker's utility function has been elicited, a curve is fitted through its elements. Rare indeed is the case in which the fit is perfect, so that the parameter values of the utility function can be known with certainty. Even if the fit were perfect, shortcomings of the preference elicitation procedures make it likely that the data points themselves include measurement errors. Therefore, an empirically estimated utility function cannot be considered to be an exact representation of decision maker preferences. Despite the possible sources of imprecision, however, a utility function, once estimated, is usually treated as though it were an exact representation of preferences when alternative choices are ordered, and any absolute difference in the expected utilities associated with two choices is taken as a clear indication that one is preferred to the other. If a utility function does not accurately reflect a decision maker's actual preferences, this can result in the recommendation of a choice which is not actually the preferred choice of the decision maker. When empirically estimated single valued utility functions are used to order alternative choices, then, there is a high likelihood that errors of this sort will be made.
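The comparison in equations 4.5 and 4.6 is easy to reproduce; the following Python lines recompute both expected utilities directly from Table 4.1.

    import math

    outcomes = [500, 1000, 1500, 2000]
    prob_a = [0.5, 0.1, 0.1, 0.3]
    prob_b = [0.2, 0.6, 0.1, 0.1]

    # Expected utility under u(y) = ln(y), equation 4.2.
    eu_a = sum(p * math.log(y) for p, y in zip(prob_a, outcomes))
    eu_b = sum(p * math.log(y) for p, y in zip(prob_b, outcomes))
    print(round(eu_a, 2), round(eu_b, 2))   # 6.81 6.88: strategy B is preferred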
4.3 Efficiency Criteria and the Representation of Decision Maker Preferences

The difficulties associated with the use of single valued utility functions to order choices in a practical context have been the impetus for the development of several efficiency criteria which overcome some of the shortcomings identified above. An efficiency criterion is a preference relationship which provides a partial ordering of feasible action choices for decision makers whose preferences conform to certain rather general specifications. As such, an efficiency criterion can be used to eliminate some feasible choices from consideration without requiring detailed information about the decision maker's preferences. In many instances, the use of such a criterion may greatly reduce the number of alternatives to be considered. If enough alternatives can be eliminated, it may be possible for a final choice to be made on the basis of direct comparisons of the distributions of outcomes associated with each of the remaining alternatives.

First and second degree stochastic dominance are among the simplest and most commonly used efficiency criteria. Both were formulated independently by Hadar and Russell (1969) and Hanoch and Levy (1969). First degree stochastic dominance holds for all decision makers who prefer more of the system output to less--i.e., for all decision makers having positive marginal utility with respect to the system output variable. An alternative for which the associated distribution of the system output variable is described by the cumulative distribution function F(y) is preferred to a second alternative with associated cumulative distribution G(y) by the criterion of first degree stochastic dominance if

    F(y) ≤ G(y)                                             4.7

for all possible levels of y and if the inequality in 4.7 is a strict inequality for at least some value of y. In Figure 4.1, for example, F(y) dominates G(y) by this criterion, since it is always below and to the right. Neither F(y) nor G(y) can be ordered with respect to H(y) according to this criterion, since both are strictly greater than H(y) for some system output levels.

[Figure 4.1, Illustrations of First and Second Degree Stochastic Dominance, showing the cumulative distributions F(y), G(y), and H(y), is not recoverable from the scan.]

While first degree stochastic dominance holds, in effect, for all decision makers, second degree stochastic dominance places an additional restriction on preferences. It requires that the marginal utility of the system output variable be both positive and decreasing--i.e., it requires that the decision maker's utility function be concave. Given two alternatives having system output distributions defined by the cumulative distribution functions F(y) and G(y), respectively, the first alternative is preferred to the second under the criterion of second degree stochastic dominance if

    ∫(-∞ to y) F(x)dx ≤ ∫(-∞ to y) G(x)dx                   4.8

for all possible values of y and if the inequality in 4.8 is a strict inequality for at least some value of y. In effect, this means that the first alternative dominates the second if the area under the cumulative F(y) is always less than or equal to that under G(y). In Figure 4.1, for example, F(y) dominates both G(y) and H(y) by this criterion, since the area under its cumulative is less than that under either of the others at all values of y. G(y) and H(y) cannot be ordered by this criterion, however, since the area under H(y) is at times less than that under G(y) and vice versa.
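Both criteria reduce to simple comparisons once the two cumulative distribution functions have been tabulated on a common grid of system output levels. A minimal Python sketch follows, with two hypothetical uniform distributions standing in for F(y) and G(y); the helper names are illustrative.

    import numpy as np

    def fsd(F, G):
        # First degree dominance of F over G: F <= G everywhere,
        # with strict inequality somewhere (equation 4.7).
        return bool(np.all(F <= G) and np.any(F < G))

    def ssd(F, G, grid):
        # Second degree dominance: compare running areas under the two
        # CDFs, accumulated by the trapezoid rule (equation 4.8).
        def running_area(C):
            steps = (C[:-1] + C[1:]) / 2 * np.diff(grid)
            return np.concatenate([[0.0], np.cumsum(steps)])
        area_f, area_g = running_area(F), running_area(G)
        return bool(np.all(area_f <= area_g) and np.any(area_f < area_g))

    grid = np.linspace(0.0, 20.0, 201)
    G = np.clip(grid / 10.0, 0.0, 1.0)           # uniform on [0, 10]
    F = np.clip((grid - 2.0) / 10.0, 0.0, 1.0)   # uniform on [2, 12]
    print(fsd(F, G), ssd(F, G, grid))            # True True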
Other efficiency criteria depend on additional restrictions on the decision maker's preferences or on the nature of the probability distributions of system outputs. The mean-variance efficiency criterion (Markowitz, 1959) is simply a special case of second degree stochastic dominance in which all probability distributions are normal. Third degree stochastic dominance (Whitmore, 1970) is similar to first and second degree stochastic dominance, but it requires the additional assumption that the decision maker's utility function have a positive third derivative with respect to the system output variable.

Once a particular criterion with its associated restrictions on preferences has been specified, an ordering of any two alternatives can be made strictly on the basis of properties of the two associated probability distributions of the system output variable. Under such an ordering, one alternative will dominate the other, or the criterion will not be able to order the two alternatives and both will be considered efficient. If one alternative does dominate the other, it is unanimously preferred by the class of decision makers for whom the criterion applies. By making a series of pair-wise comparisons of all alternatives under consideration and eliminating from consideration any alternative which has been dominated, an efficient set of choices can be determined for any finite set of alternatives. This set will contain the preferred choice of any member of the class of decision makers for whom the criterion applies.

The use of an efficiency criterion to order alternative choices is, in many respects, preferable to the use of a single valued utility function. No direct measurements of preferences need be made. Rather, relatively easily accepted restrictions are simply imposed on the decision maker's preferences. Unfortunately, however, none of the efficiency criteria mentioned above is a particularly discriminating evaluative tool. In an application of second degree stochastic dominance by Anderson (1975), for example, twenty of forty-eight randomly generated farm plans were in the efficient set. Furthermore, though the restrictions on preferences required by most efficiency criteria do not appear to be unduly strict, they often run counter to empirical evidence. Again focusing attention on second degree stochastic dominance, despite the fact that strong theoretical arguments have been made for the near universality of concave utility functions (Arrow, 1971), the weight of empirical evidence indicates that decision makers do at times exhibit increasing marginal utility (Officer and Halter, 1968; Conklin, Baquet, and Halter, 1977).

While the concept of an efficiency criterion is an attractive one, then, efficiency criteria have not proved to be useful tools in practice. There is a need for efficiency criteria which are both more flexible and more discriminating than those described above. Furthermore, there is a need for techniques for obtaining measures of decision maker preferences which, though less precise than those used to construct a single-valued utility function, facilitate the empirical determination of whether or not a particular efficiency criterion adequately represents the preferences of a decision maker. In the sections which follow, a more powerful efficiency criterion, stochastic dominance with respect to a function (Meyer, 1977a), is introduced, and a method for measuring decision maker preferences designed to be used in conjunction with this criterion is presented.

4.4 Stochastic Dominance with Respect to a Function

Stochastic dominance with respect to a function is an evaluative criterion which orders uncertain action choices for classes of decision makers defined by specified lower and upper bounds, r1(y) and r2(y), on the absolute risk aversion function.
The absolute risk aversion function (Arrow, 1971; Pratt, 1964), r(y), is defined by the expression

    r(y) = -u''(y)/u'(y)

where u'(y) and u''(y) are the first and second derivatives of a von Neumann-Morgenstern utility function u(y). In the most abstract terms, values of the absolute risk aversion function are simply local measures of the degree of concavity or convexity exhibited by a decision maker's utility function. Since u'(y) is assumed to be positive, a positive value of r(y) implies a negative value of u''(y), which in turn implies a concave utility function. Similarly, the utility function is convex at y if r(y) is negative. As such, the absolute risk aversion function also serves as a local indicator of the extent to which a decision maker is risk averse or risk loving. Following Arrow's (1971) definition, an individual is risk averse (loving) if, from a position of certainty, he is unwilling (willing) to take a bet which is actuarially fair (unfair).1 Concavity of the utility function and risk aversion are synonymous under this definition, and both are implied by a positive value of r(y). A negative value of r(y) implies both local convexity of the utility function and risk loving behavior. Perhaps the most important property of the absolute risk aversion function, however, is that it is a unique measure of preferences, while a utility function is unique only up to a positive linear transformation.2

In effect, then, upper and lower bounds on a decision maker's absolute risk aversion function define an interval measurement of his preferences. Stochastic dominance with respect to a function orders choices on the basis of such a measurement. The major advantage of this criterion is that it imposes no restrictions on the width or shape of the relevant region of risk aversion space. The interval measurement can be as precise or imprecise as is deemed necessary for a particular decision analysis. Negative as well as positive levels of absolute risk aversion can lie within the risk aversion interval at some or all levels of system output. Less flexible efficiency criteria such as first and second degree stochastic dominance can be viewed as special cases of this more general criterion. The requirement under first degree stochastic dominance that the decision maker have positive marginal utility places no restrictions on the decision maker's absolute risk aversion function--i.e., r1(y) = -∞ and r2(y) = +∞ for all possible values of y. The requirement under second degree stochastic dominance that marginal utility be decreasing as well as positive, on the other hand, implies that r1(y) = 0 and r2(y) = +∞ for all values of y.

1Arrow's definition of risk aversion has been the source of some confusion, since risk aversion and risk preference have often been equated with an aversion to and a love for gambling. Unless some measure of the degree of gambling associated with a particular choice is identified as a system output and included as an argument in a decision maker's utility function, however, his choices are, by the omission of this factor, assumed to be unaffected by the degree of gambling involved. Arrow's concept of risk aversion refers only to the characteristics of a utility function with a single argument. As Friedman and Savage (1948) demonstrate, such a utility function can be used to explain why gambling has utility or disutility in certain situations without requiring that preferences for gambling per se be measured.

2Because a utility function is unique only up to a positive linear transformation, u(y) and u*(y) = a + bu(y), b > 0, are strategically equivalent, though perhaps highly dissimilar, utility functions. The absolute risk aversion functions of these two utility functions are identical, however:

    -u*''(y)/u*'(y) = -bu''(y)/bu'(y) = -u''(y)/u'(y) = r(y)
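The uniqueness property noted in the footnote can be checked symbolically. A small Python (sympy) sketch with u(y) = ln(y) and its positive linear transformation follows; the function name is illustrative.

    import sympy as sp

    y, a, b = sp.symbols("y a b", positive=True)

    def absolute_risk_aversion(u):
        # r(y) = -u''(y)/u'(y)
        return sp.simplify(-sp.diff(u, y, 2) / sp.diff(u, y))

    u = sp.log(y)
    u_star = a + b * sp.log(y)             # a positive linear transformation
    print(absolute_risk_aversion(u))       # 1/y
    print(absolute_risk_aversion(u_star))  # 1/y as well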
More formally stated, stochastic dominance with respect to a function is a criterion which establishes necessary and sufficient conditions for the distribution of system outputs defined by the cumulative distribution function F(y) to be preferred to that defined by the cumulative distribution function G(y) by all agents whose absolute risk aversion functions lie everywhere between lower and upper bounds r1(y) and r2(y). As developed by Meyer (1977a), the solution procedure requires the identification of a utility function u0(y) which minimizes

    ∫(0 to 1) [G(y) - F(y)]u'(y)dy                          4.9

subject to the constraint

    r1(y) ≤ -u''(y)/u'(y) ≤ r2(y)    for all y in [0, 1].   4.10

The expression in equation 4.9 is equal to the difference between the expected utilities of system output distributions F(y) and G(y).2 If, for a given class of decision makers, the minimum of this difference is positive, F(y) is unanimously preferred to G(y). If the minimum is zero, it is possible for an agent in the relevant class of decision makers to be indifferent between the two alternatives and they cannot be ordered. Should the minimum be negative, F(y) cannot be said to be unanimously preferred to G(y). In this case, the expression

    ∫(0 to 1) [F(y) - G(y)]u'(y)dy                          4.11

1The range of system outputs is normalized so that all values of y fall on the bounded interval [0, 1].

2This can be demonstrated in the following manner. Let f(y) and g(y) be the probability density functions associated with F(y) and G(y). Then

    ∫(0 to 1) f(y)u(y)dy - ∫(0 to 1) g(y)u(y)dy = ∫(0 to 1) [f(y) - g(y)]u(y)dy

is the difference between the expected utilities associated with the two distributions. Integrating by parts,

    ∫(0 to 1) [f(y) - g(y)]u(y)dy = [F(y) - G(y)]u(y), evaluated from 0 to 1,
                                    - ∫(0 to 1) [F(y) - G(y)]u'(y)dy
                                  = ∫(0 to 1) [G(y) - F(y)]u'(y)dy

since F(0) = G(0) = 0 and F(1) = G(1) = 1.

[The pages setting up the numerical example--the two distributions, the bounds r1 = .001 and r2 = .002, and Figure 4.2--are not recoverable from the scan, nor is Figure 4.3, A Graph of the Function G(y) - F(y), plotted over system output levels from 0 to 10,000.]

until the point where y = 4000. At this point

    ∫(4000 to 4382) [G(y) - F(y)]u0'(y)dy
        = ∫(4000 to 4382) (-1/3)(.001)e^(-.001y)dy + .00194 = 0.   4.14

Therefore, the optimal control switches to r(y) = .002 for values of y less than 4000. The procedure continues back with the same optimal control, and the value of the objective function is

    ∫(0 to 4382) [G(y) - F(y)]u0'(y)dy = .00528.            4.15

Since the value of the objective function is positive, distribution F(y) is preferred to G(y) by all decision makers whose absolute risk aversion functions lie everywhere between r1 = .001 and r2 = .002. The utility function which minimizes the objective function has an absolute risk aversion function such that

    r(y) = .002 when y ≤ 4000                               4.16
         = .001 when y > 4000

Note that this utility function does not have constant absolute risk aversion.
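Meyer's optimal control solution is implemented in the Appendix B program and is not reproduced here. A much cruder numerical screen can be sketched in Python: for utility functions with constant absolute risk aversion r, so that u'(y) = e^(-ry), evaluate the objective of equation 4.9 at several points in [r1, r2]. A negative value at any point rules out unanimous preference, while uniformly positive values are suggestive but not conclusive, since the minimizing utility function may, as in the example above, switch between the bounds. The cumulative distributions below are hypothetical stand-ins for F(y) and G(y).

    import numpy as np

    def objective(F, G, grid, r):
        # Trapezoid-rule approximation to the integral of [G(y)-F(y)]u'(y)
        # with u'(y) = exp(-r*y), the marginal utility of a constant
        # absolute risk aversion utility function.
        integrand = (G - F) * np.exp(-r * grid)
        return float(np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(grid)))

    grid = np.linspace(0.0, 10000.0, 2001)
    F = np.clip(grid / 8000.0, 0.0, 1.0)   # hypothetical: uniform on [0, 8000]
    G = np.clip(grid / 6000.0, 0.0, 1.0)   # hypothetical: uniform on [0, 6000]
    print([round(objective(F, G, grid, r), 5) for r in (0.001, 0.0015, 0.002)])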
Stochastic dominance with respect to a function is a remarkably flexible evaluative criterion which has considerable potential for use in the analysis of practical decision problems. Unlike other efficiency criteria, it does not require that fixed restrictions be imposed on the representation of the decision maker's preferences; and, because the bounds on absolute risk aversion can be as close together or as far apart as desired, stochastic dominance with respect to a function can be used to order more choices than can be ordered with other criteria. Unlike a single valued utility function, it does not require that an exact representation of the decision maker's preferences be specified. Furthermore, stochastic dominance with respect to a function is relatively easy to apply. A computer program which implements the solution procedure defined above has been developed by Meyer, and a modified version of that program is presented in Appendix B.

4.5 An Interval Approach to the Measurement of Decision Maker Preferences

Stochastic dominance with respect to a function is a powerful analytical tool. Before it can be used in an applied context, however, an operational procedure must be developed for the determination of lower and upper bounds on a decision maker's absolute risk aversion function. A technique for making such interval measurements of decision maker preferences is introduced in this section. This procedure uses information revealed by a series of choices between carefully selected distributions to establish lower and upper bounds on an individual's absolute risk aversion function. The degree of precision with which preferences are measured--i.e., the size of the interval between the lower and upper bound functions--can be specified directly in accordance with the characteristics of the problem under consideration. At one extreme the interval can be of infinite width, and at the other extreme it can converge to a single line.

The procedure for constructing interval measurements of decision maker preferences is based on the fact that under certain conditions a choice between two outcome distributions defined over a relatively narrow range of system output levels divides absolute risk aversion space over that range into two regions: one consistent with the choice and one inconsistent with it. The level of absolute risk aversion at which the division is made depends solely on the two distributions--i.e., their properties define the two regions. The decision maker's preferences, as revealed by his ordering of the two distributions, however, determine into which of these two regions his level of absolute risk aversion is said to fall. By confronting the decision maker with a series of choices between carefully selected pairs of distributions, the region of absolute risk aversion space which is consistent with the decision maker's preferences can repeatedly be divided. With each choice a portion of that region is shown to be inconsistent with the decision maker's preferences, and the interval measurement for the level of absolute risk aversion is narrowed. The procedure continues until a desired level of accuracy is attained. Upper and lower limits for the level of absolute risk aversion are determined at a number of income levels. These values are used to estimate upper and lower limits for the absolute risk aversion function over the relevant range of incomes.
The validity of the statement that a choice between two distributions is, under certain conditions, the basis for a division of absolute risk aversion space into regions consistent and inconsistent with a decision maker's revealed preferences can be demonstrated using concepts developed by Meyer in "Second Degree Stochastic Dominance with Respect to a Function." In that paper Meyer (1977b, p. 483) proves the following theorem:

    Theorem: For cumulative distributions F(y) and G(y),

        ∫(0 to y) [G(x) - F(x)]dk(x) ≥ 0    for all y in [0, 1]

    and

        ∫(0 to 1) [G(x) - F(x)]dk(x) = 0

    only if

        ∫(y to 1) [G(x) - F(x)]dk(x) ≤ 0    for all y in [0, 1].

The theorem states that F(y) is preferred to G(y) by all decision makers more risk averse than the utility function k(y), and that decision makers having utility function k(y) are indifferent between the two distributions, only if G(y) is preferred to F(y) by decision makers less risk averse than k(y).1 The function k(y), then, can be considered to be a boundary function, since it separates a class of decision makers who prefer F(y) from a class who prefer G(y).

If the distributions F(y) and G(y) are defined over a narrow range of system output levels and if the decision maker's absolute risk aversion function can be approximated by a constant value A over that range, preference for F(y) implies that A is greater than or equal to the minimum value of the absolute risk aversion function associated with k(y). Otherwise, the decision maker would be less risk averse than k(y), and his choice would be inconsistent with expected utility maximization. Preference for G(y), on the other hand, implies that A is less than or equal to the maximum value of the absolute risk aversion function associated with k(y), since F(y) is preferred by all decision makers more risk averse than k(y). It should be noted that the assumption that a decision maker's absolute risk aversion function can be adequately approximated by a constant value over a narrow range of system output levels is critical here. The theorem stated above does not imply that decision makers who prefer F(y) to G(y) are more risk averse than k(y); nor does it imply that decision makers who prefer G(y) to F(y) are less risk averse than k(y). With the assumption of constant absolute risk aversion in the neighborhood of a given system output level, however, it can be inferred that decision makers who prefer F(y) to G(y) are not less risk averse than k(y) and those who prefer G(y) to F(y) are not more risk averse than k(y).

The properties of a utility function which serves as a boundary function between two distributions are dependent upon the two distributions.2 By careful selection of distributions, a boundary function can be placed anywhere in risk aversion space.

1Using Pratt's definition of risk aversion in the large, a decision maker with utility function u(y) is more risk averse than k(y) if

    -u''(y)/u'(y) ≥ -k''(y)/k'(y)    for all y,

while he is less risk averse than k(y) if

    -u''(y)/u'(y) ≤ -k''(y)/k'(y)    for all y.

Meyer (1977b) shows that F(y) is preferred to G(y) by all decision makers more risk averse than k(y) if

    ∫(0 to y) [G(x) - F(x)]dk(x) ≥ 0    for all y in [0, 1]

and if the inequality is strict for some value of y. He also shows that G(y) is preferred to F(y) by all decision makers less risk averse than k(y) if

    ∫(y to 1) [G(x) - F(x)]dk(x) ≤ 0    for all y in [0, 1]

and if the inequality is strict for some value of y.

2A boundary function does not exist for each pair of distributions. One would not exist, for example, if one distribution dominates the other by first degree stochastic dominance. Similarly, the existence of one boundary function does not preclude the existence of others.
A series of questions can be devised, then, which allows the repeated reduction of the region of risk aversion space consistent with the revealed preferences of a decision maker, thereby narrowing the interval measurement of absolute risk aversion.

A simple example should help to illustrate how the procedure works. Let the boundary function for two distributions, k1(y), have an absolute risk aversion function which lies everywhere on the interval (A1, A2), and let the first distribution be preferred by decision makers more risk averse than k1(y) while the second is preferred by those less risk averse than k1(y). In this case the decision maker prefers the first distribution. Given the assumption of constant absolute risk aversion over this range of system output levels, this implies that his level of absolute risk aversion between y1 and y2 lies everywhere above A1, as is shown in part (a) of Figure 4.4. If choices between two additional pairs of distributions indicate first that the level is greater than A2 and second that it is less than A3, it can be inferred that the decision maker's level of absolute risk aversion over this range of system outputs lies between A2 and A3, as is shown in part (c) of Figure 4.4. With each choice, then, a portion of the region of absolute risk aversion space consistent with prior choices is shown to be inconsistent with the decision maker's preferences, and the interval measurement of absolute risk aversion is narrowed. Choices are presented to the decision maker until a desired level of accuracy is attained.

[Figure 4.4, A Sequence of Interval Preference Measurements, is not recoverable from the scan.]

An interval measurement of a decision maker's absolute risk aversion function can be constructed over a much broader range of system outputs by making interval measurements in the neighborhood of several system output levels and connecting known portions of the upper and lower bound absolute risk aversion functions with linear segments, as is done in Figure 4.5. In this case direct interval measurements have been made in the neighborhood of three system output levels: 3,000; 10,000; and 17,000.

[Figure 4.5, Upper and Lower Bound Absolute Risk Aversion Functions Based on Three Interval Measurements, plotted over system output levels from roughly 2,000 to 20,000, is not recoverable from the scan.]

4.6 Implementation of the Procedure

The discussion above describes an iterative approach to the construction of interval measurements of a decision maker's absolute risk aversion function. It does little, however, to answer the basic operational questions of how appropriate distributions can be selected and of how the boundary interval for any pair of distributions can be identified. The techniques used to implement the interval approach to the measurement of decision maker preferences are described briefly in this section. A more technical explanation of them is given in Appendix B.
The first step in the implementation of the procedure is to establish a measurement scale, which is defined by a number of reference levels for absolute risk aversion. In Figure 4.6, for example, four reference levels are specified: -.0001, .0001, .0005, and .0010. This scale or grid determines the accuracy with which absolute risk aversion can be measured. Any number of reference levels can be specified. The intervals between them can be as wide or narrow as is deemed necessary, and they need not all be of equal size. In many cases it may be desirable to put more fineness or detail in the measurement scale in regions of risk aversion space where it is believed a priori that the decision maker's level of absolute risk aversion is likely to fall.

[Figure 4.6, An Absolute Risk Aversion Measurement Scale, showing the four reference levels r(y) = -.0001, .0001, .0005, and .0010, is not recoverable from the scan.]

Next, a set of distributions which will serve as the basis for the choices made by the decision maker must be constructed. These distributions should be defined over a relatively narrow range of system output levels, since the decision maker's level of absolute risk aversion is assumed to be constant over that range.1 As described in Appendix B, they are constructed in a random manner by generating several hundred random numbers from a specified distribution and grouping them into sets of six observations each. Each set is a distribution of outcomes, and each element is said to have a 1/6 probability of occurrence. Only six elements are included in each distribution because more complex distributions might make decision makers' choices unduly difficult. Distributions with fewer elements, on the other hand, may not be rich enough to make the choices interesting. The use of six element distributions also facilitates explanation of the choice situation to the decision maker, since the probability of any one element occurring can be equated directly to the probability of obtaining a specified number of dots on a single roll of a die. Three distributions constructed in this manner are shown in Table 4.2.

Table 4.2 Sample Distributions from a Normal Distribution with μ = 3000 and σ = 1000

    Distribution^a
        1          2          3
     2100       1000       1750
     2400       2050       1950
     2550       2650       2500
     3100       3800       2750
     3250       3900       3950
     3450       5200       4000

    μ = 2808   μ = 3100   μ = 2817
    σ =  488   σ = 1370   σ =  883

    ^a The elements of each distribution are rounded to the nearest 50 units.

Once a measurement scale has been specified and sample distributions have been constructed, the interval on the measurement scale which contains the risk aversion function associated with the boundary function for each pair of distributions must be identified. The procedure by which this is done is described in detail in Appendix B. In essence, criteria identified by Meyer (1977b) are used to identify the highest reference level on the measurement scale, A1, such that all decision makers less risk averse than A1 prefer one distribution and the lowest reference level on the measurement scale, A2, such that all decision makers more risk averse than A2 prefer the other distribution. It follows that the absolute risk aversion function associated with the boundary function for the two distributions lies everywhere within the interval (A1, A2), which is called a boundary interval. The boundary intervals for each pair of distributions from Table 4.2 are given in Table 4.3. In this case each pair has a different boundary interval. It should be noted that more narrow boundary intervals could have been identified if a more detailed measurement scale had been specified.

Table 4.3 Boundary Intervals for Pairs of Sample Distributions

                                        Distribution Pair
                               1 vs 2          1 vs 3          2 vs 3
    Boundary Interval      (.0001, .0005)  (-.0001, .0001)  (.0005, .0010)
    Distribution Preferred
      Above Boundary Interval     1               1               3
    Distribution Preferred
      Below Boundary Interval     2               3               2

1Experience to date indicates that a range five to ten percent the size of the entire range of system output levels over which preferences are to be measured is adequate.
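The construction of distributions like those in Table 4.2 is easily sketched in Python; the normal distribution with mean 3000 and standard deviation 1000 and the rounding to the nearest 50 units follow the table, and the function name is illustrative.

    import numpy as np

    rng = np.random.default_rng(seed=7)

    def six_element_distributions(count, mean=3000.0, sd=1000.0):
        # Group normal draws into sets of six; each element of a set is
        # assigned probability 1/6 (one face of a die) and rounded to
        # the nearest 50 units.
        draws = rng.normal(mean, sd, size=(count, 6))
        return np.sort(np.round(draws / 50.0) * 50.0, axis=1)

    for d in six_element_distributions(3):
        print(d, "mean %.0f, s.d. %.0f" % (d.mean(), d.std()))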
After the boundary interval has been identified for each pair of distributions, a series of questions is formulated. Each question asks the decision maker to indicate which of two selected distributions he prefers. His responses serve as the basis for the interval measurement of absolute risk aversion. Each question focuses on a particular interval of risk aversion space--an interval which corresponds to the boundary interval for the two distributions the decision maker is asked to rank. Let the first question in the example being developed here be: Compare distributions 1 and 2 and indicate which you prefer. This question focuses on the interval (.0001, .0005).¹ If distribution 1 is preferred, the information in Table 4.3 indicates that the decision maker's level of absolute risk aversion is greater than .0001. If distribution 2 is preferred, his level of absolute risk aversion is below .0005.

The choice of a second question will depend on the respondent's answer to the first. If the respondent prefers distribution 1, for example, it makes little sense to ask him to rank distributions 1 and 3. Such a question indicates only whether his level of absolute risk aversion is greater than -.0001 or less than .0001. Since his level of absolute risk aversion is already known to be greater than .0001, this information would be of value only as a consistency check. A choice between distributions 2 and 3, on the other hand, would add to our knowledge. If distribution 2 is preferred, the decision maker's level of absolute risk aversion can be said to fall on the interval (.0001, .0010), while if distribution 3 is preferred, the level must be greater than .0005.

¹It is advisable to focus the first question on a boundary interval at or near the middle of the measurement scale.
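The contingent branching of the questions can be sketched as follows. The boundary intervals are those reconstructed in Table 4.3, and ask(i, j) is a hypothetical stand-in that returns the number of the distribution the respondent prefers; the routine is illustrative only.

    def interview(ask):
        lo, hi = float("-inf"), float("inf")
        # question 1: distributions 1 and 2, boundary interval (.0001, .0005)
        if ask(1, 2) == 1:
            lo = max(lo, 0.0001)          # more risk averse than the boundary
            # distributions 2 and 3, boundary interval (.0005, .0010)
            if ask(2, 3) == 3:
                lo = max(lo, 0.0005)
            else:
                hi = min(hi, 0.0010)
        else:
            hi = min(hi, 0.0005)          # less risk averse than the boundary
            # distributions 1 and 3, boundary interval (-.0001, .0001)
            if ask(1, 3) == 1:
                lo = max(lo, -0.0001)
            else:
                hi = min(hi, 0.0001)
        return lo, hi

    # a respondent who prefers distribution 1 and then distribution 2
    print(interview(lambda i, j: {(1, 2): 1, (2, 3): 2}[(i, j)]))  # (0.0001, 0.001)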
In this example it rises, then falls, and at lower levels of system output the measurement interval includes negative as well as positive values. When absolute risk aversion functions are derived from empirically estimated utility functions, on the other hand, their form is often severely limited by the functional form used to estimate the utility function. It should also be noted that the inter- val approach to the measurement of preferences also avoids‘another common problem encountered in the estimation of single-valued utility functions. Because all questions posed require a choice between two uncertain 119 1. Compare distributions 1 and 2 and circle the one you prefer. If you prefer distribution 1 go to question 3, otherwise go to question 2. 2. Compare distributions 1 and 3 and circle the one you prefer. 3. Compare distributions 2 and 3 and circle the one you prefer. Figure 4.7 A Sample Questionnaire 120 prospects, biases due to preference for an aversion to gambling pg§_§g are eliminated. One final step is required for the implementation of the interval approach to the measurement of preferences. Although Meyer's (1977a) analytical development of stochastic dominance with respect to a function depends only on absolute risk aversion functions, the computer program he has developed to implement the criterion requires that utility functions having absolute risk aversion functions corresponding to lower and upper bounds, r1(y) and r2(y), be specified by the user. Given the definition of the absolute risk aversion function, - - ull r(y) - u y 4.17 the following system of differential equations, which relates levels of absolute risk aversion to values of u(y) and u'(y) can be derived: d My) - 1 0 HM 337 [WM] ’ [4‘01 0][U'(y):l 4'18 Once initial values of u(y) and u'(y) have been specified, recursive numerical integration techniques can be used to solve for u(y) and u'(y) at any level of system output.1 Utility functions associated with the lower and upper absolute risk aversion functions are repre- sented by a table look-up routine in the computer algorithm which implements the stochastic dominance with respect to a function criterion. Corresponding values of y and u(y) determined by numerical integration serve as data points for each table look-up function. 1The initial values of u(y) and u'(y) correspond to the arbitrary scale factors of a van Neumann-Morgenstern utility function. See Appendix B for a listing of the computer program which performs the numerical integration. 121 4.7 ‘An_Empirical Test A simple experiment was designed and conducted to test the efficacy of the interval approach to the measurement of decision maker preferences." Three questionnaires were administered to a group of graduate research assistants from the Department of Agricultural Economics at Michigan State University. The first questionnaire employed the procedure described above to obtain an interval measurement of each subject's absolute risk aversion function. The second questionnaire was used to elicit information required for the construction of a single-valued utility function for each subject.1 Finally, the third questionnaire asked the respondent to make a series of six choices between pairs of distributions, each distribution being comprised of six elements and each being defined on the interval over which preferences had been measured. 
4.7 An Empirical Test

A simple experiment was designed and conducted to test the efficacy of the interval approach to the measurement of decision maker preferences. Three questionnaires were administered to a group of graduate research assistants from the Department of Agricultural Economics at Michigan State University. The first questionnaire employed the procedure described above to obtain an interval measurement of each subject's absolute risk aversion function. The second questionnaire was used to elicit the information required for the construction of a single-valued utility function for each subject.¹ Finally, the third questionnaire asked the respondent to make a series of six choices between pairs of distributions, each distribution being comprised of six elements and each being defined on the interval over which preferences had been measured.

¹Because it is the most commonly used and most easily implemented elicitation technique, the ELCE method (Anderson, Dillon, and Hardaker, 1977) was used to identify points on the decision maker's utility functions.

Information from the first two questionnaires was used to predict the choices made by each respondent in the third questionnaire, and these predictions were compared to the actual responses. In this way the accuracy of each of the two approaches to the measurement of preferences was tested. In evaluating each approach, two criteria were considered: the number of correct predictions and the number of choices for which a definite ordering was made. A prediction was said to be correct if the respondent's actual choice was not excluded from the efficient set of choices and incorrect if it was excluded. The preference measure having the highest proportion of correct predictions was said to be the more accurate according to this criterion. Concern with the proportion of correct predictions is analogous to concern with the probability of Type I error in a statistical test, the latter being the probability that a true statement will be judged to be false and be rejected.

This measure of accuracy is not a good indicator of the relative discriminatory power of preference measurements based on these two approaches. The criterion of first degree stochastic dominance, which holds for all decision makers who prefer more of the system output to less, should never exclude a preferred choice from the efficient set and so should be perfectly accurate according to the criterion defined above. Often, however, it also fails to exclude many choices from the efficient set. A single-valued utility function, on the other hand, is the basis for a complete ordering of choices--i.e., it always leads to an efficient set having a single element. Therefore, the number of choices actually ordered was also considered. Concern with this measure of discriminatory power is analogous to concern with the Type II error associated with a statistical test, which is the probability that a false statement will be judged to be true and not rejected.

Clearly there are trade-offs between the accuracy and the discriminating power of a preference measurement. Unlike other measurement techniques and evaluative criteria, the combined use of interval preference measurements and stochastic dominance with respect to a function permits explicit consideration of these trade-offs. As the precision of the interval measurement increases, it becomes a more discriminating basis for the ordering of choices; but the probability of excluding preferred choices from the efficient set also increases.

Such trade-offs between accuracy and discriminatory power were also analyzed in the experimental test of the interval approach to the measurement of preferences. Direct interval measurements of absolute risk aversion were made at three levels of income--the relevant system output variable in this instance. These measurements were based on a sequence of four questions at each income level. By constructing interval measurements on the basis of the information available at the end of each question, however, four preference measurements--each more precise than the one which preceded it--were made for each subject. Nine of ten subjects correctly completed all three questionnaires. Since each subject made six choices on the third questionnaire, each preference measurement was used to predict a total of fifty-four choices.
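The two performance indicators can be computed as in the following sketch; the predicted efficient sets and actual choices in the usage line are hypothetical, not data from the experiment.

    def performance(predicted_sets, actual_choices):
        n = len(actual_choices)
        correct = sum(choice in eff
                      for eff, choice in zip(predicted_sets, actual_choices))
        ordered = sum(len(eff) == 1 for eff in predicted_sets)
        return correct / n, ordered / n   # accuracy, discriminatory power

    # three choice pairs: the measure orders two of them and is never wrong
    print(performance([{1}, {1, 2}, {2}], [1, 2, 2]))   # (1.0, 0.666...)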
The results of the experiment are presented in Table 4.4. They show that there is a clear trade-off between accuracy and discriminatory power. First degree stochastic dominance and the single-valued utility function are at opposite extremes in this trade-off relationship, and the interval measurements are arrayed between the two. Several factors should be noted. The accuracy of the interval measurements falls at a relatively constant rate as the number of questions posed increases, but even at the higher levels of precision it exceeds that realized with the single-valued utility function. The discriminating power of the interval measurements, on the other hand, increases dramatically as the number of questions asked at each income level increases. First and second degree stochastic dominance clearly do not discriminate well among the distributions which were the basis for the decision makers' choices.

[Table 4.4 Performance Indicators for Alternative Preference Measures: the percent of choices predicted correctly and the percent of choices ordered, for interval measurements based on one to four questions, a single-valued utility function, and first and second degree stochastic dominance]

It should be noted that these results represent but one test of the interval approach to the measurement of decision maker preferences. Results presented in the next section and in Chapter V provide additional evidence of the power of this approach, but further experimentation is needed. It should also be noted that few attempts have been made to apply this technique. With more applied experience may come refinements that will improve the discriminatory power of preference measurements based on this approach without leading to increases in the probability of excluding the preferred choice from the efficient set.

4.8 An Application

In order to test the interval approach to the measurement of decision maker preferences in a more practical setting, questionnaires implementing the procedure were administered to seventeen farmers who were all participants in an extension workshop on cash grain marketing strategies. The questionnaires were viewed as an exercise in the workshop--an exercise designed to help individuals think systematically about how they make decisions. Interval measurements of absolute risk aversion were made in the neighborhood of four income levels: -$3,000, $7,000, $17,000, and $27,000. Each measurement was based on a series of three questions. In addition, the respondents were also asked to make a series of choices between distributions, as was done in the experiment described in the preceding section.

The farmers had little difficulty in completing the questionnaires, and they seemed to find the choices interesting. The range of responses was quite broad. Individuals within the sample of seventeen ranged from the extremely risk averse to the extremely risk loving. Several discernible patterns did emerge, however. Most decision makers exhibited increasing absolute risk aversion over the lower income levels and decreasing absolute risk aversion at higher levels. For most, the interval measurement of absolute risk aversion included negative values at some level of income. In fact, only four of the seventeen decision makers had lower bound absolute risk aversion functions which were everywhere non-negative.
This casts serious doubt upon the applicability of a criterion such as second degree stochastic dominance, which is valid only for decision makers who are risk averse at all system output levels.

Choices made in the final section of the questionnaire were predicted remarkably well by the preference measures. Ninety-one out of 102 possible choices were predicted correctly, for a success rate of .892. This compares quite favorably with that obtained in the more carefully controlled experiment with student subjects. To test the discriminatory power of the preference measures, the measure derived for each decision maker was used to order a set of thirty-three distributions, none of which was dominated by any other by the criterion of first degree stochastic dominance. The resultant efficient sets ranged in size from one to twenty-three, with the average size being 10.5. Clearly this represents a sizeable reduction in the size of the efficient set over that attained with first degree stochastic dominance. The second degree stochastic dominance efficient set for these thirty-three distributions had only five elements. It must be remembered, however, that this criterion is valid for only four of the seventeen farmers whose preferences were measured. For these four individuals the size of the efficient set averaged only 2.5.

Upper and lower bound absolute risk aversion functions for three representative decision makers are shown in parts (a), (b), and (c) of Figure 4.8. The interval measurement for decision maker A declines across the relevant range of net income levels. That for decision maker B rises and then falls, while that for decision maker C is constant and then rises at the highest income level. Decision maker C is one of the four individuals in the sample who is everywhere risk averse.

The distributions of net income levels associated with each of the two strategies defined in Section 3.3 of Chapter III were ordered for each of these three decision makers using the criterion of stochastic dominance with respect to a function. It will be recalled that Strategy 1 calls for no land rental and for the planting of all acreage in soybeans. Strategy 2 calls for the rental of 240 acres and for a more balanced crop mix. Strategy 1 is preferred to Strategy 2 by decision maker A. Neither distribution dominates the other given the preferences of decision makers B and C.

In themselves these results are not particularly interesting. They do demonstrate, however, that the interval measurements of decision maker preferences can lead to different efficient sets for different decision makers. They also show that remarkably dissimilar distributions may be included in an efficient set. The two strategies considered here bear little resemblance to each other. One could be called extremely cautious and the other moderately risky. For decision maker A it is clear that the cautious approach is preferred. For the other two decision makers the preferred choice is not so clear-cut. If these two strategies were but two of many being considered and if a single-valued
utility function were being used to identify a preferred choice, one would have been eliminated from consideration. As the experimental results reported in the preceding section indicate, however, there is a relatively high probability that the preferred strategy would have been the one eliminated.

[Figure 4.8 Interval Measurements of Absolute Risk Aversion for Three Decision Makers: parts (a), (b), and (c) show the interval measurements for decision makers A, B, and C]

CHAPTER V

COMPUTATIONAL PROCEDURES FOR THE IDENTIFICATION OF PREFERRED CHOICES

5.1 Introduction

Techniques developed in the preceding chapters for the determination and representation of subjective probability distributions and for the measurement of decision maker preferences provide the information required to order any two specified action choices. In most decision situations, however, a large if not infinite range of choices is open to the decision maker. As a result, some systematic technique for the identification and evaluation of a large number of possible strategies is needed in many applied decision analyses. Such a technique should be flexible enough to be applicable in a wide range of decision situations without requiring that important simplifying assumptions be made concerning preferences, probabilities, and the nature of the problem itself. It should serve as an aid in solving practical problems, without forcing the decision maker to alter his conceptualization of the problem at hand.

In this chapter a computational procedure designed to meet these needs is formulated, and its implementation is discussed. This procedure integrates concepts and operational techniques related to problem formulation, the determination of subjective probability distributions, and the measurement of decision maker preferences developed in the preceding chapters. It is a decision aid which is both powerful and highly flexible.

In subsequent sections of this chapter, existing computational procedures for the identification of preferred choices are first reviewed, and their strengths and weaknesses are identified. The procedure developed for this study is then introduced and described in detail. Finally, the procedure is applied to the sample problem discussed in the preceding three chapters.

5.2 A Review of Existing Computational Procedures

Mathematical programming models are commonly used in the analysis of complex decision problems when the assumption of perfect knowledge can reasonably be made. They are analytically elegant, computationally efficient, and easily adapted for use in a wide range of decision situations. A number of difficulties are encountered, however, when mathematical programming models are employed in the analysis of decisions made under uncertainty--difficulties related to problem formulation, to the determination of the probability distributions for system output variables associated with alternative strategies, and to the representation of decision maker preferences.
Despite such difficulties, mathematical programming techniques are the basis for the most commonly used computational procedures for the identification of preferred choices under uncertainty.

Quadratic programming (Markowitz, 1959; Freund, 1956) is perhaps the most familiar and the most widely accepted mathematical programming technique for the analysis of decisions made under uncertainty. Conceptually it is an attractive tool because it is so closely linked with mean-variance analysis, which has been the basis for a wide range of theoretical developments, and because, when used in a parametric programming mode, it can under certain conditions be used to identify an efficient set of strategies which includes the preferred choice of any risk averse decision maker.¹ With respect to practical considerations, it is an attractive technique because the formulation of a quadratic programming problem is only slightly more complex than that of a standard linear programming problem and because quadratic programming packages are available on most computer systems. The usefulness of quadratic programming in an applied decision analysis is severely limited, however, by a number of other factors.

¹As was noted in Section 4.3 of Chapter IV, the criterion of mean-variance efficiency is a special case of second degree stochastic dominance. The mean-variance efficient set is identified by parametric quadratic programming.

With regard to problem formulation, standard quadratic programming models require that input-output relationships be linear and additive and that all controllable system input levels be perfectly divisible. In many practical decision situations these assumptions simply do not correspond closely with reality. Equally serious are the limitations imposed by the standard quadratic programming model on the definition of a management strategy. Decisions are analyzed as though they were inflexible, though, as was noted in Chapter II, one of the most important characteristics of choices under uncertainty is that they are often adaptive in nature. Finally, though computer codes which implement quadratic programming algorithms are readily available, many limit the size of the problem which can be considered.

With regard to the determination of probability distributions, quadratic programming requires that they be determined analytically, which may greatly limit the types of stochastic factors which can be considered in an analysis and may also limit the complexity of the model used to represent a particular stochastic process. Furthermore, within a quadratic programming framework, the distributions of exogenous system inputs and of system output variables are described only by means, variances, and covariances. Implicitly, then, all distributions are assumed to be normal. When this assumption does not hold, quadratic programming may eliminate from consideration the preferred choices of some decision makers (Tsiang, 1972; Robison and King, 1978).

Finally, with respect to the representation of decision maker preferences, quadratic programming requires that the decision maker's utility function be of the quadratic or negative exponential form if a single preferred choice is to be identified. When parametric quadratic programming is used to identify a mean-variance efficient set, on the other hand, the efficient set holds only for risk averse decision makers. It is possible that a decision maker's preferences cannot be adequately represented under any of these assumptions.
In response to some of these shortcomings, several linear programming alternatives to quadratic programming have been proposed. These include the game theoretic (McInerney, 1969; Hazell, 1970; Low, 1974) and focus-loss (Boussard and Petit, 1967) approaches and the MOTAD model developed by Hazell (1971). All these models can be solved using standard linear programming algorithms, and none requires that stochastic returns be normally distributed. They impose fewer limitations on the size of problems which can be considered, and all are more amenable to the relaxation of restrictions on linear constraints and production processes and on the divisibility of choice variables through the use of separable programming and mixed integer programming techniques. Each does have serious shortcomings, however. Covariance between returns for different activities is ignored in both the MOTAD and focus-loss models. This can lead to a serious misrepresentation of the distribution of outcomes associated with any particular action choice. More important, however, the links between the decision criteria used in these approaches and the expected utility hypothesis are much weaker than is the case with quadratic programming. The safety-first behavioral assumption implied by the focus-loss approach and employed in many applications of the game theoretic model is especially difficult to reconcile with the axioms underlying the expected utility hypothesis.

The Risk Efficient Monte Carlo Programming (REMP) model developed by Anderson (1975, 1976) is in nearly all respects a more attractive alternative to quadratic programming as a decision aid in practical situations. The REMP model employs Monte Carlo programming techniques (Donaldson and Webster, 1968) to construct a large number of feasible management strategies in a random fashion. The distribution of total net returns associated with each strategy under consideration is determined analytically under the assumption that distributions of net returns for each activity and distributions of total net returns for each strategy are members of the beta family. Covariance between returns for each activity is considered explicitly. Under this procedure, a cumulative distribution function is constructed for total net returns associated with each strategy, and the criterion of second degree stochastic dominance is used to identify an efficient set of choices. The REMP model allows considerable flexibility in the representation of probability distributions, since the beta distribution can take a variety of forms. The model also places few restrictions on decision maker preferences. The criterion of second degree stochastic dominance requires only that the decision maker be risk averse at all levels of system output. With regard to problem formulation, the REMP model also has several distinct advantages. Most notably, the use of Monte Carlo programming allows, at least partially, the relaxation of the restrictive assumptions of linearity, additivity, and perfect divisibility required by most mathematical programming models. Despite these advantages, the usefulness of the REMP model is greatly limited by the fact that second degree stochastic dominance is not a very discriminatory criterion. In many instances the size of the efficient set identified with the REMP model is so large that the task of selecting a single preferred strategy may be prohibitively difficult.
None of these alternatives to quadratic programming resolves the problems associated with the incorporation of explicit consideration of flexibility into the decision analysis. Recursive programming techniques (Day, 1963; Heidhues, 1966) do permit the consideration of such factors within a linear or quadratic programming framework. The behavioral constraints, feedback rules, and repeated optimization which characterize this approach are used to model the process of planning, decision making, and action. The constraints and rules of thumb which drive the model through this process--the desired output of a decision analysis--must be determined exogenously, however. The value of recursive programming as an aid to decision makers, then, is limited. Stochastic programming (Cocks, 1968; Rae, 1971) is conceptually a more attractive approach to the resolution of this problem. Adaptive decision strategies are determined endogenously under this procedure, which can be used with a standard linear or quadratic programming algorithm. If the problem under consideration is even moderately complex, however, the size of the input-output matrix quickly expands to an unmanageable level, so the usefulness of this approach in an applied decision analysis is also limited. Linear decision models, as developed by Holt, Modigliani, Muth, and Simon (1966), represent a third alternative. Under this approach, which has been further refined by Zellner (1971), Chow (1973), and McRae (1975), dynamic programming techniques are used to determine analytical solutions for a special class of optimal control problems. Under the assumptions of quadratic cost (utility) functions and linear relations between state and control variables, linear decision rules are derived which can be used to determine optimal levels for control variables on the basis of forecasts of key factors. The parameters of these rules remain invariant for as long as the system design parameters are unchanged. Models of this sort are subject to the same limitations on preferences and probability distributions imposed under quadratic programming. Furthermore, the complexity of such models and the high cost of performing the initial analysis needed to determine optimal decision rules for a particular process limit the applicability of this approach to situations where similar decisions are made repeatedly.

The models described above also fail to resolve completely the problem that complex stochastic processes often cannot be adequately represented in a quadratic programming framework. Of particular importance is the fact that the impact of uncertainty concerning fixed resource availability levels cannot be satisfactorily analyzed in the models discussed above. Such factors can be important in some situations. In the decision problem discussed in the preceding three chapters, for example, the number of days available for fieldwork during any planting or harvest period is highly uncertain and has a major impact on crop yields and net income levels. Consideration of this type of uncertainty is commonly incorporated into programming models using chance constrained programming techniques (Charnes and Cooper, 1959).
Though relatively simple to implement, however, this approach does not permit explicit consideration of the cost associated with violating a constraint, and so it does not actually facilitate the explicit determination of the impact of stochastic fixed resource levels on the distribution of system output levels realized under any particular management strategy.

Finally, none of these models, as specified, permits the representation of preferences by an empirically determined interval measurement of absolute risk aversion and the ordering of choices by the criterion of stochastic dominance with respect to a function, the advantages of which were demonstrated in Chapter IV. It should be noted, however, that stochastic dominance with respect to a function can be rather easily incorporated into the REMP model, as will be demonstrated in the next section.

5.3 A Generalized Procedure for the Identification of Preferred Choices Under Uncertainty

While each of the decision models discussed above is attractive in light of at least one theoretical or practical consideration, and while each may be appropriate for use in some decision situations, none can be said to be generally applicable in a wide range of practical contexts. Furthermore, many rather common types of decision problems cannot be adequately analyzed with any of the procedures discussed above. A more general approach is needed--one which permits greater flexibility with respect to problem formulation, the determination of probability distributions, and the representation of decision maker preferences without sacrificing the power of decision theory based on the expected utility hypothesis. Such an approach is presented in this section.

The generalized procedure for the identification of preferred choices described here is in many respects an extension of Anderson's (1975, 1976) REMP model. Feasible strategies are generated using a modified form of the Monte Carlo programming model developed by Donaldson and Webster (1968), which also serves as a basis for Anderson's model. Under the more generalized procedure, however, a strategy may be defined by specific controllable system input levels, by a set of adaptive decision rules, or by some combination of the two. Distributions of system output levels associated with particular strategies are not determined analytically. Rather, they are determined by simulating system performance in a number of sample states of nature, using techniques described in Chapter III. This facilitates the consideration of the impact of a wide range of stochastic exogenous system inputs and permits greater flexibility in the representation of complex stochastic processes. Finally, strategies are evaluated using interval measurements of decision maker preferences and the evaluative criterion of stochastic dominance with respect to a function.

Like the REMP model, this procedure is an iterative one. A large number of strategies are generated and evaluated sequentially. The determination of a truly optimal choice is not ensured. If a sufficiently large number of plans is examined, however, it is reasonable to expect that the efficient set will contain a nearly optimal choice for a decision maker. Because of its similarity to the REMP model, this generalized procedure for the identification of preferred choices under uncertainty can be called the generalized risk efficient Monte Carlo programming model (GREMP).
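The iterative structure just described can be sketched as a simple loop. The three helper routines stand for the problem-specific components discussed in the remainder of this section, and dominates(a, b) is a placeholder for the stochastic dominance with respect to a function test applied with the decision maker's interval preference measurement; this is a minimal illustration, not the program listed in Appendix C.

    def gremp(n_iterations, generate_strategy, simulate_outcomes, dominates):
        efficient = []                       # (strategy, sample outcomes) pairs
        for _ in range(n_iterations):
            s = generate_strategy()          # a feasible strategy, drawn at random
            y = simulate_outcomes(s)         # outcomes over the sample states of nature
            if any(dominates(ye, y) for _, ye in efficient):
                continue                     # new strategy is dominated: discard it
            # drop current members that the new strategy dominates, then admit it
            efficient = [(se, ye) for se, ye in efficient if not dominates(y, ye)]
            efficient.append((s, y))
        return efficient

Because the dominance criterion is transitive, discarding a strategy dominated by any current member can never remove a strategy that belongs in the final efficient set.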
Interrelationships among the three major processes within the model--strategy generation, system output distribution determination, and evaluation--are illustrated by the flow chart in Figure 5.1. Each of these processes will be discussed in greater detail in the remainder of this section.

[Figure 5.1 A Flow Chart of the GREMP Model: generate a feasible management strategy; use Monte Carlo simulation to determine the system output distribution; evaluate the new strategy relative to current members of the efficient set; update the efficient set; repeat until all strategies have been examined; print information on the efficient set]

5.3.1 Generation of a Feasible Management Strategy

At the outset of each iteration of the GREMP model a management strategy is constructed. As defined in Chapter II, a management strategy is a set of controllable system input levels, a set of feedback control rules for determining controllable system input levels over the duration of the planning horizon, or some combination of the two. The nature of the problem under consideration determines the types of choices which must be made, and the nature of the decision situation determines the range of choices open to the decision maker.

Regardless of how decisions are defined, the presence of constraints makes some management strategies impossible. Limits on available resources may restrict the set of admissible values for some controllable system inputs. Similarly, some controllable system inputs may be indivisible and so must take integer values. In addition, logic or common sense may dictate that a particular parameter of a feedback control rule should be positive or that one parameter should always be greater than another. Constraints may also be of a form which renders two activities mutually exclusive--e.g., the choice of a particular functional form for a feedback control rule precludes the use of an alternative form. Given the definition of a management strategy in a particular decision situation and the constraints on the range of available choices, some method of identifying feasible strategies for consideration in the decision analysis is needed. When the number of alternatives is small, each can be explicitly specified and evaluated. When the number of alternatives is large, Monte Carlo programming techniques can be a valuable tool for the identification of feasible strategies.

Monte Carlo programming is a search procedure which constructs sample management strategies at random from the set of feasible strategies. In determining the values for controllable system inputs and/or feedback control rule parameters, techniques similar to those introduced in the discussion of the simulation of stochastic processes are used. Monte Carlo programming is a remarkably flexible tool which can be relatively easily and inexpensively implemented, and so it is well suited for use in an applied decision analysis.

Monte Carlo programming techniques are explained in detail by Donaldson and Webster (1968). A technical discussion of their application is also presented in Appendix C of this study, along with a listing of the computer program used to implement the GREMP model. In the context of the current discussion an example is, perhaps, the most effective medium for the explanation of the process by which Monte Carlo programming is used to construct feasible strategies. The management strategy specified in relation to the decision problem introduced in Chapter II will be the basis for this example.
It will be recalled that this strategy is defined by three controllable system input levels and by one simple feedback control rule which has a single parameter. The three controllable system input variables are:

v1 = number of acres rented
v2 = number of acres planted in corn
v3 = number of acres planted in soybeans

The feedback control rule is: "Regardless of specified crop acreage levels, soybeans will be planted on all unplanted acreage after v4 (a parameter indicating a specific date)."

Four indivisible 80 acre tracts of land are available for rental. Therefore, the only admissible values for v1 are 0, 80, 160, 240, and 320. Two types of constraints are imposed on crop acreage levels. First, the farmer states that if he grows a crop at all, he wishes to plant at least fifty acres of that crop, i.e.

    vi = 0 or vi ≥ 50,   i = 2, 3                                    5.1

Second, total crop acreage is restricted to that which is owned by the farmer, 240 acres, plus that which is rented, v1. The farmer wishes to plant all available acreage, if possible. Therefore:

    v2 + v3 = 240 + v1                                               5.2

With regard to the control rule parameter v4, three possible values will be considered: May 18, May 26, and June 3. As was noted in Chapter II, these are the ending dates of the last three possible corn planting periods. Let v4 = 1 if May 18 is chosen, 2 if May 26 is chosen, and 3 if June 3 is chosen.

The construction of a management strategy is a sequential process, and in some cases it may be necessary to set a value for one choice variable before the levels of others can be established. In the example being discussed here, for instance, a value of v1 must be determined before crop acreage levels can be specified. Therefore, after control variables have been identified and constraints on them have been specified, the control variables must be classified according to the sequence in which they should be considered. In our example, land rental can be classified as a resource acquisition activity, and v1 is the first variable for which a value is determined. Crop acreage levels, v2 and v3, refer to resource using activities and are specified next. Finally, the value of the control rule parameter, v4, is set.¹

¹The order in which control variable levels are specified depends, in general, on the characteristics of the problem under consideration and on computational convenience. All that is actually required in this example is that a value of v1 be set before v2 and v3 are considered.

In constructing a management strategy, the value of each controllable system input or feedback control rule parameter is treated as a random variable. In our example, v1 and v4 are clearly discrete random variables, since each has only a few possible values. Unless it is desirable to assign greater probability weight to some particular value, both can be treated as discrete uniform random variables. In the case of land rental, then, there is a probability of 1/5 that no land will be rented, 1/5 that 80 acres will be rented, 1/5 that 160 acres will be rented, 1/5 that 240 acres will be rented, and 1/5 that 320 acres will
In this case acreage levels will be treated as discrete variables with possible values being integer multiples of ten lying on the interval between 50 acres and total available acreage, 240 + v . The probability of any particular admissible value being 1 selected is given by the expression: p = 710(240:%;:50)¥T' 5.3 The denominator on the right hand side of 5.3 is a general expression for the number of possible values of v2 or v3 given a particular value of v]. Actual construction of a management strategy begins with the generation of a sample observation from the distribution of v]. In this instance let the value of that observation be 240, which implies that 240 acres of land are to be rented under this strategy. Next, values of v2 and v3, the crop acreage levels, must be determined. Again using a discrete uniform process generator, one of these two variables is selected for consideration. Let v2, the number of acres planted in corn, be the variable selected. A sample observation from the distribution of this variable is then generated. In this case let its value be 220 acres. Clearly the constraint on total acreage is not violated by this value, since v3 is considered to be equal to zero until 145 it is assigned another value. This is, then, a feasible value for v2. The constraint on remaining available acreage is next updated to state that: 240 + v -v 5,4 3 1 2 240 + 240 - 220 = 260 < 11 and a value for v3 is, in turn, established. It should be noted that had the value of v2 been greater than 430, a value of v3 less than 50 would have been implied. Because values less than 50 are not permitted for v2 or v3, v3 would have been set to zero and corn acreage would have been expanded to 480. It should also be noted that the process of control variable level determination is somewhat more complex when more than two resource-using variables enter into a single constraint. The procedures used in such cases are explained in Donaldson and Webster (1969) and in Appendix C, and the computer program developed for the implementation of the GREMP model is fully applicable to problems of this sort. Once values for the three controllable systems input variables have been specified, all that remains in constructing a feasible management strategy is the determination of a value for the control rule parameter, v4. A sample observation from this variable's distribution is selected-- in this instance let its value be 2, which implies a date of May 26-- and the strategy is complete. It is: 240 acres of land rented < _.a ll v2 = 220 acres of corn to be planted = 260 acres of soybeans to be planted May 26, the date after which all unplanted acreage is shifted to soybean production regardless of specified values of v2 and v3. 146 This simple example demonstrates some of the features which make Monte Carlo programming such a flexible procedure. Choice variables can be continuous or discrete, and they need not be assigned values which correspond to strictly quantifiable entities. In addition, both upper and lower limits can be established for a choice variable once activated without necessarily forcing it into the management strategy at a non- zero level. Other features of Monte Carlo programming are outlined in Appendix C. 5.3.2 Determination of the Distribution of System Output Levels Once a strategy has been constructed, the associated distribution of system output levels must be determined. 
5.3.2 Determination of the Distribution of System Output Levels

Once a strategy has been constructed, the associated distribution of system output levels must be determined. This is done sequentially as each strategy is generated, using the Monte Carlo simulation techniques described in Section 3.3 of Chapter III. System performance under a given strategy is simulated for a large number of sample states of nature, each defined by a sample vector from the joint probability distribution of relevant stochastic system input variables. In this way sample system output levels are determined, which can be used to construct a cumulative distribution function for the underlying distribution.

In the current example, net cash income is the system output variable which serves as the basis for the evaluation of alternative strategies. It will be recalled that crop prices, crop yields, and time available for fieldwork were judged to be the stochastic exogenous system inputs which have an important impact on system performance. Subjective probability distributions for all of these factors were specified in Section 3.4 of Chapter III. Using techniques described in Appendix A, twenty sample vectors of levels for each of these stochastic system inputs were generated and read into the computer program which implements the GREMP model. A computer simulation model which determines the net cash income level realized under a specific management strategy in any given state of nature was also described in Section 3.4 of Chapter III. This model is incorporated into the larger GREMP model as a subroutine which is called after the generation of each alternative strategy. Using the stochastic system input data from the main program and the control variable levels for the new strategy, it calculates net cash income for each of the twenty states of nature. The sample income levels associated with the management strategy defined in the preceding subsection are given in Figure 5.2.

    -17271.70   -15571.83    -8576.60    -1328.38     1644.65
      5658.63     7705.34    10057.10    11737.75    13146.29
     13676.88    15136.41    19763.05    22865.28    26578.39
     28998.96    29472.42    29695.14    33951.82

    μ = $10,707.58          σ = $15,464.33

Figure 5.2 Sample Observations from the Distribution of Outcomes Associated with Strategy One

5.3.3 The Evaluation of Alternative Strategies

Alternative management strategies are evaluated within the GREMP model by applying the criterion of stochastic dominance with respect to a function, with an interval measurement of decision maker preferences defining the relevant lower and upper bound absolute risk aversion functions. Evaluations are made sequentially as strategies are generated. If a particular strategy is not dominated by any current member of the efficient set, it, too, becomes a member of the efficient set. In such instances the control variable values which define the new strategy and the associated set of sample system output levels are stored. If the new strategy is dominated by any member of the current efficient set, on the other hand, it is eliminated from further consideration. Similarly, members of the efficient set which are dominated by the new strategy are removed from the efficient set. Because the criterion of stochastic dominance with respect to a function is fully transitive, this procedure ensures that no member of the efficient set for the set of strategies already examined will be eliminated and that only information on actual members of the current efficient set will be saved.
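Before returning to the example, the distribution-determination step of Section 5.3.2 can be sketched. Here net_cash_income is a hypothetical stand-in for the simulation subroutine described above, each state being a pre-generated vector of prices, yields, and fieldwork days; the summary statistics are of the form reported beneath Figures 5.2 through 5.4.

    def outcome_distribution(strategy, states, net_cash_income):
        # one simulated net cash income level for each sample state of nature
        return sorted(net_cash_income(strategy, state) for state in states)

    def summarize(outcomes):
        # mean and standard deviation of the sample outcome levels
        n = len(outcomes)
        mu = sum(outcomes) / n
        sd = (sum((x - mu) ** 2 for x in outcomes) / (n - 1)) ** 0.5
        return mu, sd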
Returning to the example being discussed, let the decision maker be the first of the three decision makers for whom actual interval measurements were described in Section 4.8 of Chapter IV. Both the lower and upper bounds of his risk aversion interval, it will be recalled, decrease monotonically over the range of income levels for which preferences were measured. Let the first strategy generated by the GREMP model be that defined above in Section 5.3.1. Being the first strategy considered, it automatically becomes a member of the efficient set. Let the second strategy generated be:

v1 = 0 acres rented
v2 = 50 acres of corn to be planted
v3 = 190 acres of soybeans to be planted
v4 = May 26, the date after which all unplanted acreage is to be planted in soybeans

The set of net income levels associated with this strategy is given in Figure 5.3. Given this decision maker's preferences, this distribution dominates the distribution of income levels associated with the first strategy. After two iterations, then, the efficient set is comprised only of the second management strategy.

[Figure 5.3 Sample Observations from the Distribution of Outcomes Associated with Strategy Two: twenty sample net income levels with μ = $2,979.44 and σ = $7,691.31]

Let the third strategy generated by the GREMP model be defined by the following control variable levels:

v1 = 160 acres of land rented
v2 = 120 acres of corn to be planted
v3 = 280 acres of soybeans to be planted
v4 = May 26, the date after which all unplanted acreage is to be planted in soybeans

The distribution of net cash income levels associated with this strategy is given in Figure 5.4. When compared to the distribution of income levels associated with the second strategy--the only member of the current efficient set--this distribution neither dominates nor is dominated. After three iterations, then, the efficient set is comprised of the second and third strategies.

    -12468.33   -11453.37    -8569.24    -4272.46      740.21
      4556.02     8818.28     9095.91    10008.77    10424.22
     10543.07    10785.49    16644.11    18476.43    23776.90
     25010.99    25050.58    26685.05    30877.08

    μ = $10,152.21          σ = $12,516.70

Figure 5.4 Sample Observations from the Distribution of Outcomes Associated with Strategy Three

The process continues in this manner until a prespecified number of iterations has been completed. As noted earlier, there is no guarantee that a true optimum will be identified. If a sufficient number of strategies is evaluated, however, it is almost certain that a nearly optimal strategy will be included in the efficient set. The number of iterations specified depends on the characteristics of the problem being analyzed. Donaldson and Webster (1968) suggest that 2000 strategies be examined. Experience to date with the relatively small problems considered in this study, however, indicates that 500 to 1000 iterations are often quite sufficient.

5.3.4 General Comments on the GREMP Model

The flexibility of the generalized procedure for the identification of preferred choices described above is its greatest strength. It can easily be adapted for use in the analysis of a diverse range of practical decision problems without requiring that major simplifying assumptions be made. With regard to problem formulation, the use of a random search procedure to generate strategies for consideration permits considerable flexibility with respect to both the types of control variables which can be specified in the definition of a management strategy and the types of constraints which can be imposed on them.
Choice variables can be discrete or continuous. They can be controllable system input levels, feedback control rule parameters, or even indicators of the form of a feedback rule. Constraints on control variables can be linear or non-linear and can take forms more complex than those permitted in decision models based on mathematical programming.¹ With regard to the determination of system output distributions associated with alternative strategies, the use of Monte Carlo simulation techniques greatly facilitates the realistic representation of the stochastic processes by which the outcomes of particular choices are determined. In the simple example discussed above, price, yield, and time available for fieldwork are all considered explicitly as random factors which affect the outcome of any choice. Few restrictions are placed on the form of exogenous system input distributions, and the relationships among these factors, controllable system inputs, and system outputs can be quite complex in nature. With regard to the representation of decision maker preferences, the use of stochastic dominance with respect to a function also contributes greatly to the flexibility of the approach without sacrificing the logical power of decision theory based on the expected utility hypothesis. The interval preference measurements used with this criterion can be as precise or imprecise as is required for any particular decision analysis.

¹A discussion of some of the types of constraints on control variables which are possible in the GREMP model is given in Appendix C.

The GREMP model is also relatively efficient computationally. In the analysis of one test problem with thirty-five choice variables and twelve linear constraints, for example, 1000 alternative strategies were generated and evaluated using less than seventy seconds of CPU time on a CDC 6500. Furthermore, the core size of the computer program which implements the GREMP model is relatively small, and the degree of computational accuracy required for interval calculations is not unusually great. This suggests that it may be possible to design computer software which will permit the use of the GREMP model on a moderately sized personal computer.

Several criticisms of the GREMP model can be made. As was noted earlier, there is no guarantee that the model will identify a true optimum, since alternative strategies are generated in a random manner. Furthermore, the efficient set of choices may be quite large if the decision maker's preferences are not measured precisely enough. Neither of these criticisms is particularly serious, however. With regard to the first, in many instances it can be argued that a good solution to a well-formulated problem is preferable to an optimal solution to a problem which bears little resemblance to that actually facing the decision maker. The flexibility of the GREMP model, then, compensates for this weakness. With regard to the size of the efficient set, it can always be reduced by making more precise interval measurements of preferences, but it should be remembered that such a reduction may lead to the exclusion of the decision maker's preferred choice from the efficient set.
Such informa- tion can be obtained only by specifying the changes and repeating the procedure. For complex problems a sensitivity analysis conducted in this manner can be a costly and time-consuming process. The GREMP model can also be criticized because it requires that the evaluation of alternative strategies be based on the distribution of a single system output variable, usually some measure of income or wealth output variable. In reality, most decision makers are concerned with more than one performance criterion. They have multiple objectives, and they consider trade-offs among these objectives when making choices. Decision theorists have focused considerable attention in recent years on the construction of preference measures which depend on more than one performance criterion and on the incorporation of such preference mea- sures into a decision analysis. Unfortunately, however, the criterion of stochastic dominance with respect to a function has not been extended to the multivariate case, and there is no indication that such an exten- sion will be made in the near future. Despite the acknowledged existence of multiple goals which affect choices, it can be argued that in many instances the consideration of only the most important objective may be adequate. The added accuracy attained from a more complete analysis may not justify the added cost of such an undertaking. When it is judged that more than one performance variable must be considered in a decision analysis, however, the GREMP model can be used if modified 156 slightly. All that is necessary is that a single-valued multiple cri- terion utility function be incorporated into the model to replace the evaluative component based on stochastic dominance with respect to a function. Finally, two criticisms of a practical nature can be made of the model. First, it should be noted that, although the program which com- plements the GREMP model can be easily adapted for use in the analysis of a wide range of problems, the user is required to supply several problem-specific subroutines and so must have some programming skills. Though expertise in computer programming is certainly not required, this may preclude the use of the model in some instances. Second, it can be noted that in the analysis of complex decision problems with a large number of control variables, a large number of strategies must be examined before a nearly optimal one is identified. Computational costs can be considerable, then, for large problems. It should be noted, how- ever, that with careful problem formulation and with the use of feedback control rules the size of the feasible set of strategies can be greatly reduced. 5.4 An Application In this section the GREMP model is applied to the analysis of the sample problem discussed in the three preceding chapters. Efficient sets of choices are identified from a set of 500 feasible strategies for each of the three decision makers whose preference measures were given in Section 4.8 of Chapter IV. Since this sample problem was the basis for the discussion of the GREMP model in the preceding section, the reader should be familiar with its essential features. Therefore, the results of this application will be presented without further discussion. 157 The efficient set of choices for decision maker A is comprised of 1 Levels of land rented the eight strategies defined in Table 5.1. range from 0 to 160, with four of the eight strategies calling for land rental levels of 80 acres. 
Finally, two criticisms of a practical nature can be made of the model. First, it should be noted that, although the program which implements the GREMP model can be easily adapted for use in the analysis of a wide range of problems, the user is required to supply several problem-specific subroutines and so must have some programming skills. Though expertise in computer programming is certainly not required, this may preclude the use of the model in some instances. Second, it can be noted that in the analysis of complex decision problems with a large number of control variables, a large number of strategies must be examined before a nearly optimal one is identified. Computational costs can be considerable, then, for large problems. It should be noted, however, that with careful problem formulation and with the use of feedback control rules the size of the feasible set of strategies can be greatly reduced.

5.4 An Application

In this section the GREMP model is applied to the analysis of the sample problem discussed in the three preceding chapters. Efficient sets of choices are identified from a set of 500 feasible strategies for each of the three decision makers whose preference measures were given in Section 4.8 of Chapter IV. Since this sample problem was the basis for the discussion of the GREMP model in the preceding section, the reader should be familiar with its essential features. Therefore, the results of this application will be presented without further discussion.

The efficient set of choices for decision maker A is comprised of the eight strategies defined in Table 5.1.1 Levels of land rented range from 0 to 160 acres, with four of the eight strategies calling for land rental levels of 80 acres. Soybeans are the predominant crop in each plan, which is understandable given the cost-price relationships for the two crops. At low corn acreage levels, the feedback control rule parameter has little effect, so the switching date is of little importance in these strategies. Mean income levels realized under the efficient strategies range from slightly less than $3000 to slightly above $10,000. Minimum income levels vary little from one strategy to another. Maximum income levels, however, are significantly affected by land rental levels. It is also interesting to note that the efficient set need not be a mean-variance efficient set. For example, strategy 2 dominates strategy 4 by the mean-variance criterion, but both are in the decision maker's efficient set determined by stochastic dominance with respect to a function.

1System output levels are not enumerated for each state of nature. Rather, the mean, standard deviation, minimum value, and maximum value are given for each distribution.

[Table 5.1 Efficient Strategies for Decision Maker A: control variable levels and properties of the net cash income distribution for each of the eight efficient strategies. The tabular values are not legible in the scanned source.]

The efficient set of decision maker B is comprised of the nine strategies defined in Table 5.2. In this case, land rental levels tend to be higher than those called for by the strategies in decision maker A's efficient set. At higher acreage levels, the mix between corn and soybeans becomes more even, though most of the available acreage is planted to soybeans in each strategy. Mean net income levels tend to be higher within this set of strategies, but the dispersion of possible net income levels is also greater. Given the differences in the preference measurements for decision makers A and B, the dissimilarity between these two efficient sets is understandable. The interval measurement of absolute risk aversion for decision maker A indicated a high level of risk aversion at negative net income levels--i.e., he has an apparently strong aversion to losses. Decision maker B, on the other hand, has much lower levels of absolute risk aversion at low income levels, and his efficient set contains strategies which, while providing opportunities for the realization of high income levels, also can result in substantial losses.

[Table 5.2 Efficient Strategies for Decision Maker B: control variable levels and properties of the net cash income distribution for each of the nine efficient strategies. The tabular values are not legible in the scanned source.]

The efficient set of decision maker C is comprised of the eleven strategies defined in Table 5.3. Of interest in this case is the fact that each of these eleven strategies is a member of the efficient set of either decision maker A or B.
In a sense, then, it can be said that this decision maker's preference measurement lies between those of the other two decision makers.

Several general comments can be made about these results. First, they provide clear evidence of the discriminatory power of the preference measures based on the interval approach. The largest of the three efficient sets contains only two percent of the total number of strategies examined. Second, the results demonstrate once again that preferences do have an important impact on the choices made by an individual. Finally, it should be observed that a search of only 100 feasible strategies identified many of the strategies included in these three efficient sets. This indicates that in some cases the evaluation of an extremely large number of strategies may not be necessary.

[Table 5.3 Efficient Strategies for Decision Maker C: control variable levels and properties of the net cash income distribution for each of the eleven efficient strategies. The tabular values are not legible in the scanned source.]

CHAPTER VI

COMBINED PRODUCTION AND MARKETING DECISIONS BY CASH GRAIN FARMERS: AN EXTENDED APPLICATION

6.1 Introduction

The formulation of an integrated set of operational techniques for the analysis of decisions made under uncertainty was the focus of the preceding chapters of this study. A simple example related to land rental and crop production decisions made by cash grain farmers has been used to illustrate these techniques. In this chapter the usefulness of the methodological tools developed above is demonstrated further by expanding the earlier example to include the consideration of alternative marketing strategies in conjunction with the selection of a cropping plan. Two modes of marketing will be evaluated: the sale of all production at harvest in the cash market and forward contracting.1 The objectives here are to examine how these two modes of marketing can best be combined in the formulation of a marketing strategy which is appropriate for a particular decision maker, to determine the degree of interdependence between crop production and marketing strategies, and to examine the impact of changes in preferences on combined production and marketing strategies.

1To simplify the discussion, other marketing alternatives such as hedging in the futures market and participation in government price stabilization programs will not be considered, nor will the use of crop storage as a marketing tool be examined. These alternatives could be incorporated into the analysis presented below, however.

Cash grain farmers make major resource allocation decisions under conditions characterized by uncertainty with respect to both prices and output levels. With the increased dependence of the feed grain sector on foreign markets in recent years, the impact of price uncertainty on cash grain producers has been particularly strong. Many have come to realize that marketing as well as production decisions have a major impact on the level of income they realize.
The common practice of selling all production at harvest in the cash market often results in the receipt of prices which are low relative to those which could be realized under alternative modes of marketing. Furthermore, a strategy comprised only of cash sales at harvest does nothing to diminish the degree of price uncertainty faced by the producer over the period prior to and during planting--the period when major allocative decisions must be made. As a result, many producers find it desirable to consider forward pricing some or all of their planned production. By contracting to deliver a certain quantity of grain on a future date at a specified price, the producer sells all or some portion of his crop in advance. In doing so, he establishes with certainty a price for at least part of his total production, thereby greatly reducing the degree of price uncertainty he faces.

The reduction in price uncertainty achieved through forward contracting can be of considerable value in some situations. Advance knowledge of the price to be received simplifies the planning process. Furthermore, if the contract price is high enough, a producer who forward contracts may almost ensure that he will realize an adequate level of income. There are also costs associated with contracting, however. The other party in the contract, the buyer of the grain, often has access to better information than that available to the producer, and it is unlikely that he will offer a price higher than that he himself expects to receive. Of equal importance is the fact that the seller, though he protects himself against the effect of an unexpected downturn in price, also foregoes the opportunity to benefit from unexpected price increases. Finally, grain must be delivered according to the terms of the contract even if production falls short of the amount which is forward priced. If yields are unusually low or if poor weather conditions prevent the planting of some acreage altogether, the producer faces the prospect of being forced to purchase grain on the cash market to meet the terms of his contract. Given the advantages and disadvantages of forward contracting, then, the producer must determine how much, if any, of his anticipated production he wishes to contract. Clearly preferences, financial position, price expectations, and production plans affect this decision.

The determination of optimal forward contracting levels for agricultural producers facing yield and price uncertainty has been examined in a mean-variance framework by McKinnon (1967). He shows that under relatively simple conditions the optimal level of forward contracting depends on five fundamental parameters: the standard deviations of crop yield and product price at harvest, the expected crop yield, the forward contract price (which is also the expected price at harvest), and the correlation coefficient between crop yield and harvest price. McKinnon's work has been extended by Ward and Fletcher (1971) and by Heifner (1972), and Barry and Willmann (1976) provide an interesting empirical application of contracting theory based on mean-variance analysis. The applicability of the results of each of these studies is limited by the somewhat restrictive assumptions associated with the use of the mean-variance criterion--assumptions of normally distributed net returns and risk averse behavior. The requirement that net returns be normally distributed may be particularly unrealistic.
More critical, however, is the failure in each study to consider a factor which is of primary importance in an applied context. All treat forward contracting decisions as though they can be made at only one point in time. In reality producers have forward pricing opportunities open to them over an extended period of time. In such a context the decision of when to contract may be as important as the decision of how much to contract. Producers continually evaluate forward pricing opportunities, and it is not unusual for an individual to enter into contracts at several different times. Once a contract is made, however, it must be honored. Choices made in the present, then, affect opportunities in the future. Therefore, it is important to consider forward contracting decisions in a more dynamic analytical framework.

The techniques developed in this study allow the relaxation of the restrictions on probabilities and preferences imposed by the use of mean-variance analysis. More important, however, they permit the evaluation of both production and forward contracting decisions in a more dynamic framework in which the roles of learning and adaptive behavior are treated more explicitly than in previous studies. In the analysis below, attention will again be focused on decisions made by a cash grain farmer producing corn and soybeans. As in the example discussed in earlier chapters, flexibility is introduced into the production planning process through the incorporation of a simple stopping rule for corn planting. Marketing strategies for both corn and soybeans are defined by more complex feedback control rules which are applied repeatedly over a seven-month period extending from mid-January to mid-August. Particular attention will be given to the examination of both the interdependence between production and marketing strategies and the impact of changes in decision maker preferences on the choice of a combined production-marketing strategy.

6.2 Problem Formulation

The basic decision situation in this extended example is the same as that described in Section 2.5 of Chapter II. The operator of a relatively small southeastern Michigan cash grain farm needs to realize a substantial level of income from his farming operation in order to meet his debt repayment commitments of $35,000 annually and to cover family living expenses. He wishes to choose a management strategy which, given his risk preferences and the range of opportunities open to him, will best satisfy this need. The time is January 1979, and, because land rental decisions must be made, the farmer must formulate at least a tentative management strategy now.

The system of concern in this example is comprised of a set of production and marketing processes which constitute a farming operation. The performance of this system is measured by a single system output variable: annual net income for family living expenses and firm expansion after all debt repayment commitments have been met. The level of income realized is affected by the structure of the system and by exogenous and controllable system inputs.

The structure of the production and cash marketing processes embodied in this system was described in Section 2.5 and remains unchanged in this example. It will be recalled that standard crop budgets for corn and soybeans were presented and that relevant planting and harvest periods were specified for each crop, as were rules of thumb which determine the priority of each crop during planting and harvesting.
Important state variables related to the production process included production expenses incurred to date, acreage of each crop planted or harvested to date, and bushels of each crop harvested to date. All production was sold in the cash market at harvest in the original example. The harvest price of each crop was multiplied by the number of bushels harvested to determine the value of marketing receipts for each crop, the key state variable used to describe the marketing process.

The incorporation of forward contracting into the analysis requires that several structural features associated with this mode of marketing be specified and that several new state variables be defined. Structurally, it must be recognized that contracts, once made, are binding, and that if production falls short of the amount contracted, enough grain must be purchased in the cash market at harvest to cover the deficiency. It must also be noted that, unlike a hedge in the futures market, a hedge based on a forward contract cannot be lifted. The new state variables which must be defined include the number of bushels of each crop contracted to date and the current level of receipts forthcoming at harvest from quantities of each crop contracted.1

1Several additional state variables will be defined during the discussion of the feedback control rules which determine the marketing strategies for corn and soybeans.

Exogenous system inputs in the original example included the following stochastic environmental factors: the price at harvest of each crop, the number of days available for fieldwork in any particular planting or harvest period, and the yield of each crop for each allowable planting-harvest combination. Consideration of forward contracting requires the specification of an additional set of stochastic factors: the harvest delivery forward contract prices for corn and soybeans over the period when contracting decisions are to be made. Clearly this set of prices will affect the desirability of forward contracting and the level of income realized under any management strategy which calls for the forward contracting of either crop, and clearly these prices cannot be known with certainty in mid-January when a management strategy must be formulated.

Only four control variables were considered in the example discussed in previous chapters. They were:

v1 = acres of land rented
v2 = acres of corn to be planted
v3 = acres of soybeans to be planted
v4 = the date after which all unplanted acreage is to be planted in soybeans

The consideration of forward contracting decisions requires the specification of several new control variables, which will determine the number of bushels of each crop contracted at any particular time. Contracting levels, per se, will not be selected directly. Rather, a feedback control rule which determines the desired contracting level at any point in time will be specified for each crop. The form of this rule is the same for both corn and soybeans and is similar to that discussed in Section 2.3.1 of Chapter II. It is:

DBCt = ECt(vwZAt + vxZBt + vyZCt)    6.1

where:

DBCt = desired bushels contracted at time t

ECt = current expected size of harvest

ZAt = the percentage difference between the current contract price, CPt, and the current expected harvest price, EPt; i.e., ZAt = (CPt - EPt)/EPt

ZBt = the current daily rate of change in the contract price; i.e., ZBt = dCPt/dt

ZCt = the difference between the desired percentage of the expected crop contracted and the actual percentage contracted.
The desired percentage contracted is defined by the expression APt/DAPt + vz, where APt and DAPt are current actual and desired acreage planted and vz is a parameter to be selected. The actual percentage contracted is defined by the expression BCt/ECt, where BCt is the number of bushels contracted to date and ECt is the expected size of the harvest. It follows that ZCt = APt/DAPt + vz - BCt/ECt. In addition to vz, the parameters vw, vx, and vy are to be selected.

The inclusion of each term in this contracting rule can be justified by appealing to commonly recognized rules of thumb regarding forward contracting. The first term, ZAt, reflects an assessment of the fundamental position of the market. If ZAt is positive, the current contract price is above the current expected harvest price. As ZAt becomes larger, the attractiveness of current pricing opportunities increases and the decision maker is expected to want to contract more. Similarly, if ZAt is negative, the current pricing opportunity is not an attractive one and the desired level of contracting will be less. The parameter which weights this factor, vw, is expected to be positive.

The second term, ZBt, reflects, in a very simple way, a technical assessment of market conditions. If ZBt is positive, the contract price is increasing, and most technical analysts would recommend that contracting be delayed. If ZBt is negative, and if fundamental analysis indicates that the current pricing opportunity is a favorable one, on the other hand, many technical analysts would recommend that forward contracts be entered into. Following this reasoning, the parameter which weights this factor, vx, is expected to be negative.

The third term, ZCt, reflects the degree to which current contracting levels coincide with desired contracting levels. In this case desired levels are a linear function of the percentage of desired acreage actually planted. The parameter vz shifts the intercept of that function and is expected to be negative, reflecting the fact that many decision makers hesitate to contract before planting begins unless pricing opportunities are particularly favorable. When ZCt is positive, the decision maker is expected to desire to contract more unless other factors indicate the current pricing opportunity is not a good one. When ZCt is negative, the desire to contract is diminished. Therefore, the parameter which weights the importance of this factor in the contracting rule, vy, is expected to be positive.

As stated above, a control rule of this form is specified for each crop. Since four parameters must be selected for each rule, a total of eight new control variables are required in this example. Control variables v5, v6, v7, and v8 correspond to vw, vx, vy, and vz for the corn contracting rule, and v9, v10, v11, and v12 correspond to the same parameters for the soybean contracting rule.

Introduction of the contracting rules into the analysis also requires the specification of several new state variables. In addition to those mentioned earlier, state variables indicating the daily rate of change in the contract price, the expected size of the harvest, and the expected price at harvest must be monitored for each crop. The daily rate of change in the contract price is defined by the expression:

dCPt/dt = (CPt - CPt-1)/dt    6.2

where CPt and CPt-1 are successive observations of the contract price and dt is the number of days between price observations. The expected size of harvest for a particular crop is defined by the expression:

ECt = (BY)(DAPt)    6.3

where DAPt is the desired acreage to be planted and BY is a base yield.1 The expected harvest price for a particular crop is defined by the following simple expectations model:

EPt = .5CPt + .33CPt-1 + .17CPt-2    6.4

This is simply a weighted moving average of the three most recent contract price observations. Finally, it should be noted that in the application of each rule, the number of bushels to be contracted at any time is set at zero if the number of bushels already contracted is equal to 150 percent of the expected crop.

1BY is set at 100 bushels per acre for corn and 33 bushels per acre for soybeans.
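To make the mechanics of the rule concrete, the following Python sketch collects equations 6.1 through 6.4 and the 150 percent cap into a single function. The function and argument names are inventions of this illustration, and the treatment of a target below the current contracting level as no new contracting is an assumption, consistent with the fact that contracts, once made, cannot be lifted.

```python
def contract_price_change(cp_t, cp_prev, days_between):
    """Eq. 6.2: daily rate of change in the contract price."""
    return (cp_t - cp_prev) / days_between

def expected_harvest_price(cp_t, cp_t1, cp_t2):
    """Eq. 6.4: weighted moving average of the three most recent
    contract price observations."""
    return 0.5 * cp_t + 0.33 * cp_t1 + 0.17 * cp_t2

def bushels_to_contract(vw, vx, vy, vz, cp_t, cp_t1, cp_t2,
                        days_between, ap_t, dap_t, bc_t, base_yield):
    """Bushels newly contracted at one decision date under rule 6.1.

    vw, vx, vy, vz     -- the four rule parameters for one crop
    cp_t, cp_t1, cp_t2 -- three most recent contract price observations
    ap_t, dap_t        -- actual and desired acreage planted
    bc_t               -- bushels contracted to date
    base_yield         -- BY: 100 bu/acre for corn, 33 bu/acre for soybeans
    """
    ec_t = base_yield * dap_t                                # eq. 6.3
    ep_t = expected_harvest_price(cp_t, cp_t1, cp_t2)        # eq. 6.4
    za_t = (cp_t - ep_t) / ep_t                              # fundamental signal
    zb_t = contract_price_change(cp_t, cp_t1, days_between)  # technical signal
    zc_t = ap_t / dap_t + vz - bc_t / ec_t                   # gap to desired level
    if bc_t >= 1.5 * ec_t:                                   # 150 percent cap
        return 0.0
    dbc_t = ec_t * (vw * za_t + vx * zb_t + vy * zc_t)       # eq. 6.1
    # Contracts cannot be lifted, so a target below the current level
    # results in no new contracting (an assumption of this sketch).
    return max(0.0, dbc_t - bc_t)
```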
As in the previous example, the planning and decision horizons are both said to be a single crop year. It will be recalled that the stopping rule for corn planting is checked at the end of the second, third, and fourth planting periods. The two contracting rules are consulted a total of twelve times--four times prior to the commencement of planting, at the end of each of the six planting periods, and twice between planting and harvest. The exact dates for each application of the two rules are given in Figure 6.1.

The problem in this example, then, is to identify a management strategy defined by a total of twelve control variables which maximizes the expected utility of a particular decision maker. Three of these choice variables specify controllable system input levels directly. The other nine are parameters in feedback control rules which are consulted periodically over the entire decision horizon.

6.3 The Determination of Subjective Probability Distributions

Subjective probability distributions for crop yields, time available for fieldwork, and crop prices at harvest were specified in Section 3.4 of Chapter III. They remain unchanged in this example. It is also necessary to specify subjective distributions for the harvest delivery contract price of each crop at each of the eleven dates after January 10 listed in Figure 6.1. These prices cannot be known with certainty at the time when a strategy must be selected, and their distributions can have an important impact on the distribution of net income levels realized under any particular management strategy being considered.

January 10
February 10
March 20
April 20
May 10
May 18
May 26
June 3
June 11
June 19
July 20
August 20

Figure 6.1 Dates for the Application of Forward Contracting Rules

For either crop, elements in the series of contract prices are expected to be correlated with each other and with the cash price at harvest. Therefore, individual contract price distributions cannot be specified independently. Even if this could be done, it is unlikely that a decision maker would have clearly formulated expectations concerning contract price levels at each of eleven dates over a seven-month period. To simplify the specification of these distributions, then, the following procedure, which reflects the author's own subjective assessments, was used. The harvest delivery contract price for each crop on January 10, 1979, the date when a strategy is to be selected, is known with certainty. That for corn is $2.08, and that for soybeans is $6.31. For any particular state of nature, the cash price at harvest can also be specified, this price representing a sample observation from the underlying subjective price distribution.
In addition, contract margins charged by elevators are relatively constant and can be set at $.10 per bushel for corn and $.25 per bushel for soybeans.1 Given this information, it is assumed that the harvest delivery contract price offered on any date between January 10 and the contract delivery date can be adequately forecasted by linearly interpolating between the January 10 price and the price at harvest less the contracting margin.2 In Figure 6.2, for example, the contract price of corn on January 10 is $2.08. The cash price 295 days later, on November 1, is $2.32. With a contract margin of $.10 per bushel, then, a hypothetical contract price offered on that day would be $2.22. The line between the contract prices of $2.08 on January 10 and $2.22 on November 1 is used to forecast the contract price on intermediate dates. On May 10, 120 days after January 10, for example, the forecasted contract price is $2.14. Forecasts of the contract price of each crop at each of the dates specified in Figure 6.1 can be made in this manner. By making such forecasts for a number of sample harvest price observations, a crude multivariate distribution of intermediate forward contract price levels can be specified.

1Contract margins cover the elevator's operating expenses and represent a premium paid to the elevator by the farmer for the reduction in price uncertainty.

2Approximate delivery dates are October 15 for soybeans and November 1 for corn.

[Figure 6.2 Contract Price Forecast Line and Observed Contract Price Levels for One State of Nature: a plot showing the cash harvest price, the contract margin, and the forecast line from $2.08 on January 10 to $2.22 on November 1, with simulated contract price observations scattered about the line. The plotted values are not legible in the scanned source.]

Clearly forecasts based on such a simple model are subject to considerable error. To reflect this fact during the actual specification of sample observations from the contract price distributions, the forecast of each intermediate contract price was multiplied by a factor of 1+e, where e is a normally distributed random variable with mean 0.0 and standard deviation .05. To further enhance the realism of the model, the multiplicative error terms for successive dates were correlated to reflect the fact that observed contract price levels are autocorrelated. A plot of values based on this more complex model is also shown in Figure 6.2. In this instance, relatively good pricing opportunities occur in March and July, and contract prices during the period when planting takes place cluster around the price forecast line. It is interesting to note that the contract price in this example never exceeds the actual cash price at harvest.
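The following Python sketch generates one such sample path of intermediate contract prices. The first-order autoregressive form given to the correlated multiplicative errors is an assumption made here for concreteness, since the text specifies only that successive errors are correlated, and the function name and arguments are inventions of this illustration.

```python
import random

def contract_price_path(jan10_price, harvest_cash_price, margin,
                        days_from_jan10, days_to_delivery,
                        err_sd=0.05, err_rho=0.7):
    """One sample path of harvest-delivery contract price offerings.

    A straight line is anchored between the known January 10 contract
    price and the sampled cash price at harvest less the contract
    margin.  Each intermediate forecast is then perturbed by a factor
    1+e, where the e's follow an assumed AR(1) process scaled so that
    the stationary standard deviation is err_sd.
    """
    end_price = harvest_cash_price - margin
    path, e = [], 0.0
    for d in days_from_jan10:
        forecast = (jan10_price
                    + (end_price - jan10_price) * d / days_to_delivery)
        shock = random.gauss(0.0, err_sd * (1.0 - err_rho ** 2) ** 0.5)
        e = err_rho * e + shock
        path.append(forecast * (1.0 + e))
    return path

# Corn values from the example above: $2.08 on January 10, a sampled
# cash price of $2.32 at delivery 295 days later, and a $.10 margin.
# The day counts correspond to the first six rule dates in Figure 6.1.
prices = contract_price_path(2.08, 2.32, 0.10,
                             [31, 69, 100, 120, 128, 136], 295)
```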
The distribution of net income levels realized under any particular management strategy is determined by simulating system performance under that strategy in each of a set of sample states of nature.1 One possible management strategy is defined by the following control variable levels: v1 = 240 acres rented; v2 = 190 acres corn; v3 = 290 acres soybeans; v4 = May 26, the date after which all unplanted acreage is planted in soybeans; v5 = 12, v6 = 1, v7 = .90, and v8 = -.30 for the corn contracting rule; and v9 = 17, v10 = -1, v11 = .50, and v12 = -.20 for the soybean contracting rule. The sample observations which define the distribution of net cash income levels associated with this strategy are given in Table 6.1, along with the other information pertaining to system performance in each of the twenty states of nature considered.

1The computer program which implements the simulation model used in this phase of the analysis is subroutine DISGEN in the listing of the GREMP model at the end of Appendix C.

[Table 6.1 System Performance Under a Sample Management Strategy: bushels harvested and contracted, average contract and cash harvest prices, and net income levels with and without contracting for each of the twenty states of nature. The tabular values are not legible in the scanned source.]

The cropping plan in this case is identical to that which maximized expected net returns in the earlier example which precluded forward contracting, and the net income levels realized under the strategy without contracting are also given in Table 6.1.

Several observations can be made about the differences in net income levels realized under these two strategies. First, the average net income level is higher under the strategy which precludes contracting. This is to be expected, given the contract margins on corn and soybeans of $.10 and $.25 respectively. Second, the variability of net income, as measured by the standard deviation of the sample observations, is reduced by forward contracting, again as would be expected. The reduction is not a sizeable one, however. It is also interesting to note that in state 14, when yields for both corn and soybeans are apparently quite low, contracting costs the producer a considerable amount of money and turns an already bad situation into a worse one. The problem is particularly serious for corn. Pricing opportunities apparently appear to be quite favorable, since more corn is contracted than would normally be produced. At harvest, however, the cash price is $.09 above the average price for which corn was forward contracted, and a total of 14,631 bushels must be purchased on the cash market at this higher price to meet the terms of the contracts. Finally, it should be noted that quantities of corn and soybeans contracted vary considerably from one state of nature to another. This reflects the fact that there are considerable differences in the attractiveness of the forward contracting opportunities available in each state. Clearly the contracting rule is sensitive to these differences.
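To see the order of magnitude involved in the state 14 outcome, note that covering a 14,631 bushel shortfall at a cash price $.09 per bushel above the average contract price implies an additional loss of roughly 14,631 x $.09, or about $1,317, beyond the income already lost through low yields.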
6.4 Decision Maker Preferences

Interval preference measures for the three decision makers discussed in the illustrations presented in Chapters IV and V will be used to order the alternative strategies considered in this example. Orderings will also be made for a risk neutral decision maker--a decision maker whose absolute risk aversion function is always equal to zero--to determine whether forward contracting rules can be identified which lead to higher expected net income levels than those realized under strategies which preclude forward contracting.

The net income level distributions associated with the two strategies identified in the preceding section--one with contracting and one without--were ordered for each of the four decision makers considered in this example. The strategy which precludes forward contracting is preferred to that with contracting by the risk neutral decision maker and by decision maker A. The two strategies cannot be ordered for decision makers B and C. It is interesting to note that the non-contracting strategy is preferred by decision maker A, whose preferences were most conservative in the previous example. This may be attributable to the aversion of this decision maker to the sizeable losses incurred under the contracting strategy in state 14, when crop yields are unusually low and contracting levels are high.

6.5 The Identification of Preferred Choices

By incorporating the simulation model described in Section 6.2 into the Monte Carlo risk programming model described in Chapter V, an efficient set of strategies can be identified for each of the four decision makers considered in this example. Each strategy specifies a flexible production plan and feedback control rules which direct forward contracting decisions for both corn and soybeans.

Constraints placed on the values of crop production and land rental control variables were discussed in Section 5.3 of Chapter V. They remain unchanged in this example. Only upper and lower bound constraints are placed on the parameters of the two contracting rules. It will be recalled from Section 6.2 that each rule has four parameters. Experimentation with the model indicates that the following ranges of admissible values are reasonable and do not unduly constrain the set of feasible choices:

0 ≤ vw ≤ 20,    w = 5, 9    6.5
-20 ≤ vx ≤ 0,    x = 6, 10
0 ≤ vy ≤ 1.0,    y = 7, 11
-.7 ≤ vz ≤ 0,    z = 8, 12

Parameters vw and vx are restricted to integer values. Parameters vy and vz are restricted to values which are integer multiples of 0.1. A sketch of how one feasible parameter vector might be drawn under these constraints is given below.

In this example 1000 randomly selected strategies were examined and ordered for each decision maker's preferences. This is twice the number of strategies considered in the earlier example. The fact that each strategy is now defined by twelve control variables rather than only four implies that many more feasible strategies exist, however, and therefore more strategies should be evaluated.
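The following Python fragment illustrates one way such a random draw of the eight contracting rule parameters might be made. The function name is an invention of this sketch, and the crop production and land rental variables v1 through v4, whose constraints are problem-specific, are omitted.

```python
import random

def sample_contracting_parameters():
    """Draw one feasible setting of v5..v12 under constraint set 6.5.

    vw in {0, ..., 20} and vx in {-20, ..., 0} are integers; vy in
    {0, .1, ..., 1.0} and vz in {-.7, -.6, ..., 0} are integer
    multiples of 0.1.  One draw of each parameter is made per
    contracting rule (corn and soybeans).
    """
    def one_rule():
        vw = random.randint(0, 20)
        vx = random.randint(-20, 0)
        vy = random.randint(0, 10) / 10.0
        vz = -random.randint(0, 7) / 10.0
        return vw, vx, vy, vz

    v5, v6, v7, v8 = one_rule()      # corn contracting rule
    v9, v10, v11, v12 = one_rule()   # soybean contracting rule
    return [v5, v6, v7, v8, v9, v10, v11, v12]
```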
The efficient sets identified range in size from a single element for the risk neutral decision maker and three elements for decision maker A to thirteen and seventeen elements for decision makers B and C respectively. It is not possible to discuss all the efficient strategies for each decision maker; therefore, a representative strategy from the efficient set of each decision maker is defined in Table 6.2.

[Table 6.2 Representative Strategies from the Efficient Sets of the Four Decision Makers: land rental, corn and soybean acreages, switching date, and corn and soybean contracting rule parameters for each representative strategy. The tabular values are not legible in the scanned source.]

The representative strategy for the risk neutral decision maker is that which maximizes expected net income under contracting. Detailed information on system performance under this strategy in each of the twenty states of nature considered is given in Table 6.3.

[Table 6.3 System Performance Under the Preferred Strategy of a Risk Neutral Decision Maker: bushels harvested and contracted, average contract and cash harvest prices, and net income levels with and without contracting in each of the twenty states of nature. The tabular values are not legible in the scanned source.]

The expected net return is slightly less than that realized under the expected net return maximizing strategy from the earlier example which precludes contracting, but in many respects the two strategies are remarkably similar. The two cropping plans are nearly identical, and the contracting rules employed in the strategy which permits forward pricing do not result in extensive forward contracting for either crop in most states of nature. The parameter weighting the first term of each contracting rule is assigned a relatively high value, which implies that contracting is strongly discouraged when the contract price is below the expected harvest price and strongly encouraged when it is above the expected harvest price. This is precisely the behavior one would expect from an individual who seeks to maximize expected net returns. The parameter weighting the second term of each contracting rule is assigned a strongly negative value.
This implies that contract price movements also have an important impact on this decision maker's choices. Even if the pricing opportunity is a favorable one, he will delay his commitment to a forward contract if the contract price is rising. Again this conforms to intuitive expectations about the behavior of such a decision maker. Finally, the third parameter of each contracting rule is relatively small, which implies that this decision maker places little emphasis on achieving a desired level of contracting under almost any market conditions.

Net income levels realized under an identical production strategy which precludes contracting are also given in Table 6.3. Comparison of the two net income distributions indicates that contracting can lead to both higher expected net returns and less income variation, though actual differences between the two strategies are minimal. It is also interesting to note that the corn contracting rule is particularly successful in this instance, the average corn contracting price being well above the cash price at harvest in three of the four states of nature when more than 1000 bushels are contracted. The soybean contracting rule is somewhat less successful.

Information on system performance under the preferred strategy of decision maker A is given in Table 6.4. This strategy is markedly different from that of the risk neutral decision maker. Its production component calls for no land rental and for the planting of all available acreage in soybeans. This is identical to one of the efficient production plans identified for this decision maker in the earlier example. Because no corn is to be planted, only the soybean contracting rule has any impact on system performance. As the information given in Table 6.4 indicates, this rule results in substantial levels of forward contracting in most states of nature. The parameter weighting the first term of the soybean contracting rule, v9, is relatively large, indicating that this decision maker also places considerable emphasis on fundamental analysis--on differences between the forward contract price and the expected cash price at harvest. Unlike the risk neutral decision maker, however, decision maker A assigns relatively little weight to technical analysis--to the analysis of contract price movements--as is indicated by the small absolute magnitude of parameter v10. Also in contrast to the strategy of the risk neutral decision maker, the relatively large value of parameter v11 indicates that much stronger emphasis is placed on attaining some desired level of contracting under most conditions, and the fact that v12 is assigned a value of 0.0 implies that this desired level is 100 percent of the expected crop once all acreage has been planted.

[Table 6.4 System Performance Under the Representative Strategy for Decision Maker A: bushels harvested and contracted, average contract and cash harvest prices, and net income levels with and without contracting in each of the twenty states of nature. The tabular values are not legible in the scanned source.]
Comparison of the net income levels realized under this strategy with those associated with a management strategy which precludes contracting but has an identical production plan is of considerable interest. The information given in Table 6.4 indicates that both the probability of negative net income levels and the magnitude of losses are reduced by the contracting rule. The probability of realizing relatively high income levels is also reduced, however, and expected net income is considerably higher under the strategy without contracting. When these two strategies are ordered using decision maker A's preference measurement, the strategy with contracting dominates that without.

Information on system performance under the representative strategy from the efficient set of decision maker B is given in Table 6.5.1 In this instance 160 acres are rented, 140 acres are to be planted in corn, 260 acres are to be planted in soybeans, and May 18 is the date after which all unplanted acreage is planted to soybeans. The two contracting rules are quite different in this strategy. That for corn places a strong emphasis on differences between the current contract price and the expected cash harvest price, the value of parameter v5 being 19.0. Relatively little emphasis is placed on price movements, as evidenced by the small value of v6, and a rather strong emphasis is placed on contracting a specified percentage of the projected crop. This rule results in moderate to high contracting levels in thirteen states and is quite successful, with the average contract price being above the cash harvest price in ten of the thirteen states in which there is contracting.

1The preferred strategy of the risk neutral decision maker is also in decision maker B's efficient set.

[Table 6.5 System Performance Under the Representative Strategy for Decision Maker B: bushels harvested and contracted, average contract and cash harvest prices, and net income levels with and without contracting in each of the twenty states of nature. The tabular values are not legible in the scanned source.]
The soybean contracting rule, on the other hand, is characterized by low values for each of its four parameters. It results in contracting levels below 1000 bushels in all but three states of nature and so has little effect on net income levels realized. Comparison of the distribution of net income levels realized under this strategy with that associated with an identical production plan without contracting indicates that contracting results in a slightly higher expected net return and in some reduction in income variability. Given decision maker B's preferences, the strategy with contracting dominates that without.

Information on system performance under the representative strategy for decision maker C is given in Table 6.6.1 In this case the production plan calls for the rental of 80 acres, for 50 acres of corn, and for 270 acres of soybeans. May 25 is the date after which all unplanted acreage is planted to soybeans, but the switching rule is inactive due to the low level of desired corn acreage. The contracting rules for both crops place considerable weight both on differences between the current contract price and the expected cash harvest price and on contract price movements. The corn contracting rule emphasizes the contracting of a certain percentage of the crop in most states of nature, however, while the soybean rule does not. Neither rule results in substantial contracting levels in most states of nature, though the corn rule does lead to contracting more than half the number of bushels harvested in three states of nature.

1The representative strategy of decision maker B is also in decision maker C's efficient set.

[Table 6.6 System Performance Under the Representative Strategy for Decision Maker C: bushels harvested and contracted, average contract and cash harvest prices, and net income levels with and without contracting in each of the twenty states of nature. The tabular values are not legible in the scanned source.]
The average contract price for corn exceeds the cash harvest price in twelve of the twenty states of nature, while that for soybeans exceeds the cash harvest price in only two of the six states in which there is contracting. The distribution of net income levels realized under this strategy is quite similar to that associated with an identical cropping plan and no contracting. Expected returns are slightly higher under the strategy with contracting, however, and, given decision maker C's preferences, it is preferred to that without contracting.

6.6 Further Discussion of the Results

Several more general observations can be made regarding these results. First, they demonstrate once again that decision maker preferences have an important impact on the choice of a management strategy. Nowhere is this more evident than in the differences between the preferred strategy of the risk neutral decision maker and the representative strategy from the efficient set of decision maker A, whose level of absolute risk aversion is high over the negative range of net income values. Both cropping and marketing strategies are quite different, as are the associated net income distributions. The preferred strategy of the risk neutral decision maker calls for the rental of 240 acres and for a relatively balanced mix of corn and soybean production; it calls for the application of contracting rules which lead to an avoidance of forward pricing except in instances when pricing opportunities appear to be particularly favorable. The representative strategy for decision maker A, on the other hand, calls for no land rental and specialization in soybean production and for the application of a contracting rule which results in the forward pricing of a substantial portion of the expected crop under nearly all market conditions. The expected net income level of $10,756.27 under the preferred strategy of the risk neutral decision maker is more than $7,400 higher than that associated with the representative strategy of decision maker A, but the latter is much less likely to face substantial losses under his preferred strategy.

The results also show that the introduction of forward contracting has only a minor impact on crop production strategies. Levels of land rental, relative acreages allotted to corn and soybeans, and stopping dates for corn planting specified in the strategies included in the efficient set of each decision maker show few significant changes. The most notable change is the inclusion of several cropping plans which devote more acreage to corn in the efficient set of decision maker C when forward contracting is incorporated into the management strategy.

A third observation is that, though many of the strategies identified appear to have some minor flaw, efforts to construct improved strategies through a search of a larger number of strategies or by simply changing parameter values which seem to cause problems rarely resulted in the identification of strategies with substantially better performance. In an effort to identify a better strategy for the risk neutral decision maker, for example, 250 additional strategies for which land rental is set at 240 acres were evaluated.
The strategy defined in Figure 6.3 was identified as that which maximizes expected net returns. It has an expected net income level of $10,868.57. This is higher than that of the preferred strategy identified in the original searches of strategies with and without contracting. The increase in the level of expected net income is not a very significant one, however.

Crop Production Plan
v1 = 240 acres rented
v2 = 190 acres corn
v3 = 290 acres soybeans
v4 = May 26, the date after which all unplanted acreage is planted to soybeans

Corn Contracting Rule Parameters
v5 = 20.00
v6 = -12.00
v7 = .20
v8 = -.40

Soybean Contracting Rule Parameters
v9 = 7.00
v10 = -4.00
v11 = 0
v12 = -.30

Expected Net Return: $10,868.57
Standard Deviation of Net Return: $15,381.73
Lowest Sample Net Return: -$17,108.70
Highest Sample Net Return: $33,857.36

Figure 6.3 The Expected Net Return Maximizing Strategy

Finally, it is interesting to note the composition of the efficient set for each decision maker when strategies with and without contracting are evaluated simultaneously. For the risk neutral decision maker the results based on the original search indicate that a strategy without contracting--that defined in Table 6.1--is preferred to all others. The strategy defined in Figure 6.3, which does involve contracting, has a still higher expected net return, however, and so is preferred by this decision maker.1 For decision maker A, it was found that each of the eight efficient non-contracting strategies is dominated by at least one of the efficient strategies with contracting. For decision maker B seven of the nine efficient non-contracting strategies were dominated by at least one efficient contracting strategy, and for decision maker C all eleven efficient non-contracting strategies are dominated. In general, then, strategies with contracting clearly outperform those which preclude it under the conditions specified in this example.

1A more intensive search of non-contracting strategies failed to identify any with a higher expected net income level.

6.7 Implications for Further Research

The problem of identifying combined production and marketing strategies which are well suited for a particular decision maker's situation is an important one. The analysis presented above is not intended to be a source of definitive solutions to this problem. Rather, it demonstrates an approach to the analysis of choices of this sort which employs the techniques developed in this study to identify reasonable management strategies which perform well under relatively realistic conditions. More work needs to be done, however, to make the model described above the basis for truly reliable prescriptions.

With regard to problem formulation, other marketing alternatives need to be incorporated into the analysis. The impact on individual management strategies of government price stabilization programs which put an effective floor on crop prices and provide a form of disaster insurance for participants may be particularly important, for example. Similarly, the use of the futures market rather than forward contracting may be an attractive alternative for larger producers. In addition to the inclusion of other marketing alternatives, more careful specification of the feedback control rules which direct marketing decisions may also make the model a more reliable prescriptive tool.

Improved assessments of probability distributions for all the stochastic factors considered in this example are also needed.
Particular attention should be given to the specification of more realistic distributions for forward contract price offerings over the course of the planning horizon. As was done in the example above, this can, perhaps, be best achieved by modelling the relationships among current contract price, cash price at harvest, and intermediate contract price offerings. In addition to improved specifications of subjective probability distributions, it may also be desirable to consider more than twenty sample states of nature in a decision situation such as this one, in which a large number of random factors have an impact on net income levels.

Once modifications of this sort have been made, a more systematic exploration of the effect of decision maker preferences, probability assessments, and scale of operation on the nature of preferred forward pricing rules would be of particular interest. Can shifts in the parameter values of a specific rule be related to differences in factors such as these? Are contracting rule parameters relatively insensitive to changes in some of these factors? Is the choice of an expectations model dependent upon preferences? These are important questions which should be considered in future research. The objective of such research should not necessarily be to identify invariant behavioral rules which can be applied in any situation. The results above indicate that such rules are not likely to exist. Rather, the goal should be to better understand both the contracting rules themselves and the complex interactions among the variety of factors which affect decision maker choices.

CHAPTER VII

SUMMARY AND CONCLUSIONS

7.1 A Review of the Methodological Tools Developed in this Study

This study has focused on the development of techniques designed for use during four important phases of an applied decision analysis: problem formulation, the determination of subjective probability distributions, the measurement of decision maker preferences, and the identification of preferred choices. When considered together, the procedures described in this study represent an integrated set of operational techniques which facilitate the application of powerful theoretical tools based on the expected utility hypothesis in an applied setting.

In the discussion of problem formulation, two important considerations were stressed: the need to structure the problem being analyzed by identifying and classifying factors which will have an important impact on the outcome of the decision being made, and the need to give careful attention to the definition of what is to be decided. With regard to the first of these considerations, the usefulness of systems identification (Manetsch and Park, 1977a) as an aid in structuring the decision maker's perception of a particular practical problem was demonstrated. With regard to the definition of what is to be decided, the need to recognize the existence of future opportunities for learning in many decision situations and the desirability of flexible management strategies in such instances were emphasized. The incorporation of feedback control rules into a management strategy was shown to be one way in which considerations of flexibility can be introduced into a decision analysis.
The value of combining direct probability assessments of underlying stochastic factors with the modelling of more complex stochastic processes was stressed in the discussion of the determination of the distribution of outcomes associated with any particular choice. Under this approach, which is suggested by Spetzler and Stael von Holstein (1975), direct encoding techniques are used to elicit information on a decision maker's expectations about future levels of environmental variables which cannot be controlled by the decision maker but have an important impact on the outcome realized under any particular strategy. Monte Carlo simulation techniques are then used to determine the combined effect of these exogenous stochastic factors and a particular management strategy on the distribution of system output levels. The use of both direct encoding and modelling allows considerable flexibility in the specification of subjective probability distributions of underlying variables and in the representation of the stochastic system under consideration--flexibility that is often lost when a more strictly analytical approach is taken.

One important criticism of systems modelling and Monte Carlo simulation is that correlations between random factors are often ignored. This is due largely to the fact that procedures for the generation of sample observations from multivariate distributions have been developed for only a few special distributions. An important contribution of this study is the development of a generalized multivariate process generator, which is described in detail in Appendix A. This greatly enhances the value of Monte Carlo simulation as a tool in the applied analysis of decisions made under uncertainty.

The need for preference measurement techniques which are more flexible and more reliable than existing procedures is stressed in the discussion of the measurement of decision maker preferences. In response to this need, a new approach to preference measurement has been developed as a part of this study. This new procedure permits the construction of interval measurements of a decision maker's absolute risk aversion function. Perhaps its most important feature is that it allows direct specification of the degree of precision with which preferences are to be measured. In contrast, single valued utility functions are exact but often inaccurate preference measures, and commonly used stochastic efficiency criteria are based on inexact, inflexible assumptions about preferences.

Interval measurements of absolute risk aversion are used in conjunction with the criterion of stochastic dominance with respect to a function (Meyer, 1977a) to order alternative choices. This recently developed stochastic efficiency criterion can be used to evaluate alternative choices for classes of decision makers defined by upper and lower bound absolute risk aversion functions. As such, it is both more flexible and potentially more discriminating than other efficiency criteria based on stochastic dominance.

Results of an empirical test of the interval approach to the measurement of decision maker preferences demonstrate its value. They show that it leads to a lower probability of eliminating a decision maker's preferred choice from the efficient set of choices than that realized with a single-valued utility function, and that it permits the identification of efficient sets smaller than those associated with the criteria of first and second degree stochastic dominance.
The results also show how the precision with which preferences are measured can be varied under this approach.

The identification of preferred choices is the primary objective of any decision analysis. The final methodological contribution of this study is the formulation of a generalized risk efficient Monte Carlo programming model (GREMP), which combines random search procedures, Monte Carlo simulation, and evaluation by the criterion of stochastic dominance with respect to a function in a single analytical framework for the identification of preferred choices. This model is both flexible and computationally efficient, and it is well suited for the analysis of a wide range of practical decision problems. The use of Monte Carlo programming procedures to generate alternative strategies for consideration facilitates the introduction of flexibility into the definition of a management strategy, since strategies can be defined by feedback control rules as well as by specific values of control variables over the entire planning horizon. The incorporation of Monte Carlo simulation techniques into the model facilitates the more realistic representation of the complex processes by which the control variable levels determined by a particular management strategy and a set of stochastic environmental factors interact to determine the properties of the distribution of outcomes associated with any choice. Finally, evaluation by the criterion of stochastic dominance with respect to a function permits the ordering of choices in a manner fully consistent with the expected utility hypothesis without requiring that restrictive assumptions be made about decision maker preferences or the form of system output distributions.

The methodological tools developed in this study are of considerable value. It should be noted, however, that they are not intended for use in all decision situations. They allow considerable flexibility concerning the degree of detail incorporated into any decision analysis, but they are intended primarily for application on a computer and can be expensive to use. They do not replace budgeting techniques or mathematical programming but supplement these and other existing methodological tools. As is true of any aid in the decision process, the procedures developed here should be employed only when the benefits associated with their use exceed the possible added costs. It should also be noted that these procedures are not all that is required to successfully resolve a practical decision problem. Rather, they represent a set of tools which facilitates some aspects of the decision process--a process which involves a wide range of activities.

7.2 Empirical Findings

This study focuses primarily on methodological rather than empirical issues. Several empirical findings associated with the illustrative applications of the procedures developed in this study are worthy of note, however.

As a test of the interval approach to the measurement of decision maker preferences, preference measurements were made for seventeen farmer participants in a marketing extension workshop. As reported in Section 4.8 of Chapter IV, the resulting preference measurements provide rather strong evidence that many decision makers exhibit both risk loving and risk averse behavior. The interval measurements for thirteen of the seventeen respondents included negative as well as positive values.
The results also showed that strictly decreasing absolute risk aversion functions are not as common as is often suggested. Though certainly not generalizable to a larger population, these findings do call into question the wide acceptance of the proposition that absolute risk aversion functions tend to be positive valued and strictly decreasing, and so suggest that further research in this area may be warranted.

The procedures developed in this study are applied in the analysis of two related examples concerned with choices made in the operation of a cash grain farm. The first example focuses on land rental and production planning decisions when prices, yields, and time available for fieldwork are uncertain. In the second example, these same decisions are considered in conjunction with the choice of a flexible marketing strategy involving cash sales at harvest and forward contracting. The results of both applications demonstrate the flexibility of the procedures developed in this study. They also indicate that decision maker preferences have an important impact both on the selection of a cropping plan and on the choice of an appropriate marketing strategy. Finally, they show that the degree of interdependence between production and marketing strategies is not particularly great, which suggests that it may be possible to analyze these two components of a management strategy separately in some situations.

7.3 Implications for Future Research

The techniques developed in this study can be of use in the analysis of a diverse range of practical decision problems. Clearly more work can be done, for example, on the analysis of alternative production and marketing strategies for agricultural firms. Of particular value would be attempts to develop flexible marketing strategies which are suitable for a broad range of decision makers and decision situations. Another potentially important application of these techniques is in the analysis of alternative pest management strategies. Such strategies are, in effect, feedback control rules designed to direct actions in complex decision situations characterized by a considerable degree of uncertainty, and the GREMP model, with its combination of random search and simulation, is well suited for the identification and evaluation of rules of this type.

The methodological tools developed in this study can also be of use in the analysis of major investment-disinvestment decisions, both private and public. They permit the explicit consideration of flexibility in the analysis of such choices--a factor which, as Masse (1962) notes, is of critical importance when major resource commitments are to be made in an uncertain environment. At a still higher level of aggregation, these tools can also be employed in the analysis of policy decisions having outcomes which are strongly influenced by stochastic factors in the environment in which they are implemented.

The techniques developed in this study greatly facilitate the application of decision theory based on the expected utility hypothesis in the analysis of practical decision problems. They have considerable potential value, but they can be made more effective if they are refined still further. There is a need, for example, for further testing of the procedures used to implement the interval approach to the measurement of decision maker preferences.
Experiments should be conducted to identify measurement scales which allow preferences to be adequately represented in a wide range of decision situations, and alternative modes of questioning should be evaluated. Will a measurement grid which works well in the analysis of choices having outcomes which are concentrated around a certain value be adequate when preferences are to be measured over a much broader range of outcomes? Over how wide a range of system output levels can absolute risk aversion be assumed to be constant? Does the range depend on the level of system output? In the neighborhood of how many system output levels should preferences be measured? Upon how many choices should each measurement be based? These are but five of the many questions which need to be answered.

More research is also needed on the representation of preference relationships which depend on more than one system output variable and on the development of multivariate stochastic dominance criteria. Though some work has been done in the latter area by Levy (1973), Levy and Paroush (1974a, b) and Kihlstrom and Mirman (1974), further research is needed. Particularly valuable would be an extension of stochastic dominance with respect to a function to the multivariate case.

Additional refinements are also needed in the GREMP model. It may be possible, for example, to increase the efficiency of this procedure by incorporating learning rules which lend at least partial direction to the random search. Such rules could be applied at the end of each 100 iterations of the model and might have the effect of reducing the range of values for control variables over which the search is to be made in subsequent iterations. A second type of rule might cause the search to be stopped if no new strategies have entered the efficient set for a certain number of iterations.

Finally, it must be noted that methodological advances alone will not eliminate all the difficulties encountered in the applied analysis of decisions made under uncertainty. The two applications of the techniques developed in this study point to the importance of and the need for an improved information base in most decision situations. This need is particularly strong with regard to decisions made by agricultural producers, who face so many different types of uncertainty. Efforts must be made to supply producers with probabilistic price forecasts and to teach them to use such information effectively. Frequently, all the information needed to make forecasts in probabilistic terms is readily available to the agencies or individuals who predict future price levels, but in most cases price forecasts simply state a most likely value or, at best, an interval within which the price is expected to fall. Similarly, more complete information is also needed about how yields are affected by timeliness and by stochastic factors in the environment. More consideration needs to be given to such factors in the design of agronomic experiments.

The information base for the analysis of decisions made under uncertainty could also be improved by research designed to identify systematic relationships between levels of absolute risk aversion and selected decision maker characteristics. Such information could be of use to policy analysts who wish to consider the impacts of uncertainty on the choices made by representative firms.
It could also be of use in situations when the importance of a choice to be made does not warrant the expenditure of resources required to construct an interval measurement of the decision maker's preferences.

In conclusion, then, the methodological tools developed in this study can, in their present form, be employed in the analysis of a wide range of practical decision problems. They represent an important improvement in the set of techniques available for the applied analysis of decisions made under uncertainty. Further efforts are needed, however, to make these tools both easier to use and more effective.

APPENDICES

APPENDIX A
A GENERALIZED MULTIVARIATE PROCESS GENERATOR

A.1 Introduction

The analysis of decisions made under uncertainty requires that information on the probability distributions of system output variables be determined for each alternative strategy being considered. In most cases the properties of such distributions depend on the controllable system input levels which define any particular strategy and on the probability distributions of stochastic environmental factors which cannot be controlled by the decision maker. When there is only one stochastic environmental factor or when the relationship between controllable and exogenous system inputs and system outputs is relatively simple, the properties of system output distributions can be derived analytically.1 When several stochastic factors having probability distributions from different families must be considered or when input-output relationships are not of a convenient form, however, analytical techniques cannot be used to determine the properties of system output distributions. In such instances, Monte Carlo sampling procedures and numerical simulation techniques are frequently used to obtain the necessary information. Sample states of nature are defined by selecting values for each exogenous system input in a pseudorandom fashion. Numerical simulation techniques are then used to determine the system output levels associated with a particular strategy for each state of nature. The resulting values constitute a random sample from the probability distribution of system outputs. They can be used to calculate sample moments of the underlying distribution, or they can be used to construct an approximate representation of the cumulative distribution function of the system output variable.

1See Anderson and Doran (1978) for an excellent review of the conditions under which information on such probability distributions can be determined analytically.

A process generator is a procedure, usually programmed for implementation on a computer, which generates pseudorandom sample observations from a specified probability distribution.1 As such, process generators are a basic building block in the procedure described above. Process generators have been developed for a wide range of standard univariate probability distributions including the discrete and continuous uniform, exponential, gamma, beta, chi-square, normal, lognormal, geometric, binomial, hypergeometric, and Poisson (Naylor, et al., 1966; Schmidt and Taylor, 1970; Newman and Odell, 1971). In determining the properties of a distribution of system outputs, the use of univariate process generators is appropriate if all underlying stochastic factors can be assumed to be independently distributed. When this assumption cannot be made, a multivariate process generator is required.

1The term "pseudorandom" refers to the fact that, although values generated by a process generator have all the properties of a random sample, they are actually generated in a deterministic fashion.
Process generators have been developed for several multivariate distributions, most notably the multivariate normal and the Wishart distributions (Naylor, et al., 1966; Newman and Odell, 1971). In many cases, however, the properties of the joint distribution of a particular set of stochastic environmental factors may not be adequately approximated by either of these distributions. There is a need, then, for multivariate process generators which are more flexible than those currently available.

A major contribution of this study is the development of a workable procedure for the generation of random variates from multivariate probability distributions with non-normal marginal distributions.1 The formulation and implementation of this procedure is the primary focus of this appendix. Before introducing the generalized multivariate process generator, however, basic concepts related to the generation of random variates are first reviewed, and several commonly used univariate process generators are presented. The generation of random variates from the multivariate normal distribution is then discussed, since this procedure is used in the more general process generator developed in this study. Finally, the algorithm for the generation of random variates from multivariate distributions with specified marginal distributions and correlation matrix is presented along with an explanation and listing of the computer program used to implement it. An empirical example is also presented to demonstrate the efficacy of this procedure.

1Though developed independently, this procedure is similar in many respects to methods described in an unpublished paper by Coleman and Saipe (1976), which describes several procedures for the generation of sample observations from bivariate distributions.

A.2 Basic Procedures for the Generation of Random Variates

Several approaches have been developed for the generation of random variates. They include the inverse transformation method, the rejection method, and the composition method, all of which are described in Naylor et al. (1966). Because it is the most commonly used and because it serves as the basis for the algorithm to be discussed below, only the inverse transformation method will be reviewed here.

The cumulative distribution function, F(x), of the continuous random variable, x, takes values on the interval (0,1). Associated with each value of x, then, is a value r lying on the interval (0,1) such that

r = F(x)    A.1

Similarly, if the inverse of F(x) can be determined, the following relationship will hold:

x = F⁻¹(r)    A.2

In this case any particular value of r uniquely determines a value of x if F⁻¹(r) is a continuous, monotonically increasing function. By generating a set of uniformly distributed random variables lying on the interval (0,1) and calculating the corresponding values of x determined by equation A.2, a set of sample observations from the probability distribution of x is generated. This is the inverse transformation method for generating random variates.

Following Manetsch and Park (1977b) the validity of the inverse transformation method can be demonstrated in the following manner. Let

r = F(x)    A.3

where r is a uniformly distributed random variable defined on the interval (0,1).
By the definition of a cumulative distribution function

F(x0) = P{x ≤ x0} = r0    A.4

where x0 is a specific value of x and r0 is the corresponding value of r. Since r is uniformly distributed1

G(r0) = P{r ≤ r0} = r0    A.5

Since F(x0) and r0 are equal, Equation A.5 implies that

P{r ≤ F(x0)} = F(x0)    A.6

and since F(x) is assumed to be a monotonically increasing function in x, it also follows that

P{F⁻¹(r) ≤ x0} = F(x0) = P{x ≤ x0}    A.7

Therefore, the distribution of the random variable generated by the inverse transformation method is identical to that of the random variable x.

1For uniformly distributed random variables, g(r0) = 1 for 0 ≤ r0 ≤ 1 and 0 elsewhere, so that G(r0) = r0.

Application of the inverse transformation method is shown graphically in Figure A.1. A cumulative distribution function F(x) is drawn on the left, and its inverse is drawn on the right. In the particular example shown the randomly selected value of R is .5, and the associated value of x is 6.3.

[Figure A.1 Generation of Random Variates by the Inverse Transformation Method]

The inverse transformation method is the basis for commonly used procedures for generating random variates from several standard probability distributions. Most notably, it is the basis for process generators for exponential, gamma and beta random variables. Procedures for the generation of variates from each of these distributions are presented below with the computer programs which implement them.1

1These programs can easily be made into subroutines and incorporated into larger programs.

A.2.1 An Exponential Process Generator

The density function of an exponential random variable is given by the expression:

f(x) = αe^(-αx)   0 ≤ x    A.8
     = 0          otherwise2

This function can be integrated to obtain the cumulative distribution function, F(x):

F(x) = ∫ from 0 to x of αe^(-αx) dx = 1 - e^(-αx) = r    A.9

The inverse of F(x) is given by the expression

F⁻¹(r) = x = -(1/α) ln(1 - r)    A.10

If r is a uniformly distributed random variable on the interval (0,1) this expression reduces to

x = -(1/α) ln(r)    A.11

By generating values of r at random and calculating the associated values of x using Equation A.11, sample observations from an exponential distribution with parameter α can be generated.

2The mean and variance of this distribution are 1/α and 1/α² respectively.

A computer program which implements this procedure is listed in Figure A.2. Three parameter values must be supplied by the user: ND, BL, and XMEAN. ND is the number of variates to be generated, BL is the lowest value the random variable can take,1 and XMEAN is the expected value of the distribution.

1In the derivation above, BL was assumed to be zero, but this need not be the case.

[Figure A.2 A Program for Generating Exponential Random Variates]
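A minimal modern-Fortran sketch of the procedure Figure A.2 implements, with ND, BL, and XMEAN playing the roles described above (the original listing may differ in detail):

```fortran
program expgen
  implicit none
  integer, parameter :: nd = 10      ! number of variates to generate
  real, parameter :: bl = 0.0        ! lowest value the variable can take
  real, parameter :: xmean = 4.0     ! expected value of the distribution
  real :: r, x, alpha
  integer :: i
  alpha = 1.0/(xmean - bl)           ! the mean of the shifted
  do i = 1, nd                       ! exponential is BL + 1/alpha
     call random_number(r)           ! r uniform on (0,1)
     x = bl - log(1.0 - r)/alpha     ! inverse c.d.f., Equation A.10
     print '(f10.4)', x
  end do
end program expgen
```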
A.2.2 A Gamma Process Generator

The density function of a gamma distribution is

f(x) = α^k x^(k-1) e^(-αx)/Γ(k)   0 ≤ x    A.12
     = 0                          otherwise2

An analytic expression for the cumulative distribution function of this distribution does not exist. It can be shown, however, that the sum of k exponentially distributed random variables, each having an expected value of 1/α, is a gamma random variable with a density function identical to that given in Equation A.12 (Schmidt and Taylor, 1970, p. 265). Random variates from gamma distributions in which the parameter k is an integer, then, can be generated by summing k variates drawn from an appropriate exponential distribution.

2The mean and variance of this distribution are k/α and k/α² respectively.

A computer program which generates gamma random variates by this method is given in Figure A.3. In this case four parameter values must be supplied by the user: ND, BL, XMEAN, and K. ND, BL, and XMEAN are, again, the number of sample observations to be generated, the lower bound of the distribution, and the expected value of the distribution. K is the second parameter of the gamma distribution.1 The parameter K cannot always be treated as an integer. Though not reviewed here, a procedure for generating variates from gamma distributions with non-integer parameters is described elsewhere by Phillips and Beightler (1972).

1The parameter k can be determined by solving the following expression

k = (μ - BL)²/σ²

where μ and σ² are the mean and variance of the distribution being modelled and BL is the lower bound of that distribution. The program computes the value of the parameter α automatically.

[Figure A.3 A Program for Generating Gamma Random Variates]

A.2.3 A Beta Process Generator

The density function of the beta distribution is

f(x) = [Γ(α+β)/(Γ(α)Γ(β))] x^(α-1) (1-x)^(β-1)   0 ≤ x ≤ 1    A.13
     = 0                                          elsewhere2

Again, an analytic expression for the cumulative distribution function cannot be derived. As Naylor et al. (1966) note, however, the random variable defined by the expression

x = x1/(x1 + x2)    A.14

where x1 and x2 are both gamma random variables with identical values of α and values of k such that k1 = α and k2 = β, has a beta distribution with parameters α and β. If α and β are integers, then, beta random variates can be generated using an extension of the procedures developed above. Two gamma variates from appropriate distributions are generated and a beta variate is defined according to Equation A.14.1

2The mean and variance of this distribution are μ = α/(α+β) and σ² = αβ/[(α+β+1)(α+β)²].

1When a beta distribution has non-integer parameters, the gamma variates required in Equation A.14 can be generated using the procedure developed by Phillips and Beightler (1972).

A computer program for generating beta variates is given in Figure A.4. The user must supply values for five parameters: ND, BL, BU, K1, and K2. ND is the number of variates to be generated. BL and BU are the lower and upper bound values which the random variable can take. Though set at zero and one in the derivation above, they can be set at any values. K1 and K2 are the two shape parameters of the beta distribution. Their values are determined by solving the following two equations

K2 = (1-γ)[γ(1-γ) - δ²]/δ²    A.15

and

K1 = γK2/(1-γ)    A.16

where γ and δ² are the mean and variance of the distribution once it has been normalized so that all values lie on the interval (0,1).2

2More formally, γ = (μ - BL)/(BU - BL) and δ² = σ²/(BU - BL)², where μ and σ² are the mean and variance of the actual distribution to be modelled.

[Figure A.4 A Program for Generating Beta Random Variates]
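The two constructions just described can be sketched together in a few lines. The program below draws one gamma variate as a sum of exponentials and one beta variate as the ratio in Equation A.14; the parameter values are illustrative, and the original listings in Figures A.3 and A.4 may differ in detail:

```fortran
program gamma_beta
  implicit none
  real :: g1, g2, x
  ! a gamma variate with k = 3 and alpha = 0.5 (mean k/alpha = 6.0)
  g1 = gamma_var(3, 0.5)
  print '(a,f8.4)', ' gamma variate:', g1
  ! a beta variate with parameters k1 = 2, k2 = 3, via Equation A.14
  g1 = gamma_var(2, 1.0)
  g2 = gamma_var(3, 1.0)
  x = g1/(g1 + g2)
  print '(a,f8.4)', ' beta variate: ', x
contains
  ! sum of k exponential(alpha) variates, each generated as in A.2.1
  real function gamma_var(k, alpha)
    integer, intent(in) :: k
    real, intent(in) :: alpha
    real :: r
    integer :: i
    gamma_var = 0.0
    do i = 1, k
       call random_number(r)
       gamma_var = gamma_var - log(1.0 - r)/alpha
    end do
  end function gamma_var
end program gamma_beta
```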
A.2.4 A Generalized Univariate Process Generator

Each of the process generators above is based on a simple application of the inverse transformation method to the exponential distribution. For many distributions, however, the inverse of the cumulative distribution function cannot be derived analytically, and simple analytical links to other distributions for which such inverses can be calculated may not exist. Most notably, this is true for the normal distribution. In the case of many empirical probability distributions, the exact functional form of the cumulative distribution function may not even be known. Nevertheless, the inverse transformation method can still be used in such instances. Values of the cumulative distribution function F(x) at specified levels of the random variable x can be used to construct a rough approximation of the entire function by linearly interpolating between known points.1 In Figure A.5, for example, six points on a cumulative distribution function are specified: the upper and lower bounds, and three intermediate points which divide the cumulative into quartiles. Given such a diagrammatic representation, the value of the random variable x associated with any randomly selected probability level is easily determined. As shown in Figure A.5, for example, the value of x corresponding to a probability level of .2 is 44.

1Points in the cumulative distribution function can be determined analytically, by numerical integration of a known probability density function, or they can simply be sample observations from an empirical distribution. As Schlaifer (1959) notes, sample observations arranged in order are reasonable estimates of the fractiles of the underlying distribution.

[Figure A.5 An Approximate Representation of a Cumulative Distribution Function Based on Known Points]

A table look-up function (Llewellyn, 1965) is the equivalent of such a diagram on a digital computer. The table look-up function TABEX, a listing of which is given in Figure A.6, has access to an array of X values, ARG, and an array of corresponding probability levels, VAL, each array having K elements.

[Figure A.6 A Table Look-up Function Subprogram. Source: Llewellyn (1965, p. 4-22)]

Given an argument value, DUMMY, this function subprogram computes a value of the function defined by ARG and VAL. If x is a random variable and if R is a variable lying on the interval (0,1) which corresponds to a value of F(x), then the FORTRAN statement

R = TABEX(VAL, ARG, X, K)    A.17

calculates the probability level on the cumulative distribution function corresponding to any specified value of X. One of the interesting properties of this particular table look-up function is that by simply switching the positions of VAL and ARG in the calling statement, values of the inverse function can be calculated. The FORTRAN statement

X = TABEX(ARG, VAL, R, K)    A.18

for example, calculates the value of X which corresponds to any specified probability level, R.
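A sketch of this table look-up logic in modern Fortran. The function below mirrors TABEX's argument convention, so that passing (ARG, VAL, R, K) rather than (VAL, ARG, X, K) evaluates the inverse function, as in statement A.18; the six c.d.f. points are illustrative:

```fortran
program univgen
  implicit none
  integer, parameter :: nd = 5, k = 6
  ! six points on the c.d.f., in ascending order (values illustrative)
  real :: arg(k) = [20., 40., 50., 60., 75., 100.]    ! levels of x
  real :: val(k) = [0.0, 0.25, 0.50, 0.75, 0.90, 1.0] ! F(x) at those levels
  real :: r
  integer :: i
  do i = 1, nd
     call random_number(r)
     ! switching ARG and VAL inverts the function, as in statement A.18
     print '(2f10.4)', r, tabex(arg, val, r, k)
  end do
contains
  ! linear interpolation of the function whose argument grid is atab
  ! and whose values are vtab -- a compact rendering of TABEX
  real function tabex(vtab, atab, dummy, n)
    integer, intent(in) :: n
    real, intent(in) :: vtab(n), atab(n), dummy
    integer :: j
    do j = 2, n
       if (dummy <= atab(j)) then
          tabex = vtab(j-1) + (dummy - atab(j-1))* &
                  (vtab(j) - vtab(j-1))/(atab(j) - atab(j-1))
          return
       end if
    end do
    tabex = vtab(n)
  end function tabex
end program univgen
```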
A computer program which uses the inverse transformation method to generate variates from a probability distribution defined by a specified set of points on the cumulative distribution function is given in Figure A.7. The user must supply values of ND, the number of variates to be generated, and K, the number of points on the cumulative distribution function to be specified explicitly. Paired values of the random variable and the cumulative distribution function are then read into arrays ARG and VAL. These values must be arranged by the user in ascending order.

[Figure A.7 A Generalized Univariate Process Generator]

The procedures developed in this section are of interest in themselves, since they are all useful tools for Monte Carlo sampling. More important in relation to the primary purpose of this appendix, however, they are used extensively in the generalized procedure for the generation of variates from multivariate distributions.

A.3 The Generation of Variates from the Multivariate Normal Distribution

As stated above, the multivariate normal distribution is one of the few multivariate probability distributions for which a workable process generator has been developed. Each element of the vector of random variables, x, of which a multivariate distribution is comprised is normally distributed with specified mean and variance. The multivariate distribution is fully described by the vector of expected values for each of its marginals, μ, and by a positive definite, symmetrical variance-covariance matrix, Ω, which is defined by the following expression:

Ω = E[(x-μ)'(x-μ)]    A.19

If the elements of x are not correlated, the off-diagonal elements of this matrix will equal zero and each variate in x can be generated independently using procedures such as those outlined in the preceding section. When correlations are present, however, this approach is not satisfactory.
Naylor et al. (1966) describe a procedure for the generation of variates from the multivariate normal distribution which is based on a theorem proved by T. W. Anderson (1959). That theorem states that if z is a vector of independent standard normal variates, there exists a unique lower triangular matrix, C, such that

x = Cz + μ    A.20

It follows directly that the variance-covariance matrix of (x-μ)--which is also the variance-covariance matrix of x, since μ is a vector of constants--is defined by the expression CC'. Therefore,

Ω = CC'    A.21

The "square root method" can be used to derive a set of recursive formulas for computing the elements of C from those of Ω. Once the elements of C have been calculated, a vector of independent standard normal variates, z, can be generated, and the vector x can be found using Equation A.20. To generate a large number of sample vectors from a given multivariate distribution, the elements of C need to be calculated only once. The final two steps described above are then repeated for the generation of each vector of variates.

A computer program which implements this procedure is listed in Figure A.8. The user of this program must specify ND and MN, the number of sample vectors to be generated and the number of elements in each vector. He must also specify the mean and variance of each of the MN marginal distributions. Finally, the off-diagonal elements of the correlation matrix which are non-zero must be specified.1 The variable IND is set to a non-zero value when the last correlation coefficient is read. This program can be used to model multivariate normal distributions with up to fifty elements. More information about its structure is provided on comment cards included in the listing.

1For any pair of random variables x and y, the correlation coefficient ρ is defined by the expression ρ = cov(x,y)/(σx σy).

A.4 A Generalized Multivariate Process Generator

In this section a generalized procedure for the generation of sample vectors from multivariate probability distributions is described.

... follows directly from the proof of the validity of the inverse transformation method given in Section A.2.

A computer program which implements this procedure is listed in Figure A.9. Several types of inputs must be supplied by the user of MVGEN. ND and MN are the number of sample vectors to be generated and the number of elements in each vector. MN cannot exceed fifty. K is the number of data points to be specified for the construction of the cumulative distribution function of each marginal distribution. It can take a value as high as 100. K paired values of the random variable and cumulative distribution function of each marginal must be supplied by the user. These values, which must be arranged in ascending order, are read into the two-dimensional arrays ARG and VAL. Finally, non-zero off-diagonal elements of the correlation matrix must be supplied by the user. These can be read in any order. The variable IND is set to a non-zero value when the last correlation coefficient is read.

The structure of the program closely follows the outline of the procedure described above. After all necessary variables are initialized, sample vectors are generated sequentially by first generating a sample vector from a multivariate normal distribution and then transforming the elements of that vector first to variates of a uniform distribution and finally to variates from each marginal of the multivariate distribution being modelled.
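The sequence of steps just described can be condensed into a short sketch. The program below is a minimal modern-Fortran rendering under illustrative assumptions (two elements, three-point marginal c.d.f. tables, a single correlation of .8); it is not the MVGEN code itself, and details of the original may differ:

```fortran
program mvgen_sketch
  implicit none
  integer, parameter :: mn = 2, k = 3, nd = 5
  real :: cor(mn,mn), c(mn,mn), z(mn), x(mn), u(mn), y(mn)
  real :: arg(mn,k), val(mn,k), s, r1, r2
  integer :: i, j, n
  ! illustrative inputs: a single correlation of .8 and a three-point
  ! c.d.f. table for each of the two marginal distributions
  cor = reshape([1.0, 0.8, 0.8, 1.0], [mn,mn])
  arg(1,:) = [0.0, 50.0, 100.0];  val(1,:) = [0.0, 0.60, 1.0]
  arg(2,:) = [10.0, 20.0, 40.0];  val(2,:) = [0.0, 0.50, 1.0]
  ! step 1: the square root method -- lower triangular C with C*C' = COR
  c = 0.0
  do i = 1, mn
     do j = 1, i
        s = cor(i,j) - dot_product(c(i,1:j-1), c(j,1:j-1))
        if (i == j) then
           c(i,j) = sqrt(s)
        else
           c(i,j) = s/c(j,j)
        end if
     end do
  end do
  do n = 1, nd
     ! step 2: independent standard normals (Box-Muller), then x = C*z
     do i = 1, mn
        call random_number(r1)
        call random_number(r2)
        z(i) = sqrt(-2.0*log(1.0 - r1))*cos(6.28319*r2)
     end do
     x = matmul(c, z)
     ! step 3: the standard normal c.d.f. carries each element to a uniform
     u = 0.5*(1.0 + erf(x/sqrt(2.0)))
     ! step 4: the inverse c.d.f. of each marginal, by table look-up
     do i = 1, mn
        y(i) = tabex(arg(i,:), val(i,:), u(i), k)
     end do
     print '(2f10.3)', y
  end do
contains
  real function tabex(vtab, atab, dummy, nk)  ! as in Section A.2.4
    integer, intent(in) :: nk
    real, intent(in) :: vtab(nk), atab(nk), dummy
    integer :: jj
    do jj = 2, nk
       if (dummy <= atab(jj)) then
          tabex = vtab(jj-1) + (dummy - atab(jj-1))* &
                  (vtab(jj) - vtab(jj-1))/(atab(jj) - atab(jj-1))
          return
       end if
    end do
    tabex = vtab(nk)
  end function tabex
end program mvgen_sketch
```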
More information about particular aspects of the program is given in comment cards.

Program MVGEN is designed to permit the user to specify marginal distributions of any form. When all the marginals of a particular multivariate distribution can be adequately described by one or more standard probability distributions, it may be more convenient and more efficient computationally to modify MVGEN so that only the parameters of each marginal distribution need be specified.

[Figure A.9 A Listing of Program MVGEN]

[Figure A.10 A Multivariate Beta Process Generator]
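Where every marginal belongs to a standard family, the table look-up data need not be read in at all: the program can build each marginal's c.d.f. table internally from the family's parameters, which appears to be the kind of modification Figure A.10 makes for beta marginals. A sketch of that table-building step for a single beta marginal, with illustrative parameters and grid size:

```fortran
program beta_table
  implicit none
  integer, parameter :: k = 41           ! points on the c.d.f. table
  real, parameter :: k1 = 2.0, k2 = 3.0  ! illustrative beta parameters
  real, parameter :: bl = 0.0, bu = 1.0  ! bounds of the random variable
  real :: arg(k), val(k), dx, x, bab
  integer :: j
  bab = gamma(k1)*gamma(k2)/gamma(k1 + k2)   ! the beta function B(k1,k2)
  dx = 1.0/real(k - 1)
  arg(1) = bl
  val(1) = 0.0
  do j = 2, k
     ! accumulate the density over each step, evaluated at its midpoint,
     ! on the normalized (0,1) scale; ARG carries the (BL,BU) scale
     x = (real(j) - 1.5)*dx
     val(j) = val(j-1) + dx*x**(k1 - 1.0)*(1.0 - x)**(k2 - 1.0)/bab
     arg(j) = bl + (bu - bl)*real(j - 1)*dx
  end do
  val(k) = 1.0                            ! guard against quadrature error
  do j = 1, k, 8
     print '(2f10.4)', arg(j), val(j)
  end do
end program beta_table
```

Once such tables are in place, generation proceeds exactly as in the MVGEN sketch above.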
[Figure B.1 A Sequence of Choices for the Measurement of Absolute Risk Aversion]

Reference levels should be concentrated in the region of risk aversion space where the decision maker's actual level is expected to fall or in the regions where relatively small changes in absolute risk aversion have the greatest impact on preference orderings. Experience to date indicates that most of the detail on the measurement scale should be concentrated in the risk aversion interval between -.0001 and .0010. Actual measurements for a variety of decision makers have tended to fall most frequently within this interval, and tests on several empirical decision problems have indicated that choices are most strongly affected by changes in absolute risk aversion within this range. A suggested set of sixteen reference levels is given in Figure B.2. These define fifteen boundary intervals upon which choices in a four question sequence could be focused. A measurement scale for a three question sequence could be constructed by using every other reference level, and that for a two question sequence could be constructed by using every fourth reference level.

[Figure B.2 A Suggested Absolute Risk Aversion Measurement Scale: .0100, .0050, .0025, .0015, .0010, .0008, .0006, .0004, .0003, .0002, .0001, 0, -.0001, -.00025, -.0005, -.0010]

B.3 The Generation of Sample Distributions

Once a measurement scale has been specified, the sample probability distributions which are the basis for the choices used to reveal the decision maker's preferences must be generated. Program NORGEN, which is listed in Figure B.3, is used to construct these distributions, each of which is actually a set of sample observations drawn from a normal distribution with specified mean and standard deviation.1

1A normal distribution is used because it is convenient. Any other underlying distribution can also be the basis for the generation of sample distributions.

[Figure B.3 A Listing of Program NORGEN]
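A minimal sketch of the generation step NORGEN performs, using the parameter values recommended in the text; the STD value here is arbitrary, since the appropriate value depends on the decision situation, and the original listing may differ in detail:

```fortran
program norgen_sketch
  implicit none
  integer, parameter :: ne = 40, nd = 6, iround = 50
  real, parameter :: ymean = 0.0, std = 2000.0   ! std is illustrative
  real :: r1, r2, y(nd)
  integer :: i, j
  do i = 1, ne
     do j = 1, nd
        ! a normal draw by Box-Muller, scaled to (YMEAN, STD)
        call random_number(r1)
        call random_number(r2)
        y(j) = ymean + std*sqrt(-2.0*log(1.0 - r1))*cos(6.28319*r2)
        ! round down to the nearest IROUND units
        y(j) = real(iround*floor(y(j)/real(iround)))
     end do
     print '(6f10.0)', y     ! one six-element sample distribution per line
  end do
end program norgen_sketch
```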
The use of six-element distributions is justified by the ease of explaining the probability associated with each element and by the fact that this number of elements is sufficient to allow for considerable complexity in each distribution. YMEAN and STD are the mean and standard deviation of the underlying normal distribution from which the sample elements of each distribution are drawn. In practice, YMEAN is usually set equal to 0.0, which implies that the expected value of the mean of each sample distribution is also 0.0. That expected value can be shifted to any level, y*, however, by simply adding y* to each element of a distribution. The appropriate value for STD depends on the characteristics of the decision situation being analyzed. If STD is assigned too high a value, the dispersion of the sample distri- butions will be great and the assumption of constant absolute risk aversion over the range of system output levels on which they are defined may be difficult to justify. If STD is assigned too low a value, on the other hand, the points defining each distribution will be so highly concentrated around a single system output level that choices between distribution will be difficult to make. Experience to date indicates 247 that a value of STD between one and five percent of the entire relevant range of system output levels is appropriate. Finally, it is often desirable to round the system output levels defining each sample observation to the nearest 10, 50, or 100 units. This can be accom- plished by specifying a value for IROUND. If IROUND is set equal to 50, which is the recommended value, all system output levels are rounded down to the nearest 50 units. A sample output from program NORGEN is given in Figure 8.4. 8.4 Identification of Boundary Intervals After a measurement scale has been specified and sample distribu- tions have been generated, the boundary interval for each pair of dis- tributions must be identified. The boundary interval for two distributions, (A1, 12), is an interval in risk aversion space such that decision makers whose absolute risk aversion functions lie everywhere below A] unanimously prefer one distribution, while those whose absolute risk aversion functions lie everywhere above A2 unanimously prefer the other. Clearly a boundary interval is not unique. If (A1, A2) is a boundary interval for two distributions, for example, and if A3 < A1 and A4 > 12’ then (A3, A4) is also a boundary interval for these two distributions. In measuring pre- ferences, however, it is desirable to specify boundary intervals which are as narrow as possible. The absolute risk aversion reference levels which define the measurement scale constitute the set of potential endpoints for boundary intervals. If a measurement scale is comprised of four reference levels, -.0010, 0, .0005, and .0010, a total of six boundary intervals can be constructed: (-.0010, 0), (0, .0005), (.0005, .0010), (-.0010, .0005), 248 000000 0.00000 0.00900 00090... n.30.u..-o...u 005000 003001.. 003073 .......a-..u:0 0.1.0000 00000.0 000000 900000 000000 000000 000.900 000030 OCGOCO 000006 0.10000 00.... oo.o.- oooooo oo.o-o 0.0... 00.... one... 00.... .00... oo.o-o 1000000200000030000.0340000005000000.»000000703000080OCCAOQODOOCOCOOOGCO .15555001500555105500.31500550150000015555051050 0515 055010555502300500 T2Q12931351121T33148.T575.41T27437¢TQ95.1 T780 1911 69217107121T647734 S... I S. . .S. . 5.. .S......S .. .S .1 1.5 ... 5.1. S .. .. . .... _ .... .I I H I“ I .. I w I I . D . D. 1 ... 
(0, .0010), and (-.0010, .0010). Because relatively narrow boundary intervals are sought, however, only the first three--those defined by adjacent reference levels--are of interest.

[Figure B.4 Sample Output from Program NORGEN]

For any pair of sample distributions it is necessary to determine which, if any, of these three intervals can be said to be a boundary interval. Given the definition of a boundary interval, this requires the identification of the highest reference level, λ1, such that all decision makers less risk averse than λ1 prefer one distribution and the lowest reference level, λ2, such that all decision makers more risk averse than λ2 prefer the other distribution.

Program INTID, which is listed in Figure B.5, is used to accomplish this task. Given a set of absolute risk aversion reference levels and a set of sample distributions, it identifies the narrowest boundary interval for each pair of distributions. It does this by applying stochastic dominance criteria developed by Meyer (1977b) in "Second Degree Stochastic Dominance with Respect to a Function." Subroutine SDLB of INTID orders distributions for classes of decision makers whose absolute risk aversion functions are bounded only from below by applying the following criterion: cumulative distribution function F(y) is unanimously preferred to cumulative distribution function G(y) by all decision makers more risk averse than k(y) if and only if:

∫[y1, y] [G(x) - F(x)] dk(x) ≥ 0   for all y    B.1

This subroutine, then, is used to identify the upper bound of the boundary interval.1

1This criterion is based on Meyer's (1977b, p. 479) Definition 4.
[Figure B.5 A Listing of Program INTID]
Subroutine SDUB, on the other hand, orders distributions for classes of decision makers defined only by an upper bound in absolute risk aversion using the criterion: cumulative distribution function G(y) is preferred to cumulative distribution function F(y) by all decision makers less risk averse than k(y) if and only if:

∫[y1, y] [G(x) - F(x)] dk(x) ≤ 0   for all y    B.2

This subroutine is used to identify the lower bound of the boundary interval.1

1This criterion is based on Meyer's (1977b, p. 482) Theorem 5.

In determining the boundary interval for a particular pair of distributions, program INTID tests each interval on the measurement scale until a boundary interval defined by two adjacent reference levels is identified. Given the reference levels -.0010, 0, .0005, and .0010, for example, the interval (-.0010, 0) is considered first. If subroutine SDUB indicates unanimous preference for one distribution at absolute risk aversion levels below -.0010 and subroutine SDLB indicates unanimous preference for the other at absolute risk aversion levels above 0, (-.0010, 0) is a boundary interval. If this criterion is not met, the interval (0, .0005) is evaluated in the same manner. The program steps up the measurement scale in this way until a boundary interval is identified or until all possible intervals have been examined.

Several parameter values must be specified by the user of INTID. NE and ND are defined exactly as in program NORGEN; they indicate the number of distributions to be considered and the number of elements defining each distribution. Maximum values of NE and ND are 40 and 10 respectively. NG is the number of reference levels on the measurement grid. Its maximum value is 64, the number of reference levels required for the specification of a six question sequence.

Two other types of inputs must be supplied by the user of INTID. First, the output of program NORGEN--the NE distributions, each having ND elements--must be read into INTID.1 These data are stored in two arrays: NAME, an array of distribution names, and R, an array of sample points. Second, the reference levels defining the measurement scale must be specified and read into array RA. A sample output from program INTID identifying boundary intervals for the distributions given in Figure B.4, based on the measurement scale defined in Figure B.6, is given in Figure B.7.

1Programs NORGEN and INTID are written so that the output of NORGEN can be catalogued as a permanent file and read into INTID from that file.
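Since the INTID listing itself is only partially legible here, the sketch below substitutes a deliberately crude stand-in for its logic: it compares expected utility under constant absolute risk aversion at each reference level and reports where the preferred distribution switches. INTID proper applies the dominance criteria B.1 and B.2, which order whole classes of risk aversion functions rather than single levels; the two sample distributions below are invented for illustration:

```fortran
program interval_sketch
  implicit none
  integer, parameter :: nd = 6, ng = 4
  ! two illustrative six-point sample distributions (equally likely points)
  real :: f(nd) = [-1500., -400., 0., 300., 900., 2200.]
  real :: g(nd) = [-3500., -900., 150., 700., 1600., 4100.]
  ! the four-level example scale used in the text
  real :: ra(ng) = [-.0010, 0.0, .0005, .0010]
  integer :: i, pref(ng)
  do i = 1, ng
     if (eu(f, ra(i)) > eu(g, ra(i))) then
        pref(i) = 1              ! distribution F preferred at this level
     else
        pref(i) = 2              ! distribution G preferred
     end if
  end do
  do i = 2, ng
     if (pref(i) /= pref(i-1)) print '(a,f7.4,a,f7.4,a)', &
        ' preference switches on the interval (', ra(i-1), ',', ra(i), ')'
  end do
contains
  ! expected utility under constant absolute risk aversion alam;
  ! u(y) = (1 - exp(-alam*y))/alam is increasing for either sign of alam
  real function eu(y, alam)
    real, intent(in) :: y(nd), alam
    if (alam == 0.0) then
       eu = sum(y)/real(nd)
    else
       eu = sum(1.0 - exp(-alam*y))/(alam*real(nd))
    end if
  end function eu
end program interval_sketch
```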
8.5 Construction of the Questionnaire

At least one pair of distributions for which the boundary interval lies between any two adjacent reference levels on the measurement scale should be identified by program INTID. Once this has been done, a hierarchy of questions can be established, with each question focusing on a different boundary interval. The hierarchy of questions associated with the measurement scale defined in Figure 8.6 is given in Figure 8.8. In general the first question of such a hierarchy should focus on the boundary interval at the center of the measurement scale. That in Figure 8.8, for example, focuses on the boundary interval (.0001, .0003), which is defined by the fourth and fifth reference levels of the eight-level measurement scale. The two questions at the second level focus on the intervals at the center of the two segments of the measurement scale created by the first question, and the four questions at the third level focus on the boundary intervals at the center of the four segments created by the second set of questions.

[Figure 8.6 An Eight Element Measurement Scale (reference levels including .0050, .0010, .0006, .0003, .0001, -.0001, and -.0005)]

[Figure 8.7 A Sample Output from Program INTID]
[Figure 8.8 A Hierarchy of Questions (the branch taken at each question depends on which distribution the decision maker prefers; the bracketed intervals at the bottom give the absolute risk aversion interval consistent with each response path)]

Once the hierarchy of questions has been specified, the number of system output levels at which direct measurements of absolute risk aversion are to be made should be determined. Experience to date has shown that direct measurements in the neighborhood of three to four system output levels provide an adequate basis for the construction of an absolute risk aversion function over even a broad range of system output values. If, for example, annual income is the system output variable for which preference information is to be elicited and the relevant income range is from 0 to $20,000, direct measurements of absolute risk aversion could be made in the neighborhood of $3,000, $10,000, and $17,000.
In order to specify the choices used to elicit information on preferences in the neighborhood of a given system output level, the sample distributions generated by NORGEN must be shifted to that level by adding a constant to each element. The boundary intervals between distributions do not change when their means are shifted away from zero. This is true because the reference levels on the measurement scale, A, represent constant levels of absolute risk aversion, such as would be associated with a utility function of the form

    u(y) = -e^{-Ay}                                               8.3

When the mean of any distribution is shifted by adding a constant value to each of its elements, the associated expected utility is altered only by a positive multiplicative factor for decision makers with this form of utility function.1 If two distributions are shifted by the same amount, then, their relative ranking by decision makers more or less risk averse than any specified value remains unchanged.

1Let the mean of the distribution of a random variable y shift from zero to y*--i.e., let the random variable w equal y + y*. Then

    E[u(w)] = ∫_{-∞}^{∞} -e^{-Aw} f(w) dw
            = ∫_{-∞}^{∞} -e^{-A(y+y*)} f(y) dy
            = e^{-Ay*} ∫_{-∞}^{∞} -e^{-Ay} f(y) dy
            = a E[u(y)]

where a = e^{-Ay*} is a positive constant.
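The multiplicative-factor result in the footnote is easily verified numerically. The short program below is purely illustrative: the ten-point sample distribution and the values A = .0003 and y* = 10,000 are invented for the check. Both printed values equal e^{-Ay*}, confirming that a common shift rescales expected utility by a positive constant and so leaves pairwise rankings unchanged.

   program shift_check
      implicit none
      integer, parameter :: nd = 10
      real(8) :: y(nd), a, ystar
      integer :: i
      a     = 0.0003d0             ! a constant absolute risk aversion level
      ystar = 10000.0d0            ! constant added to every element
      do i = 1, nd
         y(i) = 500.0d0 * dble(i)  ! an arbitrary sample distribution
      end do
      print *, 'E[u(y+y*)]/E[u(y)] =', &
               sum(-exp(-a * (y + ystar))) / sum(-exp(-a * y))
      print *, 'exp(-A*y*)         =', exp(-a * ystar)
   end program shift_check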
A set of sample questions designed to elicit information on preferences for income in the neighborhood of $10,000 is given in Figure 8.9. They are specified in the manner described in Chapter IV. It should be noted that the choices actually presented to a decision maker are dependent upon his responses to prior questions in the hierarchy.

[Figure 8.9 A Sample Questionnaire (sample distributions and paired-comparison questions centered near $10,000)]

8.6 Administration and Interpretation of the Questionnaire

Before the questionnaire is administered, the decision maker should have a clear understanding of its objective, which is to obtain an accurate representation of his preferences. The system output for which preferences are to be measured should already have been clearly defined and should be recognized by the decision maker to be the primary indicator of system performance he will consider when making a choice in the situation being analyzed.

Administration of the questionnaire is straightforward. The decision maker is presented with several series of choices such as those specified in Figure 8.8. Each series measures preferences in the neighborhood of a particular system output level. Completion of a questionnaire comprised of four three-question series takes approximately twenty minutes. Experience to date has shown that decision makers find this preference elicitation procedure more interesting and more informative than the interview process required to elicit a single-valued utility function.

Interpretation of the results is also quite straightforward. Using the series of questions in Figure 8.9 as an example, consider the case in which the decision maker prefers DIST 5 in question (1), DIST 7 in question (4), and DIST 17 in question (5). Referring to Figure 8.8, preference for DIST 5 over DIST 20 indicates that the decision maker is not less risk averse than .0001; i.e., that

    r(y) > .0001                                                  8.4

Similarly, preference for DIST 7 over DIST 4 indicates that his level of absolute risk aversion is such that

    r(y) < .0010                                                  8.5

Finally, from his preference for DIST 17 over DIST 3 it can be inferred that

    r(y) < .0006                                                  8.6

As noted in the lower line of Figure 8.8, then, these three responses indicate that the decision maker's level of absolute risk aversion lies on the interval [.0001, .0006] in the neighborhood of y = $10,000.
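The bookkeeping behind this interpretation amounts to tightening a pair of bounds as the responses arrive. The sketch below reproduces the example just given; the tested levels and responses are those of the text, while the program itself is only an illustration of the logic, not part of the questionnaire procedure.

   program interpret
      implicit none
      real    :: tested(3), lo, hi
      logical :: above(3)
      integer :: k
      tested = (/ 0.0001, 0.0010, 0.0006 /)   ! levels examined by questions (1), (4), (5)
      above  = (/ .true., .false., .false. /) ! .true. when the response implies r(y) exceeds the level
      lo = -huge(lo)
      hi =  huge(hi)
      do k = 1, 3
         if (above(k)) then
            lo = max(lo, tested(k))           ! response implies r(y) > tested(k)
         else
            hi = min(hi, tested(k))           ! response implies r(y) < tested(k)
         end if
      end do
      print *, 'r(y) lies on [', lo, ',', hi, ']'   ! prints [.0001, .0006]
   end program interpret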
8.7 The Use of Interval Measurements of Preferences to Order Choices

The interval approach to the measurement of decision maker preferences was devised for use with the evaluative criterion of stochastic dominance with respect to a function. It determines upper and lower bounds on a decision maker's absolute risk aversion function, the basic information on preferences required for the application of this criterion. In order to actually implement stochastic dominance with respect to a function in the ordering of choices, however, utility functions having absolute risk aversion functions which correspond to these upper and lower bounds must be constructed.

The link between absolute risk aversion functions and utility functions is straightforward, but analytical relationships between the two can be found only in certain special cases.1 Because no particular functional form is specified for the upper and lower absolute risk aversion functions constructed under the interval approach, the determination of the associated utility functions by analytical means is a difficult if not impossible task.

1As Pratt (1964) notes, u = ∫e^{-∫r}, where u is a utility function and r is an absolute risk aversion function. The two constants of integration are arbitrary, corresponding to the arbitrary scale and origin of the utility function.

Program UFUNC, which is listed in Figure 8.10, resolves this problem. It employs numerical integration techniques to generate values for the utility functions associated with the upper and lower bound absolute risk aversion functions at regular intervals over the relevant range of system output levels. In effect, these values define the two utility functions, since values not calculated directly can be determined by linear interpolation, as in Figure 8.11. Such an approximate representation of a function is called a table look-up function (Llewellyn, 1965).1

1See Appendix A for a brief discussion of table look-up functions.

[Figure 8.10 A Listing of Program UFUNC]

Several inputs must be specified by the user of UFUNC. First, the range of system output levels over which the utility function is to be constructed is defined by specifying the minimum and maximum values, YMIN and YMAX. Values of the utility function over this range are calculated by solving the following two differential equations recursively with Euler integration:2

    du(y)/dy  = u'(y)
    du'(y)/dy = -r(y) u'(y)

2Manetsch and Park (1977b) provide an excellent discussion of numerical integration in general and Euler integration in particular.

The solution technique requires that initial values of u(y) and u'(y) be specified at some level of y. Within the program, u(0) and u'(0) are automatically set at 0 and 1.0 respectively, so this condition is met. It is also necessary to specify a value of DY, the output increment. The smaller the value of DY, the more accurate the numerical approximation of the utility function value will be. In cases where system output has been specified in dollars, values ranging up to 5.0 have proved to be adequate.3

3Stability conditions under Euler integration require in this case that a value of DY be selected so that the absolute value of the following expression be less than 1.0, where R* is the minimum value of the decision maker's absolute risk aversion function:

    (2 - (DY)(R*))((DY)^2(R*)^2 - 8(DY)(R*))

If the minimum value of r(y) for a decision maker is set at -.01, an extremely low level, the stability conditions are met if DY = 5.0.
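The recursion UFUNC performs can be sketched directly from the two equations above. In the illustrative fragment below, r(y) is held constant at .0003 and the integration runs from 0 to an assumed YMAX of 20,000; the actual program instead evaluates r(y) from the interval measurement data described below.

   program euler_utility
      implicit none
      real(8) :: y, u, up, dy, r
      dy = 5.0d0                  ! output increment DY
      r  = 0.0003d0               ! illustrative constant absolute risk aversion
      u  = 0.0d0                  ! u(0)  = 0, as set by UFUNC
      up = 1.0d0                  ! u'(0) = 1, as set by UFUNC
      y  = 0.0d0
      do while (y < 20000.0d0)    ! assumed YMAX for the illustration
         u  = u  + dy * up        ! Euler step for du/dy  = u'(y)
         up = up - dy * r * up    ! Euler step for du'/dy = -r(y)u'(y)
         y  = y + dy
      end do
      print *, 'u(20000) =', u
   end program euler_utility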
[Figure 8.11 Representation of a Utility Function by Interpolation Between Known Points (u(y) plotted against y)]

If the range of system outputs is large, more values of the utility function will be calculated than are needed for the table look-up representation. A final parameter to be specified, then, is YINT. It defines the size of the interval between system output levels for which values of the utility function are to be specified in the table look-up function. In application to date, YINT has been set equal to 50 or 100. The minimum value for YINT is determined by the following expression:

    YINT > (YMAX - YMIN)/399                                      8.7

The user of UFUNC must also supply information on the decision maker's absolute risk aversion function. The interval approach to the measurement of preferences determines upper and lower bounds on a decision maker's absolute risk aversion function in the neighborhood of several system output levels. In the example shown in Figure 8.12, direct interval measurements were made in the neighborhood of y = 3000, y = 10,000, and y = 17,000. The upper and lower bound functions are considered to be constant over the range of y values for which each measurement applies.1 Values for the two absolute risk aversion functions at system output levels other than those where direct measurements have been made are determined by linear interpolation between known absolute risk aversion values or by linear extrapolation for system output levels outside the range over which direct measurements are made.

1This is a result of the assumption that the decision maker's absolute risk aversion function is constant in the neighborhood of any particular system output level (see Section 4.5 of Chapter IV). The range of system output levels over which a given measurement holds is dependent upon the dispersion of the sample distributions used to elicit the preference information. In the example in Figure 8.12, distributions generated by NORGEN with STD set equal to 500 were used. As expected, nearly all points in the sample distribution fall within two standard deviations of the specified mean, y*. Therefore, the interval measurements are said to be valid for system output levels in the range y* ± 1000.

[Figure 8.12 An Interval Preference Measurement (upper and lower bound absolute risk aversion functions plotted against y, with direct measurements marked X)]

The points marked with an X in Figure 8.12 convey all the information on the upper and lower bound absolute risk aversion functions that is required by UFUNC. They occur at six system output levels. Therefore, parameter KINT in program UFUNC, which indicates the number of points for which specific information is required, is set equal to 6. A series of six data cards is then read by the program, each card setting values for three variables: ARG, which is the level of y; VALL, which is equal to the lower value of r(y); and VALU, which is equal to the upper value of r(y). The values for the example in Figure 8.12 are given in Table 8.1.

Table 8.1 An Example of Preference Data Input for Program UFUNC

      ARG      VALL     VALU
     2000    -.0001    .0001
     4000    -.0001    .0001
     9000         0    .0003
    11000         0    .0003
    16000    -.0001    .0001
    18000    -.0001    .0001

The utility functions generated by UFUNC for this example are graphed in Figure 8.13.

[Figure 8.13 Utility Functions Associated with Upper and Lower Bound Absolute Risk Aversion Functions (lower bound solid, upper bound dashed)]
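The interpolation and extrapolation rule just described can be written as a single function. The version below is a minimal stand-alone sketch, not the routine used by UFUNC itself. Applied to the Table 8.1 data, rbound(arg, vall, 6, 10000.d0) returns 0 and rbound(arg, valu, 6, 10000.d0) returns .0003, while arguments outside the range 2,000 to 18,000 are handled by extending the first or last segment linearly.

   real(8) function rbound(xs, ys, n, x)
      ! Linear interpolation between the n known points (xs, ys), with
      ! linear extrapolation beyond the first and last points.  xs must
      ! be ascending, e.g. the ARG column of Table 8.1, with ys the
      ! corresponding VALL or VALU column.
      implicit none
      integer, intent(in) :: n
      real(8), intent(in) :: xs(n), ys(n), x
      integer :: j
      j = 2
      do while (j < n .and. x > xs(j))    ! locate the bracketing segment
         j = j + 1
      end do
      rbound = ys(j-1) + (ys(j) - ys(j-1)) * (x - xs(j-1)) / (xs(j) - xs(j-1))
   end function rbound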
Once values of the utility functions associated with the upper and lower bound absolute risk aversion functions have been calculated, they serve as inputs to program NSTDO, which orders distributions of system outputs according to the criterion of stochastic dominance with respect to a function.1 Program NSTDO is listed in Figure 8.14. The logical foundation of this procedure is explained in Section 4.4 of Chapter IV and, more extensively, in Meyer (1977a).

1This is a slightly modified version of the program written by Meyer for the application of stochastic dominance with respect to a function described in "Further Applications of Stochastic Dominance to Mutual Fund Performance."

[Figure 8.14 A Listing of Program NSTDO]

Several parameter values must be specified by the user of NSTDO. ND and NE again define the number of sample observations defining each system output distribution and the number of distributions to be considered. Their maximum values are 40 and 50 respectively. Data defining the NE distributions are read next by the program. This information is stored in arrays NAME and R. Finally, data on the decision maker's preferences must be read into the program. Values of SMALL, DIFF, and KDIM--the smallest system output value for which utility values are to be assigned, the difference between system output levels for which utility values are to be assigned, and the number of system outputs for which utility values are to be assigned--are read first. Then data on the utility functions associated with the decision maker's upper and lower bound absolute risk aversion functions are read into arrays ARG, VALL, and VALU, which are defined as above in program UFUNC.1

1The program reads these data from a permanent file which is the catalogued output of program UFUNC.

A sample output from program NSTDO is shown in Figure 8.15. This is an ordering of four distributions for the decision maker whose interval preference measurement is graphed in Figure 8.12. The symbol 1 indicates that the first distribution named is preferred to the second; -1 indicates that the second distribution is preferred to the first; and 0 indicates that the two distributions cannot be ordered by the criterion of stochastic dominance with respect to a function for the class of decision makers whose absolute risk aversion functions lie within the specified bounds.

The combined power of interval measurements of decision maker preferences and the criterion of stochastic dominance with respect to a function is demonstrated by the results presented in Sections 4.7 and 4.8 of Chapter IV. Clearly these are two related analytical tools which can be of considerable value in the analysis of decisions made under uncertainty.
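The pairwise bookkeeping that produces these symbols can be sketched as follows. The logical function unanim is an assumed stand-in for the dominance test NSTDO performs with the two bounding utility functions; only the logic yielding the 1, -1, and 0 codes is shown.

   subroutine order_pairs(ne, unanim)
      implicit none
      integer, intent(in) :: ne          ! number of distributions
      interface
         logical function unanim(i, j)   ! .true. when distribution i is
            integer, intent(in) :: i, j  ! unanimously preferred to j
         end function unanim
      end interface
      integer :: i, j, code
      do i = 1, ne - 1
         do j = i + 1, ne
            if (unanim(i, j)) then
               code = 1                  ! first distribution preferred
            else if (unanim(j, i)) then
               code = -1                 ! second distribution preferred
            else
               code = 0                  ! the pair cannot be ordered
            end if
            print *, 'DIST', i, ' versus DIST', j, ':', code
         end do
      end do
   end subroutine order_pairs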
[Figure 8.15 A Sample Output from Program NSTDO]

In this appendix the relatively straightforward procedures used to implement these techniques in a practical setting have been described.

APPENDIX C

IMPLEMENTATION OF THE GREMP MODEL

C.1 Introduction

In this appendix the computer program used to implement the GREMP model is described, and some of the special features of the model are discussed. The objective is to acquaint the potential user with the more technical aspects of this procedure and to suggest ways in which the model can be adapted for use in the analysis of particular decision problems.

The basic structure of the computer program which implements the GREMP model is shown in the flow chart in Figure C.1. The program begins with an initialization phase, during which parameter values are specified and required data are read in. The program then goes through a specified number of iterations during which strategies are generated at random, the outcomes of each strategy are simulated for a number of states of nature, and the efficient set is updated. Once the desired number of alternative strategies has been generated and evaluated, information on the elements in the efficient set is printed, and the program terminates.

As was noted in Chapter V, this procedure is not designed to identify a truly optimal choice. Rather, it simply generates a large number of strategies, and, on the basis of evaluative information supplied by the user, it identifies an efficient set of choices from those considered. The particular value of the GREMP model is that it can be used to analyze problems for which an optimal solution cannot be determined analytically.

[Figure C.1 General Flow Chart of Program GREMP: Start -> Initialization Phase -> Strategy Generation -> Simulation of Outcomes -> Update Efficient Set -> Desired Number of Strategies Examined? (if not, return to Strategy Generation) -> Print Information on Efficient Set]

The discussion in subsequent sections is organized in a manner similar to the computer program itself. Data requirements and suggested program parameter values are first examined. Next, the procedure by which strategies are generated is described. The simulation of the outcomes associated with each strategy generated and the evaluation of strategies by the criterion of stochastic dominance with respect to a function are then briefly discussed, with references being made to the more extensive descriptions of these procedures given in Chapters III and IV and in Appendices A and B. A complete listing of the program is included at the end of the Appendix.

C.2 The Initialization Phase

During the initialization phase of the program, run parameter values are established, some or all of the constraints on control variable levels are specified, and data defining alternative states of nature and the decision maker's preferences are read in. The run parameters define certain general characteristics of any particular application of the model. They include: ND, ITNS, NV, NC, NVC, MAXNO and NCONS.
As in other programs developed in this study, ND is the number of sample observations defining the distribution of outcomes associated with each strategy being considered. As such, it is also the number of states of nature to be defined. In the applications of the GREMP model discussed in Chapters V and VI, a value of 20 was specified for ND. In many practical instances a larger value of ND would be desirable. The maximum value in this version of the program is 20, but this can be augmented by simply changing the appropriate array dimensions.1

1Each stochastic system input variable must be dimensioned to ND; e.g. in the listing at the end of this chapter, the second argument in each array in common blocks 2 and 3 is set to ND. In addition, T and R are dimensioned to T(ND) and R(21,ND), and C and CP are dimensioned to C(2*ND+1) and CP(2*ND+1).

ITNS is the number of iterations the model will perform--i.e. the number of sample strategies which will be generated and evaluated. The value of ITNS depends entirely on the characteristics of the problem being analyzed. ITNS was set at 500 and 1000 in the two applications of the GREMP model discussed in this study. It may be desirable to specify larger values of ITNS when problems with a large number of choice variables are to be analyzed or when the identification of a more nearly optimal strategy is desired. If the simulation model used to generate sample observations from the distribution of outcomes associated with each strategy is quite complex, however, the cost of each iteration may be so high that a much lower value of ITNS must be specified.

NV is the number of control variables used to define a management strategy in the problem being considered. The set of control variables can be divided into as many as ten categories, with NC being the number of categories. In the applications discussed in Chapters V and VI, for example, it was convenient to divide the control variables into three categories: resource acquiring activity levels, resource using activity levels, and control rule parameters.2

2See Eisgruber and Lee (1971) for an interesting discussion of why choice variables need to be classified in such a manner when strategies are to be constructed in a sequential manner.
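Gathered together, the run parameters discussed so far and the dimensioning rule given in the first footnote above would be declared along the following lines. The sketch uses the values reported in the text (ND = 20, ITNS = 500 or 1000, NC = 3) and is illustrative rather than a transcription of the program.

   integer, parameter :: nd   = 20         ! states of nature
   integer, parameter :: itns = 500        ! iterations; 500 and 1000 were used
   integer, parameter :: nc   = 3          ! control variable categories
   integer :: nvc(10)                      ! variables per category (up to ten categories)
   real    :: t(nd), r(21, nd)             ! T(ND) and R(21,ND), per the footnote
   real    :: c(2*nd + 1), cp(2*nd + 1)    ! C(2*ND+1) and CP(2*ND+1)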
Members of the first set limit the range of allowable values for the NV control variables by establishing a minimum value, VMIN, a maximum value, VMAX, and the magnitude of the interval between values, VINT. If, for a particular 1See Donaldson and Webster (1968) for a more complete discussion of how such restrictions can be used. 2See Dent and Thompson (1968) for a discussion of the difficulties caused by constraints of other forms and for an explanation of how these difficulties can be overcome. 283 control variable, VMIN = O, VMAX = lOO, and VINT = 25, then possible values for that variable are 0, 25, 50, 75, and 100. If VINT is set equal to an integer, all values of the control variable will be integer. If, on the other hand, VINT is set equal to a very small value, the set of allowable values approaches that of a continuous variable between VMIN and VMAX. Linear constraints such as those used in the specification of a linear programming model comprise the second set of constraints. The program reads input-output coefficients and resource availability levels for each of the NCONS constraints of this type. All input-output coefficients are first initialized to equal zero. Non-zero values are then read into the two dimensional array A by specifying the constraint number, I, the control variable number, J, and the desired value of A(I,J).1 The variable LAST is simply a flag which, when set to a non- zero value, indicates that the last non-zero input-output coefficient has been read. Next, the NCONS resource availability levels are read into the array F, and the specification of linear constraints is completed. The initialization phase continues with the program reading data which define levels for each stochastic exogenous system input variable in each of the ND states of nature used in the determination 2 of system output distributions. The number of exogenous system input 1Control variables are ordered in the following manner. The first NVC(l) control variables are the elements in category one, the next NVC(Z) control variables are the elements in category two, etc. 2These data are generated externally to the program using tech- niques described in Appendix A. 284 variables depends on the characteristics of the system being considered in a particular decision analysis. In general, then, the user will supply his own READ statements here. The version of the program listed at the end of this appendix is that used in the analysis of the problem discussed in Chapter VI. Therefore, data on contract prices, cash prices, crop yields, and days available for fieldwork are read.1 Finally, information on decision maker preferences required for the application of stochastic dominance with respect to a function is read by the program. More specifically, the data points generated by program UFUNC, which define the utility functions associated with the decision maker's upper and lower bound absolute risk aversion functions, are read.2 First, however, values of YMIN, DY, and KDIM-aparameters of the table look-up functions used to represent those utility functions-- are read. YMIN is the minimum value of the system output variable for which a utility value is calculated, DY is the interval between system output levels for which utility values are calculated, and KDIM is the total number of data points. 
Once these values are established, KDIM values of ARG, VALL, and VALU--a system output level, a lower bound utility value, and an upper bound utility value--are read into the appropriate arrays.3 1All these data are read from separate permanent files. This is often more convenient than using data cards. ZSee Appendix B for a listing and description of this program. 3These values are read from a permanent file which is the catalogued output of UFUNC. 285 C.3 Strategy Generation All user supplied inputs are read into the program during the initialization phase. At the beginning of the first iteration and of all subsequent iterations, all control variables are set to zero, all constraints are reset to their original values, and several variables used to monitor the strategy generation process are set to zero. The generation of a feasible strategy then begins. The segment of the main program which constructs each strategy is listed in Figure C.2. The sequence of operations is such that all the elements of one control variable category are assigned values before those in the next category are considered. After updating values of ILO and IHI, the lowest and highest variable numbers of the elements in the variable category being considered, the program calls subroutine SELECT.1 This subroutine is a discrete uniform process generator which randomly selects a variable, V(J), from the set of variables in the category under consideration. If that variable has been considered in the construction of the current strategy, the value of IND(J) will be non-zero and subroutine SELECT will be called again. If it has not been considered, the value of NEX(I), the number of variables within the category already examined, is augmented by one and IND(J), the indicator for the variable is set equal to NEX(I). Subroutine LEVSET is then called. Like SELECT, it is a discrete uniform process generator. 1When none of the control variables in a particular category imposes constraints on any of the others, it is possible to bypass calls of sub- routines SELECT and CHECK. In the version of the program listed at the end of this appendix, this is done for variable category 3, the set of control rule parameters. 50 65 70 286 :7- I Z 0 H HZ a—H’JiQI-a one DI CXAI‘O “C *:¢~ Ola-40 mZXTZOI—VVUU) AHH'TIA I?“ II Ii “'ilHH -r- l "UWH‘V"‘1... Cr .7: C CU, C)0.1r. 0H ($131.0, FP r. L. o O 9 T 1. rlT. I .11.» on. ) T 0 TI. CFTuK Tl. ClTuh . C... 9 9 T J 9 S .1. C9 CC I. J... .1 GT C 91. C9 S .1. C9 5.. J ... n. 9 .J O x )Vfi._\.J .. O o 0 CQ 0X .VOCv. OX )VCSU. )) 9 6 To B... 9 7 r. .196 91. )) F0 1. G 0 CC )9C .1. CC ...96 .1. 055 P. 9 .... N 2 9 . 9... 7r. UH ... h(1 V DC. . a... TC . 9..- T5 T1... 0 .... 2 U A 7 0 Q R ) 9.9.)“.0 91.1. T 9 7 9 9.1.... 9.1) (7.9.16) 9....) 997.9..lup ”hp 0 6 M )n C) 9 Q T. 7 N1 )1 H V9 L .C CD )0) c. )THF P P D ) ))) CM V9... .C .....H V9 C 2:. CK! )0 c. )KS 5 O ... C )KQ ... 9) .)9 K7 Pr...“ 7 JLPC CO. C .OH JJJ 991 9) o). .1. 9) .)9 G99 5. ) HI Vi .) )9 9 7 O ) HA K! 9) 0 Cu 0CCC 9 9 r1: .1 9 7H1. .CA ACCLPS 30R 7H1.1.1. 9an F...3CCCFD PQCCCC Wu 1.7... 6 )HS. K‘9K «.13 9.. Q 9... G )HC. 7.1.9K K Q1. 9r;\$ Hu- n. n . 9 9 r ‘C‘aRR Ptrnficwfi 9 9 59 $53 99“ C19 9CCC 9K $1. 9555 )v n. .10. 9a.. 9 KS...) ((.)K 9 9 9H. 91.... n FCC) 11C)“ 9 AVr...l.t (I. 9.1YrT ) 7...u99 .L99A. .1.)T 7..99 09 P\EP.II.C9 ..VEPKB 11.1. 9KE..5 9.19.! $.K1. 0E T117. ).v.a.k. 0.91. J r.1_.L9r Pu Cr. 1.1 9L5? 10 9H1...1.Hrr P 9.. $.u... L30.) 9H.P.H..r Cu“ H(L9Ln11ru. F(Lohnu 91C»... 79Lk....d1...u.V ..FKrun. 
9L C“: ..u 7.01....PK ..FFG 1| ’9 9C9. r19... .. 9.09 9C}; 19's ICCTFSFC.C0. 959 9.1.. 9 9 9500.]? (CS 91. ’9 9C99 919. ’9 9C9. 9FFE=J9 9K9¢~)S(n.)s1\. of 9.9. 99K)91.. 90C.)C1\.1)r.1.- SSHSCSCC CCUJTTHP... . GTHCA‘SA . on» .IAfilQHTHL ..9H5lll )CCHCCCCC)S$HSC¢..CC TCCUdCP/YI. .553... S‘sPOLHUH/KQKT CK . SACQeuAGfl PP: [1.955 1.1... GrVPFPG . D ...: TEFLCCnED ... PLFPO RD R.- .h ..PFCEC: CC...FFSE$PS$15(1.N SSP VPJR : JP . 5.. .ShSH . K TKHJfi: g . C CCL...1r..Ln1..J:..TnJ..FFC..: FO......1.:..:::r...»:...v:.Uc. .S::C......:..QTC1.1.....r.F.L,(r.C£:u.C—.F.9 ......-.3...12.....9Sl...)01...H9u)1.rbe.-.L.K90m99C1...)U1...F‘ (kaPC: : . C79... JFK: .1» PT)VLFHFHP .11.».913911... JF)PT)fiFPF .VCD...C.U1.: ..V1.CFC.U(: ......PCSTritF ..flOAVNKTVHP ..KQRTD ..TKTmJabVOKTVHR: ......1)¢(F.CJPP.\ (...-(JP .. dFr.1.nr.\..rCr......S:v,. (((JP. JP