THEORY AND APPLICATIONS OF SENSITIVITY ANALYSIS TO ENZYME KINETICS

By

Thomas Henry Pierce III

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Chemistry

1981

Copyright by Thomas Henry Pierce III, 1981

ABSTRACT

THEORY AND APPLICATIONS OF SENSITIVITY ANALYSIS TO ENZYME KINETICS

By Thomas Henry Pierce III

The theory of Sensitivity Analysis and its applications to Enzyme Kinetics are examined. The Walsh Sensitivity Analysis Procedure, WASP, is developed and shown to be a powerful probe of the theory of Sensitivity Analysis as well as the preferred method for discrete models. The Fourier Analysis Sensitivity Test, FAST, method is reviewed and shown to be the preferred method of Sensitivity Analysis for continuous models. The linear Taylor series approach to Sensitivity Analysis is given as an aid in interpreting Sensitivity Analysis results. The theory of Walsh function Sensitivity Analysis is derived and its advantages are investigated. The Walsh technique is shown to be an exact technique for discrete model output functions. For continuous model output functions the Walsh method yields an averaged finite-difference Taylor series with respect to the parameters. Walsh Analysis and 2-point discrete Fourier Analysis are shown to be identical. Since Walsh Analysis is easily related to both the Fourier method and to the linear Taylor series method, it is a valuable tool for further development of Sensitivity Analysis. The mass action laws of chemical kinetics are used to develop models which are analyzed with respect to their parameters. Enzyme Kinetics models for hysteresis and allosterism are investigated by the techniques of Sensitivity Analysis.
The mechanism of hysteresis in the Frieden model and the Ainslie model is clearly shown to be an effect of the rates of isomerization of the inactive enzyme-substrate complex to the active enzyme-substrate complex. The Ainslie model is dynamically equivalent to the simpler Frieden model for a large set of rate constants. The Frieden model also displays apparent allosterism if the "correct" set of rate constants and initial conditions is used. Therefore the Frieden model is the simplest one-site enzyme kinetic model which displays both burst and lag hysteresis as well as both positive and negative cooperativity (allosterism). Fourier Sensitivity Analysis was applied to a pH Tryptophanase model where the parameters and their variations were obtained from experimental data. This type of analysis gave insight into the design of future experiments. Over the experimentally accessible range of pH, the lower pH region is shown to contain the most information on the parameters which can be examined in future experiments. A computer program is given which is used for routine application of Sensitivity Analysis, both Fourier and Walsh, to other models. This program has been extensively revised to clarify its logic and to simplify its use. Any mathematical model which can be simulated on a computer may be directly inserted into this program. Suggestions for future work are discussed. Research on the connections between Statistics and Sensitivity Analysis should lead to insight into both areas. The investigation of "approximate" Walsh Sensitivity Analysis may lead to faster algorithms for Sensitivity Analysis.

ACKNOWLEDGEMENTS

A great many people assisted me in this endeavor and I am especially grateful to those mentioned here. I would like to thank Rex Kenner, Bill Waller, Barb Kennedy and Aristides Mavridis for their help in getting me started, and Jim Mehaffey and Mark Ondrias for keeping me sane.
My parents and Grandmother Pierce I thank for their encouragement. I am grateful to Cathy Stewart for her conviction and care during these difficult times. I appreciate the ever-present support and guidance of Professors Robert Cukier and James Dye, and acknowledge Dr. Robert Ball (Dr. Bob) for his penetrating discussions, his computer programs (GETNUM, PLOT), and especially his friendship, without which this work would have been impossible. I am also indebted to the Michigan State University Chemistry Department for an assistantship, as well as support from NSF Grant No. PCM 78-15750.

TABLE OF CONTENTS

LIST OF TABLES . . . v
LIST OF FIGURES . . . vii
CHAPTER I. SENSITIVITY ANALYSIS -- AN OVERVIEW . . . 1
CHAPTER II. FOURIER SENSITIVITY ANALYSIS . . . 9
CHAPTER III. WALSH SENSITIVITY ANALYSIS . . . 28
CHAPTER IV. EXAMPLES OF SENSITIVITY ANALYSIS . . . 52
CHAPTER V. SENSITIVITY ANALYSIS OF SIMPLE ENZYME KINETICS MODELS . . . 128
    Michaelis-Menten Model . . . 128
    Reversible Michaelis-Menten Model . . . 140
    Models with Slow Conformational Changes . . . 141
    Model of Ainslie, Shill and Neet . . . 142
    Frieden Model . . . 157
    Summary . . . 167
CHAPTER VI. SENSITIVITY ANALYSIS OF A TRYPTOPHANASE KINETIC MODEL . . . 170
CHAPTER VII. FUTURE WORK AND DEVELOPMENT . . . 189
APPENDIX 1 - Relationships of Fourier Coefficients to Taylor Series Coefficients . . . 194
APPENDIX 2 - Histograms of Fourier Transformation Functions . . . 198
APPENDIX 3 - Fast Walsh Transform . . . 202
APPENDIX 4 - Parseval's Equation for Walsh Functions . . . 204
APPENDIX 5 - The Calculation of Walsh Partial Variances . . . 209
APPENDIX 6 - Derivation of Walsh Coupled Partial Variance Formulas . . . 213
APPENDIX 7 - Relationship Between Linearly Dependent Equations and their Fourier Coefficients . . . 216
APPENDIX 8 - Sensitivity Analysis Fortran Programs . . . 218
APPENDIX 9 - Approximate Walsh Sequency Sets . . . 283
REFERENCES . . . 284

LIST OF TABLES

3.1 Multiplication Table for the p = 2 Group of Walsh Functions . . . 32
5.1 Parameter Values for the Irreversible Michaelis-Menten Model . . . 130
5.2 Parameter Values for the Reversible Michaelis-Menten Model . . . 130
5.3 Parameter Values for the Ainslie Model . . . 146
5.4 Summary of SAM and Computer Data . . . 147
5.5 Parameter Values for the Frieden Model . . . 159
5.6 Rate Constants for Allosteric Test . . . 168
6.1 Parameters for Tryptophanase Model . . . 172
6.2 Best-fit Parameters Based Upon Scheme 1 and Their Marginal Standard Deviation Estimates . . . 173
6.3 Legend for Figures 6.5-6.11 . . . 173

LIST OF FIGURES

3.1 The four Walsh functions defined by two binary digits . . . 33
3.2 Plot of the sampling points in the multidimensional u-space . . . 48
4.1 The linear sensitivity coefficients from the Linear Model . . . 56
4.2 The averaged value of the Linear Model from a Walsh Sensitivity Analysis . . . 57
4.3 The standard deviation of the simulations sampled in the Walsh Analysis of the Linear Model . . . 58
4.4 The Walsh expansion coefficients from the Linear Model . . . 60
4.5 Walsh partial variances from the Linear Model . . . 63
4.6 Fourier expansion coefficients from the Linear Model . . . 65
4.7 Fourier partial variances from the Linear Model . . . 66
4.8 The standard deviation of the sampled simulations from the Linear Model using Fourier Analysis . . . 67
4.9 Linear sensitivity coefficients from the Exponential Model . . . 70
4.10 The averaged value of the Exponential Model using Walsh Sensitivity Analysis with 10% variation in the parameters . . . 71
4.11 Walsh expansion coefficients from the Exponential Model (10% variation) . . . 72
4.12 Walsh partial variances from the Exponential Model (10% variation) . . . 74
4.13 The standard deviation from the Walsh Analysis of the Exponential Model (10% variation) . . . 75
4.14 The relative deviation from the Walsh Analysis of the Exponential Model (10% variation) . . . 77
4.15 The averaged value from the Walsh analysis of the Exponential Model with the parameters varied by 60% . . . 78
4.16 Walsh expansion coefficients from the Exponential Model (60% variation) . . . 79
4.17 The Walsh partial variances from the Exponential Model (60% variation) . . . 80
4.18 The Walsh standard deviation from the Exponential Model (60% variation) . . . 81
4.19 The Walsh relative deviation from the Exponential Model (60% variation) . . . 82
4.20 The Walsh averaged value from the Exponential Model with 100% parameter variation . . . 84
4.21 Walsh expansion coefficients from the Exponential Model (100% variation) . . . 85
4.22 Walsh partial variances from the Exponential Model (100% variation) . . . 86
4.23 The Walsh standard deviation from the Exponential Model (100% variation) . . . 87
4.24 The Walsh relative deviation from the Exponential Model (100% variation) . . . 88
4.25 The Fourier averaged value from the Exponential Model (10% variation) . . . 90
4.26 Fourier expansion coefficients from the Exponential Model (10% variation) . . . 91
4.27 Fourier partial variances from the Exponential Model (10% variation) . . . 92
4.28 The Fourier standard deviation from the Exponential Model (10% variation) . . . 93
4.29 The Fourier relative deviation from the Exponential Model (10% variation) . . . 94
4.30 The Fourier averaged value from the Exponential Model (100% variation) . . . 96
4.31 Fourier expansion coefficients from the Exponential Model (100% variation) . . . 97
4.32 Fourier partial variances from the Exponential Model (100% variation) . . . 98
4.33 The Fourier standard deviation from the Exponential Model (100% variation) . . . 99
4.34 The Fourier relative deviation from the Exponential Model (100% variation) . . . 100
4.35 The averaged concentrations from the Unimolecular Model . . . 103
4.36 Linear sensitivity coefficients for [B] in the Unimolecular Model . . . 105
4.37 Fourier partial variances for [B] in the Unimolecular Model . . . 106
4.38 Linear sensitivity coefficients for [C] in the Unimolecular Model . . . 107
4.39 Fourier partial variances for [C] in the Unimolecular Model . . . 108
4.40 The Fourier standard deviation for [C] in the Unimolecular Model . . . 110
4.41 The Fourier relative deviation for [C] in the Unimolecular Model . . . 111
4.42 The averaged concentrations in the Branched Unimolecular Model . . . 113
4.43 Fourier partial variances for [B] in the Branched Unimolecular Model . . . 114
4.44 Fourier partial variances for [C] in the Branched Unimolecular Model . . . 116
4.45 Fourier partial variances for [D] in the Branched Unimolecular Model . . . 117
4.46 The averaged Substrate concentration in the Michaelis-Menten Model (1% variation) . . . 120
4.47 The relative deviation of Substrate in the Michaelis-Menten Model (1% variation) . . . 121
4.48 Partial variances of the rate constants for Substrate in the Michaelis-Menten Model (1% variation) . . . 122
4.49 Partial variances of the rate constants for Velocity in the Michaelis-Menten Model (1% variation) . . . 123
4.50 Averaged Substrate concentration in the Michaelis-Menten Model (80% variation) . . . 125
4.51 Partial variances for Substrate in the Michaelis-Menten Model (80% variation) . . . 126
4.52 Partial variances for Velocity in the Michaelis-Menten Model (80% variation) . . . 127
5.1 Average concentrations and standard deviations of the concentrations for the Michaelis-Menten models. The symbols represent: ◇, S; O, P; Δ, E; +, ES . . . 133
5.2 Partial variance plots for the Michaelis-Menten models. A number represents the partial variance for that rate constant. Coupled partial variances are represented as follows: in (a) by *, S1,3; in (b) by *, S1,3; +, S1,2; X, S2,3; in (c) by *, S1,3; +, S2,3 . . . 136
5.3 Average concentrations and the standard deviations of the concentrations for the Ainslie models. The symbols represent: O, S; Δ, E; +, E*S; ◇, P; X, ES; )(, E*P . . . 148
5.4 Partial variance plots for the Ainslie lag model. A number represents the partial variance for that rate constant. Other partial variances are represented by: B, S11; D, S13; F, S15. Coupled partial variances are represented as follows: in (b) by +, S5,7 and *, S7,11; in (c) by +, S11,13 and *, S1,15; in (d) by S5,7; in (e) by S13,15 . . . 151
5.5 Partial variance plots for the Ainslie burst model. A number represents the partial variance for that rate constant. Other partial variances are represented by: B, S11; D, S13; F, S15. Coupled partial variances are represented as follows: in (a) by *, S7,11; in (b) by *, S1,15; +, S13,15. The
A number represents the partial vari- ance for that rate constant. Coupled partial vari- ances are represented as follows: in (b) by +, . ' . 9(- 85,9, in (C) by +, 85,9, , 81,7. . . . . . . . . Figure 5.8 Partial variance plots for the Frieden burst model. A number represents the partial vari- ance for that rate constant. Coupled partial vari- ances are represented as follows: in (a) by *, 85,9; in (b) by *, 81,7. Figure 6.1 Averaged value of De1(ABS) from the pH drop Tryptophanase Model (standard deviation variation). . . . . . . . . . . . . . . . . Figure 6.2 Averaged value of De1(ABS) from the pH jump Tryptophanase Model (standard deviation variation). . . . . . . . . . . . . . . . . Figure 6.3 Averaged value of k' from the Tryptophanase Model (standard deviation variation). . . . . . . . . . . . . . . . . Figure 6. A Partial variances of De1(ABS) from the pH jump Tryptophanase Model (standard deviation variation). . . . . . . . . . Figure 6.5 Partial variances of De1(ABS) from the Tryptophanase Model (10% variation).. . . . . . Figure 6. 6 Partial variances of k' from the pH jump Tryptophanase Model (standard deviation variation). . . . . . . . . . . . . . . . Figure 6.7 Partial variances of k' from the pH jump Tryptophanase Model (10% variation). Figure 6. 8 Partial variances of De1(ABS) from the pH drop Tryptophanase Model (standard deviation variation). . . . . . . . . . . . . . . . . . Figure 6. 9 Partial variances of De1(ABS) from the pH drop Tryptophanase Model (10% variation). . xi Page 161 16“ 166 175 175 178 179 180 182 183 18A 185 Figure A.1 Histogram of log—uniform distribu- tion function Figure A.2 function. Histogram of uniform distribution Figure A.3 Histogram of Gaussian—type distribu— tion function . . . . . . . . Figure A.A function. Histogram plot of sin—transformation xii Page 198 199 200 201 I. SENSITIVITY ANALYSIS — AN OVERVIEW INTRODUCTION Mathematical models have exerted tremendous influ— ence in science. 
Many people have commented on the "seemingly exact" way that mathematical equations model nature (Benacerraf et al. 1964). It is this ability to 'describe' physical processes that makes mathematics so useful in science. Mathematical models are composed of four parts: independent variables, through which the model evolves; dependent variables, which change as a function of the independent variables; parameters, which are constant during a simulation but may change from one simulation to another; and constants, which never change, such as the velocity of light in a vacuum. Once a mathematical model is proposed, if it is 'correct', we can use it to predict the future behavior of a physical system. It may also be used to explain previous behavior of the physical system. To do this many models require the 'adjustment' of parameters. It is this ability of the parameters to describe different physical systems by changing their values which gives great generality to mathematical models, and which causes confusion as to whether or not the model is 'right'. Often two or more models give the correct results using only different parameters. Mathematical models depend on their parameters, and sensitivity analysis describes precisely how a mathematical model depends on its parameters. An intuitive sensitivity analysis method would be to vary a parameter over two values and observe how the model changes. If a particular value of the model output increases as the parameter increases, we say that the output is positively affected. This can be generalized by asking for the quantitative sensitivity of a particular output function to a parameter. The most popular method is to take the derivative of the output function O with respect to the parameter p, evaluated at a nominal value p0:

    (∂O/∂p) at p = p0    (1.1)

Many models have more than one parameter. A collection of the derivatives with respect to the parameters permits observations to be made about the model.
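The derivative-based measure of Equation (1.1) is easy to sketch numerically. The block below uses a central finite difference and a hypothetical two-parameter output function O(p1, p2) = p1·exp(−p2) with illustrative nominal values; none of these names or numbers come from the dissertation:

```python
import math

def sensitivity(model, p0, dp=1e-6):
    """Central-difference estimate of the derivative in Equation (1.1)."""
    return (model(p0 + dp) - model(p0 - dp)) / (2.0 * dp)

# Hypothetical model output function O(p1, p2) = p1 * exp(-p2).
def output(p):
    p1, p2 = p
    return p1 * math.exp(-p2)

nominal = [2.0, 1.0]  # illustrative nominal parameter values

def coeff(i):
    """Sensitivity with respect to parameter i, all others held at their nominal values."""
    def one_dim(x):
        p = list(nominal)
        p[i] = x
        return output(p)
    return sensitivity(one_dim, nominal[i])

coeffs = [coeff(i) for i in range(len(nominal))]
# Rank the parameters from most to least significant by |dO/dp_i|.
order = sorted(range(len(nominal)), key=lambda i: -abs(coeffs[i]))
print(coeffs, order)
```

Here the ranking places p2 first, since |∂O/∂p2| = 2e⁻¹ exceeds |∂O/∂p1| = e⁻¹ at the nominal point; this is exactly the ordering of parameters by significance discussed next.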
For example, we can order the parameters according to their significance. The most significant parameter is the one which has the largest effect on the value of the model output function. This leads to an ordering of the parameters with respect to their effect on the model, from most significant to least significant. Often models have parameters and at least one independent variable. An example is the temperature dependence of a rate constant:

    k = A e^(-Ea/RT)    (1.2)

Here A and Ea are parameters, R is a constant, and temperature, T, is the independent variable. The model output function is the rate constant k, the dependent variable. In such cases the sensitivity of the output function depends not only on the parameters but also on the independent variable. When measurements are repeated at different values of the independent variable, sequences of parameter sensitivity values can be collected over a temperature range of interest. A sequence of parameter sensitivity values may also give other useful information. From it we may be able to identify regions of sensitivity. Using the previous example, there may be temperature ranges where the sensitivity ordering of the parameters changes. In one region the parameter A may be the most important, while in a different region the parameter Ea may be the most important. Therefore, to measure A, we should measure it in the first region, where the model is most sensitive to A. For a more complex model it may happen that in the region of interest the model has no significant sensitivity to one or more of the parameters.
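The temperature-dependent ordering for Equation (1.2) can be checked directly. The sketch below uses hypothetical values of A and Ea (not from the dissertation) and compares the normalized sensitivities ∂k/∂ln A = k and ∂k/∂ln Ea = −k·Ea/(RT); the ratio Ea/(RT) decides which parameter dominates at a given temperature:

```python
import math

R = 8.314  # J mol^-1 K^-1, gas constant
A, Ea = 1.0e13, 50_000.0  # hypothetical pre-exponential factor and activation energy

def k(T):
    """Arrhenius rate constant, Equation (1.2)."""
    return A * math.exp(-Ea / (R * T))

def log_sensitivities(T):
    """Normalized sensitivities dk/dlnA and dk/dlnEa at temperature T."""
    return k(T), -k(T) * Ea / (R * T)

# Ea dominates wherever Ea/(R*T) > 1, i.e. for T below Ea/R (about 6000 K here),
# so in the low-temperature region Ea is the more important parameter.
ea_dominates = {}
for T in (300.0, 10_000.0):
    sA, sEa = log_sensitivities(T)
    ea_dominates[T] = abs(sEa) > abs(sA)
print(ea_dominates)
```

The crossover at T = Ea/R is the kind of region boundary the text describes: below it an experiment constrains Ea best, above it (here an unphysically high temperature) it constrains A best.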
In this case the model may be reduced to a simpler model by formally fixing the value of the insensitive parameter at zero (or one): for example, it may be possible to remove the step corresponding to the parameter in question from a mechanism. Application of sensitivity analysis can also help to validate a model. Knowing the rank-order of the parameters' sensitivities and the region of maximum sensitivity permits the design of experiments to probe the test model over these regions. A fit of the model to the new data then gives further evidence that the model is correct. As described above, sensitivity analysis can lead to better understanding of the model.

Since the Fourier expansion of a well-behaved function is identical to another expansion which has been rearranged, it can be seen (Appendix 1) that these Fourier coefficients are functions of all the higher derivatives of the model function with respect to the parameters. This is the ideal relationship required for nonlinear functions. It allows sensitivity measures where the parameters are varied over orders of magnitude with no restrictions on the model output function. The implementation of the Fourier method on a computer requires approximations, as explained in Chapter Two. The approximations limit the method through an accumulation of approximation error. However, the sources of the error have been described by Cukier et al. (1975). These errors are controllable and they can be bounded by a maximum error estimate. Other expansions are possible, and in Chapter Three we will investigate Walsh series expansions (Walsh 1923, Fine 1949). It will be shown that for a discrete model we incur no errors in analyzing the sensitivity of a model. However, for continuous models it will be shown that a Walsh sensitivity expansion is identical to a finite-difference Taylor series.
In Chapter Four the three approaches are compared, and we can see that they give similar results when the parameter variation approaches zero. In the case of a global analysis of a strongly nonlinear model only the Fourier method gives the correct results for the sensitivities. In Chapter Five we apply the Fourier technique to some steady-state enzyme kinetics models. Here we show that two apparently different models are dynamically identical over a rather extensive range of parameter variation. Also, the sensitivity analysis of progress curves shows that some parameters may 'accumulate' sensitivity in time. At short times they are relatively insensitive, but as the reaction proceeds they become the most important parameters in the model. In Chapter Six we apply the Fourier technique to a recently-studied transient-state enzyme model (June et al. 1979, 1980). The future work and development of sensitivity analysis techniques is discussed in Chapter Seven. It is suggested that the study of approximate Walsh sensitivity analysis and the frequency sets used in approximate analysis will extend both methods. Also the relationship of sensitivity analysis and statistics is discussed. It is our belief that these questions will direct research into fruitful areas which will advance the usefulness of sensitivity analysis techniques with extremely complex models. Appendix 8 contains the various programs and their operation instructions for the application of both Fourier and Walsh Sensitivity Analysis. The programs are model independent, so that any type of numerical model may be used. It is my hope that sufficient theory and examples are given here to encourage others to use these powerful techniques.

II. FOURIER SENSITIVITY ANALYSIS

In this chapter we will discuss the Fourier method of sensitivity analysis. Only an overview of the theory will be given, as this technique was extensively reviewed by Cukier et al. (1978).
Here we will examine the details of the implementation and review the approximations and limitations of this method. The particular model chosen for this implementation is derived from the laws of mass action kinetics. Chemical rate equations as derived from postulated mechanisms can be described by sets of first-order-in-time, coupled ordinary differential equations of the form

    dC_i/dt = F_i(C, k),    i = 1, 2, 3, ..., m    (2.1)

with prescribed initial conditions,

    C_i(t=0) = C_i^0    (2.2)

In Equation (2.1) C_i(t) is the concentration of the ith species at time t, C = (C_1, C_2, ..., C_m) is a vector of all the species concentrations, and k = (k_1, k_2, ..., k_n) is a vector of all the rate constants (parameters). The function F_i symbolizes the rate law for species i. We shall assume that these rate equations can be solved for C(t), given the initial conditions, the values of the rate constants, and the specific functional form of the rate laws. If this cannot be done analytically, it can almost always be done numerically. We require the sensitivities of the concentration C_i(t) to uncertainties in the values of the rate coefficients k_ℓ. The uncertainties in the rate coefficients are, in this method, represented statistically. That is, we assign a probability density such that p_ℓ(k_ℓ) dk_ℓ is the probability that the ℓth rate coefficient lies between k_ℓ and k_ℓ + dk_ℓ. These probability densities reflect our knowledge of the possible values of the rate coefficients in a given elementary chemical reaction. If one has accurate data, then the probability density can be chosen to be narrow to reflect this information. However, if data are sparse or not reliable, the uncertainty can be chosen as large as desired. The joint distribution function may then be written

    P(k) = Π_{ℓ=1}^{n} p_ℓ(k_ℓ)    (2.3)
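A minimal sketch of such a rate-equation model, integrated numerically and averaged over rate constants drawn from independent uniform densities p_ℓ(k_ℓ), is given below. The mechanism A → B → C, the parameter ranges, and the sample count are illustrative assumptions, not the dissertation's models:

```python
import random

def rates(c, k):
    """Mass-action right-hand side F_i(C, k) of Equation (2.1) for A -> B -> C."""
    a, b, _ = c
    k1, k2 = k
    return (-k1 * a, k1 * a - k2 * b, k2 * b)

def integrate(k, c0=(1.0, 0.0, 0.0), t_end=1.0, n=100):
    """Fixed-step RK4 solution of Equation (2.1) with initial condition (2.2)."""
    h, c = t_end / n, list(c0)
    for _ in range(n):
        k1v = rates(c, k)
        k2v = rates([c[i] + 0.5 * h * k1v[i] for i in range(3)], k)
        k3v = rates([c[i] + 0.5 * h * k2v[i] for i in range(3)], k)
        k4v = rates([c[i] + h * k3v[i] for i in range(3)], k)
        c = [c[i] + h / 6.0 * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i])
             for i in range(3)]
    return c

random.seed(0)
# Draw each rate constant from its own uniform density (the product form of Equation 2.3)
# and collect the concentration of B at t_end for each draw.
samples = [integrate((random.uniform(1.0, 3.0), random.uniform(0.5, 1.5)))[1]
           for _ in range(200)]
mean_b = sum(samples) / len(samples)                              # sample estimate of <C>
var_b = sum((s - mean_b) ** 2 for s in samples) / len(samples)    # sample variance of B
print(mean_b, var_b)
```

The mean and variance computed here are crude sampling estimates of the moments defined in this chapter; the Fourier method exists precisely to obtain such quantities without this kind of exhaustive random sampling.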
Once the joint distribution is known we may "'construct moments of the multidimensional function by multi— dimensional integration. The first moment, the average value, is written = f dk C(k)P(k) (2A) tvhere dk = dkldkz...dkn is the multidimensional volume eelement in the rate coefficient Space. In the present eeJcample represents the average concentration of a g;1;ven species as calculated from the rate laws and the IPEite constants, where the rate constants are varied over 1:11eir entire set of possible values. Similarly we may construct multidimensional variances C>f the function, by calculating (o2)i = - 2 (2.5) UDInis would represent the expected spread of concentrations C>‘ver values accessible to species 1 because of uncertainties len the rate constants. Similarly partial variances, the ‘réariance along only one parameter dimension, say the first l2 parameter can be computed as 2 - * 2 9(- 2 (Ol)i ‘ ((Ci(kl)) > ‘ (Ci(kl)> (2.6) where G:(kl) is the function averaged over 5 = (k2...kn) and the integration is over kl. This gives the spread of concentrations caused by uncertainties in kl. This idea may be extended to coupled partial variances, variances over more than one parameter at a time. These variances would be very informative. We would be able to characterize the extent that the model de- pended on the parameters. This also would tell us which parameters were most important (those whose variances were largest). If a parameter's variance were small then the effect of a parameter changing over its entire range is negligible to the behavior of the model. This means that the model may be simplified by excluding parameters 'whose variances are small. The coupled partial variances would tell us how the Ibarameters interact. If these coupled variances are large 11hen the model also depends on the relationship of coupled parameters. The effect of one parameter acts in concert unith.another parameter. 
These coupled variances may be eoctended to arbitrary number of parameters coupled together (but less than n). The only requirement here is the construction of the 13 joint probability distribution and its multidimensional integration. Presumably this could be done by numerical quadrature. We would sample each parameter over its domain, then solve the model for each different combina- tion of the parameters. The solutions would be numerically integrated against the joint probability distribution to give the desired moments. The required amount of calculation to accomplish this is enormous. If we chose 10 different values for each parameter, and there were only five parameters in the model, we would have to solve the model 510 times, ap- proximately 10 million times. Even so, if the ranges of the parameters were large or the model highly structured, we would need still more sampled points to accurately carry out these variance calculations by such a brute force method. To calculate the variances in finite time we need a different approach. We need a way to compute the multi- dimensional integrals without exhaustive sampling of the output function. In 1938 Hermann Weyl (Weyl, 1938) derived an integral identity which, under certain conditions, reduces a multi- dimensional integral to a single path integral. To apply his theorem we must return to our definition of parameters and transform them into periodic functions. The rate constants, considered as random variables, may be related to a generating function, 1A k2 = 82, (119.) (2.7) where g2 is the generating function and u2 is the inde— pendent variable. As ul is varied from -w to m, k, is varied over all its possible values. Consequently, uz also has a probability distribution. 
Since the kz's I l A s, the u, s are also independent.o We may then write the total joint probability are independent functions of the u density function in u-Space as ) (2.8) It is convenient to let u2 be related to 91 through the transformation function such that as e, traverses -m to m, u, also goes from -w to w, so no information has been lost. Now we further write 9, = (wls) with -m + 0(p2 ) (2.26) This equation shows a direct relationship between the averaged linear sensitivity coefficient and the Fourier coefficient, Bwl’ which is used to construct the w, partial variance. The Fourier method of sensitivity analysis is imple- mented in a program given in Appendix 8. This program is not restricted to models involving ordinary differential equations. Rather any type of mathematical model may be inserted in the program through use of the subroutine MODEL. Hence the global parameter sensitivities of any type of model are easily obtained. III. HADAMARD-ORDERED WALSH FUNCTIONS Cukier gt a1, (1978) proposed that alternate orthogonal expansions also may be possible, leading to other types of sensitivity analysis. This approach was investigated and an alternate expansion was found. It was discovered that a Walsh function expansion (walsh 1923) could be used for sensitivity analysis. If we consider a model whose parameters take on a finite set of values, then we may use welsh Sensitivity Analysis. Here the model output function has a finite set of discrete responses dependent on the parameters. For these discrete model functions the use of walsh functions eliminates the approximations inherent in the Fourier method. With continuous model output functions (those functions whose parameters vary continuously over a domain) the Walsh method is closely related to Taylor Series Sensitivity Analysis. In this chapter we develop the theory of the Walsh method. 
A Walsh function (Ahmed and Rao 1975) may be defined as a function of two arguments, a time variable and a sequency variable, sequency being similar to frequency in Fourier analysis. Walsh functions form a complete orthonormal set of step functions. Here the Hadamard definition (Ahmed and Rao 1975) is used to represent Walsh functions:

    WALH(w,t) = (−1)^(Σ_{i=1}^{p} w_i t_i)    (3.1)

where t = (t_1, t_2, ..., t_p) is the binary representation of t and (w_1, w_2, ..., w_p) is the binary representation of w. Walsh functions are defined over the time range [0,1), i.e., the time variable is a real number less than 1, while the sequency variable is an integer less than 2^p. This means that the binary point for the time variable is placed to the left of t_1 and the binary point for the sequency variable is placed to the right of w_p. Note that the indexing chosen here labels the most significant digit of the variable first. With this definition of Walsh functions, the time variable is defined with respect to the sequency variable in order to cancel dimensions. It should be noted that the time referred to here is not "model time". Model time is defined as the independent variable in a model, such as the model of a chemical reaction which evolves in time. To clarify the evaluation of a Walsh function, let us consider WALH(2, 0.75). Only two binary digits are necessary to represent these arguments, therefore let p = 2. Writing out the binary expansions of the arguments we obtain

    w = 2_10 = (10)_2    (3.2)
    t = 0.75_10 = (0.11)_2
If we multiply one Walsh function by another Walsh function from the same group, we obtain a third member of the group. This means that multiplication of a p-digit Walsh function with a k-digit Walsh function is not defined. This group property can be illustrated by examining a Walsh function multiplication table for p = 2. Here we have a group of Walsh functions with only four members: WALH(0,t), WALH(1,t), WALH(2,t), and WALH(3,t). By using Equation 3.1 we can generate Table 3.1.

Table 3.1. Multiplication Table for the p = 2 Group of Walsh Functions.

    *            WALH(0,t)   WALH(1,t)   WALH(2,t)   WALH(3,t)
    WALH(0,t)    WALH(0,t)   WALH(1,t)   WALH(2,t)   WALH(3,t)
    WALH(1,t)    WALH(1,t)   WALH(0,t)   WALH(3,t)   WALH(2,t)
    WALH(2,t)    WALH(2,t)   WALH(3,t)   WALH(0,t)   WALH(1,t)
    WALH(3,t)    WALH(3,t)   WALH(2,t)   WALH(1,t)   WALH(0,t)

Figure 3.1 gives the plots of these four Walsh functions. They are piecewise continuous functions. These functions may be integrated, but they may not be differentiated without the introduction of distributions which include delta functions.

[Figure 3.1. The four Walsh functions defined by two binary digits.]

Walsh functions have some useful properties. They are invariant to an exchange of arguments, i.e.,

$$\mathrm{WALH}(w,t) = \mathrm{WALH}(t,w) \qquad (3.4)$$

Proof:

$$\mathrm{WALH}(w,t) = (-1)^{\sum_i w_i t_i} = (-1)^{\sum_i t_i w_i} = \mathrm{WALH}(t,w) \qquad (3.5)$$

As has been claimed, the Walsh functions are orthogonal (Walsh, 1923):

$$\sum_t \mathrm{WALH}(n,t)\,\mathrm{WALH}(w,t) = 2^p\,\delta_{nw} \qquad (3.6)$$

Proof:

$$\sum_{t_1=0}^{1}\cdots\sum_{t_p=0}^{1} (-1)^{\sum_i n_i t_i}(-1)^{\sum_i w_i t_i}
= \sum_{t_1=0}^{1}\cdots\sum_{t_p=0}^{1} (-1)^{\sum_i (n_i+w_i)t_i}
= \prod_{i=1}^{p}\left(1 + (-1)^{n_i+w_i}\right)
= \prod_{i=1}^{p} 2\,\delta_{n_i w_i} = 2^p\,\delta_{nw} \qquad (3.7)$$

If we divide each Walsh function by $2^{p/2}$ we obtain an orthonormal set of functions. Completeness of Walsh functions was shown by Walsh (1923).
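Both the closure property of Table 3.1 and the orthogonality relation (3.6) can be verified exhaustively for the p = 2 group. A minimal sketch (Python; the names and sample points are illustrative):

```python
# The p = 2 group is sampled at the four points t in {0, .25, .5, .75}.
ts = [0.0, 0.25, 0.5, 0.75]

def walh(w, t):
    # Specialized to p = 2: digits w -> (w1, w2), t -> (t1, t2).
    w1, w2 = (w >> 1) & 1, w & 1
    t1, t2 = int(t * 2) % 2, int(t * 4) % 2
    return -1 if (w1 * t1 + w2 * t2) % 2 else 1

# Closure (Table 3.1): the product index is binary addition without
# carry, i.e. bitwise XOR.
for n in range(4):
    for w in range(4):
        assert all(walh(n, t) * walh(w, t) == walh(n ^ w, t) for t in ts)

# Orthogonality (Eq. 3.6): the sum over the 2^p samples is 2^p delta_nw.
for n in range(4):
    for w in range(4):
        s = sum(walh(n, t) * walh(w, t) for t in ts)
        assert s == (4 if n == w else 0)
print("closure and orthogonality hold for the p = 2 group")
```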
Utilizing the orthonormality and completeness properties of Walsh functions, we know that any continuous function f(t) can be expanded into an infinite Walsh series with 0 ≤ t < 1. This exact infinite expansion of an arbitrary function may be approximated by a finite expansion:

$$f(t) = \sum_{n=0}^{N-1} C_n\,\mathrm{WALH}(n,t) \qquad (3.8)$$

Here we restrict t to a discrete set of N points $\{t_i\}$ where

$$t_i = N\,t \qquad (3.9)$$

Note that $0 \le t_i \le N-1$ and $t_i$ is an integer. The error incurred by this approximation is

$$\text{error} = \sum_{n=N}^{\infty} C_n\,\mathrm{WALH}(n,t) \qquad (3.10)$$

Since the coefficients of a Walsh expansion decrease in magnitude as 1/n, as shown by Fine (1955), we can approximate the major structure of any arbitrary function by using a finite expansion of Walsh functions.

Walsh coefficients are a linear transformation of the function sampled at each $t_i$. We can compute the coefficients by exploiting the orthogonality of Walsh functions. By writing the function in its finite Walsh expansion, Equation 3.8, then multiplying both sides by WALH(m,t) and summing over t, we project out the $C_m$ coefficient of the Walsh expansion:

$$\frac{1}{2^p}\sum_t f(t)\,\mathrm{WALH}(m,t)
= \sum_{n=0}^{N-1} C_n \frac{1}{2^p}\sum_t \mathrm{WALH}(n,t)\,\mathrm{WALH}(m,t)
= \sum_{n=0}^{N-1} C_n\,\delta_{nm} = C_m \qquad (3.11)$$

This procedure for calculating the N coefficients of the Walsh expansion from the N sampled values of the function requires $N^2$ multiplications. Using matrix factorization techniques, an algorithm may be developed in which this transformation requires only $N\log N$ operations. This is known as the Fast Walsh Transform (Ahmed and Rao 1975, Andrews and Caspari 1970) (see Appendix 3).

To use Walsh functions in sensitivity analysis we must be able to calculate multidimensional moments, which are averages of the output function over all its parameters. To calculate these moments we must express the function in terms of its parameters. If we relate each parameter, $u_i$, to a generating function in $t_i$,

$$u_i = g_i(t_i) \qquad (3.12)$$

we may expand each parameter dimension in a Walsh series.
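The factorized transform mentioned above is the standard radix-2 butterfly. The sketch below (Python; an illustration of the idea, not the Appendix 3 routine itself) computes the coefficients of Equation 3.11 and checks them against the direct $N^2$ projection:

```python
def fast_walsh_coeffs(f):
    """Walsh coefficients C_m = (1/N) sum_t f(t) WALH(m,t) for N = 2^p
    samples, computed with N log2 N butterfly additions (a sketch of
    the Fast Walsh Transform, in Hadamard order)."""
    a, n, h = list(f), len(f), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return [c / n for c in a]

def walh_int(w, t):
    # WALH on an integer sample index: the exponent counts the bit
    # positions where both w and t have a 1.
    return -1 if bin(w & t).count("1") % 2 else 1

f = [3.0, -1.0, 4.0, 1.5, -5.0, 9.0, 2.0, 6.0]        # p = 3 samples
direct = [sum(f[t] * walh_int(m, t) for t in range(8)) / 8
          for m in range(8)]                           # N^2 projection
assert all(abs(a - b) < 1e-9 for a, b in zip(fast_walsh_coeffs(f), direct))
print("fast and direct coefficients agree")
```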
Thereby we obtain a multi-dimensional Walsh series expansion. For example, let $f(u_1,u_2)$ be an output function with two parameters, $u_1$ and $u_2$. If we relate the parameter $u_1$ to a generating function, $g_1(t_1)$, we may write f as

$$f(u_1,u_2) = f(g_1(t_1),u_2) = f(t_1,u_2) \qquad (3.13)$$

Note again that the same symbol for the function, f, is retained. This will be done throughout this chapter when the meaning is obvious. This function may be formally expanded in a Walsh series in $t_1$ with $u_2$ treated as a parametric constant:

$$f(t_1,u_2) = \sum_j C_j(u_2)\,\mathrm{WALH}(j,t_1) \qquad (3.14)$$

Since the coefficients are now functions of $u_2$, we may relate $u_2$ to a generating function in $t_2$ and expand the coefficients in a Walsh series in $t_2$:

$$f(t_1,u_2) = f(t_1,g_2(t_2)) = \sum_j \sum_k C_{jk}\,\mathrm{WALH}(j,t_1)\,\mathrm{WALH}(k,t_2) \qquad (3.15)$$

This yields a two-dimensional Walsh series expansion for $f(u_1,u_2)$.

The key to the utility of Walsh functions in sensitivity analysis is the multiplication identity. For an expansion to be efficient, products of the orthogonal basis set must reduce easily. In the case of Walsh functions, the product of two Walsh functions in the same group is a third Walsh function, given by

$$\mathrm{WALH}(n,t)\,\mathrm{WALH}(w,t) = \mathrm{WALH}(n \oplus w,\,t) \qquad (3.16)$$

where $\oplus$ is binary addition without carry. That is, $0 \oplus 0 = 0$, $0 \oplus 1 = 1$, $1 \oplus 0 = 1$, and $1 \oplus 1 = 0$. Proof:

$$\mathrm{WALH}(n,t)\,\mathrm{WALH}(w,t) = (-1)^{\sum_i n_i t_i}(-1)^{\sum_i w_i t_i}
= (-1)^{\sum_i (n_i + w_i)t_i} = \mathrm{WALH}(n \oplus w,\,t) \qquad (3.17)$$

We can apply this multiplication property to reduce the multidimensional integral for the multidimensional Walsh series coefficients to a one-dimensional integral. This will allow us to connect the multidimensional parameter space to a single-dimensional line. In summary, if $f(\mathbf{u})$ is a multidimensional function in $\mathbf{u}$ with $\mathbf{u} = (u_1,u_2,\ldots,u_p)$, we may expand the function in a finite multidimensional Walsh series. Each dimension of $f(\mathbf{u})$ will be related to a dimension $t_i$ which will be expanded in a single-dimensional Walsh series.
We may write a generalization of Equation 3.15 as a Cartesian product:

$$f(\mathbf{u}) = f(\mathbf{t}) = \sum_{w_1=0}^{m_1-1}\sum_{w_2=0}^{m_2-1}\cdots\sum_{w_p=0}^{m_p-1}
C_{w_1 w_2 \cdots w_p} \prod_{i=1}^{p} \mathrm{WALH}(w_i,t_i) \qquad (3.18)$$

The coefficients may be expanded as finite sums:

$$C_{w_1 w_2 \cdots w_p} = \frac{1}{N}\sum_{t_1=0}^{m_1-1}\cdots\sum_{t_p=0}^{m_p-1}
f(\mathbf{t}) \prod_{i=1}^{p} \mathrm{WALH}(w_i,t_i), \qquad N = m_1 m_2 \cdots m_p \qquad (3.19)$$

Let us specialize to the case where each parameter has only two distinct values. In this case each Walsh sequency expansion of a parameter will consist of only two sequencies. Since there are only two distinct points in each parameter dimension, only two samples will be taken from each dimension, i.e., $m_i = 2$. In this case we will need only one binary digit to represent a particular parameter dimension (Kunz 1979). In the multidimensional parameter space, the function is then defined only on the binary hypercube. Each separate Walsh series requires only a one-digit representation. This one-digit Walsh function is written

$$\mathrm{WALH}(k_i,t_i) = (-1)^{k_i t_i} \qquad (3.20)$$

When the coefficient equation is rewritten with one-digit Walsh functions the result is

$$C_{w_1 w_2 \cdots w_p} = \frac{1}{2^p}\sum_{t_1=0}^{1}\sum_{t_2=0}^{1}\cdots\sum_{t_p=0}^{1}
f(\mathbf{t}) \prod_{i=1}^{p} (-1)^{w_i t_i} \qquad (3.21)$$

By applying lexicographic ordering (Kunz 1979) to the $w_i$ and $t_i$ we may define W and T as the p-digit binary numbers

$$W = (w_1 w_2 \cdots w_p)_2, \qquad T = (t_1 t_2 \cdots t_p)_2 \qquad (3.22)$$

Therefore, T and W range from 0 to $2^p - 1$ as the $t_i$ and $w_i$ take on their allowed values of 0 and 1. Upon substitution of W and T the coefficient equation becomes

$$C_W = \frac{1}{2^p}\sum_{T=0}^{N-1} f(T)\,(-1)^{\sum_{i=1}^{p} w_i t_i}, \qquad N = 2^p \qquad (3.23)$$

Note that the sum over T encounters all of the two-term sums in Equation 3.21. We may now associate the finite multidimensional expansion with a finite single-dimensional expansion:

$$f(T) = \sum_{W=0}^{N-1} C_W\,\mathrm{WALH}(W,T) \qquad (3.24)$$

From Equation 3.24 we may derive the required relationship for varying the parameters. Each parameter is associated with a separate dimension, and each dimension may be expanded in a Walsh sequency expansion as in Equation 3.18.
Since these sequencies belong to different parameter dimensions, we have the requirement that the binary sum of the sequencies be unique for any combination of sequencies:

$$\mathrm{WALH}(w_1,t_1)\,\mathrm{WALH}(w_2,t_2)\cdots\mathrm{WALH}(w_p,t_p)
= \mathrm{WALH}(w_1{\oplus}w_2{\oplus}\cdots{\oplus}w_p,\; t_1{\oplus}t_2{\oplus}\cdots{\oplus}t_p) = \mathrm{WALH}(W,T) \qquad (3.25)$$

This means that the binary sum of the $w_i$'s must never add to the same W-value for different combinations of $w_i$'s. Analogous to the Fourier method, this restriction is called "binary incommensurate". Note that this procedure involves the conversion of p one-digit Walsh functions into one p-digit Walsh function. Since, for exact analysis, we must sample from each dimension so that we never repeat the search curve, we must assign a unique sequency to each parameter where the sequencies are binary incommensurate. The simplest such set of sequencies is the set of powers of two:

$$2^0, 2^1, 2^2, \ldots \qquad (3.26)$$

In this way, for a model with p parameters, parameter $u_1$ would be associated with the least significant binary digit in a p-digit number, $u_2$ with the next most significant digit, and so on. Each parameter can take on two values given by the generating function $g_i(t_i)$:

$$g_i(t_i) = u_i = u_i^0 + A\,\mathrm{WALH}(2^{i-1}, t) \qquad (3.27)$$

where $u_i^0$ is the average value of $u_i$ and A determines the range. With this generating function, as $t_i$ takes on its values of 0 and 1, $u_i$ oscillates between $u_i^0 + A$ and $u_i^0 - A$ at the $2^{i-1}$ sequency.

By defining the parameters of a model to be functions of different-sequency Walsh functions, as in the Fourier method, we can expand the model output function in an infinite Walsh series. By truncating the expansion to a finite Walsh series we incur no error. This can be illustrated by a two-dimensional example. If we set $u_1 = \mathrm{WALH}(m,t)$ and $u_2 = \mathrm{WALH}(k,t)$, we may then write

$$f(u_1,u_2) = f(\mathrm{WALH}(m,t),\,\mathrm{WALH}(k,t)) \qquad (3.28)$$

By expanding Equation 3.28 in a two-dimensional Walsh series, we will obtain a finite set of terms.
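Under the power-of-two sequency assignment and the generating function (3.27), the $2^p$ sample points are exactly the corners of the binary hypercube. A sketch of the resulting design follows (Python; the bit convention, the helper name, and the nominal values and ranges are assumptions for illustration):

```python
def walsh_design(nominal, delta):
    """Parameter vectors of Eq. 3.27 for a two-level Walsh analysis.

    Parameter u_i is driven at sequency 2**(i-1): bit i-1 of the sample
    index T selects u_i^0 + A_i (bit 0) or u_i^0 - A_i (bit 1), so the
    2**p samples sweep the whole binary hypercube.
    """
    p = len(nominal)
    return [[nominal[i] + (delta[i] if (T >> i) & 1 == 0 else -delta[i])
             for i in range(p)]
            for T in range(2 ** p)]

# Two hypothetical parameters: u1 = 1000 +/- 100 on the low bit,
# u2 = -0.25 +/- 0.025 on the next bit -> four corner simulations.
for u in walsh_design([1000.0, -0.25], [100.0, 0.025]):
    print(u)
```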
Using the multiplication identity we can see that there are only four possible terms: a 0-sequency term, a k-sequency term, an m-sequency term, and a (k ⊕ m)-sequency term. No other terms are possible, regardless of the nature of the function f. In the calculation of the Walsh coefficients with the Fast Walsh Transform, we use $2^p$ equally spaced samples. This enables us to calculate $2^p$ different sequency coefficients, $C_0, C_1, \ldots, C_{N-1}$. Therefore we will have, as a subset of these coefficients, all four of the required sequencies. Of the $2^p$ computed coefficients, only these four will be non-zero. In summary, for any Walsh-driven function, there will be no error incurred in approximating the function by using finite Walsh expansions.

From Equation 3.24 we can derive the total variance of the model output function (see Appendix 4). The variance is

$$\sigma_T^2 = \sum_{i=1}^{N-1} C_i^2 \qquad (3.29)$$

This is the same formula as in the Fourier method. In an analogous manner, the single-parameter variance is constructed by calculating the variance with respect to only one parameter, say the first, using the model output function which has been averaged over all the other parameters (see Appendix 5):

$$\sigma_1^2 = C_{w_1}^2 \qquad (3.30)$$

However, in the Walsh case we get only one term! There is no infinite series to truncate as there is in the Fourier method. In the Walsh expansion the reduced partial variance is given by

$$S_1 = \frac{\sigma_1^2}{\sigma_T^2} = \frac{C_{w_1}^2}{\sum_{i=1}^{N-1} C_i^2} \qquad (3.31)$$

which is exact. In a similar vein we can construct coupled partial variances. The coupled partial variances are built from the Walsh coefficients whose sequency is that of the desired single sequencies added together by binary addition without carry (see Appendix 6); dividing by the total variance gives the reduced coupled partial variances:

$$S_{ij} = \frac{C_{w_i \oplus w_j}^2}{\sigma_T^2} \qquad (3.32)$$

For the partial variances to be rigorously correct, i.e., to contain no error terms, we must sample the entire parameter space. This will only be true if we are using a discrete model whose parameters can take on only two values.
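Equations 3.29-3.32 translate directly into a few lines. A sketch (Python; the coefficient values below are invented for illustration):

```python
def partial_variances(coeffs):
    """Total and reduced partial variances from Walsh coefficients.

    Implements Eqs. 3.29-3.32: C_0 (the mean) is excluded, the total
    variance is the sum of the squared remaining coefficients, and each
    sequency W contributes the exact reduced partial variance
    S_W = C_W**2 / sigma_T**2.
    """
    total = sum(c * c for c in coeffs[1:])
    return total, {W: c * c / total for W, c in enumerate(coeffs) if W}

# p = 2 example: C_1 and C_2 are the single-parameter terms, C_3 the
# coupled (1 XOR 2 = 3) term.  Values are arbitrary.
total, S = partial_variances([5.0, 3.0, 4.0, 0.0])
print(total)              # 25.0
print(S[1], S[2], S[3])   # 0.36 0.64 0.0
```

Because the two-level expansion is exact, these partial variances sum to exactly one, with no truncated remainder as in the Fourier method.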
When Walsh Sensitivity Analysis is applied to a continuous model, an approximation is made: the influence on the output function of the range of parameter variation in a continuous model is approximated by using only two values of each parameter chosen from a continuous range of possible values. A good choice, when comparing the Walsh method with a continuous method, is the pair of extremes of the interval over which the parameter is varied in the continuous analysis.

To illustrate this we will examine a one-parameter model. First we write down the one-dimensional Walsh expansion in terms of the parameter, $u_1$, where $u_1 = \mathrm{WALH}(1,t)$:

$$f(u_1) = \sum_{w=0}^{1} C_w\,(-1)^{w t} \qquad (3.33)$$

From this expansion we can calculate two terms, the $C_0$ term and the $C_1$ term. The $C_0$ term is the sum of all the sampled function values:

$$C_0 = \tfrac{1}{2}\left(f(u_1{=}1) + f(u_1{=}{-}1)\right) \qquad (3.34)$$

This may be interpreted as the average value of the function, where we note that when t = 0, f(WALH(1,0)) = f(1), and when t = 1, f(WALH(1,1)) = f(-1). To calculate $C_1$ we subtract the two function values:

$$C_1 = \tfrac{1}{2}\left(f(1) - f(-1)\right) \qquad (3.35)$$

If we were to do a Walsh sensitivity analysis on a one-parameter model, the minimal sequency with which to vary that parameter is $w_1 = 1$, so the $C_1$ coefficient would be proportional to the "sensitivity" of the model to its parameter. In linear analysis, if we used a central difference formula for the first derivative of the function with respect to a parameter, we would get the same equation for the sensitivity of the parameter.

With a two-dimensional model the possible Walsh coefficients are $C_0, C_1, C_2, C_3$. From Equation 3.23 we can write out the Walsh transform for these coefficients as

$$\begin{pmatrix} C_0 \\ C_1 \\ C_2 \\ C_3 \end{pmatrix}
= \frac{1}{4}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}
\begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ f_3 \end{pmatrix} \qquad (3.36)$$

The sampled parameter sets can be plotted in the two-dimensional parameter space (Figure 2). This identifies the sampled function values in the parameter space.

Figure 2.
Plot of the sampling points in the multidimensional u-space.

The equation for $C_1$ may be rewritten as

$$C_1 = \frac{1}{2}\left(\frac{f_0 - f_1}{2} + \frac{f_2 - f_3}{2}\right) \qquad (3.37)$$

which is the average of the two central difference approximations to the first derivative with respect to the first parameter (see Figure 2). Similarly, for $C_2$,

$$C_2 = \frac{1}{2}\left(\frac{f_0 - f_2}{2} + \frac{f_1 - f_3}{2}\right) \qquad (3.38)$$

which is the average of the two central difference approximations of the first derivative with respect to the second parameter. Finally, the $C_3$ coefficient is exactly as expected, a central difference approximation to $\Delta^2 f/\Delta u_1 \Delta u_2$:

$$C_3 = \tfrac{1}{4}\left(f_0 - f_1 - f_2 + f_3\right) \qquad (3.39)$$

If we examine the general expression for the Walsh coefficient of the kth parameter, we note that for a p-dimensional system the coefficient is always the average of the central difference approximations to the derivative with respect to the kth dimension (without loss of generality we may set k = 1):

$$C_{10\cdots0} = \frac{1}{2^p}\sum_{t_1=0}^{1}\cdots\sum_{t_p=0}^{1} f(t_1 t_2 \cdots t_p)\,(-1)^{t_1}
= \frac{1}{2^{p-1}}\sum_{t_2=0}^{1}\cdots\sum_{t_p=0}^{1}\left\{\frac{f(0 t_2 \cdots t_p) - f(1 t_2 \cdots t_p)}{2}\right\}
= \left\langle\frac{\Delta f}{\Delta u_1}\right\rangle \qquad (3.40)$$

Similarly, for all coupled Walsh coefficients (the $(2^{i-1} + 2^{j-1})$-sequency terms) we get the approximation to the average of the mixed derivative (see Abramowitz and Stegun, p. 889):

$$C_{110\cdots0} = \frac{1}{2^p}\sum_{t_1=0}^{1}\cdots\sum_{t_p=0}^{1} f(t_1 \cdots t_p)\,(-1)^{t_1+t_2}
= \frac{1}{2^{p-2}}\sum_{t_3=0}^{1}\cdots\sum_{t_p=0}^{1}\frac{f(00 t_3 \cdots t_p) - f(01 t_3 \cdots t_p) - f(10 t_3 \cdots t_p) + f(11 t_3 \cdots t_p)}{4}
= \left\langle\frac{\Delta^2 f}{\Delta u_1 \Delta u_2}\right\rangle \qquad (3.41)$$

This implies that a Walsh function expansion of a continuous function is the first $2^p$ terms of a Taylor series using averaged derivatives:

$$f(\mathbf{u}) = f(\mathbf{t}) = \langle f\rangle + \sum_i \left\langle\frac{\Delta f}{\Delta u_i}\right\rangle(-1)^{t_i}
+ \sum_i \sum_{j>i} \left\langle\frac{\Delta^2 f}{\Delta u_i \Delta u_j}\right\rangle(-1)^{t_i + t_j} + \cdots \qquad (3.42)$$

[Figure 4.1. The linear sensitivity coefficients from the Linear Model.]

[Figure 4.2. The averaged value of the Linear Model from a Walsh Sensitivity Analysis.]
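The identification of $C_1$, $C_2$, and $C_3$ with averaged central differences (Equations 3.37-3.39) can be checked on a small test function. A sketch (Python; the function f below is invented for illustration, and the bit convention is assumed):

```python
# Test function with a known mixed derivative: f = u1*u2 + 2*u1, with
# each parameter sampled at +1 and -1 (assumed convention: u1 on the
# low bit of T, digit 0 -> +1 and digit 1 -> -1).
def f(u1, u2):
    return u1 * u2 + 2 * u1

f0, f1, f2, f3 = f(1, 1), f(-1, 1), f(1, -1), f(-1, -1)

C1 = 0.5 * ((f0 - f1) / 2 + (f2 - f3) / 2)   # Eq. 3.37, ~ <df/du1>
C2 = 0.5 * ((f0 - f2) / 2 + (f1 - f3) / 2)   # Eq. 3.38, ~ <df/du2>
C3 = 0.25 * (f0 - f1 - f2 + f3)              # Eq. 3.39, mixed term

# Analytically: df/du1 = u2 + 2 (averages to 2), df/du2 = u1 (averages
# to 0), and d2f/du1du2 = 1 -- the coefficients recover all three.
print(C1, C2, C3)   # 2.0 0.0 1.0
```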
[Figure 4.3. The standard deviation of the simulations sampled in the Walsh Analysis of the Linear Model.]

If the standard deviation is small, then the range of parameter space examined has little effect on the output function. Figure 4.4 is a plot of the Walsh expansion coefficients.

[Figure 4.4. The Walsh expansion coefficients from the Linear Model.]

These coefficients are related to the linear sensitivity coefficients shown in Figure 4.1. A Walsh expansion coefficient may be thought of as an averaged derivative of the model output function, as shown in Chapter Three. However, the equation depends on the particular transformation function used in the analysis. In Chapter Three the transformation function $u_\ell = \mathrm{WALH}(2^{\ell-1}, t_\ell)$ was used; in this case, by using the chain rule, it can be shown that $du_\ell = dt_\ell$. Similarly, the conversion of $u_\ell$ to $t_\ell$ must be accounted for when a different transformation function is used. In the linear model the transformation function used was the arithmetic transformation function $u_\ell = u_\ell^0 + A\,\mathrm{WALH}(2^{\ell-1}, t_\ell)$. Applying the chain rule to the equation for a Walsh expansion coefficient, Equation 3.40, results in

$$C_{2^{\ell-1}} = \left\langle \frac{\partial f(u_\ell)}{\partial u_\ell}\,\frac{\partial u_\ell}{\partial t_\ell} \right\rangle \qquad (4.6)$$

which in the case of the arithmetic Walsh transformation function yields
$$\frac{\partial u_\ell}{\partial t_\ell} = \frac{\partial}{\partial t_\ell}\left\{u_\ell^0 + A\,\mathrm{WALH}(2^{\ell-1}, t_\ell)\right\} = A \qquad (4.7)$$

and therefore

$$C_{2^{\ell-1}} = A\left\langle \frac{\partial f(u_\ell)}{\partial u_\ell} \right\rangle \qquad (4.8)$$

From this equation the relationship to the linear sensitivity coefficient is easily shown:

$$u_\ell^0 \left\langle \frac{\partial f(u_\ell)}{\partial u_\ell} \right\rangle = \left(\frac{u_\ell^0}{A}\right) C_{2^{\ell-1}} \qquad (4.9)$$

Therefore the Walsh expansion coefficients, using an arithmetic transformation function, are equal to linear sensitivity coefficients scaled by a factor which is the nominal value, $u_\ell^0$, divided by the range of parameter variation, A. In the linear model the scale factors for the parameters are $(m^0/\Delta m)$ and $(b^0/\Delta b)$, which are 1 and 0, respectively.

Alternatively, the reduced partial variances may be plotted (henceforth the reduced partial variances will be called partial variances, with the understanding that they have been divided by the total variance). These partial variances are shown in Figure 4.5. Using the notation developed in Chapter Two, the partial variance of the first parameter, m, is written $S_1$; similarly, the partial variance of the second parameter, b, is written $S_2$. The point at which the curves intersect, here at $t = \pm 1$, depends on the range of parameter variation and the nominal values chosen for the Walsh Sensitivity Analysis. Here both parameters were varied over [-10., 10.].

In the linear model the interpretation is simple. To estimate the b-parameter, measurements should be taken near t = 0, where $S_2$ is largest. Measurements far from t = 0 are highly sensitive to the value of m, as shown by $S_1$; measurements taken from this region would be best for accurate estimation of m.

Treating the linear model with the Fourier method gives similar results. The nominal values and parameter variations were the same as in the Walsh analysis. However, more simulations were required to estimate the Fourier coefficients. The frequency set used was the 6th-order accurate set, [3,5], which requires, at a minimum, 11 simulations (21 simulations were used).
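Equations 4.8-4.9 can be confirmed on the linear model itself. A sketch (Python; the nominal values $m^0 = b^0 = 0$ and ranges A = 10 are assumptions chosen to match the [-10, 10] variation quoted above, and the bit convention is illustrative):

```python
def linear_walsh_coeffs(t, m0=0.0, b0=0.0, Am=10.0, Ab=10.0):
    """Walsh coefficients of f = m*t + b from the four corner runs.

    m rides the low bit and b the high bit of the sample index; each
    parameter takes its nominal value plus or minus its range, as in
    the generating function of Eq. 3.27.
    """
    f = [(m0 + sm * Am) * t + (b0 + sb * Ab)
         for sb in (+1, -1) for sm in (+1, -1)]
    C1 = 0.25 * (f[0] - f[1] + f[2] - f[3])   # = Am * df/dm = Am * t
    C2 = 0.25 * (f[0] + f[1] - f[2] - f[3])   # = Ab * df/db = Ab
    return C1, C2

# Eq. 4.8: each coefficient is the range times the (here constant)
# derivative, so |C1| = |C2| exactly at t = +/-1, where the partial
# variance curves of Figure 4.5 cross.
for t in (-10.0, -1.0, 0.0, 1.0, 10.0):
    C1, C2 = linear_walsh_coeffs(t)
    assert C1 == 10.0 * t and C2 == 10.0
```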
[Figure 4.5. The Walsh partial variances from the Linear Model.]

The expansion coefficients versus t are plotted in Figure 4.6. These expansion coefficients are proportional to the first derivatives in the Taylor series. The partial variances shown in Figure 4.7 are nearly identical to those derived from the Walsh analysis, Figure 4.5. The minor differences are a result of the approximate nature of the Fourier method, since a Fourier expansion of an angled line requires an infinite number of terms. However, the partial variances shown capture more than 99% of the total variance. The standard deviation curve given in Figure 4.8 also shows nearly the same range of variation in the output function as was examined by the Walsh method. However, the standard deviation curve weights simulations far from the average simulation more than those close to the average, causing the Fourier standard deviation curve to be smaller in magnitude than its Walsh counterpart. This happens because the parameter vectors are chosen throughout the parameter variation interval in the Fourier analysis, while the parameter vectors are chosen at the extremes in the Walsh method.

Now let us examine a simple nonlinear model. Here nonlinear means that the Taylor series expansion of the output function with respect to the parameters contains terms with second or higher derivatives of the output function. Perhaps the most commonly used nonlinear

[Figure 4.6. The Fourier expansion coefficients from the Linear Model.]
[Figure 4.7. The Fourier partial variances from the Linear Model.]

[Figure 4.8. The standard deviation of the sampled simulations from the Linear Model using Fourier Analysis.]

model in chemistry is the single exponential:

$$f = k_2 e^{k_1 t} \qquad (4.10)$$

This function can be expanded in its Taylor series as

$$f(k_1,k_2,t) = f(k_1^0,k_2^0,t) + e^{k_1^0 t}(k_2 - k_2^0) + k_2^0\,t\,e^{k_1^0 t}(k_1 - k_1^0)
+ \tfrac{1}{2}\,k_2^0\,t^2 e^{k_1^0 t}(k_1 - k_1^0)^2 + \cdots \qquad (4.11)$$

Although the expansion continues for an infinite number of terms, a linear sensitivity analysis would examine only the first derivative terms:

$$\frac{\partial f}{\partial k_2} = e^{k_1 t}, \qquad \frac{\partial f}{\partial k_1} = k_2\,t\,e^{k_1 t} \qquad (4.12)$$

These sensitivity coefficients are plotted in Figure 4.9 using $k_1 = -0.25$ (seconds)$^{-1}$ and $k_2 = 1000.0$ as the nominal values. From this sensitivity analysis we can say that the best measurements for $k_2$ are at small t, and the best measurements for $k_1$ are at t = 4 seconds, the maximum of the curve. However, if higher-order terms are considered, note that they may be large and could affect the value of the output function.

To use Walsh sensitivity analysis on this model a range of parameter variation is needed. First, let us examine "local behavior", the behavior of the model when the parameters are varied only slightly. In this case, for small variations in the parameters, the Walsh coefficients should be equivalent to the results of classical sensitivity analysis. But for large variations in the values of the parameters the Walsh method will give different results as the higher derivatives become significant. Again a parameter set must be chosen.
Figure 4.10 shows the plot of the averaged value for the four simulations of the exponential model with $k_2 = 1000 \pm 100$ and $k_1 = -0.25 \pm 0.025$ (seconds)$^{-1}$, i.e., 10% variation. Since there are two parameters in this model, the curve in Figure 4.10 is the average of four different simulations, where each simulation has a unique combination of parameters. The expansion coefficients are plotted in Figure 4.11.

[Figure 4.9. Linear sensitivity coefficients from the Exponential Model.]

[Figure 4.10. The averaged value of the Exponential Model using Walsh Sensitivity Analysis with 10% variation in the parameters.]

[Figure 4.11. Walsh expansion coefficients from the Exponential Model (10% variation).]

The $C_1$ coefficient is that Walsh expansion coefficient which, in the case of a continuous model, is an averaged finite-difference measure of the first derivative of the output function with respect to the first parameter, $k_1$.
Similarly, the $C_2$ coefficient is a finite-difference approximation to the first derivative of the output function with respect to the second parameter, $k_2$. Comparing Figure 4.11 with Figure 4.9 we see that they are identical curves to within a constant scaling factor.

To display the sensitivities of the parameters it is more convenient to examine the partial variances shown in Figure 4.12. As before, these plots also show the most sensitivity to the second parameter at short times, and to the first parameter at long times. This figure also shows that there is very little coupling in the sensitivity between the two parameters. This can be observed by noting that the sum of the two partial variances ($S_1 + S_2$) is nearly 1.0, which means that almost all of the variance in the output function is assigned to $S_1$ or $S_2$.

The standard deviation curve for this analysis is given in Figure 4.13. This curve, which looks like an exponential decay, reflects the decay of the output function. It is tempting to claim that since the standard deviation is only 5% of its maximum at 20 seconds, statements about the sensitivity of parameters at these long times are meaningless. However, upon examining

[Figure 4.12. Walsh partial variances from the Exponential Model (10% variation).]

[Figure 4.13. The standard deviation from the Walsh Analysis of the Exponential Model (10% variation).]
the dimensionless plot in Figure 4.14 we see that the relative deviation, which is the standard deviation divided by the average value, actually increases in time. This means that as the magnitude of the output function decreases, the relative variation grows. Therefore, the sensitivities of the parameters at long times can be significant if the relative deviation, rather than the absolute deviation, is nearly constant.

One advantage that the Walsh method has over linear analysis is that it explicitly uses a range of variation for the parameters. If this range of variation is increased (from 10% to 60%) and the mathematical model reanalyzed, it can be seen from Figure 4.15 that slightly different behavior results. In Figure 4.15 the average value does not decay away as fast as in the earlier analysis. Figure 4.16 shows that the coefficient curves have shifted the maximum sensitivity of the decay constant, $k_1$, to longer times, 5 seconds, reflecting the effect of the nonlinear behavior of the model. The sensitivity of the pre-exponential parameter, $k_2$, also decays away more slowly. The partial variances in Figure 4.17 also show the effect of a larger range of variation by shifting the crossover point from 4 seconds to 6 seconds. Note that the nonlinear effect of the model is to delay the sensitivity to $k_1$ to longer times. However, the standard deviation curve, Figure 4.18, and the relative deviation curve, Figure 4.19,

[Figure 4.14. The relative deviation from the Walsh Analysis of the Exponential Model (10% variation).]

[Figure 4.15. The averaged value from the Walsh analysis of the Exponential Model with the parameters varied by 60%.]
[Figure 4.16. Walsh expansion coefficients from the Exponential Model (60% variation).]

[Figure 4.17. Walsh partial variances from the Exponential Model (60% variation).]

[Figure 4.18. The Walsh standard deviation from the Exponential Model (60% variation).]

[Figure 4.19. The Walsh relative deviation from the Exponential Model (60% variation).]
Choosing an even larger range of variation for the parameters of the model (k2 = 1000 i 1000, k = 00,25 i l 0.25 (seconds)7¥) results in the curves in Figures “.20- “.2“. In this analysis the average value does not even‘ decay away to zero! This is not typical behavior as shown in the previous two analyses. This behavior is caused by the particular sets of rate constants used in this analysis. At 2.5 seconds two simulations have reached their final values, and at 12 seconds the other two simu- lations have reached completion. This is the danger en- countered when the analysis uses only the extremes of the parameter variation intervals. If the intervals are large enough the behavior of the model at the extremes of the parameter intervals may be completely different than its behavior closer to the nominal value. With the Walsh method we can examine the onset of nonlinear behavior by expanding the range of analysis from the nominal value. This is important, especially for models which are numerically solved so that the degree of nonlinearity in the model solution is unknown. When the analysis was repeated using the Fourier method with the same set of nominal parameters and over the same small range of variation, k = -0.25 t 0.025 (seconds)-l, k2 = 1000 t 100, the same results as were — __ __ 8“ .noHpoHno> noposopom ROOH anz Hoco: HoHpnonomxm onp sonm osHo> 0omono>o noH03 one 0N.v onsmHm doom» ore... 0.0a 0.0? 0.0F 0.0 0.0 C . _ 0 — 0 _ 0 O 00' w W 00H 000 1 m 000 1 .1 con 1 w 000 000. 1 1 00.. A U r = 000 . 111 n h v 0 c n .1 n .0 n n m. con v.1 U H 1....» 000 u m. 000 J10: 000 1 .m. 005 4 01 000 1 m. 000 0 H COG 1U ...1 com 0000 1 . H . 0 H. 000. 0.0a 04.: 0.0.. 0.0 0.0 002005 0020.00.08 00000.. «:0002109. “100000 0o0oo00< 00.0cocoqu 85 .AnoHpoHuo> moo—v Hocoz HoHunononxm onu 50pm ounoHonmooo :onnomxo noHoz Hm.¢ onsmHm «oomy oFE. 0.0m 0.0H 0.09 0.0 0.0 O f — 0 _ 0 _ 0 P O .000 1 m. 09 000 1 m. 000 U n H... 000 11. n. 
obtained for the average value of the output function as with the linear and Walsh techniques (Figure 4.25). The expansion coefficients, Figure 4.26, have different magnitudes but the behavior is the same. The maximum of the C1 coefficient occurs at the same time point for both the Walsh and Fourier methods.

The Fourier partial variances, Figure 4.27, are also identical to those of the small-variation Walsh analysis. There is one slight difference in the two partial variances at very early times.
This is caused by the slightly different parameter ranges used. The Fourier method used a log-uniform transformation function which varied the parameters over [-0.221, -0.227] and [1395, 905]. However, within one second, the Walsh partial variances and the Fourier partial variances, both normalized by their respective total variances, which are different (compare Figure 4.13 with Figure 4.28), reach identical values. Consequently the Fourier partial variances have the same interpretation as did the previous Walsh partial variances.

For comparison purposes, the relative deviation curve for the small-variation Fourier analysis is plotted in Figure 4.29. Note that it is always smaller than the corresponding Walsh curve, Figure 3.14. This reflects both the slightly restricted range of parameter variation and the increased number of simulations used in the Fourier method.

[Figure 4.25: The Fourier averaged value from the Exponential Model (10% variation).]
[Figure 4.26: Fourier expansion coefficients from the Exponential Model (10% variation).]
[Figure 4.27: Fourier partial variances from the Exponential Model (10% variation).]
[Figure 4.28: The Fourier standard deviation from the Exponential Model (10% variation).]
[Figure 4.29: The Fourier relative deviation from the Exponential Model (10% variation).]

Figures 4.30-4.34 give the results of Fourier analysis of the exponential model when a large range of variation of the parameters is used (k1 = -0.25 ± 0.25 (seconds)^-1, k2 = 1000 ± 1000). This analysis reveals the nonlinear aspects of the model. Figure 4.30 shows a slower averaged decay of the output function than for the small range case. The expansion coefficients for the case of large variations, Figure 4.31, have nearly the same behavior as those obtained with the small variations (Figure 4.26). However, the maximum value of the C1 coefficient has shifted to longer times, similar to the behavior of the Walsh C1 coefficient shown in Figure 4.16, although the shift is not as great. The large variation C2 coefficient doesn't decay away as fast as the small variation C2 coefficient does.

Even more striking are the partial variances plotted in Figure 4.32. The partial variance of k1, S1, reaches a maximum at about 13 seconds and then slowly decays away. The maximum is important since it selects a time region which is optimal for the measurement of that parameter. Also over this larger range of parameter space there is significant coupling between the sensitivities of the two parameters. This is shown in Figure 4.32 by the coupled partial variance S1,2.

Coupled partial variances indicate the degree of linear dependence between pairs of parameters. When a coupled partial variance is large it is difficult to separate the effect of one parameter from that of the other.

[Figure 4.30: The Fourier averaged value from the Exponential Model (100% variation).]
[Figure 4.31: Fourier expansion coefficients from the Exponential Model (100% variation).]
[Figure 4.32: Fourier partial variances from the Exponential Model (100% variation).]
[Figure 4.33: The Fourier standard deviation from the Exponential Model (100% variation).]
[Figure 4.34: The Fourier relative deviation from the Exponential Model (100% variation).]
Note that because the partial variances and coupled partial variances are relative measures of sensitivity, they must be used in conjunction with the total variance to provide an understanding of the sensitivity. In particular, if the total variance is very small, there is little point in carefully examining its components, since they are just a partitioning of this very small total variance into the individual contributions.

In examining the exponential model it can be seen that the Walsh method is equivalent to the linear analysis for small variations in the parameters. Its advantage over linear analysis is that as the range of parameter variation increases it also picks up the nonlinear effects in the model. The Fourier method is also similar to linear analysis in the limit of small variations of the parameters. However, for large variations in the parameters, since it samples the whole of parameter space, it gives correct results while the Walsh and linear methods fail.

The Fourier method requires more simulations to achieve its results than does the Walsh method for models with a small number of parameters, i.e., fewer than seven. The number of simulations required in Fourier analysis is heavily dependent on the order of accuracy of the frequency set. For a 6th-order accurate set for 10 parameters the Fourier method requires, at a minimum, 2843 simulations, whereas a 4th-order set requires only 411 simulations. It should be noted that the Walsh method is exact for discrete models and for 10 parameters requires only 1024 simulations.

By applying the Fourier method to the kinetics of simple chemical reactions we can further improve our understanding of the interpretation of partial variances. One of the simplest reaction schemes in chemical kinetics is the unimolecular first-order decay of species A to species B,

    A ----> B                                (4.13)

However, the mathematical model for this reaction is the exponential model which we have already examined.
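The Fourier (FAST) partial-variance computation discussed above can be sketched as follows. This is a minimal illustration rather than the dissertation's program: it uses a linear rather than log-uniform parameter transformation, a hand-picked frequency pair rather than a published interference-free set, and it omits the coupled (sum- and difference-frequency) partial variances.

```python
import math

def fast_partial_variances(model, nominals, deltas, freqs, order=4):
    # Minimal FAST sketch.  Each parameter is varied along the search curve
    #   x_i(s) = x_i0 * (1 + d_i * sin(w_i * s)),
    # the model is evaluated at equally spaced s in [-pi, pi), and the
    # Fourier power at the harmonics of w_i is attributed to parameter i.
    wmax = max(freqs)
    n = 2 * order * wmax + 1          # minimum sample count for this order
    svals = [-math.pi + 2.0 * math.pi * j / n for j in range(n)]
    ys = [model([x0 * (1.0 + d * math.sin(w * s))
                 for x0, d, w in zip(nominals, deltas, freqs)])
          for s in svals]
    mean = sum(ys) / n

    def power(w):
        # Squared Fourier amplitude of the output at integer frequency w.
        a = 2.0 / n * sum((y - mean) * math.cos(w * s) for s, y in zip(svals, ys))
        b = 2.0 / n * sum((y - mean) * math.sin(w * s) for s, y in zip(svals, ys))
        return a * a + b * b

    total = sum(power(w) for w in range(1, order * wmax + 1))
    # Partial variance of parameter i: power at w_i and its harmonics,
    # normalized by the total variance.
    return [sum(power(p * w) for p in range(1, order + 1)) / total for w in freqs]

# Linear test function y = 2*x1 + x2 with equal 10% variations: the squared
# linear sensitivities predict S1 = 0.8 and S2 = 0.2.
s1, s2 = fast_partial_variances(lambda x: 2.0 * x[0] + x[1],
                                [1.0, 1.0], [0.1, 0.1], [11, 13])
```

For a linear model the partial variances reduce exactly to the normalized squares of the linear sensitivity coefficients, which is the sense in which the Fourier method matches linear analysis for small variations.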
A slightly more complicated model reaction has two coupled first-order reactions, which may be written

        k1      k2
    A ----> B ----> C                        (4.14)

Choosing k1 = 0.1 ± 0.01 (seconds)^-1 and k2 = 0.01 ± 0.001 (seconds)^-1 with A = 10000, B = C = 0 we are able to simulate a reaction with a 'bottleneck' step (B → C), since k1 >> k2. Figure 4.35 displays the averaged concentrations for this reaction.

[Figure 4.35: The averaged concentrations from the Unimolecular Model.]

Application of linear sensitivity analysis to this model to examine the sensitivities of the B-concentration would result in sensitivity coefficients similar to the expansion coefficients shown in Figure 4.36. Since the curves are not normalized they are difficult to interpret. One might be tempted to say that since C1 is at a maximum at 20 seconds, measurements in this time region are optimal for the determination of k1. Similarly C2 has its most effect on the concentration of B at 90 seconds. Therefore measurements of B near 90 seconds would pin down the k2 rate constant.

If we examine the partial variances for the B-concentration we clearly get different results. From Figure 4.37 we see that measurements of the B-concentration before 10 seconds have elapsed and after 45 seconds will give accurate estimates of k1 and k2 respectively.

The sensitivity of the C-concentration in linear analysis gives curves shown in Figure 4.38.
Here we see that, since the k2 step is a bottleneck, we will have a difficult time measuring k1 because the effect on C from k2 is so large. Only at short times are the sensitivities of k1 significant.

Figure 4.39 is even more revealing. This plot of the partial variances of the C-concentration clearly demonstrates that k2 is the most important parameter in the model. It also shows that k1 contributes to the C-concentration only at short times. Therefore to estimate k1 from measurements of C we should take the measurements at very short times. To estimate k2, measurements after 50 seconds are adequate.

[Figure 4.36: Linear sensitivity coefficients for [B] in the Unimolecular Model.]
[Figure 4.37: Fourier partial variances for [B] in the Unimolecular Model.]
[Figure 4.38: Linear sensitivity coefficients for [C] in the Unimolecular Model.]
[Figure 4.39: Fourier partial variances for [C] in the Unimolecular Model.]

Figures 4.40 and 4.41 show the standard deviation and relative deviation curves, respectively, of the C-concentration. From the relative deviation curve we see that we are varying the C-concentration only slightly. Therefore the Fourier expansion coefficients are equivalent to the linear sensitivity coefficients.

The basic difference in the ease of interpretation of the partial variances over the expansion coefficients is that the partial variances explicitly account for the range of parameter variation by being normalized by the total variance. The linear sensitivity coefficients, or their equivalent, the expansion coefficients, do not account for the amount of variability introduced by varying the parameters. This is a great weakness in linear analysis.

A common feature in chemical kinetics models is the occurrence of a competing reaction. This is a reaction in which two steps compete for the same reactant. Which step predominates is dependent on the rate constants for the two steps. A simple scheme for this problem is written

        k1      k2
    A ----> B ----> C
            |
            | k3
            v
            D                                (4.15)

[Figure 4.40: The Fourier standard deviation for [C] in the Unimolecular Model.]
[Figure 4.41: The Fourier relative deviation for [C] in the Unimolecular Model.]
Here the final products are a mixture of C and D. The ratio of C to D depends on the two rate constants k2 and k3. Since this is a reaction problem often found in chemical kinetics models, a sensitivity analysis on this scheme was done in order to determine the nature of the partial variances. Fourier Sensitivity Analysis was applied to this scheme with k1 = 0.1 ± 0.01 (seconds)^-1, k2 = 0.01 ± 0.001 (seconds)^-1, and k3 = 0.003 ± 0.00003 (seconds)^-1 with a log-uniform transformation function to vary the rate constants uniformly in log-space. A 6th-order frequency set was used, [9, 15, 19], with 37 simulations.

[Figure 4.42: The averaged concentrations in the Branched Unimolecular Model.]

The averaged concentrations of the four chemical species are shown in Figure 4.42. From this figure it can be seen that the time range chosen covers virtually all of the reaction. Since the concentration of species B both grows and decays it is the most active. Inspection of this model shows that k2 and k3 cannot be separated by measuring only the concentration of B. In fact, only the sum k2 + k3 could be determined. As expected, Figure 4.43 shows that the sensitivity of B to the rate constants is very similar to that of the coupled first-order model, Figure 4.37, since B does not "know" which path it will take and both paths are treated as a single sink. The only difference between this model and the coupled reaction model, with respect to B, is that 10% of the sensitivity
to the 'sink' parameter k2 is given to k3. This small sensitivity to k3, which is just the ratio of the nominal rate constants, means that any measurements of B over the whole time range, even in conjunction with measurements of C or D, would be of little help in estimating k3, since the sensitivity of B to k2 is so large.

[Figure 4.43: Fourier partial variances for [B] in the Branched Unimolecular Model.]

The partial variances of the C concentration, Figure 4.44, are as expected for a competing reaction. Here k2 is the most important rate constant for C. This is, of course, expected as k2 controls the only path for the production of the C concentration. Note however that k3 'accumulates' sensitivity over the time course of the reaction. This is interpreted as showing how k3 controls, to a lesser extent than k2, the amount of C produced by the end of the reaction. Hence to estimate k3 in this model measurements of C near the end of the reaction are required. The partial variances of the D concentration, Figure 4.45, are subject to similar interpretations.
During the first 50 seconds of the reaction, the formation and initial decay of the B concentration, the partial variances of D exhibit the same structures as those of the coupled scheme. Here k3 controls the concentration of D, and the sensitivity to k1 quickly decays away as B is created faster than destroyed. In this case the sensitivity to the rate constant from the competing reaction, k2, accumulates faster and to a larger degree than k3 did in the partial variance of C. This is because k2 controls a more rapid step which removes B from the pool available to k3, thus preventing the conversion of a larger amount of B into D. It is then the nominal value of k2, which is greater than the nominal value of k3, which causes the sensitivity of k2 to accumulate faster over the same time range.

[Figure 4.44: Fourier partial variances for [C] in the Branched Unimolecular Model.]
[Figure 4.45: Fourier partial variances for [D] in the Branched Unimolecular Model.]
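The branched scheme (4.15) is simple enough to solve in closed form, which makes the conclusions above easy to check numerically; the rate constants below are the nominal values quoted in the text.

```python
import math

def branched(t, a0, k1, k2, k3):
    # Closed-form concentrations for A -k1-> B, with B -k2-> C and B -k3-> D.
    ks = k2 + k3                     # B decays only through the sum k2 + k3
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (ks - k1) * (math.exp(-k1 * t) - math.exp(-ks * t))
    left = a0 - a - b                # material that has already passed through B
    # C and D split the throughput in the fixed ratio k2 : k3.
    return a, b, (k2 / ks) * left, (k3 / ks) * left

a, b, c, d = branched(300.0, 10000.0, 0.1, 0.01, 0.003)
```

Because B depends on k2 and k3 only through their sum, swapping the two branch constants leaves B(t) unchanged, confirming that only k2 + k3 can be determined from B alone; the C/D ratio is exactly k2/k3 at all times.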
Note that this model is simple enough to permit conclusions of this type to be made by inspection. However, the verification of these conclusions by sensitivity analysis lends confidence to the treatment of more complex models.

In enzyme kinetics the most commonly used model is the Michaelis-Menten model. This model may be written

             k1        k3
    E + S <=====> ES ----> E + P             (4.16)
             k2

Often one starts with an excess of substrate, S, to enzyme, E, in order to make a steady-state assumption on the concentration of ES. This results in a simplified rate equation for the change in substrate in time, often called the 'velocity' of the reaction (Fersht 1977),

    -dS/dt = Vmax S/(Km + S),   S(t=0) = S0            (4.17)

where Vmax = k3 E0 and Km = (k2 + k3)/k1. This equation may be integrated to obtain progress curves of substrate versus time. If substrate is measured as a function of time, either the integrated equation or the expression for the velocity could be used to determine Km and Vmax.

To determine which equation, the integrated form or the differential form, would be better suited for estimating Km and Vmax, a Fourier sensitivity analysis was performed. Figure 4.46 shows the averaged values of substrate for this analysis with Km = 11000 ± 110, Vmax = 50 ± 5, and S0 = 11000 ± 110. The relative deviation curve given in Figure 4.47 shows that the substrate was only varied over a small range by using these parameters.

Figure 4.48 shows the partial variances of the substrate. The first parameter, Vmax, is the most important parameter over this entire time range, with the second parameter, Km, being much less important and the third parameter, S0, even less important. For the velocity equation we get different results, as shown in Figure 4.49. Here at long times the Km parameter is twice as important as it was in the integrated equation. Therefore to measure Km we should take velocity data at long times and fit to the velocity equation.
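The two formulations can be compared directly by integrating the velocity equation (4.17) and checking the result against the implicit integrated form Km ln(S0/S) + (S0 - S) = Vmax t. This is a sketch using the nominal values quoted above; the midpoint (RK2) integrator and step size are illustrative choices, not taken from the text.

```python
import math

def mm_substrate(t_end, s0=11000.0, vmax=50.0, km=11000.0, dt=0.01):
    # Integrate the velocity form dS/dt = -Vmax*S/(Km + S), Eq. (4.17),
    # with a midpoint (RK2) step.
    s = s0
    for _ in range(int(round(t_end / dt))):
        smid = s + 0.5 * dt * (-vmax * s / (km + s))
        s += dt * (-vmax * smid / (km + smid))
    return s

s100 = mm_substrate(100.0)
# The implicit integrated form Km*ln(S0/S) + (S0 - S) = Vmax*t must agree.
resid = 11000.0 * math.log(11000.0 / s100) + (11000.0 - s100) - 50.0 * 100.0
```

The small residual confirms that the differential and integrated forms describe the same progress curve; the sensitivity analysis distinguishes them only through which quantity is measured and fitted.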
If estimation of Vmax is our only concern then use of the integrated equation with measurements during the initial phase of the reaction is sufficient.

[Figure 4.46: The averaged substrate concentration in the Michaelis-Menten Model (1% variation).]
[Figure 4.47: The relative deviation of substrate in the Michaelis-Menten Model (1% variation).]
[Figure 4.48: Partial variances of the rate constants for substrate in the Michaelis-Menten Model (1% variation).]
[Figure 4.49: Partial variances of the rate constants for velocity in the Michaelis-Menten Model (1% variation).]

Note
also that the use of the Michaelis-Menten model to estimate the initial concentration of substrate, S0, requires that the other two parameters, Km and Vmax, be previously known, since the model is more sensitive to these parameters.

Repeating the analysis over a larger parameter range (80% of the nominal value) resulted in essentially the same observations, Figures 4.50-4.52. S0 is still the least sensitive parameter, as shown by the partial variances of the substrate, Figure 4.51. The velocity equation appears to be the more sensitive formulation of the Michaelis-Menten model. This is dramatically shown by the partial variances in Figure 4.52. Vmax is the most important parameter in this section of parameter space. Even its couplings with Km and S0 are more important than Km or S0 in the region where the reaction has gone to 40% completion.

[Figure 4.50: Averaged substrate concentration in the Michaelis-Menten Model (larger parameter variation).]
[Figure 4.51: Partial variances of the rate constants for substrate in the Michaelis-Menten Model (larger parameter variation).]
[Figure 4.52: Partial variances of the rate constants for velocity in the Michaelis-Menten Model (larger parameter variation).]

V. SENSITIVITY ANALYSIS OF SIMPLE ENZYME KINETICS MODELS

Many enzyme kinetics models are composed of interlocked Michaelis-Menten models (Segel 1975). These models are proposed to explain kinetic behavior patterns which the single Michaelis-Menten model alone is unable to do (Segel 1975). This behavior is even defined as "non-Michaelis-Menten" (Whitehead 1970). It was our desire to examine models which presumably exhibited non-Michaelis-Menten behavior patterns and were composed of linked Michaelis-Menten models. In this chapter we investigate four enzyme kinetics models: the irreversible Michaelis-Menten model, the reversible Michaelis-Menten model (Michaelis et al. 1913), the Ho-Frieden model (Ho 1976, Bates & Frieden 1973), and the Ainslie, Shill, and Neet model (Ainslie et al. 1972). Of course, to fully understand the linked Michaelis-Menten models we must first examine the Michaelis-Menten model itself.

Michaelis-Menten Model

The simplest model used in enzyme kinetics is the irreversible Michaelis-Menten model:

             k1        k3
    E + S <=====> ES ----> E + P .           (5.1)
             k2

In order to illustrate the utility of the partial variances to evaluate the sensitivities of concentrations to the rate constants, we applied the Fourier sensitivity analysis method, FSAM, to this simple model.
Although the range of rate constants used and the substrate and enzyme concentrations selected insure that steady-state conditions are established very rapidly, the model was solved numerically without including any steady-state or equilibrium assumptions. Of course, in this situation, if one could observe only the substrate or product concentrations, it would be possible to determine only k3 and (k2 + k3)/k1, since the steady-state assumption yields [S] and [P] in terms of these two "constants". We use here the time-development of (E), (S), (ES), and (P) in terms of k1, k2, and k3 in order to illustrate the method and permit comparison with more complex models.

Examination of the range of values of k1, k2, and k3 tabulated (Fersht, 1977) for a variety of enzyme reactions which follow Michaelis-Menten kinetics shows that most lie within an interval of four orders of magnitude centered on the nominal values listed in Table 5.1. Because FSAM was designed to apply to situations with arbitrarily large ranges in the rate constants, it was possible to explore the sensitivities of the concentrations to the three rate constants while each was allowed to vary independently by up to four orders of magnitude. The rate constant ranges and initial conditions given in Table 5.1 were used in these simulations. The initial concentrations correspond to "assay conditions" (S0 >> E0).

Table 5.1. Parameter Values for the Irreversible Michaelis-Menten Model.

         ki(0)              Δki
    k1   1.0 (μM sec)^-1    10^±2
    k2   10^4 sec^-1        10^±2
    k3   10^3 sec^-1        10^±2

    nominal Km(0) = (k2(0) + k3(0))/k1(0) = 11000 μM
    1.1 μM ≤ Km ≤ 1.1 × 10^8 μM
    S0 = 11,000 μM (Assay Conditions)
    E0 = 0.05 μM

Table 5.2. Parameter Values for the Reversible Michaelis-Menten Model.

         ki(0)              Δki
    k1   1.0 (μM sec)^-1    10^±2
    k2   10^4 sec^-1        10^±2
    k3   10^3 sec^-1        10^±2
    k4   1.0 (μM sec)^-1    10^±2

    10^-9 ≤ K ≤ 10^7,  K(0) = k1(0) k3(0)/(k2(0) k4(0))
    S0 = 11,000 μM
    E0 = 0.05 μM
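The full mass-action system for scheme (5.1), solved without any steady-state assumption as described above, can be sketched as follows using the nominal values and assay conditions of Table 5.1; the explicit-Euler integrator and its step size are illustrative choices, not taken from the text.

```python
def mm_full(t_end, k1=1.0, k2=1.0e4, k3=1.0e3, s0=11000.0, e0=0.05, dt=1.0e-6):
    # Mass-action ODEs for E + S <=> ES -> E + P, with no steady-state
    # or equilibrium assumptions; concentrations in uM, time in seconds.
    e, s, es, p = e0, s0, 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        v1 = k1 * e * s          # binding       E + S -> ES
        v2 = k2 * es             # dissociation  ES -> E + S
        v3 = k3 * es             # catalysis     ES -> E + P
        e += dt * (v2 + v3 - v1)
        s += dt * (v2 - v1)
        es += dt * (v1 - v2 - v3)
        p += dt * v3
    return e, s, es, p

e, s, es, p = mm_full(0.01)
```

With these nominal values the ES concentration relaxes almost instantaneously to its steady-state value E0 S/(Km + S) ≈ 0.025 μM, while the update conserves E + ES and S + ES + P exactly, which is a useful check on the integration.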
It is important to note that the equilibrium constant K1 = k1/k2 was not held constant when the rate constants were varied. This permitted exploration of the overall sensitivity of the model to a range of maximum velocities which covered four orders of magnitude and a range of Michaelis constants which spanned eight orders of magnitude. It would also be possible to test a more restricted model by fixing the equilibrium constant, as is done later for more complex models.

In Figures 5.1a and 5.1b we have plotted the average concentrations, which are the concentrations summed over all the different rate constant sets divided by the number of simulations, and the standard deviations [square root of the total variance defined in Equation (2.21)] for the irreversible Michaelis-Menten Model. All these curves are scaled to the percent of the total enzyme concentration E0 for enzymatic species and to the percent of the initial substrate concentration, S0, for the product and substrate. Two concentrations of the four are linearly

Figure 5.1. Average concentrations and standard deviations of the concentrations in the Michaelis-Menten Models. The symbols represent: 0, S; 0, P; A, E; +, ES.
[Figure 5.2: partial variance plots for the product, panels (a) and (c), and the ES complex, panels (b) and (d), of the irreversible and reversible Michaelis-Menten models; time axis in sec × 10⁻².]

the product concentration. This is not surprising since k3 controls how fast substrate is converted into product, while the binding step yields a steady-state ES concentration essentially instantaneously on the time scale of Figure 5.2a. As time increases, the specific k3 value chosen has an even greater effect on the accumulated product concentration, so the relative importance of k3 increases with time. Figure 5.2a also indicates that the product concentration is sensitive to the coupling between k1 and k3, especially at early times. This indicates that the accurate determination of k3, for example, from the early portion of a single progress curve would be hampered by coupling to k1, resulting in significantly larger marginal deviations of the rate constant than would be obtained by using the entire progress curve. It must be emphasized that the entire analysis described here applies to full time-course behavior rather than only initial-rate behavior. If one wished to study the sensitivity of initial rates to substrate concentration, for example, a different procedure would be used. The analysis shows that if the concentration of ES were measurable, it would be most sensitive to k1 at short times, as indicated by Figure 5.2b. This reflects the fact that the formation of ES is the dominant initial step. However, the sensitivity to k3 grows rapidly as substrate is depleted.
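The asymmetry between k3 and k2 noted above can be checked crudely without the full Fourier machinery by varying one rate constant at a time and comparing the resulting spread in accumulated product. The sketch below is our own one-at-a-time illustration; the reduced 10±1 ranges, the 0.2 s observation time, and the step size are choices made so that a simple explicit integrator stays stable, not values from the dissertation.

```python
# One-at-a-time check that accumulated product responds far more
# strongly to k3 than to k2 (irreversible model, Table 5.1 nominals).
S0, E0 = 11_000.0, 0.05

def product_at(k1, k2, k3, t_end=0.2, dt=1e-5):
    def rhs(S, ES, P):
        E = E0 - ES
        return (-k1 * E * S + k2 * ES,
                k1 * E * S - (k2 + k3) * ES,
                k3 * ES)
    y = (S0, 0.0, 0.0)
    for _ in range(int(t_end / dt)):        # fixed-step RK4
        a = rhs(*y)
        b = rhs(*[v + 0.5 * dt * d for v, d in zip(y, a)])
        c = rhs(*[v + 0.5 * dt * d for v, d in zip(y, b)])
        d = rhs(*[v + dt * e for v, e in zip(y, c)])
        y = [v + dt * (p + 2 * q + 2 * r + s) / 6
             for v, p, q, r, s in zip(y, a, b, c, d)]
    return y[2]                             # [P] at t_end

spread = {}
for name in ("k2", "k3"):
    vals = []
    for f in (0.1, 1.0, 10.0):              # reduced 10**(+/-1) range
        k1, k2, k3 = 1.0, 1.0e4, 1.0e3
        if name == "k2": k2 *= f
        else:            k3 *= f
        vals.append(product_at(k1, k2, k3))
    spread[name] = max(vals) / min(vals)

print(spread)   # the spread over the k3 range greatly exceeds the k2 spread
```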
There is also strong coupling between k1 and k3, implying that the errors in determining these rate constants from the time-dependence of the concentration of ES would be strongly correlated. As was found for the product sensitivities, ES is not very sensitive to the value of k2 over the range examined. Figure 5.2b shows apparent discontinuities in the sensitivity of the ES concentration to the rate constants. One might be tempted to attribute this to numerical errors or instabilities in the calculation, but this is not the case. When the ranges over which the rate constants can vary are drastically reduced, the discontinuities disappear. The discontinuous curves (obtained with these time steps) originate because of the large ranges available to the three rate constants. Within a narrow time span an appreciable number of simulation runs reach completion. When this occurs the ES complex disappears and the concentration of E builds up for these simulations. This completely eliminates the sensitivity of ES to the rate constants for this subset of simulations. The result is a series of apparent breaks in the partial variances. It is possible to eliminate these breaks in either of two ways. As indicated above, the ranges of the rate constants can be restricted so that the number of simulations which reach completion is insignificant. Alternatively, the initial substrate-to-enzyme ratio can be made so large that the simulations do not reach completion even for the most favorable combination of rate constants. Neither of these alternatives is entirely satisfactory, since they both impose restrictions on the model. In order to be certain that the origin of the greatest sensitivity of product concentration to k3 and least sensitivity to k2 is not just the large ranges permitted for the rate constants, sensitivity analyses were performed over several reduced ranges about the same nominal values.
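The role of runs that reach completion can also be illustrated without integrating any differential equations, by using the standard integrated Michaelis-Menten rate law, Vmax t = (S0 − S) + K_M ln(S0/S), to estimate each sampled run's time to 99% substrate depletion. The sketch below is our own illustration; the log-uniform sampling, the 500 s observation window, and the run count are arbitrary choices, and the steady-state estimate is only a rough stand-in for the full numerical solution.

```python
import math, random

random.seed(0)

def completion_time(k1, k2, k3, S0=11_000.0, E0=0.05, frac=0.99):
    # Integrated Michaelis-Menten rate law:
    #   Vmax * t = (S0 - S) + Km * ln(S0 / S)
    Km, Vmax = (k2 + k3) / k1, k3 * E0
    S = (1.0 - frac) * S0                 # substrate left at "completion"
    return ((S0 - S) + Km * math.log(S0 / S)) / Vmax

def fraction_complete(decades, t_obs=500.0, n=2000):
    # Fraction of log-uniformly sampled rate-constant sets that reach
    # 99% substrate depletion within the observation window.
    done = 0
    for _ in range(n):
        k1 = 1.0   * 10 ** random.uniform(-decades, decades)
        k2 = 1.0e4 * 10 ** random.uniform(-decades, decades)
        k3 = 1.0e3 * 10 ** random.uniform(-decades, decades)
        done += completion_time(k1, k2, k3) < t_obs
    return done / n

wide, narrow = fraction_complete(2.0), fraction_complete(0.5)
print(wide, narrow)   # restricting the ranges leaves far fewer finished runs
```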
The general results were the same except that, as noted above, the apparent breaks in the sensitivity of ES to the rate constants disappeared. Under the assay conditions modeled here, the concentration of the enzyme-substrate complex, ES, assumes a steady-state value at very short times, as shown in Figure 5.1a. However, even under steady-state conditions the concentration of the complex changes with time as substrate is used up. This is reflected in sensitivities which also change with time. The growing sensitivity to k3 means that in a full time-course analysis, this rate constant would be determined relatively more accurately by following the progress curve for an extended period of time than could the other rate constants or combinations of these rate constants.

Reversible Michaelis-Menten Model

Only slightly more complex than the irreversible Michaelis-Menten model is the reversible model, in which equilibrium is ultimately reached. In testing this model, the same nominal rate constants and ranges were used as for the irreversible case but a reverse step (k4) was added:

    E + S ⇌(k1, k2) ES ⇌(k3, k4) E + P.     (5.2)

Table 5.2 gives the nominal rate constants, their ranges, the initial conditions, and the frequency set used. Figures 5.1c and 5.1d show the average concentrations and standard deviations for the reversible case. As shown in Figure 5.2c, the first 40 seconds give approximately the same product sensitivities as the irreversible model. The reverse step k4 only begins to become important at later times, as the concentration of product becomes large enough to bind to the enzyme. Initially the sum of the product partial variances and the higher partial variance which couples k1 and k3 accounts for 93% of the total variance. During approximately the first 100 seconds, this sum decays to 85% and remains constant. The other 15% of the sensitivity is spread over couplings among the parameters, but no individual coupling is large enough to appear on the graph.
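Because scheme (5.2) obeys detailed balance, its long-time state is a true equilibrium with [P]/[S] = k1k3/(k2k4), which is 0.1 for the nominal Table 5.2 values. The sketch below checks this numerically; to keep a simple explicit integrator fast we raise the enzyme concentration well above the 0.05 µM assay value, an illustrative change rather than a condition used in the dissertation.

```python
# Reversible Michaelis-Menten model: E + S <-> ES <-> E + P.
# Detailed balance fixes the equilibrium ratio [P]/[S] = k1*k3/(k2*k4).
k1, k2, k3, k4 = 1.0, 1.0e4, 1.0e3, 1.0   # nominal Table 5.2 values
S0, E0 = 11_000.0, 5.0                    # E0 raised for a fast demonstration

def rhs(S, ES, P):
    E = E0 - ES                           # enzyme mass balance
    return (-k1 * E * S + k2 * ES,
            k1 * E * S + k4 * E * P - (k2 + k3) * ES,
            k3 * ES - k4 * E * P)

y = (S0, 0.0, 0.0)
dt = 5e-5
for _ in range(60_000):                   # 3 s, fixed-step RK4
    a = rhs(*y)
    b = rhs(*[v + 0.5 * dt * d for v, d in zip(y, a)])
    c = rhs(*[v + 0.5 * dt * d for v, d in zip(y, b)])
    d = rhs(*[v + dt * e for v, e in zip(y, c)])
    y = [v + dt * (p + 2 * q + 2 * r + s) / 6
         for v, p, q, r, s in zip(y, a, b, c, d)]

S, ES, P = y
print(P / S)     # approaches k1*k3/(k2*k4) = 0.1
```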
The major difference between the irreversible model and the reversible model is seen in the sensitivity of ES to the rate constants. In the irreversible case the sensitivity to k3 grows, while in the reversible model k1 remains most important. Apparently the reverse step (k4) can serve to stabilize the concentration of ES as the reaction approaches equilibrium. Since substrate is present in excess, the concentration of the complex continues to be dominated by sensitivity to k1. As with the irreversible case, apparent breaks in the sensitivity of ES to the rate constants are observed (see Figure 5.2d). These are again caused by a subset of the simulations, which in this case reach their equilibrium values.

Models with Slow Conformational Changes

Except for a (usually undetectable) lag in product formation caused by storage of substrate as the ES complex, the Michaelis-Menten model is not capable of describing bursts or lags. Nor can it lead to allosteric behavior, since the phenomenon of allosterism is defined (Segel, 1975; Fersht, 1977) in terms of deviations of the reaction velocity from the predictions of the Michaelis-Menten model. In order to examine these phenomena it is necessary to devise more complex models. The most common interpretations (Monod et al., 1965; Koshland et al., 1966) of allosteric behavior involve multi-subunit enzymes in which interactions among the subunits make the addition of another substrate molecule easier or more difficult than for those which were previously bound. These models are intrinsically thermodynamic in nature, since they refer to interactions which affect binding constants. It was suggested some time ago (Whitehead, 1970) that allosterism could arise without subunit interactions as a natural consequence of kinetic models which involve slow steps such as conformational changes. Such models have also been proposed to describe bursts and lags in product production.
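The cooperativity criterion used below is the Hill coefficient: the slope of log[v/(Vmax − v)] against log[S] near half-saturation, equal to one for pure Michaelis-Menten behavior. The sketch below uses synthetic velocity curves; the cooperative curve is the textbook Hill form with n = 2, not one of the models studied in this chapter.

```python
import math

def hill_slope(v, Vmax, S1, S2):
    # Slope of the Hill plot, log(v/(Vmax - v)) versus log(S),
    # evaluated between substrate concentrations S1 and S2.
    y = lambda S: v(S) / (Vmax - v(S))
    return (math.log(y(S2)) - math.log(y(S1))) / (math.log(S2) - math.log(S1))

Vmax, K = 1.0, 100.0
mm   = lambda S: Vmax * S / (K + S)              # Michaelis-Menten velocity
coop = lambda S: Vmax * S**2 / (K**2 + S**2)     # Hill form with n = 2

n_mm = hill_slope(mm, Vmax, 50.0, 200.0)         # 1.0: no cooperativity
n_coop = hill_slope(coop, Vmax, 50.0, 200.0)     # 2.0: positive cooperativity
print(n_mm, n_coop)
```

A Hill coefficient below one, as in the negatively cooperative rate-constant groups discussed later, would show up as a slope less than unity on the same plot.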
In this section we apply sensitivity analysis to a model first examined by Ainslie, Shill and Neet (1972) using steady-state methods. Our sensitivity analysis of the model showed that similar behavior can be obtained with less complex models. Therefore, these simpler models are also examined in some detail.

Model of Ainslie, Shill and Neet

In 1972, Ainslie et al. proposed an enzyme model, incorporating slow conformational changes, which they studied by using steady-state techniques. They showed that appropriate choices of the 16 rate constants could be made so that the model displayed either bursts or lags in product production. They also showed that the variation of the final steady-state velocity with substrate concentration could be made to exhibit allosterism, leading to behavior similar to either positive or negative cooperativity depending upon the choice of rate constants. Because of the wide variations in behavior exhibited by this model, brought about merely by changing the values of the rate constants, we felt that this model would provide an excellent test of the methods of sensitivity analysis. This model, which we refer to as the Ainslie model, is described by the following scheme, in which the numbers label the 16 rate constants (odd numbers forward, even numbers reverse) and the two catalytic cycles are connected by the isomerizations E ⇌ E* and ES ⇌ E*S:

     E + S ⇌(1, 2) ES ⇌(13, 14) EP ⇌(15, 16) E + P
           E ⇌(9, 10) E*        ES ⇌(3, 4) E*S                  (5.3)
    E* + S ⇌(11, 12) E*S ⇌(5, 6) E*P ⇌(7, 8) E* + P

Equilibrium constants K1, K3, K5, ... are defined as k1/k2, k3/k4, k5/k6, etc. With its 16 rate constants, this model is complex enough to defy intuitive understanding of its detailed dynamic behavior. By applying SAM to this model, with the nominal rate constants and ranges already tested by Ainslie et al., we can determine the important pathways which lead to bursts and lags. The analysis also showed that the model need not be this complex to yield the same general behavior. Ainslie et al.
separated the rate constants into two sets: those which gave lags in product growth and those which gave bursts. Each of these sets was also divided into two groups which showed allosteric behavior similar to positive cooperativity and negative cooperativity, respectively. Hill plots (Hill, 1925; Segel, 1975; Fersht, 1977) were used to classify the cooperativity. Negative cooperativity gives Hill coefficients less than one, while positive cooperativity leads to Hill coefficients greater than one. In this sensitivity analysis it was only necessary to use two groups of rate constant ranges, corresponding respectively to bursts and lags, in order to cover the entire range studied by Ainslie et al. In order to decrease the complexity of the problem, simplify the interpretation, and include the thermodynamic constraints demanded by the presence of mechanistic loops, we maintained all of the equilibrium constants at fixed values in each of the two sets studied. This corresponds approximately to the choices made by Ainslie et al., who used constant values for most of the equilibrium constants while varying the rate constants. This simplification reduces the number of independent parameters to eight but does not alter the general behavior of the model. The eight differential equations which describe the time-dependence of the concentrations of the eight species in this scheme can be reduced to six coupled non-linear differential equations by using the two algebraic equations of mass balance. The nominal values of the rate constants, the values of the equilibrium constants used, and the initial conditions are given for the lag and burst sets in Table 5.3, while the frequency sets and computer data are given in Table 5.4. The ranges allowed for each rate constant were 10±1 times the nominal value. In Figure 5.3 the average concentrations and the standard deviations of the two sensitivity analysis runs are displayed.
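The constraint just described, holding each K_i = k_i/k_{i+1} fixed, slaves every even-numbered (reverse) rate constant to its odd-numbered partner, leaving eight independent parameters. A sketch of that bookkeeping, using the lag-set nominal values as far as they are legible in Table 5.3 (treat the numbers as illustrative):

```python
# Fixed equilibrium constants K_i = k_i / k_{i+1} determine every
# reverse rate constant from its forward partner, reducing the
# 16-rate-constant Ainslie model to 8 independent parameters.
# Nominal lag-set values (as legible in Table 5.3; illustrative only).
k_odd = {1: 10.0, 3: 1e-2, 5: 1e4, 7: 1e3,
         9: 1e-1, 11: 10.0, 13: 1.0, 15: 10.0}
K_eq  = {1: 1e-2, 3: 3.0, 5: 3.0, 7: 30.0,
         9: 10.0, 11: 0.3, 13: 1e-3, 15: 112.7}

k_full = {}
for i, ki in k_odd.items():
    k_full[i] = ki
    k_full[i + 1] = ki / K_eq[i]     # reverse constant from K_i = k_i / k_{i+1}

print(len(k_full))                   # all 16 rate constants from 8 parameters
```

Varying an odd rate constant over its 10±1 range then automatically moves its even partner, so the thermodynamic loop constraints stay satisfied.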
In the lag set the product growth is initially slow but it rapidly increases, reaching 27% of its equilibrium value after 120 seconds. In contrast, the product growth of the burst set starts out fast and then slows down, reaching only 11% of its equilibrium value after 120 seconds. This leads to a large range of concentrations in the lag set, but to a restricted set in the burst case. In both cases the less active free enzyme, E, is a minor species.

Table 5.3. Parameter Values for the Ainslie Model. (a)

           (Lag Set)                        (Burst Set)
  i   k_i(0)            K_eq(i)        k_i(0)            K_eq(i)
  1   10 (µM s)⁻¹       10⁻² (µM)⁻¹    10.0 (µM s)⁻¹     0.1 (µM)⁻¹
  3   10⁻² s⁻¹          3.0            10⁻³ s⁻¹          10⁻²
  5   10⁴ s⁻¹           3.0            10⁵ s⁻¹           1.0
  7   10³ s⁻¹           30.0 (µM)⁻¹    10³ s⁻¹           100.0 (µM)⁻¹
  9   10⁻¹ s⁻¹          10             10⁻² s⁻¹          10.0
 11   10.0 (µM s)⁻¹     0.3 (µM)⁻¹     10 (µM s)⁻¹       10⁻² (µM)⁻¹
 13   1.0 (µM s)⁻¹      10⁻³ µM        10 (µM s)⁻¹       10⁻³ µM
 15   10 s⁻¹            112.7          10³ s⁻¹           100

  E0(b) = 0.5 µM, S0 = 4000.0 µM       E0 = 0.05 µM, S0 = 4000.0 µM
  K_T = 27 = [P]eq/[S]eq               K_T = 1.0 = [P]eq/[S]eq

(a) The range of the rate constants was 10±1 times the nominal value, k_i(0).
(b) Initial distribution: 90% E, 10% E*.

Table 5.4. Summary of Run and Computer Data. [Table largely illegible in this copy; it lists, for each model, the Fourier frequency set, the number of Fourier coefficients, the parameter ranges, and the computation times.]

Figure 5.3.
Average concentrations and the standard deviations of the concentrations for the Ainslie models. The symbols distinguish S, P, E, E*, ES, E*S, and E*P [legend partly illegible in this copy].

Careful examination of the partial variances shown in Figure 5.4 shows that the lag mechanism operates by shuttling the ES complex to the E*S complex, which then rapidly forms product. The top (E + S → E + P) cycle has slow turnover relative to the bottom (E* + S → E* + P) cycle, and the important bridge between them is the isomerization step of the enzyme-substrate complex. The small amount of E* present initially starts turning over substrate, so that the substrate concentration is most sensitive to k7, the product formation rate constant in the bottom cycle. As the reaction proceeds, the total concentration of enzyme in the bottom cycle is increased by the conversion of ES to E*S; this increases the sensitivity to k3 and to the binding step E* + S → E*S (k11). This shift to the bottom cycle is verified by the rapid decay of the substrate sensitivity to k15. The coupled partial variances of Figure 5.4b reinforce the above conclusions. The rapid decay in time of the coupled partial variance S5,7 is consistent with the growing importance of the depletion of the substrate concentration via the bottom cycle. Furthermore, the isomerization step, which increases the total active enzyme concentration and thus increases the importance of the k11 step, results in the rapid growth in time of the coupled partial variance S7,11. Turning to the sensitivity of the enzyme concentration,

Figure 5.4. Partial variance plots for the Ainslie lag model. A number represents the partial variance for that rate constant. Other partial variances are represented by: B, S11; D, S13; F, S15. Coupled partial variances are represented as follows: in (b) by +, S5,7 and *, S7,11; in (c) by +, S13,15 and *, S1,15; in (d) by *, S5,7; in (e) by *, S13,15.
Figure 5.5. Partial variance plots for the Ainslie burst model. A number represents the partial variance for that rate constant. Other partial variances are represented by: B, S11; D, S13; F, S15. Coupled partial variances are represented as follows: in (a) by *, S11,7; in (b) by *, a coupling of k11 [second index illegible in this copy]. The symbol "s" represents the sum of the displayed partial variances.

negligible. Thus we may conclude that the enzyme concentration in the burst region of parameter space is not significantly affected by the rate constants. Of course, if the equilibrium constants were allowed to vary, the results could be greatly altered. The sensitivity plots of the intermediates, ES, E*S, and E*P, shown in Figures 5.5c, 5.5d and 5.5e, respectively, are dominated by the sensitivity to the isomerization of ES to E*S. At very short times the top and bottom cycle rates have some sensitivity, but the total variance is very small here. From the above detailed analysis a simple rationale for the operation of this model with regard to burst and lag behavior can be formulated. The initial relative concentrations of E and E* are determined by K9.
It is the relatively slow transformation of E*S to ES, and vice versa, brought about by substrate binding, which gives rise to the bursts and lags. The formation of E*S and ES from E* and E, respectively, is rapid compared with the rate of interconversion of these forms. Because of the importance of the step with rate constants k3 and k4, it is not surprising that the sensitivity to k3 (fixing k3 also determines k4 through the constant value of the equilibrium constant K3) provides an important clue to the behavior of the model. If we wish to focus on bursts and lags in product production, then the most informative sensitivities will be those which relate product (or substrate) concentration to the rate constants. With this information we can now formulate a reduced model which exhibits essentially the same sensitivities as the complete Ainslie model:

     E + S ⇌ ES ⇌ EP ⇌ E + P
             ES ⇌ E*S
    E* + S ⇌ E*S ⇌ E* + P

Of course, to obtain the proper very-long-time behavior it would be necessary to incorporate the E ⇌ E* step in the model to return E* to E. However, the E ⇌ E* step plays no significant role in the behavior of the model in the region of parameter space and the time range explored here. As long as the rate constant sets are chosen such that a pool of enzyme is bound up in the ES intermediate which is then slowly converted to E*S, the lag behavior will result. For burst behavior one needs more E* present initially to cycle through the bottom cycle than the isomerization ES ⇌ E*S would yield at equilibrium. Since the reduced model can exhibit these features, bursts and lags will result from this model. Furthermore, the mechanistic steps by which EP is created and destroyed are of secondary importance over this region of rate constant space. This suggests the possibility of still further reduction of the model.
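The reduction carried out for the Frieden model below eliminates an intermediate such as EP by a steady-state approximation, replacing the two-step sequence A ⇌ M ⇌ B with a single effective step whose lumped rate constants preserve the overall equilibrium constant. A generic sketch of that standard steady-state algebra (our own illustration, not the dissertation's code):

```python
def lump(kf1, kr1, kf2, kr2):
    # Steady-state elimination of M in  A <->(kf1, kr1) M <->(kf2, kr2) B:
    #   [M]_ss = (kf1 [A] + kr2 [B]) / (kr1 + kf2)
    # which gives d[B]/dt = kf_eff [A] - kr_eff [B] with:
    kf_eff = kf1 * kf2 / (kr1 + kf2)
    kr_eff = kr1 * kr2 / (kr1 + kf2)
    return kf_eff, kr_eff

kf, kr = lump(2.0, 3.0, 5.0, 7.0)          # arbitrary sample values
# The lumped step preserves the overall equilibrium constant:
print(kf / kr, (2.0 * 5.0) / (3.0 * 7.0))  # both equal kf1*kf2/(kr1*kr2)
```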
Frieden Model

A model proposed by Bates and Frieden (1970) to account for time-lags in enzyme reactions, and also studied by Ho (1976), may be represented by the scheme:

     E + S ⇌(1, 2) ES ⇌(7, 8) E + P
           E ⇌(11, 12) E*      ES ⇌(3, 4) E*S              (5.4)
    E* + S ⇌(5, 6) E*S ⇌(9, 10) E* + P

In order to permit comparison of this model with that of Ainslie et al. (1972), the same rate constants were used for equivalent steps. To evaluate k7, k8, k9, and k10, a steady-state approximation was applied to d[EP]/dt and d[E*P]/dt. This related the product release rates and equilibrium constants of the Frieden model to those of the Ainslie model by the following equations, in which the primed rate constants refer to the Frieden model:

    k7' = k13 k15 / (k14 + k15),    k8' = k14 k16 / (k14 + k15),
                                                                    (5.5)
    k9' = k5 k7 / (k6 + k7),        k10' = k6 k8 / (k6 + k7).

These rate constants, the other nominal rate constants, the equilibrium constants, and the initial conditions for the burst and lag runs of the Frieden model are given in Table 5.5, while the frequency sets and computer data are given in Table 5.4. As with the Ainslie model, the equilibrium constants were fixed and the rate constant ranges were 10±1 times the nominal values. Figure 5.6 shows the average concentrations and the standard deviations of the Frieden model. Both the lag and burst cases are similar to those of the Ainslie model. It is interesting to note that there is more "effective" enzyme in the Frieden model, since there are fewer intermediate complexes. The substrate sensitivity shown in Figure 5.7a is largest for k9, the rate constant for release of product from the active form. Initially, the corresponding upper-cycle rate constant k7 for the inactive form contributes about 20% to the rate of product formation, but this decreases to less than 5% as the isomerization step

Table 5.5. Parameter Values for the Frieden Model. (a)
           (Lag Set)                        (Burst Set)
  i   k_i(0)            K_eq(i)        k_i(0)            K_eq(i)
  1   10 (µM s)⁻¹       10⁻² (µM)⁻¹    10.0 (µM s)⁻¹     10⁻¹ (µM)⁻¹
  3   10⁻² s⁻¹          1.0            3 × 10⁻⁴ s⁻¹      10⁻²
  5   10 (µM s)⁻¹       10⁻¹ µM        10 (µM s)⁻¹       10⁻²
  7   30 s⁻¹            3.1 × 10³ µM   10 s⁻¹            10.0 µM
  9   750 s⁻¹           3.1 × 10² µM   9.9 × 10³ s⁻¹     100 µM
 11   10⁻² s⁻¹          10⁻¹           10⁻³ s⁻¹          10⁻¹

  E_T(b) = 0.05 µM, S0 = 4000 µM
  K_T = 31 = [P]eq/[S]eq (lag);  K_T = 1.0 = [P]eq/[S]eq (burst)

(a) The range of the rate constants was 10±1 times the nominal value k_i(0).
(b) Initial distribution: 90% E, 10% E*.

Figure 5.6. Average concentrations and the standard deviations of the concentrations of the Frieden models. The symbols distinguish S, E, E*S, P, E*, and ES [legend partly illegible in this copy].

proceeds and the isomerization rate constant, k3, becomes more important. The partial variances shown in Figure 5.7b are particularly revealing. The coupling between k3 and k9 grows and decays during the time range of the isomerization of ES to E*S. As this occurs, the rate constant, k5, for the binding of substrate to active enzyme grows in importance, as does its coupling with k9. By examining these time-developments, one can gain a rather clear picture of the lag behavior as product production shifts from the upper (slow) cycle to the lower (fast) cycle. These dynamic effects are also mirrored in the sensitivities to the various enzyme forms. For example, the enzyme sensitivity shown in Figures 5.7c and 5.7d shifts with time from the E cycle to the E* cycle. This shift is responsible for the lag behavior.
Note the rapid growth and decay of the sensitivity of E to the coupling between k1 and k7, the slightly slower growth and decay of its sensitivity to k3, and the slower growth in its sensitivity to k5 and k9 and to the coupling between k5 and k9. These plots show how the dependence of the concentration of the free, less active enzyme on the various rate constants changes with time. The major route for the formation of E*S is the isomerization step ES ⇌ E*S. This is shown in Figure 5.7e by the dominance of the sensitivity of the E*S concentration

Figure 5.7. Partial variance plots for the Frieden lag model. A number represents the partial variance for that rate constant. Coupled partial variances are represented as follows: in (b) by +, S5,9; in (c) by +, S5,9 and *, S1,7.

Figure 5.8. Partial variance plots for the Frieden burst model. A number represents the partial variance for that rate constant. [Remainder of caption illegible in this copy.]
Figure 6.1. Averaged value of ΔA_obs from the pH drop Tryptophanase model (standard deviation variation).

the pH drop analysis grows as the pH decreases from 8.7 to 6.4. Both the 10% parameter variation and the standard deviation variation give the same averaged nominal value for ΔA_obs. Similarly, Figure 6.2 shows the average value of ΔA_obs for the pH jump analysis. Since the range of pH covered in the pH jump analysis is more symmetric about the apparent pKa value of 8.1 than is the pH drop analysis, Figure 6.2 is more like a complete titration curve than is Figure 6.1. Figure 6.3 displays the averaged value of k' as a function of pH. The same averaged values were obtained for both the pH drop and the pH jump analyses, as well as for both sets of parameter variations. This is, of course, expected from the functional form of k', which contains only rate constants, equilibrium constants, and [H+].

The partial variances of ΔA_obs for the standard deviation (std. dev.) pH jump sensitivity analysis are shown in Figure 6.4. This output function is only sensitive to ε508, KβH, and Kβγ. Note that there is no large variation of sensitivity in the pH region 7.0-8.5. The sensitivities only differ at the ends of the pH region. Figure 6.5 gives the partial variances for ΔA_obs where the range of variation was 10%. This plot shows that the most important parameter is ε508. Note that the sensitivity to ε508 is much lower in Figure 6.4, where the standard deviation variation was used. Since the model is so

Figure 6.2. Averaged ΔA_obs from the pH jump Tryptophanase model (standard deviation variation).
Figure 6.3. Averaged value of k' from the Tryptophanase model (standard deviation variation).

Figure 6.4. Partial variances of ΔA_obs from the pH jump Tryptophanase model (standard deviation variation).

Figure 6.5. Partial variances of ΔA_obs from the pH jump Tryptophanase model (10% variation).

sensitive to this parameter, June et al. (1980) were able to fit it to less than 5% error. Both Figure 6.4 and Figure 6.5 show that the measurement of ε508 can be made more accurately at high pH values in pH jump experiments. The essentially equal sensitivities to KβH and Kβγ reflect the fact that the amplitudes are most sensitive to Ka = KβH Kβγ, as indicated by the computed standard deviation of Ka in Table 6.2. Figure 6.6 is the standard deviation analysis for the apparent first-order rate constant k'. Here, at low pH, Kβγ is the most important parameter. At higher pH, say 9, kβγ and KβH are the most important parameters.
The partial variances for the 10% parameter variation, Figure 6.7, are not much different from those derived from the fitted standard deviation analysis. The overall shapes of the sensitivity curves are the same but the magnitudes differ slightly. The maximum sensitivities of the parameters are grouped in two regions: KβH and kβγ are large at a pH of 9, while the other parameters reach their peaks in the pH range of 7.0-7.9. The sensitivity to kγβ in the pH jump analysis is significant only at low pH values, and neither the amplitude nor the rate constants in the pH jump analysis show appreciable sensitivity to KγH. Figures 6.8 and 6.9 are the partial variances of the parameters in the pH drop model. The amplitude parameter ε508 is the most sensitive in the 10% deviation

Figure 6.6. Partial variances of k' from the pH jump Tryptophanase model (standard deviation variation).

Figure 6.7. Partial variances of k' from the pH jump Tryptophanase model (10% variation).

Figure 6.8. Partial variances of ΔA_obs from the pH drop Tryptophanase model (standard deviation variation).
Figure 6.9. Partial variances of ΔA_obs from the pH drop Tryptophanase model (10% variation).

analysis, but since it was accurately measured, the standard deviation analysis shows that the equilibrium constant Kβγ is the largest. There are only two different regions of sensitivity in these analyses: a high pH set (Kβγ > ε508 > KβH) and a low pH set (Kβγ > KβH >> ε508). Figures 6.10 and 6.11 show the partial variances of the k' output function in the pH drop analysis. Here, for the first time, some sensitivity to KγH appears at low pH values. From the partial variance plots for the Tryptophanase model we see that the parameters can be grouped according to the pH dependence of their effect on the output functions. KβH, Kβγ, and ε508 determine the value of ΔA_obs with reasonably uniform sensitivities at pH values below 9 when the standard deviations are used. Since ε508 is by far the most important parameter, allowing the same relative deviation for it as for the other parameters causes it to take most of the partial variance, from 40% at low pH values to over 95% at higher pH values. By restricting the ranges to the standard deviation values, KβH and Kβγ become the dominant parameters except at high pH values. Partial variances obtained when k' is the output function show that Kβγ dominates at pH values around 7 but becomes third in importance above pH = 8.2. The parameters kβγ and KβH become the most important at high pH values and maintain substantial sensitivity down to pH values of about 7.4.
Only at pH values below about 7.6 do the sensitivities to kγβ and KγH begin to become important. This reflects the fact that these parameters refer to a low pH pathway for the interconversion of the β and γ manifolds. To determine kγβ and KγH with greater precision, the measurements should be extended to lower pH values if possible.

The sensitivity of k' to its parameters does not change much when the parameters are allowed to vary over equal relative intervals instead of over the estimated standard deviations. However, examination of Table 6.2 shows that the relative standard deviations for the most important parameters are not very different. Therefore, a change from ±10% relative deviation to 1σ is nearly the same as a change from ±10% on all parameters to ±20% on all parameters, so we would not expect much difference.

The sensitivity analysis applied here suggests what experiments should be done to further refine the parameters. Incremental pH jump experiments to higher pH values than the limit of 9.3 used to date would probably result in better estimates of kβγ and Kβγ, while pH drop experiments to lower final pH values than 4.7 would greatly improve the determination of kγβ and KγH.

This application of sensitivity analysis to the Tryptophanase model was made after the model had been developed and the parameters fit to the data. The marginal standard deviation estimates given by KINFIT4 for all of the parameters gave us an indication of their reliability. However, sensitivity analysis not only confirmed these ideas but also clearly delineated the regions of pH in which the absorbance changes and rate constants are most affected by particular parameters. Thus the major goals of sensitivity analysis, to rank the parameters in order of their importance to the output functions and to assist in the design of future experiments, were both realized in this example.

VII.
FUTURE WORK AND DEVELOPMENT

The previous chapters examined the theory and applications of Sensitivity Analysis. This chapter reviews those areas which should be profitable fields of research for further development of sensitivity analysis.

The most useful theoretical development would be in the relationship between the Walsh and Fourier methods. Christenson (1952) has laid the groundwork for this problem. He noted that Walsh functions may be generalized to sets of orthogonal functions with more than two values. This is done by relating the Walsh function to powers of exp(2πi/N), where the two-valued Walsh functions are obtained by letting N = 2, thereby giving powers of (-1). The generalized Walsh function may then take on N different values.

This relationship suggests that the N-point discrete Fourier transform may be developed entirely from a discrete algebraic viewpoint without recourse to the continuous Fourier transform. If this were done, a clearer understanding of the errors involved in aliasing and in the choice of frequency sets should result. It would also lead to a more direct relationship between the linear sensitivity coefficients (Taylor series) and the Fourier expansion coefficients.

Choice of the frequency sets for sensitivity analysis has always been a limitation. In the Fourier method we may do sensitivity analysis with up to 50 parameters, since 4th-order accurate frequency sets are known for these cases. However, 6th-order or higher accurate frequency sets are not known for an arbitrary number of parameters. It appears to be a difficult number-theoretic problem even to find a higher-order accurate set. However, finding higher-order accurate sets for an arbitrary number of parameters would enable the computation of more accurate Fourier sensitivity analyses.

In Walsh analysis an arbitrary number of parameters may be evaluated. All the frequencies required for exact analysis are known (21).
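To make the discrete algebraic viewpoint concrete, the two-valued Walsh function can be written directly as a power of (-1) determined by the binary digits of the sequency and the time, and the generalization replaces (-1) with exp(2πi/N). A small Python sketch (the function names are mine, not part of the SENANAL package):

```python
import cmath

def walsh(m, t):
    """Two-valued Walsh function: (-1) raised to the bitwise dot
    product of the sequency m and the time t."""
    return (-1) ** bin(m & t).count("1")

def generalized_walsh(m, t, p, n_values=2):
    """Generalized Walsh function: a power of exp(2*pi*i/N) built
    digit by digit from the base-N expansions of m and t over p
    digits; N = 2 recovers the ordinary powers of (-1)."""
    root = cmath.exp(2j * cmath.pi / n_values)
    power = 0
    for _ in range(p):
        power += (m % n_values) * (t % n_values)
        m //= n_values
        t //= n_values
    return root ** power
```

For N = 2 the two definitions agree, and the 2^p Walsh functions are mutually orthogonal over the 2^p sample points, which is exactly the property the discrete development would rest on.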
Unfortunately, the largest required frequency for a p-parameter set is 2^(p-1), and 2^p simulations are required to compute the coefficients. For large values of p this becomes impractical. Analogous to the Fourier method, we can develop approximate Walsh frequency sets to a required order of accuracy. Appendix 9 has an approximate Walsh frequency set which is 4th-order accurate. With this set of frequencies we can do approximate Walsh sensitivity analysis with up to 21 parameters using only 2^12 simulations instead of the 2^21 which would be required for exact analysis.

This technique will work for any set of approximate frequencies, and with the apparent relationship of Fourier and Walsh expansions we should be able to connect the approximate frequency sets from the two methods with each other. Unfortunately, an algorithm for finding approximate Walsh frequency sets has not been discovered, although it is easier to invent approximate Walsh sets than it is to invent approximate Fourier sets. The set given in Appendix 9 was chosen in an intuitive fashion. Obviously more work is required to develop a systematic method of finding approximate Walsh frequency sets for any desired accuracy. This should also clear up the problem of finding approximate Fourier frequency sets of arbitrary accuracy.

Another useful area of research is the connection of statistics and sensitivity analysis. Sensitivity analysis measures the effect on the output function of variations in the parameters. Statistics deals with the reverse problem, the effect on the parameters caused by variations, or errors, in the output function. Research in the relationships between sensitivity analysis and statistics would unite the more theoretical aspects of sensitivity analysis with the real world measurements used in statistics. One direct approach is to "feed" the "answers" obtained from a least squares analysis of data directly into the sensitivity analysis programs.
The least squares program delivers "best" estimates of the parameters and standard deviation estimates for each parameter. By using these values as the nominal parameters, along with the standard deviations as the ranges of variation, a sensitivity analysis may be done on the model. From the partial variance curves obtained in this way one may determine whether the output function is sensitive to that particular region of parameter space. If there are maxima in the partial variance curves, then one should make more measurements in that region to pin down the "best" value for the parameter in a least squares sense. Such an approach should be useful in both model reduction and experimental design.

The computer programs are well-designed. However, examination of the timing data printed by the programs suggests that improvement in the matrix transpose algorithm (SUBROUTINE TRANP) would decrease the amount of computer time required. Other than this, there are no new, faster algorithms (that I know of) which should be substituted for the ones presently used. However, the programs were written to facilitate the replacement of sub-programs if better ones are developed.

One other place that the programs could be modified is in SUBROUTINE MODEL. It may cause a significant decrease in computer time if models written in terms of differential equations are recast into an integral equation form. Integral equations are usually more stable numerically than differential equations. This type of change could result in a decrease of orders of magnitude in the computer time spent computing the required simulations.

Applications of both the Fourier and Walsh sensitivity analysis should be straightforward. Interpretation of the results will, of course, depend on the problem. It is hoped that the applications and interpretations presented here are sufficiently detailed to enable interested researchers to perform sensitivity analysis on their own models.
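The idea of feeding least-squares "answers" into a sensitivity analysis can be sketched in a few lines. Here the fitted nominal values and standard deviations are assumed given; each parameter is swept through its ±1σ range with a sin-transformation while the others are held at nominal, and the variance of the output attributable to each parameter is reported. This is a one-at-a-time Python illustration of the workflow, not the FAST or Walsh procedure implemented in the programs:

```python
import math
import statistics

def one_at_a_time_variances(model, nominal, sigma, samples=16):
    """For each parameter, vary it as nominal + sigma*sin(s) over one
    period of s, hold the others at their least-squares nominal values,
    and record the variance of the model output."""
    variances = []
    for i in range(len(nominal)):
        outputs = []
        for k in range(samples):
            s = 2 * math.pi * k / samples
            p = list(nominal)
            p[i] = nominal[i] + sigma[i] * math.sin(s)
            outputs.append(model(p))
        variances.append(statistics.pvariance(outputs))
    return variances
```

A parameter with a large entry dominates the output over its own error range; the experimental conditions where that entry peaks are the ones worth re-measuring, which is the experimental-design use suggested above.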
The insight available from sensitivity analysis is only realized after the model has been analyzed.

APPENDICES

APPENDIX 1

RELATIONSHIP OF FOURIER COEFFICIENTS TO TAYLOR SERIES COEFFICIENTS

If a function can be expanded in a Taylor series over an interval, it may also be expanded in terms of orthogonal polynomials over an equivalent interval. This may be written

$$f(x) = \sum_{j=0}^{\infty} a_j P_j(x) = \sum_{j=0}^{\infty} \frac{f^{(j)}(x_0)}{j!}\,(x-x_0)^j$$

where $P_j(x)$ is an arbitrary orthogonal polynomial and $f^{(j)}(x_0)$ is the $j$th derivative of $f(x)$ with respect to $x$ evaluated at $x = x_0$. By exploiting the orthogonality of the $P_j(x)$ polynomials we can relate the $a_j$ expansion coefficients to the Taylor series coefficients, i.e.,

$$a_k = \int w(x)\, f(x)\, P_k(x)\, dx = \sum_{j=0}^{\infty} \frac{f^{(j)}(x_0)}{j!} \int w(x)\,(x-x_0)^j P_k(x)\, dx = \sum_{j=0}^{\infty} \frac{f^{(j)}(x_0)}{j!}\, W_{jk}$$

where

$$W_{jk} = \int w(x)\,(x-x_0)^j P_k(x)\, dx .$$

From this equation we see that an orthogonal polynomial coefficient, $a_k$, is a weighted sum of all derivatives of the function evaluated at the nominal value, $x_0$. This implies that an orthogonal expansion coefficient is composed of the 'effects' of all the derivatives of the function.

We can specialize this result to the orthogonal series of sines and cosines. Expanding $f(x)$ in terms of frequencies we obtain

$$f(x) = \sum_{j=1}^{\infty} \left\{ a_j \cos(jx) + b_j \sin(jx) \right\} + \frac{a_0}{2}$$

where

$$a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(kx)\, dx , \qquad b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(kx)\, dx .$$

Substituting in the Taylor series expansion for $f(x)$ we obtain

$$a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} \left( f(x_0) + f'(x_0)(x-x_0) + \ldots \right) \cos(kx)\, dx$$

$$b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} \left( f(x_0) + f'(x_0)(x-x_0) + \ldots \right) \sin(kx)\, dx .$$

Let $y = x - x_0$; then

$$a_k = \frac{1}{\pi} \int_{-\pi+x_0}^{\pi+x_0} \left( f(x_0) + f'(x_0)\,y + \tfrac{1}{2} f''(x_0)\,y^2 + \ldots \right) \cos\!\big(k(y+x_0)\big)\, dy$$

$$b_k = \frac{1}{\pi} \int_{-\pi+x_0}^{\pi+x_0} \left( f(x_0) + f'(x_0)\,y + \tfrac{1}{2} f''(x_0)\,y^2 + \ldots \right) \sin\!\big(k(y+x_0)\big)\, dy .$$

Using the expansions for $\sin(\alpha+\beta)$ and $\cos(\alpha+\beta)$ we obtain

$$a_k = \frac{1}{\pi} \int_{-\pi+x_0}^{\pi+x_0} \left[ f(x_0)\cos(ky)\cos(kx_0) - f(x_0)\sin(ky)\sin(kx_0) \right] dy + \frac{1}{\pi} \int_{-\pi+x_0}^{\pi+x_0} \left[ f'(x_0)\,y\,\cos(ky)\cos(kx_0) - f'(x_0)\,y\,\sin(ky)\sin(kx_0) \right] dy + \ldots$$

$$b_k = \frac{1}{\pi} \int_{-\pi+x_0}^{\pi+x_0} \left[ f(x_0)\sin(ky)\cos(kx_0) + f(x_0)\cos(ky)\sin(kx_0) \right] dy + \frac{1}{\pi} \int_{-\pi+x_0}^{\pi+x_0} \left[ f'(x_0)\,y\,\sin(ky)\cos(kx_0) + f'(x_0)\,y\,\cos(ky)\sin(kx_0) \right] dy + \ldots$$
Setting the nominal value, $x_0$, to zero, we may reduce these equations as follows:

$$a_k = \sum_{j=0}^{\infty} \frac{1}{(2j)!} \left( \frac{1}{\pi} \int_{-\pi}^{\pi} f^{(2j)}(0)\, y^{2j} \cos(ky)\, dy \right) .$$

Similarly $b_k$ may be reduced to

$$b_k = \sum_{j=0}^{\infty} \frac{1}{(2j+1)!} \left( \frac{1}{\pi} \int_{-\pi}^{\pi} f^{(2j+1)}(0)\, y^{2j+1} \sin(ky)\, dy \right) .$$

This clearly shows that the Fourier coefficients are composed of all derivatives of the expansion function.

APPENDIX 2

[Figure A.1. Histogram of the log-uniform distribution function. This function is given by: Parameter = nominal · exp(A·(2/π)·sin⁻¹(sin(sq))), where here nominal = (P_HI·P_LO)^(1/2) and A = ½ ln(P_HI/P_LO).]

[Figure A.2. Histogram of the uniform distribution function. This function is given by: Parameter = nominal + A·sin⁻¹(sin(sq)), where here nominal = ½(P_HI + P_LO) and A = ½(P_HI − P_LO).]

[Figure A.3. Histogram of the Gaussian-type distribution function. This function is given by: Parameter = A·log[(1 + sin(sq))/(1 − sin(sq))], where A is chosen such that 90% of the samples are between P_HI and P_LO.]

[Figure A.4. Histogram of the sin-transformation function. This function is given by: Parameter = nominal + A·sin(sq), where here nominal = ½(P_HI + P_LO) and A = ½(P_HI − P_LO).]

APPENDIX 3

This procedure follows the original Fast Fourier Transform, the Cooley-Tukey algorithm. In fact, some Fast Fourier Transform programs may be converted directly into Walsh transforms by simply setting all the trigonometric values to ±1 and deleting the complex part of the transformation (since the Walsh transform is real). The factorization of the transform relies on the lexicographic ordering of the sampled function values.
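This factorization can be written out in a few lines. The Python sketch below (function names mine) performs one add/subtract butterfly stage per power of two and can be checked against the direct definition in powers of (-1):

```python
def fast_walsh(values):
    """Fast Walsh transform in natural (Hadamard) order: one stage per
    power of two, using only additions and subtractions.  Returns
    unnormalized coefficients; divide by N to obtain the C_m of (A-1).
    The input length must be a power of two."""
    x = list(values)
    n = len(x)
    half = 1
    while half < n:
        for block in range(0, n, 2 * half):
            for j in range(block, block + half):
                a, b = x[j], x[j + half]
                x[j], x[j + half] = a + b, a - b
        half *= 2
    return x

def direct_walsh(values, m):
    """Direct evaluation of sum_t f(t) * (-1)**(bitwise dot of m and t),
    for verifying the staged transform."""
    return sum(v * (-1) ** bin(m & t).count("1") for t, v in enumerate(values))
```

Like the transform described in this appendix, the staged version needs only N·log2(N) additions and subtractions, which is why it is so fast on a computer.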
Writing out the transform using binary representation for the time, $t = (t_1, t_2, \ldots, t_p)$, and for the sequency, $m = (m_1, m_2, \ldots, m_p)$,

$$C_m = C(m_1 m_2 \ldots m_p) = \frac{1}{N} \sum_{n=0}^{N-1} f(t_n)\,\mathrm{WALH}(m, t_n) = \frac{1}{N} \sum_{t_1=0}^{1} \sum_{t_2=0}^{1} \cdots \sum_{t_p=0}^{1} f(t_1 t_2 \ldots t_p)\,(-1)^{\sum_i t_i m_i} \qquad (A\text{-}1)$$

The calculation of the transform is carried out in a series of stages. There is one stage for each power of two in the number of points, $2^p = N$. The first stage is to derive a partial transformation series, $X_1$, from the input series, $f(t)$, by expanding the first sum in the equation (ignoring the scaling factor for now):

$$X_1(t_1 t_2 \ldots t_{p-1}\, m_p) = \sum_{t_p=0}^{1} f(t_1 \ldots t_p)\,(-1)^{m_p t_p} = f(t_1 \ldots t_{p-1} 0) + (-1)^{m_p} f(t_1 \ldots t_{p-1} 1)$$

Now we pass through the data, either adding or subtracting adjacent function values. The second stage is constructed from the first by expanding the second sum:

$$X_2(t_1 t_2 \ldots t_{p-2}\, m_{p-1} m_p) = \sum_{t_{p-1}=0}^{1} X_1(t_1 \ldots t_{p-1}\, m_p)\,(-1)^{m_{p-1} t_{p-1}}$$

This procedure is continued until all $p$ stages have been computed. The values of the last stage are the desired Walsh coefficients:

$$C_m = C(m_1 m_2 \ldots m_p) = X_p(m_1 m_2 \ldots m_p)$$

This is an extremely fast transform on a computer, as only additions and subtractions are required.

APPENDIX 4

PARSEVAL'S FORMULA FOR WALSH FUNCTIONS

The total variance of a function may be expressed as the sum of its squared Walsh expansion coefficients. This may be easily seen by computing the variance for an arbitrary function. The defining equation for variance is

$$\sigma^2_{\mathrm{Total}} = \langle (f(\mathbf{x}))^2 \rangle - \langle f(\mathbf{x}) \rangle^2$$

where $\langle f(\mathbf{x}) \rangle$ is defined as the average of the multidimensional function $f(\mathbf{x})$ with $\mathbf{x} = (x_1, x_2, \ldots, x_p)$, and $\langle (f(\mathbf{x}))^2 \rangle$ is the average of the square of the function $f(\mathbf{x})$. Expanding $f(\mathbf{x})$ in a finite multidimensional Walsh series we obtain the following series:

$$f(\mathbf{x}) = \sum_{k_1=0}^{1} \sum_{k_2=0}^{1} \cdots \sum_{k_p=0}^{1} C_{k_1 k_2 \ldots k_p} \prod_{i=1}^{p} \mathrm{WALH}(k_i, x_i) = \sum_{\mathbf{k}} C_{\mathbf{k}} \prod_{i=1}^{p} \mathrm{WALH}(k_i, x_i)$$

To compute the average of $f(\mathbf{x})$ we need only compute the average of the series expansion:

$$\langle f(\mathbf{x}) \rangle = \int_{\mathbf{x}} \sum_{\mathbf{k}} C_{\mathbf{k}} \prod_{i=1}^{p} \mathrm{WALH}(k_i, x_i)\, d\mathbf{x}$$

This equation must now be integrated over each dimension, $x_i$.
However, since each dimension has only two values, the multidimensional integral is equivalent to a multidimensional sum over these two values. This results in the following equation:

$$\langle f(\mathbf{x}) \rangle = \frac{1}{2^p} \sum_{x_1=0}^{1} \sum_{x_2=0}^{1} \cdots \sum_{x_p=0}^{1} \left( \sum_{\mathbf{k}} C_{\mathbf{k}} \prod_{i=1}^{p} \mathrm{WALH}(k_i, x_i) \right) = \frac{1}{2^p} \sum_{\mathbf{k}} C_{\mathbf{k}} \left( \sum_{x_p=0}^{1} \cdots \sum_{x_1=0}^{1} \prod_{i=1}^{p} \mathrm{WALH}(k_i, x_i) \right)$$

Substituting in the algebraic definition of the one digit Walsh function results in

$$\langle f(\mathbf{x}) \rangle = \frac{1}{2^p} \sum_{\mathbf{k}} C_{\mathbf{k}} \left( \sum_{x_p=0}^{1} \cdots \sum_{x_1=0}^{1} (-1)^{\sum_i k_i x_i} \right) = \frac{1}{2^p} \sum_{\mathbf{k}} C_{\mathbf{k}} \prod_{i=1}^{p} \left( 1 + (-1)^{k_i} \right)$$

The term in brackets is zero if any of the $k_i$'s are nonzero. Hence it functions as a Kronecker delta and we may simplify the equation accordingly:

$$\langle f(\mathbf{x}) \rangle = \frac{1}{2^p} \sum_{\mathbf{k}} C_{\mathbf{k}} \left( 2^p\, \delta_{\mathbf{k},\mathbf{0}} \right) = C_{\mathbf{0}} = C_{00\ldots 0}$$

This shows that the average of an arbitrary function is the $C_{\mathbf{0}}$ coefficient of its Walsh expansion.

The computation of $\langle (f(\mathbf{x}))^2 \rangle$ is straightforward. First express the square of the function as a Cartesian product:

$$(f(\mathbf{x}))^2 = \sum_{\mathbf{k}} \sum_{\mathbf{k}'} C_{\mathbf{k}} C_{\mathbf{k}'} \prod_{i=1}^{p} \mathrm{WALH}(k_i, x_i)\,\mathrm{WALH}(k_i', x_i)$$

which upon substitution of the definition of the Walsh function results in

$$(f(\mathbf{x}))^2 = \sum_{\mathbf{k}} \sum_{\mathbf{k}'} C_{\mathbf{k}} C_{\mathbf{k}'}\,(-1)^{\sum_i (k_i + k_i') x_i}$$

This equation may be integrated over each dimension, $x_i$:

$$\langle (f(\mathbf{x}))^2 \rangle = \frac{1}{2^p} \sum_{\mathbf{k}} \sum_{\mathbf{k}'} C_{\mathbf{k}} C_{\mathbf{k}'} \left( \sum_{x_p=0}^{1} \cdots \sum_{x_1=0}^{1} (-1)^{\sum_i (k_i + k_i') x_i} \right) = \frac{1}{2^p} \sum_{\mathbf{k}} \sum_{\mathbf{k}'} C_{\mathbf{k}} C_{\mathbf{k}'} \prod_{i=1}^{p} \left( 1 + (-1)^{k_i + k_i'} \right)$$

The term in brackets acts as a Kronecker delta, requiring $\mathbf{k} = \mathbf{k}'$. Hence the equation is now easily reduced:

$$\langle (f(\mathbf{x}))^2 \rangle = \frac{1}{2^p} \sum_{\mathbf{k}} \sum_{\mathbf{k}'} C_{\mathbf{k}} C_{\mathbf{k}'} \prod_{i=1}^{p} 2\, \delta_{k_i, k_i'} = \frac{1}{2^p} \sum_{\mathbf{k}} C_{\mathbf{k}}^2\, 2^p = \sum_{\mathbf{k}} (C_{\mathbf{k}})^2$$

giving a sum of the squared coefficients.

Combining the two results we may directly write the equation for the variance solely in terms of the expansion coefficients:

$$\sigma^2_{\mathrm{Total}} = \langle (f(\mathbf{x}))^2 \rangle - \langle f(\mathbf{x}) \rangle^2 = \sum_{\mathbf{k}} C_{\mathbf{k}}^2 - C_{00\ldots 0}^2 = {\sum_{\mathbf{k}}}' C_{\mathbf{k}}^2$$

where the $\sum'$ is a summation over all $\mathbf{k}$ except the zero sequency term.
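Parseval's formula for Walsh functions is easy to confirm numerically. In this sketch (helper names are mine) the coefficients are computed from the definition and the variance of the sampled function is recovered from them:

```python
def walsh_coefficients(values):
    """Walsh expansion coefficients C_m = (1/N) sum_t f(t) WALH(m, t),
    with WALH(m, t) = (-1)**(bitwise dot product of m and t)."""
    n = len(values)
    return [sum(v * (-1) ** bin(m & t).count("1") for t, v in enumerate(values)) / n
            for m in range(n)]

def variance_from_coefficients(coeffs):
    """Parseval for Walsh functions: the total variance is the sum of
    the squared coefficients, excluding the zero-sequency (mean) term."""
    return sum(c * c for c in coeffs) - coeffs[0] ** 2
```

The zero-sequency coefficient is the mean of the function, exactly as derived above, so subtracting its square leaves the variance.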
APPENDIX 5

THE CALCULATION OF WALSH PARTIAL VARIANCES

A partial variance is defined as the variance, or dispersion, in one dimension of a multidimensional function:

$$\sigma_1^2 = \langle (f^*(x_1))^2 \rangle - \langle f^*(x_1) \rangle^2$$

where $f^*(x_1) = \langle f \rangle_{x_2 \ldots x_n}$, the multidimensional function averaged over all dimensions except the first. $\sigma_1^2$ is called the partial variance of variable, or parameter, $x_1$. If $f(x_1, \ldots, x_n)$ is expanded in a multidimensional finite Walsh series with two points along each axis, then $k_i$ and $x_i$ may be represented in binary by one digit:

$$f(x_1, x_2, \ldots, x_n) = \sum_{k_1=0}^{1} \cdots \sum_{k_n=0}^{1} C_{k_1 \ldots k_n} \prod_{i=1}^{n} \mathrm{WALH}(k_i, x_i) = \sum_{k_1=0}^{1} \cdots \sum_{k_n=0}^{1} C_{k_1 \ldots k_n}\,(-1)^{\sum_{i=1}^{n} k_i x_i}$$

The average of the function with respect to $x_2, \ldots, x_n$ is the sum of all those values divided by the number of samples:

$$f^*(x_1) = \langle f(x_1 \ldots x_n) \rangle_{x_2 \ldots x_n} = \frac{1}{2^{n-1}} \sum_{x_2=0}^{1} \cdots \sum_{x_n=0}^{1} \left[ \sum_{k_1=0}^{1} \cdots \sum_{k_n=0}^{1} C_{k_1 \ldots k_n}\,(-1)^{\sum_{i=1}^{n} k_i x_i} \right]$$

Switching summations:

$$f^*(x_1) = \frac{1}{2^{n-1}} \sum_{k_1=0}^{1} \cdots \sum_{k_n=0}^{1} C_{k_1 \ldots k_n}\,(-1)^{k_1 x_1} \left[ \sum_{x_n=0}^{1} \cdots \sum_{x_2=0}^{1} (-1)^{\sum_{i=2}^{n} k_i x_i} \right]$$

Expanding the term in brackets:

$$f^*(x_1) = \frac{1}{2^{n-1}} \sum_{k_1=0}^{1} \cdots \sum_{k_n=0}^{1} C_{k_1 \ldots k_n}\,(-1)^{k_1 x_1} \prod_{i=2}^{n} \left( 1 + (-1)^{k_i} \right) = \sum_{k_1=0}^{1} C_{k_1 0 \ldots 0}\,(-1)^{k_1 x_1}$$

since the product vanishes unless $k_2 = \cdots = k_n = 0$, and each factor contributes 2 when $k_i = 0$.

Now calculate the second moment, $\langle (f^*(x_1))^2 \rangle_{x_1}$, using the function $f^*(x_1)$, squaring, and summing over $x_1$:

$$\langle (f^*(x_1))^2 \rangle_{x_1} = \frac{1}{2} \sum_{x_1=0}^{1} \left( \sum_{k_1=0}^{1} C_{k_1 0 \ldots 0}\,(-1)^{k_1 x_1} \right) \left( \sum_{k_1'=0}^{1} C_{k_1' 0 \ldots 0}\,(-1)^{k_1' x_1} \right) = \frac{1}{2} \sum_{k_1=0}^{1} \sum_{k_1'=0}^{1} C_{k_1 0 \ldots 0}\, C_{k_1' 0 \ldots 0} \left( 1 + (-1)^{k_1 + k_1'} \right) = C_{00\ldots 0}^2 + C_{10\ldots 0}^2$$

To calculate the second required term, we need $\langle f^*(x_1) \rangle_{x_1}$:

$$\langle f^*(x_1) \rangle_{x_1} = \frac{1}{2} \sum_{x_1=0}^{1} \sum_{k_1=0}^{1} C_{k_1 0 \ldots 0}\,(-1)^{k_1 x_1} = \frac{1}{2} \sum_{k_1=0}^{1} C_{k_1 0 \ldots 0} \left( 1 + (-1)^{k_1} \right) = C_{00\ldots 0}$$

Hence

$$\langle f^*(x_1) \rangle_{x_1}^2 = C_{00\ldots 0}^2 .$$
Subtracting the first moment squared from the second moment, we obtain the desired partial variance:

$$\sigma_1^2 = \langle (f^*(x_1))^2 \rangle_{x_1} - \langle f^*(x_1) \rangle_{x_1}^2 = C_{10\ldots 0}^2$$

APPENDIX 6

DERIVATION OF WALSH COUPLED PARTIAL VARIANCE FORMULAS

To measure the effect of coupled parameters, say $\ell$ coupled together at a time, we calculate a coupled partial variance:

$$\sigma_{1,2,\ldots,\ell}^2 = \langle (f^*(x_1 \ldots x_\ell))^2 \rangle - \langle f \rangle^2$$

where $f^*(x_1 \ldots x_\ell)$ is the function $f(\mathbf{x})$ averaged over the variables $x_{\ell+1} \ldots x_n$, and $\langle f \rangle$ is the average of the multidimensional function over all the variables $x_1 \ldots x_n$. Expand $f(x_1 \ldots x_n)$ in an $n$-dimensional finite Walsh series with two points along each axis. In this case $x_i$ and $k_i$ may be represented by a one digit binary number:

$$f(x_1 \ldots x_n) = \sum_{\mathbf{k}} C_{\mathbf{k}} \prod_{i=1}^{n} \mathrm{WALH}(k_i, x_i)$$

The average of $f(\mathbf{x})$ is the $C_{\mathbf{0}}$ term, as previously shown, so we need only calculate the term $\langle (f^*(x_1 \ldots x_\ell))^2 \rangle$. Averaging over $x_{\ell+1} \ldots x_n$ gives

$$f^*(x_1 \ldots x_\ell) = \frac{1}{2^{n-\ell}} \sum_{\mathbf{k}} C_{\mathbf{k}}\,(-1)^{\sum_{i=1}^{\ell} k_i x_i} \prod_{i=\ell+1}^{n} \left( 1 + (-1)^{k_i} \right) = \sum_{\mathbf{k}^*} C_{\mathbf{k}^*}\,(-1)^{\sum_{i=1}^{\ell} k_i^* x_i}$$

where $\mathbf{k}^* = (k_1 \ldots k_\ell\, 0 \ldots 0)$ runs over the sequencies which are zero in the last $n-\ell$ digits. We must now square this function and average it over $x_1 \ldots x_\ell$:

$$\langle (f^*(x_1 \ldots x_\ell))^2 \rangle = \frac{1}{2^\ell} \sum_{x_1=0}^{1} \cdots \sum_{x_\ell=0}^{1} \sum_{\mathbf{k}^*} \sum_{\mathbf{m}^*} C_{\mathbf{k}^*} C_{\mathbf{m}^*}\,(-1)^{\sum_{i=1}^{\ell} (k_i^* + m_i^*) x_i} = \sum_{\mathbf{k}^*} \sum_{\mathbf{m}^*} C_{\mathbf{k}^*} C_{\mathbf{m}^*} \left\{ \frac{1}{2^\ell} \prod_{i=1}^{\ell} \left( 1 + (-1)^{k_i^* + m_i^*} \right) \right\}$$

The term in brackets acts as a Kronecker delta, requiring $\mathbf{m}^* = \mathbf{k}^*$, so

$$\langle (f^*(x_1 \ldots x_\ell))^2 \rangle = \sum_{\mathbf{k}^*} C_{\mathbf{k}^*}^2 = \sum_{k_1=0}^{1} \cdots \sum_{k_\ell=0}^{1} C_{k_1 \ldots k_\ell 0 \ldots 0}^2$$

Substituting the appropriate expressions for the two moments, we obtain the $\ell$th coupled partial variance:

$$\sigma_{1,2,\ldots,\ell}^2 = {\sum_{\mathbf{k}^*}}' C_{\mathbf{k}^*}^2$$

where $\sum'$ is a summation over all $\mathbf{k}^*$ vectors except the one equal to $\mathbf{0}$.

APPENDIX 7

RELATIONSHIP BETWEEN LINEARLY DEPENDENT EQUATIONS AND THEIR FOURIER COEFFICIENTS

In chemical kinetics, mass balance equations often allow us to substitute an algebraic equation for a differential one.
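Before turning to the mass balance relations, the partial-variance formulas of Appendices 5 and 6 can be verified numerically on a small two-point-per-axis model. In this sketch (helper names are mine) parameter x_i is identified with bit i of the sample index; the single partial variance is the squared coefficient whose sequency has a one only in digit i, and the coupled partial variance sums the squared coefficients supported on the first coupled digits:

```python
def walsh_coefficients(values):
    """C_m = (1/N) sum_t f(t) * (-1)**(bitwise dot product of m and t)."""
    n = len(values)
    return [sum(v * (-1) ** bin(m & t).count("1") for t, v in enumerate(values)) / n
            for m in range(n)]

def single_partial_variance(values, i):
    """Appendix 5 result: sigma_i^2 is the squared coefficient whose
    sequency has a single one, in digit i."""
    return walsh_coefficients(values)[1 << i] ** 2

def coupled_partial_variance(values, num_coupled):
    """Appendix 6 result: sum of squared coefficients whose sequencies
    are nonzero only in the first num_coupled digits (zero excluded)."""
    mask = (1 << num_coupled) - 1
    return sum(c * c for m, c in enumerate(walsh_coefficients(values))
               if m != 0 and m & ~mask == 0)

def brute_partial_variance(values, i):
    """Check: variance of f averaged over every variable except x_i."""
    halves = [[v for t, v in enumerate(values) if (t >> i) & 1 == b] for b in (0, 1)]
    means = [sum(h) / len(h) for h in halves]
    mu = sum(values) / len(values)
    return sum((m - mu) ** 2 for m in means) / 2
```

When the model contains interaction terms, the coupled partial variance over all parameters exceeds the sum of the single partial variances, which is how coupling is detected.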
These mass balance equations are linear, and in enzyme kinetics they are of the form

$$E_0 = \sum_{i=1}^{N} \nu_i X_i$$

where the $X_i$ are the different types of enzyme-containing intermediates, and $\nu_i$ is the number of enzyme molecules in species $X_i$. Given Fourier expansions for $N-1$ of the "$X_i$", the Fourier coefficients of the $N$th species may be calculated. Using these Fourier coefficients one can calculate the partial variances of the $N$th species. This can be seen by inserting the $N-1$ Fourier expansions into the mass balance equation:

$$E_0 = \nu_i X_i + \sum_{\substack{j=1 \\ j \neq i}}^{N} \nu_j \sum_{L=0}^{M} \left[ a_L^{(j)} \cos(Lx) + b_L^{(j)} \sin(Lx) \right]$$

Solving for $X_i$:

$$X_i = \frac{1}{\nu_i} \left[ E_0 - \sum_{\substack{K=1 \\ K \neq i}}^{N} \nu_K \sum_{L=0}^{M} \left( a_L^{(K)} \cos(Lx) + b_L^{(K)} \sin(Lx) \right) \right] = \sum_{L=0}^{M} \left[ a_L' \cos(Lx) + b_L' \sin(Lx) \right]$$

Hence the Fourier coefficients of $X_i$ are

$$a_0' = \frac{E_0}{\nu_i} - \sum_{\substack{K=1 \\ K \neq i}}^{N} \frac{\nu_K}{\nu_i}\, a_0^{(K)}, \qquad a_L' = -\sum_{\substack{K=1 \\ K \neq i}}^{N} \frac{\nu_K}{\nu_i}\, a_L^{(K)}, \qquad b_L' = -\sum_{\substack{K=1 \\ K \neq i}}^{N} \frac{\nu_K}{\nu_i}\, b_L^{(K)}$$

APPENDIX 8

SENSITIVITY ANALYSIS PROGRAMS

The use of the Sensitivity Analysis Programs at Michigan State University is a straightforward task. If the mathematical model is composed of ordinary differential equations or algebraic equations, no modifications to the programs are necessary. The equations only have to be coded into FORTRAN 66. After this is done, one has to decide what kind of analysis is desired (Walsh or Fourier), the parameters' nominal values and ranges of variation, the transformation function to be used, and the time points of interest. This data is read by the program SENANAL, which does the required simulations. A second program, TRANS, reads the output from SENANAL, TAPE3, and computes the expansion coefficients and partial variances, both single and coupled. Since a Sensitivity Analysis may generate a large amount of data, depending on the number of output functions, parameters, and time points, an optional plotting program, PLTSEN, is provided.
This program reads the output from TRANS, TAPE9, and plots four sets of curves for each output function. PLTSEN plots the average value of the output function, the single partial variances, the expansion coefficients, and the relative deviation.

The following cards execute the programs SENANAL, TRANS, and PLTSEN:

1. PNC CARD
2. JOB CARD
3. PW CARD
4. ATTACH,MAIN,SENANALBINARYOPT2.
5. FTN,I=INPUT,B=SUB.
6. LOAD,MAIN.
7. LOAD,SUB.
8. EXECUTE.
9. RETURN,LGO.
10. REWIND,TAPE3.
11. ATTACH,TBIN,TRANSFORMBINARYOPT2.
12. LOAD,TBIN.
13. EXECUTE.
14. CATALOG,TAPE9,SENSITIVITYANALYSISFILE,RP=30.
15. ATTACH,PLT,PLTSENBINARYOPT2.
16. RETURN,LGO.
17. REWIND,TAPE9.
18. PLT.
20. (789)

      SUBROUTINE MODEL (TIN, TOUT, YIN, NFUNC, TSTART)
      COMMON /PARA/ P(50)

This is a subroutine which on input has TIN as the initial value at which the output functions have values (there are NFUNC output functions; YIN(1) is the first output function). TSTART is optional and tells MODEL when it is starting a new parameter vector (IF (TSTART .EQ. TIN)). The common block /PARA/ contains the parameters to be varied in the model. On output from MODEL, the output functions, sometimes called object functions, have their values at TOUT, the time on returning from MODEL.

Note that this subroutine must change its FORTRAN code for each different mathematical model, but not for parameter variations.

(789)
Data cards for program SENANAL: see the comment cards in SUBROUTINE READIN.
(789)
Data cards for program PLTSEN: see the comment cards in the program PLTSEN.
(789)
(6789)

The somewhat difficult part is to force SUBROUTINE MODEL to solve for the output functions given a parameter set, the initial values of the output functions, and the time at which the solution is desired. The application to algebraic equations is straightforward. However, solving differential equations is more difficult.
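The contract demanded of SUBROUTINE MODEL, namely to advance the object functions from TIN to TOUT for the current parameter vector, can be mirrored in a few lines of Python for readers porting the idea. The decay model and all names below are illustrative assumptions, not part of the thesis package:

```python
import math

def model(t_in, t_out, y_in, params):
    """Analogue of SUBROUTINE MODEL: given the object-function values
    at t_in, return their values at t_out.  Here the model is a single
    exponential decay dy/dt = -k*y, integrated in closed form; a real
    differential-equation model would call an ODE solver here."""
    k = params[0]
    return [y * math.exp(-k * (t_out - t_in)) for y in y_in]

def run_simulation(params, begin, time_points, t_start=0.0):
    """Mirror of the SENANAL inner loop: step the model through the
    requested time points, saving the object functions at each one."""
    y, t_in, saved = list(begin), t_start, []
    for t_out in time_points:
        y = model(t_in, t_out, y, params)
        saved.append(list(y))
        t_in = t_out
    return saved
```

As in the FORTRAN version, only `model` changes between mathematical models; the driver loop and the parameter variation machinery stay fixed.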
The use of the EPISODE package (Hindmarsh, 1977) for solving ordinary differential equations is recommended, and it is a part of the SENANAL package. These subprograms are extensively documented internally with comment cards.

Program SENANAL

      PROGRAM SENANAL(INPUT=65,OUTPUT=65,TAPE1=INPUT,TAPE2=OUTPUT,
     .                TAPE3=513)
C*
C*    PROGRAM SENANAL IS THE DRIVER PROGRAM OF A SUITE OF CODES
C*    WHICH PERFORMS SENSITIVITY ANALYSIS ON A MODEL. SENANAL
C*    READS INPUT AND BASED ON THAT INPUT CHOOSES THE WALSH SENSITIVITY
C*    ANALYSIS METHOD OR THE FOURIER SENSITIVITY ANALYSIS METHOD. IT THEN
C*    PROCEEDS TO SOLVE THE MODEL EQUATIONS OVER THE DESIRED
C*    TIME POINTS WITH THE NECESSARY PARAMETERS. EACH
C*    PARAMETER VECTOR WHICH IS TO BE SOLVED IS CALLED A SIMULATION.
C*    SENANAL SOLVES THE SIMULATIONS BY FIRST CREATING THE PARAMETER
C*    VECTOR AND THEN SOLVING THE MODEL EQUATIONS OVER ALL THE DESIRED
C*    TIME POINTS. THE MODEL SOLUTIONS ( OBJECT FUNCTIONS ) ARE
C*    WRITTEN OUT TO TAPE3 AT EACH TIME POINT.
C*
C*    AFTER A SIMULATION IS COMPLETED, SENANAL CREATES ANOTHER
C*    PARAMETER VECTOR AND SOLVES THE NEXT SIMULATION. THIS IS
C*    REPEATED UNTIL ALL THE NECESSARY SIMULATIONS HAVE BEEN SOLVED.
C*
C*    VARIABLES
C*
C*    BEGIN(NFUNC) = THE INITIAL CONDITIONS, OR EQUIVALENTLY
C*    (REAL)         THE VALUES OF THE OBJECT FUNCTIONS AT TSTART.
C*
C*    IACCUR = ORDER OF ACCURACY OF THE FREQUENCY SET
C*    (INTEGER)
C*
C*    IOMEGA =  0 IF FOURIER 4TH ORDER SET IS TO BE USED
C*    (INTEGER) 1 IF SPECIAL FREQUENCY SET IS TO BE USED
C*             -1 IF STANDARD WALSH FREQUENCY SET IS TO BE USED
C*
C*    IMETH = METHOD FLAG FOR SENSITIVITY ANALYSIS; = 1 FOR FOURIER
C*    (INTEGER) ANALYSIS, = 2 FOR WALSH ANALYSIS.
C*
C*    ITRANS = FLAG FOR TYPE OF TRANSFORMATION FUNCTION
C*    (INTEGER) SEE SUBROUTINE PARAM FOR DETAILS.
C*
C*    IW(NPARA) = AN ARRAY CONTAINING THE FREQUENCY SET TO BE USED IN
C*    (INTEGER) THE S. A. RUN.
C*              IF 'IOMEGA' .EQ. 1, THIS ARRAY MUST BE READ IN FROM
C*              CARDS.
OTHERWISE THE FREQUENCY SETS ARE CREATED ********#********************‘¥ 222 Program SENANAL, CONT'D. 0* INTERNALLY. * cl» * 0* NFUNC - NUMBER OF OBJECT FUNCTIONS WHICH WILL BE SAVED * 0* (INTEGER) AT EACH TIME POINT. : C} 0* NLABEL(NFUNC) - THE NAMES (LABELS) OF THE OBJECT FUNCTIONS * 0* NLABEL(1) SHOULD BE THE NAME OF THE FIRST * 0* OBJECT FUNCTION, ETC. * 0* * 0* NPARA - NUMBER OF PARAMETERS TO VARY * 0* (INTEGER) * 0* i 0* NSIMUL - NUMBER OF SIMULATIONS * 0* (INTEGER) * 0* * 0* PHI(NPARA) = MAXIMUM VALUES OF THE PARAMETERs( OR ONE SIGMA MAx)* 0* (REAL) * cl) 0* PLO(NPARA) . MINIMUM VALUES OF THE PARAMETERs( OR ONE SIGMA MIN) 0* (REAL) C* 0* TIME(TNPTS) - ARRAY CONTAINS THE TIME POINTS AT WHICH THE 0* (REAL) OUTPUT FUNCTIONS ARE TO BE SAVED AND THE 0* SENSITIVITY ANALYZED. CAI 0* TSTART - INITIAL TIME POINT, SO THERE ARE NO S. A. VALUES SAVED 0* (REAL) AT THIS POINT 0* TITLE(8) - A ONE CARD TITLE FOR S. A. RUN 0* (INTEGER) (WRITTEN IN 8A1O FORMAT ) 0* TNPTs . NUMBER OF TIME POINTS 0* (INTEGER) 0* YIN(NFUNC) . AN ARRAY OF LENGTH NFUNC CONTAINING ON 0* (REAL) INPUT TO SUBROUTINE MODEL THE VALUES OF 0* OBJECT FUNCTIONS AT TIN AND UPON OUTPUT FROM MODEL 0* YIN( ) CONTAINS THE VALUES OF THE OBJECT FUNCTIONS .0* AT TOUT. ' C’ ERROR CODES C* STOP "R1" OR STOP 1: IF IOMEGA WAS UNACCEPTABLE,EITHER NOT READ 0* CORRECTLY OR ABS(IOMEGA) .GT. 1 ******t**$t*#* **** ** **** *Iitll 33* 13* 0* STOP "R2" OR STOP 2: IMETH WAS UNACCEPTABLE (.LT.1 .OR. .GT. 2) 223 Program SENANAL, CONT'D. STOP 3: TNPTS WAS TOO LARGE OR TOO SMALL (0,150) STOP 4: ITRANS WAS OUTSIDE DEFINED RANGE (1,5) STOP "R4" OR STOP 5: NFUNC HAS A VALUE OUTSIDE THE DEFINED RANGE ( 1.40) ~ STOP "R5" OR STOP 6: NSIMUL HAS A VALUE OUTSIDE THE DEFINED RANGE ( .GE. 1) STOP 7: PHI(J) .LE. PLo(J), THIS COULD CAUSE A DIVIDE BY ZERO IN SUBROUTINE PARAM. STOP 10: IW(J) .LE. ZERO, FREQUENCIES MUST BE .GE. 1 STOP "R3": NPARA .LT. 1 NUMBER OF PARAMETERS MUST .GE. 1 STOP "F1": NPARA .GT. 
50 , USING FOURIER METHOD, NPARA STOP "ORDER" MEANS THAT THE ORDER OF ACCURACY VARIABLE WAS LESS THAN 4. MUST BE .LE. 50 ALSO STOP "GETER": ERROR IN A FREE FORMAT READ, EITHER AN EOF OR AN ILLEGAL CHARACTER. UPON SUCCESFUL COMPLETION OF SENANAL TAPE3 HAS THE FOLLOWING FORMAT. 1) TITLE -- ( 8A1O FORMAT) 2) METHOD,NPARA,TNPTS,NSIMUL,NFUNC ( A10, 416 FORMAT; ' METHOD =1OHWALSH , 1OHFOURIER 3) FREQUENCY SET ( 15H FORMAT, A LABEL FOR THE FILE ) AND IACCUR IN I3-FORMAT 4) Iw(1),Iw(2),...,IW(NPARA) ( 1616 FORMAT ) 5) TIME POINTS ( 15H FORMAT, A LABEL FOR THE FILE ) 6) TIME(1),TIME(2),...,TIMB(TNPTS) ( 7E12.6 FORMAT ) 7) NLABEL(1),...,NLABEL(NFUN0) (8A1O FORMAT ) ****** **** Calcutta: ******** ****** ** ************ ##8## . , 22M Program SENANAL, CONT'D. 8) YIN(1),YIN(2),...,YIN(NFUNC) UNFORMATTED WRITE THERE ARE TNPTS*NSIMUL RECORDS OF TYPE 8, ONE FOR EACH TIME POINT IN A SIMULATION MULTIPLIED BY THE NUMBER OF SIMULATIONS. THESE SIMULATION VECTORS ARE IN AN UNSUITABLE FORM FOR SENSITIVITY ANALYSIS SINCE TO DO S. A. WE NEED ALL THE DIFFERENT SIMULATIONS OBJECT FUNCTIONS' VALUES AT THE SAME TIME POINT. THE SUITE OF CODES RUN BY PROGRAM TRANS WILL REFORMAT TAPE3 AND WILL TRANSFORM THE SIMULATION CURVES INTO SEQUENCY VECTORS( WALSH OR FOURIER ) FOR WHICH PARTIAL VARIANCES WILL BE COMPUTED. 
* Oat #IIII* it* 0:: #13 t t * CWW C‘I' 0* C} 0* (3* COMMON /PARA/ P(SO) REAL PMAx(50),PMIN(5o),PAVE(5O) REAL TIME(150),PHI(50),PLO(SO) REAL YIN(40),BEGIN(4O) INTEGER IOMEGA,IMETH,NFUN0,ITRANS INTEGER TITLE(8),TRANS(5.2),IW(50) INTEGER NLABEL(40) NPARA,NSIMUL,TNPTS,LABEL(2) DATA PAVE/50*O./, PMIN/50*1.OE+99/, PMAX/50*O./ DATA LABEL/ 10H FOURIER ,1OH WALSH / DATA TRANs/1OHLOGUNIFORM,1OH UNIFORM ,1OH SINE TEST, +1OHLOG(P)BELL,1OHBELL-SHAPE,1OHARITHMETIC,1OHMULTPLIER , +3*(1OH / SUBROUTINE TIMES IS A TOTALLY UNNECESSARY BUT SOMEWHAT USEFUL TIMING ROUTINE CALL TIMEs(1,O) CALL READIN(IOMEGA,TIME,TSTART,IMETH,NFUN0,ITRANS,PHI,PLO, +NPARA,NSIMUL,TNPTS,TITLE,BEGIN,IW,NLABEL,IACCUR) CALL TIMEs(1,O) WRITE OUT INPUT WRITE(2,5) TITLE FORMAT(1H1,8A10) WRITE(2,6) FORMAT<1H ) WRITE(2,10) LABEL(IMETH) 1O 11 12 27 28 31 15 17 18 19 21 22 0* 0* 0* 0* 225 Program SENANAL, CONT'D. FORMAT(1H ,* THIS IS A SENANAL RUN USING *,A1O,* ANALYSIS*) WRITE(2,11) IACCUR,(IW(J),J=1,NPARA) FORMAT(1H ,* WITH THE*,13,*TH ORDER FREQUENCY VECTOR=*,13(2X,15) +./.30X. +14(ZX.15)) WRITE(2,12) NSIMUL,TNPTS FORMAT(1H ,* THERE ARE *,16,* SIMULATIONS IN THIS RUN WITH*, +,I5,* TIME POINTS*) WRITE(2,27) FORMAT(/,/,6x,* FUNCTION *,5x,* INITIAL VALUE *,/) DO 31 J-1,NFUNC WRITE(2,2S) NLABEL(J),BEGIN(J) FORMAT(6X,A10,5X,1PE14.6) CONTINUE WRITE(2,6) WRITE(2,15) TRANS(ITRANS,IMETH) FORMAT(1H ,* THE PARAMETERS WERE CALCULATED USING *,A1O, +* TYPE TRANSFORMATION FUNCTIONS*) WRITE(2,17) FORMAT(* *,/,* PARAMETER*,2X,* PHI(J) *,7X,* PLO(J)*) DO 19 J=1,NPARA WRITE(2,18) J,PHI(J),PL0(J) FORMAT(3x,Is,2x,2x,1PE13.6,2X,E13.6) CONTINUE WRITE(2,21) FORMAT(* *,/,* TIME POINTS *) WRITE(2,22)(TIME(J),J=1,TNPTS FORMAT(* *,1o(1x,E12.6)) CHECK FOR ACCEPTABLE INPUT PARAMETERS IF(IOMEGA .LT. -1 .OR. IOMEGA .GT. 1 ) STOP 1 IF(IMETH .LT. 1 .OR. IMETH .GT. 2 ) STOP 2 IF( TNPTS .LT. 1 .OR. TNPTS .GT. 150 ) STOP 3 IF(ITRANS .GT. 5 .OR. ITRANS .LT. 1 ) STOP 4 IF(NFUNC .LT. 1 .OR. NFUNC .GT. 40 ) STOP 5 IF( NSIMUL .LT. 
1) STOP 6 IF( IACCUR .LT. 4 ) STOP "ORDER" Do 1 J-1,NPARA IF( PHI(J) .LE. PLo(J) ) STOP 7 IF( Iw(J) .LE. O ) STOP 1O CONTINUE J-1 IF( TIME(1) .LE. TSTART ) WRITE(2,2) J,J IF( TNPTS .EQ. 1) GO TO 4 DO 3 J=2,TNPTS IF( TIME(J) .LE. TIME(J-1) ) WRITE(2,2) J,J FORMAT(1H ,/,* TIME(*,I4,*) .LE. TIME(*,I4,*- 1)*) 20 25 26 30 35 226 Program SENANAL, CONT'D. CONTINUE CONTINUE WRITE THE OUTPUT TAPE LABELS WRITE(3.23) TITLE FORMAT(8A1O) WRITE(3,20) LABEL(IMETH),NPARA,TNPTS,NSIMUL,NFUNC FORMAT(A1O,416) WRITE(3,25) IACCUR FORMAT(* FREQUENCY SET *,13) WRITE(3.26)(Iw(J),J=1,NPARA) FORMAT(16I6) WRITE(3.30) FORMAT(* TIME POINTS *) WRITE(3,35)(TIME(J),J=1,TNPTs) FORMAT(7E12.6) WRITE(3,23)(NLABEL(J),J=1,NFUNc) CALL TIMEs(2,O) LOOP OVER THE DIFFERENTS SIMULATIONS DO 1000 ISIMUL=1,NSIMUL IQ ' ISIMUL CALCULATE THE PARAMETER VECTOR FOR THIS SIMULATION CALL PARAM(IMETH,IQ,ITRANS,PHI,PLO,NPARA,P,NSIMUL,IW) CALCULATE THE PARAMETER STATISTICS DO 100 J=1,NPARA PMAx(J) - AMAX1(PMAx(J),P(J)g PMIN(J) - AMIN1(PMIN(J),P(J) PAVE(J) . (P(J) + FLOAT(ISIMUL - 1)*PAVE(J))/FLOAT(ISIMUL) CONTINUE CALL TIMES(3.O) INITIALIZE THE FUNCTION WITH ITS INITIAL VALUE. (NECESSARY IF THE FUNCTIONS ARE ODE'S, OTHERWISE ONE CAN SET BEGIN TO ZERO ) DO98J=1,NFUNC YIN(J) . BEGIN(J) CONTINUE SET INITIAL TIME FOR SIMULATION RUN TIN - TSTART 227 Program SENANAL, CONT'D. 0* CALCULATE THE OUTPUT FUNCTIONS FOR THE "TNPTS" POINTS 0* DO 200 KOUNT . 1,TNPTS TOUT - TIME(KOUNT) CALL MODEL(TIN, TOUT, YIN, NFUNC, TSTART ) cl. 0* WRITE OUT THE SOLUTION AT TOUT C". WRITE(3.10000) (YIN(J),J=1,NFUN0) 1OOOO FORMAT(4020) .C* 0* CALL TIMES(4,O) TIN = TOUT 200 CONTINUE C1! 0* 1000 CONTINUE C. 
0* SIMULATIONS ARE OVER WITH 0* WRITE OUT THE PARAMETER STATS C‘.’ WRITE(2,6) WRITE(2,1500) 1500 FORMAT(1H ,1x,* PARAMETER*,3X,* AVERAGE VALUE *,2x, +* MAXIMUM VALUE *,2x,* MINIMUM VALUE *) DO 1620 J=1,NPARA WRITE(2,1610) J,PAVE(J),PMAx(J),PMIN(J) 1610 FORMAT(1H ,5x,15,5x,2x,3(1PE13.6,4x)) 1620 CONTINUE 0* PRINT TIMING DATA CALL TIMEs(1,1) END 228 Program SENANAL; CONT'D. SUBROUTINE READIN SUBROUTINE READIN(IOMEGA,TIME,TSTART,IMETH,NFUNC,ITRANS,PHI, +PLO,NPARA,NSIMUL,TNPTS,TITLE,BEGIN,IW,NLABEL,IACCUR) C) t y C* * 0* SUBROUTINE READIN READS THE INPUT FOR THE PROGRAM SENANAL. ALL * 0* VARIABLES ARE DEFINED IN THE SENANAL COMMENT CARDS. * C’ * 0* THIS IS INSTALLATION DEPENDENT SECTION OF THE METHOD AS IT USES * 0* FREE FORMAT INPUT. (VARIABLES SEPARATED BY A COMMA ) * 0* * 0* FORMAT OF INPUT CARDS * C* * 0* SET 1 - TITLE (ONE CARD ) * 0* * 0* SET 2 = IOMEGA,IMETH,NPARA (INTEGERS) * 0* * 0* SET 3 IFF IOMEGA - 1 * 0: SET 3A . IW(1),IW(2),IW(3)....,IW(NPARA),IACCUR (INTEGERS): 0 0* SET 4 . NSIMUL,TNPTS,ITRANS,NFUNC,TSTART (TSTART IS A REAL, 0* THE OTHERS ARE INTEGERS* C* * 0* SET 5 = TIME(1),TIME(2),...,TIME(TNPTs) ( REALS ) * 0* * 0* SET 6 . PHI(1),PHI(2),...,PHI(NPARA) (REALS) * C* * 0* SET 7 - PLo(1),PL0(2),...,PL0(NPARA) (REALS) * 0* § 0* SET 8 - BEGIN(1),BEGIN(2),...,BEGIN(NFUN0) (REALS) * C* * 0* SET 9 . NLABEL(1),...,NLABEL(NFUNC) * 0* TO BE READ IN ONE (1) VALUE TO A CARD (A10 FORMAT) : 0* C* * 0* VARIABLES * C* * 0* BEGIN(NFUNC) - THE INITIAL CONDITIONS, 0R EQUIVALENTLY‘ * 0* (REAL) THE VALUES OF THE OBJECT FUNCTIONS AT TSTART. : ci- C* * 0* IACCUR . ORDER OF ACCURACY OF THE FREQUENCY SET * 0* (INTEGER) . C* * C‘ * 0* IOMEGA . 0 IF FOURIER 4TH ORDER SET IS TO BE USED * 0* (INTEGER) 1 IF SPECIAL FREQUENCY SET IS To BE USED : C‘ -1 IF STANDARD WALSH FREQUENCY SET IS TO BE USED 229 Program SENANAL, CONT'D. SUBROUTINE READIN C’ * 0* IMETH - METHOD FLAG FOR SENSITIVITY ANALYSIS; . 1 FOR FOURIER * 0* (INTEGER) ANALYSIS, . 2 FOR WALSH ANALYSIS. * C1! 
0* ITRANS = FLAG FOR TYPE OF TRANSFORMATION FUNCTION 0* (INTEGER) SEE SUBROUTINE PARAM FOR DETAILS. C1!» 0* Iw(NPARA) . AN ARRAY CONTAINING THE FREQUENCY SET TO BE USED IN 0* (INTEGER) THE S. A. RUN. 0* IF 'IOMECA' .EQ. 1, THIS ARRAY MUST BE READ IN FROM 0* CARDS. OTHERWISE THE FREQUENCY SETS ARE CREATED 0* INTERNALLY. Ci C* NFUNC 8 NUMBER OF OBJECT FUNCTIONS WHICH WILL BE SAVED 0* (INTEGER) AT EACH TIME POINT. C1! 0* NLABEL(NFUNC) = THE NAMES (LABELs) OF THE OBJECT FUNCTIONS 0* NLABEL(1) SHOULD BE THE NAME OF THE FIRST 0* OBJECT FUNCTION, ETC. cl- 0* NPARA - NUMBER OF PARAMETERS TO VARY 0* (INTEGER) 0* NSIMUL - NUMBER OF SIMULATIONS 0* (INTEGER) 0* PHI(NPARA) = MAXIMUM VALUES OF THE PARAMETERs( OR ONE SIGMA MAX) 0* REAL 0* 0* PLO(NPARA) = MINIMUM VALUES OF THE PARAMETERs( OR ONE SIGMA MIN) 0* (REAL) C} 0* TIME(TNPTS) . ARRAY CONTAINS THE TIME POINTS AT WHICH THE 0* (REAL) OUTPUT FUNCTIONS ARE TO BE SAVED AND THE 0* SENSITIVITY ANALYZED. C1} 0* TSTART - INITIAL TIME POINT, SO THERE ARE NO 3. A. VALUES SAVED 0* (REAL) AT THIS POINT 0* TITLE(B) - A ONE CARD TITLE FOR S. A. RUN 0* (INTEGER) (WRITTEN IN 8A1O FORMAT ) 0* TNPTS - NUMBER OF TIME POINTS 0* (INTEGER) 0* ' CW *ttttttakatttatlktlk**********##********tittttt REAL TIME(150),BEGIN(40) 0* C* C* C* 0* C* C* 10 0* 0* c-l- C* 100 230 Program SENANAL, CONT'D. SUBROUTINE READIN INTEGER TITLE(8),IOMEGA,IMETH,NFUN0,ITRANS,TNPTS,NPARA,Iw(NPARA) INTEGER NLABEL(40) « JCOMMON /GETERR/ IFLAG,NUMVAR,RAB0(40) EQUIVALENCE (RAB0(1),IRAB0(1)) INTEGER IRAB0(40) IO IS THE INPUT UNIT ( TAPE1 ) DATA IO/1/ READ IN SET 1 READ(IO,10) (TITLE(J),J=1,8) FORMAT(8A10) READ IN SET 2 IACCUR - 4 ICARD = 2 CALL GETNUM(Io) IF( IFLAG .GE. 0 ) CALL 0ETERR(IFLAG, ICARD, NUMVAR) IOMEGA = IRAB0(1) IMETH . IRAB0(2) NPARA -- IRAB0(3 ) IF(IOMEGA .LT. -1 .OR. IOMEGA .GT. 1 ) STOP "R1" IF(IMETH .LT. 1 .OR. IMETH .GT. 2 ) STOP "R2" IF( NPARA .LT. 1 ) STOP "R3" OBTAIN FREQUENCY SET IF(IOMECA .EQ. 0) CALL FOURST(IW,NPARA) IF(IOMEGA .EQ. 
-1) CALL WALSET(IW,NPARA) IF(IOMECA .NE. 1) GO TO 100 READ IN SPECIAL FREQUENCY SET (SET 3 ) READ IN THE ACCURACY OF THE SET ALSO NP1 . NPARA + 1 CALL IREAD(IW,NP1,IO, ICARD) IACCUR - IW(NP1) CONTINUE READ IN SET 4 ICARD . ICARD + 1 CALL GETNUM(Io) IF( IFLAG .GE. 0 ) CALL GETERR(IFLAO, ICARD, NUMVAR) NSIMUL . IRAB0(1) ' TNPTS = IRAB0(2) ITRANS - IRAB0(3) NFUNC = IRAB0(4) TSTART . RAB0(5) Ca» 0* 0* Ci 0* c-l 0* 0* (3* 0* cl; c-lv 150 175 101 231 Program SENANAL, CONT'D. SUBROUTINE READIN IF(NFUNC .LT. 1 .OR. NFUNC .GT. 40 ) STOP "R4" IF( NSIMUL .LT. 1) STOP "R5" READ IN THE TIME POINTS ( SET 5) CALL RREAD(TIME,TNPTS,IO, ICARD) READ IN PHI(J) ( SET 6 ) CALL RREAD(PHI,NPARA,IO, ICARD) READ IN PLO(J) ( SET 7 ) CALL RREAD(PLO,NPARA,IO, ICARD) READ IN INITIAL VALUES ( SET 8 ) CALL RREAD(BEGIN,NFUN0,IO, ICARD) DO 175 J=1,NFUN0 READ(IO,1SO) NLABEL(J) ICARD . ICARD + 1 FORMAT(A1O) CONTINUE WRITE(2,1O1) ICARD FORMAT(1H ,/,I6,* DATA CARDS READ IN *) RETURN END 232 Program SENANAL, CONT'D. SUBROUTINE FOURST SUBROUTINE FOURST(IW,NPARA) 0* GWWWWWWMWW C* * 0* SUBROUTINE FOURST CALCULATE THE STANDARD 4TH ORDER CORRECT * 0* FOURIER FREQUENCY SET FOR "NPARA" PARAMETERS. * 0* ._ * 0* REFERENCE; CUKIER,SHAILBY,SHULER. JOURNAL OF CHEMICAL PHYSICS, * 0: VOL 63, NO. 3, (1975) PP 1140-1149. : C CWMWWWWWW INTEGER Iw(NPARA),IOMEGA(50),IDN(49) DATA IOMEGA/O,O,1,5,11,1,17,23,19,25,41,31,23,87,67, + 73.85.143.149,99 .119,237,267,283,151.385. + 157,215,449,163.337.253.375.441.673,773,875,873,587,849, + 623,637,891.943,1171.1225,1335.1725,1663,2019/ DATA IDN/ 4,8,6,1O,2O,22,32,4O,38,26,56,62,46,76,96, + 6O,86,126,134,112,92,128,154,196,34,416,106, + 208,328,198,382,88,348,186,140,170,284,568,302,438, + 41O,248,448,388,596,216,100,488,166/ 0* IF(NPARA .GT. 50 ) STOP "F1" IW(1) - IOMEGA(NPARA) DO 100 J=2,NPARA Iw(J) a IW(J-1) + IDN(NPARA + 1 - J) 100 CONTINUE RETURN END 233 Program SENANAL, CONT'D. 
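FOURST builds the fourth-order-correct frequency set by taking a base frequency from the IOMEGA table and accumulating increments from the IDN table: IW(1) = IOMEGA(NPARA), then IW(J) = IW(J-1) + IDN(NPARA+1-J). A Python sketch of the same recurrence, using the first entries of the tables as transcribed above (the DATA values are subject to OCR error in this scan, so they should be checked against Cukier, Schaibly and Shuler (1975) before reuse):

```python
# Leading entries of the IOMEGA / IDN tables as transcribed from FOURST.
IOMEGA = [0, 0, 1, 5, 11, 1, 17, 23, 19, 25]
IDN = [4, 8, 6, 10, 20, 22, 32, 40, 38]

def fourier_freqs(npara):
    """Fourth-order-correct Fourier frequency set for `npara` parameters.

    IW(1) = IOMEGA(NPARA); IW(J) = IW(J-1) + IDN(NPARA+1-J),
    translated to 0-based Python indexing.
    """
    iw = [IOMEGA[npara - 1]]
    for j in range(2, npara + 1):
        iw.append(iw[-1] + IDN[npara - j])  # IDN(NPARA+1-J)
    return iw
```

The increments are chosen so that no frequency in the set can be written as a low-order integer combination of the others, which is what makes the partial variances separable to fourth order.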
      SUBROUTINE WALSET(IW,NPARA)
C**********************************************************************
C*                                                                    *
C*  SUBROUTINE WALSET CALCULATES THE FREQUENCY SET FOR EXACT WALSH    *
C*  ANALYSIS FOR 'NPARA' PARAMETERS.                                  *
C*                                                                    *
C*  REFERENCE: T.H. PIERCE, PHD THESIS (1980)                         *
C*                                                                    *
C**********************************************************************
      INTEGER IW(NPARA)
      DO 100 J=1,NPARA
      IW(J) = 2**(J-1)
  100 CONTINUE
      RETURN
      END

      SUBROUTINE IREAD(IRRAY,LAST,IO,ICARD)
C**********************************************************************
C*                                                                    *
C*  SUBROUTINE IREAD READS IN A VARIABLE LENGTH (LAST) INTEGER        *
C*  ARRAY USING FREE FORMAT INPUT.                                    *
C*                                                                    *
C**********************************************************************
      INTEGER IRRAY(LAST)
      COMMON /GETERR/ IFLAG,NUMVAR,RABC(40)
      EQUIVALENCE (RABC(1),IRABC(1))
      INTEGER IRABC(40)
C*
      KOUNT = 1
   10 CONTINUE
      ICARD = ICARD + 1
      CALL GETNUM(IO)
      IF( IFLAG .GE. 0 ) CALL GETERR(IFLAG, ICARD, NUMVAR)
C*  IF THE CARD READ WAS BLANK, NUMVAR = 0.
      IF( NUMVAR .LT. 1 ) GO TO 10
      DO 20 J=1,NUMVAR
      IRRAY(KOUNT) = IRABC(J)
      KOUNT = KOUNT + 1
      IF(KOUNT .GT. LAST) GO TO 25
   20 CONTINUE
C*  RETURN FOR ANOTHER CARD FULL OF VARIABLES
      GO TO 10
   25 CONTINUE
C*  ALL DONE
      RETURN
      END

      SUBROUTINE RREAD(ARRAY,LAST,IO,ICARD)
C**********************************************************************
C*                                                                    *
C*  SUBROUTINE RREAD READS IN A VARIABLE LENGTH (LAST) REAL           *
C*  ARRAY USING THE FREE FORMAT ROUTINE GETNUM.                       *
C*                                                                    *
C**********************************************************************
      REAL ARRAY(LAST)
      COMMON /GETERR/ IFLAG,NUMVAR,RABC(40)
      EQUIVALENCE (RABC(1),IRABC(1))
      INTEGER IRABC(40)
C*
      KOUNT = 1
   10 CONTINUE
      ICARD = ICARD + 1
      CALL GETNUM(IO)
      IF( IFLAG .GE. 0 ) CALL GETERR(IFLAG, ICARD, NUMVAR)
C*  IF THE CARD READ WAS BLANK, NUMVAR = 0.
      IF( NUMVAR .LT. 1 ) GO TO 10
      DO 20 J=1,NUMVAR
      ARRAY(KOUNT) = RABC(J)
      KOUNT = KOUNT + 1
      IF(KOUNT .GT. LAST) GO TO 25
   20 CONTINUE
C*  RETURN FOR ANOTHER CARD FULL OF VARIABLES
      GO TO 10
   25 CONTINUE
C*  ALL DONE
      RETURN
      END

      SUBROUTINE PARAM(METH,IQ,TRANS,PHI,PLO,NPARA,P,NSIMUL,IW)
C**********************************************************************
C*                                                                    *
C*  SUBROUTINE PARAM COMPUTES THE IQ'TH PARAMETER VECTOR.             *
C*  THIS IS DONE USING A PRESELECTED ( TRANS,METH )                   *
C*  TRANSFORMATION FUNCTION.                                          *
C*                                                                    *
C*  METH =
1 ---- FOURIER METHOD * 0* i 0* TRANS * C* 1 -> USE FOURIER LOG-UNIFORM TRANSFORMATION * C* PHI . NOMINAL*EXP(DELTA) * C* PLO . NOMINAL*EXP( -DELTA ) * C* WITH LN(P) SPREAD UNIFORMLY OVER @LN(PHI) , LN(PLO)@ * 0* M C* 2 => USE FOURIER UNIFORM TRANSFORMATION * C* PHI - NOMINAL + DELTA * C* PLO - NOMINAL - DELTA * C* P IS UNIFORMLY SPREAD OVER @ PLO , PHI @ : ci- C* 3 => USE THE FOURIER TEST FUNCTION * C* PHI - NOMINAL*( 1 + DELTA ) * C* PLO = NOMINAL*( 1 - DELTA) * C* P(SQ) . NOMINAL*(1. + DELTA*SIN(W*SQ) ) * 0* * C* 4 => USE THE FOURIER COSH DISTRIBUTION FUNCTION * C* IN LOG(P)-SPACE * C* HERE LN(PHI)-LN(PLO) - 4.0/A * 0* WHERE 82.87 OF THE SAMPLES ARE BETWEEN PHI AND PLO : cl- C* 5 => USE THE FOURIER COSH DISTRIBUTION FUNCTION * C* IN P-SPACE * C* HERE (PHI)-(PLO) = 4.0/A * C* WHERE 82.87 OF THE SAMPLES ARE BETWEEN PHI AND PLO * 0* i 0* * C* METH - 2 ---- WALSH METHOD * 0* i C* TRANS * 0* i c* 1 => USE ARITHMETIC WALSH TRANSFORMATION * C* PHI - NOMINAL + DELTA * C* PLO - NOMINAL - DELTA * C* P IS EITHER PHI OR PLO * C* i C* 2 => USE MULTIPLICATIVE WALSH TRANSFORM * C* PHI . NOMINAL*1O**( DELTA ) 1 C* PLO a NOMINAL*1O**(-DELTA ) 0* C* c! c* 100 0* CAI» 110 200 C'l' 210 C. 300 C" 310 C“ 400 237 Program SENANAL, CONT'D. SUBROUTINE PARAM P IS EITHER PHI OR PLO REAL NOMINAL,DELTA,PHI(NPARA),PLo(NPARA),P(NPARA) INTEGER TRANS,IQ,METH,NSIMUL,IW(NPARA) INTEGER WALH ' EXTERNAL WALH IF ( METH .EQ. 2 ) GO TO 1000 TWODPI = 2.0/ACOS(-1.0) R . FLOAT(NSIMUL) SQ . FLOAT(2*IQ -NSIMUL- 1)/(R*TWODPI) GO TO (100,200,300,400,SOO)TRANS CONTINUE FOURIER METHOD WITH LOG-UNIFORM TRANSFORMATION FUNCTION DO 110 J=1,NPARA DELTA - O.5*ALOG(PHI(J)/PL0(J)) NOMINAL . 
SQRT(PHI(J)*PLO(J)) P(J) = NOMINAL*EXP(DELTA*TWODPI*ASIN(SIN(SQ*FLOAT(IW(J))))) CONTINUE RETURN CONTINUE FOURIER METHOD USING UNIFORM TRANSFORMATION FUNCTION DO 210 J=1,NPARA NOMINAL = O.5*(PHI(J)+PLO(J)) DELTA - ( PHI(J) - PLo(J) )*O.5 P(J) - NOMINAL + DELTA*TWODPI*ASIN(SIN(SQ*FLOAT(IW(J)))) CONTINUE RETURN CONTINUE FOURIER METHOD WITH TEST TRANSFORMATION FUNCTION DO 310 J=1,NPARA NOMINAL - O.S*(PHI(J)+PLO(J)) DELTA . (PHI(J)-PLo(J))/(PHI(J)+PL0(J)) P(J) . NOMINAL*(1.O + DELTA*SIN(FLOAT(IW(J))*SQ)) CONTINUE RETURN CONTINUE (3* (2* C* 410 C* 500 0* C" C* 510 0* 0* 1000 0* 0* 1100 C* 1110 0* 1200 0* 238 Program SENANAL, CONT'D. SUBROUTINE PARAM COSH-DISTRIBUTION FUNCTION BELL-SHAPE IN LOG(K)-SPACE DO 410 J=1,NPARA AJ = 4.0/(ALOG(PHI(J)) - ALOG(PLO(J))) THETA = SQ * FLOAT(IW(J)) UJ . (1. O/(2.*AJ))*ALOG((1. + SIN(THETA))/(1. - SIN(THETA))) NOMINAL = SQRT(PHI(J)/PL0(J)) P(J) - NOMINAL * EXP(UJ) CONTINUE RETURN CONTINUE COSH-DISTRIBUTION FUNCTION BELL-SHAPED IN K-SPACE DO 510 J=1,NPARA AJ = 4.0/(PHI(J) - PLo(J)) THETA a SQ*FLOAT(Iw(J)) UJ = (1. o/AJ)*ALOG((1. + SIN(THETA))/(1. - SIN(THETA))) NOMINAL = (PHI(J)+PLO(J))*O. 5 P(J) = NOMINAL + UJ CONTINUE RETURN CONTINUE ENTRY INTO HERE FOR WALSH ANALYSIS ISO 8 IQ - 1 GO TO (1100,1200) TRANS CONTINUE ARITHMETIC WALSH TRANSFORMATION FUNCTION Do 1110 J=1,NPARA NOMINAL - O.5*(PHI(J)+PL0(J)) DELTA - ( PHI(J) - PLo(J) )*O.5 P(J) =.NOMINAL + DELTA*FLOAT(WALH(IW(J),ISQ)) CONTINUE RETURN CONTINUE MULTIPLICATIVE WALSH TRANSFORMATION FUNCTION Do 1210 J=1,NPARA DELTA = O.5*ALOG10(PHI(J)/PL0(J)) NOMINAL = SORT(PHI(J)*PL0(J)) P(J) . NOMINAL*10.0**(DELTA*FLOAT(WALH(IW(J),ISQ))) 239 Program SENANAL, CONT'D. SUBROUTINE PARAM 1210 CONTINUE RETURN END c-l‘l-I-I' 2ND Program SENANAL, CONT'D. SUBROUTINE GETNUM SUBROUTINE GETNUM(LINPUT) COMMON /GETERR/ ICRK,JREAD,RABC(40) DIMENSION IRABc(4o) EQUIVALENCE (RABC(1),IRABC(1)) COMMON /EL/ L(80) LOGICAL INTR CWWWWWW Cw C‘I-Inl-l» Cm Cw Cm! c-l-‘I-I-Iv can Cm Cal-Iv cm Gift} CM 0% c-I-lt-l-l Cw Cw Cm! 
can Cw Cm Cm Cw GM Cm.» Oman» 0% Cw can Cm cw can cfi-I-I-I» can CHI-*** cit-l“ Ca“ Cw can» cit-l1” can *** FREE FORM VARIABLE INPUT ROUTINE. *** ROUTINE ACCEPTS A,F,E AND I FORMAT INPUT. ALL BLANKS EXCEPT IN HOLLERITH STRINGS ARE IGNORED. THE ONLY LEGAL DELIMITER IS COMMA (,), ANY OTHER RESULTS IN ERROR TERMINATION. KL IS THE MAXIMUM COLUMN WIDTH COUNTER. ONLY 80 COLUMNS ARE READ, SO ONLY 40 VARIABLES CAN BE RETURNED. TO ENLARGE THIS, CHANGE THE DATA AND COMMON STATEMENTS TO REFLECT THE SIZE YOU WISH. --- INPUT --- LINPUT -- THE TAPE UNIT BEING READ FROM --- OUTPUT --- JREAD IS THE NUMBER OF VARIABLES RETURNED IN COMMON /GETERR/ RABc(=IRABc) CONTAINS THE READ VARIABLES. COMMON /EL/ CONTAINS THE LINE AS READ IN 80R1 ERROR CODES: (STANDARD IF(UNIT) VALUEs) IORK=1 ILLEGAL CHARACTER (MSG PRINTED) VARIABLES TO POINT OF ERROR RETURNED ICRK=O EOF ON READ, JREAD=O ICRK3-1 NORMAL TERMINATION --- INTERNAL VARIABLES --- S IS SIGN OF VARIABLE, IFA THE SIGN OF THE EXPONENT NUM IS THE MANTISSA, IE THE EXPONENT IP IS THE NUMBER OF DECIMAL PLACES INPUT. I IS THE CHARACTER COUNTER (1-80) INTR -- REAL VARIABLE(FALSE)/INTEGER(TRUE) FLAG HOLLERITH STRINGS OF 10 OR MORE CHARACTERS MAY BE INPUT WITHOUT COMMAS EVERY 1O CHARACTERS AND WILL BE INSERTED 1O CHARACTERS PER WORD WITH BLANK FILL (STANDARD A FORMAT). ANY COMMAS FOUND IN THE HOLLER- ITH STRING END THE STRING AT THAT POINT. THE FIRST CHARACTER OF THE STRING MAY NOT BE A COMMAS, ,) PERIOD(. ) PLUs(+ ) OR MINUS(- ) OR DIGIT(O 9 OR BLANK. 22:21:313231312“11112111““11111111“: Cm CM CM Cm Cm 2H1 Program SENANAL, CONT'D. SUBROUTINE GETNUM HOLLERITH STRINGS WHICH ARE THE LAST INPUT ON A LINE, BUT NOT ENDING WITH A COMMA, WILL BE ASSUMED TO CONTINUE RIGHT OUT TO COLUMN 80(OR KL) AND BLANK FILLED VARIABLES WILL THUS BE RETURNED. 
”it: CWW INTEGER PERIOD,COMMA,BLANK,ZERO,PLUS,EE,DD DATA PERIOD,COMMA,BLANK,ZERO /1R., 1R,, 1R , 1RO/ DATA NINE,PLUS,MINUS,EE,DD /1R9, 1R+, 1R-, 1RE, 1RD/ DATA KL /80/ Cm 0.4%; 10 CW 25 C**** 251 26 cifififi 0**** Cm Cm READ THE INPUT LINE FROM UNIT LINPUT READ (LINPUT,1ooo) (L(I),I=1,80) JREAD=O IF (EOF(LINPUT) .NE. 0.) GO TO 998 ICRK=-1 I'1 PREPARE FOR A NEW VARIABLE NUM*IE=IFA=IP=O INTR=.TRUE. S=1.0 DECODE THE FIRST CHARACTER IN THE VARIABLE IF (I .GT. KL) RETURN IF (L(I) .LE. NINE .AND. L(I) .GE. ZERO) Go TO 35 IF IF (L(I) (L(I) S'-1.0 GO IF IF IF DO TO 30 (L(I) (L(I) (L(I) .EQ. PLUS) GO TO 3O .NE. MINUS) GO TO 25 .EQ. PERIOD) GO TO 39 .EQ. COMMA) GO TO 60 .EQ. BLANK) GO TO 291 HOLLERITH VARIABLE (A FORMAT) 26 LL=1,1O JL=LL IF (I .GT. KL) GO TO 27 IF (L(I) .EQ. COMMA) GO TO 27 ISH=6 O-6*LL IE-SHIFT(L(I) ,ISH) NUM'O I=I+1 CONTI R(NUM,IE) NUE FULL WORD(1O CHARS) FILLED IF YOU FALL THRU HERE. STORE THE HOLLERITH VARIABLE JREAD=JREAD+1 IRABO(JREAD)=NUM SKIP THE TRAILING COMMA, OTHERWISE ASSUME THE HOLLERITH STRING CONTINUES CM 27 29 291 cm 30 35 C**** 39 40 CM cm 50 51 52 2&2 Program SENANAL, CONT’D. SUBROUTINE GETNUM IF (I .GT. KL) RETURN IF (L(I) .EQ. COMMA) GO To 291 NUM=O GO TO 251 BLANK FILL WORD DO 29 LL=JL,1O ISH=6O-6*LL IE-SHIFT(BLANK,ISH) NUM-OR(NUM,IE) STORE THE PARTIAL HOLLERITH VARIABLE JREAD-JREAD+1 IRABc(JREAD) - NUM I-I+1 GO TO 10 INTEGER PORTION OF VARIABLE I=I+1 IF (L(I) .EQ. PERIOD) GO TO 39 IF (I .GT. KL) GO To 60 IF (L(I) .EQ. BLANK) GO TO 30 IF (L(I) .EQ. EE) GO TO 50 IF (L(I) .EQ. COMMA) Go TO 60 IF (L(I) .LT. ZERO .OR. L(I) .GT. NINE) GO TO 999' NUM - NUM*1O + (L(I)-27) GO To 30 EVALUATE DECIMAL PORTION INTR-.FALSE. I=I+1 IF (I .GT. KL) Go TO 60 IF (L(I) .EQ. BLANK) GO TO 40 IF (L(I) .EQ. EE) GO To 50 IF (L(I) .EQ. COMMA) GO TO 60 IF (L(I) .LT. ZERO .OR. L(I) .GT. NINE) GO TO 999 INCREMENT THE DECIMAL COUNT IP=IP+1 NUM - NUM*1O + (L(I)-27) GO TO 40 EVALUATE EXPONENT IFA-1 INTR-.FALSE. I-I+1 IF (L(I) .EQ. PLUS) GO TO 51 IF (L(I) .NE. 
MINUS) GO TO 52
      IFA=-1
   51 I=I+1
   52 IF (I .GT. KL) GO TO 60
      IF (L(I) .EQ. COMMA) GO TO 60
      IF (L(I) .EQ. BLANK) GO TO 51
      IF (L(I) .LT. ZERO .OR. L(I) .GT. NINE) GO TO 999
      IE = IE*10 + (L(I)-27)
      GO TO 51
C**** STORE THE FINISHED VARIABLE (EXCEPT HOLLERITH)
   60 CONTINUE
C**** CHECK ILLEGAL EXPONENT RANGE. THE CHECK IS NOT PERFECT:
C**** DIGITS BEFORE THE MANTISSA PERIOD ARE NOT CONSIDERED.
      IEX=IE*IFA-IP
      IF (IEX .GT. 522) GO TO 995
      IF (IEX .LT. -294) GO TO 995
      I=I+1
      JREAD=JREAD+1
      IF (INTR) GO TO 62
      RABC(JREAD) = S*(FLOAT(NUM)*10.**IEX)
      GO TO 64
   62 IRABC(JREAD) = S*NUM
   64 IF (I .GT. KL) RETURN
      GO TO 10
C**** ERROR CONDITION CODE
  995 I=I-1
  999 CONTINUE
      ICRK=1
      JM=I-1
      IF (JM .LE. 0) JM=1
      PRINT 1010, L,(BLANK,LL=1,JM),PLUS
      RETURN
C**** EOF ENCOUNTERED, JREAD ALREADY ZEROED, SET ICRK AND RETURN.
  998 ICRK=0
      RETURN
C**** **FORMATS**
 1000 FORMAT (80R1)
 1010 FORMAT (*0ILLEGAL CHARACTER FOUND AT PLUS(+) */1X,80R1/1X,80R1)
      END

      INTEGER FUNCTION WALH(N,IT)
C**********************************************************************
C*                                                                    *
C*  FUNCTION WALH(N,IT) COMPUTES THE HADAMARD-ORDERED WALSH           *
C*  FUNCTION OF SEQUENCY N AT TIME POINT IT, WHERE N IS IN THE        *
C*  RANGE ( 0,1,2,3,...,(2**(LENGTH+1) - 1) ) AND IT IS IN THE        *
C*  RANGE ( 0,1,2,3,4,...,(2**(LENGTH+1) - 1) ).                      *
C*                                                                    *
C**********************************************************************
      INTEGER TBIT(60),NBIT(60),M,I
      REAL OLDN,FRAC
      DATA LENGTH /15/
C*  DECODE N INTO ITS BINARY REPRESENTATION
      OLDN = FLOAT(N)
      DO 10 I=1,LENGTH
      M = OLDN/2.0
      FRAC = OLDN/2.0 - FLOAT(M)
      NBIT(I) = FRAC*2.0
      OLDN = FLOAT(M)
   10 CONTINUE
C*  DECODE IT INTO ITS BINARY REPRESENTATION
      TOLD = IT
      DO 27 I=1,LENGTH
      M = TOLD/2.0
      FRAC = TOLD/2.0 - FLOAT(M)
      TBIT(I) = FRAC*2.0
      TOLD = FLOAT(M)
   27 CONTINUE
C*  WE NOW KNOW THE BINARY REP FOR T AND N: CALCULATE THE EXPONENT.
C*  (THE ORIGINAL LISTING SUMS NBIT(1)*TBIT(1) INSIDE THE LOOP,
C*  WHICH IGNORES ALL BUT THE LOW-ORDER BIT; THE SUM OVER I IS
C*  CLEARLY INTENDED AND IS USED HERE.)
      NSUM = NBIT(1)*TBIT(1)
      DO 50 I=2,LENGTH
      NSUM = NSUM + NBIT(I)*TBIT(I)
   50 CONTINUE
      WRITE(2,2) (NBIT(L),L=1,LENGTH),(TBIT(K),K=1,LENGTH),NSUM
    2 FORMAT(* *,*NBIT=*,15I1,* TBIT=*,15I1,* NSUM =*,I4)
      WALH = (-1)**NSUM
      RETURN
      END
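WALH's bit decomposition amounts to raising (-1) to the number of bit positions in which the sequency N and the time index IT are both 1 — a parity of N AND IT in modern terms. A Python sketch (illustrative, not part of the listing):

```python
def walsh(n, t):
    """Hadamard-ordered Walsh function wal(n, t) at integer time
    index t: (-1)**(number of bit positions where n and t are both
    1).  Equivalent to WALH's sum-of-bit-products exponent.
    """
    return -1 if bin(n & t).count("1") % 2 else 1
```

Evaluated over t = 0..2**k - 1, the rows walsh(n, ·) form the order-2**k Hadamard matrix, which is what makes the frequency set 2**(j-1) of WALSET exact for discrete models.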
      SUBROUTINE TIMES(ISUB,ITYPE)
C**********************************************************************
C*                                                                    *
C*  SUBROUTINE TIMES COMPUTES THE CPU TIME SPENT BETWEEN CALLS.       *
C*  THIS IS INSTALLATION DEPENDENT.                                   *
C*  ITS USE IS FOR DOCUMENTATION PURPOSES ONLY.                       *
C*                                                                    *
C*  ISUB  = THE PROCEDURE TO BE TIMED.                                *
C*                                                                    *
C*  ITYPE = FLAG: .LT. 1 FOR TIMING                                   *
C*                .GE. 1 FOR FINAL PRINT                              *
C*                                                                    *
C**********************************************************************
      REAL TIMS(15),NEW,LAST
      INTEGER NAME(15)
C*
      DATA TIMS/15*0./
      DATA LAST /0./
      DATA NAME/10HREAD INPUT,10HSIMULATION,10HPARAM CALC,10HMODEL CALC
     +,10HWRITE OUTP/
      DATA LENGTH /5/
C*
      IF(ITYPE .GE. 1) GO TO 5
      NEW = SECOND(CPU)
C*  INITIALIZE THE VALUES ON FIRST ENTRY
      IF( LAST .EQ. 0. ) LAST = NEW
      TIMS(ISUB) = TIMS(ISUB) + NEW - LAST
      LAST = NEW
      RETURN
    5 CONTINUE
      WRITE(2,100)
  100 FORMAT(* *,/,/,/,5X,* SECONDS*,4X,* PROCEDURE *,/)
      TSUM = 0.
      DO 10 J=1,LENGTH
      TSUM = TSUM + TIMS(J)
      WRITE(2,150) TIMS(J),NAME(J)
  150 FORMAT(* *,5X,F7.3,5X,A10)
   10 CONTINUE
      WRITE(2,160) TSUM
  160 FORMAT(* *,/,/,5X,F7.3,5X,* TOTAL TIME *)
      RETURN
      END

      SUBROUTINE GETERR(IFLAG,ICARD,NUMVAR)
C**********************************************************************
C*                                                                    *
C*  SUBROUTINE GETERR IS AN ERROR MESSAGE SUBROUTINE WHICH            *
C*  TERMINATES SENANAL IF INPUT WAS NOT CORRECTLY READ IN.            *
C*                                                                    *
C*  IFLAG  IS THE FLAG FROM SUBROUTINE GETNUM                         *
C*  ICARD  IS THE INPUT CARD NUMBER OF THE ERROR                      *
C*  NUMVAR IS THE NUMBER OF VARIABLES ON CARD ICARD                   *
C*                                                                    *
C**********************************************************************
      INTEGER IFLAG,ICARD,NUMVAR
      WRITE(2,11) IFLAG,ICARD,NUMVAR
   11 FORMAT(1H ,5H*****,* ERROR IN GETNUM *,/,15X,* ICRK =*,I5,
     +/,15X,* ICARD =*,I5,/,15X,* NUMVAR =*,I5)
      STOP "GETER"
      END

Program TRANS

      PROGRAM TRANS(OUTPUT=65,TAPE2=OUTPUT,TAPE3=513,TAPE4=513,
     + TAPE5=513,TAPE6=513,TAPE7=513,TAPE8=65,TAPE9=65)
C*
C**********************************************************************
C*  PROGRAM TRANS IS THE FOLLOW-UP PROGRAM TO PROGRAM 'SENANAL'.
C*  IN THIS PROGRAM 'TAPE3', THE OUTPUT FILE FROM 'SENANAL', IS
C*  READ IN AND OPERATED ON.
C*
C*  FIRST THE SIMULATION RUNS ARE TRANSPOSED INTO SENSITIVITY-
C*  ANALYSIS POINTS.
(INSTEAD OF ALL THE TIME POINTS OF ONE FUNCTION 0* A SIMULATION, WE HAVE ALL THE FUNCTIONS AT ONE TIME POINT, 0* A SENSITIVITY-ANALYSIS POINT ) UPON SUCCESSFUL TRANSPOSITION 0* WE ITERATE THRU THE TIME POINTS. EACH S.A. POINT IS THEN 0* TRANSFORMED INTO SEQUENCY SPACE ( FOURIER OR WALSH ). 0* THIS TRANSFORMATION GIVES US THE EXPANSION COEFFICIENTS 0* FROM WHICH WE COMPUTE THE PARTIAL VARIANCES OF THE OBJECT 0* FUNCTIONS. Ci 0* AFTER THE PARTIAL VARIANCES ARE CALCULATED THEY ARE 0* WRITTEN OUT ONTO TAPE9. THE EXPANSION COEFFICIENTS ARE WRITTEN 0* OUT ONTO TAPE8. THE TRANSPOSED MATRIX IS AVAILABLE ON TAPE7, 0* LOGICAL UNIT 'WRITEUP'. lit 3* ** ti 13* ****** #* *¥ ***1‘ **** ** ***.* C-I C* VARIABLES C" C* F( ) - AN ARRAY WHICH HOLDS ONE OBJECT FUNCTION C* IE AT LEAST OF LENGTH 'NSIMUL' C* C* C* A( ) = A REAL ARRAY WHICH WILL HOLD THE COSINE C* COEFFICIENTS IN THE FOURIER METHOD. THEREFORE IT C* MUST BE OF LENGTH (NSIMUL + 1) c-I C* B( ) . A REAL ARRAY WHICH WILL HOLD THE SINE COEFFICIENTS C* IN THE FOURIER METHOD. IT ALSO MUST BE OF LENGTH c* ( NSIMUL + 1) 0* C* IWK( ) . THE WORKING STORAGE OF THIS PROGRAM. THIS ARRAY Is C* USED AS STORAGE FOR DIFFERENT TEMPORARY VARIABLES. C* * C* ' MINIMALLY IT MUST BE DIMENSIONED FOR THE FFT ROUTINES* C* IF NSIMUL IS THE NUMBER OF SIMULATIONS IN THE FOURIER* C* METHOD THEN THRU SYMMETRY WE HAVE 2*NSIMUL POINTS C* TO FOURIER TRANSFORM. * 0* THE EQUATION FOR IWK DIMENSIONS IS: * 2&8 Program TRANS CONT'D. C* MINIMUM LENGTH = 5*( F + N ) + 26 (3* C* WHERE F IS THE NUMBER OF THE PRIME FACTORS OF NSIMUL C* EXCLUDING THE TRIVIAL FACTOR 1 . C* N . 2*NSIMUL 0* C* IE IF NSIMUL . 21 ( HAS To BE ODD ) 0* THEN 21 a 3'7 -> F = 2 0* MINIMUM LENGTH - 5*( 2 + 42 ) + 26 = 158 0* (3* C* ERROR CODES: 0* C* STOP "WALPR" , STOPS EXECUTION IF TWO FREQUENCIES ARE EQUAL C* SEE SUBROUTINE WALPR. (3* c* STOP "MTH" , THE METHOD READ IN ON TAPE3 WAS SOMETHING OTHER 0* THAN WALSH OR FOURIER. SEE PROGRAM TRANS 0* 0* STOP 3 , ERROR IN MATRIX TRANSPOSITION. 
SEE SUBROUTINE TRANP. 0* 0* C* OUTPUT FORMAT FOR TAPE? -PARTIAL VARIANCES- 0* C* 1) LABEL,TIME,AVE,STDDEV,RELDEV,LENGTH,NPARA C* ( A8,4(2X,E15.7),2I6 -FORMAT ) 0* G: LABEL . NAME OF THE OBJECT FUNCTION C C* TIME . TIME VALUE OF OBJECT FUNCTION 0* C* AVE - AVERAGE VALUE OF OBJECT FUNCTION AT THIS TIME 0* C* STDDEV - SQUARE ROOT OF TOTAL VARIANCE OF OBJECT FUNCTION 0* AT THIS TIME (3* C* RELDEV a STDDEV/AVE: STANDARD DEVIATION DIVIDED BY AVERAGE C* VALUE c* C* LENGTH - NUMBER OF PARTIAL VARIANCES TO WRITE OUT C* ( NPARA*(NPARA + 1))/2 0* C* NPARA - NUMBER OF PARAMETERS ANALYZED. 0* C. C* 2) ( SWLJ(K),K-1,LENGTH ) C* ( 5(2X,E15.7) -FORMAT ) 0* 0* SWLJ(K) = A SINGLE OR COUPLED PARTIAL VARIANCE, WHERE ***... ***ttt ****** *1: slut an: 11* aka: ***!!! 8:11 #:1121113 8* ##1## ***... *** 2M9 Program TRANS CONT'D. 0* CARDS 1 AND 2 ARE REPEATED FOR EACH LABEL-TIME POINT 0* WALSH ANALYSIS 0* C* 1) LABEL,TIME,NCOEFF ( A8,E15.7,I6 -FORMAT ) 0* C* 2) (F(K),K-1,NCOEFF ) ( 4020 -FORMAT ) 0* 0* CARDS 1 AND 2 ARE REPEATED FOR EACH DIFFERENT LABEL-TIME 0* POINT. C* K = NPARA*(L-1) - (L*(L-1))/2 + J * C* * C* * C* 5) (B(IW(L)+1) L=1,NPARA) * C* ( 5(2X,E15.7) -FORMAT * C* * C* B(Iw(L)+1) . IN FOURIER ANALYSIS, IT IS THE SINE * C* EXPANSION COEFFICIENT FOR THE Iw(L)TH * C* FREQUENCY. :1 C* IN WALSH ANALYSIS, IT IS THE EXPANSION * C* COEFFICIENT FOR THE IW(L)TH FREQUENCY. * C* * C* CARDS 1,2 AND 3 ARE REPEATED FOR EACH LABEL-TIME POINT. 
* C* * C* * C* OUTPUT FORMAT FOR TAPE8 -EXPANSION COEFFICIENTS- : 0* 0* FOURIER ANALYSIS * 0* * C* 1) LABEL,TIME,NZ ( A8,E15.7,I6 -FORMAT) I 0* 0* 2) ((A(K),B(K)),K=1,N2) ( 4020 -FORMAT ) g 0* * * * * * * * * * * * * 0*WWWWWWWW REAL TIME(150) REAL MINSW(10,55),MAXSW(10,55),AVESW(10,55) REAL A(2O48),B(2O48) REAL F(2048) REAL x(2048) 0* INTEGER TITLE(8),METHOD,NPARA,TNPTS,NSIMUL,IW(50).NFUNC INTEGER WRITEUP,WRITED,READUP,READOWN INTEGER IWK(sooo) INTEGER ITYP(2) INTEGER ILABEL(1o) 0* LOGICAL TEST 0* 0* 0* 0* C'I' 0* 0* 0* 0* 1O 11 20 21 250 Program TRANS CONT'D. EQUIVALENCE (IWK(1),F(1)) COMMON SWLJ(21O) DATA IUNIT/3/. IHALF/2048/ DATA IFLAG/O/ DATA READUP/4/, READOWN/S/, WRITEUP/6/, WRITED/7/ DATA MAXSW/550*0.0/, MINSW/SSO*1.O/, AVESW/550*O./ DATA ITYP/1OH FOURIER ,1OH WALSH / DATA TEST/ .FALSE. / ‘ INITIALIZE THE TIMING ROUTINE CALL TIMES(1,O) READ TAPE3 READ(3.1O) (TITLE(J),J=1,8) FORMAT(8A1O) WRITE( 2,11) (TITLE(J),J=1,8) FORMAT(1H ,8A1o) , READ(5,20) METHOD,NPARA,TNPTS,NSIMUL,NFUNC FORMAT(A1O,4I6) WRITE( 2,21)METHOD,NPARA,NFUNC,NSIMUL,TNPTS FORMAT(1H ,/,1H ,A1O,* SENSITIVITY ANALYSIS USING :*,/, +* NUMBER OF PARAMETERS =*,I6,/,* NUMBER OF OBJECT FUNCTIONS =*, + I6,/,* NUMBER OF SIMULATIONS =*,I6,/,* NUMBER OF TIME POINTS +.I6) 3O 4O 41 42 5O 60 0* READ(3.30) JUNK,IACCUR FORMAT(A1O,5X,I3) READ(3.40)(IW(J),J=1,NPARA) FORMAT(16I6) WRITE( 2,41) FORMAT(1H ,* FREQUENCY SET * 1 WRITE( 2,42) (IW(J).J=1.NPARA FORMAT(15X,16I6) READ(3.30) JUNK READ(3.SO) (TIME(J),J=1,TNPTs) FORMAT(7E12.6) READ(3.60)(ILABEL(J),J=1,NFUNc) FORMAT(8(A8,2x)) DETERMINE THE TYPE OF ANALYSIS IF( METHOD .EQ. ITYP(1) ) TEST a .TRUE. IF( METHOD .EQ. ITYP(2) ) TEST = .TRUE. IF(.NOT. TEST ) STOP "MTH" 251 Program TRANS CONT'D. 0* 0* TIME THE INPUT CALL TIMES(1,0) 0* 0* 0* NOW WE ARE READY TO TRANSPOSE THE MATRIX 0 CALL TRANP( IUNIT, READUP, READOWN, WRITEUP, WRITED, + TNPTS, NFUNC, NSIMUL, IWK(1), IWK(1). 
IWK(IHALFHLIHALFJUI‘BOW) C 0* TIME THE TRANSPOSE OPERATION CALL TIMEs(2,O) 0W 0* C* INITIALIZE N2,LENGTH 0* C* N2 . LENGTH OF A FOURIER TRANSFORM COEFFICIENT VECTOR 0* C* LENGTH - LENGTH OF THE PARTIAL VARIANCE MATRIX WHEN FOLDED C* INTO A LINEAR ARRAY 0* 0mm N2 = NSIMUL + 1 LENGTH - ((NPARA*(NPARA+1))/2) 0* 0* DO 1000 ITIME=1,TNPTS 0* 0* C* NOW TRANSFORM EACH OBJECT FUNCTION AT THE TIME POINT 0* 'TIME(ITIME)' 0* DO 900 NF = 1, NFUNC 0* 0* READ(WRITEUP)(F(K),K=1,NUMRow) 0* 0* NOW HAVING SET UP THE ARRAY F TRANSFORM IT 0 IF(METHOD .EQ. ITYP(2)) CALL WHT( NSIMUL, F, IFLAG,IWK(IHALF+1) ) IF( METHOD.EQ.ITYP(1)) CALL FFAST(F,NSIMUL,X,IWK,A,B) 0* C* CALCULATE THE TIME SPENT IN TRANSFORMATION CALL TIMES(3.O) C 0* NOW CALCULATE THE PARTIAL VARIANCES 0* IF(METHOD.EQ.ITYP(2))CALL WALPAR(F,NSIMUL,IW,NPARA,SWLJ,TOTVAR) ' IF(METHOD.EQ.ITYP(1))CALL FORPAR(A,B,NSIMUL,IW,NPARA,SWLJ, 252 Program TRANS CONT'D. 0* C* CALC THE TIME SPENT IN PARTIAL VARIANCES CALCULATIONS CALL TIMEs(4,o) 0* 0* C* CALC THE PARTIAL VARIANCE STATISTICS 0* DO 300 L=1,NPARA D0 250 J=L,NPARA INDEX - NPARA*(L-1) - (L*(L-1))/2 + J , MINSW(NF,INDEX) - AMIN1( MINSW(NF,INDEx), SWLJ(INDEx) ) MAXSW(NF,INDEX) - AMAX1( MAXSW(NF,INDEX), SWLJ(INDEx) ) AVESW(NF,INDEX)=((ITIME-1.)*AVEsw(NF,INDEx)+SWLJ(INDEx))/FLOAT(ITI +ME) 250 CONTINUE 3OO CONTINUE C* WRITE OUT THE EXPANSION COEFFICIENTS 0* IF( METHOD .EQ. ITYP(2) ) GO TO 375 C* FOURIER METHOD CALL OUTCF( A, B, N2, TIME(ITIME), ILABEL(NF) ) CALL OUTP( SWLJ, A(1), TOTVAR,TIME(ITIME),LENGTH,ILABEL(NF),B,IW, +NPARA) GO TO 400 375 CONTINUE C* WALSH METHOD CALL OUTCW(F, NSIMUL, TIME(ITIME), ILABEL(NF) ) CALL OUTP( SWLJ, F(1), TOTVAR, TIME(ITIME), LENGTH,ILABEL(NF),F,IW +,NPARA) 4OO CONTINUE 0* 0* 0* COMPUTE TIME SPENT IN WRITING OUTPUT CALL TIMES(5,0) 0* 0* 900 CONTINUE 1000 CONTINUE 0* C* WRITE OUT DIAGNOSTICS 0* DO 1100 NF=1,NFUNC SUM - 0.0 WRITE( 2,1400) ILABEL(NF) DO 1050 L=1,NPARA 253 Program TRANS CONT'D. 
      DO 1050 J=L,NPARA
      INDEX = NPARA*(L-1) - (L*(L-1))/2 + J
      WRITE(2,1500) L,J,AVESW(NF,INDEX),MINSW(NF,INDEX),MAXSW(NF,INDEX)
      SUM = SUM + AVESW(NF,INDEX)
 1050 CONTINUE
      WRITE(2,1600) SUM
 1100 CONTINUE
 1400 FORMAT(1H1,10X,A10,* CONCENTRATION STATISTICS *,/)
 1500 FORMAT(1H ,* (*,I2,*,*,I2,*) *,* AVESW =*,1PE14.6,
     +3X,* MIN =*,E14.6,3X,* MAX = *,E14.6)
 1600 FORMAT(/,/,/,10X,* SUM OF AVERAGES =*,G14.6)
      CALL TIMES(1,1)
      END

      SUBROUTINE OUTP(SWLJ,AVE,TOTVAR,TIME,LENGTH,LABEL,B,IW,NPARA)
C**********************************************************************
C*  SUBROUTINE OUTP WRITES OUT THE PARTIAL VARIANCES ON LOGICAL       *
C*  UNIT 'IUNIT'.                                                     *
C**********************************************************************
      REAL SWLJ(LENGTH)
      REAL B(1)
      INTEGER IW(1)
      DATA IUNIT/9/
C*
      STDDEV = SQRT(TOTVAR)
      RELDEV = 0.0
      IF( AVE .EQ. 0.0 ) GO TO 5
      RELDEV = STDDEV/AVE
    5 CONTINUE
      WRITE(IUNIT,10) LABEL,TIME,AVE,STDDEV,RELDEV,LENGTH,NPARA
   10 FORMAT(A8,4(2X,E15.7),2I6)
      WRITE(IUNIT,20)(SWLJ(L),L=1,LENGTH)
   20 FORMAT(5(2X,E15.7))
      WRITE(IUNIT,20)(B(IW(L)+1),L=1,NPARA)
      RETURN
      END

      SUBROUTINE OUTCW(F,NCOEFF,TIME,LABEL)
C**********************************************************************
C*  SUBROUTINE OUTCW WRITES OUT THE WALSH EXPANSION COEFFICIENTS      *
C*  TO LOGICAL UNIT 'IUNIT'.                                          *
C**********************************************************************
      REAL F(NCOEFF)
      DATA IUNIT/8/
      WRITE(IUNIT,10) LABEL,TIME,NCOEFF
   10 FORMAT(A8,E15.7,I6)
      WRITE(IUNIT,20)(F(K),K=1,NCOEFF)
   20 FORMAT(4O20)
      RETURN
      END

      SUBROUTINE OUTCF(A,B,N2,TIME,LABEL)
C**********************************************************************
C*  SUBROUTINE OUTCF WRITES OUT THE FOURIER COEFFICIENTS.             *
C**********************************************************************
      REAL A(N2),B(N2)
      DATA IUNIT/8/
C*
      WRITE(IUNIT,10) LABEL,TIME,N2
   10 FORMAT(A8,E15.7,I6)
      WRITE(IUNIT,20)((A(K),B(K)),K=1,N2)
   20 FORMAT(4O20)
      RETURN
      END

      SUBROUTINE WALPAR(A,N,IW,NPARA,SWLJ,TOTVAR)
C**********************************************************************
C*  SUBROUTINE WALPAR CALCULATES THE TOTAL VARIANCE AND               *
C*  PARTIAL VARIANCES GIVEN THE WALSH EXPANSION COEFFICIENTS          *
C*  AND THE FREQUENCY SET. ONLY THE SINGLE PARTIAL VARIANCES AND      *
C*  COUPLED PARTIAL VARIANCES, S(L,J), ARE COMPUTED.                  *
C*
ALL PARTIAL VARIANCES ARE STORED IN A LINEAR ARRAY - * 0* (SWLJ( ) ), WHICH IS AN UNFOLDED UPPER TRIANGULAR MATRIX. * 0* THE DIAGONAL ELEMENTS, SWLJ( I,I ), ARE THE I'TH ISINGLE * 0* PARTIAL VARIANCES, AND THE (L,J)'TH ELEMENT IS THE COUPLED * 0* PARTIAL VARIANCE OF THE L'TH ND J'TH PARAMETERS. THIS IS * 0* LINEARLY FOLDED BY * 0* * 0* INDEX - NPARA*(L - 1) - (L*(L - 1))/2 + J * 0* * 0* REFERENCE: T.H. PIERCE PHD. THESIS, M.S.U. 1980 * 0* * 0* INPUT * 0* * 0* A - AN ARRAY OF THE WALSH EXPANSION COEFFICIENTS * 0* * 0* N - THE NUMBER OF EXPANSION COEFFICIENTS IN 'A' * 0* * 0* IV - AN INTEGER ARRAY OF THE FREQUENCY SET USED. * 0* * 0* NPARA . THE NUMBER OF PARAMETERS TO BE ANALYZED * 0* * 0* OUTPUT * 0* * 0* SWLJ . AN ARRAY OF THE SINGLE AND COUPLED PARTIAL VARIANCES * 0* * 0* TOTVAR - THE TOTAL VARIANCE OF THE EXPANDED FUNCTION * 0* PARSEVAL'S FORMULA - AO**2 * 0* * 0* * 0* RESTRICTIONS * 0* * 0* 1) Iw(J) MUST NEVER BE EQUAL TO Iw(X) FOR ANY J,K * 0* * 0: 2) SWLJ MUST BE DIMENSIONED AT LEAST (NPARA*(NPARA+1))/2 * C * REAL A(N) REAL SWLJ(1) 258 Program TRANS CONT'D. SUBROUTINE WALPAR INTEGER IW(NPARA) CWWWWWWW* (3* C‘. CALCULATE THE TOTAL VARIANCE REMEMBER TO ADD ONE(1) TO THE FREQUENCIES TO ACCOUNT FOR THE FREQUENCY AO STORED AS A(1) CWWWWWWW* 0* 100 C‘I' 0* 200 0* C‘. 300 400 TOTVAR - 0. SKIP A(1) AS THIS IS THE AVERAGE VALUE D0 100 J=2,N . TOTVAR - TOTVAR + A(J)*A(J) CONTINUE CALCULATE THE SINGLE PARTIAL VARIANCES DO 200 Ja1,NPARA INDEX - NPARA*(J-1) - (J*(J-1))/2 + J SWLJ(INDEx) - (A(Iw(J)+1)*A(Iw(J)+1))/TOTVAR CONTINUE CALCULATE THE COUPLED PARTIAL VARIANCES NPARM1 . NPARA - 1 DO 400 L=1, NPARM1 JSTART . L + 1 D0 500 J = JSTART, NPARA IF THE FREQUENCIES ARE EQUAL; IVAL=O (MISTAKE) IF(Iw(L) .EQ. Iw(J) ) STOP "WALPR" IVAL a XOR(IW(L),Iw(J)) INDEX . NPARA*(L-1) - (L*(L-1))/2 + J SWLJ(INDEx) . (A(IVAL + 1)*A(IVAL + 1))/TOTVAR CONTINUE CONTINUE RETURN END 259 Program TRANS CONT'D. SUBROUTINE WHT SUBROUTINE WHT(NUM,X,II,Y) CWMWWWWW ***** C**** II . 
0 HADAMARD-ORDERED WHT ** 0**** II - 1 INVERSE HADAMARD-ORDERED WHT ** 0**** II - 2 WALSH-ORDERED WHT ** 0**** II - 3 INVERSE WALSH-ORDERED WHT ** 0**** ** 0**** THIS ROUTINE CALCULATES THE PAST WALSH-HADAMARD ** .0**** TRANSFORMS (WHT) FOR ANY GIVEN NUMBER WHICH ** C**** IS A POWER OF TWO. ** 0**** ** C**** NUM . NUMBER OF POINTS ** 0**** X(NUM) - ARRAY OF DATA TO BE TRANSFORMED ** C**** ON OUTPUT X(NUM) IS THE TRANSFORMED ** C**** EXPANSION COEFFICIENTS ** 0* * 0* - REFERENCE: AHMED AND RAO, "ORTHOGONAL TRANSFORMS FOR * 0* DIGITAL SIGNAL PROCESSING ", SPRINGER- * 0* VERLAG, (1975). * 0* i 0 DIMENSION IPOWER(20),X(NUM),Y(NUM) IF(II.LE.1) GO TO 14 0**** BIT REVERSE THE INPUT DO 11 I=1,NUM IB a I - 1 VIL = 1 9 IBD = IB/2 IPOWER(IL) = 1 IF(IB.EQ.(IBD*2)) IPOWER(IL) = O IF(IBD.EQ. 0) GO TO 10 IB = IBD IL = IL + 1 GO TO 9 1o CONTINUE IP . 1 IFAC . NUM DO 12 I1 - 1,IL IFAC . IFA0/2 12 IP . IP + IFAO*IPOWER(I1) 11 Y(IP) - x(I) D0 13 I - 1,NUM 13 X(I) = Y(I) 14 CONTINUE 0**** CALCULATE THE NUMBER OF ITERATIONS 55 ITER . 0 c**** c**** c**** c**** c**** c**** c**** c**** 48 49 C**** 0**** 15 260 Program TRANS CONT'D. SUBROUTINE WHT IREM = NUM IREM = IREM/2 IF(IREM.EQ.O) GO TO 2 ITER=ITER + 1 GO TO 1 CONTINUE BEGIN A LOOP FOR (LOG TO BASE TWO OF NUM) ITERATIONS D0 50 M81,ITER CALCULATE THE NUMBER OF PARTIONS IF(M.EQ.1) NUMP . 1 IF(M.NE.1) NUMP = NUMP*2 MNUM . NUM/NUMP MNUM2 = MNUM/2 BEGIN A LOOP FOR THE NUMBER OF PARTITIONS. ALPH . 1. D0 49 MP . 1,NUMP IB . (MP-1)*MNUM BEGIN A LOOP THROUGH THIS PARTITION. DO 48 MP2 . 1,MNUM2 MNUM21= MNUM2 + MP2 + IB IBA = IB +MP2 Y(IBA) - x(IBA) + ALPH*X(MNUM21) Y(MNUM21) = x(IBA) - ALPH*X(MNUM21) CONTINUE IF(II.GE.2) ALPH = -ALPH CONTINUE Do 7 I=1,NUM x(1) = Y(I) CONTINUE - IF(II.EQ.1 .OR. II.EQ.3) RETURN R-1./FLOAT(NUM) D0 15 I-1,NUM X(I) . x(I)*R RETURN END 261 Program TRANS CONT'D. 
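The butterfly loop in WHT is the standard fast Walsh-Hadamard transform. A compact Python sketch of the Hadamard-ordered forward transform with the same 1/N normalization (the II = 0 case; the bit-reversal pass used for Walsh ordering is omitted, and names are illustrative):

```python
def fwht(x):
    """Fast Walsh-Hadamard transform, Hadamard (natural) order,
    normalized by 1/N as in WHT's forward case.  len(x) must be
    a power of two.
    """
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        # one butterfly stage: combine elements h apart
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return [v / n for v in x]
```

Like the FFT, this needs only N log2 N additions, which is why the Walsh method scales well as the number of simulations grows.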
Program TRANS CONT'D.
SUBROUTINE TRANP

      SUBROUTINE TRANP( IUNIT, READUP, READOWN, WRITEUP, WRITED,
     +                  TNPTS, NFUNC, NSIMUL, C, UP, DOWN, LENGTH, NUMROW)
C**********************************************************************
C*    SUBROUTINE TRANP TAKES THE MATRIX STORED ON 'IUNIT' AND         *
C*    TRANSPOSES IT ( A(I,J) => A(J,I) ), RETURNING THE               *
C*    TRANSPOSED MATRIX ON LOGICAL UNIT 'WRITEUP'.                    *
C*                                                                    *
C*    NOTES                                                           *
C*                                                                    *
C*    1) THE ARRAY 'C' MAY BE EQUIVALENCED TO EITHER THE ARRAY        *
C*       'UP' OR THE ARRAY 'DOWN'.                                    *
C*                                                                    *
C*    2) LENGTH MUST BE ONE POWER OF TWO GREATER THAN NROW, UNLESS    *
C*       NROW IS A POWER OF TWO.                                      *
C**********************************************************************
      INTEGER READUP,READOWN,WRITEUP,WRITED
      INTEGER TNPTS,NSIMUL,NFUNC,IUNIT
C*    THE VECTOR 'C' SHOULD BE OF DIMENSION ONE POWER OF TWO
C*    GREATER THAN NROW ( UNLESS NROW IS A POWER OF TWO )
      REAL C(LENGTH)
      REAL UP(LENGTH),DOWN(LENGTH)
      DATA KOUNT/0/, ZERO/0./, LCOUNT/0/, NUMADD/0/
      DATA NUMADD2/0/
      NCOL = TNPTS*NFUNC
      NROW = NSIMUL
C*    READ IN THE MATRIX
    1 CONTINUE
      DO 1000 J=1,TNPTS
      ISTR = (J-1)*NFUNC + 1
      ISTOP = ISTR + NFUNC - 1
      READ(IUNIT,10000)(C(K),K=ISTR,ISTOP)
10000 FORMAT(4G20)
 1000 CONTINUE
C*    IF KOUNT = 0, THEN WE NEED TO WRITE THE UP-TAPE (TAPE1 INITIALLY)
C*    IF KOUNT = 1, THEN WE WRITE THE DOWN-TAPE (TAPE2 INITIALLY)
      IF(KOUNT .NE. 0) GO TO 5
C*    WRITE THE ODD ROWS
      DO 500 K=1,NCOL
  500 WRITE(READUP) C(K)
      KOUNT = 1
C*    LCOUNT COUNTS THE NUMBER OF ROWS WRITTEN, BOTH UP AND DOWN
      LCOUNT = LCOUNT + 1
C*    IF DONE, IE LCOUNT = NUMBER OF ROWS, THEN GO TO NEXT TASK
C*    IF NOT DONE, THEN CONTINUE READ-WRITE
      IF(LCOUNT .EQ. NROW) GO TO 9
      GO TO 1
    5 CONTINUE
C*    WRITE THE EVEN ROWS
      DO 510 K=1,NCOL
  510 WRITE(READOWN) C(K)
C*    SET KOUNT = 0 SO THE NEXT WRITE IS 'UP'
      KOUNT = 0
      LCOUNT = LCOUNT + 1
C*    CHECK FOR END OF DATA
      IF(LCOUNT .EQ. NROW) GO TO 9
      GO TO 1
    9 CONTINUE
C*    TAPES 1,2 ARE WRITTEN WITH THE MATRIX.  NOW WE NEED TO MAKE SURE
C*    THAT THE ROW-DIMENSION IS A POWER OF TWO, AND IF NOT THEN WE
C*    MUST ADD SUFFICIENT ZEROS TO MAKE THE ROW-DIMENSION A POWER OF 2
C*    CHECK FOR HAVING WRITTEN AN EVEN NUMBER OF ROWS
      IF(KOUNT .EQ. 0) GO TO 120
C*    ODD NUMBER OF ROWS WERE WRITTEN
C*    ADD ONE ROW OF ZEROS
      DO 115 K=1,NCOL
      WRITE(READOWN) ZERO
  115 CONTINUE
      LCOUNT = LCOUNT + 1
C*    EVEN NUMBER OF ROWS WRITTEN
  120 CONTINUE
C*    FIGURE OUT EXPONENT OF NEAREST POWER OF TWO LARGER
      DO 116 M=1,50
      MDIVID = M
      RTEST = FLOAT(LCOUNT)/(2.**M)
      IF(RTEST .LE. 1.) GO TO 118
  116 CONTINUE
      STOP 2
  118 CONTINUE
C*    CHECK FOR EXACT POWER OF TWO
      IF(RTEST .EQ. 1.) GO TO 200
C*    NOW CALCULATE THE NUMBER OF ROWS WE MUST ADD
      NUMADD = 2**MDIVID - LCOUNT
C*    NUMADD SHOULD ALSO BE DIVISIBLE BY TWO
      IF(NUMADD .NE. 2*(NUMADD/2)) STOP 3
C*    WRITE ZEROS INTO DUMMY ROWS
      NUMADD2 = NUMADD/2
      DO 130 L=1,NUMADD2
      DO 130 K=1,NCOL
      WRITE(READUP) ZERO
      WRITE(READOWN) ZERO
  130 CONTINUE
  200 CONTINUE
C*    LET NUMADD BE THE TOTAL NUMBER OF ADDED ROWS
C*    REMEMBER WE MAY HAVE ADDED A ROW EARLIER
      IF(NROW .NE. LCOUNT) NUMADD = NUMADD + 1
C*    THE FINAL CHECK
      NUMROW = NUMADD + NROW
      IF(NUMROW .NE. (2**MDIVID)) STOP 4
C*    EVEN AND ODD TAPES WRITTEN
      LOOP = 1
      REWIND READUP
      REWIND READOWN
      REWIND WRITEUP
      REWIND WRITED
C*    DIAGNOSTICS
      WRITE( 2,101)
  101 FORMAT(/,/,/,* SUBROUTINE TRANP STATISTICS *,/)
      WRITE( 2,112) NROW,NCOL,NUMADD,NUMADD2
  112 FORMAT(* *,* NROW=*,I5,* NCOL=*,I4,* NUMADD=*,I6,
     +* NUMADD2=*,I5)
      NINSERT = NCOL
      NCHECK = NUMROW/2
   10 CONTINUE
C*    INITIALIZE FOR THE READ-WRITE
C*    KOUNT  = NUMBER OF INSERTS DONE
C*    LCOUNT = TOTAL NUMBER OF READ-WRITES DONE THIS ITERATION
      KOUNT = 0
      LCOUNT = 0
   20 CONTINUE
      READ(READUP )(UP(L),L=1,LOOP)
      READ(READOWN)(DOWN(L),L=1,LOOP)
      WRITE(WRITEUP)(UP(L),L=1,LOOP),(DOWN(M),M=1,LOOP)
      KOUNT = KOUNT + 1
      IF(KOUNT .NE. NINSERT) GO TO 20
      LCOUNT = LCOUNT + 1
C*    LCOUNT SHOULD EQUAL NCHECK HERE ONLY IF NCHECK = 1
C*    AND IT IS THE LAST MIX
      IF(LCOUNT .EQ. NCHECK) GO TO 65
      KOUNT = 0
   30 CONTINUE
      READ(READUP )(UP(L),L=1,LOOP)
      READ(READOWN)(DOWN(L),L=1,LOOP)
      WRITE(WRITED)(UP(L),L=1,LOOP),(DOWN(M),M=1,LOOP)
      KOUNT = KOUNT + 1
      IF(KOUNT .NE. NINSERT) GO TO 30
      LCOUNT = LCOUNT + 1
      IF(LCOUNT .EQ. NCHECK) GO TO 50
      KOUNT = 0
      GO TO 20
   50 CONTINUE
      LOOP = LOOP*2
      NCHECK = NCHECK/2
C*    ERROR CHECKING
      IF(NCHECK .LE. 0) STOP 2
      ISAVUP = READUP
      ISAVD  = READOWN
      READUP = WRITEUP
      READOWN = WRITED
      WRITEUP = ISAVUP
      WRITED = ISAVD
      REWIND READUP
      REWIND READOWN
      REWIND WRITEUP
      REWIND WRITED
      GO TO 10
   65 CONTINUE
      REWIND WRITEUP
      RETURN
      END

Program TRANS CONT'D.
SUBROUTINE TIMES

      SUBROUTINE TIMES(ISUB,ITYPE)
C**********************************************************************
C*    SUBROUTINE TIMES COMPUTES THE CPU TIME SPENT BETWEEN CALLS.     *
C*    THIS IS INSTALLATION DEPENDENT.  ITS USE IS FOR                 *
C*    DOCUMENTATION PURPOSES ONLY.                                    *
C*                                                                    *
C*    ISUB  = THE PROCEDURE TO BE TIMED.                              *
C*    ITYPE = FLAG:  .LT. 1 FOR TIMING                                *
C*                   .GE. 1 FOR FINAL PRINT                           *
C**********************************************************************
      REAL TIMS(15),NEW,LAST
      INTEGER NAME(15)
      DATA TIMS/15*0./
      DATA LAST /0./
      DATA NAME/10HREAD INPUT,10HTRANSPOSE ,10HTRANSFORM ,10HPARTIALVAR
     +,10HWRITE OUTP/
      DATA LENGTH /5/
      IF(ITYPE .GE. 1) GO TO 5
      NEW = SECOND(CPU)
C*    INITIALIZE THE VALUES IN FIRST ENTRY
      IF(LAST .EQ. 0.) LAST = NEW
      TIMS(ISUB) = TIMS(ISUB) + NEW - LAST
      LAST = NEW
      RETURN
    5 CONTINUE
      WRITE( 2,100)
  100 FORMAT(* *,/,/,/,5X,* SECONDS*,4X,* PROCEDURE *,/)
      TSUM = 0.
      DO 10 J=1,LENGTH
      TSUM = TSUM + TIMS(J)
      WRITE( 2,150) TIMS(J),NAME(J)
  150 FORMAT(* *,5X,F7.3,5X,A10)
   10 CONTINUE
      WRITE( 2,160) TSUM
  160 FORMAT(* *,/,/,6X,F7.3,4X,* TOTAL TIME *)
      RETURN
      END
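TRANP's zero-padding step (statements 116 through 200 above) rounds the number of rows written up to the next power of two before the merge passes begin, so that the row count can be halved cleanly on every pass. The arithmetic can be sketched as follows; this Python helper and its name are mine, for illustration only.

```python
def rows_to_add(nrow):
    """Number of zero rows TRANP must append so that the total row
    count becomes the next power of two (0 if nrow already is one).

    Mirrors NUMADD = 2**MDIVID - LCOUNT in the listing, where MDIVID
    is the exponent of the nearest power of two not less than LCOUNT.
    """
    if nrow < 1:
        raise ValueError("row count must be positive")
    m = 1
    while m < nrow:          # find the nearest power of two >= nrow
        m *= 2
    return m - nrow
```

For example, a 100-simulation run would be padded with 28 zero rows to reach 128, after which seven doubling passes complete the transpose.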
Program TRANS CONT'D.
SUBROUTINE FFAST

      SUBROUTINE FFAST(F,NPTS,X,IWK,A,B)
C**********************************************************************
C*    SUBROUTINE FFAST COMPUTES THE FOURIER TRANSFORM OF A VECTOR.    *
C*    IN THIS CASE 'F' IS LENGTHENED FROM ( -PI/2 , PI/2 ) TO         *
C*    ( 0 , 2PI ) AND THEN FOURIER-TRANSFORMED INTO COSINE AND SINE   *
C*    COEFFICIENTS, A AND B RESPECTIVELY.                             *
C*                                                                    *
C*    NOTES                                                           *
C*                                                                    *
C*    1) SINCE 'F' IS NEVER USED IN FFCSIN IT MAY BE EQUIVALENCED     *
C*       TO 'IWK'.                                                    *
C*                                                                    *
C*    2) THE ROUTINES ALSO ALLOW THE EQUIVALENCING OF 'A' AND         *
C*       'X'.                                                         *
C*                                                                    *
C*    3) A0, THE AVERAGE VALUE, IS STORED AS 'A(1)'.                  *
C**********************************************************************
      REAL F(NPTS)
      REAL X(1),A(1),B(1)
      INTEGER IWK(1)
C*    NPTS MUST BE AN ODD INTEGER, AND NOTE WE ARE GOING FROM
C*    -PI/2 TO PI/2 AND TRANSFORMING TO (0,2*PI)
      NPTSP1 = NPTS + 1
      NPTS2 = NPTS*2
      N2 = NPTS + 1
      RNPTS2 = (1.0/FLOAT(NPTS2))
      IQ = (NPTS-1)/2
      IQP1 = IQ + 1
      L = 0
C*    TRANSFORM F(-PI/2 , PI/2) TO X(0 , 2*PI)
      DO 1000 J=IQP1,NPTS
      L = L + 1
 1000 X(L) = F(J)
      DO 2000 J=1,IQP1
      L = L + 1
      JJ = NPTSP1 - J
      X(L) = F(JJ)
 2000 CONTINUE
      DO 3000 J=1,IQ
      L = L + 1
      JJ = IQP1 - J
      X(L) = F(JJ)
 3000 CONTINUE
      DO 4000 J=1,IQ
      L = L + 1
      X(L) = F(J)
 4000 CONTINUE
C*    CALL IMSL ROUTINES TO CALCULATE FOURIER COEFFICIENTS
      CALL FFCSIN(X,NPTS2,A,B,IWK)
C*    SCALE THE COEFFICIENTS TO THEIR CORRECT VALUES.
      DO 350 J=1,N2
      A(J) = RNPTS2*A(J)
      B(J) = RNPTS2*B(J)
  350 CONTINUE
      A(1) = A(1)/2.0
      A(N2) = A(N2)/2.0
      B(1) = 0.0
      B(N2) = 0.0
      RETURN
      END

Program TRANS CONT'D.
SUBROUTINE FORPAR

      SUBROUTINE FORPAR(A, B, NPTS, IW, NPARA, SWLJ, TOTVAR, IACCUR)
C**********************************************************************
C*    CALCULATE THE PARTIAL VARIANCES.                                *
C*    THIS ROUTINE IS SET UP FOR 4TH ORDER ACCURATE FREQUENCY SETS.   *
C*                                                                    *
C*    FIRST CALCULATE THE VARIANCE ( PARSEVAL'S FORMULA - A0**2 ),    *
C*    THEN CALCULATE THE SUM OF HARMONICS, NOTING THAT ALL BUT THE    *
C*    NPARA'TH ARE SUMMED TO THE FIRST HARMONIC AND THE NPARA'TH      *
C*    ONLY THE FUNDAMENTAL HARMONIC.                                  *
C*                                                                    *
C*    ALL PARTIAL VARIANCES ARE STORED IN A LINEAR ARRAY              *
C*    ( SWLJ( ) ), WHICH IS AN UNFOLDED UPPER TRIANGULAR MATRIX.      *
C*    THE DIAGONAL ELEMENTS, SWLJ( I,I ), ARE THE I'TH SINGLE         *
C*    PARTIAL VARIANCES, AND THE (L,J)'TH ELEMENT IS THE COUPLED      *
C*    PARTIAL VARIANCE OF THE L'TH AND J'TH PARAMETERS.  THIS IS      *
C*    LINEARLY FOLDED BY                                              *
C*                                                                    *
C*        INDEX = NPARA*(L - 1) - (L*(L - 1))/2 + J                   *
C*                                                                    *
C*    REFERENCE: T.H. PIERCE, PHD. THESIS, M.S.U. 1980                *
C*                                                                    *
C*    INPUT                                                           *
C*                                                                    *
C*    A      = AN ARRAY OF THE COSINE EXPANSION COEFFICIENTS          *
C*    B      = AN ARRAY OF THE SINE EXPANSION COEFFICIENTS            *
C*    NPTS   = NUMBER OF SIMULATIONS                                  *
C*    IW     = AN INTEGER ARRAY OF THE FREQUENCY SET USED             *
C*    NPARA  = THE NUMBER OF PARAMETERS TO BE ANALYZED                *
C*                                                                    *
C*    OUTPUT                                                          *
C*                                                                    *
C*    SWLJ   = AN ARRAY OF THE SINGLE AND COUPLED PARTIAL VARIANCES   *
C*    TOTVAR = THE TOTAL VARIANCE OF THE EXPANDED FUNCTION            *
C*             ( PARSEVAL'S FORMULA - A0**2 )                         *
C*                                                                    *
C*    RESTRICTIONS                                                    *
C*                                                                    *
C*    1) IW(J) MUST NEVER BE EQUAL TO IW(K) FOR ANY J,K               *
C*    2) SWLJ MUST BE DIMENSIONED AT LEAST (NPARA*(NPARA+1))/2        *
C**********************************************************************
      REAL A(1),B(1),SWLJ(1)
      INTEGER IW(NPARA)
      IRANGE = IACCUR - 1
      IMID = IACCUR/2
      N2 = NPTS + 1
      NPARM1 = NPARA - 1
      TOTVAR = 0.0
C*    SKIP A(1) AND B(1), AS A0, THE AVERAGE VALUE, IS STORED
C*    AS A(1); B(1) = 0.
      DO 600 JJ=2,N2
      TOTVAR = A(JJ)*A(JJ) + B(JJ)*B(JJ) + TOTVAR
  600 CONTINUE
C*    LP IS THE HARMONICS
      DO 650 L=1,NPARM1
      SUM = 0.0
      DO 625 LP = 1,2
      LPWP1 = LP*IW(L) + 1
      SUM = A(LPWP1)*A(LPWP1) + B(LPWP1)*B(LPWP1) + SUM
  625 CONTINUE
      INDEX = NPARA*(L-1) - (L*(L-1))/2 + L
      SWLJ(INDEX) = SUM/TOTVAR
  650 CONTINUE
      SUM = 0.0
      DO 675 L=1,1
      LPWP1 = L*IW(NPARA) + 1
      SUM = SUM + A(LPWP1)*A(LPWP1) + B(LPWP1)*B(LPWP1)
  675 CONTINUE
      INDEX = (NPARA*(NPARA+1))/2
      SWLJ(INDEX) = SUM/TOTVAR
      DO 900 L=1,NPARM1
      JSTART = L + 1
      DO 800 J=JSTART,NPARA
      SUM = 0.
      DO 750 KP=1,IRANGE
      IP = IMID - KP
      DO 700 IK = 1,IRANGE
      K = IMID - IK
      IF(IP .EQ. 0) GO TO 750
      IF(K .EQ. 0) GO TO 700
C*    ADD ONE (1) TO THE FREQUENCY COUNT TO ACCOUNT FOR A0, B0
      IFREQ = IP*IW(L) + K*IW(J) + 1
      IF(IFREQ .LE. 1) GO TO 700
      IF(IFREQ .GE. N2) GO TO 700
      SUM = A(IFREQ)*A(IFREQ) + B(IFREQ)*B(IFREQ) + SUM
  700 CONTINUE
  750 CONTINUE
      INDEX = NPARA*(L-1) - (L*(L-1))/2 + J
      SWLJ(INDEX) = SUM/TOTVAR
  800 CONTINUE
  900 CONTINUE
      RETURN
      END

Program PLOTSEN

      PROGRAM PLTSEN(INPUT,OUTPUT,TAPE1=INPUT,TAPE2=OUTPUT,
     +               TAPE9)
C**********************************************************************
C*    THIS PROGRAM PLOTS RESULTS OF THE TAPE9 SENSITIVITY ANALYSIS    *
C*    FILE.  THE PROGRAM READS CARDS FOR INFORMATION ON WHAT TO PLOT. *
C*    IT THEN SEARCHES THE FILE (TAPE9) FOR THE DESIRED VALUES,       *
C*    AND PLOTS IT ON A LINE PRINTER.                                 *
C*    IF MORE THAN ONE PLOT IS DESIRED, IT REWINDS THE FILE           *
C*    AND REPEATS.                                                    *
C*                                                                    *
C*    INPUT                                                           *
C*                                                                    *
C*    CARD 1                                                          *
C*                                                                    *
C*        NPLOT, NCONC     (2I5 FORMAT)                               *
C*                                                                    *
C*        NPLOT = THE TOTAL NUMBER OF PLOTS DESIRED.                  *
C*        NCONC = THE NUMBER OF DIFFERENT OUTPUT                      *
C*                FUNCTIONS IN TAPE9.                                 *
C*                                                                    *
C*    NOTE: CARDS 2-6 ARE TO BE REPEATED FOR ALL THE DESIRED OBJECT   *
C*    FUNCTIONS.                                                      *
C*                                                                    *
C*    CARD 2                                                          *
C*                                                                    *
C*        ITEST            (A10 FORMAT)                               *
C*                                                                    *
C*        ITEST = THE LABEL OF THE OBJECT (OR OUTPUT)                 *
C*                FUNCTION TO BE PLOTTED                              *
C*                                                                    *
C*    CARD 3                                                          *
C*                                                                    *
C*        NFUNC, NPOINT                                               *
C*                                                                    *
C*        NFUNC  = THE NUMBER OF FUNCTIONS TO PLOT                    *
C*                 FOR THE LPT'TH PLOT.                               *
C*        NPOINT = THE NUMBER OF POINTS TO BE PLOTTED                 *
C*                 IN THE LPT'TH PLOT (X-AXIS)                        *
C*                                                                    *
C*    CARD 4                                                          *
C*                                                                    *
C*        ITITLE(8)        (8A10 FORMAT)                              *
C*                                                                    *
C*        ITITLE = THE PLOT TITLE FOR THE LPT'TH PLOT                 *
C*                                                                    *
C*    CARD 5                                                          *
C*                                                                    *
C*        SYM(K)           (10A1 FORMAT)                              *
C*                                                                    *
C*        SYM(K) = THE SYMBOLS TO BE USED IN THE PLOT                 *
C*                 ( IE THE SYMBOL FOR THE KTH FUNCTION ).            *
C*                                                                    *
C*    CARD 6 - CARD(5+NFUNC)                                          *
C*                                                                    *
C*        NAME(K)          (A10 FORMAT)                               *
C*                                                                    *
C*        NAME = THE NAME (LABEL) OF THE KTH FUNCTION                 *
C*               TO BE PLOTTED.                                       *
C*                                                                    *
C**********************************************************************
      REAL TIME(100)
      REAL PLT(100,10)
      REAL DELX(100)
      CHARACTER*10 JNAME(10),NAME(10),ITITLE(8)
      CHARACTER*10 ITEST
      COMMON /PLTPTS/ SYM(10)
      DATA IO/2/, IS/1/, DELX/100*0./
      DATA MAX/100/
C*    READ IN CARD INPUT
C*
C*    READ IN:
C*      NFUNC  = NUMBER OF FUNCTIONS TO PLOT
C*      NPOINT = NUMBER OF POINTS PER PLOT
C*      NPLOT  = NUMBER OF PLOTS
      READ(1,20) NPLOT,NCONC
   20 FORMAT(2I5)
      WRITE(2,30) NPLOT
   30 FORMAT('1',' THERE ARE ',I5,' PLOTS')
      DO 1000 LPT=1,NPLOT
C*    READ IN CORRECT OBJECT FUNCTIONS
      READ(1,35) ITEST
   35 FORMAT(A10)
C*    READ IN THE NUMBER OF FUNCTIONS TO BE PLOTTED AND
C*    THE NUMBER OF POINTS TO PLOT PER FUNCTION
      READ(1,*) NFUNC,NPOINT
   45 FORMAT(2I5)
C*    READ IN PLOT TITLE
      READ(1,10)(ITITLE(K),K=1,8)
   10 FORMAT(8A10)
C*    READ IN PLOT SYMBOLS
      READ(1,55)(SYM(K),K=1,NFUNC)
   55 FORMAT(10A1)
C*    READ IN THE NAME OF THE PLOTTED FUNCTIONS
      DO 80 K=1,NFUNC
      READ(1,75) NAME(K)
   75 FORMAT(A10)
   80 CONTINUE
      NKOUNT = NCONC*NPOINT
      LPLOT = LPT
      CALL READ9(NKOUNT, NFUNC, TIME, ITEST, NCONC, LPLOT, PLT, ITPTS)
      IFLAG = 0
      IF(ITPTS .NE. NPOINT) IFLAG = 1
      IF(IFLAG .EQ. 1) WRITE(2,310) NPOINT,ITPTS
  310 FORMAT('1',' NUMBER OF POINTS EXPECTED =',I6,/,5X,
     +       ' NUMBER OF POINTS READ =',I6)
      IF(IFLAG .EQ. 1) NPOINT = ITPTS
      CALL PLOT(IO,PLT,DELX,IS,ITITLE,NAME,TIME(1),TIME(NPOINT),NPOINT,
     +          NFUNC,MAX)
C*    WRITE OUT THE PLOTTED POINTS.
      WRITE(2,300)(NAME(K),K=1,NFUNC)
  300 FORMAT('1',' POINT',10(1X,A10,1X))
      DO 350 K=1,NPOINT
      WRITE(2,325) K,(PLT(K,L),L=1,NFUNC)
  325 FORMAT(' ',I4,1X,10(1X,1PE10.3,1X))
  350 CONTINUE
      REWIND 9
 1000 CONTINUE
      END

Program PLOTSEN CONT'D.
SUBROUTINE READ9

      SUBROUTINE READ9(NKOUNT, NFUNC, TIM, ITEST, NCONC, LPLOT, PLT,
     +                 ITIME)
C**********************************************************************
C*    SUBROUTINE READ9 READS IN THE OUTPUT TAPE7 FROM PROGRAM         *
C*    TRANS.  THIS IS READ IN SO THAT IT MAY BE PLOTTED.              *
C*                                                                    *
C*    INPUT UNIT  = 9                                                 *
C*    OUTPUT UNIT = 2                                                 *
C*                                                                    *
C*    VARIABLES :                                                     *
C*                                                                    *
C*    NKOUNT  = NUMBER OF TOTAL SENSITIVITY POINTS                    *
C*    NCONC   = NUMBER OF CONCENTRATIONS                              *
C*    TIM(70) = TIME POINT OF S. A. POINT                             *
C**********************************************************************
      CHARACTER*10 LABEL,ITEST
      REAL PLT(100,10)
      REAL TIM(100),SWLK(50),B(50)
      DATA IIN/9/
      ITIME = 0
      ISCALE = 1
      DO 1000 KOUNT = 1,NKOUNT
      READ(IIN,10) LABEL,TIME,AVE,STDDEV,RELDEV,LENGTH,NPARA
   10 FORMAT(A10,4(2X,E15.7),2I6)
      READ(IIN,20)(SWLK(K),K=1,LENGTH)
   20 FORMAT(5(2X,E15.7))
      READ(IIN,20)(B(L),L=1,NPARA)
C*    FIND CORRECT CONCENTRATION LABEL
      DO 200 I=1,NCONC
      IVAL = I
      IF(LABEL .EQ. ITEST) GO TO 350
  200 CONTINUE
      GO TO 1000
  350 CONTINUE
C*    NORMALIZE LPLOT TO ( 1,2,3,4 )
      IF(LPLOT .LE. 4) GO TO 400
      LPLOT = LPLOT - 4
      GO TO 350
  400 CONTINUE
C*    FOUND THE CORRECT OBJECT FUNCTION.
C*    SAVE THE DESIRED VALUES.
      ITIME = ITIME + 1
      TIM(ITIME) = TIME
      GO TO ( 500, 550, 600, 650 ) LPLOT
  500 CONTINUE
C*    SAVE THE AVERAGE VALUE
      PLT(ITIME,1) = AVE
      GO TO 1000
  550 CONTINUE
C*    SAVE THE RELATIVE DEVIATION CURVE
      PLT(ITIME,1) = RELDEV
      GO TO 1000
  600 CONTINUE
C*    SAVE THE SINGLE PARTIAL VARIANCES
      DO 610 NP=1,NPARA
      INDEX = NPARA*(NP-1) - (NP*(NP-1))/2 + NP
      PLT(ITIME,NP) = SWLK(INDEX)
  610 CONTINUE
      GO TO 1000
  650 CONTINUE
C*    SAVE THE EXPANSION COEFFICIENTS
      DO 660 NP = 1,NPARA
      PLT(ITIME,NP) = B(NP)
  660 CONTINUE
 1000 CONTINUE
      RETURN
      END

Program PLOTSEN CONT'D.
SUBROUTINE PLOT

      SUBROUTINE PLOT(IO,ARAY,XARAY,ISCALE,JNAME,NAME,BLOW,BHI,NPT,NF,
     +                MAX)
C**********************************************************************
C*    SYMBOLS ARE TO BE READ IN USING A1 FORMAT OR DEFINED WITH       *
C*    1H FORMAT.                                                      *
C*                                                                    *
C*    IO     = THE OUTPUT UNIT                                        *
C*    ISCALE = TYPE OF PLOT DESIRED:  1 = LINEAR SCALE, 2 = SEMILOG,  *
C*             3 = LOG-LOG.  FOR LOG-LOG READ IN EQUAL INTERVALS ON   *
C*             A LOG SCALE.  NO MIXING OF 1, 2, OR 3 ALLOWED IN THE   *
C*             SAME PLOT.                                             *
C*    BLOW   = THE LOWER BOUND OF THE PLOT FOR THE X-AXIS             *
C*    BHI    = THE UPPER BOUND OF THE PLOT FOR THE X-AXIS             *
C*    NPT    = THE NUMBER OF POINTS PER FUNCTION TO BE PLOTTED        *
C*    MAX    = THE INNER DIMENSION OF ARAY DEFINED IN THE MAIN        *
C*             PROGRAM                                                *
C*    NF     = THE NUMBER OF FUNCTIONS TO BE PLOTTED                  *
C*                                                                    *
C*    ARAY(MAX,NF) = ARRAY OF POINTS TO BE PLOTTED                    *
C*    XARAY(MAX)   = THE ERRORS ASSOCIATED WITH THE POINTS FOR THE    *
C*                   FIRST FUNCTION.  ONLY THE FIRST FUNCTION WILL    *
C*                   BE PLOTTED WITH ERROR BARS.                      *
C*                                                                    *
C*    JNAME(8) = THE TITLE OF THE PLOT.  THIS WILL BE PRINTED AT THE  *
C*               TOP OF THE PLOT (8A10 FORMAT).                       *
C*    NAME(10) = THE NAME OF EACH FUNCTION TO BE PLOTTED (USE A10     *
C*               FORMAT OR 10H).                                      *
C*                                                                    *
C*    A LABELED COMMON BLOCK IS ALSO REQUIRED.                        *
C*    THIS BLOCK CONTAINS THE SYMBOLS TO BE USED IN THE               *
C*    PLOT FOR EACH FUNCTION.  USE                                    *
C*        COMMON /PLTPTS/ POINT(10)                                   *
C*    WHERE POINT(I) IS THE SYMBOL FOR THE ITH FUNCTION.              *
C**********************************************************************
      DIMENSION ARAY(MAX,NF)
      REAL XMAX(10),XMIN(10)
      CHARACTER*7 SC(3)
      CHARACTER*10 NAME(10),JNAME(10)
      DIMENSION XARAY(MAX)
      DIMENSION VAL(106)
      DIMENSION XDIV(3),XMIN1(11)
      COMMON /PLTPTS/ POINT(10)
      DATA BLANK/1H /,
     *     DASH/1H-/,STAR/1H*/
      DATA XDIV/1.,2.,5./
      DATA SC/'LINEAR ','LOG    ','LOG LOG'/
      IF(NPT .LE. MAX) GO TO 5
      WRITE(IO,1)(JNAME(K),K=1,8)
    1 FORMAT(' ARRAY SIZE TOO LARGE IN PLOT OF',/,3X,8A10)
      RETURN
    5 CONTINUE
      IF(BLOW .LT. BHI) GO TO 50
      WRITE(IO,160) BLOW,BHI
  160 FORMAT(' X-AXIS MINIMUM AND MAXIMUM ARE NOT',E15.7,'.GE.',E15.7)
   50 CONTINUE
C*    TO PAGE OR NOT TO PAGE
      IF(NPT.LE.40) GO TO 8
      WRITE(IO,6)
    6 FORMAT('1')
      GO TO 9
    8 WRITE(IO,7)
    7 FORMAT(/////)
    9 CONTINUE
C*    WRITE HEADER FOR PLOTS
      WRITE(IO,10)((JNAME(K),K=1,8),SC(ISCALE))
   10 FORMAT(' PLOT OF ',8A10,5X,A10,' SCALE',/)
      WRITE(IO,13)
      WRITE(IO,12)(POINT(I),NAME(I),I=1,NF)
   12 FORMAT(10X,A2,5X,A10)
   13 FORMAT(50H ----------------------------------------------   ,/)
      WRITE(IO,13)
C*    FIND MAXIMUM AND MINIMUM
      M = 0
C*    INITIALIZE XMAX AND XMIN
      DO 630 LD=1,NF
      XMAX(LD) = ARAY(1,LD)
      XMIN(LD) = ARAY(1,LD)
  630 CONTINUE
      DO 20 LDS=1,NF
      DO 20 L=1,NPT
      IF(ARAY(L,LDS).GT.XMAX(LDS)) XMAX(LDS) = ARAY(L,LDS)
      IF(ARAY(L,LDS).LT.XMIN(LDS)) XMIN(LDS) = ARAY(L,LDS)
   20 CONTINUE
C*    CHECK FOR ZERO ARRAYS
      DO 640 NSF=1,NF
      IF(XMAX(NSF).EQ.0..AND.XMAX(NSF).EQ.XMIN(NSF)) WRITE(IO,21) NSF
  640 CONTINUE
   21 FORMAT(' ALL POINTS IN THE',I3,' GRAPH OF THIS PLOT ARE ZERO')
      IF(ISCALE.EQ.3) GO TO 2
      DIV = (BHI-BLOW)/(NPT-1)
      IF(ISCALE.NE.1) GO TO 3
      FAC = 1.
      XMIN2 = 2.**40
      XMAX1 = -XMIN2
      DO 650 JL=1,NF
      IF(XMAX(JL).GT.XMAX1) XMAX1 = XMAX(JL)
      IF(XMIN(JL).LT.XMIN2) XMIN2 = XMIN(JL)
  650 CONTINUE
      XDIF = XMAX1 - XMIN2
   25 DO 22 L=1,3
      IF((XDIF/XDIV(L)).LE.100.) GO TO 23
   22 CONTINUE
      FAC = FAC*10.
      XDIF = XDIF/10.
      GO TO 25
   23 XSCALE = FAC*XDIV(L)
      IF(XSCALE.GT.1.) GO TO 28
      IF(XSCALE.EQ.1..AND.XDIF.GT.50.) GO TO 28
      DO 29 LL=1,7
      DO 26 L=1,3
      IF(XDIF*XDIV(L).GT.100.) GO TO 27
   26 CONTINUE
      FAC = FAC*10.
      XDIF = XDIF*10.
   29 CONTINUE
   27 IF(L.EQ.1) L=4
      IF(L.EQ.4.AND.FAC.NE.1.) FAC = FAC/10.
      XSCALE = 1./(FAC*XDIV(L-1))
   28 CONTINUE
      XMIN2 = XMIN2/XSCALE
      XMIN2 = INT(XMIN2/10.+(SIGN(1.,XMIN2)-1.)/2.)*10.*XSCALE
      DO 30 L=1,11
   30 XMIN1(L) = XMIN2 + FLOAT(L-1)*10.*XSCALE
      WRITE(IO,31) XMIN1
   31 FORMAT(5X,G12.5,2X,9(G9.2,1X),G12.5)
      WRITE(IO,32)
   32 FORMAT(13X,21('I....'),'I.I')
      XVAL = BLOW - DIV
      JO = 1
C*    HERE WE LOOP OVER THE POINTS, PLOTTING ONE LINE AT A TIME.
C*    IARAY IS THE ARRAY INDEX OF VAL( ) WHERE A SYMBOL SHOULD BE.
      DO 40 L=1,NPT
      DO 800 LS=1,106
  800 VAL(LS) = BLANK
      IF(ISCALE.EQ.3) GO TO 500
      XVAL = XVAL + DIV
      GO TO 510
  500 VALOG = VALOG + DIV
      XVAL = 10.**(VALOG)
  510 CONTINUE
      DO 700 JZ=1,NF
      IARAY = (ARAY(L,JZ) - XMIN2)/XSCALE + 0.5
      IF(IARAY.GT.106) GO TO 700
      IF(IARAY.LT.0) IARAY = 0
      IF(JZ.GT.1) GO TO 730
      IERR = ABS(XARAY(L)/XSCALE) + 0.5
      JERR = 106 - IERR
      KERR = IARAY - 1
      IF(IARAY.NE.0) GO TO 42
      IF(IERR.EQ.0) GO TO 700
      GO TO 710
  730 IF(IARAY.EQ.0) GO TO 700
   42 VAL(IARAY) = POINT(JZ)
      IF(IERR.EQ.0.OR.JZ.GT.1) GO TO 700
  710 LERR = IARAY - IERR
      IF(IERR.GE.IARAY) LERR = 1
      JOHN = IARAY + IERR
      IF(JOHN.GT.106) JOHN = 106
      KERR1 = IARAY + 1
      DO 720 JTZ=LERR,KERR
  720 VAL(JTZ) = DASH
      DO 760 JTY=KERR1,JOHN
  760 VAL(JTY) = DASH
  700 CONTINUE
      WRITE(IO,44) XVAL,VAL,ARAY(L,1)
   44 FORMAT(1X,E11.4,1X,'I',106A1,'I',1X,E11.4)
   40 CONTINUE
      GO TO (501,502,503,504,505),JO
  501 WRITE(IO,32)
      GO TO 810
  502 WRITE(IO,106)
      GO TO 810
  503 WRITE(IO,109)
      GO TO 810
  504 WRITE(IO,113)
      GO TO 810
  505 WRITE(IO,116)
  810 CONTINUE
      IF(ISCALE .EQ. 1) GO TO 1000
C*    RETURN THE VALUES TO NORMAL SPACE
      DO 965 K=1,NPT
      XARAY(K) = 10.0**(XARAY(K))
  965 CONTINUE
      DO 975 L=1,NF
      DO 975 K=1,NPT
      ARAY(K,L) = 10.0**(ARAY(K,L))
  975 CONTINUE
 1000 CONTINUE
      RETURN
C
C     LOG SCALE
C
    3 DO 900 MAA=1,NF
      IF(XMIN(MAA).LE.0.) GO TO 110
  900 CONTINUE
      XVAL = BLOW - DIV
  150 CONTINUE
      DO 100 L=1,NPT
      IF(XARAY(L) .LE. 0.) XARAY(L) = 1.0
      XARAY(L) = ALOG10(XARAY(L))
      DO 100 LL=1,NF
  100 ARAY(L,LL) = ALOG10(ARAY(L,LL))
      XMIN2 = 2.**40
      XMAX1 = -XMIN2
      DO 910 JL=1,NF
      IF(XMAX(JL).GT.XMAX1) XMAX1 = XMAX(JL)
      IF(XMIN(JL).LT.XMIN2) XMIN2 = XMIN(JL)
  910 CONTINUE
      XMAX1 = ALOG10(XMAX1)
      XMIN2 = ALOG10(XMIN2)
      XDIF = XMAX1 - XMIN2
      IXDIF = INT
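The INDEX folding formula shared by FORPAR and READ9 stores the upper triangle of the partial-variance matrix row by row in a linear array, so that the diagonal entries hold the single partial variances and the off-diagonal entries hold the coupled ones. A sketch of that mapping (the function name is mine):

```python
def fold_index(l, j, npara):
    """1-based position of element (l, j), l <= j, of an npara-by-npara
    upper-triangular matrix stored row by row in a linear array.

    This is the formula INDEX = NPARA*(L-1) - (L*(L-1))/2 + J used by
    both FORPAR (to store SWLJ) and READ9 (to recover the diagonal).
    """
    if not 1 <= l <= j <= npara:
        raise ValueError("require 1 <= l <= j <= npara")
    return npara * (l - 1) - (l * (l - 1)) // 2 + j
```

The mapping is a bijection onto 1 .. NPARA*(NPARA+1)/2, which is exactly the minimum dimension FORPAR's header demands for SWLJ.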